Xcode: Using Core Image with alpha compositing

8
I want to set up a chain of CoreImage filters and be able to control the "strength" of each filter in the chain by blending its distinct effect in with an alpha or opacity setting, but I don't see a way in the documentation to blend with alpha or opacity.
I'm guessing I could drop out of the Core Image filter chain and do the blending in a Core Graphics context instead.
3 Answers

27

The CIColorMatrix filter can be used to alter the alpha component of a CIImage, which you can then composite over a background image:

CIImage *overlayImage = … // from file, CGImage etc
CIImage *backgroundImage = … // likewise

CGFloat alpha = 0.5;
CGFloat rgba[4] = {0.0, 0.0, 0.0, alpha};
// Scale the overlay's alpha channel by `alpha` (output.a = input.a * alpha)
CIFilter *colorMatrix = [CIFilter filterWithName:@"CIColorMatrix"];
[colorMatrix setDefaults];
[colorMatrix setValue:overlayImage forKey:kCIInputImageKey];
[colorMatrix setValue:[CIVector vectorWithValues:rgba count:4] forKey:@"inputAVector"];

// Composite the faded overlay over the background
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setDefaults];
[composite setValue:colorMatrix.outputImage forKey:kCIInputImageKey];
[composite setValue:backgroundImage forKey:kCIInputBackgroundImageKey];

UIImage *blendedImage = [UIImage imageWithCIImage:composite.outputImage];
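
Note that a UIImage created with imageWithCIImage: is not backed by a CGImage, so it can behave differently from a regular UIImage in some contexts. A minimal Swift sketch (helper names here are illustrative, not part of the original answer) of rendering the composited result through a CIContext instead:

import CoreImage
import UIKit

// Create the CIContext once and reuse it; it is expensive to make.
let ciContext = CIContext()

func render(_ image: CIImage) -> UIImage? {
    // Render the CIImage into a CGImage so the resulting UIImage is CGImage-backed
    guard let cgImage = ciContext.createCGImage(image, from: image.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}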

3

mrwalker gave a great answer; here it is converted to Swift.

Swift 5

let sourceImage: CIImage = ...

// This is where I get the overlayImage, and intensity is alpha, used later
guard let ciFilter = LUTBuilder.getCIFilter(from: filter.filter), let intensity = filter.intensity else { return CIImage() }
ciFilter.setValue(sourceImage.clampedToExtent(), forKey: kCIInputImageKey)

let overlayImage = ciFilter.outputImage!.cropped(to: sourceImage.extent)

let alpha = CGFloat(intensity)
let rgba = [0.0, 0.0, 0.0, alpha]

guard
    let colorMatrix = CIFilter(name: "CIColorMatrix"),
    let composite = CIFilter(name: "CISourceOverCompositing")
else { return CIImage() }

// Scale the overlay's alpha by `intensity` (output.a = input.a * alpha)
colorMatrix.setDefaults()
colorMatrix.setValue(overlayImage, forKey: kCIInputImageKey)
colorMatrix.setValue(CIVector(values: rgba, count: 4), forKey: "inputAVector")

// Composite the faded overlay back over the unfiltered source
composite.setDefaults()
composite.setValue(colorMatrix.outputImage, forKey: kCIInputImageKey)
composite.setValue(sourceImage, forKey: kCIInputBackgroundImageKey)

return composite.outputImage ?? CIImage()
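
The same blend can also be written with CIImage's chaining API instead of explicit CIFilter objects; a short sketch under the same assumptions (overlayImage, sourceImage and alpha as above):

// Fade the overlay's alpha, then source-over composite it onto the source
let faded = overlayImage.applyingFilter("CIColorMatrix", parameters: [
    "inputAVector": CIVector(x: 0, y: 0, z: 0, w: alpha)
])
return faded.composited(over: sourceImage)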

-4
In the end I did it as follows. The code comes from this answer: https://dev59.com/tnM_5IYBdhLWcg3wmkUK#3188761
UIImage *bottomImage = inputImage;
UIImage *image = filterOutput;
CGSize newSize = CGSizeMake(inputImage.size.width, inputImage.size.height);
// Note: UIGraphicsBeginImageContext renders at 1x; use
// UIGraphicsBeginImageContextWithOptions(newSize, NO, 0) to match the screen scale.
UIGraphicsBeginImageContext(newSize);
// Draw the unfiltered image first, then the filtered image on top at the desired opacity
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height) blendMode:kCGBlendModeNormal alpha:_opacity];
UIImage *blendedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
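
A Swift sketch of the same CPU-side approach using UIGraphicsImageRenderer (variable names are illustrative, not from the original answer):

import UIKit

func blend(_ top: UIImage, over bottom: UIImage, opacity: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: bottom.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: bottom.size)
        // Draw the unfiltered image, then the filtered image on top at the given opacity
        bottom.draw(in: rect)
        top.draw(in: rect, blendMode: .normal, alpha: opacity)
    }
}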

1
If you move back and forth between Core Image and Core Graphics, you're switching between drawing on the GPU and drawing on the CPU, and there's a performance cost to that. You generally get better performance for drawing/compositing tasks by staying on the GPU as much as possible (and minimizing the switches). - fattjake
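
One way to follow that advice is to keep the whole chain in Core Image and render with a single, reused GPU-backed CIContext; a sketch assuming a Metal-capable device (names illustrative):

import CoreImage
import Metal

// Created once and reused for every render; falls back to the default context
// if no Metal device is available.
let sharedContext: CIContext = {
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device)
    }
    return CIContext()
}()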
