When I use CIGaussianBlur on an image, the corners of the image get blurred so that it looks smaller than the original. So I figured I need to crop it correctly to avoid the image having transparent edges. But how do I calculate how much I need to crop based on the blur amount?
Example:
Original image:
Image processed with CIGaussianBlur with an inputRadius of 50 (blue is the background):
Take the following code as an example...
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:5.0f] forKey:@"inputRadius"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
This produces the image shown above. But if I instead use the original image's rect to create the CGImage from the context, the resulting image is the desired size:
CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
There are two issues. The first is that the blur filter samples pixels outside the edges of the input image, and those pixels are transparent. That's where the transparent pixels come from. The trick is to extend the edges before applying the blur filter. This can be done with a clamp filter, e.g. like this:
CIFilter *affineClampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[affineClampFilter setValue:inputImage forKey:kCIInputImageKey];
CGAffineTransform xform = CGAffineTransformMakeScale(1.0, 1.0);
[affineClampFilter setValue:[NSValue valueWithBytes:&xform
                                           objCType:@encode(CGAffineTransform)]
                     forKey:@"inputTransform"];
This filter extends the edges of the image infinitely and eliminates the transparency. The next step is to apply the blur filter.
The second issue is a bit odd. Some renderers produce a bigger output image for the blur filter, and you have to adapt the resulting CIImage's origin by some offset, e.g. like this:
CGImageRef cgImage = [context createCGImage:outputImage
                                   fromRect:CGRectOffset([inputImage extent],
                                                         offset, offset)];
The software renderer on my iPhone needs three times the blur radius as the offset. The hardware renderer on the same iPhone needs no offset at all. Maybe you could deduce the offset from the size difference between the input and output images, but I haven't tried...
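If you did want to try deducing it, a minimal sketch might look like this. This is a hypothetical helper (not from the original answer, and untested against the software renderer); it assumes the renderer pads the output extent symmetrically on both sides:

```swift
// Hypothetical sketch: deduce a renderer's per-edge offset from the
// difference between the input and output extent widths. Assumes the
// renderer grows the output extent symmetrically on both sides.
func rendererOffset(inputWidth: Double, outputWidth: Double) -> Double {
    return (outputWidth - inputWidth) / 2.0
}
```

You would pass in inputImage.extent.width and outputImage.extent.width, then feed the result into CGRectOffset as above.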
To get a nicely blurred version of an image with hard edges, you first need to apply CIAffineClamp to the source image to extend its edges out, and then make sure you use the input image's extent when generating the output image.
The code is as follows:
CIContext *context = [CIContext contextWithOptions:nil];
UIImage *image = [UIImage imageNamed:@"Flower"];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
[clampFilter setDefaults];
[clampFilter setValue:inputImage forKey:kCIInputImageKey];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setValue:clampFilter.outputImage forKey:kCIInputImageKey];
[blurFilter setValue:@10.0f forKey:@"inputRadius"];
CIImage *output = [blurFilter valueForKey:kCIOutputImageKey];
CGImageRef cgImage = [context createCGImage:output fromRect:[inputImage extent]];
UIImage *result = [[UIImage alloc] initWithCGImage:cgImage scale:image.scale orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
Note that this code was tested on iOS. It should work similarly on OS X (substituting NSImage for UIImage).
I've seen some of these solutions and wanted to recommend a more modern one, based on some of the ideas shared here:
private lazy var coreImageContext = CIContext() // Re-use this.

func blurredImage(image: CIImage, radius: CGFloat) -> CGImage? {
    let blurredImage = image
        .clampedToExtent()
        .applyingFilter(
            "CIGaussianBlur",
            parameters: [
                kCIInputRadiusKey: radius,
            ]
        )
        .cropped(to: image.extent)
    return coreImageContext.createCGImage(blurredImage, from: blurredImage.extent)
}
If you need a UIImage afterwards, you can get it like this:
let image = UIImage(cgImage: cgImage)
For those wondering why a CGImage is returned (as noted in the Apple documentation): Because Core Image's coordinate system doesn't match UIKit's, this filtering approach may yield unexpected results when displayed in a UIImageView with "contentMode". Be sure to back it with a CGImage so that contentMode is handled properly.
If you need a CIImage, you could return that instead, but in that case, if you're displaying the image, you'd probably want to be careful.
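A CIImage-returning variant of that function might be sketched like this (an untested sketch, not part of the original answer; the cropped(to:) call keeps the extent finite so the infinite clamped extent doesn't leak out to the caller):

```swift
import CoreImage

// Sketch: the same clamp-blur-crop pipeline, but returning a CIImage.
// Cropping to the original extent keeps the result's extent finite.
func blurredCIImage(image: CIImage, radius: CGFloat) -> CIImage {
    return image
        .clampedToExtent()
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: radius])
        .cropped(to: image.extent)
}
```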
CIContext *context = [CIContext contextWithOptions:nil];
CIImage *inputImage = [[CIImage alloc] initWithImage:image];
CIFilter *blurFilter = [CIFilter filterWithName:@"CIGaussianBlur"];
[blurFilter setDefaults];
[blurFilter setValue:inputImage forKey:@"inputImage"];
CGFloat blurLevel = 20.0f; // Set blur level
[blurFilter setValue:[NSNumber numberWithFloat:blurLevel] forKey:@"inputRadius"]; // set value for blur level
CIImage *outputImage = [blurFilter valueForKey:@"outputImage"];
CGRect rect = inputImage.extent; // Create Rect
rect.origin.x += blurLevel; // and set custom params
rect.origin.y += blurLevel; //
rect.size.height -= blurLevel*2.0f; //
rect.size.width -= blurLevel*2.0f; //
CGImageRef cgImage = [context createCGImage:outputImage fromRect:rect]; // Then apply new rect
imageView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
func applyBlurEffect() -> UIImage? {
    let context = CIContext(options: nil)
    guard let imageToBlur = CIImage(image: self) else { return nil }
    let clampFilter = CIFilter(name: "CIAffineClamp")!
    clampFilter.setDefaults()
    clampFilter.setValue(imageToBlur, forKey: kCIInputImageKey)
    // The CIAffineClamp filter sets your extent as infinite, which then
    // confounds your context. Save off the pre-clamp extent CGRect, and
    // then supply that to the context initializer.
    let inputImageExtent = imageToBlur.extent
    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    currentFilter.setValue(clampFilter.outputImage, forKey: kCIInputImageKey)
    currentFilter.setValue(10, forKey: "inputRadius")
    guard let output = currentFilter.outputImage,
          let cgimg = context.createCGImage(output, from: inputImageExtent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
func applyBlurEffect(image: UIImage) -> UIImage {
    let context = CIContext(options: nil)
    let imageToBlur = CIImage(image: image)!
    let blurFilter = CIFilter(name: "CIGaussianBlur")!
    blurFilter.setValue(imageToBlur, forKey: "inputImage")
    blurFilter.setValue(5.0, forKey: "inputRadius")
    let resultImage = blurFilter.outputImage!
    let cgImage = context.createCGImage(resultImage, from: resultImage.extent)!
    return UIImage(cgImage: cgImage)
}
Try this, using the input's extent as the argument for -createCGImage:fromRect::
- (UIImage *)gaussianBlurImageWithRadius:(CGFloat)radius {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *input = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(radius) forKey:kCIInputRadiusKey];
    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    CGImageRef imgRef = [context createCGImage:output
                                      fromRect:input.extent];
    UIImage *outImage = [UIImage imageWithCGImage:imgRef
                                            scale:UIScreen.mainScreen.scale
                                      orientation:UIImageOrientationUp];
    CGImageRelease(imgRef);
    return outImage;
}
Here are two implementations for Xamarin (C#).
public static UIImage Blur(UIImage image)
{
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;
        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage(output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(cgimage);
        }
    }
}
The method above didn't work properly for me on iOS 7 (at least with Xamarin 7.0.1 at the time), so I decided to add the cropping another way (the amount may depend on the blur radius).
private static UIImage BlurImage(UIImage image)
{
    using(var blur = new CIGaussianBlur())
    {
        blur.Image = new CIImage(image);
        blur.Radius = 6.5f;
        using(CIImage output = blur.OutputImage)
        using(CIContext context = CIContext.FromOptions(null))
        using(CGImage cgimage = context.CreateCGImage(output, new RectangleF(0, 0, image.Size.Width, image.Size.Height)))
        {
            return UIImage.FromImage(Crop(CIImage.FromCGImage(cgimage), image.Size.Width, image.Size.Height));
        }
    }
}

private static CIImage Crop(CIImage image, float width, float height)
{
    var crop = new CICrop
    {
        Image = image,
        Rectangle = new CIVector(10, 10, width - 20, height - 20)
    };
    return crop.OutputImage;
}