Convert a UIImage to grayscale while keeping image quality

32

I have an extension in Obj-C that I converted to Swift 3, used to get the same grayscale UIImage:

public func getGrayScale() -> UIImage
{
    let imgRect = CGRect(x: 0, y: 0, width: width, height: height)

    let colorSpace = CGColorSpaceCreateDeviceGray()

    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).rawValue)
    context?.draw(self.cgImage!, in: imgRect)

    let imageRef = context!.makeImage()
    let newImg = UIImage(cgImage: imageRef!)

    return newImg
}

I can see the gray image, but its quality is quite bad... The only quality-related parameter seems to be bitsPerComponent: 8 in the context constructor. However, looking at Apple's documentation, I got this:

(screenshot: Apple's table of supported pixel formats)

It shows that iOS only supports 8 bpc... so why can't I improve the image quality?


1
Please check your width and height; make sure the original size isn't at 2x scale. - MoDJ
This is technically not an answer to your question, but assuming you want to display the gray UIImage in a UIImageView, you can add a layer with a compositing filter to that view as described here: https://dev59.com/zGTWa4cB1Zd3GeqPBkRk#67436327 This workflow has a few advantages: it is lightweight, requires less code, and can easily be toggled on and off. - Tilman
7 Answers

46

Try the code below:

Note: the code has been updated and the bug fixed...

  • The code was tested in Swift 3.
  • originalImage is the image you want to convert.

Answer 1:

     var context = CIContext(options: nil)

Update: CIContext is the Core Image component that handles rendering; all Core Image processing is done in a CIContext. It is somewhat similar to a Core Graphics or OpenGL context. More information is available in the Apple Docs.

    func Noir() {
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
        currentFilter!.setValue(CIImage(image: originalImage.image!), forKey: kCIInputImageKey)
        let output = currentFilter!.outputImage
        let cgimg = context.createCGImage(output!, from: output!.extent)
        let processedImage = UIImage(cgImage: cgimg!)
        originalImage.image = processedImage
    }

You may also want to consider the following filters, which produce similar effects:

  • CIPhotoEffectMono
  • CIPhotoEffectTonal
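All of these filters follow the same pattern as the noir filter above, so they can be wrapped in a single helper that takes the filter name. This is only a sketch (the extension method name `applyingPhotoEffect` is my own, not from the answer):

```swift
import UIKit
import CoreImage

extension UIImage {
    /// Hypothetical helper: applies any of the CIPhotoEffect-style
    /// filters (Noir, Mono, Tonal, ...) by name.
    func applyingPhotoEffect(_ filterName: String) -> UIImage? {
        guard let ciInput = CIImage(image: self),
              let filter = CIFilter(name: filterName) else { return nil }
        filter.setValue(ciInput, forKey: kCIInputImageKey)
        let context = CIContext(options: nil)
        guard let output = filter.outputImage,
              let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
        // Preserve the original scale and orientation.
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
}
```

Usage would then be e.g. `image.applyingPhotoEffect("CIPhotoEffectMono")` or `image.applyingPhotoEffect("CIPhotoEffectTonal")`.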

Output from Answer 1:

(image)

Output from Answer 2:

(image)

Improved answer:

Answer 2: auto-adjust the input image before applying the CoreImage filter.

var context = CIContext(options: nil)

func Noir() {

    // Auto-adjust the input image
    var inputImage = CIImage(image: originalImage.image!)
    let options: [String : AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage!.autoAdjustmentFilters(options: options)

    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    let cgImage = context.createCGImage(inputImage!, from: inputImage!.extent)
    self.originalImage.image = UIImage(cgImage: cgImage!)

    // Apply the tonal filter
    let currentFilter = CIFilter(name: "CIPhotoEffectTonal")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)

    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}

Note: if you want to see better results, you should test your code on a real device rather than on the simulator...


33

A Swift 4.0 extension that returns an optional UIImage to avoid any potential crashes down the road.

import UIKit

extension UIImage {
    var noir: UIImage? {
        let context = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
            let cgImage = context.createCGImage(output, from: output.extent) {
            return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        }
        return nil
    }
}

Usage:

let image = UIImage(...)
let noirImage = image.noir // noirImage is an optional UIImage (UIImage?)

How do you use it in code? Can you give an example? - Sasuke Uchiha
Works perfectly in Swift 5.1. - ytpm

13

Joe's answer as a UIImage extension for Swift 4, which correctly handles different scales:

extension UIImage {
    var noir: UIImage {
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")!
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        let output = currentFilter.outputImage!
        let cgImage = context.createCGImage(output, from: output.extent)!
        let processedImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)

        return processedImage
    }
}

How can we restore the original image? - Sasuke Uchiha
@SasukeUchiha Please see CodeBender's answer. I'm afraid there is no way to restore the original image, since most of the color information is lost after applying the noir filter. - Peter Prokop

7
I would use CoreImage to keep the quality.
func convertImageToBW(image:UIImage) -> UIImage {

    let filter = CIFilter(name: "CIPhotoEffectMono")

    // convert UIImage to CIImage and set as input

    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")

    // get output CIImage, render as CGImage first to retain proper UIImage scale

    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)

    return UIImage(cgImage: cgImage!)
}

Depending on how you use this code, you may want to create the CIContext outside of it for performance reasons.
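As a sketch of that suggestion (the `SharedCIContext` name is my own assumption, not part of the answer), the context can be created once and reused across calls:

```swift
import UIKit
import CoreImage

enum SharedCIContext {
    // Creating a CIContext is relatively expensive, so create it once
    // and reuse it for every conversion.
    static let context = CIContext()
}

func convertImageToBW(image: UIImage) -> UIImage? {
    guard let filter = CIFilter(name: "CIPhotoEffectMono"),
          let ciInput = CIImage(image: image) else { return nil }
    filter.setValue(ciInput, forKey: kCIInputImageKey)
    guard let ciOutput = filter.outputImage,
          let cgImage = SharedCIContext.context.createCGImage(ciOutput, from: ciOutput.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```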

6
Here is an Objective-C category. Note that, crucially, this version takes scale into account.
- (UIImage *)grayscaleImage{
    return [self imageWithCIFilter:@"CIPhotoEffectMono"];
}

- (UIImage *)imageWithCIFilter:(NSString*)filterName{
    CIImage *unfiltered = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:filterName];
    [filter setValue:unfiltered forKey:kCIInputImageKey];
    CIImage *filtered = [filter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimage = [context createCGImage:filtered fromRect:CGRectMake(0, 0, self.size.width*self.scale, self.size.height*self.scale)];
    // Do not use initWithCIImage because that renders the filter each time the image is displayed.  This causes slow scrolling in tableviews.
    UIImage *image = [[UIImage alloc] initWithCGImage:cgimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgimage);
    return image;
}

3
But he's not the only one there :), thanks @arsenius - slxl
Yes, this is exactly what I needed :) - Jeremy Fuller

5

All of the solutions above rely on CIImage, while a UIImage usually has a CGImage as its underlying image, not a CIImage. This means you have to convert the underlying image to a CIImage at the start and back to a CGImage at the end (if you don't, constructing a UIImage from a CIImage will effectively do it for you).

Although that may be acceptable for many use cases, the conversion between CGImage and CIImage is not free: it can be slow, and it can create a large memory spike while converting.

So I want to mention a completely different solution that doesn't require converting the image back and forth at all. It uses Accelerate, and it is perfectly described by Apple here.

Here is a playground example that demonstrates both approaches.

import UIKit
import Accelerate

extension CIImage {

    func toGrayscale() -> CIImage? {

        guard let output = CIFilter(name: "CIPhotoEffectNoir", parameters: [kCIInputImageKey: self])?.outputImage else {
            return nil
        }

        return output
    }

}

extension CGImage {

    func toGrayscale() -> CGImage {

        guard let format = vImage_CGImageFormat(cgImage: self),
              // The source image buffer
              var sourceBuffer = try? vImage_Buffer(
                cgImage: self,
                format: format
              ),
              // The 1-channel, 8-bit vImage buffer used as the operation destination.
              var destinationBuffer = try? vImage_Buffer(
                width: Int(sourceBuffer.width),
                height: Int(sourceBuffer.height),
                bitsPerPixel: 8
              ) else {
            return self
        }

        // Declare the three coefficients that model the eye's sensitivity
        // to color.
        let redCoefficient: Float = 0.2126
        let greenCoefficient: Float = 0.7152
        let blueCoefficient: Float = 0.0722

        // Create a 1D matrix containing the three luma coefficients that
        // specify the color-to-grayscale conversion.
        let divisor: Int32 = 0x1000
        let fDivisor = Float(divisor)

        var coefficientsMatrix = [
            Int16(redCoefficient * fDivisor),
            Int16(greenCoefficient * fDivisor),
            Int16(blueCoefficient * fDivisor)
        ]

        // Use the matrix of coefficients to compute the scalar luminance by
        // returning the dot product of each RGB pixel and the coefficients
        // matrix.
        let preBias: [Int16] = [0, 0, 0, 0]
        let postBias: Int32 = 0

        vImageMatrixMultiply_ARGB8888ToPlanar8(
            &sourceBuffer,
            &destinationBuffer,
            &coefficientsMatrix,
            divisor,
            preBias,
            postBias,
            vImage_Flags(kvImageNoFlags)
        )

        // Create a 1-channel, 8-bit grayscale format that's used to
        // generate a displayable image.
        guard let monoFormat = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 8,
            colorSpace: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
            renderingIntent: .defaultIntent
        ) else {
                return self
        }

        // Create a Core Graphics image from the grayscale destination buffer.
        guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else {
            return self
        }

        return result
    }
}

I tested with the full-size image.

let start = Date()
var prev = start.timeIntervalSinceNow * -1

func info(_ id: String) {
    print("\(id)\t: \(start.timeIntervalSinceNow * -1 - prev)")
    prev = start.timeIntervalSinceNow * -1
}

info("started")
let original = UIImage(named: "Golden_Gate_Bridge_2021.jpg")!
info("loaded UIImage(named)")

let cgImage = original.cgImage!
info("original.cgImage")
let cgImageToGreyscale = cgImage.toGrayscale()
info("cgImage.toGrayscale()")
let uiImageFromCGImage = UIImage(cgImage: cgImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(cgImage)")

let ciImage = CIImage(image: original)!
info("CIImage(image: original)!")
let ciImageToGreyscale = ciImage.toGrayscale()!
info("ciImage.toGrayscale()")
let uiImageFromCIImage = UIImage(ciImage: ciImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(ciImage)")

Results (in seconds)

The CGImage approach took about 1 second in total:

original.cgImage        : 0.5257829427719116
cgImage.toGrayscale()   : 0.46222901344299316
UIImage(cgImage)        : 0.1819549798965454

The CIImage approach took about 7 seconds in total:

CIImage(image: original)!   : 0.6055610179901123
ciImage.toGrayscale()       : 4.969912052154541
UIImage(ciImage)            : 2.395193934440613

When saved as JPEG, the image created via CGImage is three times smaller in file size than the one created via CIImage (5 MB vs. 17 MB). Both images look good. Here is a small version that fits within SO's limits:

(image: grayscale sample)


1
Based on Joe's answer, we can easily convert the original image to black and white. But if you need to convert the image back to the original, refer to the code below:
var context = CIContext(options: nil)
var startingImage : UIImage = UIImage()

func Noir() {     
    startingImage = imgView.image!
    var inputImage = CIImage(image: imgView.image!)!
    let options:[String : AnyObject] = [CIDetectorImageOrientation:1 as AnyObject]
    let filters = inputImage.autoAdjustmentFilters(options: options)

    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage =  filter.outputImage!
    }
    let cgImage = context.createCGImage(inputImage, from: inputImage.extent)
    self.imgView.image =  UIImage(cgImage: cgImage!)

    //Filter Logic
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)

    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    imgView.image = processedImage
}

func Original() {
    imgView.image = startingImage
}

This is not a way to restore the original photo. If you applied the filter twice and want to go back to the original photo, this solution won't work. - stan liu
