Converting an image to grayscale

50

I'm trying to convert an image to grayscale in the following way:

#define bytesPerPixel 4
#define bitsPerComponent 8

-(unsigned char*) getBytesForImage: (UIImage*)pImage
{
    CGImageRef image = [pImage CGImage];
    NSUInteger width = CGImageGetWidth(image);
    NSUInteger height = CGImageGetHeight(image);

    NSUInteger bytesPerRow = bytesPerPixel * width;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return rawData;
}

-(UIImage*) processImage: (UIImage*)pImage
{   
    DebugLog(@"processing image");
    unsigned char *rawData = [self getBytesForImage: pImage];

    NSUInteger width = pImage.size.width;
    NSUInteger height = pImage.size.height;

    DebugLog(@"width: %d", width);
    DebugLog(@"height: %d", height);

    NSUInteger bytesPerRow = bytesPerPixel * width;

    for (int xCoordinate = 0; xCoordinate < width; xCoordinate++)
    {
        for (int yCoordinate = 0; yCoordinate < height; yCoordinate++)
        {
            int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;

            //Getting original colors
            float red = ( rawData[byteIndex] / 255.f );
            float green = ( rawData[byteIndex + 1] / 255.f );
            float blue = ( rawData[byteIndex + 2] / 255.f );

            //Processing pixel data
            float averageColor = (red + green + blue) / 3.0f;

            red = averageColor;
            green = averageColor;
            blue = averageColor;

            //Assigning new color components
            rawData[byteIndex] = (unsigned char) red * 255;
            rawData[byteIndex + 1] = (unsigned char) green * 255;
            rawData[byteIndex + 2] = (unsigned char) blue * 255;


        }
    }

    NSData* newPixelData = [NSData dataWithBytes: rawData length: height * width * 4];
    UIImage* newImage = [UIImage imageWithData: newPixelData];

    free(rawData);

    DebugLog(@"image processed");

    return newImage;

}

So when I want to convert an image, I just call processImage:

imageToDisplay.image = [self processImage: image];

But imageToDisplay doesn't display anything. What could the problem be?

Thanks.


Which naughty monkey favorited this without upvoting? A complete lack of generosity! - P i
12 Answers

50

I needed a version that preserves the alpha channel, so I modified the code posted by Dutchie432:

@implementation UIImage (grayscale)

typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale {
    CGSize size = [self size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

@end
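
Since this is a category on UIImage, usage would presumably be a one-liner (original here stands for whatever source image you have):

UIImage *grayImage = [original convertToGrayscale];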

It works, but it doesn't support Retina displays; ruralcoder (below) updates it for that. - android iPhone

45

Here is some code that uses only UIKit and the luminosity blend mode. It's a bit of a hack, but it works well.

// Transform the image in grayscale.
- (UIImage*) grayishImage: (UIImage*) inputImage {

    // Create a graphic context.
    UIGraphicsBeginImageContextWithOptions(inputImage.size, YES, 1.0);
    CGRect imageRect = CGRectMake(0, 0, inputImage.size.width, inputImage.size.height);

    // Draw the image with the luminosity blend mode.
    // On top of a white background, this will give a black and white image.
    [inputImage drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0];

    // Get the resulting image.
    UIImage *filteredImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return filteredImage;

}
To preserve transparency, you could set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO. Needs checking.
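
For reference, that untested variant would only change the context creation line (a sketch; per the comment below, it does not actually preserve alpha in practice):

UIGraphicsBeginImageContextWithOptions(inputImage.size, NO, 1.0);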

For readers: UIGraphicsBeginImageContextWithOptions is iOS 4+ only. Setting opaque to NO does not preserve the alpha values. - Ivan Vučica
This method takes 10 ms on my image, versus 2 ms for the accepted answer. - thierryb

37

Based on Cam's code, with the ability to handle the scale for Retina displays.

- (UIImage *) toGrayscale 
{
    const int RED = 1;
    const int GREEN = 2;
    const int BLUE = 3;

    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);

    int width = imageRect.size.width;
    int height = imageRect.size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));

    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, 
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);

    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];

            // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint8_t gray = (uint8_t) ((30 * rgbaPixel[RED] + 59 * rgbaPixel[GREEN] + 11 * rgbaPixel[BLUE]) / 100); 

            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image
                                                 scale:self.scale 
                                           orientation:UIImageOrientationUp];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}

Works very well, including on Retina. Thanks Ivan! - Christopher
Works great even on PNGs with transparent backgrounds! Thank you! - Chris Chen
Awesome! Thanks a lot! Works well, and with none of the performance hit I was worried about. - crgt
I improved the performance by replacing the floats with integers. You can go further with a single for loop that adds 4 to the rgbaPixel pointer on each iteration instead of computing its position; see the sketch after these comments. - Sulthan
I'm not sure this is right, but RED corresponds to the most significant byte, so in this case it should be RED = 3. - p0lAris
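
A minimal sketch of the integer-only, single-loop variant Sulthan describes (hypothetical; it assumes the same pixels buffer, dimensions, and RED/GREEN/BLUE constants as the answer above):

uint8_t *p = (uint8_t *) pixels;
for (int i = 0; i < width * height; i++, p += 4) {
    // luminosity weights scaled by 100 so the math stays in integers
    uint8_t gray = (uint8_t) ((30 * p[RED] + 59 * p[GREEN] + 11 * p[BLUE]) / 100);
    p[RED] = gray;
    p[GREEN] = gray;
    p[BLUE] = gray;
}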

31

I like Mathieu Godart's answer, but it doesn't seem to work well with Retina or alpha images. Here's an updated version that seems to work for both in my case:

- (UIImage*)convertToGrayscale
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);

    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw a white background
    CGContextSetRGBFillColor(ctx, 1.0f, 1.0f, 1.0f, 1.0f);
    CGContextFillRect(ctx, imageRect);

    // Draw the luminosity on top of the white background to get grayscale
    [self drawInRect:imageRect blendMode:kCGBlendModeLuminosity alpha:1.0f];

    // Apply the source image's alpha
    [self drawInRect:imageRect blendMode:kCGBlendModeDestinationIn alpha:1.0f];

    UIImage* grayscaleImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return grayscaleImage;
}

Made my semi-transparent colored images too light. - malhal

30

What exactly happens when you use this function? Does it return an invalid image, or is the display just showing it incorrectly?

Here is the method I use to convert to grayscale.

- (UIImage *) convertToGreyscale:(UIImage *)i {

    int kRed = 1;
    int kGreen = 2;
    int kBlue = 4;

    int colors = kGreen | kBlue | kRed;
    int m_width = i.size.width;
    int m_height = i.size.height;

    uint32_t *rgbImage = (uint32_t *) malloc(m_width * m_height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_width, m_height, 8, m_width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextDrawImage(context, CGRectMake(0, 0, m_width, m_height), [i CGImage]);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // now convert to grayscale
    uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height);
    for(int y = 0; y < m_height; y++) {
        for(int x = 0; x < m_width; x++) {
            uint32_t rgbPixel=rgbImage[y*m_width+x];
            uint32_t sum=0,count=0;
            if (colors & kRed) {sum += (rgbPixel>>24)&255; count++;}
            if (colors & kGreen) {sum += (rgbPixel>>16)&255; count++;}
            if (colors & kBlue) {sum += (rgbPixel>>8)&255; count++;}
            m_imageData[y*m_width+x]=sum/count;
        }
    }
    free(rgbImage);

    // convert from a gray scale image back into a UIImage
    uint8_t *result = (uint8_t *) calloc(m_width * m_height *sizeof(uint32_t), 1);

    // process the image back to rgb
    for(int i = 0; i < m_height * m_width; i++) {
        result[i*4]=0;
        int val=m_imageData[i];
        result[i*4+1]=val;
        result[i*4+2]=val;
        result[i*4+3]=val;
    }

    // create a UIImage
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_width, m_height, 8, m_width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef image = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);

    free(m_imageData);

    // make sure the data will be released by giving it to an autoreleased NSData
    [NSData dataWithBytesNoCopy:result length:m_width * m_height];

    return resultUIImage;
}

Thanks, it works, but the image I get is rotated 90 degrees compared to the original. What can I do to fix it? - Ilya Suzdalnitski
Wikipedia and other sources seem to suggest that the correct color distribution is 0.3 red + 0.59 green + 0.11 blue, rather than just a simple average of the three color values. - mahboudz
FYI, there is a memory leak in Dutchie432's answer: uint8_t *m_imageData = (uint8_t *) malloc(m_width * m_height); ...is never freed and should be. - graemer957
This code does not seem to convert the RGB image data to the corresponding gray levels correctly. It's worse than what @mahboudz's comment describes: due to what appears to be a bug, it effectively takes only the green component of each pixel and uses it as the gray value. Since the eye responds more strongly to the green component than to the other two, it's understandable why the answerer (and others) might think everything was fine... - martineau
I think the problem is the line int colors = kGreen;, which appears to process only the green component of the pixel. To correct it, try int colors = kGreen | kBlue | kRed;. - mahboudz

13

A different approach, using CIFilter. It preserves the alpha channel and works with transparent backgrounds:

+ (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];

    CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(0.0) forKey:kCIInputSaturationKey];

    CIImage *outputImage = filter.outputImage;

    CGImageRef cgImageRef = [context createCGImage:outputImage fromRect:outputImage.extent];

    UIImage *result = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);

    return result;

}

This doesn't seem to preserve the size correctly. Using this method scaled my image up 2x. - arsenius
Since I can't edit my comment: using [UIImage imageWithCGImage:cgImageRef scale:self.scale orientation:self.imageOrientation]; ensures Retina support. - arsenius

11

A quick extension on UIImage that preserves the alpha channel:

extension UIImage {

    private func convertToGrayScaleNoAlpha() -> CGImageRef {
        let colorSpace = CGColorSpaceCreateDeviceGray();
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.None.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, colorSpace, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage)
        return CGBitmapContextCreateImage(context)
    }


    /**
        Return a new image in shades of gray + alpha
    */
     func convertToGrayScale() -> UIImage {
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.Only.rawValue)
        let context = CGBitmapContextCreate(nil, UInt(size.width), UInt(size.height), 8, 0, nil, bitmapInfo)
        CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), self.CGImage);
        let mask = CGBitmapContextCreateImage(context)
        return UIImage(CGImage: CGImageCreateWithMask(convertToGrayScaleNoAlpha(), mask), scale: scale, orientation:imageOrientation)!
    }
}
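
Usage is then simply image.convertToGrayScale() on any UIImage instance.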

9

Here is another good solution, as a category method on UIImage. It's based on this blog post and its comments, but I fixed a memory issue here:

- (UIImage *)grayScaleImage {
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    // release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    // make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, self.size.width * self.scale, self.size.height * self.scale, 8, 0, nil, kCGImageAlphaOnly);
    // draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [self CGImage]);
    // create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);
    // release graphics context
    CGContextRelease(context);
    // make UIImage from grayscale image with alpha mask
    CGImageRef cgImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:cgImage scale:self.scale orientation:self.imageOrientation];
    // release the CG images
    CGImageRelease(cgImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);
    // return the new grayscale image
    return grayScaleImage;
}

9
A fast and efficient Swift 3 implementation for iOS 9/10. I feel this is efficient because I've tried every image-filtering method I could find for processing large numbers of images (downloading with AlamofireImage's ImageFilter option). I settled on this method for my use case because it was far better than anything else I tried in terms of both memory and speed.
func convertToGrayscale() -> UIImage? {

    UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
    let imageRect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
    let context = UIGraphicsGetCurrentContext()

    // Draw a white background
    context!.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0)
    context!.fill(imageRect)

    // optional: increase contrast with colorDodge before applying luminosity 
    // (my images were too dark when using just luminosity - you may not need this)
    self.draw(in: imageRect, blendMode: CGBlendMode.colorDodge, alpha: 0.7)


    // Draw the luminosity on top of the white background to get grayscale of original image
    self.draw(in: imageRect, blendMode: CGBlendMode.luminosity, alpha: 0.90)

    // optional: re-apply alpha if your image has transparency - based on user1978534's answer (I haven't tested this as I didn't have transparency - I just know this would be the the syntax)
    // self.draw(in: imageRect, blendMode: CGBlendMode.destinationIn, alpha: 1.0)

    let grayscaleImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return grayscaleImage
}


On using colorDodge: at first I had trouble getting my images light enough to match the grayscale tinting produced by CIFilter("CIPhotoEffectTonal"); my results were too dark. Applying CGBlendMode.colorDodge at ~0.7 alpha gave me a pretty good match, and it seems to increase the overall contrast.
Other color blend effects might work too, but I think you need to apply them before luminosity, which is the grayscale filtering effect. I found this page very helpful as a reference for the different blend modes.
On the efficiency gains I found: I needed to process hundreds of thumbnail images loaded from a server (using AlamofireImage for async loading, caching, and applying filters). I started hitting crashes once the total size of my images exceeded the cache size, so I experimented with other methods.

I tried the CPU-based CoreImage CIFilter approach, but it wasn't memory-efficient enough for the volume of images I was processing.

I also tried applying the CIFilter via the GPU using EAGLContext(api: .openGLES3), but that actually used even more memory: loading 200+ images, I even got warnings about 450+ MB of memory usage.

I tried bitmap processing (i.e. CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGImageAlphaInfo.none.rawValue)...). This worked well, but I couldn't get high enough resolution for modern Retina devices. The images remained very coarse even after I added context.scaleBy(x: scaleFactor, y: scaleFactor).

So of all the methods I tried, this one (drawing with a UIGraphics context), applied as an AlamofireImage filter, was vastly better in both speed and memory. I'm seeing less than 70 MB of RAM when processing 200+ images, and they load essentially instantly, versus the roughly 35 seconds the openEAGL approach took. I know these aren't very scientific benchmarks; I'll run proper tests if anyone is really curious :)


Finally, if you do need to pass this or another grayscale filter into AlamofireImage, here is how to do it (note that you must import AlamofireImage into your class to use ImageFilter):
public struct GrayScaleFilter: ImageFilter {
    public init() {
    }

    public var filter: (UIImage) -> UIImage {
        return { image in
            return image.convertToGrayscale() ?? image
        }
    }
}

To use it, create the filter and pass it to af_setImage like this:
let filter = GrayScaleFilter()
imageView.af_setImage(withURL: url, filter: filter)

Really fast! Tested it! - jalopezsuarez

6
@interface UIImageView (Settings)

- (void)convertImageToGrayScale;

@end

@implementation UIImageView (Settings)

- (void)convertImageToGrayScale
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, self.image.size.width, self.image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, self.image.size.width, self.image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [self.image CGImage]);

    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);

    // Create a new UIImage object
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];

    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);

    // Return the new grayscale image
    self.image = newImage;
}

@end
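
Since this is a UIImageView category that replaces the view's image in place, usage would presumably be just (myImageView being any UIImageView):

[myImageView convertImageToGrayScale];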

This answer deserves more upvotes. It's almost as simple and elegant as using blend modes, and it performs better. I just posted a similar answer that adds more detail about opacity and Retina support. - Rolf Hendriks
