Get pixel color of UIImage

70

How can I get the RGB value of a specific pixel in a UIImage?

9 Answers

92

Try this very simple code:

I used this code in my maze game to detect the walls (I only needed the alpha channel information, but I've included the code for getting the other colors for your reference):

- (BOOL)isWallPixel:(UIImage *)image xCoordinate:(int)x yCoordinate:(int)y {

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width  * y) + x ) * 4; // The image is png

    //UInt8 red = data[pixelInfo];         // If you need this info, enable it
    //UInt8 green = data[(pixelInfo + 1)]; // If you need this info, enable it
    //UInt8 blue = data[pixelInfo + 2];    // If you need this info, enable it
    UInt8 alpha = data[pixelInfo + 3];     // I need only this info for my maze game
    CFRelease(pixelData);

    //UIColor* color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f]; // The pixel color info

    if (alpha) return YES;
    else return NO;

}

Note that this code will crash on grayscale images. The first "4" needs to become "(number of color components)", i.e. 1 for a grayscale PNG. - Adam
This works fine on RGBA images. What if the image is ARGB? More importantly, how do you detect whether an image is ARGB? (The rest is easy: just switch the order of the red, green, blue and alpha variables.) - Gik
I found a (partial) answer: CGImageGetAlphaInfo(image.CGImage). The problem is that if the answer is kCGImageAlphaPremultipliedFirst, the values are strange. For example, a red pixel with 255 alpha comes out as [0 0 255 255] instead of (my guess) [255 255 0 0]. Any ideas? - Gik
@MinasPetterson, this solution has worked perfectly for me until now, but on the iPhone 6 Plus simulator the values are very strange. Any idea why? - Tiago Lira
1
@TiagoLira I think it's because of the scale factor (the iPhone 6 Plus has a 3x scale factor), so when calculating the x and y values you have to apply the scale factor to adjust them. - Povilas

19

OnTouch

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[touches allObjects] objectAtIndex:0];
    CGPoint point1 = [touch locationInView:self.view];
    touch = [[event allTouches] anyObject]; 
    if ([touch view] == imgZoneWheel)
    {
        CGPoint location = [touch locationInView:imgZoneWheel];
        [self getPixelColorAtLocation:location];
        if(alpha==255)
        {
            NSLog(@"In Image Touch view alpha %d",alpha);
            [self translateCurrentTouchPoint:point1.x :point1.y];
            [imgZoneWheel setImage:[UIImage imageNamed:[NSString stringWithFormat:@"blue%d.png",GrndFild]]];
        }
    }
}



- (UIColor*) getPixelColorAtLocation:(CGPoint)point 
{

    UIColor* color = nil;

    CGImageRef inImage;

    inImage = imgZoneWheel.image.CGImage;


    // Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};


    // Draw the image to the bitmap context. Once we draw, the memory 
    // allocated for the context for rendering will then contain the 
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage); 

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData (cgctx);
    if (data != NULL) {
        //offset locates the pixel in the data from x,y. 
        //4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        alpha =  data[offset]; 
        int red = data[offset+1]; 
        int green = data[offset+2]; 
        int blue = data[offset+3]; 
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }

    return color;
}

- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef)inImage 
{
    CGContextRef    context = NULL;
    CGColorSpaceRef colorSpace;
    void *          bitmapData;
    int             bitmapByteCount;
    int             bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow   = (pixelsWide * 4);
    bitmapByteCount     = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();

    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc( bitmapByteCount );
    if (bitmapData == NULL) 
    {
        fprintf (stderr, "Memory not allocated!");
        CGColorSpaceRelease( colorSpace );
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits 
    // per component. Regardless of what the source image format is 
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate (bitmapData,
                                     pixelsWide,
                                     pixelsHigh,
                                     8,      // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free (bitmapData);
        fprintf (stderr, "Context not created!");
    }

    // Make sure and release colorspace before returning
    CGColorSpaceRelease( colorSpace );

    return context;
}

point = CGPointMake(point.x * image.scale, point.y * image.scale); - uranpro
This is the best answer because it takes the pixel format into account. It can be any format and it will be converted to ARGB. Thanks. - Cristi
This works for Display P3! Thanks!! - sodino

17

Based on Minas' answer, here is some Swift code. Originally I had some code to figure out the pixel stride, but I've updated the answer to use the ComponentLayout from Desmond's answer. I've also moved the extension onto CGImage.

Swift 5:

public extension UIImage {
    func getPixelColor(_ point: CGPoint) -> UIColor {
        guard let cgImage = self.cgImage else {
            return UIColor.clear
        }
        return cgImage.getPixelColor(point)
    }
}
public extension CGBitmapInfo {
    // See https://dev59.com/qHRB5IYBdhLWcg3w-8A9#60247648
    // I've extended it to include .a
    enum ComponentLayout {

        case a
        case bgra
        case abgr
        case argb
        case rgba
        case bgr
        case rgb

        var count: Int {
            switch self {
            case .a: return 1
            case .bgr, .rgb: return 3
            default: return 4
            }
        }
    }

    var isAlphaPremultiplied: Bool {
        let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
        return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
    }

    // [...] skipping the rest
}

public extension CGImage {

    func getPixelColor(_ point: CGPoint) -> UIColor {
        guard let pixelData = self.dataProvider?.data, let layout = bitmapInfo.componentLayout, let data = CFDataGetBytePtr(pixelData) else {
            return .clear
        }
        let x = Int(point.x)
        let y = Int(point.y)
        let w = self.width
        let h = self.height
        let index = w * y + x
        let numBytes = CFDataGetLength(pixelData)
        let numComponents = layout.count
        if numBytes != w * h * numComponents {
            NSLog("Unexpected size: \(numBytes) != \(w)x\(h)x\(numComponents)")
            return .clear
        }
        let isAlphaPremultiplied = bitmapInfo.isAlphaPremultiplied
        switch numComponents {
        case 1:
            return UIColor(red: 0, green: 0, blue: 0, alpha: CGFloat(data[index])/255.0)
        case 3:
            let c0 = CGFloat((data[3*index])) / 255
            let c1 = CGFloat((data[3*index+1])) / 255
            let c2 = CGFloat((data[3*index+2])) / 255
            if layout == .bgr {
                return UIColor(red: c2, green: c1, blue: c0, alpha: 1.0)
            }
            return UIColor(red: c0, green: c1, blue: c2, alpha: 1.0)
        case 4:
            let c0 = CGFloat((data[4*index])) / 255
            let c1 = CGFloat((data[4*index+1])) / 255
            let c2 = CGFloat((data[4*index+2])) / 255
            let c3 = CGFloat((data[4*index+3])) / 255
            var r: CGFloat = 0
            var g: CGFloat = 0
            var b: CGFloat = 0
            var a: CGFloat = 0
            switch layout {
            case .abgr:
                a = c0; b = c1; g = c2; r = c3
            case .argb:
                a = c0; r = c1; g = c2; b = c3
            case .bgra:
                b = c0; g = c1; r = c2; a = c3
            case .rgba:
                r = c0; g = c1; b = c2; a = c3
            default:
                break
            }
            if isAlphaPremultiplied && a > 0 {
                r = r / a
                g = g / a
                b = b / a
            }
            return UIColor(red: r, green: g, blue: b, alpha: a)
        default:
            return .clear
        }
    }
}

I tried to refactor this using a range, but it doesn't seem to work:

    let start = numComponents * index
    let end = numComponents * (index + 1)
    let c = data[start ..< end] // expects Int, not a Range...   
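
For what it's worth, one way the range-based version could work is to wrap the pixel's bytes in an UnsafeBufferPointer, which is a Collection and can therefore be sliced or mapped. This is just a sketch, assuming data is the pointer returned by CFDataGetBytePtr above:

    let start = numComponents * index
    // UnsafePointer<UInt8> has no range subscript, but a buffer pointer over the
    // pixel's bytes behaves like an ordinary collection.
    let pixelBytes = UnsafeBufferPointer(start: data + start, count: numComponents)
    let components = pixelBytes.map { CGFloat($0) / 255 } // e.g. [c0, c1, c2, c3]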

Asking since I'm not quite sure: I think that if there is only 1 byte per pixel, it should be the white value, not the alpha value. Can anyone else confirm? - funct7
It could be either; you have to make a judgment call. The image could be a grayscale image, in which case the value would be white, but it could also be a transparency mask, in which case it would be alpha. I think transparency masks are probably more common than grayscale images, so the decision to use alpha is reasonable. That said, I think this could be improved, because running all of this code every time you test a pixel is inefficient when iterating over a large number of pixels. - Ash
You can use CGImage's isMask property to determine whether an image is a mask. - Ash
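
As a small illustration of the point above (a hedged sketch; singleComponentColor is a hypothetical helper, not from any answer here), CGImage's isMask property can decide how to interpret a single-component value:

extension CGImage {
    // Interpret a one-byte-per-pixel value depending on whether the image is a
    // transparency mask or a grayscale image.
    func singleComponentColor(value: UInt8) -> UIColor {
        if isMask {
            // Transparency mask: the byte is coverage, i.e. alpha.
            return UIColor(white: 0, alpha: CGFloat(value) / 255)
        } else {
            // Grayscale image: the byte is the white level, fully opaque.
            return UIColor(white: CGFloat(value) / 255, alpha: 1)
        }
    }
}
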
3
Don't use image.size; use cgImage.width and cgImage.height instead. Also adjust the given point using image.scale. Otherwise this code won't work with Retina (@2x and @3x) images. - Tom van Zummeren
Improved answer with added support for grayscale images - jomafer

12

Swift 5 version

The answers given here are either outdated or incorrect, because they don't take the following into account:

  1. The pixel size of an image can differ from its point size, which is what image.size.width/image.size.height return.
  2. The pixel components in an image can use various layouts, such as BGRA, ABGR, ARGB etc., or can have no alpha component at all, such as BGR and RGB. For example, the UIView.drawHierarchy(in:afterScreenUpdates:) method can produce BGRA images.
  3. The color components of all pixels can be premultiplied by alpha and need to be divided by alpha in order to restore the original color.
  4. Due to memory optimizations used by CGImage, the byte size of a pixel row can be larger than simply multiplying the pixel width by 4.

The code below provides a universal Swift 5 solution for getting the UIColor of a pixel under all of these special cases. The code is optimized for usability and clarity, not for performance.

public extension UIImage {
    var pixelWidth: Int {
        return cgImage?.width ?? 0
    }

    var pixelHeight: Int {
        return cgImage?.height ?? 0
    }

    func pixelColor(x: Int, y: Int) -> UIColor {
        assert(
            0 ..< pixelWidth ~= x && 0 ..< pixelHeight ~= y,
            "Pixel coordinates are out of bounds"
        )

        guard
            let cgImage = cgImage,
            let data = cgImage.dataProvider?.data,
            let dataPtr = CFDataGetBytePtr(data),
            let colorSpaceModel = cgImage.colorSpace?.model,
            let componentLayout = cgImage.bitmapInfo.componentLayout
        else {
            assertionFailure("Could not get a pixel of an image")
            return .clear
        }

        assert(
            colorSpaceModel == .rgb,
            "The only supported color space model is RGB"
        )
        assert(
            cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
            "A pixel is expected to be either 4 or 3 bytes in size"
        )

        let bytesPerRow = cgImage.bytesPerRow
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelOffset = y * bytesPerRow + x * bytesPerPixel

        if componentLayout.count == 4 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2],
                dataPtr[pixelOffset + 3]
            )

            var alpha: UInt8 = 0
            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgra:
                alpha = components.3
                red = components.2
                green = components.1
                blue = components.0
            case .abgr:
                alpha = components.0
                red = components.3
                green = components.2
                blue = components.1
            case .argb:
                alpha = components.0
                red = components.1
                green = components.2
                blue = components.3
            case .rgba:
                alpha = components.3
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            /// If chroma components are premultiplied by alpha and the alpha is `0`,
            /// keep the chroma components to their current values.
            if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha, alpha != 0 {
                let invisibleUnitAlpha = 255 / CGFloat(alpha)
                red = UInt8((CGFloat(red) * invisibleUnitAlpha).rounded())
                green = UInt8((CGFloat(green) * invisibleUnitAlpha).rounded())
                blue = UInt8((CGFloat(blue) * invisibleUnitAlpha).rounded())
            }

            return .init(red: red, green: green, blue: blue, alpha: alpha)

        } else if componentLayout.count == 3 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2]
            )

            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgr:
                red = components.2
                green = components.1
                blue = components.0
            case .rgb:
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            return .init(red: red, green: green, blue: blue, alpha: UInt8(255))

        } else {
            assertionFailure("Unsupported number of pixel components")
            return .clear
        }
    }
}

public extension UIColor {
    convenience init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        self.init(
            red: CGFloat(red) / 255,
            green: CGFloat(green) / 255,
            blue: CGFloat(blue) / 255,
            alpha: CGFloat(alpha) / 255
        )
    }
}

public extension CGBitmapInfo {
    enum ComponentLayout {
        case bgra
        case abgr
        case argb
        case rgba
        case bgr
        case rgb

        var count: Int {
            switch self {
            case .bgr, .rgb: return 3
            default: return 4
            }
        }
    }

    var componentLayout: ComponentLayout? {
        guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
        let isLittleEndian = contains(.byteOrder32Little)

        if alphaInfo == .none {
            return isLittleEndian ? .bgr : .rgb
        }
        let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst

        if isLittleEndian {
            return alphaIsFirst ? .bgra : .abgr
        } else {
            return alphaIsFirst ? .argb : .rgba
        }
    }

    var chromaIsPremultipliedByAlpha: Bool {
        let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
        return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
    }
}
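
As a hedged usage sketch (not part of the original answer): before calling pixelColor(x:y:) the point usually has to be converted from view points to image pixels. The helper below assumes the image is displayed unscaled at the view's origin (e.g. contentMode = .topLeft); other content modes would need a different mapping, and only image.scale is applied here, which covers point 1 from the list above.

func color(at viewPoint: CGPoint, in imageView: UIImageView) -> UIColor? {
    guard let image = imageView.image else { return nil }
    // Convert points to pixels; pixelColor(x:y:) asserts on out-of-bounds
    // coordinates, so check the bounds first.
    let x = Int(viewPoint.x * image.scale)
    let y = Int(viewPoint.y * image.scale)
    guard x >= 0, y >= 0, x < image.pixelWidth, y < image.pixelHeight else { return nil }
    return image.pixelColor(x: x, y: y)
}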

I did some more reading and found that in little-endian mode the components are swapped, so your code is correct. Thanks for the comment. A very nice and robust answer. - RunLoop
I learned something new from this answer. Very thorough code. Thanks. - rule_it_subir

12

You can't access the raw data directly, but by getting the CGImage of this image you can access it. Here is a link to another question that answers your question about detailed image manipulation, along with others you may have: CGImage
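
A minimal sketch of that idea (image here is a hypothetical UIImage; decoding the bytes correctly is covered by the other answers):

if let cgImage = image.cgImage,
   let pixelData = cgImage.dataProvider?.data,
   let bytes = CFDataGetBytePtr(pixelData) {
    // bytes points at the raw bitmap; its layout depends on cgImage.bitmapInfo,
    // cgImage.bytesPerRow and cgImage.bitsPerPixel.
    _ = bytes
}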


10

Here is a generic method for getting a pixel color from a UIImage, building on Minas Petterson's answer:

- (UIColor*)pixelColorInImage:(UIImage*)image atX:(int)x atY:(int)y {

    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* data = CFDataGetBytePtr(pixelData);

    int pixelInfo = ((image.size.width * y) + x ) * 4; // 4 bytes per pixel

    UInt8 red   = data[pixelInfo + 0];
    UInt8 green = data[pixelInfo + 1];
    UInt8 blue  = data[pixelInfo + 2];
    UInt8 alpha = data[pixelInfo + 3];
    CFRelease(pixelData);

    return [UIColor colorWithRed:red  /255.0f
                           green:green/255.0f
                            blue:blue /255.0f
                           alpha:alpha/255.0f];
}

Note that X and Y may be swapped; this function accesses the underlying bitmap directly and does not take into account any rotation that may be stored in the UIImage.
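
If the stored orientation matters, one hedged workaround (sketched in Swift; the helper name is not from this answer) is to redraw the UIImage into an .up-oriented copy before sampling, so the bitmap matches what is displayed:

func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    // Redraw so the backing bitmap matches the displayed orientation.
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}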


Is there a way to recompose an image from these color numbers? - anivader
1
This function doesn't take the pixel format into account; for me it was BGR. - pronebird

7
- (UIColor *)colorAtPixel:(CGPoint)point inImage:(UIImage *)image {

    if (!CGRectContainsPoint(CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), point)) {
        return nil;
    }

    // Create a 1x1 pixel byte array and bitmap context to draw the pixel into.
    NSInteger pointX = trunc(point.x);
    NSInteger pointY = trunc(point.y);
    CGImageRef cgImage = image.CGImage;
    NSUInteger width = image.size.width;
    NSUInteger height = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bytesPerPixel = 4;
    int bytesPerRow = bytesPerPixel * 1;
    NSUInteger bitsPerComponent = 8;
    unsigned char pixelData[4] = { 0, 0, 0, 0 };
    CGContextRef context = CGBitmapContextCreate(pixelData, 1, 1, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the pixel we are interested in onto the bitmap context
    CGContextTranslateCTM(context, -pointX, pointY-(CGFloat)height);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), cgImage);
    CGContextRelease(context);

    // Convert color values [0..255] to floats [0.0..1.0]
    CGFloat red   = (CGFloat)pixelData[0] / 255.0f;
    CGFloat green = (CGFloat)pixelData[1] / 255.0f;
    CGFloat blue  = (CGFloat)pixelData[2] / 255.0f;
    CGFloat alpha = (CGFloat)pixelData[3] / 255.0f;
    return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
}

I think the result is wrong, because the alpha info of the bitmap context is kCGImageAlphaPremultipliedLast, but when you retrieve the pixel color you treat it as a non-premultiplied value. - Swordsfrog

5

Swift version of Minas' answer

extension CGImage {
    func pixel(x: Int, y: Int) -> (r: Int, g: Int, b: Int, a: Int)? { // swiftlint:disable:this large_tuple
        guard let pixelData = dataProvider?.data,
            let data = CFDataGetBytePtr(pixelData) else { return nil }

        let pixelInfo = ((width  * y) + x ) * 4

        let red = Int(data[pixelInfo])
        let green = Int(data[pixelInfo + 1])
        let blue = Int(data[pixelInfo + 2])
        let alpha = Int(data[pixelInfo + 3])

        return (red, green, blue, alpha)
    }
}

0
First, create and attach a tap gesture recognizer to allow user interaction:

UITapGestureRecognizer * tapRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGesture:)];
[self.label addGestureRecognizer:tapRecognizer];
self.label.userInteractionEnabled = YES;

Now implement the -tapGesture: method:

- (void)tapGesture:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:self.label];

    UIGraphicsBeginImageContext(self.label.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.label.layer renderInContext:context];

    int bpr = CGBitmapContextGetBytesPerRow(context);
    unsigned char * data = CGBitmapContextGetData(context);
    if (data != NULL)
    {
        int offset = bpr*round(point.y) + 4*round(point.x);
        int blue = data[offset+0];
        int green = data[offset+1];
        int red = data[offset+2];
        int alpha =  data[offset+3];

        NSLog(@"%d %d %d %d", alpha, red, green, blue);

        if (alpha == 0)
        {
            // Here is tap out of text
        }
        else
        {
            // Here is tap right into text
        }
    }

    UIGraphicsEndImageContext();
}

This will work for a UILabel with a transparent background. If that's not what you want, you could compare the alpha, red, green and blue values against self.label.backgroundColor...
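
A sketch of that comparison (written in Swift for brevity; sampledColor and the tolerance are hypothetical, not part of the code above):

func matchesBackground(_ sampledColor: UIColor, of label: UILabel) -> Bool {
    var r1: CGFloat = 0, g1: CGFloat = 0, b1: CGFloat = 0, a1: CGFloat = 0
    var r2: CGFloat = 0, g2: CGFloat = 0, b2: CGFloat = 0, a2: CGFloat = 0
    guard let background = label.backgroundColor,
          sampledColor.getRed(&r1, green: &g1, blue: &b1, alpha: &a1),
          background.getRed(&r2, green: &g2, blue: &b2, alpha: &a2) else { return false }
    // Allow a small tolerance for rounding when bytes are converted to CGFloat.
    let tolerance: CGFloat = 1.0 / 255.0
    return abs(r1 - r2) <= tolerance && abs(g1 - g2) <= tolerance
        && abs(b1 - b2) <= tolerance && abs(a1 - a2) <= tolerance
}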


3
What does this have to do with a tap gesture? - amleszk
