UIImagePNGRepresentation and masked images

  1. I created a masked image using a function from an iPhone blog:

    UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];

  2. Looks good in a UIImageView

    UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
    imgView.center = CGPointMake(160.0f, 140.0f);
    [self.view addSubview:imgView];
    
  3. Used UIImagePNGRepresentation to save it to disk:

    [UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
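(The `findUniqueSavePath` helper is not shown in the question; a minimal sketch of such a helper, assuming it just builds a unique `.png` path in the app's Documents directory, might look like this. The naming inside is my own.)

```objectivec
// Hypothetical helper -- the question does not show its implementation.
// Builds a unique .png path in the app's Documents directory.
- (NSString *)findUniqueSavePath
{
    NSString *documents = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES).firstObject;
    NSString *name = [NSString stringWithFormat:@"masked-%@.png",
                      [[NSUUID UUID] UUIDString]];
    return [documents stringByAppendingPathComponent:name];
}
```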

UIImagePNGRepresentation returns NSData for an image that looks different.

The output is the inverse of the image mask: the area that was cut away in the app is visible in the saved file, and the area that was visible in the app has been removed. The visibility is reversed.

My mask is designed to remove everything except the face area of the picture. The UIImage looks fine in the app, but once I save it to disk the file looks the opposite: the face has been removed, but everything else remains.

Please let me know if you can help!


I ended up saving with an inverted mask image. – Alex L
2 Answers

In Quartz you can mask with either an image mask (black lets pixels through, white blocks them) or a normal image (white lets pixels through, black blocks them). It seems that, for some reason, saving treats the image mask as a normal image when masking. One idea is to render into a bitmap context and then create the image to save from that.
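The suggested workaround might be sketched like this (assuming `imgToSave` is the masked image from the question; `renderedCopy` and `path` are my own names):

```objectivec
// Re-render the masked image into a bitmap context before saving.
// Drawing flattens the Quartz mask into a plain alpha channel, so the
// PNG encoder can no longer misinterpret the mask's polarity.
UIGraphicsBeginImageContextWithOptions(imgToSave.size, NO, imgToSave.scale);
[imgToSave drawInRect:CGRectMake(0, 0, imgToSave.size.width, imgToSave.size.height)];
UIImage *renderedCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Save the flattened copy instead of the original masked image.
[UIImagePNGRepresentation(renderedCopy) writeToFile:path atomically:YES];
```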

0

I ran into exactly the same problem: the file was correct when I saved it, but the image returned in memory was the exact opposite.

The culprit, and the solution, was UIImagePNGRepresentation(). It fixes the image before you save it from the app, so I simply inserted that function as the last step in creating the masked image and returned the result.

This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; if the code below doesn't work as-is, it's probably just a typo.

Enjoy. :)

// MyImageHelperObj.h

@interface MyImageHelperObj : NSObject

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage;
+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;

@end





// MyImageHelperObj.m

#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"


@implementation MyImageHelperObj


+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // draw mask image
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);

    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage
{
    //create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    //create 8-bit bitmap context without alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    //Draw image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    //Get image from bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    //image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
    UIImage* image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end

Content provided by Stack Overflow; the English original is available at the original link.