http://www.1976inc.com/dev/iphone/beast.jpg
My problem is that when you touch the topmost UIImageView, the whole image, including its transparent areas, registers the touch event. What I want is for touches to register only on the non-transparent parts of each PNG, so the user can interact with all three UIImageViews. I'm sure this is simple, but I'm new to iPhone development and can't seem to figure it out.
Update: I realized the simplest way to do what I'm after is to loop over the image views, create a bitmap context for each PNG, and read the color data for the pixel under the touch. If that pixel is in a transparent area, move on to the next image and try again. This works, but only the first time. For example, the first time I tap the main view I get this output:
2010-07-26 15:50:06.285 colorTest[21501:207] hat
2010-07-26 15:50:06.286 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0 0
2010-07-26 15:50:06.293 colorTest[21501:207] mouth
2010-07-26 15:50:06.293 colorTest[21501:207] offset: 227024 colors: RGB A 0 0 0 0
2010-07-26 15:50:06.298 colorTest[21501:207] body
2010-07-26 15:50:06.299 colorTest[21501:207] offset: 227024 colors: RGB A 255 255 255 255
That is exactly what I want to see. But if I tap the same area a second time, I get this instead:
2010-07-26 15:51:21.625 colorTest[21501:207] hat
2010-07-26 15:51:21.626 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255
2010-07-26 15:51:21.628 colorTest[21501:207] mouth
2010-07-26 15:51:21.628 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255
2010-07-26 15:51:21.630 colorTest[21501:207] body
2010-07-26 15:51:21.631 colorTest[21501:207] offset: 283220 colors: RGB A 255 255 255 255
Here is the code I'm using. The touch handler lives in the app's main view:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"Touched balls");
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    UIColor *transparent = [UIColor colorWithRed:0 green:0 blue:0 alpha:0];
    for (viewTest *currentView in imageArray) {
        //UIColor *testColor = [self getPixelColorAtLocation:point image:currentView.image];
        [currentView getPixelColorAtLocation:point];
    }
}
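One thing worth double-checking in the handler above: point is taken in self.view's coordinate space but is then used as a pixel coordinate inside every subview's image. A per-view conversion would look something like this (a sketch against the code above, assuming imageArray is ordered front to back; whether this explains the second-tap behavior is an assumption, not something the logs prove):

```objc
// Sketch (assumption, not the original code): convert the touch into each
// subview's own coordinate space before sampling, and skip any view the
// point falls outside of entirely.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    for (viewTest *currentView in imageArray) {
        // locationInView: converts into currentView's coordinate space.
        CGPoint localPoint = [touch locationInView:currentView];
        if (!CGRectContainsPoint(currentView.bounds, localPoint))
            continue;
        UIColor *color = [currentView getPixelColorAtLocation:localPoint];
        if (color != nil && CGColorGetAlpha(color.CGColor) > 0.0) {
            // First view (front to back) with a non-transparent pixel wins.
            break;
        }
    }
}
```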
It calls a method on a custom UIImageView subclass; the method returns the color of the pixel under the touch event.
- (UIColor *)getPixelColorAtLocation:(CGPoint)point
{
    UIColor *color = nil;
    CGImageRef inImage = self.image.CGImage;
    CGContextRef context = [self createARGBBitmapContextFromImage:inImage];
    if (context == NULL) return nil;
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(context, rect, inImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char *data = CGBitmapContextGetData(context);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        NSLog(@"%@", name);
        NSLog(@"offset: %i colors: RGB A %i %i %i %i ", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // When finished, release the context
    CGContextRelease(context);
    // Free image data memory for the context
    if (data) { free(data); }
    return color;
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;
    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB(); //CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    if (colorSpace == NULL) {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
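As an aside, rendering the full image on every touch is fairly expensive. A lighter-weight variant renders only the touched pixel into a 1x1 context (an untested sketch; alphaAtPoint:inImage: is a hypothetical helper name, and it assumes point is in image pixel coordinates measured from the top-left, as in the offset math above):

```objc
// Sketch: draw just the touched pixel into a 1x1 ARGB context and read
// its alpha, instead of rendering and allocating the whole image.
- (CGFloat)alphaAtPoint:(CGPoint)point inImage:(CGImageRef)image
{
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) return 0;
    // Shift the image so the pixel we care about lands at the context's
    // single pixel at (0, 0); Core Graphics has a bottom-left origin.
    CGContextTranslateCTM(context, -point.x,
                          point.y - (CGFloat)CGImageGetHeight(image));
    CGContextDrawImage(context,
                       CGRectMake(0, 0, (CGFloat)CGImageGetWidth(image),
                                  (CGFloat)CGImageGetHeight(image)),
                       image);
    CGContextRelease(context);
    return pixel[0] / 255.0f; // alpha is the first byte in ARGB
}
```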
Update 2: Thanks for the quick replies. I'm not sure I understand what you mean. If I change hidden to true, the whole UIImageView "layer" is hidden. What I want is for the transparent parts of the PNG not to register touch events. For example, looking at the image I included in the post: if you click the worm, the stem, or the leaves (which are all part of the same PNG), that image view should fire the touch event, but if you touch the circle, then that image view should fire it instead. By the way, here is the code I use to place them in the view:
UIView *tempView = [[UIView alloc] init];
[self.view addSubview:tempView];
UIImageView *imageView1 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"body.png"] ];
[imageView1 setUserInteractionEnabled:YES];
UIImageView *imageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"mouth.png"] ];
[imageView2 setUserInteractionEnabled:YES];
UIImageView *imageView3 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"hat.png"] ];
[imageView3 setUserInteractionEnabled:YES];
[tempView addSubview:imageView1];
[tempView addSubview:imageView2];
[tempView addSubview:imageView3];
[self.view addSubview:tempView];
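For the behavior described in Update 2, the usual approach is to override pointInside:withEvent: in a UIImageView subclass so that transparent pixels report NO; UIKit's hit-testing then delivers the touch to the image view underneath automatically, with no per-touch loop needed. A sketch (AlphaHitImageView is a hypothetical name, and it assumes the subclass also implements a getPixelColorAtLocation: method like the one shown above):

```objc
@interface AlphaHitImageView : UIImageView
- (UIColor *)getPixelColorAtLocation:(CGPoint)point; // as implemented above
@end

@implementation AlphaHitImageView
// Returning NO for transparent pixels makes hitTest: skip this view,
// so the touch falls through to whatever view is behind it.
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    if (![super pointInside:point withEvent:event])
        return NO;
    UIColor *color = [self getPixelColorAtLocation:point];
    return (color != nil) && (CGColorGetAlpha(color.CGColor) > 0.0);
}
@end
```

Each of the three image views would then be created as an AlphaHitImageView instead of a plain UIImageView, and the main view's touchesEnded: loop becomes unnecessary.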