Screenshot code not working on iPad, but works on iPhone

3
I've run into a strange problem... I need to capture the screen data with the code below and convert it into an image. The code works fine in the iPhone/iPad simulator and on an iPhone device; the problem only shows up on the iPad. The iPhone device is running iOS 3.1.1, while the iPad is on iOS 4.2...
- (UIImage *)screenshotImage {
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    int backingWidth = screenBounds.size.width;
    int backingHeight = screenBounds.size.height;
    NSInteger myDataLength = backingWidth * backingHeight * 4;
    GLuint *buffer = (GLuint *)malloc(myDataLength);
    glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);

    // Flip the pixel data vertically (OpenGL's origin is bottom-left, CoreGraphics' is top-left)
    for (int y = 0; y < backingHeight / 2; y++) {
        for (int xt = 0; xt < backingWidth; xt++) {
            GLuint top = buffer[y * backingWidth + xt];
            GLuint bottom = buffer[(backingHeight - 1 - y) * backingWidth + xt];
            buffer[(backingHeight - 1 - y) * backingWidth + xt] = top;
            buffer[y * backingWidth + xt] = bottom;
        }
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releaseScreenshotData);
    const int bitsPerComponent = 8;
    const int bitsPerPixel = 4 * bitsPerComponent;
    const int bytesPerRow = 4 * backingWidth;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(backingWidth, backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    // myImage = [self addIconToImage:myImage];
    return myImage;
}

Any idea what's going wrong..??
3 Answers

2

These two lines don't match:

NSInteger myDataLength = backingWidth * backingHeight * 4;

glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA4, GL_UNSIGNED_BYTE, buffer);

GL_RGBA4 means 4 bits per channel, but you are allocating 8 bits per channel. The proper token would be GL_RGBA8. It may be that GL_RGBA4 is not supported on the iPhone and it falls back to GL_RGBA.
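For reference, a read-back that matches the 4-bytes-per-pixel allocation pairs GL_RGBA with GL_UNSIGNED_BYTE, the format/type combination OpenGL ES always supports for glReadPixels. A minimal sketch:

// 4 bytes per pixel: RGBA at 8 bits per channel, matching the malloc size
NSInteger myDataLength = backingWidth * backingHeight * 4;
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);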

Also, make sure you are reading from the correct buffer (front vs. back buffer vs. any (wrongly) bound FBO). I recommend reading from the back buffer before doing the buffer swap.
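To illustrate the ordering (a sketch only; drawFrame, screenshotImage and context are placeholder names for your own render loop and EAGLContext):

[self drawFrame];                                   // issue the GL draw calls for the frame
UIImage *shot = [self screenshotImage];             // read the back buffer now
[context presentRenderbuffer:GL_RENDERBUFFER_OES];  // then perform the swap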


Thanks for the quick reply. I've now changed GL_RGBA4 to GL_RGBA. The code works fine on the iPhone device, but still not on the iPad device. What could be wrong? I'm using the same code on both devices. - Tornado
Change it to GL_RGBA8, not just GL_RGBA (note the '8'). - datenwolf
Hey, my Xcode doesn't offer GL_RGBA8, only GL_RGBA8_OES. When I use GL_RGBA8_OES it fails to compile with an error saying GL_RGBA8_OES is not declared in this scope. - Tornado
@Tornado: Did you include the OES headers? - datenwolf
Hmm.. the OpenGL framework is already included in the project... I've already got #import <OpenGLES/ES2/gl.h>... what else do I need to import? - Tornado
All the OES headers are already included... Tried GL_RGBA8_OES on the iPad device and it doesn't work... Hey, the strange thing is that it works in the simulator no matter what I change it to: GL_RGBA4, GL_RGB, etc. - Tornado

0

Screenshot code from Apple's OpenGL ES documentation:

- (UIImage*)snapshot:(UIView*)eaglview
{
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point, 
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
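A short usage sketch, assuming the method lives in the controller that owns the EAGL-backed view (glView is a placeholder name):

UIImage *shot = [self snapshot:glView];
UIImageWriteToSavedPhotosAlbum(shot, nil, nil, NULL);  // or use the image however you need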

0

On iOS 4 or later, if you use multisampling for anti-aliasing: glReadPixels() cannot read directly from a multisampled FBO. You need to resolve it into a single-sample buffer before trying to read it... see the following post:

Reading data with glReadPixels() when multisampling is used
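In outline, the resolve step under the GL_APPLE_framebuffer_multisample extension looks roughly like this (a sketch; _msaaFramebuffer, _resolveFramebuffer, width, height and data are placeholder names from your own setup):

// Resolve the multisampled framebuffer into the single-sample one
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, _msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, _resolveFramebuffer);
glResolveMultisampleFramebufferAPPLE();

// Now read pixels from the resolved (single-sample) framebuffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _resolveFramebuffer);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);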

