I am trying to get the length of a UIImage. Not the width or height of the image, but the size of its data.
UIImage *img = [UIImage imageNamed:@"sample.png"];
NSData *imgData = UIImageJPEGRepresentation(img, 1.0);
NSLog(@"Size of image (bytes): %lu", (unsigned long)[imgData length]);
The underlying data of a UIImage can vary, so the data size may differ for the same "image". You can use UIImagePNGRepresentation or UIImageJPEGRepresentation to get an equivalent NSData object and then check its size.
UIImageJPEGRepresentation and UIImagePNGRepresentation won't return the original size. - NSLeader

Use the UIImage's CGImage property. Then CGImageGetBytesPerRow * CGImageGetHeight, plus the size of the UIImage instance itself, should get you close to the actual in-memory size.
If you want to use this for, e.g., malloc in preparation for bitmap manipulation (assuming a 4-byte pixel format: 3 bytes of RGB plus 1 byte of alpha), this returns the uncompressed size of the image:
int height = image.size.height,
    width  = image.size.width;
int bytesPerRow = 4 * width;
if (bytesPerRow % 16)
    bytesPerRow = ((bytesPerRow / 16) + 1) * 16;
int dataSize = height * bytesPerRow;
(1) image.size.height and image.size.width give you the size in CG units (points), not necessarily pixels; if the .scale property is not 1, you will get the wrong answer. (2) You assume each pixel takes 4 bytes, which is correct for RGBA images but not for grayscale. - Todd Lehman

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)editInfo
{
    UIImage *image = [editInfo valueForKey:UIImagePickerControllerOriginalImage];
    NSURL *imageURL = [editInfo valueForKey:UIImagePickerControllerReferenceURL];

    __block long long realSize;
    ALAssetsLibraryAssetForURLResultBlock resultBlock = ^(ALAsset *asset)
    {
        ALAssetRepresentation *representation = [asset defaultRepresentation];
        realSize = [representation size];
    };
    ALAssetsLibraryAccessFailureBlock failureBlock = ^(NSError *error)
    {
        NSLog(@"%@", [error localizedDescription]);
    };

    if (imageURL)
    {
        ALAssetsLibrary *assetsLibrary = [[[ALAssetsLibrary alloc] init] autorelease];
        [assetsLibrary assetForURL:imageURL resultBlock:resultBlock failureBlock:failureBlock];
    }
}
An example in Swift:

if let img = UIImage(named: "yolo.png") {
    let imgData = UIImageJPEGRepresentation(img, 0)
    println("Size of Image: \(imgData.length) bytes")
}
In a UIImage+MemorySize category:

#import <objc/runtime.h>
- (size_t)memorySize
{
    CGImageRef image = self.CGImage;
    size_t instanceSize = class_getInstanceSize(self.class);
    size_t pixmapSize = CGImageGetHeight(image) * CGImageGetBytesPerRow(image);
    size_t totalSize = instanceSize + pixmapSize;
    return totalSize;
}
Or, if you only want the bitmap itself, without the Objective-C instance overhead:

- (size_t)memorySize
{
    return CGImageGetHeight(self.CGImage) * CGImageGetBytesPerRow(self.CGImage);
}
Swift 3:

if let image = UIImage(named: "example.jpg"),
   let data = UIImageJPEGRepresentation(image, 1.0) {
    print("Size: \(data.count) bytes")
}
Swift 4 & 5:
extension UIImage {
    var sizeInBytes: Int {
        guard let cgImage = self.cgImage else {
            // This won't work for CIImage-based UIImages
            assertionFailure()
            return 0
        }
        return cgImage.bytesPerRow * cgImage.height
    }
}
I tried to get the image size with the following code:

let imgData = image.jpegData(compressionQuality: 1.0)

but it gave a size smaller than the actual one. Then I tried the PNG representation:

let imageData = image.pngData()

but that returns a byte count larger than the actual image. (Both re-encode the decoded bitmap, so neither is guaranteed to match the original file's size.) The only thing that worked perfectly for me:
public func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    var asset: PHAsset!
    if #available(iOS 11.0, *) {
        asset = info[UIImagePickerControllerPHAsset] as? PHAsset
    } else {
        if let url = info[UIImagePickerControllerReferenceURL] as? URL {
            asset = PHAsset.fetchAssets(withALAssetURLs: [url], options: .none).firstObject
        }
    }
    if #available(iOS 13, *) {
        PHImageManager.default().requestImageDataAndOrientation(for: asset, options: .none) { data, _, _, _ in
            guard let data = data else { return }
            print("actual size of image in KB: \(Double(data.count) / 1024.0)")
        }
    } else {
        PHImageManager.default().requestImageData(for: asset, options: .none) { data, _, _, _ in
            guard let data = data else { return }
            print("actual size of image in KB: \(Double(data.count) / 1024.0)")
        }
    }
}
I'm not sure about your use case. If you need the exact byte size, I don't think you can get it this way. You can get an NSData object containing the image's compressed data via UIImagePNGRepresentation or UIImageJPEGRepresentation.

I think you want the actual size of the uncompressed image, i.e. its pixel data. For that, you need to convert the UIImage* or CGImageRef to raw data. Here is an example that converts a UIImage to an IplImage (from OpenCV); you just need to allocate enough memory and pass that pointer as the first argument of CGBitmapContextCreate.
UIImage *image = ...; // your image
CGImageRef imageRef = image.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData,
                                                iplimage->width, iplimage->height,
                                                iplimage->depth, iplimage->widthStep,
                                                colorSpace,
                                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);

IplImage *ret = cvCreateImage(cvGetSize(iplimage), IPL_DEPTH_8U, 3);
cvCvtColor(iplimage, ret, CV_RGBA2BGR);
cvReleaseImage(&iplimage);