How to take a camera picture with ARCore

3 Answers

I assume you mean a picture of what the camera sees together with the AR objects. At a high level, you need permission to write to external storage so the picture can be saved, copy the frame from OpenGL, and then save it as a PNG (for example). Here are the specifics:

Add the WRITE_EXTERNAL_STORAGE permission to the AndroidManifest.xml file.
   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Then change CameraPermissionHelper to iterate over both the CAMERA and WRITE_EXTERNAL_STORAGE permissions, making sure all of them are granted.

 private static final String REQUIRED_PERMISSIONS[] = {
          Manifest.permission.WRITE_EXTERNAL_STORAGE,
          Manifest.permission.CAMERA
  };

  /**
   * Check to see if we have the necessary permissions for this app.
   */
  public static boolean hasCameraPermission(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ContextCompat.checkSelfPermission(activity, p) !=
            PackageManager.PERMISSION_GRANTED) {
        return false;
      }
    }
    return true;
  }

  /**
   * Check to see if we have the necessary permissions for this app,
   *   and ask for them if we don't.
   */
  public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
            CAMERA_PERMISSION_CODE);
  }

  /**
   * Check to see if we need to show the rationale for this permission.
   */
  public static boolean shouldShowRequestPermissionRationale(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
        return true;
      }
    }
    return false;
  }

Next, add a couple of fields to HelloARActivity to track the dimensions of the frame and a boolean flag indicating when to save a picture.

 private int mWidth;
 private int mHeight;
 private boolean capturePicture = false;

Set the width and height in onSurfaceChanged().

 public void onSurfaceChanged(GL10 gl, int width, int height) {
     mDisplayRotationHelper.onSurfaceChanged(width, height);
     GLES20.glViewport(0, 0, width, height);
     mWidth = width;
     mHeight = height;
 }

At the bottom of onDrawFrame(), add a step that checks the capture flag. This should happen after all the other drawing operations. Since SavePicture() is declared to throw IOException, the call needs to be wrapped in a try/catch to compile.

         if (capturePicture) {
             capturePicture = false;
             try {
                 SavePicture();
             } catch (IOException e) {
                 Log.e("SavePicture", "Failed to save picture", e);
             }
         }

Then add an onClick method for the button used to take the picture, plus the actual code that saves the image:

  public void onSavePicture(View view) {
    // Here we just set a flag so we can copy the image
    // from the onDrawFrame() method. This is required
    // for OpenGL so we are on the rendering thread.
    this.capturePicture = true;
  }

  /**
   * Call from the GLThread to save a picture of the current frame.
   */
  public void SavePicture() throws IOException {
    int pixelData[] = new int[mWidth * mHeight];

    // Read the pixels from the current GL frame.
    IntBuffer buf = IntBuffer.wrap(pixelData);
    buf.position(0);
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    // Create a file in the Pictures/HelloAR album.
    final File out = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES) + "/HelloAR", "Img" +
            Long.toHexString(System.currentTimeMillis()) + ".png");

    // Make sure the directory exists
    if (!out.getParentFile().exists()) {
      out.getParentFile().mkdirs();
    }

    // Convert the pixel data from RGBA to what Android wants, ARGB.
    int bitmapData[] = new int[pixelData.length];
    for (int i = 0; i < mHeight; i++) {
      for (int j = 0; j < mWidth; j++) {
        int p = pixelData[i * mWidth + j];
        int b = (p & 0x00ff0000) >> 16;
        int r = (p & 0x000000ff) << 16;
        int ga = p & 0xff00ff00;
        bitmapData[(mHeight - i - 1) * mWidth + j] = ga | r | b;
      }
    }
    // Create a bitmap.
    Bitmap bmp = Bitmap.createBitmap(bitmapData,
                     mWidth, mHeight, Bitmap.Config.ARGB_8888);

    // Write it to disk.
    FileOutputStream fos = new FileOutputStream(out);
    bmp.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
    runOnUiThread(new Runnable() {
      @Override
      public void run() {
        showSnackbarMessage("Wrote " + out.getName(), false);
      }
    });
  }
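
The channel swap and the row flip in the loop above can be checked in isolation with plain Java. The sketch below (with hypothetical pixel values) mirrors that loop: glReadPixels returns rows bottom-up with R in the low byte of each int, while ARGB_8888 wants top-down rows with R in bits 16-23.

```java
public class GlFrameConvert {
    // Mirror of the conversion loop in SavePicture(): swap the R and B
    // channels and flip the rows vertically.
    static int[] convert(int[] rgba, int width, int height) {
        int[] argb = new int[rgba.length];
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                int p = rgba[i * width + j];
                int b = (p & 0x00ff0000) >> 16;   // extract B
                int r = (p & 0x000000ff) << 16;   // move R into ARGB position
                int ga = p & 0xff00ff00;          // A and G stay in place
                argb[(height - i - 1) * width + j] = ga | r | b;
            }
        }
        return argb;
    }

    public static void main(String[] args) {
        // A 1x2 frame; OpenGL delivers the bottom row first.
        // Pixel 0: R=0x11 G=0x22 B=0x33 A=0xFF packed as 0xFF332211.
        int[] rgba = {0xFF332211, 0xFF665544};
        int[] argb = convert(rgba, 1, 2);
        System.out.println(Integer.toHexString(argb[0])); // ff445566
        System.out.println(Integer.toHexString(argb[1])); // ff112233
    }
}
```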

The last step is to add a button to the end of the activity_main.xml layout.
<Button
    android:id="@+id/fboRecord_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignStart="@+id/surfaceview"
    android:layout_alignTop="@+id/surfaceview"
    android:onClick="onSavePicture"
    android:text="Snap"
    tools:ignore="OnClick"/>

The new code has no onSurfaceChanged function, and if you write one anyway it never gets called. - Maveňツ


Getting the image buffer

In the latest ARCore SDK, we get access to the camera's image buffer through the public Frame class. Below is sample code that accesses it.

private void onSceneUpdate(FrameTime frameTime) {
    try {
        Frame currentFrame = sceneView.getArFrame();
        Image currentImage = currentFrame.acquireCameraImage();
        int imageFormat = currentImage.getFormat();
        if (imageFormat == ImageFormat.YUV_420_888) {
            Log.d("ImageFormat", "Image format is YUV_420_888");
        }
        // Release the image so the next frame can be acquired.
        currentImage.close();
    } catch (NotYetAvailableException e) {
        Log.e("onSceneUpdate", "Camera image not yet available", e);
    }
}

If you register onSceneUpdate() as a callback through setOnUpdateListener(), it is called on every update. The image will be in YUV_420_888 format, with the full field of view of the native high-resolution camera.

Also, do not forget to release the acquired image by calling currentImage.close(). Otherwise you will get a ResourceExhaustedException on the next run of onSceneUpdate.

Writing the acquired image buffer to a file

The following implementation converts the YUV buffer to a compressed JPEG byte array.
private static byte[] NV21toJPEG(byte[] nv21, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    return out.toByteArray();
}

public static void WriteImageInformation(Image image, String path) throws IOException {
    byte[] data = NV21toJPEG(YUV_420_888toNV21(image),
                image.getWidth(), image.getHeight());
    BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
    bos.write(data);
    bos.flush();
    bos.close();
}
    
private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    nv21 = new byte[ySize + uSize + vSize];

    //U and V are swapped
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
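
The wholesale vBuffer-then-uBuffer copy above only yields valid NV21 when the chroma planes are interleaved in memory with a pixel stride of 2, which is common on ARCore devices but not guaranteed by the Image API. The target layout NV21 expects is the full Y plane followed by interleaved V/U pairs; a minimal pure-Java sketch of that layout (hypothetical 2x2 frame):

```java
import java.util.Arrays;

public class Nv21Layout {
    // Pack separate Y, U, V planes into NV21 byte order:
    // all luma bytes first, then chroma pairs with V before U.
    static byte[] pack(byte[] y, byte[] u, byte[] v) {
        byte[] nv21 = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, nv21, 0, y.length);
        for (int i = 0; i < v.length; i++) {
            nv21[y.length + 2 * i] = v[i];     // V comes first in NV21
            nv21[y.length + 2 * i + 1] = u[i];
        }
        return nv21;
    }

    public static void main(String[] args) {
        byte[] y = {10, 20, 30, 40};  // 2x2 luma block
        byte[] u = {50};              // one U sample for the block
        byte[] v = {60};              // one V sample for the block
        System.out.println(Arrays.toString(pack(y, u, v)));
        // → [10, 20, 30, 40, 60, 50]
    }
}
```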

The method/class YUV_420_888toNV21 is missing. Also, since onSceneUpdate is called continuously, should we save the image into a field/property and only call WriteImageInformation when we actually want to save the image? - MichaelThePotato

@MichaelThePotato I have updated the answer. You can decide for yourself when to write the updated image buffer to a file. - nbsrujan

How do I use this code in portrait mode? acquireCameraImage is always landscape (640 x 480). Do I need to rotate the image first, and if so, how? - Kushagra

@Kushagra ARCore processes images in landscape mode; you cannot acquire them in portrait. You may have to rotate the image manually after acquiring the buffer. - nbsrujan

Thanks @nbsrujan, I managed to get the image in portrait with this code: Image image = Objects.requireNonNull(ArCoreManager.getInstance().getArFrame()).acquireCameraImage(); if (isPortrait()) { Matrix matrix = new Matrix(); matrix.postRotate(90); imageBitmap = Bitmap.createBitmap(imageBitmap, 0, 0, image.getWidth(), image.getHeight(), matrix, true); } - Kushagra

Sorry for the late reply. You can use this code to take a picture in ARCore:
private String generateFilename() {
    String date =
            new SimpleDateFormat("yyyyMMddHHmmss", java.util.Locale.getDefault()).format(new Date());
    return Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES) + File.separator + "Sceneform/" + date + "_screenshot.jpg";
}

private void saveBitmapToDisk(Bitmap bitmap, String filename) throws IOException {

    File out = new File(filename);
    if (!out.getParentFile().exists()) {
        out.getParentFile().mkdirs();
    }
    try (FileOutputStream outputStream = new FileOutputStream(filename);
         ByteArrayOutputStream outputData = new ByteArrayOutputStream()) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, outputData);
        outputData.writeTo(outputStream);
        outputStream.flush();
        outputStream.close();
    } catch (IOException ex) {
        throw new IOException("Failed to save bitmap to disk", ex);
    }
}

private void takePhoto() {
    final String filename = generateFilename();
    /*ArSceneView view = fragment.getArSceneView();*/
    mSurfaceView = findViewById(R.id.surfaceview);
    // Create a bitmap the size of the scene view.
    final Bitmap bitmap = Bitmap.createBitmap(mSurfaceView.getWidth(), mSurfaceView.getHeight(),
            Bitmap.Config.ARGB_8888);

    // Create a handler thread to offload the processing of the image.
    final HandlerThread handlerThread = new HandlerThread("PixelCopier");
    handlerThread.start();
    // Make the request to copy.
    PixelCopy.request(mSurfaceView, bitmap, (copyResult) -> {
        if (copyResult == PixelCopy.SUCCESS) {
            try {
                saveBitmapToDisk(bitmap, filename);
            } catch (IOException e) {
                Toast toast = Toast.makeText(DrawAR.this, e.toString(),
                        Toast.LENGTH_LONG);
                toast.show();
                return;
            }
            Snackbar snackbar = Snackbar.make(findViewById(android.R.id.content),
                    "Photo saved", Snackbar.LENGTH_LONG);
            snackbar.setAction("Open in Photos", v -> {
                File photoFile = new File(filename);

                Uri photoURI = FileProvider.getUriForFile(DrawAR.this,
                        DrawAR.this.getPackageName() + ".ar.codelab.name.provider",
                        photoFile);
                Intent intent = new Intent(Intent.ACTION_VIEW, photoURI);
                intent.setDataAndType(photoURI, "image/*");
                intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
                startActivity(intent);

            });
            snackbar.show();
        } else {
            Log.d("DrawAR", "Failed to copyPixels: " + copyResult);
            Toast toast = Toast.makeText(DrawAR.this,
                    "Failed to copyPixels: " + copyResult, Toast.LENGTH_LONG);
            toast.show();
        }
        handlerThread.quitSafely();
    }, new Handler(handlerThread.getLooper()));
}
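
Opening the saved file through FileProvider.getUriForFile() also requires a matching provider entry in AndroidManifest.xml. A sketch, assuming the authority string used above and a res/xml/paths.xml that exposes the Pictures directory:

```xml
<provider
    android:name="androidx.core.content.FileProvider"
    android:authorities="${applicationId}.ar.codelab.name.provider"
    android:exported="false"
    android:grantUriPermissions="true">
    <meta-data
        android:name="android.support.FILE_PROVIDER_PATHS"
        android:resource="@xml/paths" />
</provider>
```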

More details can be found here: https://codelabs.developers.google.com/codelabs/sceneform-intro/index.html?index=..%2F..io2018#14 - M. Noreikis

Why does the resulting image have white space in the areas where I placed objects? - chia yongkang
