Integrating jpct-ae with Qualcomm's Vuforia engine on Android

8
On Android, I'm trying to integrate JPCT into Vuforia using this tutorial: http://www.jpct.net/wiki/index.php/Integrating_JPCT-AE_with_Vuforia. The app works the first time it launches, but when I go back and tap "Play" again, it crashes.
These are the errors in my LogCat when the app crashes:
FATAL EXCEPTION: main
java.lang.RuntimeException: [ 1362671862690 ] - ERROR:  A texture with the name  'texture' has been declared twice!
at com.threed.jpct.Logger.log(Logger.java:189)
at com.threed.jpct.TextureManager.addTexture(TextureManager.java:138)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargetsRenderer.<init>(ImageTargetsRenderer.java:78)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.initApplicationAR(ImageTargets.java:807)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.updateApplicationStatus(ImageTargets.java:649)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.updateApplicationStatus(ImageTargets.java:641)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.access$3(ImageTargets.java:598)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets$InitQCARTask.onPostExecute(ImageTargets.java:226)
at com.qualcomm.QCARSamples.ImageTargets.ImageTargets$InitQCARTask.onPostExecute(ImageTargets.java:1)
at android.os.AsyncTask.finish(AsyncTask.java:417)
at android.os.AsyncTask.access$300(AsyncTask.java:127)
at android.os.AsyncTask$InternalHandler.handleMessage(AsyncTask.java:429)
at android.os.Handler.dispatchMessage(Handler.java:99)
at android.os.Looper.loop(Looper.java:123)
at android.app.ActivityThread.main(ActivityThread.java:3691)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:507)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:847)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:605)
at dalvik.system.NativeStart.main(Native Method)

Here is the code of ImageTargetsRenderer.java:

public class ImageTargetsRenderer implements GLSurfaceView.Renderer
{
public boolean mIsActive = false;

/** Reference to main activity **/
public ImageTargets mActivity;

/** Native function for initializing the renderer. */
public native void initRendering();

/** Native function to update the renderer. */
public native void updateRendering(int width, int height);

private World world=null;
private Light sun = null;
private Object3D cube = null;
private FrameBuffer fb = null;
private float[] modelViewMat=null;
private Camera cam=null;
private float fov=0;
private float fovy=0;

//private Camera cam=null;
private Object3D plane=null;

public ImageTargetsRenderer(ImageTargets activity) {
    this.mActivity = activity;

    world = new World();
    world.setAmbientLight(20, 20, 20);

    sun = new Light(world);
    sun.setIntensity(250, 250, 250);

    // Create a texture out of the icon...:-)
    Texture texture = new Texture(BitmapHelper.rescale(
            BitmapHelper.convert(mActivity.getResources().getDrawable(R.drawable.ic_launcher)), 64, 64));
    TextureManager.getInstance().addTexture("texture", texture);

    cube = Primitives.getCube(10);
    cube.calcTextureWrapSpherical();
    cube.setTexture("texture");
    cube.strip();
    cube.build();

    world.addObject(cube);

    cam = world.getCamera();
    /*cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
    cam.lookAt(cube.getTransformedCenter());*/

    SimpleVector sv = new SimpleVector();
    SimpleVector position = new SimpleVector();
    position.x = 0;
    position.y = 0;
    position.z = -10;

    cube.setOrigin(position);
    sv.set(cube.getTransformedCenter());
    sv.y -= 100;
    sv.z -= 100;

    sun.setPosition(sv);
    MemoryHelper.compact();
}




/** Called when the surface is created or recreated. */
public void onSurfaceCreated(GL10 gl, EGLConfig config)
{
    DebugLog.LOGD("GLRenderer::onSurfaceCreated");

    // Call native function to initialize rendering:
    initRendering();

    // Call QCAR function to (re)initialize rendering after first use
    // or after OpenGL ES context was lost (e.g. after onPause/onResume):
    QCAR.onSurfaceCreated();
}


/** Called when the surface changed size. */
public void onSurfaceChanged(GL10 gl, int width, int height)
{
    DebugLog.LOGD("GLRenderer::onSurfaceChanged");

    // Call native function to update rendering when render surface
    // parameters have changed:
    updateRendering(width, height);

    // Call QCAR function to handle render surface size changes:
    QCAR.onSurfaceChanged(width, height);

    if (fb != null) {
        fb.dispose();
    }
    fb = new FrameBuffer(width, height);
}


/** The native render function. */
public native void renderFrame();


/** Called to draw the current frame. */
public void onDrawFrame(GL10 gl)
{
    if (!mIsActive)
        return;

    // Update render view (projection matrix and viewport) if needed:
    mActivity.updateRenderView();

    //updateCamera();

    // Call our native function to render content

    renderFrame();

    world.renderScene(fb);

    world.draw(fb);

    fb.display(); 

}

public void updateModelviewMatrix(float mat[]) {
    modelViewMat = mat;
}

public void setFov(float fov_) {
    fov = fov_;
}

public void setFovy(float fovy_) {
    fovy = fovy_;
}

public void updateCamera() {
    Matrix m = new Matrix();
    m.setDump(modelViewMat);
        cam.setBack(m);
        cam.setFOV(fov);
        cam.setYFOV(fovy);

}

}

Here is the code of imagetargets.cpp:

JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv  *env, jobject obj)
{

const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();
QCAR::Vec2F size = cameraCalibration.getSize();
QCAR::Vec2F focalLength = cameraCalibration.getFocalLength();
float fovyRadians = 2 * atan(0.5f * size.data[1] / focalLength.data[1]);
float fovRadians = 2 * atan(0.5f * size.data[0] / focalLength.data[0]);

jclass activityClass = env->GetObjectClass(obj);
jfloatArray modelviewArray = env->NewFloatArray(16);
jmethodID updateMatrixMethod = env->GetMethodID(activityClass, "updateModelviewMatrix", "([F)V");

jmethodID fovMethod = env->GetMethodID(activityClass, "setFov", "(F)V");
jmethodID fovyMethod = env->GetMethodID(activityClass, "setFovy", "(F)V");

// test
jclass newClass = env->GetObjectClass(obj);
jmethodID updateCameraMethod = env->GetMethodID(newClass, "updateCamera", "()V");

// Clear color and depth buffer 
//glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Get the state from QCAR and mark the beginning of a rendering section
QCAR::State state = QCAR::Renderer::getInstance().begin();
// Explicitly render the Video Background
QCAR::Renderer::getInstance().drawVideoBackground();
// Did we find any trackables this frame?
for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
    // Get the trackable:
    const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
    const QCAR::Trackable& trackable = result->getTrackable();
    QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
}
QCAR::Renderer::getInstance().end();


for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
// Get the trackable:
const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
const QCAR::Trackable& trackable = result->getTrackable();
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());

    SampleUtils::rotatePoseMatrix(180.0f, 1.0f, 0, 0, &modelViewMatrix.data[0]);
    // Passes the model view matrix to java
    env->SetFloatArrayRegion(modelviewArray, 0, 16, modelViewMatrix.data);
    env->CallVoidMethod(obj, updateMatrixMethod , modelviewArray);
    env->CallVoidMethod(obj, updateCameraMethod);
    env->CallVoidMethod(obj, fovMethod, fovRadians);
    env->CallVoidMethod(obj, fovyMethod, fovyRadians);




}
env->DeleteLocalRef(modelviewArray);



}
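The fovRadians/fovyRadians values above come from the pinhole-camera formula fov = 2 · atan(0.5 · imageSize / focalLength). A minimal, self-contained Java sketch (with made-up calibration numbers, not values read from Vuforia) shows the computation the native code hands to setFov()/setFovy():

```java
public class FovCheck {
    // Pinhole-camera field of view: 2 * atan(0.5 * imageSize / focalLength).
    // Mirrors the fovRadians/fovyRadians computation in renderFrame().
    static float fieldOfView(float sizePx, float focalLengthPx) {
        return (float) (2.0 * Math.atan(0.5f * sizePx / focalLengthPx));
    }

    public static void main(String[] args) {
        // Hypothetical calibration values; real ones come from
        // QCAR::CameraDevice::getInstance().getCameraCalibration().
        float width = 640f, height = 480f;
        float fx = 650f, fy = 650f;

        float fov  = fieldOfView(width, fx);   // horizontal FOV, radians
        float fovy = fieldOfView(height, fy);  // vertical FOV, radians

        System.out.println(Math.toDegrees(fov) + " x " + Math.toDegrees(fovy));
    }
}
```

Since width > height at equal focal length, the horizontal FOV always comes out larger than the vertical one, which is why the renderer needs both setFov() and setYFOV().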

What does that exception mean?


What error are you getting? - Sam R.
The application fails. It compiles fine, but the app stops at launch. - Romain
Does anything show up in your _Logcat_ when it fails? - Sam R.
You're right about that, Sam, and I apologize. I solved it. Actually, the main problem comes after this point; several of us got stuck because the tutorial isn't precise enough. For this first part the app compiles, but we don't see the cube in the scene, and errors appear as soon as we touch imagetargets.cpp (the renderFrame function). I've edited my post if you'd like to take a look :) - Romain
1
By the way, your code is missing a line. I'll add an answer. - Sam R.
3 Answers

6

The renderFrame method in ImageTarget.cpp should start like this:

jclass activityClass = env->GetObjectClass(obj);
jfloatArray modelviewArray = env->NewFloatArray(16);
jmethodID method = env->GetMethodID(activityClass, "updateModelviewMatrix", "([F)V");

I think this will fix your error: activityClass was not declared in this scope.

Comment out this line and test again; you don't need it anymore:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

I suggest commenting out the renderFrame() call in onDrawFrame() to see whether jPCT can render the cube when QCAR doesn't start rendering first. (For testing purposes only.)

Not to mention that QCAR changes some OpenGL state by default at startup, so you have to re-enable some of that state to render with jPCT. For more information, see OpenGL state changes in the video background renderer.

After calling renderFrame in onDrawFrame(), this is what I use for OpenGL ES 1.x:

GL11 gl11 = (GL11) gl;
gl11.glEnable(GL11.GL_DEPTH_TEST);
gl11.glEnable(GL11.GL_CULL_FACE);
gl11.glTexEnvi(GL11.GL_TEXTURE_ENV, GL11.GL_TEXTURE_ENV_MODE, GL11.GL_MODULATE);
gl11.glEnable(GL11.GL_LIGHTING);
gl11.glEnable(GL11.GL_BLEND);

When I comment out renderFrame, the app compiles but the camera doesn't work (the screen stays black). I'll try the glTexture/enable/disable calls. - Romain
That's obvious, since QCAR isn't rendering, but you should still see the cube or something. - Sam R.
I now have the cube in the scene. Two lines of code were missing; kelmer gives them in the second answer. But I don't see the cube on the marker. - Romain
1
@Amourreux, which tutorial? Did you rotate the matrix before sending it to Java? - Sam R.
1
@Amourreux, sorry, I haven't touched my code in a year. Do you have a Bitbucket account? I can give you access to the code. - Sam R.

5

Before applying the matrix, you first have to tell the camera to look at the object in order to see it:

Camera cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 50);
cam.lookAt(cube.getTransformedCenter());

Note that you should remove these lines once you start updating the camera with the model-view matrix.

If you followed my tutorial, you don't actually need to enable any OpenGL state to see things on the marker (although you may want to enable them for other reasons, as Sam Rad suggested).


Well, you should be calling the updateCamera() method somewhere in your code. Try placing it after renderFrame(). - M Rajoy
1
Let the native code call the updateCamera() function; there's no need to call it explicitly from Java. - Sam R.
1
I'm referring to the same issue. But updateCamera is only valid when there is a trackable in view; otherwise mat is null. You can call it from native code inside for(int tIdx = 0; tIdx < state.getNumActiveTrackables(); tIdx++). - Sam R.
1
OK, in your constructor you're initializing a different cam object from the one you have as a field. Use cam = world.getCamera() instead of Camera cam = world.getCamera(). - M Rajoy
1
@Amourreux Make sure you don't apply any translation or rotation when you create the cube. - M Rajoy

0

Add this code to the renderer to remove the already-loaded texture:

public void cleanup()
{
    TextureManager.getInstance().removeTexture("texture");
}

Call this when the Activity is closed/paused:

@Override
protected void onPause()
{
    super.onPause();
    mRenderer.cleanup();
}
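An alternative to removing the texture on pause is to guard the registration itself; jPCT's TextureManager also offers a containsTexture(String) check for this, if I recall the API correctly. The crash and the guard can be illustrated with a small, self-contained stand-in class (this is not jPCT's TextureManager, just a sketch of the same "declared twice" behavior):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for jPCT's TextureManager: registering the same name twice
// throws, just like the "has been declared twice!" RuntimeException
// in the LogCat above.
class FakeTextureManager {
    private final Map<String, Object> textures = new HashMap<>();

    void addTexture(String name, Object tex) {
        if (textures.containsKey(name)) {
            throw new RuntimeException(
                "A texture with the name '" + name + "' has been declared twice!");
        }
        textures.put(name, tex);
    }

    boolean containsTexture(String name) {
        return textures.containsKey(name);
    }
}

public class TextureGuard {
    public static void main(String[] args) {
        FakeTextureManager tm = new FakeTextureManager();

        // First activity launch: registration succeeds.
        tm.addTexture("texture", new Object());

        // Second launch (the renderer constructor runs again): check first,
        // so the duplicate registration is skipped instead of crashing.
        if (!tm.containsTexture("texture")) {
            tm.addTexture("texture", new Object());
        }
        System.out.println("no crash");
    }
}
```

In the question's renderer, the same guard would go around the TextureManager.getInstance().addTexture("texture", texture) call in the constructor, which runs again every time the Activity is recreated.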
