Why is the accuracy of the TFLite model so different from the Keras model?


I built a model that predicts the characters in an image for license plate recognition. It works fine on my computer, but I need to get it running in an Android app. So I developed a small app and converted my Keras model to TFLite. Now it always predicts the same character.

I converted the model like this:

mod_path = "License_character_recognition.h5"

def load_model(path,custom_objects={},verbose=0):
    #from tf.keras.models import model_from_json

    path = splitext(path)[0]
    with open('MobileNets_character_recognition.json','r') as json_file:
        model_json = json_file.read()
    model = tf.keras.models.model_from_json(model_json, custom_objects=custom_objects)
    model.load_weights('%s.h5' % path)
    if verbose: print('Loaded from %s' % path)
    return model

keras_mod = load_model(mod_path)

converter = tf.lite.TFLiteConverter.from_keras_model(keras_mod)
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('ocr.tflite', 'wb') as f:
    f.write(tflite_model)

Is there a better way to convert the model, or am I missing something?

Edit: here is how I handle the bitmap.

        try {
            // Load the plate image twice: in color for drawing, in grayscale for processing.
            Mat bis = Utils.loadResource(MainActivity.this, R.drawable.plaque, Imgcodecs.IMREAD_COLOR);
            cvtColor(bis, bis, COLOR_BGR2RGB);

            Mat m = Utils.loadResource(MainActivity.this, R.drawable.plaque, Imgcodecs.IMREAD_GRAYSCALE);

            Mat blur = new Mat();
            blur(m, blur, new Size(2, 2));

            Mat bin = new Mat();
            threshold(blur, bin, 0, 255, THRESH_BINARY_INV + THRESH_OTSU);

            ArrayList<MatOfPoint> contours = getContours(bin);

            // Try to sort the contours from left to right
            Collections.sort(contours, new SortByTopLeft());
            Log.d("Contour", String.valueOf(contours.size()));
            int i = 0;
            for (MatOfPoint c : contours) {
                Rect cont = boundingRect(c);
                float ratio = (float) cont.height / (float) cont.width;
                Log.d("Ratio", String.valueOf(ratio));
                float pourcent = (float) cont.height / (float) bin.height();
                Log.d("pourcent", String.valueOf(pourcent));
                if (ratio >= 1 && ratio <= 2.5) {
                    if (pourcent >= 0.5) {
                        Log.d("Ui", String.valueOf(cont));
                        rectangle(bis, cont, new Scalar(0, 255, 0), 2);

                        // Separate the characters
                        Mat curr_num = new Mat(bin, cont);
                        Bitmap curbit = Bitmap.createBitmap(curr_num.cols(), curr_num.rows(), Bitmap.Config.ARGB_8888);
                        Utils.matToBitmap(curr_num, curbit);
                        images[i].setImageBitmap(curbit);
                        int charac = classifier.classify(curbit);
                        Log.d("Result", String.valueOf(charac));
                        result.setText(String.valueOf(charac));
                        if (i < 6) {
                            i++;
                        }
                    }
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }

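classify() itself is not shown above. For context, a typical hand-rolled version of that bitmap-to-ByteBuffer step looks roughly like the sketch below; the 32x32 input size, the single grayscale channel, the 1/255 scaling and the NUM_CLASSES/interpreter fields are all assumptions, not code from this project. Any mismatch between this kind of manual scaling and the preprocessing used at training time is exactly the sort of thing that makes the model output the same class every time.

    // Hypothetical classify(): scale the cropped character, pack it into a
    // float ByteBuffer and run the TFLite interpreter on it. Input size,
    // channel count and normalization are assumptions and must match training.
    private int classify(Bitmap bitmap) {
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, 32, 32, true);

        ByteBuffer input = ByteBuffer.allocateDirect(4 * 32 * 32);
        input.order(ByteOrder.nativeOrder());

        int[] pixels = new int[32 * 32];
        scaled.getPixels(pixels, 0, 32, 0, 0, 32, 32);
        for (int pixel : pixels) {
            // Keep one channel of the ARGB int and scale it to [0, 1].
            input.putFloat((pixel & 0xFF) / 255.0f);
        }
        input.rewind();

        // NUM_CLASSES and interpreter are assumed fields of the classifier class.
        float[][] output = new float[1][NUM_CLASSES];
        interpreter.run(input, output);

        // Return the index of the most probable class.
        int best = 0;
        for (int j = 1; j < NUM_CLASSES; j++) {
            if (output[0][j] > output[0][best]) best = j;
        }
        return best;
    }
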
If you're blaming Android, post some of the Android code that does the bitmap manipulation so it can be checked. - Farmaker
I edited my question, in case that helps. - T.K
Please read this article https://medium.com/@farmaker47/impact-of-different-image-loading-and-resizing-libraries-in-tensorflow-inference-output-and-df3c96a41825 and see whether loading and manipulating the image with the TensorFlow library solves your problem... - Farmaker
Thanks, you were right, using the TensorFlow Support Library to manage the image was the answer! I recreated the input and output with the tensorflow_support_library.java part of your article. - T.K
OK, great! I'll add an answer, and it would be nice if you could upvote it! - Farmaker
1 Answer

You can use the TensorFlow Lite Android Support Library. It is designed to help process the inputs and outputs of TensorFlow Lite models and to make the TensorFlow Lite interpreter easier to use.
Use it as below, and find more information in this article:

    Bitmap assetsBitmap = getBitmapFromAsset(mContext, "picture.jpg");
    // Initialization code
    // Create an ImageProcessor with all ops required. For more ops, please
    // refer to the ImageProcessor Architecture.
    ImageProcessor imageProcessor =
            new ImageProcessor.Builder()
                    .add(new ResizeOp(32, 32, ResizeOp.ResizeMethod.BILINEAR))
                    //.add(new NormalizeOp(127.5f, 127.5f))
                    .build();

    // Create a TensorImage object. This creates the tensor of the corresponding
    // tensor type (float32 in this case) that the TensorFlow Lite interpreter needs.
    TensorImage tImage = new TensorImage(DataType.FLOAT32);

    // Analysis code for every frame
    // Preprocess the image
    tImage.load(assetsBitmap);
    tImage = imageProcessor.process(tImage);

    // Create a container for the result and specify that this is not a quantized model.
    // Hence, the 'DataType' is defined as FLOAT32
    TensorBuffer probabilityBuffer = TensorBuffer.createFixedSize(new int[]{1, 10}, DataType.FLOAT32);
    interpreter.run(tImage.getBuffer(), probabilityBuffer.getBuffer());

    Log.i("RESULT", Arrays.toString(probabilityBuffer.getFloatArray()));

    // getSortedResult(...) belongs to the rest of the classifier class (not shown here).
    return getSortedResult(result);
}

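If you also want to map probabilityBuffer back to a readable character instead of logging raw floats, the same support library provides TensorLabel and FileUtil. A short sketch follows; the "labels.txt" asset name and its one-label-per-line ordering are assumptions, not part of the original answer.

    // Sketch: attach labels to the probability buffer with the TFLite Support
    // Library (TensorLabel and FileUtil come from org.tensorflow.lite.support).
    // "labels.txt" is an assumed asset with one class label per line, in the
    // same order the model was trained with.
    try {
        List<String> labels = FileUtil.loadLabels(mContext, "labels.txt");
        TensorLabel tensorLabel = new TensorLabel(labels, probabilityBuffer);

        // Pick the label with the highest probability.
        Map<String, Float> probs = tensorLabel.getMapWithFloatValue();
        String best = null;
        float bestScore = -1f;
        for (Map.Entry<String, Float> entry : probs.entrySet()) {
            if (entry.getValue() > bestScore) {
                bestScore = entry.getValue();
                best = entry.getKey();
            }
        }
        Log.i("BEST", best + " (" + bestScore + ")");
    } catch (IOException e) {
        e.printStackTrace();
    }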