Is there an OCR library that outputs the coordinates of the words found in an image?

33

In my experience, OCR libraries usually output only the text found in an image, not where that text was found.
Is there an OCR library that outputs both the words found in an image and the coordinates (x, y, width, height) at which each word was found?

8 Answers

27

Most commercial OCR engines return word and character coordinates, but you need to use their SDK to extract that information. Even Tesseract OCR returns position information, although it is hard to get at. Version 3.01 will make it easier to use, but the DLL interface is still under development.

Unfortunately, most free OCR programs use only the basic version of Tesseract OCR and report nothing but the raw ASCII results.

www.transym.com - Transym OCR - outputs coordinates.
www.rerecognition.com - the KADMOS engine returns coordinates.

In addition, Caere Omnipage, Mitek, Abbyy and Charactell also return character positions.
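As an aside, more recent Tesseract releases (roughly 3.05 onwards, assuming the tsv config file ships with your build) can also dump word boxes straight from the command line, with no SDK involved. A minimal sketch, with input.png and output as placeholder names:

tesseract input.png output tsv

The resulting output.tsv contains one row per recognized word, with left, top, width, height, conf and text columns.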


It looks like hOCR output was added in Tesseract v3.00: https://code.google.com/p/tesseract-ocr/wiki/ReleaseNotes#Tesseract_release_notes_Sep_30_2010_-_V3.00 I don't know whether that format includes exact coordinates, but it does appear to include layout information: https://code.google.com/p/hocr-tools/ - David

16

I am using TessNet (a C# wrapper for Tesseract) and getting the word coordinates with the code below:

Bitmap image = new Bitmap(@"u:\user files\bwalker\2849257.tif");
tessnet2.Tesseract ocr = new tessnet2.Tesseract();
// Restrict recognition to this character whitelist
ocr.SetVariable("tessedit_char_whitelist", "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.,$-/#&=()\"':?");
// Point Init at the directory that contains the tessdata folder
ocr.Init(@"C:\Users\bwalker\Documents\Visual Studio 2010\Projects\tessnetWinForms\tessnetWinForms\bin\Release\", "eng", false);
List<tessnet2.Word> result = ocr.DoOCR(image, System.Drawing.Rectangle.Empty);
List<tessnet2.Word> result = ocr.DoOCR(image, System.Drawing.Rectangle.Empty);
string Results = "";
foreach (tessnet2.Word word in result)
{
    Results += word.Confidence + ", " + word.Text + ", " + word.Top + ", " + word.Bottom + ", " + word.Left + ", " + word.Right + "\n";
}
using (StreamWriter writer = new StreamWriter(@"U:\user files\bwalker\ocrTesting2.txt", true))
{
    writer.WriteLine(Results);
}
MessageBox.Show("Completed");

7
You can use the hocr "config file" with tesseract, like this:
tesseract syllabus-page1.jpg syllabus-page1 hocr

This outputs a mostly-HTML5 document containing elements like these:
<div class='ocr_page' id='page_1' title='image "syllabus-page1.jpg"; bbox 0 0 2531 3272; ppageno 0'>
  <div class="ocr_carea" id="block_1_4" title="bbox 265 1183 2147 1778">
    <p class="ocr_par" dir="ltr" id="par_1_8" title="bbox 274 1305 655 1342">
      <span class="ocr_line" id="line_1_14" title="bbox 274 1305 655 1342; baseline -0.005 0; x_size 46.378059; x_descenders 10.378059; x_ascenders 12">
        <span class="ocrx_word" id="word_1_78" title="bbox 274 1307 386 1342; x_wconf 90" lang="eng" dir="ltr">needs</span>
        <span class="ocrx_word" id="word_1_79" title="bbox 402 1318 459 1342; x_wconf 90" lang="eng" dir="ltr">are</span>
        <span class="ocrx_word" id="word_1_80" title="bbox 474 1305 655 1341; x_wconf 86" lang="eng" dir="ltr">different:</span>
      </span>
    </p>
    ...
  </div>  
  ...
</div>

I am pretty sure this is not the proper way to use the XML, but it was easier for me than digging into the tesseract API.

By the way, I know a couple of comments and answers mention this solution, but none of them actually show how to use the hocr option or describe the output you get from it.
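If you just need the numbers, one quick way to pull the words and their bbox values back out of the hOCR file is a regex pass. Below is a minimal C# sketch under a few assumptions: the file name is a placeholder (depending on the Tesseract version the extension is .hocr or .html), and the regex is tailored to the ocrx_word spans shown above rather than to the full hOCR spec.

using System;
using System.IO;
using System.Text.RegularExpressions;

class HocrWords {
    static void Main() {
        // Read the hOCR file produced by the tesseract command above.
        string hocr = File.ReadAllText("syllabus-page1.hocr");

        // Each word is an ocrx_word span whose title attribute starts with "bbox x0 y0 x1 y1".
        var wordSpan = new Regex(
            "<span class=['\"]ocrx_word['\"][^>]*title=['\"]bbox (\\d+) (\\d+) (\\d+) (\\d+)[^'\"]*['\"][^>]*>(.*?)</span>",
            RegexOptions.Singleline);

        foreach (Match m in wordSpan.Matches(hocr)) {
            int x0 = int.Parse(m.Groups[1].Value);
            int y0 = int.Parse(m.Groups[2].Value);
            int x1 = int.Parse(m.Groups[3].Value);
            int y1 = int.Parse(m.Groups[4].Value);
            string word = Regex.Replace(m.Groups[5].Value, "<.*?>", ""); // drop any nested markup
            Console.WriteLine("{0}: x={1} y={2} width={3} height={4}", word, x0, y0, x1 - x0, y1 - y0);
        }
    }
}

For the sample output above this prints each word followed by its x, y, width and height in page pixels.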


1
This one is really easy to use and doesn't require much setup. I think it is the best solution. - mjpablo23

4
The Google Vision API can do this. https://cloud.google.com/vision/docs/detecting-text
"description": "Wake up human!\n",
      "boundingPoly": {
        "vertices": [
          {
            "x": 29,
            "y": 394
          },
          {
            "x": 570,
            "y": 394
          },
          {
            "x": 570,
            "y": 466
          },
          {
            "x": 29,
            "y": 466
          }
        ]
      }
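For completeness, here is a minimal sketch of calling it from C#, assuming the Google.Cloud.Vision.V1 NuGet package and application-default credentials are set up; the image path is a placeholder:

using System;
using Google.Cloud.Vision.V1;

class VisionTextDemo {
    static void Main() {
        var client = ImageAnnotatorClient.Create();
        var image = Image.FromFile("example.png");

        // DetectText returns one annotation for the whole text block plus one per word,
        // each carrying a bounding polygon with four vertices.
        foreach (EntityAnnotation annotation in client.DetectText(image)) {
            Console.WriteLine("{0} -> {1}",
                annotation.Description.Trim(),
                string.Join(" ", annotation.BoundingPoly.Vertices));
        }
    }
}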

2

For Java developers:

I recommend using Tesseract together with Tess4j.

You can actually find an example of how to locate words in an image in one of Tess4j's tests:

https://github.com/nguyenq/tess4j/blob/master/src/test/java/net/sourceforge/tess4j/TessAPITest.java#L449-L517

public void testResultIterator() throws Exception {
    logger.info("TessBaseAPIGetIterator");
    File tiff = new File(this.testResourcesDataPath, "eurotext.tif");
    BufferedImage image = ImageIO.read(new FileInputStream(tiff)); // require jai-imageio lib to read TIFF
    ByteBuffer buf = ImageIOHelper.convertImageData(image);
    int bpp = image.getColorModel().getPixelSize();
    int bytespp = bpp / 8;
    int bytespl = (int) Math.ceil(image.getWidth() * bpp / 8.0);
    api.TessBaseAPIInit3(handle, datapath, language);
    api.TessBaseAPISetPageSegMode(handle, TessPageSegMode.PSM_AUTO);
    api.TessBaseAPISetImage(handle, buf, image.getWidth(), image.getHeight(), bytespp, bytespl);
    ETEXT_DESC monitor = new ETEXT_DESC();
    TimeVal timeout = new TimeVal();
    timeout.tv_sec = new NativeLong(0L); // time > 0 causes blank output
    monitor.end_time = timeout;
    ProgressMonitor pmo = new ProgressMonitor(monitor);
    pmo.start();
    api.TessBaseAPIRecognize(handle, monitor);
    logger.info("Message: " + pmo.getMessage());
    TessResultIterator ri = api.TessBaseAPIGetIterator(handle);
    TessPageIterator pi = api.TessResultIteratorGetPageIterator(ri);
    api.TessPageIteratorBegin(pi);
    logger.info("Bounding boxes:\nchar(s) left top right bottom confidence font-attributes");
    int level = TessPageIteratorLevel.RIL_WORD;

    // int height = image.getHeight();
    do {
        Pointer ptr = api.TessResultIteratorGetUTF8Text(ri, level);
        String word = ptr.getString(0);
        api.TessDeleteText(ptr);
        float confidence = api.TessResultIteratorConfidence(ri, level);
        IntBuffer leftB = IntBuffer.allocate(1);
        IntBuffer topB = IntBuffer.allocate(1);
        IntBuffer rightB = IntBuffer.allocate(1);
        IntBuffer bottomB = IntBuffer.allocate(1);
        api.TessPageIteratorBoundingBox(pi, level, leftB, topB, rightB, bottomB);
        int left = leftB.get();
        int top = topB.get();
        int right = rightB.get();
        int bottom = bottomB.get();
        /******************************************/
        /* COORDINATES AND WORDS ARE PRINTED HERE */
        /******************************************/
        System.out.print(String.format("%s %d %d %d %d %f", word, left, top, right, bottom, confidence));
        // logger.info(String.format("%s %d %d %d %d", str, left, height - bottom, right, height - top)); //
        // training box coordinates

        IntBuffer boldB = IntBuffer.allocate(1);
        IntBuffer italicB = IntBuffer.allocate(1);
        IntBuffer underlinedB = IntBuffer.allocate(1);
        IntBuffer monospaceB = IntBuffer.allocate(1);
        IntBuffer serifB = IntBuffer.allocate(1);
        IntBuffer smallcapsB = IntBuffer.allocate(1);
        IntBuffer pointSizeB = IntBuffer.allocate(1);
        IntBuffer fontIdB = IntBuffer.allocate(1);
        String fontName = api.TessResultIteratorWordFontAttributes(ri, boldB, italicB, underlinedB, monospaceB,
                serifB, smallcapsB, pointSizeB, fontIdB);
        boolean bold = boldB.get() == TRUE;
        boolean italic = italicB.get() == TRUE;
        boolean underlined = underlinedB.get() == TRUE;
        boolean monospace = monospaceB.get() == TRUE;
        boolean serif = serifB.get() == TRUE;
        boolean smallcaps = smallcapsB.get() == TRUE;
        int pointSize = pointSizeB.get();
        int fontId = fontIdB.get();
        logger.info(String.format("  font: %s, size: %d, font id: %d, bold: %b,"
                + " italic: %b, underlined: %b, monospace: %b, serif: %b, smallcap: %b", fontName, pointSize,
                fontId, bold, italic, underlined, monospace, serif, smallcaps));
    } while (api.TessPageIteratorNext(pi, level) == TRUE);

    assertTrue(true);
}


0

ABCocr.NET (our component) will let you get the coordinates at which each word was found. The values are available through the Word.Bounds property, which simply returns a System.Drawing.Rectangle.

The example below shows how to OCR an image with ABCocr.NET and output the information you need:

using System;
using System.Drawing;
using WebSupergoo.ABCocr3;

namespace abcocr {
    class Program {
        static void Main(string[] args) {

            Bitmap bitmap = (Bitmap)Bitmap.FromFile("example.png");
            Ocr ocr = new Ocr();
            ocr.SetBitmap(bitmap);

            foreach (Word word in ocr.Page.Words) {
                Console.WriteLine("{0}, X: {1}, Y: {2}, Width: {3}, Height: {4}",
                    word.Text,
                    word.Bounds.X,
                    word.Bounds.Y,
                    word.Bounds.Width,
                    word.Bounds.Height);
            }
        }
    }
}

Disclaimer: this post was written by a member of the WebSupergoo team.


0

hocr is an output format of the tesseract OCR engine. It contains the words together with their coordinates, plus some extra information such as the confidence level of each word recognition.
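For example (file names are placeholders), running:

tesseract input.png output hocr

produces output.hocr (or output.html on older versions), in which each ocrx_word span carries a "bbox left top right bottom" entry in its title attribute, as shown in the sample in an earlier answer.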

