I'm new to Python and am working through the Computer Vision quickstart "Extract printed text (OCR) using the REST API and Python" for text detection. The algorithm returns Ymin, Xmin, Ymax and Xmax coordinates and draws a bounding box around each line of text, as in the image below.
However, I would like to group text that is close together into a single bounding box. For the image above, that would give 2 bounding boxes, each containing the nearby text.
The following code produces the Ymin, Xmin, Ymax and Xmax coordinates and draws a bounding box for each line of text.
import requests
import numpy as np
# If you are using a Jupyter notebook, uncomment the following line.
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from PIL import Image
from io import BytesIO
# Replace <Subscription Key> with your valid subscription key.
subscription_key = "f244aa59ad4f4c05be907b4e78b7c6da"
assert subscription_key
vision_base_url = "https://westcentralus.api.cognitive.microsoft.com/vision/v2.0/"
ocr_url = vision_base_url + "ocr"
# Set image_url to the URL of an image that you want to analyze.
image_url = "https://cdn-ayb.akinon.net/cms/2019/04/04/e494dce0-1e80-47eb-96c9-448960a71260.jpg"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'language': 'unk', 'detectOrientation': 'true'}
data = {'url': image_url}
response = requests.post(ocr_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
# Extract the word bounding boxes and text.
line_infos = [region["lines"] for region in analysis["regions"]]
word_infos = []
for line in line_infos:
    for word_metadata in line:
        for word_info in word_metadata["words"]:
            word_infos.append(word_info)
word_infos
# Display the image and overlay it with the extracted text.
plt.figure(figsize=(100, 20))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image)
texts_boxes = []
texts = []
for word in word_infos:
    bbox = [int(num) for num in word["boundingBox"].split(",")]
    text = word["text"]
    origin = (bbox[0], bbox[1])
    patch = Rectangle(origin, bbox[2], bbox[3], fill=False, linewidth=3, color='r')
    ax.axes.add_patch(patch)
    plt.text(origin[0], origin[1], text, fontsize=2, weight="bold", va="top")
    # Convert (left, top, width, height) to [ymin, xmin, ymax, xmax].
    new_box = [bbox[1], bbox[0], bbox[1] + bbox[3], bbox[0] + bbox[2]]
    texts_boxes.append(new_box)
    texts.append(text)
plt.axis("off")
texts_boxes = np.array(texts_boxes)
texts_boxes
Output bounding boxes:
array([[ 68, 82, 138, 321],
[ 202, 81, 252, 327],
[ 261, 81, 308, 327],
[ 364, 112, 389, 182],
[ 362, 192, 389, 305],
[ 404, 98, 421, 317],
[ 92, 421, 146, 725],
[ 80, 738, 134, 1060],
[ 209, 399, 227, 456],
[ 233, 399, 250, 444],
[ 257, 400, 279, 471],
[ 281, 399, 298, 440],
[ 286, 446, 303, 458],
[ 353, 394, 366, 429]])
But I want to merge the ones that are close to each other.
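One greedy approach I've been considering (just a sketch, assuming the `[ymin, xmin, ymax, xmax]` format produced above; the `margin` threshold is a guess that would need tuning per image): treat two boxes as close if they overlap after padding each side by `margin` pixels, and keep merging until nothing changes.

```python
import numpy as np

def boxes_close(a, b, margin=20):
    """True if two [ymin, xmin, ymax, xmax] boxes overlap after
    expanding each one by `margin` pixels on every side."""
    return not (a[2] + margin < b[0] or b[2] + margin < a[0] or
                a[3] + margin < b[1] or b[3] + margin < a[1])

def merge_boxes(boxes, margin=20):
    """Greedily merge boxes that are within `margin` pixels of each
    other, returning the enclosing box of each merged group."""
    boxes = [list(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        merged = []
        while boxes:
            a = boxes.pop()
            i = 0
            while i < len(boxes):
                if boxes_close(a, boxes[i], margin):
                    b = boxes.pop(i)
                    # Enclosing box of a and b.
                    a = [min(a[0], b[0]), min(a[1], b[1]),
                         max(a[2], b[2]), max(a[3], b[3])]
                    changed = True
                else:
                    i += 1
            merged.append(a)
        boxes = merged
    return np.array(boxes)
```

Something like `merge_boxes(texts_boxes, margin=60)` would then collapse nearby lines into larger boxes; the right margin depends on the image resolution and line spacing.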