How can I convert URLs in plain text into clickable links using Python?


I have a piece of plain text; for example, consider the following sentence:

I was surfing www.google.com, and I found an interesting site www.stackoverflow.com. It's amazing!

In the example above, www.google.com is plain text, and I need to turn it into something like www.google.com wrapped in an anchor tag pointing to google.com. www.stackoverflow.com, on the other hand, is already inside an anchor tag, and I want to leave it as-is. How can I do this with Python regular expressions?


Using an HTML parser would be a far better solution. Regular expressions are not suited to a task like this!
@Docteur, could you give a simple example of replacing text with an HTML parser? Thanks a lot! :)
1 Answer


This task needs to be split into two parts:

  • extract all text that is not inside an a tag
  • find (or, more accurately, guess) all URLs in that text and wrap them

For the first part I recommend BeautifulSoup. You could also use html.parser, but that would mean a lot of extra work.

Use a recursive function to find the text:

from bs4 import BeautifulSoup
from bs4.element import NavigableString

your_text = """I was surfing <a href="...">www.google.com</a>, and I found an
interesting site https://www.stackoverflow.com/. It's amazing! I also liked
Heroku (http://heroku.com/pricing)
more.domains.tld/at-the-end-of-line
https://at-the_end_of-text.com"""

soup = BeautifulSoup(your_text, "html.parser")

def wrap_plaintext_links(bs_tag):
    for element in bs_tag.children:
        if type(element) == NavigableString:
            pass # now we have a text node, process it
        # so it is a Tag (or the soup object, which is for most purposes a tag as well)
        elif element.name != "a": # if it isn't the a tag, process it recursively
            wrap_plaintext_links(element)

wrap_plaintext_links(soup) # call the recursive function

You can check that it finds only the values you want by replacing the pass with print(element).
Now the URLs need to be found and replaced. How complex the regular expression gets depends on how precise you need it to be. I suggest the following expression:
(https?://)?        # match http(s):// in separate group if present
(                   # start of the main capturing group, what will be between the tags
  (?:[\w-]+\.)+     #   at least one domain and any subdomains before TLD
  [a-z]+            #   TLD
  (?:/\S*?)?        #   /[anything except whitespace] if present - URL path
)                   # end of the group
(?=[\.,)]?(?:\s|$)) # prevent matching any of ".,)" that might appear immediately after the URL as the text goes...
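Before wiring the pattern into the code, it can be sanity-checked on its own. A quick sketch (the sample sentences here are mine, not from the question):

```python
import re

# The pattern from above, in its compact one-line form
r = re.compile(r"(https?://)?((?:[\w-]+\.)+[a-z]+(?:/\S*?)?)(?=[\.,)]?(?:\s|$))")

# A bare domain: group 1 (the scheme) is empty, group 2 holds the URL text
m = r.search("I was surfing www.google.com today")
print(m.group(1), m.group(2))   # → None www.google.com

# A parenthesised URL: the lookahead keeps the closing ")" out of the match
m = r.search("I also liked (http://heroku.com/pricing) more")
print(m.group(0))               # → http://heroku.com/pricing
```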

Adding the replacement function and the code that performs the substitution:

import re

def create_replacement(matchobj):
    if matchobj.group(1): # if there's http(s)://, keep it
        full_url = matchobj.group(0)
    else: # otherwise prepend it. it would be a long discussion if https or http. decide.
        full_url = "http://" + matchobj.group(2)
    tag = soup.new_tag("a", href=full_url)
    tag.string = matchobj.group(2)
    return str(tag)

# compile the pattern beforehand, as it's going to be used many times
r = re.compile(r"(https?://)?((?:[\w-]+\.)+[a-z]+(?:/\S*?)?)(?=[\.,)]?(?:\s|$))")

def wrap_plaintext_links(bs_tag):
    for element in bs_tag.children:
        if type(element) == NavigableString:
            replaced = r.sub(create_replacement, str(element))
            element.replace_with(BeautifulSoup(replaced, "html.parser")) # parse it as a Soup so that the tags aren't escaped
        elif element.name != "a":
            wrap_plaintext_links(element)
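Putting the pieces together, here is a minimal end-to-end sketch on a small input of my own (it assumes bs4 is installed; the sample HTML is an assumption, not from the question):

```python
import re
from bs4 import BeautifulSoup
from bs4.element import NavigableString

# the one-line form of the pattern explained above
r = re.compile(r"(https?://)?((?:[\w-]+\.)+[a-z]+(?:/\S*?)?)(?=[\.,)]?(?:\s|$))")

# one bare URL, one already wrapped in an anchor
soup = BeautifulSoup(
    'I was surfing www.google.com and '
    '<a href="https://www.stackoverflow.com/">www.stackoverflow.com</a>.',
    "html.parser")

def create_replacement(matchobj):
    if matchobj.group(1):                  # scheme present: keep the URL as-is
        full_url = matchobj.group(0)
    else:                                  # no scheme: prepend http://
        full_url = "http://" + matchobj.group(2)
    tag = soup.new_tag("a", href=full_url)
    tag.string = matchobj.group(2)
    return str(tag)

def wrap_plaintext_links(bs_tag):
    for element in bs_tag.children:
        if type(element) == NavigableString:
            replaced = r.sub(create_replacement, str(element))
            element.replace_with(BeautifulSoup(replaced, "html.parser"))
        elif element.name != "a":
            wrap_plaintext_links(element)

wrap_plaintext_links(soup)
print(soup)
# the bare www.google.com is now wrapped; the existing anchor is untouched
```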

Note: you can also keep the pattern explanation inside the code, the way I wrote it above; see the re.X flag.
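Concretely, the commented pattern above can be compiled almost verbatim with that flag. A sketch (same pattern, nothing new added):

```python
import re

# Same pattern as the one-liner, but kept with its explanatory comments:
# re.X (verbose mode) makes the engine ignore unescaped whitespace and
# "#" comments inside the pattern
r = re.compile(r"""
    (https?://)?        # match http(s):// in a separate group if present
    (                   # main capturing group: what goes between the tags
      (?:[\w-]+\.)+     #   at least one domain/subdomain label before the TLD
      [a-z]+            #   the TLD
      (?:/\S*?)?        #   optional URL path (anything except whitespace)
    )
    (?=[\.,)]?(?:\s|$)) # allow one trailing ".", "," or ")" after the URL
""", re.X)

print(r.search("try www.example.com now").group(2))   # → www.example.com
```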
