Remove all styles, scripts, and HTML tags from an HTML page

17

Here is what I have so far:

from bs4 import BeautifulSoup

def cleanme(html):
    soup = BeautifulSoup(html) # create a new bs4 object from the html data loaded
    for script in soup(["script"]): 
        script.extract()
    text = soup.get_text()
    return text
testhtml = "<!DOCTYPE HTML>\n<head>\n<title>THIS IS AN EXAMPLE </title><style>.call {font-family:Arial;}</style><script>getit</script><body>I need this text captured<h1>And this</h1></body>"

cleaned = cleanme(testhtml)
print (cleaned)

This works for removing the scripts.


1
What is your expected output? - salmanwahed
6 Answers

28

It looks like you're almost there, but you still need to remove the HTML tags and the CSS styling. Here is my solution (I updated the function):

def cleanMe(html):
    soup = BeautifulSoup(html, "html.parser") # create a new bs4 object from the html data loaded
    for script in soup(["script", "style"]): # remove all javascript and stylesheet code
        script.extract()
    # get text
    text = soup.get_text()
    # break into lines and remove leading and trailing space on each
    lines = (line.strip() for line in text.splitlines())
    # break multi-headlines into a line each
    chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
    # drop blank lines
    text = '\n'.join(chunk for chunk in chunks if chunk)
    return text
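
A quick check against the testhtml string from the question (the exact output may vary slightly across bs4 versions, and words from adjacent tags can run together because get_text() is called without a separator):

cleaned = cleanMe(testhtml)
print(cleaned)
# prints something close to:
# THIS IS AN EXAMPLE I need this text capturedAnd this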

@Anu, this works for me: relist = re.split("window.fbAsyncInit+", texttotest) print(relist[0]) You can see the regex split works fine; I used the full sample text you provided as the texttotest variable. - james-see

16
You can use decompose to remove the tags from the document completely, and the stripped_strings generator to retrieve the tag contents.
from bs4 import BeautifulSoup

def clean_me(html):
    soup = BeautifulSoup(html, 'html.parser')  # explicit parser avoids the bs4 warning
    for s in soup(['script', 'style']):
        s.decompose()  # remove the tag together with its contents
    return ' '.join(soup.stripped_strings)

>>> clean_me(testhtml) 
'THIS IS AN EXAMPLE I need this text captured And this'

6

Removing the specified tags and comments in a clean way. Thanks to Kim Hyesung for this code:

from bs4 import BeautifulSoup
from bs4 import Comment

def cleanMe(html):
    soup = BeautifulSoup(html, "html5lib")    
    [x.extract() for x in soup.find_all('script')]
    [x.extract() for x in soup.find_all('style')]
    [x.extract() for x in soup.find_all('meta')]
    [x.extract() for x in soup.find_all('noscript')]
    [x.extract() for x in soup.find_all(text=lambda text:isinstance(text, Comment))]
    return soup
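
Note that this version returns the cleaned soup object rather than plain text, and it needs the html5lib parser installed (pip install html5lib). A usage sketch with the testhtml string from the question:

print(cleanMe(testhtml).get_text())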

4

Using lxml instead:

# Requirements: pip install lxml

import lxml.html.clean


def cleanme(content):
    cleaner = lxml.html.clean.Cleaner(
        allow_tags=[''],
        remove_unknown_tags=False,
        style=True,
    )
    html = lxml.html.document_fromstring(content)
    html_clean = cleaner.clean_html(html)
    return html_clean.text_content().strip()

testhtml = "<!DOCTYPE HTML>\n<head>\n<title>THIS IS AN EXAMPLE </title><style>.call {font-family:Arial;}</style><script>getit</script><body>I need this text captured<h1>And this</h1></body>"
cleaned = cleanme(testhtml)
print (cleaned)

2
If you want a quick and dirty solution, you can use:
re.sub(r'<[^>]*?>', '', value)

to create an equivalent of PHP's strip_tags. Is that what you want?
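
For completeness, a minimal self-contained sketch of that one-liner (strip_tags and value are just placeholder names here). Unlike the BeautifulSoup answers above, the regex only removes the tags themselves, not the contents of script or style blocks:

import re

def strip_tags(value):
    # remove anything that looks like an HTML tag
    return re.sub(r'<[^>]*?>', '', value)

print(strip_tags("<p>I need this <b>text</b> captured</p>"))  # -> I need this text captured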

0

In addition to Styvane's answer, here is another way to do it. If you need to extract a lot of text, have a look at selectolax, which is much faster than lxml.

Code and example in the online IDE:

from bs4 import BeautifulSoup

def clean_me(html):
    soup = BeautifulSoup(html, 'lxml')

    body = soup.body
    if body is None:
        return None

    # removing everything besides text
    for tag in body.select('script'):
        tag.decompose()
    for tag in body.select('style'):
        tag.decompose()

    plain_text = body.get_text(separator='\n').strip()
    print(plain_text)

clean_me(testhtml)
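
Since selectolax is only mentioned above, here is a rough sketch of the same cleanup using its HTMLParser API (assumes pip install selectolax; the helper name clean_me_fast is mine, and the method signatures are from memory, so check them against the selectolax docs):

from selectolax.parser import HTMLParser

def clean_me_fast(html):
    tree = HTMLParser(html)
    for node in tree.css('script, style'):
        node.decompose()  # drop scripts and stylesheets entirely
    if tree.body is None:
        return None
    return tree.body.text(separator='\n', strip=True)

print(clean_me_fast(testhtml))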
