Removing HTML tags from a text using regular expression in Python

2022-01-18 · python, regex, tags, html

I'm trying to look at an HTML file and remove all the tags from it so that only the text is left, but I'm having a problem with my regex. This is what I have so far.

import urllib.request, re

def test(url):
    html = str(urllib.request.urlopen(url).read())
    print(re.findall('<[\w/.\w]*>', html))

The HTML is a simple page with a few links and text, but my regex won't pick up the !DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" declaration or <a href="..."> tags. Can anyone explain what I need to change in my regex?
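For context, the character class in that pattern only allows word characters, slashes, and dots, so any tag containing spaces, quotes, or hyphens (the DOCTYPE declaration, an anchor with an href attribute) can never match. A minimal sketch illustrating the difference, using a looser placeholder pattern such as <[^>]+> purely for comparison (not a recommendation to parse HTML with regex):

import re

sample = '''<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<a href="https://example.com">a link</a> and some text <br/>'''

# The question's pattern: only word chars, '/' and '.' are allowed between the
# angle brackets, so tags containing spaces or quotes are skipped entirely.
print(re.findall(r'<[\w/.\w]*>', sample))   # ['</a>', '<br/>']

# A looser pattern that consumes anything up to the next '>'. It now picks up the
# DOCTYPE and the <a href="..."> tag, but it still breaks on '>' inside attribute
# values, comments, scripts, etc., which is why a real parser is the better tool.
print(re.findall(r'<[^>]+>', sample))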

Recommended Answer

Use BeautifulSoup. Use lxml. Do not use regular expressions to parse HTML.
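The answer doesn't actually show the BeautifulSoup route, so here is a minimal sketch of it, assuming BeautifulSoup 4 (bs4), requests, and a stand-in URL; it follows the question's goal of keeping only the text:

import requests
from bs4 import BeautifulSoup

# Stand-in URL for whatever page is being scraped.
url = "https://example.com/"
html = requests.get(url).text

soup = BeautifulSoup(html, "html.parser")

# Drop script and style elements so their contents don't leak into the text.
for tag in soup(["script", "style"]):
    tag.decompose()

# get_text() concatenates the text of every remaining node.
text = soup.get_text(separator=" ", strip=True)
print(text)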

Edit 2010-01-29: This would be a reasonable starting point for lxml:

from lxml.html import fromstring
from lxml.html.clean import Cleaner
import requests

url = "https://stackoverflow.com/questions/2165943/removing-html-tags-from-a-text-using-regular-expression-in-python"
html = requests.get(url).text

doc = fromstring(html)

# Tags to strip; their text content is pulled up into the parent element.
tags = ['h1', 'h2', 'h3', 'h4', 'h5', 'h6',
        'div', 'span',
        'img', 'area', 'map']
args = {'meta': False, 'safe_attrs_only': False, 'page_structure': False,
        'scripts': True, 'style': True, 'links': True, 'remove_tags': tags}
cleaner = Cleaner(**args)

path = '/html/body'
body = doc.xpath(path)[0]

print(cleaner.clean_html(body).text_content().encode('ascii', 'ignore').decode('ascii'))

You want the content, so presumably you don't want any javascript or CSS. Also, presumably you want only the content in the body and not HTML from the head, too. Read up on lxml.html.clean to see what you can easily strip out. Way smarter than regular expressions, no?

Also, watch out for unicode encoding problems. You can easily end up with HTML that you cannot print.
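As a rough illustration of that pitfall (not part of the original answer): printing text containing characters that your terminal's encoding cannot represent raises UnicodeEncodeError, and encoding with an error handler is one way to sidestep it. The sample string below is assumed for demonstration:

text = 'café, København, 東京'  # assumed sample with non-ASCII characters

# Printing can fail if sys.stdout's encoding (e.g. ascii or cp1252) cannot
# represent every character. One defensive option is an error handler:
print(text.encode('ascii', 'ignore').decode('ascii'))    # drops unencodable characters
print(text.encode('ascii', 'replace').decode('ascii'))   # replaces them with '?'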

2012-11-08: changed from using urllib2 to requests. Just use requests!
