Preventing spaCy from splitting paragraph numbers into their own sentences
Problem
I am using spaCy to segment text that uses paragraph numbering into sentences, for example:
text = '3. English law takes a dim view of stealing stuff from the shops. Some may argue that this is a pity.'
I am trying to force spaCy's sentence segmenter not to split 3. into a sentence of its own.
At the moment, the following code returns three separate sentences:
import spacy

nlp = spacy.load("en_core_web_sm")
text = """3. English law takes a dim view of stealing stuff from the shops. Some may argue that this is a pity."""
doc = nlp(text)
for sent in doc.sents:
    print("****", sent.text)
This returns:
**** 3.
**** English law takes a dim view of stealing stuff from the shops.
**** Some may argue that this is a pity.
I have been trying to stop this from happening by passing a custom rule into the pipeline before the parser:
if token.text == r'\d.':
    doc[token.i+1].is_sent_start = False
This seems to have no effect. Has anyone come across this problem before?
Solution
Something like this?
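As an aside, comparing token.text to the raw string r'\d.' with == tests for the literal two-character string, not a regex match, which is one reason the rule never fires. A working version of the same idea can be written as a custom pipeline component. The sketch below is my own (the component name no_split_after_para_number is made up, and it assumes the tokenizer splits "3." into the two tokens "3" and "."); it uses spaCy 3's @Language.component decorator with the rule-based sentencizer rather than the parser, so the override runs after boundaries are first set:

```python
import spacy
from spacy.language import Language

# Hypothetical component: a token preceded by "<number> ." must not
# start a new sentence. This is a simplification -- it would also
# suppress a genuine boundary after a sentence ending in a bare number.
@Language.component("no_split_after_para_number")
def no_split_after_para_number(doc):
    for token in doc:
        if token.i >= 2 and doc[token.i - 1].text == "." and doc[token.i - 2].like_num:
            token.is_sent_start = False
    return doc

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
nlp.add_pipe("no_split_after_para_number", after="sentencizer")

doc = nlp("3. English law takes a dim view of stealing stuff from the shops. "
          "Some may argue that this is a pity.")
for sent in doc.sents:
    print("****", sent.text)
```

With a statistical parser in the pipeline the same component would need to run before the parser sets its boundaries, since is_sent_start cannot be overwritten on a parsed doc.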
text = ["""3. English law takes a dim view of stealing stuff from the shops. Some may argue that this is a pity. Are you upto something?""",
        """4. It's hilarious and I think this can be more of a political moment. Don't you think so? Will Robots replace humans?"""]

for i in text:
    doc = nlp(i)
    span = doc[0:5]
    span.merge()
    for sent in doc.sents:
        print("****", sent.text)
    print("\n")
Output:
**** 3. English law takes a dim view of stealing stuff from the shops.
**** Some may argue that this is a pity.
**** Are you upto something?
**** 4. It's hilarious and I think this can be more of a political moment.
**** Don't you think so?
**** Will Robots replace humans?
Reference: span.merge()
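Note that Span.merge() was deprecated in spaCy 2.x and removed in 3.x; its replacement is the Doc.retokenize context manager. A rough spaCy 3 equivalent of the same trick, as my own sketch (using the rule-based sentencizer instead of en_core_web_sm, and merging only the number and its period rather than the first five tokens):

```python
import spacy

nlp = spacy.blank("en")
sentencizer = nlp.add_pipe("sentencizer")

text = ("3. English law takes a dim view of stealing stuff from the shops. "
        "Some may argue that this is a pity.")

# Tokenize only, merge "3" and "." into the single token "3.",
# then run sentence segmentation on the merged doc. The sentencizer
# no longer sees a bare "." token there, so no boundary is inserted.
doc = nlp.make_doc(text)
with doc.retokenize() as retokenizer:
    retokenizer.merge(doc[0:2])
doc = sentencizer(doc)

for sent in doc.sents:
    print("****", sent.text)
```

Merging only the number and the period avoids gluing words of the real first sentence into one token, which is a side effect of merging doc[0:5] as in the answer above.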