How to cluster similar sentences using BERT

30
For ELMo, FastText and Word2Vec, I average the word embeddings within a sentence and use HDBSCAN/KMeans clustering to group similar sentences.
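Roughly, the current pipeline looks like this (a minimal sketch assuming pre-trained gensim word vectors; the file name and cluster count are just placeholders):

import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import KMeans

# Hypothetical pre-trained vectors (Word2Vec/FastText in word2vec format)
wv = KeyedVectors.load_word2vec_format("word_vectors.bin", binary=True)

def sentence_vector(sentence):
    # Average the vectors of all in-vocabulary tokens
    tokens = [t for t in sentence.lower().split() if t in wv]
    if not tokens:
        return np.zeros(wv.vector_size)
    return np.mean([wv[t] for t in tokens], axis=0)

sentences = ["A man is eating food.", "A man is eating bread.", "A monkey is playing drums."]
X = np.vstack([sentence_vector(s) for s in sentences])
labels = KMeans(n_clusters=2).fit_predict(X)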
A good example of this kind of implementation can be seen in this short article: http://ai.intelligentonlinetools.com/ml/text-clustering-word-embedding-machine-learning/ I would like to do the same thing using BERT (using the BERT python package from Hugging Face), but I am not familiar with how to extract the raw word/sentence vectors in order to feed them into a clustering algorithm. I know BERT can output sentence representations - so how would I actually extract the raw vectors from a sentence?
Any information would be helpful.

1
Don't use BERT for this; it was never trained with a semantic-similarity objective. - jamix
6 Answers

22

You can use Sentence Transformers to generate the sentence embeddings. These embeddings are much more meaningful than the ones obtained from bert-as-service, as they have been fine-tuned so that semantically similar sentences get a higher similarity score. If the number of sentences to be clustered is in the millions or more, you can use a FAISS-based clustering algorithm, since traditional K-means-style clustering takes quadratic time.
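A minimal sketch of this approach, assuming the sentence-transformers and faiss packages are installed (the model name and cluster count are only examples):

import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

sentences = ["How do I learn Python?",
             "What is the best way to learn Python?",
             "How do I bake bread?"]

# Model fine-tuned so that semantically similar sentences get close embeddings
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(sentences, convert_to_numpy=True).astype('float32')

# FAISS k-means scales to millions of vectors
kmeans = faiss.Kmeans(d=embeddings.shape[1], k=2, niter=20, seed=42)
kmeans.train(embeddings)
_, cluster_ids = kmeans.index.search(embeddings, 1)  # nearest centroid per sentence
print(cluster_ids.ravel())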


2
What puzzles me is why so many people try to use BERT embeddings for semantic similarity. BERT was never trained with a semantic-similarity objective. - jamix
5
Hey @jamix. Note that we are not using the vanilla BERT embeddings directly here. We have modified the downstream task using a siamese-like network that generates enriched sentence embeddings. Please read the following paper: https://arxiv.org/abs/1908.10084. - Subham Kumar
3
Thanks! In my comment I'm actually agreeing with your approach. My gripe is with the original question's use of the vanilla BERT model. - jamix

12
You need to generate BERT embeddings for the sentences first. bert-as-service provides a very easy way to generate embeddings for sentences. Here is how you can generate BERT vectors for the list of sentences you need to cluster. It is explained very well in the bert-as-service repository: https://github.com/hanxiao/bert-as-service

Installation:
pip install bert-serving-server  # server
pip install bert-serving-client  # client, independent of `bert-serving-server`

Download one of the pre-trained models available at https://github.com/google-research/bert

Start the service:

bert-serving-start -model_dir /your_model_directory/ -num_worker=4 

Generate the vectors for your list of sentences:

from bert_serving.client import BertClient
bc = BertClient()
vectors=bc.encode(your_list_of_sentences)

This will give you a list of vectors. You can write them to a csv file and use any clustering algorithm now that the sentences have been reduced to numbers.
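For example, a minimal sketch of feeding those vectors into scikit-learn's KMeans (the number of clusters is just an illustration):

import numpy as np
from sklearn.cluster import KMeans

vectors = np.asarray(vectors)  # shape: (num_sentences, 768) for a BERT-base model
kmeans = KMeans(n_clusters=10, random_state=0).fit(vectors)

for sentence, label in zip(your_list_of_sentences, kmeans.labels_):
    print(label, sentence)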


1
Great solution, worked for my 42,000 labels. - Gonzalo Garcia
1
BERT is not optimized for generating sentence vectors or for assessing similarity with metrics such as cosine similarity. Even though it may work, the results can be misleading. See this discussion: https://github.com/UKPLab/sentence-transformers/issues/80 - Cristian Arteaga
That's fine as long as you use a fine-tuned BERT made specifically for this, e.g. Sentence-BERT. - Jules G.M.

5

As Subham Kumar mentioned, one can use this Python 3 library to compute sentence similarity: https://github.com/UKPLab/sentence-transformers

The library has a few code examples for performing clustering:

fast_clustering.py:

"""
This is a more complex example on performing clustering on large scale dataset.

This examples find in a large set of sentences local communities, i.e., groups of sentences that are highly
similar. You can freely configure the threshold what is considered as similar. A high threshold will
only find extremely similar sentences, a lower threshold will find more sentence that are less similar.

A second parameter is 'min_community_size': Only communities with at least a certain number of sentences will be returned.

The method for finding the communities is extremely fast, for clustering 50k sentences it requires only 5 seconds (plus embedding comuptation).

In this example, we download a large set of questions from Quora and then find similar questions in this set.
"""
from sentence_transformers import SentenceTransformer, util
import os
import csv
import time


# Model for computing sentence embeddings. We use one trained for similar questions detection
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# We download the Quora Duplicate Questions Dataset (https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs)
# and find similar questions in it
url = "http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv"
dataset_path = "quora_duplicate_questions.tsv"
max_corpus_size = 50000 # We limit our corpus to only the first 50k questions


# Check if the dataset exists. If not, download and extract
# Download dataset if needed
if not os.path.exists(dataset_path):
    print("Download dataset")
    util.http_get(url, dataset_path)

# Get all unique sentences from the file
corpus_sentences = set()
with open(dataset_path, encoding='utf8') as fIn:
    reader = csv.DictReader(fIn, delimiter='\t', quoting=csv.QUOTE_MINIMAL)
    for row in reader:
        corpus_sentences.add(row['question1'])
        corpus_sentences.add(row['question2'])
        if len(corpus_sentences) >= max_corpus_size:
            break

corpus_sentences = list(corpus_sentences)
print("Encode the corpus. This might take a while")
corpus_embeddings = model.encode(corpus_sentences, batch_size=64, show_progress_bar=True, convert_to_tensor=True)


print("Start clustering")
start_time = time.time()

# Two parameters to tune:
# min_community_size: Only consider clusters that have at least 25 elements
# threshold: Consider sentence pairs with a cosine-similarity larger than threshold as similar
clusters = util.community_detection(corpus_embeddings, min_community_size=25, threshold=0.75)

print("Clustering done after {:.2f} sec".format(time.time() - start_time))

#Print for all clusters the top 3 and bottom 3 elements
for i, cluster in enumerate(clusters):
    print("\nCluster {}, #{} Elements ".format(i+1, len(cluster)))
    for sentence_id in cluster[0:3]:
        print("\t", corpus_sentences[sentence_id])
    print("\t", "...")
    for sentence_id in cluster[-3:]:
        print("\t", corpus_sentences[sentence_id])

kmeans.py:

"""
This is a simple application for sentence embeddings: clustering

Sentences are mapped to sentence embeddings and then k-mean clustering is applied.
"""
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

embedder = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Corpus with example sentences
corpus = ['A man is eating food.',
          'A man is eating a piece of bread.',
          'A man is eating pasta.',
          'The girl is carrying a baby.',
          'The baby is carried by the woman',
          'A man is riding a horse.',
          'A man is riding a white horse on an enclosed ground.',
          'A monkey is playing drums.',
          'Someone in a gorilla costume is playing a set of drums.',
          'A cheetah is running behind its prey.',
          'A cheetah chases prey on across a field.'
          ]
corpus_embeddings = embedder.encode(corpus)

# Perform k-means clustering
num_clusters = 5
clustering_model = KMeans(n_clusters=num_clusters)
clustering_model.fit(corpus_embeddings)
cluster_assignment = clustering_model.labels_

clustered_sentences = [[] for i in range(num_clusters)]
for sentence_id, cluster_id in enumerate(cluster_assignment):
    clustered_sentences[cluster_id].append(corpus[sentence_id])

for i, cluster in enumerate(clustered_sentences):
    print("Cluster ", i+1)
    print(cluster)
    print("")

agglomerative.py:

"""
This is a simple application for sentence embeddings: clustering

Sentences are mapped to sentence embeddings and then agglomerative clustering with a threshold is applied.
"""
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
import numpy as np

embedder = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Corpus with example sentences
corpus = ['A man is eating food.',
          'A man is eating a piece of bread.',
          'A man is eating pasta.',
          'The girl is carrying a baby.',
          'The baby is carried by the woman',
          'A man is riding a horse.',
          'A man is riding a white horse on an enclosed ground.',
          'A monkey is playing drums.',
          'Someone in a gorilla costume is playing a set of drums.',
          'A cheetah is running behind its prey.',
          'A cheetah chases prey on across a field.'
          ]
corpus_embeddings = embedder.encode(corpus)

# Normalize the embeddings to unit length
corpus_embeddings = corpus_embeddings /  np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)

# Perform agglomerative clustering
clustering_model = AgglomerativeClustering(n_clusters=None, distance_threshold=1.5) #, affinity='cosine', linkage='average', distance_threshold=0.4)
clustering_model.fit(corpus_embeddings)
cluster_assignment = clustering_model.labels_

clustered_sentences = {}
for sentence_id, cluster_id in enumerate(cluster_assignment):
    if cluster_id not in clustered_sentences:
        clustered_sentences[cluster_id] = []

    clustered_sentences[cluster_id].append(corpus[sentence_id])

for i, cluster in clustered_sentences.items():
    print("Cluster ", i+1)
    print(cluster)
    print("")

1
I have been using fast clustering to cluster news articles, but I cannot figure out an appropriate threshold for it. Could you tell me on what basis you decided on this threshold (0.75)? Thanks a lot. - Ibtsam Ch

3
BERT adds a special [CLS] token at the beginning of each sample/sentence. After fine-tuning on a downstream task, the embedding of this [CLS] token (the pooled_output, as it is called in the Hugging Face implementation) represents the sentence embedding.
But if you have no labels, you cannot fine-tune, so you cannot use the pooled_output as a sentence embedding. Instead, you should use the word embeddings in encoded_layers, which is a tensor with dimensions (12, seq_len, 768). In this tensor you have the embeddings (of dimension 768) from each of BERT's 12 layers. To get word embeddings you can use the output of the last layer, or concatenate or sum the output of the last 4 layers, and so on.
Here is the script for extracting the features: https://github.com/ethanjperez/pytorch-pretrained-BERT/blob/master/examples/extract_features.py
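If you prefer the current Hugging Face transformers API over the linked script, here is a rough sketch of the same idea (mean-pooling the last layer's token embeddings into a sentence vector; this is an illustration, not the script itself):

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

inputs = tokenizer("A man is eating food.", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.last_hidden_state has shape (1, seq_len, 768)
sentence_embedding = outputs.last_hidden_state[0].mean(dim=0)  # simple mean pooling

# Alternative: sum the token embeddings of the last 4 layers
# (hidden_states is a tuple of 13 tensors: embedding layer + 12 encoder layers)
last4_sum = torch.stack(outputs.hidden_states[-4:]).sum(dim=0)[0]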

BERT is pre-trained on a next-sentence prediction task, so I would think the [CLS] token already encodes the sentence. However, I'd rather go with the solution proposed by @Palak below. - glicerico

0

Not sure if you still need it, but a recent paper describes how to use document embeddings to cluster documents and to extract words from each cluster to represent a topic. Here is the link: https://arxiv.org/pdf/2008.09470.pdf, https://github.com/ddangelov/Top2Vec

Inspired by the paper above, another algorithm for topic modelling that uses BERT to generate sentence embeddings is mentioned here: https://towardsdatascience.com/topic-modeling-with-bert-779f7db187e6, https://github.com/MaartenGr/BERTopic

Both of the libraries above provide an end-to-end solution for extracting topics from a corpus. But if you are only interested in generating sentence embeddings, have a look at Gensim's doc2vec (https://radimrehurek.com/gensim/models/doc2vec.html) or at sentence-transformers (https://github.com/UKPLab/sentence-transformers), as mentioned in the other answers. If you go with sentence-transformers, it is suggested that you train the model on your domain-specific corpus to get good results.
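If you go the doc2vec route, here is a minimal sketch (the corpus and hyperparameters are purely illustrative):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = ["A man is eating food.", "A man is eating bread.", "A monkey is playing drums."]
documents = [TaggedDocument(words=s.lower().split(), tags=[i]) for i, s in enumerate(corpus)]

model = Doc2Vec(documents, vector_size=100, window=5, min_count=1, epochs=40)

# Embedding for a new, unseen sentence
vector = model.infer_vector("A man is eating pasta.".lower().split())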

