Enhancing the Accuracy of RAG Applications with Knowledge Graphs
2024-05-12 13:28
Author: lucas大叔 · Original: https://zhuanlan.zhihu.com/p/692595027
Graph retrieval-augmented generation (GraphRAG) leverages the structured nature of graph databases, which organize data as nodes and relationships, to add depth and relational context to the retrieved information. It is an effective complement to traditional vector retrieval.
Graphs excel at representing and storing heterogeneous, interconnected information in a structured way, easily capturing complex relationships and attributes across different data types. Vector databases, by contrast, tend to struggle with such structured information, because their strength lies in handling unstructured data through high-dimensional vectors. In a RAG application, structured graph data can be combined with vector search over unstructured text so that the two approaches complement each other.
Although the concept of a knowledge graph is now fairly familiar, building one is still challenging. It involves collecting and structuring data and requires a deep understanding of both the domain and graph modeling. To simplify the construction process, we experimented with using an LLM. With their deep understanding of language and context, LLMs can automate significant parts of knowledge graph creation: by analyzing text data, they can identify entities, understand the relationships between them, and suggest how best to represent them in a graph structure. As a result of these experiments, we added the first version of the graph construction module to LangChain, which we demonstrate in this post. The code is available on GitHub.
Setting Up the Neo4j Environment
First, create a Neo4j instance. The easiest way is to start a free instance on Neo4j Aura, which offers cloud-hosted Neo4j databases. Alternatively, you can download the Neo4j Desktop application and create a local database instance.
os.environ["OPENAI_API_KEY"] = "sk-"
os.environ["NEO4J_URI"] = "bolt://localhost:7687"
os.environ["NEO4J_USERNAME"] = "neo4j"
os.environ["NEO4J_PASSWORD"] = "password"
graph = Neo4jGraph()
Data Ingestion
This demo uses the Wikipedia page for Elizabeth I. With the LangChain loaders we can seamlessly fetch and split documents from Wikipedia.
from langchain_community.document_loaders import WikipediaLoader
from langchain_text_splitters import TokenTextSplitter

# Read the Wikipedia article
raw_documents = WikipediaLoader(query="Elizabeth I").load()
# Define the chunking strategy
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = text_splitter.split_documents(raw_documents[:3])
Now we can build the graph from the split documents. For this we implemented the LLMGraphTransformer module, which greatly simplifies constructing and storing a knowledge graph in a graph database.
The LLMGraphTransformer class uses an LLM to convert documents into graph documents. It lets you constrain which node and relationship types appear in the output graph; it does not support extracting properties for nodes or relationships. Its parameters are:
llm (BaseLanguageModel): an instance of a language model that supports structured output
allowed_nodes (List[str], optional): restricts which node types are included in the graph; defaults to an empty list, which allows all node types
allowed_relationships (List[str], optional): restricts which relationship types are included in the graph; defaults to an empty list, which allows all relationship types
prompt (Optional[ChatPromptTemplate], optional): a prompt with additional instructions passed to the LLM
strict_mode (bool, optional): determines whether the transformation should apply filtering to strictly comply with `allowed_nodes` and `allowed_relationships`; defaults to True
In this example, allowed_nodes and allowed_relationships are left at their defaults, so all node and relationship types are permitted in the graph (a constrained variant is sketched after the explanation below).
from langchain_openai import ChatOpenAI
from langchain_experimental.graph_transformers import LLMGraphTransformer

llm = ChatOpenAI(temperature=0, model_name="gpt-4-0125-preview")
llm_transformer = LLMGraphTransformer(llm=llm)

# Extract graph data
graph_documents = llm_transformer.convert_to_graph_documents(documents)
# Store to Neo4j
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True
)
You can define which LLM the knowledge graph generation chain uses. Currently, only function-calling models from OpenAI and Mistral are supported; in this example we use the latest GPT-4. Note that the quality of the generated graph depends heavily on the model you choose. The LLM graph transformer returns graph documents, which are imported into Neo4j via the add_graph_documents method. The baseEntityLabel parameter assigns an additional __Entity__ label to every node, improving indexing and query performance. The include_source parameter links each node to its originating document, which helps with data traceability and contextual understanding.
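As an aside, if you want to restrict the extracted schema, you can pass explicit type lists to the transformer. This is a minimal sketch; the node and relationship type names below are illustrative assumptions, not values used in this article:
# Hedged sketch: constrain extraction to an illustrative schema (type names are assumptions)
llm_transformer_filtered = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Organization", "Location"],
    allowed_relationships=["SPOUSE", "PARENT", "RULER_OF"],
)
filtered_graph_documents = llm_transformer_filtered.convert_to_graph_documents(documents)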
You can inspect the generated graph in the Neo4j Browser.
As you can see, each node carries the __Entity__ label in addition to its own node type.
Nodes are also connected to their source documents through MENTIONS relationships.
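If you prefer to verify this from code rather than the Browser, a simple Cypher query, included here only as an exploration sketch and not part of the original walkthrough, can list a few of these links:
# List a few Document -> entity MENTIONS links created by include_source=True
print(graph.query(
    """MATCH (d:Document)-[:MENTIONS]->(e:__Entity__)
    RETURN labels(e) AS entity_labels, e.id AS entity_id
    LIMIT 5"""
))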
Hybrid Retrieval for RAG
With the graph in place, we combine hybrid retrieval over vector and keyword indexes with graph retrieval for RAG.
The retrieval process starts with a user question, which is first passed to the RAG retriever. The retriever runs keyword and vector searches over the unstructured text data and combines the results with the information collected from the knowledge graph. Because Neo4j supports both keyword and vector indexes, all three retrieval methods can be implemented with a single database. The data gathered from these sources is then passed to the LLM to generate the final answer.
Unstructured Data Retriever
You can add keyword and vector retrieval over documents with the Neo4jVector.from_existing_graph method. It configures both keyword and vector search indexes for a hybrid search approach, targeting nodes labeled Document. In addition, if the text embedding values are missing, it computes them and creates the vector index.
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

vector_index = Neo4jVector.from_existing_graph(
    OpenAIEmbeddings(),
    search_type="hybrid",
    node_label="Document",
    text_node_properties=["text"],
    embedding_node_property="embedding"
)
You can see that the Document nodes originally had no embedding property; after creating the unstructured data retriever, an embedding is computed from each Document node's text property.
The vector index can then be queried with the similarity_search method.
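A minimal call might look like this (the question text is just an example):
# Hybrid (keyword + vector) search over the Document nodes
docs = vector_index.similarity_search("Who was Elizabeth I's father?")
print(docs[0].page_content)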
Graph Retriever
Configuring graph retrieval, on the other hand, is more involved, but it offers more freedom. This example uses a full-text index to identify relevant nodes and then returns their first-order neighborhood.
The graph retriever starts by identifying the relevant entities in the input. For simplicity, we let the LLM identify generic entities such as people, organizations, and locations, using LCEL together with the newly added with_structured_output method for the extraction.
from typing import List

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field

# Extract entities from text
class Entities(BaseModel):
    """Identifying information about entities."""
    names: List[str] = Field(
        ...,
        description="All the person, organization, or business entities that "
        "appear in the text",
    )

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are extracting organization and person entities from the text.",
        ),
        (
            "human",
            "Use the given format to extract information from the following "
            "input: {question}",
        ),
    ]
)

entity_chain = prompt | llm.with_structured_output(Entities)
Let's test it:
entity_chain.invoke({"question": "Where was Amelia Earhart born?"}).names
# ['Amelia Earhart']
Now that we can detect entities in the question, the next step is to map them to the knowledge graph using a full-text index. First, we need to define the full-text index and a function that generates full-text queries tolerant of a few typos.
from langchain_community.vectorstores.neo4j_vector import remove_lucene_chars

graph.query(
    "CREATE FULLTEXT INDEX entity IF NOT EXISTS FOR (e:__Entity__) ON EACH [e.id]")

def generate_full_text_query(input: str) -> str:
    """
    Generate a full-text search query for a given input string.

    This function constructs a query string suitable for a full-text search.
    It processes the input string by splitting it into words and appending a
    similarity threshold (~2 changed characters) to each word, then combines
    them using the AND operator. Useful for mapping entities from user questions
    to database values, and allows for some misspellings.
    """
    full_text_query = ""
    words = [el for el in remove_lucene_chars(input).split() if el]
    for word in words[:-1]:
        full_text_query += f" {word}~2 AND"
    full_text_query += f" {words[-1]}~2"
    return full_text_query.strip()
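To illustrate what the function produces (the input string is just an example):
print(generate_full_text_query("Elizabeth I"))
# Elizabeth~2 AND I~2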
Now let's put the pieces together into the structured retriever that performs the graph retrieval.
# Fulltext index query
def structured_retriever(question: str) -> str:
    """
    Collects the neighborhood of entities mentioned
    in the question
    """
    result = ""
    entities = entity_chain.invoke({"question": question})
    for entity in entities.names:
        response = graph.query(
            """CALL db.index.fulltext.queryNodes('entity', $query, {limit:2})
            YIELD node,score
            CALL {
              WITH node
              MATCH (node)-[r:!MENTIONS]->(neighbor)
              RETURN node.id + ' - ' + type(r) + ' -> ' + neighbor.id AS output
              UNION
              WITH node
              MATCH (node)<-[r:!MENTIONS]-(neighbor)
              RETURN neighbor.id + ' - ' + type(r) + ' -> ' + node.id AS output
            }
            RETURN output LIMIT 50
            """,
            {"query": generate_full_text_query(entity)},
        )
        result += "\n".join([el['output'] for el in response])
    return result
The structured_retriever function starts by detecting entities in the user question, then iterates over the detected entities and uses a Cypher template to retrieve the first-order neighborhood of the relevant nodes.
print(structured_retriever("Who is Elizabeth I?"))
# Elizabeth I - BORN_ON -> 7 September 1533
# Elizabeth I - DIED_ON -> 24 March 1603
# Elizabeth I - TITLE_HELD_FROM -> Queen Of England And Ireland
# Elizabeth I - TITLE_HELD_UNTIL -> 17 November 1558
# Elizabeth I - MEMBER_OF -> House Of Tudor
# Elizabeth I - CHILD_OF -> Henry Viii
# and more...
The Final Retriever
As mentioned at the beginning, we combine the unstructured and graph retrievers to create the final context that is passed to the LLM.
def retriever(question: str):
    print(f"Search query: {question}")
    structured_data = structured_retriever(question)
    unstructured_data = [el.page_content for el in vector_index.similarity_search(question)]
    final_data = f"""Structured data:
{structured_data}
Unstructured data:
{"#Document ".join(unstructured_data)}
    """
    return final_data
Since this is plain Python, we can simply use f-string formatting to concatenate the outputs.
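As a quick sanity check, you can call the combined retriever directly; the question text is just an example:
print(retriever("Who is Elizabeth I?"))
# Search query: Who is Elizabeth I?
# ... followed by the combined structured and unstructured context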
Defining the RAG Chain
We have now implemented the retrieval component of the RAG pipeline. Next, we introduce query rewriting, which rephrases the current question based on the conversation history.
from typing import List, Tuple

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableBranch, RunnableLambda, RunnablePassthrough

# Condense a chat history and follow-up question into a standalone question
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question,
in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""  # noqa: E501

CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)

def _format_chat_history(chat_history: List[Tuple[str, str]]) -> List:
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer

_search_query = RunnableBranch(
    # If input includes chat_history, we condense it with the follow-up question
    (
        RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
            run_name="HasChatHistoryCheck"
        ),  # Condense follow-up question and chat into a standalone question
        RunnablePassthrough.assign(
            chat_history=lambda x: _format_chat_history(x["chat_history"])
        )
        | CONDENSE_QUESTION_PROMPT
        | ChatOpenAI(temperature=0)
        | StrOutputParser(),
    ),
    # Else, we have no chat history, so just pass through the question
    RunnableLambda(lambda x: x["question"]),
)
Next, we add a prompt that uses the context provided by the integrated hybrid retriever to generate the response, completing the RAG chain.
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
chain = (
RunnableParallel(
{
"context": _search_query | retriever,
"question": RunnablePassthrough(),
}
)
| prompt
| llm
| StrOutputParser()
)
Finally, let's test the hybrid RAG implementation.
chain.invoke({"question": "Which house did Elizabeth I belong to?"})
# Search query: Which house did Elizabeth I belong to?
# 'Elizabeth I belonged to the House of Tudor.'
Because we implemented query rewriting earlier, the RAG chain can operate in a conversational setting that allows follow-up questions. Since we rely on vector and keyword search, follow-up questions must be rewritten to optimize the search process.
chain.invoke(
    {
        "question": "When was she born?",
        "chat_history": [("Which house did Elizabeth I belong to?", "House Of Tudor")],
    }
)
# Search query: When was Elizabeth I born?
# 'Elizabeth I was born on 7 September 1533.'
Notice that "When was she born?" is first rewritten to "When was Elizabeth I born?", and the rewritten query is then used to retrieve the relevant context and answer the question.
References
Enhancing the Accuracy of RAG Applications With Knowledge Graphs
