Using create_csv_agent with a HuggingFace model: Could not parse LLM output

3
I'm using LangChain and running create_csv_agent on a small CSV dataset to see how well google/flan-t5-xxl can answer queries over tabular data. Currently I'm hitting "OutputParserException: Could not parse LLM output: `0`".
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
<ipython-input-13-f86336065d8e> in <cell line: 1>()
----> 1 agent.run('how many rows are there?')

7 frames
/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py in run(self, callbacks, tags, metadata, *args, **kwargs)
    473             if len(args) != 1:
    474                 raise ValueError("`run` supports only one positional argument.")
--> 475             return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    476                 _output_key
    477             ]

/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
    280         except (KeyboardInterrupt, Exception) as e:
    281             run_manager.on_chain_error(e)
--> 282             raise e
    283         run_manager.on_chain_end(outputs)
    284         final_outputs: Dict[str, Any] = self.prep_outputs(

/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
    274         try:
    275             outputs = (
--> 276                 self._call(inputs, run_manager=run_manager)
    277                 if new_arg_supported
    278                 else self._call(inputs)

/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py in _call(self, inputs, run_manager)
   1034         # We now enter the agent loop (until it returns something).
   1035         while self._should_continue(iterations, time_elapsed):
-> 1036             next_step_output = self._take_next_step(
   1037                 name_to_tool_map,
   1038                 color_mapping,

/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    842                 raise_error = False
    843             if raise_error:
--> 844                 raise e
    845             text = str(e)
    846             if isinstance(self.handle_parsing_errors, bool):

/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
    831 
    832             # Call the LLM to see what to do.
--> 833             output = self.agent.plan(
    834                 intermediate_steps,
    835                 callbacks=run_manager.get_child() if run_manager else None,

/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py in plan(self, intermediate_steps, callbacks, **kwargs)
    455         full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
    456         full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 457         return self.output_parser.parse(full_output)
    458 
    459     async def aplan(

/usr/local/lib/python3.10/dist-packages/langchain/agents/mrkl/output_parser.py in parse(self, text)
     50 
     51         if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
---> 52             raise OutputParserException(
     53                 f"Could not parse LLM output: `{text}`",
     54                 observation=MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,

OutputParserException: Could not parse LLM output: `0`
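For reference, the failing check (from langchain/agents/mrkl/output_parser.py, visible in the traceback) can be reproduced with the stdlib alone: flan-t5's bare answer `0` simply contains no `Action:` line for the regex to match.

```python
import re

# Pattern copied from the traceback (mrkl/output_parser.py)
pattern = r"Action\s*\d*\s*:[\s]*(.*?)"

# flan-t5 answered with just "0" -- no Action line, so the search fails
print(re.search(pattern, "0", re.DOTALL))  # None -> OutputParserException

# A ReAct-formatted reply would pass the check
react = "Action: python_repl_ast\nAction Input: df.shape[0]"
print(re.search(pattern, react, re.DOTALL) is not None)  # True
```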

I'm not sure why this happens, since the prompt template seems to convey the role the model is supposed to play well enough. Here is my code:
import os
from langchain import PromptTemplate, HuggingFaceHub, LLMChain, OpenAI, SQLDatabase, HuggingFacePipeline
from langchain.agents import create_csv_agent
from langchain.chains.sql_database.base import SQLDatabaseChain
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig, pipeline
import transformers

model_id = 'google/flan-t5-xxl'
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, config=config)

# Wrap the seq2seq model in a text2text-generation pipeline for LangChain
pipe = pipeline('text2text-generation',
                model=model,
                tokenizer=tokenizer,
                max_length=1024)
local_llm = HuggingFacePipeline(pipeline=pipe)

agent = create_csv_agent(llm=local_llm, path="dummy_data.csv", verbose=True)
agent.run('how many unique status are there?')

I've experimented with lighter versions of Flan-T5 and with OpenAI. With OpenAI, however, I kept hitting rate limits even when running a single query, and there isn't much documentation for create_csv_agent with anything other than OpenAI.
2 Answers

2
The proper fix for this problem is to write your own custom output parser.
Since you are using an agent, the argument handle_parsing_errors=True has no effect here.
Another option is to chain a second LLM to parse this output.
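A minimal sketch of the logic such a custom parser could use (plain Python for illustration, not LangChain's actual classes; lenient_parse is a hypothetical name): when the Action:/Action Input: pair is missing, treat the raw text as a final answer instead of raising.

```python
import re

def lenient_parse(text: str) -> dict:
    """Parse ReAct-style agent output; fall back to treating the raw
    text as the final answer instead of raising when no Action is found."""
    match = re.search(
        r"Action\s*\d*\s*:\s*(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:\s*(.*)",
        text, re.DOTALL,
    )
    if match:
        return {"action": match.group(1).strip(),
                "input": match.group(2).strip()}
    # No Action line: assume the model answered directly (e.g. flan-t5's `0`)
    return {"final_answer": text.strip()}

print(lenient_parse("0"))  # {'final_answer': '0'}
print(lenient_parse("Action: python_repl_ast\nAction Input: len(df)"))
```

In LangChain you would wrap this logic in a subclass of the agent's output parser and return an AgentFinish for the fallback branch rather than a dict.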
A temporary workaround:
try:
    response = agent.run("how many unique status are there?")
except Exception as e:
    response = str(e)
    if response.startswith("Could not parse LLM output: `"):
        response = response.removeprefix("Could not parse LLM output: `").removesuffix("`")
print(response)

0
The best way to resolve this error is to pass model kwargs to the model, like this:

Set up the LLM:

llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", huggingfacehub_api_token='**************', model_kwargs={"temperature": 0.1, "max_length": 512})

