Building good agents

There is a world of difference between building an agent that works and one that does not. How can we build agents that fall into the former category? In this guide, we will go over best practices for building agents.

If you are new to building agents, make sure to first read the intro to agents and the guided tour of smolagents.

The best agentic systems are the simplest: simplify the workflow as much as you can

Giving an LLM some agency in your workflow introduces some risk of errors.

Well-designed agentic systems should have good error logging and retry mechanisms anyway, so the LLM engine has a chance to self-correct its mistakes. But to reduce the risk of LLM errors as much as possible, you should simplify your workflow!

Let's revisit the example from the intro to agents: a bot that answers user queries for a surf trip company. Instead of letting the agent make two separate calls, one to a "travel distance API" and one to a "weather API", every time it is asked about a new surf spot, you could just use one unified tool, "return_spot_information", a function that calls both APIs at once and returns their concatenated outputs to the user.

This will reduce costs, latency, and error risk!
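
As a hypothetical sketch, such a unified tool could look like the following; the two helper functions are dummy stand-ins for the "travel distance API" and "weather API" calls.

from smolagents import tool

def get_travel_distance(spot_name):
    # Dummy stand-in for the "travel distance API"
    return 42.0

def get_weather_forecast(spot_name):
    # Dummy stand-in for the "weather API"
    return "sunny, 1.5 m waves"

@tool
def return_spot_information(spot_name: str) -> str:
    """
    Returns the travel distance and the weather forecast for a surf spot, combined in one report.

    Args:
        spot_name: the name of the surf spot, like "Anchor Point, Taghazout, Morocco".
    """
    distance_km = get_travel_distance(spot_name)
    weather = get_weather_forecast(spot_name)
    return f"{spot_name}: {distance_km} km away, forecast: {weather}."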

The main guideline is: reduce the number of LLM calls as much as possible.

This leads to a few takeaways:

  • Whenever possible, group two tools into one, like in our example of the two APIs.
  • Whenever possible, logic should be based on deterministic functions rather than agentic decisions (sketched below).
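
As a hypothetical sketch of that second point, reusing the return_spot_information sketch above and the model id used later in this guide: when the list of spots to compare is already known, loop over it in plain Python, and only hand the part that genuinely needs reasoning to the agent.

from smolagents import CodeAgent, InferenceClientModel

# Deterministic part: the spots to compare are known, so no LLM call is needed here.
spots = ["Anchor Point, Taghazout, Morocco", "Hossegor, France"]
reports = [return_spot_information(spot_name=spot) for spot in spots]

# Agentic part: only the comparison, which actually needs reasoning, goes to the LLM.
agent = CodeAgent(tools=[], model=InferenceClientModel(model_id="meta-llama/Llama-3.3-70B-Instruct"))
agent.run("Given these surf spot reports, which spot should a beginner pick?\n" + "\n".join(reports))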

Improve the information flow to the LLM engine

Remember that your LLM engine is like an intelligent robot trapped in a room, whose only communication with the outside world is notes passed under the door.

It will not know about anything that happened unless you explicitly put it into its prompt.

So start by making your task very clear! Since an agent is powered by an LLM, minor variations in your task formulation might yield completely different results.

Then, improve the information flow towards your agent in tool use.

Particular guidelines to follow:

  • Each tool should log (just use print statements inside the tool's forward method) everything that could be useful for the LLM engine (sketched below).
    • In particular, logging details on tool execution errors would help a lot!
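
The forward method mentioned above belongs to class-based tools. Here is a minimal, hypothetical sketch of what such logging can look like (the weather example that follows uses the @tool decorator instead):

from smolagents import Tool

class SpotDistanceTool(Tool):
    name = "get_spot_distance"
    description = "Returns the travel distance in km to a surf spot."
    inputs = {"spot_name": {"type": "string", "description": "Name of the surf spot."}}
    output_type = "string"

    def forward(self, spot_name: str) -> str:
        # These prints end up in the agent's observations, so the LLM can see them.
        print(f"Looking up travel distance for spot: {spot_name!r}")
        distance_km = 42.0  # dummy value standing in for a real API call
        print(f"Distance API returned: {distance_km} km")
        return f"{spot_name} is {distance_km} km away."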

For example, here is a tool that retrieves weather data based on a location and a date-time.

First, here is a poor version:

from datetime import datetime
from smolagents import tool

def get_weather_report_at_coordinates(coordinates, date_time):
    # Dummy function, returns a list of [temperature in °C, risk of rain on a scale 0-1, wave height in m]
    return [28.0, 0.35, 0.85]

def convert_location_to_coordinates(location):
    # Returns dummy coordinates
    return [3.3, -42.0]

@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for.
        date_time: the date and time for which you want the report.
    """
    lon, lat = convert_location_to_coordinates(location)
    date_time = datetime.strptime(date_time, '%m/%d/%y %H:%M:%S')  # expected format only exists in the code, never documented for the caller
    return str(get_weather_report_at_coordinates((lon, lat), date_time))

Why is it bad?

  • There is no precision about the format that should be used for `date_time`.
  • There is no detail on how the location should be specified.
  • There is no logging mechanism trying to make explicit failure cases, like the location not being in a proper format or the date_time not being properly formatted.
  • The output format is hard to understand.

If the tool call fails, the error trace logged in memory can help the LLM reverse-engineer the tool to fix its errors. But why leave it with so much heavy lifting to do?

A better way to build this tool would have been the following:

@tool
def get_weather_api(location: str, date_time: str) -> str:
    """
    Returns the weather report.

    Args:
        location: the name of the place that you want the weather for. Should be a place name, followed by possibly a city name, then a country, like "Anchor Point, Taghazout, Morocco".
        date_time: the date and time for which you want the report, formatted as '%m/%d/%y %H:%M:%S'.
    """
    lon, lat = convert_location_to_coordinates(location)
    try:
        date_time = datetime.strptime(date_time, '%m/%d/%y %H:%M:%S')
    except Exception as e:
        raise ValueError("Conversion of `date_time` to datetime format failed, make sure to provide a string in format '%m/%d/%y %H:%M:%S'. Full trace:" + str(e))
    temperature_celsius, risk_of_rain, wave_height = get_weather_report_at_coordinates((lon, lat), date_time)
    return f"Weather report for {location}, {date_time}: Temperature will be {temperature_celsius}°C, risk of rain is {risk_of_rain*100:.0f}%, wave height is {wave_height}m."

In general, to ease the load on your LLM, the good question to ask yourself is: "How easy would it be for me, if I were dumb and using this tool for the first time ever, to program with this tool and correct my own errors?"

Give more arguments to the agent

To pass additional objects to your agent beyond a simple string describing the task, you can use the `additional_args` argument to pass any type of object:

from smolagents import CodeAgent, InferenceClientModel

model_id = "meta-llama/Llama-3.3-70B-Instruct"

agent = CodeAgent(tools=[], model=InferenceClientModel(model_id=model_id), add_base_tools=True)

agent.run(
    "Why does Mike not know many people in New York?",
    additional_args={"mp3_sound_file_url":'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/recording.mp3'}
)

For instance, you can use this `additional_args` argument to pass images or strings that you want your agent to leverage.
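
For example, here is a hedged sketch of passing an image to the agent defined above; the file path is hypothetical, and any Python object can be passed the same way.

from PIL import Image

document_image = Image.open("path/to/document.png")  # hypothetical path

agent.run(
    "Summarize the document shown in the image stored in the variable `document_image`.",
    additional_args={"document_image": document_image},
)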

How to debug your agent

1. Use a stronger LLM

In an agentic workflow, some errors are actual errors, while others are the fault of your LLM engine not reasoning properly. For instance, consider this trace of a `CodeAgent` that I asked to create a car picture:

==================================================================================================== New task ====================================================================================================
Make me a cool car picture
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
image_generator(prompt="A cool, futuristic sports car with LED headlights, aerodynamic design, and vibrant color, high-res, photorealistic")
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Step 1:

- Time taken: 16.35 seconds
- Input tokens: 1,383
- Output tokens: 77
──────────────────────────────────────────────────────────────────────────────────────────────────── New step ────────────────────────────────────────────────────────────────────────────────────────────────────
Agent is executing the code below: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
final_answer("/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png")
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Print outputs:

Last output from code snippet: ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png
Final answer:
/var/folders/6m/9b1tts6d5w960j80wbw9tx3m0000gn/T/tmpx09qfsdd/652f0007-3ee9-44e2-94ac-90dae6bb89a4.png

Instead of an image, the user sees a path being returned to them. It could look like a bug in the system, but actually the agentic system did not cause the error: the LLM brain simply made the mistake of not saving the image output into a variable. Thus it cannot access the image again except by leveraging the path that was logged while saving it, so it returns the path instead of an image.

The first step to debugging your agent is thus "use a more powerful LLM". Alternatives like `Qwen2.5-72B-Instruct` would not have made that mistake.
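
As a minimal sketch, simply swapping in the stronger model is often enough; this reuses the image generation tool and model id that appear later in this guide.

from smolagents import CodeAgent, InferenceClientModel, load_tool

image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

agent = CodeAgent(
    tools=[image_generation_tool],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-72B-Instruct"),
)
agent.run("Make me a cool car picture")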

2. Provide more information or specific instructions

You can also use less powerful models, provided you guide them more effectively.

Put yourself in your model's shoes: if you were the model solving the task, would you struggle with the information available to you (from the system prompt + task formulation + tool descriptions)?

Would you need some added clarifications?

  • If the instructions should always be given to the agent (as we generally understand a system prompt to work): you can pass them as a string to the `instructions` argument at agent initialization.
  • If it is about a specific task to solve: add all these details to the task. The task can be very long, like dozens of pages.
  • If it is about how to use specific tools: include it in the `description` attribute of these tools (see the sketch after this list).
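
For that last bullet, here is a minimal sketch of enriching a tool's `description` attribute after loading it; the extra sentence added here is just an illustrative example.

from smolagents import load_tool

image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

# The description is shown to the agent next to the tool, so usage guidance added
# here is available on every step.
image_generation_tool.description += (
    " Always describe the subject, the style and the lighting in your prompt."
)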

3. Change the prompt templates (generally not advised)

If the above clarifications are not sufficient, you can change the agent's prompt templates.

Let's see how it works. For example, let's check the default prompt template of the CodeAgent (the version below is shortened by skipping the zero-shot examples).

print(agent.prompt_templates["system_prompt"])

Here is what you get:

You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.

At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
During each intermediate step, you can use 'print()' to save whatever important information you will then need.
These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
In the end you have to return a final answer using the `final_answer` tool.

Here are a few examples using notional tools:
---
Task: "Generate an image of the oldest person in this document."

Thought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
<code>
answer = document_qa(document=document, question="Who is the oldest person mentioned?")
print(answer)
</code>
Observation: "The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland."

Thought: I will now generate an image showcasing the oldest person.
<code>
image = image_generator("A portrait of John Doe, a 55-year-old man living in Canada.")
final_answer(image)
</code>

---
Task: "What is the result of the following operation: 5 + 3 + 1294.678?"

Thought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool
<code>
result = 5 + 3 + 1294.678
final_answer(result)
</code>

---
Task:
"Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French.
You have been provided with these additional arguments, that you can access using the keys as variables in your python code:
{'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}"

Thought: I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.
<code>
translated_question = translator(question=question, src_lang="French", tgt_lang="English")
print(f"The translated question is {translated_question}.")
answer = image_qa(image=image, question=translated_question)
final_answer(f"The answer is {answer}")
</code>

---
Task:
In a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.
What does he say was the consequence of Einstein learning too much math on his creativity, in one word?

Thought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.
Code:
```py
pages = search(query="1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein")
print(pages)
```<end_code>
Observation:
No result found for query "1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein".

Thought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.
Code:
```py
pages = search(query="1979 interview Stanislaus Ulam")
print(pages)
```<end_code>
Observation:
Found 6 pages:
[Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)

[Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)

(truncated)

Thought: I will read the first 2 pages to know more.
Code:
```py
for url in ["https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/", "https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/"]:
    whole_page = visit_webpage(url)
    print(whole_page)
    print("\n" + "="*80 + "\n")  # Print separator between pages
```<end_code>
Observation:
Manhattan Project Locations:
Los Alamos, NM
Stanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at
(truncated)

Thought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: "He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity." Let's answer in one word.
Code:
```py
final_answer("diminished")
```<end_code>

---
Task: "Which city has the highest population: Guangzhou or Shanghai?"

Thought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.
Code:
```py
for city in ["Guangzhou", "Shanghai"]:
    print(f"Population {city}:", search(f"{city} population")
```<end_code>
Observation:
Population Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']
Population Shanghai: '26 million (2019)'

Thought: Now I know that Shanghai has the highest population.
Code:
```py
final_answer("Shanghai")
```<end_code>

---
Task: "What is the current age of the pope, raised to the power 0.36?"

Thought: I will use the tool `wiki` to get the age of the pope, and confirm that with a web search.
Code:
```py
pope_age_wiki = wiki(query="current pope age")
print("Pope age as per wikipedia:", pope_age_wiki)
pope_age_search = web_search(query="current pope age")
print("Pope age as per google search:", pope_age_search)
```<end_code>
Observation:
Pope age: "The pope Francis is currently 88 years old."

Thought: I know that the pope is 88 years old. Let's compute the result using python code.
Code:
```py
pope_current_age = 88 ** 0.36
final_answer(pope_current_age)
```<end_code>

Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:
{%- for tool in tools.values() %}
- {{ tool.to_tool_calling_prompt() }}
{%- endfor %}

{%- if managed_agents and managed_agents.values() | list %}
You can also give tasks to team members.
Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a real human, be as detailed and verbose as necessary in your task description.
You can also include any relevant variables or context using the 'additional_args' argument.
Here is a list of the team members that you can call:
{%- for agent in managed_agents.values() %}
- {{ agent.name }}: {{ agent.description }}
{%- endfor %}
{%- else %}
{%- endif %}

Here are the rules you should always follow to solve your task:
1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
2. Use only variables that you have defined!
3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
10. Don't give up! You're in charge of solving the task, not providing directions to solve it.

Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.

As you can see, there are placeholders like `"{{ tool.description }}"`: these are used upon agent initialization to insert automatically generated descriptions of the tools or managed agents.

So while you can overwrite this system prompt template by passing your custom prompt as an argument to the `system_prompt` parameter, your new system prompt can contain the following placeholders:

  • To insert tool descriptions:
    {%- for tool in tools.values() %}
    - {{ tool.to_tool_calling_prompt() }}
    {%- endfor %}
  • To insert the descriptions of the managed agents, if there are any:
    {%- if managed_agents and managed_agents.values() | list %}
    You can also give tasks to team members.
    Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a real human, be as detailed and verbose as necessary in your task description.
    You can also include any relevant variables or context using the 'additional_args' argument.
    Here is a list of the team members that you can call:
    {%- for agent in managed_agents.values() %}
    - {{ agent.name }}: {{ agent.description }}
    {%- endfor %}
    {%- endif %}
  • For `CodeAgent` only, to insert the list of authorized imports: `"{{authorized_imports}}"`

Then you can change the system prompt as follows:

agent.prompt_templates["system_prompt"] = agent.prompt_templates["system_prompt"] + "\nHere you go!"

This also works with the ToolCallingAgent.
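
If you really need a full replacement rather than an appended sentence, a minimal sketch could look like the following; this is not the recommended path, and any replacement should keep the placeholders listed above so that tool descriptions and authorized imports still get injected.

custom_system_prompt = """You are a careful coding agent. Solve the task step by step,
using 'Thought:' and 'Code:' sequences as before.

You only have access to these tools:
{%- for tool in tools.values() %}
- {{ tool.to_tool_calling_prompt() }}
{%- endfor %}

You can only import from these modules: {{authorized_imports}}
"""

agent.prompt_templates["system_prompt"] = custom_system_prompt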

But it is generally simpler to pass the `instructions` argument at agent initialization, like:

agent = CodeAgent(tools=[], model=InferenceClientModel(model_id=model_id), instructions="Always talk like a 5 year old.")

4. Extra planning

We provide a model for a supplementary planning step, which an agent can run regularly in between normal action steps. In this step, there is no tool call; the LLM is simply asked to update the list of facts it knows and to reflect on what steps it should take next based on those facts.

from smolagents import load_tool, CodeAgent, InferenceClientModel, WebSearchTool
from dotenv import load_dotenv

load_dotenv()

# Import tool from Hub
image_generation_tool = load_tool("m-ric/text-to-image", trust_remote_code=True)

search_tool = WebSearchTool()

agent = CodeAgent(
    tools=[search_tool, image_generation_tool],
    model=InferenceClientModel(model_id="Qwen/Qwen2.5-72B-Instruct"),
    planning_interval=3 # This is where you activate planning!
)

# Run it!
result = agent.run(
    "How long would a cheetah at full speed take to run the length of Pont Alexandre III?",
)