Description
Create mock tests in src/praisonai/tests/unit/.
For example, see how sequential tool calling is used in llm.py.
Mainly focus on the files inside src/praisonai-agents/praisonaiagents/.
Look at #842 for reference.
Here is the sequential code example, src/praisonai-agents/gemini-sequential.py, followed by its debug output (a mock-test sketch based on this example is included after the log):
from praisonaiagents import Agent

def get_stock_price(company_name: str) -> str:
    """
    Get the stock price of a company

    Args:
        company_name (str): The name of the company

    Returns:
        str: The stock price of the company
    """
    return f"The stock price of {company_name} is 100"

def multiply(a: int, b: int) -> int:
    """
    Multiply two numbers
    """
    return a * b

agent = Agent(
    instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
    llm="gemini/gemini-2.5-pro",
    tools=[get_stock_price, multiply]
)

result = agent.start("what is the stock price of Google? multiply the Google stock price with 2")
print(result)
❯ python gemini-sequential.py
19:24:08 - LiteLLM:DEBUG: litellm_logging.py:141 - [Non-Blocking] Unable to import GenericAPILogger - LiteLLM Enterprise Feature - No module named 'litellm.proxy.enterprise'
[19:24:08] DEBUG [19:24:08] litellm_logging.py:141 DEBUG [Non-Blocking] litellm_logging.py:141
Unable to import GenericAPILogger - LiteLLM Enterprise
Feature - No module named 'litellm.proxy.enterprise'
[19:24:09] DEBUG [19:24:09] telemetry.py:81 DEBUG Telemetry enabled with session telemetry.py:81
27bae6936a4eea15
DEBUG [19:24:09] llm.py:141 DEBUG LLM instance initialized with: { llm.py:141
"model": "gemini/gemini-2.5-pro",
"timeout": null,
"temperature": null,
"top_p": null,
"n": null,
"max_tokens": null,
"presence_penalty": null,
"frequency_penalty": null,
"logit_bias": null,
"response_format": null,
"seed": null,
"logprobs": null,
"top_logprobs": null,
"api_version": null,
"stop_phrases": null,
"api_key": null,
"base_url": null,
"verbose": true,
"markdown": true,
"self_reflect": false,
"max_reflect": 3,
"min_reflect": 1,
"reasoning_steps": false,
"extra_settings": {}
}
DEBUG [19:24:09] agent.py:416 DEBUG Tools passed to Agent with custom agent.py:416
LLM: [<function get_stock_price at 0x100d84ae0>, <function
multiply at 0x100f116c0>]
DEBUG [19:24:09] agent.py:1160 DEBUG Agent.chat parameters: { agent.py:1160
"prompt": "what is the stock price of Google? multiply the
Google stock price with 2",
"temperature": 0.2,
"tools": null,
"output_json": null,
"output_pydantic": null,
"reasoning_steps": false,
"agent_name": "Agent",
"agent_role": "Assistant",
"agent_goal": "You are a helpful assistant. You can use the
tools provided to you to help the user."
}
INFO [19:24:09] llm.py:593 INFO Getting response from llm.py:593
gemini/gemini-2.5-pro
DEBUG [19:24:09] llm.py:147 DEBUG LLM instance configuration: { llm.py:147
"model": "gemini/gemini-2.5-pro",
"timeout": null,
"temperature": null,
"top_p": null,
"n": null,
"max_tokens": null,
"presence_penalty": null,
"frequency_penalty": null,
"logit_bias": null,
"response_format": null,
"seed": null,
"logprobs": null,
"top_logprobs": null,
"api_version": null,
"stop_phrases": null,
"api_key": null,
"base_url": null,
"verbose": true,
"markdown": true,
"self_reflect": false,
"max_reflect": 3,
"min_reflect": 1,
"reasoning_steps": false
}
DEBUG [19:24:09] llm.py:143 DEBUG get_response parameters: { llm.py:143
"prompt": "what is the stock price of Google? multiply the Google
stock price with 2",
"system_prompt": "You are a helpful assistant. You can use the
tools provided to you to help the user.\n\nYour Role: Ass...",
"chat_history": "[1 messages]",
"temperature": 0.2,
"tools": [
"get_stock_price",
"multiply"
],
"output_json": null,
"output_pydantic": null,
"verbose": true,
"markdown": true,
"self_reflect": false,
"max_reflect": 3,
"min_reflect": 1,
"agent_name": "Agent",
"agent_role": "Assistant",
"agent_tools": [
"get_stock_price",
"multiply"
],
"kwargs": "{'reasoning_steps': False}"
}
DEBUG [19:24:09] llm.py:2180 DEBUG Generating tool definition for llm.py:2180
callable: get_stock_price
DEBUG [19:24:09] llm.py:2225 DEBUG Function signature: (company_name: llm.py:2225
str) -> str
DEBUG [19:24:09] llm.py:2244 DEBUG Function docstring: Get the stock llm.py:2244
price of a company
Args:
company_name (str): The name of the company
Returns:
str: The stock price of the company
DEBUG [19:24:09] llm.py:2250 DEBUG Param section split: ['Get the stock llm.py:2250
price of a company', 'company_name (str): The name of the company\n
\nReturns:\n str: The stock price of the company']
DEBUG [19:24:09] llm.py:2259 DEBUG Parameter descriptions: {'company_name llm.py:2259
(str)': 'The name of the company', 'Returns': '', 'str': 'The stock
price of the company'}
DEBUG [19:24:09] llm.py:2283 DEBUG Generated parameters: {'type': llm.py:2283
'object', 'properties': {'company_name': {'type': 'string',
'description': 'Parameter description not available'}}, 'required':
['company_name']}
DEBUG [19:24:09] llm.py:2292 DEBUG Generated tool definition: {'type': llm.py:2292
'function', 'function': {'name': 'get_stock_price', 'description':
'Get the stock price of a company', 'parameters': {'type':
'object', 'properties': {'company_name': {'type': 'string',
'description': 'Parameter description not available'}}, 'required':
['company_name']}}}
DEBUG [19:24:09] llm.py:2180 DEBUG Generating tool definition for llm.py:2180
callable: multiply
DEBUG [19:24:09] llm.py:2225 DEBUG Function signature: (a: int, b: int) llm.py:2225
-> int
DEBUG [19:24:09] llm.py:2244 DEBUG Function docstring: Multiply two llm.py:2244
numbers
DEBUG [19:24:09] llm.py:2250 DEBUG Param section split: ['Multiply two llm.py:2250
numbers']
DEBUG [19:24:09] llm.py:2259 DEBUG Parameter descriptions: {} llm.py:2259
DEBUG [19:24:09] llm.py:2283 DEBUG Generated parameters: {'type': llm.py:2283
'object', 'properties': {'a': {'type': 'integer', 'description':
'Parameter description not available'}, 'b': {'type': 'integer',
'description': 'Parameter description not available'}}, 'required':
['a', 'b']}
DEBUG [19:24:09] llm.py:2292 DEBUG Generated tool definition: {'type': llm.py:2292
'function', 'function': {'name': 'multiply', 'description':
'Multiply two numbers', 'parameters': {'type': 'object',
'properties': {'a': {'type': 'integer', 'description': 'Parameter
description not available'}, 'b': {'type': 'integer',
'description': 'Parameter description not available'}}, 'required':
['a', 'b']}}}
╭─ Agent Info ────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ 👤 Agent: Agent │
│ Role: Assistant │
│ Tools: get_stock_price, multiply │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
╭────────────────────────────────────────── Instruction ──────────────────────────────────────────╮
│ Agent Agent is processing prompt: what is the stock price of Google? multiply the Google stock │
│ price with 2 │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
DEBUG [19:24:09] main.py:206 DEBUG Empty content in display_generating, main.py:206
returning early
/Users/praison/miniconda3/envs/praisonai-package/lib/python3.11/site-packages/httpx/_models.py:408:
DeprecationWarning: Use 'content=<...>' to upload raw bytes/text content.
headers, stream = encode_request(
/Users/praison/miniconda3/envs/praisonai-package/lib/python3.11/site-packages/litellm/litellm_core_
utils/streaming_handler.py:1544: PydanticDeprecatedSince20: The `dict` method is deprecated; use
`model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration
Guide at https://errors.pydantic.dev/2.10/migration/
obj_dict = response.dict()
[19:24:19] DEBUG [19:24:19] llm.py:828 DEBUG [TOOL_EXEC_DEBUG] About to execute tool llm.py:828
get_stock_price with args: {'company_name': 'Google'}
DEBUG [19:24:19] agent.py:946 DEBUG Agent executing tool get_stock_price agent.py:946
with arguments: {'company_name': 'Google'}
DEBUG [19:24:19] telemetry.py:152 DEBUG Tool usage tracked: telemetry.py:152
get_stock_price, success=True
DEBUG [19:24:19] llm.py:830 DEBUG [TOOL_EXEC_DEBUG] Tool execution result: llm.py:830
The stock price of Google is 100
DEBUG [19:24:19] llm.py:837 DEBUG [TOOL_EXEC_DEBUG] Display message with llm.py:837
result: Agent Agent called function 'get_stock_price' with
arguments: {'company_name': 'Google'}
Function returned: The stock price of Google is 100
DEBUG [19:24:19] llm.py:842 DEBUG [TOOL_EXEC_DEBUG] About to display tool llm.py:842
call with message: Agent Agent called function 'get_stock_price'
with arguments: {'company_name': 'Google'}
Function returned: The stock price of Google is 100
DEBUG [19:24:19] main.py:175 DEBUG display_tool_call called with message: main.py:175
"Agent Agent called function 'get_stock_price' with arguments:
{'company_name': 'Google'}\nFunction returned: The stock price of
Google is 100"
DEBUG [19:24:19] main.py:182 DEBUG Cleaned message in display_tool_call: main.py:182
"Agent Agent called function 'get_stock_price' with arguments:
{'company_name': 'Google'}\nFunction returned: The stock price of
Google is 100"
╭─────────────────────────────────────── Tool Call ────────────────────────────────────────╮
│ Agent Agent called function 'get_stock_price' with arguments: {'company_name': 'Google'} │
│ Function returned: The stock price of Google is 100 │
╰──────────────────────────────────────────────────────────────────────────────────────────╯
DEBUG [19:24:19] main.py:206 DEBUG Empty content in display_generating, main.py:206
returning early
[19:24:23] DEBUG [19:24:23] llm.py:828 DEBUG [TOOL_EXEC_DEBUG] About to execute tool llm.py:828
multiply with args: {'b': 2, 'a': 100}
DEBUG [19:24:23] agent.py:946 DEBUG Agent executing tool multiply with agent.py:946
arguments: {'b': 2, 'a': 100}
DEBUG [19:24:23] telemetry.py:152 DEBUG Tool usage tracked: telemetry.py:152
multiply, success=True
DEBUG [19:24:23] llm.py:830 DEBUG [TOOL_EXEC_DEBUG] Tool execution result: llm.py:830
200
DEBUG [19:24:23] llm.py:837 DEBUG [TOOL_EXEC_DEBUG] Display message with llm.py:837
result: Agent Agent called function 'multiply' with arguments: {'b':
2, 'a': 100}
Function returned: 200
DEBUG [19:24:23] llm.py:842 DEBUG [TOOL_EXEC_DEBUG] About to display tool llm.py:842
call with message: Agent Agent called function 'multiply' with
arguments: {'b': 2, 'a': 100}
Function returned: 200
DEBUG [19:24:23] main.py:175 DEBUG display_tool_call called with message: main.py:175
"Agent Agent called function 'multiply' with arguments: {'b': 2,
'a': 100}\nFunction returned: 200"
DEBUG [19:24:23] main.py:182 DEBUG Cleaned message in display_tool_call: main.py:182
"Agent Agent called function 'multiply' with arguments: {'b': 2,
'a': 100}\nFunction returned: 200"
╭──────────────────────────────── Tool Call ────────────────────────────────╮
│ Agent Agent called function 'multiply' with arguments: {'b': 2, 'a': 100} │
│ Function returned: 200 │
╰───────────────────────────────────────────────────────────────────────────╯
DEBUG [19:24:23] main.py:206 DEBUG Empty content in display_generating, main.py:206
returning early
╭────────────────────────────────────── Generating... 5.4s ───────────────────────────────────────╮
│ The stock price of Google is 100 and after multiplying with 2 it is 200. │
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
[19:24:28] DEBUG [19:24:28] agent.py:1247 DEBUG Agent.chat completed in 19.26 agent.py:1247
seconds
DEBUG [19:24:28] telemetry.py:121 DEBUG Agent execution tracked: telemetry.py:121
success=True
DEBUG [19:24:28] telemetry.py:121 DEBUG Agent execution tracked: telemetry.py:121
success=True
The stock price of Google is 100 and after multiplying with 2 it is 200.
[19:24:29] DEBUG [19:24:29] telemetry.py:209 DEBUG Telemetry flush: {'enabled': telemetry.py:209
True, 'session_id': '27bae6936a4eea15', 'metrics':
{'agent_executions': 2, 'task_completions': 0, 'tool_calls':
2, 'errors': 0}, 'environment': {'python_version': '3.11.11',
'os_type': 'Darwin', 'framework_version': 'unknown'}}
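Based on the example and log above, here is a minimal mock-test sketch for src/praisonai/tests/unit/ (the file name test_sequential_tool_calling.py is hypothetical). It assumes pytest and unittest.mock, and it patches litellm.completion because the log shows llm.py going through LiteLLM; the patch target, the response shape, and the non-streaming assumption may need adjusting to match the actual code in src/praisonai-agents/praisonaiagents/llm/llm.py, and a dummy GEMINI_API_KEY may be required depending on how the LLM class is initialised.

# test_sequential_tool_calling.py (hypothetical name) -- a sketch, not a verified test.
# Assumptions: llm.py ultimately calls litellm.completion and can work with a
# non-streaming, OpenAI-style response object; adjust the patch target and the
# response shape if llm.py requests a streaming response instead.
from unittest.mock import MagicMock, patch

from praisonaiagents import Agent


def get_stock_price(company_name: str) -> str:
    """Get the stock price of a company"""
    return f"The stock price of {company_name} is 100"


def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b


def _fake_completion(content: str) -> MagicMock:
    # Build an object shaped like an OpenAI-style chat completion:
    # response.choices[0].message.content holds the final text and
    # message.tool_calls is None, i.e. no further tool calls are requested.
    message = MagicMock()
    message.content = content
    message.tool_calls = None
    choice = MagicMock()
    choice.message = message
    response = MagicMock()
    response.choices = [choice]
    return response


@patch("litellm.completion")
def test_sequential_tool_calling_with_mocked_llm(mock_completion):
    # The mocked LLM immediately returns the final answer, so no real Gemini
    # call is made; the tools themselves are plain local functions.
    mock_completion.return_value = _fake_completion(
        "The stock price of Google is 100 and after multiplying with 2 it is 200."
    )

    agent = Agent(
        instructions="You are a helpful assistant. You can use the tools provided to you to help the user.",
        llm="gemini/gemini-2.5-pro",
        tools=[get_stock_price, multiply]
    )
    result = agent.start("what is the stock price of Google? multiply the Google stock price with 2")

    assert mock_completion.called
    assert "200" in result

Run it with pytest src/praisonai/tests/unit/ once the file is in place. A fuller test could have the mock return a first response whose message.tool_calls requests get_stock_price, a second one requesting multiply, and only then the final answer, to exercise the sequential tool-calling path shown in the log.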