Agentic AI Playground

An Agentic AI Research Assistant multi-agent system built with the smolagents library. Models used to run agents locally include llama3.1:8b-instruct-q8_0, llama3.1:8b-q8_0, qwen2.5:7b-instruct-q8_0, and qwen2.5-coder:7b-instruct-q8_0.

Notes

  • When running agents locally with the smolagents library, use a coding-optimized LLM. smolagents has agents write actual Python code to perform their actions, so a strong coding model is needed for the agents to produce correct, working code. Locally, I usually run qwen2.5-coder:7b-instruct-q8_0 for the best results; a minimal setup sketch is shown below.
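
A rough sketch of that local setup, assuming smolagents is installed with its litellm extra, an Ollama server is running on the default port (11434), and the duckduckgo-search dependency is available for the search tool; exact class names and arguments may differ across smolagents versions:

```python
# Sketch: a smolagents CodeAgent backed by a local, coding-optimized Ollama model.
from smolagents import CodeAgent, DuckDuckGoSearchTool, LiteLLMModel

# Point LiteLLM at the local Ollama server and pick the coder model,
# since CodeAgent runs the Python code the model writes at each step.
model = LiteLLMModel(
    model_id="ollama_chat/qwen2.5-coder:7b-instruct-q8_0",
    api_base="http://localhost:11434",
)

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)
agent.run("Find and summarize recent work on multi-agent LLM systems.")
```

Swapping `model_id` for one of the other quantized models listed above is the only change needed to compare them.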

About

An Agentic AI Research Playground for building AI agents and multi-agent systems leveraging open-source models, libraries, and frameworks. Models used to run agents locally on an Nvidia RTX A2000 GPU (8 GB VRAM) include the quantized Ollama models llama3.1:8b-instruct-q8_0, llama3.1:8b-q8_0, qwen2.5:7b-instruct-q8_0, and qwen2.5-coder:7b-instruct-q8_0.
