- Which models does LangManus support?
- How to deploy the Web UI frontend project?
- Can I use my local Chrome browser as the Browser Tool?
In LangManus, we categorize models into three types:

**Chat Models**

- Usage: For conversation scenarios, mainly called by the Supervisor and Agents.
- Supported models: `gpt-4o`, `qwen-max-latest`, `gemini-2.0-flash`, `deepseek-v3`.
**Reasoning Models**

- Usage: For complex reasoning tasks, used by the Planner when "Deep Think" mode is enabled.
- Supported models: `o1`, `o3-mini`, `QwQ-Plus`, `DeepSeek-R1`, `gemini-2.0-flash-thinking-exp`.
**Vision-Language Models (VLM)**

- Usage: For tasks combining vision and language, mainly called by the Browser Tool.
- Supported models: `gpt-4o`, `qwen2.5-vl-72b-instruct`, `gemini-2.0-flash`.
You can switch the model in use by modifying the `conf.yaml` file in the root directory of the project, using the litellm configuration format. For the specific configuration method, please refer to README.md.
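As a rough illustration of the litellm naming convention used in `conf.yaml`, model identifiers carry the provider as a prefix before the first `/`, while a bare name has no provider prefix. The helper below is hypothetical, not part of LangManus or litellm:

```python
# Illustrative sketch of the litellm-style "provider/model" naming convention
# seen in conf.yaml; this helper is hypothetical, not LangManus code.
def split_provider(model_id: str) -> tuple[str, str]:
    """Split a litellm-style model id into (provider, model_name)."""
    provider, sep, name = model_id.partition("/")
    if not sep:  # no prefix, e.g. "gpt-4o"
        return ("", model_id)
    return (provider, name)

print(split_provider("ollama/ollama-model-name"))            # ('ollama', 'ollama-model-name')
print(split_provider("openrouter/google/palm-2-chat-bison")) # ('openrouter', 'google/palm-2-chat-bison')
print(split_provider("gpt-4o"))                              # ('', 'gpt-4o')
```

Note that only the first `/` separates the provider, which is why OpenRouter model names such as `google/palm-2-chat-bison` can themselves contain a slash.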
LangManus supports the integration of Ollama models. You can refer to litellm Ollama.

The following is an example `conf.yaml` configuration for using Ollama models:

```yaml
REASONING_MODEL:
  model: "ollama/ollama-model-name"
  api_base: "http://localhost:11434" # Local Ollama service address; it can be started/inspected via `ollama serve`
```

LangManus supports the integration of OpenRouter models. You can refer to litellm OpenRouter. To use OpenRouter models, you need to:
- Obtain the OPENROUTER_API_KEY from OpenRouter (https://openrouter.ai/) and set it in the environment variables.
- Add the `openrouter/` prefix before the model name.
- Configure the correct OpenRouter base URL.

The following is a configuration example for using OpenRouter models:

- Configure OPENROUTER_API_KEY in the environment variables (such as in the `.env` file):

```
OPENROUTER_API_KEY=""
```

- Configure the model in `conf.yaml`:

```yaml
REASONING_MODEL:
  model: "openrouter/google/palm-2-chat-bison"
```

Note: The available models and their exact names may change over time. Please verify the currently available models and their correct identifiers in OpenRouter's official documentation.
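The `.env` entries above are plain key=value pairs that end up as environment variables. As a minimal sketch of that mapping (this parser is illustrative only; real projects typically use a library such as python-dotenv):

```python
# Minimal illustrative parser for .env-style key=value lines; not how
# LangManus actually loads its .env file.
def parse_dotenv(text: str) -> dict[str, str]:
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = '# API keys\nOPENROUTER_API_KEY="sk-or-example"\n'
print(parse_dotenv(sample))  # {'OPENROUTER_API_KEY': 'sk-or-example'}
```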
LangManus supports the integration of Google's Gemini models. You can refer to litellm Gemini. To use Gemini models, please follow these steps:

- Obtain a Gemini API key from Google AI Studio (https://makersuite.google.com/app/apikey).
- Configure the Gemini API key in the environment variables (such as in the `.env` file):

```
GEMINI_API_KEY="Your Gemini API key"
```

- Configure the model in `conf.yaml`:

```yaml
REASONING_MODEL:
  model: "gemini/gemini-pro"
```

Notes:

- Replace `"Your Gemini API key"` with your actual Gemini API key.
- The available models include `gemini-2.0-flash` for chat and visual tasks.
LangManus supports the integration of Azure models. You can refer to litellm Azure. An example `conf.yaml` configuration:

```yaml
REASONING_MODEL:
  model: "azure/gpt-4o-2024-08-06"
  api_base: $AZURE_API_BASE
  api_version: $AZURE_API_VERSION
  api_key: $AZURE_API_KEY
```

LangManus provides an out-of-the-box Web UI frontend project; you can complete the deployment through the following steps. Please visit the LangManus Web UI GitHub repository for more information.
First, ensure you have cloned and installed the LangManus backend project. Enter the backend project directory and start the service:

```shell
cd langmanus
make serve
```

By default, the LangManus backend service will run on http://localhost:8000.
Next, clone the LangManus Web UI frontend project and install its dependencies:

```shell
git clone https://github.com/langmanus/langmanus-web.git
cd langmanus-web
pnpm install
```

Note: If you haven't installed `pnpm` yet, please install it first:

```shell
npm install -g pnpm
```
After completing the dependency installation, start the Web UI development server:

```shell
pnpm dev
```

By default, the LangManus Web UI service will run on http://localhost:3000.
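If either default port (8000 for the backend, 3000 for the Web UI) is already taken, the corresponding service will fail to bind. A quick pre-flight check, sketched here as a hypothetical helper (not part of LangManus) using only the Python standard library:

```python
import socket

# Hypothetical pre-flight helper (not part of LangManus): returns True if
# something is already listening on the given local TCP port.
def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Check the default backend (8000) and Web UI (3000) ports before starting.
for port in (8000, 3000):
    print(port, "in use" if port_in_use(port) else "free")
```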
LangManus uses browser-use to implement browser-related functionality, and browser-use is built on Playwright. Therefore, you need to install Playwright's browser instances before first use:

```shell
uv run playwright install
```

Yes. LangManus uses browser-use to implement browser-related functionality, and browser-use is based on Playwright. By configuring CHROME_INSTANCE_PATH in the `.env` file, you can specify the path to your local Chrome browser and use that local browser instance.
- **Exit all Chrome browser processes.** Before using the local Chrome browser, ensure all Chrome processes are completely closed; otherwise, browser-use cannot start the browser instance properly.

- **Set `CHROME_INSTANCE_PATH`.** In the project's `.env` file, add or modify the following configuration item:

  ```
  CHROME_INSTANCE_PATH=/path/to/your/chrome
  ```

  Replace `/path/to/your/chrome` with the executable file path of your local Chrome browser. For example:

  - macOS: `/Applications/Google Chrome.app/Contents/MacOS/Google Chrome`
  - Windows: `C:\Program Files\Google\Chrome\Application\chrome.exe`
  - Linux: `/usr/bin/google-chrome`

- **Start LangManus.** After starting LangManus, browser-use will use the local Chrome browser instance you specified.

- **Access the LangManus Web UI.** Since your local Chrome browser is now being controlled by browser-use, you need to use another browser (such as Safari or Mozilla Firefox) to access LangManus's Web interface, which is typically at http://localhost:3000. Alternatively, you can access the LangManus Web UI from another device.
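A common failure mode in the steps above is a typo in the `CHROME_INSTANCE_PATH` value. The sketch below (a hypothetical helper, not LangManus code) checks that the configured path points to an existing executable file before you start the service:

```python
import os

# Hypothetical pre-flight check (not part of LangManus): verify that the
# CHROME_INSTANCE_PATH value points to an existing executable file.
def chrome_path_ok(path: str) -> bool:
    return bool(path) and os.path.isfile(path) and os.access(path, os.X_OK)

print(chrome_path_ok(""))                 # False: unset/empty path
print(chrome_path_ok("/no/such/chrome"))  # False: file does not exist
```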