This document provides detailed configuration information for cursor-tools.
cursor-tools can be configured through two main mechanisms:
- Environment variables (API keys and core settings)
- JSON configuration file (provider settings, model preferences, and command options)
For environment variables, create `.cursor-tools.env` in your project root or `~/.cursor-tools/.env` in your home directory:
```bash
# Required API Keys
PERPLEXITY_API_KEY="your-perplexity-api-key"   # Required for web search
GEMINI_API_KEY="your-gemini-api-key"           # Required for repository analysis

# Optional API Keys
OPENAI_API_KEY="your-openai-api-key"           # For browser commands with OpenAI
ANTHROPIC_API_KEY="your-anthropic-api-key"     # For browser commands with Anthropic
GITHUB_TOKEN="your-github-token"               # For enhanced GitHub access

# Configuration Options
USE_LEGACY_CURSORRULES="true"                  # Use legacy .cursorrules file (default: false)
```
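The same variables can also be exported directly in your shell instead of (or in addition to) the file. A minimal sketch, using placeholder keys:

```bash
# Export API keys for the current shell session instead of using .cursor-tools.env
export PERPLEXITY_API_KEY="your-perplexity-api-key"
export GEMINI_API_KEY="your-gemini-api-key"
```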
For the JSON configuration file, create `cursor-tools.config.json` in your project root to customize behavior. Here's a comprehensive example with all available options:
```json
{
  "perplexity": {
    "model": "sonar-pro",            // Default model for web search
    "maxTokens": 8000                // Maximum tokens for responses
  },
  "gemini": {
    "model": "gemini-2.0-pro-exp",   // Default model for repository analysis
    "maxTokens": 10000               // Maximum tokens for responses
  },
  "plan": {
    "fileProvider": "gemini",        // Provider for file identification
    "thinkingProvider": "openai",    // Provider for plan generation
    "fileMaxTokens": 8192,           // Tokens for file identification
    "thinkingMaxTokens": 8192        // Tokens for plan generation
  },
  "repo": {
    "provider": "gemini",            // Default provider for repo command
    "maxTokens": 10000               // Maximum tokens for responses
  },
  "doc": {
    "maxRepoSizeMB": 100,            // Maximum repository size for remote docs
    "provider": "gemini",            // Default provider for doc generation
    "maxTokens": 10000               // Maximum tokens for responses
  },
  "browser": {
    "defaultViewport": "1280x720",   // Default browser window size
    "timeout": 30000,                // Default timeout in milliseconds
    "stagehand": {
      "env": "LOCAL",                // Stagehand environment
      "headless": true,              // Run browser in headless mode
      "verbose": 1,                  // Logging verbosity (0-2)
      "debugDom": false,             // Enable DOM debugging
      "enableCaching": false,        // Enable response caching
      "model": "claude-3-5-sonnet-latest", // Default Stagehand model
      "provider": "anthropic",       // AI provider (anthropic or openai)
      "timeout": 30000               // Operation timeout
    }
  },
  "tokenCount": {
    "encoding": "o200k_base"         // Token counting method
  },
  "openai": {
    "maxTokens": 8000                // Will be used when provider is "openai"
  },
  "anthropic": {
    "maxTokens": 8000                // Will be used when provider is "anthropic"
  }
}
```
Perplexity settings (`perplexity`):
- `model`: The AI model to use for web searches
- `maxTokens`: Maximum tokens in responses

Gemini settings (`gemini`):
- `model`: The AI model for repository analysis
- `maxTokens`: Maximum tokens in responses
- Note: For repositories >800K tokens, automatically switches to gemini-2.0-pro-exp

Plan command settings (`plan`):
- `fileProvider`: AI provider for identifying relevant files
- `thinkingProvider`: AI provider for generating implementation plans
- `fileMaxTokens`: Token limit for file identification
- `thinkingMaxTokens`: Token limit for plan generation

Repository command settings (`repo`):
- `provider`: Default AI provider for repository analysis
- `maxTokens`: Maximum tokens in responses

Documentation command settings (`doc`):
- `maxRepoSizeMB`: Size limit for remote repositories
- `provider`: Default AI provider for documentation
- `maxTokens`: Maximum tokens in responses

Browser command settings (`browser`):
- `defaultViewport`: Browser window size
- `timeout`: Navigation timeout
- `stagehand`: Stagehand-specific settings, including:
  - `env`: Environment configuration
  - `headless`: Browser visibility
  - `verbose`: Logging detail level
  - `debugDom`: DOM debugging
  - `enableCaching`: Response caching
  - `model`: Default AI model
  - `provider`: AI provider selection
  - `timeout`: Operation timeout

Token counting settings (`tokenCount`):
- `encoding`: Method used for counting tokens
  - `o200k_base`: Optimized for Gemini (default)
  - `gpt2`: Traditional GPT-2 encoding
The GitHub commands support several authentication methods:
- Environment Variable: Set `GITHUB_TOKEN` in your environment: `GITHUB_TOKEN=your_token_here`
- GitHub CLI: If you have the GitHub CLI (`gh`) installed and logged in, cursor-tools will automatically use it to generate tokens with the necessary scopes.
- Git Credentials: If you have authenticated git with GitHub (via HTTPS), cursor-tools will automatically:
  - Use your stored GitHub token if available (credentials starting with `ghp_` or `gho_`)
  - Fall back to using Basic Auth with your git credentials
To set up git credentials:
- Configure git to use HTTPS instead of SSH:
  ```bash
  git config --global url."https://github.com/".insteadOf git@github.com:
  ```
- Store your credentials:
  ```bash
  git config --global credential.helper store        # Permanent storage
  # Or for macOS keychain:
  git config --global credential.helper osxkeychain
  ```
- The next time you perform a git operation requiring authentication, your credentials will be stored.
Authentication Status:
- Without authentication:
  - Public repositories: Limited to 60 requests per hour
  - Private repositories: Not accessible
  - Some features may be restricted
- With authentication (any method):
  - Public repositories: 5,000 requests per hour
  - Private repositories: Full access (if token has required scopes)
cursor-tools will automatically try these authentication methods in order:
1. `GITHUB_TOKEN` environment variable
2. GitHub CLI token (if `gh` is installed and logged in)
3. Git credentials (stored token or Basic Auth)
If no authentication is available, it will fall back to unauthenticated access with rate limits.
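To check which of these sources is available on your machine, you can inspect each one manually. This sketch assumes `gh` and `git` are installed and on your PATH:

```bash
# 1. Environment variable
echo "${GITHUB_TOKEN:-GITHUB_TOKEN not set}"

# 2. GitHub CLI token (only available after `gh auth login`)
gh auth token

# 3. Stored git credentials for github.com
printf 'protocol=https\nhost=github.com\n' | git credential fill
```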
When generating documentation, cursor-tools uses Repomix to analyze your repository. By default, it excludes certain files and directories that are typically not relevant for documentation:
- Node modules and package directories (`node_modules/`, `packages/`, etc.)
- Build output directories (`dist/`, `build/`, etc.)
- Version control directories (`.git/`)
- Test files and directories (`test/`, `tests/`, `__tests__/`, etc.)
- Configuration files (`.env`, `.config`, etc.)
, etc.) - Log files and temporary files
- Binary files and media files
You can customize the files and folders to exclude by adding a `.repomixignore` file to your project root. Example `.repomixignore` file for a Laravel project:
```
vendor/
public/
database/
storage/
.idea
.env
```
This ensures that the documentation focuses on your actual source code and documentation files. Support for customizing which input files are included is coming soon; open an issue if you run into problems here.
The `browser` commands support different AI models for processing. You can select the model using the `--model` option:
```bash
# Use gpt-4o
cursor-tools browser act "Click Login" --url "https://example.com" --model=gpt-4o

# Use Claude 3.5 Sonnet
cursor-tools browser act "Click Login" --url "https://example.com" --model=claude-3-5-sonnet-latest
```
You can set a default provider in your `cursor-tools.config.json` file under the `stagehand` section:
```json
{
  "stagehand": {
    "provider": "openai" // or "anthropic"
  }
}
```
You can also set a default model in your `cursor-tools.config.json` file under the `stagehand` section:
```json
{
  "stagehand": {
    "provider": "openai", // or "anthropic"
    "model": "gpt-4o"
  }
}
```
If no model is specified (either on the command line or in the config), a default model will be used based on your configured provider:
- OpenAI: `o3-mini`
- Anthropic: `claude-3-5-sonnet-latest`

Available models depend on your configured provider (OpenAI or Anthropic) in `cursor-tools.config.json` and your API key.
`cursor-tools` automatically configures Cursor by updating your project rules during installation. This provides:
- Command suggestions
- Usage examples
- Context-aware assistance
For new installations, we use the recommended `.cursor/rules/cursor-tools.mdc` path. For existing installations, we maintain compatibility with the legacy `.cursorrules` file. If both files exist, we prefer the new path and show a warning.
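If you want to keep using the legacy file, set the corresponding flag from the environment variables section in your `.cursor-tools.env`:

```bash
USE_LEGACY_CURSORRULES="true"
```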
To get the full benefit of cursor-tools, you should use the Cursor agent in "yolo mode". Ideal settings:
The `ask` command requires both a provider and a model to be specified. While these must be provided via command-line arguments, `maxTokens` can be configured through the provider-specific settings:
```json
{
  "openai": {
    "maxTokens": 8000     // Will be used when provider is "openai"
  },
  "anthropic": {
    "maxTokens": 8000     // Will be used when provider is "anthropic"
  }
}
```
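For example, a typical invocation might look like the following (the `--provider` and `--model` flag names are assumptions; adjust them to whatever your installed version accepts):

```bash
# Ask a one-off question with an explicit provider and model
cursor-tools ask "What is the difference between OAuth and OpenID Connect?" --provider openai --model o3-mini
```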
The plan command uses two different models:
- A file identification model (default: Gemini with `gemini-2.0-pro-exp`)
- A thinking model for plan generation (default: OpenAI with `o3-mini`)
You can configure both models and their providers:
```json
{
  "plan": {
    "fileProvider": "gemini",
    "thinkingProvider": "openai",
    "fileModel": "gemini-2.0-pro-exp",
    "thinkingModel": "o3-mini",
    "fileMaxTokens": 8192,
    "thinkingMaxTokens": 8192
  }
}
```
The OpenAI `o3-mini` model is the default thinking model because of its speed and efficiency in generating implementation plans.
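A plan run itself only needs a query; the providers and models above are read from the config. As a sketch:

```bash
# Uses the configured fileProvider/fileModel to identify relevant files
# and the configured thinkingProvider/thinkingModel to generate the plan
cursor-tools plan "Add input validation to the signup form"
```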