A Model Context Protocol (MCP) server for building and managing Textual TUI applications. This server provides tools to validate CSS, generate widgets, analyze styles, and search Textual documentation.
The Textual MCP Server enables AI assistants to help you develop Textual applications by providing:
- CSS Validation: Validate TCSS (Textual CSS) using Textual's native parser
- Code Generation: Generate boilerplate for widgets, layouts, and screens
- Style Analysis: Analyze CSS selectors, detect conflicts, and validate classes
- Documentation Search: Semantic search through Textual docs and examples
- Widget Information: Get detailed information about Textual widgets and their properties
1. Install uv if you haven't already:

   ```bash
   # macOS
   brew install uv

   # Linux/Windows
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Clone the repository:
   ```bash
   git clone https://github.com/kevpgoff/textual-mcp.git
   cd textual-mcp
   ```

3. Create a virtual environment and install dependencies:
   ```bash
   # Create a virtual environment
   uv venv

   # Activate it (Linux/macOS)
   source .venv/bin/activate

   # Activate it (Windows)
   .venv\Scripts\activate

   # Install dependencies
   uv sync
   ```

4. Configure environment (optional):
   ```bash
   cp .env.example .env
   # Edit .env with your settings
   ```

Install the server in Claude Desktop:

```bash
fastmcp install claude-desktop server.py --name "Textual MCP"
```

In Claude Code:

```bash
fastmcp install claude-code server.py --name "Textual MCP"
```

In Cursor:

```bash
fastmcp install cursor server.py --name "Textual MCP"
```

To run the server locally during development:

```bash
# With MCP Inspector
fastmcp dev server.py

# Or directly with uv
uv run python server.py

# Or if venv is activated
python server.py
```

If your server needs extra packages:
```bash
fastmcp install claude-desktop server.py \
  --name "Textual MCP" \
  --with pandas \
  --with requests
```

To load environment variables from a file:

```bash
fastmcp install claude-desktop server.py \
  --name "Textual MCP" \
  --env-file .env
```

To generate an MCP JSON configuration instead:

```bash
# Generate and copy to clipboard
fastmcp install mcp-json server.py --copy

# Generate with specific dependencies
fastmcp install mcp-json server.py \
  --with textual \
  --with rich
```

For development, install the package in editable mode:
```bash
uv pip install -e .
```

Run the test suite:

```bash
uv run pytest
```

Manage dependencies:

```bash
# Add a new dependency
uv add new-package

# Update all dependencies
uv sync --upgrade
```

The server exposes the following tools:
- `validate_tcss` - Validate TCSS content using Textual's native CSS parser

  ```python
  # Example: Validate CSS content
  await client.call_tool("validate_tcss", {
      "css_content": "Button { background: $primary; }",
      "strict_mode": False
  })
  ```
- `validate_tcss_file` - Validate a TCSS file directly by path

  ```python
  # Example: Validate a CSS file
  await client.call_tool("validate_tcss_file", {
      "file_path": "styles/app.tcss",
      "watch": False
  })
  ```
- `validate_inline_styles` - Check inline CSS declarations in Python code
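  A minimal illustrative call, following the pattern of the other validation tools (the `python_code` parameter name is an assumption, not taken from the tool's schema):

  ```python
  # Example (illustrative): check inline style declarations in widget code
  await client.call_tool("validate_inline_styles", {
      "python_code": 'self.styles.background = "red"'
  })
  ```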
- `check_selector` - Validate a single CSS selector for correctness
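  A minimal illustrative call (the `selector` parameter name is an assumption):

  ```python
  # Example (illustrative): validate a single selector
  await client.call_tool("check_selector", {
      "selector": "Screen > Container.sidebar Button#submit"
  })
  ```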
- `generate_widget` - Generate custom widget boilerplate code

  ```python
  # Example: Generate a custom button widget
  await client.call_tool("generate_widget", {
      "widget_name": "CustomButton",
      "widget_type": "input",
      "includes_css": True
  })
  ```
- `generate_grid_layout` - Generate grid layout code with specified rows and columns

  ```python
  # Example: Generate a 3x3 grid layout
  await client.call_tool("generate_grid_layout", {
      "rows": 3,
      "columns": 3,
      "areas": {
          "header": {"row": 0, "column": "0-2"},
          "sidebar": {"row": "1-2", "column": 0},
          "content": {"row": "1-2", "column": "1-2"}
      }
  })
  ```
- `list_widget_types` - List all available Textual widgets with descriptions
- `list_event_handlers` - List supported event handlers for widgets
- `validate_widget_name` - Validate widget names for Python naming conventions
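These utility tools follow the same calling convention; a minimal illustrative example (the argument shapes are assumptions, not taken from the tools' schemas):

```python
# Example (illustrative): list available widget types
await client.call_tool("list_widget_types", {})

# Example (illustrative): check a proposed widget name
await client.call_tool("validate_widget_name", {"widget_name": "MyCustomWidget"})
```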
- `detect_style_conflicts` - Identify potential CSS conflicts and overlapping selectors

  ```python
  # Example: Detect conflicts in CSS
  await client.call_tool("detect_style_conflicts", {
      "css_content": """
      Button { background: red; color: white; }
      .primary { background: blue; }
      Button.primary { color: black; }
      """
  })
  ```
  Returns a detailed analysis including:

  - Conflicting property declarations between selectors
  - Overlapping selectors with specificity scores
  - Resolution suggestions for conflicts
  - A summary of total conflicts and issues
- `analyze_selectors` - Analyze CSS selector usage and specificity (planned)
- `search_textual_docs` - Semantic search across Textual documentation

  ```python
  # Example: Search for reactive properties info
  await client.call_tool("search_textual_docs", {
      "query": "how to create reactive properties",
      "limit": 5,
      "content_type": ["guide", "api"]  # Optional: filter by type
  })
  ```
- `search_textual_code_examples` - Search specifically for code examples

  ```python
  # Example: Find DataTable examples
  await client.call_tool("search_textual_code_examples", {
      "query": "DataTable sorting",
      "language": "python",
      "limit": 10
  })
  ```
- `index_textual_docs` - Manually trigger documentation indexing
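  A minimal illustrative call (the `force_reindex` argument is an assumption; check the tool's schema for the actual parameters):

  ```python
  # Example (illustrative): rebuild the documentation index
  await client.call_tool("index_textual_docs", {"force_reindex": True})
  ```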
The Textual MCP Server includes semantic search capabilities for Textual documentation. This feature uses VectorDB2 for local embeddings and search.
1. Install the additional search dependencies.

2. GitHub Token (required for indexing): documentation indexing requires a GitHub token to avoid API rate limits.

   Create a token at: https://github.com/settings/tokens/new

   - No special permissions are needed (public access is sufficient)
   - The token is only used to raise the rate limit from 60 to 5,000 requests/hour

   Set it via an environment variable:

   ```bash
   export GITHUB_TOKEN="your-github-token"
   ```

   Or add it to your `.env` file:

   ```
   GITHUB_TOKEN=your-github-token
   ```
3. Configuration options (in `config/textual-mcp.yaml`):

   ```yaml
   search:
     auto_index: true                          # Auto-index docs on first use
     embeddings_model: "BAAI/bge-base-en-v1.5" # Embedding model
     persist_path: "./data/textual_docs.db"    # Where to store the index
     chunk_size: 200                           # Text chunk size for indexing
     chunk_overlap: 20                         # Overlap between chunks
     github_token: null                        # Optional: for private repos
     default_limit: 10                         # Default number of results
     similarity_threshold: 0.7                 # Minimum similarity score
   ```
The server automatically indexes Textual documentation on first use. You can also manually trigger indexing:
```bash
# Run the indexing script
uv run python scripts/index_documentation.py

# With custom settings
uv run python scripts/index_documentation.py --embeddings "sentence-transformers/all-MiniLM-L6-v2" --force
```

Note: Indexing requires approximately 300 GitHub API requests. Make sure you have sufficient rate limit available.
1. General Documentation Search:

   ```python
   # Search across all documentation
   results = await search_textual_docs(
       query="reactive properties",
       limit=5
   )
   ```

2. Filtered Search:

   ```python
   # Search only in specific content types
   results = await search_textual_docs(
       query="CSS styling",
       content_type=["guide", "css_reference"],
       doc_path_pattern="*/css/*"
   )
   ```

3. Code Example Search:

   ```python
   # Find Python code examples
   results = await search_textual_code_examples(
       query="DataTable with sorting",
       language="python"
   )
   ```
The search system categorizes documentation into these types:
- `guide` - Tutorial and guide documents
- `api` - API reference documentation
- `widget` - Widget-specific documentation
- `example` - Code examples
- `css_reference` - CSS/styling documentation
- `code` - Code blocks within documentation
The Textual MCP Server uses Chonkie for intelligent document chunking, providing content-aware processing that preserves semantic coherence. This system adapts its chunking strategy based on content type:
- Code Examples: Uses a specialized code chunker that preserves code structure
- API Documentation: Uses semantic chunking with larger chunks to keep class/method documentation together
- Guides: Uses recursive markdown chunking that respects document structure
- CSS Reference: Uses semantic chunking with smaller chunks for individual properties
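Conceptually, this is a dispatch from content type to chunker. A minimal sketch, assuming Chonkie's `CodeChunker`, `SemanticChunker`, and `RecursiveChunker` classes; the mapping and chunk sizes below are illustrative, not the project's actual values:

```python
from chonkie import CodeChunker, RecursiveChunker, SemanticChunker

def chunker_for(content_type: str):
    """Illustrative content-type -> chunker dispatch (not the actual implementation)."""
    if content_type == "example":
        # Code examples: preserve code structure
        return CodeChunker(language="python")
    if content_type == "api":
        # API docs: semantic chunking with larger chunks
        return SemanticChunker(chunk_size=512)
    if content_type == "css_reference":
        # CSS reference: semantic chunking with smaller chunks per property
        return SemanticChunker(chunk_size=128)
    # Guides and everything else: structure-aware recursive chunking
    return RecursiveChunker()
```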
Chonkie's semantic chunking requires embedding models. To avoid runtime download issues, pre-initialize the models:
```bash
# Quick start - download and convert default models
python scripts/init_embeddings.py
```

This will:
- Download required embedding models from Hugging Face
- Convert them to the efficient model2vec format
- Store them in the `models/` directory
- Create a model registry for easy lookup
The initialization script provides one model by default:
- `minishlab/potion-base-8M` - Primary model for semantic chunking
To add a custom model:
```bash
python scripts/init_embeddings.py --model "sentence-transformers/your-model" --output "your-model"
```

The system automatically selects models in this order:
1. Check for local pre-initialized models in the `models/` directory
2. Use cached models from `~/.cache/huggingface/hub/` or `~/.cache/sentence_transformers/`
3. Fall back to downloading from Hugging Face
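A sketch of that resolution order (hypothetical helper; paths and cache-layout checks are illustrative only):

```python
from pathlib import Path

def resolve_model(name: str) -> str:
    """Illustrative lookup mirroring the order described above."""
    # 1. Pre-initialized local model in models/
    local = Path("models") / name.replace("/", "_")
    if local.exists():
        return str(local)
    # 2. Model cached by other projects (cache layouts vary by library version)
    for cache in (Path.home() / ".cache" / "huggingface" / "hub",
                  Path.home() / ".cache" / "sentence_transformers"):
        if cache.is_dir() and any(cache.glob(f"*{name.split('/')[-1]}*")):
            return name  # the embedding library will load it from its own cache
    # 3. Fall back to downloading from Hugging Face
    return name
```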
Control chunking behavior in `config/textual-mcp.yaml`:
```yaml
search:
  chunking_strategy: 'chonkie'  # Use 'manual' to disable semantic chunking
  # ... other settings
```

Set a custom models directory:

```bash
export TEXTUAL_MCP_MODELS_DIR=/path/to/your/models
python scripts/init_embeddings.py
```

If you see the error:
"Folder does not exist locally, attempting to use huggingface hub."
This means the models aren't initialized. Do one of the following:

- Run `python scripts/init_embeddings.py` (recommended)
- Ensure you have models cached from other projects
- Set `chunking_strategy: 'manual'` to use simpler chunking without embeddings
The system gracefully falls back to simpler chunking methods if semantic models aren't available, ensuring functionality even without pre-downloaded models.
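That fallback might look like the following sketch (hypothetical helper, assuming Chonkie's `SemanticChunker` and `TokenChunker`; not the project's actual code):

```python
def build_chunker(chunk_size: int = 200):
    """Prefer semantic chunking; degrade gracefully to token-based chunking."""
    try:
        from chonkie import SemanticChunker
        return SemanticChunker()  # requires an embedding model to be available
    except Exception:
        # No embeddings available: fall back to a simple token chunker
        from chonkie import TokenChunker
        return TokenChunker(chunk_size=chunk_size)
```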
- Native Textual Integration: Uses Textual's TCSS parser for 100% compatibility
- Intelligent Code Generation: Context-aware templates for common patterns
- Vector Search: Semantic search for better documentation discovery with local embeddings
- MCP Inspector Support: Debug and test tools interactively
MIT