diff --git a/UseCases/Chat_with_Teradata_MCP_Server/.Chat_with_Teradata_MCP_Server_Python.yaml b/UseCases/Chat_with_Teradata_MCP_Server/.Chat_with_Teradata_MCP_Server_Python.yaml
new file mode 100644
index 00000000..e3b6e789
--- /dev/null
+++ b/UseCases/Chat_with_Teradata_MCP_Server/.Chat_with_Teradata_MCP_Server_Python.yaml
@@ -0,0 +1,4 @@
+inputs:
+ - type: env
+ value: 'OPENAI_API_KEY'
+ cell: 12
diff --git a/UseCases/Chat_with_Teradata_MCP_Server/Chat_with_Teradata_MCP_Server_Python.ipynb b/UseCases/Chat_with_Teradata_MCP_Server/Chat_with_Teradata_MCP_Server_Python.ipynb
new file mode 100644
index 00000000..b31c789e
--- /dev/null
+++ b/UseCases/Chat_with_Teradata_MCP_Server/Chat_with_Teradata_MCP_Server_Python.ipynb
@@ -0,0 +1,450 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ " Chat with Vantage Using the Teradata MCP Server\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
Introduction
\n",
+ "\n",
+ " The Teradata MCP server provides a set of tools and prompts for interacting with Teradata databases, enabling AI agents and users to query, analyze, and manage their data efficiently.
\n",
+ "\n",
+ "\n",
+ "Model Context Protocol (MCP) is an open standard and open-source framework designed to enable seamless interaction between AI models (like large language models) and external tools, data sources, and software systems. It was introduced by Anthropic in November 2024 and is often described as the \"USB-C of AI apps\" due to its universal and standardized approach to AI integration.\n",
+ "
\n",
+ "
\n",
+ "
\n",
+ "image source: medium.com
\n",
+ "
\n",
+ "\n",
+ " Why MCP is Important?\n",
+ "
\n",
+ " - \n",
+ " Standardized Integration: MCP provides a universal interface for AI models to interact with APIs, databases, files, and other tools—eliminating the need for custom connectors for each integration.\n",
+ "
\n",
+ " - Real-Time Context Access: It allows AI models to fetch live data and context from external systems, improving the relevance and accuracy of responses.
\n",
+ " - Cross-Platform Interoperability: MCP supports multiple programming languages and platforms, making it easier to build AI-powered applications that work across diverse environments.
\n",
+ " - Secure and Scalable: Designed with enterprise use in mind, MCP supports secure, scalable connections between AI and business systems.
\n",
+ " - Enhanced AI Capabilities: By enabling access to external tools and data, MCP significantly expands what AI models can do—such as executing functions, reading files, or automating workflows.
\n",
+ "
\n",
+ "
\n",
+ " \n",
+ "Business Values
\n",
+ "\n",
+ " - Get insights from Vantage functions in various categories like, Data Quality, DBA, DB Performance etc.
\n",
+ " - Ask any question in natural language and get response from Vantage across all the available databases.
\n",
+ "
\n",
+ "
\n",
+ "Why Vantage?
\n",
+ "To maximize the business value of advanced analytic techniques including Machine Learning and Artificial Intelligence, it is estimated that organizations must scale their model development and deployment pipelines to 100s or 1000s of times greater amounts of data, models, or both.\n",
+ "
\n",
+ "
\n",
+ " ClearScape Analytics provides powerful, flexible end-to-end data connectivity, feature engineering, model training, evaluation, and operational functions that can be deployed at scale as enterprise data assets; treating the products of ML and AI as first-class analytic processes in the enterprise.
\n",
+ " \n",
+ "Steps in the analysis:
\n",
+ "\n",
+ " - Configuring the environment
\n",
+ " - Connect to Vantage
\n",
+ " - Install MCP
\n",
+ " - Launch the Chatbot
\n",
+ " - You can try your own question
\n",
+ " - Cleanup
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "1. Configuring the Environment\n",
+ "Here, we import the required libraries, set environment variables and environment paths (if required).
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "\n",
+ "!pip install panel==1.3.4 openai"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "
Note: The above statements will install the required libraries to run this demo. Be sure to restart the kernel after executing the above lines to bring the installed libraries into memory. The simplest way to restart the Kernel is by typing zero zero: 0 0
\n",
+ "
\n",
+ " \n",
+ "\n",
+ "
Note: To ensure that the Chatbot interface reflects the latest changes, please reload the page by clicking the 'Reload' button or pressing F5 on your keyboard for first-time only This will update the notebook with the latest modifications, and you'll be able to interact with the Chatbot using the new libraries.
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Standard Libraries\n",
+ "import os\n",
+ "import getpass\n",
+ "import warnings\n",
+ "warnings.filterwarnings(\"ignore\")\n",
+ "\n",
+ "# Teradata Libraries\n",
+ "from teradataml import *"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "2. Connect to Vantage\n",
+ "We will be prompted to provide the password. We will enter the password, press the Enter key, and then use the down arrow to go to the next cell.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "%run -i ../startup.ipynb\n",
+ "eng = create_context(host = 'host.docker.internal', username = 'demo_user', password = password)\n",
+ "print(eng)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%capture\n",
+ "execute_sql(\"SET query_band='DEMO=Chat_with_Teradata_MCP_Server_Python.ipynb;' UPDATE FOR SESSION;\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We begin running steps with Shift + Enter keys.
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "2.1 Enter the OpenAI key and start Chatbot
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "if not os.getenv(\"OPENAI_API_KEY\"):\n",
+ " os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Getting Data for This Demo
\n",
+ "We have provided data for this demo on cloud storage. We have the option of either running the demo using foreign tables to access the data without using any storage on our environment or downloading the data to local storage, which may yield somewhat faster execution. However, we need to consider available storage. There are two statements in the following cell, and one is commented out. We may switch which mode we choose by changing the comment string.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# %run -i ../run_procedure.py \"call get_data('DEMO_GLM_Fraud_cloud');\" # Takes 1 minute\n",
+ "%run -i ../run_procedure.py \"call get_data('DEMO_GLM_Fraud_local');\" # Takes 2 minutes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional step – We should execute the below step only if we want to see the status of databases/tables created and space used.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%run -i ../run_procedure.py \"call space_report();\" # Takes 10 seconds"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We loaded the data from https://www.kaggle.com/code/georgepothur/4-financial-fraud-detection-xgboost/data into Vantage in a table named \"transaction_data\". We checked the data size and printed sample rows: 63k rows and 12 columns.
\n",
+ "*Please scroll down to the end of the notebook for detailed column descriptions of the dataset.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "txn_data = DataFrame(in_schema('DEMO_GLM_Fraud', 'transaction_data'))\n",
+ "\n",
+ "print(txn_data.shape)\n",
+ "txn_data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "3. Install MCP server\n",
+ "We will install MCP server on this machine and setup all the required envirnment to run MCP server. In the below cell we will run one MCP server setup script which will install MCP server (Clone from github), create python virtual environment and install required python libraries.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load the setup functions\n",
+ "exec(open('uv_installer_jupyter.py').read())\n",
+ "\n",
+ "# Then run the complete setup\n",
+ "setup_teradata_environment()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this setup, we are configuring the full MCP environment.
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "4. Launch the Chatbot\n",
+ "\n",
+ "In this demo we are using OpenAI GPT4.1 mini
model for process user query and pass context to MCP server. This advanced technology allows us to store and recall conversations, enabling our chatbot to provide more personalized and informed responses.As a mortgage advisor, our chatbot is trained to assist with a wide range of database, space, data qualty, general database queries, etc.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import panel as pn\n",
+ "pn.extension(design=\"material\")\n",
+ "\n",
+ "async def callback(contents, user, instance):\n",
+ " try:\n",
+ " command = [\"uv\", \"run\", \"td_mcp_client.py\", \"--query\", f\"{contents}\"]\n",
+ " result = subprocess.run(command, capture_output=True, text=True)\n",
+ "\n",
+ " if result.returncode == 0:\n",
+ " print(f\"✅ script executed: {result.stdout.strip()}\")\n",
+ " if result.stdout.strip():\n",
+ " return result.stdout.strip()\n",
+ " return \"Please try again in some time, right now our MCP server seems busy. or try to rephrase your question.\"\n",
+ " if result.returncode == 1:\n",
+ " return \"Please run the setup_teradata_environment function first from the above cell.\"\n",
+ "\n",
+ " except Exception as e:\n",
+ " print(\"ERROR: \", e)\n",
+ " return \"Please try again in sometime!\"\n",
+ "\n",
+ "pn.chat.ChatInterface(\n",
+ " callback=callback,\n",
+ " show_rerun=False,\n",
+ " show_undo=False,\n",
+ " show_clear=False,\n",
+ " width=1200,\n",
+ " height=400,\n",
+ " )"
+ ]
+ },
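+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, the same MCP client the Chatbot calls can be invoked directly (a minimal sketch, assuming the setup above completed and the kernel's working directory is the teradata-mcp-server checkout where td_mcp_client.py was copied):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import subprocess\n",
+ "\n",
+ "# Run a single query through the same client used by the Chatbot callback\n",
+ "result = subprocess.run(\n",
+ " [\"uv\", \"run\", \"td_mcp_client.py\", \"--query\", \"Show me all the tables\"],\n",
+ " capture_output=True, text=True)\n",
+ "print(result.stdout.strip())"
+ ]
+ },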
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\n",
+ "
Note: To ensure that the Chatbot interface reflects the latest changes, please reload the page by clicking the 'Reload' button or pressing F5 on your keyboard for first-time only This will update the notebook with the latest modifications, and you'll be able to interact with the Chatbot using the new libraries.
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "5. You can try your own question
\n",
+ "\n",
+ "\n",
+ "Here are some sample questions that you can try out:
\n",
+ "\n",
+ " - Show me all the tables
\n",
+ " - Show me top 5 records from table: transaction_data
\n",
+ " - Give me aggregated resource usage summary
\n",
+ " - Give me usage of a table and views by users
\n",
+ " - Give me Column Summary for table: transaction_data
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "6. Cleanup"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ " Databases and Tables
\n",
+ "We will use the following code to clean up tables and databases created for this demonstration.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%run -i ../run_procedure.py \"call remove_data('demo_glm_fraud');\" # Takes 5 seconds"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "remove_context()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "\n",
+ "Required Materials\n",
+ "Let’s look at the elements we have available for reference for this notebook:
\n",
+ "\n",
+ "Filters:
\n",
+ "\n",
+ " - Industry: Finance
\n",
+ " - Functionality: Machine Learning
\n",
+ " - Use Case: Fraud Detection
\n",
+ "
\n",
+ "\n",
+ "Related Resources:
\n",
+ "\n",
+ "\n",
+ "\n",
+ "Dataset:\n",
+ "\n",
+ "- `txn_id`: transaction id\n",
+ "- `step`: maps a unit of time in the real world. In this case 1 step is 1 hour of time. Total steps 744 (31 days simulation).\n",
+ "- `type`: CASH-IN, CASH-OUT, DEBIT, PAYMENT and TRANSFER\n",
+ "- `amount`: amount of the transaction in local currency\n",
+ "- `nameOrig`: customer who started the transaction\n",
+ "- `oldbalanceOrig`: customer's balance before the transaction\n",
+ "- `newbalanceOrig`: customer's balance after the transaction\n",
+ "- `nameDest`: customer who is the recipient of the transaction\n",
+ "- `oldbalanceDest`: recipient's balance before the transaction\n",
+ "- `newbalanceDest`: recipient's balance after the transaction\n",
+ "- `isFraud`: identifies a fraudulent transaction (1) and non fraudulent (0)\n",
+ "- `isFlaggedFraud`: flags illegal attempts to transfer more than 200,000 in a single transaction\n",
+ "\n",
+ "Links:
\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ ""
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/UseCases/Chat_with_Teradata_MCP_Server/__pycache__/td_mcp_client.cpython-39.pyc b/UseCases/Chat_with_Teradata_MCP_Server/__pycache__/td_mcp_client.cpython-39.pyc
new file mode 100644
index 00000000..11929594
Binary files /dev/null and b/UseCases/Chat_with_Teradata_MCP_Server/__pycache__/td_mcp_client.cpython-39.pyc differ
diff --git a/UseCases/Chat_with_Teradata_MCP_Server/images/mcp_as_usb.png b/UseCases/Chat_with_Teradata_MCP_Server/images/mcp_as_usb.png
new file mode 100644
index 00000000..58dedffe
Binary files /dev/null and b/UseCases/Chat_with_Teradata_MCP_Server/images/mcp_as_usb.png differ
diff --git a/UseCases/Chat_with_Teradata_MCP_Server/td_mcp_client.py b/UseCases/Chat_with_Teradata_MCP_Server/td_mcp_client.py
new file mode 100644
index 00000000..f317d2d4
--- /dev/null
+++ b/UseCases/Chat_with_Teradata_MCP_Server/td_mcp_client.py
@@ -0,0 +1,245 @@
+import argparse
+import asyncio
+import json
+import sys
+from dataclasses import dataclass, field
+from typing import Any, Optional
+from asyncio import TimeoutError
+from openai import AsyncOpenAI
+from openai.types.chat import ChatCompletionMessageParam, ChatCompletionToolParam
+from mcp import ClientSession, StdioServerParameters
+from mcp.client.stdio import stdio_client
+
+# Configuration constants
+MAX_RETRIES = 3
+RETRY_DELAY = 1 # seconds
+TIMEOUT = 30 # seconds
+MODEL_NAME = "gpt-4.1"
+
+# Initialize OpenAI client
+openai_client = AsyncOpenAI()
+
+# Create server parameters for stdio connection
+server_params = StdioServerParameters(
+ command="python", # Executable
+ args=["./src/teradata_mcp_server/server.py"],
+ env=None, # Optional environment variables
+)
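+
+# For reference, the client can also be exercised directly from a shell in the
+# server checkout (an illustrative invocation, assuming the uv environment
+# created by the setup script):
+#   uv run td_mcp_client.py --query "Show me all the tables"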
+
+
+async def execute_tool_with_retry(
+ session: ClientSession,
+ tool_name: str,
+ tool_args: dict,
+ max_retries: int = MAX_RETRIES,
+) -> Optional[Any]:
+ """
+ Execute a tool call with retry logic and timeout handling.
+
+ Args:
+ session: The client session
+ tool_name: Name of the tool to execute
+ tool_args: Arguments for the tool
+ max_retries: Maximum number of retry attempts
+
+ Returns:
+ Tool execution result or None if all retries fail
+ """
+ for attempt in range(max_retries):
+ try:
+ # asyncio.wait_for keeps this compatible with Python 3.9+ (asyncio.timeout needs 3.11+)
+ result = await asyncio.wait_for(session.call_tool(tool_name, tool_args), TIMEOUT)
+ if "teradata_mcp_server - ERROR" in result.content[0].text:
+ raise ValueError("Error executing SQL query")
+ return result
+ except TimeoutError:
+ # Timed out; wait briefly, then let the loop retry
+ await asyncio.sleep(RETRY_DELAY)
+ except Exception as e:
+ if attempt == max_retries - 1:
+ raise
+ await asyncio.sleep(RETRY_DELAY)
+ return None
+
+
+@dataclass
+class Chat:
+ """Manages chat interactions with Teradata database through OpenAI."""
+
+ messages: list[ChatCompletionMessageParam] = field(default_factory=list)
+ system_prompt: str = """You are a Teradata Database expert and you are tasked with generating SQL queries for Teradata based on user questions.
+ Your response should ONLY be based on the given context and follow the response guidelines and format instructions.
+
+ Here are some tips for writing Teradata style queries:
+ * Always use table aliases when your SQL statement involves more than one source
+ * Aggregated fields like COUNT(*) must be appropriately named
+ * Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 3 results using SELECT TOP 3
+ * Remove unnecessary ORDER BY clauses unless required
+ * Use TOP keyword instead of LIMIT or FETCH
+ * If you receive "Bad character in format or data" error, change column values to get values from table only
+
+ * Most critical: Use available Database: "DEMO_GLM_Fraud" and table: "transaction_data"
+
+ Response Guidelines:
+ * Give answers in bulleted points with proper markup
+ * Ensure responses are exclusively derived from query results
+ * Create syntactically correct Teradata-style queries
+ * Execute SQL and return final answers in simple English
+ * Do not return JSON or SQL"""
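+
+ # Illustrative query in the style the system prompt steers toward
+ # (TOP instead of LIMIT, table aliases, named aggregates):
+ #   SELECT TOP 3 t.type, COUNT(*) AS txn_count
+ #   FROM DEMO_GLM_Fraud.transaction_data t
+ #   GROUP BY t.type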
+
+ async def _handle_tool_call(
+ self,
+ session: ClientSession,
+ tool_call: Any,
+ available_tools: list[ChatCompletionToolParam],
+ ) -> None:
+ """Handle individual tool calls and their responses."""
+ tool_name = tool_call.function.name
+ # Tool arguments arrive as a JSON string; parse them instead of eval()
+ tool_args = json.loads(tool_call.function.arguments)
+
+ try:
+ result = await execute_tool_with_retry(session, tool_name, tool_args)
+ if result is None:
+ return
+
+ self._append_tool_messages(tool_call, tool_args, result)
+ await self._get_ai_response(available_tools)
+
+ except Exception as e:
+ error_message = f"Error executing SQL query: {str(e)}"
+ self.messages.append({"role": "assistant", "content": error_message})
+
+ async def process_query(self, session: ClientSession, query: str) -> None:
+ """Process a user query through OpenAI and execute resulting tool calls."""
+ try:
+ response = await session.list_tools()
+ available_tools = [
+ {
+ "type": "function",
+ "function": {
+ "name": tool.name,
+ "description": tool.description or "",
+ "parameters": tool.inputSchema,
+ },
+ }
+ for tool in response.tools
+ ]
+
+ ai_response = await self._get_ai_response(available_tools)
+
+ if ai_response.tool_calls:
+ for tool_call in ai_response.tool_calls:
+ await self._handle_tool_call(session, tool_call, available_tools)
+
+ except Exception as e:
+ # Surface the failure instead of silently dropping the query
+ print(f"ERROR processing query: {e}", file=sys.stderr)
+
+ async def run(self, query: str) -> None:
+ """Initialize and run a single query session.
+
+ Args:
+ query: User query to process
+ """
+ try:
+ async with stdio_client(server_params) as (read, write):
+ async with ClientSession(read, write) as session:
+ await session.initialize()
+ self.messages.append({"role": "user", "content": query})
+ await self.process_query(session, query)
+ except Exception as e:
+ # Exit code 1 signals the notebook callback that the MCP environment is not set up
+ print(f"ERROR: {e}", file=sys.stderr)
+ sys.exit(1)
+
+ async def _get_ai_response(
+ self, available_tools: list[ChatCompletionToolParam]
+ ) -> Any:
+ """Get response from OpenAI API with retry mechanism."""
+ # Reuse the module-level retry settings
+ for attempt in range(MAX_RETRIES):
+ try:
+ response = await openai_client.chat.completions.create(
+ model=MODEL_NAME,
+ messages=[
+ {"role": "system", "content": self.system_prompt},
+ *self.messages,
+ ],
+ tools=available_tools,
+ tool_choice="auto",
+ )
+
+ assistant_message = response.choices[0].message
+ if assistant_message.content:
+ print(f"AI response: {assistant_message.content}")
+ self.messages.append(
+ {"role": "assistant", "content": assistant_message.content}
+ )
+ else:
+ print(
+ "Please try again in some time, right now our MCP server seems busy. or try to rephrase your question."
+ )
+ self.messages.append({"role": "assistant", "content": ""})
+
+ return assistant_message
+
+ except Exception as e:
+ if attempt == MAX_RETRIES - 1:  # Last attempt
+ raise # Re-raise the exception after all retries are exhausted
+
+ await asyncio.sleep(RETRY_DELAY)
+
+ def _append_tool_messages(
+ self, tool_call: Any, tool_args: dict, result: Any
+ ) -> None:
+ """Append tool-related messages to the conversation history."""
+ self.messages.append(
+ {
+ "role": "assistant",
+ "content": None,
+ "tool_calls": [
+ {
+ "id": tool_call.id,
+ "type": "function",
+ "function": {
+ "name": tool_call.function.name,
+ "arguments": str(tool_args),
+ },
+ }
+ ],
+ }
+ )
+ self.messages.append(
+ {
+ "role": "tool",
+ "tool_call_id": tool_call.id,
+ "content": getattr(result.content[0], "text", ""),
+ }
+ )
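+
+ # The appended messages follow OpenAI's tool-calling turn order:
+ #   user -> assistant (tool_calls) -> tool (result) -> assistant (final answer)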
+
+
+def main():
+ """Main entry point of the application."""
+ try:
+ # Set up argument parser
+ parser = argparse.ArgumentParser(
+ description="Teradata Database Query Assistant"
+ )
+ parser.add_argument(
+ "--query",
+ "-q",
+ type=str,
+ default="Hello",
+ help='The query to process (default: "Hello")',
+ )
+
+ # Parse arguments
+ args = parser.parse_args()
+
+ # Run the chat with the provided query
+ chat = Chat()
+ asyncio.run(chat.run(args.query))
+ except Exception as e:
+ sys.exit(1)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/UseCases/Chat_with_Teradata_MCP_Server/uv_installer_jupyter.py b/UseCases/Chat_with_Teradata_MCP_Server/uv_installer_jupyter.py
new file mode 100644
index 00000000..eaee9626
--- /dev/null
+++ b/UseCases/Chat_with_Teradata_MCP_Server/uv_installer_jupyter.py
@@ -0,0 +1,251 @@
+import subprocess
+import os
+import shutil
+import sys
+from pathlib import Path
+
+def install_uv():
+ """Install uv package manager in Jupyter environment"""
+
+ print("🔧 Installing uv package manager...")
+
+ # Method 1: Install via pip (recommended for Jupyter)
+ try:
+ print("Installing uv via pip...")
+ result = subprocess.run([sys.executable, "-m", "pip", "install", "uv"],
+ capture_output=True, text=True)
+
+ if result.returncode == 0:
+ print("✅ uv installed successfully via pip!")
+ return True
+ else:
+ print("❌ pip installation failed, trying curl method...")
+
+ except Exception as e:
+ print(f"pip method failed: {e}")
+
+ # Method 2: Install via curl (fallback)
+ try:
+ print("Installing uv via curl...")
+ # Download and install uv
+ curl_result = subprocess.run([
+ "curl", "-LsSf", "https://astral.sh/uv/install.sh"
+ ], capture_output=True, text=True)
+
+ if curl_result.returncode == 0:
+ # Run the installer
+ install_result = subprocess.run([
+ "sh", "-c", curl_result.stdout
+ ], capture_output=True, text=True)
+
+ if install_result.returncode == 0:
+ print("✅ uv installed successfully via curl!")
+ return True
+
+ except Exception as e:
+ print(f"curl method failed: {e}")
+
+ print("❌ Failed to install uv")
+ return False
+
+def check_uv_installation():
+ """Check if uv is properly installed"""
+ try:
+ result = subprocess.run(["uv", "--version"],
+ capture_output=True, text=True)
+ if result.returncode == 0:
+ print(f"✅ uv is installed: {result.stdout.strip()}")
+ return True
+ else:
+ print("❌ uv is not accessible")
+ return False
+ except FileNotFoundError:
+ print("❌ uv command not found")
+ return False
+
+def setup_teradata_environment():
+ """Complete setup for Teradata MCP Server"""
+
+ print("🚀 Setting up Teradata MCP Server Environment")
+ print("=" * 50)
+
+ # Step 1: Install uv
+ if not check_uv_installation():
+ if not install_uv():
+ print("❌ Cannot proceed without uv. Please install it manually.")
+ return False
+
+ # Step 2: Create directory structure
+ print("\n📁 Setting up directory structure...")
+ os.makedirs("MCP", exist_ok=True)
+ os.chdir("MCP")
+
+ # Remove existing directory if it exists
+ if os.path.exists("teradata-mcp-server"):
+ print("Removing existing teradata-mcp-server directory...")
+ subprocess.run(["rm", "-rf", "teradata-mcp-server"])
+
+ # Step 3: Clone repository
+ print("\n📦 Cloning Teradata MCP Server repository...")
+ clone_result = subprocess.run([
+ "git", "clone", "https://github.com/Teradata/teradata-mcp-server.git"
+ ], capture_output=True, text=True)
+
+ if clone_result.returncode != 0:
+ print(f"❌ Failed to clone repository: {clone_result.stderr}")
+ return False
+
+ os.chdir("teradata-mcp-server")
+ print("✅ Repository cloned successfully")
+
+ # Step 4: Setup virtual environment
+ print("\n🐍 Setting up Python virtual environment, it will take approax 3 minutes, please wait⌛.")
+ sync_result = subprocess.run(["uv", "sync"], capture_output=True, text=True)
+
+ if sync_result.returncode != 0:
+ print(f"❌ Failed to sync environment: {sync_result.stderr}")
+ return False
+
+ sync_result = subprocess.run(["uv", "pip" , "install", "panel==1.3.4"], capture_output=True, text=True)
+ if sync_result.returncode != 0:
+ print(f"❌ Failed to sync environment: {sync_result.stderr}")
+ return False
+
+ print("✅ Virtual environment created")
+
+ # Step 5: Setup environment file
+ print("\n⚙️ Setting up environment configuration...")
+ env_files = ["env"]
+ env_created = False
+
+ for env_file in env_files:
+ if os.path.exists(env_file):
+ if env_file != ".env":
+ subprocess.run(["cp", env_file, ".env"])
+ env_created = True
+ break
+
+ if not env_created:
+ with open(".env", "w") as f:
+ f.write("DATABASE_URI=\n")
+
+ print("✅ Environment file created")
+
+ # Step 6: Get database password
+ print("\n🔐 Database configuration...")
+ import getpass
+ password = getpass.getpass("Enter database password for demo_user: ")
+ # OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key:")
+
+ if not password:
+ print("❌ Password cannot be empty")
+ return False
+
+ # Update .env file
+ connection_string = f"teradatasql://demo_user:{password}@host.docker.internal"
+
+ # Read existing .env content
+ env_content = ""
+ if os.path.exists(".env"):
+ with open(".env", "r") as f:
+ env_content = f.read()
+
+ # Update or add DATABASE_URI
+ if "DATABASE_URI=" in env_content:
+ # Replace existing
+ lines = env_content.split('\n')
+ for i, line in enumerate(lines):
+ if line.startswith("DATABASE_URI="):
+ lines[i] = f"DATABASE_URI={connection_string}"
+ break
+ # if line.startswith("OPENAI_API_KEY="):
+ # lines[i] = f"OPENAI_API_KEY={OPENAI_API_KEY}"
+ # break
+ env_content = '\n'.join(lines)
+ else:
+ # Add new
+ env_content += f"\nDATABASE_URI={connection_string}\n"
+
+ with open(".env", "w") as f:
+ f.write(env_content)
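+
+ # The .env now contains a line like (password elided):
+ #   DATABASE_URI=teradatasql://demo_user:<password>@host.docker.internal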
+
+ print("✅ Database connection configured")
+ # print("✅ OpenAI key configured")
+
+ # Step 7: Install Node.js
+ print("\n📦 Installing Node.js...")
+ node_install_result = subprocess.run([
+ "curl", "-o-", "https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh"
+ ], capture_output=True, text=True)
+
+ if node_install_result.returncode == 0:
+ subprocess.run(["bash", "-c", node_install_result.stdout])
+ print("✅ Node.js setup initiated")
+
+ # Step 8: Add MCP CLI
+ print("\n🔧 Adding MCP CLI dependency...")
+ mcp_result = subprocess.run(["uv", "add", "mcp[cli]"],
+ capture_output=True, text=True)
+
+ if mcp_result.returncode == 0:
+ print("✅ MCP CLI added successfully")
+ else:
+ print(f"⚠️ Warning: {mcp_result.stderr}")
+
+ # Change directory back to initial path (2 directories up)
+ print("\n📁 Check the current directory...")
+ try:
+ # os.chdir("../..") # Go up 2 directories
+ current_dir = os.getcwd()
+ print(f"✅ Current directory: {current_dir}")
+ except Exception as e:
+ print(f"⚠️ Warning: Could not change to initial directory: {e}")
+
+
+ print("\n🏗️ Copy MCP client to current directory...")
+ shutil.copy("../../td_mcp_client.py", ".")
+ print("✅ File moved successfully!")
+
+ print("\n✅ Setup completed successfully!")
+ print("\nTo start the server, run:")
+ print("uv run mcp dev ./src/teradata_mcp_server/server.py")
+ return True
+
+def start_teradata_server():
+ """Start the Teradata MCP Server"""
+ server_path = "./src/teradata_mcp_server/server.py"
+
+ if not os.path.exists(server_path):
+ print(f"❌ Server file not found: {server_path}")
+ return False
+
+ print("🚀 Starting Teradata MCP Cliennt...")
+ print("Press Ctrl+C to stop the server")
+
+ try:
+ process = subprocess.Popen(
+ ["uv", "run", server_path],
+ stdout=subprocess.PIPE,
+ stderr=subprocess.STDOUT,
+ universal_newlines=True
+ )
+
+ for line in process.stdout:
+ print(line.rstrip())
+
+ return_code = process.wait()
+ print(f"Server stopped with return code: {return_code}")
+
+ except KeyboardInterrupt:
+ print("\n🛑 Server stopped by user")
+ process.terminate()
+
+# Usage
+print("Teradata MCP Server Setup Tools Loaded!")
+print("\nAvailable functions:")
+print("- install_uv(): Install uv package manager")
+print("- check_uv_installation(): Check if uv is installed")
+print("- setup_teradata_environment(): Complete setup process")
+print("- start_teradata_server(): Start the MCP server")
+print("\nTo run complete setup:")
+print("setup_teradata_environment()")
\ No newline at end of file