XyraChain is a multi-service health-tech project that combines:
- a React + Vite frontend
- a TypeScript + Express backend
- a Python AI module for chest X-ray prediction and Grad-CAM generation
- a FastAPI chatbot service for medical triage guidance
- a Hardhat/XDC smart contract for storing report metadata on-chain
The main product flow is:
- A user uploads a chest X-ray in the frontend.
- The backend stores the file and calls the local Python AI module.
- The AI module returns a diagnosis and a Grad-CAM heatmap.
- The backend pins the generated report JSON to IPFS through Pinata.
- The frontend uses the connected wallet to mint the report CID to the XDC contract.
Repository layout:

```
XyraChain/
|- frontend/     React + Vite + TypeScript app
|- backend/      Express + TypeScript API
|- ai-module/    Python model inference + Grad-CAM scripts
|- chatbot/      FastAPI + LangChain triage assistant
|- blockchain/   Hardhat project for XDC smart contract
```
Location: frontend/
Main responsibilities:
- upload X-ray images
- display diagnosis and Grad-CAM heatmap
- generate a PDF report in-browser
- call the backend for report generation and IPFS pinning
- connect wallet and mint the IPFS CID on-chain
- provide AI chat and triage UI
Important files:
- `frontend/src/App.tsx`
- `frontend/src/pages/AnalysisCenter.tsx`
- `frontend/src/pages/TriageChat.tsx`
- `frontend/src/components/ChatWidget.tsx`
- `frontend/src/context/WalletContext.tsx`
- `frontend/src/config.ts`
Location: backend/
Main responsibilities:
- accept image uploads
- call the local Python AI scripts
- serve generated upload and heatmap files
- pin report JSON to IPFS with Pinata
- proxy chat requests to the FastAPI chatbot service
Important files:
- `backend/src/index.ts`
- `backend/src/routes/analysisRoutes.ts`
- `backend/src/routes/chatRoutes.ts`
- `backend/src/controllers/analysisController.ts`
- `backend/src/controllers/chatController.ts`
- `backend/src/services/pythonService.ts`
- `backend/src/services/pinataService.ts`
Location: ai-module/
Main responsibilities:
- load the TensorFlow model
- preprocess X-ray images
- run binary pneumonia prediction
- generate Grad-CAM visualizations
Important files:
- `ai-module/prediction.py`
- `ai-module/gradcam.py`
- `ai-module/preprocess.py`
- `ai-module/model_loader.py`
Location: chatbot/
Main responsibilities:
- provide triage-focused chat responses
- use a retrieval-augmented generation flow based on Chroma + LangChain + Groq
Important files:
- `chatbot/app/main.py`
- `chatbot/app/engine/rag_engine.py`
- `chatbot/app/core/prompt.py`
- `chatbot/scripts/ingest.py`
Location: blockchain/
Main responsibilities:
- deploy the `XyraChain` contract
- store report metadata per connected wallet
- expose read/write methods for user reports
Important files:
- `blockchain/contracts/XyraChain.sol`
- `blockchain/hardhat.config.ts`
- `blockchain/scripts/deploy.ts`
- `blockchain/scripts/verify.ts`
Tech stack:

- Frontend: React, Vite, TypeScript, Tailwind CSS, ethers, axios, jsPDF
- Backend: Node.js, Express, TypeScript, multer, axios, dotenv
- AI: Python, TensorFlow, OpenCV, NumPy
- Chatbot: FastAPI, LangChain, Chroma, Groq, HuggingFace embeddings
- Blockchain: Hardhat, Solidity, ethers, XDC Apothem testnet
Install these before running the project:
- Node.js 18+
- npm
- Python 3.10+
- a wallet that supports custom EVM networks
- access to Pinata credentials
- access to a Groq API key
Copy `frontend/.env.example` to `frontend/.env` and set:

```
VITE_API_BASE_URL=http://localhost:5000
VITE_CONTRACT_ADDRESS=
VITE_CHAIN_ID=51
VITE_CHAIN_ID_HEX=0x33
VITE_CHAIN_NAME=XDC Apothem Testnet
VITE_RPC_URL=https://rpc.apothem.network
VITE_BLOCK_EXPLORER_URL=https://apothem.xdcscan.io
VITE_NATIVE_CURRENCY_NAME=XDC
VITE_NATIVE_CURRENCY_SYMBOL=XDC
VITE_NATIVE_CURRENCY_DECIMALS=18
```

Copy `backend/.env.example` to `backend/.env` and set:
```
PORT=5000
CORS_ORIGIN=http://localhost:3000
CHATBOT_SERVICE_URL=http://127.0.0.1:8000/chat
PYTHON_BIN=python
AI_MODULE_PATH=../ai-module
PINATA_API_KEY=your_pinata_api_key
PINATA_SECRET_API_KEY=your_pinata_secret_key
```

Notes:

- `CHATBOT_SERVICE_URL` should point to the deployed chatbot service endpoint.
- `PYTHON_BIN` can be changed if Python is not available as `python` on your server.
- `AI_MODULE_PATH` must point to the real `ai-module` location from the backend runtime.
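As a rough sketch, the backend might combine `PYTHON_BIN` and `AI_MODULE_PATH` into a subprocess call like the following. The script name and argument order here are assumptions; `backend/src/services/pythonService.ts` defines the real invocation.

```typescript
// Hypothetical helper showing how the two env vars could feed a subprocess call.
import { spawn, type ChildProcess } from "node:child_process";

type Env = Record<string, string | undefined>;

function buildAnalysisCommand(env: Env, imagePath: string) {
  const pythonBin = env.PYTHON_BIN ?? "python";
  const aiModule = env.AI_MODULE_PATH ?? "../ai-module";
  // prediction.py is listed under ai-module/ above; its CLI contract is assumed.
  return { cmd: pythonBin, args: [`${aiModule}/prediction.py`, imagePath] };
}

function runAnalysis(imagePath: string): ChildProcess {
  const { cmd, args } = buildAnalysisCommand(process.env, imagePath);
  return spawn(cmd, args, { stdio: ["ignore", "pipe", "pipe"] });
}
```

The point of the env-based indirection is that the same backend build works whether Python is `python`, `python3`, or a venv binary, and wherever `ai-module` lives on the host.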
Copy `chatbot/.env.example` to `chatbot/.env` and set at minimum:

```
GROQ_API_KEY=
GROQ_MODEL_NAME=llama-3.3-70b-versatile
VECTOR_DB_PATH=./data/processed
RAW_DATA_PATH=./data/raw
BLOCKCHAIN_RPC_URL=http://127.0.0.1:8545
```

Copy `blockchain/.env.example` to `blockchain/.env` and set:
```
APOTHEM_RPC_URL=https://erpc.apothem.network
PRIVATE_KEY=
CONTRACT_ADDRESS=
```

Install dependencies for each Node service:
```bash
cd frontend && npm install
cd backend && npm install
cd blockchain && npm install
```

Install Python dependencies:

```bash
pip install -r ai-module/requirements.txt
pip install -r chatbot/requirements.txt
```

Start the chatbot service:

```bash
python chatbot/app/main.py
```

Default port: 8000
Start the backend:

```bash
cd backend
npm run dev
```

Default port: 5000

Start the frontend:

```bash
cd frontend
npm run dev
```

Default port: 3000
Frontend:

```bash
cd frontend
npm run build
```

Backend:

```bash
cd backend
npm run build
```

Blockchain:

```bash
cd blockchain
npx hardhat compile
```

Compile and deploy:
```bash
cd blockchain
npx hardhat run scripts/deploy.ts --network apothem
```

After deployment:

- copy the deployed contract address
- set `VITE_CONTRACT_ADDRESS` in `frontend/.env`
- rebuild the frontend
Optional verification script:

```bash
cd blockchain
npx hardhat run scripts/verify.ts --network apothem
```

Backend endpoints:

- `GET /` - health check
- `POST /api/analysis/upload` - upload an image and run AI analysis
- `POST /api/analysis/generate-report` - pin a report JSON to IPFS
- `POST /api/chat/message` - proxy a chat request to the chatbot service
- `GET /uploads/:file` - access uploaded/generated images
Chatbot service endpoint:

- `POST /chat` - submit triage message payloads
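The backend's proxy role can be sketched as: validate the incoming body, then forward it to the FastAPI service. The `{ message }` field name is an assumption here; `backend/src/controllers/chatController.ts` defines the real payload shape.

```typescript
// Hypothetical chat proxy helpers; field names are assumptions.
const CHATBOT_URL = process.env.CHATBOT_SERVICE_URL ?? "http://127.0.0.1:8000/chat";

// Reject empty or malformed bodies before they reach the chatbot service.
function toChatPayload(body: unknown): { message: string } {
  const message =
    typeof body === "object" &&
    body !== null &&
    typeof (body as { message?: unknown }).message === "string"
      ? (body as { message: string }).message
      : "";
  if (message.trim() === "") throw new Error("message is required");
  return { message };
}

async function forwardChat(body: unknown): Promise<unknown> {
  const res = await fetch(CHATBOT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(toChatPayload(body)),
  });
  if (!res.ok) throw new Error(`chatbot service error: ${res.status}`);
  return res.json();
}
```

Routing chat through the backend keeps the Groq key and the chatbot URL off the client entirely.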
The backend currently accepts only:
- PNG
- JPEG / JPG
- WebP
Maximum file size:
- 5 MB
Important note:
- DICOM is not supported in the current implementation.
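The constraints above can be expressed as a small validator. This is an illustrative helper mirroring the documented rules, not the backend's actual multer configuration.

```typescript
// Upload constraints from this section: PNG, JPEG/JPG, WebP, max 5 MB.
const ALLOWED_MIME_TYPES = new Set(["image/png", "image/jpeg", "image/webp"]);
const MAX_FILE_SIZE_BYTES = 5 * 1024 * 1024;

function isAllowedUpload(mimetype: string, sizeBytes: number): boolean {
  return ALLOWED_MIME_TYPES.has(mimetype) && sizeBytes <= MAX_FILE_SIZE_BYTES;
}
```

Note that a DICOM file (`application/dicom`) fails the MIME check, consistent with the limitation above.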
Deployment notes for the frontend:

- build with `npm run build`
- configure SPA route rewrites if hosting on a static platform, because the app uses `BrowserRouter`
- set `VITE_API_BASE_URL` to the deployed backend URL
- set `VITE_CONTRACT_ADDRESS` to the deployed contract address
Backend:

- deploy together with access to the Python runtime and `ai-module`
- ensure the server can reach the chatbot service URL
- set the correct `CORS_ORIGIN`
- configure secure production secrets for Pinata
Chatbot:

- ensure the vector database exists at `VECTOR_DB_PATH`
- ensure `GROQ_API_KEY` is valid
- expose the service only where needed, or keep it internal behind the backend proxy
Blockchain:

- deploy to XDC Apothem or update the frontend network configuration for another chain
- keep the ABI and deployed address in sync with the frontend
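The frontend's network values map onto the standard `wallet_addEthereumChain` (EIP-3085) parameter object. A sketch using the Apothem values from `frontend/.env`; `WalletContext.tsx` is the source of truth for the actual request.

```typescript
// EIP-3085 chain parameters assembled from the frontend env values shown earlier.
const apothemChainParams = {
  chainId: "0x33", // 51 in hex (VITE_CHAIN_ID_HEX)
  chainName: "XDC Apothem Testnet",
  rpcUrls: ["https://rpc.apothem.network"],
  blockExplorerUrls: ["https://apothem.xdcscan.io"],
  nativeCurrency: { name: "XDC", symbol: "XDC", decimals: 18 },
};

// A wallet provider would receive this roughly as:
// await provider.request({ method: "wallet_addEthereumChain", params: [apothemChainParams] });
```

Targeting a different chain means changing every field of this object in the frontend env, not just `chainId`.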
Known limitations:

- `frontend/src/pages/PatientVault.tsx` still uses mock data
- `frontend/src/pages/Profile.tsx` still uses mock data
- minted reports currently send `chatLogs: []` from the frontend analysis flow
- the backend depends on local Python subprocess execution, which may need container/process tuning in production
- the frontend production bundle is large and Vite warns about chunk size during build
If AI analysis fails, check:

- `PYTHON_BIN` points to a working Python installation
- `AI_MODULE_PATH` points to the correct `ai-module` directory
- TensorFlow and OpenCV dependencies are installed
- the model file exists inside `ai-module/model/`
If chat responses fail, check:

- the chatbot service is running
- `CHATBOT_SERVICE_URL` is correct
- Groq credentials are valid
If on-chain minting fails, check:

- `VITE_CONTRACT_ADDRESS` is set correctly
- the wallet is connected to the expected chain
- the deployed contract matches the frontend ABI
If IPFS pinning fails, check:

- Pinata credentials are set in `backend/.env`
- the backend can reach Pinata from the deployment environment
The codebase was updated to improve deployment readiness by:
- replacing hardcoded service URLs with env-based config
- unifying chat requests through the backend proxy
- removing mock/fallback Pinata success behavior
- making Python runtime and AI module paths configurable
- enforcing backend upload validation
- fixing frontend confidence typing and wallet resync behavior
No license is currently defined in the repository.