This project provides a robust and efficient system for accessing historical and real-time trading data from the DeepBook V3 decentralized orderbook on Sui. Built with a focus on performance, reliability, and observability, it delivers OHLCV data, live orderbooks, and trade-level summaries via a simple HTTP API.
- 📊 **OHLCV Aggregation**: Aggregated candlestick data (Open, High, Low, Close, Volume) at 1-minute resolution using TimescaleDB.
- 🧾 **Trade-Level Summaries**: Exposes normalized historical trades (excluding tick-by-tick data) per pool, with optional time filters.
- 📚 **Historical Snapshots**: Live and historical orderbook snapshots per trading pair, with timestamped updates.
- 🧩 **API Extensibility**: Fully compatible with all endpoints from deepbookv3, with custom endpoints added.
- ⚙️ **Monitoring and Metrics**: Includes a full observability stack:
  - Prometheus for metrics
  - Grafana dashboards for visual monitoring
- 🧩 **Dashboard**: A dashboard showing key data is live at deeplook.carmine.finance.
- **Backend Framework**: Built on top of a deepbookv3 fork.
- **Database**: PostgreSQL + TimescaleDB for high-performance time-series aggregation.
- **Deployment**: Dockerized services behind an NGINX reverse proxy with HTTPS enabled.
- **Monitoring Stack**: Prometheus + Grafana with custom metrics for ingestion and uptime tracking.
All endpoints return JSON and are publicly accessible via HTTPS.
Returns metadata for all available pools.
Example
Returns OHLCV candlestick data for the specified time range and timeframe. The timeframe defaults to 1 minute.
Example
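The 1-minute aggregation is done in TimescaleDB, but the bucketing logic itself is simple. The sketch below illustrates it in pure Python; the tuple layout `(timestamp_ms, price, volume)` and field names are assumptions for illustration, not the API's response schema.

```python
from collections import OrderedDict

def aggregate_ohlcv(trades, bucket_ms=60_000):
    """Aggregate (timestamp_ms, price, volume) trades into OHLCV candles.

    Pure-Python illustration of 1-minute candle bucketing; the service
    performs the equivalent aggregation inside TimescaleDB.
    """
    candles = OrderedDict()
    for ts, price, volume in sorted(trades):
        bucket = ts - ts % bucket_ms  # floor timestamp to the minute
        c = candles.get(bucket)
        if c is None:
            candles[bucket] = {"open": price, "high": price, "low": price,
                               "close": price, "volume": volume}
        else:
            c["high"] = max(c["high"], price)
            c["low"] = min(c["low"], price)
            c["close"] = price  # last trade in the bucket
            c["volume"] += volume
    return [{"timestamp": b, **c} for b, c in candles.items()]
```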
Returns the current orderbook snapshot and the timestamp of the last update.
Example
Returns all trade-level order fills within the specified time window.
Example
Returns the whole orderbook snapshot via WebSocket; it updates every time a relevant event happens.
- Example: wss://api.sui.carmine.finance/ws_orderbook/SUI_USDC
Returns the current best levels via WebSocket on every orderbook update (even if the update does not affect the best levels).
- Example: wss://api.sui.carmine.finance/ws_orderbook_bests/SUI_USDC
Returns the current spread via WebSocket on every orderbook update (even if the update does not affect the best levels).
- Example: wss://api.sui.carmine.finance/ws_orderbook_spread/SUI_USDC
Returns the latest 100 trades every time a new trade is observed.
- Example: wss://api.sui.carmine.finance/latest_trades/SUI_USDC
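The best-levels and spread feeds are derived from the full book. As an illustration of how a client could compute the same values from a `ws_orderbook` snapshot, the sketch below parses a message and extracts the best bid, best ask, and spread. The JSON shape (`{"bids": [[price, qty], ...], "asks": [...]}`) is an assumed schema for illustration only; check the live feed for the real message format.

```python
import json

def best_levels_and_spread(message: str):
    """Extract best bid, best ask, and spread from an orderbook snapshot.

    Assumes a hypothetical schema {"bids": [[price, qty], ...],
    "asks": [[price, qty], ...]} with string-encoded numbers.
    """
    book = json.loads(message)
    best_bid = max(float(price) for price, _ in book["bids"])
    best_ask = min(float(price) for price, _ in book["asks"])
    return best_bid, best_ask, best_ask - best_bid
```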
Endpoints that aggregate data to provide further insights.
Returns the average base and quote volume per trade.
Example
Returns the average volume per trade in the following windows: [5min, 15min, 1h, 24h].
Example
Returns the total volume over the last n days.
Example
Returns the total volume in the following windows: [5min, 15min, 1h, 24h].
Example
Returns the average time in milliseconds between consecutive trades.
Example
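The metric behind this endpoint can be sketched as the mean of the gaps between consecutive trade timestamps. This is an illustration of the computation, not the service's implementation; the endpoint's exact window handling is not shown.

```python
def average_trade_interval_ms(timestamps_ms):
    """Mean gap in milliseconds between consecutive trades.

    Returns None when fewer than two trades are available.
    """
    ts = sorted(timestamps_ms)
    if len(ts) < 2:
        return None
    gaps = [later - earlier for earlier, later in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps)
```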
Returns the Volume-Weighted Average Price (VWAP) over the selected time window.
Example
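VWAP is a standard calculation: the volume-weighted mean of trade prices over the window. A minimal sketch of the formula (the trade tuple layout is an assumption for illustration):

```python
def vwap(trades):
    """Volume-Weighted Average Price over (price, volume) trade tuples.

    VWAP = sum(price * volume) / sum(volume); returns None for an
    empty window to avoid division by zero.
    """
    total_volume = sum(volume for _, volume in trades)
    if total_volume == 0:
        return None
    return sum(price * volume for price, volume in trades) / total_volume
```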
Returns the normalized order book imbalance (0–100 scale) at a given depth and level.
Example
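One common definition of normalized imbalance is the bid-side volume as a percentage of total volume within the top `depth` price levels on each side. The sketch below uses that definition for illustration; the service's exact formula may differ.

```python
def orderbook_imbalance(bids, asks, depth):
    """Normalized order book imbalance on a 0-100 scale.

    Illustrative definition: bid volume / (bid volume + ask volume) * 100,
    restricted to the top `depth` (price, qty) levels on each side.
    Assumes bids and asks are already sorted best-first.
    """
    bid_vol = sum(qty for _, qty in bids[:depth])
    ask_vol = sum(qty for _, qty in asks[:depth])
    total = bid_vol + ask_vol
    if total == 0:
        return 50.0  # empty book: treat as balanced
    return 100.0 * bid_vol / total
```

A value above 50 indicates more resting volume on the bid side than the ask side at that depth.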
Returns a trading summary of all pools over the last 24 hours.
Example
- Prometheus metrics are gathered for both the API and the indexer.
- There are Grafana dashboards for the API and for the indexer.
Dashboards are available here.
- Aggregated OHLCV and trade data is stored in PostgreSQL with the TimescaleDB extension for fast querying.
The project has a Makefile that streamlines local development.
Create a .env file for local development in the root directory with the following values:
```sh
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=deeplook
POSTGRES_HOST=localhost
DATABASE_URL=postgres://postgres:postgres@localhost/deeplook
REMOTE_STORE_URL=https://checkpoints.mainnet.sui.io
RPC_URL=https://fullnode.mainnet.sui.io:443
ENV=mainnet
REDIS_URL=redis://localhost:6379
```

Do not use these values in production.
Create a PostgreSQL database locally using a Docker container:

```sh
make postgres
```

It is currently required to install TimescaleDB manually. Access the container:

```sh
docker exec -it deeplook-db /bin/bash
```

and follow the Install TimescaleDB on Linux tutorial.

Then create the database:

```sh
make createdb
```

Run database migrations:

```sh
make migrateup
```

Create Redis locally using a Docker container. Redis is used to hold the live orderbook:

```sh
make redis
```

Run the indexer (it currently runs from checkpoint 150000000 with skip-watermark; feel free to adjust this in the Makefile):

```sh
make indexer
```

Run the API:

```sh
make api
```

Run the orderbook service, which keeps track of orderbook events and updates the live orderbook in Redis:

```sh
make orderbook
```

For production, it is advised to build the Docker images from the docker folder and use those. You need a PostgreSQL database with the TimescaleDB extension; provide the ENV variables specified above with correct values and run the containers.