# Down Pattern

A high-performance crossword clue and answer lookup service built with Cloudflare Workers and Hono.

Down Pattern (AKA "across-word") provides fast crossword answer pattern matching and clue lookup. It uses a chunked indexing strategy to efficiently serve crossword data from Cloudflare R2 storage.

The API is live at https://downpattern.livediagonal.com if you would like to cheat on crosswords or whatever.
## Features

- **Pattern Matching**: Find crossword answers matching a pattern (e.g., `A?PLE` finds `APPLE`)
- **Clue Lookup**: Get clues for specific crossword answers
- **High Performance**: Chunked indexes for fast lookups without loading entire datasets
- **Scalable**: Built on Cloudflare Workers with R2 storage
## API

### GET /answers

Find answers matching a crossword pattern.

- **Pattern format**: Use `?` for unknown letters (e.g., `A?PLE`, `??T`, `HELLO????`)
- **Returns**: Array of matching answers with occurrence counts

Example:

```bash
curl "https://your-worker.your-subdomain.workers.dev/answers?pattern=A?PLE"
```
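The `?` syntax maps naturally onto a regular expression. A minimal sketch of how candidate answers might be filtered against a pattern (function names are illustrative, not the service's actual code):

```typescript
// Convert a crossword pattern ("?" = any unknown letter) into an anchored RegExp.
// "?" is a regex metacharacter, so it is translated into a character class
// rather than escaped.
function patternToRegex(pattern: string): RegExp {
  const body = pattern
    .toUpperCase()
    .split("")
    .map((ch) => (ch === "?" ? "[A-Z]" : ch))
    .join("");
  return new RegExp(`^${body}$`);
}

// Filter a list of candidate answers against the pattern.
function matchPattern(pattern: string, answers: string[]): string[] {
  const re = patternToRegex(pattern);
  return answers.filter((a) => re.test(a));
}
```

Anchoring with `^...$` ensures the pattern must match the whole answer, so `A?PLE` matches `APPLE` and `AMPLE` but not `APPLES`.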
### GET /clues

Get clues for a specific crossword answer.

- **Answer**: The crossword answer to find clues for
- **Returns**: Array of clues (up to 10 random clues)

Example:

```bash
curl "https://your-worker.your-subdomain.workers.dev/clues?answer=APPLE"
```
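Returning "up to 10 random clues" is a uniform-sampling problem. One way to sketch it is a partial Fisher-Yates shuffle with swap-removal (this is an illustrative implementation, not necessarily the service's own):

```typescript
// Return up to `n` clues sampled uniformly at random without repeats.
// Copies the input so the caller's array is never mutated.
function sampleClues(clues: string[], n = 10): string[] {
  const pool = [...clues];
  const picked: string[] = [];
  while (picked.length < n && pool.length > 0) {
    const i = Math.floor(Math.random() * pool.length);
    picked.push(pool[i]);
    pool[i] = pool[pool.length - 1]; // swap-remove: O(1) deletion
    pool.pop();
  }
  return picked;
}
```

If an answer has fewer than 10 clues, all of them are returned.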
## How It Works

The service uses a chunked indexing strategy:

1. **Source Data**: Large TSV file with crossword answer-clue pairs
2. **Chunked Processing**: Data is split into manageable chunks based on answer patterns
3. **R2 Storage**: Chunks are uploaded to Cloudflare R2 for fast retrieval
4. **Dynamic Loading**: Only relevant chunks are loaded for each query
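To make "only relevant chunks" concrete, here is one *hypothetical* chunk-key scheme: group answers by length and first letter, so a query for `A?PLE` only ever touches the `5-A` chunk. The actual partitioning used by the build scripts may differ.

```typescript
// Hypothetical chunk-key derivation (illustrative only): answers are grouped
// by length and first letter, e.g. "APPLE" -> "5-A".
// A pattern with a leading wildcard can't be narrowed to a single letter,
// so we return null to signal "fan out across all letters of this length".
function chunkKey(answerOrPattern: string): string | null {
  const first = answerOrPattern[0]?.toUpperCase();
  if (!first || first === "?") return null;
  return `${answerOrPattern.length}-${first}`;
}
```

A scheme like this keeps each query's working set to a small fraction of the full dataset, which is what makes on-demand loading from R2 cheap.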
## Prerequisites

- Node.js 18+
- npm or yarn
- Cloudflare account with R2 enabled
## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd down-pattern
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Set up your crossword data:
   - Place your compressed TSV file at `resources/clues.tsv.xv`
   - The file should contain tab-separated answer-clue pairs

4. Build chunked indexes:

   ```bash
   npm run build-chunks
   ```

5. Upload chunks to R2:

   ```bash
   npm run upload-chunks
   ```

6. Deploy to Cloudflare Workers:

   ```bash
   npm run deploy
   ```
## Scripts

- `npm run dev`: Start local development server
- `npm run build`: Build the Worker
- `npm run build-chunks`: Build chunked indexes from source data
- `npm run upload-chunks`: Build and upload chunks to R2
- `npm run deploy`: Deploy to Cloudflare Workers
## Configuration

Configure your Cloudflare R2 bucket in `wrangler.toml`:

```toml
[[r2_buckets]]
binding = "DATA_BUCKET"
bucket_name = "your-bucket-name"
```
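With that binding in place, the Worker reads chunk objects through `env.DATA_BUCKET`. The sketch below abstracts the R2 call behind a tiny interface so the lookup logic can be shown (and tested) without a Worker runtime; the `chunks/<name>.json` object layout is an assumption, not the project's real naming scheme:

```typescript
// Minimal interface mirroring the one R2 operation used here. In the real
// Worker this would be backed by env.DATA_BUCKET, whose get() resolves to an
// object with a text()/json() body, or null when the key is missing.
interface ChunkStore {
  get(key: string): Promise<string | null>;
}

// Load and parse one chunk of answers; a missing chunk is treated as empty.
// The "chunks/<name>.json" key layout is hypothetical.
async function loadChunk(store: ChunkStore, name: string): Promise<string[]> {
  const body = await store.get(`chunks/${name}.json`);
  return body ? (JSON.parse(body) as string[]) : [];
}
```

Treating a missing chunk as an empty result (rather than an error) keeps queries for rare patterns from failing outright.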
## Project Structure

```
├── src/
│   ├── index.ts                  # Main Worker application
│   ├── chunked-data-loader.ts    # Chunked data loading logic
│   └── types.ts                  # TypeScript type definitions
├── scripts/
│   ├── build-chunked-index.js    # Build chunked indexes
│   └── upload-chunks.js          # Upload chunks to R2
├── resources/
│   └── clues.tsv.xv              # Compressed source data
└── chunked-indexes/              # Generated chunk files (gitignored)
```
## Performance

- **Cold start**: ~100-200ms (loads only needed chunks)
- **Warm requests**: ~10-50ms
- **Memory usage**: Low (chunks loaded on-demand)
- **Storage**: Efficient chunked storage in R2
## License

MIT