Play Lc0 — Deep Technical Profile
Table of Contents
- Project Overview
- Pre-Implementation Research & Spec
- Architecture
- The Engine: MCTS + Neural Network Inference
- Neural Network Catalog
- Tournament System
- UI & Game Flow
- Technical Tradeoffs & Decisions
- Major Bugs & Debugging Stories
- AI Agent Involvement
- Development Timeline
- Key Files Reference
1. Project Overview
A fully client-side web application that lets you play chess against Leela Chess Zero (Lc0) neural networks running entirely in the browser. All inference happens locally via ONNX Runtime Web (WebGPU with WASM fallback) — no server-side computation.
By the numbers:
- ~16,000 lines of TypeScript/TSX across 51 files
- 36 commits across 11 PRs (9 merged, 1 closed, 1 open), built in 19 days (Feb 5–23, 2026)
- 53 neural network models from ~800 to ~2900 Elo
- MCTS search with configurable node budget and temperature
- Swiss and round-robin tournament mode with FIDE performance ratings
- 15,000+ opening positions (full ECO database)
- Custom ONNX model upload with verification
- Shareable game URLs via query parameters
- Models hosted on Cloudflare R2; app deployed to Cloudflare Pages
2. Pre-Implementation Research & Spec
Before any code was written, Hunter conducted extensive research into which Lc0 networks would work in a browser, how policy-only (0-node) play performs, and what the full architecture should look like. This research produced a 676-line implementation spec that was passed directly to Claude Code as the project blueprint.
2.1 Network Feasibility Analysis
Hunter researched the full Lc0 network ecosystem and categorized models by browser feasibility:
Tier 1 — Tiny CNNs (~50 KB to a few MB): dkappe distilled series (Tiny Gyal, etc.). Run on any device, even WASM-only without WebGPU. Useful for low-end users and mobile.
Tier 2 — Small/medium CNNs (64x6 to 128x10): SE residual networks. Strong but compact. Fast inference, low download size. Good for mobile and modest hardware.
Tier 3 — Standard CNNs (192x15 to 320x24): Mainline Lc0 sizes. Need WebGPU for reasonable performance. Downloads get large (50-200 MB).
Tier 4 — Transformers (the standout): T1-256x10-distilled at ~65 MB FP16 has a dramatically stronger policy head than any residual CNN. At 1 node, its policy head alone should be in the ~2600-2800 Elo range. This became the "best practical browser net."
Tier 5 — Big nets (BT4, 768+ filters): Desktop-native territory. 707 MB for BT4. Needs ~4 GB VRAM. Works in browser with WebGPU on high-end hardware.
2.2 Policy-Only (0-Node) Strength Research
Hunter researched how strong Lc0 is without any search — just the raw policy head picking the highest-probability move. This was critical because depth-0 play was designed as a first-class feature, not an afterthought.
Key data points gathered:
- Older convolutional nets (2021, net 67743): above 2200 Elo at 1 node — enough to trouble a human master
- Latest BT4 transformer: nearly 300 Elo stronger in raw policy than the strongest CNN (T78), with fewer parameters
- Wikipedia (Nov 2024): Lc0 models achieving "grandmaster-level strength at one position evaluation per move"
- Lc0 team claims "grandmaster" policy strength for BT3/BT4
Best estimate for BT4 at 1 node: roughly 2500-2700 Elo (strong IM to GM level). This validated the design decision to support depth-0 as a meaningful play mode — a user playing against T1-256x10-distilled at 0-node gets a genuine chess opponent, not a toy.
2.3 Prior Art: MaiaChess Browser Implementation
Hunter studied the MaiaChess web platform as a working precedent. Maia uses a dual-engine architecture running entirely client-side: Maia neural network models converted to ONNX and run via onnxruntime-web, with Stockfish running alongside via WebAssembly for comparison analysis. Platform built with Next.js, TypeScript, React Context.
This confirmed the technical path: ONNX conversion via lc0 leela2onnx → onnxruntime-web → chess.js for board logic. Hunter had already fine-tuned his own Maia model (hunter-chessbot project), so he knew the ONNX conversion pipeline worked.
2.4 The 676-Line Implementation Spec
Hunter's research culminated in a comprehensive 4-phase implementation spec covering every architectural decision:
Phase 1: Foundation & Depth-0 Only (the MVP)
- Project setup: React + TypeScript + Vite, react-chessboard + chess.js, onnxruntime-web with WebGPU (WASM fallback)
- Weight conversion pipeline: offline `lc0 leela2onnx` conversion, optional FP16 quantization, CDN hosting
- Board encoding: the 112-plane representation (104 history + 8 auxiliary), replicating `lc0/src/neural/encoder.cc` exactly
- Policy decoding: the 1858-element vector → UCI moves, replicating `lc0/src/chess/board.cc` indexing
- Milestone: "Leela outputs a legal move at depth 0"
Phase 2: MCTS Search (Depth > 0)
- MCTS with PUCT (AlphaZero variant), node budget controls (10/100/1000)
- Tree structure with visit counts, prior probabilities, value estimates
- Selection → expansion → backup loop
- Web Worker for non-blocking inference
Phase 3: Multiple Networks & Smart Loading
- IndexedDB caching (no re-download), network switching UI
- Engine scheduler managing concurrent sessions with memory pressure handling
- Download progress indicators
Phase 4: Polish & Features
- Eval bar, policy visualization, PGN export, mobile responsiveness
Risk mitigations were explicitly planned:
- Board encoding wrong → compare against lc0's actual encoder output for known positions
- Policy decoding wrong → same approach
- WebGPU not available → WASM fallback, smaller nets
- MCTS too slow in JS → start with low node counts, batch inference, SharedArrayBuffer
Curated default network selection: 11 networks spanning ~800-2900 Elo with distinct playing personalities — "Brawler" (Bad Gyal 8), "Wild Style" (Mean Girl 8), "Endgame Drill" (Ender), giving users meaningfully different opponents.
The spec was designed to be self-contained: it included network download links, architecture glossary (NxM filters × residual blocks, SE = Squeeze-Excite, SWA = Stochastic Weight Averaging, distilled = smaller net mimicking larger), and references to specific lc0 source files for encoder/decoder validation.
3. Architecture
┌─────────────────────────────────────────────────────┐
│ React 19 + Tailwind CSS │
│ ├── HomeScreen (network picker, game history) │
│ ├── GameScreen (board, controls, move history) │
│ └── TournamentPage (setup, live view, standings) │
└──────────────────┬──────────────────────────────────┘
│ postMessage (Web Worker)
┌──────────────────▼──────────────────────────────────┐
│ Web Worker │
│ ├── ONNX Runtime Web (WebGPU / WASM fallback) │
│ ├── MCTS Search (PUCT selection, backpropagation) │
│ ├── Board Encoding (FEN → [1,112,8,8] tensor) │
│ └── Policy Decoding (1858 logits → legal moves) │
└──────────────────┬──────────────────────────────────┘
│ fetch + IndexedDB cache
┌──────────────────▼──────────────────────────────────┐
│ Cloudflare R2 (model hosting, 25MB–707MB per model) │
└─────────────────────────────────────────────────────┘

Key architectural boundaries:
- Main thread: React UI, game state (chess.js), opening book lookup, persistence (localStorage + IndexedDB)
- Web Worker: All neural network inference and MCTS search. Communicates via a typed message protocol (`WorkerRequest`/`WorkerResponse`)
- Cloudflare R2: Model storage. Models are gzip-compressed `.onnx.bin` files, downloaded on demand, decompressed via `DecompressionStream`, and cached in IndexedDB
4. The Engine: MCTS + Neural Network Inference
4.1 Board Encoding (encoding.ts)
Converts chess positions to the Lc0 input format: a [1, 112, 8, 8] Float32 tensor (7,168 elements).
112 input planes:
- Planes 0–103: 13 planes × 8 history positions. Per position: 6 own piece types + 6 opponent piece types + 1 repetition flag
- Planes 104–107: Castling rights (our queenside, our kingside, opponent queenside, opponent kingside)
- Plane 108: Is black to move (1.0 if black, 0.0 if white)
- Plane 109: Rule50 count / 99.0
- Plane 110: Zeros (move count, disabled)
- Plane 111: All ones
Perspective flipping: The network always sees the position from the side-to-move's perspective. When it's black's turn, piece ownership is swapped, the board is vertically flipped (rank = 7 - rank), and castling rights are swapped. This is the standard Lc0 convention.
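The plane layout and vertical flip above can be made concrete with a small sketch. The helper names (`tensorIndex`, `flipRank`) are hypothetical, not the actual `encoding.ts` API:

```typescript
// Illustrative indexing for the [1, 112, 8, 8] input tensor (7,168 floats):
// each plane is an 8x8 board, flattened as plane*64 + rank*8 + file.
// flipRank is the side-to-move vertical flip (rank = 7 - rank) from the text.
const PLANES = 112;

function tensorIndex(plane: number, rank: number, file: number): number {
  return plane * 64 + rank * 8 + file;
}

function flipRank(rank: number): number {
  return 7 - rank; // applied when it is black to move
}

// e.g. a piece on e2 (rank 1, file 4), seen from black's perspective,
// lands on e7's square (rank 6, file 4)
const e2 = { rank: 1, file: 4 };
const flipped = { rank: flipRank(e2.rank), file: e2.file };
```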
4.2 Policy Decoding (decoding.ts, policyIndex.ts)
The neural network outputs 1858 policy logits — one per possible move in Lc0's compressed move encoding.
POLICY_INDEX: A pre-generated array of 1858 UCI move strings. The array index IS the policy output neuron index. This was initially generated programmatically by the AI, but the output was wrong — Hunter directed using the reference table from his hunter-chessbot repo instead.
Decoding flow:
- For each legal move, flip to white perspective if black (via `flipUci`)
- Look up in `POLICY_INDEX_MAP` (reverse map: UCI string → index)
- Apply softmax with temperature scaling: `exp((logit - max) / temp)`
- If temperature > 0: sample from distribution. If temperature = 0: pick argmax
4.3 MCTS Algorithm (mcts.ts)
Tree node structure:
- `move`, `parent`, `children: Map<string, MCTSNode>`
- `prior` (policy network probability), `visits` (N), `totalValue` (W)
- `wdlSum: [win, draw, loss]` accumulated over visits
- `expanded`, `terminal`, `terminalValue`
PUCT selection (cPUCT = 2.5):
`score = -Q(child) + cPUCT × prior × sqrt(parentVisits) / (1 + childVisits)`

Q is negated because a child's value is from the opponent's perspective.
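The formula reads directly as a function. A sketch, assuming Q defaults to 0 for unvisited children (whether to use 0 or a first-play-urgency value is a design choice the text doesn't specify); `cPUCT = 2.5` matches the text:

```typescript
// PUCT score for one child, as in the formula above.
interface ChildStats {
  visits: number;     // N(child)
  totalValue: number; // W(child), accumulated from the child's (opponent's) perspective
  prior: number;      // P(child) from the policy head
}

function puctScore(child: ChildStats, parentVisits: number, cPuct = 2.5): number {
  // Q is negated because a child's value is from the opponent's perspective
  const q = child.visits > 0 ? child.totalValue / child.visits : 0;
  const u = (cPuct * child.prior * Math.sqrt(parentVisits)) / (1 + child.visits);
  return -q + u;
}
```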
Search loop (mctsSearch):
- Create root, expand it (run inference)
- For each iteration (up to nodeLimit or timeLimitMs):
  - Select: Walk down the tree picking the highest-PUCT child until reaching an unexpanded non-terminal leaf. A fresh `Chess` instance is replayed along the path.
  - Expand: Run neural network inference on the leaf. Create child nodes with priors. Store evaluation (value = wdl[0] - wdl[2]).
  - Backpropagate: Walk back to root, negating value at each level. WDL is flipped (win↔loss) at each level.
- Progress callback every 10 iterations.
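The backpropagation step — negate the value and swap win↔loss at every ply, since each level flips whose perspective the evaluation is from — can be sketched like this (node shape illustrative, not the actual `mcts.ts` structure):

```typescript
// Walk from the expanded leaf back to the root, accumulating visit counts,
// value, and WDL, flipping perspective at each parent.
interface BackpropNode {
  parent: BackpropNode | null;
  visits: number;
  totalValue: number;
  wdlSum: [number, number, number]; // [win, draw, loss]
}

function backpropagate(leaf: BackpropNode, value: number, wdl: [number, number, number]): void {
  let node: BackpropNode | null = leaf;
  let v = value;
  let [w, d, l] = wdl;
  while (node) {
    node.visits += 1;
    node.totalValue += v;
    node.wdlSum[0] += w;
    node.wdlSum[1] += d;
    node.wdlSum[2] += l;
    v = -v;          // negate value for the parent's perspective
    [w, l] = [l, w]; // win <-> loss flip
    node = node.parent;
  }
}
```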
Move selection: After search, if temperature = 0: pick most-visited move (argmax). If temperature > 0: sample proportional to visits^(1/temperature).
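That selection rule — `visits^(1/temperature)` weighting, argmax at temperature 0 — can be sketched as follows (illustrative; the `rand` parameter is added for testability):

```typescript
// Pick the final move from root-child visit counts after search.
function selectByVisits(moves: string[], visits: number[], temp: number, rand = Math.random): string {
  if (temp === 0) return moves[visits.indexOf(Math.max(...visits))]; // most-visited
  const weights = visits.map((n) => Math.pow(n, 1 / temp)); // visits^(1/T)
  const total = weights.reduce((a, b) => a + b, 0);
  let r = rand() * total;
  for (let i = 0; i < weights.length; i++) {
    r -= weights[i];
    if (r <= 0) return moves[i];
  }
  return moves[moves.length - 1];
}
```

Low temperatures sharpen the distribution toward the most-visited move; temperatures above 1 flatten it toward uniform.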
Performance: ~80-100 nodes/sec on small nets, ~8-10 on large nets (single-node, unbatched).
Roadmap (Phase 2, not yet implemented): Batched MCTS with virtual loss for branch diversity, batch collection into [B, 112, 8, 8] tensor, expected 5-8× throughput.
4.4 Inference (inference.ts)
Execution provider selection: Checks navigator.gpu — if present, tries ["webgpu", "wasm"]; otherwise ["wasm"] only.
Output head discovery: Dynamically matches output tensor names containing "policy", "wdl", or "value" (the value head must not itself contain "wdl"). If no WDL head exists, synthesizes WDL from the value head as [(v+1)/2, 0, (1-v)/2].
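The WDL synthesis above, and its inverse used during MCTS expansion, are one-liners — shown here as an illustrative sketch:

```typescript
// When a network exposes only a scalar value head v in [-1, 1], synthesize a
// WDL triple. The components still sum to 1, since (v+1)/2 + (1-v)/2 = 1;
// the draw slot is simply 0.
function wdlFromValue(v: number): [number, number, number] {
  return [(v + 1) / 2, 0, (1 - v) / 2];
}

// The inverse direction used by MCTS expansion: value = win - loss.
function valueFromWdl(wdl: [number, number, number]): number {
  return wdl[0] - wdl[2];
}
```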
4.5 Worker Protocol
Request types: init (model URL), getBestMove (single inference, no search), evaluatePosition (WDL only), mctsSearch (full MCTS)
Response types: ready, initProgress, initError, bestMove, evaluation, mctsResult, mctsProgress, error
Lc0Engine class (main-thread API): Wraps the Web Worker with a pub-sub state pattern and promise-based request/response. Only one request of each type can be in-flight at a time.
4.6 Model Cache (modelCache.ts)
IndexedDB database lc0-model-cache with a single models object store. Models stored as decompressed ArrayBuffer keyed by URL. All operations silently catch errors to avoid crashing on IndexedDB issues.
5. Neural Network Catalog
53 models organized by playing strength, spanning 6 model families:
Model Families
| Family | Count | Architecture | Elo Range | Description |
|---|---|---|---|---|
| 11258 distilled | 15 | 16x2-SE to 128x10-SE | ~800–2450 | Distilled from Lc0 T10 training net. The backbone of the rating ladder. |
| Maia | 11 | 64x6-SE | ~1100–2200 | Trained to predict human moves at specific Lichess rating levels. |
| Gyal family | 8 | Various (16x2 to 192x16) | ~800–2500 | Lichess-trained. Sub-families: Tiny/Bad/Good/Evil/Mean with distinct play styles. |
| Official Lc0 | 5 | Various | ~2100–2900 | Official training runs: T70, T42850, T71 FRC/Armageddon variants. |
| Transformers | 5 | 256x10 to 1024x15 | ~2525–2900 | Newest architecture. T1, t3, T82, BT3, BT4. Require WebGPU + significant VRAM. |
| Specialty | 4 | Various | ~2100–2600 | Leelenstein (engine-game trained), Ender (endgame specialist), Little Demon, Maia 2200 Hunter (fine-tuned on Hunter's own games). |
Notable Models
| Model | Arch | Size | Runtime MB | Elo | Notes |
|---|---|---|---|---|---|
| Tiny Gyal | 16x2 | 1.1 MB | ~25 | ~800 | Smallest, blunders freely |
| Maia 1100 | 64x6-SE | 3.3 MB | ~39 | ~1100 | Human-like at Lichess 1100 |
| T1-256x10 Distilled | Transformer | 77 MB | ~459 | ~2525 | "Best practical browser net" |
| BT4-1024x15 | Transformer | 707 MB | ~3229 | ~2900 | Strongest available. GM-level at 1-node. Needs ~4 GB VRAM. |
| Maia 2200 Hunter | 64x6-SE | 3.3 MB | ~39 | ~2050 | Fine-tuned on Hunter's own blitz/rapid games |
Memory Estimation
Each model has an estimatedRuntimeMb field computed by the benchmark-network-memory.mjs script: loads the ONNX session via onnxruntime-node, measures RSS delta, stores round(peakDeltaMb * 1.2) as a conservative estimate. Used by the tournament engine to estimate how many concurrent games can run.
6. Tournament System
6.1 Configuration
- Formats: Round Robin (circle method / Berger tables) or Swiss (greedy top-down with color balancing)
- Entrants: Each has a network, temperature (0–2), searchNodes (0–800, 0 = raw policy), searchTimeMs (0–30s), custom label
- Best-of: 1–30 regulation games per series (default 3)
- Tiebreak: "capped" (up to N extra games) or "win_by" (leader must be ahead by M)
- Concurrency: 1–8 simultaneous games
- Custom positions: Opening FENs rotate across series
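The round-robin circle method mentioned above can be sketched generically. This is an illustration of the classic algorithm, not the actual `pairings.ts` code; entrants are plain indices and a bye is marked with -1:

```typescript
// Circle method: fix player 0, rotate the rest each round; odd fields get a bye.
function roundRobinRounds(n: number): [number, number][][] {
  const players = Array.from({ length: n }, (_, i) => i);
  if (n % 2 === 1) players.push(-1); // bye marker for odd entrant counts
  const m = players.length;
  const rounds: [number, number][][] = [];
  for (let r = 0; r < m - 1; r++) {
    const round: [number, number][] = [];
    for (let i = 0; i < m / 2; i++) {
      const a = players[i];
      const b = players[m - 1 - i];
      if (a !== -1 && b !== -1) round.push([a, b]); // skip the bye
    }
    rounds.push(round);
    players.splice(1, 0, players.pop()!); // rotate everyone except player 0
  }
  return rounds;
}
```

Every pair meets exactly once: n−1 rounds for even n, n rounds (with one bye each) for odd n.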
6.2 Execution (useTournamentRunner.ts, 2474 lines)
The tournament runner manages the complete lifecycle:
- Engine pooling: LRU-evicted `Lc0Engine` instances, max = `maxSimultaneousGames × 2 + 2`. Evicts by next-use distance.
- Game execution: Each game creates a `chess.js` instance, alternates moves between engines (MCTS or raw policy), records FEN history and WDL eval snapshots. Games end on checkmate, stalemate, draw rules, 300-ply limit, or 3-minute timeout.
- Concurrency: `Promise.race` pattern — fill concurrent slots, proceed when any finishes, refill. No entrant appears in two simultaneous matches.
- Error handling: Exponential backoff retries (1s–30s, max 6 retries). After 6 retries, adjudicate as draw.
- Series reconciliation: After each game, recalculates series scores. Early termination when one side has an insurmountable lead. Tiebreak games added dynamically.
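The fill-and-refill `Promise.race` pattern can be sketched as a generic pool; this is an illustration of the technique, not the runner's actual code (which also tracks per-entrant exclusivity and retries):

```typescript
// Run async tasks with at most `limit` in flight; results keep input order.
// A rejected task rejects the whole pool in this simplified sketch.
async function runPool<T>(tasks: Array<() => Promise<T>>, limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  const running = new Set<Promise<void>>();
  for (let i = 0; i < tasks.length; i++) {
    const p: Promise<void> = tasks[i]().then((r) => {
      results[i] = r;
      running.delete(p); // free this slot
    });
    running.add(p);
    // all slots full: wait until any in-flight task finishes, then refill
    if (running.size >= limit) await Promise.race(running);
  }
  await Promise.all(running); // drain the tail
  return results;
}
```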
6.3 Standings & Ratings
- Match points: 1 for series win, 0.5 for draw, 0 for loss
- Game points: From individual game results (regulation only, tiebreakers excluded)
- Buchholz: Sum of opponents' match points (strength of schedule)
- Performance rating: FIDE method — average opponent Elo + dp(score percentage) using the standard 51-entry lookup table
- Cross table: N×N head-to-head matrix with series points, game points, and per-pair performance ratings
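The FIDE performance-rating method — average opponent Elo plus a dp offset looked up from the score percentage — can be sketched as below. Only a handful of rows from the 51-entry FIDE table are included here as an illustrative subset (the real table has one entry per percentage point); the nearest-entry lookup and function shapes are assumptions, not the actual `performanceRating.ts` code:

```typescript
// Illustrative subset of the FIDE dp table: [score fraction, rating offset].
// By symmetry, dp(1 - p) = -dp(p); the endpoints are ±800.
const DP_TABLE: [number, number][] = [
  [0.5, 0], [0.75, 193], [0.9, 366], [1.0, 800],
];

function dp(p: number): number {
  if (p < 0.5) return -dp(1 - p); // use symmetry below 50%
  let best = DP_TABLE[0];
  for (const row of DP_TABLE) {
    if (Math.abs(row[0] - p) < Math.abs(best[0] - p)) best = row; // nearest entry
  }
  return best[1];
}

function performanceRating(avgOpponentElo: number, scorePct: number): number {
  return Math.round(avgOpponentElo + dp(scorePct));
}
```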
6.4 Persistence
Active tournaments saved to localStorage every 200ms (debounced). Also archived to IndexedDB every 5 seconds. On reload, running matches reset to "waiting" and can be auto-resumed. Completed tournaments stored permanently with full state for reopening.
7. UI & Game Flow
7.1 Routing
No router library — App.tsx manages a state machine with screen types: home, game, tournament, share-loading, share-confirm. Persisted to localStorage.
7.2 Home Screen
- NetworkPicker: Searchable, sortable list of 53+ networks. Each shows name, architecture, Elo, download size, cache status. Inline download with progress bar. Temperature slider (0–2), MCTS search controls (nodes 0–800, time limit 0–30s), opening book selection, custom FEN input, share URL generation.
- GameHistory: Saved games list with expand/collapse, PGN display, and "Continue" for incomplete games.
7.3 Game Screen
- Board: `react-chessboard` with click-to-move and drag-and-drop. Legal move indicators (dots for quiet moves, rings for captures). Promotion picker overlay.
- Status Bar: Engine status, "Thinking..." indicator, last move in SAN, WDL bar (three-segment Win/Draw/Loss visualization).
- Move History: Tabbed view — Moves (clickable navigation with arrow keys) and PGN (click to copy).
- Controls: New Game (alternates color), Resign (two-step confirmation), Flip Board, temperature adjustment.
- Opening Book: Checks user-selected openings before consulting the engine. Plays book moves randomly, shows "Book" badge.
- Auto-save: Every move persists to localStorage. Game completion triggers final save with result.
7.4 Share URLs
Query parameters: network (required, must match built-in ID), color, fen, temperature. Large models (>25MB, not cached) show a confirmation dialog before downloading.
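The query-parameter scheme maps naturally onto `URLSearchParams`. A sketch with the parameter names from the text; the builder/parser shapes and the `ShareParams` type are illustrative, not the app's actual code:

```typescript
interface ShareParams {
  network: string;      // required: must match a built-in model ID
  color?: string;
  fen?: string;
  temperature?: number;
}

function buildShareUrl(base: string, p: ShareParams): string {
  const qs = new URLSearchParams({ network: p.network });
  if (p.color) qs.set("color", p.color);
  if (p.fen) qs.set("fen", p.fen); // URLSearchParams handles FEN's spaces/slashes
  if (p.temperature !== undefined) qs.set("temperature", String(p.temperature));
  return `${base}?${qs.toString()}`;
}

function parseShareUrl(url: string): ShareParams | null {
  const qs = new URL(url).searchParams;
  const network = qs.get("network");
  if (!network) return null; // network is required
  const t = qs.get("temperature");
  return {
    network,
    color: qs.get("color") ?? undefined,
    fen: qs.get("fen") ?? undefined,
    temperature: t !== null ? Number(t) : undefined,
  };
}
```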
8. Technical Tradeoffs & Decisions
8.1 Model Hosting: Local → Git LFS → Cloudflare R2
Evolution on Feb 6:
- Models bundled in `public/models/` — hit Cloudflare Pages' 25 MB deployment limit
- Tried Git LFS — abandoned same day (complexity, bandwidth costs)
- Final: Cloudflare R2 public bucket. Two upload scripts: `wrangler r2 object put` for normal models, `aws s3 cp` for the 707 MB BT4 (wrangler can't handle files that large)
8.2 Gzip Compression + .onnx.bin Extension
Models gzip-compressed for 30-45% size reduction. Browser decompresses via DecompressionStream API. Initially used .gz extension, but Vite's dev server (sirv) intercepted .gz files as pre-compressed assets. Hunter directed renaming to .onnx.bin.
8.3 WebGPU vs WASM
Extensively debated. Five approaches analyzed (batched MCTS + WebGPU, GPU-resident search, lc0-to-WASM, Rust WASM, single-node MCTS). Hunter pushed for practical implementation first, extraction into library later. Current: WASM with WebGPU as automatic upgrade when navigator.gpu is available.
8.4 MCTS vs Raw Policy (0-Node)
The app originally used only the policy head (no search). MCTS with PUCT selection was added in PR #9. The 0-node mode is still available (set searchNodes = 0) for speed or weaker play. Phase 2 batched MCTS is planned but not yet implemented.
8.5 Bundler: Rollup → Rolldown
Switched from Vite's default Rollup to rolldown-vite (Rust-based bundler) in PR #4 for faster builds.
8.6 Pre-Generated vs Programmatic Policy Index
The AI initially generated the 1858-entry policy index programmatically. The output was wrong — incorrect move ordering caused the engine to "play random bullshit." Hunter identified the problem and directed using the pre-generated table from his hunter-chessbot reference repo. This fixed the engine immediately.
9. Major Bugs & Debugging Stories
9.1 "Playing Random Bullshit" — Six Encoding Bugs (Feb 5)
The biggest debugging effort in the project. After the initial build, Hunter reported the engine was playing nonsensical moves despite correct WDL evaluation. Six separate bugs were found:
- Policy index table: Programmatic generation produced wrong move ordering. Fixed by using pre-generated 1858-entry array from Hunter's reference repo.
- Promotion encoding: Inverted (queen treated as normal, n/b/r as underpromotions). Correct: q/r/b have explicit suffixes, knight uses bare 4-char move.
- Move flipping for black: Was flipping square indices instead of UCI string ranks.
- History ordering: Taking first 7 positions instead of most recent 7.
- FenHistory initialization: Started as `[]` instead of `[startFEN]`.
- Halfmove clock: Divided by 100.0, should be 99.0.
Hunter identified that the value head was correct but the policy head was wrong, narrowing the investigation. He pointed to his working hunter-chessbot as the authoritative reference.
9.2 Bus Error in ONNX Conversion (Feb 6)
Hunter's custom fine-tuned Maia model crashed with Bus error: 10 during lc0 leela2onnx conversion. Deep investigation:
- Compared hex dumps of working base model vs Hunter's model
- Found Hunter's model had extra `training_params` fields (policy_loss, accuracy)
- Traced crash to `FloatOnnxWeightsAdapter::GetRawData()` — KERN_PROTECTION_FAILURE at memory boundary
- Root cause: Bug in lc0 v0.32.1 handling models with training_params populated (needed v0.21.0+)
9.3 useEffect Anti-Patterns (Feb 9)
Hunter identified pervasive useEffect problems: "what the fuck are these useEffects for?" Led to a comprehensive rewrite of OpeningPicker (removed all 3 useEffects, removed open prop entirely, parents conditionally render instead). Also fixed a flickering bug where NetworkPicker's selection oscillated due to re-resolve effects depending on [networks, selected.id] and running before localStorage writes completed.
9.4 Vite .gz Interception
Gzipped model files couldn't be served in development because Vite's sirv middleware treated .gz files as pre-compressed assets. Solved by renaming to .onnx.bin.
10. AI Agent Involvement
10.1 Session Data
| Metric | Value |
|---|---|
| Sessions | 9 session directories |
| Subagent files | 63 JSONL files |
| Total dialog lines | ~3,371 |
| Date range | Feb 5–13, 2026 |
10.2 Hunter's Direction
Hunter provided a comprehensive spec file (lc0-browser-chess-spec.md) upfront describing a 4-phase plan. He directed the architecture, identified bugs by testing, and pointed the AI to his working reference implementation (hunter-chessbot) when the AI's code was wrong.
Key corrections Hunter made:
- Policy encoding: AI's programmatic generation was fundamentally wrong — Hunter directed using his reference repo's pre-generated table
- useEffect quality: Hunter identified anti-patterns the AI had written and directed a comprehensive rewrite ("fix all your code that is dog shit. check over it and do refactors where necessary, but with a brain this time")
- Temperature default: AI suggested 0; Hunter pointed out that makes moves deterministic and predictable, changed to 0.15
- Modal vs page navigation: AI suggested page-based game detail view; Hunter directed modal overlay instead
- Board size: Hunter specified exact sizing (`min(90vh, 90vw)`)
- MCTS architecture: Hunter pushed back on AI's recommendation to build in-repo, noting batched MCTS is logically a library. AI conceded.
- Auto health checks: AI added automatic engine health checks on modal open; Hunter directed removing them
10.3 AI's Execution
The AI handled implementation, model conversion research, and the bulk of the coding. It deployed 3 parallel research subagents at project start to investigate Lc0 encoding, policy output format, and ONNX Runtime configuration requirements. The tournament system (2,474 lines in useTournamentRunner.ts alone) was largely AI-generated, with Hunter directing the feature requirements and correcting UI decisions.
11. Development Timeline
| Date | Commits | Key Achievement |
|---|---|---|
| Feb 5 | 3 | First working app: Lc0 in browser, encoding bugs found and fixed |
| Feb 6 | 14 | 30 networks added, Maia series, Git LFS → R2 pivot, model compression, game saving |
| Feb 7 | 3 | Custom ONNX upload (PR #3), useEffect audit (PR #4), tournament mode (PR #1) |
| Feb 9-10 | 5 | Opening book system (PR #5, 15K+ openings), FIDE performance ratings (PR #6), modal polish (PR #7) |
| Feb 12 | 1 | Shareable game URLs (PR #8) |
| Feb 13 | 4 | MCTS search (PR #9), per-entrant tournament settings |
| Feb 23 | 3 | Temperature sampling attempt + revert (PR #11, still open) |
Key Velocity Facts
- First working app with neural network inference: 2 hours from initial commit
- 53 networks cataloged and converted: 1 day
- Full tournament mode with Swiss/round-robin: 1 day (PR #1, 6 commits)
- Opening book with 15K+ positions: 2 days (PR #5, 4 commits)
- MCTS search engine: 1 day (PR #9)
12. Key Files Reference
Engine
| File | Lines | Purpose |
|---|---|---|
engine/mcts.ts | ~250 | MCTS search (PUCT selection, expansion, backpropagation) |
engine/inference.ts | ~120 | ONNX Runtime session management, WebGPU/WASM |
engine/encoding.ts | ~180 | FEN → [1,112,8,8] tensor encoding |
engine/decoding.ts | ~80 | 1858 policy logits → legal moves with temperature |
engine/policyIndex.ts | ~1900 | Pre-generated 1858 UCI move lookup table |
engine/worker.ts | ~200 | Web Worker: model loading, inference, MCTS |
engine/workerInterface.ts | ~150 | Main-thread Lc0Engine class (pub-sub + promises) |
engine/modelCache.ts | ~60 | IndexedDB model caching |
Tournament
| File | Lines | Purpose |
|---|---|---|
hooks/useTournamentRunner.ts | 2474 | Complete tournament lifecycle management |
lib/tournament/pairings.ts | ~150 | Round-robin (Berger tables) + Swiss pairings |
lib/tournament/standings.ts | ~100 | Match/game points, Buchholz, sorting |
lib/tournament/performanceRating.ts | ~80 | FIDE dp lookup table + computation |
Data
| File | Lines | Purpose |
|---|---|---|
constants/networks.ts | ~800 | 53 network definitions with metadata |
lib/openingBook.ts | ~50 | Trie-based opening book lookup |
data/openings.ts | ~20 | Lazy-loaded ECO opening database |
UI
| File | Lines | Purpose |
|---|---|---|
components/GameScreen.tsx | ~500 | Active game view with board, controls, history |
components/NetworkPicker.tsx | ~600 | Network selection, download, configuration |
components/TournamentLiveScreen.tsx | ~500 | Live tournament view with standings |
components/OpeningPicker.tsx | ~400 | Opening book selection modal |