Case Study

Panoptes

Live sports aggregator that detects the games worth watching

Category

Platforms

Status

In Progress

Maturity

PRD

Problem and Motivation

I watch a lot of sports. Multiple games running at the same time on different screens, and the question is always "which one should I actually be paying attention to right now?" You flip between channels or check your phone for score updates, and by the time you switch the interesting moment is already over.

I wanted a system that watches every game and tells me which ones matter. Not just "this game is close" but actually scoring how interesting each game is right now, based on score difference, what period it's in, momentum shifts, whether someone's mounting a comeback. Named it Panoptes after the hundred-eyed giant from Greek mythology. Felt right for something that's supposed to see everything at once.

Architecture Overview

It's a pnpm monorepo with two packages: api (Fastify + TypeScript) and web (React + Vite + Tailwind). The backend does all the real work right now. The frontend is two files.

The data pipeline goes: ESPN polling, state diffing, event emission, insight detection, priority calculation. Each is its own service layer. The PollingOrchestrator manages one SportPoller per league, each on its own timer with staggered start times so they don't all fire at once. When a poll comes back, GameStateTracker diffs the new scores against previous state and emits typed events through an EventBus. The bus does in-process pub/sub but also bridges to Redis for clustering later.
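The bus is the piece that ties the pipeline together. A minimal sketch of the idea, with illustrative event names and a duck-typed Redis client standing in for the real bridge (none of these identifiers are from the actual codebase):

```typescript
// Typed in-process pub/sub with an optional Redis mirror for clustering later.
// Event names and payload shapes here are illustrative, not the real ones.
type EventMap = {
  "score.changed": { gameId: string; home: number; away: number };
  "game.statusChanged": { gameId: string; status: string };
};

type Handler<T> = (payload: T) => void;

class EventBus {
  private handlers = new Map<keyof EventMap, Handler<any>[]>();

  // Optional bridge: anything with publish(channel, message), e.g. an
  // ioredis client. Left undefined for pure in-process use.
  constructor(private redis?: { publish(channel: string, msg: string): unknown }) {}

  on<K extends keyof EventMap>(event: K, handler: Handler<EventMap[K]>): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit<K extends keyof EventMap>(event: K, payload: EventMap[K]): void {
    // Deliver to local subscribers first.
    for (const h of this.handlers.get(event) ?? []) h(payload);
    // Mirror every event to Redis so other processes can subscribe later.
    this.redis?.publish(String(event), JSON.stringify(payload));
  }
}
```

The typed event map is the point: detectors downstream get the payload shape for free, and a typo in an event name fails at compile time rather than silently dropping events.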

Events hit the InsightsEngine, which runs 8 detectors against the game state and pushes new insights back through the bus. Then calculatePriority() takes everything (score closeness, momentum, period, active insights) and outputs a 0-100 priority score per game.
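Roughly what the detector contract looks like, with hypothetical names and a hard-coded threshold where the real code consults getSportConfig():

```typescript
// Illustrative detector interface; the actual API in the codebase may differ.
interface GameState { scoreDiff: number; period: number; }
interface Insight { type: string; gameId: string; message: string; }

interface InsightDetector {
  type: string;
  detect(gameId: string, state: GameState): Insight | null;
}

const closeGameDetector: InsightDetector = {
  type: "close_game",
  detect(gameId, state) {
    // Threshold hard-coded here; the real detectors pull it from getSportConfig().
    if (state.period >= 4 && state.scoreDiff <= 5) {
      return { type: this.type, gameId, message: "Close game in the 4th" };
    }
    return null;
  },
};

// The engine runs every detector and keeps whatever fired.
function runDetectors(gameId: string, state: GameState, detectors: InsightDetector[]): Insight[] {
  return detectors
    .map((d) => d.detect(gameId, state))
    .filter((i): i is Insight => i !== null);
}
```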

PostgreSQL with TimescaleDB handles persistence. Score events go into a hypertable partitioned by timestamp for efficient trend queries. Redis handles ephemeral state: poller configs, game state cache, pub/sub for real-time updates. The two stores serve different purposes and I didn't want to collapse them into one.
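The hypertable setup amounts to one TimescaleDB call on top of a normal table. A hypothetical migration (table and column names invented, not taken from the real schema):

```typescript
// Hypothetical DDL for the score-events hypertable. create_hypertable() is
// TimescaleDB's call that turns a plain table into a time-partitioned one.
const createScoreEventsTable = `
  CREATE TABLE IF NOT EXISTS score_events (
    time        TIMESTAMPTZ NOT NULL,
    game_id     TEXT        NOT NULL,
    home_score  INT         NOT NULL,
    away_score  INT         NOT NULL
  );
  SELECT create_hypertable('score_events', 'time', if_not_exists => TRUE);
`;
```

After that, trend queries (e.g. score deltas over the last ten minutes of a game) hit only the relevant time chunks instead of scanning the whole table.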

Key Technical Decisions and Tradeoffs

  1. ESPN's undocumented API instead of a paid data provider

    The ESPN scoreboard endpoints at site.api.espn.com/apis/site/v2/sports/... aren't officially documented, but they're stable and cover everything: NFL, NBA, MLB, NHL, college football, college basketball, and 7 soccer leagues including Champions League. No auth required. For a homelab project, paying for a sports data API felt wrong. The risk is ESPN changes the response format without warning. I've got a typed parser with snapshot tests that'll throw immediately if the shape changes, so at least I'll know fast.
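A sketch of the defensive-parsing style this implies, guarding the competitions[0].competitors path mentioned above; the field names below that level (homeAway, score, team.displayName) are assumptions about the response shape, not guarantees:

```typescript
// Defensively parse one scoreboard event. Every access is guarded so a
// shape change produces a null (caught by snapshot tests) rather than a crash.
interface ParsedGame {
  home: { name: string; score: number };
  away: { name: string; score: number };
}

function parseEvent(raw: any): ParsedGame | null {
  const competitors = raw?.competitions?.[0]?.competitors;
  if (!Array.isArray(competitors) || competitors.length < 2) return null;

  const side = (homeAway: string) => {
    const c = competitors.find((x: any) => x?.homeAway === homeAway);
    if (!c) return null;
    const score = Number(c.score); // scores arrive serialized as strings
    return {
      name: String(c.team?.displayName ?? "unknown"),
      score: Number.isFinite(score) ? score : 0,
    };
  };

  const home = side("home");
  const away = side("away");
  return home && away ? { home, away } : null;
}
```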

  2. Sport-specific insight configs instead of one-size-fits-all thresholds

    "Close game" means 7 points in football, 3 in hockey, 5 in basketball, 2 in baseball, 1 in soccer. A 10-point NBA deficit in the 4th quarter is a comeback. A 10-point NFL deficit in the 4th quarter is basically over. Each detector loads sport-specific configs from getSportConfig() with overrides for close game thresholds, blowout margins, comeback deficit minimums, and final minutes windows. More code to maintain, but universal thresholds would make the insights useless for half the sports.
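The config shape might look like this; the close-game thresholds are the ones quoted above, while the other numbers are placeholders, and getSportConfig()'s real signature may differ:

```typescript
// Illustrative per-sport config. closeGameThreshold values match the text;
// the remaining values are invented placeholders.
interface SportConfig {
  closeGameThreshold: number;  // max score diff still considered "close"
  blowoutMargin: number;       // margin past which a game is written off
  comebackMinDeficit: number;  // deficit required before a comeback counts
  finalMinutesWindow: number;  // minutes left that count as "final minutes"
}

const SPORT_CONFIGS: Record<string, SportConfig> = {
  football:   { closeGameThreshold: 7, blowoutMargin: 21, comebackMinDeficit: 14, finalMinutesWindow: 5 },
  basketball: { closeGameThreshold: 5, blowoutMargin: 20, comebackMinDeficit: 12, finalMinutesWindow: 3 },
  hockey:     { closeGameThreshold: 3, blowoutMargin: 4,  comebackMinDeficit: 2,  finalMinutesWindow: 5 },
  baseball:   { closeGameThreshold: 2, blowoutMargin: 7,  comebackMinDeficit: 4,  finalMinutesWindow: 0 },
  soccer:     { closeGameThreshold: 1, blowoutMargin: 3,  comebackMinDeficit: 2,  finalMinutesWindow: 10 },
};

function getSportConfig(sport: string): SportConfig {
  // Fall back to a middle-of-the-road config for unknown sports.
  return SPORT_CONFIGS[sport] ?? SPORT_CONFIGS.basketball;
}
```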

  3. Polling with adaptive intervals instead of streaming

    ESPN doesn't offer webhooks, so polling is the only option. But polling every 5 seconds across 13 leagues when nothing's happening is wasteful. The pollers start at 60-second base intervals, drop to 15 seconds when games are in progress, and go to 5 seconds during critical moments (close game in the 4th quarter, overtime). The orchestrator staggers start times across leagues so they don't all spike the request rate at the same instant.
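The tiering logic reduces to something like this; the interval values are from the text, but the "critical" test is simplified to close-and-late and the function names are invented:

```typescript
// Pick a polling interval for a league based on the most urgent game in it.
type GamePhase = "idle" | "in_progress" | "critical";

const POLL_INTERVAL_MS: Record<GamePhase, number> = {
  idle: 60_000,        // base interval when nothing is live
  in_progress: 15_000, // games underway
  critical: 5_000,     // close 4th quarter, overtime, etc.
};

function pickInterval(
  games: { status: string; period: number; scoreDiff: number }[],
): number {
  let phase: GamePhase = "idle";
  for (const g of games) {
    if (g.status !== "in_progress") continue;
    // Simplified criticality check: close game in the 4th period or later.
    if (g.period >= 4 && g.scoreDiff <= 5) return POLL_INTERVAL_MS.critical;
    phase = "in_progress";
  }
  return POLL_INTERVAL_MS[phase];
}
```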

The priority score itself is one weighted sum of normalized factors:

export function calculatePriority(state: GameState): number {
  // 0 when the margin exceeds the close-game threshold, 1 when tied.
  const scoreDiff = Math.abs(state.game.homeScore - state.game.awayScore);
  const closeScoreFactor = scoreDiff >= CLOSE_GAME_MAX_SCORE_DIFF
    ? 0 : 1 - scoreDiff / CLOSE_GAME_MAX_SCORE_DIFF;

  // Each factor is normalized to [0, 1], then scaled by its weight.
  const raw =
    closeScoreFactor * PRIORITY_WEIGHT_CLOSE_SCORE +
    Math.min(1, state.game.period / 4) * PRIORITY_WEIGHT_CRITICAL_PERIOD +
    (state.momentum.strength / 100) * PRIORITY_WEIGHT_MOMENTUM +
    getStatusFactor(state.game.status) * PRIORITY_WEIGHT_GAME_STATUS +
    (calculateInsightBoost(state.insights) / 100) * PRIORITY_WEIGHT_INSIGHT_BOOST;

  // Clamp to the 0-100 range.
  return Math.min(PRIORITY_MAX_SCORE, Math.max(PRIORITY_MIN_SCORE, Math.round(raw)));
}

The weights (close score 18%, critical period 15%, momentum 12%, game status 15%, insight boost 40%) are hand-tuned. Insights dominate because a game with an active comeback alert or overtime detection is almost always worth watching regardless of the other factors. These will need more tuning once the display layer is actually routing games to screens, but the ranking feels right when I test against real scoreboard data.
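For a sense of how the weights interact, here is the arithmetic for a hypothetical close NBA game in the 4th quarter with an active comeback insight; all the factor values are invented for illustration:

```typescript
// Worked example of the weighted sum with the stated weights (18/15/12/15/40).
const W = { closeScore: 18, period: 15, momentum: 12, status: 15, insight: 40 };

const closeScoreFactor = 0.8; // e.g. a 1-point game
const periodFactor = 1;       // 4th quarter -> min(1, 4/4)
const momentumFactor = 0.7;   // momentum.strength = 70
const statusFactor = 1;       // game in progress
const insightFactor = 1;      // active comeback insight maxes the boost

const raw =
  closeScoreFactor * W.closeScore +   // 14.4
  periodFactor * W.period +           // 15
  momentumFactor * W.momentum +       // 8.4
  statusFactor * W.status +           // 15
  insightFactor * W.insight;          // 40

const priority = Math.round(raw);     // 92.8 -> 93
```

Even with the insight boost maxed out, a game still needs the situational factors to crack the 90s, which is roughly the behavior I want: insights dominate but don't completely drown out context.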

Screenshots and Video

No screenshots yet. The frontend is two files. Planning to capture the dashboard once the multiview UI is built (PBIs 7-8 in the backlog).

Tech Stack with Rationale

  • TypeScript + Node.js because the whole pipeline is I/O-bound (HTTP polling, Redis reads, PostgreSQL writes) and async/await makes that clean. Considered Go for the same reasons I used it for navi, but ESPN response parsing is way more pleasant with TypeScript's type narrowing than Go's struct tags and manual unmarshaling.
  • Fastify over Express because it's faster and has first-class TypeScript support. The JSON Schema validation for request/response is useful for the REST API. Didn't need Express's pile of middleware for this.
  • Redis for caching game state between polls so the API can serve current data without hitting PostgreSQL, and for pub/sub in the event system. Fire-and-forget pub/sub is fine here because events also get persisted to PostgreSQL as the source of truth.
  • PostgreSQL + TimescaleDB because score events are time-series data. TimescaleDB's hypertables handle partitioning automatically. Regular PostgreSQL would work at this scale honestly, but I wanted to learn TimescaleDB and it'll matter when I ingest historical seasons (PBI-15 in the backlog).
  • Docker Compose because the stack has 4 services (API, web, PostgreSQL, Redis) and I don't want to install and manage them by hand. One docker-compose up and everything runs.
  • Biome + oxlint instead of ESLint + Prettier. Biome handles formatting and linting in one tool, oxlint catches things Biome misses. Ditched ESLint completely.

Challenges and Learnings

  • ESPN's response format is nested and inconsistent across sports. A football game has competitions[0].competitors for teams, but the team structure differs depending on whether the game is in-progress, final, or scheduled. The parseScoreboard() function is 80+ lines of defensive parsing. Ugly, but tested against real response snapshots from each sport.

  • My first priority formula weighted score closeness at 40%. That meant a 0-0 soccer match in the first half ranked higher than a basketball game with an active comeback in the 4th quarter. Completely wrong. Took a few iterations to land on insights getting 40% of the weight, because an active insight (comeback, overtime, upset alert) is almost always the strongest signal that something interesting is happening.

  • The insight aggregator needs cooldown periods or it spams. Without deduplication, the CloseGameDetector fires every single poll cycle for the entire duration of a close game. That's not an "insight," that's noise. The aggregator tracks when each insight type last fired per game and enforces a cooldown before allowing the same type to trigger again.
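The cooldown bookkeeping itself is small. A sketch, with the class name and the 120-second window as placeholders:

```typescript
// Track when each insight type last fired per game; suppress repeats
// until the cooldown elapses. The 120s default is illustrative.
class InsightCooldowns {
  private lastFired = new Map<string, number>(); // key: `${gameId}:${type}`

  constructor(private cooldownMs = 120_000) {}

  // Returns true (and records the firing) if this insight may be emitted now.
  shouldEmit(gameId: string, type: string, now = Date.now()): boolean {
    const key = `${gameId}:${type}`;
    const last = this.lastFired.get(key);
    if (last !== undefined && now - last < this.cooldownMs) return false;
    this.lastFired.set(key, now);
    return true;
  }
}
```

Keying on game + type means a comeback alert in one game never suppresses the same alert in another, and different insight types for the same game stay independent.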

Links

  • GitHub
  • Live demo: not yet (frontend barely started)