Yasindu Nethmina
Full-Stack Product Engineer
BLOG

Fanning Out Tickers: Real-Time Price Distribution Across Server Instances Without a Message Broker

PUBLISHED

February 10, 2026

READ TIME

20 min read

TOPICS
WebSocket · Redis · Real-Time · Pub/Sub
SUMMARY

How a single upstream WebSocket connection fans live market prices out to browser clients across multiple backend instances: Redis pub/sub as a signaling layer, PUBSUB CHANNELS-driven subscriptions that mirror actual browser demand with zero manual management, metrics-on-write recomputation that makes all reads free, a staleness sweep with built-in retry backoff, and a frontend that patches TanStack Query's cache directly without any bespoke state management.

I've been building a stock market research platform for a while. It covers financials, earnings transcripts, insider trades, the economic calendar: the full stack of what a serious equity researcher needs in one place. Every page that shows a stock needs to display a price that updates in real time. That sounds straightforward. The implementation touches a WebSocket connection to a market data stream, Redis pub/sub across multiple backend instances, a background staleness sweep, metrics recomputation on every trade, and a frontend that wires all of it into TanStack Query without any bespoke state management.

This post covers the whole pipeline: how prices get in, how they get out, and what happens at every step along the way.

The Cross-Instance Problem

A production backend runs more than one instance. Requests are load-balanced across them. A browser client opens a WebSocket to the server and lands on instance A. The upstream market data connection runs on instance B. That browser will never receive a price update unless something bridges the two.

The standard answer is a dedicated message broker like Kafka, RabbitMQ, or Redis Streams. All of these work. All of them introduce an additional system to run, monitor, and reason about.

The approach here uses Redis as a signaling layer instead. Prices are not stored in Redis for delivery. Redis carries pub/sub events across instances. Each instance delivers those events directly to its local browser WebSocket clients over their existing connections. The data transport is the connection you already have. Redis just coordinates.

Connecting to the Market Data Stream

The backend holds a single WebSocket connection to the upstream market data stream. The protocol follows an auth-then-subscribe pattern:

socket.addEventListener('message', (event) => {
  const messages = JSON.parse(event.data) as AlpacaWebSocketMessage[];

  for (const message of messages) {
    switch (message.T) {
      case 'success': {
        if (message.msg === 'connected') authenticate();
        else if (message.msg === 'authenticated') isAuthenticated = true;
        break;
      }
      case 'subscription': {
        logger.info(`Subscribed to trades: ${message.trades.join(', ')}`);
        break;
      }
      case 't': {
        processAlpacaTradeEvent(message);
        break;
      }
    }
  }
});

The stream delivers several message types: quotes, bars, trading status messages, LULD bands, corrections. Only T: 't' is handled. A trade event is the most authoritative price signal: it is a transaction that actually happened at a specific price on a specific exchange.

On disconnect or error, the connection retries with a 2-second delay. An isSafeToClose flag distinguishes an intentional shutdown from an unexpected drop; without it, a graceful shutdown would trigger the retry loop.

Demand-Driven Subscriptions

The WebSocket worker does not maintain a static ticker list. It does not read configuration. It discovers what to subscribe to by querying Redis every 2 seconds:

const checkRoomsInterval = setInterval(async () => {
  if (!isAuthenticated) return;

  const roomsConnected = await getStockWsRoomIds();
  const existingStocks = roomsConnected.map((room) => room.split(':')[1]);

  const newStocks = existingStocks.filter((s) => !subscribedStocks.has(s));
  const removedStocks = [...subscribedStocks].filter(
    (s) => !existingStocks.includes(s),
  );

  if (newStocks.length > 0) {
    subscribe(newStocks);
    for (const symbol of newStocks) {
      updateStockPriceIfOutdatedBySymbol(symbol);
    }
  }

  if (removedStocks.length > 0) unsubscribe(removedStocks);
}, 2000);

getStockWsRoomIds() calls ROOMS.getRoomIds('stocks-room:*'), which maps to Redis PUBSUB CHANNELS stocks-room:*. This command returns the list of all currently active pub/sub channels matching that pattern. A channel named stocks-room:AAPL exists only while at least one subscriber is listening on it.

This is the key insight of the subscription model. The browser WebSocket controller joins stocks-room:AAPL when a client opens the AAPL page. That makes the channel active. On the next 2-second interval, the upstream worker sees it and subscribes to AAPL. When the last AAPL browser session closes, that channel disappears. Two seconds later, AAPL is unsubscribed from the upstream stream. The upstream subscription list is always exactly the set of tickers someone is actively viewing, with a 2-second lag.
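The reconciliation inside the 2-second interval reduces to a pure diff between the currently subscribed set and the channel names Redis returned. A minimal sketch (the helper name is hypothetical):

```typescript
// Hypothetical helper: given the currently subscribed symbols and the channel
// names PUBSUB CHANNELS returned, compute what to subscribe and unsubscribe.
const diffSubscriptions = (
  subscribed: Set<string>,
  channels: string[],
): { toSubscribe: string[]; toUnsubscribe: string[] } => {
  // 'stocks-room:AAPL' -> 'AAPL'
  const wanted = channels.map((channel) => channel.split(':')[1]);
  return {
    toSubscribe: wanted.filter((symbol) => !subscribed.has(symbol)),
    toUnsubscribe: [...subscribed].filter((symbol) => !wanted.includes(symbol)),
  };
};
```

Because the diff is computed fresh on every tick, a missed tick costs nothing: the next one converges to the same answer.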

When a new stock joins, updateStockPriceIfOutdatedBySymbol fires immediately. The staleness threshold is 10 seconds. If the stored price is older than that, a snapshot REST request fetches the current price before the first trade event arrives.

The Rooms Abstraction

ROOMS is the global pub/sub service that the entire delivery layer is built on:

async join(roomId, listener) {
  const subscriber = await this.getSubscriber();
  // Wrap the caller's listener so the JSON string Redis delivers is parsed
  // before the caller sees it, and remember the wrapper for leave().
  const handlerListener = (message) => listener(JSON.parse(message));
  const roomListeners = this._listeners.get(roomId) ?? new Map();
  roomListeners.set(listener, handlerListener);
  this._listeners.set(roomId, roomListeners);
  await subscriber.subscribe(roomId, handlerListener);
}

async leave(roomId, listener) {
  const subscriber = await this.getSubscriber();
  const handlerListener = this._listeners.get(roomId)?.get(listener);
  if (!handlerListener) return;
  this._listeners.get(roomId)?.delete(listener);
  await subscriber.unsubscribe(roomId, handlerListener);
}

async emit(roomId, message) {
  await MEMORY.client.publish(roomId, JSON.stringify(message));
}

async getRoomIds(pattern) {
  return await MEMORY.client.pubSubChannels(pattern);
}

The subscriber is a dedicated Redis connection, duplicated from the main client. Redis requires this because a connection in pub/sub mode cannot issue regular commands. emit uses the main client's publish command.

The internal listener map stores a two-level Map<roomId, Map<callerListener, wrapper>>. This is necessary because leave must remove the exact wrapper function that was registered with Redis, not just any listener on the room. The wrapper parses the JSON string that Redis delivers into the typed object the caller expects.
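A simplified model of that bookkeeping, separate from the actual ROOMS internals:

```typescript
type Listener = (data: unknown) => void;
type Wrapper = (raw: string) => void;

// Two-level map: roomId -> (caller's listener -> the wrapper given to Redis).
const listeners = new Map<string, Map<Listener, Wrapper>>();

const register = (roomId: string, listener: Listener): Wrapper => {
  // The wrapper parses the JSON string Redis delivers before invoking the caller.
  const wrapper: Wrapper = (raw) => listener(JSON.parse(raw));
  const room = listeners.get(roomId) ?? new Map<Listener, Wrapper>();
  room.set(listener, wrapper);
  listeners.set(roomId, room);
  return wrapper; // this exact function is what subscriber.subscribe receives
};

const unregister = (roomId: string, listener: Listener): Wrapper | undefined => {
  const wrapper = listeners.get(roomId)?.get(listener);
  listeners.get(roomId)?.delete(listener);
  return wrapper; // the same function must be passed to subscriber.unsubscribe
};
```

Without the second level, unsubscribing one caller's listener would mean tearing down every listener on the room.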

The WebSocket Endpoints

Two endpoints expose stock events to browser clients: GET /stocks/ws for the global room and GET /stocks/:symbol/ws for per-stock rooms. Both are gated by session + active subscription middleware before the upgrade.

createWs(async (ctx) => {
  const stock = await getStockBySymbolOrId(ctx.req.param('symbolOrId')!);
  const stockRoomId = formatStockWsRoomId(stock.symbol);

  return {
    onOpen: async (_, ws) => {
      ws.data.onEvent = async (data) => { ws.json(data); };
      await ROOMS.join(stockRoomId, ws.data.onEvent);
    },
    onClose: async (_, ws) => {
      await ROOMS.leave(stockRoomId, ws.data.onEvent);
    },
  };
}),

On open: ws.json is registered as the room listener. When any backend instance publishes to stocks-room:AAPL, Redis delivers the message to every subscriber across all instances. Each instance's subscriber fires its listener, which is ws.json, which serializes and sends the event directly to that browser.

The Price Write Path

When a trade arrives, three things happen in sequence:

Step 1: Write to the database

The stock row carries two price columns: price (the current live price) and price_pc (the previous session close, used as the denominator for percentage change). Trades write price; price_pc is updated separately during session transitions.

Step 2: Recompute derived metrics

This reads the last 8 quarters of financials from the database and recomputes every metric that depends on the current price: market cap, P/E (TTM and NTM), P/S, P/B, P/FCF, P/OCF, EV/EBITDA, EV/Sales, NTM ratios, PEG, dividend yield, and 52-week high/low tracking. All of it is persisted back to Postgres. Cache entries for this stock's metrics are invalidated.

Computing metrics on write is the right tradeoff for this use case. Page loads fetch precomputed values from a database row rather than deriving them on every request. The write path already holds a DB transaction, so the extra computation concentrates cost where it already exists. The read path stays free.
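The shape of that recompute, with hypothetical field names and just two of the many ratios:

```typescript
// Illustrative only: the real recompute reads 8 quarters of financials and
// covers P/B, P/FCF, EV/EBITDA, NTM ratios, PEG, dividend yield, and more.
interface Fundamentals {
  sharesOutstanding: number;
  epsTtm: number; // trailing-twelve-month earnings per share
  revenueTtm: number; // trailing-twelve-month revenue
}

const recomputePriceMetrics = (price: number, f: Fundamentals) => {
  const marketCap = price * f.sharesOutstanding;
  return {
    marketCap,
    peTtm: f.epsTtm > 0 ? price / f.epsTtm : null, // undefined for negative earnings
    psTtm: f.revenueTtm > 0 ? marketCap / f.revenueTtm : null,
  };
};
```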

Step 3: Fan out to subscribers

sendStockUpdatedPartial(updatedStock, {
  price: updatedStock.price.toNumber(),
  price_pc: updatedStock.price_pc.toNumber(),
  price_updated_at: updatedStock.price_updated_at,
  metrics: metricsPatch,
});

This publishes a stock_updated_partial event to the stock's Redis room. Redis delivers it to every instance with a subscriber on that channel. Each such instance calls ws.json for its local browser connections. Only the fields that actually changed are included in the event, so the downstream payload stays minimal.

Market Session Awareness

The system tracks four session states: pre_market, regular, post_market, closed. These come from the upstream broker's clock and calendar APIs:

const getUSMarketSession = async (): Promise<MarketSessionType> => {
  const cached = await MEMORY.getJSON<CachedMarketSession | null>(
    MARKET_STATUS_CACHE_KEY,
  );

  if (cached) return getSessionFromClockAndCalendar(cached.clock, cached.calendarEntry);

  const session = await fetchCurrentMarketSession();
  await MEMORY.setJSON(MARKET_STATUS_CACHE_KEY, session, 60);
  return getSessionFromClockAndCalendar(session.clock, session.calendarEntry);
};

The clock API gives open/closed status. The calendar API gives the exact session times for today: session open (pre-market start), regular open, regular close, and session close (after-hours end). The session is computed from the current ET minute-of-day against those four boundaries. This is accurate to the actual trading day, not hardcoded to 9:30 AM and 4 PM.
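The classification itself can be sketched as a pure function over those four boundaries, expressed as ET minute-of-day (field names here are illustrative):

```typescript
type MarketSessionType = 'pre_market' | 'regular' | 'post_market' | 'closed';

// One trading day's boundaries as ET minute-of-day, derived from the calendar API.
interface SessionBoundaries {
  sessionOpen: number;  // pre-market start, e.g. 4:00 AM ET -> 240
  regularOpen: number;  // e.g. 9:30 AM ET -> 570
  regularClose: number; // e.g. 4:00 PM ET -> 960
  sessionClose: number; // after-hours end, e.g. 8:00 PM ET -> 1200
}

const classifySession = (
  minuteOfDayEt: number,
  b: SessionBoundaries,
): MarketSessionType => {
  if (minuteOfDayEt < b.sessionOpen || minuteOfDayEt >= b.sessionClose) return 'closed';
  if (minuteOfDayEt < b.regularOpen) return 'pre_market';
  if (minuteOfDayEt < b.regularClose) return 'regular';
  return 'post_market';
};
```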

The 60-second Redis cache means session classification can lag by up to a minute around transitions. During background snapshot refreshes, price is only written when the market is in a regular session: after 4 PM ET the price column holds the final regular-session trade, and off-hours REST snapshots do not overwrite it with a potentially stale extended-session quote. The live WebSocket path is not subject to this guard, since those are actual trade events.

The Background Price Worker

The live WebSocket pipeline only covers tickers someone is currently viewing. When all sessions for a ticker close, it gets unsubscribed from the upstream stream. The stored price goes stale. When the next user opens that stock, they need a reasonably current price immediately.

Two mechanisms handle this. The first is the staleness check on new subscription: if the stored price is older than 10 seconds, a snapshot fetch fires immediately. The second is the background sweep:

const stocks = await DB.stock.findMany({
  where: {
    OR: [
      { price_updated_at: { lte: new Date(Date.now() - 3 * 60 * 60 * 1000) } },
      { price_updated_at: null },
    ],
  },
  take: 50,
  orderBy: { price_updated_at: 'asc' },
});

Every 3 seconds the scheduler fires this. It fetches the 50 stocks whose prices are most stale (those more than 3 hours old or never updated) ordered oldest first. At most 2 snapshot requests are in flight at any time across all instances via a global distributed lock.

On failure, the worker does not retry immediately. It writes a future timestamp, setting price_updated_at to 2 hours in the future. This means the stock will not be selected by the staleness filter for the next 2 hours. The retry backoff requires no scheduler entry, no retry queue. The staleness filter is the backoff mechanism.
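The interplay between the failure stamp and the staleness filter can be sketched with the two thresholds from above:

```typescript
const STALE_AFTER_MS = 3 * 60 * 60 * 1000;     // sweep selects prices older than 3 hours
const FAILURE_BACKOFF_MS = 2 * 60 * 60 * 1000; // on failure, stamp 2 hours into the future

// On a failed snapshot fetch, write a future timestamp instead of retrying.
const backoffTimestamp = (now: Date): Date =>
  new Date(now.getTime() + FAILURE_BACKOFF_MS);

// The sweep's staleness predicate (the Prisma where-clause, as a function).
const isStale = (priceUpdatedAt: Date | null, now: Date): boolean =>
  priceUpdatedAt === null ||
  priceUpdatedAt.getTime() <= now.getTime() - STALE_AFTER_MS;
```

A failed stock carries a stamp 2 hours ahead, so it re-enters the sweep 5 hours after the failure: 2 hours of future stamp plus the 3-hour staleness threshold.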

Chart Candle Data

Live price updates cover the current trade. Historical charts need OHLCV bars from the data provider. These are fetched with pagination, since a multi-year intraday history can exceed a single API response. adjustment: 'all' applies both split and dividend adjustments, which is critical for historical accuracy. Without it, a 20:1 stock split would show as a price drop.

The result is cached in Redis keyed by symbol, resolution, and exact time window. TTL: 5 minutes for daily/weekly/monthly resolutions, 1 minute for intraday. Intraday charts refresh frequently during market hours. Daily charts are stable enough for a 5-minute window.

Chart Data Processing on the Frontend

Raw candles go through several transforms before reaching the chart component.

Gap Filling

The provider does not guarantee a bar for every minute in thinly-traded stocks. Missing minutes leave gaps in a continuous line chart. A gap filler inserts synthetic flat bars carrying the previous close forward. A 30-step cap prevents bridging across overnight gaps and multi-day breaks where no data legitimately exists.
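A sketch of that gap filler, assuming a fixed bar interval:

```typescript
interface Candle { t: number; o: number; h: number; l: number; c: number; v: number }

const MAX_FILL_STEPS = 30; // don't bridge overnight gaps or multi-day breaks

// Insert synthetic flat bars for missing steps, carrying the previous close forward.
const fillGaps = (candles: Candle[], stepMs: number): Candle[] => {
  const out: Candle[] = [];
  for (const candle of candles) {
    const prev = out[out.length - 1];
    if (prev) {
      const missing = Math.round((candle.t - prev.t) / stepMs) - 1;
      if (missing > 0 && missing <= MAX_FILL_STEPS) {
        for (let i = 1; i <= missing; i++) {
          // Flat bar: OHLC all equal the previous close, zero volume.
          out.push({ t: prev.t + i * stepMs, o: prev.c, h: prev.c, l: prev.c, c: prev.c, v: 0 });
        }
      }
    }
    out.push(candle);
  }
  return out;
};
```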

Holiday and Weekend Handling

The 1D view fetches 5 days of data to handle the common case where the current day is a weekend or market holiday. After fetching, the client filters to the last day that actually contains candles. For 5D, the query window is widened by 9 extra calendar days to guarantee 5 real trading sessions even across holiday clusters.
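The post-fetch filter can be sketched as follows; day bucketing here uses UTC for brevity, where the real code would bucket in exchange time:

```typescript
// Keep only candles from the most recent calendar day that actually has data.
const lastDayWithCandles = <T extends { t: number }>(candles: T[]): T[] => {
  if (candles.length === 0) return [];
  const dayOf = (ms: number) => Math.floor(ms / 86_400_000);
  const lastDay = dayOf(candles[candles.length - 1].t); // candles arrive sorted by time
  return candles.filter((candle) => dayOf(candle.t) === lastDay);
};
```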

Extended Hours Time Compression

1D is the only interval that shows pre- and post-market data. Without treatment, the chart would be unreadable: pre-market runs from 4 AM ET (5.5 hours before open) and would dominate the visual width while containing a handful of low-volume trades.

const INTRADAY_EXTENDED_HOURS_WIDTH_RATIO = 0.35;

// Pre-market and post-market together get 35% of total width.
// Regular session gets 65%.

const mapTimestampToCompressedX = (timestampMs: number): number => {
  // Session boundaries, mapped offsets, and the pre/post ratios are closure values.
  const clampedTimestamp = Math.min(Math.max(timestampMs, dayStartMs), dayEndMs);
  if (clampedTimestamp <= marketOpenMs) {
    return mappedDayStartMs + (clampedTimestamp - dayStartMs) * preRatio;
  }
  if (clampedTimestamp < marketCloseMs) {
    return mappedMarketOpenMs + (clampedTimestamp - marketOpenMs);
  }
  return mappedMarketCloseMs + (clampedTimestamp - marketCloseMs) * postRatio;
};

The mapping is invertible for tooltip timestamp lookups. Both extended sessions share the compressed width of whichever is shorter, keeping them visually balanced. Regular session timestamps map 1:1 within their allocated space.
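A self-contained version of the forward and inverse mappings, with illustrative boundaries and the simplifying assumption that pre and post split the extended width evenly:

```typescript
// Illustrative boundaries as ms offsets within one trading day.
const MIN = 60_000;
const dayStart = 0;       // 4:00 AM ET session open
const open = 330 * MIN;   // 9:30 AM ET, 5.5 hours in
const close = 720 * MIN;  // 4:00 PM ET
const dayEnd = 960 * MIN; // 8:00 PM ET session close

const regular = close - open;
// Extended hours get 35% of total width, the regular session the remaining 65%.
const extendedWidth = (regular / 0.65) * 0.35;
const preWidth = extendedWidth / 2;
const preRatio = preWidth / (open - dayStart);
const postRatio = (extendedWidth / 2) / (dayEnd - close);

const compress = (t: number): number =>
  t <= open ? (t - dayStart) * preRatio
  : t < close ? preWidth + (t - open) // regular session maps 1:1
  : preWidth + regular + (t - close) * postRatio;

const decompress = (x: number): number =>
  x <= preWidth ? dayStart + x / preRatio
  : x < preWidth + regular ? open + (x - preWidth)
  : close + (x - preWidth - regular) / postRatio;
```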

The Frontend WebSocket Connection

The browser connects to the per-stock WebSocket endpoint and patches TanStack Query's cache directly on every event:

export const useStockWs = () => {
  const symbolOrId = useSymbolOrId();
  const { updateStock } = useUpdateStock();

  useWs('/stocks/{symbolOrId}/ws', { path: { symbolOrId } }, (event) => {
    switch (event.type) {
      case 'stock_updated':
      case 'stock_updated_partial': {
        updateStock(event.data);
        break;
      }
    }
  });
};

updateStock does a selective deep merge: a price update that carries new metrics does not replace the profile object. Components subscribed to profile data do not re-render. Only the components consuming the changed fields do.

The WebsocketClient is stored as a singleton in TanStack Query's cache with staleTime: Infinity, which prevents TanStack Query from ever recreating the client. Two components calling useWs for the same endpoint share the same connection. The useEffect registers a listener when the component mounts and removes it when it unmounts. When the last listener unregisters, the connection disconnects. On unexpected close, the client auto-reconnects after 1 second.

The Header Price Model

The stock header shows different information depending on the current session. During regular hours, one row shows the live price with the daily change. Outside regular hours, two rows appear: the session close price and the extended-session price.

Session detection runs on a 60-second interval on the client side, using the exchange's known market window. During regular hours, stock.price is the live price flowing in from the WebSocket. Outside regular hours, the primary row is the close price, derived not from the database price column but from the 1D candle data; the secondary row is the extended-session price from stock.price, which the WebSocket still updates as after-hours trades come in.

The close price is found by walking backwards through candles to find the last candle that falls within regular session hours and whose session close boundary has already passed. animatePrice: false is set on the locked close price because the close is static: it does not receive WebSocket updates, and animating a static value creates a false impression of live data.

The Full Data Flow

End to end, from exchange to browser:

  • A trade executes on an exchange. The market data stream delivers it to the backend WebSocket connection.
  • The symbol is in the current upstream subscription because PUBSUB CHANNELS stocks-room:* returned stocks-room:AAPL, meaning at least one browser client is watching this stock.
  • updateStockPriceLocal runs: it writes the new price to Postgres, recomputes all price-dependent metrics from the last 8 quarters of financials, and publishes stock_updated_partial to the stock's Redis room.
  • Redis pub/sub delivers the message to every backend instance subscribed to stocks-room:AAPL. Each instance fires its listener for every browser WebSocket connected to that room.
  • The browser receives the event. updateStock merges the price and partial metrics into TanStack Query's cache. React re-renders only the components subscribed to the changed fields.

For tickers with no active browser sessions, the background sweep runs every 3 seconds on the controller instance, fetching the 50 most stale stocks and refreshing them via the REST snapshot endpoint. At most 2 concurrent requests globally, via a distributed lock.

What Stands Out Looking Back

The subscription model is the cleanest part. The upstream worker never needs to know about users or sessions. It asks Redis what channels exist and reconciles against its current subscription list. This is resilient to every edge case an explicit lifecycle would have to handle: browser crash without clean disconnect, instance restart, network drop. The system converges on the correct state automatically.

Redis plays two distinct roles here and it is worth being clear about the distinction. As a pub/sub delivery layer, it carries events across instances (the signaling role). As a key-value store, it holds cached market session state, candle data, and stock metadata (the caching role). Neither role requires a dedicated message broker. Both fit on the same Redis instance the application already runs.

The metrics-on-write design concentrates computation at the write path and makes reads free. The write path already holds a DB transaction. Adding a metrics recompute to it means every downstream read (page load, API request, WebSocket subscriber) gets a precomputed value with no extra work. On a platform where a single ticker can be viewed by many users but trades at a finite rate, this is clearly the right ratio.
