
Nokia Corporation

Bell Labs Hypebox Table App

An interactive Electron + Vue 3 application designed for the Bell Labs Showcase Hypebox, a digital exhibit table.

Date: Jan 2025 - Apr 2025
Project Types: #Upswell #Touch Table
Tech Stacks: #Electron #Vue.js #Vite #Pinia #TailwindCSS #Three.js #AWS S3

Architected Offline-First Multi-Display Signage System with Real-Time TCP Sync for Nokia Bell Labs

📌 The Challenge

Nokia Bell Labs needed a multi-station interactive exhibit system for their Murray Hill, NJ research facility. The system had to sync content across 4 BrightSign digital signage players (3 interactive tables + 1 passive overhead display) while maintaining 99%+ uptime despite network instability in a museum environment.

Technical constraints:

  • BrightSign XT4 players run embedded Linux (limited npm package support)
  • No guarantee of stable network connection to Strapi headless CMS
  • Main table interactions must propagate to overhead display in <100ms
  • ES6 modules incompatible with BrightSign's default file:/// protocol
  • Asset pool limited to 10GB on SD card storage


πŸ› οΈThe Engineering

1. Offline-First Data Architecture

Built a two-tier caching system to ensure content availability:

  • Fetches JSON from Strapi API (/integration/v1/showcase/hypebox/{uuid}) on boot
  • Writes response to /storage/sd/cmsData.json on successful fetch
  • Falls back to the cached file if the API is unreachable (try/catch around fs.readFileSync; see the sketch after this list)
  • Asset downloader uses BrightSign's AssetPool API with retry logic (3 attempts, 1024 bytes/sec minimum transfer rate)
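
A minimal sketch of the fetch-then-cache fallback described above (the function name loadCMSData and the apiUrl parameter are illustrative, not production identifiers):

// Boot-time load: prefer fresh CMS data, fall back to the SD-card cache
const fs = require('fs');

const CACHE_PATH = '/storage/sd/cmsData.json';

async function loadCMSData(apiUrl) {
  try {
    const res = await fetch(apiUrl);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const data = await res.json();
    // Persist the fresh payload so the next offline boot can reuse it
    fs.writeFileSync(CACHE_PATH, JSON.stringify(data));
    return data;
  } catch (err) {
    // Network or API failure: read the last cached copy instead
    return JSON.parse(fs.readFileSync(CACHE_PATH, 'utf8'));
  }
}

This is an illustrative snippet and does not represent the production code.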

Flattened nested Strapi response objects using a recursive function to extract image URLs:

flatCMSData → filter keys ending in .image.url → build asset collection → download to pool

This is an illustrative snippet and does not represent the production code.

This reduced parse time from ~2.3s to <400ms on cold boot.
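
A sketch of the flattening flow above (flattenCMSData and collectImageUrls are assumed names used for illustration):

// Recursively flatten nested CMS objects into dot-delimited keys,
// e.g. { hero: { image: { url: 'a.png' } } } -> { 'hero.image.url': 'a.png' }
function flattenCMSData(node, prefix = '', out = {}) {
  for (const [key, value] of Object.entries(node || {})) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object') {
      flattenCMSData(value, path, out); // recurse into nested Strapi objects
    } else {
      out[path] = value;                // leaf value keyed by dotted path
    }
  }
  return out;
}

// Keep only keys ending in `.image.url` and hand them to the asset downloader
function collectImageUrls(cmsJson) {
  return Object.entries(flattenCMSData(cmsJson))
    .filter(([key]) => key.endsWith('.image.url'))
    .map(([, url]) => url);
}

This is an illustrative snippet and does not represent the production code.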

2. Custom TCP Messaging Layer

Implemented Node.js net module-based TCP server/client for inter-display communication:

  • Main table = primary (TCP server, port 5000)
  • Overhang display = secondary (TCP client with auto-reconnect)
  • Message format: {type: 'seek_to', route: '/content/123', uuid: '...', id: '...'}
  • Reconnect attempts start at a 5-second interval with exponential backoff

Handled edge cases:

  • Socket cleanup on disconnect (prevents memory leaks)
  • Message queue for clients that reconnect mid-session
  • Multiple-client support using a Set data structure (see the sketch after this list)
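
A minimal sketch of the primary-side server referenced above (the port and message shape follow the description; the broadcast helper and newline-delimited framing are assumptions):

const net = require('net');

// Track every connected secondary display
const clients = new Set();

const server = net.createServer((socket) => {
  clients.add(socket);
  // Clean up on disconnect so dead sockets don't leak
  socket.on('close', () => clients.delete(socket));
  socket.on('error', () => clients.delete(socket));
});

// Push a navigation event to all connected secondaries
function broadcast(message) {
  const payload = JSON.stringify(message) + '\n'; // newline-delimited frames
  for (const socket of clients) {
    socket.write(payload);
  }
}

server.listen(5000);

// Example: propagate a route change from the main table
broadcast({ type: 'seek_to', route: '/content/123', uuid: '...', id: '...' });

This is an illustrative snippet and does not represent the production code.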


3. BrightSign ES Module Compatibility

BrightSign's Chromium fork doesn't support ES6 modules over the file:/// protocol. Solved by:

  • Built a local HTTP server (port 9090) using the Node.js http module (sketched after this list)
  • Served Vite-compiled assets with correct MIME types from /storage/sd/
  • Custom Vite plugin to inject config.js and index.js via DOM manipulation post-build
  • Modified Rollup config: inlineDynamicImports: true (single bundle instead of chunks)
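
A sketch of the on-device static server from the list above (the MIME table is trimmed and the routing simplified for illustration):

const http = require('http');
const fs = require('fs');
const path = require('path');

const ROOT = '/storage/sd';
const MIME = {
  '.html': 'text/html',
  '.js': 'text/javascript', // correct type so module scripts load
  '.css': 'text/css',
  '.json': 'application/json',
  '.png': 'image/png',
  '.jpg': 'image/jpeg',
  '.mp4': 'video/mp4',
};

http.createServer((req, res) => {
  const urlPath = req.url === '/' ? '/index.html' : req.url.split('?')[0];
  const filePath = path.join(ROOT, urlPath);

  fs.readFile(filePath, (err, data) => {
    if (err) {
      res.writeHead(404);
      return res.end('Not found');
    }
    const type = MIME[path.extname(filePath)] || 'application/octet-stream';
    res.writeHead(200, { 'Content-Type': type });
    res.end(data);
  });
}).listen(9090); // Vite build now served over http:// instead of file:///

This is an illustrative snippet and does not represent the production code.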

4. Device-Specific Config Injection

Each BrightSign unit has a unique serial number, so serials were mapped to API UUIDs at boot:

// index.html boot sequence
window.serialNumber → lookup table → window.api_uuid → fetch(API_URL + uuid)

This is an illustrative snippet and does not represent the production code.

This allowed deploying identical code to all 4 units while fetching unit-specific content.
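
A sketch of that boot sequence (the serial numbers, UUIDs, and API_URL host are placeholders; window.serialNumber is assumed to be populated before this runs):

// Map each unit's serial number to its showcase UUID
const SERIAL_TO_UUID = {
  'XT4-SERIAL-MAIN': 'uuid-main-table',
  'XT4-SERIAL-SIDE-1': 'uuid-side-table-1',
  'XT4-SERIAL-SIDE-2': 'uuid-side-table-2',
  'XT4-SERIAL-OVERHANG': 'uuid-overhang-display',
};

const API_URL = 'https://cms.example.com/integration/v1/showcase/hypebox/'; // placeholder host

window.api_uuid = SERIAL_TO_UUID[window.serialNumber];

fetch(API_URL + window.api_uuid)
  .then((res) => res.json())
  .then((cmsData) => {
    // Hand the unit-specific content to the Vue app
    window.cmsData = cmsData;
  });

This is an illustrative snippet and does not represent the production code.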

5. Performance Optimization

  • Refactored Three.js background animations from Vue composables to vanilla JS (cut ~970 lines from the bundle)
  • Idle-state timer (300 s) pauses animations to reduce GPU load (sketched after this list)
  • Implemented asset protection: assetPool.protectAssets("cms_collection", dataCollection) prevents cache eviction during content updates
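
A sketch of the idle timer mentioned above (the render-loop hooks stand in for the real Three.js loop):

const IDLE_TIMEOUT_MS = 300 * 1000; // 300 s of no touch input
let rafId = null;
let idleTimer = null;

function renderFrame() {
  // ...Three.js renderer.render(scene, camera) would run here...
  rafId = requestAnimationFrame(renderFrame);
}

function pauseAnimation() {
  cancelAnimationFrame(rafId); // stop the loop to reduce GPU load
  rafId = null;
}

function resetIdleTimer() {
  clearTimeout(idleTimer);
  if (rafId === null) renderFrame(); // resume on the first touch after idle
  idleTimer = setTimeout(pauseAnimation, IDLE_TIMEOUT_MS);
}

// Any touch on the table counts as activity
window.addEventListener('pointerdown', resetIdleTimer);
resetIdleTimer();

This is an illustrative snippet and does not represent the production code.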


🚀 The Impact

Deployed to production at Nokia Bell Labs Murray Hill campus

  • Uptime: System operates offline for days if network drops, falling back to cached content
  • Sync latency: <50ms route propagation between main table and overhead display
  • Asset management: Handles 10GB asset pool with automatic eviction and re-download
  • Build optimization: Reduced deployment artifact to 8.8MB (from initial 23MB)
  • Multi-environment deployment: 4 AWS S3 buckets (dev/prod × main/overhang) with CLI-based sync

DevOps pipeline:

  • Bitbucket Pipelines → Docker build → ZIP with MD5/SHA256 checksums → Upload to Experience Manager API
  • Deployment time: ~4 minutes from commit to live