Architecture
How Shelf is built.
Shelf runs as four layers that each do one job. The dashboard in your Shopify admin is a read-only view of work that's already happened — never a live call to AI or to competitor sites.
The four layers
1. Collection
Respectful, rate-limited crawls of the public storefronts of the brands you follow. Product pages, homepages, banners, email-capture flows — the same pages anyone shopping the brand would see. No logins, no scraped APIs.
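In outline, the crawl loop is deliberately boring: plain GETs of public pages with a fixed pause between requests. A minimal TypeScript sketch, where the delay value and page list are illustrative rather than Shelf's actual configuration:

```ts
// Illustrative only: a fixed-delay crawl of public storefront pages.
// The 2-second delay and the URLs are hypothetical, not Shelf's settings.
const PAGES = [
  "https://example-brand.com/",
  "https://example-brand.com/collections/all",
];
const DELAY_MS = 2_000; // at most one request every two seconds per host

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function crawl(pages: string[]): Promise<Map<string, string>> {
  const html = new Map<string, string>();
  for (const url of pages) {
    const res = await fetch(url); // plain GET, no login, no private API
    if (res.ok) html.set(url, await res.text());
    await sleep(DELAY_MS); // the rate limit between requests
  }
  return html;
}
```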
2. Processing
Collected pages are normalized into a structured price and promo history. Every crawl appends to that history; nothing is rewritten. Over time this compounds into the record behind trend signals and category reads — a dataset competitors can't reproduce without having collected the same history.
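One way to picture "append-only" here: each crawl pushes a new observation row and nothing is ever mutated in place, so the full history stays reconstructible. The shape below is a sketch, not Shelf's actual schema:

```ts
// Hypothetical shape of one normalized observation; field names are illustrative.
interface PriceObservation {
  brand: string;
  productUrl: string;
  priceCents: number;
  promoText: string | null; // e.g. "20% off sitewide", null when no promo
  observedAt: string;       // ISO timestamp of the crawl that saw it
}

// Append-only: rows are pushed, never updated or deleted.
const history: PriceObservation[] = [];

function recordObservation(obs: PriceObservation): void {
  history.push(obs);
}
```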
3. Intelligence
A background pipeline asks our AI provider to synthesize what the category is doing — promos, email cadence, launches, incentives. Results are cached and tagged with the exact prompt version that produced them.
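A single pipeline run, reduced to a sketch: call the model once in the background, then cache the result alongside the prompt version that produced it. The Anthropic SDK call below is real, but the model string, prompt text, and in-memory cache are stand-ins:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const PROMPT_VERSION = "v7"; // hypothetical version tag

// Stand-in cache keyed by category; any KV store behaves the same way here.
const cache = new Map<string, { summary: string; promptVersion: string }>();

async function runCategoryRead(category: string, crawlData: string): Promise<void> {
  const msg = await client.messages.create({
    model: "claude-sonnet-4-5", // illustrative; pin whichever Sonnet you run
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: `Summarize this category's promos, launches, and email cadence:\n${crawlData}`,
      },
    ],
  });
  const first = msg.content[0];
  const summary = first?.type === "text" ? first.text : "";
  // Tag the cached result with the exact prompt version that produced it.
  cache.set(category, { summary, promptVersion: PROMPT_VERSION });
}
```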
4. Delivery
The Shelf dashboard inside your Shopify admin reads from cache only. No live AI calls at render time. No waiting for competitor sites to respond. Fast page loads, predictable costs, no AI failure modes leaking into your session.
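The render path then collapses to a lookup. A sketch, reusing the hypothetical cache from the pipeline sketch above; note that no AI client is even in scope here:

```ts
// The dashboard's read path: a cache lookup, nothing else. With no AI
// client in scope, there is nothing to time out or spike in cost at render.
type CachedRead = { summary: string; promptVersion: string };

declare const cache: Map<string, CachedRead>; // written only by the pipeline

function renderCategoryRead(category: string): CachedRead | null {
  return cache.get(category) ?? null; // null renders as "no data yet"
}
```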
The two sides don't touch.
The merchant-facing app and the AI-facing pipeline are physically separated. They run in different containers, on different schedules, with different credentials. The app that serves your dashboard never has AI-provider credentials loaded at request time. The pipeline that calls AI never holds a merchant session.
A failure on one side can't cascade into the other. If our AI provider is down, your dashboard still loads. No merchant request ever waits on an AI call. The separation is architectural, not a policy.
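One concrete way to enforce the credential split is to have each service validate its own environment at startup and fail fast if a foreign credential is present. The variable names below are hypothetical:

```ts
// Hypothetical startup checks. The dashboard must hold session material and
// must NOT hold an AI key; the pipeline is the mirror image.
function assertDashboardEnv(env = process.env): void {
  if (!env.SHOPIFY_SESSION_SECRET) {
    throw new Error("dashboard: missing SHOPIFY_SESSION_SECRET");
  }
  if (env.ANTHROPIC_API_KEY) {
    throw new Error("dashboard: AI credentials must never be loaded here");
  }
}

function assertPipelineEnv(env = process.env): void {
  if (!env.ANTHROPIC_API_KEY) {
    throw new Error("pipeline: missing ANTHROPIC_API_KEY");
  }
  if (env.SHOPIFY_SESSION_SECRET) {
    throw new Error("pipeline: merchant session secrets must never be loaded here");
  }
}
```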
The AI pipeline
Shelf uses Anthropic's Claude API for its market-read layer. We run a current Sonnet model and upgrade deliberately — we don't chase model releases.
Fallback behavior. If Anthropic is unreachable, Shelf returns your last successful analysis rather than showing broken or half-finished output. If no safe cached response exists, the scan fails explicitly rather than silently serving misleading output.
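The fallback order is mechanical enough to sketch. `callModel` and `lastSuccessful` below are hypothetical stand-ins for the real pipeline pieces:

```ts
// Try fresh, fall back to the last good analysis, otherwise fail loudly.
async function analyzeWithFallback(
  category: string,
  callModel: (category: string) => Promise<string>,
  lastSuccessful: Map<string, string>,
): Promise<string> {
  try {
    const fresh = await callModel(category);
    lastSuccessful.set(category, fresh); // becomes the next fallback
    return fresh;
  } catch {
    const cached = lastSuccessful.get(category);
    if (cached !== undefined) return cached; // last successful analysis
    // No safe cached response: fail explicitly, never serve a guess.
    throw new Error(`scan failed for "${category}" and no cached analysis exists`);
  }
}
```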
Prompt versioning. Every AI analysis is tagged with the exact prompt version that produced it. When we improve our prompts, old analyses are re-run rather than served alongside new ones. You always see a coherent read, not a mix of old and new methodology.
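Read-side, "never mix methodologies" can be as simple as filtering on the current prompt version and queuing re-runs for anything older, as in this sketch with hypothetical names:

```ts
interface Analysis { category: string; summary: string; promptVersion: string }

declare function enqueueRerun(category: string): void; // hypothetical job queue

// Serve only analyses from the current prompt version; queue the rest.
function serveable(all: Analysis[], currentVersion: string): Analysis[] {
  for (const a of all) {
    if (a.promptVersion !== currentVersion) enqueueRerun(a.category);
  }
  return all.filter((a) => a.promptVersion === currentVersion);
}
```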
Cache refresh cadence
Cached intelligence refreshes on each crawl cycle. Refresh frequency scales with tier — weekly on Starter, daily on Pro, real-time signals on Enterprise.
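The tier-to-cadence mapping is a small lookup. The intervals below mirror the tiers named above, with Enterprise's real-time signals approximated as a short poll purely for illustration:

```ts
// Refresh interval per tier, in milliseconds. The Enterprise value is an
// illustrative near-real-time poll, not Shelf's actual mechanism.
const REFRESH_INTERVAL_MS: Record<"starter" | "pro" | "enterprise", number> = {
  starter: 7 * 24 * 60 * 60 * 1000, // weekly
  pro: 24 * 60 * 60 * 1000,         // daily
  enterprise: 5 * 60 * 1000,        // stand-in for real-time signals
};
```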
Why this matters for a technical review. The architecture optimizes for predictability: no live AI at render time means no surprise cost spikes, no surprise latency, no surprise failures visible to your team. The dashboard behaves like any read-from-cache web app, because that's what it is.