AHA spent $150M+ on campaigns with zero structural visibility. Fiserv scored 46/100 on brand power despite Fortune 500 status. FrameBright had 10 structural gaps invisible to their analytics stack. This is systemic.
| Client | What They Thought | What We Found |
|---|---|---|
| American Heart Assoc. | Strong brand, healthy engagement metrics | Structural gaps in competitive positioning invisible to campaign analytics. 5/5 satisfaction on our diagnostic. |
| Fiserv (Fortune 500) | Market leader by revenue and reach | Scored 46/100 on structural brand power. Growth stalled because the system was broken, not the campaigns. |
| FrameBright | Growth metrics looked healthy | 10 structural gaps in the brand ecosystem. Their marketing team had no framework to detect them. |
Campaign optimization without system-level diagnosis. Every one of these organizations was optimizing inside a brand system without knowing whether the system itself was sound. It's like prescribing medication without running a blood test.
Generic AI tools produce generic insights. LLMs know everything in general and nothing in particular about structural brand dynamics. Domain-specific intelligence requires purpose-built infrastructure.
SHUR IQ is a structural brand intelligence platform. We encode brand competitive dynamics as scored, queryable, time-series data inside a proprietary knowledge graph. Every engagement and every nightly automation cycle makes the system smarter.
Domain experts define the schema. 12 classes, 22 properties encoding competitive dynamics no public dataset covers.
Queryable, scored, temporal. SPARQL queries across 76,000+ structured facts with week-over-week trajectories.
Grows nightly via $3 auto-research. Cross-vertical transfer proven. The database is the moat.
No existing system combines these three layers for brand diagnostics. L2 (acquired by Gartner for $100M+) is the closest precedent, but L2 ran on human researchers. SHUR IQ runs on autonomous agents and a knowledge graph that compounds.
The SBPI (Structural Brand Power Index) scores brand competitive position across five dimensions on a 100-point scale, updated weekly with full attestation provenance.
Campaign metrics are lagging indicators. Structural position is a leading indicator. A brand with strong distribution power and weak content strength will behave predictably. SBPI encodes that structural logic as scored, queryable data.
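A five-dimension, 100-point composite like the SBPI can be sketched as a weighted sum. The dimension names and weights below are illustrative assumptions, not the proprietary methodology; the point is that structural position reduces to scored, comparable numbers:

```python
# Hypothetical dimension names and weights; the real SBPI
# dimensions and calibration are proprietary to SHUR IQ.
WEIGHTS = {
    "distribution_power":  0.25,
    "content_strength":    0.20,
    "audience_equity":     0.20,
    "pricing_power":       0.20,
    "innovation_velocity": 0.15,
}

def sbpi(dimension_scores: dict[str, float]) -> float:
    """Weighted composite on a 100-point scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 2)

# A brand strong on distribution but weak on content (illustrative).
score = sbpi({
    "distribution_power":  70,
    "content_strength":    25,
    "audience_equity":     50,
    "pricing_power":       45,
    "innovation_velocity": 40,
})
print(score)  # 47.5
```

Because the output is a single scalar per brand per week, trajectories and rankings fall out of the same data structure.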
| Tier | What | Timeline | Price |
|---|---|---|---|
| Tier 1 | Outside-in structural diagnosis using public data | 3–5 weeks | $40K–$75K |
| Tier 2 | Gap validation with internal data (CAC, LTV, retention) | 6–10 weeks | $90K–$150K |
| Tier 3 | System-level design that repairs value flows | 8–12 weeks | $125K–$250K |
| Monitoring | Quarterly structural drift detection + persistent scoring | Ongoing | $8K–$20K/mo |
The client gets a paid deliverable (the Artifact). SHUR IQ gets structured knowledge deposited into a proprietary graph (the Asset). Revenue on the front end. Compounding IP on the back end. Every engagement makes the next one more valuable.
| Client | Sector | Outcome | Report |
|---|---|---|---|
| American Heart Assoc. | Healthcare Nonprofit | Brand power score, viz suite, 5/5 satisfaction, asked for 3 more verticals | View ↗ |
| Fiserv | Fintech (Fortune 500) | Brand diagnosis (46/100 Distressed) led to $40K naming follow-on | View ↗ |
| FrameBright | Child Safety Tech | Full pipeline: 10 gaps, value prop canvas, active engagement | View ↗ |
| MicroCo | Entertainment | Weekly intelligence for 22 companies, ongoing SBPI scoring | View ↗ |
| Deckers Brands | Consumer (CPG) | Portfolio gap analysis (HOKA 75, UGG 64), C-suite showcase | View ↗ |
| Long Zhu | Education/Gaming | Layered ontology, BMC overlay, value flow analysis | View ↗ |
| AFDVI | Nonprofit | Donation strategy, $100K–$500K revenue opportunity mapped | View ↗ |
Built by a 4-person team. Every deliverable is live on the web. The pipeline produced this. Now we scale it.
The auto-research pipeline produces the same structural extraction as a $40K client engagement for $3 in compute. Locally hosted models. Nightly cycle. At scale: 50,000+ reports per year for $150K total compute. The same intelligence output that would cost a competitor millions in analyst time.
$150K/year in compute produces the same ontological output as thousands of client engagements. The $3 auto-research loop decouples knowledge graph growth from the client sales cycle. The database grows every night whether we have clients that week or not.
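The compute economics above reduce to simple arithmetic, worth making explicit:

```python
# Back-of-envelope check on the stated compute economics.
cost_per_report = 3           # dollars per auto-research run, local models
reports_per_year = 50_000     # at-scale nightly throughput

annual_compute = cost_per_report * reports_per_year
print(annual_compute)  # 150000 -> the $150K/year figure
```

At $3 per report, the 50,000-report annual target lands exactly on the quoted $150K compute budget.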
1,694 companies tracked across 3 verticals with 76,000+ structured facts. No public dataset encodes structural brand dynamics. This schema has no equivalent in Wikidata, Schema.org, or any LLM's training data.
Node Density grows with every engagement and every nightly cycle. Ontological Alpha deepens with every new vertical. Extraction Efficiency improves with every experiment. Decay Rate dictates refresh cadence. These are quantifiable moat metrics, not claims.
ReelShort, DramaBox, Netflix, Disney, Amazon, and 17 more. Full structural scoring with week-over-week deltas and published rankings.
Y Combinator batch analysis. Same scoring methodology. Same nightly cycle. Same knowledge graph substrate. Applied in 2–3 days.
8 entities scored from web-sourced data (BTS 87.55 to NewJeans 32.25). Same structural dimensions, same pipeline, different industry. Tests the methodology on established markets alongside emerging categories.
No new code. No new infrastructure. Company list and dimension weight calibration are the only per-vertical inputs. Each new vertical adds to the shared graph. The system gets smarter at analyzing any vertical because it has already learned the structural behavior of markets.
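The claim that a new vertical needs only a company list and weight calibration can be sketched as pure configuration. The function, field names, and weights below are illustrative assumptions, not the actual onboarding interface:

```python
# Sketch: onboarding a vertical is configuration, not code.
def new_vertical(name: str, companies: list[str],
                 weights: dict[str, float]) -> dict:
    """The only per-vertical inputs: a company list and calibrated weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return {"name": name, "companies": companies, "weights": weights}

kpop = new_vertical(
    "kpop",
    ["BTS", "NewJeans"],          # company list (illustrative subset)
    {"distribution_power": 0.40,  # hypothetical calibration
     "content_strength":   0.35,
     "audience_equity":    0.25},
)
print(kpop["name"])  # kpop
```

Everything downstream (extraction, scoring, nightly refresh) reads this configuration and writes into the same shared graph.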
Client engagement = Revenue ($40K–$250K) + KG Deposit (structured facts)
Auto-research = $3 compute + KG Deposit (same extraction)
Every engagement generates revenue AND grows the knowledge graph. Auto-research fills gaps between engagements at $3 per run. Both channels deposit the same structured intelligence into the same graph.
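The dual-deposit pattern can be sketched as two channels writing the same fact shape into one shared store. The `Fact` structure, field names, and example values are illustrative, not the actual SHUR IQ schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str
    source: str   # "engagement" or "auto_research"

graph: set[Fact] = set()

def deposit(facts: list[Fact]) -> None:
    # Same graph regardless of which channel produced the facts.
    graph.update(facts)

# Channel 1: paid engagement (revenue + deposit).
deposit([Fact("Fiserv", "sbpi_score", "46", "engagement")])
# Channel 2: $3 nightly auto-research (deposit only).
deposit([Fact("ReelShort", "sbpi_score", "61", "auto_research")])

print(len(graph))  # 2 -- both channels feed one asset
```

The design choice the sketch illustrates: the deposit interface is channel-agnostic, so client work and automation compound into the same asset rather than two silos.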
A competitor who starts later starts with an empty database in a domain where historical trajectories are the product. Our database grows every night at 6:13 AM whether we have clients that week or not. Time cannot be purchased.
Expert-designed schema encoding competitive dynamics. 12 classes, 22 properties. No public equivalent.
Queryable, scored, temporal data. SPARQL queries across 76K+ structured facts. Week-over-week trajectories.
Grows nightly via $3 auto-research. Cross-vertical transfer proven. Each week of history is permanent competitive advantage.
| Player | What They Have | What They Lack |
|---|---|---|
| McKinsey, BCG, Bain | Human expertise, enterprise relationships | No compounding data asset. Knowledge walks out the door. $500K+ per engagement. |
| Brandwatch, Sprout, Talkwalker | Surface analytics at scale | Sentiment and impressions. No structural diagnosis. No competitive architecture. |
| L2 (acquired by Gartner) | Closest precedent. $100M+ scoring digital brand competence. | Human researchers. No autonomous agents. No compounding knowledge graph. No nightly automation. |
| Tier | Volume | Revenue |
|---|---|---|
| Tier 1 | 6 | $360K |
| Tier 2 | 4 | $480K |
| Tier 3 | 3 | $540K |
| Monitoring | 3 | $540K |
| Target | 16 | $1.92M |
Conservative: $1.38M (5/3/2/2). Gross margin: 55–70%. Core team: 3–5 operators.
Budget source: Strategy and transformation budgets, not marketing budgets. SHUR IQ competes with McKinsey project scopes, not Brandwatch subscriptions.
Service revenue grows linearly with headcount and sales effort. The knowledge graph asset grows exponentially through the $3 auto-research flywheel.
By Year 2–3, the cumulative value of the knowledge graph begins to exceed cumulative service revenue. At that inflection, SHUR IQ transitions from a consulting firm with strong tools to an infrastructure asset with a funded acquisition channel.
Post-critical-mass: subscription access to the SHUR IQ Atlas. Clients query the validated knowledge graph for internal simulations. Bloomberg Terminal economics for structural brand intelligence.
Every week of scoring data is a week of competitive history that did not exist before we created it. A competitor who raises $50M next year still starts with zero weeks of structural brand trajectories. Our database is bigger and older. In structural intelligence, time is the moat.
| Category | What It Builds |
|---|---|
| Infrastructure (40%) | Local LLMs, graph databases. The IP factory that runs 50K+ reports/year at $3 each. |
| R&D (30%) | Auto-research pipeline, cross-vertical transfer, accuracy experiments targeting 85%+. |
| Expert Calibration (20%) | Domain experts validate the graph. The human layer that separates verified intelligence from LLM noise. |
| Sales & GTM (10%) | Channel validation. The public ranking IS the marketing (L2 model). |
Limore Shur — limore@weareshur.com
Nuri Djavit — ndjavit@weareshur.com
Shur Creative Partners — Confidential — April 2026
This entire portfolio — 7+ engagements, 15+ deployed sites, 76K structured facts, nightly automation, weekly publications — was produced by four people. Headcount scales linearly. The knowledge graph scales exponentially.
| Category | Allocation | Line Items |
|---|---|---|
| Infrastructure | 40% | High-VRAM local servers (H100s or Mac Studio clusters) for 24,000+ auto-research reports/year. Graph database infrastructure (SPARQL/RDF at billion-node scale). Local LLM hosting (Llama-3-70B quantized). Data pipeline for Common Crawl, Semantic Scholar, industry feeds. |
| R&D / Agentic Tuning | 30% | Karpathy auto-research loop engineering (Seed, Extract, Cross-Reference, Stack Rank, Finalize). TPE parameter optimization (12 parameters, 30 trials). Cross-vertical transfer R&D. 5 queued experiments. Ontological Referee agent for quality gating. |
| Expert Calibration | 20% | Domain expert RLHF (humans validate high-value nodes from auto-research). Ontology validation reviews per vertical. Vertical-specific industry specialists (K-Pop, biotech, fintech). Scoring methodology review for institutional-grade claims. |
| Sales & GTM | 10% | Channel validation for $40K–$250K engagements. Stack ranking publication (inbound driver). Prestige positioning and executive network activation. Beta diagnostic sessions for case studies. |
| Stage | Cost | Capacity |
|---|---|---|
| Machine 1 | $25K | 5 verticals, 500K triples, 3 monitoring clients |
| Machines 2–3 | $50K | 25 verticals, 10M triples, 15 monitoring clients |
| GPU Cluster | $100K | 50 verticals, 100M triples, real-time prediction |
| Billion-Node | $250K–$500K | 100+ verticals, 1B+ triples, S&P-scale |
Capital builds the IP factory, not the consulting team. Machines extract intelligence. Humans bless it. Every dollar invested accelerates the flywheel by lowering the marginal cost of intelligence toward zero.
Detailed model: Use-of-Funds Spreadsheet