There’s a specific kind of intellectual dishonesty that happens when you fall in love with your own idea. You stop asking whether it’s good and start asking whether people understand it yet. You mistake enthusiasm for validation.
I wanted to stress-test Resonance — a product that connects biometric sensing to real-time generative multi-sensory output — before I committed serious resources to it. So I ran a structured adversarial debate. Four rounds. No softening. Here’s the full transcript and what I’m actually going to act on.
What is Resonance?
Resonance is an adaptive personal environment engine that dynamically generates ambient soundscapes, evolving visuals, and synchronised haptics in real time, based on your biometric state (HRV, skin conductance) and activity context. Think of it as an OS-level layer that makes your technology emotionally responsive to you. It builds a personalised “Resonance Profile” that deepens over time, creating lock-in analogous to Spotify Wrapped but grounded in your physiology, not your listening history.
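To make the data flow concrete, here's a minimal sketch of the core loop: biometric readings in, generative engine parameters out. Every name, field, and threshold below is illustrative — none of it is the real Resonance API.

```python
from dataclasses import dataclass

@dataclass
class BiometricSample:
    """One reading from the wearable (field names are hypothetical)."""
    hrv_rmssd_ms: float         # heart-rate variability, RMSSD in milliseconds
    skin_conductance_us: float  # electrodermal activity in microsiemens
    activity: str               # coarse context, e.g. "focus", "commute", "rest"

@dataclass
class EnvironmentParams:
    """Parameters handed to the generative audio/visual/haptic engines."""
    tempo_bpm: int
    brightness: float    # 0.0-1.0 visual intensity
    haptic_rate_hz: float

def map_state(sample: BiometricSample) -> EnvironmentParams:
    # Toy mapping: lower HRV (a rough arousal proxy) -> calmer output.
    arousal = max(0.0, min(1.0, (60.0 - sample.hrv_rmssd_ms) / 60.0))
    return EnvironmentParams(
        tempo_bpm=int(60 + 30 * (1.0 - arousal)),
        brightness=0.3 + 0.5 * (1.0 - arousal),
        haptic_rate_hz=0.5 + 1.5 * (1.0 - arousal),
    )
```

A hand-written mapping like this is only the starting point — the whole pitch is that the mapping is *learned* per user, which is where the debate below goes.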
Marcus — Ruthless VC
“Personal Atmosphere” isn’t a category — it’s a mood board with a REST API. You’re building an expensive intersection of three markets that already exist: wellness apps, ambient music, OS personalisation. Spotify, Apple, and Calm have the distribution, the data, and the brand trust to ship 80% of this. Why does a standalone app win? What’s your actual wedge that Apple can’t replicate in a watchOS update?
Ava — Visionary Advocate
The wedge is the Resonance Profile — the personalised aesthetic fingerprint that deepens over six months. Apple doesn’t have a reinforcement learning loop that maps your biometric arousal markers to generative output parameters. They have playlists. Curating existing content is fundamentally different from generating an environment that literally learns your emotional fingerprint. That’s not a feature Apple ships in a point release.
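Ava's “reinforcement learning loop” is doing a lot of work here, so it's worth pinning down the simplest version of what she could mean: an epsilon-greedy bandit over environment presets, rewarded by biometric change after a session. The preset names and reward definition are my assumptions, not anything in the PRD.

```python
import random

PRESETS = ["deep_calm", "soft_focus", "bright_energy"]

class PresetBandit:
    """Epsilon-greedy bandit: learns which preset best improves a user's
    biometric reward signal (e.g. HRV delta measured after a session)."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.counts = {p: 0 for p in PRESETS}
        self.values = {p: 0.0 for p in PRESETS}  # running mean reward per preset

    def choose(self) -> str:
        # Explore with probability epsilon, otherwise exploit the best preset.
        if random.random() < self.epsilon:
            return random.choice(PRESETS)
        return max(PRESETS, key=lambda p: self.values[p])

    def update(self, preset: str, reward: float) -> None:
        # Incremental mean update: no replay buffer, no gradients.
        self.counts[preset] += 1
        n = self.counts[preset]
        self.values[preset] += (reward - self.values[preset]) / n
```

Even this toy version exposes the problem Marcus raises next: the reward arrives once per session, so “deepens over six months” means the learner sees at most a few hundred data points before the moat is supposed to exist.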
Marcus — Ruthless VC
You just described a 6-month lock-in strategy as your moat. That means your churn problem is catastrophic in months 1–5. If the product only becomes “irreplaceable” after six months of use, what’s your D30 retention story? You’re targeting 55% — show me one consumer wellness app that hits 55% D30 without a daily habit anchor like a checklist or social feed. Calm and Headspace sit at 30–35%.
Marcus — Sceptical CTO
Real-time on-device RL inference. AudioCraft generation at 48kHz. Metal shaders at 60fps. HealthKit at 1Hz. All simultaneously. Under 5% battery per hour combined. That’s not a PRD requirement — that’s a physics problem. AudioCraft alone runs at 3–5x slower than real-time on an M-series chip in benchmark conditions. You’re claiming sub-100ms end-to-end latency on an A15 Bionic. Your entire technical architecture is written as if these constraints don’t interact.
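Marcus's “physics problem” framing can be checked with back-of-envelope arithmetic. The battery capacity is the published iPhone 14 figure; every per-subsystem draw below is an illustrative assumption, not a measurement — the point is only that plausible numbers blow past the budget.

```python
# Back-of-envelope check of the "<5% battery per hour" claim.
BATTERY_WH = 12.7        # approx. iPhone 14 battery (3279 mAh * ~3.87 V)
BUDGET_FRACTION = 0.05   # the PRD's 5%-per-hour ceiling

budget_w = BATTERY_WH * BUDGET_FRACTION  # continuous power budget in watts

# Hypothetical steady-state draws for each subsystem, in watts:
draws = {
    "audio_model_inference": 0.8,   # distilled generative audio model
    "metal_shaders_60fps":   0.6,   # evolving visuals
    "rl_inference":          0.1,   # on-device policy updates
    "bluetooth_healthkit":   0.05,  # 1 Hz biometric ingestion
}
total_w = sum(draws.values())
print(f"budget: {budget_w:.2f} W, assumed draw: {total_w:.2f} W")
print("within budget" if total_w <= budget_w else "over budget")
```

Under these assumptions the stack draws roughly 2.4x the budget. The assumptions could be wrong in either direction — which is exactly why the prototype recommendation below comes first.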
Ava — Visionary Advocate
The AudioCraft claim is based on the full model. A fine-tuned, distilled variant optimised for ambient generation — lower token count, constrained parameter space — is a fundamentally different inference profile. Teams at Meta and Stability have demonstrated real-time light variants. The architecture assumes purpose-built model compression, not off-the-shelf AudioCraft. That’s a standard production engineering challenge, not a showstopper.
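One way to sanity-check Ava's distillation argument is throughput arithmetic. The codec frame rate and the achievable distillation speedup below are assumptions I'm making for illustration; the full-model real-time factor is the benchmark figure Marcus cites.

```python
# Rough real-time feasibility check for distilled audio generation.
# Assumption: a neural audio codec emitting 50 latent frames per second
# of 48 kHz audio, decoded as fast as the model can produce frames.
FULL_MODEL_RTF = 4.0   # real-time factor: 4.0 = 4x slower than real time
required_speedup = FULL_MODEL_RTF / 1.0  # speedup needed to hit RTF 1.0

# If distillation + quantisation deliver, say, a 3x speedup:
assumed_distill_speedup = 3.0
resulting_rtf = FULL_MODEL_RTF / assumed_distill_speedup
print(f"need {required_speedup:.0f}x; assumed distillation gives "
      f"{assumed_distill_speedup:.0f}x -> RTF {resulting_rtf:.2f}")
```

With a 3x speedup the distilled model still lands above real time — before adding the visual and RL workloads. Ava's claim requires the speedup to exceed 4x *and* leave power headroom, which is precisely the number the proof-of-concept has to produce.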
Marcus — Sceptical CTO
“Standard production engineering challenge.” That phrase is doing a lot of work: it’s 18–24 months of ML engineering just to find out whether the performance targets you’ve specified are achievable. There’s zero proof-of-concept data in this PRD. Also: stress pushes skin conductance up and HRV down. So does deep flow. How does your engine know the difference between “I’m in deep creative focus” and “I’m having an anxiety attack”?
Marcus — Regulator
You’ve built a product that reads biometric data, derives emotional state inferences, stores a detailed psychological profile, and then uses that profile to manipulate physiological arousal in real time. Under UK GDPR Article 9, biometric data used to infer emotional states is special category data. The EU AI Act’s prohibited practices in Article 5 include “subliminal techniques beyond a person’s consciousness” that materially distort behaviour. Your Anticipation Engine — which shifts the environment before the user consciously realises it — is a textbook description of that provision.
Ava — Visionary Advocate
The on-device processing architecture addresses this directly. Raw biometric data never leaves the device, the Resonance Profile is encrypted and user-controlled, and the product’s purpose is explicitly user-beneficial personalisation. Regulatory frameworks distinguish between manipulation that harms the user and personalisation the user consents to and controls.
Marcus — Regulator
You’re a first-time consumer app arguing to the ICO that a product reading people’s physiological stress signals and adjusting their environment subconsciously is “user-beneficial personalisation.” That legal argument costs £300k in regulatory counsel and 18 months of back-and-forth. Also: your Resonance Test uses Web Bluetooth to grab wearable data from a first-time visitor with no account. That’s a GDPR consent nightmare before you’ve acquired a single paying user.
Marcus — Ruthless VC
You need £2.5M MRR by month 12. At £12/month Pro, that’s ~208,000 paying subscribers. Your funnel: 2M Resonance Test completions × 10% conversion = 200K paid users. For context, Notion took 3 years to hit 1M users with a fundamentally viral B2B product and a massive PLG machine. You’re a consumer wellness app with no social loop, no network effect, and a product that by design doesn’t interrupt the user with engagement prompts. How do you acquire 2M users in 12 months?
Ava — Visionary Advocate
The Resonance Test launches before the app (Q1 2027, app GA Q2 2027). The social sharing mechanic creates the viral loop: your Resonance Profile card is a personality test result for the AI generation era. People shared Spotify Wrapped obsessively. People share Myers-Briggs obsessively. A personalised aesthetic identity card with a custom soundscape is the same psychological hook, but interactive and generative.
Marcus — Ruthless VC
Spotify Wrapped works because 400M people already use Spotify daily. Myers-Briggs works because it’s been culturally seeded for 50 years. You’re asking a first-time visitor to grant Web Bluetooth access to a product they’ve never heard of, sit through a 3-minute calibration, and then share a card that says “your harmonic preference is Dorian mode, BPM 73.” That’s not a viral moment — that’s a niche early-adopter moment. The 2M completions figure is not grounded in anything.
What I’m actually going to act on

1. Existential: Build a Technical Prototype Before Everything Else
The battery / latency / generation quality triangle is the single existential risk of this product. Allocate three months and a small technical team to one question only: can real-time distilled audio generation and on-device RL inference run simultaneously on an iPhone 14 within the stated battery constraints? If the answer is no, the PRD is fiction. No prototype means no credible investor conversation.
2. Existential: Get Regulatory Pre-Clearance Before Building the Anticipation Engine
The EU AI Act subliminal techniques provision and UK GDPR special category data classification are genuine blockers. Engage a DPA and specialist AI regulation counsel before building the pre-conscious environment shifting feature. Get written legal opinion. This isn’t a post-launch cleanup — it can kill the product retroactively.
3. High Priority: Run a Lean Funnel Experiment Now
2M Resonance Test completions is a belief dressed as a target. Strip down a version of the Resonance Test to a landing page experiment today — even with a static soundscape. Measure actual conversion from page visit to test completion, actual share rate on the profile card, and actual click-through on the app CTA. Build the financial model on real coefficients, not optimistic assumptions.
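“Real coefficients” means writing the funnel as a function of measurable rates, so each placeholder can be swapped for a landing-page number as it arrives. The starting values below are the PRD's optimistic placeholders plus my own illustrative guesses — none of them are data.

```python
def paid_users(visits: float,
               test_completion_rate: float,
               share_rate: float,
               viral_factor: float,
               paid_conversion: float) -> float:
    """Visitors who finish the test, plus viral re-entry completions,
    times paid conversion. Every rate is a coefficient to be measured."""
    completions = visits * test_completion_rate
    viral_completions = completions * share_rate * viral_factor
    return (completions + viral_completions) * paid_conversion

# The PRD's implicit model: 2M completions, 10% conversion -> 200K paid.
prd = paid_users(visits=2_000_000, test_completion_rate=1.0,
                 share_rate=0.0, viral_factor=0.0, paid_conversion=0.10)

# More conservative placeholders until measured (assumptions, not data):
lean = paid_users(visits=2_000_000, test_completion_rate=0.25,
                  share_rate=0.10, viral_factor=0.5, paid_conversion=0.03)
print(prd, lean)
```

Under the conservative placeholders the same 2M visits yield ~16K paying users, not 200K — a 12x gap that the landing-page experiment either closes or confirms.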
4. High Priority: Solve the Signal Ambiguity Problem with Data
The RL reward signal conflates flow states and stress states — both present the same arousal signature: skin conductance up, HRV suppressed. “The model learns over time” doesn’t resolve this; it’s a fundamental signal ambiguity problem that could make anxiety worse, not better. You need a labelled dataset and a validated disambiguation model before beta. This is also your largest liability from a user harm and regulatory perspective.
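A minimal sketch of the ambiguity and what a disambiguator needs. The context feature (variability of typing cadence) and every threshold here are hypothetical — a real system would learn them from the labelled dataset this recommendation calls for.

```python
def arousal_only(hrv_ms: float, eda_us: float) -> str:
    """What the PRD's reward signal can see: arousal, not its valence."""
    aroused = hrv_ms < 40.0 and eda_us > 3.0
    return "aroused" if aroused else "baseline"

def with_context(hrv_ms: float, eda_us: float,
                 typing_cadence_cv: float) -> str:
    """Adds one hypothetical context feature: coefficient of variation of
    typing cadence (steady in flow, erratic under anxiety)."""
    if arousal_only(hrv_ms, eda_us) == "baseline":
        return "baseline"
    return "flow" if typing_cadence_cv < 0.3 else "anxiety"

# Identical biometrics, opposite states:
print(with_context(32.0, 4.5, typing_cadence_cv=0.15))  # steady typing
print(with_context(32.0, 4.5, typing_cadence_cv=0.60))  # erratic typing
```

The two calls feed the exact same HRV and skin conductance readings into the model and come out with opposite labels — which is the whole argument for collecting labels before beta, not after.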
5. Financial Model: Rebase the Retention Targets or Redesign for Habit
55% D30 retention is not achievable for a passive ambient product without a deliberate daily habit trigger. Either redesign the free tier around a sticky daily ritual — a “Morning Calibration” push notification with a 5-minute session — or lower the retention target to 30–35% and model your unit economics accordingly. Don’t raise capital or set burn rate against a target that category data doesn’t support.
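Here's what the rebasing looks like in steady state. The translation of D30 retention into monthly churn below is a deliberately loose assumption for illustration; the price and MRR target are the article's own figures.

```python
def steady_state_subs(monthly_new_paid: float, monthly_churn: float) -> float:
    """In steady state, additions equal losses: base * churn = new,
    so the sustainable base is new / churn."""
    return monthly_new_paid / monthly_churn

PRICE = 12.0              # GBP / month, Pro tier
TARGET_MRR = 2_500_000.0  # the PRD's month-12 target
needed_subs = TARGET_MRR / PRICE  # ~208,333 paying subscribers

# Loose mapping of D30 retention to monthly churn (assumption, not data):
for d30_label, churn in [("55%", 0.45), ("33%", 0.67)]:
    new_needed = needed_subs * churn
    print(f"D30 {d30_label}: ~{new_needed:,.0f} new paying users/month to hold the base")
```

At category-typical retention, holding ~208K paying subscribers means replacing roughly 140K of them every month — which is the number the burn rate should be set against, not the 55% version.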
Blind spot check: what are you not considering?
The competitive response from Apple specifically. Apple has HealthKit, Core ML, Metal, AirPods spatial audio, and watchOS haptics — every single technical component of your stack exists inside one company that controls the OS and the hardware.
The moment Resonance shows meaningful traction, this becomes an Apple Watch feature. If your moat is the depth of the Resonance Profile, the honest question is: how long does that moat last when Apple starts building an equivalent model six months after you prove the concept works?
That’s the question I haven’t answered yet. And until I can, it’s the one that matters most.