The Investor Update That Builds Itself
How I replaced a monthly slide-deck ritual with a live web page, a Cursor skill, and API calls to six data sources
Every month, I used to spend half a day assembling an investor update.
Open RevenueCat. Screenshot the MRR chart. Open Supabase. Run a SQL query I’d half-forgotten. Open Apple Search Ads. Navigate four menus deep to find the CPI. Open a Google Doc. Open Gamma. Start building slides. Realize I formatted it differently from last month. Start over.
The content of the update was never the bottleneck. The assembly was.
So I built a system where the update assembles itself.
What I Wanted
The end state I had in mind was simple:
One command (or one conversation with an AI agent) that pulls metrics from every data source at once
A structured interview about qualitative things the agent can’t know: product updates, strategic shifts, what surprised me
A living web page that renders the data, password-protected for investors
The whole thing done in under 30 minutes
Here’s what I actually built, in enough detail that you could replicate it.
Step 1: The Cursor Skill
The foundation is a Cursor/Claude skill. If you’re not using skills yet, the idea is simple: a markdown file that acts as an instruction set for an AI agent. It describes a workflow, lists the exact shell commands, SQL queries, and API calls to run, and tells the agent what to do with the results.
My investor update skill lives at alma-growth/skills/investor-update/SKILL.md. Here’s the structure:
1. Gather Metrics ← automated, runs in parallel
2. Interview ← the agent asks me 13 questions
3. Generate Output ← structured markdown, ready to render
When I say “run the investor update skill,” the agent reads the file and begins. It doesn’t need further instructions.
Step 2: The Data Sources
The metrics layer hits six sources. Here’s exactly how:
RevenueCat — Subscription Revenue
RevenueCat has a clean V2 API. One call to their metrics overview endpoint returns everything useful: MRR, ARR, active subscriptions, active trials, revenue per platform. No scraping, no dashboards.
curl -s -X GET "https://api.revenuecat.com/v2/projects/[project_id]/metrics/overview" \
-H "Authorization: Bearer $REVENUECAT_API_KEY" \
-H "Content-Type: application/json"
The response includes mrr, active_subscriptions, active_trials, and enough to compute ARR on the spot. The MRR split between monthly and annual plans takes one more query — you cross-reference active subscription counts against known product IDs. No SDK required. Just HTTP.
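The same call works from Python with nothing beyond the standard library. A minimal sketch, mirroring the curl command above — the function names and the x12 annualization helper are mine, not part of the RevenueCat SDK:

```python
import json
import urllib.request


def fetch_overview(project_id: str, api_key: str) -> dict:
    """One GET to the V2 metrics overview endpoint, same as the curl call."""
    req = urllib.request.Request(
        f"https://api.revenuecat.com/v2/projects/{project_id}/metrics/overview",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def annualize(mrr: float) -> float:
    # ARR computed on the spot from the mrr field in the response.
    return round(mrr * 12, 2)
```

The agent runs this once per update; everything downstream is arithmetic on the response.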
Supabase Production — User Metrics
MAU, DAU, total users, retention funnels. All live in the production database. The Supabase MCP (Model Context Protocol) tool lets the agent query production directly without me opening a SQL editor.
The retention query I use is slightly involved: it’s a weekly cohort funnel that traces app_open events through sign_up, activation, and day_7_return. I keep the full query in a references file (references/retention_query.md) so the skill can load it without bloating the main SKILL.md.

The MAU query is three lines:
SELECT COUNT(DISTINCT user_id) as mau
FROM food_items
WHERE created_at >= NOW() - INTERVAL '30 days';
Everything else is variations on that pattern.
Apple Search Ads — ASA Performance
ASA has an API, but it requires a signed JWT with a private key and a specific auth flow. I have a Python fetcher script that handles all of that:
cd alma-growth/ad-optimizers/apple-search-ads
python asa_data_fetcher.py --days 30
Outputs total spend, installs, and CPI for the month. The script handles token generation, pagination, and unit normalization (ASA returns spend in local currency, which I convert).
Meta Ads — Facebook/Instagram
The daily ad audit script handles this. It calls the Facebook Business SDK, pulls spend and performance for all active campaigns, and spits out a summary table.
cd alma-growth/ad-optimizers
python daily_ad_audit.py --days 30 --dry-run --no-ai
--dry-run means it won’t send an email. --no-ai skips the Claude summary step since the investor update skill handles that itself.
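Under the hood, the pull is a single insights call through the Facebook Business SDK plus a bit of summing. A hedged sketch — the SDK calls are real, but the exact fields and params here are my assumptions, not a copy of the audit script:

```python
def summarize_spend(rows: list[dict]) -> float:
    """Insights rows report spend as strings; sum them into a total."""
    return round(sum(float(r["spend"]) for r in rows), 2)


def fetch_campaign_spend(account_id: str, access_token: str) -> list[dict]:
    # Imports deferred so the pure helper above works without the SDK.
    from facebook_business.api import FacebookAdsApi
    from facebook_business.adobjects.adaccount import AdAccount

    FacebookAdsApi.init(access_token=access_token)
    insights = AdAccount(f"act_{account_id}").get_insights(
        fields=["campaign_name", "spend", "impressions"],
        params={"date_preset": "last_30d", "level": "campaign"},
    )
    return [dict(i) for i in insights]
```

One call, one summary table, no dashboard.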
App Store Connect — Downloads
Apple’s App Store Connect Reporter API is the oldest and most annoying of the bunch: it requires a Java jar file and a properties file with API credentials. Once that’s set up, it works:
cd alma-growth/app-store-connect-api
python app_store_connect_reports.py --days 30
Returns total installs, updates, and platform breakdown for the month.
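The Python script is mostly a thin wrapper that shells out to the jar, once per day in the window. A sketch of the command it builds — the argument shape follows Apple’s Reporter documentation, but the helper name and properties filename are assumptions:

```python
def reporter_command(vendor_number: str, date_yyyymmdd: str,
                     properties_file: str = "Reporter.properties") -> list[str]:
    """Builds the Reporter.jar invocation for one day of sales data."""
    report_spec = f"{vendor_number}, Sales, Summary, Daily, {date_yyyymmdd}"
    return ["java", "-jar", "Reporter.jar", f"p={properties_file}",
            "Sales.getReport", report_spec]
```

The script runs each command with subprocess, unzips the gzipped TSV that comes back, and aggregates installs and updates by platform.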
Google Ads
Handled by a companion to the Meta audit script. Spend and click data are extracted the same way.
Step 3: The Interview
After the metrics are gathered, the agent asks me 13 questions, one at a time:
What product features shipped this month?
Any major fixes or reliability improvements?
What are users clearly responding to?
Did you ship or test any monetization changes?
What marketing activities actually happened?
Any partnerships signed, paused, or dropped?
Total ad spend breakdown by platform?
Any team changes?
Current team structure?
Cash in bank at month end?
What went better than expected?
What’s clearly not working or still messy?
Top 3 concrete priorities for next month?
These are the things an API can’t tell you. The agent takes my answers, combines them with the quantitative data, and produces the full update.
The output is structured markdown. Numbers first, narrative second. Same format every month so investors can scan it without re-learning the layout.
Note: I use a voice dictation tool to make it easy to just talk out loud as I answer these questions.
Step 4: The Web Page
This is where it gets interesting. Instead of building a slide deck, I render the update as a React page.
It’s password-protected; a simple client-side check is enough given the context. It looks like this:
At login: Alma logo, a password field, nothing else. Clean.
Once unlocked:
Hero section with month, tagline, one-line summary
Animated metric cards (numbers count up on scroll into view)
Month-over-month comparison table
Revenue breakdown
Ad performance summary by platform
Retention funnel table with weekly cohorts
Major product releases (icons, titles, descriptions)
Monetization strategy section
Financial position with cash breakdown
Two-column “Working Well / Needs Work” reflection
Numbered priority list for next month
The entire thing is roughly 520 lines of React and 300 lines of CSS. It doesn’t use a charting library. Animated numbers are a 40-line custom hook with requestAnimationFrame. The styling uses our existing design tokens.
Updating it takes about 10 minutes: swap out the numbers, update the lists, commit, push. AWS deploys automatically.
What This Replaced
The old workflow involved Gamma.
If you haven’t tried Gamma, it’s an AI presentation tool that generates slides from text input. It’s genuinely good. You paste in a text outline and it produces a styled presentation in 30 seconds. For a while, it was the fastest path from “update notes” to “something I can share.”
But I kept running into the same friction: my update had specific formatting. I wanted numbers animated. I wanted MoM comparisons always side-by-side. I wanted the same layout, month after month, so investors could scan it in 90 seconds. Gamma would always give me something close but different. I’d spend time adjusting. The slides would live in a link that expired or required a login. The numbers were static.
The alternative, building it in HTML and CSS, used to feel like overkill for a two-person startup. It isn’t anymore.
With Cursor and a capable model, I described the page I wanted in plain English. The agent built the initial version in about 20 minutes. I gave feedback. Iterated. By the end of the day it was live and doing exactly what I wanted.
This is a meaningful shift. The definition of software is changing in front of us.
Software used to mean: a non-technical founder has an idea, briefs a developer, the developer codes it over days, deploys it, the founder waits. Or alternatively: a non-technical founder uses a no-code/low-code tool (Gamma, Webflow, Glide, Notion), trades customization for speed.
Those two options still exist. But there’s a third option now: describe what you want to an AI agent in a coding environment, get the code, ship it. The barrier between “I want a thing” and “the thing exists” has dropped dramatically.
I’m not saying this makes software engineers obsolete. The Cursor agent can’t architect a complex distributed system without guidance. It makes mistakes. It needs reviewing. But for a specific class of work (internal tools, dashboards, landing pages, update portals, data visualizations) the iteration loop has compressed from days to hours.
Gamma is a great product. There’s still a version of this world where most people use tools like it. But when the raw material (HTML, CSS, JavaScript) is this easy to generate and the deployment pipeline (Git + Vercel) is this frictionless, I find myself asking: what problem is the abstraction layer actually solving?
The value of software is shifting. Less about encoding behavior that’s hard to describe. More about describing behavior clearly enough that the agent can encode it for you.
What Still Requires Me
A few things the agent gets wrong every time:
Narrative judgment. The agent can write “Coaching 2.0 launched.” I need to write “Coaching 2.0 is landing well; the unexpected finding is that users seem genuinely impressed that Alma flags its own bugs, treating transparency as a feature.” That signal came from watching user reactions, not from a database query.
Prioritization. The agent can list everything that happened. I need to decide what matters. The SR&ED refund approval is worth mentioning. The specific thing we tried and abandoned is probably not.
Tone calibration. Investor updates walk a line. Honest without being alarming. Transparent without being scattered. The agent drafts in the right format. I edit for the right register.
The Result
My February update took about 25 minutes from starting the skill to having a live URL.
The page has animated metrics, a retention funnel table, a side-by-side reflection section, and a priorities list. It’s password protected. It renders on mobile. The numbers animate on scroll. The brand tokens match the rest of the site.
The investors see it. They send back questions. I answer. That’s the loop.
The slide deck is gone. The assembly ritual is gone. The monthly half-day is now a half-hour.