Delete Typeform. Ship the Weekly Feedback Loop Instead.
How I run a self-hosted user survey flywheel in 20 minutes a week
I love sending surveys. I love reading the answers. Talking to users is one of the few parts of the job that has never gotten old, and a well-timed survey is the highest signal-to-effort way I know to do it at scale.
The part I do not love is the plumbing.
Typeform is a nice product, but the pricing tiers turn a weekly habit into a line item. The good features live behind the higher plans, response limits cap the fun right when things are interesting, and the data ends up trapped in a dashboard I then have to export, clean, and rejoin to my user table to ask any question worth asking. Rolling your own from scratch using a generic form builder or stitching together Notion plus Zapier plus a spreadsheet was, for most of my career, worse: expensive in hours, tedious to maintain, and always one broken integration away from a Saturday afternoon of debugging.
So the survey itch would show up. I’d weigh the plumbing. I’d ship the survey anyway, because the answers are worth it. But the friction kept the cadence irregular. I wanted a weekly habit. I was getting a quarterly burst.
Eventually I rebuilt the whole thing. No Typeform. No Google Forms. No third-party anything. The survey page lives on our own site, the responses land in our own database, the sends go through our own email, and a skill orchestrates the whole loop. Weekly cadence. About twenty minutes of my time per round.
Here is how it works, and how you can build the same thing for your product.
What I Actually Wanted
The target was simple:
Survey topics should come from gaps in what we already know, not whatever I happened to be thinking about that week.
Writing a survey should take minutes, not hours. Designing four questions is not a craft that deserves a full afternoon.
Responses should land in a place I control, next to the user data they belong with, not trapped in a SaaS dashboard.
Sending should be one command, and the system should refuse to spam the same person twice inside a month.
I should never, ever open Typeform again.
Five pieces, end to end. Each one is a hundred lines of code or less.
Piece 1: The Survey Page
We have a marketing site at alma.food. It already renders a dozen static pages. Adding a dynamic survey route turned out to be a one-evening job.
The page lives at alma.food/surveys/:slug. The slug is the survey identifier. The configuration for each survey is a plain JavaScript file in src/survey-configs/:
export default {
  slug: 'logging-accuracy-2026-04',
  title: 'Is Alma getting your food right?',
  subtitle: 'Quick 3-minute check on nutrition data accuracy.',
  questions: [
    {
      id: 1,
      type: 'single_choice',
      question: 'How often does Alma feel off?',
      options: ['Almost never', 'Sometimes', 'Often', 'Most of the time'],
      required: true,
    },
    {
      id: 2,
      type: 'rating',
      question: 'How much does that affect your trust in Alma?',
      scale: 5,
      labels: ['Not at all', 'Makes me want to quit'],
      required: true,
    },
    {
      id: 3,
      type: 'open_text',
      question: 'Anything specific you want us to fix?',
      required: false,
    },
    {
      id: 4,
      type: 'email_opt_in',
      question: 'Want us to follow up?',
      required: false,
    },
  ],
}
The React page reads the slug from the URL, imports the matching config, and renders a single-page survey with a progress bar. Six question types cover everything I’ve ever needed: single_choice, multi_choice, rating, yes_no, open_text, email_opt_in. The survey pages aren’t linked from anywhere public. The path you end up on is the one you received in the email.
On submit, the page posts to a single backend endpoint. That’s it. Shipping a new survey is now writing a JS config file, adding it to a loader map, and pushing to our prod branch. Vercel deploys it automatically.
Piece 2: The Response Endpoint
The backend is a single FastAPI route:
import json

from sqlalchemy import text

@router.post("/surveys/{slug}/response")
@rate_limit(...)
def submit_survey_response(slug, params, session):
    # Store the raw answers as JSON next to the slug and the optional email.
    session.execute(text("""
        INSERT INTO survey_responses (survey_slug, answers, respondent_email)
        VALUES (:slug, :answers, :email)
    """), {
        "slug": slug,
        "answers": json.dumps(params.answers),
        "email": params.respondent_email,
    })
    session.commit()
    return {"success": True}
(The endpoint is rate-limited and does basic input validation. Don’t ship a public write endpoint without both.)
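The validation half of that warning is cheap to write. Here is a sketch in plain Python that checks a payload against the same config shape as the JS example above; the function name, error strings, and which checks to include are my choices, not Alma's actual code:

```python
def validate_submission(payload: dict, config: dict) -> list:
    """Return a list of problems; an empty list means the payload is valid.

    `config` mirrors the JS survey config shown earlier; `payload` is the
    POSTed body, with answers keyed by question id (as strings).
    """
    errors = []
    answers = payload.get("answers") or {}
    for q in config["questions"]:
        qid = str(q["id"])
        value = answers.get(qid)
        if q.get("required") and value in (None, ""):
            errors.append(f"question {qid} is required")
            continue
        if value is None:
            continue  # optional question left blank
        if q["type"] == "single_choice" and value not in q["options"]:
            errors.append(f"question {qid}: unknown option {value!r}")
        elif q["type"] == "rating" and not 1 <= int(value) <= q["scale"]:
            errors.append(f"question {qid}: rating out of range")
        # multi_choice, yes_no, open_text, email_opt_in checks omitted for brevity
    return errors
```

In a real FastAPI route, a pydantic request model would handle the shape validation for free; this is the extra semantic layer on top of it.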
That is the entire storage layer. Three tables cover the schema:
surveys — slug, title, description, questions JSON, cohort SQL (for reference later)
survey_responses — slug, answers JSON, respondent email, created_at
survey_sends — slug, user_id, email, sent_at (enforces the cooldown)
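In SQLite terms, the whole storage layer fits on one screen. The column types and defaults below are my guess at a reasonable shape, not the production DDL:

```python
import sqlite3

SCHEMA = """
CREATE TABLE surveys (
    slug        TEXT PRIMARY KEY,
    title       TEXT NOT NULL,
    description TEXT,
    questions   TEXT NOT NULL,              -- JSON array of question configs
    cohort_sql  TEXT                        -- the cohort query, kept for reference
);
CREATE TABLE survey_responses (
    id               INTEGER PRIMARY KEY,
    survey_slug      TEXT NOT NULL REFERENCES surveys(slug),
    answers          TEXT NOT NULL,         -- JSON object keyed by question id
    respondent_email TEXT,                  -- only present if the user opted in
    created_at       TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE survey_sends (
    id          INTEGER PRIMARY KEY,
    survey_slug TEXT NOT NULL,
    user_id     TEXT NOT NULL,
    email       TEXT NOT NULL,
    sent_at     TEXT DEFAULT CURRENT_TIMESTAMP  -- drives the 30-day cooldown
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```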
No admin panel. No dashboard. I query it the way I query any other production data, which is exactly the point. Survey responses live next to the food logs, the chat transcripts, the subscription events. When I want to cross-reference a specific answer against a user’s logging pattern, it is one join, not a CSV export.
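That one join is worth making concrete. A runnable sketch against SQLite, where the users table and its logs_last_30d column are hypothetical stand-ins for your own user model:

```python
import sqlite3, json

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id TEXT PRIMARY KEY, email TEXT, logs_last_30d INTEGER);
CREATE TABLE survey_responses (survey_slug TEXT, answers TEXT, respondent_email TEXT);
""")
conn.execute("INSERT INTO users VALUES ('u1', 'a@example.com', 42)")
conn.execute("INSERT INTO survey_responses VALUES (?, ?, ?)",
             ("logging-accuracy-2026-04",
              json.dumps({"1": "Often", "2": 2}),
              "a@example.com"))

# Cross-reference: who says Alma feels off 'Often', and how heavily do they log?
rows = conn.execute("""
    SELECT u.email, u.logs_last_30d, json_extract(r.answers, '$."1"') AS feels_off
    FROM survey_responses r
    JOIN users u ON u.email = r.respondent_email
    WHERE r.survey_slug = 'logging-accuracy-2026-04'
""").fetchall()
```

The answer to question 1 comes straight out of the stored JSON with json_extract; no export, no cleaning, no rejoin.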
Piece 3: The Skill
This is the part that took the habit from “quarterly maybe” to “weekly without thinking.”
I wrote a Cursor skill at docs/skills/survey.md. Skills are markdown files that describe a workflow for an AI agent to execute. Mine has eight steps, in order:
Load context — the agent reads a voice guide and queries the last 15 surveys from the database so it doesn’t repeat a topic.
Propose topics — based on gaps in past surveys, the agent proposes five candidate topics with a one-line rationale each, and asks me to pick one or describe my own.
Design the survey — four to six questions, no leading phrasing, one open text near the end, one email opt-in last. The agent drafts it, I approve.
Create the files — config file, loader entry, surveys table insert. All generated.
Find target users — the agent proposes a cohort (active last 30 days is a good default), runs the SQL against our Supabase prod database via MCP, dedupes against anyone surveyed in the last 30 days, and reports the final count.
Draft the email — short, plain text, warm, direct. No template. No branding. The subject line is curiosity driven, not “We need your feedback.”
Send — one command per batch, BCC up to 49 recipients per call (AWS SES limit), record every send in survey_sends.
Follow up with opt-ins — respondents who leave an email in the opt-in field get a short thank-you note with a Cal.com link two days later.
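The batching in the Send step is the only fiddly part, and it reduces to a chunking loop. A sketch, where the helper names and the sender address are mine and the ses_client call assumes a boto3 SES client:

```python
from itertools import islice

MAX_BCC = 49  # SES allows 50 destinations per call; one To (yourself) + 49 BCC

def chunked(seq, size):
    """Yield successive lists of at most `size` items."""
    it = iter(seq)
    while batch := list(islice(it, size)):
        yield batch

def send_survey(recipients, subject, body, ses_client=None,
                sender="founder@example.com"):
    """Send one plain-text email per batch, BCC'ing up to MAX_BCC recipients.

    Returns the number of batches sent. Pass ses_client=None to dry-run.
    """
    batches = list(chunked(recipients, MAX_BCC))
    for bcc in batches:
        if ses_client is not None:  # e.g. boto3.client("ses")
            ses_client.send_email(
                Source=sender,
                Destination={"ToAddresses": [sender], "BccAddresses": bcc},
                Message={"Subject": {"Data": subject},
                         "Body": {"Text": {"Data": body}}},
            )
    return len(batches)
```

In the real loop, every address in every batch would also get a row inserted into survey_sends before the next batch goes out.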
Each step has clear instructions: the exact SQL to run, the exact email structure, the exact insert statement. The skill removes every small decision that used to consume my energy.
When I say “run the survey skill,” the agent reads the file and starts at Step 1. I show up for the approval checkpoints. The agent handles the plumbing.
Piece 4: The Review Loop
Sending is half the loop. The other half is reading what came back and actually doing something with it.
Every Thursday I run a second skill that pulls the week’s responses, groups them by question, summarises the open text qualitatively, and drops the whole thing into a dated markdown file at product_marketing/survey_manager/survey_results_YYYY_MM.md. The same file gets appended to every week. Over time it becomes a rolling log of what users are telling us, which is more useful than any single survey’s top-line numbers.
The key move here is that the review is deterministic. Same format, same sections, same location, every week. That consistency is what makes it possible to skim three months of results in ten minutes and actually notice a shift in sentiment, not just re-read last week’s spike.
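The grouping step is simple enough to pin down in code. Here is a sketch of what "same format, same sections" means in practice; the markdown layout is my guess at a reasonable shape, not the skill's exact output:

```python
import json
from collections import Counter, defaultdict

def summarise_week(responses, questions):
    """Render one week's responses as a fixed-format markdown section.

    responses: list of JSON strings, as stored in survey_responses.answers
    questions: list of question configs (same shape as the JS survey config)
    """
    by_question = defaultdict(list)
    for raw in responses:
        for qid, answer in json.loads(raw).items():
            by_question[qid].append(answer)

    lines = []
    for q in questions:
        answers = by_question.get(str(q["id"]), [])
        lines.append(f"### {q['question']}")
        if q["type"] == "open_text":
            # Open text gets listed verbatim for qualitative review.
            lines.extend(f"- {a}" for a in answers if a)
        else:
            # Everything else gets tallied, most common first.
            for value, count in Counter(map(str, answers)).most_common():
                lines.append(f"- {value}: {count}")
        lines.append("")
    return "\n".join(lines)
```

Because the output is deterministic given the inputs, appending it to the rolling results doc every Thursday keeps the log skimmable month over month.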
Piece 5: The Cadence
Every Thursday. That is the whole schedule.
On Thursday morning, I run the review skill against the survey I sent the previous Thursday. Most responses are in by then. The skill summarises the answers and appends them to the running results doc. Then I run the send skill and ship a new one the same day. Review, then send, then done until next Thursday.
Twenty minutes, once a week, at the same time. The consistency is what turned this into a habit instead of a project.
The cohort query is the part I’m most protective of. Every send pulls a fresh batch of active users directly from the production database and cross-references them against the survey_sends table before anyone gets an email. If you’ve received a survey from me in the last month, you are automatically excluded from the next one, no matter what the topic is. That table is effectively a lightweight CRM: one row per send, one per-user cooldown, enforced at query time. The skill refuses to draft the email until the cohort has been deduped.
This matters more than it sounds. The fastest way to ruin a feedback loop is to burn out the people who answer. The cooldown means nobody hears from me more often than once a month, and every survey reaches a genuinely fresh slice of the user base. The BCC sends use a generic “Hi there” greeting (no personalisation is possible in BCC), which I decided was a fine trade for being able to send to a batch with one command. The signal is in the answers, not the salutation.
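The dedupe itself is one anti-join against survey_sends. A runnable sketch on SQLite, with a stand-in users table (your activity column and cohort definition will differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id TEXT PRIMARY KEY, email TEXT, last_active_at TEXT);
CREATE TABLE survey_sends (survey_slug TEXT, user_id TEXT, email TEXT, sent_at TEXT);

INSERT INTO users VALUES
  ('u1', 'fresh@example.com',   date('now', '-2 days')),
  ('u2', 'recent@example.com',  date('now', '-5 days')),
  ('u3', 'dormant@example.com', date('now', '-90 days'));
-- u2 was surveyed 10 days ago: inside the 30-day cooldown, so excluded.
INSERT INTO survey_sends VALUES
  ('old-survey', 'u2', 'recent@example.com', date('now', '-10 days'));
""")

cohort = conn.execute("""
    SELECT u.id, u.email
    FROM users u
    WHERE u.last_active_at >= date('now', '-30 days')   -- active last 30 days
      AND NOT EXISTS (
          SELECT 1 FROM survey_sends s
          WHERE s.user_id = u.id
            AND s.sent_at >= date('now', '-30 days')    -- per-user cooldown
      )
""").fetchall()
# cohort -> only u1: active, and never surveyed inside the cooldown window
```

The NOT EXISTS clause is the whole "lightweight CRM": the cooldown is enforced by the query itself, so there is no separate suppression list to maintain.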
The Math
A weekly Typeform-style workflow has real costs:
30 minutes designing each survey in a builder UI
15 minutes exporting responses to a CSV
15 minutes rejoining that CSV to user data to ask any question worth asking
A monthly SaaS bill that climbs the moment your response volume gets interesting, plus feature gates on the higher tiers for things that should be table stakes
A self-hosted loop costs me:
20 minutes a week, mostly spent on approval checkpoints
Zero incremental SaaS
Full access to the raw data for cross-referencing
A rolling log of feedback I actually re-read
The tradeoff I thought I’d be making (polish for control) turned out to be a non-issue. Nobody has complained about the page. People answer the questions. The open text boxes fill up with thoughtful paragraphs. Response rates have gone up, not down, probably because the emails sound like they’re coming from a founder who actually cares, not a form that escaped a marketing department.
What You Need to Build This
If you’re a founder or PM reading this and thinking about stealing the pattern, here’s the minimum viable version:
A page on your site that reads a slug, loads a config, renders a form, and POSTs to one endpoint. 150 lines of React.
One backend endpoint to store the response. Rate limit it.
Three database tables: surveys, responses, sends.
A skill file that walks an AI agent through proposing, designing, sending, and recording. Start with mine. Swap out the cohort SQL for whatever matches your user model.
A weekly time slot on your calendar. Not for doing the work. For showing up to the approval checkpoints.
Everything else is polish. The entire system is probably 400 lines of code end to end, most of which you’re going to ask an agent to write for you anyway.
The lesson I keep learning is that tools like Typeform solve a problem that doesn’t really exist anymore, which is “I don’t know how to ship a form.” You do know how to ship a form. Or rather, the agent sitting next to you does, and all you need to give it is the structure of your own system and a little bit of taste about what the survey should ask.

