Chase Adams

5 Years of Startup Learnings At Murmur Labs

Reflections on 5 years at Murmur Labs. What we built, what worked, what didn't, and how I'd approach starting something new.

26 minute read

Most failed startup stories are invisible or incomplete. A story that goes untold is lost forever. This will not be an untold story.

With most startups, if you even hear about them failing, at best you get the highlight reel or a postmortem, but rarely more.

I spent the last five years at Murmur Labs: the first three as the founding engineer, the last two as technical co-founder building Plumb.

On Friday, October 10, 2025, my team did its last retrospective.

On Monday, October 13, I would have celebrated 5 years there.

We decided to shut down. No exit, no scale, no traditional "win" (...yet?).

But shutting down doesn't mean the story should die.

This is my retro, based on what I shared during our final retrospective. It's a collection of how we worked, what we built, what we got right and wrong, and what I'd do differently next time.

Context

I was there from day one, for five years as founding engineer, the last two years as the technical co-founder.

I built 3 end-to-end products:

  • Murmur (asynchronous decision-making),
  • Supermanage (1:1 prep for managers)
  • Plumb (AI workflow automation)

After Murmur and before Supermanage, I sprinted on 6 more ideas with the team.

I wrote the code, made the architecture decisions, hired the team, and watched the metrics.

This is through the lens of a founding engineer, technical co-founder, and leader.

Why This Is a Story

Most startup retrospectives are bullet-pointed lessons. This one isn't.

Decisions don't happen in a vacuum. They happen in context with incomplete information, competing priorities, team dynamics, and the weight of what came before.

To understand why we made the choices we did (and what I'd do differently) you need the narrative, not just the takeaways.

So this follows the chronology:

Murmur → The Sprint Phase → Supermanage → Plumb.

Each section shows what we built, how we decided, what worked, what failed, and what I learned.

The lessons are embedded in the story.

Murmur (2020-2022): Building With Certainty

What We Built

Murmur was an asynchronous decision-making platform using Integrated Decision Making (IDM).

The vision: democratize decision-making for remote teams.

How We Worked

Murmur started as a self-organizing company from day one. No managers, no hierarchy—just people owning what they cared about.

We worked in rhythms: strategy sessions to align on direction, weekly action meetings to plan execution, and build jams where we used liberating structures to solve hard problems together. Project management happened through async updates and lightweight check-ins, not status meetings. We leveraged Slack channels (#wip-it-good and #shipped) to stay aligned.

At a certain point, we started learning and practicing the 15 Commitments of Conscious Leadership together. We treated conflict as signal, not something to avoid. When tension showed up, we named it, worked through it, and got clearer on the other side.

We kept synchronous time minimal. Most work happened asynchronously—people building in parallel, syncing when it mattered. Pairing sessions when we needed deep collaboration, otherwise independence.

It worked because everyone bought in. The culture wasn't aspirational—it was how we actually operated. It was a living organism: we let it grow and evolve with us as we had needs.

What We Got Right

Culture foundations stuck. The 15 Commitments and conscious leadership practices became our operating system. It was how we actually made decisions and handled tension. When conflict showed up, we leaned in instead of avoiding it. That saved us from letting resentment build.

Team member leveling retrospective format worked. During leveling conversations, we used a shu ha ri heuristic to assess where someone actually was in their growth. The breakthrough moment: someone once asked to be down-leveled so they could focus on growing into the next level rather than feeling like they weren't meeting expectations. That kind of honesty only happens when psychological safety is real. They went on to be leveled back up in just a few months.

Our software infrastructure and architecture decisions paid off. NextJS + MongoDB + TypeScript let us move fast without breaking things. We could ship features in days, not weeks, and the type safety caught errors before they hit production. Vercel and GitHub allowed us to have gating checks that made it hard to create unforced errors for users in production.

Working without a product roadmap or project manager, we still shipped consistently. The Slack rituals (#wip-it-good and #shipped) kept us accountable without needing formal process. Everyone knew what everyone else was doing, and blockers got surfaced quickly.

What We Missed

We hired too many people, too early. We should've kept it to just CEO and technical co-founder until we found PMF.

Instead, we grew the team when we were still figuring out the product.

We hired juniors with genuine desire to support their growth, but without the structure to actually do it. And we didn't hire for self-management or self-organization, we just assumed people would figure it out. We didn't educate the team on how to do either well.

We compromised on fit. I was too afraid to push for the best fit on technical skills and mindset alignment in early hires, for what we needed in the moment. That cost us velocity and culture coherence.

We built a complex editor too early. We should've built an MVP first, validated the core assumption, then added sophistication. Instead we built what we thought was the right experience without testing if anyone actually wanted it or would use it (or if the premise worked).

We didn't listen to customers. When they told us what they needed, we filtered it through our own vision instead of taking it at face value. We built for one audience, then pivoted to another without fully understanding either.

We missed the sociological dimension. Murmur was a decision-making framework, but we underestimated how much users needed power literacy to actually use it. Decision-making isn't just a process problem, it's a people, power and politics problem. If you can't solve the sociological problem, the technology doesn't matter.

We were too optimistic about metrics. When the data showed things weren't working, I didn't push back hard enough. I wanted to believe we could turn it around. We didn't watch metrics early enough, and when we finally did, we were already too far down the wrong path.

We didn't give one person the explicit role of looking at metrics. I was too busy building to look at metrics until it was too late.

The Pivot Moment

Two years in, I finally looked at the analytics. The usage patterns we needed weren't there. The retention curves were flat. The activation metrics showed people signing up, poking around, then disappearing.

I was so focused on building...shipping features, fixing bugs, making the product better, that I hadn't stepped back to ask if anyone actually wanted or was using what we were making. We were shipping features to people who weren't there.

When I brought it to the team, the conversation was honest: we'd built something we believed in, but belief doesn't equal market fit.

We had a choice: keep pushing on Murmur and hope something changed, or admit we needed to search for a different problem.

We chose to sprint. Not abandon Murmur Labs entirely, but put Murmur on ice and run fast experiments on other ideas.

The goal: find signal in 6-8 week bursts instead of 6-month build cycles.

The Sprint Phase (2022-2023): Searching for Signal

The Approach

We adopted a sprint methodology: 6-8 weeks to build a V1, get it in front of users, and see if there was real signal. No long planning cycles, no perfect products. Just ship, learn, look for a "there there"; otherwise, move on.

We did Sunday together on a retreat. The rest we ran remotely.

The ideas we sprinted on:

  • Sunday: An app that enables healthy coupling through a family meeting (consumer scale, couples/families)
  • GOAT: An engagement tool that actually drives change (HR leaders, 50-75 employee companies)
  • Supermanage: An AI assistant that helps you be a better manager, starting with 1:1s (startup managers, 150+ employees)
  • Tally: An extension that allows teams to make decisions about anything (30-50 employee teams)
  • Beam: A modern employee handbook for people leaders (HR teams, 50-75 employees)

Each sprint had a clear buyer, tester, and scale hypothesis. We'd define those upfront, build the V1, then test whether the hypothesis held.

What Worked

The sprint methodology itself was liberating. Instead of betting years on one idea, we could bet weeks. Build fast, ship, get real feedback, decide whether to continue. It forced clarity: What's the core hypothesis? Who's the buyer? How do we test it in 6-8 weeks? There was no room for building features we thought were cool, only what we needed to validate the idea.

We shipped fast and learned faster. Each sprint taught us something new about markets, buyers, and what resonated. Some ideas (like Sunday) showed early traction (and the team loved it) but didn't have a clear business model. Others (like Supermanage) showed promise with a specific audience. The velocity meant we could test 6 ideas in the time it took us to build Murmur's first version.

Remote Design Sprints worked surprisingly well. We'd done Sunday together on a retreat, which was great for team bonding. But the rest? All remote. We'd use structured sessions to align on problem, design, prototype, and test. We did it all asynchronously or in focused sync time. It proved we didn't need to be in person to move fast on new ideas.

What Didn't Work

We explored too broadly. Five completely different ideas, five different markets, five different buyer personas. We spread ourselves thin chasing signal instead of doubling down when we found something promising. Consumer whims pulled us around. We would get excited about an idea, build it, then move on to the next shiny thing before really validating whether the first one had legs.

Excited exploration lasted too long without a tension-holder. We loved the exploration phase. It felt productive, energizing, low stakes. But we didn't have anyone explicitly holding the tension of "when do we stop exploring and start executing?" Drew (who later became the Product Manager for Plumb) helped with this later, but for most of the sprint phase, we were in discovery mode without a forcing function to commit.

We didn't break work down small enough. Even within 6-8 week sprints, we still built more than we needed to test the hypothesis. We couldn't resist one more polish pass. We should've been even more ruthless about the MVP.

We still didn't listen to customers. Same pattern as Murmur. When users told us what they needed, we filtered it through our own assumptions instead of hearing them clearly. We were learning to ship faster, but not learning to listen better.

The Signal

Supermanage showed promise. Managers actually wanted better 1:1 prep, and AI-powered sense-making from Slack data felt like the right moment. GPT-3 was maturing, but the space wasn't crowded yet.

We took it to the board. The question: Should we validate more with additional sprints, or commit and build?

The board's take: Our biggest risk wasn't runway (we had plenty). It was someone else shipping first. The AI space was moving fast. Their recommendation: "Build an MVP in 4-6 weeks. If you invalidate the idea, stop. But don't miss the moment."

That landed. We decided to sprint on Supermanage back-to-back. We used the same methodology, but focused on building instead of exploring. Anyone who wanted to explore other sprint ideas could run a "red team" in parallel. We kept Fridays for recovery and testing.

For the first time since Murmur, we were committing to one thing.

Supermanage (2023): The Buyer Problem

What We Built

Better 1:1 prep for managers using AI "sense making" based on Slack data.

What We Got Right

The branding was exceptional. Keya Vadgama nailed it. The visual identity, the messaging, the positioning...it all worked. I still think about that brand to this day. It felt professional, approachable, and distinctly not-boring in a space full of enterprise gray.

Our way of working tightened significantly. With only 2 engineers, we had no room for waste. The sprint methodology evolved into something leaner. We got uncannily good at shipping fast and far. Every feature had a clear purpose, every day became its own sprint with a specific learning goal. The constraint of a small team forced us to be ruthless about what mattered.

Small team with owner-per-domain mindset worked. Everyone owned their area. Engineering had a Slack implementation and a backend implementation. We had one person in ops, one in brand/design, one PMing, one in customer support, and one in marketing. There was no way to step on each other's toes, and as a result, no ambiguity about who was responsible. When something broke or needed improving, there was a clear owner.

We built infrastructure that would matter later. A lot of what we built for Supermanage (the Slack data pipelines, the AI orchestration patterns, the way we structured prompts and chained LLM calls) eventually became the foundation for Plumb. We didn't know it at the time, but we were laying groundwork for the next product.

What We Missed

User ≠ buyer disparity. Managers loved using Supermanage. The insights were helpful, the 1:1 prep saved time, the AI summaries from Slack data actually worked. We even had managers tell us they learned something about their report (that wasn't a hallucination) from their brief. But managers weren't the ones buying enterprise software. Their directors, VPs, and HR leaders (everybody but them) were. We built for the user without understanding the buyer's decision-making process. That gap killed our ability to close deals at scale.

We kept the complexity mindset. Even with a small team and tight sprints, we couldn't resist making things more complex. We kept adding features: more brief sections, more AI-powered insights, vector DB implementations we probably didn't need. What we needed was the simplest possible MVP. We accidentally carried forward the same pattern from Murmur: building sophistication before validating core value.

We still didn't listen to customers. The pattern repeated. When users told us what they actually needed, we filtered it through our vision instead of taking it at face value. We were getting better at building fast, but we weren't getting better at hearing what people were really asking for.

Why It Led to Plumb

I love to climb, and I tried to keep a ritual: Tuesday and Thursday, for an hour and a half, I was at the gym with my friend Zack. Supermanage was "stealing" that from me; keeping the ritual was becoming untenable.

Every code change to Supermanage took an engineer 2-3 hours. Splice the code, add the functionality, make sure both input and output worked correctly. With two engineers shipping multiple features per week, the platform would drift. We struggled to talk about what was actually happening in the product because we were both moving fast in parallel.

I built Plumb internally to solve two problems:

  1. Give us a shared language to talk about the product
  2. Let our non-engineering product team contribute to the codebase.

The schema you would see in the no-code canvas was the same schema used to run Supermanage. Changes that took me 2-3 hours now took our product team 15-30 minutes. I got my time back. I could focus on building new nodes to unlock more capabilities for them, so that we could keep building Supermanage fast while I could still climb.

Plumb (2023-2025): The Tool We Needed

Origin Story

As I said earlier, Plumb started as an internal tool, a way to build Supermanage faster and let non-engineers contribute to the codebase.

But the signal came from customers. When Aaron demoed Supermanage to managers, he'd show them the canvas view—the graph of nodes and connections that powered their briefs. Almost every single person said: "Supermanage is cool, but I want that thing."

We pivoted. Plumb became the product.

For Engineers

We started with engineers because engineering managers said they wanted it.

The core product: a visual workflow builder where you could chain together AI nodes, API calls, data transformations, and human-in-the-loop steps. Drag and drop nodes, wire them together, run the workflow. Call it with an API call and it'll give you a response.

The problem: engineers didn't want to build in a GUI. They wanted code. The visual builder felt like a constraint, not a feature. We also didn't build an SDK to make it easy to use. (If we had, I would have built something to generate your SDK dynamically based on your pipelines and API key, similar to how Supabase generates types.)

We moved on.

For Everybody: Magic Mode

If engineers didn't want the GUI, maybe the GUI wasn't accessible enough. We added Magic Mode. The goal: someone could describe what they wanted in natural language, and the AI would generate the workflow for them.

Chat to create, then tune with the visual editor if needed. Generate with natural language, refine with click-drag-drop.

It felt like the right direction: make workflow building accessible to everyone. But we tried too hard to get it perfect, when maybe 70% magic would have been enough to keep going. The models weren't ready yet to produce a valid schema in zero or one shot.

For Real Estate Agents

As a result, we pivoted to subscriptions and wanted to validate them with a specific ideal customer profile, with us doing the building. We chose real estate agents.

We built workflows for them: a listing description generator, a listing walkthrough note taker and a market update podcast producer. We focused on solving real problems for a real vertical.

The growth was there. When we focused, the metrics moved. But we didn't commit (frankly, I wasn't interested in integrating with the various MLS services for real estate agents).

The conversion and retention for this audience (even though none were paying) made us think that anyone willing to build workflows could build for an audience.

For Creators: The Subscription Pivot

This time, we pivoted to subscriptions for creators, who could monetize their workflows.

The idea: let builders create workflows and publish them.

Users could subscribe, get updates, and review changes before upgrading.

Subscribe button → settings view → automatic version control.

We had a choice: creators or consultants. Consultants wanted more integrations and total customization. We chose creators because we thought we could build for them faster.

That was the bet. Subscription-based AI workflow automation for creators to publish and users to subscribe.

Technical Decisions That Worked

Shared schema everywhere. The visual builder and the execution engine used the same schema. What you saw in the canvas was exactly what ran in production. This eliminated an entire class of bugs. There was no translation layer, no drift between representation and reality. When someone built a workflow in the GUI, it was stored as JSON. When we executed it, we read that same JSON. One source of truth made it so that we could build fast with confidence.
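A minimal sketch of the shared-schema idea, with all names and node kinds invented for illustration: one JSON object is what the canvas would render and what the engine executes, so there is no translation layer to drift.

```typescript
// Hypothetical workflow shape: the canvas renders `nodes` and `edges`;
// the executor below runs the very same object. Node kinds are made up.
type WorkflowNode = {
  id: string;
  kind: "template" | "uppercase";
  config?: { text?: string };
};
type Edge = { from: string; to: string };
type Workflow = { nodes: WorkflowNode[]; edges: Edge[] };

function runWorkflow(wf: Workflow, input: string): string {
  // Walk the chain starting from the node with no incoming edge
  // (assumes a simple linear pipeline for illustration).
  const order: WorkflowNode[] = [];
  let current = wf.nodes.find(n => !wf.edges.some(e => e.to === n.id));
  while (current) {
    order.push(current);
    const next = wf.edges.find(e => e.from === current!.id);
    current = next ? wf.nodes.find(n => n.id === next.to) : undefined;
  }
  // Each node transforms the running value.
  return order.reduce((acc, node) => {
    switch (node.kind) {
      case "template":
        return (node.config?.text ?? "").replace("{{input}}", acc);
      case "uppercase":
        return acc.toUpperCase();
    }
  }, input);
}

// This JSON is the single source of truth: store it, render it, run it.
const wf: Workflow = {
  nodes: [
    { id: "a", kind: "template", config: { text: "hello {{input}}" } },
    { id: "b", kind: "uppercase" },
  ],
  edges: [{ from: "a", to: "b" }],
};
console.log(runWorkflow(wf, "plumb")); // "HELLO PLUMB"
```

Because the stored JSON and the executed JSON are one object, a whole class of "the canvas says X but production does Y" bugs simply cannot occur.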

Pragmatic infrastructure over perfect architecture. We used Vercel for the UI and serverless functions and Fly.io for scheduled jobs and long-running workflows. We could've built a complex orchestration system with queues and workers, put our infra in AWS, but we didn't need to. Vercel gave us fast deploys and edge caching. Fly gave us persistent VMs when we needed them. It was boring, reliable, and let us ship features instead of managing infrastructure. Since we were pre-product market fit, this was the right stack to move fast without having to manage infrastructure.

Observability from day one. After Murmur, I learned: if you can't see what's happening, you can't fix it. Every workflow execution logged its steps, timing, and errors. We could watch workflows run in real-time, see where they failed, and understand user behavior. This paid off constantly. We could debug production issues in minutes, not hours.
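The per-step logging described above can be sketched like this; the function and field names are illustrative, not Plumb's actual API:

```typescript
// Hypothetical sketch of per-step execution logging: every step records
// its name, duration, and any error, so a failed run leaves a trail.
type StepLog = { step: string; ms: number; error?: string };

function runWithLogs(
  steps: Array<[string, (x: string) => string]>,
  input: string
): { value: string; logs: StepLog[] } {
  const logs: StepLog[] = [];
  let value = input;
  for (const [name, fn] of steps) {
    const start = Date.now();
    try {
      value = fn(value);
      logs.push({ step: name, ms: Date.now() - start });
    } catch (err) {
      logs.push({ step: name, ms: Date.now() - start, error: String(err) });
      break; // stop the workflow, but keep the record of what ran
    }
  }
  return { value, logs };
}

const { value, logs } = runWithLogs(
  [
    ["trim", s => s.trim()],
    ["upper", s => s.toUpperCase()],
  ],
  "  hello  "
);
console.log(value); // "HELLO"
console.log(logs.map(l => l.step)); // ["trim", "upper"]
```

With a trail like this shipped to a log store, "where did this workflow fail and how long did each step take" becomes a query instead of a debugging session.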

Dynamic forms from schemas. We used Zod schemas to define node configurations and those schemas auto-generated the forms in the UI. An engineer could add a new field to the schema and the form updated automatically. No manual form building, no keeping forms in sync with validation logic. It was one of those decisions that saved us from doing work anytime a new node was added or an old node changed.
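To keep the sketch dependency-free, here is a hand-rolled stand-in for the Zod pattern (Plumb used actual Zod schemas; the field and node names below are invented). The point is the same: one schema both describes the config and drives the form.

```typescript
// Illustrative stand-in for schema-driven forms: the schema is the single
// definition, and the UI derives one input per field from it.
type Field = { name: string; type: "string" | "number"; label: string };

// Hypothetical config schema for a "summarize" node.
const summarizeNodeSchema: Field[] = [
  { name: "model", type: "string", label: "Model" },
  { name: "maxTokens", type: "number", label: "Max tokens" },
];

// Adding a field to the schema automatically adds it to the rendered form;
// no hand-built form to keep in sync with validation logic.
function renderForm(schema: Field[]): string {
  return schema
    .map(
      f =>
        `<label>${f.label}<input name="${f.name}" type="${
          f.type === "number" ? "number" : "text"
        }"/></label>`
    )
    .join("\n");
}

console.log(renderForm(summarizeNodeSchema));
```

With real Zod you would introspect a `z.object(...)` shape instead of a field array, but the payoff is identical: schema changes propagate to the UI for free.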

Human-in-the-loop patterns. Workflows could pause and wait for human input. Send an email, wait for a reply. Generate content, wait for approval. You could use human in the loop with text messaging, email or Slack messages. This made AI workflows incredibly useful. You could automate the boring parts and keep humans in the loop for judgment calls.
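The pause-and-wait pattern can be sketched as a workflow that stops in a "paused" state until a human reply resumes it. Everything here is illustrative: in production the paused state would be persisted and resumed by a Slack, email, or SMS reply rather than a direct function call.

```typescript
// Hypothetical human-in-the-loop sketch: the workflow halts at an approval
// step and only finishes once a human decision arrives.
type Paused = { status: "paused"; draft: string };
type Done = { status: "done"; output: string };

function generateDraft(topic: string): Paused {
  // An AI step would produce this draft; hardcoded for illustration.
  return { status: "paused", draft: `Draft post about ${topic}` };
}

function resume(state: Paused, approved: boolean): Done {
  // The human's reply decides whether the draft ships or is discarded.
  return { status: "done", output: approved ? state.draft : "rejected" };
}

const paused = generateDraft("workflows");
// ...time passes; a human replies "approve" in Slack...
const result = resume(paused, true);
console.log(result.output); // "Draft post about workflows"
```

Modeling the pause as explicit serializable state is what makes the pattern work across channels: the same paused record can wait on an email reply or a Slack button without the workflow engine caring which.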

Supabase for auth and row-level security. We didn't build our own auth system. We didn't write our own permission logic. Supabase gave us both. Row-level security meant we could trust the database to enforce access controls without having to create complex authorization logic. It was one less system to maintain, one less place for security bugs to hide.

The takeaway: we chose boring, reliable technology that let us move fast. Every decision optimized for shipping features, not building infrastructure.

Product/Marketing Wins

The brand was strong. Keya (again) built a visual identity that stood out. Clean, modern, approachable and delightful. The messaging was clear. People understood what Plumb did when they landed on the site. In a space full of technical jargon and enterprise monochrome, we had a brand that felt human. That mattered. People remembered us, they shared our stuff, and the brand carried weight in conversations.

We got products in front of people without a marketing team. With no official marketing person on staff, we still got in front of people and shipped products that found users. We did it through founder-led content (AI Builder's Club), showing up in communities where our audience lived, and building in public. Aaron and I made videos, wrote posts, and talked about what we were learning. It wasn't scalable, but it worked. People found us, tried Plumb, and some stuck around.

We tuned analytics and actually watched metrics. After Murmur, I wasn't going to make the same mistake. We instrumented everything. Signup flow, workflow creation, execution patterns, retention cohorts. We watched the numbers weekly, sometimes daily. When something moved, we knew why. When something broke, we saw it immediately. This discipline kept us honest about what was working and what wasn't.

The real estate focus showed growth potential. For the brief period we focused on real estate agents, the metrics moved. Conversion improved. Retention held. People were using the workflows we built for them. It was the clearest signal we'd seen that vertical focus worked. The problem wasn't the strategy, it was that we didn't commit to it (I really didn't want to build stuff for real estate).

What We Got Right

We were right about workflows. The core thesis (that structured, deterministic, declarative, repeatable workflows matter even in the age of AI) turned out to be correct. OpenAI validated this when they launched Agent Builder. They chose a workflow-based approach for building agents, not just chat interfaces. Workflows are reliable, iterable, and shareable. We saw that. We built for that.

The ability to scale reuse of workflows. The subscription model wasn't wrong in concept. One person builds a workflow, hundreds of people can use it. That's leverage. It's why SaaS works. The problem wasn't the idea, it was the execution and the audience we chose. But the core insight that workflows should be reusable and updatable? That was right.

Using AI to help people generate workflows. Magic Mode (generating workflows from natural language) was the right direction. We were just too early to get it right, or too worried about being perfect to show the world what was possible. The models weren't ready, we didn't have training data, and we tried to get it perfect when 70% magic would've been enough to keep going. But chat-to-workflow is the future. Generate with natural language, tune with visual tools. We built the MVP before the market knew to ask for it.

Interview process that found great people. Despite everything that didn't work, we built a world-class team. The interview process worked. We found people who were smart, capable, and culturally aligned. They scaled hard, they shipped constantly, and they stuck with us through pivots and uncertainty. That doesn't happen by accident, it happens because you hire well.

What We Missed

We cycled through ICPs without validating any deeply enough.

Engineers didn't want a GUI. They wanted code. We should've built an SDK so non-technical teammates could build workflows and engineers could integrate them in code. We never did.

"Everyone" isn't an ICP. Magic Mode was the right direction, but we were too early or too focused on perfection. The models weren't ready, we didn't have training data, and we couldn't ship at 70% good enough.

Real estate agents showed traction, but making it valuable meant integrating with services that only mattered to real estate: MLS systems and CRMs specific to the industry. None of it was reusable. I didn't want us to become a real estate company.

Creators didn't believe AI could execute with their taste. Even if we could codify it, they didn't trust the execution. We chose creators over consultants because we thought we could build faster.

Consultants needed more integrations than we had. Zapier, n8n, Make...they all had more connectors. Consultants also wanted total customization. We couldn't compete on either dimension.

The actual business was building verticals on Plumb. Real estate showed us this. When we built workflows for a specific vertical, metrics moved. But we kept trying to build a platform instead of building solutions. The insight was there, but it's not a VC scale idea.

We started founder-led growth too late. Aaron and I should've been making content from day one. Videos, posts, demos, learnings. We should have done all of it in public. By the time we ramped up, we'd already burned months without momentum.

We didn't validate subscriptions fast enough. The subscription model became an anchor. Once we committed to it, every pivot became harder. We should've tested it faster, learned it wasn't working, and moved on. Instead, we built infrastructure around an unvalidated idea.

Early or wrong = wrong. My co-founder Aaron said this first and it's really stuck with me. Being early feels like being visionary. It's not. If the market isn't ready, you're just wrong. Magic Mode, subscriptions, workflows for everyone. We were early on all of it. Early didn't save us. It killed us.

How AI Changed How We Built

"10 one-day sprints instead of one two-week sprint." AI collaboration made it possible to ship features in half a day that would've taken weeks. Changed the entire velocity equation.

The Shutdown Decision

After several years hustling on asynchronous decision making, manager 1:1 prep and agentic workflow automation, Aaron and I met up one last time and decided it was time to fold up the tent.

We got things right: workflows over agents for things that matter, exquisite UX, structured output, human-in-the-loop patterns. But we got critical things wrong.

We invented things that nobody asked for and nobody cared about (but maybe they will in the future). Those decisions made it so that we couldn't pivot without rebuilding everything.

On October 8, 2025, Aaron announced the shutdown publicly. We thanked our investors, opened our DMs to find the team a home, and moved on.

What I'd Do Next Time

I have thoughts on what I'd do differently, given what I know, both if I were starting fresh and if I were rebooting Plumb. There proved to be enough here that "What I'd Do Next Time" became its own essay.

Gratitude

Finally, I think the most important lesson of the last 5 years is how grateful I am for the people who I've built alongside.

I'm grateful for this team believing in me enough to make me a technical co-founder in the middle of our company's existence. I'm grateful for their willingness to be here when they didn't have to, and for growing together through everything.

Leading this organization technically and co-leading it strategically and culturally was one of the greatest experiences of my life.

Sarah saw people's light when they couldn't see it themselves and rallied for them when they were low. She's fierce for the people she loves and would walk through fire for them.

Keya brought taste, thoughtfulness, care, and craft to everything she touched. She created exceptional brand identities for at least five products and combined design thinking with genuine care for people. She's one of the best in the game.

Drew held the tension of when to stop exploring and start executing. He was the first to find problems and propose solutions, and his generosity with energy made people feel special. He traveled with me through almost the entire existence of Murmur Labs. He held so many roles, wore so many hats and every time he'd become an expert at the role he was in.

Pete was an intentional explorer who thinks more about enabling people to use computers with AI than anyone I've met. He wants to steer AI toward human flourishing in his "little tiny Pete-way."

Tyler could zoom in and out with ease—from implementation details to system topology to strategy. He bridged gaps between viewpoints with wit, curiosity, and patience, and elevated my code quality and capacity for systematic architecture design.

Aaron was the most thoughtful leader I've ever worked with. He taught me what responsibility-centered leadership looks like in practice, how to paint mental models before sharing proposals, and how to treat work as play. Honestly, I could dedicate a whole article to what it's been like to work with him, what I've learned from him and how I've grown because of him...maybe I will. 😉

To our board and investors: thank you for being present, thoughtful, and kind through everything.

Thanks for All the Fish

So there you have it. My perspective of five years. Three products. Countless pivots. No exit.

But the story isn't lost. The lessons are real. The team was exceptional. The work mattered, even if the market didn't care yet.

So long, and thanks for all the fish.

Post Details

Published
Oct 13, 2025
Category
Startups