Case Study · 02

Lead Gen Agent

Designing an agentic framework for Birdeye — and the first agent built on it

Role

Solo Designer

Timeline

4 Weeks

Company

Birdeye

Lead Gen Agent overview
../images/lead-gen-hero.png
Context

Birdeye's inbox handles millions of inbound messages across web chat, text, Facebook, Instagram, WhatsApp, and email — for businesses that often have a single person managing all of it. The volume was there. The automation wasn't.

The brief was to design an AI agent that could engage prospects, answer questions, capture lead information, and hand off to a human when things got complex — without requiring any technical setup from the business. But before the Lead gen agent, there was a bigger problem: Birdeye had no design framework for AI agents at all. That's where this project actually started.


Problem

"I'm one person managing messages from five locations across three platforms. Something always slips through."

— Marketing Manager, Franchise group

01

No response outside business hours

Prospects messaged on evenings and weekends and heard nothing back. By the time a human replied, the lead had moved on.

02

Lead information never got captured

Contact details were scattered across channel inboxes. There was no consistent, structured moment where the business collected name, email, or phone number from a prospect.

03

Human handoff was abrupt and context-free

When conversations escalated to a human agent, there was no summary of what had been discussed. The customer had to repeat themselves.


Who We Designed For
Primary User

Marketing or Ops Manager

Manages inbound for multiple locations. Not technical. Wants the agent live and working without a lengthy setup.

  • No time to write prompts or logic from scratch
  • Needs to trust the agent before publishing it
  • Wants visibility into what the agent is actually doing
Secondary User

Franchise Operator / Business Owner

Cares about leads and response rate. Doesn't want to be involved in day-to-day configuration but needs confidence the agent represents their brand correctly.

  • Needs consistent tone across locations
  • Wants leads captured and routed, not lost
  • Measures success in conversations and conversions

Process

Week 1

Research

  • Competitive analysis — Microsoft Copilot, Intercom, Drift
  • Mapped existing inbox workflows
  • Identified legacy UI constraints with engineering
  • Defined the agent mental model with PM

Week 2

Framework Design

  • Designed agent library and card structure
  • Defined Goals → Triggers → Tasks model
  • Explored task granularity with ML engineer
  • Internal critique and iteration

Week 3

Lead Gen Agent

  • Designed task stack and tool configuration
  • Built preview with channel/location switching
  • Iterated on task count to reduce LLM cost
  • Tested with real customer via CS team

Week 4

Handoff

  • Designed Outcomes and Activity tabs
  • Documented edge cases and error states
  • Full Figma handoff with annotations
  • Framework shared with broader design team
View Prototype

Key Decisions
01
Agent Framework

A framework before a feature

Before designing the Lead gen agent, we needed a shared mental model for what an AI agent even was inside Birdeye. We studied how Microsoft Copilot and other agentic products structured their configuration surfaces, then mapped that against our legacy UI constraints. The result was the Agent Library — a central place to discover, activate, and configure agents — built around a Goals → Triggers → Tasks architecture that any agent could slot into. Designing the container before the content meant the system could scale without each agent needing to reinvent its own setup pattern.
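The Goals → Triggers → Tasks architecture described above can be sketched as a small data model. This is an illustrative sketch only — the class and field names are assumptions, not Birdeye's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str          # plain-language step, e.g. "Capture lead"
    description: str   # what the step does, in the user's words
    tools: list[str] = field(default_factory=list)  # configurable tool chips

@dataclass
class Agent:
    goal: str            # what the agent is trying to achieve
    triggers: list[str]  # events that start a run, e.g. an inbound message
    tasks: list[Task]    # the ordered pipeline the agent executes

# A hypothetical Lead gen agent expressed in this structure
lead_gen = Agent(
    goal="Engage prospects and capture lead information",
    triggers=["inbound message outside business hours"],
    tasks=[
        Task("Understand intent", "Classify what the prospect is asking for",
             tools=["Intent classifier"]),
        Task("Capture lead", "Collect name, email, and phone number",
             tools=["Lead capture"]),
    ],
)
```

Because every agent slots into the same three-part shape, the Agent Library only has to render one configuration pattern, which is what lets new agents reuse the container instead of reinventing their own setup flow.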

app.birdeye.com · Settings — AI Agents
Agent library
Tradeoff

Building a framework first added a week to the timeline. The bet was that the investment would compound — and it did. Every subsequent agent required significantly less design time.

02
Tasks

A readable pipeline, not a prompt box

The earliest version was a single large prompt field. It was flexible but gave users no mental model of what was actually happening. We redesigned it as a sequential pipeline: understand intent → compose response → handle handoff → capture lead → follow up. Each task is a named step with a plain-language description and configurable tool chips. Users can open any tool to configure it without ever writing a prompt. Several tasks were merged after working closely with the ML engineer to balance expressiveness and LLM cost.
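The sequential pipeline above (understand intent → compose response → handle handoff → capture lead → follow up) can be sketched as a list of named steps run in order. The functions below are stand-ins — the keyword rule and handoff condition are assumptions for illustration, not the production logic:

```python
def understand_intent(state):
    # assumed keyword rule standing in for the real intent classifier
    state["intent"] = "pricing" if "price" in state["message"].lower() else "general"
    return state

def compose_response(state):
    state["reply"] = f"Thanks for reaching out about {state['intent']}."
    return state

def handle_handoff(state):
    # assumed rule: escalate to a human only for intents the agent can't handle
    state["handoff"] = state["intent"] == "complex"
    return state

def capture_lead(state):
    state["lead"] = {"email": state.get("email")}
    return state

def follow_up(state):
    state["followed_up"] = not state["handoff"]
    return state

# Each entry is a named, inspectable step — the user-facing pipeline view
# mirrors this list rather than one opaque prompt.
PIPELINE = [understand_intent, compose_response, handle_handoff,
            capture_lead, follow_up]

def run(message, email=None):
    state = {"message": message, "email": email}
    for task in PIPELINE:
        state = task(state)
    return state

result = run("What's the price for two locations?", email="ana@example.com")
```

Merging tasks, as described above, amounts to collapsing adjacent steps in this list — fewer steps means fewer LLM calls per run, which is where the cost saving came from.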

app.birdeye.com · Lead gen agent — Tasks
Tasks pipeline
Configure "Intent classifier"
Intent classifier
Configure "Lead capture"
Lead capture config
Tradeoff

The pipeline view abstracts the AI's decision logic into named steps. Advanced teams wanted more control, so we exposed prompt editing directly within tasks without leaving the pipeline view.

03
Preview

Preview that explains, not just demos

Most preview tools show you the output. Ours needed to show the reasoning. The preview panel lets users simulate a real conversation — switching channel, location, and business hours context. The Activity trace shows which task was triggered, which tool fired, what intent was detected, and what data was captured. When a user asks "why did the agent say that?", the activity trace answers it. This was the primary reason users felt confident publishing their agent without talking to support first.
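A single trace entry answering "why did the agent say that?" might look like the sketch below. The field names and the plain-English renderer are hypothetical, shown only to make the idea concrete:

```python
# Hypothetical shape of one activity-trace entry — illustrative field names,
# not the production schema.
trace_entry = {
    "task": "Understand intent",
    "tool": "Intent classifier",
    "detected_intent": "pricing",
    "captured": {"email": "ana@example.com"},
}

def explain(entry):
    # renders a trace entry as the plain-English line shown inline in the preview
    return (f"Task '{entry['task']}' ran tool '{entry['tool']}' "
            f"and detected intent '{entry['detected_intent']}'.")
```

Keeping this rendering one-line-per-step is what makes the trace collapsible yet always present, rather than a separate debugging screen.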

app.birdeye.com · Lead gen agent — Preview settings
Preview settings
app.birdeye.com · Lead gen agent — Tasks · Live preview with activity
Preview with activity trace
Tradeoff

The activity trace added engineering complexity. We kept it inline — collapsible but always present — because separating it would have hidden the reasoning at the exact moment users needed it most.

04
Outcomes & Activity

Measuring what the agent actually did

Most agent dashboards show run counts. We needed to show business value. The Outcomes tab tracks leads captured, deflection rate, and time saved — each broken down by channel. The Activity tab logs every agent run with a timestamp, status, and a plain-English summary. This was designed for the operator who wants oversight without involvement — a record they can check weekly rather than a dashboard they need to actively manage.
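The Outcomes numbers can be derived from the same per-run activity log. The record shape and formulas below are assumptions for illustration — deflection here means a conversation the agent resolved without a human handoff:

```python
# Assumed per-run log records (not the production schema)
runs = [
    {"channel": "webchat", "handed_off": False, "lead_captured": True},
    {"channel": "sms",     "handed_off": True,  "lead_captured": False},
    {"channel": "webchat", "handed_off": False, "lead_captured": False},
]

def deflection_rate(runs):
    # share of conversations resolved without escalating to a human
    resolved = sum(1 for r in runs if not r["handed_off"])
    return resolved / len(runs)

def leads_by_channel(runs):
    # the channel-level breakdown shown in the Outcomes tab
    counts = {}
    for r in runs:
        if r["lead_captured"]:
            counts[r["channel"]] = counts.get(r["channel"], 0) + 1
    return counts
```

Pairing these two functions is the point of the tradeoff noted below the Outcomes tab: deflection alone can look good while leads quietly go uncaptured, so the tab reports both.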

Lead gen agent — Outcomes
Outcomes tab
Lead gen agent — Activity
Activity tab
Tradeoff

Deflection rate can be gamed. We paired it with leads captured and time saved so the three numbers tell a complete story, not just an optimistic one.


Outcomes

10+

Agents shipped on the framework at launch across the product

72%

Conversation deflection rate across channels in the first 30 days

3x

More leads captured than the pre-agent manual-response baseline

The Lead gen agent became one of Birdeye's most activated AI features at launch. The Goals → Triggers → Tasks model was adopted across every subsequent agent, significantly reducing design and engineering overhead for each new one.

Businesses that had previously gone dark after business hours were now capturing leads around the clock. The channel-level breakdown in Outcomes helped marketing teams understand which platforms were actually driving results.


Reflection

The hardest part of this project wasn't designing the agent — it was designing the language around it. Terms like "intent," "sentiment," and "task" are second nature to an ML engineer and completely opaque to a front desk manager. Every label and description went through multiple rounds of plain-English rewriting.

The task merging exercise taught me that the design surface and the model architecture are not independent decisions. The number and structure of tasks was directly constrained by LLM cost and latency. Designing in close collaboration with the ML engineer from the start was what made that tradeoff navigable.

If I had more time, I'd have invested in onboarding. The first publish of an agent is a high-stakes moment for a non-technical user. A guided first-run experience would have closed the gap between "I set it up" and "I trust it's working."