Google Gemini

Intro: Google DeepMind × Gemini App
Challenge + Response

Role: Senior UX Design Lead; Founding Designer on Gemini (via Google Assistant)

  • Founding Designer on Gemini (formerly Bard)

  • Led 0→1 design strategy and foundational UX

  • Drove cross-functional alignment across design, engineering, and research

  • Launched and scaled core features: Gems, Canvas, Deep Research, Extensions

  • Defined AI-native interaction patterns and systems used across Google

  • Shaped the evolution from chatbot → agentic, multimodal AI platform


Designing the Future of Multimodal AI

As generative AI rapidly evolved, Google DeepMind set out to redefine how billions of people interact with intelligence—moving beyond search and assistants toward a truly multimodal, agentic AI system. This effort became Gemini (formerly Bard), a next-generation AI platform capable of understanding and generating text, images, code, and more across surfaces.

As a founding designer on Gemini, I led cross-functional teams spanning design, content, engineering, and research to shape the foundational user experience and system-level design principles. From early 0→1 exploration through global launch and beyond, I helped define the interaction models, design patterns, and feature systems that now power Gemini’s core capabilities—including Gems, Canvas, Deep Research, and Extensions—used by millions worldwide.


Part I: The Investigation Begins

I was brought in at the earliest stage as a founding designer to help define what an AI-first product should be—before clear patterns or industry standards existed. This wasn’t about designing a chatbot; it was about rethinking how humans interact with intelligence.

I led early-stage exploration across product, research, and engineering—translating emerging LLM capabilities into usable product concepts. Through rapid prototyping, prompt design, and design sprints, I helped establish the foundational mental models for Gemini:

  • AI as a collaborative partner, not just a tool

  • Conversations as interfaces for action, not just responses

  • Multimodal inputs and outputs as a first-class interaction layer

This work laid the groundwork for Gemini’s evolution from Bard into a fully integrated AI system—capable of reasoning across modalities and contexts in ways traditional assistants could not.

Part II: Design

I led the design of Gemini as a cohesive AI platform, aligning multiple product surfaces and capabilities into a unified experience used across web, mobile, and ecosystem integrations.

This included launching and scaling key features:

  • Gems → Custom AI agents tailored to user intent and workflows

  • Canvas → A collaborative workspace for creating, iterating, and building with AI

  • Deep Research → An autonomous research agent that plans, explores, and synthesizes complex information into structured outputs

  • Extensions (Apps) → Integrations across Google products enabling real-world task completion

Each of these required defining new interaction patterns for AI-native workflows—moving from simple prompt/response toward goal-oriented, assistive systems.

I partnered deeply across design, engineering, and content to:

  • Establish scalable interaction frameworks for conversational + multimodal UX

  • Define structured outputs that turn AI responses into actionable results

  • Create design principles and patterns that now extend across Google’s AI ecosystem

  • Ensure a user-centered, trustworthy experience across highly complex systems

The result was a unified product experience that transformed fragmented AI capabilities into a coherent, intelligent system.

Part III: Ship It and Repeat!

Launching Gemini was not an endpoint—it was the foundation of an evolving AI platform.

I continue to operate at the frontier of applied AI, leading the evolution of Gemini through:

  • Agentic workflows → enabling multi-step, autonomous task completion

  • Persistent memory & personalization → making AI context-aware over time

  • Deeper multimodal interactions → blending text, vision, and code seamlessly

  • Continuous system refinement → improving quality, trust, and usability at scale

Features like Deep Research demonstrate this shift: the AI independently plans, searches, reasons, and delivers synthesized outputs, moving beyond reactive systems toward proactive intelligence.

My role continues to focus on ensuring that as the technology advances, the experience remains human-centered, scalable, and industry-defining, evolving Gemini from a product into a platform that shapes how people interact with AI globally.