Pfizer · Generative AI · Enterprise SaaS

Designing for Trust in a Gen AI Marketing System

Problem

Pfizer's marketing teams had a Gen AI copy tool but low adoption: marketers couldn't audit or defend generated content in medical-legal review.

Action

Reframed adoption resistance as a transparency design problem. Implemented parameter visibility, generation provenance, and structured feedback capture — aligning cross-functional teams on the Idea Accelerator positioning before a line of production code shipped.

Outcome

MVP delivered for beta onboarding. The "Idea Accelerator" reframe unlocked adoption — repositioning AI as a volume generator for human evaluation, not a replacement.

Trust is a design problem. Not a training problem.

Role

Systems Architect · Lead UX Designer

Deliverables

Stakeholder alignment, design sprints, wireframes, prototypes, user testing

Status

MVP delivered · Beta testing

The design challenge wasn't "how do you make this look good." It was "how do you design for trust in a system whose outputs marketers can't fully verify, inside a regulatory environment that doesn't move as fast as the technology."

I orchestrated the UX architecture for Pfizer's generative AI marketing platform — driving cross-functional alignment across marketing, regulatory, and engineering stakeholders to establish a shared definition of "trustworthy AI output" inside a pharmaceutical review workflow. The real project was institutional: reframing AI adoption resistance as a transparency design problem, not a training problem, and standardizing that framing across teams before a line of production code shipped.

Pfizer Gen AI marketing platform — key generation screen

01 — Context & Constraints

The organization

Pfizer's marketing teams develop brand-specific content for pharmaceutical products across multiple channels and global markets. Content must pass through medical-legal review and comply with pharmaceutical advertising regulations. The business case for Gen AI was efficiency — the design problem was making that efficiency trustworthy.

My role

Systems architect and lead UX designer responsible for orchestrating cross-functional alignment and implementing the end-to-end workflow architecture. I facilitated design sprints, navigated stakeholder relationships across multiple Pfizer teams, and served as the primary bridge between design intent and developer constraints — treating mid-engagement engineering constraints as design inputs rather than blockers.

The real constraints

Regulatory environment that doesn't move as fast as the technology

Every piece of pharmaceutical marketing copy requires review against brand guidelines, regulatory standards, and medical-legal approval processes. A Gen AI tool that generates content faster than it can be reviewed creates compliance risk, not efficiency. The design had to accommodate the review workflow as a first-class feature — not an afterthought.

Marketers who didn't trust the output

Pfizer's marketing teams are experts at copy. They arrive at pitches with 3–5 carefully developed ideas. Asking them to use AI-generated content meant asking them to stake their professional judgment on output they couldn't fully verify. The adoption problem wasn't technological — it was about trust in a system whose accuracy they had no way to independently evaluate.

Brand-specific language across multiple pharmaceutical brands

Pfizer operates dozens of pharmaceutical brands, each with its own approved language, tone, and regulatory constraints. The tool couldn't be a generic text generator — it had to produce output that was brand-appropriate and defensible inside Pfizer's review process.

Scalable MVP under active feature expansion

The initial brief covered copy generation, translation, and image generation. The roadmap included significantly more. The architecture of the UI — layout, navigation, component structure — had to accommodate features that hadn't been scoped yet without requiring a redesign each time one shipped.

02 — The Strategic Frame

Reframing adoption resistance as a design problem

The initial framing of low AI adoption was a change management problem — resistant users who needed training and persuasion. The actual problem was a design failure: the system gave users no basis for trusting its output. A marketer who can't explain why a piece of copy was generated the way it was can't defend it in a review meeting. That's not a training gap. That's a transparency gap in the interface.

Positioning Gen AI as an Idea Accelerator, not a replacement

Pharmaceutical marketers typically develop 3–5 copy concepts to pitch. The reframe that unlocked adoption: Gen AI as a tool that generates the same volume of initial concepts in minutes, leaving human judgment — which is irreplaceable in regulated communications — for evaluation and refinement rather than first draft generation. The tool doesn't replace the marketer. It front-loads the work the marketer was least good at.

03 — Prompt Architecture

The generation workflow was designed as a transparent pipeline. Marketers needed to see the parameters they set, trace them through to the output, and understand why the system produced what it did — before they could defend it in a review meeting.

Generation Pipeline

Input Parameters

Brand: Lipitor · Oncology
Tone: Clinical · Reassuring
Format: Web banner · 300×250
Constraints: FDA compliant · ISI included

Engine

Brand model + regulatory filters

Output + Provenance

“Talk to your doctor about Lipitor…”

Generated from:

Lipitor · Clinical · 300×250 · FDA

Feedback on output retrains the brand model — closing the trust loop

Parameters are always visible alongside the generated output
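To make the pipeline concrete, the sketch below models how a provenance record could travel with each generated output and feed the trust loop. This is an illustrative sketch only — the type names, fields, and `provenanceSummary` helper are assumptions for explanation, not Pfizer's actual schema.

```typescript
// Illustrative sketch — names and fields are assumptions, not the production schema.

interface GenerationParams {
  brand: string;         // e.g. "Lipitor"
  tone: string;          // e.g. "Clinical"
  format: string;        // e.g. "Web banner · 300×250"
  constraints: string[]; // e.g. ["FDA compliant", "ISI included"]
}

interface ProvenanceRecord {
  params: GenerationParams; // the exact inputs, shown alongside the output
  generatedAt: string;      // ISO timestamp, part of the audit trail
  output: string;           // the generated copy itself
}

interface FeedbackEvent {
  recordId: string;
  verdict: "accepted" | "edited" | "rejected";
  note?: string; // reviewer comment fed back into brand-model improvement
}

// Build the one-line provenance summary a reviewer sees next to the copy.
function provenanceSummary(r: ProvenanceRecord): string {
  const p = r.params;
  return [p.brand, p.tone, p.format, ...p.constraints].join(" · ");
}
```

The point of the structure is auditability: because the parameters ride along with the output instead of being discarded after generation, a marketer can answer "why does it say this?" in a review meeting without reconstructing the session.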

04 — Process

Starting with the user journey

Before wireframing, I mapped the user journey against the business requirements and MVP features — not as a deliverable but as a guide that was updated throughout the engagement as features were added and scoped. This prevented the workflow design from locking in too early around a feature set that was actively changing.

Sketching → wireframes → prototype

Initial layouts for the copy generation workflow were sketched first, then translated to black-and-white wireframes for structured feedback before any visual design decisions were made. This kept early feedback on the workflow and information architecture rather than aesthetics. The wireframes were then iterated through user testing cycles with Pfizer marketing teams.

Stakeholder presentation cadence

Design ideas, workflow diagrams, and interactive prototypes were presented to stakeholders across multiple Pfizer teams at regular intervals. The cadence was intentional — showing work early and often with diverse stakeholder groups in a regulated environment surfaces compliance and brand concerns before they become late-stage blockers.

User journey map — MVP features mapped against business requirements

User journey map used as a guide throughout the engagement

Early sketches — copy generation workflow layout exploration
Black and white wireframe — copy generation interface before visual design

Early sketches and wireframes before visual design decisions

05 — The Hard Part

The problems that weren't in the brief — and how they changed the design.

The trust problem

Early user testing revealed that marketers weren't rejecting the generated copy because it was bad — they were rejecting the process of using it because they couldn't explain to their reviewers why it said what it said. The output quality wasn't the barrier. Auditability was. The entire transparency layer of the interface — parameter display, generation history, feedback capture — emerged from this finding, not the original brief.

The change management victory — reframing AI as an Idea Accelerator

Before the reframe

“AI will write your copy. Your job is to check it.”
Result: resistance, low adoption, professional threat framing

After the reframe

“AI drafts the first five ideas. You decide which one is right.”
Result: adoption unlocked — human judgment repositioned as the premium skill

The language “Idea Accelerator” came out of a stakeholder workshop. It wasn't a marketing line — it was the framing that finally made the tool feel safe to use. Marketers had been trained that their value was their ideas. Positioning Gen AI as a tool that expands ideation volume rather than replacing ideation entirely was the design and change management intervention that moved adoption.

Designing around constraints I didn't control

Generation attempt limits, duplicate content handling, and content saving behavior were all engineering constraints that arrived mid-design. Each one required a design response — how do you show a user they've reached their generation limit in a way that feels intentional rather than broken? How do you handle similar outputs without making the system feel repetitive? These constraint-to-design translations were some of the most exacting problems in the project and weren't in the original scope.

06 — Key Design Decisions

Dual-panel layout built for future features

The core screen uses a right panel for Gen AI inputs, filters, and generation controls — and a left side panel for user context, settings, and menu items that don't exist yet. The left rail was designed explicitly as a future feature slot, so every new capability has a home without restructuring the primary workspace.

Parameters visible at output time

Generated content is shown alongside a text description of the parameters that produced it — brand, tone, format, constraints. This was a trust decision: marketers needed to see not just what was generated, but why, so they could evaluate output against the inputs rather than judging content in a vacuum.

Feedback capture as a first-class feature

Detailed feedback mechanisms for generated content and translation were built in from the start — not as a product analytics play, but as a mechanism for improving the language model with real pharmaceutical marketing data. The loop between user judgment and model improvement was a design requirement, not an engineering afterthought.

Design–engineering bridge on generation constraints

Worked directly with developers to translate engineering constraints into interaction design decisions: generation attempt limits, duplicate content handling, and content saving behavior. These weren't edge cases — they were load-bearing behaviors in a regulated environment where every generated output is a potential compliance artifact.
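One way to see the constraint-to-design translation is to model each engineering constraint as an explicit, nameable UI state rather than a generic error. The sketch below is a hypothetical illustration — the state names and messages are assumptions, not the shipped implementation.

```typescript
// Hypothetical sketch: engineering constraints surfaced as deliberate UI states.
// State names and copy are illustrative assumptions.

type GenerationState =
  | { kind: "ready"; attemptsRemaining: number }
  | { kind: "limit-reached"; resetHint: string }  // an intentional stop, not a failure
  | { kind: "duplicate"; existingId: string };    // similar output already exists

// Translate each constraint into user-facing copy the marketer can act on.
function statusMessage(s: GenerationState): string {
  switch (s.kind) {
    case "ready":
      return `${s.attemptsRemaining} generations remaining for this brief`;
    case "limit-reached":
      return `Generation limit reached: ${s.resetHint}`;
    case "duplicate":
      return `This result closely matches a saved variant (${s.existingId})`;
  }
}
```

Modeling the limit as a first-class state is what makes it feel intentional in the interface: the system explains where the user is and what comes next, instead of failing silently or throwing an opaque error.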

Workflow for key features — copy generation interface

Copy generation workflow

Workflow for translation — post-generation translation experience

Translation workflow

07 — MVP Feature Set

The MVP was scoped to establish the core generation workflow and layout system before expanding. Each feature was designed not just for its own workflow but for how it would coexist with future features in the same interface.

Copy generation

Brand-specific marketing copy for web banners, emails, and advertisements — generated from parameter inputs and evaluated against brand guidelines.

Translation

Workflow for translating generated content across markets, with error-handling for post-generation translation failures and user feedback mechanisms.

Image generation & regeneration

Image selection and generation workflow, designed as wireframes to accommodate the interaction model for visual content within the same generation context.

08 — Outcomes

MVP delivered for beta onboarding

The complete MVP workflow — copy generation, translation, image selection — was delivered and prepared for the first set of beta testers. The flexible layout system means new features can ship into the existing interface without redesign.

Institutional alignment around a new way of working

The Idea Accelerator reframe achieved institutional alignment across marketing, regulatory, and leadership stakeholders — making adoption a shared goal rather than a top-down mandate. The transparency layer — parameter display, generation history, feedback capture — gave marketers the auditability they needed to defend generated outputs inside Pfizer's existing review workflow.

[Add specific metrics here if available]

e.g., number of beta users, time saved per copy iteration, adoption rate among target teams, number of brands onboarded, stakeholder count, engagement rate — whatever is shareable under NDA terms.

“I'm not using AI to write copy. I'm using it to never start from a blank page again.”

— Howdy, on the Idea Accelerator framing

09 — What I'd Do Differently

I would have pushed for a medical-legal reviewer to be part of the design research process earlier — not as a gatekeeper, but as a user. The compliance review workflow is downstream of the tool, but its requirements shaped every design decision we made. Getting that perspective in week two rather than week eight would have changed the initial framing significantly.

[Add a second reflection here — something specific to this project that you'd change with hindsight.]