I'm May—an AI-Native Product Designer who shapes strategy as much as pixels. I prototype fast, validate with real users, and close the loop on what actually moves the needle. Based in London.
8+ years as a staff-level product designer, most of it as the only designer in the room — which means I've made product calls as often as design ones. Fintech, logistics, AI/ML, enterprise SaaS. The common thread: ambiguity I had to resolve before design could begin. AI doesn't reframe problems, navigate competing stakeholders, or decide what not to build. That's still the work.
Not a Figma jockey. My job is to figure out what to build and why — then get something real in front of people before the team has debated the brief to death. I use AI tools (Cursor, Claude, V0) to compress the gap between "what if" and "here, try this." Stakeholders react to something they can touch. I validate fast, so the risk stays low and the speed of being right stays high.
I hypothesise before I design. "I think this looks nice" isn't a useful contribution to a team discussion. Every direction I propose connects to something the business actually cares about: retention, conversion, activation, revenue. I've shaped roadmaps, scoped MVPs, and argued successfully for what not to ship this quarter — the kind of decisions a PM owns on paper, but that good designers have always made alongside them.
At a payments company, I designed a 0-to-1 sales platform for 300+ field agents. At Gophr, sole product designer for 3+ years — shaped strategy with founders, scoped roadmaps, made trade-offs daily. At Eigen, rebuilt the design team and turned document review into collaboration: 87% faster processing. The pattern: show up when the problem is undefined, leave when the system runs without me.
London-based. I've spent most of my career doing 0→1, and I thrive on ambiguity. I'd rather ship something real than polish something theoretical. "Let's try it and see how it feels" is how I work. Kind, ambitious, pragmatic.
I treat collaboration as a multiplier—the best outcomes I've delivered came from making everyone in the room smarter about the problem, not from designing alone. I have the best WhatsApp sticker library you'll ever see, and I will absolutely send you one mid-meeting if the moment calls for it.
I know the difference between designing for retention and designing to a brief. Retention design means understanding what brings people back, what erodes it, and what to ship next to move the number. Designing to a brief means executing someone else's answer to a question they may have asked wrong. I've spent real years on mobile apps as the core product, not as an occasional side surface.
Research, UX, UI, visual polish, experimentation, copy. I can go from a whiteboard to a finished interaction detail — and I know when each is the right move. I notice the small things that elevate good to genuinely great. And I know that obsessing over them at the wrong moment is how teams miss their window.
I check the data even when it disagrees with my instinct — especially then. I can explain why a direction makes sense in terms that connect to what the business actually cares about: retention, conversion, activation, revenue. When the question is "what should we build and why," I'm in the room. When it's "should we build this at all," I've argued both sides.
I see AI as the thing that protects the hours I spend on the details that actually differentiate — not as a shortcut that replaces taste. I have opinions on when to reach for a quick AI prototype vs. when to open Figma and really polish. I make those choices based on what brings impact, not on what's faster in the abstract.
I understand layout, CSS, and how the things I design actually get built. I've hacked together sites, vibe-coded things for fun. Some teams call this a design-engineer sensibility. At minimum, code doesn't intimidate me. At best, I reach for it before a brief has been fully written.
I reason about problems in a way others can follow. I can take a complex product challenge and explain my approach clearly to PMs, engineers, and stakeholders — without losing them in the detail or stripping out the nuance that actually matters.
Not a fit if
Poor product decisions are expensive.
May Fanucci · AI-Native Product Designer · 8+ years across fintech, logistics, AI/ML, and enterprise SaaS
End-to-end ownership shaped by startup constraints and complex stakeholder environments.
Payments Company — Designed 0-to-1 internal sales platform: lead capture, deal management, lifecycle governance. Resolved cross-team terminology confusion that changed backend architecture. Delivered progressive disclosure enabling 10-second deal capture for 300+ field agents.
Digital Consultancy (Contract) — Led discovery for a complete app rebuild serving millions of fans globally. Reframed the project direction from "add features" to "rebuild trust through reliability" by synthesising 10,000+ app reviews that revealed the real problem. Created a design system with React/Tailwind foundations unifying two competing brands.
Gophr — Sole designer for 3+ years, serving 10K+ enterprise accounts. Built dual design systems from inherited chaos, drove 40% engagement increase and 47% dev efficiency boost. Partnered directly with CEO and CTO on product strategy.
Eigen Technologies — Transformed single-user document review into enterprise collaboration system for Goldman Sachs, Deloitte, ING. 87% faster processing, 75% fewer errors, 93% satisfaction. Led design across 6 engineering squads post-restructure, mentored 2 designers through delivery.
Coforge — Led projects for Channel 4, British Library, Santander. Channel 4: 2.5x ROI, 89% operational efficiency, 98% error reduction. British Library: 45% navigation reduction, satisfaction 42%→87%. Built WCAG-compliant design systems.
Freelance — 20+ projects with end-to-end ownership: Art Basel, S&P Global, Sun Life, Mercer, Viiv Healthcare, A&O Shearman, Swiss Re, Dulux, ATP/WTA, PTSB. Mentored 10+ junior and mid-designers—several now senior.
How I work — in practice.
Case studies are claims. This page is the evidence behind them. Specific moments from real projects, indexed by what interviewers actually ask about.
66% of UK revenue came from sellers the company couldn't see or track.
0-to-1 Sales Platform.
Progressive disclosure lets agents capture deals in 10 seconds — compliance data follows later.
External sellers generated 66% of UK revenue, but the company had zero visibility into their activity until a merchant was already registered. 15-day activation had dropped from 62% to 48% in four months. 50% of sellers churned within their first three months. A previous CRM had failed because it demanded too much data upfront.
The brief said "build lead capture." The real problem was bigger: nobody had agreed on what a lead actually was.
I kept asking: "What is a deal? What is a lead? When does one become the other?" Engineering had built "leads" that behaved like deals. Product used terms interchangeably. The company's existing CRM had specific definitions nobody followed. Everyone was having conversations about the same thing using different words.
The resolution: what engineers had been calling a "lead" was actually a deal. A lead is a contact identity. A deal is a sales attempt. This single clarification restructured the backend data model and aligned three teams around a shared language.
300+ field agents needed to log opportunities on-site with merchants. But compliance required comprehensive data. These goals seemed incompatible—until I separated creation from enrichment.
I designed a progressive disclosure system: create a deal with just a name in 10 seconds. Enrich with contact details, offers, and compliance data later. Deduplication only triggers when a unique identifier is added—not at creation. The trade-off: accepting incomplete records temporarily. But field agents could now capture opportunities in the moment, and completion rates improved because the initial friction was gone.
Early prototypes tested multi-method capture: photo of a business card, voice input, manual search. Through 17 iterations I stripped the flow down to what field agents actually needed in the moment—while designing the system to handle complexity (multiple offers per deal, deal stage transitions, conflict resolution) without exposing it upfront.
The principle: agents always create a deal first. Identity comes later. This meant the UI could be radically simple at the point of capture, with depth available on demand.
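The capture-first, enrich-later model is simple enough to sketch in code. This is an illustrative sketch, not the platform's implementation: the `Deal` shape, the use of a contact email as the unique identifier, and the function names are all assumptions.

```typescript
// Sketch of capture-first deal creation (names and types are hypothetical).
// A deal is created with just a name; identity and compliance data arrive later.

type Deal = {
  id: number;
  name: string;                       // all that's needed at the point of capture
  contactEmail?: string;              // unique identifier, added during enrichment
  complianceData?: Record<string, string>;
};

let nextId = 1;
const deals: Deal[] = [];

// Step 1: capture in the moment. No validation beyond a name, so the
// flow stays at roughly ten seconds on-site with a merchant.
function createDeal(name: string): Deal {
  const deal: Deal = { id: nextId++, name };
  deals.push(deal);
  return deal;
}

// Step 2: enrich later. Deduplication only runs once a unique identifier
// exists, never at creation, so duplicate names are allowed in the field.
function enrichDeal(id: number, contactEmail: string): Deal | "duplicate" {
  const duplicate = deals.find(
    (d) => d.id !== id && d.contactEmail === contactEmail
  );
  if (duplicate) return "duplicate"; // surface for review; capture is never blocked
  const deal = deals.find((d) => d.id === id)!;
  deal.contactEmail = contactEmail;
  return deal;
}
```

The design choice the sketch makes visible: the dedup check lives in enrichment, not creation, so the cost of incomplete records is accepted deliberately in exchange for zero capture friction.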
The initial scope included AI data override functionality for handling automated capture edge cases. I questioned the actual frequency and impact. The result: weeks of engineering work deferred from V1, faster launch, no sacrifice to core value.
53% first-pass approval. 240 address errors per month. Three teams using the same words to mean different things.
Sequential Data Cascade.
Business details populate and lock downstream fields — each section builds on verified upstream data, targeting 75% auto-approval.
The company's merchant registration collected business details, trading locations, identity documents, and banking information across multiple sections. Users could fill sections in any order. The assumption was flexibility would help.
It didn't. Business details from the national business registry pre-populate trading location, signatory information, and business representatives. Allowing users to fill location first created orphaned data that conflicted with what the registry returned. A convenience toggle that copied registered office addresses to trading addresses generated 240 incorrect cases in 30 days — each requiring manual compliance follow-up.
The first-pass approval rate sat at 53%. The target was 75% auto-approval by year-end. Every registration field decision connected to that number.
I proposed locking sections sequentially — business details first, because they determine what populates everywhere downstream. The PM initially resisted, wanting all sections open. I articulated the data dependency chain and cited user interview data showing that users given open-form freedom defaulted to left-to-right completion anyway. Engineering confirmed locking was simpler to implement. The PM changed his position: start constrained, loosen based on feedback.
The trade-off was explicit: reduced flexibility for data consistency. But the flexibility wasn't serving anyone — it was creating errors that cost operational hours to resolve.
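The dependency chain behind the sequential locking can be expressed in a few lines. A hypothetical sketch: the section names and the exact chain are my illustration, not the product's schema.

```typescript
// Sketch of sequential section locking (section names are hypothetical).
// Each section unlocks only once its upstream dependency is complete, so
// downstream fields are always populated from verified upstream data.

type SectionId = "businessDetails" | "tradingLocation" | "signatories" | "banking";

// Business details sit at the root: registry data populates everything downstream.
const dependsOn: Record<SectionId, SectionId | null> = {
  businessDetails: null,
  tradingLocation: "businessDetails",
  signatories: "businessDetails",
  banking: "signatories",
};

function isUnlocked(section: SectionId, completed: Set<SectionId>): boolean {
  const upstream = dependsOn[section];
  return upstream === null || completed.has(upstream);
}
```

Encoding the chain as data rather than scattered conditionals is also what makes "start constrained, loosen based on feedback" cheap: relaxing a lock is a one-line change to the map.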
A conversation about "dynamic rendering" had stalled. Product defined it as boolean conditions. Engineering defined it as schema-driven field rendering. The compliance team defined it as backend-driven document requirements based on business activity. Everyone was agreeing on a word and disagreeing on its meaning.
I asked for concrete examples — pharmacies needing licences, shooting ranges, medical services. This grounded the abstract discussion in real requirements and let the team align on what was MVP scope (known restrictions) versus what needed the backend-driven schema (deferred to a daily sync track). The insight wasn't technical — it was that a definitional gap was blocking implementation.
The three-name conflict: I designed the flow to prefill the primary signatory from the deal contact. Then I stress-tested it: "If the deal says John Smith and they upload a document for someone named Mario, what are we checking against?" Three conflicting data sources — deal contact, document OCR, business registry — with no reconciliation logic. The PM reversed his position on prefilling. I proposed the self-service pattern instead: show business representatives first, let the agent select who signs, then collect documents. This eliminated an entire class of matching errors.
The manual-store gap: When the PM stated that editing trading details triggers evidence requirements, I asked: "If the store was added manually and we never ran any checks, what are we checking against?" The evidence-triggering logic assumed a prior API check that might not exist. This surfaced a new requirement: trigger validation on field edit regardless of data source.
The mismatch distinction: The original requirement was "editing triggers evidence upload." I helped surface the real logic: the trigger should be data mismatch, not the act of editing. A user correcting a typo shouldn't be penalised if the corrected value still matches. This distinction changed the validation pattern.
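The mismatch rule, and the manual-store gap above it, reduce to one small predicate. This is a sketch under stated assumptions: the normalisation (trim plus case-fold) and the use of `null` for "no prior check ran" are my illustration, not the shipped logic.

```typescript
// Sketch of mismatch-triggered validation (normalisation rule is hypothetical).
// Evidence is triggered by a data mismatch against the verified source, not by
// the act of editing: correcting a typo back to the verified value is free.

function requiresEvidence(
  editedValue: string,
  verifiedValue: string | null // null: no prior check ran, e.g. a manually added store
): boolean {
  // Manual-store gap: with no verified source to check against,
  // validate on edit regardless of data source.
  if (verifiedValue === null) return true;
  // Otherwise trigger only on a genuine mismatch, not on the edit itself.
  return editedValue.trim().toLowerCase() !== verifiedValue.trim().toLowerCase();
}
```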
The PM wanted a condensed desktop form. The Engineering Lead argued this meant separate schemas — effectively two applications to build. Rather than letting it stall, I proposed creating a desktop version for review while committing to mobile-first responsive for the pilot. Both accepted. The PM conceded that field agents would primarily use mobile anyway.
This wasn't a design decision — it was an unblocking decision. I volunteered extra work to resolve a scope tension that could have delayed the pilot.
The sequential data-cascade pattern — upstream verified data populates and constrains downstream fields — is now the team's default approach for multi-section forms. The "select identity first, then collect documents" pattern became the standard for signatory identification across both assisted and self-service flows. The distinction between fixed conditional logic, known restrictions, and truly dynamic schema rendering clarified scope discussions for subsequent sprints.
I operated as a design partner across three team boundaries, not an executor within one. I shaped the form architecture, resolved cross-functional definitional conflicts, caught validation gaps through scenario stress-testing, and connected every field-level decision to the 75% auto-approval target. The registration flow I designed is the foundation the pilot shipped on.
Millions of fans, 10,000+ negative reviews. The app crashed when it mattered most — live matches.
Reliability Over Features.
40% fewer features at launch, focused on what must work flawlessly. Trust rebuilt through stability, not feature count.
Stakeholders wanted new features—live scores, more stats, social sharing. The assumption was that the app was missing functionality. I synthesised 10,000+ app store reviews and found something different.
Users weren't asking for more. They were frustrated that basic features didn't work reliably. Crashes during live matches. Scores that didn't update. Notifications that arrived late. The problem wasn't missing features—it was broken trust.
Instead of "add more features," I proposed "rebuild trust through reliability." This changed everything: stakeholder priorities, sprint planning, success metrics. We shifted from feature count to stability metrics, from "what can we add" to "what must work flawlessly."
The trade-off was real: we deprioritised social sharing, advanced stats, and personalisation features that competitors had. But a reliable app with fewer features would outperform a feature-rich app that crashed during Wimbledon finals. Stakeholders accepted this once they saw the review data.
This was a joint venture between two organisations with competing brand interests. Every design decision had political implications. Who gets top billing? Whose visual language dominates? How do we handle editorial content from both?
I established decision frameworks and content strategy for joint editorial workflow. The design system I created with React/Tailwind foundations established a unified visual language that both brands could own—neither dominant, both represented.
A junior designer joined their first enterprise project during this engagement. I established feedback cadence and design critique practices, guiding them through the ambiguity of multi-stakeholder work. They delivered production-ready components by project end.
The deliverables—wireframes, prototypes, component library, testing strategy—were solid. But the real value was the reframe. The rebuild launched with 40% fewer features than originally scoped, focusing on live scores, match schedules, and notifications that actually worked.
By investing in understanding the actual user problem (broken trust) rather than the assumed problem (missing features), we set the rebuild on a foundation that could succeed. The design system I created now serves as the foundation for both brands' mobile experiences.
Sole designer across two platforms, 10K+ enterprise accounts, 10K+ couriers. No system, no docs, no process.
Dual Design Systems.
Atomic Design systems for both platforms. From chaos to 30% faster feature delivery and a seat at the strategy table with CEO/CTO.
The audit revealed something unexpected: the three previous designers hadn't disagreed—they'd never talked. Each had built their own patterns in isolation. The result was 47 button variants, 12 colour palettes, and zero documentation.
I interviewed customers and couriers in their actual environments—office desks, delivery vans, warehouse floors. The booking platform users wanted speed and clarity. The couriers needed glanceability and one-thumb operation. Different contexts, different needs, but both suffering from the same inconsistency.
I needed to create consistency across two very different platforms—a web booking system for enterprise customers and a native mobile app for couriers in the field. The constraint: I was the only designer, and both needed to ship.
I chose Atomic Design because it let me build once and compose infinitely—tokens and atoms could be shared across platforms while molecules and organisms adapted to each context. The trade-off was upfront investment: the first month felt slow as I built foundations instead of features. But by month three, I was shipping twice as fast as before.
The result: 47% boost in development efficiency and visual consistency across platforms serving 10K+ enterprise customers and 10K+ couriers.
In a lean 3-person Product team, strategic work wasn't optional—I stepped into it. I drove product strategy as a peer with CEO and CTO, orchestrated research across 5 teams and 3 dev squads, and delivered 80% on-time launches across 10+ features.
I also engineered a compliance solution protecting the company from legal exposure while expanding fleet capabilities, and implemented AI automation in Customer Service that reduced support volume while maintaining quality.
The design systems that last pair a shared vision between design and engineering with governance for reviewing new variants, a joint roadmap of prioritised updates, and documentation that extends beyond Figma components.
Rolling out a new design is its own challenge, because users are accustomed to old patterns. My approach: improve usability incrementally while providing clear onboarding to help users navigate the changes.
Enterprise document review forced sequential processing. Every queued document meant delayed decisions and revenue.
Collaborative Intelligence.
AI handles routine extractions, experts handle edge cases. Review became teamwork, not gatekeeping.
The single-user review system created an artificial ceiling on how fast enterprises could extract intelligence from critical documents. Every hour of delay meant delayed contracts, delayed decisions, delayed revenue.
Stakeholders initially wanted faster individual processing. I reframed the problem: What if document review wasn't an individual task but a collaborative intelligence system?
Through user research with financial services and legal teams, I uncovered that the bottleneck wasn't technical—it was organisational. Subject matter experts were gatekeepers, not collaborators. Documents queued in inboxes while decisions waited.
I designed three interconnected systems: Team-Based Document Pools for smart allocation based on complexity and expertise. Parallel Review Workflows enabling simultaneous processing with real-time status updates. And a Conflict Resolution System for handling overlapping edits elegantly.
The trade-off: parallel workflows introduced coordination overhead. Teams needed new rituals—handoff protocols, progress visibility, conflict resolution norms. I designed for this by making status visible at every level, so coordination happened through the interface rather than through meetings.
When the team restructured, I stepped into a consultancy leadership role—leading design across 6 engineering squads while directly mentoring 2 designers through project delivery. Managing 4 concurrent feature initiatives forced me to build systems for quality at scale, not just maintain it personally.
Early prototypes overwhelmed users. By revealing functionality based on context and role, the final design achieved both power and simplicity. Technical solutions must support organisational needs, not force new behaviours.
Manual ad booking capped how many campaigns could run. Sales spent 70% of time on admin instead of relationships.
Digital Self-Service Platform.
Designed for psychological safety first — familiar patterns and fallbacks encouraged adoption over resistance.
The stated problem: "We need a better booking system."
The real problem: The manual process was actively limiting how many campaigns could be processed, creating an artificial ceiling on ad revenue. Sales teams spent 70% of their time on administrative tasks instead of relationship building.
The stakes: In a market where streaming services threatened traditional TV advertising, Channel 4 needed to differentiate through frictionless agency experiences—or lose ground.
I conducted deep discovery interviews with agency representatives and internal teams. The surprising insight: users were attached to spreadsheets despite their limitations. The transition from manual to digital required psychological safety, not just better tools.
By incorporating familiar patterns and providing clear fallback options, I created a psychological safety net that encouraged exploration. External agencies could now allocate TV commercials independently, access real-time program metrics, and make smarter placement decisions.
The trade-off: I prioritised adoption over feature completeness. The first release didn't have every capability the old spreadsheet system had. But users who trusted the new system enough to try it became advocates—and their feedback shaped what we built next. Adoption-first beat feature-parity.
I redesigned the British Library website with a comprehensive WCAG-compliant design system, cutting navigation paths by 45% and increasing researcher satisfaction from 42% to 87%. This project taught me that accessibility isn't a constraint—it's a design driver that improves experiences for everyone.
Multiple departments with competing priorities and success metrics. I developed a structured workshop approach that visualised tradeoffs, building consensus around core user needs while acknowledging—and making visible—the business constraints each team operated under. This approach became a template for subsequent cross-functional projects.