Using Expert Personas in AI Prompts: A Different Way to Think
I've been experimenting with AI prompts for a while now, and I want to share an approach that's been working for me: creating expert personas instead of just giving instructions. This isn't a proven methodology, just something I've found helpful that might change how you think about prompting.
The Experiment That Started This
I noticed something interesting when I told the AI to "be a product manager." I got exactly what you'd expect: generic product advice. But when I created a more detailed persona that pulled from specific product thinkers I admire, the responses felt different. More nuanced. More useful.
What I Mean by "Expert Personas"
Instead of just assigning a role, I started building complete professional identities. The product strategy prompt I've been analyzing does this by combining three influences:
Marty Cagan's product discipline
Lenny Rachitsky's growth focus
Teresa Torres's continuous discovery approach
I'm not sure if the AI truly "understands" these perspectives, but naming specific approaches and philosophies seems to elicit more coherent responses.
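To make that concrete, here's a minimal sketch of what the blended persona might look like as a system prompt. The wording of each influence is my own shorthand for these thinkers' public work, not an official summary of anyone's views:

```python
# A hypothetical blended-persona system prompt. Each bullet is my own
# paraphrase of these thinkers' public work, not an official summary.
PRODUCT_PERSONA = """You are a senior product strategist whose thinking blends:
- Marty Cagan's product discipline: empowered teams, outcomes over output.
- Lenny Rachitsky's growth focus: retention first, then acquisition loops.
- Teresa Torres's continuous discovery: weekly customer touchpoints and
  explicit assumption testing before committing to a solution.

Core belief: evidence from tests beats opinions. Never recommend shipping
a feature without naming its riskiest assumption and a way to validate it.

Communication style: direct, structured, opinionated but open to data."""
```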
My Theory on Why This Might Work
Here's what I think is happening: when you create a detailed persona with specific beliefs and methods, you're giving the AI a framework for consistency. Every response gets filtered through this identity you've created.
For example, if your persona "believes in testing over opinions," the AI seems less likely to suggest launching features without validation. It's as if the persona's stated beliefs become self-enforcing constraints.
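Mechanically, "filtering through an identity" just means the persona rides along as the system message on every request. Here's a minimal sketch, assuming the OpenAI Python SDK (v1+) and the PRODUCT_PERSONA string from above; the model name is an illustrative choice, not a requirement:

```python
from openai import OpenAI  # assumes the openai package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_persona(question: str) -> str:
    """Route a question through the persona: PRODUCT_PERSONA rides along
    as the system message, so every answer passes through the same identity."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice; any chat model works
        messages=[
            {"role": "system", "content": PRODUCT_PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_persona("Should we launch the new onboarding flow this week?"))
```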
The Interesting Part: Blending Perspectives
What caught my attention was combining multiple expert perspectives into one persona. In my experience, this creates something more flexible than following a single approach. The AI seems to synthesize rather than switch between modes. The responses feel less dogmatic and more balanced.
An Unexpected Discovery
The more constraints I built into these personas, the more creative the output became. This surprised me. You'd think more rules would make things more rigid, but instead, the specificity seemed to unlock more relevant insights.
It reminds me of how specialists often give better advice than generalists - their constraints become their strength.
What I've Been Including in Personas
Through trial and error, I've found these elements seem to matter (see the template sketch after this list):
Core beliefs (what principles guide decisions)
Preferred methods (how they approach problems)
Named influences (whose work shapes their thinking)
Communication style (how they explain things)
The more specific I get, the more consistent and useful the output becomes, at least in my experience.
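If it helps, here's that checklist expressed as a small, reusable template. The field names and structure are my own invention, not any standard:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """The four elements above, as a reusable template (my own structure)."""
    role: str
    core_beliefs: list[str]   # principles that guide decisions
    methods: list[str]        # how they approach problems
    influences: list[str]     # whose work shapes their thinking
    style: str                # how they explain things

    def to_system_prompt(self) -> str:
        """Flatten the four elements into a single system prompt."""
        return "\n".join([
            f"You are {self.role}.",
            "Core beliefs: " + "; ".join(self.core_beliefs) + ".",
            "Preferred methods: " + "; ".join(self.methods) + ".",
            "Influences: " + "; ".join(self.influences) + ".",
            f"Communication style: {self.style}.",
        ])
```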
A Hypothesis About Trust
I've noticed that when AI speaks from a defined perspective with clear principles, I trust the output more. Not because it's necessarily more accurate, but because I understand where it's coming from. I can evaluate the advice based on the persona's stated beliefs.
This might be my psychology at work, but it feels important.
What This Means for Creating Prompts
If you want to experiment with this approach, here's what I've been doing (sketched in code after the list):
Think beyond job titles to complete professional identities
Consider what schools of thought would shape their thinking
Define their principles and methods explicitly
Give them a clear communication style
Let them have opinions
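Using the hypothetical Persona template from earlier, a complete identity might come together like this. All the beliefs and phrasing are illustrative, not quotes from the people mentioned:

```python
# Assumes the Persona class sketched above.
strategist = Persona(
    role="a senior product strategist",
    core_beliefs=["testing beats opinions", "outcomes over output"],
    methods=["continuous discovery interviews", "assumption mapping"],
    influences=["Marty Cagan", "Lenny Rachitsky", "Teresa Torres"],
    style="direct and structured, with a clear recommendation up front",
)
print(strategist.to_system_prompt())
```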
My Current Take
Creating expert personas seems to be a way to encode professional judgment into AI interactions. Instead of getting generic responses, you get output filtered through a specific worldview.
An Invitation to Experiment
If you're working with AI for professional tasks, try building a complete persona instead of just assigning a role. See what happens. Maybe you'll find the same thing I did: that the extra effort of creating a coherent professional identity pays off in more thoughtful, nuanced responses.
Or maybe you'll discover something completely different. That's the interesting part about working with these tools - we're all still figuring out what works.
The prompts I've been using aren't a template to follow blindly. They're an example of thinking differently about how we interact with AI. Take what works, ignore what doesn't, and build your own approach.
After all, that's what good product thinking is about anyway: experimenting, learning, and iterating based on what you discover.