Popular Prompt Frameworks: A Practical Guide to Getting Better Results from AI
If you’ve spent any time working with AI tools like ChatGPT, Claude, or Gemini, you’ve probably noticed something a little frustrating: the same question asked two different ways can produce wildly different answers. One version of your prompt gets you a thoughtful, detailed response. The other gets you something generic and vaguely disappointing. The difference isn’t magic — it’s craft. And that craft has a name: prompt engineering.
Now, “prompt engineering” sounds intimidating — like you need a computer science degree or some insider knowledge to get it right. But the truth is, most of it comes down to common sense: be clear about what you want, give the AI enough context to help you, and structure your request in a way that makes it easy for the model to understand. Over the past few years, practitioners, researchers, and enthusiastic tinkerers have codified a lot of this common sense into named frameworks — reusable templates and mental models you can apply whenever you sit down to write a prompt.
This article walks through the most popular ones in plain language. No jargon for the sake of jargon. Just practical explanations, real examples, and honest takes on where each framework shines and where it falls short.
Why Frameworks Matter in the First Place
Before we dive in, it’s worth pausing on why these frameworks exist at all. When you talk to another person, there’s a shared social context that does a lot of work for you. If you tell a friend “can you help me with something?” they’ll ask follow-up questions, make reasonable assumptions, and draw on everything they know about you. AI models don’t have that luxury — at least not yet. They work with what you give them in the moment.
Frameworks help you remember to give the model what it needs. They’re essentially checklists dressed up as acronyms. And just like any good checklist, they’re most useful when you’ve internalized them enough that you’re not slavishly following them — you’re using them as a scaffold for your own thinking.
With that said, let’s get into it.
1. The RTF Framework — Role, Task, Format
Let’s start with one of the simplest and most widely used frameworks: RTF. The idea is dead simple. Every prompt has three components you should always think about:
Role — Who should the AI be? A copywriter? A data analyst? A skeptical editor? A patient teacher?
Task — What do you actually want done? Write an email? Summarize a document? Brainstorm ten ideas? Debug a function?
Format — How should the output be structured? A bulleted list? A table? A formal paragraph? Code with comments?
When you give a model a clear role, you’re not just playing pretend — you’re activating a particular register of knowledge and tone. Asking it to respond “as an experienced marketing strategist” versus “as a junior assistant” genuinely shifts the style and depth of the response. The task component forces you to be specific about the deliverable, and the format component saves you the frustration of getting a wall of prose when you wanted a quick list.
Example:
You are an experienced UX researcher (Role). Please review the user feedback below and identify the top three pain points customers are experiencing (Task). Present your findings as a numbered list with a brief explanation under each point (Format).
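If you ever build RTF prompts in code, say for a shared template library, the framework reduces to a three-slot template. Here’s a minimal sketch in Python; the function name and parameters are my own shorthand, not an official RTF API:

```python
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Assemble a prompt from the three RTF components."""
    return (
        f"You are {role}. "                # Role: who the AI should be
        f"{task} "                         # Task: the concrete deliverable
        f"Format your response as {fmt}."  # Format: how to structure the output
    )

prompt = rtf_prompt(
    role="an experienced UX researcher",
    task="Review the user feedback below and identify the top three pain points customers are experiencing.",
    fmt="a numbered list with a brief explanation under each point",
)
```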
This approach is so intuitive that many people use it without knowing it has a name. But making it explicit helps — especially when you’re training others to write prompts or reviewing prompts that aren’t working well.
Where it falls short: RTF is great for getting started, but it doesn’t give you much guidance on how to handle context, constraints, or nuanced situations. That’s where more structured frameworks come in.
2. The CRISPE Framework — Capacity, Role, Insight, Statement, Personality, Experiment
CRISPE was developed by Matt Nigh as a way to write more comprehensive, layered prompts. It’s a bit more involved than RTF, but it produces richer results for complex tasks.
Here’s what each letter stands for:
Capacity and Role — What is the AI’s capacity (what kind of expert is it?) and role (what is it supposed to do in this interaction)?
Insight — What background information or context does the AI need to do this well?
Statement — What exactly are you asking it to do?
Personality — What tone, style, or persona should it adopt?
Experiment — Ask it to give you multiple options or variations.
The “Experiment” piece is one of the more clever additions here. It’s easy to forget that AI models can generate several alternatives at once, which is incredibly useful when you’re not sure exactly what you want. Asking for three different versions of something gives you raw material to react to rather than an empty canvas to stare at.
Example:
You are an expert science communicator with a background in physics (Capacity and Role). I’m writing a blog post for a general audience about quantum entanglement and need to make it accessible without dumbing it down (Insight). Please explain quantum entanglement in plain language (Statement), using a warm, curious tone similar to how Carl Sagan used to write (Personality). Give me three different opening paragraphs I could use (Experiment).
CRISPE works particularly well for content creation tasks — blog posts, speeches, marketing copy — where tone and style matter as much as substance. It’s a bit heavyweight for quick queries but excellent for anything where you’re going to spend serious time with the output.
3. The RISEN Framework — Role, Instructions, Steps, End Goal, Narrowing
RISEN is particularly popular among people using AI for structured, process-oriented tasks — think project planning, research synthesis, or technical writing. It emphasizes breaking down the work into clear steps and being explicit about the destination.
Role — Same as before: who is the AI in this interaction?
Instructions — The specific directions for how to complete the task.
Steps — The step-by-step process the AI should follow.
End Goal — What does success look like? What’s the final deliverable?
Narrowing — Constraints, limitations, and scope. What should the AI not do? What boundaries should it stay within?
The “Narrowing” component is genuinely underrated. One of the most common mistakes people make when writing prompts is being clear about what they want but vague about what they don’t want. Telling the AI “don’t use technical jargon,” “keep it under 500 words,” or “focus only on the US market” can dramatically improve the relevance of the output.
Example:
You are a project manager with experience in software development (Role). Help me create a risk assessment for a new mobile app launch (Instructions). Start by listing potential technical risks, then business risks, then reputational risks (Steps). The final output should be a structured document I can share with my executive team (End Goal). Keep it to one page, avoid technical jargon, and focus only on risks we can realistically mitigate in the next 90 days (Narrowing).
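If you script RISEN prompts, Steps and Narrowing map naturally onto explicit lists. A minimal sketch in Python (the function and argument names are mine, not part of RISEN itself):

```python
def risen_prompt(role: str, instructions: str, steps: list[str],
                 end_goal: str, narrowing: list[str]) -> str:
    """Assemble a RISEN prompt; Narrowing becomes an explicit constraint list."""
    numbered_steps = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    constraints = "\n".join(f"- {item}" for item in narrowing)
    return (
        f"You are {role}. {instructions}\n\n"
        f"Follow these steps:\n{numbered_steps}\n\n"
        f"End goal: {end_goal}\n\n"
        f"Constraints:\n{constraints}"
    )

prompt = risen_prompt(
    role="a project manager with experience in software development",
    instructions="Help me create a risk assessment for a new mobile app launch.",
    steps=["List potential technical risks", "List business risks", "List reputational risks"],
    end_goal="A structured document I can share with my executive team.",
    narrowing=["Keep it to one page", "Avoid technical jargon",
               "Focus only on risks we can realistically mitigate in the next 90 days"],
)
```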
RISEN is a solid choice when you’re working on something where process matters — where the how is as important as the what.
4. The STAR Framework — Situation, Task, Action, Result
If you’ve ever prepared for a job interview, you might recognize STAR — it’s a classic framework for structuring behavioral interview answers. But it translates remarkably well to prompt writing, especially when you’re asking an AI to help you tell a story, write a case study, or generate example-based content.
Situation — Set the scene. What’s the context?
Task — What was the challenge or objective?
Action — What steps were taken to address it?
Result — What was the outcome?
This framework is particularly useful when you want the AI to help you craft narratives. Rather than asking it to “write a case study,” you walk it through the STAR structure with your actual content, and it helps you flesh out each component into a compelling whole.
Example:
I need to write a case study for a client proposal. Here’s the background: Situation — a mid-sized retail company was losing 15% of online customers at checkout. Task — they needed to reduce cart abandonment without a major site redesign. Action — we simplified the checkout flow from six steps to two and added guest checkout. Result — cart abandonment dropped by 34% in 60 days. Please turn this into a polished case study section, about 300 words, written in third person with a confident, professional tone.
STAR is a natural choice whenever you’re working with narratives, presentations, or any content that benefits from a clear beginning, middle, and end.
5. The CO-STAR Framework — Context, Objective, Style, Tone, Audience, Response
CO-STAR is one of the more comprehensive frameworks out there, and it’s gaining significant traction in professional settings. What makes it particularly useful is that it forces you to think explicitly about your audience — something that’s easy to forget when you’re focused on the task at hand.
Context — The background for the request.
Objective — The specific goal you’re trying to achieve.
Style — The writing or communication style (analytical? conversational? journalistic?).
Tone — The emotional register (warm? authoritative? playful? urgent?).
Audience — Who will ultimately read or use this output?
Response — The format and structure of the expected output.
The split between style and tone is subtle but important. Style is about how you construct the writing — your sentence structure, vocabulary, level of formality. Tone is about the emotional quality underneath it. You can have an analytical style with a warm tone, or a conversational style with an urgent tone. Separating these helps you be more precise about what you want.
Example:
Context — I run a financial literacy nonprofit that helps first-generation college students understand personal finance. Objective — write an explainer on the difference between a Roth IRA and a Traditional IRA. Style — conversational and accessible, using relatable examples from everyday life. Tone — encouraging and non-judgmental; many of our readers feel embarrassed about not knowing this stuff. Audience — 18-22 year olds who have never invested before and may be skeptical of financial advice. Response — 500 words, broken into short paragraphs with a simple comparison table at the end.
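With six components, it’s easy to drop one when writing CO-STAR prompts by hand. If you generate them in code, a small helper can enforce the checklist. A minimal sketch, with names of my own choosing rather than any official CO-STAR tooling:

```python
COSTAR_FIELDS = ("context", "objective", "style", "tone", "audience", "response")

def costar_prompt(**parts: str) -> str:
    """Assemble a CO-STAR prompt, failing loudly if any component is missing."""
    missing = [field for field in COSTAR_FIELDS if field not in parts]
    if missing:
        raise ValueError(f"Missing CO-STAR components: {', '.join(missing)}")
    # Label each section so both the model and your teammates can see the structure.
    return "\n\n".join(f"{field.capitalize()}: {parts[field]}" for field in COSTAR_FIELDS)
```

Forgetting the audience then becomes a ValueError instead of a mediocre output.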
CO-STAR shines in communications and content strategy work, where understanding your audience is the whole game.
6. The APE Framework — Action, Purpose, Expectation
APE is lean and fast. It strips things down to three essential questions: what should the AI do, why is it doing it, and what does success look like?
Action — The concrete task.
Purpose — The underlying reason or goal.
Expectation — The desired outcome or criteria for success.
This is a great framework when you need a quick answer and don’t want to write a paragraph-long prompt. It also forces you to articulate the purpose of your request, which is surprisingly clarifying. When you have to explain why you’re asking for something, you often realize your initial request was slightly off-target.
Example:
Action — rewrite the following product description. Purpose — the current version is too technical and we’re getting feedback that customers don’t understand what the product does. Expectation — a version that a non-technical person could read in 30 seconds and immediately understand the value.
APE works best for focused, single-task prompts. When things get complex, you’ll want one of the more structured frameworks.
7. The RACE Framework — Role, Action, Context, Expectation
RACE is a close cousin of APE but adds the crucial element of role. It’s a clean, balanced framework that works well across a wide range of tasks.
Role — Who is the AI in this interaction?
Action — What should it do?
Context — What’s the relevant background?
Expectation — What does a good output look like?
Example:
Role — you are a career coach specializing in mid-career transitions. Action — review the cover letter I’ve written below and give me specific feedback. Context — I’m applying for my first management role after ten years as an individual contributor; I want to convey leadership potential without sounding like I’m overstating my experience. Expectation — identify two or three concrete things I should change, explain why, and show me revised versions of those sections.
RACE is particularly useful in coaching, feedback, and advisory contexts. The explicit framing of the expectation — including asking for why and revised versions — is what gets you actionable feedback rather than generic encouragement.
8. Chain-of-Thought Prompting
This one is a little different from the frameworks above — it’s less a template and more a technique. Chain-of-thought (CoT) prompting encourages the model to reason through a problem step by step before giving you the answer, rather than jumping straight to a conclusion.
The magic phrase that kicks this off is something like “Let’s think through this step by step” or “Walk me through your reasoning.” There’s now a good amount of research showing that this simple instruction genuinely improves accuracy on reasoning tasks — particularly math problems, logical puzzles, and multi-step decisions. The model is essentially doing its thinking out loud, which not only produces better answers but also lets you catch errors in the reasoning chain.
Example:
A company’s revenue grew by 12% in Q1, then declined by 8% in Q2, then grew by 5% in Q3. If their starting revenue was $2.4M, what was their revenue at the end of Q3? Walk me through the calculation step by step.
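For reference, the arithmetic the model should walk through looks like this; running it yourself is a quick way to verify the reasoning chain it gives back:

```python
revenue = 2_400_000        # starting revenue: $2.4M
revenue *= 1.12            # Q1: grew 12%     -> $2,688,000
revenue *= 0.92            # Q2: declined 8%  -> $2,472,960
revenue *= 1.05            # Q3: grew 5%      -> $2,596,608
print(f"${revenue:,.0f}")  # roughly $2.6M
```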
Chain-of-thought is particularly powerful when combined with other frameworks. You might use CO-STAR to set up the context and then add “reason through this step by step” to get the model to show its work.
9. Few-Shot Prompting
Another technique rather than a named acronym framework, few-shot prompting involves giving the AI examples of what you want before asking it to produce something new. Instead of just describing the task, you show it.
“Few-shot” refers to the fact that you’re providing a small number of examples, usually between one and five. Zero-shot prompting is when you give no examples at all. Up to a point, the more examples you give, the better the model can calibrate to your specific style, format, or requirements.
Example:
I need you to write subject lines for promotional emails in the style of the examples below.
Example 1: “You’re leaving money on the table (here’s how to get it back)”
Example 2: “The 15-minute habit that changed how we do Mondays”
Example 3: “We almost didn’t send this one”
Now write five subject lines for an email announcing a summer sale on kitchen appliances.
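If you assemble few-shot prompts programmatically, the pattern is simply instruction first, examples next, new task last. A minimal sketch (the helper name is mine, not a standard API):

```python
def few_shot_prompt(instruction: str, examples: list[str], task: str) -> str:
    """Build a few-shot prompt: instruction, then examples, then the new task."""
    lines = [instruction]
    for i, example in enumerate(examples, start=1):
        lines.append(f'Example {i}: "{example}"')
    lines.append(task)
    return "\n".join(lines)

prompt = few_shot_prompt(
    instruction="Write subject lines for promotional emails in the style of the examples below.",
    examples=[
        "You're leaving money on the table (here's how to get it back)",
        "The 15-minute habit that changed how we do Mondays",
        "We almost didn't send this one",
    ],
    task="Now write five subject lines for an email announcing a summer sale on kitchen appliances.",
)
```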
Few-shot prompting is incredibly effective when you have a strong sense of the style or format you want but struggle to articulate it in words. Showing is often clearer than telling — for humans and for AI models.
10. The PEAS Framework — Purpose, Expected Output, Any Constraints, Specifics
PEAS is a practical, no-frills framework popular in enterprise settings where prompts often need to be documented, shared, and reused by different people.
Purpose — What problem is this prompt solving?
Expected Output — What should the response look like?
Any Constraints — What limitations apply?
Specifics — What details are essential for getting the right answer?
The emphasis on documenting constraints and specifics makes PEAS particularly useful in team environments where prompts get passed around. When someone else needs to use or modify your prompt, having all of this spelled out explicitly means they don’t have to reverse-engineer your intent.
Example:
Purpose — to generate a weekly status update for a software project team. Expected Output — a brief email (150-200 words) covering what was completed, what’s in progress, and any blockers. Any Constraints — must be jargon-free for non-technical stakeholders; don’t mention individual team members by name; keep a positive, forward-looking tone even when discussing problems. Specifics — project is a customer portal redesign; the team completed login redesign this week; currently working on dashboard; a third-party API integration is behind schedule.
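Because PEAS prompts are meant to be documented, shared, and reused, it can help to store them as structured records rather than loose strings. One way to sketch that in Python (the class and method names are my own, not part of PEAS):

```python
from dataclasses import dataclass

@dataclass
class PeasPrompt:
    """A documented, reusable prompt following the PEAS framework."""
    purpose: str
    expected_output: str
    constraints: str
    specifics: str

    def render(self) -> str:
        """Flatten the four components into a single prompt string."""
        return (
            f"Purpose: {self.purpose}\n"
            f"Expected output: {self.expected_output}\n"
            f"Constraints: {self.constraints}\n"
            f"Specifics: {self.specifics}"
        )

weekly_update = PeasPrompt(
    purpose="Generate a weekly status update for a software project team.",
    expected_output="A brief email (150-200 words) covering completed work, work in progress, and blockers.",
    constraints="Jargon-free for non-technical stakeholders; no individual names; positive, forward-looking tone.",
    specifics="Customer portal redesign; login redesign completed; dashboard in progress; third-party API integration behind schedule.",
)
```

Anyone on the team can then read, review, or tweak a single field without reverse-engineering the whole prompt.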
Mixing and Matching
Here’s something most guides don’t tell you: the best prompt writers don’t follow any single framework religiously. They understand several of them well enough to draw on elements from each depending on the situation. CRISPE’s “Experiment” component pairs beautifully with CO-STAR’s audience specificity. Chain-of-thought can be tacked onto the end of virtually any framework. Few-shot examples can be embedded inside an RTF or RACE prompt.
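To make the composition concrete, here’s a toy sketch: take a prompt built with any framework and bolt a chain-of-thought instruction onto the end (the helper is illustrative, not a standard tool):

```python
def with_chain_of_thought(prompt: str) -> str:
    """Append a chain-of-thought instruction to a prompt built with any framework."""
    return prompt + "\n\nReason through this step by step before giving your final answer."

prompt = with_chain_of_thought(
    "You are a financial analyst. Evaluate whether a 10% price increase makes "
    "sense for our subscription product. Present your conclusion as a short memo."
)
```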
Think of these frameworks as tools in a toolbox, not commandments. Once you’ve internalized the principles — clarity about role and audience, specificity about the task, explicit constraints, showing versus telling — you’ll find yourself writing better prompts naturally, even without consciously running through an acronym.
Common Mistakes That No Framework Will Save You From
Even with the best framework, there are a few mistakes that consistently derail prompts. It’s worth naming them directly.
The first is vagueness disguised as concision. “Make it better” isn’t a prompt — it’s a prayer. Better how? Clearer? More persuasive? More concise? More formal? The framework helps here, but you still have to fill it in.
The second is forgetting that the AI doesn’t know your history. Every conversation generally starts fresh. The AI doesn’t know that you’ve been working on this project for six months, that your audience is particularly skeptical, or that your boss has a thing about Oxford commas. You have to tell it.
The third is accepting the first output as final. The frameworks get you a good first draft, not a finished product. Iteration is where the real value is. Ask for revisions, push back on things you don’t like, ask the model to try a different approach. The conversation is the work.
The fourth is over-engineering. Sometimes the best prompt really is just “explain this to me like I’m twelve” or “make this shorter.” Not every request needs a CRISPE framework with chain-of-thought and five examples. Match the complexity of your prompt to the complexity of the task.
Choosing the Right Framework
Here’s a quick cheat sheet for when to reach for which:
If you’re in a hurry and need something simple, start with RTF or APE. They’re fast, intuitive, and cover the essentials.
If you’re producing content where tone and audience matter, CO-STAR or CRISPE will serve you well.
If you’re working on something process-heavy — a research report, a structured plan, a technical document — RISEN gives you the scaffolding to think through the steps.
If you’re telling a story or writing a case study, STAR is your friend.
If you’re working in a team environment where prompts need to be documented and reused, PEAS gives you the structure for that.
And if you’re asking the AI to reason through something complex — a decision, a calculation, a logical problem — add chain-of-thought to whatever else you’re doing.
A Note on the Future
These frameworks were developed for today’s AI models, which need explicit guidance and context to perform at their best. As models get better at inference and context-tracking, some of the more verbose frameworks may feel increasingly unnecessary. We might find ourselves writing shorter, more natural prompts and getting equally good results.
But for now, the frameworks are genuinely useful — not just as technical tools, but as a way of thinking more clearly about what you want and how to communicate it. That kind of clarity is valuable regardless of whether you’re talking to an AI or a human colleague.
And honestly, the more you practice writing good prompts, the more you realize it’s really just writing well. Being clear, specific, contextual, and purposeful. The AI is just a new audience — one that’s very good at following detailed instructions and very patient with people who are still learning.
The best time to start experimenting with these frameworks was the first time you used an AI tool. The second best time is now.