Unit 6 · Intermediate
50 min

Prompt Training

Learn how to write effective AI prompts that get useful, consistent results.

Key lesson

A good prompt includes role, context, task, constraints, and format. Vague prompts produce vague results.

Learning Objectives
  • Build prompts with role, context, task, constraints, examples, and output format.
  • Use context windows and tokens wisely instead of pasting unfocused information.
  • Iterate on prompts by diagnosing what went wrong in the output.
  • Create reusable prompt patterns for common business workflows.
  • Evaluate AI responses with clear criteria and responsible review.
Unit Content

A prompt is a work brief

A prompt is the instruction you give an AI model. A good prompt works like a clear brief to a capable assistant: it explains the role, situation, task, constraints, examples, and desired output.

Prompt training is not about memorizing magic phrases. It is about communicating the work clearly enough that the model can produce something useful, then improving the instruction when the first result misses.

Prompt formula

Use role, context, task, constraints, examples, and output format. Then evaluate the result and refine the prompt.
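The formula above can be sketched as a small prompt builder. This is a minimal illustration, not any particular tool's API; the `build_prompt` helper and its field names are assumptions made for the example.

```python
# A minimal sketch of the role/context/task/constraints/examples/format formula.
# The build_prompt helper and its section labels are illustrative, not a real API.

def build_prompt(role, context, task, constraints, examples, output_format):
    """Assemble a labeled prompt from the six parts of the formula."""
    sections = [
        ("Role", role),
        ("Context", context),
        ("Task", task),
        ("Constraints", constraints),
        ("Examples", examples),
        ("Output format", output_format),
    ]
    # Skip empty parts so shorter prompts stay clean.
    return "\n\n".join(f"{label}: {text}" for label, text in sections if text)

prompt = build_prompt(
    role="Act as an operations manager.",
    context="A client's website launch slipped by three days.",
    task="Draft a short apology email with the new launch date.",
    constraints="Under 120 words; professional but warm tone.",
    examples="",
    output_format="A plain-text email with a subject line.",
)
```

Labeling each part explicitly makes it easy to see which ingredient is missing when a result disappoints.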

Role and task

The role tells the model which perspective to use. "Act as a technical editor for non-technical founders" gives a different result than "Act as a senior backend engineer."

The task tells the model what to do. Strong tasks use active verbs: summarize, compare, rewrite, extract, classify, critique, draft, generate options, or identify risks.

Avoid asking for everything at once. If the work has several stages, ask for one stage at a time or explicitly define the stages.

Context and source material

Context is the background the model needs to do the work. Useful context includes audience, goal, constraints, current draft, source notes, brand voice, business model, and how the output will be used.

More context is not always better. Irrelevant context can distract the model. Put the most important information near the task, label source material clearly, and remove details that do not affect the answer.

Context windows are finite. For long documents, ask the model to work section by section or extract the parts that matter before producing a final answer.
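Working section by section can be as simple as splitting the document on blank lines before prompting. The chunk size and splitting rule below are assumptions for illustration; real limits depend on the model's tokenizer, not character counts.

```python
# A sketch of splitting a long document into sections before prompting.
# The max_chars limit and blank-line separator are illustrative assumptions;
# real context limits are measured in tokens, not characters.

def split_into_sections(text, max_chars=2000):
    """Pack paragraphs (separated by blank lines) into chunks under max_chars."""
    sections, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            sections.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        sections.append(current)
    return sections
```

Each chunk can then be summarized on its own, with a final prompt that combines the partial summaries.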

Constraints and output format

Constraints define boundaries: length, tone, reading level, must-include points, must-avoid claims, assumptions, audience, source limits, or compliance needs.

Output format tells the model how to present the answer: bullets, table, checklist, email, structured summary, decision memo, outline, or step-by-step plan.

When output will move into a workflow, format matters. A clean table, labeled checklist, or structured summary may save more time than a polished paragraph.

Examples and few-shot prompting

Examples show the model what good looks like. Few-shot prompting means providing one or more examples of input and desired output before asking for the real answer.

Examples are especially useful for tone, classification, extraction, and repeated internal workflows. If you want consistent sales email feedback, show the model a good critique and a bad critique.

Do not include confidential examples unless the tool and data policy allow it. Replace names, customer details, and sensitive numbers when possible.
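One common way to lay out a few-shot prompt is labeled input/output pairs followed by the real input. The categories and example emails below are invented for illustration:

```python
# A sketch of a few-shot prompt for classifying support emails.
# The categories and example messages are invented for illustration.
examples = [
    ("My invoice shows the wrong amount.", "billing"),
    ("The dashboard will not load this morning.", "technical"),
]

def few_shot_prompt(examples, new_message):
    """Show labeled input/output pairs, then ask for the real answer."""
    lines = ["Classify each support email as 'billing' or 'technical'.", ""]
    for message, label in examples:
        lines.append(f"Email: {message}")
        lines.append(f"Category: {label}")
        lines.append("")  # blank line between examples
    lines.append(f"Email: {new_message}")
    lines.append("Category:")  # ends open so the model fills in the label
    return "\n".join(lines)

prompt = few_shot_prompt(examples, "I was charged twice this month.")
```

Ending the prompt at "Category:" nudges the model to answer in the same one-word format the examples established.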

Iteration and diagnosis

Prompting is usually iterative. If the answer is too generic, add audience and source detail. If it is too long, add a length constraint. If it invents details, require source-grounded answers and ask it to state uncertainty.

If tone is wrong, provide a before-and-after example. If structure is wrong, specify headings or a table. If analysis is shallow, ask for tradeoffs, risks, assumptions, and decision criteria.

Reusable patterns and responsible review

Useful patterns include summarizing a document for an executive, turning meeting notes into action items, reviewing a vendor proposal, drafting customer replies, comparing software options, and converting rough ideas into a project brief.

Every important prompt needs evaluation criteria. Decide whether the answer should be accurate, complete, concise, on-brand, source-grounded, legally cautious, or operationally specific.

AI output should be reviewed before it affects customers, contracts, money, health, safety, legal obligations, confidential data, or public claims.

Plain-English version

A prompt is not a spell. It is a work request. If the request is fuzzy, the answer will usually be fuzzy too. AI is not being rude; it is working with what you gave it.

Good prompting means saying who the AI should act as, what situation it is in, what job to do, what rules to follow, and what the answer should look like.

A normal business example

Weak prompt: "Write an email about the delay." Better prompt: "Act as an operations manager. Write a short, calm email to a client explaining that their website launch is delayed by three days because final content arrived late. Apologize, give the new date, and keep the tone professional but warm."

The better prompt is not longer for fun. It gives role, audience, reason, format, tone, and facts. The AI can now aim at a clear target instead of tossing words into the air.

How to fix weak output

If the answer is generic, add audience, goal, and context. If it is too long, set a limit. If it sounds too formal, give a tone example. If it invents facts, require it to use only the provided source text and say when it is unsure.

When a prompt fails, do not just think "AI is bad." Ask what instruction was missing. Prompting is often less like ordering coffee and more like editing a first draft.

Your meeting cheat sheet

Ask: What is the role? What context matters? What is the task? What should the output look like? What constraints apply? What would make the answer unsafe or unusable? Who reviews it?

For team use, turn good prompts into templates with placeholders. That keeps repeated work more consistent across the team.
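Placeholders can be as simple as Python's standard `string.Template`. The template below reuses the delayed-launch email from earlier; the placeholder names are assumptions for the example.

```python
from string import Template

# A reusable team prompt template; the placeholder names are illustrative.
delay_email = Template(
    "Act as an operations manager. Write a short, calm email to $client "
    "explaining that $deliverable is delayed by $delay because $reason. "
    "Apologize, give the new date of $new_date, and keep the tone "
    "professional but warm."
)

# Fill the placeholders for one specific client and incident.
prompt = delay_email.substitute(
    client="Acme Co.",
    deliverable="their website launch",
    delay="three days",
    reason="final content arrived late",
    new_date="June 12",
)
```

`substitute` raises an error if a placeholder is left unfilled, which catches incomplete prompts before they reach the model.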

Practice Scenario

Prompt improvement workshop

A teammate writes "make this better" and gets a generic answer from an AI assistant.

  • Rewrite the prompt with role, context, task, constraints, examples, and output format.
  • Define evaluation criteria for deciding whether the AI output is usable.
  • Create one reusable prompt template with placeholders for the same type of work.
Key Takeaways
  • Good prompts are clear work briefs, not secret commands.
  • Role, context, task, constraints, examples, and output format make responses more useful.
  • Prompt iteration should diagnose the specific failure in the output.
  • Reusable prompts help teams standardize repeated AI workflows.
  • Important AI outputs still need human review and clear evaluation criteria.