All Units
Unit 7intermediate
50 min

AI Risks and Responsible Use

Understand the limitations and risks of AI so you can use it responsibly.

Key lesson

Use AI to accelerate thinking, not to replace judgment.

Start Learning
Learning Objectives
  • Identify hallucinations, bias, privacy risk, copyright risk, deepfakes, prompt injection, and model drift.
  • Decide when human review and source checking are required.
  • Protect confidential data when using AI tools.
  • Separate low-risk productivity use from high-stakes decisions.
  • Create responsible AI guardrails for teams and vendors.
Unit Content

AI risk is workflow risk

AI risks show up when people trust output too quickly, paste sensitive data into the wrong tool, automate decisions without review, or publish unsupported claims.

Responsible use starts by asking what could go wrong if the output is false, biased, private, copyrighted, manipulated, outdated, or used outside its intended context.

Risk filter

The higher the consequence of a wrong answer, the more source checking, human review, and process control you need.

Hallucinations and source checking

A hallucination is a confident-sounding false answer. It can include fake citations, invented policies, incorrect calculations, made-up product details, or plausible but wrong summaries.

Source checking means verifying important claims against reliable sources. For internal work, that may be company documents. For public claims, it may require primary sources, current regulations, or expert review.

Bias and unfair outcomes

Bias can come from training data, prompts, missing context, measurement choices, or how people use the output. AI can reproduce unfair patterns even when nobody intended harm.

Be especially careful with hiring, lending, housing, healthcare, education, insurance, pricing, discipline, and other decisions that affect people materially.

Privacy and confidential data

Confidential data includes customer information, employee records, contracts, financials, trade secrets, credentials, unreleased strategy, and private messages.

Before using an AI tool, ask whether inputs are stored, used for training, shared with subprocessors, available to admins, retained after deletion, or governed by an enterprise privacy setting.

Copyright, deepfakes, and public content

AI can produce content that resembles existing work or includes unsupported claims. Risk is higher when asking for output in the style of a living creator, using copyrighted source material, or publishing generated material without review.

Deepfakes and synthetic media can damage trust. Public-facing AI content needs review for accuracy, consent, disclosure, and brand fit.

Prompt injection and malicious inputs

Prompt injection happens when untrusted text tries to override instructions or manipulate an AI system. This matters when AI reads emails, web pages, documents, support tickets, or user-submitted content.

Do not let AI blindly follow instructions found inside external content. Systems need boundaries around tools, data access, and actions the model is allowed to take.

Model drift and changing performance

Model drift means performance changes over time because the model, data, user behavior, or business context changes. A prompt that worked last quarter may produce weaker results after product, policy, or model updates.

Track important AI workflows with examples, review samples, and escalation paths. Treat AI quality as something to monitor, not something to set once and forget.

Plain-English version

AI risk is what happens when a tool sounds more certain than it really is. It may write smoothly, but smooth writing is not the same as truth, permission, fairness, or good judgment.

The big rule is simple: the more serious the consequence, the more review you need. A brainstorm can be loose. A contract, medical claim, hiring decision, or public statement cannot.

A normal business example

A team asks AI to summarize customer complaints. That can be useful. But if the team pastes names, emails, account numbers, and private messages into an unapproved tool, the workflow creates a privacy problem while trying to solve an operations problem.

Another team asks AI for legal language and sends it to customers without review. The output sounds official, but it may be wrong. Professional wording is not the same as legal review.

Set guardrails before trouble

Decide what data can go into AI tools, what data cannot, which tools are approved, which outputs need review, and which use cases are off limits.

For higher-risk work, keep examples of good and bad outputs, require source checking, and create an escalation path. A simple "ask a human before sending" rule can prevent many avoidable problems.

Your meeting cheat sheet

Ask: Could this output harm a person, customer, employee, or legal position? Is confidential data involved? Are sources checked? Could bias matter? Is the AI allowed to take action, or only suggest?

Responsible AI does not mean fear of AI. It means using it with review, boundaries, and clear accountability.

Practice Scenario

Responsible AI guardrails

A company wants employees to use AI for emails, research, customer replies, and internal document summaries.

  • Create low, medium, and high consequence categories for these use cases.
  • Name which data is prohibited, which outputs need source checking, and which outputs need manager review.
  • Write a short escalation rule for uncertain, sensitive, or customer-facing AI output.
Key Takeaways
  • 1AI risk depends on consequence, not just tool category.
  • 2Hallucinations require source checking for important claims.
  • 3Confidential data needs explicit approval before it goes into AI tools.
  • 4Bias, copyright, prompt injection, and model drift are operational risks.
  • 5Human review is mandatory when AI output affects people, money, contracts, safety, or public trust.