What AI Can and Cannot Do.
The hype goes both ways: either AI will replace everyone, or it is useless. Neither is true. Here is the realistic picture.
After this lesson you'll know:
- The three categories of AI capability: does well, can help, cannot do
- Which business tasks are safe to hand to AI and which need human oversight
- Why AI fails at certain things — not as a bug, but by design
- How to set accurate expectations with your team and stakeholders
The reality check.
Most business owners come to AI from one of two places: they heard it can do everything, or they tried it once and it failed them. Both experiences are real — because the truth is that AI is genuinely excellent at a narrow set of things, decent at a much wider set with the right guardrails, and genuinely bad at a few things it should never be trusted to do alone.
The framework in this lesson — three clear categories — is the single most important mental model you will learn in this course. Every decision you make about AI in your business comes back to one question: which category does this task fall into? Get that answer right, and you will deploy AI confidently. Get it wrong, and you will either waste time over-cautiously avoiding AI where it excels, or get burned by trusting it where it should not be trusted.
Here is how to think about the three categories:
AI Does Well: Tasks that involve processing language, finding patterns in large amounts of text, generating drafts, summarizing documents, or answering questions based on provided information. Examples: drafting marketing emails, summarizing a 50-page report, writing first drafts of job descriptions, answering customer FAQ questions, translating content, writing code.
AI Can Help (with oversight): Tasks where AI provides significant leverage but humans must review the output before acting on it. Examples: market research (AI can compile and summarize, but you verify the data), financial analysis (AI can spot patterns, but an accountant reviews conclusions), legal document review (AI flags clauses, but a lawyer makes the call), hiring assessments (AI scores resumes, but humans make final decisions).
AI Cannot Do: Tasks that require real-world judgment, emotional intelligence, lived experience, accountability, or current information it does not have access to. Examples: building genuine client relationships, making final strategic decisions, physical tasks, anything requiring real-time data it was not given, crisis management involving human emotions, and anything where being wrong has serious legal or financial consequences without human review.
The mistake most businesses make is either keeping AI out entirely, or handing it tasks it cannot handle. The skill is knowing which category each task falls into.
AI capabilities by business function.
The three categories look different depending on which department you are looking at. Here is a department-by-department breakdown so you can have specific conversations with each team leader about what AI can and cannot do in their area.
Marketing. AI Does Well: writing first drafts of blog posts, social media captions, email subject lines, ad copy variations, and SEO meta descriptions. AI Can Help: brand strategy research, competitive analysis, campaign performance interpretation, audience segmentation. AI Cannot Do: develop your brand voice from scratch, build genuine community relationships, make judgment calls about sensitive messaging, or guarantee that content will resonate emotionally with your specific audience.
Sales. AI Does Well: drafting outreach emails, personalizing follow-ups based on CRM data, summarizing call transcripts, generating proposal templates, and scoring leads based on historical patterns. AI Can Help: identifying upsell opportunities, forecasting pipeline, analyzing win/loss patterns, preparing for discovery calls. AI Cannot Do: close a deal, build trust with a prospect, navigate complex negotiations, or handle objections that require emotional intelligence and relationship history.
Operations. AI Does Well: automating data entry, formatting reports, scheduling recurring tasks, routing support tickets, and monitoring system alerts. AI Can Help: optimizing workflows, predicting inventory needs, identifying bottlenecks, planning resource allocation. AI Cannot Do: manage a team, handle a supplier crisis that requires human judgment, make trade-off decisions between competing priorities, or resolve conflicts between departments.
Finance. AI Does Well: categorizing expenses, reconciling transactions, formatting financial reports, generating invoice reminders, and converting data between formats. AI Can Help: spotting anomalies in spending patterns, forecasting cash flow, summarizing financial statements, and preparing audit documentation. AI Cannot Do: make investment decisions, sign off on financial statements, navigate complex tax situations that require professional judgment, or replace the accountability that a qualified accountant provides.
Human Resources. AI Does Well: writing job descriptions, screening resumes for basic qualifications, scheduling interviews, drafting onboarding materials, and summarizing employee feedback surveys. AI Can Help: identifying retention risk factors, analyzing compensation benchmarks, generating training content, and tracking compliance requirements. AI Cannot Do: make hiring decisions (bias risk is real and documented), handle sensitive employee conversations, navigate terminations, or replace the human judgment required in conflict resolution and workplace culture decisions.
Customer Service. AI Does Well: answering FAQ questions, routing tickets to the right department, drafting response templates, and handling status inquiries. AI Can Help: identifying trending issues, escalating complex cases with context summaries, suggesting responses for agents to edit, and analyzing satisfaction trends. AI Cannot Do: handle an emotionally charged complaint with genuine empathy, make exceptions to policy that require judgment, or rebuild trust after a serious service failure.
Understanding the limits is the real skill.
AI fails at certain things not because the technology is broken, but because of how it works at a fundamental level. Understanding why it fails helps you predict where it will fail in your business — before it embarrasses you or costs you money.
It has no real-world experience. AI learned from text, not from living. It knows what a customer complaint looks like in writing, but it has never felt the frustration of a customer or the pressure of managing an angry caller. This is why AI-written apology emails often sound polished but hollow — they lack the genuine understanding that comes from having been on both sides.
It cannot verify its own output. When AI writes "your company was founded in 1987," it has no way to check whether that is true. It is pattern-matching, not fact-checking. This is why hallucinations happen — the AI produces text that sounds right because the pattern fits, even when the facts are wrong. Verification must always be a human step.
It does not understand consequences. AI does not know that sending an email with the wrong price could cost you a $50,000 contract. It does not understand that a poorly worded social media post could become a PR crisis. Every output is equally weightless to the AI. The stakes are invisible to it. That is your job — to know which outputs carry risk and build review steps accordingly.
It reflects its training data. If the data it learned from contains biases — and it does, because all human-generated data contains biases — the AI will reproduce those biases. This is documented in hiring tools that penalize resumes from certain schools, content tools that default to male pronouns, and analytics tools that over-index on majority populations. Awareness of this limitation is the first defense.
How much human review does each task need?
Not all AI tasks need the same level of oversight. The spectrum runs from use-as-is (output goes straight to use), through light scan (a quick read for obvious errors), to line-by-line review (every fact, figure, and name checked). Knowing the right level for each task prevents two mistakes: over-reviewing, which wastes the time AI saved, and under-reviewing, which lets errors through.
Map every task you give to AI onto this spectrum. When you are unsure, default one level higher than you think necessary. Over-reviewing is a time cost. Under-reviewing is a reputation cost. Reputation costs more.
A practical tip: create a one-page cheat sheet that lists your most common AI tasks with their oversight level. Post it where your team can see it. When someone is unsure whether to review an AI output, they check the cheat sheet instead of guessing. This simple artifact prevents most oversight mistakes and takes about 15 minutes to create.
Update the cheat sheet monthly as your team builds confidence and as you learn which tasks produce reliable output and which need more scrutiny. Over time, some tasks may move down a level (less review needed) as your prompts improve and your team's editing skills sharpen. Some tasks may move up a level if you discover that AI output in that area is less reliable than you initially assumed. The spectrum is a living document, not a fixed assignment.
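The cheat sheet described above can even live in code if your team runs internal tooling. A minimal sketch, assuming hypothetical task names and level assignments (yours will differ), with unknown tasks defaulting one level stricter, per the advice above:

```python
# Illustrative oversight cheat sheet as a lookup table.
# Task names and their assigned levels are examples, not prescriptions.

# The spectrum, ordered from least to most review.
OVERSIGHT_LEVELS = ["use_as_is", "light_scan", "line_by_line"]

CHEAT_SHEET = {
    "format internal report": "use_as_is",
    "draft marketing email": "light_scan",
    "summarize meeting notes": "light_scan",
    "client proposal": "line_by_line",
    "financial summary": "line_by_line",
}

def required_review(task: str) -> str:
    """Return the review level for a task.

    Unknown tasks default to the strictest level, mirroring the
    rule of thumb: when unsure, review more, not less.
    """
    return CHEAT_SHEET.get(task.lower().strip(), "line_by_line")
```

Updating the cheat sheet monthly then becomes a one-line change per task, and the change history doubles as a record of which tasks earned your trust.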
How to talk to your team about AI capabilities.
The way you introduce AI to your team sets the tone for adoption. If you oversell it, people will be disappointed when it makes mistakes. If you undersell it, people will not bother trying. Here are four things to say in your first team conversation about AI — and four things to never say.
Say: "AI is a first-draft machine." This sets the right expectation — AI produces starting points, not finished products. Your team's job shifts from creating from scratch to editing and refining. That shift alone can save 40-60% of the time on most writing tasks.
Say: "Always verify facts, numbers, and names." This makes verification a habit from day one. Teams that learn this early avoid the embarrassment of publishing hallucinated statistics or citing nonexistent sources.
Say: "Tell me when it does not work." This creates a feedback loop. If people feel comfortable reporting AI failures, you can fix prompts, switch tools, and improve processes. If they stay silent, failures compound.
Say: "Your job is safe. Your tasks are changing." This addresses the biggest fear directly. AI replaces tasks, not people. The people who learn to use AI become more valuable, not less. Frame AI as a career upgrade, not a career threat.
Never say: "AI can do everything." It cannot. Saying this sets your team up for a trust crash when AI inevitably fails at something. Be specific about what it does well and where human judgment is required.
Never say: "Just use AI for everything from now on." This skips the critical step of identifying which tasks are appropriate. It also signals that you have not done the work to understand AI's limitations in your specific context.
Never say: "If the AI wrote it, it must be right." AI output is probabilistic, not factual. Treating AI output as truth is how hallucinated case citations end up in court briefs. Every team member needs to understand that AI confidence and AI accuracy are two different things.
Never say: "We do not need to review AI output." You do. The review level varies by task (see the oversight spectrum above), but skipping review entirely is how quality drops, errors accumulate, and client trust erodes.
The way you frame AI to your team in the first conversation shapes their relationship with it for months. Take 30 minutes to prepare what you will say. Use the four "say" statements above as your outline. Add examples from your own business. Address concerns directly. This is not a memo — it is a conversation. And the quality of that conversation determines whether your team embraces AI as a tool or resists it as a threat.
One sentence to remember.
If you take nothing else from this lesson, take this: AI is not a replacement for human judgment — it is a multiplier of human capability. The businesses that win with AI are not the ones that hand everything to the machine. They are the ones that know exactly which tasks to hand over and which to keep. That knowledge — the three-category framework — is the foundation everything else in this course builds on.
Before you move to the next lesson, do one thing: write down three tasks you do this week and categorize each one. One "AI Does Well" task. One "AI Can Help" task. One "AI Cannot Do" task. This takes 60 seconds and turns an abstract framework into a personal tool.
When you start using AI in Lesson 7, these three tasks become your starting point — you will already know which one to automate first, which one to use AI as an assistant for, and which one to keep fully human. That 60-second exercise is the bridge between this lesson's theory and Lesson 7's practice. Do it now.
Categorize your own business tasks.
Use this prompt to sort your real tasks into the three AI capability categories. It gives you a personalized action plan for what to automate first.
I run a [type of business] with [number] employees. Here are 10 tasks my team spends the most time on each week:
1. [task]
2. [task]
3. [task]
4. [task]
5. [task]
6. [task]
7. [task]
8. [task]
9. [task]
10. [task]
For each task, categorize it as:
- AI DOES WELL (hand it to AI with light review)
- AI CAN HELP (AI drafts, human reviews and decides)
- AI CANNOT DO (keep fully human)
For each, explain WHY it falls in that category and what specific AI tool or approach would work best. Then rank the "AI Does Well" tasks by estimated hours saved per week.
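If you run this exercise regularly, you can fill the template programmatically instead of by hand. A small sketch that renders the prompt above from your real task list, ready to paste into any AI chat tool — the condensed template text here is an assumption standing in for the full prompt, and nothing in it calls an API:

```python
# Fill the categorization prompt from a business description and task list.
# The TEMPLATE is a condensed stand-in for the full prompt in the lesson.

TEMPLATE = (
    "I run a {business} with {employees} employees. "
    "Here are {n} tasks my team spends the most time on each week:\n"
    "{tasks}\n"
    "For each task, categorize it as AI DOES WELL, AI CAN HELP, "
    "or AI CANNOT DO. Explain WHY it falls in that category and what "
    "specific AI tool or approach would work best. Then rank the "
    "AI DOES WELL tasks by estimated hours saved per week."
)

def build_prompt(business: str, employees: int, tasks: list[str]) -> str:
    """Render the categorization prompt with a numbered task list."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, start=1))
    return TEMPLATE.format(
        business=business, employees=employees, n=len(tasks), tasks=numbered
    )
```

The point of the sketch is repeatability: when your task list changes next quarter, you regenerate the prompt in seconds instead of retyping it.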
Test your judgment.
Five real scenarios. For each one, decide what role AI should play. Each question has one best answer under the framework you just learned, but none is obvious at first glance — real business decisions rarely are. Pay attention to the explanation after each question; it reveals the reasoning pattern you should apply to every new task you consider handing to AI.
The three-category framework gives you a structured way to think through the gray areas — but the final judgment is always yours. That is the point: AI is a tool, and the person using the tool needs to understand its limits.