Find Your AI Opportunities.

Most businesses start AI in the wrong place. This lesson teaches you how to score every area of your business so you start where the impact is biggest.

After this lesson you'll know:

  • The 3-factor scoring method for identifying high-value AI opportunities
  • Which business areas consistently score highest across industries
  • How to match business areas to specific AI tool types
  • The exact order of steps to identify and implement your first AI win

The opportunity formula.

The most common mistake businesses make with AI is starting with the shiniest tool instead of the biggest problem. Someone sees a demo of an AI video editor and buys it — but their actual bottleneck is a sales team drowning in manual follow-up emails. The tool does not match the pain.

This lesson gives you a scoring system that removes subjectivity from the decision. Instead of arguing about which department "deserves" AI first, you run the numbers and let the score decide. The highest score is your starting point — not the loudest voice, not the most exciting technology, not the CEO's pet project. The numbers know where the impact is.

The 3-factor scoring method fixes this. For every area of your business, you score it on three dimensions. Then you combine the scores with a weighted formula and sort. Start at the top.

Factor 1: Time Spent (1-5). How many hours per week does your team spend on this area? Score 1 if it is occasional, 5 if it consumes multiple people's full weeks. Time spent is your multiplier — even modest automation of a high-time-spend area generates big returns.

Factor 2: Repetitiveness (1-5). Are the tasks in this area similar or variable? Score 1 if every task is unique and requires fresh judgment, 5 if the same steps are repeated dozens of times per day. AI thrives on repetition. Novel situations still require humans.

Factor 3: Pain Level (1-5). How much friction does this area create? Score 1 if it runs smoothly, 5 if it creates bottlenecks, causes errors, frustrates customers, or keeps your best people stuck doing low-value work.

Here is a worked example: Customer support at a growing e-commerce company. Time Spent: 4 (three people, full time). Repetitiveness: 5 (80% of tickets are "where is my order" or "how do I return this"). Pain Level: 4 (slow response times are hurting reviews). Weighted score = (Time x 0.3 + Repetitiveness x 0.4 + Pain x 0.3) x 20 = (4 x 0.3 + 5 x 0.4 + 4 x 0.3) x 20 = (1.2 + 2.0 + 1.2) x 20 = 88/100. This is a clear winner. Start here.

Here is a second worked example for contrast: Strategic planning. Time Spent: 2 (leadership team, quarterly). Repetitiveness: 1 (every strategic decision is unique). Pain Level: 2 (it is slow but working). Weighted score = (2 x 0.3 + 1 x 0.4 + 2 x 0.3) x 20 = (0.6 + 0.4 + 0.6) x 20 = 32/100. This is a low-priority area for AI. The score correctly identifies that strategic planning is low-repetition, low-time, and low-pain — AI would add marginal value here compared to customer support at 88.
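The weighted formula behind both examples can be sketched in a few lines of code. The 0.3 / 0.4 / 0.3 weights and the x20 scaling come from the lesson; the function and variable names are illustrative.

```python
def weighted_score(time_spent: int, repetitiveness: int, pain: int) -> int:
    """Each factor is rated 1-5; returns a score on a 20-100 scale."""
    for factor in (time_spent, repetitiveness, pain):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be scored 1-5")
    # Repetitiveness carries the heaviest weight (0.4) because it is
    # the best predictor of successful AI automation.
    return round((time_spent * 0.3 + repetitiveness * 0.4 + pain * 0.3) * 20)

# The two worked examples from the lesson:
print(weighted_score(4, 5, 4))  # customer support -> 88
print(weighted_score(2, 1, 2))  # strategic planning -> 32
```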

Why Repetitiveness gets the highest weight (0.4). AI thrives on patterns. The more repetitive a task is, the more accurately AI can handle it — because it is essentially doing the same thing with small variations. Time Spent and Pain Level matter, but a high-time, high-pain task that is completely unique every time (like crisis management) is still a poor AI candidate. Repetitiveness is the single best predictor of successful AI automation.

How to score accurately. The biggest risk in the scoring process is inflating scores to justify a tool you already want to buy. Be honest. Have two people score each area independently and compare. If their scores differ by more than 1 point on any factor, discuss until you agree. Accurate scoring prevents two problems: investing in low-impact areas because someone inflated the scores, and ignoring high-impact areas because someone scored them conservatively out of skepticism.

When you run this exercise for the first time, score at least 6 business areas. Fewer than 6 does not give you enough contrast to see the pattern. More than 10 creates analysis paralysis. The sweet spot is 6-8 areas scored in a single sitting, then ranked from highest to lowest. The ranking is your implementation roadmap.
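Turning 6-8 scored areas into a ranked roadmap is a one-line sort. The area names and scores below are illustrative placeholders, except customer support (88) and strategic planning (32), which come from the worked examples.

```python
# Hypothetical scoring session: 6 areas, scored in one sitting.
areas = {
    "Customer support": 88,
    "Strategic planning": 32,
    "Invoice processing": 76,
    "Sales follow-up": 64,
    "Content drafting": 58,
    "Hiring pipeline": 44,
}

# Rank highest to lowest: the ranking is the implementation roadmap.
roadmap = sorted(areas.items(), key=lambda item: item[1], reverse=True)
for rank, (area, score) in enumerate(roadmap, start=1):
    print(f"{rank}. {area}: {score}")
```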

Do this exercise quarterly. Scores change as your business evolves. An area that scored 45 six months ago may score 80 today because you hired new people, added new clients, or scaled a process that used to be manageable. Re-scoring quarterly keeps your AI implementation aligned with your actual business needs, not the needs you had when you first ran the exercise.

Include your team in the scoring. The person doing the work knows the time, repetitiveness, and pain better than anyone. Have department leads score their own areas, then compare scores across the organization. The cross-department comparison often reveals opportunities that no single person would have identified alone. Your operations manager might not realize that the marketing team's content pipeline has a higher pain score than her logistics workflow — but the scoring comparison makes it visible.

Document your scores. Keep a simple spreadsheet with all your scored business areas, the date you scored them, and the tool you deployed (if any). In 12 months, this document becomes your AI adoption history — showing where you started, what worked, and how your priorities evolved. It is also the best evidence you can show leadership that AI decisions were made systematically, not impulsively.

The scoring exercise is also a team alignment tool. When everyone on a leadership team independently scores the same business areas and then compares notes, you discover where perceptions differ. Your sales VP might rate "Sales Outreach" as a 95 while your CTO rates it a 60 — because they experience the pain differently. That conversation — surfacing and resolving disagreement — is often more valuable than the final score itself.

The 3-factor method does not just tell you where to start with AI. It surfaces the conversations your team needs to have about priorities, pain, and where time is actually being spent. In many organizations, this scoring exercise is the first time leadership has ever had a data-driven conversation about operational priorities. That alone makes it worth the 30 minutes.

The formula is not magic — it is a structured way to stop arguing about opinions and start comparing numbers. Run every major business area through it. The ranking tells you where to deploy your first dollar.

The full opportunity assessment worksheet.

The 3-factor score tells you where to start. But before you commit budget and time, you need a deeper assessment of each high-scoring area. This framework adds three more dimensions: risk level, implementation complexity, and expected payback period. Together, the six factors give you a complete picture.

Opportunity Assessment Template
Business Area: [Name the area]
3-Factor Score: [Calculate using Time x 0.3 + Repetitiveness x 0.4 + Pain x 0.3, multiplied by 20]
Risk Level (1-5): What happens if AI makes a mistake here? Score 1 if errors are caught easily with no cost. Score 5 if errors could cause legal exposure, client loss, or financial damage.
Implementation Complexity (1-5): How hard is it to set up? Score 1 for plug-and-play SaaS tools. Score 5 for custom builds requiring integration with multiple systems.
Expected Payback (weeks): How quickly will the investment return its cost? Use the ROI formula from Lesson 3.
Priority = (3-Factor Score) minus (Risk x 5) minus (Complexity x 5). Higher is better. Negative scores mean the area is not ready for AI yet.
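The priority formula from the template, as a small sketch. The formula is the lesson's; the function name and the risk/complexity values in the usage lines are illustrative assumptions.

```python
def priority(three_factor_score: int, risk: int, complexity: int) -> int:
    """Priority = 3-factor score - (risk x 5) - (complexity x 5).

    risk and complexity are 1-5 ratings. A negative result means
    the area is not ready for AI yet.
    """
    return three_factor_score - risk * 5 - complexity * 5

# Customer support (score 88), assuming risk 2 and complexity 2:
print(priority(88, 2, 2))  # -> 68
# A score-32 area with assumed risk 3 and complexity 4:
print(priority(32, 3, 4))  # -> -3, not ready for AI yet
```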

Plot your opportunities on a 2x2 grid.

After scoring all your business areas, plot them on a simple grid. The horizontal axis is the 3-factor opportunity score (low left, high right). The vertical axis is implementation ease (hard bottom, easy top). This gives you four quadrants that tell you exactly what to do with each opportunity.

Quick Wins (Top-Right)
High opportunity, easy to implement. Do these first. Examples: email automation, FAQ chatbots, content drafting. ROI shows up in the first week.
Strategic Bets (Bottom-Right)
High opportunity, hard to implement. Worth the effort but plan carefully. Examples: custom analytics, workflow automation across systems. Budget 30-90 days.
Low-Hanging Fruit (Top-Left)
Low opportunity, easy to implement. Do these when you have spare capacity. Small wins that build team confidence. Examples: meeting transcription, template generation.
Avoid (Bottom-Left)
Low opportunity, hard to implement. Skip these entirely. The effort outweighs the return. Revisit in 6-12 months when tools improve or the business need grows.

Most businesses have 2-3 Quick Wins, 1-2 Strategic Bets, a handful of Low-Hanging Fruit, and several items in the Avoid quadrant. Start with Quick Wins. Use the results to fund and justify the Strategic Bets. Ignore the rest until the landscape changes.
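The quadrant placement above can be sketched as a simple classifier. The quadrant names are from the lesson; the 50-point cutoffs on each axis are an assumption for illustration, since the lesson does not specify exact thresholds.

```python
def quadrant(opportunity_score: int, ease: int) -> str:
    """Place an area on the 2x2 grid.

    opportunity_score: the 3-factor score (higher = more opportunity).
    ease: implementation ease on a 0-100 scale (higher = easier).
    The 50-point cutoffs are assumed, not defined by the lesson.
    """
    high_opportunity = opportunity_score >= 50
    easy = ease >= 50
    if high_opportunity and easy:
        return "Quick Win"          # do these first
    if high_opportunity:
        return "Strategic Bet"      # worth it, plan carefully
    if easy:
        return "Low-Hanging Fruit"  # spare-capacity wins
    return "Avoid"                  # effort outweighs return

print(quadrant(88, 80))  # customer support -> Quick Win
print(quadrant(32, 70))  # strategic planning -> Low-Hanging Fruit
```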

Five ways businesses pick the wrong AI opportunity.

Knowing where to start is half the battle. The other half is knowing which temptations to resist. These are the five most common ways businesses pick the wrong first AI project — and how to avoid each one.

1. Starting with the CEO's pet project. The CEO saw a demo of AI video generation and wants it yesterday. But the company's real bottleneck is manual data entry that eats 40 hours a week. Following the scoring method instead of the loudest voice saves the company from investing in a tool that delivers spectacle instead of savings.

2. Automating a broken process. If your customer onboarding process is a mess, automating it with AI just creates a faster mess. Fix the process first, then automate. AI amplifies what exists — both the good and the bad.

3. Choosing the hardest problem first. A custom AI model for predictive pricing sounds impressive. But if you have never used AI before, start with something that works in a week, not six months. Build confidence and capability before tackling complex projects.

4. Ignoring the boring opportunities. Invoice formatting, email follow-ups, CRM data entry — these are boring. They are also where the ROI is highest because they consume the most hours and require the least AI sophistication. Boring problems make the best first projects.

5. Letting the vendor pick the use case. AI vendors will recommend the use case that showcases their product best, not the use case that delivers the most value to your business. Run your own scoring first. Then evaluate whether the vendor's tool fits your top-scoring area — not the other way around.
