🤖 The Skill Gap Nobody's Talking About (And It's Not Coding)

Why understanding AI matters more than building it for 99% of people

Hey friend,

Last week, I was having dinner with a close friend.

She's a marketing director at a mid-sized company. Smart. Driven. 12 years of experience. The kind of person who always seems two steps ahead.

Halfway through our meal, she put down her fork and asked me something that caught me off guard:

"Okay, so when ChatGPT gives me wrong information... is that a bug? Or is it actually lying to me on purpose?"

I paused mid-bite.

Not because it's a stupid question. It's actually a really good question.

But it hit me: She uses AI every single day. Her entire team has adopted these tools. They're generating campaigns, writing copy, and analysing data with AI.

And she doesn't understand the difference between a hallucination and a bug.

She doesn't know why AI confidently gives wrong answers. She can't tell if an output is reliable or complete garbage.

And she's making real business decisions based on that output.

I spent the next 20 minutes explaining how large language models actually work: how they predict text rather than "know" facts, and why confidence doesn't equal accuracy.

She looked at me like I'd just explained gravity for the first time.

"Why doesn't anyone teach this stuff?" she asked.

That question has been stuck in my head ever since.

Because here's the thing: She's not alone. Not even close.

Multiply her by millions of professionals worldwide: executives, managers, teachers, doctors, lawyers, all using AI tools daily without understanding how they actually work.

We have a massive AI literacy problem.

What AI Literacy Actually Means (It's Not What You Think)

Let's clear something up immediately:

AI literacy is NOT about becoming an AI specialist.

You don't need to understand neural network architectures. You don't need to write Python. You don't need to know what a transformer model is under the hood.

AI literacy is about developing practical wisdom: understanding how AI works, what it can and cannot do, and how to use it responsibly.

Think of it like driving a car. You don't need to understand internal combustion engines to be a good driver. But you do need to know:

  • What the car can and can't do

  • When conditions are dangerous

  • How to recognise when something's wrong

  • The rules of the road

Same with AI.

The formal definition: AI literacy is a set of competencies that enables you to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool in your daily life, workplace, and online environment.

The practical definition: It's knowing enough about AI to not get fooled by it, not break things with it, and actually get value from it.

The Four Core Abilities You Actually Need

Researchers have boiled AI literacy down to four fundamental abilities:

1. Recognise AI (Where Is It?)

Can you identify where AI exists in the world around you?

This sounds obvious, but it's not. AI is embedded in:

  • Your smartphone's autocorrect and keyboard predictions

  • Netflix and Spotify recommendations

  • Email spam filters

  • Social media feeds

  • Google search results

  • Banking fraud detection

  • Navigation apps

Most people use AI dozens of times daily without realising it. Recognising AI is the first step to understanding its influence on your decisions.

Quick test: How many AI systems have influenced you today before lunch? If your answer is "zero" or "one," you're not paying attention.

2. Grasp AI (How Does It Work?)

You don't need to understand the math. But you need to understand the basics:

Machine learning: Computers learn patterns from data to make predictions or decisions. They're not "thinking"; they're pattern-matching at a massive scale.

The key insight: AI doesn't "know" things. It predicts what text should come next based on patterns in its training data. This is why it can sound confident while being completely wrong.

Strengths: Processing massive amounts of data, finding patterns humans miss, consistency at scale, and never getting tired.

Limitations: No true understanding, can't reason beyond training data, prone to hallucinations, and reflects biases in training data.

Why this matters: When you understand that ChatGPT is predicting likely text rather than "knowing" facts, you stop treating its outputs as truth and start treating them as drafts that need verification.
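If you're curious what "predicting likely text" means in practice, here's a deliberately tiny sketch. It's nothing like a real model in scale or sophistication (real LLMs use neural networks, not word counts), but it captures the core idea: the system only knows which words tended to follow which in its training text, and it will answer confidently even when a guess is all it has.

```python
from collections import Counter, defaultdict

# Toy "training corpus" — the only thing this predictor will ever know.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a crude stand-in for "learning patterns").
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict(word):
    # Return the most frequent follower — a statistical guess, not knowledge.
    followers = next_words[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))   # most common pattern after "the" in the corpus
print(predict("on"))    # only ever saw "the" after "on", so it's "certain"
```

Notice that the predictor has no idea whether cats actually sit on mats; it just repeats the strongest pattern it saw. Scale that idea up enormously and you have the intuition behind why an LLM can sound fluent and confident while being flat wrong.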

3. Use AI (Can You Actually Work With It?)

This is where most people stop. They learn to type prompts and accept outputs.

But real AI literacy means:

  • Asking clear, specific questions

  • Recognising when outputs are good vs. garbage

  • Iterating on prompts to get better results

  • Knowing which AI tool fits which task

  • Understanding when AI is the wrong solution entirely

The progression:

  • Basic: "Write me an email"

  • Intermediate: "Write a professional email to a client explaining a project delay, using an apologetic but solution-focused tone, keeping it under 150 words"

  • Advanced: "Here's context about the client relationship and the delay. Draft three variations: one formal, one warm, one direct. I'll pick the best fit."

Most people never move past basic. They get mediocre outputs and conclude "AI isn't that useful."

4. Critically Assess AI (Can You Evaluate It?)

This is the competency that separates AI-literate professionals from everyone else.

Critical assessment means:

  • Spotting potential biases in AI outputs

  • Recognising hallucinations and inaccuracies

  • Understanding data privacy implications

  • Knowing when human judgment is necessary

  • Evaluating whether AI is appropriate for a given problem

Real example: My marketing director friend uses AI to generate customer personas. The AI confidently produces detailed personas with specific demographics, behaviours, and preferences.

But here's the thing: The AI is making this up.

It's generating plausible-sounding personas based on patterns in its training data. It has no access to her actual customer data. The personas might be useful as starting points, but treating them as research is dangerous.

Critical assessment means asking: "What data is this based on? How confident should I be? What could go wrong if I act on this?"

The Four Pillars in Practice

Let's get more specific. From a professional perspective, AI literacy rests on four practical pillars:

Pillar 1: Interacting with AI

This is the foundation. Prompting techniques, asking clear questions, and evaluating response quality.

The key skill: Steering an AI system rather than simply accepting its outputs.

Most people are passive consumers. They type a question, get an answer, and move on. AI-literate professionals are active participants. They:

  • Provide context and constraints

  • Ask follow-up questions

  • Request alternatives

  • Challenge outputs that seem wrong

  • Iterate until they get useful results

The difference in practice:

Passive: "Write a marketing email." Result: Generic, useless output.

Active: "I'm writing to customers who haven't purchased in 90 days. Our brand voice is casual but professional. The goal is re-engagement, not immediate sale. Draft 3 subject line variations and one email body under 100 words." Result: Actually useful starting point.
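One practical way to build the "active" habit is to stop writing prompts from scratch and fill in a structure instead. Here's a minimal sketch of that idea; the field names are illustrative, not any standard, and you'd adapt them to your own work:

```python
# A reusable prompt template — each field forces you to supply the
# context and constraints a passive prompt leaves out.
def build_prompt(audience, goal, voice, constraints):
    return (
        f"Audience: {audience}\n"
        f"Goal: {goal}\n"
        f"Voice: {voice}\n"
        f"Constraints: {constraints}\n"
        "Draft 3 subject line variations and one email body."
    )

prompt = build_prompt(
    audience="customers who haven't purchased in 90 days",
    goal="re-engagement, not an immediate sale",
    voice="casual but professional",
    constraints="email body under 100 words",
)
print(prompt)
```

The template itself isn't the point. The point is that an empty field is a visible reminder of context you haven't given the AI yet.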

Pillar 2: Creating with AI

Using AI for writing, analysis, design, research, or planning.

This isn't about replacing your skills. It's about augmenting them.

AI-literate creators use AI to:

  • Generate first drafts they then refine

  • Brainstorm ideas they then evaluate

  • Analyse data they then interpret

  • Research topics they then synthesise

  • Automate repetitive tasks they oversee

The trap: Letting AI create without your judgment applied. The output might be polished but wrong, generic, or misaligned with your actual goals.

Pillar 3: Managing AI

Understanding the responsible aspects: data privacy, governance, ethical risks, and bias detection.

Questions AI-literate professionals ask:

  • What data am I sharing with this AI system?

  • Who can access my prompts and outputs?

  • Could this output be biased in ways that harm my audience?

  • Am I complying with my organisation's AI policies?

  • What happens if I make a decision based on flawed AI output?

Real-world example: A lawyer using AI to draft contracts needs to understand that the AI might include clauses that are legally invalid, outdated, or inappropriate for the jurisdiction.

Blindly using AI-generated legal text is malpractice waiting to happen.

Pillar 4: Designing with AI

Mapping business problems to AI-supported solutions, even if you're not technical.

This means:

  • Identifying which problems AI can actually solve

  • Understanding what data would be needed

  • Knowing what "good enough" looks like

  • Collaborating with technical teams effectively

  • Evaluating whether AI solutions are working

The skill gap: Many professionals can use AI tools, but can't design AI solutions. They're consumers, not architects. AI-literate professionals can think strategically about where AI fits in their workflows.

The Progression: Where Are You?

AI literacy develops in stages. Be honest about where you are:

Level 1: Remember

You know key terms like "algorithm," "machine learning," and "neural network."

You can define what AI is in basic terms.

Most people stop here. They've read some articles, heard some buzzwords, and think they "get it."

Level 2: Understand

You can explain how AI works in plain language.

You understand that AI learns from data, makes predictions based on patterns, and has fundamental limitations.

You know why AI hallucinates and why it can be confidently wrong.

Level 3: Apply

You use AI tools in real-world tasks.

You've integrated AI into your workflows.

You get better results than the average user because you understand how to prompt effectively.

Level 4: Analyse

You break down AI outputs to understand their accuracy, potential biases, and reliability.

You question what you're seeing rather than accepting it.

You can identify when AI output is useful vs. when it's garbage.

Level 5: Evaluate

You can decide whether an AI solution is ethical, effective, and appropriate for a given situation.

You assess risks, tradeoffs, and unintended consequences.

You make informed decisions about when to use AI and when not to.

Level 6: Create

You design new ways to use AI for solving problems.

You can architect AI-powered workflows.

You innovate in how AI is applied in your field.

The honest truth: Most professionals are at Level 1 or 2. They know the words but can't apply, analyse, evaluate, or create.

Where do you honestly place yourself? Hit reply and tell me. No judgment; I'm genuinely trying to understand where people are so I can help bridge the gap.

Prompt Tip of the Day

The AI Literacy Self-Assessment Framework:

Before using any AI output, ask these five questions:

1. ACCURACY: How confident should I be in this information?
   - What sources could verify this?
   - Is the AI making factual claims or generating plausible-sounding text?

2. BIAS: Could this output reflect problematic biases?
   - Who might be harmed if I act on this?
   - What perspectives might be missing?

3. APPROPRIATENESS: Is AI the right tool for this task?
   - Does this require human judgment?
   - What could go wrong if AI gets this wrong?

4. PRIVACY: What data am I sharing?
   - Is this information I should be inputting?
   - Who has access to my prompts and outputs?

5. VERIFICATION: How will I validate this before acting?
   - What's my fact-checking process?
   - Who else should review this?

Why this works: Most AI mistakes happen because people skip these questions. They accept outputs without evaluation. Building this habit into your workflow prevents costly errors.

Before You Go…

Alright, I've thrown a lot at you today.

But here's what I actually want to know:

How AI-literate do you honestly think you are?

Not what you'd tell your boss. Not what looks good on LinkedIn. Honestly.

On a scale of 1-6 (based on the progression levels above):

  1. You know the buzzwords

  2. You understand the basics

  3. You use AI tools regularly

  4. You analyse and question AI outputs

  5. You evaluate AI solutions strategically

  6. You design new AI applications

Where do you fall?

And more importantly: What's holding you back from the next level?

Is it:

  • Not enough time to learn?

  • Not sure where to start?

  • Overwhelmed by the pace of change?

  • Lack of practical training resources?

  • Something else entirely?

Hit reply and tell me. One honest answer.

I'm asking because I want to create content that actually helps people level up, not content that just sounds smart but doesn't move the needle.

If everyone's stuck at Level 2, I should write about getting to Level 3. If everyone's stuck because they can't find good training, I should curate better resources.

Your answer shapes what I write next.

So be honest: Where are you, and what's in the way?

Not subscribed yet? Hit the button below.
