Ep. 2.5 | AI, Grounded

AI can mimic language, code, and media. Why does trusting it feel so intimidating? Learn how models are trained, where things go wrong, and what that means for trust.

All the Ways AI Could Go Wrong

Wait.

So we're giving a super technology massive, unthinkable amounts of information.

…AI knows how to mimic human data convincingly.

…Oh, and text is just the beginning. AI can mimic other modalities like music, code, images, video...

…One more thing. The technology doesn't actually know things.

In 2025, AI can’t look at a clock and tell time, a basic task for humans.

Researchers showed AI models struggling to read a simple clock face in this study, Lost in Time:

despite recent advancements, reliably understanding time remains a significant challenge for MLLMs.

*2025 study using then-current multimodal AI models (MLLMs)

Can you think of a few ways this goes badly?

Let’s talk about trust issues with AI, and why that shouldn’t turn you away.

Now that we understand generative AI, prompt basics, and the big data that models are trained on... let's consider how the models come to be.

Reader Quiz

Where can I use OpenAI's model, GPT-4?


Model Preparation

AI models are built with the end user in mind. With mega companies pouring billions into the space, consumers benefit!

There’s a true arms race for AI market share, and it gives the companies behind these models every incentive to earn our trust.

Where does all of the data come from?

How does AI learn what “right” looks like?

Take a scroll through the many tools and processes used to improve AI models:

  • Data governance/lineage

    • Knowing where data comes from.

  • Ground-truth data

    • Verified examples to show AI exactly what's correct.

  • Fine-tuning

    • Brief, targeted lessons that help the model master specific niches & topics.

  • Retrieval-Augmented Generation (RAG)

    • AI checks trusted sources first, then answers clearly with citations. (See the sketch after this list.)

  • Reinforcement learning

    • Humans rate AI responses to help it learn; a second “reward” model is often trained on those ratings to scale the feedback.

  • Constitutional AI & policy tuning

    • Teaching AI clear rules in plain English steers the model away from unsafe or biased answers.

  • Interpretability tools

    • Peeking inside the model to understand why it gave the answer it did.

  • Rigorous testing & red-teaming

    • Experts deliberately try to trick AI so weaknesses can be fixed before real-world use.

  • Continuous monitoring & feedback loops

    • Watching how the model behaves after launch and feeding problems back in.
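
Curious what that RAG bullet actually looks like? Here’s a toy sketch in Python. Everything in it is made up for illustration (the “trusted sources” are two fake documents, and the retrieval is simple keyword matching instead of a real vector search or language model), but the shape is the real thing: look things up first, then answer with citations.

```python
# A toy Retrieval-Augmented Generation (RAG) loop.
# The "trusted sources" and the keyword scoring are invented for illustration;
# real systems use vector databases and an actual language model.

TRUSTED_SOURCES = {
    "employee_handbook.md": "Vacation requests must be submitted two weeks in advance.",
    "it_policy.md": "Passwords must be rotated every 90 days.",
}

def retrieve(question, k=1):
    """Rank trusted documents by how many words they share with the question."""
    words = set(question.lower().split())
    scored = sorted(
        ((len(words & set(text.lower().split())), name, text)
         for name, text in TRUSTED_SOURCES.items()),
        reverse=True,
    )
    return [(name, text) for _, name, text in scored[:k]]

def answer_with_citations(question):
    """Check trusted sources first, then answer and cite them."""
    hits = retrieve(question)
    context = " ".join(text for _, text in hits)
    citations = ", ".join(name for name, _ in hits)
    # A real system would hand `context` + `question` to a language model here.
    return f"Based on {citations}: {context}"

print(answer_with_citations("How far in advance do I submit vacation requests?"))
```

The key design choice: the answer is anchored to documents you picked, which is exactly what makes the citations checkable.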

PS, I left these out to conserve brain cells…

⋅ Data augmentation ⋅ Deduplication & filtering ⋅ LoRA & adapter layers ⋅ Bias & fairness audits ⋅ Guardrails & content filters ⋅ Quantization & distillation ⋅ Version Control for Data & Models ⋅ Model Cards ⋅ Security of the AI Pipeline

Bottom line:

Developers want AI models to have integrity, be accurate, and “be fair.”

It’s true: AI doesn’t know anything. It’s just mimicking patterns to appear intelligent.

That doesn’t mean you shouldn’t use it. It just means you should be the one steering.

So can we trust them?

The AIs…

I can’t say for sure if you can, or ever will, trust the AIs.

I can light the path with the most essential principle I’ve found so far.

👉 Human In The Loop.
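
If you want to see how small that idea really is, here’s a minimal sketch in Python. The draft_reply function and the yes/no prompt are invented for illustration; the point is just that a person reviews every AI output before it goes anywhere.

```python
# A minimal "human in the loop" gate.
# `draft_reply` stands in for whatever your AI tool produces (invented for
# illustration); nothing is sent until a person explicitly approves it.

def draft_reply(customer_message):
    # Placeholder for a real AI call, e.g. an email-drafting assistant.
    return f"Thanks for reaching out about: {customer_message}. We'll follow up shortly."

def human_in_the_loop(customer_message):
    draft = draft_reply(customer_message)
    print("AI draft:\n" + draft)
    decision = input("Send this? (y = send / e = edit / n = discard): ").strip().lower()
    if decision == "y":
        return draft
    if decision == "e":
        return input("Type your edited version: ")
    return None  # A human said no, so nothing goes out.

if __name__ == "__main__":
    approved = human_in_the_loop("my invoice looks wrong")
    print("Sent!" if approved else "Nothing sent.")
```

Notice the asymmetry: the AI can draft all day, but only a human can hit send. That’s the whole principle.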

Come back next week for more on wisely onboarding AI into your life, work, or studies.


Here’s what’s coming next:

Tuesdays: The next episode with Chip & Scoops, a couple of AI-curious friends just like you.
(Catch up on the last episode here)

Thursdays: We’ll cover foundational AI concepts. (Don’t worry. I put the cookies on the lower shelf, where anyone can reach ‘em.) I teach advanced concepts in simple, bite-sized chunks that anyone can understand.
