You’ve been letting AI give you “Average”

(aka slop)

And it’s costing you time, money, and even a hint of superintelligence. (aka sauce)

Here’s Ben Affleck, who went viral last week explaining why average sucks.

(Editor’s note: This article features seven interview quotes that appear as images and won’t come through in the audio version.)

Ouch.

But also… fair?

Of your last 10 AI interactions:
→ How many were useful?
→ Well written?
→ Blatant AI slop?

Ben wasn’t just throwing shade. He, Matt Damon, and Joe Rogan were going deep on AI’s creative limits.

The Complacency Trap

Because the real problem is not the model.
It’s you.
Or rather, it’s me. I’ve fallen for this too.

You get a couple of breakthrough AI interactions: a clever paragraph, a tasty recipe, help with your taxes… and suddenly you think it knows you.

You think, This model gets me. It knows I’m smart. From now on, every response will be smart, too.

Nope.
The base model isn’t optimized for you.*
🤖 The consumer app is tuned for broad usefulness + low regret across hundreds of millions of users. 🤖

Meanwhile I’m only human.
I love hard.
I trust quickly.
A few magical moments with ChatGPT and I’m ready to ditch Google search for life. The AI Complacency Trap has sprung.

When your prompts get lazy, the model doesn’t know what you want.

So it plays the odds.
It generates something broadly satisfying.
Something average.

Ben hits on the underlying math of modern AI: it’s predictive.

I’m glad he brought this up to an audience like The Joe Rogan Experience.
It’s true that AI can be probabilistic… but… yeah, let’s not make this a math test.

(I’ll put the cookies on the lower shelf)
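If you want to see “plays the odds” with your own eyes, here’s a toy sketch in Python. It is not a real model, just a made-up next-phrase distribution, but it shows why always grabbing the single most likely continuation lands on the blandest option, and why sampling with a higher temperature gives the spicier stuff a shot.

```python
import random

# Toy illustration (not a real model): a made-up distribution over what comes
# after the prompt "The food at that restaurant was ..."
next_phrase_probs = {
    "good": 0.40,                              # safe, broadly satisfying
    "really good": 0.25,
    "fine": 0.15,
    "transcendent": 0.05,
    "a fever dream of smoked paprika": 0.02,   # the sauce
    # ...plus a long tail of even rarer options
}

# Greedy decoding: always take the single most likely continuation.
# This is where "average" comes from -- the mode of the distribution wins.
greedy_pick = max(next_phrase_probs, key=next_phrase_probs.get)
print("greedy pick:", greedy_pick)  # -> "good"

# Sampling with a higher temperature flattens the distribution,
# so the rarer (spicier) continuations get a real chance.
def sample(probs, temperature=1.0):
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights, k=1)[0]

print("sampled at temperature 1.5:", sample(next_phrase_probs, temperature=1.5))
```

Real models do this over vocabularies of tens of thousands of tokens, but the shape of the problem is the same: no constraints in, crowd-pleaser out.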

When you give the model a vague prompt, it has to guess what kind of response you want.

(This goes for almost all AI products! Chatbots, image generators, code, video…)

It’s like walking into a restaurant and saying “make me something good.”

With no other context, the chef defaults to a crowd pleaser.

But if you say,
“I want something spicy, no dairy, and I’m sick of tacos,”
that same chef has constraints. Direction. Flavor.

Now they can make you something personalized.

AI is the same.

If you don’t want “the mean,”

don’t give it “mean-shaped” inputs.

Here’s how to start getting sauce instead of slop:

  • A strong stance: “Argue against the popular advice that…”

  • Set the scene: “Pretend you’re texting my manager after a Monday disaster”

  • Name the feeling you’re after: “Make my reader feel informed + intrigued”

  • A voice anchor: Feed it a writing sample. Ask to match rhythm, bluntness, tone.

  • Hard constraints: Ban certain words. Require concrete examples.

  • Ask for more: “Give me options: one safe, one spicy, one poetic…” (the sketch below pulls all six together)
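Here’s a little Python sketch that bundles all six into one reusable prompt you can paste into any chatbot. The helper name and the fill-in values are just my examples, not a magic formula; swap in your own.

```python
# Hypothetical helper (mine, for illustration) that turns the six tactics
# above into one paste-anywhere prompt. Every fill-in value is an example.
def sauce_prompt(stance, scene, feeling, voice_sample, banned_words):
    return f"""
Take a strong stance: argue against the popular advice that {stance}.
Set the scene: {scene}.
Make the reader feel {feeling}.
Match the rhythm, bluntness, and tone of this writing sample:
---
{voice_sample}
---
Hard constraints: never use the words {', '.join(banned_words)}, and include at least one concrete example.
Give me three options: one safe, one spicy, one poetic.
""".strip()

print(sauce_prompt(
    stance="you should post on LinkedIn every single day",
    scene="pretend you're texting my manager after a Monday disaster",
    feeling="informed + intrigued",
    voice_sample="Short sentences. No fluff. One joke, max.",
    banned_words=["unlock", "leverage", "game-changer"],
))
```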

It’s no secret — but that’s the sauce 😉

If you know someone who would enjoy this content, share it with them.

Here’s what’s coming next:

Tuesdays: 🗣️ Stay tuned for a new series coming soon!

Thursdays: We’ll cover LLM and AI concepts in simple, bite-sized chunks that anyone can understand. (Don’t worry. I put the cookies on the lower shelf, where anyone can reach ’em.)


*AI products as of this writing (Jan 2026) are somewhat optimized for the user.

In this post, I simplified the stack. The goal is to encourage both novice and expert users to raise the bar and never settle for average. In fact, I believe average = slop. Hindsight will make this seem obvious.

My research points to the following for current consumer app stacks:

  1. Base model (next-token predictor)

  2. Instruction + preference tuning (be helpful, follow directions, sound sane)

  3. Safety tuning / policy (don’t do harmful stuff; be cautious in certain areas)

  4. Product orchestration & agentic behavior (system prompts, memory, retrieval/tools, routing)

  5. Decoding defaults (how it samples tokens)

That last mile (4–5) is where “generic writing” often gets baked in.
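And if you’re hitting the API directly instead of the consumer app, you can take the wheel on that last mile yourself. A minimal sketch, assuming the OpenAI Python SDK (any chat API with a system prompt and sampling knobs works the same way); the model name and the numbers are placeholders, not recommendations:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (pip install openai)

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder -- use whatever model you actually run
    messages=[
        # Layer 4: orchestration -- supply your own system prompt instead of
        # whatever generic one the consumer app would inject.
        {"role": "system", "content": (
            "You are a blunt, funny newsletter editor. "
            "No hedging, no corporate filler. "
            "Concrete examples over platitudes."
        )},
        {"role": "user", "content": "Rewrite this intro so it doesn't sound average: ..."},
    ],
    # Layer 5: decoding -- nudge sampling away from the safest tokens.
    temperature=1.1,       # higher = more variety (and more risk)
    top_p=0.95,
    presence_penalty=0.4,  # mild push away from tokens it has already used
)

print(response.choices[0].message.content)
```

None of this replaces a good prompt; it just stops the defaults from quietly averaging you out.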

-Christian
