The Real Shift in AI Isn’t Coding—It’s How We Think
Now, let’s all hold up three fingers… wave them across your face… and say hello, everyone.
Don’t worry, Professor Chang—I’m not crazy… yet.
But why am I asking you to do this?
The other day, one of my coworkers told me about a video he came across. In the video, the person was doing all kinds of things—but there was one thing they wouldn’t do.
They wouldn’t wave three fingers across their face.
Turns out… It was an AI-generated video.
This is actually a known trick to spot deepfakes.
But—don’t get too comfortable thinking you’ve found a reliable test.
Because things are changing very quickly in the AI world.
Just last Friday, I was at an AI Summit at MIT.
One of the panelists, a professor, was explaining how the human brain works. He said, “Humans can take what we learn and turn it into graphs. AI can’t do that.”
And the moderator interrupted him and said,
“No, professor—that’s old news. AI can do that now.”
The professor was surprised. He asked, “Since when?”
The moderator said, “About two weeks ago.”
Professor Chang and I studied computer science together in college. Back then, the language we used in our AI class was LISP. You’ve probably never heard of it—and that’s okay. It’s not very relevant anymore.
These days… AI speaks English.
After college, I worked at a textile company maintaining their inventory system. My hometown is in central Taiwan, where summer temperatures hit 90 to 100 degrees Fahrenheit.
And I had a lot of friends in the company… because my office housed the mainframe stacks, running COBOL—and it was the only air-conditioned room.
After about a year, I left and came to Boston for my master’s in computer science.
After grad school, I stayed in Boston and have been coding ever since. My first job was as a junior programmer working with C++.
Then Java came along.
And as you probably know—software engineers tend to have… strong opinions.
Back then, at least in my circle, we believed in C++.
Java? That was for people who couldn’t manage memory.
Of course… It took a few years, but Java became dominant.
I’ve been in this field for decades, and I’ve seen a lot of “best practices” come and go.
Usually, it takes years for people to adopt something new. Part of that is inertia—we get comfortable. Once we believe something works, it’s hard to change.
But what’s interesting is:
That cycle is getting shorter.
Take AI.
ChatGPT was released on November 30, 2022—and within days, millions of people were using it.
At the time, I’ll be honest—I didn’t pay much attention.
To me, it felt like… a shiny new toy.
It wasn’t until the second half of 2025 that I really started experimenting—with tools like Cursor, Copilot, and agents for writing tests.
And honestly? It was… okay.
But then things changed—very quickly.
Since January 2026, especially after Davos (the World Economic Forum), it felt like, overnight, everyone started talking about Anthropic.
We started using Claude Code on March 10th.
I remember that date very clearly.
Vibe Coding
That was the day I started taking vibe coding seriously.
In other words, I began spending a lot more time thinking carefully about my prompts—describing exactly what I wanted, how it should behave, and what the output should look like.
And here’s something funny:
When Claude processes your prompt… it costs tokens.
By the end of March, I was checking my token usage every day.
If you’re not familiar with tokens—it’s a bit like an amusement park.
You pay tokens… to go on rides.
And honestly? That’s exactly how it feels.
I’m spending tokens… to take my rides with Claude Code.
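If you’ve never seen a token up close, here’s a minimal sketch using OpenAI’s open-source tiktoken library. (Claude uses its own tokenizer, so the counts differ, but the idea is the same.)

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one common tokenizer; every model family has its own.
enc = tiktoken.get_encoding("cl100k_base")

text = "I'm spending tokens to take my rides with Claude Code."
tokens = enc.encode(text)

# A token is roughly four characters of English on average,
# and you're billed for both the prompt and the response.
print(f"{len(tokens)} tokens: {tokens}")
```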
So what does that actually look like in my day-to-day work?
We now use tools like Claude to speed up how we design workflows.
Before AI, when we got new feature requirements from a product manager, we’d gather around a whiteboard and break down the requirements.
For example:
“We want to pop up a dialog to perform X when the user finishes action A.”
Simple, right? But how do we tell machines to do that?
We would translate that into detailed specs:
- How do we detect action A?
- Where does the dialog appear?
- What size is it?
- What does it look like?
And that’s just a tiny piece.
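Just to make that concrete, here’s a minimal sketch of what that one slice of spec might turn into. The toolkit (Python’s tkinter) and every detail in it are stand-ins; the real spec would pin each one down.

```python
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()
root.title("Demo")

def on_action_a_complete():
    # Spec answer: "action A" here is detected as a button click.
    # Spec answer: the dialog is a modal info box with default styling.
    messagebox.showinfo("Perform X", "Action A finished. Performing X.")

tk.Button(root, text="Finish action A",
          command=on_action_a_complete).pack(padx=40, pady=20)
root.mainloop()
```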
Turning product requirements into solid technical specs takes time—and it’s collaborative. We debate, challenge each other, and try to find edge cases.
Now?
We can work on multiple projects in parallel.
How?
Each of us has a team of Claude agents living inside our IDEs.
The process is actually the same at its core—but now we do it through agents.
So yes—I write more prompts and less code.
But that doesn’t mean I’m doing less engineering.
It just means the work starts differently.
Instead of jumping straight into code, I:
- describe the problem
- define the expected behavior
- explain where it fits in the system
Of course, I go back and forth with Claude until we reach a solid plan.
But here’s the key:
You can’t trust AI blindly.
It will get things wrong.
And this is really important to understand—because it explains both the power and the risk.
Generative AI is fundamentally based on probability.
It’s not “thinking” the way we do.
It predicts the next word based on patterns it has learned from massive amounts of data—and then chooses from the words that are most likely to come next.
Then it does that again.
And again.
And again.
So what feels like intelligence…
is actually a chain of very sophisticated guesses.
A simple way to think about it:
It’s like your phone’s autocomplete… but on steroids.
It doesn’t know the answer.
It just knows what answer is most likely.
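Here’s a toy version of that loop, with made-up probabilities standing in for a real model:

```python
import random

# Toy "model": for each word, the likely next words and their
# probabilities. A real LLM learns billions of such patterns,
# but the generation loop looks a lot like this.
NEXT_WORD = {
    "the":    [("cat", 0.5), ("dog", 0.3), ("judge", 0.2)],
    "cat":    [("sat", 0.6), ("ran", 0.4)],
    "dog":    [("barked", 0.7), ("sat", 0.3)],
    "judge":  [("ruled", 0.9), ("sat", 0.1)],
    "sat":    [("down", 1.0)],
    "ran":    [("away", 1.0)],
    "barked": [("loudly", 1.0)],
    "ruled":  [("quickly", 1.0)],
}

def generate(word, steps=3):
    out = [word]
    for _ in range(steps):
        choices = NEXT_WORD.get(out[-1])
        if not choices:
            break
        words, probs = zip(*choices)
        # Pick the next word by probability: a sophisticated guess,
        # not a lookup of "the answer."
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```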
And sometimes, that goes very wrong.
You may have heard about the Mata v. Avianca case in New York.
For those who haven’t heard of it: a lawyer used ChatGPT to write a legal brief—and it confidently cited cases.
The problem?
Those cases didn’t exist.
The judge fined the lawyers $5,000.
So the lesson is simple:
AI can sound very convincing—but you still have to verify everything.
How do you do that?
You ask AI to explain its reasoning step by step: how did it reach that conclusion, and where did it get the sources?
I have a story that illustrates this well.
A friend of mine is a law professor who has published many papers. Recently, he was working on a new paper and needed examples and cases to support his point. He used an AI model to help search for them, and of course, the AI confidently found cases that seemed to support his theory.
But my friend, being a very experienced scholar, always asks for the source. And again, AI provided citations.
Here’s what set him apart: he cross-checked those citations with a different AI model.
And the sources could not be found anywhere.
So he went back to the original model and asked, very directly, “I couldn’t verify the source. Did you make it up?”
And AI admitted it—without shame.
I guarantee you, this will happen again and again.
The lesson is simple: always analyze, always verify.
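If you wanted to automate my friend’s habit, the shape of it might look like this sketch. Note that query_model is a hypothetical stand-in, not a real API; the stub below just makes the sketch runnable, and you’d wire it to whatever client libraries you actually use.

```python
# Cross-check one model's citation with a second, independent model.
# query_model() is hypothetical; replace the stub with real API calls.

def query_model(model: str, prompt: str) -> str:
    canned = {
        "model-a": "Yes, that case exists.",
        "model-b": "NOT FOUND",
    }
    return canned[model]

def cross_check(citation: str) -> bool:
    prompt = (f"Does this case exist? Give the official citation, "
              f"or answer exactly NOT FOUND: {citation}")
    answers = [query_model(m, prompt) for m in ("model-a", "model-b")]
    verified = all("NOT FOUND" not in a for a in answers)
    if not verified:
        print(f"Could not verify: {citation}")
    return verified

# One model vouches for it, the other can't find it. Don't cite it
# until you've checked the primary source yourself.
cross_check("Varghese v. China Southern Airlines")
```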
So from my perspective, the biggest shift is this:
I used to spend my time writing code for machines…
Now I spend a surprising amount of time explaining things to machines.
And that’s what prompt engineering is.
If you’ve ever used ChatGPT and thought:
“Why is this answer so bad?”
Take another look at your prompt and ask yourself: Why doesn’t the model understand my question? What’s missing? How do I explain it more clearly?
Prompt engineering is really about:
How do I give instructions so the AI actually does what I mean?
And it turns out—it’s not that different from writing a spec.
I’ve heard people start to call this Spec-Driven Development (SDD).
What makes a good prompt
Over time, I’ve found good prompts usually include:
- a clear goal
- enough context
- constraints (this is important: you don’t want an agent deleting your data without permission)
- and the expected output format
If there’s one thing that really matters—it’s context.
AI only knows what you tell it right now.
So, if you leave things out, it fills in the blanks… sometimes very creatively.
So in real work, I include:
- where to look for existing patterns
- which files need to be changed
- and system details
Without detailed context, AI guesses and hallucinates.
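Here’s the shape of a prompt I might actually write, shown as a Python string. The feature and the file paths are hypothetical, just to show all the pieces in one place:

```python
# A hypothetical prompt with all four parts: goal, context,
# constraints, and expected output format. Paths are made up.
PROMPT = """\
Goal: Show a confirmation dialog after the user completes checkout.

Context:
- Follow the existing dialog pattern in src/ui/dialogs/SaveDialog.tsx.
- Files to change: src/ui/CheckoutFlow.tsx and src/ui/dialogs/.
- We use React 18 with our in-house modal component.

Constraints:
- Do not touch the payment service or delete any existing tests.
- Ask before modifying any file outside src/ui/.

Expected output:
- A short plan first, then the diff, then new unit tests.
"""
print(PROMPT)
```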
Why fundamentals matter
Prompt engineering depends on strong fundamentals.
Because you need to:
- understand the problem
- recognize good vs. bad solutions
- catch subtle errors
If you don’t… the AI might sound right—but be completely wrong.
So AI doesn’t replace engineering skills.
It raises the bar.
Closing
So the biggest change isn’t just productivity.
It’s really how we think about software development. In a team, we collaborate, but each person owns a piece of the work. Now, I have agents owning those pieces.
I feel like a conductor of an orchestra—I set the direction, and the agents play their parts.
There are so many free resources today—you can learn almost anything.
I actually used ChatGPT to help map out my own learning path—and even track progress.
So yes—use AI.
But don’t abuse it.
What do I mean by “abuse it”?
Don’t just read summaries.
Summaries are like fast food.
They’re quick—but not very nutritious.
If you really want to learn:
slow down, grab a coffee, and actually read.
With that said,
For quick learning, I like a couple of YouTube channels:
- 3Blue1Brown (great for fundamentals)
- IBM Technology (more practical insights)
Speaking of judgment,
You can’t make good decisions without fundamentals.
And just as important—you need to know what you don’t know.
Critical thinking
In an AI world, everything comes down to:
critical thinking
You have to:
- question results
- analyze deeply
- validate constantly
And that’s not something you learn on a weekend.
Your diploma does not mark the end of learning.
It’s the beginning of a lifelong habit of learning.
We’re lucky—there’s so much knowledge already out there.
Not just online—but in books that have been shaping how people think for decades… sometimes centuries.
If you’re wondering where to start, here are a few books that really changed how I think.
First—Daniel Kahneman’s Thinking, Fast and Slow.
Kahneman's book helps you notice when your brain is coasting… so you can actually slow down and think.
So instead of just trusting your gut, you start asking yourself: “Am I really thinking this through… or just reacting?”
Then there’s Yuval Noah Harari—Sapiens and Nexus.
In Sapiens,
Harari's point is that humans have always been running on shared stories. And the people who get to write those stories have enormous power.
Sapiens makes you question the stories we believe.
In Nexus,
Harari points out “more information doesn’t mean more truth.”
Throughout history, whoever controlled the flow of information gained power.
Reading this book makes you question who’s telling you those stories, and why.
And finally, Adam Grant’s Think Again.
This one is about rethinking—how to question your own assumptions.
If Kahneman shows you how your thinking can go wrong,
Grant shows you what to do about it.
Happy Reading!