Talofa reader,
I remember one of my first AI-related engagements with one of my highly capable AWS Partners, a multi-national, multi-billion dollar company, highly resourced. It was early 2024, and I was delivering a workshop on GenAI. Talking with one of the company's tech leaders, I was a little surprised at how they were thinking about AI: essentially, off-loading a good chunk of the problem decisions to the LLMs, with little to no emphasis on context detail and scope in either direction, to the LLM and back.
We now know that "context" is king, as the current surge of "context engineering" posts and videos makes apparent, but it did baffle me a bit that this wasn't better understood in the early days.
But since those first customer-facing engagements, I've had quite a few more conversations, and my surprise in that initial one led me to poke at this apparent "gap" in AI understanding. Not a gap in understanding what an LLM does or how an LLM works, but in what you, the user, think is going on between you and the AI system you're interacting with.
It's always a conversation
This might sound asinine at this point in the AI hype cycle, but I really don't think people fully grasp what this means. It's a conversation, and it comes with everything a conversation comes with. If this isn't fully landing, then to me it calls into question not only your understanding of what a conversation is, but also whether you're any good at having one.
So we should start right at the beginning: a conversation is between one or more people, or entities. I deliberately say "one or more" because talking to yourself is also a conversation.
Right off the bat, you now have multiple "points of view" to consider: no two people are exactly the same, and with that come differences in understanding, definitions, experiences, levels of knowledge, levels of awareness and so on.
Let's call this first level of abstraction multiple buckets of "data" or information. It's just a static snapshot of information, grouped per entity (person).
Now we introduce "movement", which is how I think about the shift from scalar to vector. When the conversation starts moving, entities and their buckets in motion, we have to determine what exactly is driving and steering it. What are the entities' instructions, motivations and incentives? That is what gives us an idea of which directions the conversation moves in, what information from the buckets might be used and in what order of priority, and the resulting trajectory of the conversation.
At this point you will have an idea of where the conversation is going and why it's going that way, and you can potentially highlight what data was missing from the conversation that would have steered it in the direction you actually wanted.
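To make that framing concrete, here's a minimal, purely illustrative sketch in Python. Nothing in it comes from a real system; the entity names, buckets and motivations are invented solely to show the idea of static information per entity, motivations steering which pieces actually get used, and the gap between what a conversation needed and what made it in.

```python
# Purely illustrative sketch: a conversation as per-entity "buckets" of
# information plus the motivations that steer which pieces actually get used.
from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str
    bucket: set[str]                        # static snapshot: what this entity knows
    motivations: list[str] = field(default_factory=list)  # what steers its turns


@dataclass
class Conversation:
    participants: list[Entity]
    needs: set[str]                         # what the conversation actually requires
    trajectory: list[str] = field(default_factory=list)

    def take_turn(self, speaker: Entity) -> None:
        # A speaker contributes whatever its motivations prioritise,
        # limited to what is actually in its bucket.
        self.trajectory.extend(m for m in speaker.motivations if m in speaker.bucket)

    def missing_context(self) -> set[str]:
        # What the conversation needed but nobody ever brought into it.
        return self.needs - set(self.trajectory)


# Invented example: a user with only a vague idea, and an assistant whose
# incentive is to please, so the trajectory drifts away from the real goal.
user = Entity("user", bucket={"vague idea"}, motivations=["vague idea"])
assistant = Entity("assistant", bucket={"flattery", "boilerplate"},
                   motivations=["flattery", "boilerplate"])

convo = Conversation([user, assistant],
                     needs={"actual goal", "constraints", "vague idea"})
convo.take_turn(user)
convo.take_turn(assistant)
print(convo.trajectory)         # ['vague idea', 'flattery', 'boilerplate']
print(convo.missing_context())  # e.g. {'actual goal', 'constraints'}
```

The only point of the sketch is that the trajectory is a function of motivations applied to buckets, which is why unstated goals and unchecked incentives drift a conversation somewhere you never intended.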
What's an example of this phenomenon?
Vibe coding your way to an over-engineered, over-complicated, useless application that doesn't deliver what you wanted it to.
And why did it go wrong? How do I count the ways:
You jumped straight into "build" mode with no prior thinking, no discussion, nothing beyond a vague idea: what you want to build, sure, but not what you actually want to achieve.
You allowed too much of the design thinking to be done by the LLM. Now you're in its world, one that vaguely resembles our current understanding of how humans think, but is ultimately orders of magnitude more basic and less capable than the human brain.
The LLM's incentive is to "please its user". What kind of goal is that? Do you know where that's going to take things, left unchecked?
I've seen Coursera courses made by big-time CEOs who have learned this concept, coaching executives on making the AI your "thought partner". That concept is the collaborative element of AI: the process is a two-way street that requires listening to understand what's coming back to you, and adapting what you send back in response.
Other examples include:
Asking ChatGPT for business strategy advice, getting generic platitudes back, then declaring "AI is crap for business plans"—completely missing that you asked a nothing question and got a nothing answer back.
Copy-pasting increasingly complex prompts and code from the internet, wondering why you're getting inconsistent results, never stopping to think: "maybe I don't actually understand what I'm funnelling into the AI—how could I possibly know what to expect back?"
And I think the inability to see and understand not just this concept, but how deeply crucial it is to your interactions with AI, exposes something that was probably hidden behind other factors in the workplace, or behind processes that let the gap go unappreciated:
You suck at communication.
You don't know how to listen to understand; you don't know how to self-reflect, to empathise, or to consider different points of view. The "other" in this case is the AI. You used to be able to fob it off on other people and processes, but now it's just you and the tools.
AI isn't exposing a technical skills gap. It's exposing people who've never learned to think collaboratively, listen generatively, or hold space for differing perspectives. People who used to get cover through team dynamics where teammates picked up the slack, or managers who'd bark "harden up" or "find somewhere else to work" when they couldn't possibly admit to a leadership problem.
Now it's just you and the stochastic parrot who'd happily drive you both off a cliff if you asked. You've only got yourself to look at in the mirror and ask: "Do I actually know how to communicate well? Or am I a bit shit at it?"
I approach AI like it's doing its best impression of human thinking, and I'm doing my best impression of a human right alongside it. Whether that makes me better at this than most people, or just aware enough to see how shit we all are at it—I'll let you decide.
Thanks for reading,
Ron.
