Episode 37: Chaotically Disobedient AI
June 5, 2025
When your tools start surprising you, is it a problem… or a new kind of progress?
For most of computing history, one thing held true: a computer would only do what you—or someone—explicitly told it to. That principle shaped how developers debugged, how IT pros solved problems, and how organizations thought about value and control. But with the rise of LLMs and generative AI, that truism no longer holds. Inputs don’t guarantee consistent outputs. Models behave probabilistically. And companies now invest in technology that may—or may not—do what they want it to. This episode reflects on the unsettling, fascinating shift toward tools that feel intelligent but resist predictability—and why that might be a feature, not a flaw.
Transcript
When I was coming up through computer science coursework and degree programs and all the rest of it, somewhere along the way I picked up a lesson that stuck with me and informed so many things I did.
It shaped the way I approached development, working with computers, and, once I got into more IT work, helping other people troubleshoot their problems. It was a very formative piece of advice.
Not even advice, it's just a truth, or at least it always was, that I really took to heart and always remembered.
The general idea went like this, and I wish I could actually tell you who imparted this to me, but I can't really remember.
The idea is simple. It's that computers only do what you or someone told them to do.
So for some context, a lot of times when people are learning, and I'll just stick with programming here, though it branches out into just about anything you can do with a computer, they'll hit an error, or the program crashes, or whatever.
And they'll look at it, and they'll say, I don't understand what this thing's doing, or it crashed, or it's not working.
It's kind of similar to almost speaking in passive voice when you're writing.
It's a little different in that you're saying legitimately, like, it's crashing, it's erroring, what's wrong with IT, what is IT doing?
But the thing to remember is that the IT on the other side of that, it isn't a thinking, judging, deciding, free-will machine.
It's just a machine. It has been given a series of instructions.
You, or whoever was working on it before you, or whatever, has given it some set of guidelines, instructions.
It has told it what to do.
And so the answer is not to figure out what's wrong with the computer.
The answer is to figure out what's wrong with whatever instructions it was given.
And this stuck with me forever, because, you know, again, I went through a lot of coursework around working with computers, and programming, and web development, and database stuff.
And, you know, I ran the gamut.
And then I had a professional career on top of that in those general arenas.
And every time something would be going on, and I would get frustrated.
And there were plenty of times that that would happen.
You know, anyone who's done this work knows what it's like to spend 9, 10 hours troubleshooting something, and you still can't figure it out.
But during those frustrating times, I was almost always able to remind myself, it's only doing what I told it to do.
And the I, in this case, being extrapolated out to myself, developers before me, system designers, whatever.
But some person, somewhere in the chain, has told this thing to do what it is now doing, the very thing that's frustrating me.
Now, obviously, there's exceptions to this, but I don't think getting hung up on them is a valuable thing.
Like, sure, if your hardware is malfunctioning, and, you know, it's misplacing memory or something, whatever.
But that's not what we're talking about.
We're talking about the normal case where you're going along doing something, something goes awry, you get frustrated, and you blame the machine.
So this has always held up true to me.
It has always held water and always helped me through some of these things that otherwise would just spin, you know, you'd spin your wheels forever.
I think one of the interesting things about the LLM movement, and really generative AI in general,
is that for the first time, or the first time in a very large-scale sense, this truism no longer holds up.
It's not true anymore.
LLMs sit on top of complicated neural networks, probabilities, a certain degree of randomness, and content that's been sourced from all sorts of areas and walks of life.
And you can tell, because if you put the same input twice into these things, or three times or four times, you get varying output each time.
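That varying output comes from sampling over probabilities rather than following a fixed instruction. Here's a minimal sketch of the idea in Python, with a made-up toy distribution standing in for what a real model computes; the words and weights are invented for illustration:

```python
import random

# Toy next-token probabilities, standing in for what a real LLM
# computes at each step; the distribution itself is made up.
next_token_probs = {
    "feature": 0.40,
    "flaw": 0.25,
    "surprise": 0.20,
    "problem": 0.15,
}

def sample_next_token(probs):
    """Pick one token in proportion to its probability, roughly the way
    an LLM sampling with a nonzero temperature picks its next word."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "input" (the same distribution) run several times can
# produce a different output each time.
for _ in range(4):
    print("Maybe it's a", sample_next_token(next_token_probs))
```

Run it a few times and the outputs differ, even though nothing about the input changed. That's the same reason two identical prompts can come back worded differently.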
This is true to such a degree that it's driving one of the things I find super fascinating right now, something that's kind of going on behind the scenes.
You know, there's the LLM and generative AI movement that's out in the consumer space, right, and the popular space and professional space.
People mostly using AI to boost productivity or do things they couldn't do before or whatever.
What's super interesting is that behind the scenes of all this, there are researchers and academics and companies researching how the thing they made actually works.
And I just find this so existential in some ways, and so interesting: the ability to create a computer system about which, for my entire career until recently, I was able to really internalize the idea that it will only ever do what I tell it to do.
Now, there are whole sectors out there in the world researching how the thing that people are using every single day, the thing they themselves created, actually works behind the scenes.
People created these generative AI applications and models.
People are then using them and improving upon them.
And yet, people also have to research how they actually are working.
And some of that is an effort to improve them over time.
But some of it is legitimately just an academic exercise in what have we created.
And a lot of experts in the field will tell you that at the core of it, we don't really know.
We're not really sure how some of this stuff is working, how it operates behind the scenes and under the hood.
So, for the first time, and more and more because of the rapid adoption of this, people really are in a position where we look at our technology and can no longer say, it's only doing what I told it to do.
So, in terms of terminology, the extreme of this is what we mean when we say, oh, it's hallucinating.
You know, you talk about something like ChatGPT, and often you'll hear, oh, it hallucinates sometimes.
Can you imagine?
I mean, a hallucination in a person is not considered a fleeting thing.
If a person is having hallucinations, typically you either have something going on in your brain that really needs to be taken a look at, or you're on some kind of mind-altering drug, right?
That's how people get there.
But these things just kind of do that.
And that's just the extreme example.
There's also a ton of other examples.
Again, even something as small as putting the same input in twice and getting two answers that aren't necessarily vastly different, but have different details, different wordings, different ways of thinking about things, different ways of phrasing things sometimes.
And nobody can actually predict which way it's going to go at any given moment.
So, this intersected with something else I'd been thinking about. And I'm really not huge on buzzword-y kind of stuff, but it has to do with the idea of a value proposition for a company.
When you talk about a value proposition, what you're really talking about at a company level is that you're going to spend money on something or acquire something, whether it's talent or a company or a system.
And then it is going to make your company function in a way that it's going to add value to your company in some way.
And there's been a huge movement over decades to narrow down a value proposition to really, in many cases, just an ROI.
You're going to spend $100, can you get $200 of value in return?
You're going to spend $1 million, can you get $2 million of value in return?
And over what period of time?
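That classic calculation is simple enough to write down. A quick sketch, using the dollar figures from above:

```python
def roi(cost, value_returned):
    """Classic return on investment: net gain divided by cost."""
    return (value_returned - cost) / cost

# Spend $100, get $200 of value back: an ROI of 1.0, i.e. 100%.
print(roi(100, 200))
# Spend $1 million, get $2 million back: same ROI.
print(roi(1_000_000, 2_000_000))
```

The point is that this formula assumes the returned value is a known, dependable number, which is exactly the assumption that's getting shakier.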
And now companies are racing to adopt AI, and should be.
Like, I want to make that clear, like, you know, get on the bandwagon.
But I think for the first time, what we're also seeing is that we're going to see a more and more ambiguous definition of value proposition.
No longer is a value proposition something that can so easily be boiled down to,
I spent $100 and got $200 back.
Now, companies that are really investing in AI, they're putting money in almost for the possibility of getting a value proposition,
of maybe getting value in return.
Because you can't be sure that when you do any particular activity with that AI,
that it's going to give you the correct or right or reasonable or...
You can't say definitively it will give you an output of value.
And therefore, you can't really say definitively that the things we're using AI for in companies will absolutely return value.
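One way to make that ambiguity concrete: value becomes an expectation over many attempts rather than a guarantee on any single one. A toy Monte Carlo sketch of that framing, where every number, including the hit rate, is invented for illustration:

```python
import random

def run_ai_task():
    """Hypothetical AI-assisted task: sometimes the output is worth
    something, sometimes it isn't. Both numbers are made up."""
    return 200 if random.random() < 0.6 else 0  # assumed 60% hit rate

cost_per_task = 100
trials = 10_000

# Average the value across many attempts; no single attempt is
# guaranteed to pay off, but the aggregate can still be estimated.
avg_value = sum(run_ai_task() for _ in range(trials)) / trials
print(f"average value per ${cost_per_task} task: ${avg_value:.0f}")
```

Under those made-up numbers the average works out positive, but any individual run can return nothing, which is the shift from "I spent $100 and got $200 back" to "I spent $100 for the possibility of value."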
And it's an interesting space to be.
And in a lot of ways, early adopters are paying money to learn how to formulate their own questions better,
so that then we can try to get more consistent value out of a very new technology.
Now, I like the chaos.
So, I welcome this with open arms, this idea that things are a little more nebulous,
things are a little more ambiguous, things are a little more...
You can't just be sure that if you put A in, you get B out.
I like that idea.
It's fun.
That brings interest to what would otherwise just become increasingly stagnant.
So, I look forward to this, and I look forward to both seeing the ways that the world attempts to mitigate that chaos,
and also the ways that the world moves to dive into it.
Because I think both will happen, and everything in between.
And so, think about that next time you're using one of these technologies, right?
Think about whatever it is you're doing and ask: in this situation, should I be looking for additional randomness and chaos?
Or am I trying to remove as much of it as possible?
Is it really better to do one or the other, or land somewhere in the middle?