Episode 88: A Step Toward AI Governance
January 27, 2026
We can make it like bugs in food.
As artificial intelligence becomes more embedded in daily life, questions of power, access, and responsibility are becoming harder to ignore. This reflection considers what makes large technologies feel different once they reach a certain scale, and why oversight often lags behind adoption. Rather than focusing on fear or hype, it sits with a quieter concern: how societies decide what is acceptable when tools become essential, opaque, and controlled by only a few.
Transcript
I need to do a better job when I make notes for episode ideas for this show.
I need to give myself a little more context.
I had a note here about this topic.
Because a lot of times what happens to me is I'm out, I'm moving around, or I'm at the gym, or whatever I'm doing.
And something occurs to me.
And maybe it's triggered by something I'm listening to on a podcast, or it's just something I'm thinking about, or whatever.
So I'll try to take that note real quick so it doesn't fly out of my head.
And I try to give myself enough context that I remember what was going on that spurred this.
In this case, I put down enough information about what I actually want to talk about.
But it came out of something in an Accidental Tech Podcast (ATP) episode that made me think about this.
But I didn't give myself nearly enough context as to what it was in the show that made me think about this.
And their episodes are quite long, and when I looked through their show notes I couldn't find what would have spurred this.
So whatever.
Long story short, you know, I'll just put in a plug, I've done this before for ATP.
If you like tech-related news and discussion and stuff like that, go check out ATP.fm.
Great podcast that spurred this idea somehow.
I don't remember how.
I think we're all scrambling to try to figure out...
Oh yeah, okay, great, hot take.
We are all scrambling to try to figure out what's going on with this AI stuff.
Depending on who you talk to, what mood they're in, what mood you're in, what's happened recently, how knowledgeable you are on the subject, whatever.
All these factors, this introduction of AI into our daily life can range anywhere from, you know, this is the best thing ever, all the way to this is going to result in the, you know, collapse of society as we know it.
And everywhere in between.
The conclusion that I've come to is that the problem is the intersection.
It's not the technology so much, and it's not even the way that we use it.
It's the intersection of this new, amazing technology, which we have the ability to distribute at mass scale to essentially every human on the planet who wants it or, you know, has access to even a phone, with the position of power of the large companies that now both produce and control it.
The fact that those companies have become so powerful, intersected with the advent of this technology, is what I think is causing us problems, or is going to cause us problems.
I've said this before: we are in a sort of golden era of this right now.
It feels a little bit like, and I've likened it before to, when social media companies started putting out all these open APIs for developers to build stuff on top of, for free, for no apparent reason.
There was no real business case for that, and that's evidenced by the fact that those APIs were largely shut down over the years, because there was no business reason to let other developers build on top of your stuff.
But that was kind of a golden era in some ways of at least being a developer within social media.
And in some ways, even for a consumer of social media, the ability to have all these cool third-party apps and do different things with it was kind of a golden era for that as well.
Now we're just locked into these platforms where you stare at, you know, some long-running feed and watch short-form videos, half of which are generated by AI, which brings me back to that, I suppose.
Companies have now produced these models and systems and interfaces that we are quickly becoming hooked on as society.
I know that I feel as if I've gotten to a point, probably over the last, let's call it 12 months, where if the rug was taken out from under me and all of a sudden I didn't have access to an easy, good, high-quality LLM like ChatGPT anymore, my life would become very much more difficult very quickly.
Now, I think it's unlikely that that rug gets pulled.
There's too much investment in it.
It's become too core.
This type of technology is probably going to become almost ubiquitous, just like the web became, though the web got there much more slowly.
And it's like you couldn't shut it down even if you wanted to, short of like a true apocalypse.
But much like the web, it's very easy... well, the web was much more decentralized, at least initially.
In this case, it will be very centralized.
And the companies that produce these things that we've come to rely on very heavily can very easily get worse very fast, right?
In a number of ways.
You know, the, I don't know what the plan's called, Pro or Premium or whatever the hell, in ChatGPT, right now it's 20 bucks a month.
But I guarantee you they are probably losing money on that subscription.
In fact, I'm pretty sure I've heard from the financials that they are, in fact, hemorrhaging money, because your usage of ChatGPT far exceeds the $20 you pay.
So what if they need to turn a profit on this suddenly?
And what if your subscription costs aren't 20, but they're 50 bucks a month or 100 bucks a month?
Or they can just kind of keep squeezing until you are paying very high amounts of money for this?
You know, there was a time that a cell phone only cost like 10 bucks a month to run.
Now, I don't know about your bill, but mine's like well over 100.
So the pricing is like one way that this can get worse.
Another way it can get worse, and if you spend any time on something like Reddit or whatever, I'm sure you've seen some of this in meme form:
it won't be too long, most likely, before advertising becomes baked into these LLMs.
And that's going to be a very sad world.
And also, that will probably erode trust pretty quickly.
But at that point, societally, we may just be too damn hooked on these things to move away from them.
Even though, when you ask, you know, what's a great method for cleaning a stain or whatever, and then it tries to sell you a vacuum, that experience is going to be terrible.
But it's probably going to not just be terrible, but probably also be fairly sneaky.
It will probably be integrated in ways that are manipulative.
And I don't know about you, but I'm not really looking forward to like one more way that I'm going to be manipulated into buying stuff.
But when a technology like this costs as much as it does to produce quality output, only the largest of enormous companies can actually build something like this.
Unlike the web, and this is where the metaphor kind of breaks down, where for very little money pretty much anyone could spin up a website if they had enough technical know-how.
And I know technical know-how is a rare skill, and it was even rarer back then.
But there are a lot more people with technical know-how than there are people with billions of dollars to produce quality AI models and interfaces.
One is a much harder and more expensive endeavor than the other.
So some of these thoughts are just things that circle in my head.
And then something in this ATP episode that I cannot remember spurred this thought, which is that I think we could solve this.
I think we could solve a lot of these things.
If there was some sort of regulation or law or whatever, that if you're going to produce a mass-scale LLM, and I don't know how to define that exactly, but let's just sweep over that one for a minute.
And assume that this applies to things like ChatGPT, Gemini, you know, the big ones, Claude.
If there was some sort of regulation that stated that, oh, I'm sorry, let me back up a second.
The other piece that has been in contention, and that I think societally we are almost just comfortable with in a way that we probably shouldn't be, is copyright infringement and plagiarism: these companies have been caught, you know, consuming information that they shouldn't be.
From websites, from books, from publishers, from all sorts of sources that probably should raise some sort of copyright infringement problem, or that contain proprietary information they're not supposed to have, or whatever.
And societally, we made kind of a big deal about this when we first started hearing about it.
And I feel like at this point, we all just ignore it.
It's just like, yeah, there's probably all kinds of stuff in that model, right?
And this is part of what leads me to, well, let me get back to what I was trying to say.
If there was some sort of regulation that stated that any large-scale LLM must publish, or show in some way in the public domain, all of the data it was trained on.
I think that would be a big step in a good direction.
And here's why, right?
So, obviously, when companies are making their models, that's a proprietary thing for them.
So, I'm not suggesting that we fully roll that back, right?
I'm not saying that they need to show how they have weighted their models or how they went about training, necessarily, but that they should produce the raw information that went into the models.
This would, A, let people know where some of this data is coming from, and B, give researchers the ability to go through it and try to find things like where biases are likely and how we could compensate for them.
And what maybe should be in this body of work that isn't. Getting that out into the public eye, I think, could do a lot of good.
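To make that a bit more concrete, here is a minimal sketch of what a disclosed training-data manifest could look like. This is purely illustrative and not something any provider actually publishes today; the record fields (source_id, kind, license, flagged) and the example entries are assumptions made up for the sake of the sketch.

```python
# Hypothetical shape for a published training-data manifest: just the raw
# sources that went into the model, with no weights or training code attached.
from dataclasses import dataclass

@dataclass
class SourceRecord:
    source_id: str   # stable identifier: a URL, ISBN, archive key, etc.
    kind: str        # e.g. "web_page", "book", "news_article", "forum_post"
    license: str     # e.g. "public_domain", "cc_by", "licensed", "unknown"
    flagged: bool    # True if reviewers believe the item shouldn't have been included

# A disclosed corpus would simply be a long list of these records,
# published alongside the model.
manifest: list[SourceRecord] = [
    SourceRecord("https://example.org/article-123", "web_page", "cc_by", False),
    SourceRecord("isbn:978-0-00-000000-0", "book", "licensed", False),
    SourceRecord("https://example.org/paywalled-essay", "web_page", "unknown", True),
]
```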
Now, I think one of the biggest problems with this is that it opens things up to nitpicking, where it's like, oh, this little article over here or whatever shouldn't have been included, and how do you agree on that and whatever.
But I think we have a model for this, and I think it's food, weirdly enough.
The idea being that, much like with food, I've always heard these things, maybe you have too, where per ton of grain or whatever, there is some amount of non-food material allowed in that mix.
Like, it could be bugs from the farm it was kept at or whatever.
You try to get as close to zero as possible, but you can never quite get there.
And so I think you could apply a similar threshold to some of these stacks of data, where it's like, okay, you can't use proprietary or copyright-infringing material.
However, as long as the full body of work has less than whatever percent of that, it's okay.
Or like it's an accepted margin of error, or something like that.
So, I guess my point here is really twofold.
One piece of this is, I think, one way to start wrapping our heads around how to deal with AI societally: the companies can keep how they trained their models.
That can be proprietary, but I think they should be forced to publish the data that went into them.
That is, the mounds of information they stuck into the model to produce it.
We're not going to tell you how we did it, but here are the component pieces.
That could really go a long way towards opening up research and fine-tuning some things and just getting understanding around what's in there.
And secondly, this raises, you know, again, the need for some sort of thresholding, because some percentage of the content in there probably doesn't belong there for one reason or another.
Maybe it has prejudices.
Maybe it's, you know, proprietary.
Maybe it's a copyrighted piece of material.
Maybe – I don't know.
Maybe there's things I'm not even thinking of.
I'm sure there are.
But some threshold where, like, you know, you can't have more than 2% of this stuff that you shouldn't have in here.
Like, 98% of your model information should be things that it's okay to have.
And that could be produced and we could look at it and say, yep, this model is built from good component pieces.
Component pieces that society is like, okay, we can deal with this.
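As a rough illustration of that kind of "bugs in food" threshold, here is a small sketch, assuming the manifest idea above: given flags marking which disclosed sources a reviewer considers problematic, check whether the flagged share stays under a 2% tolerance. The function name and data shape are hypothetical.

```python
# Sketch of the threshold check: is the share of flagged material in a
# disclosed corpus at or below an agreed tolerance (2% in the example)?
def within_tolerance(flags: list[bool], max_flagged_fraction: float = 0.02) -> bool:
    """Return True if the fraction of flagged items is at or below the tolerance."""
    if not flags:
        return True  # an empty disclosure trivially passes; a regulator might reject it instead
    return sum(flags) / len(flags) <= max_flagged_fraction

# Example: 1 flagged item out of 100 disclosed sources is 1%, under a 2% cap.
flags = [False] * 99 + [True]
print(within_tolerance(flags))  # True
```

Whether you would count documents, tokens, or bytes when computing that share is one design choice this sketch glosses over; it just counts items.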
And there are a lot of implementation details in there.
It was just a thought that I had as we both individually and societally try to wrestle with what all of this means.
And I think the piece that has made me most squeamish is that unlike many other things, we just have no insight into this.
And it's being controlled by a bunch of companies that are notoriously terrible at things like transparency and trust and, you know, trying to do anything other than just serve as agents of constant economic growth.
And I think that sometimes things come around that are just more important than pure economic growth.