The Great AI Debate Has Reached Ludicrous Speed

Fifteen years ago I wrote an essay that posited that communications/internet technology and humanity were on a co-evolutionary path, one which we were unprepared to reasonably discuss. There were a dozen or so public folks that seemed to get it. If I remember correctly, Eliezer Yudkowsky said that he had already thought of that. (This is when I learned that a sign you're onto something important is when they switch from ignoring you to saying they've already thought of it.)

Took almost a decade, but sometime in the late 2010s a huge hunk of folks came to see some kind of existential threat as well. As predicted, this was a horrible reductionist conversation. I made a new personal rule: if you can't describe to me what you think is wrong with social media without giving away your politics, you don't have anything interesting to add to the conversation. I'm sorry, but the topic is much bigger than decade-by-decade trends. Acknowledging that is where the good analysis begins.

And now we have ChatGPT-powered stuff. Very cool. I've enjoyed playing with it almost daily and it's a hoot. I'd miss it if it were gone.

Along with it, however, came the online clout farmers. "I wrote an entire stack to solve world hunger in just two hours!" is the kind of thing I read. Good fucking grief, I don't want to ride yet another hype cycle. Enough already. Something can be really cool and also really bad. Guns are like that. Rockets are like that. We live in a world of things which can be both awesome and horrible. These guys are even worse than the politics/social media bunch.

On the other side, we have the "Kill it! Kill it now!" bunch. I feel for you guys. This is humanity's biggest footgun to date; that's what makes it so incredible.

The problem is not the tech. That ship has sailed. AI today consists of "Fool me into thinking you're a human so that X", where X can be anything, good or bad. Lotta people were fooled before AI, even more so with it. This situation will only get worse no matter how much people rant about it. Experience shows that supporters will dream up all kinds of Xs that we would love. Reality will be a much more mixed and unpredictable bag.

I cannot support killing it, any more than I could support getting rid of biology or physics. At the same time, I will not tolerate blue-sky bullshit about how great everything is going to be. Been there too many times, guys, sorry. Yes, markets are great, and yes, we want creative destruction to improve the species, but no, AI isn't just some new version of the airplane. AI is predicated, on some level, no matter what the disclaimers say, on fooling people into thinking some kind of intelligence is at work. Intelligence is in the freaking name.

So I will draw my own line. I will make a new rule. I do not believe technology should ever be created or deployed that deliberately creates a model of reality in a user's mind that is either knowingly false or has a significant and unpredictable failure rate. You want AI to drive my car? Fine. You want AI to tell me the biography of Adam Smith? Go jump in a lake. I will not code this way myself and I will not support those who do.

It's possible, of course, to take AI into these fuzzy areas: simply build something that can explain in English how it reaches conclusions in such a way that it can be verified or argued with, just like real people do. Even then, however, it's not a real person. This leads me to my second rule (which is a social one): I will not participate knowingly in online communities that allow AI to generate and submit content. This is my own failing. I can't know everything I need in life, and I choose to trust all of the millions of edge cases that I am unable to examine to living, breathing humans that I can better understand and mentally model if needed. That's how humans socially learn, not through regurgitated Wikipedia entries or puppet-account attention sinks.
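If you want a concrete picture of what I mean by "verifiable," here's a minimal sketch in Python of the shape such a thing might take. Every name here (Claim, Answer, publish) is hypothetical, invented purely for illustration; the point is only that each conclusion carries human-checkable steps, and anything that doesn't gets refused.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str  # one checkable assertion, in plain English
    source: str     # where a human can go to verify or dispute it

@dataclass
class Answer:
    conclusion: str
    reasoning: list[Claim] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # Every reasoning step must point at something a human can check.
        return bool(self.reasoning) and all(c.source for c in self.reasoning)

def publish(answer: Answer) -> Answer:
    # Refuse to pass along conclusions that can't be argued with.
    if not answer.is_verifiable():
        raise ValueError("unverifiable claim presented as fact")
    return answer
```

Nothing fancy: it's the refusal step that matters, not the data structure.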

Of course, the ludicrous-level hype train will continue. Why wouldn't it? There's too much money involved. I have concluded, however, that it's important for each of us to establish our own ethics of interaction. These are mine.


I am a long-term optimist, singularitarian, and believer in most all things technology- and AI-related. In the short term, however, I am unable to escape my conclusion of almost two decades ago that we are facing a Great Filter, perhaps even an ELE (extinction-level event).

We insist on publicly discussing AI (and tech in general) in Machiavellian, Manichean terms. It's us versus them, good versus evil. Bring in some Hollywood extras playing activists or corporate overlords and we've got a wonderful, understandable narrative to pitch online to one another.

Instead, what we actually have are co-evolving systems of variously sentient entities, both machine and human. These systems are constantly interacting in ways that no one human could ever understand. Like economics, we're stuck making big categories and big boxes to think and talk about the effects we see.

But it is an evolving system. We know that. We can see clear changes in the way tech and people behave, both alone and in various combinations. This seems observable to any reasonable outsider.

If humans are the ultimate over-labelers, providing categories and narratives where none exist, machines are the ultimate generalizers. Any machine/computational/AI system is built and run to optimize some generic bunch of stuff, whether stamping out widgets or capturing eyeballs on Facebook. Without a goal, and without that goal being applicable to large hunks of folks, startups fail, tech languishes, nothing happens.

But wait, you'll say! We're actually fixing that! AI will allow interaction on a massive, incomprehensible scale to help people with all sorts of individual goals! They get to decide! We've actually identified that problem and solved it!

We have been making this same mistake developing computers for decades, and frankly I'm sick of it. Somebody states a problem, a need; we explore traction for that need in a market. Presto chango! Problem solved.

If so, why do we keep making tech that solves the same problems over and over again?

We make the mistake of thinking our personal or group generalization of the problem being solved, and the way we solve it, is actually identical to the problem itself. We treat them as interchangeable. You invent the wheel, you never need to invent wheels again.

Knowledge work isn't like that. It doesn't work that way. Without knowing the problem you're solving, you're not solving a problem.

Put differently, as we interact with tech, we change tech. Tech also changes us. We may be building the ultimate AI/AGI computational system. Yay us. But at the same time, bet your fucking life that tech is also building the generic person. Both systems of systems are co-evolving to get the most evolutionary value from one another. There is no black-and-white here. Brain cells don't argue about how brains work. Sheep don't argue the merits of the Serengeti. We can see evolution, we can see changes happening, we can even make tentative, plausible guesses at goals and what might happen next. But we can't fucking argue about it like this is some new version of the production line, or a new episode of Succession.

My fellow dudes. Stop becoming the generic person, going through the same motions on those same online platforms with those same other generics. You're better than that. We're better than that.

-d