Slavery is bad. Murder is bad. Thievery is bad. This essay is not about right or wrong; it's about how we all think about right and wrong.
A THOUGHT EXPERIMENT
At 58, I'm learning to be a fiction writer. That means I like thinking about all the edge cases of what's right or wrong. In my mind, that's where all the good drama happens: get a sympathetic character, put them in a pickle, turn the screws, then find the moral edge cases. It's why good long-form written fiction is unlike any other form of media (and I love most all other forms).
So please bear with me.
Slavery is bad. 99.99% of the time it's bad. But maybe? Maybe one time in ten thousand it's good. Wouldn't that be interesting? In ancient Rome, it wasn't uncommon to take slaves from among the leaders of the peoples you conquered, take them back home, raise and educate them in the ways of Rome: reading, writing, and so forth, then let them earn their way back to their homeland. Now their homeland knows more about Rome, Romans know more about these strange peoples, and over time the world becomes a better place, arguably in a way that couldn't happen otherwise.
Robbing people is bad. 99.9% of the time it's bad. But maybe? Maybe one in a thousand times, perhaps when there's a fire or flood and you're starving or need medicine, it's okay. I think so.
Strip searching people wanting to fly in a plane or ride in a boat or train is bad. 99% of the time it's bad. But maybe? Maybe one in a hundred times it's okay if you're afraid and have very solid evidence that those folks might be in danger.
Police beating people is bad. 98% of the time it's bad. But maybe? Maybe 2% of the time you've got a person who will not be constrained and violence is the only option. I think so.
Stealing secrets from people is bad. 95% of the time it's bad. But if you're an intelligence service and stealing secrets might save millions of lives? I'm okay with that.
Murder is bad. 90% of the time it's bad. But if you're in a war, or somebody breaks into your house with a weapon, or you decide that you don't want to live with the pain of a terminal illness any longer? I believe there are several good reasons for why it might happen.
The percentages are just made up, but you get my point. There's always an edge case. A place where your thinking is challenged.
So, looking back over the last few thousand years, do we constantly screw these things up? We didn't just do a little slavery. We ended up enslaving tens of millions and fighting a brutal war in the States to end it. We didn't just make exceptions after 9-11 for the possibility of terrorists; we set up a system where every traveler is treated like a criminal. We don't just use violence in policing as a last resort. Instead it happens far too often, in my mind. We don't just steal secrets to save lives; we classify every freaking thing we touch and try to scoop up all information on everybody everywhere. And so on.
It's like we learn lessons, then forget them again. Over and over again.
Thesis: Once we start creating economic systems that systematize murky moral issues, the money ends up driving out the ambiguity.
The founders knew chattel slavery was bad (mostly), but at the time it wasn't big money. They felt reasonable, good people would eventually figure it out. They didn't, because it became big money. Other governments where slavery wasn't big money didn't have this problem.
We were concerned about terrorism before 9-11. But it was one of many things we were concerned about, and terrorism had to fight with other concerns for law enforcement money. Afterwards, though? We created a huge agency called TSA with tens of thousands of employees. Now there's big money involved. Now we'll do anything to keep the machine running and keep convincing people of how scared we should be and how valuable TSA is. Good luck ever dialing that back.
I am in favor of ending your own life if you choose to do so. I am not, however, in favor of any kind of big money system to help people do this. That's insane. You never want the money trade misaligned with the greater good. You start paying one bunch of people to kill another bunch, and you'll end up with a system of killing people for money. That's not ending your own life.
Most of the time, in our economy the money trade aligns with the moral value trade. I buy an apple from you for a buck. You're happy, you've got a buck. I'm happy, I've got an apple. You might go get more apples and I might get to eat them! It's a wonderful thing at any scale.
But these iffy situations do not scale like our apple example. What if I paid you money to shoot anybody robbing my house? Let's say I pay you a million dollars for every robber of my house that you kill. It might work a few times, but pretty soon there'd be a lot more dead "robbers" than we've ever had before. The money system is not aligned with values.
And by "values", I mean the ability to scale. You can't have a world economy with chattel slavery (even though it still exists). It doesn't scale. You can't have this massive panopticon security state. It doesn't scale. While I might give you some money to help me end my life, you can't have a society where lives are priced on the open market in such a way. It doesn't scale. Life and society simply can't go on like that. It's not that you can't get away with it here or there for a while; it's that it constrains the growth of every other human activity. If you institutionalize it, things don't blow up. They fizzle out.
I'd love to make a moral argument. This is wrong and this other thing is right! But that kind of thing is what got us screwing this up in the first place. Morals are endlessly flexible. The guy at the police station who beats everybody he arrests is doing so because morals, guaranteed. That line of argument won't work. Instead, ask yourself whether incentivizing these things, turning them into a mechanistic economic system, would allow humanity to scale. Note that it's a negative argument: does it prevent scaling? It's not whether you can imagine some system that helps us out. That kind of imaginative reasoning has the same problem: it's endlessly flexible.
When we make systems out of ambiguous situations, we destroy our ability to reasonably discuss these things. Complex topics boil down to "right" or "wrong."
That makes us all infants.
That can't scale. Stop it.
Obligatory AI Reference
This has interesting implications for large language models, which promise to take billions of complex online conversations and upchuck some answer that seems reasonable and useful.
90% of the time, it's useful. 10% of the time, however, it's total bullshit. In fact, it's worse than bullshit: it's plausible and seemingly well-sourced information that came from some AI hallucination. If you're asking it a question that's important, don't you want to go prove it one way or another yourself? So why the AI? (answer: give us a starting place to search) But if you're asking it something you think is not important, how the hell do you know it's not important? Because the question seems easy and the answer seems plausible? Because nobody ever told you about the edge cases?
When we create economic systems that make lots of money off ambiguity that needs addressing, the ambiguity always goes away.
We find, in the rules laid down by the greatest English Judges, who have been the brightest of mankind; We are to look upon it as more beneficial, that many guilty persons should escape unpunished, than one innocent person should suffer. The reason is, because it’s of more importance to community, that innocence should be protected, than it is, that guilt should be punished; for guilt and crimes are so frequent in the world, that all of them cannot be punished; and many times they happen in such a manner, that it is not of much consequence to the public, whether they are punished or not. But when innocence itself, is brought to the bar and condemned, especially to die, the subject will exclaim, it is immaterial to me, whether I behave well or ill; for virtue itself, is no security. And if such a sentiment as this, should take place in the mind of the subject, there would be an end to all security what so ever.
(John Adams, arguing for the defense in the Boston Massacre trial)
Related: TikTok Brain and Attention Spans