Info-Ops 3: Introduction

(The following is a draft introduction to Info-Ops 3)

We suck at learning. It's slowly killing our society.

We're good learners individually. Over-clocked brains and high-energy diets combined with the extreme evolutionary pressures we survived ensure that. What we suck at is *knowing how we learn*. For something we do naturally, we're profoundly ignorant of how it happens. We can teach theoretical science. We can teach more practical things like tool-making. We've created large institutions, tens of thousands of them, whose primary mission is teaching. We have theories of teaching. We have theories of education. Quite frankly, we have too many of these theories, and many of them disagree with one another. But when it comes to how the brain actually works, we're still flailing around. And as far as a general theory of how sentience works in concert with learning, whether in aliens, corporations, governments, or artificial intelligence? We might as well be hanging from the trees on the savannah waiting for the next roadkill.

We need an intellectual model of how learning happens. That model needs to be consistent with our ideas about how knowledge is gained, how language evolves, and how society works. Most of all, we need something we can use right away in programming, organizations, social groups, and research. We need traction, something we can work with even if the mechanics of how it all happens remain a mystery.

This book provides that. Here's why we need it:

In late 2019, participants in online forums began talking about a new kind of virus coming out of an obscure province in China. At first, institutions in China denied anything was happening. Doctors and nurses were silenced. The streets were beginning to fill with the bodies of the dead. By mid-spring of 2020, however, it was apparent that the world finally had the pandemic it had been worrying about for decades: SARS-CoV-2, a new coronavirus related to the original SARS virus. Across the planet, organizations and governments switched into high gear in order to learn as fast as they could about the new virus. What could be done about it?

At least that's the way the press release reads. That's not really what happened, though. What really happened was that the organizations and individuals fighting the pandemic responded by doing their best to help within their overall mission. Prefabricated plans were implemented. If their mission, like the World Health Organization's, was encouraging countries to participate in U.N. health efforts, they made a big deal of praising countries for participating. After the praising was done, they continued working for world health by advising and helping meet the needs of the pandemic.

It is a mistake and a horrible lie to say that any of these people or organizations deliberately did anything to hurt anybody (with the exception of the usual charlatans, of course). Each one, however, had a mission and a message that came first and that framed how they would help.

If you were an expert in infectious diseases, you spread the word about how important disease prevention was: wash your hands, wear masks, socially distance, and so forth. If asked about some other area, say economics, you were happy to make public statements. After all, the more you appeared and spoke in public, the more the public would know about infectious diseases, a terribly important topic. When asked about something outside your area, you gave opinions while pointing to experts in those fields who said things that generally agreed with your mission of emphasizing how important disease protection was.

If you were a leader in government, say a governor or a local health department official, you made a point of appearing on TV as much as possible, looking and acting like a leader. After all, people needed a leader more than anything else right now. You were confident. You made bold decisions. If some of those decisions turned out to be less than optimal, you downplayed it. After all, when working with the unknown, mistakes were expected. Right now leadership was the primary thing. The retrospectives could come later, perhaps. When asked about something outside your area, you gave opinions while pointing to experts in those fields who said things that generally agreed with your mission of emphasizing how important good leadership was.

If you were in the press, your job was to get the word out from the experts in these narrow silos to the world at large. After all, sharing knowledge was critical to our overall response to the pandemic. You scoured the list of dozens of experts in various fields, sharing as much as you could that looked valuable and letting media consumers work out which things were more or less important than others. Your job was interacting with the public, and you watched those real-time Internet metrics for which things people wanted to read and which things they didn't. If asked to give an opinion about virtually any topic, you were happy to oblige while pointing to experts in those various fields who said things that got more people clicking, liking, sharing, commenting, and otherwise consuming your media. And, of course, agreeing with what you said. After all, getting information out was the most important thing.

Meanwhile, in the back, in the dark and dusty corners of the web, emergent learning was taking place. Unpopular forums on reddit sprang up, dedicated only to scientific publications and knowledgeable comments on them. One of them originally had a hundredth of the readership of the other, wide-open forum for sharing any kind of pandemic information. Doctors dealing with the crisis first-hand in Emergency Departments met on various Free Open Access Meducation (FOAM) boards, creating and sharing things like the MATH+ protocol. Many times these front-line professionals had to remain anonymous; their institutions forbade any sort of public interaction that might bring disfavor or be out of line with the larger organizational mission and message.

Watching this emergent learning, I was struck by several things. Depending on the ED you showed up at, your chances of survival could vary widely; much depended on whether the hospital was adapting and learning quickly or stuck in policy inertia. As one doctor put it, the thing they kept emphasizing over and over in medical school was to treat the patient, not the numbers, but the first thing you did when you started working in a hospital was treat the numbers. That's the way policies are written. Doctors shared pictures of patients with blood oxygen levels in the low 50s, a critical, dangerous, life-threatening number, who were wide awake, lucid, and texting friends on social media. Do these people get intubated? Should they? It all depends on the institution.

As it turns out, the pandemic we got wasn't the pandemic we had been preparing for. Standard procedures and policies set up years or decades ahead of time sometimes worked and sometimes didn't. Some institutions learned quickly, some learned slowly. Much learning happened in places outside the public spotlight. Any group of people over time convinces itself that it's able to learn and adapt quickly because of all the expertise it contains. In most cases, however, the larger the group and the longer it's been around, the worse the actual learning.

True learning requires a profound sense of ignorance and wonder. Groups of people over time become self-reassuring in their knowledge and lose those feelings. In fact, these two traits seem almost impossible for groups to keep up. Nothing illustrates this better than the U.S. military from 1940 to 1970.

In 1940, the United States appeared to be going to war. It wasn't going to be any kind of war it had ever fought before. Industrialization, mechanization, citizen soldiers, mobile warfare tactics, the role of airpower ... the list of completely new things went on and on. Nobody really had any idea what we were getting into, only that we had to somehow figure out how to win.

Our first large land engagement, in North Africa, began as a total disaster. U.S. troops, poorly led and poorly supported, broke and ran. Just after that battle, at Kasserine Pass, the overall theater commander landed and went to find the general who had been in charge of the engagement. He found him many miles from the action, dug into caves in the mountains. He had a complete staff and was well-protected, well-informed, and working as professionally as anybody could expect.

"You, sir, are relieved," he said, firing the man on the spot and naming a replacement.

This continued throughout the war. Dozens if not hundreds of leaders at all levels were immediately relieved of duty and replaced by others. General Patton may have said it best when he was confronted with a battalion commander who couldn't seem to figure out how to advance against enemy fire.

"You're fired!" he yelled, pointing at the man beside him, "Now you're in charge. And if you can't get us across that line I'll fire your ass and find some other son-of-a-bitch who can do it."

The lesson here is not about war, glory, heroes, or genius generals, although many times that's the way it's told. There's nothing macho here. The lesson is that the U.S. Military realized that nobody, including the military itself, had any idea at all how to do the work they were doing. How could they? These dozens or hundreds of people who were relieved didn't go home in disgrace and shame. In most cases, they were just given other jobs, perhaps ones they were better suited for. It was nobody's fault. Who knew? The organization was simply learning.

Thirty years later the Vietnam War was another story.

By the time of the Vietnam War, the U.S. Military had convinced itself that it knew all about how to fight and win wars. There were schools, a war college, live exercises, a professional officer corps. Leaders were instructed in detail using hundreds of years of examples and the best minds in the world. We knew exactly what we were doing.

Nobody got relieved.

The commanding general, Westmoreland, was called "old two up, one back." That's because the classic way of fighting was to send two units forward and hold the third one back, and he fought by the book. Everything had already been figured out, and he followed the plan. He used large-scale troop maneuvers, looking to find huge numbers of enemy troops and destroy them.

The enemy, however, chose not to fight the way the U.S. Military expected. Instead they used guerrilla warfare, terror, infiltration, spycraft, desultory warfare. They used every tactic that worked, and over time they found more and more tactics that did.

One of the saddest untold stories of the war, aside from it being fought at all, was the many times small groups of U.S. forces got something working only to be ignored by the organization itself. The Marines, on arriving, reviewed their history in the Philippines and announced that they should split into smaller groups and live and work alongside the civilian population. That was shot down. The Navy started using radical training methods for its pilots that had great results. That was ignored. Clear demonstrations of overwhelming direct force, like Operation Arc Light, met with progress in negotiations. That was scaled back and shut down. Preference was given to indirect, softer, hidden applications of force outside of South Vietnam, whether they worked well or not. It was in the book.

Mỹ Lai was the ultimate demonstration of this organizational corruption. It was a horrible incident, a war crime in which U.S. troops directly attacked a village, killing hundreds of innocent civilians. The carnage would have continued had a brave helicopter pilot not put his helicopter directly in the path of the attacking troops, preventing more horror. It was a terrible, terrible thing.

Sadly, terrible things happen in war; that's one of the hundreds of reasons war is so awful. So how did the organization learn from it so that it wouldn't happen again?

It did not learn. While there were many investigations and even some trials, in the end there was really not much done to the soldiers who committed the crimes; the one man convicted had his sentence cut short after the President intervened. What about their commanding officers? It was obvious to any reasonable observer that they had to know what was going on. There is plenty of evidence indicating they instigated it. Nothing was done there either. Not even an acknowledgment of a problem, a reprimand, or a letter in their service records.

As horrible as this is, even more horrible is the understanding that nobody here set out to do horrible things, and most of the people involved thought they were doing the right thing, for exactly the same reasons many organizations failed to learn quickly during the pandemic: they all felt they were working in support of a larger mission, and that larger mission came first. Just like the governors, health organizations, hospitals, experts, communicators, and the like, they realized that at times their mission might take them away from directly engaging with the problem, but they had faith that the larger organization would fix it over time.

Nobody was relieved during Vietnam, and almost nobody was held accountable for war crimes, because in order to publicly take these actions the Department of Defense would have been admitting that all of that schooling and certification structure was broken. The view everybody believed but nobody said out loud was that people might make mistakes, but the overall structure was self-reinforcing and could never be directly attacked and torn down, no matter what results it produced. When bad things happened, people had faith that somehow, somewhere else, the organization would adapt.

Groups of people over time become experts. Experts put themselves in narrow silos. Experts, in order to maintain their image of expertise, generally give up profound ignorance and open curiosity, the willingness to look stupid, for structures, traditions, certifications, and self-reinforcing assurances. Organizations of people naturally stop learning.

There is a natural desire to think that everything can be understood and taught, and that if it cannot be understood or taught, a rough-enough generalization exists to substitute for now. But some systems defy detailed understanding, and we're never sure which heuristics work under which circumstances. In the programming community, many of us are familiar with the concept of the "expert beginner": someone who plateaus early, then mistakes the plateau for mastery.

Suppose we take humans completely out of the equation. Surely machines don't have any of these problems with institutional inertia. The military did so well in WWII because it had one clear, measurable goal: win the war. It discounted everything that got in the way of that goal. The humans are what brought in all of the problems later on. So let's create clear, measurable goals and have machines learn on their own how to reach them.

Facebook did just that.

In the early days of Facebook, Mark Zuckerberg realized that, left to their own devices, his brilliant staff would drift into all sorts of activities that did not support the growth of the company, so he built a huge counter on the development floor. At any instant, everybody working at Facebook could tell how many people had signed up. Conference rooms had glass walls. Whether you were talking to somebody at their desk, in a meeting in a conference room, at the water cooler, or in the breakroom, you could see the thing you were supposed to be working towards. Things that made that number grow faster were done. Things that didn't were set aside.

At first the organization used humans to learn how to grow. They started small, on college campuses, learning from each one the things that would work and the things that wouldn't. As they learned more and more, they automated what they had learned. Poor organizations assign problems to people. Good organizations learn, codify, structure, and then automate. Facebook was a good organization.

But even then, learning and eventually automating was taking too long. The lessons became more and more about how to interact with each individual person rather than with users overall. So Facebook decided to automate the learning process itself. They had a clear goal: engagement and sign-ups. They had a system with a lot of data. They just needed to set an AI on top that would continually tweak each user's experience in order to learn how to accomplish those goals. It would learn at a scale and rate that was humanly impossible.
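
To make the shape of that loop concrete, here's a minimal sketch in Python: an epsilon-greedy bandit learning which feed variant maximizes engagement. Everything in it, the variant names, the engagement rates, the single metric, is a hypothetical illustration, not Facebook's actual system.

```python
import random

# Hypothetical feed "variants": content mixes the system can serve.
FEED_VARIANTS = ["calm", "friends", "news", "outrage"]
EPSILON = 0.1  # fraction of the time we explore a random variant

# Running engagement stats per variant: [total engagements, times shown]
stats = {v: [0.0, 0] for v in FEED_VARIANTS}

def pick_variant():
    """Mostly exploit the best-performing variant; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(FEED_VARIANTS)
    return max(FEED_VARIANTS, key=lambda v: stats[v][0] / max(stats[v][1], 1))

def record(variant, engaged):
    stats[variant][0] += engaged
    stats[variant][1] += 1

def simulated_user(variant):
    """Stand-in for a real user. Note the built-in bias: outrage engages most."""
    rates = {"calm": 0.02, "friends": 0.05, "news": 0.04, "outrage": 0.09}
    return 1 if random.random() < rates[variant] else 0

for _ in range(100_000):
    v = pick_variant()
    record(v, simulated_user(v))

# The loop faithfully converges on whichever variant engages most.
# Nothing in it ever asks whether that variant is good for anyone.
print(sorted(stats.items(), key=lambda kv: -kv[1][1]))
```

Run it and "outrage" wins the most impressions, not because anyone chose outrage, but because the goal was engagement and nothing else.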

They succeeded.

Here are the results:

“When you’re in the business of maximizing engagement, you’re not interested in truth. You’re not interested in harm, divisiveness, conspiracy. In fact, those are your friends,” says Hany Farid, a professor at the University of California, Berkeley who collaborates with Facebook to understand image- and video-based misinformation on the platform.

"...With new machine-learning models coming online daily, the company created a new system to track their impact and maximize user engagement...If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored. On Twitter, Gade explained that his engineers would get notifications every few days when metrics such as likes or comments were down. Then they’d decipher what had caused the problem and whether any models needed retraining...But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions..."

It's been confirmed over and over again: machine-learning models that maximize engagement increase polarization, radicalization, urban myths, and so on.
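
The deployment gate described in that quote is simple enough to sketch, and that simplicity is part of the problem. The names and thresholds below are made up for illustration; the real tooling is far more elaborate, but the asymmetry is the same: engagement is the only question asked.

```python
# Engagement rate of the current production model (hypothetical number).
BASELINE_ENGAGEMENT = 0.050
# Tolerated relative drop in engagement before a candidate model is discarded.
MAX_RELATIVE_DROP = 0.02

def should_deploy(candidate_engagement: float) -> bool:
    """The only question asked is "did engagement fall?", never
    "is the content true, civil, or safe?"."""
    change = (candidate_engagement - BASELINE_ENGAGEMENT) / BASELINE_ENGAGEMENT
    return change >= -MAX_RELATIVE_DROP

# A model that reduces misinformation but costs 5% engagement is discarded;
# one that amplifies outrage and gains 5% sails through.
print(should_deploy(0.0475))  # False: discarded
print(should_deploy(0.0525))  # True: deployed
```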

Mission accomplished. Now we hear that some engineers, believing the fix is just one AI jump away, are openly wondering whether Facebook should determine a user's state of mind as they interact with the product, then adapt. The end result seems already fixed: Facebook will determine what is truth and what isn't. The only piece missing is how to "pitch" that truth to various users in order to, you guessed it, maximize engagement. The system programs us.

If the machine were learning how to climb a hill, this would all work wonderfully. Instead, once people are introduced, no matter in what role, we get back to exactly the same spot we left, only now we've made the overall system so complex that it defies understanding, reasoning, and public discussion. At this point the only thing that's really tractable about Facebook is the thing they wanted to be tractable: engagement. They reduced discussions to one number, and the organization is self-referencing and solidifying around that one number. It's as if the military became so good at fighting with horses that it determined all future wars would only involve horses. We fixed it.

Is there a way out of this? What are the causes of all of these failures? How do we build organizations that learn? How do we scale startups into large organizations without losing our flexibility and agility? What does society at large require to enable learning? How can old, rigid institutions be refurbished? What guidelines should we have around our use of technology, and why? How can we make the primitive AI we currently have work better for us? What do ethical and unethical computer programs look like? How do we jump-start the next level of machine intelligence? What mistakes might we currently be making? What would a true Turing Test actually look like?

When we finally meet intelligent aliens, will they have already worked their way through these issues or are these problems uniquely human?

These problems are not unsolvable. Let's solve them. Let's learn how to learn at scale.