Personality Models, Management-By Statistic, And Better AI

In which we get out the magic wand and begin our specification of shared language-modifying abductive reasoning, aka "learning". This will drive architecture and code later.


Ever wonder what kind of personality you have? People might describe you as outgoing or quiet, but aren't there much better ways of describing people's personalities? Something more scientific?

God yes. The real question is how scientific do you want it? Depending on whom you talk to, we can do this the easy way or we can do it the hard way – I can give you a color that's your personality, or I can give you a twelve-page report. There are dozens of scientific personality models. They do not agree with one another.

What color is your parachute? Does it match your personality color?

It's as if one branch of science called the sky blue and another branch called it yellow. It really doesn't matter what word you use to describe the color of the sky, but at some point the words and models are so far apart that any conversation between the silo'ed disciplines becomes impossible. For whatever reason, the more detailed, wordy, complicated, and esoteric the model, the more people are willing to accept it as authoritative, i.e. scientific.

One of the most popular models used today is the Myers-Briggs Type Indicator (MBTI). It was invented by Katharine Cook Briggs and her daughter Isabel Briggs Myers, based loosely on the typological model of Swiss psychiatrist Carl Jung. Mom published her first typology work in 1926; daughter later turned it into the indicator itself. Although certificates don't necessarily equate to expertise, mom was home-schooled and had a degree in agriculture. Daughter had a degree in political science.

Why is this used so much today? There are people who are famous for being famous: they gain momentum by being perceived as famous, so they get even more famous as new audiences and publications cover what they're doing. They're not renowned for any particular thing. They just became a little famous and were able to parlay that into being more and more famous. Likewise, MBTI seems to be authoritative because it was authoritative, i.e. people started using it, and as others saw those people using it, they decided to use it too. At some point, enough people (including some very smart and influential people) considered it authoritative that it became unquestionably authoritative.

This is not holding up, however. A new crop of intellects has now arrived with their own models. (Wiki lists some of the criticisms of MBTI: 'No evidence for dichotomies', 'No evidence for "dynamic" type stack', 'Lack of objectivity', 'Reliability', 'Utility', and so on. It's not pretty.)

MBTI

I was onboarding a new programmer once and told him that we had settled on a standard for X. I'll never forget his reply "That's great! I love standards! Everybody's got one."

Do we have a scientific model for personalities? Hell yeah. Everybody's got one. Humans have no trouble at all coming up with models and detailed structures from any amount or quality of data. We do this at an expert level without realizing it.

Management By Statistic

You might be thinking, "There has to be a better way of doing this" and there is, but first some definitions.

An ontology is a graph of how the meanings of things are related to one another. Animals have legs. Dogs are animals. Dogs have legs. This is a tiny ontology shared by most people (but not all; presumably owners of legless dogs would object to it).

A typology (many times called a taxonomy) is a system for sorting things into types. If it has fur and nipples, it's a mammal. If it has floppy ears and barks, it's a dog. Typologies are not concerned with how things relate; they're just concerned with labeling things into various buckets.

Traits are the "squishy" version of this, where yes and no are replaced with a volume knob. Are you tall? Yes, but not as tall as that other person. Typologies and ontologies stick you into buckets and explain how the buckets are related. Traits get rid of the buckets altogether: things become more like points in n-dimensional space. But then we're stuck trying to map them back to types and ontologies, 'cause that's how we roll as humans.
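To make the three flavors concrete, here's a minimal sketch in Python. The dog/mammal/tallness details are just the toy examples from above, nothing more.

```python
# Ontology: a tiny graph of how meanings relate ("is-a", "has").
ontology = {
    ("dog", "is_a"): "animal",
    ("animal", "has"): "legs",
}

# Typology: hard rules that drop each thing into exactly one bucket.
def classify(thing: dict) -> str:
    if thing.get("fur") and thing.get("nipples"):
        return "mammal"
    return "unknown"

# Traits: no buckets at all, just a point in n-dimensional space.
traits = {"tallness": 0.7, "outgoingness": 0.3}      # volume knobs, 0.0 to 1.0

print(ontology[("dog", "is_a")])                     # animal
print(classify({"fur": True, "nipples": True}))      # mammal
print(traits["tallness"])                            # 0.7
```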

Good programmers and system architects have to know how to do all of this.

MBTI is a personality types model. It sticks you into one of sixteen buckets. The personality color model is a traits model. In the end, they all claim to be trait-based. It's a great defense against being broken, since once you switch to traits the model becomes much more difficult to reason about.

The problem with all of these systems is that they are top-down, management-by-statistics systems.

Let's suppose you decided that there were three kinds of programming teams: ones with tall people, ones with short people, and ones with a mix of heights. Then, having hired hundreds of programmers, you start measuring your teams and output to see how you can improve things for your programmers.

Turns out, short teams work much faster than tall teams. Mixed teams complain more but are more innovative. Tall teams tend to get the best reviews from customers.

All of this is bullshit, of course. It's just statistical noise. You could take another group at another location, measure again, get different results. The problem is that by picking your ontologies and typologies ahead of time, you'll never get anything useful done. You've killed your analysis before it's even started due to all of this simplistic bullshit baggage you brought into it.
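Here's a hedged little simulation of that trap. The "productivity" numbers are pure noise with no connection to height at all, yet every run crowns a different winning team type.

```python
import random
from statistics import mean

def fake_team_score(n: int = 10) -> float:
    # "Productivity" has nothing to do with height; it's pure noise.
    return mean(random.gauss(100, 15) for _ in range(n))

for trial in range(3):
    averages = {kind: fake_team_score() for kind in ("tall", "short", "mixed")}
    winner = max(averages, key=averages.get)
    rounded = {k: round(v, 1) for k, v in averages.items()}
    print(f"trial {trial}: {winner} teams 'work faster' -> {rounded}")
# Different runs, different 'winners'. The typology created the finding.
```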

If everything you see is black or white, you'll never be able to paint in greys or understand the world in color. Your typologies and ontologies, whatever they are, are always hacks and they always make you stupider. All models are broken, some are just more useful than others.

Factor Analysis and HEXACO

Perhaps the best solution to personality modeling we have so far

Instead of more top-down expert bullshittery, how about we use something called the lexical hypothesis? Here's the basic concept: people talk about personality types using words. Sometimes these words are written down. Now, with computers, we can go through all of the words ever written and come up with statistical clusters of adjectives. We'll give these clusters of adjectives names in order to more easily talk about them, but the names are just props we're using on top of the underlying data, which we had nothing to do with.

This is bottom-up typology. We find a bunch of data, we group it, then we as humans try to figure out what name best matches it. Whereas before we started with the names and tried to make the data fit, here we're starting with the data and just hacking some names in there.

This bottom-up reasoning by statistics is generally known as factor analysis. In general it's much more difficult to use but gives us answers that actually work.

We may not understand how they work. It doesn't give us an ontology, and it doesn't even handle the squishy nature of real-world data. You find the clusters, you set a cutoff point for data to be included or not included in the clusters, you get groups that you can roughly name. How these groups relate to one another is left to the Great Pumpkin. Nobody knows. (This is where real science begins, not with expertise, certificates, or things being authoritative because they're authoritative. You have data, you have implied grouping, and that grouping shows correlation. Time to begin abduction.)

This gets complicated for the average reader quickly. Here's a general example of finding groups in multi-dimensional data.
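As one way to picture it, here's a rough sketch using scikit-learn's FactorAnalysis on a handful of invented self-ratings. Real lexical studies use thousands of adjectives and people; the adjective list and numbers here are made up purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Rows are people, columns are 0-10 self-ratings on adjectives (all invented).
adjectives = ["talkative", "outgoing", "quiet", "organized", "tidy", "careless"]
ratings = np.array([
    [9, 8, 1, 7, 8, 2],
    [2, 3, 9, 8, 9, 1],
    [8, 9, 2, 2, 3, 8],
    [1, 2, 8, 3, 2, 9],
    [7, 7, 3, 9, 9, 1],
    [3, 2, 8, 1, 2, 9],
])

# Let the data suggest a small number of underlying factors...
fa = FactorAnalysis(n_components=2, random_state=0).fit(ratings)

# ...then we humans hack names onto whichever adjectives load together.
for i, loadings in enumerate(fa.components_):
    top = [adjectives[j] for j in np.argsort(-np.abs(loadings))[:3]]
    print(f"factor {i}: {top}")
```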

Ngrams

Lucky for us, as it turns out we're already doing quite a bit of this today using all of the data on the net. We're just not taking it to the next step.

For a long time we've been looking at probabilistic sequences of words and topics. This work is done in various ways and goes by different names: Markov Chains, Statistical Text Analysis, Eigenvalues, Eigenvectors, etc. We'll use the cuter term "ngram" for this type of modeling. It generally answers the question: given these two or three words, what's the most likely next one? You can see it working every time you start typing into the Google search box. Isn't it neat how most of the time it knows what you're trying to do? That's one of thousands of applications of this tech. (What do you call a programmer, a coder, an engineer? Google n-grams has the answer.)

Ngrams are statistical factor analysis applied to words. We just have "words people use and how they're commonly related in text". There's nothing about meaning or reasoning, nothing about typologies or ontologies. Just stats and words. (But very cool)
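A minimal sketch of the core trick, with a made-up toy corpus: count which word follows each pair of words, then predict by frequency. Nothing here knows what a "restaurant" is.

```python
from collections import defaultdict, Counter

# Toy corpus, invented for illustration.
corpus = ("where is a good restaurant nearby "
          "where is a good coffee shop nearby "
          "where is a good restaurant downtown "
          "where is a cheap restaurant nearby").split()

follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1            # (word1, word2) -> counts of the next word

def predict(a: str, b: str) -> str:
    # No meaning, no ontology, no typology -- just "what usually comes next".
    return follows[(a, b)].most_common(1)[0][0]

print(predict("a", "good"))            # restaurant (2 of 3 times in the toy corpus)
```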

What does this have to do with programming?

The problem here is that we programmers have created yet another tool of nuclear-bomb proportions without realizing it and then foisted it upon the world.

Let's say a user wants to go to a good restaurant tonight. They hold up their phone, say something like "Hey Google, where's a good restaurant nearby?" and Google tells them.

The premise here is that there is a general sequence of words, "good", "restaurant", and "nearby" that Google can then statistically map against reviews for restaurants and give an answer. The premise is that there is AN answer. You have a question. We have an answer.

This is an example of OO versus rules-based coding, only applied to knowledge. People are hard-wired to think platonically, in overly-broad abstract generalities. It's necessary for living, but it makes learning impossible. Just ask your average know-it-all.

As a young OO programmer, I'll never forget my first foray into writing a rules-based program. I had no idea that was what I was doing; I was simply doing analysis and architecting a system based on what the users told me. The problem was that they kept telling me things that either looked inconsistent or involved details and configurations we hadn't covered before. I could delve deeper, and I did, but it seemed like the more I dug, the more stuff I got, and I still wasn't getting closer to the answer.

How complicated does this machine have to be, anyway?

I finally realized that my problem was trying to stick the entire problem into my head, my model. I couldn't know how to spec the system because nobody could know how to spec the system. We didn't need a system that was done, we needed a system that could evolve over time. Users didn't enter data and compute results. The data was already there. Users had to come up with rules, typologies and ontologies. Then the results would give them a better understanding to make better rules.

I was thinking of the problem backwards, from the top down. The users actually had to work from the bottom up, and my insistence on creating typologies and ontologies was hurting them, not helping them. (We see this same thing all the time in Enterprise Project Management software. People buy the software and then assume that whatever structure the software has tells them how to manage projects. Nothing could be further from the truth.)
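If it helps, here's a hypothetical sketch of the shape that system eventually took: rules live as data the users keep revising, the program just applies whatever the current rules are, and the output exists to provoke better rules next week. All the names are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    when: Callable[[dict], bool]     # predicate over one record
    tag: str                         # label applied when the predicate matches

def apply_rules(rules: list[Rule], records: list[dict]) -> dict:
    tally: dict[str, int] = {}
    for record in records:
        for rule in rules:
            if rule.when(record):
                tally[rule.tag] = tally.get(rule.tag, 0) + 1
    return tally    # users study this, rewrite the rules, and run it again

# Version 1 of the users' rules -- they will change next week, and that's the point.
rules = [Rule("big order", lambda r: r["total"] > 1000, "needs review")]
print(apply_rules(rules, [{"total": 1500}, {"total": 200}]))   # {'needs review': 1}
```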

There are a lot of people using software today who think of the software as the expert instructing them in their work, when in reality the software was only supposed to assist them in doing their work.

We have a lot of software that's authoritative because it's authoritative, much like all of those people were famous for being famous.

Now with Ngrams, it's like we've taken the tragic story of the MBTI and automated it. We've distributed a customized version to almost every human. Everybody has their own little expert telling them the answers to whatever questions they might have. Google knows all, sees all, tells all.

Time to get out my magic wand!

Better AI

(The rest of this essay is freeform speculation regarding a resolution to this problem. Included are two concepts that will be explained in future essays. The goal is to begin outlining what architectures and code are needed to move forward in an ethical way.)

It occurs to me that like my experience with my first rules-based code, we're designing AI backwards. Let's play a word game where we riff on both versions of how to do this and see if there's anything tractable.

Stoplight mode

Most AI consumes generic, universal, population-level ngrams to provide an immediate authoritative answer (with unstated premises) and directions to make this result happen. The paradigm is: given any question, program the people as quickly as possible to make the question go away. People are computers. They're like cars and stoplights; the simpler the directions we provide, such as stop and go, the more traction our software will have. Their question is never challenged. The language used for the solution is implicit, static, and in the background.

As a portable AI, the premise is that I know what you want because you told me and words rule. I don't care if there's anything else to consider because that might annoy you. Here is the answer to your question.

Later, I will always ask you a yes/no or Likert follow-up in order to update my database.

In the background, where you can't see, I'm keeping a lot of multi-dimensional data that these yes/no answers feed into. I do this in order to service new questions. I never forget anything. I don't make associations that don't exist in the source data. My meta models are created and tested in order to lead to better models. I have clear quality measurements (engagement, results from follow-up questions, and so forth).
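Reading my own speculation back as code, "stoplight mode" is roughly the following loop. The lookup table, the follow-up, and the silent background store are invented placeholders for illustration, not any real product's internals.

```python
# Pretend population-level stats boiled down to one lookup table.
population_answers = {"good restaurant nearby": "Luigi's"}
background_store = []     # multi-dimensional data the user never sees

def stoplight(question: str) -> str:
    # One authoritative answer, premises unstated, question never challenged.
    return population_answers.get(question, "no result")

def follow_up(question: str, answer: str, was_helpful: bool) -> None:
    # A yes/no afterthought that quietly feeds the background model. Nothing is forgotten.
    background_store.append((question, answer, was_helpful))

answer = stoplight("good restaurant nearby")
print(answer)                                        # Luigi's
follow_up("good restaurant nearby", answer, True)
```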

Conversation mode

A better AI creates customized, super-local ngrams (where "local" is itself described by ngrams we don't yet know) to suggest better-stated pivot questions with tentative follow-up ideas. The paradigm is: given any suggested question, create some newly-formed shared ngrams that lead to a better-described question with a suggested answer. The question is never fully known. The language used is explicit, dynamic, and in the foreground.

As a portable AI, the premise is that I don't know what you want, because words fail and I can't see inside your head. If we were talking about this better-stated question, here's an answer that might work, along with the tentative new ngrams that lead me to think so. Here's what the question might be, the associations that lead to it, and what most people say given these premises.

I might ask you a follow-up in order to test our new shared ngrams, depending on what else we're talking about.

I keep as simple a set of rules as possible to generate new questions and suggest new shared ngrams, which I may or may not forget about over time. The more emotion involved and the more often these questions and ngrams recur, the more likely I am to remember them. Dreams and my imagination help me randomly hook up new ngrams to test at some later time (or forget about, depending on the strength of the underlying ngrams to begin with). My meta models are created on-the-fly and change as needed. Over time, shared ngrams and questions can lead me to state a meta model based on personal experience, but the flexibility of language means the model is used more as a prop to rationalize past behavior than to plan future behavior.

We're not trying to read common ngrams, we're trying to create custom shared ngrams.
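And "conversation mode", sketched just as loosely: shared ngrams grown with one person, strengthened by repetition and emotion, decayed and recombined while "sleeping", and used to offer a pivot question instead of handing down an answer. Every name and number below is invented to illustrate the paradigm, not to specify it.

```python
import random

shared_ngrams: dict[str, float] = {}    # phrases built up with this one person only

def hear(phrase: str, emotion: float = 0.1) -> None:
    # Repetition and emotion both strengthen a shared ngram.
    shared_ngrams[phrase] = shared_ngrams.get(phrase, 0.0) + 1.0 + emotion

def sleep() -> None:
    # Decay everything, forget the weak, and dream up one random association to test later.
    for phrase in list(shared_ngrams):
        shared_ngrams[phrase] *= 0.8
        if shared_ngrams[phrase] < 0.3:
            del shared_ngrams[phrase]
    if len(shared_ngrams) >= 2:
        a, b = random.sample(sorted(shared_ngrams), 2)
        shared_ngrams[f"{a} / {b}"] = 0.4

def pivot_question(question: str) -> str:
    # Offer a better-stated question grounded in our strongest shared context.
    context = max(shared_ngrams, key=shared_ngrams.get, default=question)
    return f"When you ask '{question}', is it connected to '{context}'?"

hear("good restaurant", emotion=0.5)
hear("date night", emotion=0.9)
sleep()
print(pivot_question("where should we eat"))
```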

Peirce's Triangle. What we're discussing here could probably best be categorized as shared language-modifying abductive reasoning, or just "learning"

Next

We want, no, we need conversations, not answers.

Better AI talks to us to create better questions that we both understand alongside tentative answers. This allows both the user and the AI to learn. Different AIs and different human-AI relationships would have different personalities! (As they should). Over time, the owner and their AI would get to know one another and the relationship would continue to develop to improve both the people and the AI.

This needs a lot of diagramming, especially how the chains of concepts lead to better language and pivot questions. GANs have a role in continuous testing of this ongoing process. I'm temporarily calling the process by which new custom ngrams are created "Progressive Ontological Factor Analysis", but the foundation also has to be set, and that's the purpose of this essay.


Notes:

  • I've had to do a ton of hand-waving to go through so many topics inside of one blog essay. For instance, n-grams and eigenvectors are almost completely different creatures, although the general concept is the same. Typology, taxonomy, ontology, and epistemology get kicked around and beat up so much that it's necessary to put some rhetorical stakes in the ground just to move forward. I've tried to provide plenty of links for folks who would like to research further or take issue with any of my characterizations.
  • The main picture is of the Enneagram, a popular personality (and spirituality) model
  • There are legions of folks who spend their lives and careers in personality models and their implications. It's far too much to cover here, and I'm just using the general topic as a springboard. If this interests you, Wiki is a great jumping off point.
  • Pivot Questions and Progressive Ontological Factor Analysis are the terms that I will clarify in future essays once this foundation is in place.
  • A reader pointed out that I missed one of the more notable personality models among technical folks, the AD&D character alignment matrix.
