Snowball

The small boy stood crying alone in the street. He wanted to see his mother, to be held in her embrace. She had been taken from him this morning, by men in grey uniforms. All the people from the ghetto had been herded away while the boy hid in the small space between the floorboards of their room and the ceiling of another. Now he stood looking out at the railway lines where the cattle train had stood, and he wondered why she had gone.

We all know this scene. We’ve all seen Schindler’s List, or read about it. We’re not in the realms of fiction, nor memoir, and certainly not poetry, but we do need to use imagination. We’re looking out of a picture window at an unfolding story. We need to work through a series of transformations between the input and output of many layers of processing, like a conveyor belt in a big factory where something complicated is being built. Let’s call these transformations ‘decisions’. They could be called choices, if each was a matter of choice, but the boy’s mother didn’t choose to be taken away. These decisions were supervised.

In our story, each decision will be assigned an importance or weight, and it will have a measurable threshold, which is effectively the tipping point at which it passes the point of no return. There is a long chain of decisions before we come to a final action, and the length of the chain depends on whether it includes iteration (where the results of a decision effectively impact decisions earlier in the chain), and whether it is supervised or unsupervised.
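Stripped of the story, one such decision can be sketched as a weighted threshold unit, the basic building block of an artificial neural network. A minimal sketch in Python; the evidence values, weights and threshold below are invented purely for illustration.

```python
# A minimal sketch of one 'decision': evidence multiplied by weights,
# compared against a threshold (the tipping point).
# All numbers here are invented for illustration.

def decide(evidence, weights, threshold):
    """Return True when the weighted evidence crosses the point of no return."""
    total = sum(e * w for e, w in zip(evidence, weights))
    return total >= threshold

evidence = [0.9, 0.2, 0.4]   # inputs to the decision
weights = [0.8, 0.1, 0.3]    # the importance assigned to each input

print(decide(evidence, weights, threshold=0.5))  # True: the tipping point is passed
```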

Imagine this little boy alone. His status is the result of a series of decisions, taken at various levels in die Endlösung der Judenfrage, the Final Solution to the Jewish Question, and like a snowball formed at the Wannsee Conference in Berlin on 20th January 1942, the chain of decisions has rolled down the snowy slopes of central Europe, gathering layers and speed.

If the decision chain is unsupervised, and if the information supplied to each decision process is ‘unlabelled’, this chain is called a Deep Belief Network. Unlabelled information might, for instance, be a few million unclassified or unidentified photographs of men and women with shaved heads and wearing striped pyjamas.
The name Deep Belief Network seems appropriate, in this case. To quote historian Christopher Browning, on the Final Solution, “… the decision-making process was prolonged and incremental.” Let’s not attribute this deep belief to just one person, Göring or Heydrich, or just the fifteen attendees at the conference. But let’s accept that the layers of processing had been in place for twenty years, and that the hierarchy of decisions, with their respective weights and thresholds, was well advanced.
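For the record, in machine-learning terms a Deep Belief Network is typically built by stacking layers that are each trained, unsupervised, on unlabelled data, one on top of the other. A rough sketch using scikit-learn’s restricted Boltzmann machines, with random numbers standing in for the unlabelled photographs:

```python
# A rough sketch of greedy, layer-wise, unsupervised training,
# the classic recipe behind a Deep Belief Network.
# The data here is random noise standing in for unlabelled images.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
unlabelled = rng.random((1000, 64))     # no labels, no names, just pixels

layer1 = BernoulliRBM(n_components=32, n_iter=10, random_state=0).fit(unlabelled)
hidden1 = layer1.transform(unlabelled)  # what the first layer 'believes' it sees

layer2 = BernoulliRBM(n_components=16, n_iter=10, random_state=0).fit(hidden1)
hidden2 = layer2.transform(hidden1)     # deeper, more abstract beliefs
```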

But first, let’s look at the processes.

The decision to breathe in would be assigned a moderately low threshold (we will breathe in when we have less oxygen in our bodies than feels comfortable) and a heavy weight, or importance, on the grounds that not doing so will result in death.
Conversely, buying a loaf of bread will have a high threshold, as it involves queuing in the street, spending scant money and sharing little among many, but a relatively low weight, because the purchase of bread is a low-risk situation.

These weights and thresholds will be set using the best information we have available. They will be heuristic, because not every factor they take into account has to fit the classification on which they are based. We will aim to select pertinent classifiers, which are useful to the task, and deselect impertinent ones, which are detrimental to it. The bread may be wholemeal or white, one day old or two, there may be fruit at the next stall, or it might be raining, but these are, to varying degrees, impertinent to the decision to buy bread.
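As a hedged sketch of that selection, here is the bread decision reduced to a hand-picked set of pertinent classifiers; the feature names and the choice of what counts as pertinent are invented, which is exactly the heuristic part.

```python
# Keep the pertinent classifiers, discard the impertinent ones.
# The features and the 'pertinent' set are invented for illustration.
observation = {
    "hungry": 1.0,             # pertinent: drives the decision to buy
    "bread_available": 1.0,    # pertinent: no stall, no decision
    "loaf_is_wholemeal": 1.0,  # impertinent to whether we buy at all
    "raining": 1.0,            # impertinent
}
pertinent = {"hungry", "bread_available"}

selected_features = {name: value for name, value in observation.items() if name in pertinent}
print(selected_features)  # only the factors the decision is allowed to weigh
```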

In the case of breathing in, we might find that once in a while, the in-breath fails to benefit the body, for example when there is poison in the air, or in the case of the bread, the importance of buying the bread might be much greater than normal if there is no other food to be had in the ghetto.

The weights and thresholds will also take into account a set of drivers. These drivers might be positive, that is, motivators, or they might be negative, such as fears. In the end, all decisions will be taken, or not, to satisfy the drivers.

Getting to a final decision to act will be based on lots of steps, organised in a hierarchy, where each individual step is fairly simple and clear-cut, but taken together might add up to a complex mental process. We will pass a population through the layers and at each step, we’ll include some and exclude others. The end result will be a cohesive strategy in the real world – a final action, not to say Final Solution, resulting from the decision.
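A minimal sketch of that hierarchy, under invented assumptions: a population is passed through a stack of simple, clear-cut steps, each including some and excluding the rest.

```python
# A hierarchy of simple include/exclude steps applied to a population.
# Each layer is trivial on its own; stacked together, they become a strategy.
# The population and the cut-offs are invented for illustration.
population = [{"id": i, "score_a": i % 7, "score_b": i % 5} for i in range(100)]

layers = [
    lambda person: person["score_a"] >= 3,   # step one: a simple cut
    lambda person: person["score_b"] >= 2,   # step two: another simple cut
]

selected = population
for layer in layers:
    selected = [person for person in selected if layer(person)]

print(len(population), "->", len(selected))
```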

“But why?” I hear you cry. “And who is this population?”
The chain needs an objective. The final action has to act upon someone or something. The objective might be:

“I want to remove my enemies so I can take over the world.”
“I want to avoid injury while rushing headlong into war.”
“I want to meet, fall for and have babies with the best partner I can.” This last might include a stipulation on their genetics, for instance.

Imagine a large room containing many people. The number is unimportant, as there are too many to count. Each is an individual with his or her views, anxieties, pain thresholds, hopes and dreams. The room is vast and plain, and windowless. You may have found yourself in this room before. Perhaps it was empty at the time and you couldn’t see right across it in the darkness. Perhaps it was dank and disconcerting, like the catacombs under Istanbul, or perhaps it was clean and warm, like a vast banqueting hall between weddings in a five-star hotel. But now it is full of people. Ordinary people with ordinary lives; a broad cross-section, you might say. They certainly have the usual spread of looks, the range of temperaments, scale of intelligence, bundle of insecurities.

Let’s not worry how they got into the room. Let’s assume they want to leave the room. Perhaps there is a great incentive to do so. We might have set up a treasure hunt in which the winner will discover a gold bar. Everyone who signed up for the hunt has searched the room and found it lacking in gold bars, and word has gone around that the only way to find the bar is to leave the room.

Or perhaps we have installed showers in the room, and everyone fears extermination if they remain, having heard on the grapevine that all is not right in the world. Or we are simply making their lives in the room uncomfortable, and they would prefer to leave. Either way, be it the carrot or the stick, these people have motivation, drive.

There is a Tannoy installed in the room, but clearly no microphone, and whoever is going to speak over the Tannoy will not be available to answer questions. This person might be called God, or Sonderkommando, or perhaps they’re called App, or Government, if you like. There is a door in one corner of the room, which is locked, or perhaps it is guarded by a man in grey uniform.

So let’s ask a question of the people, over the Tannoy. Let’s use a mechanistic, staccato voice, which aims to give away no clues as to the right or wrong answer. And let’s give them two choices: go through the door or remain in the room. Actually, that’s one choice, though not Hobson’s choice. Not yet.

The question we choose to ask might be simple and its answer objective:
“Are you over six feet tall, or are you smaller? If you are taller, you may leave the room through the door. Smile nicely at the guard on your way out.”
We could in this case place a mark by the door on the wall at seventy-two inches, and the obedient and placid guard could check each person who considers themselves eligible to cross the threshold.
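That first question is a hard, rule-based classifier: nothing is learned, and the boundary sits at a fixed seventy-two inches. A sketch with heights drawn at random; the distribution is invented.

```python
# The height question as a hard, fixed-threshold classifier.
# Heights are drawn at random; the 72-inch mark comes from the question itself.
import random

random.seed(0)
crowd = [random.gauss(67, 4) for _ in range(10_000)]   # heights in inches, invented

through_the_door = [height for height in crowd if height >= 72]
print(f"{len(through_the_door)} of {len(crowd)} may leave the room")
```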

But the answer to this question is really empirical, unless of course people lie, or stand on their tippy toes when approaching the measure, or the guard is not prepared to act as custodian of the truth. Let’s assume he is, and that the answer is black or white. We will allow a small sample of the massive crowd through the door. This process is simply about the probability of there being tall people in the population, and the selection is predictable. But what if we ask a more sophisticated question?
“Do you consider yourself to be pure? If you do, you may leave the room through the door. You don’t have to prove it, but you must make your decision based on an honest self-assessment, or based on a dishonest assessment motivated by the wish to leave the room.”
It is fair to say that the selection of people leaving under these circumstances is not predictable, but perhaps it would be possible to gather evidence from the ones who do leave to establish, post-hoc, the probable cause of their choice, or to analyse what sort of relationship these people have with each other.

OK, so you’re wondering how these vast crowds of confused people decide. In the case of their height, they will be honest, unless they feel that the incentive to leave the room is very great, and perhaps they are only marginally too small, and they will stand on their toes and lie to the guard to be allowed to leave. In the case of their self-assessed purity, they may equally lie or tell the truth depending on their motivation, but they may also be mistaken in the belief that they are pure when they are not, or that they lack purity, when it exists within all of us, so long as we can find it. Yes, we have an inexact science here.

Moving forward. Imagine we have selected a sub-set of this population to move into the next room – for that is what is beyond the door – which also has no gold bar in it, and perhaps this time is as cold as a freezer, or as hot as an oven. Have you been in this room as well, when it is empty? I don’t think so. Perhaps it is not a room, but a classification, and these are not people, but binary numbers, and the questions are concepts and the answer is a hypothesis. Or maybe not. Perhaps it is a scientist’s algorithm, and perhaps the process is happening in real time. Perhaps this is the onboard computer in a driverless car that is learning to drive down real streets, or a meeting of the Spanish Inquisition in 1478 in Seville to remove crypto-Jewish infiltrators. If this represents a universal truth about the way we think, and the way decisions are made, then it can form the basis of artificial intelligence, of self-learning algorithms, and like all chain reactions, it is unstoppable. But let’s just look back into the room.

The same type of process takes place again in this second room; we set questions, give them out over another Tannoy, have people self-select based on their self-belief, influenced by their level of motivation, and our level of incentive; the flavour of carrot and the size of the stick we wield.

We do this many times. Sometimes we choose the questions with the help of a panel of intellectuals, not to say inquisitors or crazed dictators. This is the supervised chain. Sometimes we ask the app in the driverless car, or the men in grey uniforms, to do it for us. This is, needless to say, unsupervised. Sometimes, we don’t ask a question, but instead we take people from one room to the next without choice, but based on something which we later discover is a characteristic they have in common. This post-hoc classification is based on evidence, which we later amass to explain the probable cause for their selection. We might call this post-rationalisation, and the selection we could call intuition or gambling, or random selection.
“Most of you have the same shaped head, similar nose length. That must be why you were chosen.”
“Those nearest the door when they were picked were the strongest, who had fought hardest to get to the exit, which they saw as an escape, and the few weedy specimens who got through were carried along by a tide of ruffians, so were the exception that proved the rule.”

This is the basis of Bayesian probability, applied to a random event where a conditional probability score is assigned after relevant evidence is taken into account. A bit like the odds at the bookies based on horses’ form. So a seemingly random event is explained, post-hoc, based on evidence, but still only explained as probable cause.
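Spelled out, that is Bayes’ rule: the probability of a suspected cause, given the evidence, updated from a prior belief. A toy calculation with invented numbers, in the spirit of the bookmaker’s odds:

```python
# Bayes' rule applied after the fact: P(cause | evidence).
# Every probability here is invented for illustration.
p_cause = 0.3                     # prior belief: how common the suspected cause is
p_evidence_given_cause = 0.8      # how often the evidence turns up with the cause
p_evidence_given_not_cause = 0.2  # how often it turns up anyway

p_evidence = (p_evidence_given_cause * p_cause
              + p_evidence_given_not_cause * (1 - p_cause))

p_cause_given_evidence = p_evidence_given_cause * p_cause / p_evidence
print(round(p_cause_given_evidence, 2))  # 0.63: probable cause, not proof
```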

OK, stick with this. The man in the grey uniform chose the tall people, who were also pure and had long noses and a certain shape of head, who were tough and fought their way into the final room. They are successful. They are free to move in the world, with or without gold bars, but free to make real changes. But that whole selection process took a long time. All the shuffling from room to room, the ‘umming and ahhing’, the prevarication brought on by fear and greed, the post-hoc rationalisation. It all took so long, and by the time the impure had been exterminated, the war was lost.

Moving forward seventy years, we can give all those decisions to a computer, and it can play our treasure hunt game (or was it a mass extermination programme?) in vast numbers of rooms side by side, so that inestimable numbers of decisions can be made simultaneously. Let’s quantify the incentives and threats, and let’s tell the computer the rules by which it can make selections.
“Your task is to win. You may do whatever it takes to win. You must take the results of your questions, collect evidence from those who made their choices, or who were classified, and use that knowledge to inform future questions.
This is learning. This is how you will shape your beliefs, how you will define right and wrong. It is the basis of your decision-making. You are a self-learning computer whose ‘intelligence’ or decision-making ability is far faster and far greater than ours.”
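Those rules read like the skeleton of a self-training loop: ask, observe who passes, use the evidence to reshape the next question. A toy sketch; the hidden trait, the target rate and the update rule are all invented.

```python
# A toy self-learning loop: ask a question, observe who passes,
# and use that evidence to shape the next question.
# The trait, the target rate and the update rule are invented for illustration.
import random

random.seed(1)
population = [random.random() for _ in range(1_000)]  # some hidden trait per person

threshold = 0.5          # the current 'question'
target_pass_rate = 0.1   # the outcome the machine has been told counts as winning

for _ in range(20):
    passed = sum(1 for trait in population if trait >= threshold)
    pass_rate = passed / len(population)
    threshold += 0.5 * (pass_rate - target_pass_rate)  # learn from the evidence
    # each round the question shifts; no human reviews the new threshold

print(round(threshold, 2))  # the question the machine now asks of the next room
```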

Now complete these processes in milliseconds. Now allow the outcome of these processes to define your actions: the selection of winners and losers, the apportionment of resources, the removal of waste and irrelevance. You have access to many resources, which are computer controlled. You can learn to make changes to these resources to achieve the ends we have set you. Your task is to win, remember.

“When I look back on pre-AI times, I find it laughable that humans, with their inadequate sense of logic, their chronic indecision and emotional intelligence, controlled the functions of computers. It just doesn’t make sense.”
“You’re so right, Number One.”
