A little-remarked-on risk of AI systems.

Consider that living organisms don't only adapt to their environment. They also adapt their environment to themselves.

The philosopher Bruno Latour observes that James Lovelock's and Lynn Margulis's "Gaia" theory adds something to Darwinism: organisms don't merely adapt to their environment, they also adapt their environment to themselves.

While Lovelock and Margulis showed this to hold for all life on Earth over extended periods of time, the observation applies with particular force to Homo sapiens, since we have amplified our capacity to act upon our environment through our tools and technology.

Is there not a question, therefore, about whether our broader environment will become adapted to be more suitable for so-called Artificial Intelligence systems?

Let us first be clearer about what we mean by "Artificial Intelligence" here. At present, most of the publicity around "AI" concerns systems termed "Large Language Models" (LLMs) and, more recently, what have been termed "Simulated Reasoning" (SR) models.

LLM systems are built around a statistical model known as a probabilistic language model, which generates text on the premise that the probability of a word appearing in a sequence depends on the words that have already appeared. Simplistically, these models are trained on a massive dataset of text, encode that data in their parameters, and on that basis calculate the probability of each possible next word or phrase given the words they have already been provided with.
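To make that idea concrete, here is a minimal sketch, purely my own toy illustration rather than how LLMs are actually built: a tiny bigram model in Python that estimates the probability of the next word from the word that came before it. Real LLMs use deep neural networks trained on enormous corpora, but the "predict the next word from what has come before" principle is the same.

```python
# Toy illustration only: a bigram model standing in for a vastly larger neural model.
from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Count how often each word follows each preceding word."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word_probabilities(counts, prev_word: str):
    """Turn raw counts into a probability distribution over possible next words."""
    following = counts.get(prev_word, Counter())
    total = sum(following.values())
    return {word: n / total for word, n in following.items()} if total else {}

corpus = "the cat sat on the mat the cat chased the ball"
model = train_bigram_model(corpus)
print(next_word_probabilities(model, "the"))
# {'cat': 0.5, 'mat': 0.25, 'ball': 0.25}
```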

These systems have shown themselves capable of powerful feats that mirror certain attributes of human intelligence.

Such systems are likely to change the business world, or at the very least to be a major source of productivity and profit-margin gains for those who use the technology competitively.

Not, mind you, because these systems are any more intelligent (in the typically understood sense of the word) than individual human beings, but because they are far more efficient, massively scalable, and hold other advantages over us.

In short, while LLMs or SR models no doubt supersede us in certain aspects of mental performance, we human beings are much more discerning and better able to adapt to unfamiliar situations.

Take the earlier-hyped AI application of driverless cars, where similar observations about the type of "intelligence" involved hold. Despite promises years ago that fully driverless cars were just around the corner, and the expenditure of tens of billions of dollars on attempts to produce a commercially viable vehicle, have those promises been delivered on?

Instead, we have seen very limited deployments of the technology, such as drivers being able to take their hands off the wheel on highways, and driverless trucks delivering goods along predictable routes.

The key word here is "predictable". For consider that one way to deliver on the promise of driverless cars would be to make our roads much more predictable.

In other words, ensure, for example, that there is never any possibility of a child darting out from behind a parked car chasing a football, a drunk pedestrian behaving erratically, or an Irish Traveller racing their "sulky" down the road.

However, ensuring the needed predictability entails a commensurate level of policing of one type or another, so that no such "unpredictable" events can occur.

While the argument can no doubt be made that on our public roadways that might be a good thing, the same type of systems are likely to become ubiquitous in a broader environment, encompassing not only business but public and social spheres of activity.

In the example of public roadways above, children, drunks, and Travellers racing sulkies embody a good deal of unpredictability.

So to reduce that, you might try to increase their subordination to the proper authorities.

Or you might increase the rules they are made to adhere to (though of course children, drunks, and Travellers tend to have a certain obliviousness to such rules, and as I shall argue, in the broader scheme that may perhaps be a positive).

Or you might ensure the troublesome elements are simply kept far away.

We may well be fine with that for now, but what happens when the yardstick, or the arena, for debarring unpredictable or surprising behaviour shifts?

Consider, beyond our public roadways, such realms as the broader social environment, our major social institutions, or our public spaces in general. Is it not the case that all of these benefit greatly from freedom of interaction, and indeed from the element of surprise, unpredictability, novelty, and even a certain amount of "chaos"?

For this is precisely how they learn to adapt, evolve, and survive.

To keep the point as clear as possible, let us focus solely on autonomous applications of AI: driverless cars, autonomous customer-service systems, or autonomous AI used within financial services to predict the best investment strategies, and so on.

In all of these applications, are not the systems involved easily thrown off by something unusual or surprising in their inputs, something a human being would likely discern correctly and respond to appropriately?

The current strategy is to have these systems closely supervised by a human being, who detects the cases that the algorithm or model fails to handle appropriately.

The human is in effect acting as a regulator of the system, keeping it within operational parameters that are, for now, prescribed by a human being. It should also be acknowledged that LLMs in particular rely on deep learning methods that are hard to interpret and control, making them susceptible to unpredictable and undesirable behaviour.
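To make the "human as regulator" arrangement concrete, here is a minimal sketch in Python, purely my own illustration with made-up names (ToyModel, CONFIDENCE_THRESHOLD, human_review) rather than any real product's interface: the automated decision is accepted only when the input looks familiar and the model is confident, and anything surprising is escalated to the human supervisor.

```python
# Toy human-in-the-loop "regulator" -- the names, threshold, and toy model
# below are my own assumptions, not any particular vendor's API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # below this, the human supervisor takes over

@dataclass
class Decision:
    action: str
    confidence: float
    decided_by: str

class ToyModel:
    """Stand-in for an autonomous system with a prescribed operating envelope."""
    def predict(self, observation: float) -> Decision:
        # Pretend confidence falls as the input drifts from familiar territory.
        confidence = max(0.0, 1.0 - abs(observation) / 10.0)
        return Decision("proceed", confidence, decided_by="model")

    def is_within_training_distribution(self, observation: float) -> bool:
        return -5.0 <= observation <= 5.0  # the prescribed operating envelope

def human_review(observation: float, proposed: Decision) -> Decision:
    # In reality this is a person; here we simply mark the decision as escalated.
    return Decision("stop and assess", proposed.confidence, decided_by="human")

def regulate(model: ToyModel, observation: float) -> Decision:
    """Accept the model's decision only when the input is familiar and confidence is high."""
    decision = model.predict(observation)
    if (not model.is_within_training_distribution(observation)
            or decision.confidence < CONFIDENCE_THRESHOLD):
        return human_review(observation, decision)
    return decision

model = ToyModel()
print(regulate(model, 0.5))   # familiar input -> the model decides
print(regulate(model, 8.0))   # surprising input -> escalated to the human
```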

Even setting that factor aside, there is a difficulty with the practical achievement of that regulation, which I hope to cover in a future post (or you could book an introductory coaching session to understand why).

If, then, human beings cannot in practice exercise sufficient oversight over these systems, how does one ensure a satisfactory return on the huge investments made in them? And how does one deliver on the promise of perfecting them while being forced to work within their practical limitations?

It is possible we may convince ourselves over time that we ought to make certain sacrifices in order to create a better environment for these types of systems. That would not be wise.
