It’s Artificial, but is it Intelligent?

Artificial intelligence. A.I. Everywhere you look, articles discuss its impact, whether it's set to save, destroy, or leave the world essentially unchanged. These discussions fill inboxes, news websites, LinkedIn posts, and television news segments. For instance, the recent "60 Minutes" segment on A.I. could have been a superb advertisement for Google's A.I. initiatives.

Artificial intelligence tools are projected either to boost our productivity more than any technological leap since the wheel or to become the biggest time-suck since Steve Chen posted the first cat video.

When OpenAI released DALL-E 2, it amazed the public with its ability to generate images from text descriptions. Although some images were better than others (hands and feet were less successful and sometimes unsettling), the software's capacity to produce decent images from natural-language prompts seemed almost magical. When OpenAI’s ChatGPT was released to the public on November 30 last year, the people went apeshit.

A.I. has been among the business community’s “phrases du jour” for many “jours,” however. In business, particularly advertising, “A.I.” and “machine learning” are frequently used interchangeably. But “artificial intelligence” has a more advanced, magical connotation, so marketing mavens use it more often. It has replaced “blockchain” and “web3” as the buzzword signifying the new, the powerful, and the futuristic.

Since the term “artificial intelligence” was coined in 1955, the people closest to it have also called it other things: computational intelligence, synthetic intelligence, or computational rationality. The basic definition is “the study and design of intelligent agents,” where an intelligent agent is a system that perceives its environment and acts in ways that increase its chances of success (“What is Artificial Intelligence | IGI Global”). In current business, media, and marketing discourse, A.I. typically refers to fast machines that simulate intelligent outcomes.

Artificial intelligence is often categorized into three types (narrow, general, and strong A.I.) and four functional stages: reactive machines, limited memory, theory of mind, and self-awareness.

 

Genuine artificial intelligence would be a non-human intelligence with agency, one that can use standard objective input to arrive at subjective conclusions. For instance, it's a fact that Van Gogh was born in the south of the Netherlands, but claiming he is the greatest Post-Impressionist is subjective. Similarly, diagnosable pain is a fact, but the experience of pain is subjective.

 

Stripped to its underwear, a machine that gets the correct answer based on the proper amount of data is a puzzle-solving machine, like a calculator. Even then, it’s not solving a puzzle; it’s going through motions that only look that way, a set of electro-mechanical states that appear computational to an observer. Over time, these operations have improved to the point that they can simulate intelligence. But that doesn't mean the machines possess intelligence. For example, philosopher John Searle has argued that a computer winning a game isn't truly winning because winning means you know you’re playing a game; it requires consciousness and, in turn, understanding.

 

A pertinent question is whether it matters that a machine is conscious if its simulation of intelligence is convincing enough. The question is particularly relevant in modern robotics (see Sherry Turkle’s Alone Together on the human relationship with machines). If a simulation is good enough, isn’t that good enough? The question is a cousin of Arthur C. Clarke’s old dictum: “Any sufficiently advanced technology is indistinguishable from magic.”

 

There's no clear answer, and what the industry and society will accept is far from settled. What most companies call A.I. is actually machine learning, and the latest A.I. is really good machine learning. While we might dream of a benevolent, Sonny-like robot, superhuman in all ways but serving humanity, what we currently call A.I. is mostly a recommendation engine that suggests I might like silver-plated sugar tongs because I bought sugar cubes more than once on Amazon.
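To make the “recommendation engine” jab concrete, here is a minimal sketch of the co-occurrence counting that underlies many buy-this-get-that suggestions. The item names and purchase histories are invented purely for the example; real systems are vastly larger, but the mechanism is recognizably this:

```python
# Toy item-to-item recommender: count which items appear together in
# the same purchase histories, then suggest the items most often
# bought alongside yours. All data here is made up for illustration.
from collections import Counter
from itertools import combinations

def build_cooccurrence(histories):
    """Count how often each pair of items shares a purchase history."""
    pairs = Counter()
    for items in histories:
        for a, b in combinations(sorted(set(items)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(item, pairs, top_n=2):
    """Rank other items by how often they co-occur with `item`."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

histories = [
    ["sugar cubes", "sugar tongs", "tea"],
    ["sugar cubes", "sugar tongs"],
    ["sugar cubes", "coffee"],
]
print(recommend("sugar cubes", build_cooccurrence(histories)))
# "sugar tongs" ranks first: it co-occurs with sugar cubes twice
```

No understanding of sugar, tongs, or tea is involved; the machine is tallying coincidences, which is the point of the paragraph above.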

 

The programs labeled A.I. that have sped to market are advancing quickly beyond if-you-liked-this-you-might-like-that advice. But we shouldn’t be too irrational in our exuberance over what these tools appear to do. Given millions of labeled images of dogs, an artificial intelligence system should be capable of identifying an unlabeled canine when it encounters one. The process seems simple, yet it sometimes yields only a rough caricature of human abilities: there is a hilarious example of image models that can tell dog breeds apart yet struggle to differentiate chihuahuas from blueberry muffins.
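The labeled-examples idea in the paragraph above can be sketched in a few lines. This is a toy 1-nearest-neighbor classifier, not how production image models actually work; the two-number “features” and the training set are invented for illustration:

```python
# Toy supervised classification: label a new example with the label of
# its nearest labeled neighbor. Real image models learn features from
# millions of pixels; here each "image" is a hand-made pair of numbers,
# (ear_pointiness, texture_bumpiness), invented for this sketch.
import math

# (features, label) pairs standing in for a labeled training set.
TRAINING = [
    ((0.9, 0.2), "chihuahua"),
    ((0.8, 0.3), "chihuahua"),
    ((0.2, 0.9), "blueberry muffin"),
    ((0.3, 0.8), "blueberry muffin"),
]

def classify(features):
    """Return the label of the training example closest to `features`."""
    nearest = min(TRAINING, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

print(classify((0.85, 0.25)))  # chihuahua
print(classify((0.55, 0.55)))  # near the boundary: easy to get wrong
```

The second call lands almost equidistant from both classes, which is the muffin problem in miniature: the machine has no concept of dog or pastry, only distances between numbers.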

 

This isn’t to say we shouldn’t be amazed; it is to say we shouldn’t become worshipful. In an increasing number of fields, machine learning can discern patterns too intricate for the human eye to detect. But that’s akin to saying a microscope can detect objects too small for the eye to perceive. Steve Jobs once called the computer a “bicycle for the mind.” I think that’s an apt description and a healthy attitude to have, particularly in these early days of A.I. It puts the tools in perspective and sets us up to think more about how to program them rather than be programmed by them.
