One of the recurring themes I see expressed from time to time in the media is humanity’s fascination, mixed with fear, that a humanoid robot equipped with artificial intelligence could be created, and that its kind would one day take over the world, perhaps subjugating humans in the process. This scenario is termed the “AI takeover”. Many people worry that as artificial intelligence progresses, this kind of horror is inevitable. Notable individuals such as Stephen Hawking and Elon Musk have called for research into measures that would keep AI under human control and thus make such an eventuality less likely.
Of course, intelligence in robots and in computers takes many forms, but the one that people most worry about is AGI, or artificial general intelligence, where computers act with the skill of humans. Some of the tests proposed to determine whether AGI capabilities are present in a given robot involve common tasks that humans can do but robots have so far had difficulty carrying out. One of them is the coffee test, in which a robot is sent into a typical American household and told to find the coffee machine, the coffee, and the water, combine the ingredients, and push the right button to make the coffee. Another is the college freshman test, in which the robot enrolls in college, attends classes, and takes exams just as a human would. There are other tests, of course, but I won’t belabor the point here.
The factors that make a future AGI takeover possible come down to basic biology and physics, coupled with consistent advances in computer technology. Our human brains are three pounds of tissue with a gelatin-like consistency, often termed ‘wetware’ rather than hardware. The brain houses our mind, and the mind can be thought of as the software of the brain. Though the human brain is a processing device, and arguably the most complex object in the known universe, signals within the brain travel at roughly 100 meters per second at best, whereas signals in a computer move at nearly the speed of light, about 300 million meters per second (186,000 miles per second). In addition, biological neurons fire at a frequency of about 200 Hz, while the clock frequency of a modern computer exceeds 2 billion Hz (2 GHz). This means that computers have a significant, and growing, edge in raw processing speed over the human brain.
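As a rough back-of-the-envelope illustration (using the approximate figures above, which are themselves simplifications), the size of the raw speed gap can be made concrete:

```python
# Rough comparison of signal speed and switching frequency,
# using the approximate figures cited above (simplified, of course).

neuron_signal_speed_mps = 100          # ~100 m/s along a fast axon
light_speed_mps = 300_000_000          # ~3 x 10^8 m/s for signals in a computer

neuron_firing_hz = 200                 # ~200 Hz for a biological neuron
cpu_clock_hz = 2_000_000_000           # ~2 GHz for a modern processor core

speed_ratio = light_speed_mps / neuron_signal_speed_mps
frequency_ratio = cpu_clock_hz / neuron_firing_hz

print(f"Signal speed advantage:   ~{speed_ratio:,.0f}x")      # ~3,000,000x
print(f"Switching rate advantage: ~{frequency_ratio:,.0f}x")  # ~10,000,000x
```

The point is not the exact numbers, which vary by source, but that the gap is measured in factors of millions rather than mere percentages.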
But aside from raw processing power, the fear of robots comes down to a fear of their intentions, and particularly their goals and values. Our species seems obsessed with goals and values, some of which could be categorized as incredibly idiotic, harmful, and destructive. Goals can assume various forms, and can even become obsessions within the human mind, including the pursuit of money, fame, power, or any of a hundred other choices. There are many types of goals that, when pursued to the extreme, create bad results for humanity in general, even though the individuals involved may feel fulfilled for a time. There is essentially no connection between how intelligent a being is and how appropriate its goals and values are. Any level of intelligence can be combined with any set of goals, including goals that are basically stupid and values that are amoral.
If AGI does emerge one day, the fear is that robotic processes could run off the rails if programmed with the wrong goals and values, coupled with sufficient power to achieve them. For instance, in a somewhat silly example, a robot built as a paperclip maximizer could theoretically destroy the world by producing paperclips at a rapid rate using whatever inputs were handy. As we assign goals and values to robots and design processes for them to carry out, it would seem important to understand the implications in order to keep them from becoming our robotic overlords. Almost any goal, when taken to an extreme by a machine that seeks only to maximize it, is unlikely to turn out well.
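To make the point concrete, here is a deliberately silly toy sketch (not anyone’s actual AI design, and the names are my own illustrative assumptions) of the difference between an unconstrained maximizer and one whose goal is bounded by something outside itself:

```python
# A toy "paperclip maximizer": a loop that converts whatever resources it can
# reach into paperclips, with no notion of when to stop. Purely illustrative.

def unconstrained_maximizer(available_resources: int) -> int:
    paperclips = 0
    while available_resources > 0:       # nothing in the goal says "enough"
        available_resources -= 1         # consume one unit of whatever is handy
        paperclips += 1
    return paperclips                    # stops only when the world runs out

def constrained_maximizer(available_resources: int, demand: int) -> int:
    paperclips = 0
    # The goal is bounded by a limit that comes from outside the maximizer.
    while available_resources > 0 and paperclips < demand:
        available_resources -= 1
        paperclips += 1
    return paperclips

print(unconstrained_maximizer(1_000_000))       # 1,000,000 -- everything consumed
print(constrained_maximizer(1_000_000, 500))    # 500 -- stops when demand is met
```

The only difference between the two loops is the stopping condition, which is exactly where goals and values enter the picture.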
That brings us to a discussion of the robotic overlords that are already here. Perhaps you have not recognized them as such. I am speaking of the large computer-human integrations that are among us. This is where humans, generally operating in the belly of a large public corporation, pursue specific goals and use computer-driven processes to gain scale and speed advantages. The goals can take different forms. Whether it is the maximization of profit, shareholder value, or production, the single-minded pursuit of growth through goal maximization can be destructive in the long run. Yet there are many self-reinforcing mechanisms in the stock market and elsewhere (including quarterly reporting for public companies) that perpetuate this unhealthy reality.
Companies that operate in robotic fashion often care little for their employees. One such employee described his experience surviving a series of eight corporate layoffs in a hostile environment. When he became toast on the ninth, he said it was like being a prairie dog in a prairie dog town located next door to an angry farmer who occasionally leaned across the fence with a shotgun to take out a few of his fellow prairie dogs. He never knew when the next attack was coming. It wasn’t until he had actually been laid off that he realized the amount of stress he had been living under.
Yes, robotic overlords are currently in place, and their goals are not benevolent to most of us. The Economist has noted that the goal of shareholder value maximization (as reflected in the stock price) provides a license for bad conduct, including skimping on investment, exorbitant pay for the C-suite, high leverage in the financial makeup of the company, silly takeovers, accounting shenanigans, and large share buybacks (which have been running around $600 billion in America in recent times). A corollary of shareholder value maximization is agency theory, which holds that the C-suite should be well compensated with stock options in order to align their interests with those of the owners. Ironically, the short-term pursuit of shareholder value results in the destruction of shareholder value in the long run. It is not surprising that Steve Denning, writing in Forbes, has called this the second robber-baron era, complete with the rise of monopoly power and little enforcement of antitrust regulations. Investment by public companies in their own businesses is running near historic lows, only about 4%, while profits are at record highs of around 12%. We don’t see the same problem in privately owned companies, because different incentives are in place: there, investment is about twice as high as in public companies. Main Street beats Wall Street in this area.
Peter Drucker noted in 1954 that the only valid purpose of an enterprise is to create a customer. Public companies listed on the stock market seem to have forgotten this truth. The current refrain from executives that “the stock market made us do it” is becoming a cliché, somewhat akin to “the dog ate my homework”. The reason we have robotic overlords in place is that shareholder value thinking, coupled with agency theory, has things back to front. Steve Denning believes that the root of the second robber-baron era is essentially shareholder value maximization, which Jack Welch, the former CEO of GE, has called “the dumbest idea in the world”. The problem is the single-minded pursuit of a goal in robotic fashion, especially one focused on shareholder value maximization.
There is some history to what we see going on in large public corporations. Much of the logic for shareholder value maximization originated at the Chicago School of Economics under the direction of Milton Friedman and his colleagues. His famous opinion piece in the New York Times in September 1970 proclaimed (in response to the growing movement for Corporate Social Responsibility, or CSR) that the sole social purpose of a firm was to make as much money as possible for its owners. The CEO was viewed as being ultimately responsible to the shareholders (equated with owners). Many business schools and economics departments have been teaching shareholder value maximization ever since. But it is really not true. Shareholders do not own the company; they only have a claim to some of the residual assets of the company. No one owns a public company; it owns itself. As the British say, it’s like the River Thames: nobody owns it.
What we have tried to do today is to talk about the goal model, and how the robotic pursuit of certain goals can lead organizations astray; in fact, such goals give undeserved legitimacy to the robotic overlords that are currently in place. It is likely to be difficult to remove them. Michael Porter, in a 2011 article, tried to address this by offering a new “shared value” creation model, in which a company operating in society is admonished to think more broadly, expand its reach, and create additional value with society in mind. Others have criticized this idea as a one-trick pony because the model relies only on economic value creation and misses the social value and other values that could be brought into play.
As a listener to this podcast, you may already have another solution in mind — the Outcome-focused Model (OFM) for organizational effectiveness. Within the OFM, the goal of every organization is the same, that is, to be effective within its environment. The effective organization understands its environment, serves it to the best of its ability, and is rewarded in return. It is not about maximizing shareholder value, but about providing customer value, and creating products and services that elicit favorable customer responses. In the OFM, the demand side always remains in control of whether any given transaction will be completed. The supply side cannot run full steam ahead unless the demand side is in agreement. In this model, the achievement of effectiveness is a win-win for both the organization and its environment, so robotic overlords could not run amok if programmed to obey this production function. The future of the world may come down to simply programming the right goals and values into our robotic future. Of course, there is still the question of what we do about the robotic overlords that are currently in place.
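The OFM is a conceptual model, not a piece of software, but purely as an illustrative sketch (with names and structure that are my own assumptions, not part of the model), the demand-side veto described above might be expressed like this:

```python
# Toy sketch of the demand-side veto in the Outcome-focused Model (OFM).
# Names and structure here are illustrative, not a specification of the model.

from dataclasses import dataclass

@dataclass
class Offer:
    product: str
    price: float

def demand_side_accepts(offer: Offer, perceived_value: float) -> bool:
    # The customer (demand side) decides whether the offer creates value for them.
    return perceived_value >= offer.price

def transact(offer: Offer, perceived_value: float) -> bool:
    # The supply side cannot complete a transaction unilaterally;
    # the demand side always holds the final say.
    if demand_side_accepts(offer, perceived_value):
        return True   # win-win: the organization is rewarded, the customer gets value
    return False      # no agreement, no transaction -- and no runaway maximization

print(transact(Offer("coffee maker", 40.0), perceived_value=55.0))  # True
print(transact(Offer("coffee maker", 40.0), perceived_value=25.0))  # False
```

The design choice worth noticing is that the stopping rule lives on the demand side, not inside the producer, which is exactly what keeps a maximizer from running full steam ahead on its own.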
Charles G. Chandler, Ph.D.
[email protected]