What would the father of cybernetics think about artificial intelligence today?
A review of Norbert Wiener's seminal 1950 book, The Human Use of Human Beings.

The Human Use of Human Beings, the popular sequel to Norbert Wiener's Cybernetics: Or Control and Communication in the Animal and the Machine (1948), examines the interplay between humans and machines in a world of ever-growing computational power. It is a remarkably prescient book, and one that gets a great deal wrong. Written at the height of the Cold War, it warns against the dangers of totalitarian organizations and societies, and against the danger that a democracy, in fighting totalitarianism with totalitarian weapons, could become totalitarian itself.
Wiener's cybernetics is the detailed scientific study of control through feedback. Immersed in problems of control, Wiener saw the world as a complex, interlocking set of feedback loops in which sensors, signals, and actuators (such as motors) interact through intricate exchanges of signals and information. The engineering applications of cybernetics had enormous impact, producing rockets, robots, automated assembly lines, and a host of precision-engineering techniques - in other words, laying the foundations of contemporary industrial society.
Wiener had greater ambitions for cybernetic concepts, however. In The Human Use of Human Beings, he elaborated his ideas and applied them to a wide variety of subjects, including Maxwell's demon, human language, the brain, insect metabolism, legal systems, the role of technological innovation in government, and religion. These broader applications of cybernetics almost entirely failed. From the late 1940s to the early 1960s, cybernetics was hyped - somewhat like the hype around computer and communications technology that ended with the bursting of the dot-com bubble in 2000-2001 - and while it yielded satellite and telephone-exchange systems, it produced little of use for social organization or society as a whole.
Nearly 70 years later, however, The Human Use of Human Beings has more to teach us than when it was first published. Perhaps the most striking feature of the book is that it covers a large number of topics in human-machine interaction that remain entirely relevant. The book has a somber tone, making several predictions of catastrophes to come in the second half of the 20th century, many of which are nearly identical to today's predictions for the second half of the 21st.
For example, writing in 1950, Wiener foresaw a near future in which humanity would cede control of society to a controlling artificial intelligence, which would then wreak havoc on humanity. Wiener predicted that the automation of manufacturing would dramatically increase productivity and put many workers out of work - a chain of events that did unfold over the following decades. And he warned that unless society could find productive employment for these displaced workers, revolt would ensue.
But Wiener did not foresee key technological developments. Like almost all technologists of the 1950s, he failed to predict the computer revolution. He believed that the price of computers would eventually drop from hundreds of thousands of dollars (in 1950s terms) to tens of thousands; neither he nor his colleagues foresaw the explosive increase in computing power that the transistor and the integrated circuit would bring. Finally, because of his emphasis on control, Wiener could not foresee a technological world in which innovation and self-organization were bottom-up rather than top-down.
Focused on the evils of totalitarianism - in politics, science, and religion - Wiener took an extremely pessimistic view of the world. His book warns that unless we reform soon, disaster awaits. More than half a century after its publication, today's world of humans and machines is far more complex, richer, and more politically, socially, and scientifically inclusive than he could have imagined. Yet warnings of what could happen if we get things wrong - for example, the takeover of the entire internet by a global totalitarian regime - are as relevant and urgent today as they were in 1950.
What Did Wiener Get Right
Wiener's most famous mathematical work focused on problems of signal analysis and the effects of noise. During World War II, he developed techniques for aiming anti-aircraft fire by building models that predicted an aircraft's future trajectory from an extrapolation of its past behavior. In Cybernetics and The Human Use of Human Beings, Wiener pointed out that past behavior includes the quirks and habits of the human pilot; a mechanical device could therefore predict human behavior. Just as Alan Turing's Turing test proposed that a computer could answer questions indistinguishably from a human, Wiener was fascinated by the idea of capturing human behavior in mathematical description. In the 1940s he applied his knowledge of control and feedback loops to neuromuscular feedback in biological systems, and he was responsible for bringing Warren McCulloch and Walter Pitts to the MIT faculty, where they did their seminal work on artificial neural networks.
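Wiener's wartime prediction scheme can be caricatured in a few lines of code: fit a constant-velocity model to a target's recently observed track and extrapolate it forward by the shell's flight time. This is only an illustrative sketch - the function name and numbers are invented here, and Wiener's actual predictor treated the statistics of noise far more carefully - but it conveys the core idea of predicting future behavior from past behavior.

```python
import numpy as np

def predict_position(times, positions, lead_time):
    """Predict a target's future position by fitting a straight line
    (constant velocity) to its observed track and extrapolating."""
    slope, intercept = np.polyfit(times, positions, 1)  # least-squares line
    return slope * (times[-1] + lead_time) + intercept

# A target moving at a steady 3 units/sec, observed for five seconds:
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
x = 3.0 * t + 10.0
print(predict_position(t, x, 2.0))  # extrapolate 2 seconds ahead
```

For a real, erratically maneuvering pilot the fit would be noisy, which is exactly why Wiener's statistical treatment of the problem mattered.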
Wiener's central point is that the world should be understood in terms of information. Complex systems such as organisms, brains, and human societies are composed of interlocking feedback loops in which the exchange of signals between subsystems gives rise to complex but stable behavior. When those feedback loops break down, the system becomes unstable. He constructed a convincing picture of how complex biological systems work, a picture that is widely accepted today.
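The stabilizing role of feedback, and the instability that follows when a loop breaks down or overcorrects, can be seen in a toy simulation (a minimal sketch with invented parameters, not a model of any system Wiener studied): a sensor measures the error between the current state and a setpoint, and an actuator corrects by some fraction - the gain - of that error.

```python
def simulate(gain, steps=50, setpoint=1.0):
    """Proportional negative feedback: at each step a sensor reads the
    state and an actuator nudges it toward the setpoint by `gain`
    times the error."""
    state = 0.0
    for _ in range(steps):
        error = setpoint - state
        state += gain * error
    return state

print(simulate(gain=0.5))  # intact loop: settles at the setpoint
print(simulate(gain=0.0))  # severed loop: the state never moves
print(simulate(gain=2.5))  # overcorrecting loop: wild divergence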
Wiener's view of information as the central quantity governing the behavior of complex systems was remarkable in its time. Today, when cars and refrigerators are crammed with microprocessors and much of human society revolves around computers and cell phones connected by the Internet, the centrality of information, computing, and communication seems unremarkable. In Wiener's day, however, the first digital computers were only just emerging, and the internet was not even a gleam in technologists' eyes.
Wiener did more than design complex systems; he viewed every complex system as organized around signals, feedback, and computation, a powerful conception that greatly advanced the development of complex engineered systems. For example, methods he and others developed for guiding missiles were later used to build the Saturn V lunar rocket, one of the greatest engineering achievements of the 20th century. In particular, Wiener's application of cybernetic concepts to the brain and to computerized perception is a direct precursor of today's neural-network-based deep learning and of artificial intelligence itself. But current developments in these fields have diverged from his vision, and their future development will shape the human use of both humans and machines.
What Did Wiener Get Wrong
It was in extending cybernetic concepts to human beings that Wiener's ideas fell short. Setting aside his musings on language, law, and human society for a moment, consider a more modest but potentially useful innovation he thought was on the horizon in 1950. Prosthetics, Wiener noted, would be far more effective if the wearer could communicate with the prosthesis directly through neural signals, receiving pressure and position information from it and using that feedback to guide its subsequent movements. The problem turned out to be much harder than Wiener envisioned: seventy years later, prosthetics incorporating neural feedback are still in their very early stages. Wiener's concept was sound - it is simply that interfacing neural signals with electromechanical devices is a very difficult problem.
What's more, Wiener (like almost everyone in 1950) greatly underestimated the potential of digital computing. As mentioned earlier, Wiener's mathematical contributions lay in the analysis of signals and noise, and his analytical methods applied to continuously varying, or analog, signals. Although he was involved in the development of wartime digital computing, he never foresaw the exponential growth in computing power brought about by the introduction and progressive miniaturization of semiconductor circuits. This was hardly Wiener's fault: the transistor had not yet been invented, and the vacuum-tube technology of the digital computers he knew was bulky, unreliable, and not scalable to larger devices. In an appendix to the 1948 edition of Cybernetics, he predicted computers that could play chess, looking ahead two or three moves. He might have been surprised to learn that within half a century a computer would beat the human world chess champion.
Technological Hype and the Risk of the Singularity
An important example of technological overestimation was unfolding just as Wiener wrote the book. In the 1950s, researchers such as Herbert Simon, John McCarthy, and Marvin Minsky made the first attempts at artificial intelligence, writing computer programs to perform simple tasks and constructing rudimentary robots. The success of these initial efforts inspired Simon to declare that "within 20 years, machines will be able to do anything a human can do." This prediction proved wildly wrong. As computers grew more powerful, they got better at playing chess, because they could systematically generate and evaluate vast numbers of possible future moves. But most of AI's promises, such as the robot maid, proved illusory. In 1997, when Deep Blue beat Garry Kasparov in a chess match, the most capable room-cleaning robot was the Roomba, which wandered about at random, vacuuming, and beeped plaintively when it got stuck under the sofa.
Technology forecasts are especially unreliable because technology progresses through a series of improvements, stalls at obstacles, and breaks through via innovation. Many obstacles and some innovations are foreseeable, but many more are not. In my own work building quantum computers with experimentalists, I often find that technical steps I expect to be easy turn out to be impossible, while tasks I think impossible turn out to be easy. You don't know until you try.
In the 1950s, John von Neumann, inspired in part by a conversation with Wiener, arrived at the concept of a "technological singularity." Technology tends to improve exponentially, doubling in capability or sensitivity over some characteristic time interval. From the observed exponential pace of technological progress, von Neumann extrapolated that in the not-too-distant future "technological progress will become incomprehensibly rapid and complicated," outstripping human capabilities. Indeed, if the growth of raw computing power is extrapolated at its current rate, computers will rival the human brain within the next 20 to 40 years (depending on how one estimates the information-processing capacity of the human brain).
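The arithmetic behind this extrapolation is simple enough to write down. The figures below are illustrative assumptions, not established facts: a brain capacity of roughly 10^16 operations per second, a current machine at 10^13, and a doubling time of two years. Under those assumptions, pure extrapolation gives a parity date consistent with the 20-to-40-year range quoted above.

```python
import math

# Illustrative ballpark figures (assumptions, chosen for this sketch):
brain_ops = 1e16        # rough estimate of brain operations per second
computer_ops = 1e13     # raw operations per second of a current machine
doubling_years = 2.0    # assumed doubling interval for computing power

# Number of doublings needed, times the doubling interval:
doublings = math.log2(brain_ops / computer_ops)
years_to_parity = doublings * doubling_years
print(years_to_parity)  # roughly 20 years under these assumptions
```

Change any of the three inputs by an order of magnitude and the answer shifts by years or decades, which is precisely why such forecasts are so sensitive to how one estimates the brain's capacity.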
The failure of artificial intelligence's initial, overly optimistic predictions silenced discussion of the technological singularity for decades, but since the publication of Ray Kurzweil's "The Singularity Is Near" in 2005, advancing technology has brought the idea of superintelligence back into fashion. Some believers, including Kurzweil, see the singularity as an opportunity: humans could merge their brains with the superintelligence and live forever. Others, like Stephen Hawking and Elon Musk, worry that such a superintelligence could prove harmful and consider it the greatest existential threat to human civilization. Still others believe such claims are overblown.
The idea of an impending technological singularity is closely tied to Wiener's life's work - and to his failures of prediction. His work in neuroscience, and his early support of McCulloch and Pitts, foreshadowed today's astonishingly effective deep learning methods. Over the past decade, and especially the past five years, deep learning has finally demonstrated what Wiener liked to call Gestalt - for example, the ability to recognize that a circle is a circle even when it is tilted so that it appears as an ellipse. His work on control, coupled with his work on neuromuscular feedback, has important implications for robotics and inspired neural-based human-machine interfaces. His missteps in technological forecasting, however, suggest that we should be skeptical of the notion of a technological singularity: the general difficulty of forecasting technology, and the specific problems of developing superintelligence, should warn us against overestimating the power and efficiency of information processing.
Arguments for Singularity Skepticism
Exponential growth does not last forever. A nuclear explosion grows exponentially, but only until its fuel is exhausted. Likewise, the exponential progress of Moore's Law is beginning to run into fundamental physics. Computer clock speeds stalled at a few gigahertz fifteen years ago simply because chips had begun to melt. The miniaturization of transistors is running into quantum-mechanical problems from tunneling effects and leakage currents. Eventually, the various exponential improvements in memory and processing driven by Moore's Law will come to a halt. Within a few decades, however, the raw information-processing power of computers may catch up with the brain - at least by the rough measure of bits and bit flips per second.
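The pattern described above - growth that looks exponential until it hits a resource limit - is captured by the textbook logistic curve. The sketch below is a generic illustration with invented parameters, not a model of Moore's Law specifically: growth is proportional both to the current size and to the remaining headroom, so it starts out exponential and then flattens as the carrying capacity is approached.

```python
def logistic_step(x, r=1.0, capacity=1.0, dt=0.1):
    """One Euler step of logistic growth: the growth rate is proportional
    to x (exponential at first) and to the remaining headroom (1 - x/capacity),
    so the curve flattens as x approaches the capacity."""
    return x + r * x * (1 - x / capacity) * dt

x = 0.001
trajectory = []
for _ in range(200):
    x = logistic_step(x)
    trajectory.append(x)
# Early values grow by a nearly constant factor per step (exponential);
# later values crowd up against the capacity without ever exceeding it.
```

The fuel in a nuclear explosion, the melting point of silicon, and the size of an atom all play the role of the capacity term: each one turns an exponential into an S-curve.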
The human brain is complex, the product of millions of years of natural selection. In Wiener's time, our understanding of its structure was primitive and simplistic. Since then, ever more sensitive instruments and imaging techniques have revealed that our brains are far more intricate in structure and function than Wiener imagined. I recently asked Tomaso Poggio, one of the pioneers of modern neuroscience, whether he was concerned that, with their rapidly increasing processing power, computers would soon mimic the functioning of the human brain. "Impossible," he replied.
Recent advances in deep learning and neuromorphic computing are very good at replicating one specific aspect of human intelligence, centered on the workings of the cerebral cortex, where patterns are processed and recognized. These advances have allowed computers to beat not only the world chess champion but the world Go champion as well - an impressive feat, but nowhere near enough to let a computer-controlled robot tidy a room.
Raw information-processing power does not imply sophisticated information processing. While the power of computers has grown exponentially, the software they run has often failed to improve at all. One of the main responses of software companies to increased processing power has been to add "useful" features, which often make the software harder to use. Microsoft Word peaked in 1995; it has been slowly declining under the weight of new features ever since. Once Moore's Law begins to stall, software developers will face difficult choices among efficiency, speed, and functionality.
A major concern of singularitarians is that as computers become more involved in designing their own software, they will rapidly bootstrap themselves to superhuman computational ability. But the evidence from machine learning points in the opposite direction. As machines become more powerful and more capable of learning, they learn more and more as humans do. Education is as hard and slow for computers as it is for teenagers. Consequently, systems based on deep learning are becoming more, not less, human-like. The skills they bring to learning do not "supersede" human learning but "complement" it: computer learning systems can recognize patterns that humans cannot - and vice versa. The best chess players in the world are neither computers nor humans but humans working together with computers. Cyberspace is indeed full of unwanted programs, but these mostly take the form of malware - viruses notable for their mindless malice, not their superintelligence.
Wiener pointed out that exponential technological progress is a relatively modern phenomenon, and that not all of it is good. He feared that the development of atomic weapons and nuclear-tipped missiles could doom the human species. He likened our reckless exploitation of the Earth's resources to the Mad Hatter's tea party in Alice in Wonderland: having despoiled the environment in one place, we simply move on and despoil the next. Wiener was optimistic about the development of computers and neuromechanical systems, but pessimistic about their exploitation by authoritarian governments such as the Soviet Union - and about democracies such as the United States becoming more authoritarian in the face of the authoritarian threat.
How would Wiener view the human use of human beings today? He would be amazed at the power of computers and the Internet. He would be delighted that the early neural networks he helped nurture have produced powerful deep learning systems exhibiting the perceptual abilities he called for - though he would probably be unimpressed that one of the most prominent examples of this computerized Gestalt is the ability to recognize pictures of kittens on the World Wide Web. And I suspect he would see machine intelligence not as a threat but as a phenomenon in its own right, distinct from and co-evolving with our human intelligence.
Unsurprised by global warming - the Mad Hatter's tea party of our time - Wiener would hail the exponential progress of alternative-energy technologies and would apply his cybernetic expertise to developing the complex sets of feedback loops needed to integrate them into the coming smart grid. Still, recognizing that the solution to climate change is at least as much political as technical, he would no doubt be pessimistic about our chances of solving this civilization-threatening problem in time. Wiener hated peddlers - political peddlers most of all - but he conceded that peddlers will always be with us.
It is easy to forget how frightening Wiener's world was. The United States and the Soviet Union were locked in an all-out arms race to mount hydrogen bombs atop guided intercontinental ballistic missiles - a race to which Wiener himself, to his dismay, had contributed. I was four when Wiener died; in 1964 my kindergarten class practiced duck-and-cover drills under our desks in preparation for a nuclear attack. Given how human beings were used in his day, if Wiener could see our current state, his first reaction would surely be gladness that we are still here.
Compiled from: What Would the Father of Cybernetics Think About AI Today?