
Our attitudes toward AI reveal how we really feel about human intelligence


The idea that superintelligent robots are alien invaders coming to “steal our jobs” reveals deep flaws in the way we think about work, value, and intelligence itself. Work is not a zero-sum game, and robots are not an “other” competing with us. Like any technology, they are part of us, emerging from civilization the way hair and nails grow from a living body. They are part of humanity, and we are part machine.

When we regard a fruit-picking robot as a competitor in a zero-sum game, we miss the real problem: farm owners and society treat the human who used to pick the fruit as disposable once he or she is no longer fit for the job. This means the human worker was already being treated as a non-person — that is, as a machine. We are in the untenable position of considering the machine alien because we are already in the untenable position of alienating one another.

Many of our concerns about artificial intelligence are rooted in that ancient and often regrettable part of our heritage that emphasizes dominance and hierarchy. The broader story of evolution, however, is one in which cooperation allows simpler entities to join forces, creating larger, more complex, and more enduring ones—this is how eukaryotic cells evolved from prokaryotes, how multicellular animals evolved from single cells, and how human culture evolved from groups of humans, domesticated animals, and crops. Mutualism is what has allowed us to scale.

As an AI researcher, my primary interest is not so much computers (the “artificial” in AI) as intelligence itself. And it has become clear that, no matter how it is realized, intelligence requires scale. The “Language Model for Dialogue Applications,” or “LaMDA,” an early large language model we built internally at Google Research, convinced me in 2021 that we had crossed an important threshold. While still highly unpredictable, LaMDA, with its (for the time) impressive 137 billion parameters, could almost hold a coherent conversation. Three years later, state-of-the-art models have grown by an order of magnitude and have consequently improved greatly. In a few years, we will probably see models with as many parameters as there are synapses in the human brain.

As a species, modern humans are also the result of an explosion in brain size. Over the past few million years, the skulls of our hominid ancestors quadrupled in volume. Social group size grew in step, as researchers have found by correlating primate group size with brain volume. Bigger brains allow larger groups to cooperate effectively. Larger groups are, in turn, smarter.

What we think of as “human intelligence” is a collective phenomenon that arises from the cooperation of many narrower individual intelligences, like yours and mine. When we catalogue our intellectual achievements — antibiotics and indoor plumbing, art and architecture, higher mathematics and ice cream with hot fudge sauce — let us recognize how clueless most of us are, individually. Could you make ice cream even if you started with domesticated cows, cocoa pods, vanilla beans, sugar cane, and refrigeration — with 99% of the hard work already done?

Human intelligence is made up not only of people, but also of a range of plant and animal species, microbes, and technologies spanning from the Paleolithic to the present day. The cows and cocoa plants, the rice and wheat, the ships, trucks, and railroads that have supported explosive population growth are all central to it. To ignore all these companion species and technologies is to imagine ourselves as a disembodied brain in a jar.

Moreover, our intelligence manifests and distributes itself in a variety of ways. This will become only more true as artificial intelligence systems proliferate, making it increasingly difficult to pretend that our achievements are individual or even uniquely human. Perhaps we should adopt a broader definition of “human” that includes this entire biotechnological package.

Some of our most impressive feats, such as silicon chip manufacturing, are truly global in scale. Our challenges are also increasingly global. Threats such as the climate crisis and the resurgent possibility of nuclear war were not created by a single actor, but by all of us, and we can only solve them collectively. The increasing depth and breadth of collective intelligence is a good thing if we want to thrive on a planetary scale, but such growth is not often perceived as cumulative and mutual. Why?

In short, because we care about who will be on top. But dominance hierarchies are nothing more than a particular trick that allows troops of cooperative animals with aggressive tendencies toward one another — born of internal competition for mates and food — to avoid constant fighting by agreeing in advance on who would win, were a fight for priority to break out. In other words, such hierarchies may simply be a trick for half-intelligent monkeys, not a universal law of nature.

AI models may have considerable intelligence, just like human brains, but they are not apes competing for status. As products of high human technology, they depend on people, wheat, cows, and human culture in general to an even greater degree than Homo sapiens does. They are not plotting to eat our food or steal our romantic partners. They depend on us; we may come to depend on them just as deeply. Yet concerns about dominance hierarchy have shadowed AI development from the beginning.

The term “robot,” introduced by Karel Čapek in his 1920 play Rossum’s Universal Robots, comes from robota, the Czech word for forced labor. Nearly a century later, a respected AI ethicist titled an article Robots Should Be Slaves, and although she later regretted her choice of words, the debate over robots continues to revolve around domination. AI doomsayers now worry about the possibility of humans being enslaved or exterminated by superintelligent robots. AI deniers, on the other hand, believe that computers are by definition incapable of any agency — merely tools that humans use to dominate one another. Both perspectives rely on zero-sum, us-versus-them thinking.

Many labs are currently developing AI agents. They will become commonplace in the coming years, not because robots are “taking over,” but because a cooperative agent can be far more useful, both to individual humans and to human society, than a mindless robot.

If there is any threat to our social order, it comes not from robots but from inequalities between human beings. Too many of us have not yet grasped that we are interdependent. We are all in this together: humans, animals, plants, and machines alike.
