In this century, artificial intelligences (AIs) will overtake human beings across an ever-widening swath of capabilities. These capabilities will in many cases be integrated into our living systems. We thus face the question: What might remain uniquely human? And does this question even matter?
AIs will become better and faster than unenhanced Homo sapiens at nearly everything. Our technological systems have already acquired ways of sensing the world that are unavailable to us; a trivial example is the ability to sense light beyond the visible spectrum. In 2016, AI company DeepMind's AlphaGo defeated the world's reigning Go master, Lee Sedol, a feat many theorists had predicted would require decades to accomplish, if it happened at all.
I have yet to uncover solid reasons to believe that our computational offspring could not eventually imitate, and excel at, any capability or sensibility we today consider human. This isn't to say we would necessarily want AIs to imitate humans, beings fraught with faults, in every manner, but nothing appears to prevent them from doing so. Some capabilities and characteristics will require many years to develop, though note that 30 years, for instance, isn't that long. I'd be grateful to anyone who provides solid arguments and evidence to the contrary. (A sincere request.)
Why imitate humans when we’re so fraught with faults? Can’t AI do better?
Consider two examples. Creativity and empathy are often proposed as capabilities beyond AI's reach, at least for a very long time. I predict this will not be so. The Santa Fe Institute (SFI), one of the world's leading research institutions, runs an open competition to test when experts will no longer be able to discern whether a piece of art was created by a human or a computer. Sony's Flow Machines can competently compose pop music in almost any style, and more advanced systems are combining styles to create what listeners experience as new music. The path from such activities to what we might consider 'genius' will likely be long, but what will the remainder of the 21st century bring? This is simultaneously exciting and terrifying.
Empathy appears unassailable. It’s not.
Empathy feels unassailable. Through natural selection, we evolved capabilities to sense and interpret inputs from other entities (humans and other living things, though even inanimate objects can trigger emotional responses), to feel empathy in response, and to behave in ways that express it. Considered as inputs and outputs, computational systems should be eminently qualified to imitate empathy. The results could be indistinguishable from those generated by humans.
Commenting on a 2016 report on AI by the European Parliament, Mady Delvaux asserted, "a robot can show empathy. But it can never feel empathy." (See Federico Guerrini's essay for more.) Does this distinction really matter on a pragmatic, daily basis? We might console ourselves with the belief that emotion will remain exclusively human, but if AIs present the comprehensive experience of engaging with a thinking and feeling entity, then are they not, in effect, thinking, feeling entities?
If AIs engage as thinking, feeling entities, then aren’t they so?
In some ways, AIs should become better at employing empathy than we have been. Humans make decisions based on both rationality and emotions, filtered through numerous biases. (For relevant research, see Daniel Kahneman's Thinking, Fast and Slow.) We do so because it works often enough. But our emotions and biases can also be leveraged against us. Even when the evidence suggests that empathy should be discounted, emotions can overtake us. Expert marketers and political campaign strategists know this well. AI systems can efficiently assess an ever-wider set of data and past experiences to enhance decisions. This is similar to how humans learn from experience, directly and vicariously, but AIs should be able to learn more rapidly from each other in vast connected networks, what Moran Cerf and I call intelligence-to-intelligence (i2i) networks. Just as self-driving cars will be able to rapidly share safety insights across entire fleets, AIs will build insights into how to employ empathy, and when to avoid it.
Such AI collectives might appear foreign or threatening to our humanity, but note that we are social beings. Far from being inhuman, broader and deeper connectivity will likely amplify our social natures. We already see this happening as social media technologies both connect and isolate us, generating anxieties about individual identity in the face of rapid integration.
As Ray Kurzweil and others have argued, we will become increasingly integrated with our technologies over this century, adopting technology wherever doing so enhances our capabilities and experiences. While Kurzweil's optimism sometimes overwhelms healthy skepticism, he is directionally correct. The 'humans vs. computers' apocalypse presented in popular culture, in which robots take over only to be thwarted at the last minute by a maverick, intuitive human hero, is unlikely. If humans and AIs become integrated, who will be fighting whom? We ourselves change alongside our technologies, as we have since we first threw stones.
Each major technological shift in history has generated both good and bad, intended and unintended consequences. Our current transitions are no exception. We face enormous risks, even existential challenges, as our technologies become more capable and independent. These issues require unremitting, thoughtful exploration of practical and ethical implications.
Science and technology tend to ask what things are and how they work. Ethics, metaphysics, and faith systems ask why. Not "why" as in causality, but as in purpose or raison d'être, typically questions beyond the reach of the scientific method; some would say questions that cannot be reliably answered at all. Rather than banishing metaphysics to the realm of pseudo-science, as some philosophers from at least David Hume onward have proposed, metaphysical questions become even more essential in a world of shifting foundations. Our creations, and our re-formed selves, will open dimensions of experience beyond the worlds considered by prior philosophers, whether Nietzsche's will to power or Heidegger's Dasein.
What will remain uniquely human? Perhaps nothing, and perhaps this is a faulty question. Why must we believe ourselves to be unique or somehow better than what we create? Or than anything else in nature? We have been in a long, slow grieving process since Galileo recognized a changing universe in which we weren't central. Instead of dwelling on a sense of loss, we can recognize the breathtaking opportunities, along with the need for humble caution: we are becoming ever more capable of manipulating natural forces we might never fully understand.
Rather than asking what might remain uniquely human, perhaps we should question what kind of humanity we aspire to become.
This article was written by Robert C. Wolcott from Forbes and was legally licensed through the NewsCred publisher network.