In recent years, films from Ex Machina to Her have caused millions of moviegoers to ask haunting questions about where “machine” ends and “human” begins. And their themes may not be so far-fetched. Thanks to several key breakthroughs, AI technologies are making the leap from science fiction to the real world. They are revolutionizing how individuals make decisions, marketers target consumers, and companies do business. In the coming years, as AI continues advancing, it promises to scaffold a brave new world for the Homeland Generation—in ways some cheer, others fear, and plenty that we have yet to understand.
Artificial intelligence refers to the development of machines and software that simulate human intelligence. The field can be divided into two broad categories: efforts focused on interpreting sophisticated input (such as speech, emotions, or coordinated movements) and those aiming to recreate higher-level cognitive capabilities like learning and decision-making. Already, AI-powered systems from antilock brakes to Siri have come to surround us in everyday life, and recently these systems have grown in ambition and scale to include drones, self-driving cars, and humanoid robots.
Most of these advances have been pioneered by tech titans. Alongside Google, Microsoft, and Apple are IBM, Amazon, and Facebook, each of which has poured billions of dollars into AI since 2010 in a race to command the field. Meanwhile, startups such as Affectiva, Vicarious, and Sentient are tackling specialized niches like emotion analytics, image recognition, and data analytics. According to estimates from the business analytics firm Quid, AI has drawn more than $17 billion in investments since 2009.
These new capabilities are making a measurable impact on virtually every industry. AI is being used to spot financial fraud; help doctors diagnose illnesses; write news stories; carry out tasks in hazardous situations; and generate irresistible advertising copy. Where AI is most valuable, however, is in industries with the most data to crunch and high-value problems to solve—such as insurance, health care, and cybersecurity.
Long considered the stuff of science fiction, AI has taken a great leap forward thanks to a perfect storm of technological change. First is a growth in capabilities: Rapid advancements in computing power and falling hardware costs have made AI-related computations much cheaper to perform. Second is the advent of Big Data, which has enabled deep-learning algorithms in which the systems themselves learn bottom-up from a vast, fast-expanding universe of digital information.
Tech gurus speculate that the marriage of Big Data, the Internet of Things, and AI will eventually result in “ambient intelligence”—an ever-present digital fog in tune with our behavior and physiological state. Affectiva’s founder, Rana el Kaliouby, predicts in The New Yorker that before long, devices will have an “emotion chip” that functions unseen in the background the way that geolocation does in phones. Verizon has drafted plans for a sensor-laden media console that could scan a room and determine a driver’s license worth of information about its occupants. All these data would then determine the console’s selection of TV advertising: Signs of stress might prompt a commercial for a vacation, while cheery humming could result in more ads with upbeat messages.
What kind of mark will AI ultimately leave on society? Observers are deeply divided. A majority of the experts surveyed in a recent Pew study see AI as a largely positive breakthrough that will help people accomplish more than ever before. The transhumanist movement even goes so far as to predict that humans will eventually merge with machines and become immortal.
Countering them are cautious voices like Erik Brynjolfsson and Andrew McAfee, who argue in their book The Second Machine Age that these technologies will displace lower-level workers and exacerbate income inequality. More drastically, tech luminaries from Bill Gates to Elon Musk have expressed fears that a Skynet-style uprising may not be far off. Stephen Hawking even warns that the development of AI “could spell the end of the human race.”
These divergent views don’t even touch on the disorienting moral issues that AI will inevitably raise. Most frequently discussed is privacy: Emotionally aware technology could easily cross the line into intrusiveness. Author Clive Thompson offers the example of an insurance company that raises its fees once customers show signs of being ill. Another issue is accountability: Some argue that high-stakes decisions like loan approvals should require human oversight. Moreover, AI leads into uncharted legal waters. Who is responsible, for example, if a self-driving car crashes? How should autonomous weapons be treated under international law?
Responses to AI also vary significantly by generation. Boomers, interested in the “why” behind every choice, are uneasy about the idea of machines operating according to rules no one understands. These suspicions are also shared by Generation X—though Xers are more likely to let those concerns slide if they see the use of AI resulting in measurable benefits. Overall, however, neither of these generations would be comfortable with an all-encompassing “personal assistant” mindset (in which a meta-layer of intelligence draws from multiple apps at once to solve a wide range of problems) that would allow AI to work to its full potential.
Millennials, on the other hand, may see AI as simply the next phase in an always-on world that they take for granted. They don’t need to know how it works, only that it does—and view technology, even with its pitfalls, as a positive force that has largely improved their lives.
In any case, we’re still far from immersive AI being considered a normal part of life. It won’t be until the Homeland Generation comes of age that many of these questions will reflect realities rather than speculation. But the future starts now—and with it, a grand evolution in the way we interact with and relate to the world around us. As el Kaliouby remarks: “I think that, ten years down the line, we won’t remember what it was like when we couldn’t just frown at our device, and our device would say, ‘Oh, you didn’t like that, did you?’”
This article was written by Neil Howe from Forbes and was legally licensed through the NewsCred publisher network.