More than 90 years ago, one of the first onscreen depictions of an android made her debut. The Maschinenmensch, otherwise known as “Maria,” terrified audiences at showings of sci-fi masterpiece Metropolis. The year was 1927, and the futuristic idea of an evil robot disguising itself as a human was distant — a fantasy, but no less chilling. Maria played on very human fears: being controlled by that which we control, being deceived, and most importantly, being replaced. She represented a future that was bleak and scary, and though it made for an excellent film, no audience member wanted it to become a reality.
The Maschinenmensch was a proto-artificial intelligence, and like AI characters that came after, she reflected her time. Over the past century, AI onscreen has represented our anxieties, hopes, and ambitions, as well as our deepest values. Of course, that focus has shifted over time. Today, now that real-life artificial intelligence has become adaptable and dynamic, human-mimicking AI seems less like a fantasy and more like a not-too-distant eventuality. Because of this, our perceptions and expectations of AI onscreen have shifted, and we have begun a new exploration of what it means to be human. The future of AI is threatening, exhilarating, and wrapped in uncertainty and opportunity, as it always has been. But as AI has developed, so have our expectations of it.
The Stepford Wives tracks changes in AI perception
A clear example of changing times informing AI onscreen is in the case of The Stepford Wives. In both the 1975 original and the 2004 remake, wives in a seemingly idyllic New England town are replaced by robots modeled to look like them. These new versions are submissive, helpful, and perfect in the eyes of their husbands, but it’s a frightening concept to viewers who wonder if they, too, could be replaced by better robotic versions of themselves.
The two film adaptations, however, handle this fear differently. While the original is a thriller that reads like social commentary, dripping with danger and mystery, the remake plays the tale as a campy horror-comedy, light on the horror. In the nearly 30 years between the two versions, audiences traded white-knuckle fright at robotic replacements for dark bemusement. AI had changed, and so had our expectations.
Early appearances of AI in film
Even before 1975, AI was influencing film and vice versa. The 1960s were a time of exploration in AI. There weren’t many practical applications yet, but researchers like Ray Solomonoff were working to develop thought patterns and decision-making algorithms for AI programs. It was a time of what-ifs, which naturally invited the less-than-comforting questions: What if the computers outsmart us? What if they don’t want to serve us?
In 1968, one film answered these questions with a terrifying onscreen artificial intelligence that chilled audiences. 2001: A Space Odyssey introduced HAL 9000, the murderous machine that terrorizes a small crew of astronauts. The film was groundbreaking in many ways, but what people seem to remember most is HAL’s fearsome phrase: “I’m sorry, Dave. I’m afraid I can’t do that.” It was the moment audiences truly knew HAL was no longer under its human crew’s control, and it remains the most iconic and terrifying moment in the film.
HAL wasn’t the only representation of what we thought AI would be. The 1970s saw increased development in real-world artificial intelligence. Machines began learning and adapting to language. Though it would be roughly 40 more years before a program was claimed to have passed the Turing test, computer programs were already making and publishing scientific discoveries for the first time. Artificial intelligence was just that: intelligent. It sparked imagination, both hopeful and fearful.
Films begin to show mixed emotions about AI
Through the 1970s and 1980s, a crop of now-famous androids appeared onscreen. Westworld’s Gunslinger, Blade Runner’s replicants, and The Terminator’s T-800 represented threats, while Star Wars and Knight Rider flipped the script with the helpful and charming C-3PO, R2-D2, and KITT. These were the most dynamic and humanlike robots yet seen onscreen, reflecting our collective realization that AI could be more.
The late 1980s also introduced perhaps one of the best-known AI characters of all time: Data, from Star Trek: The Next Generation. The show debated Data’s human qualities at length, but sci-fi fans held tightly to the feeling, cat-loving android, even before the “emotion chip.” His development over the show’s run through the 1990s felt natural against the real-world backdrop of AI research, where experts such as Ian Horswill, Gerry Tesauro, and Ernst Dickmanns were making huge strides in machine learning. Computers were puzzle solvers, game winners, and even, possibly, drivers.
Representation following the World Wide Web
Then, of course, came the World Wide Web. We were connected to and by computers, and new ideas of what they could be filled screens. The turn of the millennium ushered in new visions of what a world brimming with technology could become. Films such as I, Robot and A.I. Artificial Intelligence offered complicated new takes and rose to meet audience expectations. Cut-and-dried depictions of AI were gone; only deeply complex characterizations that pushed the limits of what we knew could still hold our interest.
This complexity has continued to evolve. Ex Machina, Chappie, and Her each offered a unique depiction of artificial intelligence, with its own tone and outlook. Television shows like Humans and the updated Westworld present androids every bit as human as we are, and they feel believable, as if that future isn’t far off. In an era when AI programs are indeed, in some ways, smarter than us, imagining the next step, our creations surpassing our control, doesn’t seem far out of reach.
Today, our relationship with artificial intelligence, and with technology in general, has advanced to the point where technology influences nearly every part of our lives. We rely on our personal computers to inform us about AI, connect us to technology and each other, and even tell us how to watch films about all of that. Some of us choose to be afraid, while some choose to see technology’s advancements as essential steps into the future. But whatever AI’s future holds, we can count on one thing: We’ll have movies to document its progression.
Alex Haslam is a freelance writer who covers consumer technology, tech deals, and cord-cutting topics for PCWorld, US News & World Report, and Macworld.