March 15, 2016 brought us a milestone in artificial intelligence 10 years earlier than experts expected: AlphaGo, the AI-based computer created by Google DeepMind, beat world champion Go player Lee Sedol at the game. Go is among the most ancient of games — simple in concept, yet spectacularly complex to master. The final score in the five-game match was 4-1, but after AlphaGo took a 3-0 lead, it was clear we were in a new era. Sedol himself said after the match, “I never imagined I would lose. It’s so shocking.”
This may sound familiar, bringing you back to IBM’s Deep Blue beating chess champion Kasparov in 1997. But under the covers, AlphaGo is as different from Deep Blue as the DVDs introduced in 1997 are from a Netflix movie download. Deep Blue’s strength came from brute force computing — literally evaluating the likely result of each possible move. When playing Go, brute force search is not an option. The number of possible plays is simply too vast, even in comparison to chess. There are about 10^170 legal plays on the Go board. To put this astounding number into perspective, our entire universe contains only about 10^80 atoms. This is why Go has been seen as the holy grail for artificial intelligence (AI) research. Winning at Go is not about evaluating every possible move, it requires strategy — and according to Sedol himself, AlphaGo’s strategy was “excellent.”
But enough about games. There are broader implications here. We can expect similar advances in commercial applications, such as self-driving cars. Demis Hassabis, who heads Google’s machine learning team, previously said, “The methods we’ve used are general-purpose; our hope is that one day they could be extended to help us address some of society’s toughest and most pressing problems, from climate modelling to complex disease analysis.”
These machine learning methods will also have significant impact on how we perform unstructured and complex business processes and decision-making tasks in day-to-day work.
Businesses already use AI and machine learning to deliver millions of valuable recommendations and observations every day. Well-known examples include product recommendations by Amazon, movie recommendations from Netflix, and personalized search results from Google. In the enterprise, examples include customer targeting, lead scoring, opportunity risk analysis, sales forecasting, and churn prediction. So with AI already delivering daily business value and AlphaGo’s victory in the news, a natural question for those of us in enterprise computing is, what should we expect next from AI and machine learning in the enterprise?
What’s new in AlphaGo?
What differentiates AlphaGo from previous technology is its learning capability. AlphaGo learns using two complementary deep neural networks: One decides which moves are more promising (data scientists say this “reduces the width of the search space”), while the other learns an “intuition” about how likely it is that a potential play will result in a win (“reducing the depth of the search space”). These two networks learn — or we could say are trained — first by analyzing many past matches played by professionals. This is known as “learning by example” or “supervised learning.” Based on that foundation, AlphaGo then improves by playing games against itself — millions of games at a dazzling speed most of us can barely imagine. This self-play is known as “reinforcement learning.” If you remember the 1983 movie WarGames, in which a computer “decides” not to start World War III by playing out different scenarios at lightning speed only to learn that every scenario results in world destruction, you’ve got an image of this sort of self-play. AlphaGo is not fed the winning patterns of Go. Instead, it abstracts and summarizes patterns from actually playing Go. In this way, AlphaGo is truly “intelligent” about playing the game.
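To make the two roles concrete, here is a toy sketch in Python using the much smaller game of Nim (players alternate taking 1 to 3 stones; whoever takes the last stone wins). This is an assumed illustration, not AlphaGo’s actual code: the `policy` and `value` functions below are hand-written stand-ins for what AlphaGo learns with deep neural networks.

```python
# Toy sketch of the two roles AlphaGo's networks play, using Nim
# (take 1-3 stones per turn; whoever takes the last stone wins).
# "policy" and "value" are hand-written stand-ins, not learned networks.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def policy(stones, moves, k=2):
    # Reduces the WIDTH of the search: rank the legal moves and keep
    # only the top-k candidates. (In Nim, leaving the opponent a
    # multiple of 4 stones is the strong play.)
    return sorted(moves, key=lambda m: (stones - m) % 4 == 0, reverse=True)[:k]

def value(stones):
    # Reduces the DEPTH of the search: estimate the win probability for
    # the player to move instead of searching to the end of the game.
    return 0.0 if stones % 4 == 0 else 1.0

def search(stones, depth):
    # Win probability for the player to move, exploring only the
    # policy's candidate moves and cutting off at a fixed depth.
    if stones == 0:
        return 0.0                 # opponent took the last stone: we lost
    if depth == 0:
        return value(stones)       # depth cutoff: trust the value estimate
    moves = policy(stones, legal_moves(stones))
    return max(1.0 - search(stones - m, depth - 1) for m in moves)

def best_move(stones, depth=4):
    moves = policy(stones, legal_moves(stones))
    return max(moves, key=lambda m: 1.0 - search(stones - m, depth - 1))

print(best_move(5))   # 1: leaves the opponent 4 stones, a losing position
```

In the real system, both functions are learned from data rather than hand-coded, and they guide a Monte Carlo tree search instead of this simple fixed-depth recursion — but the division of labor is the same: the policy narrows which branches to look at, and the value estimate lets the search stop early.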
What new capabilities can a learning system offer?
Not long ago, even the most advanced supercomputer could not keep up with a four-year-old child in identifying cats in photos. No longer. With rapid advancement in AI, we are seeing breakthroughs on many tasks considered formidable challenges for computers — not just object recognition in images, but self-driving cars, question answering through natural language, composing newspaper articles, even painting and drawing.
Replicating actions humans consider trivial is only the start. AI routinely considers options ignored by human beings. For example, in AlphaGo’s first three games, it made “surprising” moves a typical human professional would not consider. At the time, some observers identified the moves as mistakes. Yet 20 moves later, those surprising choices proved to be brilliantly innovative tactics. I believe Go professionals will study these moves and, in doing so, expand the set of options they consider in future human-only championships. In this sense, AI is creative, helping humans achieve more.
Surely there are limitations? Absolutely.
Although research in AI started in the 1950s, true learning systems are still in their infancy. It’s true that AlphaGo learned quickly. It took only five months for AlphaGo to move from beating a level-2 Go professional to beating Sedol, a level-9 champion. That progress would take a talented human years. But if we compare AlphaGo to that talented human, the path is quite different. AlphaGo was able to play tens of millions of games in that five-month span, while a person can play only 1,000 or so games per year. So AlphaGo, and AI in general, is data-inefficient: it needs vastly more experience than a human to learn the same lessons. You may think this is a moot point because an AI system is capable of playing millions of games. But keep in mind that for many applications outside gaming, self-play is not practical, making data-efficient learning a significant hurdle. So it remains an area of intense study for AI scientists.
Moreover, Go is a relatively simple task for AI because, even with its daunting set of options, it is well defined. Each player has complete information about the state of the game, past moves, and available future moves — no uncertainty. Contrast this with a game like bridge, where each player must make guesses about unknown cards, or poker where a player’s ability to bluff adds new twists. And for games like Go, each move is deterministic and the final rewards are explicit, either a win or loss. In the real world, especially for many situations in an enterprise, only partial information is available, and the final reward is difficult to quantify.
What is AlphaGo for the enterprise?
As I mentioned in a recent post, “Data Science, Self-driving Applications, and the Rise of Conversational UI,” “self-driving” enterprise applications — able to seek out data, apply intelligence, and present findings in a useful way — are the new frontier. With the addition of AI, many enterprise apps will act more like human assistants. They will detect relevant context changes (location, target customer, timing) and deliver relevant information at the moment it is most helpful. The interaction between a user and their applications will be more natural, more like talking to a trusted human assistant than enduring endless typing and clicking. And value grows over time as the AI analyzes the results of ongoing operations, such as marketing campaigns, lead conversions, sales meetings, email flows, interactions with customer success teams, or customer churn.
You may find yourself thinking, “Sure, if I had infinite time to pore over reports, I could see useful trends too.” That, of course, is the point. AI allows tedious tasks to be handled by machines, allowing people to concentrate on tasks better suited to us as humans. This brings us back to the point that AI systems, like AlphaGo, excel where “codified rules” exist. Go options may be so numerous as to seem infinite, but the rules of the game are clear. Even the 10^170 possible moves do not include throwing off your opponent by showing them pictures of your cat. Letting machines handle the tedium means people can make the leaps of creative intuition still far out of reach for AI.
The real world demands that artificial intelligence and human intelligence complement each other. AI excels at computation, memorization, and even reasoning, as long as the problem space is constrained. Human beings excel at perception, decision making, disruptive creativity, and interpersonal relationships. Success in the enterprise requires so many mundane tasks: updating data records, monitoring databases for changes, evaluating real-time results in marketing campaigns, detecting which customers are likely to churn, etc. All of these are candidates for automation and, in particular, candidates for AI because they require a system able to learn the difference between critical observations and irrelevant anomalies. As a result, human beings can focus on tasks requiring the unique spark of human intelligence: creating an unprecedented campaign, meeting one-on-one to win over clients, or designing the next generation of AI. Palantir, a company creating analysis software for U.S. government anti-terrorism efforts and the financial industry, offers a sophisticated example, building what they call “man-machine symbiosis.”
Before leaving the topic of human/AI teamwork, we should acknowledge that people and machines alike make mistakes. So the power of teaming is not just to reach new heights but also to reduce errors. The protections machines can offer range from the trivial, such as an email system that warns you when message content implies an attachment should be included, to the life-saving, like the loud “stall” warning in the cockpit of a commercial jet. And no one who has used voice recognition or auto-correct on their phone needs to be reminded that computers make mistakes. Machines and humans must work together to deliver optimal results.
As impressive as the AlphaGo victory is, we’re early in the development of AI systems. That means we’re also early in understanding the best ways for people and AI systems to join forces. But just as AlphaGo made innovative moves in Go that will ignite new thinking and creativity among the best Go players, we are confident tomorrow’s AI will spark new innovation among those who embrace its value.
Lei Tang is Chief Data Scientist at forecasting and predictive analytics company Clari.
This article originally appeared on VentureBeat
This article was written by Lei Tang and Clari from VentureBeat and was legally licensed through the NewsCred publisher network.