In the past two decades, we’ve seen chess grandmasters and the best Jeopardy players in the world alike fall in competition to computers. Heads-up No-limit Texas Hold ’em poker may be next. But the future of artificial intelligence (AI) is about way more than games.
Last April and May, Carnegie Mellon University’s AI, Claudico (developed by Professor Tuomas Sandholm and his team), played an 80,000-hand tournament against four poker pros. When the tournament ended, three of the four pros had won more chips than Claudico.
“We think we’re at the point where poker will fall,” says Andrew Moore, professor and dean of CMU’s School of Computer Science. “We think it will happen at the next competition in two years.”
Know when to hold ’em …
“In computer science terms, the algorithm it needs [to play poker] is exponentially harder than chess, and it’s all because it’s a game of hidden information,” he says. “In a game like chess or backgammon, both sides can see the board at all times and know what the future scenarios might be. When you’re playing in a hidden game, it turns out you have to think about every possible card your opponent might have and what every possible game-play strategy might be. The computation is much, much harder. You can have a world championship-level chess player running on one moderate laptop now — something only a few humans have a chance of beating. But you still need a massive supercomputer to play No-Limit Texas Hold ’em.”
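Moore’s point about hidden information can be made concrete with a small sketch: even before any strategy is computed, a Hold ’em player holding two known cards must reason over every two-card hand the opponent might hold — any 2 of the remaining 50 cards. (The card labels below are standard poker shorthand; the hand chosen is an arbitrary example.)

```python
# Counting the opponent's possible hidden hands in heads-up Hold 'em.
# With 2 of the 52 cards visible to us, the opponent's hole cards can
# be any 2 of the remaining 50.
from itertools import combinations

ranks = "23456789TJQKA"
suits = "cdhs"  # clubs, diamonds, hearts, spades
deck = [r + s for r in ranks for s in suits]   # 52 cards

my_hand = ["As", "Kd"]                         # cards we can see
unseen = [c for c in deck if c not in my_hand] # 50 cards left

opponent_hands = list(combinations(unseen, 2))
print(len(opponent_hands))                     # 50 choose 2 = 1225
```

Every one of those 1,225 possible holdings multiplies the game tree that a poker AI must reason over — which is why, as Moore notes, the computation dwarfs that of a perfect-information game like chess.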
Moore, whose own background is in statistical machine learning, AI, robotics and statistical computation for large volumes of data, heads up a department of 30 faculty at CMU focused on AI research. CMU faculty have been working on things like AIs playing poker because teaching AIs to process scenarios with hidden information will unlock whole new vistas of applicability for the technology.
For instance, Moore says two CMU faculty are working on projects that will help Apple’s Siri or Microsoft’s Cortana become better personal assistants. You might spot a set of red sneakers that catch your fancy and ask your AI to buy them for you — if it can get a good deal. Or maybe you want it to get you a deal on baseball tickets.
“To do that, your own personal computer assistant can’t just advertise what you’re willing to pay,” Moore says. “It needs to withhold information.”
The technology could also lead to blind negotiations with no opportunity for gaming the system. One example, Moore says, would be organ exchanges. Consider kidneys. Some people are willing to donate a kidney to help a relative whose kidney is failing. But, of course, not all relatives are donor matches. Another donor-recipient pair may be in the same situation, though. One possible solution is a large system for completely fair negotiation in which willing donors swap with matching recipients from other pairs.
Even in areas without hidden information, AIs may soon be assuming a large role in very human negotiations. Moore points to Spliddit, a not-for-profit academic endeavor by CMU faculty and students, which exists to help people divide things fairly, whether rent for an apartment, the fare for a taxi or credit for a project or business endeavor.
“It sounds like a very special case, but it’s happening all over the place right now,” Moore says. “You might have several people taking an Uber cab and having to split up the bill, or a bunch of departments who each want a certain amount of space for their operations and the dean needs to figure out what’s best. These are tools which can take us and some of our emotions and ability to intimidate each other out of the loop. There’s no benefit to lying about what you want.”
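One simple rule for the shared-cab case Moore mentions is to split the total fare in proportion to what each rider’s solo trip would have cost. (This is only an illustrative sketch with hypothetical numbers; Spliddit’s actual mechanisms are more sophisticated and come with formal fairness guarantees.)

```python
# Proportional split of a shared cab fare (illustrative only).
# Each rider pays a share of the total in proportion to the cost of
# the trip they would have taken alone.

def proportional_split(total_fare, solo_costs):
    solo_total = sum(solo_costs)
    return [round(total_fare * c / solo_total, 2) for c in solo_costs]

# Three riders whose solo trips would cost $10, $20 and $30
# share one $36 ride.
print(proportional_split(36.0, [10.0, 20.0, 30.0]))  # → [6.0, 12.0, 18.0]
```

Because the rule is mechanical, no rider gains anything by bluffing or intimidating the others — the property Moore highlights.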
The future of intelligence
Beyond negotiation, Moore says CMU is betting several other AI areas are going to become hugely important in the near future.
Moore says AI testing will see significant growth in the near term. For instance, consider trucks that autonomously drive supplies across Afghanistan.
“It is very hard to test in advance whether they meet requirements,” Moore says. “I would think companies that can provide testing services for autonomous or learning systems will really prosper over the next few years.”
Self-driving vehicles, or vehicles that take over from a human driver in the event of extreme circumstances, have been the subject of a great deal of research for some time, of course. But researchers in this area need to bring ethics and philosophy to bear.
“We have the capability now, in the last second before a devastating crash, for the car’s computer to take over,” Moore says. “It has the processing power, the information and the understanding to make at least 1,000 more decisions about what happens during the crash. It’s a wonderful possibility that a car can be very precisely controlled during a crash to reduce the loss of human life.”
But that capability raises moral quandaries that engineers will have to confront. Is it worth killing one million domestic cats to save one human life?
“If there’s a risk to the driver or a pedestrian, should the car treat their safety equally? That’s going to be a very hard question for an engineer to answer,” Moore says. “One of our faculty is very interested in that question. It’s slightly less horrific to work with the question of animal collisions — how the car should be dealing with the risk and reward of different behaviors.”
Another area CMU is betting on is tracking microexpressions, which Moore says will have big implications for healthcare and the treatment of mental illness, as well as for education and entertainment. By capturing microexpressions, he says, we can start to capture information about whether a person is becoming frustrated, distressed or worried in the midst of an interaction.
This article was written by Thor Olavsrud from CIO and was legally licensed through the NewsCred publisher network.