In December it was announced that a chess player with four hours of training beat the world-champion chess-playing program. The player was AlphaZero, an AI (artificial intelligence) program developed by Google's DeepMind.
It was told the rules of the game but was given no chess-playing strategies or solutions. It knew very little at the outset, played itself over and over again (rather randomly at first), and got better very quickly over the course of 68 million games. It had the benefit of massive computing power as well, something unavailable to its opponent, which ran on an ordinary PC.
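To get a feel for how self-play works, here is a toy sketch in Python: tabular self-play learning on tic-tac-toe. It is nothing like AlphaZero's actual algorithm (which pairs deep neural networks with tree search), and every constant in it is an illustrative assumption, but it follows the same broad recipe: start out random, play yourself over and over, and learn from the outcomes.

```python
# Toy self-play learning on tic-tac-toe: a crude stand-in for the idea
# behind AlphaZero, not its actual method. Boards are 9-char strings.
import random
from collections import defaultdict

Q = defaultdict(float)   # (board, move) -> learned value of that move
EPS, ALPHA = 0.1, 0.5    # exploration rate and learning rate (assumed)

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return "draw" if " " not in b else None

for game in range(50_000):                 # repetition: many self-play games
    board, player, history = " " * 9, "X", []
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        if random.random() < EPS:          # explore (rather randomly at first)
            m = random.choice(moves)
        else:                              # exploit what has been learned so far
            m = max(moves, key=lambda mv: Q[(board, mv)])
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w:                              # game over: learn from the result
            for b, mv, p in history:
                r = 0.0 if w == "draw" else (1.0 if w == p else -1.0)
                Q[(b, mv)] += ALPHA * (r - Q[(b, mv)])
            break
        player = "O" if player == "X" else "X"

print(max(range(9), key=lambda mv: Q[(" " * 9, mv)]))  # favored opening square
```

After enough games the table comes to favor strong openings (typically the center square), purely from the outcomes of games played against itself.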
It was one of a trio of victories for the program. “AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case,” say the authors of the paper “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.”
….
Artificial intelligence, or machine intelligence, stands in contrast to human or animal intelligence. When a device seems to be learning or solving problems, some say the machine has artificial intelligence. When it does those things better than we do, it is called “superhuman.”
Because our world is increasingly filled with data, AI might help us make sense of it all. AI has applications in healthcare, autonomous vehicles, finance, speech recognition, and more. Any device or company that provides recommendations to you, for example, is probably using some flavor of AI to decide what you might like. If you’ve played a computer game, you’ve won or lost to AI.
Current goals for AI research include giving machines the ability to reason, plan, and perceive.
Neural networks can learn patterns of behavior and be used to power AI. The idea is to have a system, a “neural network” of interconnected computer nodes, that can experiment rapidly and learn from its mistakes. A neural network is “trained” through repetition until it “learns”; then it can respond to new input.
Algorithms, sequences of actions to be performed, are used to train neural networks.
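As a concrete (and heavily simplified) example of what “training through repetition” means, here is a tiny two-layer network learning the XOR function in plain numpy. The task, layer sizes, step count, and learning rate are all illustrative assumptions, not any system described above.

```python
# A minimal sketch of training through repetition: a tiny neural
# network learning XOR. All numbers here are toy-scale assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

# Two layers of weights, initialized randomly (the "random at first" stage).
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):        # repetition: many passes over the data
    h = sigmoid(X @ W1)           # hidden-layer activations
    out = sigmoid(h @ W2)         # the network's current guesses
    err = out - y                 # how wrong it is (learn from mistakes)
    # Backpropagation: nudge the weights to reduce the error.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # after training, should be close to [0, 1, 1, 0]
```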
AI can be used in robots, but it doesn’t require them. AI can exist as software on a machine connected to the internet.
As with most technologies, whether AI is useful or harmful depends on how we use it.
….
In the useful category, for example, a team of scientists created a new algorithm that can look at an image and fix blurriness, grainy noise, missing pixels, or color errors. What they find exciting is that their neural network can handle a wide variety of problems at the same time, rather than fixing each one individually.
For it to work, they had to give the system a large database of good images so it could “learn” what makes for a good image. After that, it can recognize and fix deviations it finds in new images. Sort of. It can have trouble with finer details, such as hair, so the team is working to improve the algorithm by teaching it context, such as “hair is often at the top of a face.” Regardless, the days of bad photos may soon be over, thanks to AI.
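One common pattern behind systems like this (though not necessarily this team's exact method) is a denoising autoencoder: corrupt good images on purpose, then train a network to undo the damage. A minimal PyTorch sketch, with made-up sizes and random stand-in data:

```python
# Generic denoising-autoencoder pattern: train on (corrupted, clean)
# pairs. Illustrative only; not the researchers' actual model or data.
import torch
import torch.nn as nn

model = nn.Sequential(           # tiny encoder/decoder for 1-channel images
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):                            # training loop (toy scale)
    clean = torch.rand(8, 1, 32, 32)               # stand-in for "good images"
    noisy = clean + 0.2 * torch.randn_like(clean)  # simulate grainy noise
    restored = model(noisy)
    loss = loss_fn(restored, clean)                # learn to undo the damage
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, model(new_noisy_image) yields a cleaned-up estimate.
```

With a real database of good photos in place of the random tensors, the same loop teaches the network what “good” looks like, which is what lets it fix deviations in new images.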
In the not-entirely-creepy category, Cornell is working on new “neuromorphic” computer chips. Instead of processing in binary 1s and 0s like our current computers, these chips process spikes of electrical current in complex combinations. The hope is to mimic neural activity and pack some powerful computation into a very small space.
As an experiment, they are building a RoboBee that can instantly respond to changes in wind. As you might expect, it is being developed using a neural network so it can learn from previous experiences.
Obviously, AI has useful applications.
Consider this: businesses are using AI to generate and analyze hypotheses on their behalf, sharing the results with humans to help them make decisions. And at the same time, AI can generate convincing artificial Yelp reviews. Could this lead to AI assistants interpreting data created by other AI assistants?
AI will write stories and books. The AP is already experimenting with algorithmically written stories. Bots offer up comments on major social media platforms daily.
AI can be used to recreate anyone’s voice from a small sample. An artificial voice could help someone with damaged vocal cords, or it could be used for a cruel prank (a “sibling” calling you with unfortunate news). The ability to discern what is real and what is not is now a greater concern.
That leads us to the possibly-dangerous category. A group of scientists recently released a modification to the computer game Civilization V that lets players use AI, and they offer a warning: “In the modified game, artificial intelligence initially provides benefits, and eventually can turn into superintelligence that brings mastery of science to its discoverer. However, if there is too little investment in AI safety research, rogue superintelligence can destroy humanity and bring an instant loss of the game.”
One of the researchers, Shahar Avin, shared an insight: “Something that struck me as surprising was if the geopolitical situation is very messy,” he said. “Let’s say you’re stuck between two aggressive civilizations. It becomes very difficult to manage AI risk because your resources are devoted to fighting wars.”
….
There are groups concerned about the future of AI, and one of their worries is misalignment: the concern that AI will become competent and powerful (not necessarily evil) but not aligned with our goals. AI is very good at reaching whatever goals it is given.
A related worry is that if we cede our position as the smartest beings on the planet, we may also cede control. What happens if AI develops a weapons system we humans can’t understand or out-manipulates human leaders?
AI is already an almost invisible layer in our lives. As it becomes more powerful and is combined with other technologies, steps will need to be taken to keep AI applications under control.
We might task an AI assistant to do it for us.
AI can tell you how long you have left
This is creepy:
“By using an artificially intelligent algorithm to predict patient mortality, a research team from Stanford University is hoping to improve the timing of end-of-life care for critically ill patients. In tests, the system proved eerily accurate, correctly predicting mortality outcomes in 90 percent of cases. But while the system is able to predict when a patient might die, it still cannot tell doctors how it came to its conclusion.”
They fed the system electronic health records.
“The deep learning algorithm studied the case reports from 160,000 of these patients, and was given the directive: ‘Given a patient and a date, predict the mortality of that patient within 12 months from that date, using EHR data of that patient from the prior year.’ The system was trained to predict patient mortality within the next three to 12 months.”
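As a rough illustration of the task itself (and nothing like the actual Stanford model, which was a deep neural network trained on real EHR data), here is the shape of a 12-month mortality classifier, with made-up features and a simple logistic regression standing in:

```python
# Generic illustration of the prediction task, not Stanford's system:
# given features built from a patient's prior-year EHR data, output the
# probability of death within the next 12 months. Features and labels
# here are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Stand-in features per patient: age, admissions, diagnoses, etc.
X_train = rng.normal(size=(1000, 8))     # fake prior-year EHR features
y_train = rng.integers(0, 2, size=1000)  # fake label: died within 12 months?

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_patient = rng.normal(size=(1, 8))    # features as of a given date
p = clf.predict_proba(new_patient)[0, 1] # P(mortality within 12 months)
print(f"predicted 12-month mortality risk: {p:.2f}")
```

A deep network swapped in for the logistic regression would likely predict better, but it would also be much harder to ask why it made a given prediction.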
The AI system got it right in 9 out of 10 cases, and it only used data from a relatively small number of patients at two hospitals. But again, it is somewhat of a mystery how it did it. This is known as the “black box” problem of AI: getting good results without really knowing how.
Here's another use
Using AI, researchers looked at the cars in 50 million Google Street View images. They first trained the system to identify cars, then makes and models, eventually sorting the car images into 2,657 categories. The whole job took two weeks; they estimate it would have taken a human 15 years to do the same.
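Here is a sketch of what that two-stage pipeline might look like: detect the cars in a street image, then classify each crop into one of 2,657 make/model categories. The models below are generic torchvision components picked for illustration, not what the researchers used, and the classifier would first need training on labeled car photos.

```python
# Two-stage pipeline sketch: detect cars, then classify each crop into
# a make/model category. Illustrative component choices only.
import torch
import torch.nn.functional as F
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
classifier = torchvision.models.resnet50(num_classes=2657)  # make/model head
detector.eval()
classifier.eval()  # untrained here; would be trained on labeled car images

def car_categories(image):  # image: float tensor (3, H, W), values in [0, 1]
    with torch.no_grad():
        out = detector([image])[0]
        keep = out["labels"] == 3                  # COCO class 3 = "car"
        boxes = out["boxes"][keep].round().int().tolist()
        crops = [
            F.interpolate(image[:, y1:y2, x1:x2].unsqueeze(0), size=(224, 224))
            for x1, y1, x2, y2 in boxes
            if x2 > x1 and y2 > y1
        ]
        if not crops:
            return []
        return classifier(torch.cat(crops)).argmax(dim=1).tolist()
```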
They’re just looking at cars, right? Wrong. The researchers then combined the car data with other existing datasets and were able to see some interesting things:
“The system was able to accurately predict income, race, education and voting patterns at the ZIP code and precinct level in cities across the country.”
This can be useful, but also raises some privacy concerns:
“And as AI systems become more powerful, it will be possible to extract even more demographic information from apparently innocuous street views. Although that may be good news for academic researchers, datamining offline activities clearly creates new privacy problems at a time when people are already worried about what can be gleaned from datamining their online activities.”
AI-powered police
Ford has applied for a patent for an AI-powered police vehicle. The patent says the company would like to use AI to aim radar guns and cameras and to operate other equipment. You can have a human officer in the car, or not.
The car would be able to decide whether to issue a ticket or just a warning.
“The patent even specifically mentions the car being smart enough to find sneaky places to hide,” say the authors of the article. They also point out that not all patents become real products, but call it “unnerving” that this is even being contemplated.
What could possibly go wrong with a robo-cop? : )