By Aaron Brown
An intelligent machine capable of anticipating your next move minutes in advance sounds like the stuff of nightmares – but is now a reality.
Researchers have taught an AI to recognise patterns in people’s actions, allowing it to accurately predict the next move in a sequence minutes in advance.
The software, which was built by a team at the University of Bonn in Germany, was taught to anticipate actions by watching hours of cooking videos.
Dr Jürgen Gall believes the intelligent software will eventually be able to predict your actions ‘hours before they happen’.
If the team manages to fine-tune the algorithm to anticipate actions that far in advance, it’s possible to imagine a slew of real-world applications, from home automation gadgets to Big Brother-esque surveillance.
Researchers told the AI what was happening in the video for the first 20 per cent of the clip – and then asked the algorithm to predict the next action before it took place
To teach the AI to accurately predict actions before they take place, Dr Jürgen Gall and his team focused on cooking videos.
Using pre-recorded videos of people preparing a meal, the researchers were able to teach the machine to recognise each action being performed on-screen, including cutting tomatoes, adding salt and flipping a pancake.
In total, some 40 videos were used to teach the AI.
Each of these recordings was around six minutes long and contained some 20 different actions.
After learning from around four hours of footage, the algorithm was able to recognise the sequence of events needed to prepare a dish, which is far from trivial given the variety of approaches and recipes in the pre-recorded clips.
‘Then we tested how successful the learning process was,’ explained Dr Jürgen Gall.
‘For this we confronted the software with videos that it had not seen before.’
Like before, the machine was told what was happening in the video for the first 20 or 30 per cent of the clip.
The algorithm was then asked to predict the next action before it took place on-screen.
The machine flagged up its ‘observation’ before the action, drawing on its knowledge of the recipe and its understanding of how similar sequences had played out before.
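The observe-then-predict protocol can be sketched with a deliberately simple baseline. The transition-counting approach below is our illustrative stand-in, not the researchers’ actual method – their system uses far more sophisticated machine learning models – and the recipe steps are invented example data:

```python
from collections import Counter, defaultdict

# Toy sketch of observe-then-predict: training videos are reduced to
# ordered lists of action labels, and a simple transition table records
# which action tends to follow which. This stands in for the neural
# models used in the actual study.

training_videos = [
    ["crack_egg", "stir", "pour_batter", "flip_pancake"],
    ["crack_egg", "stir", "pour_batter", "add_salt", "flip_pancake"],
]

# Count how often each action is followed by each other action.
transitions = defaultdict(Counter)
for video in training_videos:
    for current, nxt in zip(video, video[1:]):
        transitions[current][nxt] += 1

def predict_next(observed):
    """Predict the most likely next action given the observed prefix."""
    candidates = transitions.get(observed[-1])
    return candidates.most_common(1)[0][0] if candidates else None

# Observe the opening portion of an unseen video, then forecast onwards.
print(predict_next(["crack_egg", "stir"]))  # prints "pour_batter"
```

In the real system, the observed 20 to 30 per cent of the clip plays the role of the prefix, and the model forecasts not just the next action but its timing and duration.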
The AI was able to correctly anticipate actions in the near future with surprising accuracy.
Dr Gall said: ‘Accuracy was over 40 percent for short forecast periods, but then dropped the more the algorithm had to look into the future.’
For activities more than three minutes in the future, the algorithm was only able to accurately predict the outcome in 15 per cent of cases.
Researchers only considered the prediction correct if both the activity and its timing were correct.
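The strictness of that scoring rule can be made concrete with a small sketch. The frame-by-frame representation and function name below are our illustrative assumptions, not the researchers’ code: a forecast for a given moment only counts if the predicted action matches what actually happens then.

```python
# Toy sketch of the scoring idea described above: each future moment
# (here, a frame) carries one action label, and a prediction scores
# only where both the activity and its timing line up with the truth.

def frame_accuracy(predicted, ground_truth):
    """Fraction of future frames whose predicted action label is correct.

    Both arguments are lists of action labels, one per frame.
    """
    if not ground_truth:
        return 0.0
    correct = sum(1 for p, g in zip(predicted, ground_truth) if p == g)
    return correct / len(ground_truth)

# Example: the right action predicted, but its start mistimed by two
# frames, so only four of the six future frames score as correct.
truth = ["cut_tomato"] * 4 + ["add_salt"] * 2
guess = ["cut_tomato"] * 6
print(frame_accuracy(guess, truth))  # 4 of 6 frames correct (about 0.67)
```

Under a rule like this, predicting the right activity at slightly the wrong time still costs accuracy, which helps explain why scores fall so sharply for longer forecast horizons.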
Gall and his colleagues want the study, which will be presented at the Conference on Computer Vision and Pattern Recognition in Salt Lake City on June 19, to be understood as a first step into the new field of activity prediction.
According to the researchers, the algorithm performed noticeably worse when it was forced to recognise the actions in the first part of the video itself, instead of being told.
The project was spearheaded by Dr Jürgen Gall, who worked alongside Yazan Abu Farha to teach the AI to anticipate actions ahead of time. The pair created the machine at the Institute of Computer Science at the University of Bonn, Germany
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge.
A newer approach, known as generative adversarial networks, pits two AI systems against each other, which allows them to learn from each other.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
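The ‘learning from examples’ loop at the heart of an ANN can be illustrated with the smallest possible case: a single artificial neuron trained with the classic perceptron rule. This toy is our simplification – real systems stack millions of such units – and here it merely learns to recognise a trivial pattern (output 1 only when both inputs are 1):

```python
# A single artificial neuron trained with the perceptron rule.
# Training data: input pairs and the desired output for each.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs crosses zero."""
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Repeatedly nudge the weights toward the correct answers.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # prints [0, 0, 0, 1]
```

The same principle – adjust internal weights whenever the output is wrong – scales up, with many layers and vastly more data, to the speech, text and image recognition systems described above.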
Artificial Intelligence (AI) capable of accurately predicting your next move minutes in advance sounds like the realm of science fiction, like the Terminator film franchise, but researchers from the University of Bonn have shown early promise in the field
In future, Dr Gall wants the algorithm to be able to forecast actions much further in advance.
‘We want to predict the timing and duration of activities – minutes or even hours before they happen,’ he claims.
The researchers anticipate the algorithm will have initial applications in smart home appliances, where it could dramatically improve automation.
For example, AI-powered kitchen appliances could pass over ingredients as soon as you need them, or pre-heat the oven in anticipation of the next step in the recipe it has identified.
However, it is possible to imagine the AI being used in security systems to create advanced surveillance akin to the fictional Big Brother in George Orwell’s 1984.
The University of Bonn study was developed as part of a research group dedicated to the prediction of human behaviour and financially supported by the German Research Foundation (DFG).
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 percent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.
As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.
They could ‘go rogue’
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when those algorithms will fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.
Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.