Many of us who find ourselves challenged in one area or another have turned to YouTube tutorials to help us complete a task.
Now researchers from the University of Maryland, in collaboration with Australia’s NICTA, are taking machine learning deeper, using the popular video network as a database of information to enhance artificial intelligence.
The abstract of their paper, entitled “Robot Learning Manipulation Action Plans by ‘Watching’ Unconstrained Videos from the World Wide Web,” states:
In order to advance action generation and creation in robots beyond simple learned schemas we need computational tools that allow us to automatically interpret and represent human actions. This paper presents a system that learns manipulation action plans by processing unconstrained videos from the World Wide Web. Its goal is to robustly generate the sequence of atomic actions of seen longer actions in video in order to acquire knowledge for robots.
The lower level of the system consists of two convolutional neural network (CNN) based recognition modules, one for classifying the hand grasp type and the other for object recognition. The higher level is a probabilistic manipulation action grammar based parsing module that aims at generating visual sentences for robot manipulation.
Experiments conducted on a publicly available unconstrained video dataset show that the system is able to learn manipulation actions by “watching” unconstrained videos with high accuracy. (Full paper available here)
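In rough outline, the paper’s two-level design can be pictured as a pipeline: per-frame CNN classifiers feed a grammar-based parser that emits atomic actions. The sketch below only illustrates that structure; the stub classifiers and toy rule table stand in for the trained CNN modules and probabilistic grammar described in the paper, and none of the names are the authors’ actual code:

```python
# Illustrative sketch of the two-level architecture, with stubs in
# place of the paper's trained CNNs and probabilistic grammar.

# Lower level: per-frame recognition (stubbed with fixed outputs).
def classify_grasp(frame):
    """CNN module 1: hand grasp type (stub)."""
    return "power-grasp"

def classify_object(frame):
    """CNN module 2: object identity (stub)."""
    return "knife"

# Higher level: a manipulation-action grammar, reduced here to a
# trivial rule table mapping (grasp, object) pairs to atomic actions.
ACTION_RULES = {
    ("power-grasp", "knife"): "cut",
    ("precision-grasp", "bowl"): "stir",
}

def parse_visual_sentence(frames):
    """Turn a frame sequence into (subject, action, object) triples."""
    sentence = []
    for frame in frames:
        grasp, obj = classify_grasp(frame), classify_object(frame)
        action = ACTION_RULES.get((grasp, obj), "idle")
        sentence.append(("hand", action, obj))
    return sentence

print(parse_visual_sentence(frames=[None]))  # [('hand', 'cut', 'knife')]
```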
The goal of accelerating machine learning is taking various forms. Earlier this year it was announced that University of Washington computer scientists were working on crowdsourcing as a comprehensive way to teach robots how to complete tasks. By learning from a larger online community, rather than from a single set of instructions, a robot’s problem-solving comes closer to the varied, real-world experience each human typically has.
“Because our robots use machine-learning techniques, they require a lot of data to build accurate models of the task. The more data they have, the better model they can build. Our solution is to get that data from crowdsourcing,” said Maya Cakmak, a UW assistant professor of computer science and engineering.
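To see why more data helps, consider the simplest possible aggregator over crowdsourced demonstrations: a majority vote across whole step sequences, so that noisy or incomplete demonstrations get outvoted. The demonstrations and step names below are invented for illustration and are not UW’s actual pipeline:

```python
# Toy aggregator: many noisy demonstrations yield a more reliable
# task model than any single instruction set.
from collections import Counter

demonstrations = [
    ["grasp-cup", "move-to-sink", "pour"],
    ["grasp-cup", "move-to-sink", "pour"],
    ["grasp-cup", "pour"],                 # a noisy, incomplete demo
    ["grasp-cup", "move-to-sink", "pour"],
]

def consensus_plan(demos):
    """Majority-vote over whole step sequences (simplest aggregator)."""
    counts = Counter(tuple(d) for d in demos)
    plan, _ = counts.most_common(1)[0]
    return list(plan)

print(consensus_plan(demonstrations))
# ['grasp-cup', 'move-to-sink', 'pour']
```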
LEVAN (Learning EVerything about ANything) is another University of Washington project that uses the Internet itself as a database – in this case, the vast array of books and images – in order to develop a more robust and comprehensive dataset for accelerated learning.
…the program searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly in great detail.
“It is all about discovering associations between textual and visual data,” said Ali Farhadi, a UW assistant professor of computer science and engineering. “The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them.” (Source)
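In spirit, that coupling can be sketched as a two-step loop: mine textual variations of a concept, then gather images for each variation. The functions mine_ngrams and fetch_images below are hypothetical stand-ins for the book-mining and image-search components the project describes:

```python
# Rough sketch of LEVAN's idea: couple each textual variation of a
# concept with its own set of images.

def mine_ngrams(concept):
    """Stand-in for mining phrase variations from millions of books."""
    return [f"jumping {concept}", f"sleeping {concept}", f"baby {concept}"]

def fetch_images(phrase, limit=5):
    """Stand-in for querying Web image search for a phrase."""
    return [f"{phrase.replace(' ', '_')}_{i}.jpg" for i in range(limit)]

def learn_everything_about(concept):
    """Build a browsable map from each variation to its images."""
    return {phrase: fetch_images(phrase) for phrase in mine_ngrams(concept)}

model = learn_everything_about("horse")
for phrase, images in model.items():
    print(phrase, "->", len(images), "images")
```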
Then of course there is the even more ambitious project of establishing a language through which computers can share what they learn with other A.I. systems. Called the Wikipedia for Robots, it is essentially a cloud network where robots can do their own research, communicate with one another, and collectively increase their intelligence, much as humans do through interaction.
RoboEarth is a similar concept built on machine cooperation:
RoboEarth’s proof-of-concept demonstration is simple for humans, but hard for robots: serve fruit juice to a random patient in a hospital bed. In a fake hospital room at Eindhoven Technical University in the Netherlands, one robot mapped out the space, located the “patient’s” bed and a nearby carton of juice, then sent that data to RoboEarth’s cloud.
A second robot, accessing the data supplied by robot number one, unerringly picked up the juice and carried it to the bed. (Source)
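The hand-off at the heart of that demo is easy to sketch: one robot publishes what it learned to a shared store, and a second robot reads it back instead of re-exploring. The dict-backed “cloud” below is a stand-in for RoboEarth’s actual database, and the room layout is invented:

```python
# Toy version of the RoboEarth hand-off: robot A uploads its map,
# robot B downloads it and acts without re-mapping the room.

cloud = {}  # stand-in for the shared RoboEarth knowledge base

def robot_a_explore():
    """Robot 1: map the room and publish what it found."""
    cloud["hospital-room-1"] = {
        "bed": (2.0, 3.5),
        "juice-carton": (1.2, 0.8),
    }

def robot_b_serve():
    """Robot 2: reuse robot 1's map instead of exploring again."""
    room = cloud["hospital-room-1"]
    print(f"Picking up juice at {room['juice-carton']}")
    print(f"Delivering to bed at {room['bed']}")

robot_a_explore()
robot_b_serve()
```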
RoboHow seeks to synthesize all of the above, taking into account the expanding Internet of Things:
RoboHow has to make explicit many parts of complex procedures that humans can simply infer — like how to turn on an oven, or where to find needed ingredients. The plan is to eventually enable robots to search the internet for info or instructions they need to complete assigned tasks without external intervention.
For now, people have to identify, demonstrate and feed RoboHow the right data, as bots left to their own devices would inevitably grab bad or incomplete information. So, it seems that our future robot overlords still need us meatbags around … for a little while longer, at least. (Source)
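The gap RoboHow targets can be illustrated with a toy check: expand each web instruction into primitive actions, then flag any step the robot cannot yet ground without a human demonstration. The step lists, expansions, and primitive set below are all invented for illustration:

```python
# Toy illustration of making implicit steps explicit: a human how-to
# omits details ("turn on the oven") that a robot must spell out.

KNOWN_PRIMITIVES = {"grasp", "pour", "place", "press-button"}

web_instructions = ["preheat oven", "pour batter", "place pan in oven"]

# Hand-authored expansions, standing in for the demonstrations people
# currently have to feed the system.
EXPANSIONS = {
    "preheat oven": ["press-button", "wait"],
    "pour batter": ["grasp", "pour"],
    "place pan in oven": ["grasp", "place"],
}

for step in web_instructions:
    primitives = EXPANSIONS.get(step, [])
    missing = [p for p in primitives if p not in KNOWN_PRIMITIVES]
    status = "needs human demo" if missing else "executable"
    print(f"{step!r}: {primitives} -> {status}")
```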
Clearly a full-spectrum approach is being taken in the development of robotic intelligence. The only question that seems to remain is the timetable: when will robots surpass humans and become a Superintelligence?