Intelligent Autonomous Flying Robots Learn and Map Environment As They Fly 37
An anonymous reader writes with this story about a machine-learning project out of the UK's University of Sheffield: Using simple drones, the researchers have created automatic-control software that enables the "flying robot" to learn about its surroundings using a camera and an array of sensors. The robot starts with no information about its environment or the objects within it. But by overlaying different frames from the camera and selecting key reference points within the scene, it builds up a 3D map of the world around it. Other sensors pick up barometric and ultrasonic data, which give the robot additional clues about its environment. All this information is fed into autopilot software that allows the robot not only to navigate safely, but also to learn about nearby objects and navigate to specific items.
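The summary's mention of feeding barometric and ultrasonic data into the autopilot can be illustrated with a toy complementary filter. This is only a sketch of one common fusion approach, not the Sheffield team's actual code; all names and values here are made up:

```python
# Toy sketch of fusing two altitude sources, as a flight controller might.
# The barometer drifts slowly but works at any height; the ultrasonic
# rangefinder is accurate but only at short range. A complementary filter
# blends them with a fixed weight. Purely illustrative names and values.

def fuse_altitude(baro_alt, ultra_alt, alpha=0.9):
    """Weighted blend: alpha trusts the ultrasonic reading,
    (1 - alpha) the barometric one."""
    return alpha * ultra_alt + (1.0 - alpha) * baro_alt

# Three (barometer, ultrasonic) reading pairs, in meters.
readings = [(10.2, 10.0), (10.4, 10.1), (10.1, 10.05)]
fused = [fuse_altitude(b, u) for b, u in readings]
```

Real autopilots use proper state estimators (typically an extended Kalman filter), but the underlying idea of weighting each sensor by how much you trust it is the same.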
Re: (Score:2)
I want some of whatever it is you're smoking!
Applications (Score:1)
It would be nice if these were fed back in real time to a remote monitor. Maybe a 21st century canary in a coal mine? Applications for search and rescue, scouting real time optimal traffic routes for police / fire / paramedics.
Does anybody know more details about this project? (Score:2)
Cool project, but the article/video is short on detail. I'd like to know more about the way this robot is actually learning. Is it a neural network? How does it know an oscilloscope is an oscilloscope? Does it use binocular vision to recognize distance? Ultrasound? Both? What type of computing hardware is on board? For that matter, what type of quadcopter is this? And more importantly, where can I get one?
Re: (Score:1)
So the robot starts with no information about its environment and the objects within it. By overlaying different frames from the camera and selecting key reference points within the scene, it builds up the 3D map of the world around it. Barometric and ultrasonic sensors give the robot additional clues about its environment. All this information is fed into autopilot software to allow the robot to navigate safely, but also to learn about the objects nearby and navigate to specific items.
Instead of a neural n
Different from the SLAM stuff at Carnegie Mellon (Score:1, Interesting)
Carnegie Mellon folks developed the SLAM algorithm (and variants of it) some years back to do live mapping on their quadrotors. It has been used by almost everybody doing autonomous flying robots; it's hard to find anyone in the field who wasn't influenced by that work. Some of their work used laser scanners that would map the surroundings and identify walls -- building out a maze of sorts as the robot explored. Heck, on seeedstudio.com you can pick up a 360-degree 2D laser scanner (LIDAR) and an algorithm to build your own.
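The laser-scanner mapping the parent describes (scan the surroundings, mark walls, build a maze as you explore) boils down to an occupancy grid. A deliberately minimal sketch with made-up names, which only marks cells hit by laser returns and skips the free-space ray-tracing and log-odds bookkeeping a real SLAM system does:

```python
import math

def update_grid(grid, pose, scan, cell=1.0):
    """Mark the grid cell hit by each laser return as occupied.
    pose is (x, y, heading); scan is a list of (angle, distance)
    returns relative to the heading. Much simplified on purpose."""
    x, y, theta = pose
    for angle, dist in scan:
        gx = int((x + dist * math.cos(theta + angle)) / cell)
        gy = int((y + dist * math.sin(theta + angle)) / cell)
        grid.add((gx, gy))
    return grid

grid = set()
# Robot at the origin facing +x; one return straight ahead at 3 m,
# one to the left at 2 m.
update_grid(grid, (0.0, 0.0, 0.0), [(0.0, 3.0), (math.pi / 2, 2.0)])
```

As the robot moves and re-scans, occupied cells accumulate into exactly the sort of wall map the comment describes.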
Re: (Score:3)
Re: (Score:2)
They used PTAM and replicated Jens Nyman's thesis from two years ago:
https://github.com/nymanjens/a... [github.com]
Re: (Score:3)
SLAM? (Score:5, Informative)
Doing this is called Simultaneous Localization and Mapping, or SLAM. There's been enormous progress on it in the last decade. The basic idea is to take a large number of images of the same scene, possibly with inaccurate data about where they were taken, and build up a 3D model. It sort of works most of the time. Some algorithms do well indoors, especially where there are lots of strong edges and corners -- those are easy features to lock onto. Outdoors is tougher, although outdoors you can usually use GPS. It's a basic capability robots need.
The video is frustrating. There's no comparison with previous work. Is this an advance, or did they just use known algorithms? [openslam.org]
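For flavor, the SLAM loop described above (predict where you are, then reconcile that guess against landmarks already in your map while refining the map itself) can be sketched in one dimension. This is a toy, with made-up names, using naive error-splitting where real systems use an EKF or graph optimization:

```python
def slam_step(pose, landmarks, odom, observations):
    """One predict/correct cycle of a toy 1-D SLAM.
    pose: scalar position estimate; landmarks: {id: position} map;
    odom: distance the robot thinks it moved;
    observations: [(landmark_id, relative_distance)] seen this step."""
    pose += odom                           # predict from odometry
    for lid, rel in observations:
        if lid in landmarks:
            # Known landmark: split the disagreement between the
            # pose estimate and the map.
            err = (pose + rel) - landmarks[lid]
            pose -= err / 2
            landmarks[lid] += err / 2
        else:
            landmarks[lid] = pose + rel    # first sighting: add to map
    return pose, landmarks

landmarks = {}
# Step 1: move 1 m, see a corner 4 m ahead (gets mapped at 5.0).
pose, landmarks = slam_step(0.0, landmarks, 1.0, [("corner", 4.0)])
# Step 2: odometry over-reports (1.2 m); re-sighting the corner at
# 3 m pulls the pose back toward the truth.
pose, landmarks = slam_step(pose, landmarks, 1.2, [("corner", 3.0)])
```

The point of the toy is the "simultaneous" part: each observation corrects the robot's position and the map at the same time.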
Re: (Score:1)
If I can expand on what you said: using a large collection of photographs to generate a 3D model is specifically known as "Structure from Motion" (SfM) or photogrammetry. It's a popular thing to do with aerial photos taken by drone/UAS for GIS data gathering, and there were a number of applications for that purpose, although many of the free online ones had been commercialized last I went looking. 123D Catch is one such product from Autodesk, although if you want more control over the pipeline then doing it yo
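The geometric core of SfM/photogrammetry is recovering depth from how far a feature appears to shift between two camera positions (its disparity). A minimal sketch under a pinhole stereo model; the numbers below are illustrative, not from the article:

```python
# Pinhole stereo model: a feature's depth is focal length times
# camera baseline, divided by the pixel disparity between views.
# Illustrative values only.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """depth = f * B / d (meters)."""
    return focal_px * baseline_m / disparity_px

# A feature that shifts 40 px between two frames taken 0.1 m apart,
# with an 800 px focal length, sits about 2 m away.
depth = depth_from_disparity(800.0, 0.1, 40.0)
```

Full SfM pipelines generalize this to arbitrary camera motion by matching features across many frames and solving for cameras and points jointly (bundle adjustment), but the depth-from-parallax intuition is the same.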
Re: (Score:2)
Unfortunately (Score:5, Funny)
They tend to bump into the same walls repeatedly before learning they're there and proceeding to bump into the adjacent wall.
Re: (Score:1)
What could possibly go wrong? (Score:2)
Re: (Score:2)
As soon as I saw the article I thought that it was just the sort of thing they had in Prometheus. It would be extremely useful for the military to be able to map out the inside of a building. Of course, you won't know what's behind closed doors, but they'll add something for that next.
off the shelf software (Score:4, Informative)
They are using the PTAM package from the University of Oxford:
http://www.robots.ox.ac.uk/~gk... [ox.ac.uk]
What's more, they are using the off-the-shelf ardrone-PTAM package
https://github.com/nymanjens/a... [github.com]
and replicating something done TWO YEARS AGO by Jens Nyman (from a Belgian university):
https://www.youtube.com/watch?... [youtube.com]
so W T F
Re: (Score:2)
skynet? (Score:2)
Yay, another likely tool of war (Score:1)
Not like these could easily be repurposed for warfare and the domination of other peoples. Just what we need more of.
Re: (Score:2)
It's just the evolution of terrain-following radar.
No one here remembers the F-111 family, capable of flying on autopilot through canyons at penetration speed, wingtips just feet from the walls?