U.S. Navy Is Funding Research to Build Teams of Surveillance Robots

Researchers at Cornell are using the funding to develop a system to enable groups of robots to share information as they move around and, if necessary, get help in interpreting what they see.
Published: May 1, 2017

ITHACA, N.Y. – Researchers at Cornell University are developing a system to connect camera robots, drones and smart cameras so they can readily share information and interpret the details they see.

As envisioned, the machines could one day perform jobs too dangerous for humans, such as disposing of landmines, cleaning up after a nuclear meltdown or surveying hurricane or flood damage, according to the university.

Cornell’s project, “Convolutional-Features Analysis and Control for Mobile Visual Scene Perception,” is funded by a four-year, $1.7 million grant from the U.S. Office of Naval Research (ONR). For their experiments, the researchers will use Segway robots equipped with cameras that can be programmed to pan, tilt and zoom (p/t/z). The research team is led by Professor Silvia Ferrari, director of the Laboratory for Intelligent Systems and Controls at Cornell.

“We are trying to teach robots to follow things of interest, like people, cars and animals, and to reason about what they are seeing, what kind of activity is happening and what the agent might be doing next,” Ferrari told Recode.


One day, the software might be able to manage and coordinate hundreds of robotic cameras, but for the initial experiment, the Cornell researchers plan to trial up to 12 camera systems operating simultaneously.

Over a large surveillance area, a network of mobile robots could be a big help, since a single robot – or even an array of mounted cameras – can’t capture everything, according to the Recode article.

Ferrari explained that the goal is to make the robotic cameras as autonomous as possible. The researchers are programming the surveillance robots to fuse all available video data to reason about a scene. The robots will also be connected to the web so they can access more data when they detect gaps in their understanding.
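To illustrate the idea described above – fusing detections from several cameras and falling back on an outside source when the fused picture has gaps – here is a minimal sketch. This is not the Cornell team's code; all function names, the confidence threshold and the `lookup` stub are hypothetical.

```python
# Illustrative sketch only (hypothetical names): several camera feeds report
# (label, confidence) detections, the robot fuses them into one scene summary,
# and any low-confidence label triggers a stubbed external ("web") lookup.

def fuse_detections(feeds):
    """Merge per-camera detections, keeping the best confidence per label."""
    scene = {}
    for feed in feeds:
        for label, confidence in feed:
            scene[label] = max(scene.get(label, 0.0), confidence)
    return scene

def fill_gaps(scene, lookup, threshold=0.5):
    """Where fused confidence is low, consult an external source for help."""
    for label, confidence in scene.items():
        if confidence < threshold:
            scene[label] = lookup(label)  # hypothetical external query
    return scene

# Example: two camera feeds; neither camera is sure about the "animal".
feeds = [
    [("person", 0.9), ("animal", 0.3)],
    [("car", 0.8)],
]
scene = fuse_detections(feeds)
scene = fill_gaps(scene, lookup=lambda label: 0.95)
```

In this toy run, the fused "animal" confidence (0.3) falls below the threshold, so only that label is resolved externally; the high-confidence "person" and "car" detections are kept as-is.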

Typically, surveillance systems send data back to a human operator, who interprets the scene to make tactical decisions about what other information is needed and how to collect it.

“Our intention is to automate that side of the network so that the robots are actually in charge of perception,” said Ferrari.

The surveillance robots will communicate with each other in a computer language, Ferrari says, but will also be able to translate what they’re thinking into “some syntax” that a human can understand.

“This is basically the only time they’ll be interfacing with a human being,” says Ferrari.

To get robots to reason and make decisions about what to pursue and where to go, the team at Cornell is building artificial intelligence (AI) navigation algorithms that are coupled with the ability to perceive and understand the information they are collecting.

That means these robots won’t be programmed simply to avoid obstacles or get from point A to point B, like most navigation algorithms; they will also be able to deduce what needs to be focused on and which agent in their video feeds is the right one to pursue.
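The coupling of perception and navigation described above can be sketched very simply: rank the detected agents by an interest score, then navigate toward the chosen one rather than a fixed waypoint. This is a hypothetical toy, not the team's algorithm; the `INTEREST` priorities, function names and grid motion model are all assumptions.

```python
# Illustrative sketch only: perception-driven target selection feeding a
# one-step navigation decision, in the spirit of the approach described.

INTEREST = {"person": 3, "car": 2, "animal": 1}  # assumed priority ordering

def choose_target(detections):
    """Pick the detected agent with the highest interest score."""
    return max(detections, key=lambda d: INTEREST.get(d["label"], 0))

def step_toward(robot_xy, target_xy):
    """Take one unit grid step along the axis that reduces distance most."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    if abs(dx) >= abs(dy):
        return (robot_xy[0] + (1 if dx > 0 else -1), robot_xy[1])
    return (robot_xy[0], robot_xy[1] + (1 if dy > 0 else -1))

detections = [
    {"label": "animal", "pos": (2, 0)},
    {"label": "person", "pos": (5, 4)},
]
target = choose_target(detections)   # the person outranks the animal
pos = step_toward((0, 0), target["pos"])
```

The point of the toy is the ordering: what to pursue is decided from the perceived scene first, and only then does the motion planner act on it.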

For now, though, this roving robot surveillance fleet technology is still being built and there’s a lot of work to do before it’s ready to be deployed in the field.

Ferrari says her team should have a working demonstration in the next three years.

Strategy & Planning Series