Now, robots that navigate using visual cues can deliver packages: researchers drew inspiration from bees' 'waggle dance' | India News

BENGALURU: Researchers from the Indian Institute of Science (IISc) and the University of Maryland have shown that robots can use visual cues, instead of the usual network-based communication, to navigate and deliver a package with the help of a human.
We have all, at some point, navigated a noisy room looking for a friend, our eyes scanning the crowd hoping to catch a glimpse of them so we can walk in that direction. Now imagine the same thing, but with robots instead of humans.
Recent work by Abhra Roy Chowdhury, Assistant Professor at IISc's Centre for Product Design and Manufacturing (CPDM), in collaboration with Kaustubh Joshi (University of Maryland), demonstrated this.
In the study published in “Frontiers in Robotics and AI”, the researchers said: “This research presents a new bio-inspired framework for two robots interacting together for a cooperative parcel delivery task with a human in the loop. It helps eliminate the need for network-based robot-robot interaction in constrained environments.”
Why research?
According to the researchers, humans are adept at using audio and visual cues for communication while performing collaborative tasks. In a noisy environment, however, humans must rely entirely on non-verbal communication, such as visual gestures, to coordinate.
“This research aims to implement a similar ability to use gesture interaction in a networked robot system. Traditional methods of robotic communication in a multi-robot system rely heavily on network connections through communication protocols. What if a robot system were to be deployed in an area where network resources were lacking? In such a scenario, robots will have to rely on other sensors for interaction,” they explained.
In this research, two robots were used to demonstrate a vision-based gesture interaction framework for performing a package handling task in cooperation with a human. “This approach allows each robot to be independent of a centralized controller or server,” the study says.
The “waggle dance” of bees
According to IISc, a human first uses hand gestures to signal to a messenger robot the destination to which a package should be delivered. The messenger robot then signals a package-handling robot by moving along paths of specific geometric shapes, such as a triangle, circle, or square, to communicate the direction and distance to that destination.
This was inspired by the “waggle dance” that bees use to communicate with each other. The robots use an object detection algorithm and depth perception to detect and react to these gestures.
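To make the idea concrete, here is a minimal sketch of how a receiving robot might turn an observed gesture into a delivery command. The shape-to-command mapping, the speed-to-distance scaling, and all names below are illustrative assumptions, not the paper's actual protocol:

```python
# Hypothetical decoding sketch: an observed gesture is summarized as
# (shape, orientation, speed) and mapped to a command, heading, and distance.
# The mapping and scaling here are assumptions for illustration only.

SHAPE_TO_COMMAND = {
    "triangle": "pick_up",
    "circle": "deliver",
    "square": "return_to_base",
}

def decode_gesture(shape, orientation_deg, speed_mps, speed_to_dist=10.0):
    """Map an observed gesture to a delivery instruction.

    orientation_deg: heading of the traced path, in degrees.
    speed_mps: traversal speed; distance is assumed proportional to speed.
    """
    command = SHAPE_TO_COMMAND.get(shape)
    if command is None:
        raise ValueError(f"unrecognized gesture shape: {shape}")
    return {
        "command": command,
        "heading_deg": orientation_deg % 360,  # normalize to [0, 360)
        "distance_m": speed_mps * speed_to_dist,
    }
```

Because each robot decodes what it sees rather than what it receives over a network, no central server or radio link is needed, which is the point of the framework.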
“Such interactive robots can be deployed for search-and-rescue operations in environments that humans cannot enter, for commercial applications such as package delivery, and in industrial settings where multiple robots talk to each other when communication networks are unreliable,” IISc said.
The researchers further explained that an individual robot is instructed to move in specific shapes with a particular orientation at a certain speed for the other robot to infer using object detection and depth perception.
“The shape is identified by calculating the area occupied by the detected polygonal route. A measure of area extent is calculated and used empirically to assign regions to specific shapes, and yields an overall accuracy of 93.3% in simulations and 90% in a physical setup,” they wrote.
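One common way to compute such an "area extent" is the ratio of a traced polygon's area to the area of its bounding box: a triangle fills roughly half its box, a circle about π/4 of it, and a square nearly all of it. The sketch below illustrates this idea with the shoelace formula; the thresholds are illustrative guesses, not the empirically tuned values from the paper:

```python
# Illustrative sketch: classify a traced path by its "area extent",
# i.e. polygon area divided by bounding-box area. Thresholds are assumed.

def shoelace_area(points):
    """Area of a simple polygon from its (x, y) vertices (shoelace formula)."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def area_extent(points):
    """Fraction of the axis-aligned bounding box filled by the polygon."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    bbox = (max(xs) - min(xs)) * (max(ys) - min(ys))
    return shoelace_area(points) / bbox if bbox else 0.0

def classify_shape(points):
    """Assign a shape label from extent regions.

    Ideal extents: triangle ~0.5, circle ~pi/4 (~0.785),
    axis-aligned square ~1.0. Cutoffs below are illustrative.
    """
    extent = area_extent(points)
    if extent < 0.65:
        return "triangle"
    if extent < 0.90:
        return "circle"
    return "square"
```

For example, the square with corners (0, 0), (2, 0), (2, 2), (0, 2) has extent 1.0 and is labeled "square", while the triangle (0, 0), (2, 0), (1, 2) has extent 0.5 and is labeled "triangle".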
Additionally, gestures were analyzed for the accuracy of the direction, distance, and target coordinates they conveyed on the map. The system gave an average position error of 0.349 in simulation and 0.461 in the physical experiment.

Briana R. Cross