
Unnoticed eye movements can be key for better self-driving cars: Study

by Vaishali Sharma

Andrea Benucci and colleagues at the RIKEN Center for Brain Science have built artificial neural networks that learn to recognise objects faster and more accurately.

The study, published in the journal PLOS Computational Biology, focuses on the unconscious eye movements we make constantly and demonstrates that they play an important role in our ability to recognise objects consistently.

These findings can be applied to machine vision, for example to help self-driving cars learn to recognise important features of the road.

Although we move our heads and eyes frequently throughout the day, and the visual information striking our retinas therefore changes continually, objects in the environment do not blur or become unrecognisable.

This perceptual stability is most likely enabled by copies of movement commands within the brain. These copies are thought to be transmitted throughout the brain each time we move, allowing it to account for our own motion and keep our perception stable.

Beyond stabilising perception, research suggests that eye movements and their associated motor copies may also help us recognise objects in the world, although how this happens is still unknown.

Benucci created a convolutional neural network (CNN) to address this question. The CNN was designed to classify objects in a visual scene as accurately as possible while the eyes are moving.

First, the network was trained to classify 60,000 black-and-white images into ten categories. Although it performed well on these images, when it was tested with shifted images that mimicked the naturally changing visual input produced by eye movements, performance plummeted to chance level.
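The article does not name the dataset, but 60,000 black-and-white images in ten categories matches the well-known MNIST benchmark, so the sketch below assumes MNIST and PyTorch, with a wrap-around pixel shift standing in for the retinal displacement an eye movement would cause. The model size, shift range, and training schedule are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class SmallCNN(nn.Module):
    """Plain image classifier: two conv blocks and a linear head."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        return self.fc(x.flatten(1))

def random_shift(imgs, max_px=6):
    """Wrap-around translation by a random (dx, dy) per image, standing
    in for the retinal displacement caused by an eye movement."""
    dx = torch.randint(-max_px, max_px + 1, (imgs.size(0),))
    dy = torch.randint(-max_px, max_px + 1, (imgs.size(0),))
    shifted = torch.stack([torch.roll(im, (int(y), int(x)), dims=(1, 2))
                           for im, x, y in zip(imgs, dx, dy)])
    return shifted, torch.stack([dx, dy], dim=1).float()

train_set = datasets.MNIST(".", train=True, download=True,
                           transform=transforms.ToTensor())
test_set = datasets.MNIST(".", train=False, download=True,
                          transform=transforms.ToTensor())

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One pass over the 60,000 training images, all unshifted.
model.train()
for xb, yb in DataLoader(train_set, batch_size=128, shuffle=True):
    opt.zero_grad()
    F.cross_entropy(model(xb), yb).backward()
    opt.step()

# Compare accuracy on unshifted vs. shifted test images: the shifted
# score should drop sharply, as the article describes.
model.eval()
plain = shift = total = 0
with torch.no_grad():
    for xb, yb in DataLoader(test_set, batch_size=256):
        plain += (model(xb).argmax(1) == yb).sum().item()
        sx, _ = random_shift(xb)
        shift += (model(sx).argmax(1) == yb).sum().item()
        total += yb.size(0)
print(f"unshifted acc: {plain/total:.3f}   shifted acc: {shift/total:.3f}")
```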

However, classification improved dramatically when the network was trained with shifted images, provided the direction and magnitude of the eye movements that produced the shifts were also fed into the network.
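One plausible way to incorporate that direction and magnitude is to concatenate the known shift vector (the "motor copy") with the image features before classification. This is a hypothetical sketch building on the previous one; the paper's actual architecture may integrate the signal differently:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNWithMotorCopy(nn.Module):
    """Same backbone as before, but the known (dx, dy) of the movement
    (the 'motor copy') is concatenated with the image features before
    the classification layer."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7 + 2, 10)  # +2 for the (dx, dy) copy

    def forward(self, x, motor_copy):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(torch.cat([x.flatten(1), motor_copy], dim=1))

# Training now pairs each shifted image with the movement that produced
# it, reusing random_shift from the previous sketch:
#   shifted, copy = random_shift(xb)
#   loss = F.cross_entropy(model(shifted, copy), yb)
```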

In particular, adding the eye movements and their motor copies to the network model allowed the system to better cope with visual noise in the images. “This advancement will help avoid dangerous mistakes in machine vision,” said Benucci. “With more efficient and robust machine vision, it is less likely that pixel alterations, also known as ‘adversarial attacks’, will cause, for example, self-driving cars to label a stop sign as a light pole, or military drones to misclassify a hospital building as an enemy target.”
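Such pixel alterations are commonly demonstrated with the fast gradient sign method (FGSM), in which every pixel is nudged slightly in the direction that most confuses the classifier. The following is a generic sketch of that standard technique, not the specific attack evaluated in the study:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.1):
    """Fast gradient sign method: perturb every pixel by +/- eps in the
    direction that most increases the loss, producing an image that looks
    nearly identical to a human but can flip the model's label."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```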


Bringing these results to real-world machine vision is not as difficult as it seems. As Benucci explains, “the benefits of mimicking eye movements and their efferent copies imply that ‘forcing’ a machine-vision sensor to have controlled types of movements, while informing the vision network in charge of processing the associated images about the self-generated movements, would make machine vision more robust, and akin to what is experienced in human vision.”

The next step in this research will involve collaboration with colleagues working on neuromorphic technologies. The idea is to implement silicon-based circuits built on the principles highlighted in this study and test whether they improve machine-vision performance in real-world applications.

