A new and improved camera inspired by the human eye

A team led by computer scientists at the University of Maryland has invented a camera mechanism that improves the way robots see and react to the world around them. Inspired by the way the human eye works, their camera system mimics the small involuntary movements the eye uses to maintain clear and stable vision over time. The team's prototyping and testing of the camera — called the Artificial Microsaccade-Enhanced Event Camera (AMI-EV) — is detailed in a paper published in the journal Science Robotics in May 2024.

“Event cameras are a relatively new technology better at tracking moving objects than traditional cameras, but today's event cameras struggle to capture sharp, blur-free images when there's a lot of motion,” said lead author Botao He, a computer science Ph.D. student at UMD. “This is a huge problem because robots and many other technologies — such as self-driving cars — rely on accurate and timely images to respond correctly to changing environments. So, we asked ourselves: how do humans and animals keep their vision focused on a moving object?”

For He's team, the answer was microsaccades: small, rapid eye movements that occur involuntarily when a person tries to focus their vision. Through these minute yet continuous movements, the human eye can keep an object and its visual texture — such as color, depth, and shadow — in precise focus over time.

“We thought that just as our eyes need these small movements to stay focused, a camera could use a similar principle to capture sharp, clear images without motion blur,” He said.

The team successfully replicated microsaccades by inserting a rotating prism inside the AMI-EV to redirect light beams captured by the lens. The prism's continuous rotation mimics the movements that naturally occur within the human eye, allowing the camera to stabilize the textures of a recorded object in the same way a human would. The team then developed software to compensate for the prism's movement within the AMI-EV, consolidating stable images from the shifting light.
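The paper's actual algorithm is not reproduced here, but the compensation step described above can be sketched in simplified form: if the prism's rotation rate and the circular image shift it induces are known, each event's pixel coordinates can be mapped back into a stabilized frame by subtracting the shift at that event's timestamp. All function and parameter names below are illustrative assumptions, not the authors' API.

```python
import math

def compensate_event(x, y, t, radius_px, rev_per_sec):
    """Map a raw event back into stabilized coordinates.

    Assumed model (an illustration, not the AMI-EV's exact geometry):
    the rotating prism displaces the whole image along a circle of
    `radius_px` pixels, completing `rev_per_sec` revolutions per second.
    """
    phase = 2.0 * math.pi * rev_per_sec * t   # prism angle at event time t
    dx = radius_px * math.cos(phase)          # current x-shift of the image
    dy = radius_px * math.sin(phase)          # current y-shift of the image
    return x - dx, y - dy                     # event in stabilized coordinates
```

Under this toy model, events triggered by the same static scene point at different prism phases all map back to the same stabilized pixel, which is the essence of undoing the deliberate "microsaccade" motion in software.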

Study co-author Yiannis Aloimonos, a professor of computer science at UMD, sees the team's invention as a major step forward in the field of robotic vision.

“Our eyes take pictures of the world around us and those pictures are sent to our brain, where the images are analyzed. Perception happens through that process and that's how we understand the world,” explained Aloimonos, who is also director of the Computer Vision Laboratory at the University of Maryland Institute for Advanced Computer Studies (UMIACS). “When you're working with robots, replace the eyes with a camera and the brain with a computer. Better cameras mean better perception and reactions for robots.”

The researchers also believe their innovation could have significant implications beyond robotics and national defense. Scientists working in industries that rely on accurate image capture and shape detection are constantly looking for ways to improve their cameras — and AMI-EV could be the key solution to many of the problems they face.

“With their unique features, event sensors and AMI-EV are poised to take center stage in the realm of smart wearables,” said research scientist Cornelia Fermüller. “They have distinct advantages over classical cameras — such as superior performance in extreme lighting conditions, low latency and low power consumption. These features are ideal for virtual reality applications, for example, where a seamless experience and rapid computation of head and body movements are essential.”

In early testing, AMI-EV was able to accurately capture and display movement in a variety of contexts, including human pulse detection and rapidly moving shape identification. The researchers also found that AMI-EV could capture motion at tens of thousands of frames per second, outperforming most commercially available cameras, which capture 30 to 1,000 frames per second on average. This smoother and more realistic depiction of motion could prove pivotal in anything from creating more immersive augmented reality experiences and better security monitoring to improving how astronomers capture images in space.

“Our novel camera system can solve many specific problems, like helping a self-driving car figure out what on the road is a human and what isn't,” Aloimonos said. “As a result, it has many applications that much of the general public already interacts with, like autonomous driving systems or even smartphone cameras. We believe that our novel camera system is paving the way for more advanced and capable systems to come.”
