Recognizing the partially seen

Martin Schrimpf. Credit: Kris Brewer

When we open our eyes in the morning and take in that first scene of the day, we give little thought to the fact that our brain is processing the objects within our field of view with great efficiency, compensating for a lack of information about our surroundings so that we can go about our daily routines. The glass of water you left on the nightstand when preparing for bed is now partially blocked from your line of sight by your alarm clock, yet you know that it is a glass.

This seemingly simple human ability to recognize partially occluded objects, defined here as one object in a 3-D space blocking another object from view, has long been a difficult problem for the computer vision community. Martin Schrimpf, a graduate student in the DiCarlo lab in the Department of Brain and Cognitive Sciences at MIT, explains that machines have become increasingly adept at recognizing whole items quickly and confidently, but when something covers part of an item from view, models have far more trouble recognizing it accurately.

"For models from computer vision to function in everyday life, they need to be able to digest occluded objects just as well as whole ones—after all, when you look around, most objects are partially hidden behind another object," says Schrimpf, co-author of a paper on the subject that was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the new study, he says, "we dug into the underlying computations in the brain and then used our findings to build computational models. By recapitulating visual processing in the human brain, we are thus hoping to also improve models in computer vision."

MIT graduate student Martin Schrimpf and Professor Gabriel Kreiman of Boston Children's Hospital and Harvard Medical School describe their latest work showing how recurrent computations may help the brain solve the fundamental challenge of pattern completion. Credit: Center for Brains, Minds, and Machines

How are we as humans able to perform this everyday task so effortlessly, identifying whole scenes quickly and accurately after seeing only pieces of them? Researchers in the study started with the human visual cortex as a model for how to improve the performance of machines in this setting, says Gabriel Kreiman, an affiliate of the MIT Center for Brains, Minds, and Machines. Kreiman is a professor of ophthalmology at Boston Children's Hospital and Harvard Medical School and was the lead principal investigator for the study.

In their paper, "Recurrent computations for visual pattern completion," the team showed how they developed a computational model, inspired by physiological and anatomical constraints, that was able to capture the behavioral and neurophysiological observations during pattern completion. In the end, the model provided useful insights into how to make inferences from minimal information.
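The paper itself specifies the actual architecture and training procedure; purely to illustrate the general idea of recurrent pattern completion described above, the sketch below shows a toy Python/NumPy model in which a feedforward encoding of a partially occluded input is refined over several recurrent time steps before being classified. All names, dimensions, and weights here are hypothetical stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
n_pixels, n_features, n_classes, n_steps = 64, 32, 5, 4

# Randomly initialized weights stand in for a trained network.
W_ff = rng.normal(scale=0.1, size=(n_features, n_pixels))    # feedforward encoder
W_rec = rng.normal(scale=0.1, size=(n_features, n_features))  # recurrent weights
W_out = rng.normal(scale=0.1, size=(n_classes, n_features))   # linear readout

def relu(x):
    return np.maximum(x, 0.0)

def recognize(image, steps=n_steps):
    """Classify an input after refining its representation recurrently.

    A purely feedforward model would use only the initial pass; the extra
    recurrent steps let the representation incorporate context that the
    occluder removed from the input.
    """
    feedforward = relu(W_ff @ image)   # initial, possibly impoverished encoding
    h = feedforward
    for _ in range(steps):
        # Recurrent update: combine the current state with the feedforward drive.
        h = relu(W_rec @ h + feedforward)
    logits = W_out @ h
    return int(np.argmax(logits))

# Simulate a whole object and an occluded version (half the pixels zeroed out).
whole = rng.normal(size=n_pixels)
occluded = whole.copy()
occluded[: n_pixels // 2] = 0.0

print("whole object    ->", recognize(whole))
print("occluded object ->", recognize(occluded))
```

With trained rather than random weights, the intuition is that the recurrent steps allow the representation of an occluded object to converge toward that of the whole object, something a single feedforward pass cannot do.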

More information: Hanlin Tang et al. Recurrent computations for visual pattern completion, Proceedings of the National Academy of Sciences (2018). DOI: 10.1073/pnas.1719397115

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

