
It accepts that each neuron in the subpopulation is well approximated by a set of NLN parameters, but that many of these myriad parameters are highly idiosyncratic to each subpopulation. Our hypothesis is that each ventral stream cortical subpopulation uses at least three common, genetically encoded mechanisms (described below) to carry out that meta job description, and that together those mechanisms direct it to “choose” a set of input weights, a normalization pool, and a static nonlinearity that lead to improved subspace untangling.
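To make the parameterization concrete, the sketch below implements one NLN-style subpopulation in Python: a set of input weights, a divisive normalization pool, and a static output nonlinearity. The specific choices here (an energy-based pool, a halfwave rectifier, and all numerical values) are illustrative assumptions, not parameters taken from the text or from any fitted model.

```python
import numpy as np

def nln_response(x, weights, pool_weights, sigma=1.0):
    """One NLN-style subpopulation: each unit 'chooses' a set of input weights
    (rows of `weights`), a normalization pool (rows of `pool_weights`), and a
    static output nonlinearity (here, halfwave rectification)."""
    linear = weights @ x                          # L: weighted sum of inputs
    pool = pool_weights @ (linear ** 2)           # N: pooled activity for divisive normalization
    normalized = linear / np.sqrt(sigma ** 2 + pool)
    return np.maximum(normalized, 0.0)            # static output nonlinearity

# Example: a 10-unit subpopulation driven by a 100-dimensional input.
rng = np.random.default_rng(0)
x = rng.standard_normal(100)
W = rng.standard_normal((10, 100)) / 10.0
P = np.full((10, 10), 1.0 / 10.0)                 # uniform pool over the subpopulation
print(nln_response(x, W, P).shape)                # -> (10,)
```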

Specifically, we postulate the existence of the following three key conceptual mechanisms. (1) Each subpopulation sets up architectural nonlinearities that naturally tend to flatten object manifolds: even with random (nonlearned) filter weights, NLN-like models tend to produce easier-to-decode object identity manifolds, largely on the strength of the normalization operation (Jarrett et al., 2009; Lewicki and Sejnowski, 2000; Olshausen and Field, 2005; Pinto et al., 2008b), similar in spirit to the overcomplete approach of V1 (described above); a toy illustration follows below.
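As a toy illustration of mechanism (1), the sketch below builds stimuli whose object identity is not linearly decodable from the raw input (and whose overall gain varies, as a stand-in for nuisance variation), passes them through a single random-filter NLN stage, and compares a simple linear readout before and after. This is only a schematic demonstration under arbitrary assumptions, not a reproduction of the analyses in the studies cited above; in this toy both the rectification and the normalization (which discounts the gain variation) contribute to the improvement.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stimuli(n=2000, dim=50):
    # Latent 2D "object" variable; identity is the XOR of the two signs, so it is
    # not linearly decodable from the raw input. Each sample also gets a random
    # gain, standing in for nuisance variation such as contrast.
    u = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (u[:, 0] * u[:, 1] > 0).astype(int)
    gain = rng.uniform(0.5, 5.0, size=(n, 1))
    embed = rng.standard_normal((2, dim))            # fixed "retinal" embedding
    X = gain * (u @ embed) + 0.05 * rng.standard_normal((n, dim))
    return X, y

def random_nln_stage(X, n_units=400, sigma=0.1):
    # Random (nonlearned) filters, a shared divisive-normalization pool, rectification.
    W = rng.standard_normal((X.shape[1], n_units)) / np.sqrt(X.shape[1])
    lin = X @ W
    pool = (lin ** 2).mean(axis=1, keepdims=True)    # normalization pool
    return np.maximum(lin / np.sqrt(sigma ** 2 + pool), 0.0)

def linear_readout_accuracy(X, y):
    # Least-squares linear decoder trained on half the samples, tested on the rest.
    n = len(y)
    idx = rng.permutation(n)
    tr, te = idx[: n // 2], idx[n // 2:]
    Xb = np.hstack([X, np.ones((n, 1))])
    w, *_ = np.linalg.lstsq(Xb[tr], 2.0 * y[tr] - 1.0, rcond=None)
    return float(np.mean((Xb[te] @ w > 0) == y[te]))

X, y = make_stimuli()
print("raw input:       ", linear_readout_accuracy(X, y))                     # near chance in this toy
print("after random NLN:", linear_readout_accuracy(random_nln_stage(X), y))   # typically much higher
```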

Experimental approaches are effective at describing previously undocumented behaviors of ventral stream neurons, but alone they cannot indicate when that search is complete. Similarly, “word models” (including ours, above) are not falsifiable algorithms. To make progress, we need to construct ventral-stream-inspired, instantiated computational models and compare their performance with neuronal data and with human performance on object recognition tasks.

Thus, computational modeling cannot be taken lightly. Together, the set of alternative models defines the space of falsifiable alternative hypotheses in the field, and the success of some such algorithms will be among our first indications that we are on the path to understanding visual object recognition in the brain.

The idea of using biologically inspired, hierarchical computational algorithms to understand the neuronal mechanisms underlying invariant object recognition is not new: “The mechanism of pattern recognition in the brain is little known, and it seems to be almost impossible to reveal it only by conventional physiological experiments…. If we could make a neural network model which has the same capability for pattern recognition as a human being, it would give us a powerful clue to the understanding of the neural mechanism in the brain” (Fukushima, 1980). More recent modeling efforts have significantly refined and extended this approach (e.g., LeCun et al., 2004; Mel, 1997; Riesenhuber and Poggio, 1999b; Serre et al., 2007a). While we cannot review here all the computer vision and neural network models relevant to object recognition in primates, we refer the reader to reviews by Bengio (2009), Edelman (1999), Riesenhuber and Poggio (2000), and Zhu and Mumford (2006).
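For readers who want a concrete picture of what an “instantiated” model might look like, the sketch below stacks the random NLN stage from the earlier example into a small hierarchy, loosely mirroring the ventral stream cascade; its output would then be decoded and compared with neuronal or behavioral data. Layer sizes, parameters, and the stand-in images are placeholders, not values proposed by any of the models cited here.

```python
import numpy as np

rng = np.random.default_rng(0)

def nln_stage(X, n_units, sigma=0.1):
    # Same random-filter NLN stage as in the earlier sketch: linear filters,
    # divisive normalization over a shared pool, and halfwave rectification.
    W = rng.standard_normal((X.shape[1], n_units)) / np.sqrt(X.shape[1])
    lin = X @ W
    pool = (lin ** 2).mean(axis=1, keepdims=True)
    return np.maximum(lin / np.sqrt(sigma ** 2 + pool), 0.0)

def hierarchical_model(images, layer_sizes=(256, 128, 64)):
    # Each stage re-represents the output of the stage below it, loosely
    # mirroring the V1 -> V2 -> V4 -> IT cascade of the ventral stream.
    X = images.reshape(len(images), -1)
    for n_units in layer_sizes:
        X = nln_stage(X, n_units)
    return X  # "IT-like" population response, to be decoded and compared with data

# Stand-in usage; a real evaluation would use actual images, fit a linear decoder
# on the returned features, and set its accuracy alongside measured performance.
demo_images = rng.standard_normal((8, 32, 32))
print(hierarchical_model(demo_images).shape)  # -> (8, 64)
```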
