The following sections describe projects that have occupied the Meister Lab in the last few years. There are open questions and ongoing work on each of these topics.
Retinal Processing of Image Motion
Adaptation in the Retina
Models of Early Visual Processing
Genetically Tagged Cell Types in the Retina
Gene Therapy for Retinal Degeneration
For the goals laid out above, the retina must be the ideal scientific arena! The circuit is of obvious importance for behavior; after all, it is the only source of visual sensation. All its input neurons are known -- the photoreceptors. All its output neurons are known -- the retinal ganglion cells. There is (almost) no feedback from the rest of the brain. The input signals can be controlled easily, just by shining an optical image on the retina. The output signals can be monitored easily, for example by placing a multi-electrode array against the surface of retinal ganglion cells. Because of these attractions, the retina has been studied intensely for over a century, and we can rely on a great deal of background knowledge about its neurons, synapses, neurotransmitters, and receptors. Nevertheless, the circuit continues to offer up surprises at the level of network function, whose exploration yields insights into broader mechanisms of neural computation.
The image on the retina is always in motion. Even when you try holding your gaze still, the eye continues to wander randomly. Any hobby photographer knows that moving the camera with the shutter open is a bad idea. The retina performs a good deal of image processing that tries to counteract or even exploit these constant scanning movements. For example, we found a type of retinal ganglion cell that remains silent when a rigid image scans over the retina, but fires vigorously when a small patch of the image moves differently from its surroundings (part 1 of the research program). These neurons can discriminate the image motion caused merely by eye movements from that caused by object movements in the outside world (part 3). In unraveling how this sub-circuit works, we found that Nature uses a surprisingly simple solution for an apparently complex computation (part 2).
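The logic of such an object-motion-sensitive cell can be caricatured in a few lines of code. The sketch below is purely illustrative, not the lab's actual model; the unit, its threshold, and the noise statistics are all invented for the demonstration. The unit fires only when the motion trajectory in its receptive-field center differs from that in the surround:

```python
import numpy as np

rng = np.random.default_rng(0)

def oms_response(center_motion, surround_motion, threshold=0.5):
    """Fire whenever center and surround motion trajectories disagree."""
    mismatch = np.abs(center_motion - surround_motion)
    return (mismatch > threshold).astype(float)

t = np.arange(200)
eye_jitter = np.cumsum(rng.normal(0, 0.3, t.size))   # global fixational drift

# Case 1: rigid image -- center and surround share the same eye-induced motion
rigid = oms_response(eye_jitter, eye_jitter)

# Case 2: an object inside the receptive-field center moves independently
object_drift = eye_jitter + np.cumsum(rng.normal(0, 0.3, t.size))
differential = oms_response(object_drift, eye_jitter)

print("spikes, rigid scene:   ", rigid.sum())        # stays silent
print("spikes, moving object: ", differential.sum()) # fires vigorously
```

Because fixational eye movements displace the whole image rigidly, center and surround trajectories agree exactly and the unit stays silent; an independently moving object breaks that agreement.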
Another source of image motion is the large "saccadic" eye movements by which we reorient our gaze a few times a second. These fast image changes deliver a jolt of synchronized input to the whole retina, which can perturb the ongoing signal processing. Among the more curious effects, we found that such a fast image shift can temporarily change a ganglion cell from Off-type (excited by a decrease in light level) to On-type (excited by an increase). We have a good handle on the circuit mechanism that produces this: it is a neural switch, by which one neuron gates the signal that a second neuron conveys to a third.
Finally, image motion results when an object moves in the outside world. A serious challenge for tracking fast objects is the time delay incurred by visual sensation. The photoreceptors already introduce a delay of several tens of milliseconds. As a result, one expects the output of the retina to lag behind the actual motion trajectory of the object. Instead, we found that the retinal output image is aligned nicely with the instantaneous object position. It emerged that the retina contains a circuit for predicting motion along a trajectory. This circuit operates on delayed data from the photoreceptors to produce an estimate of the current location. When the object executes a sudden turn, the retinal output temporarily continues along the earlier trajectory, as expected for a predictive mechanism.
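A minimal way to see how delayed input can still yield an up-to-date position estimate is linear extrapolation. The toy sketch below is ours, for illustration only -- the retinal circuit is thought to achieve anticipation through gain control in spatially extended receptive fields rather than an explicit velocity computation -- and the delay value is invented:

```python
import numpy as np

delay = 5            # sensory delay in time steps (assumed value)
t = np.arange(100)
position = 0.4 * t   # object moving at constant velocity

delayed = np.roll(position, delay)       # what the photoreceptors report
velocity = np.gradient(delayed)          # local velocity estimate
predicted = delayed + velocity * delay   # extrapolate forward past the delay

# After the initial transient, the prediction matches the true position
err = np.abs(predicted[2 * delay:] - position[2 * delay:])
print("max prediction error:", err.max())
```

Note that such a predictor necessarily overshoots when the object turns suddenly, continuing briefly along the old trajectory -- the same signature described above for the retinal output.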
The rules for image processing in the retina are not set in stone, but get adapted flexibly to the particular visual environment. The best known of these phenomena is light adaptation: As the mean light level in the scene changes -- for example with time of day -- the retina's sensitivity changes approximately in inverse proportion. This way the output to the brain always covers about the same dynamic range. Some years ago, we found that the retina also implements "contrast adaptation", a function previously assigned to the visual cortex. This is a change in sensitivity that depends on the range of intensities, not on the mean. Unlike light adaptation, contrast adaptation happens largely in circuits of the inner retina.
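The two adaptation rules can be summarized in a toy gain-control model. This is a cartoon of our own, not a fitted retinal model, and the particular gain formula is invented for illustration: sensitivity scales inversely with the mean light level (light adaptation) and is further reduced as the relative spread of intensities grows (contrast adaptation).

```python
import numpy as np

def adapted_response(stimulus, eps=1e-9):
    mean = stimulus.mean()                    # local light level
    contrast = stimulus.std() / (mean + eps)  # relative intensity spread
    gain = 1.0 / (mean + eps)                 # Weber-like light adaptation
    gain /= (1.0 + contrast)                  # contrast adaptation (toy form)
    return gain * (stimulus - mean)

rng = np.random.default_rng(1)
dim    = rng.normal(10.0, 1.0, 10000)      # dim scene
bright = rng.normal(1000.0, 100.0, 10000)  # 100x brighter, same relative contrast

# Output dynamic range is nearly identical despite the 100-fold change in mean
print(adapted_response(dim).std(), adapted_response(bright).std())
```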
More recently it became apparent that these adaptive phenomena are much richer than initially thought. In fact, the retina can adjust its sensitivity to any number of spatio-temporal patterns in the image. If a given image pattern (say horizontally oriented lines) recurs frequently, the response of ganglion cells to that pattern declines, while the response to other patterns (say vertical lines) remains strong. In effect, the retina suppresses predictable structure in the environment and emphasizes novelty. We are busy tracking down the mechanism for this intriguing function, and have our eyes on a retinal circuit that may account for many other adaptation phenomena as well.
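One way to picture pattern-specific adaptation is as sensitivity depression along a frequently driven stimulus dimension. The cartoon below is entirely hypothetical -- the actual mechanism is precisely the open question -- with a two-dimensional feature space standing in for horizontal versus vertical gratings:

```python
import numpy as np

horizontal = np.array([1.0, 0.0])   # stand-ins for horizontal / vertical
vertical = np.array([0.0, 1.0])     # patterns in a 2-D feature space

w = np.array([1.0, 1.0])            # sensitivity along each dimension

for _ in range(50):                 # the horizontal pattern recurs frequently
    response = w @ horizontal
    w -= 0.02 * response * horizontal   # depress only the driven dimension

print("response to familiar pattern:", w @ horizontal)  # declined
print("response to novel pattern:   ", w @ vertical)    # still strong
```

The depression is specific to the recurring pattern, so predictable structure is suppressed while novel structure passes through at full strength.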
This is a collaboration with Haim Sompolinsky (Hebrew University and Harvard Center for Brain Science). Visual processing (fortunately) does not end in the retina. The ganglion cell signals provide the input to other brain areas. As we understand more about how the retina operates, we also gain a better estimate of what signals the thalamus and the visual cortex receive. We are exploring what consequences the recently discovered components of retinal function may have for our models of processing in early visual cortex. For example, a recent theoretical study considered how a cortical network of neurons might deal with the incessant fixational eye movements, to effectively undo the resulting image blur.
This is a collaboration with the group of Josh Sanes (Harvard, MCB Department). The goal is to find useful genetic tags to identify neurons in the retina. This could serve several purposes: On a simple level, such tags would make cell typing unambiguous. Depending on whom you ask, the retina contains about 50 distinct neuron types, judged by functional and morphological criteria. But, except for a few dramatically shaped neurons, the boundaries between these types are rather ill-defined. Different research groups use different boundaries and names. If a genetic tag were available, and if it were easy to read, for example by a fluorescent label, then different laboratories could tell whether they are talking about the same neurons or not. At a more ambitious level, such a genetic handle sets the stage for manipulating the activity of the chosen cell type selectively. For example, by transiently silencing the selected type, and watching the effects on visual computation in the retina, we would understand better how those cells participate in the circuit.
This is a collaboration with Michael Sandberg (Harvard, Massachusetts Eye & Ear Infirmary). Many patients with retinal degeneration suffer loss of vision because the photoreceptor cells die. Often the retinal ganglion cells are still intact, and able to convey signals to the brain. Several ideas for a retinal prosthesis center on driving these ganglion cells with an electrical device operated by a camera. Instead, we are exploring whether the ganglion cells could themselves be made light sensitive, like photoreceptors, through expression of a visual pigment gene. The work is in a very exploratory phase using mice, far from any human application.
In visual areas of the brain, the neurons are arranged in visuotopic maps: two cells located next to each other in the brain tend to respond to adjacent locations in the visual field. Such a visuotopic layout is thought to make for efficient brain architecture, because computations that are local within the visual image can be performed by purely local wiring within the neuronal map. It has been suggested that the olfactory system follows a similar principle, using a "chemotopic map" in the early sensory areas.
Mice and rats have ~1000 different types of receptor cells in the olfactory epithelium. Their axons project to the olfactory bulb, where they terminate in ~2000 glomeruli, and each glomerulus seems to get input from just a single receptor type. Thus the upper layer of the olfactory bulb forms a two-dimensional sensory map with 2000 discrete locations. Neurons at each location are driven by a single chemoreceptor type. We want to understand the logic behind the layout of this map and its consequences for neural computations.
First, we measured the developmental precision of the layout, and found that it is reproducible across individuals to within 1-2 positions in the array of glomeruli. Second, we tested the notion of chemotopy, namely that nearby locations in the map respond to "similar" odors. There is a weak tendency for chemically related odors to activate nearby domains of the olfactory bulb. However, on a fine scale, we found that nearby locations on the bulb are remarkably diverse in their odor sensitivity. So far there is little evidence for local chemotopy. Finally, we are exploring the consequences of this arrangement for subsequent processing, by probing how the mitral cells combine the signals from multiple glomeruli.
This is a collaboration with Gilles Laurent (Caltech) and Maria Geffen (Rockefeller). In the study of smell, unlike that of the other senses, the stimulus dynamics have been relatively neglected. During a typical physiology experiment, the odor is turned on, then turned off again a second later. By contrast, natural stimuli can include very rapid variation, for example when a rat sniffs 8 times a second, or when an insect encounters odor filaments in flight. To address this, we stimulate the locust antenna with a broad-band pseudo-random odor sequence, similar in statistics to a turbulent odor plume. We record spike trains from the projection neurons in the antennal lobe (analogous to vertebrate mitral cells). These responses can be analyzed by methods of reverse correlation. Indeed, applying some of the principles developed in visual and auditory research gives a new perspective on the dynamics of population activity in the antennal lobe.
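The essence of reverse correlation is easy to demonstrate on synthetic data. In the sketch below -- a generic illustration of the method, not our actual analysis pipeline; the model neuron, kernel shape, and spiking rule are all invented -- a known linear filter is recovered as the spike-triggered average of a white-noise stimulus:

```python
import numpy as np

rng = np.random.default_rng(2)
T, klen = 200_000, 40
stimulus = rng.normal(0, 1, T)   # pseudo-random "odor concentration" trace

# A known temporal kernel, to be recovered from the spikes
kernel = np.exp(-np.arange(klen) / 8.0) * np.sin(np.arange(klen) / 5.0)

drive = np.convolve(stimulus, kernel, mode="full")[:T]
rate = np.clip(drive, 0, None)          # rectified firing rate
spikes = rng.random(T) < 0.1 * rate     # Bernoulli spike generation

# Spike-triggered average: mean stimulus window preceding each spike
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= klen]
sta = np.mean([stimulus[t - klen + 1 : t + 1] for t in spike_times],
              axis=0)[::-1]             # reverse so index 0 = spike time

corr = np.corrcoef(sta, kernel)[0, 1]
print(f"correlation between STA and true kernel: {corr:.2f}")
```

For a Gaussian white-noise stimulus and a rectifying nonlinearity, the spike-triggered average is proportional to the underlying linear kernel, which is what makes the method so useful for characterizing response dynamics.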
This is a collaboration with Alan Litke (UC Santa Cruz) and Thanos Siapas (Caltech). Most neuroscientists will agree that there is a great benefit to monitoring many neurons simultaneously instead of one at a time. Similarly, there is growing consensus that the brain works differently when the animal is awake and behaving than under anesthesia. To facilitate both goals, we built a wireless recording system for neural signals. The current version is designed for a rat (or similarly sized creature). It can record the voltage signals from 64 implanted electrodes with a bandwidth of 10 kHz each while the animal is moving untethered over a range of ~100 m. The first scientific experiments with this system are underway, with the aim of monitoring responses in the visual cortex during natural behaviors.
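A back-of-envelope calculation conveys the scale of the telemetry problem. The sample rate and ADC resolution below are our assumptions for illustration (the text specifies only the 10 kHz analog bandwidth per channel):

```python
channels = 64
bandwidth_hz = 10_000
sample_rate = 2 * bandwidth_hz   # Nyquist-rate sampling (assumed)
bits_per_sample = 16             # assumed ADC resolution

bits_per_second = channels * sample_rate * bits_per_sample
print(f"{bits_per_second / 1e6:.1f} Mbit/s")   # ~20.5 Mbit/s
```

Under these assumptions, the radio link must sustain roughly 20 Mbit/s of raw data while the animal ranges freely, which is what makes a wireless design challenging.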