The extraordinary capabilities of the human eye point to intelligent design
How many megapixels equivalent does the eye have?
On most digital cameras, you have uniform pixels: they're in the same distribution across the sensor (in fact, a nearly perfect grid), and there's a filter (usually the "Bayer" filter, named after Bryce Bayer, the scientist who came up with the usual color array) that delivers red, green, and blue pixels.
So, for the eye, imagine a sensor with a huge number of pixels, about 130 million. There's a higher density of pixels in the center of the sensor, and only about 6 million of those sensors are filtered to enable color sensitivity. Somewhat surprisingly, only about 100,000 sense blue! Oh, and by the way, this sensor isn't flat, but in fact semi-spherical, so that a very simple lens can be used without distortions -- real camera lenses have to project onto a flat surface, which is less natural given the spherical nature of a simple lens (in fact, better lenses usually contain a few aspherical elements).
This is about 22mm diagonal on average, just a bit larger than a micro four-thirds sensor... but the spherical nature means the surface area is around 1100mm^2, a bit larger than a full-frame 35mm camera sensor. The highest pixel resolution on a 35mm sensor is on the Canon 5Ds, which stuffs 50.6Mpixels into about 860mm^2.
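Using the approximate figures above, a quick back-of-envelope comparison of "sensel" densities:

```python
# Back-of-envelope density comparison, using the approximate
# figures quoted above (all values are rough, order-of-magnitude).
RETINA_AREA_MM2 = 1100          # curved retinal surface, approx.
RETINA_PHOTORECEPTORS = 130e6   # rods + cones, approx.

SENSOR_AREA_MM2 = 860           # Canon 5Ds sensor, approx.
SENSOR_PIXELS = 50.6e6

retina_density = RETINA_PHOTORECEPTORS / RETINA_AREA_MM2
sensor_density = SENSOR_PIXELS / SENSOR_AREA_MM2

print(f"retina: ~{retina_density:,.0f} photoreceptors/mm^2")
print(f"5Ds sensor: ~{sensor_density:,.0f} pixels/mm^2")
print(f"ratio: ~{retina_density / sensor_density:.1f}x")
```

So, per square millimeter, the retina packs roughly twice the sensels of the densest full-frame sensor.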
So that's the hardware. But that's not the limiting factor on effective resolution. The eye seems to see "continuously", but its operation is cyclical -- there's effectively a frame rate, and it's really fast... but that's not the important one. The eye is in constant motion from ocular microtremors that occur at around 70-110Hz. Your brain constantly integrates the output of your eye, as it moves around, into the image you actually perceive, and the result is that, unless something's moving too fast, you get an effective resolution boost from 130Mpixels to something more like 520Mpixels, as the image is constructed from multiple samples.
Except you don't. For one thing, your luminance-only rod cells, being sensitive in low light, actually saturate in bright light. So in full daylight or bright room light, they're completely switched off. That leaves the 6 million or so cone cells as your only visual input. With microtremors, you may have about 24 million inputs at best… not exactly the same as 24 megapixels. And that's per eye, of course, so call it 48 megapixels if you want to draw that equivalence.
In the dark, the cones don't detect much; it's all rods at that point. Technically that's more “pixels,” but your eye and brain are dealing with a low photon flux density -- the same thing that causes ugly “shot noise” in low light photographs. So your brain is only getting input from rods that actually detect something.
And all of those 130 million sensors are “wired” down to about 1.2 million axons of the ganglion cells that connect the eye to the brain. There is already processing and crunching of your visual data before it gets to the brain.
Which makes perfect sense -- our brains can do this kind of problem as a parallel processor with performance comparable to the fastest supercomputers we have today. When we perceive an image, there's this low-level image processing, plus specialized processes that work on higher level abstractions. For example, we humans are really good at recognizing horizontal and vertical lines, while our friendly frog neighbors have specialized processing in their relatively simple brains looking for a small object flying across the visual field -- that fly he just ate. We also do constant pattern matching of what we see back to our memories of things. So we don't just see an object, we instantly recognize an object and call up a whole library of information on that thing we just saw.
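That pre-processing is forced by the wiring budget: the figures quoted above imply a roughly hundred-to-one reduction before anything reaches the optic nerve. A quick check:

```python
# Compression implied by the numbers above: ~130M photoreceptors
# feeding ~1.2M ganglion-cell axons (both approximate).
PHOTORECEPTORS = 130e6
GANGLION_AXONS = 1.2e6

ratio = PHOTORECEPTORS / GANGLION_AXONS
print(f"~{ratio:.0f} photoreceptors per optic nerve fiber")
```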
Another interesting aspect of our in-brain image processing is that we don't demand any particular resolution. As our eyes age and we can't see as well, our effective resolution drops, and yet, we adapt. In a relatively short term, we adapt to what the eye can actually see... and you can experience this at home. If you're old enough to have spent lots of time in front of Standard Definition television, you have already experienced this. Your brain adapted to the fairly terrible quality of NTSC television (or the slightly less terrible but still bad quality of PAL television), and then perhaps jumped to VHS, which was even worse than what you could get via broadcast. When digital started, between VideoCD and early DVRs like the TiVo, the quality was really terrible... but if you watched lots of it, you stopped noticing the quality over time if you didn't dwell on it. An HDTV viewer of today, going back to those old media, will be really disappointed... and mostly because their brain moved on to the better video experience and dropped those bad-TV adaptations over time.
Back to the multi-sampled image for a second... cameras do this. In low light, many cameras today have the ability to average several different photos on the fly, which boosts the signal and cuts down on noise... your brain does this, too, in the dark. And we're even doing the "microtremor" thing in cameras. The recent Olympus OM-D E-M5 Mark II has a "hires" mode that takes 8 shots with 1/2 pixel adjustment, to deliver what's essentially two 16Mpixel images in full RGB (because full pixel steps ensure every pixel is sampled at R, G, B, G), one offset by 1/2 pixel from the other. Interpolating these interstitial images as a normal pixel grid delivers 64Mpixel, but the effective resolution is more like 40Mpixel... still a big jump up from 16Mpixels. Hasselblad showed a similar thing in 2013 that delivered a 200Mpixel capture, and Pentax is also releasing a camera with something like this built-in.
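The noise-averaging trick is easy to demonstrate. The sketch below simulates Poisson "shot noise" for a dim scene patch and shows that averaging 8 frames cuts the noise by roughly sqrt(8) ≈ 2.8x; the photon count and frame counts are arbitrary illustrative choices, not measurements from any particular camera.

```python
import math
import random
import statistics

# Toy demonstration of multi-frame averaging in low light.
# A dim patch delivers on average `mu` photons per frame; each
# frame's count follows Poisson "shot noise". Averaging N frames
# cuts the noise by roughly sqrt(N) -- why stacking helps.
random.seed(0)

def poisson(mu):
    # Knuth's multiplication method -- fine for small mu.
    threshold, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def noise_of_average(mu, frames, trials=2000):
    # Standard deviation of the per-frame average across many trials.
    averages = [sum(poisson(mu) for _ in range(frames)) / frames
                for _ in range(trials)]
    return statistics.stdev(averages)

mu = 10  # ~10 photons/frame: very dim
n1 = noise_of_average(mu, 1)
n8 = noise_of_average(mu, 8)
print(f"1 frame : noise ~ {n1:.2f} photons")
print(f"8 frames: noise ~ {n8:.2f} photons (about sqrt(8) ~ 2.8x lower)")
```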
We're doing simple versions of the higher-level brain functions, too, in our cameras. All kinds of current-model cameras can do face recognition and tracking, follow-focus, etc. They're nowhere near as good at it as our eye/brain combination, but they do ok for such weak hardware.
They're only a few hundred million years late...
Human Eye Visual Hyperacuity: A New Paradigm for Sensing?
The human eye appears to use a low number of sensors for image capture. Furthermore, considering the physical dimensions of cones (the photoreceptors responsible for sharp central vision), we may realize that these sensors are of relatively small size and area. Nonetheless, the eye is capable of obtaining high-resolution images due to visual hyperacuity, and presents an impressive sensitivity and dynamic range when set against conventional digital cameras of similar characteristics. This article is based on the hypothesis that the human eye may be benefiting from diffraction to improve both image resolution and the acquisition process. 2
Visual acuity (VA) commonly refers to the clarity of vision, but technically rates an examinee's ability to recognize small details with precision. Visual acuity is dependent on optical and neural factors, i.e., (1) the sharpness of the retinal image within the eye, (2) the health and functioning of the retina, and (3) the sensitivity of the interpretative faculty of the brain.
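Acuity figures like these are quoted as angles. As a minimal sketch, converting an angular threshold to a physical feature size at a given viewing distance is simple small-angle math (nothing eye-specific):

```python
import math

# Convert an angular threshold (arcseconds) to a physical size
# at a given viewing distance: size = distance * tan(angle).
def size_at(distance_m, arcsec):
    angle_rad = math.radians(arcsec / 3600.0)
    return distance_m * math.tan(angle_rad)

# ~0.024 mm: on the order of the 0.02 mm figure quoted below.
print(f"5 arcsec at 1 m : {size_at(1, 5) * 1000:.3f} mm")
# ~0.29 mm: roughly a foveal photoreceptor's angular width.
print(f"60 arcsec at 1 m: {size_at(1, 60) * 1000:.3f} mm")
```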
This excerpt from a paper about hyperacuity shows how stunning the ability really is:
“While in some tasks (e.g., in telling apart two nearby dots) thresholds are in the range of 30-60 arcsec, in other tasks such as the vernier, the threshold may be as low as 5 arcsec. A threshold of 5 arcsec means that the observer reliably resolves features that are less than 0.02 mm at a 1 m distance, or the size of a quarter-dollar coin viewed at 17 km! One can better appreciate the astonishing precision of this performance by considering the optical properties of the eye. In the spatially most sensitive region of the retina, the fovea, the diameter of the photoreceptors is in the range of 30-60 arcsec and the sizes of the receptive fields of the retinal ganglion cells may be even larger. Thus, humans can resolve detail with an accuracy of better than one fifth of the size of the most sensitive photoreceptor.”
The Sensitivity of the Human Eye (ISO Equivalent)
Our warm, wet, multicellular eyes have evolved such a high level of sensitivity that they can, on occasion, detect a single photon aimed at the retina. Even the most sophisticated man-made devices require a cool, temperature-controlled environment to achieve the same feat. 5
A single photon is the smallest particle that light is made of, and it is extremely hard to see.
Eyes are actually incredibly sensitive. The human eye is so sensitive it can detect even a single photon of light! Our eyes are more sensitive than any camera sensor out there. Sure, we can all marvel at the 4,000,000 ISO Canon ME20F-SH and the kind of footage it can capture, but it doesn’t hurt to marvel at the ‘tech’ built into our own skull-mounted cameras from time to time. 4
Direct detection of a single photon by humans
3 Humans can detect a single photon incident on the cornea.
The cell biology of vision
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3101587/
Looking at the big picture: the retina is a neural circuit composed of different cell types
Each eye’s photoreceptors include around 120 million rods, which react to light intensity, and 6 to 7 million color-sensitive cones. Rods occupy the majority of retinal real estate, but at the very center sits a tiny, highly concentrated population of cones in a region called the fovea. As the only photosensitive cells in the human body, the rods and cones are essential for the conversion of visual data into electrochemical signals. Neurons in the retina can then begin to parse the visual field by registering contrasts in the photoreceptor data. Contrasts — or “edges” — are the basic units of all visual processing. Like a camera, the eye must be pointed directly at something in order to see it with as much clarity as possible; even the most powerful lenses can’t capture details with maximum resolution across an entire image. Your eyes can only see in the sharpest resolution, or in 100 percent acuity, in the fovea, a very small fraction of your visual field: about 0.1 percent of your visual field, at any given time, is the only place you’ve ever had 20/20 vision. The fact that you don’t notice the rest of the world transforming into a blurry dreamscape every time you glance at your watch is a testament to the sublime engineering in the visual cortex. As you take in the view of a room, your brain sees not only the picture in front of you, but also the images from your most recent involuntary, staccato twitches called saccades. These images, plus your visual memory, together form a mental model of the space around you that is updated with every glance. Thus, even though only a tiny fraction of the field of vision is in focus at any given moment, the entire panorama seems equally sharp, no matter where you’re looking.
The visual cortex uses unknown means to create visual information out of thin air. Dan Sasaki, the VP of Optical Engineering at Panavision, noted in a 2017 presentation that the greater sub-pixel detail in the image “provides the viewer with much more information from which to render the images in their brains, and this provides a sense of greater depth and more realism.”
So the theoretical limit on how much detail the human eye can actually process may be more of a guideline than a rule. Dr. Martinez-Conde points out that the enigma encompasses all types of perception. “Fundamentally,” she adds, “we don’t understand the neural basis of experience.” One thing is clear, however: The 33 million pixels that 8K TVs are able to display are changing the way we watch television, and making it a truly immersive viewing experience.
The retina carries out considerable image processing through circuits that involve five main classes of cells (i.e., photoreceptors, bipolar cells, amacrine cells, horizontal cells, and ganglion cells).
The visual sense organ.
(A) Diagrams of the eye; an enlarged diagram of the fovea is shown in the box. The retina forms the inner lining of most of the posterior part of the eye. The RPE is sandwiched between the retina and the choroid, a vascularized and pigmented connective tissue.
(B) Diagram of the organization of retinal cells. R, rod; C, cone; B, bipolar cell; H, horizontal cell; A, amacrine cell; G, ganglion cells; M, Müller cell.
(C) An H&E-stained transverse section of human retina. Retina has laminated layers. The nuclei of the photoreceptors constitute the outer nuclear layer (ONL). The nuclei of the bipolar cells, amacrine cells, horizontal cells, and Müller glial cells are found in the inner nuclear layer (INL), and the nuclei of ganglion cells form the ganglion cell layer (GCL). The outer plexiform layer (OPL) contains the processes and synaptic terminals of photoreceptors, horizontal cells, and bipolar cells. The inner plexiform layer (IPL) contains the processes and terminals of bipolar cells, amacrine cells, and ganglion cells. The processes of Müller glial cells fill all space in the retina that is not occupied by neurons and blood vessels.
These processes collectively amplify, extract, and compress signals to preserve relevant information
before it gets transmitted to the midbrain and the thalamus through the optic nerves (axons of the ganglion cells). The retinal information received by the midbrain is processed to control eye movement, pupil size, and circadian photoentrainment. Only the retinal input that terminates at the lateral geniculate nucleus of the thalamus is processed for visual perception and gets sent to the visual cortex. There, information about shade, color, relative motion, and depth is combined to result in one’s visual experience.
Question:
How could human eyesight have evolved, if the five aforementioned cell classes work collectively to preserve relevant information?
Visual perception begins when a captured photon isomerizes the chromophore conjugated with the visual pigment in the photoreceptor cell. The photoexcited visual pigment then initiates a signal transduction cascade that amplifies the signal and leads to the closure of cation channels on the plasma membrane. As a result, the cells become hyperpolarized. The change in membrane potential is sensed by the synapses, which react by releasing fewer neurotransmitters.
Observation:
This is an interdependent, irreducible system, in which all players form an integrated whole that only works with all players in place.
The morphological and molecular characteristics of the vertebrate rod.
(A)
3D cartoons depict the inter-relationship between rod and RPE (left) and IS–OS junction (right); RPE apical microvilli interdigitate the distal half of the OS. R, RPE; V, microvilli; O, OS; I, IS; N, nucleus, S, synaptic terminal. (B)
A schematic drawing of a mammalian rod depicting its ciliary stalk and microtubule organization; the axonemal (Ax) and cytoplasmic microtubules (not depicted) are anchored at the basal body in the distal IS. CP, calycal process; BB, basal body. The interactions between opposing membranes are depicted in color. The yellow shade indicates that the putative interaction of the ectodomains of usherin–VLGR1–whirlin complexes appears on both the CC plasmalemma and the lateral plasmalemma of the IS ridge complex. The green shade indicates the putative cholesterol–prominin-1–protocadherin 21 interaction. (C)
Electron micrographs reveal the hairpin loop structures of the disc rims and the fibrous links across the gap between the disc rims and plasma membranes (arrowheads). Bar, 100 nm. (D)
The OS plasma membrane and disc membrane have distinctive protein compositions; molecules are either expressed on the plasma membrane or the disc membranes, but not both. The only exception is rhodopsin; rhodopsin is present on disc membrane (with a much higher concentration) and plasma membrane (not depicted). The cGMP-gated channel: Na/Ca-K exchanger complex on the plasma membrane directly binds to the peripherin-2–ROM-1 oligomeric complex on the disc rim. The cGMP-gated channel is composed of three A1 subunits and one B1 subunit. ABCA4, a protein involved in retinoid cycle, is also enriched on the disc rim. RetGC1, retinal guanylyl cyclase; CNG channel, cGMP-gated channel. Adapted from Molday (2004). (E)
Electron micrograph showing the longitudinal sectioning view of the IS–OS junction of a rat rod. Arrows point to the CC axonemal vesicles. An open arrow points to the fibrous structures linking the opposing membranes. Bar, 50 nm. Inset: a transverse section through the CC shows the 9+0 arrangement; an arrow points to the cross-linker that bridges the gap between the microtubule doublet and the adjacent ciliary membrane. R, apical IS ridge. Bar, 100 nm. (F) Electron micrographs of low-power (inset) and high-power images of the rat retina, at the junction between the rod OS and the RPE. MV, RPE microvillar processes enwrapping the distal OS. A white arrow points to a group of saccules from the tip of the OS that curls upwards. White arrows in the inset point to two distal OS fragments that are engulfed by the RPE. Bar, 500 nm.
The vertebrate rod: elegance and efficiency
Rods have evolved a unique structure to detect and process light with high sensitivity and efficiency; human rods can detect single photons (Hecht et al., 1942; Baylor et al., 1979). Each rod contains four morphologically distinguishable compartments: the OS, inner segment (IS), nucleus, and axon/synaptic terminal (Fig. 2 A). The rod OS ranges from ∼30 to 60 µm in length (and ∼1.4–10 µm in diameter), depending on the species. Basically, the rod OS is a cylindrically shaped membrane sac filled with ∼1,000 flattened, lamellar-shaped membrane discs that are orderly arrayed perpendicular to the axis of the OS. These discs appear to be floating freely, although filamentous structures bridging adjacent discs and disc rims to the nearby plasma membrane do exist. The visual pigment of the rod, rhodopsin, comprises ∼95% of the total disc protein; it is densely packed within the disc lamellae (i.e., ∼25,000 molecules/µm2). The high density of rhodopsin, together with its ordered alignment with respect to the light path, increases the probability of capturing an incident photon. 6
The human visual system
(a) Visual perception begins in the eye, where the cornea and lens (1) project an inverted image of the world onto the retina (2), which converts incident photons into neural action potentials. (b) The retina consists of three layers of cells. The photoreceptors (PR), which are in contact with the retinal pigment epithelium (RPE), convert light into neural signals that propagate to the horizontal (HC), bipolar (BC) and amacrine cells (AC) of the inner nuclear layer. The axons of the retinal ganglion cells (RGCs) form the retinal nerve fiber layer (RNFL). They converge onto the optic disk (3), where they congregate to form the optic nerve (4), which relays neural signals to the brain. (c) Signals from the left and right visual fields of both eyes are combined at the optic chiasm (5). The lateral geniculate nucleus (6) relays the left visual field to the right visual cortex and the right visual field to the left visual cortex through neuron axons called the optic radiation. Higher visual processing finally takes place in the visual cortex (7), and further downstream in the brain.
Photoreceptors are graded-response neurons (i.e. they do not generate action potentials) that transduce photons into changes in their membrane potential by means of light-sensitive proteins called opsins. The vertebrate retina is inverted, so that photoreceptors are located at the back of the eye in contact with the retinal pigment epithelium (RPE), which is essential to the health and function of the photoreceptors. RPE cells regenerate photopigments and digest outer segments shed by the photoreceptors. Without support from the RPE, photoreceptor cells progressively atrophy and die.
Photoreceptors relay visual information to the neurons in the inner nuclear layer of the retina, where 2 types of horizontal cells, about 12 types of bipolar cells, and as many as 30 types of amacrine cells process the visual signals. Humans have ∼130 million photoreceptors, ∼5 million bipolar cells, and ∼1 million ganglion cells. Rods outnumber cones by ∼20-fold, and are distributed throughout the retina with the exception of the fovea region.
Transduction of the light message: from the photon to the optic nerve
One truly fascinating aspect of retinal neurotransmission is that it is a meeting point for neurophysiology and biophysics. Light, as an electromagnetic wave or stream of energy quanta, is essentially a physical agent; through interaction with the retinal tissue, the light stimulus results in the excitation of a nerve fiber which generates an electrical signal. In this way, the retina achieves the overall equivalent of a photoelectric effect. 7 We know that the photoreceptor cell, as a “photon detector”, operates in two phases; the first phase is the absorption of incident photons and is photochemical; the second phase, activated by the first, is electrophysiological. Thus, even at the photoreceptor stage, the light signal is already electrical in nature. It is noteworthy that the message remains electrical right to the nerve fibers that emerge from the retina. Meanwhile, the initial signal is modulated to encode the visual information. Each step of this modulation, which constitutes the neurotransmission, may be considered as an electrical circuit; accordingly, the term retinal microcircuitry is often employed. As we shall see, we are still a long way from a complete and accurate picture of all the processes that occur, but considerable progress has been made, especially in recent years, so that a coherent description of this neurocircuitry can now be outlined.
The membrane depolarization-hyperpolarization duality and information encoding at the optic nerve
The depolarization-hyperpolarization duality of the nerve fiber membrane is fundamental to the understanding of the retinal microcircuitry. Except for the highly differentiated photoreceptor cells that are specialized in photon detection, the other excitable cells in the retina, which make up the neurocircuits, are “classical” neurons that respond to excitation by a variation in membrane potential. As regards the encoding of the information at the nerve, the nature of the electrophysiological response (variation of the membrane potential over time) suggests two possible mechanisms a priori: i) an amplitude code; ii) a time code. As the nerve fiber operates in an “all or nothing” mode, an amplitude code can be ruled out. The code is in fact a time code, the variable being the duration of the prepotential, which encodes the intensity of the stimulus; the more intense the stimulus, the shorter the prepotential. Taking into account the refractory period of the nerve, it is the frequency of occurrence of action potentials along the nerve fiber which is a measure of the intensity of the stimulation. This is a highly specific feature of the neuron due to the existence of a threshold for the occurrence of a membrane potential response to a stimulus. At the optic nerve, the encoding of the information thus uses a time code, and the frequency of the action potentials indicates the intensity of the light stimulation of the retinal photoreceptor cells.
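A minimal sketch of that encoding scheme, with illustrative constants (not physiological measurements): intensity shortens the prepotential, so firing frequency rises with stimulus strength, capped by the refractory period.

```python
# Toy model of the time code described above. The two constants are
# illustrative placeholders, not measured physiological values.
REFRACTORY_MS = 1.0      # assumed absolute refractory period
BASE_INTERVAL_MS = 50.0  # assumed prepotential duration at unit intensity

def firing_rate_hz(intensity):
    # More intense stimulus -> shorter prepotential -> higher rate,
    # with the refractory period setting a hard ceiling on frequency.
    prepotential_ms = BASE_INTERVAL_MS / intensity
    interval_ms = prepotential_ms + REFRACTORY_MS
    return 1000.0 / interval_ms

for i in (1, 2, 5, 10):
    print(f"intensity {i:2d} -> ~{firing_rate_hz(i):5.1f} spikes/s")
```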
Let us consider the situation in which the photoreceptor (cone or rod) is operationally linked to a first associated neuron via a chemical neuromediator (fig 1). Here just as elsewhere, it is the depolarization of the cell upstream which enables the release of the neurotransmitter.
Since the photoreceptor is depolarized in the dark and hyperpolarized in light, we must assume that the neuromediator is being constantly released by the photoreceptor in the dark and that light restricts and ultimately suppresses this release. As for the neuron associated with the photoreceptor, there are two possibilities: i) the neurotransmitter is excitatory and causes membrane depolarization of the cell; ii) the neurotransmitter is inhibitory and causes hyperpolarization of the neuron membrane. Actually, the same neurotransmitter can be excitatory for some cells and inhibitory for others. Thus, one class of cells is depolarized, ie excited by light; these are the cells for which the neurotransmitter is inhibitory; another cell category is hyperpolarized, ie inhibited by light; these are the cells for which the neurotransmitter released in the dark is excitatory. This situation holds of course for all the neurons that make up the retinal circuitry. We have therefore to distinguish between two types of excitable retinal neurons: i) those which are hyperpolarized in the light are termed OFF neurons; ii) those which are depolarized in the light are termed ON neurons. This second ON-OFF duality, a direct consequence of the depolarization-hyperpolarization alternative, means that there are two types of retinal circuit: ON circuits and OFF circuits.
Organization of the mammalian retina
It is still usual to adopt a radial description of the retina which follows the light path. Thus, the “functional triad”: i) photoreceptor cells; ii) bipolar cells; iii) ganglion cells (fig 2) has become conventional.
For the cones, the situation is both simpler and more complex; simpler because the saccules stacked in the outer segment are formed by successive folds of the segment membrane and so a priori need no intracellular messenger; more complex because of the photopic nature of the stimulus, which implies a mechanism of color discrimination. We now know that the spectral sensitivity of the cones is determined by the photopigment they contain. Like rhodopsin, these are proteins coupled to chromophores. We also know that the phototransduction mechanism is similar to that for the rods, involving a G-protein with transmission of information linked to guanosine phosphates. Experiments have shown that the photoexcitation of the cones, like that of the rods, results in a hyperpolarization of their outer segment membranes. Thus, the first electrophysiological signal resulting from the absorption of light by the retina is the hyperpolarization of the photoreceptor cells. These are among the rare excitable cells of the organism whose excitation by natural stimuli results in hyperpolarization.
The photoreceptor cell has a highly specific organization. In this scheme the bipolar cells are the first associated neurons: these are entirely intraretinal, and correspond to the inner nuclear layer. The ganglion cells are the second associated neurons; their axons belong to the optic nerve. This radial description was soon completed by a “tangential description” which allows for the presence of three types of neuron situated in planes parallel to the retina:
i) the horizontal cells of the outer plexiform layer;
ii) the amacrine cells of the inner plexiform layer;
iii) the interplexiform cells associating the two plexiform layers.
Thus, it was for a long time classical to assert that the neurosensory axis of the retina was the photoreceptor - bipolar cell - ganglion cell association and that the horizontal and amacrine cells served to “modulate” the transfer of visual information at the photoreceptor - bipolar cell synapses and the bipolar ceIl - ganglion cell synapses respectively. However, the diversity of the cell types identified in the retina, and a more thorough study of their interconnections, have led to a more specific functional approach to the organization of the retina. The various cell categories fall into several subgroups:
1 Photoreceptors: cones and rods.
2 Bipolar cells:
- those related to the cones, termed cone bipolar cells, divided into ON bipolar cells which are depolarized by light and OFF bipolar cells which are hyperpolarized by light,
- those related to the rods, termed rod bipolar cells, which are all ON, ie depolarized by light.
3 Ganglion cells, which schematically can be either:
- OFF-center ganglion cells that respond (hyperpolarization) to the offset of light and which connect in sublamina a of the inner plexiform layer,
- ON-center ganglion cells which respond (depolarization) to the onset of light and which connect in sublamina b of the inner plexiform layer.
Here too an ON-OFF duality operates.
4 Amacrine cells: the same retina can contain up to 30 morphologically distinct types of amacrine cell. For simplicity, we shall mention here only certain sub-groups characterized by their connections or the neurotransmitters they contain:
- cholinergic, indolaminergic, dopaminergic amacrine cells, etc.
- a particularly important sub-group for the understanding of the retinal microcircuitry: the rod amacrine cells or AII cells.
5 Horizontal cells: two distinct types, which are specifically involved in the encoding of information, as we shall see:
- type A horizontal cells, which have a wide range of action, receiving an excitatory message from a cone and inhibiting, in response, the cones in their field with which they are connected,
- type B horizontal cells, which have a more restricted range of action, and which, when excited by a cone, excite in turn the cones to which they are connected, in effect amplifying the information.
In the detailed description of the retinal circuitry given further on we shall see the specific role of each of these cell types.
Origin of the electrophysiological signal
The phototransduction takes place entirely inside the outer segment of the photoreceptor, which contains large amounts of photopigment composed of a functional association of a protein macromolecule and a chromophore. The role of the photopigment is to absorb the photons impinging on the retina, which is the first step in their detection. This absorption of radiant energy brings about a change in the membrane potential of the outer segment of the photoreceptor. In the rods, the saccules containing the photopigment (rhodopsin) are totally independent of the outer segment, in which they float. This implies some messenger between the saccule membrane and the outer segment membrane. This messenger is cGMP. Thus, the sequence of identified events in the transduction includes:
- absorption of light by rhodopsin, with resulting photoisomerization,
- activation by rhodopsin of transducin, the G-protein with which it is associated,
- the alpha sub-unit of transducin thus released can act on its effector, a phosphodiesterase, by suppressing its inhibition,
- the phosphodiesterase is responsible for the hydrolysis of molecules of cGMP to 5'-GMP.
The visual message that comes from the saccules thus consists of a decrease in the cytosol level of cGMP. Since the permeability to sodium ions of the outer segment membrane is directly dependent on cGMP, the effect of this sudden drop in concentration is to close the membrane sodium channels; the sodium ions accumulate over the outer surface of the outer segment membrane of the rod, resulting in hyperpolarization.
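The amplification in this cascade can be sketched numerically. The gain figures below are illustrative placeholders, not measured values; the point is only that two multiplicative stages turn one photoisomerization into a large change in cGMP before the membrane potential even moves:

```python
# Schematic trace of the transduction steps listed above, with
# illustrative (NOT measured) amplification factors per stage.
TRANSDUCINS_PER_RHODOPSIN = 100   # assumed gain, stage 1
CGMP_PER_PDE = 1000               # assumed gain, stage 2

def cascade(photons_absorbed):
    rhodopsin_activated = photons_absorbed
    transducin_activated = rhodopsin_activated * TRANSDUCINS_PER_RHODOPSIN
    pde_active = transducin_activated          # 1:1 activation of PDE
    cgmp_hydrolyzed = pde_active * CGMP_PER_PDE
    return cgmp_hydrolyzed

# Even with these placeholder gains, a single photon's signal is
# amplified ~100,000-fold -- one reason a rod can report one photon.
print(cascade(1))
```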
Description of retinal neurocircuits
At the outset, the neurocircuits leading from the cones to the ganglion cells have to be distinguished from those which originate at the rods. The first type of circuit is more direct and simpler than the second, but the two can interpenetrate under certain circumstances.
The cone circuit
Each cone is linked to two types of bipolar cell; ON cone bipolar cells, depolarized by light, and OFF cone bipolar cells, hyperpolarized by light. Thus the ON-OFF duality occurs in the cone circuit at the very first relay (fig 4).