1. A transistor can be considered an artificial neuron. Every living cell within us is a hybrid analog–digital supercomputer. The brain is like 100 billion computers working together.
2. Biological cells are programmed to be experts at taking inputs, running them through a complicated series of logic gates via circuit-like operations, and producing the desired programmed output.
3. The origin of programs, logic gates, and complex circuits that achieve a purposeful, specific outcome has always been traced back to intelligent implementation.
Hidden Computational Power Found in the Arms of Neurons
The dendritic arms of some human neurons can perform logic operations that once seemed to require whole neural networks. Mounting research has quietly shifted attention to individual neurons, which shoulder far more computational responsibility than previously imagined. Tiny compartments in the dendritic arms of cortical neurons can each perform complicated operations in mathematical logic. Now it seems that individual dendritic compartments can also perform a particular computation, “exclusive OR” (XOR), that mathematical theorists had previously categorized as unsolvable by single-neuron systems. XOR functions were for many years deemed impossible in single neurons. But as it turns out, single neurons operate as multilayered networks, have far more processing power, and can therefore learn and store more. A single neuron may be able to compute truly complex functions; for example, it might, by itself, be able to recognize an object. The discovery implies that individual neurons are extensive information processors and that brains are far more complicated than previously thought. Neural networks have traditionally been built from model neurons treated as simple, unintelligent switches. There is an unexpected deep network within a single neuron, and it is much more powerful for learning difficult problems and for cognition.
Dendritic action potentials and computation in human layer 2/3 cortical neurons
The active electrical properties of dendrites shape neuronal input and output and are fundamental to brain function. In dendrites of layer 2 and 3 (L2/3) pyramidal neurons of the human cerebral cortex, we discovered a class of calcium-mediated dendritic action potentials (dCaAPs) whose waveform and effects on neuronal output have not been previously described. In contrast to typical all-or-none action potentials, dCaAPs were graded; their amplitudes were maximal for threshold-level stimuli but dampened for stronger stimuli. These dCaAPs enabled the dendrites of individual human neocortical pyramidal neurons to classify linearly nonseparable inputs, a computation conventionally thought to require multilayered networks. It has long been assumed that the summation of excitatory synaptic inputs at the dendrite and the output at the axon can only instantiate logical operations such as AND and OR. Traditionally, the XOR operation has been thought to require a network solution. We found that the dCaAPs’ activation function allowed them to effectively compute the XOR operation in the dendrite by suppressing the amplitude of the dCaAP when the input is above the optimal strength. Thus, on the basis of our results and those of previous studies, we consider a model that portrays the somatic and dendritic compartments of L2/3 neurons as a network of coupled logical operators and corresponding activation functions. The XOR operation is performed in the dendrites with dCaAPs, whereas AND/OR operations are performed at the soma and at tuft and basal dendrites.
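The mechanism described above can be caricatured in a few lines of code. This is a minimal sketch, not the paper's biophysical model: a single unit whose activation is maximal at threshold and dampened for stronger inputs (the dCaAP-like shape) can separate XOR, which no single monotonic threshold unit can. The weights, threshold, and dampening width below are illustrative assumptions.

```python
def dcaap_like(drive, threshold=1.0, width=0.8):
    """dCaAP-like graded activation (illustrative shape only):
    zero below threshold, maximal at threshold, dampened above it."""
    if drive < threshold:
        return 0.0
    return max(0.0, 1.0 - (drive - threshold) / width)

def xor_unit(a, b):
    """One 'dendritic' unit: sum two synaptic inputs (assumed weight 1.0
    each), pass the drive through the non-monotonic activation."""
    drive = 1.0 * a + 1.0 * b
    return 1 if dcaap_like(drive) > 0.5 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_unit(a, b))
```

Because the activation falls off above the optimal drive, the (1, 1) input lands in the dampened region and the unit stays silent, exactly the trick a monotonic threshold cannot perform.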
Single neuron dynamics and computation (Elsevier, 2014)
The computation performed by single neurons can be defined as a mapping from afferent spike trains to the output spike train which is communicated to their postsynaptic targets.
What can a single neuron compute?
Real neurons take as inputs signals at their synapses and give as outputs sequences of discrete, identical pulses-action potentials or 'spikes'.
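The input–output mapping described in these two excerpts can be sketched with a toy leaky integrate-and-fire unit that turns input spike times into output spike times. All parameters here are illustrative assumptions, not fitted to real neurons.

```python
def lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0, n_steps=20):
    """Toy leaky integrate-and-fire neuron.
    input_spikes: set of time steps at which an input spike arrives.
    Returns the list of time steps at which an output spike is emitted."""
    v = 0.0
    out = []
    for t in range(n_steps):
        v *= leak              # membrane potential leaks toward rest
        if t in input_spikes:
            v += weight        # each input spike bumps the potential
        if v >= threshold:
            out.append(t)      # emit a discrete, identical output spike...
            v = 0.0            # ...and reset
    return out

print(lif({0, 1, 2, 10, 11, 12}))
```

Even this caricature shows the defining property: the output is not a copy of the input but a transformed spike train, i.e., a computation on the afferent signals.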
Every living cell within us is a hybrid analog–digital supercomputer that implements highly computationally intensive nonlinear, stochastic, differential equations with 30 000 gene–protein state variables that interact via complex feedback loops. The average 10 μm human cell performs these amazing computations with 0.34 nm self-aligned nanoscale DNA–protein devices, with 20 kT per molecular operation (1 ATP molecule hydrolysed), approximately 0.8 pW of power consumption (10⁷ ATP s⁻¹) and with noisy, unreliable devices that collectively interact to perform reliable hybrid analog–digital computation. Based on a single amino acid among thousands of proteins, immune cells must collectively decide whether a given molecule or molecular fragment is from a friend or foe, and if they err in their decision by even a tiny amount, autoimmune disease, infectious disease, or cancer could originate with high probability every day. Even at the end of Moore's law, we will not match such performance by even a few orders of magnitude.
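The quoted power figure is internally consistent with the other numbers in the passage, which can be checked directly: roughly 10⁷ ATP hydrolyses per second at about 20 kT each, evaluated at body temperature, gives on the order of 0.8 pW.

```python
# Sanity-check the ~0.8 pW figure from 20 kT per operation at 10^7 ops/s.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # approximate body temperature, K

energy_per_op = 20 * k_B * T   # ~20 kT per molecular operation (1 ATP)
ops_per_second = 1e7           # ~10 million ATP hydrolyses per second
power = energy_per_op * ops_per_second

print(f"{power * 1e12:.2f} pW")
```

This lands at roughly 0.86 pW, matching the "approximately 0.8 pW" stated in the excerpt.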
There is a deep connection between electronics and chemistry, which greatly aids the design of analog circuit motifs and analog computation in synthetic biology. This deep connection arises because there are astounding similarities between the equations that describe noisy electronic flow in subthreshold transistors and the equations that describe noisy molecular flow in chemical reactions, both of which obey the laws of exponential thermodynamics. Therefore, circuit motifs from the electronic domain are useful for creating circuit motifs in biology, and vice versa.
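The shared form behind this analogy is the Boltzmann exponential: subthreshold MOS current scales exponentially with gate voltage over the thermal voltage, while Arrhenius-type reaction rates scale exponentially with an energy barrier over kT. A brief sketch, with all parameter values (prefactors, kappa) chosen purely for illustration:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # temperature, K
V_T = k_B * T / q     # thermal voltage, ~25.9 mV at 300 K

def subthreshold_current(v_gs, i_0=1e-12, kappa=0.7):
    """Subthreshold MOS drain current: I ~ I0 * exp(kappa * Vgs / VT)."""
    return i_0 * math.exp(kappa * v_gs / V_T)

def reaction_rate(e_a, prefactor=1e9):
    """Arrhenius-type reaction rate: r ~ A * exp(-Ea / kT), Ea in joules."""
    return prefactor * math.exp(-e_a / (k_B * T))
```

Both functions are the same exponential law with relabeled variables, which is why circuit motifs transfer between the two domains.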
The self-organizing amorphous soup in a cell processes information while it destroys, repairs and rebuilds the structures needed to do so. It is remarkable that it does so through a self-aligned nanotechnology with no explicit wiring. Instead, chemical binding among specific molecules serves to ‘implicitly wire’ them together and causes them to interact via chemical reactions. These reactions cause transformations of state, which are necessary for computation to occur.
While significant progress has been made in fundamentals and applications in the field of synthetic biology, it has failed to scale significantly in complexity over more than a decade. One important reason for this failure has been its overemphasis on digital paradigms of thought, not only because digital design is relatively straightforward and scalable, but also because molecules and atoms are discrete.
While logic basis functions and positive-feedback loops are certainly used by cells to make irreversible decisions, to organize sequential computation and to perform signal restoration, analog computation is extremely important to the cell's incredible efficiency with respect to energy, time and space. Below a certain crossover computational precision, it is highly advantageous to compute in an analog fashion to reduce the energy, part count or number of molecules (and thus volume or space) needed for the computation. It is therefore not surprising that cells exploit analog computation to perform their moderate-precision computations.
I think this should not surprise us if we start from the assumption that a higher, superintelligent power capable of such a feat implemented these computer systems. Chance is not capable of it.