ElShamah - Reason & Science: Defending ID and the Christian Worldview
Welcome to my library—a curated collection of research and original arguments exploring why I believe Christianity, creationism, and Intelligent Design offer the most compelling explanations for our origins. Otangelo Grasso
In the Origin of Life community, there are two main hypotheses of how life came to be: metabolism-first, and the RNA-world hypothesis. Both are plagued with problems. Let's take a closer look at the RNA world. How to get RNA and DNA on the early Earth, and how to get information to give life the first go, are major unsolved problems. The synthesis of RNA and DNA under prebiotic conditions has never been demonstrated in the laboratory. I have listed 37 different unsolved issues regarding RNA synthesis. No naturalistic explanations exist, despite decades of attempts to solve the riddle. There is no evidence that the elements, especially carbon and nitrogen, were extant on the early earth in a usable form to make the basic building blocks of life, including RNA and DNA. Catalysis on clay to form polymerization of RNA strands is just wishful thinking.
The primary incentive behind the theory of self-replicating systems that Manfred Eigen outlined was to develop a simple model explaining the origin of biological information and, hence, of life itself. Eigen’s theory revealed the existence of the fundamental limit on the fidelity of replication, called the Eigen threshold: The very origin of the first organisms presents at least an appearance of a paradox because a certain minimum level of complexity is required to make self-replication possible at all; high-fidelity replication requires additional functionalities that need even more information to be encoded. Hence, the dramatic paradox of the origin of life is that to attain the minimum complexity required for a biological system to start, a system of a far greater complexity appears to be required. How such a system could emerge is an unsolved puzzle. The origin of life—or, to be more precise, the origin of the first replicator systems and the origin of translation—remains a huge enigma, and progress in solving these problems has been very modest—in the case of translation, nearly negligible.
The next huge step would be to go from short RNA polymers to long, stable DNA chains. The transition from RNA to DNA is the next overwhelmingly huge problem. Highly complex nanomachines are required to synthesize DNA from RNA: amongst them, hypercomplex enzymes like ribonucleotide reductase proteins. Of course, to make those, DNA is required, which turns the riddle into a catch-22 problem:
What came first, DNA or the machines that make DNA? The next problem would be to form the genetic code, consisting of 64 triplet codons, and the assignment of the meaning of each codon to one of the 20 amino acids used to make proteins. That is the genetic cipher, or translation code. Assigning the meaning of one symbol to something else is ALWAYS based on a mind. There is NO viable alternative explanation. One science paper has called the origin of the genetic code the universal enigma. On top of that, the genetic code is near-optimal amongst 1 million alternative codes, which are less robust. How to explain that feat? Furthermore, an “overlapping language” has been found in the genetic code. Now, let's suppose we had RNA, DNA, polymerization, and the genetic code. We can equate it to an information-storing hard disk, but of far higher sophistication than anything devised by man. Even Richard Dawkins had to admit in The Blind Watchmaker: there is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over.
Next: Where did the information come from to make the first living organism? One of the simplest free-living bacteria is Pelagibacter ubique. It has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1.3 million base pairs, which code for about 1,300 proteins. If a chain could link up, what is the probability that the code letters would line up to confer the instructional, complex, codified information to make this bacterium? The chance of getting the sequence randomly would be 1 in 4^1,200,000, or about 1 in 10^722,000. There are about 10^80 atoms in the whole universe. Consider that we cannot explain its origin through evolution, because evolution depends on DNA replication, whose origin is the very thing we are trying to elucidate.
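As a quick back-of-the-envelope check of the exponent quoted above, here is a short Python sketch; the genome length of 1,200,000 bases is simply the figure used in the text's estimate.

import math

# Each position in a nucleotide chain can hold one of 4 bases, so one
# specific sequence of length L has a probability of 1 in 4^L of
# arising by chance.
L = 1_200_000                       # genome length used in the text's estimate
digits = L * math.log10(4)          # rewrite 4^L as a power of ten
print(f"4^{L:,} is about 10^{digits:,.0f}")   # roughly 10^722,000
print("Atoms in the observable universe: about 10^80")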
But we have not yet dealt with the origin of the machines that encode, send, and decode the information, that is, the transcription and translation machinery necessary to express the genetic information and make proteins. Where did that machinery come from? Of course, genetic information is required to specify the amino acid chains that make these proteins. To make proteins, and to direct and insert them into the right place where they are needed, at least 25 extremely complex biosynthesis and production-line-like manufacturing steps are required. Each step requires molecular machines composed of numerous subunits and co-factors, which themselves require the very processing procedure just described in order to be made, which makes their origin an irreducible catch-22 problem.
To exemplify this, let's take one of those macromolecules: the ribosome. The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The design of the translation system in even the simplest modern cells is extremely complex. At the heart of the system is the ribosome, a large complex of at least three RNA molecules and 60–80 proteins arranged in precise spatial architecture and interacting with other components of the translation system in the most finely choreographed fashion. These other essential components include the complete set of tRNAs for the 20 amino acids and the set of 20 aminoacyl-tRNA synthetases. Furthermore, about 200 scaffold and assembly proteins and 75 co-factors are required to synthesize the ribosome. How could that have occurred, especially considering that there was no evolution at this stage?
Once all this emerged, DNA replication errors had to be reduced ten billion times by error-check and repair mechanisms. All the machinery described required a protective envelope, which had to create a homeostatic environment, diminishing the calcium concentration in the cell to ten thousand times below the external environment, to permit signaling. At the same time, a signaling code would have had to be established, and immediately begin to function, with a common agreement between sender and receiver. Energy supply would have been a major problem, since almost all life forms depend on the supply of glucose, which is a product of complex metabolic pathways and not readily available on the prebiotic earth. Most proteins require active metal clusters in their reaction centers.
These clusters are in most cases ultracomplex: each cluster had to have the right atoms interconnected in the right way, and to take the correct three-dimensional form. They require the complex uptake of the basic materials, like iron, sulfur, and molybdenum, complex biosynthesis processes, and, after correct assembly, insertion into the proteins in the right way and orientation. All these processes require energy in the form of ATP, which is not readily available, since ATP is the product of complex nano-factories like ATP synthase, which themselves depend on a proton gradient.
So, the final question: How is all this better explained? By chance, or intelligent design? I go with the latter.
The central problem: getting the basic elements to make the building blocks of life on the early earth
There is no evidence that the atoms required to make the basic building blocks of life were extant in a usable form on the early earth. A paper published in Nature in 2016 claimed that the foremost and only known nitrogen-fixing mechanism, through nitrogenase enzymes, was extant in the last universal common ancestor. 1 But nitrogenase enzymes are of the HIGHEST complexity, truly marvels of nanomachinery, a molecular sledgehammer.
The two main constituents of our atmosphere, oxygen (21%) and nitrogen (78%), both play important roles in the makeup of living things. Both are integral parts of the amino acids that join together in long chains to make all proteins, and of the nucleotides which do the same thing to form DNA and RNA. Getting elemental oxygen (O2) to split apart into atoms and take part in the reactions and structures of life is not hard; in fact, oxygen is so reactive that keeping it from getting into where it's not wanted becomes the more challenging job. However, elemental nitrogen poses the opposite problem. Like oxygen, it is diatomic (each molecule contains two N atoms) in its pure form (N2); but, unlike oxygen, each of its atoms is triple-bonded to the other. This is one of the hardest chemical bonds of all to break. So, how can nitrogen be brought out of its tremendous reserves in the atmosphere and into a state where it can be used by living things?
It is claimed that mineral-catalyzed dinitrogen reduction might have provided a significant source of ammonia to the Hadean ocean. But there is a huge gap between such a scenario and ammonia production through nitrogenase enzymes.
The chief enzyme is nitrogenase. With assistance from an energy source (ATP) and a powerful and specific complementary reducing agent (ferredoxin), nitrogen molecules are bound and cleaved with surgical precision. In this way, a ‘molecular sledgehammer’ is applied to the N≡N bond, and a single nitrogen molecule yields two molecules of ammonia. The ammonia then ascends the ‘food chain’, and is used as amino groups in protein synthesis for plants and animals. This is a very tiny mechanism, but multiplied on a large scale it is of critical importance in allowing plant growth and food production on our planet to continue. 1
One author summed up the situation well by remarking, ‘Nature is really good at it (nitrogen-splitting), so good in fact that we've had difficulty in copying chemically the essence of what bacteria do so well.’ If one merely substitutes the name of God for the word 'nature', the real picture emerges.
The second problem is how to fix carbon dioxide to make glucose. The ultimate origin of glucose and other sugars is a huge problem for those who believe in life from non-life without requiring a creator. In order to provide credible explanations of how life emerged, a crucial question must be answered: Where did glucose come from on the prebiotic earth? The source of glucose and other sugars used in metabolic processes would have to lie in an energy-collecting process. Without some means to create such sugar, limitations of food supply for metabolic processes would make the origin of life probably impossible. Sugars are by far the most attractive organic energy substrate of primitive anaerobic life, because they are able to provide all the energy and carbon needed for the growth and maintenance of the first organism.
The hypothesis is that an ensemble of minerals that are capable of catalyzing each of the many steps of the reverse citric acid cycle was present anywhere on the primitive Earth, or that the cycle mysteriously organized itself topographically on a metal sulfide surface. The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own.
But even if, let's suppose, carbon fixation had somehow started on a metal sulfide surface, there is an unbridgeable gap from that kind of prebiotic self-organization and carbon production to even the simplest enzymatic carbon fixation pathway used in anaerobic bacteria. The reductive tricarboxylic acid (rTCA) cycle is claimed to be the best candidate. That cycle requires nine sophisticated enzymes, some with complex molybdenum co-factors, which also have to be synthesized in highly ordered, sequential, multistep production pathways by various enzymes. How did that come to be without evolution? 3
An illustration: On the one side, you have an intelligent-agency-based system of irreducible complexity: tightly integrated, information-rich functional systems with energy ready at hand and directed to the task, the kind of system that routinely generates the sort of phenomenon being observed. On the other side, imagine a golfer who has played a golf ball through a 9-hole course. Can you imagine that the ball could also play itself around the course in his absence? Of course, we cannot rule out that natural forces, like wind, tornadoes, rains, or storms, could produce the same result, given enough time. The chances against it, however, are so immense that the suggestion implies that the non-living world had an innate desire to get through the 9-hole course. 4
Outlining just two elements demonstrates the size of the problem. But overall, metabolism is based on seven non-metal elements: H, C, N, O, P, S, and Se. With these elements, all the major polymers of all cells are made; hence the major metabolic pathways involve them. In total, over 20 different elements, including heavy elements like molybdenum, are absolutely essential for life to start.
The emergence of concentrated suites of just the right mix thus remains a central puzzle in origin-of-life research. Life requires the assembly of just the right combination of small molecules into much larger collections - "macromolecules" with specific functions. Making macromolecules is complicated by the fact that for every potentially useful small molecule in the prebiotic soup, dozens of other molecular species had no obvious role in biology. Life is remarkably selective in its building blocks, whereas the vast majority of carbon-based molecules synthesized in prebiotic processes have no obvious biological use. 5
Eugene Koonin, Senior Investigator at the National Center for Biotechnology Information and a recognized expert in the field of evolutionary and computational biology, is one of the few biologists honest enough to recognize that abiogenesis research has failed. He wrote in his book The Logic of Chance, page 351: "Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin of life field is a failure—we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle."
Eliminative inductions argue for the truth of a proposition by demonstrating that competitors to that proposition are false. Either the basic building blocks of life and self-replicating cells are the result of a creative act by an intelligent designer, or they are the result of unguided random chemical reactions on the early earth. Science, rather than coming closer to demonstrating how life could have started, has not advanced, and is further than ever from generating living cells starting with small molecules. Therefore, most likely, cells were created by an intelligent designer.
The implausibility of prevital RNA and DNA synthesis
How would prebiotic processes have purified the starting molecules to make RNA and DNA, which were grossly impure? They would have been present in complex mixtures that contained a great variety of reactive molecules.
How did the synthesis of the nitrogenous nucleobases occur in prebiotic environments?
How did fortuitous accidents select the five just-right nucleobases to make DNA and RNA, two purines and three pyrimidines?
How did unguided random events select purines with two rings, with nine atoms forming the two rings (5 carbon atoms and 4 nitrogen atoms), amongst almost unlimited possible configurations?
How did stochastic coincidence select pyrimidines with one ring, with six atoms forming the ring (4 carbon atoms and 2 nitrogen atoms), amongst an unfathomable number of possible configurations?
How did random trial and error foresee that this specific atomic arrangement of the nucleobases is required to get the right strength of the hydrogen bonds to join the two DNA strands and form Watson–Crick base pairing?
How did mechanisms without external direction foresee that this specific atomic arrangement would convey one of the best possible genetic systems, if not the best, to store information?
How would these functional bases have been separated from the confusing jumble of similar molecules that would also have been made?
How were the high-energy precursors needed to produce purines and pyrimidines produced in a sufficiently concentrated form and brought to the assembly site?
How could the adenine-uracil interaction function in any specific recognition scheme under the chaotic conditions of a "prebiotic soup", considering that the interaction is weak and nonspecific?
How could uracil nucleobases accumulate in prebiotic environments in sufficient quantities, if uracil has a half-life of only 12 years at 100 °C?
How could the ribose 5-carbon sugar rings that form the RNA and DNA backbone have been selected, if 6- or 4-carbon rings, or even more or fewer, are equally possible but non-functional?
How would the functional ribose molecules have been separated from the non-functional sugars?
How were the correct nitrogen atom of the base and the correct carbon atom of the sugar selected to be joined together?
How could right-handed configurations of RNA and DNA have been selected in a racemic pool of right- and left-handed molecules? Ribose must have been in its D form to adopt functional structures (the homochirality problem).
How could random events have brought all three parts together and bonded them in the right position (probably over one million nucleotides would have been required)?
How could prebiotic reactions have produced functional nucleosides? (There are no known ways of bringing about this thermodynamically uphill reaction in aqueous solution.)
How could prebiotic glycosidic bond formation between the sugar and the base have occurred, if such bonds are thermodynamically unstable in water, and overall intrinsically unstable?
How could RNA nucleotides have accumulated, if they degrade at warm temperatures in time periods ranging from nineteen days to twelve years? These are extremely short survival rates for the four RNA nucleotide building blocks.
How was phosphate, the third component, concentrated at reasonable levels? (The concentrations in the oceans or lakes would have been very low.)
How would prebiotic mechanisms phosphorylate the nucleosides at the correct site (the 5' position) if, in laboratory experiments, the 2' and 3' positions were also phosphorylated?
How could phosphate have been activated? In order to promote the energetically expensive nucleotide polymerization reaction, the (energetically uphill) phosphorylation of the nucleoside had to be possible.
How was the energy supply to make RNA accomplished? In modern cells, energy is consumed to make RNA.
How could a transition from prebiotic to biochemical synthesis have occurred? A huge gap and an enormous transition would still lie ahead to arrive at a fully functional, interlocked, and interdependent metabolic network.
How could RNA have formed, if water is required to make it, but RNA cannot emerge in water and cannot replicate with sufficient fidelity in water without sophisticated repair mechanisms in place?
How would the transition from prebiotic RNA synthesis to the highly regulated cellular metabolic synthesis have occurred? The pyrimidine synthesis pathway requires six regulated steps, seven enzymes, and energy in the form of ATP. The starting material for purine biosynthesis is ribose 5-phosphate, a product of the highly complex pentose phosphate pathway, which uses 12 enzymes. The de novo purine synthesis pathway requires ten regulated steps, eleven enzymes, and energy in the form of ATP.
DNA is more stable than RNA. Uracil (U) is replaced in DNA by thymine (T), and at the C2' position of ribose, an oxygen atom is removed by hypercomplex ribonucleotide reductase (RNR) molecular machines. The thymine-uracil exchange is the major chemical difference between DNA and RNA. This essential modification takes place before incorporation into the chromosomes. The synthesis of thymine requires seven enzymes; de novo biosynthesis of thymine is an intricate and energetically expensive process. All in all, not counting the metabolic pathways and enzymes required to make the precursors to start RNA and DNA synthesis, at least 26 enzymes are required. How did these enzymes emerge, if DNA is required to make them?
Amino acids
Chemical evolution of amino acids and proteins? Impossible! https://www.youtube.com/watch?v=1L1MfGrtk0A
How could ammonia (NH3), the precursor for amino acid synthesis, have accumulated on the prebiotic earth, if the lifetime of ammonia would have been short because of its photochemical dissociation?
How could prebiotic events have delivered the organosulfur compounds required in a few of the amino acids used in life, if in nature sulfur exists only in its most oxidized form (sulfate, or SO4), and only some unique groups of prokaryotes mediate the reduction of SO4 to its most reduced state (sulfide, or H2S)?
How did unguided stochastic coincidence select the right amino acids amongst the over 500 that occur naturally on earth?
How was the concomitant synthesis of undesired or irrelevant by-products avoided?
How were bifunctional monomers (molecules with two functional groups, so they can combine with two others) selected, and unifunctional monomers (with only one functional group) sorted out?
How did prebiotic events produce the twenty amino acids used in life? Eight proteinogenic amino acids have never been abiotically synthesized under prebiotic conditions.
How could achiral precursors of amino acids have produced and concentrated only left-handed amino acids? (The homochirality problem.)
How did the transition occur from prebiotic enantiomer selection to the enzymatic reaction of transamination, which had to be extant when cellular self-replication and life began?
How would natural causes have selected twenty, and not more or fewer, amino acids to make proteins?
How could natural events have had the foreknowledge that the selected amino acids are best suited to enable the formation of soluble structures with close-packed cores, allowing the presence of ordered binding pockets inside proteins?
How did nature "know" that the set of amino acids selected appears to be near-ideal and optimal?
How did amino acid synthesis regulation emerge? Biosynthetic pathways are often highly regulated such that building blocks are synthesized only when supplies are low.
How did the transition from prebiotic synthesis to cell synthesis of amino acids occur? A minimum of 112 enzymes is required to synthesize the 20 (+2) amino acids used in proteins.
Prebiotic cell membrane synthesis
How could simple amphiphiles (molecules containing a nonpolar hydrophobic region and a polar hydrophilic region, which self-assemble in aqueous solutions to form distinct structures such as micelles) have been available in the prebiotic inventory, if there has never been evidence for this? Furthermore, sources of compounds with hydrocarbon chains sufficiently long to form stable membranes are not known.
How could prebiotic mechanisms have transported and concentrated organic compounds to the pools and the construction site?
How could membranous vesicles have self-assembled from complex mixtures of organic compounds and ionic solutes, if science has no solution to this question?
How could there have been a prebiotic route to lipid compositions that could provide a membrane barrier sufficient to maintain proton gradients? Proton gradients are absolutely necessary for the generation of energy.
How to explain that lipid membranes would be useless without membrane proteins, while membrane proteins could not have emerged or evolved in the absence of functional membranes?
How did prebiotic processes select hydrocarbon chains in the required range of 14 to 18 carbons in length? There was no physical necessity to form carbon chains of the right length, nor any hindrance to joining chains of varying lengths, so chains of any size could have existed on the early earth.
How could there have been an "urge" for prebiotic compounds to add unsaturated cis double bonds near the center of the chain?
How is there a feasible route from prebiotic phospholipid synthesis to the complex metabolic phospholipid and fatty acid synthesis pathways performed by multiple enzyme-catalyzed steps, which had to be fully operational in LUCA?
How would random events start to attach two fatty acids to glycerol by ester or ether bonds, rather than just one, as necessary for cell membrane stability?
How would random events start to produce biological membranes which are not composed of pure phospholipids, but instead are mixtures of several phospholipid species, often with a sterol admixture such as cholesterol? There is no feasible prebiotic mechanism to join the right mixtures.
How did unguided events produce that essential characteristic of living cells, homeostasis: the ability to maintain a steady and more-or-less constant chemical balance in a changing environment? The first forms of life required an effective Ca2+ homeostatic system, which maintained intracellular Ca2+ at comfortably low concentrations, somewhere ∼10,000–20,000 times lower than in the extracellular milieu. There was no mechanism to generate this gradient.
How was the transition made from supposedly simple vesicles on the early earth to the ultracomplex membrane synthesis in modern cells, which would have to have been extant in the last universal common ancestor, hosting at least over 70 enzymes?
Prebiotic source of hydrocarbons
How would an ensemble of minerals present anywhere on the primitive Earth be capable of catalyzing each of the many steps of the reverse citric acid cycle?
How would a cycle mysteriously organize itself topographically on a metal sulfide surface?
How would such a cycle, despite the lack of evidence for its existence, have made the transition to the "life-like" complexity of the Wood-Ljungdahl pathway or the reverse TCA cycle, commonly proposed as the first carbon-fixing cycles on earth?
Large deposits of montmorillonite are present on the Earth today; it is believed to have been present at the time of the origin of life, and it has recently been detected on Mars. It is formed by aqueous weathering of volcanic ash. It catalyses the formation of RNA oligomers containing from 2 to 30–50 monomer units. Oligomers of this length are formed because this catalyst controls the structure of the oligomers formed and does not generate all possible isomers. Evidence of sequence-, regio- and homochiral selectivity in these oligomers has been obtained. Postulates on the role of selective versus specific catalysts in the origins of life are discussed. An introduction to the origin of life is given, with an emphasis on reaction conditions based on recent data obtained from zircons 4.0–4.5 Ga old.
Take the clay used in the Ferris et al. experiments, for instance. Montmorillonite (often used in cat litter) is a layered clay "rich in silicate and aluminum oxide bonds" (Shapiro 2006, 108). But the montmorillonite employed in the Ferris et al. experiments is not a naturally-occurring material, as Ertem (2004) explains in detail. Natural or native clays don't work, because they contain metal cations that interfere with phosphorylation reactions:
(Shapiro 2006, 108)
This handicap was overcome in the synthetic experiments by titrating the clays to a monoionic form, generally sodium, before they were used. Even after this step, the activity of the montmorillonite depended strongly on its physical source, with samples from Wyoming yielding the best results....Eventually the experimenters settled on Volclay, a commercially processed Wyoming montmorillonite provided by the American Colloid Company. Further purification steps were applied to obtain the catalyst used for the "prebiotic" formation of RNA.
Several years ago, a prominent origin of life researcher complained to me in private correspondence that 'you ID guys won't be satisfied until we put a spark through elemental gases, and a cell crawls out of the reaction vessel.'
But this is not an unreasonable demand that ID theorists make of the abiogenesis research community. It is, rather, what that community claims to be able to show -- namely, that functional complexity arises without intelligent intervention, strictly from physical precursors via natural regularities and chance events.
Thus, pointing out where intelligent intervention (design) is required for any product is hardly unfair sniping. It is simply realism: similar criticisms apply to the other steps in the Ferris et al. RNA experiments, such as the source of the activated mononucleotides employed, a point Ferris himself acknowledges:
A problem with the RNA world scenario is the absence of a plausible prebiotic synthesis of the requisite activated mononucleotides. (Huang and Ferris 2006, 8918)
From the book The Logic of Chance: The Nature and Origin of Biological Evolution, by Eugene V. Koonin:
The primary incentive behind the theory of self-replicating systems that Manfred Eigen outlined was to develop a simple model explaining the origin of biological information and, hence, of life itself. Eigen’s theory revealed the existence of the fundamental limit on the fidelity of replication (the Eigen threshold): If the product of the error (mutation) rate and the information capacity (genome size) is below the Eigen threshold, there will be stable inheritance and hence evolution; however, if it is above the threshold, the mutational meltdown and extinction become inevitable (Eigen, 1971). The Eigen threshold lies somewhere between 1 and 10 mutations per round of replication (Tejero, et al., 2011); regardless of the exact value, staying above the threshold fidelity is required for sustainable replication and so is a prerequisite for the start of biological evolution (see Figure 12-1A).
Indeed, the very origin of the first organisms presents at least an appearance of a paradox because a certain minimum level of complexity is required to make self-replication possible at all; high-fidelity replication requires additional functionalities that need even more information to be encoded (Penny, 2005). However, the replication fidelity at a given point in time limits the amount of information that can be encoded in the genome. What turns this seemingly vicious circle into the (seemingly) unending spiral of increasing complexity—the Darwin-Eigen cycle, following the terminology introduced by David Penny (Penny, 2005)—is a combination of natural selection with genetic drift. Even small gains in replication fidelity are advantageous to the system, if only because of the decrease of the reproduction cost as a result of the increasing yield of viable copies of the genome. In itself, a larger genome is more of a liability than an advantage because of higher replication costs. However, moderate genome increase, such as by duplication of parts of the genome or by recombination, can be fixed via genetic drift in small populations. Replicators with a sufficiently high fidelity can take advantage of such randomly fixed and initially useless genetic material by evolving new functions, without falling off the “Eigen cliff” (see Figure 12-1B). Among such newly evolved, fitness-increasing functions will be those that increase replication fidelity, which, in turn, allows a further increase in the amount of encoded information. And so the Darwin- Eigen cycle recapitulates itself in a spiral progression, leading to a steady increase in genome complexity (see Figure 12-1A). The crucial question in the study of the origin of life is how the Darwin-Eigen cycle started—how was the minimum complexity that is required to achieve the minimally acceptable replication fidelity attained? In even the simplest modern systems, such as RNA viruses with the replication fidelity of only about 10^3 and viroids that replicate with the lowest fidelity among the known replicons (about 10^2; Gago, et al., 2009), replication is catalyzed by complex protein polymerases. The replicase itself is produced by translation of the respective mRNA(s), which is mediated by the immensely complex ribosomal apparatus. Hence, the dramatic paradox of the origin of life is that, to attain the minimum complexity required for a biological system to start on the Darwin-Eigen spiral, a system of a far greater complexity appears to be required. How such a system could evolve is a puzzle that defeats conventional evolutionary thinking, all of which is about biological systems moving along the spiral; the solution is bound to be unusual.
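To make the threshold relation in the quote concrete, here is a minimal Python sketch, assuming the simplest reading: inheritance is sustainable while the expected number of mutations per replication round (per-base error rate times genome size) stays below the threshold of roughly 1–10 quoted above. The specific numbers below are illustrative assumptions, not measured values.

# Minimal sketch of the Eigen threshold: stable inheritance requires
# (error rate per base) x (genome size) < threshold (~1-10 mutations/round).
def sustainable(error_rate, genome_size, threshold=1.0):
    return error_rate * genome_size < threshold

print(sustainable(1e-2, 50))         # True:  a low-fidelity replicator can carry a tiny genome
print(sustainable(1e-2, 1000))       # False: the same fidelity cannot sustain 1,000 bases
print(sustainable(1e-9, 1_300_000))  # True:  proofreading-level fidelity, bacterial-size genome

This is the vicious circle in two lines: a short genome survives low fidelity, but encoding the machinery for higher fidelity demands a genome far longer than low fidelity can sustain.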
What must be explained is the arrangement of the codons in the standard codon table, which is highly non-random and serves to translate into the amino acid sequences that make proteins, and the origin of the assignment of the 64 triplet codons to the 20 amino acids. That is, the origin of translation. The origin of an alphabet through the triplet codons is one thing, but on top of that, it has to be translated into another "alphabet" constituted by the 20 amino acids. That is like explaining the origin of the capability to translate the English language into Chinese. On top of that, the machinery to carry out the process, that is, the hardware, has also to be explained. When humans translate English to Chinese, for example, we recognize the English word, and the translator knows the equivalent Chinese symbol and writes it down.
In the cell, aminoacyl-tRNA synthetase recognizes the triplet anticodon of the tRNA and attaches the equivalent amino acid to the tRNA. How could random chemical reactions have produced this recognition? Let's suppose that chance, rather than intelligence, was the mechanism. The imaginary cell would have to select randomly any of the amino acids, restrict itself by an unknown mechanism to the 20 used for life (since there are more out there), select by an unknown mechanism only left-handed ones, make a test drive, produce a polymer chain, and see what happens. Some theories try to explain the mechanism, but they all remain unsatisfactory. Obviously. Furthermore, aminoacyl-tRNA synthetases are complex enzymes. For what reason would they have come to be, if their final function could only be employed after the whole translation process was set in place, with a fully functional ribosome able to do its job? Remember the catch-22 situation: they are themselves made through the very process in question.
Why is it not rational to conclude that the code itself, the software, as well as the hardware, are best explained through the invention of a highly intelligent being, rather than random chemical affinities and reactions? Questions: What good would the ribosome be without tRNAs? Without amino acids, which are the product of enormously complex chemical processes and pathways? What good would the machinery be, if the code was not established, and neither was the assignment of each codon to the respective amino acid? Did the software and the hardware not have to be in place at the same time? Were the parts not only fully functional if fully developed, interlocked, set up, and tuned to do their job with precision, like a human-made motor?
And even if, let's say, the whole thing was fully working and in place, what good would it be without all the other parts required, that is, the DNA double helix, its compaction through histones, chromatin, and chromosomes, and its highly complex mechanism of information extraction and transcription into mRNA? Did the whole process, that is, INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION, AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING, and its respective machinery, not have to be all in place? Does that not constitute an interdependent and irreducibly complex system?
http://web.archive.org/web/20131209121838/http://gencodesignal.org/faq/ The confusion comes from the ambiguity in using the term “genetic code”. Here is a quote from Francis Crick, who seems to be the one who coined this term: Unfortunately the phrase “genetic code” is now used in two quite distinct ways. Laymen often use it to mean the entire genetic message in an organism. Molecular biologists usually mean the little dictionary that shows how to relate the four-letter language of the nucleic acids to the twenty-letter language of the proteins, just as the Morse code relates the language of dots and dashes to the twenty-six letters of the alphabet… The proper technical term for such a translation is, strictly speaking, not a code but a cipher. In the same way the Morse code should really be called the Morse cipher. I did not know this at the time, which was fortunate because “genetic code” sounds a lot more intriguing than “genetic cipher” (from “What Mad Pursuit”, 1988)
Transfer RNA, Delivery Vehicle for Amino Acids 11
While the mRNA is being processed by the ribosome in order to assemble amino acids into a protein, how will these amino acids actually be brought into the proper order? There does not seem to be any innate attraction or affinity between an amino acid and the RNA letters which code for it. In the early research after the Watson-Crick breakthrough, it became apparent that there must be intermediates to bring the amino acids to the ribosome in proper order. Two such vital go-betweens were finally located. One serves as a transport molecule. It is called transfer-RNA, which is a different form of RNA from that which has been described. Transfer-RNA, written tRNA, is a comparatively short chain of RNA containing some seventy-five or eighty ribonucleotides.
The RNA strand doubles back on itself, and base-pairs with its own chain in some places. The overall shape of the tRNA molecule in some ways resembles a key or a cloverleaf. If tRNA is to do its job properly, the shape must be very precise, and this seems to depend in part upon the right temperature and the correct concentration of certain ions (e.g., magnesium and sodium) in the cell fluid. Transfer-RNA is perfectly fitted for its mission. First of all, each tRNA type attaches to only one variety of the twenty amino acids. Secondly, the particular tRNA delivers that amino acid in the proper sequence for the forming protein. This is possible because the tRNA molecule has at one end a special RNA triplet of code letters which match the mRNA codon which specifies that particular amino acid. When these complementary codons come together by base-pairing, the amino acid being transported by that tRNA is thus in position to be linked to the growing protein chain in the correct order. All this takes place at the ribosome, which is like a mobile assembly machine as it moves along the mRNA strand (or as the mRNA tape passes through the ribosomes).
https://www.youtube.com/watch?v=D5vH4Q_tAkY
https://vimeo.com/114101147
The cell converts the information carried in an mRNA molecule into a protein molecule. This feat of translation was a focus of attention of biologists in the late 1950s, when it was posed as the “coding problem”: how is the information in a linear sequence of nucleotides in RNA translated into the linear sequence of a chemically quite different set of units—the amino acids in proteins?
The first scientist after Watson and Crick to propose a solution to the coding problem, that is, the relationship between DNA structure and protein synthesis, was the Russian physicist George Gamow. Gamow published in the October 1953 issue of Nature a solution called the "diamond code", an overlapping triplet code based on a combinatorial scheme in which 4 nucleotides arranged 3-at-a-time would specify 20 amino acids. Somewhat like a language, this highly restrictive code was primarily hypothetical, based on then-current knowledge of the behavior of nucleic acids and proteins. 3
The concept of coding applied to genetic specificity was somewhat misleading, as translation between the four nucleic acid bases and the 20 amino acids would obey the rules of a cipher instead of a code. As Crick acknowledged years later, in linguistic analysis, ciphers generally operate on units of regular length (as in the triplet DNA scheme), whereas codes operate on units of variable length (e.g., words, phrases). But the code metaphor worked well, even though it was literally inaccurate, and in Crick’s words, “‘Genetic code’ sounds a lot more intriguing than ‘genetic cipher’.”
An mRNA sequence is decoded in sets of three nucleotides
Once an mRNA has been produced by transcription and processing, the information present in its nucleotide sequence is used to synthesize a protein. Transcription is simple to understand as a means of information transfer: since DNA and RNA are chemically and structurally similar, the DNA can act as a direct template for the synthesis of RNA by complementary base-pairing. As the term transcription signifies, it is as if a message written out by hand is being converted, say, into a typewritten text. The language itself and the form of the message do not change, and the symbols used are closely related.
In contrast, the conversion of the information in RNA into protein represents a translation of the information into another language that uses quite different symbols. Moreover, since there are only 4 different nucleotides in mRNA and 20 different types of amino acids in a protein, this translation cannot be accounted for by a direct one-to-one correspondence between a nucleotide in RNA and an amino acid in protein. The nucleotide sequence of a gene, through the intermediary of mRNA, is translated into the amino acid sequence of a protein. This code was deciphered in the early 1960s.
Question: how did the translation of the triplet anticodon to amino acids, and its assignment, arise? There is no physical affinity between the anticodon and the amino acids. What must be explained is the arrangement of the codon "words" in the standard codon table, which is highly non-random, redundant, and optimal, and serves to translate the information into the amino acid sequences that make proteins, and the origin of the assignment of the 64 triplet codons to the 20 amino acids. That is, the origin of translation. The origin of an alphabet through the triplet codons is one thing, but on top of that, it has to be translated into another "alphabet" constituted by the 20 amino acids. That is like explaining the origin of the capability to translate the English language into Chinese. We have to constitute the English and Chinese languages and symbols first, in order to know their equivalence. That is a mental process.
Stephen Meyer, Signature in the Cell, page 99:
nothing about the physical or chemical features of the nucleotides or amino acids directly dictated any particular set of assignments between amino acids and bases in the DNA text. The code could not be deduced from the chemical properties of amino acids and nucleotide bases. Just as a specific letter of the English language can be represented by any combination of binary digits, so too could a given amino acid correspond to any combination of nucleotide bases.
tRNAs are adapter molecules. A cipher or translation system permits the translation of the information from DNA's four-character base sequences into the twenty-character "language" of proteins. The cell needs a means of translating and expressing the information stored in DNA. Groups of three nucleotides (called codons) on the mRNA specify the addition of one of the twenty protein-forming amino acids during the process of protein synthesis. Other scientists discovered that the cell uses a set of adapter molecules to help convert the information on mRNA into proteins.
How did biological specificity and functional information arise? The proteins would have to possess the correct sequences of amino acids in order to be able to unwind and copy genetic information; the ribosomal proteins and RNAs would need to be sequenced precisely in order to fold into subunits that fit together to form a functional ribosome; the transfer RNAs would have to mediate specific associations in order to convert the random sequences of bases on the polynucleotides into specific amino-acid sequences; and the sequences of amino acids thus produced would have to be arranged precisely in order to fold into stable three-dimensional structures.
The sequence of nucleotides in the mRNA molecule is read in consecutive groups of three. RNA is a linear polymer of four different nucleotides, so there are 4 x 4 x 4 = 64 possible combinations of three nucleotides: the triplets AAA, AUA, AUG, and so on. However, only 20 different amino acids are commonly found in proteins. Either some nucleotide triplets are never used, or the code is redundant and some amino acids are specified by more than one triplet. The second possibility is, in fact, the correct one, as shown by the completely deciphered genetic code shown below:
Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process.
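To make the 4 x 4 x 4 arithmetic and the redundancy concrete, here is a short Python sketch. It enumerates all 64 triplets and tallies them against the standard genetic code, encoded here as the NCBI standard-code string (one amino acid letter per codon in UCAG order, with '*' marking stop codons):

from itertools import product
from collections import Counter

BASES = "UCAG"  # conventional ordering of the codon table
# Standard genetic code: 64 letters in UCAG x UCAG x UCAG codon order
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codon_table = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

print(len(codon_table))          # 64 possible triplets
counts = Counter(codon_table.values())
print(counts["*"])               # 3 stop codons (UAA, UAG, UGA)
print(len(counts) - 1)           # 20 distinct amino acids
print(counts["L"], counts["M"])  # redundancy: 6 codons for leucine, 1 for methionine

The tallies confirm the text: 64 triplets map onto only 20 amino acids plus 3 stops, so the code must be redundant, with some amino acids (like leucine) specified by as many as six codons and others (like methionine) by one.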
In principle, an RNA sequence can be translated in any one of three different reading frames, depending on where the decoding process begins (Figure below). However, only one of the three possible reading frames in an mRNA encodes the required protein. We see later how a special punctuation signal at the beginning of each RNA message sets the correct reading frame at the start of protein synthesis.
AUG is the Universal Start Codon. Nearly every organism (and every gene) that has been studied uses the three-ribonucleotide sequence AUG to indicate the "START" of protein synthesis (Start Point of Translation).
The same question arises here: Why and how should natural processes have "chosen" to insert a punctuation signal, a universal start codon, in order for the ribosome to "know" where to start translation? This is essential in order for the machinery to start translating at the correct place.
Note that three codons are referred to as STOP codons: UAA, UAG, and UGA. These are used to terminate translation; they indicate the end of the gene's coding region.
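Putting the start codon, the reading frame, and the stop codons together, here is a sketch reusing the codon_table dictionary from the earlier snippet; the mRNA sequence is made up purely for illustration:

# Sketch: the first AUG fixes the reading frame, and translation then
# proceeds codon by codon until a stop codon (UAA, UAG, or UGA) appears.
# Requires codon_table from the snippet above.
def translate(mrna):
    start = mrna.find("AUG")        # the start codon sets the reading frame
    if start == -1:
        return ""
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        aa = codon_table[mrna[i:i+3]]
        if aa == "*":               # a stop codon terminates translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("GGCAUGGCUUGGAAAUAGCC"))  # -> "MAWK": AUG GCU UGG AAA, then UAG stops

Shifting the same sequence by one base would yield a completely different (and here meaningless) codon series, which is why fixing the frame at AUG matters.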
tRNA molecules match amino acids to codons in mRNA
The codons in an mRNA molecule do not directly recognize the amino acids they specify: the group of three nucleotides does not, for example, bind directly to the amino acid. Rather, the translation of mRNA into protein depends on adaptor molecules that can recognize and bind both to the codon and, at another site on their surface, to the amino acid. These adaptors consist of a set of small RNA molecules known as transfer RNAs (tRNAs), each about 80 nucleotides in length.
RNA molecules can fold into precise three-dimensional structures, and the tRNA molecules provide a striking example. Four short segments of the folded tRNA are double-helical, producing a molecule that looks like a cloverleaf when drawn schematically. See below:
For example, a 5'-GCUC-3' sequence in one part of a polynucleotide chain can form a relatively strong association with a 5'-GAGC-3' sequence in another region of the same molecule. The cloverleaf undergoes further folding to form a compact L-shaped structure that is held together by additional hydrogen bonds between different regions of the molecule. Two regions of unpaired nucleotides situated at either end of the L-shaped molecule are crucial to the function of tRNA in protein synthesis. One of these regions forms the anticodon, a set of three consecutive nucleotides that pairs with the complementary codon in an mRNA molecule. The other is a short single-stranded region at the 3' end of the molecule; this is the site where the amino acid that matches the codon is attached to the tRNA. The genetic code is redundant; that is, several different codons can specify a single amino acid. This redundancy implies either that there is more than one tRNA for many of the amino acids or that some tRNA molecules can base-pair with more than one codon. In fact, both situations occur. Some amino acids have more than one tRNA, and some tRNAs are constructed so that they require accurate base-pairing only at the first two positions of the codon and can tolerate a mismatch (or wobble) at the third position. See below.
Wobble base-pairing between codons and anticodons. If the nucleotide listed in the first column is present at the third, or wobble, position of the codon, it can base-pair with any of the nucleotides listed in the second column. Thus, for example, when inosine (I) is present in the wobble position of the tRNA anticodon, the tRNA can recognize any one of three different codons in bacteria and either of two codons in eucaryotes. The inosine in tRNAs is formed from the deamination of guanine, a chemical modification that takes place after the tRNA has been synthesized. The nonstandard base pairs, including those made with inosine, are generally weaker than conventional base pairs. Note that codon–anticodon base pairing is more stringent at positions 1 and 2 of the codon: here only conventional base pairs are permitted. The differences in wobble base-pairing interactions between bacteria and eucaryotes presumably result from subtle structural differences between bacterial and eucaryotic ribosomes, the molecular machines that perform protein synthesis.
(Adapted from C. Guthrie and J. Abelson, in The Molecular Biology of the Yeast Saccharomyces: Metabolism and Gene Expression, pp. 487–528. Cold Spring Harbor, New York: Cold Spring Harbor Laboratory Press, 1982.)
This wobble base-pairing explains why so many of the alternative codons for an amino acid differ only in their third nucleotide . In bacteria, wobble base-pairings make it possible to fit the 20 amino acids to their 61 codons with as few as 31 kinds of tRNA molecules. The exact number of different kinds of tRNAs, however, differs from one species to the next. For example, humans have nearly 500 tRNA genes but, among them, only 48 different anticodons are represented.
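The third-position degeneracy that wobble exploits is easy to see computationally. This sketch (again reusing codon_table from the snippet above) groups the 64 codons by their first two bases and checks how often the third base makes no difference to the encoded amino acid:

from collections import defaultdict

# Group codons into the 16 "boxes" sharing the same first two bases.
# Requires codon_table from the earlier snippet.
boxes = defaultdict(set)
for codon, aa in codon_table.items():
    boxes[codon[:2]].add(aa)

family_boxes = [prefix for prefix, aas in boxes.items() if len(aas) == 1]
print(len(family_boxes))      # 8 boxes where any third base gives the same amino acid
print(sorted(family_boxes))   # e.g. GC- is always alanine, GG- always glycine

In half of the 16 boxes the third position carries no information at all, which is exactly the slack that lets a wobbling anticodon cover several codons and reduces the minimum tRNA count toward the 31 mentioned above.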
Specific enzymes couple each amino acid to its appropriate tRNA molecule
We have seen that, to read the genetic code in DNA, cells make a series of different tRNAs. We now consider how each tRNA molecule becomes linked to the one amino acid in 20 that is its appropriate partner. Recognition and attachment of the correct amino acid depends on enzymes called aminoacyl-tRNA synthetases, which covalently couple each amino acid to its appropriate set of tRNA molecules.
Most cells have a different synthetase enzyme for each amino acid (that is, 20 synthetases in all); one attaches glycine to all tRNAs that recognize codons for glycine, another attaches alanine to all tRNAs that recognize codons for alanine, and so on. Many bacteria, however, have fewer than 20 synthetases, and the same synthetase enzyme is responsible for coupling more than one amino acid to the appropriate tRNAs. In these cases, a single synthetase places the identical amino acid on two different types of tRNAs, only one of which has an anticodon that matches the amino acid. A second enzyme then chemically modifies each “incorrectly” attached amino acid so that it now corresponds to the anticodon displayed by its covalently linked tRNA. The synthetase-catalyzed reaction that attaches the amino acid to the 3' end of the tRNA is one of many reactions coupled to the energy-releasing hydrolysis of ATP, and it produces a high-energy bond between the tRNA and the amino acid. The energy of this bond is used at a later stage in protein synthesis to link the amino acid covalently to the growing polypeptide chain. The aminoacyl-tRNA synthetase enzymes and the tRNAs are equally important in the decoding process.
These enzymes are not gentle with tRNA molecules. The structure of glutaminyl-tRNA synthetase with its tRNA (entry 1gtr) is a good example (see above). The enzyme firmly grips the anticodon, spreading the three bases widely apart for better recognition. At the other end, the enzyme unpairs one base at the beginning of the chain, seen curving upward here, and kinks the long acceptor end of the chain into a tight hairpin, seen here curving downward. This places the 2' hydroxyl on the last nucleotide in the active site, where ATP and the amino acid (not present in this structure) are bound.
The tRNA and ATP fit precisely in the active site of the enzyme, and the structure is configured and designed to function in a finely tuned manner. How could such a functional device be the result of random unguided forces and chemical reactions without an end goal?
The genetic code is translated by means of two adaptors that act one after another. The first adaptor is the aminoacyl-tRNA synthetase, which couples a particular amino acid to its corresponding tRNA; the second adaptor is the tRNA molecule itself, whose anticodon forms base pairs with the appropriate codon on the mRNA. An error in either step would cause the wrong amino acid to be incorporated into a protein chain. In the sequence of events shown, the amino acid tryptophan (Trp) is selected by the codon UGG on the mRNA.
This was established by an experiment in which one amino acid (cysteine) was chemically converted into a different amino acid (alanine) after it had already been attached to its specific tRNA. When such “hybrid” aminoacyl-tRNA molecules were used for protein synthesis in a cell-free system, the wrong amino acid was inserted at every point in the protein chain where that tRNA was used. Although, as we shall see, cells have several quality control mechanisms to avoid this type of mishap, the experiment establishes that the genetic code is translated by two sets of adaptors that act sequentially. Each matches one molecular surface to another with great specificity, and it is their combined action that associates each sequence of three nucleotides in the mRNA molecule—that is, each codon—with its particular amino acid.
Editing by tRNA Synthetases Ensures Accuracy

Several mechanisms working together ensure that the tRNA synthetase links the correct amino acid to each tRNA. The synthetase must first select the correct amino acid, and most synthetases do so by a two-step mechanism. First, the correct amino acid has the highest affinity for the active-site pocket of its synthetase and is therefore favored over the other 19. In particular, amino acids larger than the correct one are effectively excluded from the active site. However, accurate discrimination between two similar amino acids, such as isoleucine and valine (which differ by only a methyl group), is very difficult to achieve by a one-step recognition mechanism. A second discrimination step occurs after the amino acid has been covalently linked to AMP. When tRNA binds the synthetase, it tries to force the amino acid into a second pocket in the synthetase, the precise dimensions of which exclude the correct amino acid but allow access by closely related amino acids. Once an amino acid enters this editing pocket, it is hydrolyzed from the AMP (or from the tRNA itself if the aminoacyl-tRNA bond has already formed), and is released from the enzyme. This hydrolytic editing, which is analogous to the exonucleolytic proofreading by DNA polymerases, raises the overall accuracy of tRNA charging to approximately one mistake in 40,000 couplings.
Editing significantly decreases the frequency of errors and is important for translational quality control, and many details of the various editing mechanisms and their effect on different cellular systems are now starting to emerge. 8
High Fidelity
Aminoacyl-tRNA synthetases must perform their tasks with high accuracy. Every mistake they make results in a misplaced amino acid when new proteins are constructed. These enzymes make about one mistake in 10,000. For most amino acids, this level of accuracy is not too difficult to achieve. Most of the amino acids are quite different from one another, and, as mentioned before, many parts of the tRNA are used for accurate recognition. But in a few cases it is difficult to choose just the right amino acid, and these enzymes must resort to special techniques.
Isoleucine is a particularly difficult example. It is recognized by an isoleucine-shaped hole in the enzyme, which is too small to fit larger amino acids like methionine and phenylalanine, and too hydrophobic to bind anything with polar sidechains. But the slightly smaller amino acid valine, different by only a single methyl group, also fits nicely into this pocket, binding in place of isoleucine about 1 time in 150. This is far too many errors, so corrective steps must be taken. Isoleucyl-tRNA synthetase (PDB entry 1ffy) solves this problem with a second active site, which performs an editing reaction. Isoleucine does not fit into this site, but errant valine does. The mistake is then cleaved away, leaving the tRNA ready for a properly placed isoleucine amino acid. This proofreading step improves the overall error rate to about 1 in 3,000. 9
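To see how the fidelity figures quoted above fit together, here is a small arithmetic sketch (my own, using only the numbers from the passage): serial, independent checkpoints multiply, so a modest selectivity at each step yields a high overall accuracy.

# Rough arithmetic sketch (my own, not from the sources quoted above):
# how serial checks multiply to give the quoted fidelity figures.

initial_error = 1 / 150    # valine mis-binding at the synthetic site (quoted above)
final_error   = 1 / 3000   # overall error after hydrolytic editing (quoted above)

# Fraction of errant valine the editing site must catch to get from
# 1/150 down to 1/3000:
editing_efficiency = 1 - final_error / initial_error
print(f"editing must remove ~{editing_efficiency:.0%} of mis-activated valine")
# -> editing must remove ~95% of mis-activated valine

# More generally, if each independent checkpoint passes errors with
# probability p_i, the residual error is the product of the p_i:
checkpoints = [1 / 150, 0.05]   # site selectivity, then editing leakage
residual = 1.0
for p in checkpoints:
    residual *= p
print(f"residual error = 1 in {1 / residual:,.0f}")   # -> 1 in 3,000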
This is an amazing error-proofreading technique, which adds to the other repair mechanisms in the cell. Once again the question arises: how could these precise molecular machines have arisen by natural means, without intelligence involved? This seems to be one more example of highly sophisticated nanomolecular machinery designed to fulfill its task with a high degree of fidelity and error minimization, which can arise only through the foresight of an incredibly intelligent creator.
aaRS come in two unrelated families; 10 of the 20 amino acids need a Class I aaRS, the other 10 a Class II aaRS. This landscape is thus littered with perplexing questions like these: I. Why wasn’t one ancestor enough when they both do the same job? II. How did the two types of ancestral synthetases avoid competition that might have eliminated the inferior Class? 12
A new peer-reviewed paper in the journal Frontiers in Genetics, "Redundancy of the genetic code enables translational pausing," finds that so-called "redundant" codons may actually serve important functions in the genome. Redundant (also called "degenerate") codons are those triplets of nucleotides that encode the same amino acid. For example, in the genetic code, the codons GGU, GGC, GGA, and GGG all encode the amino acid glycine. While it has been shown that such redundancy is actually optimized to minimize the impact of mutations resulting in amino acid changes, it is generally assumed that synonymous codons are functionally equivalent. They just encode the same amino acid, and that's it. 5
The ribosome is capable of reading both sets of commands -- as they put it, "[t]he ribosome can be thought of as an autonomous functional processor of data that it sees at its input." To put it another way, the genetic code is "multidimensional," a code within a code. This multidimensional nature exceeds the complexity of computer codes generated by humans, which lack the kind of redundancy of the genetic code. As the abstract states:
The codon redundancy ("degeneracy") found in protein-coding regions of mRNA also prescribes Translational Pausing (TP). When coupled with the appropriate interpreters, multiple meanings and functions are programmed into the same sequence of configurable switch-settings. This additional layer of Ontological Prescriptive Information (PIo) purposely slows or speeds up the translation decoding process within the ribosome. Variable translation rates help prescribe functional folding of the nascent protein. Redundancy of the codon to amino acid mapping, therefore, is anything but superfluous or degenerate. Redundancy programming allows for simultaneous dual prescriptions of TP and amino acid assignments without cross-talk. This allows both functions to be coincident and realizable. We will demonstrate that the TP schema is a bona fide rule-based code, conforming to logical code-like properties. Second, we will demonstrate that this TP code is programmed into the supposedly degenerate redundancy of the codon table. We will show that algorithmic processes play a dominant role in the realization of this multi-dimensional code.
The paper even suggests, "Cause-and-effect physical determinism...cannot account for the programming of sequence-dependent biofunction."
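To make the paper's point concrete, here is a toy Python sketch (my own illustration, not the paper's data). The four glycine codons are the standard genetic code; the per-codon "speed" values are invented placeholders standing in for whatever pause signal the ribosome's interpreter reads.

# Toy illustration (my own sketch): synonymous codons encode the same
# amino acid, yet could in principle carry a second signal such as a
# translation speed. The glycine codons below are the standard genetic
# code; the speed values are invented placeholders.

GLYCINE_CODONS = ["GGU", "GGC", "GGA", "GGG"]   # all encode glycine

# Hypothetical relative decoding speeds (fast = 1.0, slow = 0.25):
codon_speed = {"GGU": 1.0, "GGC": 0.9, "GGA": 0.4, "GGG": 0.25}

def pause_profile(mrna_codons):
    """Return the assumed per-codon speed along a message."""
    return [codon_speed[c] for c in mrna_codons]

# Two messages that translate to the identical peptide (Gly-Gly-Gly)...
fast_message = ["GGU", "GGC", "GGU"]
slow_message = ["GGA", "GGG", "GGA"]

# ...but, under the assumed speed table, with different pause profiles:
print(pause_profile(fast_message))   # [1.0, 0.9, 1.0]
print(pause_profile(slow_message))   # [0.4, 0.25, 0.4]

Same amino acid sequence, two different "pause" messages: this is the sense in which the redundancy carries a second, superimposed code.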
Crucial role of the tRNA activating enzymes 7

To try to explain the source of the code, various researchers have sought some sort of chemical affinity between amino acids and their corresponding codons. But this approach is misguided:
First of all, the code is mediated by tRNAs, which carry the anti-codon (pairing with the codon in the mRNA) rather than the codon itself. So, if the code were based on affinities between amino acids and anti-codons, it implies that the process of translation via transcription cannot have arisen as a second stage or improvement on a simpler direct system: the complex two-step process would need to have arisen right from the start.
Second, the amino acid has no role in identifying the tRNA or the codon. This association is done by an activating enzyme, which attaches each amino acid to its appropriate tRNA (clearly requiring the enzyme to correctly identify both components). There are 20 different activating enzymes - one for each type of amino acid.
Interestingly, the end of the tRNA to which the amino acid attaches has the same nucleotide sequence for all amino acids, which constitutes a third reason. Interest in the genetic code tends to focus on the role of the tRNAs, but as just indicated, that is only one half of implementing the code. Just as important as the codon-anticodon pairing (between mRNA and tRNA) is the ability of each activating enzyme to bring together an amino acid with its appropriate tRNA. It is evident that implementation of the code requires two sets of intermediary molecules: the tRNAs, which interact with the ribosomes and recognise the appropriate codon on the mRNA, and the activating enzymes, which attach the right amino acid to its tRNA. This is the sort of complexity that pervades biological systems, and which poses such a formidable challenge to an evolutionary explanation of its origin. It would be improbable enough if the code were implemented only by the tRNAs, which have 70 to 80 nucleotides; but the equally crucial and complementary role of the activating enzymes, which are hundreds of amino acids long, excludes any realistic possibility that this sort of arrangement could have arisen opportunistically.
Progressive development of the genetic code is not realistic
In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary, manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.
First, it would seem that the early codons need only have used two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognising this difficulty, most researchers assume that the code used 3-base codons from the outset, which was remarkably fortuitous, or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesised and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) must be put in place - and all done opportunistically! Totally incredible!
What must be explained is the arrangement of the codons in the standard codon table, which is highly non-random and serves to translate into the amino acid sequence that makes proteins, and the origin of the assignment of the 64 triplet codons to the 20 amino acids; that is, the origin of its translation. The origin of an alphabet through the triplet codons is one thing; but on top of that, it has to be translated into another "alphabet", constituted by the sequence of 20 amino acids. That is like explaining the origin of the capability to translate the English language into Chinese. On top of that, the machinery to perform the process, the hardware, also has to be explained. When humans translate English to Chinese, the translator recognizes the English word, knows the equivalent Chinese symbol, and writes it down.
In the cell, an aminoacyl-tRNA synthetase recognizes the triplet anticodon of the tRNA and attaches the equivalent amino acid to the tRNA. How could random chemical reactions have produced this recognition? Let's suppose that chance, rather than intelligence, was the mechanism. The imaginary cell would have to select randomly any of the amino acids, restrict itself by some unknown mechanism to the 20 used for life (since there are more out there), select by some unknown mechanism only left-handed ones, and then make a test run, producing a polypeptide to see what happens. Some theories try to explain the mechanism, but they all remain unsatisfactory. Obviously. Furthermore, aminoacyl-tRNA synthetases are complex enzymes. For what reason would they have come to be, if their function could only be employed after the whole translation process was set in place, with a fully functional ribosome able to do its job? Remember the catch-22: they are themselves made through the very process in question.
Why is it not rational to conclude that the code itself, the software, as well as the hardware, are best explained by the invention of a highly intelligent being, rather than by random chemical affinities and reactions? Questions: What good would the ribosome be without tRNAs? Without amino acids, which are the product of enormously complex chemical processes and pathways? What good would the machinery be if the code was not established, and the assignment of each codon to its respective amino acid not fixed? Did the software and the hardware not have to be in place at the same time? Were the parts not only functional once fully developed, interlocked, set up, and tuned to do their job with precision, like a human-made motor?
And even if, let's say, the whole thing was fully working and in place, what good would it be without all the other parts required, that is, the DNA double helix, its compaction through histones, chromatin, and chromosomes, and its highly complex mechanism of information extraction and transcription into mRNA? Did the whole process, that is, INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION, AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING, and its respective machinery, not have to be all in place? Does that not constitute an interdependent and irreducibly complex system?
Koonin, the logic of chance, page 237 The origin of translation: The key ideas and models During the 40 years since the discovery of the translation mechanism and deciphering of the genetic code, numerous theoretical (inevitably, speculative, sometimes far-fetched, often highly ingenious) models of the origin and evolution of various components of the translation apparatus and different aspects of the translation process have been proposed. It is unrealistic to provide here a thorough critical review of these models. Instead, I consider a few central ideas that are germane to the thinking about the origin of translation and then discuss in somewhat greater detail the only two coherent scenarios I am aware of. The main general point about the evolution of translation is that selection for protein synthesis could not have been the underlying cause behind the origin of the translation system. To evolve this complex system via the Darwinian route, numerous steps are required, but proteins appear only at the last steps; until that point, an evolving organism “does not know” how good proteins could be.
The DNA - Enzyme System is Irreducibly Complex 10
An often undiscussed aspect of complexity is how the tRNAs get assigned to the right amino acids. For the DNA language to be translated properly, each tRNA must be attached to the correct amino acid. If this crucial step in gene expression is not functional, then the language of DNA breaks down. Special enzymes called aminoacyl-tRNA synthetases (aaRSs) ensure that the proper amino acid is attached to a tRNA with the correct anticodon through a chemical reaction called "aminoacylation." Accurate translation requires not only that each tRNA be assigned the correct amino acid, but also that it not be aminoacylated by any of the aaRS molecules for the other 19 amino acids. One biochemistry textbook notes that because all aaRSs catalyze similar reactions upon various similar tRNA molecules, it was thought they "evolved from a common ancestor and should therefore be structurally related." (Voet and Voet pg. 971-975) However, this was not the case: "aaRSs form a diverse group of [over 100] enzymes … and there is little sequence similarity among synthetases specific for different amino acids." (Voet and Voet pg. 971-975) Amazingly, these aaRSs are themselves coded for by the DNA: this forms the essence of a chicken-and-egg problem. The enzymes themselves help perform the very task that constructs them!
Stephen Meyer writes the following in the BIO-Complexity paper "Can the Origin of the Genetic Code Be Explained by Direct RNA Templating?" 1 :
The three main naturalistic concepts on the origin and evolution of the code are: the stereochemical theory, according to which codon assignments are dictated by physico-chemical affinity between amino acids and the cognate codons (anticodons); the coevolution theory, which posits that the code structure coevolved with the amino acid biosynthesis pathways; and the error minimization theory, under which selection to minimize the adverse effect of point mutations and translation errors shaped the code.
The genetic code as we observe it today is a semantic (symbol-based) relation between (a) amino acids, the building blocks of proteins, and (b) codons, the three-nucleotide units in messenger RNA specifying the identity and order of different amino acids in protein assembly. The actual physical mediators of the code, however, are transfer RNAs (tRNAs) that, after being charged with their specific amino acids by enzymes known as aminoacyl transfer RNA synthetases (aaRSs), present the amino acids for peptide bond formation in the peptidyl-transferase (P) site of the ribosome, the molecular machine that constructs proteins.
When proteins are produced in cells based on the "genetic code" of codons, there is a precise process under which molecules called transfer RNA (tRNA) bind to specific amino acids and then transport them to cellular factories called ribosomes where the amino acids are placed together, step by step, to form a protein. Mistakes in this process, which is mediated by enzymes called synthetases, can be disastrous, as they can lead to improperly formed proteins. Thankfully, the tRNA molecules are matched to the proper amino acids with great precision, but we still lack a fundamental understanding of how this selection takes place. 4
The secondary structure of a typical tRNA reveals the coding (semantic) relations that Yarus et al. are trying to obtain from chemistry alone, a quest Yockey has compared to latter-day alchemy.
At the end of its 3' arm, the tRNA binds its cognate amino acid via the universally conserved CCA sequence. Some distance away—about 70 Å—in loop 2, at the other end of the inverted cloverleaf, the anticodon recognizes the corresponding codon in the mRNA strand. (The familiar 'cloverleaf' shape represents only the secondary structure of tRNA; its three-dimensional form more closely resembles an "L" shape, with the anticodon at one end and an amino acid at the other.) Thus, in the current genetic code, there is no direct chemical interaction between codons, anticodons, and amino acids. The anticodon triplet and amino acid are situated at opposite ends of the tRNA: the mRNA codon binds not to the amino acid directly, but rather to the anticodon triplet in loop 2 of the tRNA.
Since all twenty amino acids, when bound to their corresponding tRNA molecules, attach to the same CCA sequence at the end of the 3' arm, the stereochemical properties of that nucleotide sequence clearly do not determine which amino acids attach, and which do not. The CCA sequence is indifferent, so to speak, to which amino acids bind to it.
Nevertheless, tRNAs are informationally (i.e., semantically) highly specific: protein assembly and biological function—but not chemistry—demand such specificity. As noted, in the current code, codon-to-amino acid semantic mappings are mediated by tRNAs, but also by the enzymatic action of the twenty separate aminoacyl-tRNA synthetases
Aminoacyl tRNA synthetase
An aminoacyl-tRNA synthetase (aaRS) is an enzyme that catalyzes the esterification of a specific cognate amino acid or its precursor to one of all its compatible cognate tRNAs to form an aminoacyl-tRNA. In other words, the aminoacyl-tRNA synthetase attaches the appropriate amino acid onto its tRNA. This is sometimes called "charging" or "loading" the tRNA with the amino acid. Once the tRNA is charged, a ribosome can transfer the amino acid from the tRNA onto a growing peptide, according to the genetic code. The aminoacyl-tRNA therefore plays an important role in translation, the expression of genes to create proteins. 2
This set of twenty enzymes knows what amino acid to fasten to one end of a transfer-RNA (tRNA) molecule, based on the triplet codon it reads at the other end. It's like translating English to Chinese. A coded message is complex enough, but the ability to translate a language into another language bears the hallmarks of intelligent design. 6
Most cells use twenty aaRS enzymes, one for each amino acid. Each of these proteins recognizes a specific amino acid and the specific anticodons it binds to within the code. They then bind amino acids to the tRNA that bears the corresponding anticodon.
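The two-adaptor logic just described can be sketched as a toy Python model (my own illustration; only a handful of standard codon assignments are included, and a lookup table stands in for the twenty synthetases and their recognition chemistry):

# Minimal sketch (my own toy model) of the two sequential adaptors:
# a synthetase step that knows which amino acid belongs with which tRNA,
# and a codon-anticodon match at the ribosome. Standard code subset only.

CODON_TO_AMINO_ACID = {        # standard genetic code (subset)
    "UGG": "Trp", "AUG": "Met",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
}

PAIRING = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon_for(codon):
    """Watson-Crick complement, read antiparallel (5'->3')."""
    return "".join(PAIRING[b] for b in reversed(codon))

class tRNA:
    def __init__(self, anticodon):
        self.anticodon = anticodon
        self.amino_acid = None     # uncharged until a synthetase acts

def synthetase_charge(trna):
    """Adaptor 1: attach the amino acid that the code assigns to the codon
    this tRNA reads. In the cell, 20 separate enzymes do this recognition;
    here a lookup table stands in for them."""
    codon = anticodon_for(trna.anticodon)     # the codon this tRNA reads
    trna.amino_acid = CODON_TO_AMINO_ACID[codon]
    return trna

def ribosome_select(codon, charged_trnas):
    """Adaptor 2: pick the charged tRNA whose anticodon pairs the codon."""
    for t in charged_trnas:
        if anticodon_for(t.anticodon) == codon:
            return t.amino_acid
    return None

pool = [synthetase_charge(tRNA(anticodon_for(c))) for c in CODON_TO_AMINO_ACID]
print(ribosome_select("UGG", pool))   # -> 'Trp', as in the example above

Note what the sketch makes explicit: the codon-to-amino-acid mapping lives entirely in the charging step, not in any chemistry between codon and amino acid.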
Thus, instead of the code reducing to a simple set of stereochemical affinities, biochemists have found a functionally interdependent system of highly specific molecules, including mRNA, a suite of tRNAs, and twenty specific aaRS enzymes, each of which is itself constructed from information stored on the very DNA strands that the system as a whole decodes.
Attempts to explain one part of the integrated complexity of the gene-expression system, namely the genetic code, by reference to simple chemical affinities lead not to simple rules of chemical attraction, but instead to an integrated system of multiple large molecular components. While this information-transmitting system exploits (i.e., uses) chemistry, it is not reducible to direct chemical affinities between codons or anticodons and their cognate amino acids.
The DRT model and the sequencing problem
One further aspect of Yarus's work needs clarification and critique. One of the longest-standing and most vexing problems in origin-of-life research is known as the sequencing problem: the problem of explaining the origin of the specifically arranged sequences of nucleotide bases that provide the genetic information or instructions for building proteins. In addition to its other deficiencies, it is important to point out that Yarus et al. do not solve the sequencing problem, although they do claim to address it indirectly. Instead, Yarus et al. attempt to explain the origin of the genetic code, or more precisely, one aspect of the translation system: the origin of the associations between certain RNA triplets and their cognate amino acids.
Yarus et al. want to demonstrate that particular RNA triplets show chemical affinities to particular amino acids (their cognates in the present-day code). They try to do this by showing that in some RNA strands, individual triplets and their cognate amino acids bind preferentially to each other. They then envision that such affinities initially provided a direct (stereochemical) template for amino acids during protein assembly.
Since Yarus et al. think that stereochemical affinities originally caused protein synthesis to occur by direct templating, they also seem to think that solving the problem of the origin of the code would also simultaneously solve the problem of sequencing. But this does not follow. Even if we assume that Yarus et al. have succeeded in establishing a stereochemical basis for the associations between RNA triplets and amino acids in the present-day code (which they have not done; see above), they would not have solved the problem of sequencing.
The sequencing problem requires that long RNA strands would need to contain triplets already arranged to bind their cognate amino acids in the precise order necessary to assemble functional proteins. Yarus et al. analyzed RNA strands enriched in specific code-relevant triplets, and claim to have found that these strands show a chemical affinity with their cognate amino acids. But they did not find RNA strands with a properly sequenced series of triplets, each forming an association with a code-relevant amino acid as the DRT model would require, and arranged in the kind of order required to make functional proteins. To synthesize proteins by direct templating (even assuming the existence of all necessary affinities), the RNA template must have many properly sequenced triplets, just as we find in the actual messenger RNA transcripts.
The evidence of DNA storage
1. In the scientific magazine Nature, in January 2013, Nick Goldman et al. reported the successful use of DNA to store large amounts of data.
2. "Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totaling 739 kilobytes of hard-disk storage and with an estimated Shannon information of 5.2 × 10^6 bits into a DNA code, synthesized this DNA, sequenced it and reconstructed the original files with 100% accuracy. Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade."
3. "DNA-based storage has potential as a practical solution to the digital archiving problem and may become a cost-effective solution for rarely accessed archives," said Goldman.
4. DNA far surpasses any current man-made technology and can last for thousands of years. To get a handle on this, consider that 1 petabyte is equivalent to 1 million gigabytes of information storage. This paper reports an information storage density of 2.2 petabytes per gram.
5. Scientists needed many decades to discover this incredibly effective design of DNA, made, as they say, by nature. The discovery of the complex design of DNA required intelligence. How can one deny a superior intelligence that designed the hundreds of different DNAs necessary for the survival of all the species?
6. That intelligence of nature is actually the intelligence of God, since intelligence is only a property of a person.
7. Thus God inevitably exists.
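As a quick unit check of the figures quoted above (my own arithmetic, assuming 1 petabyte = 10^15 bytes and 1 gigabyte = 10^9 bytes):

# Unit-check of the Goldman et al. figures quoted above (my arithmetic).

encoded_bytes    = 739e3    # 739 kilobytes encoded
shannon_bits     = 5.2e6    # their estimated Shannon information
density_pb_per_g = 2.2      # reported density: 2.2 petabytes per gram

print(f"{encoded_bytes * 8:.2e} raw bits vs {shannon_bits:.2e} Shannon bits")
# 739 kB x 8 = 5.9e6 raw bits, consistent with ~5.2e6 bits of Shannon information

bytes_per_gram = density_pb_per_g * 1e15
print(f"{bytes_per_gram / 1e9:,.0f} gigabytes per gram")
# -> 2,200,000 gigabytes per gram, i.e. 2.2 million GB in one gram of DNA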
Perry Marshall, Evolution 2.0, page 192 Ultra-High-Density Data Storage and Compression Your cells contain at least 92 strands of DNA and 46 double-helical chromosomes. In total, they stretch 6 feet (1.8 meters) end to end. Every human DNA strand contains as much data as a CD. Every DNA strand in your body stretched end to end would reach from Earth to the sun and back 600 times. When you scratch your arm, the dead skin cells that flake off contain more information than a warehouse of hard drives. Cells store data at millions of times more density than hard drives, 10^21 bits per gram. Not only that, they use that data to store instructions vastly more effectively than human-made programs; consider that Windows takes 20 times as much space (bits) as your own genome. We don't quite know how to quantify the total information in DNA. The genome is unfathomably more elegant, more sophisticated, and more efficient in its use of data than anything we have ever designed. Even with the breathtaking pace of Moore's Law—the principle that data density doubles every two years and its cost is cut in half—it's hard to estimate how many centuries it may take for human technology to catch up. Hopefully the lessons we learn from DNA can speed our efforts. A single gene can be used a hundred times by different aspects of the genetic program, expressed in a hundred different ways (248). The same program provides unique instructions to the several hundred different types of cells in the human body; it dictates their relationships to each other in three-dimensional space to make organs, as well as in a fourth dimension, the timeline of growth and development. It knows, for instance, that boys' voices need to change when they're 13 and not when they're 3. It's far from clear how this information is stored and where it all resides. Confining our understanding of DNA data to computer models is itself a limiting paradigm. This is all the more reason why our standard for excellence ought to be the cell and not our own technology:
• DNA is a programming language, a database, a communications protocol, and a highly compressed storage device for reading and writing data—all at the same time. • As a programming language it’s more versatile than C, Visual Basic, or PHP. • As a database it’s denser than Oracle or MySQL. • As a communications protocol it wastes far less space than TCP/IP and it’s more robust than Ethernet. • As a compression algorithm it’s superior to WinZip or anything else we’ve dreamed of. • As a storage medium it’s a trillion times denser than a CD, and packs information into less space than any hard drive or memory chip currently made. • And even the smallest bacterium is capable of employing all these mechanisms to dominate its environment and live in community with other cells.
Dawkins, The Blind Watchmaker, pp. 116–117.... there is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over. ... There is enough storage capacity in the DNA of a single lily seed or a single salamander sperm to store the Encyclopaedia Britannica 60 times over. Some species of the unjustly called ‘primitive’ amoebas have as much information in their DNA as 1,000 Encyclopaedia Britannicas.
Why is DNA (and not RNA) a stable storage form for genetic information? http://biochemistryrevisited.blogspot.com.br/2008/01/why-is-dna-and-not-rna-stable-storage.html
Dazzling design in miniature: DNA information storage, by Werner Gitt
The cells of the human body can produce at least 100,000 different types of proteins, all with a unique function. The information to make each of these complicated molecular machines is stored on the well-known molecule, DNA. We think that we have done very well with human technology, packing information very densely on to computer hard drives, chips and CD-ROM disks. However, these all store information on the surface, whereas DNA stores it in three dimensions. It is by far the densest information storage mechanism known in the universe. Let's look at the amount of information that could be contained in a pinhead volume of DNA. If all this information were written into paperback books, it would make a pile of such books 500 times higher than from here to the moon! The design of such an incredible system of information storage indicates a vastly intelligent Designer. In addition, there is the information itself, which is stored on DNA, and transmitted from generation to generation of living things. There are no laws of science that support the idea that life, with all its information, could have come from non-living chemicals. On the contrary, we know from the laws of science, particularly in my own area of expertise, that messages (such as those that we find in all living things) always point back to an intelligent message sender. When we look at living things in the light of DNA, Genesis creation makes real sense of the scientific evidence.
90 GB of data stored in 1 g of bacteria. DECEMBER 13, 2010. While current electronic data storage methods approach their limits in density, the team achieved unprecedented results with a colony of E. coli. Their technique allows the equivalent of the United States Declaration of Independence to be stored in the DNA of eighteen bacterial cells. Given that there are approximately ten million cells in one gram of biological material, the potential for data storage is huge. Furthermore, data can be encrypted using the natural process of site-specific genetic recombination: information is scrambled by recombinase genes, whose actions are controlled by a transcription factor. 1
‘The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.’ Carl Sagan, “Life” in Encyclopedia Britannica: Macropaedia (1974 ed.), pp. 893–894
The 10^12 bits of information number for a bacterium is derived from entropic considerations, which is, due to the tightly integrated relationship between information and entropy, considered the most accurate measure of the transcendent information present in a ‘simple’ life form. For calculations please see the following site: Molecular Biophysics – Information theory. Relation between information and entropy 2
The greatest known density of information is that in the DNA of living cells. The diameter of this chemical storage medium is d = 2 nm, and the spiral increment of the helix is h = 3.4 nm (1 nm = 10^-9 m = 10^-6 mm). The volume of one winding of this cylinder is V = h · d^2 · π/4:

V = 3.4 · 10^-6 mm · (2 · 10^-6 mm)^2 · π/4 = 10.68 · 10^-18 mm^3 per winding.

There are 10 chemical letters (nucleotides) in each winding of the double spiral (= 0.34 · 10^-9 m/letter), giving a statistical information density of:

ρ = 10 letters / (10.68 · 10^-18 mm^3) = 0.94 · 10^18 letters per mm^3.

This packing density is so inconceivably great that we need illustrative comparisons.

First: What is the amount of information contained in a pinhead of DNA? How many paperback books can be stored in this volume? Example: the paperback Did God Use Evolution? has the following data: thickness = 12 mm, 160 pages, L_B = 250,000 letters/book.

Volume of a pinhead of 2 mm diameter (r = 1 mm): V_P = (4/3)πr^3 = 4.19 mm^3
Letters stored in the volume of 1 pinhead: L_P = V_P · ρ = 4.19 mm^3 · (0.94 · 10^18 letters/mm^3) = 3.94 · 10^18 letters
Books stored in the volume of 1 pinhead: n = L_P / L_B = 3.94 · 10^18 letters / (250,000 letters/book) = 15.76 · 10^12 books
Height of the pile of books: h = 15.76 · 10^12 books · 12 mm/book = 189.1 · 10^12 mm = 189.1 · 10^6 km
Distance to the moon: M = 384,000 km
How many times the distance to the moon is this? m = h/M = 189.1 · 10^6 km / 384,000 km = 492.5 times

Secondly: The human genome has 3 · 10^9 letters (nucleotides); in body cells there are 6 · 10^9 letters.
Length of the genome: L_G = (0.34 · 10^-9 m/letter) · 3 · 10^9 letters = 1.02 m
Volume of the human genome: V_G = (3 · 10^9 letters) / (0.94 · 10^18 letters/mm^3) = 3.19 · 10^-9 mm^3
Volume of a pinhead of 2 mm diameter: V = (4/3)πr^3 = 4.19 mm^3
Human genomes contained in 1 pinhead: k = 4.19 mm^3 / (3.19 · 10^-9 mm^3) = 1.313 · 10^9
These are the genomes of more than a thousand million people, about one fifth of the population of the world.
Thirdly: A huge storage density is achieved, many times greater than can be attained by modern computers. To grasp the storage density of this material, imagine taking the material from the head of a pin with a diameter of 2 mm and stretching it out into a wire with the same diameter as a DNA molecule. How long would this wire be?
Diameter of the DNA molecule: d = 2 nm = 2 · 10^-6 mm (radius r = 10^-6 mm)
Stretching the material of the pinhead into a wire of this diameter gives a length l = V_P / (πr^2) = 4.19 mm^3 / (π · 10^-12 mm^2) = 1.33 · 10^12 mm = 1.33 · 10^6 km, more than 30 times the circumference of the Earth at the equator (about 40,000 km).
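Gitt's arithmetic can be reproduced directly (a sketch using his input figures; any rounding differences against his quoted values are mine):

import math

# Reproducing Gitt's arithmetic above (his input figures, my transcription).
helix_pitch_mm = 3.4e-6            # 3.4 nm per winding, in mm
helix_diam_mm  = 2e-6              # 2 nm diameter, in mm

vol_per_winding = helix_pitch_mm * helix_diam_mm**2 * math.pi / 4
density = 10 / vol_per_winding     # 10 nucleotide "letters" per winding
print(f"{density:.2e} letters per mm^3")            # ~0.94e18

pinhead_vol = 4/3 * math.pi * 1.0**3                # pinhead, r = 1 mm
letters_in_pinhead = pinhead_vol * density          # ~3.94e18 letters

books = letters_in_pinhead / 250_000                # 250,000 letters per paperback
pile_height_km = books * 12e-6                      # 12 mm per book, in km
print(f"{pile_height_km / 384_000:.0f} times the distance to the moon")
# -> ~490, matching Gitt's 492.5

genomes = pinhead_vol / (3e9 / density)             # 3e9 letters per genome
print(f"{genomes:.2e} human genomes in one pinhead")   # ~1.3e9

# The same pinhead volume drawn into a DNA-thick wire (r = 1e-6 mm):
wire_len_km = pinhead_vol / (math.pi * (1e-6)**2) / 1e6
print(f"{wire_len_km / 40_000:.0f} times around the equator")   # ~33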
These comparisons illustrate in a breath-taking way the brilliant storage concepts we are dealing with here, as well as the economic use of material and miniaturisation. The highest known (statistical) information density is obtained in living cells, exceeding by far the best achievements of highly integrated storage densities in computer systems.
Uncertainty quantification of a primordial ancestor with a minimal proteome emerging through unguided, natural, random events
1. The more statistically improbable something is, the less it makes sense to believe that it just happened by blind chance.
2. Statistically, it is practically impossible that the primordial genome, proteome, and metabolome of the first living cell arose by chance.
3. Furthermore, we see in biochemistry purposeful design.
4. Therefore, an intelligent Designer is by far the best explanation of origins.
Self-replication had to emerge and be implemented first, which raises an unbridgeable problem: DNA replication is irreducibly complex. Evolution cannot be the driving force that built the DNA replication machinery, because evolution depends on cell replication through the very mechanism we are trying to explain. It takes proteins to make DNA replication happen, but it takes the DNA replication process to make proteins. That is a catch-22 situation.
Chance of intelligence to set up life: 100%. We KNOW by repeated experience that intelligence produces all of the following:
• factory portals (membrane proteins)
• factory compartments (organelles)
• a library index (chromosomes, and the gene regulatory network)
• molecular computers, hardware (DNA)
• software, a language using signs and codes like the alphabet, an instructional blueprint (the genetic code and over a dozen epigenetic codes)
• information retrieval (RNA polymerase)
• transmission (messenger RNA)
• translation (ribosome)
• signaling (hormones)
• complex machines (proteins)
• taxis (dynein, kinesin, transport vesicles)
• molecular highways (tubulins)
• tagging programs (each protein has a tag, an amino acid sequence informing the molecular transport machinery where to transport it)
• factory assembly lines (fatty acid synthase)
• error check and repair systems (exonucleolytic proofreading)
• recycling methods (endocytic recycling)
• waste grinders and management (proteasome garbage grinders)
• power generating plants (mitochondria)
• power turbines (ATP synthase)
• electric circuits (the metabolic network)
• computers (neurons)
• computer networks (the brain)
all with specific purposes.
Chance of unguided random natural events producing just a minimal functional proteome, not considering all the other essentials required to get a first living self-replicating cell, is:
Let's suppose we have fully operational raw materials, and the genetic language upon which to store genetic information. Only now can we ask: where did the information come from to make the first living organism? Various attempts have been made to lower the minimal information content needed to produce a fully working operational cell. Often, Mycoplasma is mentioned as a reference for the threshold between the living and the non-living. Mycoplasma genitalium is held to be the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that only lives and survives within the body or cells of another organism (humans). As such, it IMPORTS many nutrients from the host organism. The host provides most of the nutrients such bacteria require, hence the bacteria do not need the genes for producing such compounds themselves. It therefore does not require the same complexity of biosynthetic pathways to manufacture all nutrients that a free-living bacterium needs.
A good candidate for the simplest free-living bacterium is Pelagibacter ubique. 13 It is known to be one of the smallest and simplest self-replicating, free-living cells. It has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1,308,759 base pairs, coding for 1,354 proteins. 14 They survive without any dependence on other life forms. Incidentally, these are also the most "successful" organisms on Earth: they make up about 25% of all microbial cells. If a chain could link up, what is the probability that the code letters might by chance be in some order which would be a usable gene, usable somewhere, anywhere, in some potentially living thing? If we take a model size of 1,200,000 base pairs, the chance of getting the sequence randomly would be 4^1,200,000, or about 10^722,000. This probability is hard to imagine, but an illustration may help.
Imagine covering the whole of the USA with small coins, edge to edge. Now imagine piling other coins on each of these millions of coins. Now imagine continuing to pile coins on each coin until reaching the moon, about 400,000 km away. Suppose that within this vast mountain of coins there is one coin different from all the others; the statistical chance of finding that one coin is about 1 in 10^55.
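The exponent quoted above is easy to verify with logarithms (my own check, using the model size given in the text):

import math

# Checking the probability arithmetic above: 4 possible bases per
# position, 1,200,000 positions.
base_pairs = 1_200_000
exponent = base_pairs * math.log10(4)
print(f"4^{base_pairs:,} = 10^{exponent:,.0f}")
# -> 10^722,472, i.e. roughly the 10^722,000 quoted above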
Furthermore, what good would functional proteins be if they were not transported to the right site in the cell, inserted in the right place, and interconnected to start the fabrication of the chemical compounds used in the cell? It is clear that life had to start based on fully operating cell factories, able to self-replicate, adapt, produce energy, and regulate their sophisticated molecular machinery.
Chemist Wilhelm Huck, professor at Radboud University Nijmegen: "A working cell is more than the sum of its parts. A functioning cell must be entirely correct at once, in all its complexity."
To make proteins, and to direct and insert them at the right place where they are needed, at least 25 unimaginably complex biosynthesis and production-line-like manufacturing steps are required. Each step requires extremely complex molecular machines composed of numerous subunits and co-factors, which themselves require the very processing procedure listed below; this makes their origin an irreducible catch-22 problem:
THE GENE REGULATORY NETWORK "SELECTS" WHEN WHICH GENE IS TO BE EXPRESSED
INITIATION OF TRANSCRIPTION BY RNA POLYMERASE
TRANSCRIPTION ERROR CHECKING BY CORE POLYMERASE AND TRANSCRIPTION FACTORS
RNA CAPPING
ELONGATION
SPLICING
CLEAVAGE
POLYADENYLATION AND TERMINATION
EXPORT FROM THE NUCLEUS TO THE CYTOSOL
INITIATION OF PROTEIN SYNTHESIS (TRANSLATION) IN THE RIBOSOME
COMPLETION OF PROTEIN SYNTHESIS
PROTEIN FOLDING
MATURATION
RIBOSOME QUALITY CONTROL
PROTEIN TARGETING TO THE RIGHT CELLULAR COMPARTMENT
ENGAGING THE TARGETING MACHINERY BY THE PROTEIN SIGNAL SEQUENCE
CALLING CARGO PROTEINS TO LOAD/UNLOAD THE PROTEINS TO BE TRANSPORTED
ASSEMBLY/DISASSEMBLY OF THE TRANSLOCATION MACHINERY
VARIOUS CHECKPOINTS FOR QUALITY CONTROL AND REJECTION OF INCORRECT CARGOS
TRANSLOCATION TO THE ENDOPLASMIC RETICULUM
POSTTRANSLATIONAL PROCESSING OF TRANSMEMBRANE AND WATER-SOLUBLE PROTEINS IN THE ENDOPLASMIC RETICULUM
GLYCOSYLATION OF MEMBRANE PROTEINS IN THE ER (ENDOPLASMIC RETICULUM): ADDITION OF OLIGOSACCHARIDES
EXPORT OF INCORRECTLY FOLDED PROTEINS FROM THE ER AND THEIR DEGRADATION IN THE CYTOSOL
TRANSPORT OF THE PROTEIN CARGO TO ITS END DESTINATIONS AND ASSEMBLY
Paul Davies, the origin of life, page 59 Proteins are a godsend to DNA because they can be used both as building material, to make things like cell walls, and as enzymes, to supervise and accelerate chemical reactions. Enzymes are chemical catalysts that ‘oil the wheels’ of the biological machine. Without them metabolism would grind to a halt, and there would be no energy available for the business of life. Not surprisingly, therefore, a large part of the DNA databank is used for storing instructions on how to make proteins. Here is how those instructions get implemented. Remember that proteins are long chain molecules made from lots of amino acids strung together to form polypeptides. Each different sequence of amino acids yields a different protein. The DNA has a wish list of all the proteins the organism needs. This information is stored by recording the particular amino acid sequence that specifies each and every protein on the list. It does so using DNA's four-letter alphabet A, G, C and T; the exact sequence of letters spells out the amino acid recipe, protein by protein – typically a few hundred base pairs for each. To turn this dry list of amino acids into assembled, functioning proteins, DNA enlists the help of a closely related molecule known as RNA (for ribonucleic acid). RNA is also made from four bases, A, G, C and U. Here U stands for uracil; it is similar to T and serves the same purpose alphabetically. RNA comes in several varieties; the one of interest to us here is known as messenger RNA, or mRNA for short. Its job is to read off the protein recipes from DNA and convey them to tiny factories in the cell where the proteins are made. These mini-factories are called ribosomes, and are complicated machines built from RNA and proteins of various sorts. Ribosomes come with a slot into which the mRNA feeds, after the fashion of a punched tape of the sort used by old-fashioned computers.
The mRNA ‘tape’ chugs through the ribosome, which then carries out its instructions bit by bit, hooking amino acids together, one by one in the specified sequence, until an entire protein has been constructed. Earthlife makes proteins from 20 different varieties of amino acids, and the mRNA records which one comes after which so the ribosome can put them together in the right order. It is quite fascinating to see how the ribosome goes about joining the amino acids up into a chain. Naturally the amino acids don't obligingly come along in the right order, ready to be hooked on to the end of the chain. So how does the ribosome ensure that the mRNA gets its specified amino acid at each step? The answer lies with another set of RNA molecules, called transfer RNA, or tRNA for short. Each particular tRNA molecule brings along to the ribosome factory one and only one sort of amino acid stuck to its end, to present it to the production line. At each step in the assembly of the protein, the trick is to get the right tRNA, with the right amino acid attached, to give up its cargo and transfer it to the end of the growing protein chain, while rejecting any of the remaining 19 alternatives that may be on offer. This is accomplished as follows. The mRNA (remember, this carries the instructions) exposes a bit of information (i.e. a set of ‘letters’) that says ‘add amino acid such-and-such now’. The instructions are implemented correctly because only the targeted tRNA molecule, carrying the designated amino acid, will recognize the exposed bit of mRNA from its shape and chemical properties, and bind to it. The other tRNA molecules – the ones that are carrying the ‘wrong’ amino acids – won't fit properly into the binding site. Having thus seduced the right tRNA molecule to berth at the production line, the next step is for the ribosome to persuade the newly arrived amino acid cargo to attach itself to the end of the protein chain. The chain is waiting in the ribosome, dangling from the end of the previously selected tRNA molecule.
At this point the latter molecule lets go and quits the ribosome, passing the entire chain on to the newly arrived tRNA, where it links on to the amino acid it has brought with it. The chain thus grows by adding amino acids to the head rather than the tail. If you didn't follow all of this on the first read through, don't worry, it isn't essential for understanding what follows. I just thought it was sufficiently amazing to be worth relating in some detail. When the protein synthesis is complete, the ribosome receives a ‘stop’ signal from the mRNA ‘tape’ and the chain cuts loose. The protein is now assembled, but it doesn't remain strung out like a snake. Instead it rolls up into a knobbly ball, rather like a piece of elastic that's stretched and allowed to snap back. This folding process may take some seconds, and it is still something of a mystery as to how the protein attains the appropriate final shape. To work properly, the three-dimensional form of the protein has to be correct, with the bumps and cavities in all the right places, and the right atoms facing outwards. Ultimately it is the particular amino acid sequence along the chain that determines the final three-dimensional conformation, and therefore the physical and chemical properties, of the protein. This whole remarkable sequence of events is repeated in thousands of ribosomes scattered throughout the cell, producing tens of thousands of different proteins. It is worth repeating that, in spite of the appearance of purpose, the participating molecules are completely mindless. Collectively they may display systematic cooperation, as if to a plan, but individually they just career about. The molecular traffic within the cell is essentially chaotic, driven by chemical attraction and repulsion and continually agitated by thermal energy. Yet out of this blind chaos order emerges spontaneously.
Can an alphabet arise without an intelligent mind inventing it? What good is an alphabet if there is no medium, like ink and paper, on which to write a message; no one to actually write a message; and no one to read and understand it, upon common agreement between sender and receiver about the meaning of words and language?
Can the DNA code arise without an intelligent mind inventing it? What good is the DNA code if there is no medium, like the DNA double helix, to encode the information required to make proteins? What good are the DNA code (the software) and the DNA double helix (the hardware) if there is no machinery to read, transcribe, and translate the message stored in DNA to make proteins? Must not all of this come into existence at once, since if one part is missing, the others have no function?
http://www.ncbi.nlm.nih.gov/pubmed/8335231 The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by an alphabet, a grammar, a collection of punctuation marks and regulatory sites, and semantics.
In the cell, things are however far more complex. There is a whole chain of events that must all be fully operational, with the machinery in place, in order to make the final product, proteins. That chain is constituted by INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION, AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING. In order for evolution to work, this robot-like machinery and assembly line must already be in place and fully operational. So the origin of the machines cannot be explained through evolution. All that is left is random chemical reactions, or design. Choose which explanation better fits the evidence.
Targeting of proteins to appropriate subcellular compartments is a crucial process in all living cells. Secretory and membrane proteins usually contain an amino-terminal signal peptide, which is recognized by the signal recognition particle (SRP) when nascent polypeptide chains emerge from the ribosome. 4
The signal recognition particle (SRP) and its receptor comprise a universally conserved and essential cellular machinery that couples the synthesis of nascent proteins to their proper membrane localization. The SRP and SRP receptor interact with the cargo protein and the target membrane in this fundamental cellular pathway.
Proper localization of proteins to their correct cellular destinations is essential for sustaining the order and organization in all cells. Roughly 30% of the proteome is initially destined for the eukaryotic endoplasmic reticulum (ER), or the bacterial plasma membrane. The majority of these proteins are delivered by the Signal Recognition Particle (SRP), a universally conserved protein targeting machine (1–4).
The cotranslational SRP pathway minimizes the aggregation or misfolding of nascent proteins before they arrive at their cellular destination, and is therefore highly advantageous in the targeted delivery of membrane and secretory proteins. Despite the divergence of targeting machinery, the SRP pathway illustrates several key features that are general to almost all protein targeting processes:
(i) the cellular destination of a protein is dictated by its ‘signal sequence’, which allows it to engage a specific targeting machinery; (ii) targeting factors cycle between the cytosol and membrane, acting catalytically to bring cargo proteins to translocation sites at the target membrane; and (iii) targeting requires the accurate coordination of multiple dynamic events including cargo loading/unloading, targeting complex assembly/disassembly, and the productive handover of cargo from the targeting to translocation machinery.
Question: How could and would the protein find its way to the right destination without the signal sequence being just right, right from the beginning?
Not surprisingly, such molecular choreography requires energy input, which is often harnessed by GTPase or ATPase modules in the targeting machinery.
Cargo Recognition by the SRP
Timely recognition of signal sequences by the SRP is essential for proper initiation of cotranslational protein targeting. Signal sequences that engage the SRP are characterized, in general, by a core of 8–12 hydrophobic amino acids.
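As a rough illustration of how such a signal could be recognized computationally, here is a toy scan for a run of hydrophobic residues (my own sketch; the hydrophobic residue set and the example sequences are assumptions for illustration, not data from the papers cited):

# Toy sketch (my own, illustrative only): scanning a nascent chain for
# the kind of 8-12 residue hydrophobic core described above. The residue
# set counted as "hydrophobic" and the example sequences are assumptions.

HYDROPHOBIC = set("AILMFVWC")   # a common, simplified hydrophobic set

def has_hydrophobic_core(seq, min_len=8):
    """True if seq contains a run of >= min_len hydrophobic residues."""
    run = 0
    for aa in seq:
        run = run + 1 if aa in HYDROPHOBIC else 0
        if run >= min_len:
            return True
    return False

signal_like    = "MKKLLLLLLVVAFSA"   # invented example with a long apolar run
cytosolic_like = "MSTNPKPQRKTKRNT"   # invented example, no such run
print(has_hydrophobic_core(signal_like))     # True  -> SRP would engage
print(has_hydrophobic_core(cytosolic_like))  # False -> SRP passes over it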
The multiple conformational rearrangements in the SRP•FtsY GTPase complex provide a series of additional checkpoints to further reject the incorrect cargos. These include:
(i) formation of the early intermediate, which is stabilized over 100-fold by the correct, but not incorrect cargos (Figure 3B, red arrow b); (ii) rearrangement of the early intermediate to the closed complex, which is ~10-fold faster with the correct than the incorrect cargos (Figure 3B, red arrow c); and (iii) GTP hydrolysis by the SRP•FtsY complex, which is delayed ~8-fold by the correct cargo to give the targeting complex a sufficient time window to identify the membrane translocon.
In contrast, GTP hydrolysis remains rapid with the incorrect cargo (t1/2 < 1 s), which could abort the targeting of incorrect cargos (Figure 3B, arrow d). A mathematical simulation based on the kinetic and thermodynamic parameters of each step strongly suggests that all these fidelity checkpoints are required to reproduce the experimentally observed pattern of substrate selection by the SRP (40).
These results support a novel model in which the fidelity of protein targeting by the SRP is achieved through the cumulative effect of multiple checkpoints, by using a combination of mechanisms including
cargo binding, induced SRP–SR assembly, and kinetic proofreading through GTP hydrolysis. Additional discrimination could be provided by the SecYEG machinery, which further rejects the incorrect cargos (102). Analogous principles have been demonstrated in the DNA and RNA polymerases (103, 104), the spliceosome (105), tRNA synthetases (106) and tRNA selection by the ribosome (107), and may represent a general principle for complex biological pathways that need to distinguish between the correct and incorrect substrates based on minor differences.
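The cumulative-checkpoint idea can be illustrated with the fold-factors quoted above (my own arithmetic; treating the steps as independent multipliers is a simplification of the kinetic picture):

# Rough sketch (my arithmetic) of the cumulative-checkpoint idea:
# each step contributes a modest preference for correct cargo, and
# the preferences multiply. Fold-factors are those quoted in the text.

fold_factors = {
    "early-intermediate stability": 100,   # ">100-fold" stabilization
    "early->closed rearrangement":   10,   # "~10-fold faster"
    "delayed GTP hydrolysis":         8,   # "~8-fold" delay
}

combined = 1
for step, fold in fold_factors.items():
    combined *= fold
    print(f"after {step:30s}: {combined:>6,}x discrimination")
# -> three individually weak checkpoints combine into an
#    ~8,000-fold preference for the correct cargo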
The crowded ribosome exit site
Accumulating data now indicate that the ribosome exit site is a crowded environment where multiple protein biogenesis factors interact. As a newly synthesized protein emerges from the ribosomal exit tunnel, it interacts with a host of cellular factors that facilitate its folding, localization, maturation, and quality control. These include molecular chaperones.
Many proteins need to enter the ER for modification with sugars. This occurs at the same time that they are being synthesized by the ribosome. Translation begins with synthesis of a short signal peptide sequence. A signal recognition particle, a protein complex, binds to the signal peptide while translation continues. The SRP then binds to its receptor in the ER membrane, anchoring the ribosome. The ribosome binds its receptor, and the signal peptide meets the protein translocator. Translation proceeds, and the protein passes through the translocator. The signal peptidase cleaves the signal peptide, leaving the new protein molecule in the lumen of the endoplasmic reticulum.
A non-mechanical example of irreducible complexity can be seen in the system that targets proteins for delivery to subcellular compartments. In order to find their way to the compartments where they are needed to perform specialized tasks, certain proteins contain a special amino acid sequence near the beginning called a 'signal sequence.' As the proteins are being synthesized by ribosomes, a complex molecular assemblage called the signal recognition particle (SRP) binds to the signal sequence. This causes synthesis of the protein to halt temporarily. During the pause in protein synthesis the SRP is bound by the transmembrane SRP receptor, which causes protein synthesis to resume and which allows passage of the protein into the interior of the endoplasmic reticulum (ER). As the protein passes into the ER the signal sequence is cut off. For many proteins the ER is just a way station on their travels to their final destinations (Figure 10.3).
Proteins which will end up in a lysosome are enzymatically "tagged" with a carbohydrate residue called mannose-6-phosphate while still in the ER. An area of the ER membrane then begins to concentrate several proteins; one protein, clathrin, forms a sort of geodesic dome called a coated vesicle which buds off from the ER. In the dome there is also a receptor protein which binds to both the clathrin and to the mannose-6-phosphate group of the protein which is being transported. The coated vesicle then leaves the ER, travels through the cytoplasm, and binds to the lysosome through another specific receptor protein. Finally, in a maneuver involving several more proteins, the vesicle fuses with the lysosome and the protein arrives at its destination. During its travels our protein interacted with dozens of macromolecules to achieve one purpose: its arrival in the lysosome.
Virtually all components of the transport system are necessary for the system to operate, and therefore the system is irreducible. And since all of the components of the system are comprised of single or several molecules, there are no black boxes to invoke. The consequences of even a single gap in the transport chain can be seen in the hereditary defect known as I-cell disease. It results from a deficiency of the enzyme that places the mannose-6-phosphate on proteins to be targeted to the lysosomes. I-cell disease is characterized by progressive retardation, skeletal deformities, and early death.
Transport by vesicles: when proteins are made on the rough endoplasmic reticulum (RER), they get loaded into the Golgi apparatus. They are then sorted, modified, and packaged in vesicles made from the budding-off of the Golgi membrane, and discharged. Sorting signals direct each protein to its organelle. The signal is usually a stretch of amino acid sequence about 15-60 amino acids long. There are at least three principles that characterize all vesicle-mediated transport within cells:
i. The formation of membrane vesicles from a larger membrane occurs through the assistance of a protein coat such as clathrin, which engulfs the protein because an adapter protein such as adaptin binds both to the coat and to the cargo protein, bringing the two close together. 5 The adaptin traps the cargo protein by binding with its receptors. After the assembly particles bind to clathrin, they assemble into a basket-like network on the cytosolic surface of the membrane to shape it into a vesicle. The final budding-off requires a GTP-binding protein called dynamin.
ii. The process is facilitated by a number of GTP-binding proteins (e.g., dynamin) that assemble a ring around the neck of a vesicle and hydrolyze the phosphate group of GTP to GDP until the vesicle pinches off. In other words, GTP is one of the main sources of cellular energy for vesicle movement and fusion.
iii. After a transport vesicle buds off from the membrane, it is actively transported by motor proteins that move along cytoskeleton fibers to its destination. The vesicle then fuses with a target membrane and unloads its cargo (the protein). But in order to fuse a vesicle with the membrane of another compartment, both require complementary proteins, in this case the soluble N-ethylmaleimide-sensitive-factor attachment protein receptor, or SNARE, present in the membrane: one for the vesicle (vesicular or v-SNARE) and one for the target membrane (t-SNARE). (A toy sketch of these three principles follows below.)
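The three principles lend themselves to a toy state-machine illustration. The sketch below is purely schematic, with invented names and pass/fail rules standing in for real biochemistry; it only encodes the order of events and the requirement that v-SNARE and t-SNARE must match:

[code]
# A toy state machine for the three principles above. All names and
# pass/fail rules here are invented for illustration; only the order of
# events and the v-SNARE/t-SNARE matching requirement reflect the text.

def bud_vesicle(cargo, receptor_bound, gtp_available):
    # i. clathrin coat + adaptin capture the cargo via its receptor
    if not receptor_bound:
        return None
    # ii. dynamin assembles at the vesicle neck and, powered by
    #     GTP -> GDP hydrolysis, pinches the vesicle off
    if not gtp_available:
        return None
    return {"cargo": cargo, "v_snare": "v-" + cargo}

def fuse(vesicle, target_t_snare):
    # iii. after motor transport along the cytoskeleton, fusion requires
    #      complementary v-SNARE / t-SNARE pairing
    return vesicle is not None and vesicle["v_snare"] == "v-" + target_t_snare

v = bud_vesicle("lysosomal-enzyme", receptor_bound=True, gtp_available=True)
print(fuse(v, "lysosomal-enzyme"))  # True: matching SNAREs, cargo delivered
print(fuse(v, "plasma-membrane"))   # False: wrong compartment, no fusion
[/code]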
From the book Lateral Gene Transfer in Evolution, page 6: To control and process DNA as an information and storage apparatus, an organism REQUIRES AT LEAST a minimal set of DNA polymerase, DNA ligase, DNA helicase, DNA primase, DNA topoisomerase, and a DNA-dependent RNA polymerase.
The origin of the ribosomal protein synthesis network is considered to be the singular defining event in the origin of cells and the Tree of Life 4
* Each cell contains around 10 million ribosomes, i.e. 7000 ribosomes are produced in the nucleolus each minute.
* Each ribosome contains around 80 proteins, i.e. more than 0.5 million ribosomal proteins are synthesized in the cytoplasm per minute.
* The nuclear membrane contains approximately 5000 pores. Thus, more than 100 ribosomal proteins are imported from the cytoplasm to the nucleus per pore and minute. At the same time, 3 ribosomal subunits are exported from the nucleus to the cytoplasm per pore and minute.
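A quick sanity check on these figures; all inputs come from the bullets above, nothing new is assumed:

[code]
# Sanity check on the figures above; all inputs come from the bullets.

ribosomes_per_minute = 7000     # produced in the nucleolus per minute
proteins_per_ribosome = 80
nuclear_pores = 5000

r_proteins_per_minute = ribosomes_per_minute * proteins_per_ribosome
print(r_proteins_per_minute)    # 560000 -> "more than 0.5 million"

print(r_proteins_per_minute / nuclear_pores)  # 112.0 -> "more than 100 per pore and minute"
[/code]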
The evidence from the ribosome
a. "Spontaneous formation of the unlocked state of the ribosome is a multi-step process."
b. The L1 stalks of the ribosome bend, rotate and uncouple, undergoing at least four distinct stalk positions while each tRNA ratchets through the assembly tunnel. At one stage, for instance, "the L1 stalk domain closes and the 30S subunit undergoes a counterclockwise, ratchet-like rotation" with respect to another domain of the factory. This is not simple. "Subunit ratcheting is a complex set of motions that entails the remodeling of numerous bridging contacts found at the subunit interface that are involved in substrate positioning."
c. The enzyme machine that translates a cell's DNA code into the proteins of life is nothing if not an editorial perfectionist… the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… To their further surprise, the ribosome lets go of error-laden proteins 10,000 times faster than it would normally release error-free proteins, a rate of destruction that Green says is "shocking" and reveals just how much of a stickler the ribosome is about high-fidelity protein synthesis. (Rachel Green, a Howard Hughes Medical Institute investigator and professor of molecular biology and genetics: The Ribosome: Perfectionist Protein-maker Trashes Errors, 2009)
4. Interactions between molecules are not simply matters of matching electrons with protons. Instead, large structural molecules form machines with moving parts. These parts experience the same kinds of forces and motions that we experience at the macro level: stretching, bending, leverage, spring tension, ratcheting, rotation and translocation. The same units of force and energy are appropriate for both, just at vastly different scales.
5. Every day, essays about molecular machines give more and more biomolecular detail, many without mentioning evolution or explaining how these machines could have evolved. Ribosomes, however, are life-essential and a prerequisite for making the proteins that replicate DNA; hence, the ribosome had to emerge before evolution could start. Its emergence therefore cannot be explained by evolution.
6. These complexities are best explained by the work of an intelligent agency.
7. Hence, most probably, God exists.
Comparative genomic reconstructions of the gene repertoire of LUCA(S) point to a complex translation system that includes at least 18 of the 20 aminoacyl-tRNA synthetases (aaRS), several translation factors, at least 40 ribosomal proteins, and several enzymes involved in rRNA and tRNA modification. It appears that the core of the translation system was already fully shaped in LUCA(S) (Anantharaman, et al., 2002).
Two massive polymolecular units that combine to make the ribosome are needed for translation. The small unit is on the left and the large unit is on the right. Combined they represent 4 large RNA molecules with 70 proteins attached to the frame. The unit is a masterpiece of precision engineering… not a random association of stuff. 6
The synthesis of proteins is guided by information carried by mRNA molecules. To maintain the correct reading frame and to ensure accuracy (about 1 mistake every 10,000 amino acids), protein synthesis is performed in the ribosome, a complex catalytic machine made from more than 50 different proteins (the ribosomal proteins) and several RNA molecules, the ribosomal RNAs (rRNAs). A typical eukaryotic cell contains millions of ribosomes in its cytoplasm
The large and small ribosome subunits are assembled at the nucleolus, where newly transcribed and modified rRNAs associate with the ribosomal proteins that have been transported into the nucleus after their synthesis in the cytoplasm. These two ribosomal subunits are then exported to the cytoplasm, where they join together to synthesize proteins. Eukaryotic and bacterial ribosomes have similar structures and functions, being composed of one large and one small subunit that fit together to form a complete ribosome with a mass of several million daltons
There are millions of protein factories in every cell. Surprise, they’re not all the same 2
The plant that built your computer isn't churning out cars and toys as well. But many researchers think cells' crucial protein factories, organelles known as ribosomes, are interchangeable, each one able to make any of the body's proteins. Now, a provocative study suggests that some ribosomes, like modern factories, specialize in manufacturing only certain products. Such tailored ribosomes could provide a cell with another way to control which proteins it generates. They could also help explain the puzzling symptoms of certain diseases, which might arise when particular ribosomes are defective.
Biologists have long debated whether ribosomes specialize, and some remain unconvinced by the new work. But other researchers say they are sold on the finding, which relied on sophisticated analytical techniques. "This is really an important step in redefining how we think about this central player in molecular biology," says Jonathan Dinman, a molecular biologist at the University of Maryland in College Park.
A mammalian cell may harbor as many as 10 million ribosomes, and it can devote up to 60% of its energy to constructing them from RNA and 80 different types of proteins. Although ribosomes are costly, they are essential for translating the genetic code, carried in messenger RNA (mRNA) molecules, into all the proteins the cell needs. "Life evolved around the ribosome," Dinman says.
The standard view has been that a ribosome doesn't play favorites with mRNAs—and therefore can synthesize every protein variety. But for decades, some researchers have reported hints of customized ribosomes. For example, molecular and developmental biologist Maria Barna of Stanford University in Palo Alto, California, and colleagues reported in 2011 that mice with too little of one ribosome protein have short tails, sprout extra ribs, and display other anatomical defects. That pattern of abnormalities suggested that the protein shortage had crippled ribosomes specialized for manufacturing proteins key to embryonic development.
Definitive evidence for such differences has been elusive, however. "It's been a really hard field to make progress in," says structural and systems biologist Jamie Cate of the University of California (UC), Berkeley. For one thing, he says, measuring the concentrations of proteins in naturally occurring ribosomes has been difficult.
In their latest study, published online last week in Molecular Cell, Barna and her team determined the abundances of various ribosome proteins with a method known as selected reaction monitoring, which depends on a type of mass spectrometry, a technique for sorting molecules by their weight. When the researchers analyzed 15 ribosomal proteins in mouse embryonic stem cells, they found that nine of the proteins were equally common in all ribosomes. However, four were absent from 30% to 40% of the organelles, suggesting that those ribosomes were distinctive. Among 76 ribosome proteins the scientists measured with another mass spectrometry-based method, seven varied enough to indicate ribosome specialization.
Barna and colleagues then asked whether they could identify the proteins that the seemingly distinctive ribosomes made. A technique called ribosome profiling enabled them to pinpoint which mRNAs the organelles were reading—and thus determine their end products. The specialized ribosomes often concentrated on proteins that worked together to perform particular tasks. One type of ribosome built several proteins that control growth, for example. A second type churned out all the proteins that allow cells to use vitamin B12, an essential molecule for metabolism. That each ribosome focused on proteins crucial for a certain function took the team by surprise, Barna says. "I don't think any of us would have expected this."
Ribosome specialization could explain the symptoms of several rare diseases, known as ribosomopathies, in which the organelles are defective. In Diamond-Blackfan anemia, for instance, the bone marrow that generates new blood cells is faulty, but patients also often have birth defects such as a small head and misshapen or missing thumbs. These seemingly unconnected abnormalities might have a single cause, the researchers suggest, if the cells that spawn these different parts of the body during embryonic development carry the same specialized ribosomes.
Normal cells might be able to dial protein production up or down by adjusting the numbers of these specialized factories, providing "a new layer of control of gene expression," Barna says. Why cells need another mechanism for controlling gene activity isn't clear, says Cate, but it could help keep cells stable if their environment changes.
An overview of ribosomal structure and mRNA translation. mRNA translation is initiated with the binding of tRNAfmet to the P site (not shown). An incoming tRNA is delivered to the A site in complex with elongation factor (EF)-Tu–GTP. Correct codon–anticodon pairing activates the GTPase centre of the ribosome, which causes hydrolysis of GTP and release of the aminoacyl end of the tRNA from EF-Tu. Binding of tRNA also induces conformational changes in ribosomal (r)RNA that optimally orientates the peptidyl-tRNA and aminoacyl-tRNA for the peptidyl-transferase reaction to occur, which involves the transfer of the peptide chain onto the A-site tRNA. The ribosome must then shift in the 3′ mRNA direction so that it can decode the next mRNA codon. Translocation of the tRNAs and mRNA is facilitated by binding of the GTPase EF-G, which causes the deacylated tRNA at the P site to move to the E site and the peptidyl-tRNA at the A site to move to the P site upon GTP hydrolysis. The ribosome is then ready for the next round of elongation. The deacylated tRNA in the E site is released on binding of the next aminoacyl-tRNA to the A site. Elongation ends when a stop codon is reached, which initiates the termination reaction that releases the polypeptide 5
An overview of termination of translation. A stop codon in the mRNA A site (red hexagon) recruits either release factor-1 (RF1) or RF2 to mediate the hydrolysis and release of the peptide from the tRNA in the P site. This functions as a signal to recruit RF3–GDP, which induces the release of RF1/2. Exchange of GDP for GTP on RF3 and subsequent hydrolysis is thought to release RF3. The ribosome is left with mRNA and a deacylated tRNA in the P site. This complex is disassembled by the binding of ribosomal release factor (RRF) and the EF-G elongation factor 62. GTP hydrolysis causes the dissociation of the 50S ribosomal subunit, and initiation factor-3 (IF3) is required to dissociate the deacylated tRNA from the P site.
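To summarize the elongation and termination cycle just described, here is a deliberately minimal control-flow sketch; the codon table is truncated to three entries, and EF-Tu delivery, peptidyl transfer, EF-G translocation, and release-factor action are each collapsed into a single step, so this illustrates only the order of events, not the chemistry:

[code]
# Minimal control-flow sketch of the elongation/termination cycle
# described above. The codon table is truncated to three entries.

CODON_TABLE = {"AUG": "Met", "UUC": "Phe", "GGU": "Gly"}
STOP_CODONS = {"UAA", "UAG", "UGA"}  # recruit RF1/RF2 -> peptide release

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):  # the ribosome shifts 3 nt per EF-G step
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:
            # termination: RF1/2 hydrolyze the peptidyl-tRNA bond, RF3 then
            # releases RF1/2, and RRF/EF-G/IF3 recycle the ribosome
            return peptide
        # EF-Tu-GTP delivers the aa-tRNA; correct codon-anticodon pairing
        # triggers GTP hydrolysis, peptidyl transfer, then translocation
        peptide.append(CODON_TABLE.get(codon, "Xaa"))
    return peptide

print(translate("AUGUUCGGUUAA"))  # ['Met', 'Phe', 'Gly']
[/code]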
1. Eugene V. Koonin, The Logic of Chance: The Nature and Origin of Biological Evolution, page 228
2. http://www.sciencemag.org/news/2017/06/there-are-millions-protein-factories-every-cell-surprise-they-re-not-all-same
3. http://www.nobelprize.org/educational/medicine/dna/a/translation/ribosome_ass.html
4. https://www.sciencedirect.com/science/article/pii/S0040580918300789
5. https://sci-hub.tw/https://www.nature.com/articles/nrm2352
6. https://blueprintsforliving.com/cellular-ribosomes-the-origin-of-life/
Error detection and repair during the biogenesis & maturation of the ribosome, tRNAs, aminoacyl-tRNA synthetases, and translation: by chance, or design?
The control of the error rate and its effects in biological processes of information transmission is one of the key requirements for functional living cells. When talking about error checking and repair of translation in the cell, we have to consider several aspects. First, in order to have functional, error-free translation, the components involved in translation must be correctly synthesized in the cell. In translation, messenger RNA, transfer RNA, aminoacyl-tRNA synthetases, and the ribosome are involved. Most, if not all, are carefully error-checked: either repaired when errors are detected or, when misfolded, discarded through proteostasis. 13 Secondly, the process of translation is a multistep process, and error checking and repair occur all along the way, as follows:
Mistakes during DNA replication are on the order of ~1 in 10^8 and are kept to this extremely low level by a robust suite of error prevention, correction and repair mechanisms. 12 Protein synthesis offers the greatest opportunity for errors, with mistranslation events routinely occurring at a frequency of ~1 per 10,000 mRNA codons translated. The ribosome must select the correct aminoacyl-transfer RNAs (aa-tRNAs) from a large pool of near-cognate substrates fast enough to sustain an elongation rate of 10–20 amino acids per second. Proofreading and editing processes are used throughout protein synthesis to ensure a faithful translation of genetic information. The maturation of tRNAs and mRNAs is monitored, as is the identity of amino acids attached to tRNAs. Accuracy is further enhanced during the selection of aminoacyl-tRNAs on the ribosome and their base-pairing with mRNA.
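The quoted rates can be put side by side with a little arithmetic; the 400-amino-acid protein length below is an illustrative assumption, not a figure from the cited source:

[code]
# Rough arithmetic on the quoted rates. The 400-amino-acid protein
# length is an illustrative assumption, not a figure from the source.

replication_error = 1e-8  # ~1 error per 10^8 nucleotides copied
translation_error = 1e-4  # ~1 error per 10^4 mRNA codons translated

protein_length = 400
print(translation_error * protein_length)  # 0.04 -> ~1 mistranslated chain in 25

elongation_rate = 15  # aa/s, midpoint of the quoted 10-20 range
print(round(protein_length / elongation_rate))  # ~27 s to build one chain
[/code]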
Ribosome biogenesis: quality control mechanisms must be in place to survey nascent ribosomes and ensure their functionality.
Chiral checkpoints during protein biosynthesis: the ribosome acts as a "chiral checkpoint" by preferentially binding L-amino acids or L-aminoacyl-tRNAs, thereby excluding D-amino acids. 11
A-site proofreading: if a misaminoacylated tRNA is successfully delivered to the ribosome, additional proofreading occurs within the A site of the ribosome, based on aa-tRNA position and affinity. Ribosomal interactions with additional tRNA-specific sequences and modifications facilitate accurate selection of aa-tRNAs through kinetic discrimination during the initial selection stage and the subsequent proofreading stage.
mRNA translation is regulated by nearly 100 epigenetic tRNA modifications. The finer details of this sort of regulation have been shown to differ between prokaryotic and eukaryotic organisms. (LUCA, hello?! 10)
tRNA structure monitoring: export of defective or immature tRNAs is avoided by monitoring both the structure and function of tRNAs in the nucleus; only tRNAs with mature 5′ and 3′ ends are exported. 6
tRNA synthesis quality control: rapid tRNA decay (RTD) and other mechanisms that degrade hypomodified or mutated mature yeast tRNAs serve as a surveillance system to eliminate tRNA molecules that have incorrect nucleosides or conformations.
Aminoacyl-tRNA synthetase error minimization: preferential binding of the right amino acids, and selective editing and proofreading of near-cognate amino acids.
Aminoacyl-tRNA synthetase pre-transfer editing: pre-transfer editing has been described in both class I and class II aaRSs and takes place after aa-AMP synthesis but before the aminoacyl moiety is transferred to the tRNA. 9
Aminoacyl-tRNA synthetase post-transfer editing: post-transfer editing takes place after the transfer of the amino acid to the tRNA and involves the hydrolysis of the ester bond, in a domain separate from the active site.
Aminoacyl-tRNA synthetase editing factors: another important component of the translation quality control machinery is the trans-editing family, free-standing proteins that are not synthetases but are in some cases homologous to the editing domains of such enzymes. The role of these trans-editing factors is to clear the misacylated tRNA before it reaches the ribosome, acting as additional checkpoints to ensure fidelity. Aminoacyl-tRNA synthetases (aaRSs) selectively hydrolyze (a chemical reaction in which a molecule of water ruptures one or more chemical bonds) incorrectly activated non-cognate amino acids and/or misaminoacylated tRNAs. 12 In addition to misactivation of genetically encoded proteinogenic amino acids (GPAs), cells also encounter non-proteinogenic amino acids (NPAs) environmentally or as metabolic by-products, and must discriminate against these substrates to prevent their aberrant use in protein synthesis. (A schematic pipeline sketch of these checkpoints follows below.)
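The layered structure of these checkpoints can be sketched as a pipeline. In the toy code below, the pass/fail rules and the "near-" naming convention are invented placeholders; only the ordering of checkpoints (binding selectivity, then pre-transfer editing, then post-transfer and trans-editing) follows the description above:

[code]
# Schematic pipeline of the aaRS checkpoints listed above. The pass/fail
# rules and the "near-" naming convention are invented placeholders;
# only the ordering of the checkpoints follows the description.

def aars_pipeline(amino_acid, cognate):
    # 1. selective binding: most non-cognate amino acids are never activated,
    #    but near-cognates occasionally slip through
    activated = (amino_acid == cognate) or amino_acid.startswith("near-")
    if not activated:
        return "rejected at binding"
    # 2. pre-transfer editing: the wrong aa-AMP is hydrolyzed before the
    #    aminoacyl moiety is transferred to the tRNA
    if amino_acid != cognate:
        return "cleared by pre-transfer editing"
    # 3. post-transfer editing and trans-editing factors would catch any
    #    remaining misacylated tRNA before it reaches the ribosome
    return "delivered to the ribosome"

for aa in ["Ile", "near-Val", "Gly"]:
    print(aa, "->", aars_pipeline(aa, cognate="Ile"))
[/code]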
This is a list of eleven different error check and repair mechanisms operating during translation. Consider that life cannot start unless these mechanisms are fully in place and operational. Consider as well that all this machinery is a pre-requirement for living cells to kick-start life, so their origin cannot be explained by evolution. The alternatives are these: either all these hypercomplex, life-essential error check and repair mechanisms emerged by a fortuitous accident, spontaneously, through self-organization by unguided stochastic coincidence, natural events turning into orderly self-organization without external direction through purely physico-dynamic, non-biological chemical and kinetic processes influenced by environmental parameters; or they arose through the direct intervention, creative force, and activity of an intelligent agency, a powerful creator. Which of the two makes more sense?
Maintaining the genetic stability that an organism needs for its survival requires not only an extremely accurate mechanism for replicating DNA, but also mechanisms for repairing the many accidental lesions that occur continually in DNA. It's evident that the repair mechanism is essential for the cell to survive. It could not have evolved after life arose, but must have come into existence before. The mechanism is highly complex and elaborate; as a consequence, the design inference is justified and seems to be the best way to explain its existence.
Error rates at the successive stages of DNA replication:
5ʹ => 3ʹ polymerization: 1 error in 100,000
3ʹ => 5ʹ exonucleolytic proofreading: 1 in 100 (of the remaining errors)
Strand-directed mismatch repair: 1 in 1,000 (of the remaining errors)
Combined: 1 error in 10,000,000,000
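Multiplying the stated per-stage error fractions reproduces the combined figure (a sketch, assuming the three stages act independently):

[code]
# Multiplying the per-stage error fractions from the table above,
# assuming the three stages act independently.

polymerization  = 1e-5  # 1 error in 100,000 from 5' -> 3' polymerization
proofreading    = 1e-2  # proofreading passes 1 in 100 of those errors
mismatch_repair = 1e-3  # mismatch repair passes 1 in 1,000 of the rest

print(polymerization * proofreading * mismatch_repair)
# ~1e-10, i.e. about 1 error in 10,000,000,000
[/code]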
Jon Lieff, MD: DNA Proofreading, Correcting Mutations during Replication, Cellular Self-Directed Engineering
During replication, nucleotides, which compose DNA, are copied. When E. coli makes a copy of its DNA, it makes approximately one mistake for every billion new nucleotides. It can copy about 2000 letters per second, finishing the entire replication process in less than an hour. Compared to human engineering, this error rate is amazingly low. E. coli makes so few errors because DNA is proofread in multiple ways. An enzyme, DNA polymerase, moves along the DNA strands to start copying the code from each strand of DNA. This process has an error rate of about one in 100,000: rather high. When an error occurs, though, DNA polymerase senses the irregularity as a distortion of the new DNA's structure and stops what it is doing. How a protein can sense this is not clear. Other molecules then come to fix the mistake, removing the mistaken nucleotide base and replacing it with the correct one. After correction, the polymerase proceeds. This correction mechanism increases the accuracy 100 to 1000 times.
A Second Round of Proofreading
There are still some errors, however, that escape the previous mechanism. For those, three other complex proteins go over the newly copied DNA sequence. The first protein, called MutS (for mutator), senses a distortion in the helix shape of the new DNA and binds to the region with the mistaken nucleotides. The second protein, MutL, senses that its brother S is attached and brings a third protein over and attaches the two. The third molecule actually cuts the mistake on both sides. The three proteins then tag the incorrect section with a methyl group. Meanwhile, another partial strand of DNA is being created for the region in question, and another set of proteins cut out the exact amount of DNA needed to fill the gap. With both the mistaken piece and newly minted correct piece present, yet another protein determines which is the correct one by way of the methyl tag. That is, the correct one does not have the methyl tag on it. This new, correct section is then brought over and added to the original DNA strand. This second proofreading is itself 99% efficient and increases the overall accuracy of replication by another 100 times.
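The logic of strand-directed correction can be reduced to a toy sketch: detect a pair that violates Watson-Crick pairing, then resynthesize the flagged position from the trusted strand. In E. coli the trusted (parental) strand is identified by its methylation state; the code below simply takes the template strand as trusted, and the sequences are invented for illustration:

[code]
# Toy sketch of strand-directed mismatch repair. In E. coli the parental
# (trusted) strand is identified by its methylation state; here we simply
# pass it in as "template". The sequences are invented for illustration.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def mismatch_repair(template, new_strand):
    repaired = []
    for t_base, n_base in zip(template, new_strand):
        if PAIRS[t_base] != n_base:         # MutS-like step: sense the distortion
            repaired.append(PAIRS[t_base])  # excise and resynthesize from template
        else:
            repaired.append(n_base)
    return "".join(repaired)

template   = "ATGCGT"
new_strand = "TACGCG"  # last position mispaired: T should pair with A
print(mismatch_repair(template, new_strand))  # TACGCA
[/code]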
Multiple Sensors
There are multiple places where a protein "senses" what needs to be done. The computer-like sensing of the original mistake cannot be directed by the original DNA. Clearly, there are other sources of decision-making in a cell. While DNA's "quality control" is extremely complex in E. coli, the same process is even more complex in the human cell. Human cells contain many different polymerases and many other enzymes to cut and mend mistakes. There are even different Mut-type systems that, along with other proofreading, render human DNA replication incredibly accurate. Very recent research has shown some of the complex mechanisms of the MutL family of mutation correction molecules. It shows that the energy molecule ATP stimulates the process whereby MutL cuts the DNA around the error. There are two grooves in the MutL molecule, one for ATP and one for the DNA strand. When ATP binds to MutL, it changes the protein's shape, which allows the cutting to occur. In humans, when MutL is not functioning properly it is known to cause cancer.
While mutations help determine evolutionary variety, we still don't know how these very elaborate and multi-layered quality controls came about and how they are directed. Is it possible for DNA to direct its own editing? Somehow, these processes know which DNA sequences are appropriate and which are not.
Cellular Repair Capabilities. 20 First, then, all cells from bacteria to man possess a truly astonishing array of repair systems that serve to remove accidental and stochastic sources of mutation. Multiple levels of proofreading mechanisms recognize and remove errors that inevitably occur during DNA replication. These proofreading systems are capable of distinguishing between newly synthesized and parental strands of the DNA double helix, so they operate efficiently to rectify rather than fix the results of accidental misincorporations of the wrong nucleotide. Other systems scan non-replicating DNA for chemical changes that could lead to miscoding and remove modified nucleotides, while additional functions monitor the pools of precursors and remove potentially mutagenic contaminants. In anticipation of chemical and physical insults to the genome, such as alkylating agents and ultraviolet radiation, additional repair systems are encoded in the genome and can be induced to correct damage when it occurs. It has been a surprise to learn how thoroughly cells protect themselves against precisely the kinds of accidental genetic change that, according to conventional theory, are the sources of evolutionary variability. By virtue of their proofreading and repair systems, living cells are not passive victims of the random forces of chemistry and physics. They devote large resources to suppressing random genetic variation and have the capacity to set the level of background localized mutability by adjusting the activity of their repair systems.
DNA damage is an alteration in the chemical structure of DNA, such as a break in a strand of DNA, a base missing from the backbone of DNA, or a chemically changed base. Naturally occurring DNA damages arise more than 60,000 times per day per mammalian cell. DNA damage appears to be a fundamental problem for life. DNA damages are a major primary cause of cancer. DNA damages give rise to mutations and epimutations. The mutations, if not corrected, would be propagated throughout subsequent cell generations. Such a high rate of random changes in the DNA sequence would have disastrous consequences for an organism
Different pathways for DNA repair exist:
Nucleotide excision repair (NER), Base excision repair (BER), DNA mismatch repair (MMR), Repair through alkyltransferase-like proteins (ATLs) amongst others.
Base excision repair (BER) involves a category of enzymes known as DNA-N-glycosylases.
DNA repair mechanisms are designed with special care to safeguard the integrity of DNA, and they are essential for living organisms of all domains. In fact, Nature uses special proteins called 'proofreading enzymes' to prevent the occurrence of slight changes in sequence when DNA replicates.
Inaccurate replication would likely have limited the size of the progenote genome due to the risk of “error catastrophe,” the accumulation of so many genetic mistakes that the organism is no longer viable. To illustrate this point, consider the problem of replicating a genome of one million bases, which is sufficient to encode a few hundred RNAs and proteins. (The smallest known genome for an extant free-living bacterium is that of Pelagibacter ubique, which consists of 1.3 million bases.) If replication were even modestly faithful, with an error frequency of 0.1%, every replication of a genome consisting of 1 million bases would result in 1000 errors, approximately one or two in every gene. Some of those errors would have been harmless, and a few might have been beneficial, but many would have been detrimental, leading to macromolecules with impaired functions.
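The arithmetic behind this illustration, using only the numbers given in the paragraph (the gene count of 700 is an assumption standing in for "a few hundred"):

[code]
# The arithmetic behind the illustration above; the gene count of 700
# is an assumption standing in for "a few hundred RNAs and proteins".

genome_size = 1_000_000  # bases
error_rate = 0.001       # 0.1% per base per replication

errors_per_replication = genome_size * error_rate
print(errors_per_replication)          # 1000.0 errors per genome copy

genes = 700
print(errors_per_replication / genes)  # ~1.4 -> "one or two in every gene"
[/code]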
Replication forks may stall frequently and require some form of repair to allow completion of chromosomal duplication. Failure to solve these replicative problems comes at a high price, with the consequences being genome instability, cell death and, in higher organisms, cancer. Replication fork repair and hence reloading of DnaB may be needed away from oriC at any point within the chromosome and at any stage during chromosomal duplication. The potentially catastrophic effects of uncontrolled initiation of chromosomal duplication on genome stability suggests that replication restart must be regulated as tightly as DnaA-directed replication initiation at oriC. This implies reloading of DnaB must occur only on ssDNA at repaired forks or D-loops rather than onto other regions of ssDNA, such as those created by blocks to lagging strand synthesis. Thus an alternative replication initiator protein, PriA helicase, is utilized during replication restart to reload DnaB back onto the chromosome.
Question: Could the first cell, with its required complement of genes coded for by DNA, have successfully reproduced for a significant number of generations without a proofreading function? A further question is how the function of synthesizing the lagging strand could have arisen, along with the machinery to do so: that is, the primosome, and the function of polymerase I in removing the short pieces of RNA that the cell uses to prime replication, allowing the polymerase III function to fill the gap. These functions all require precise regulation and coordinated, machine-like functional steps. These are all complex, advanced functions and had to be present right from the beginning. How could this complex machinery have emerged in a gradual manner? The primosome had to be fully functional; otherwise, polymerisation could not have started, since a primer sequence is required.
The enzymes that copy DNA to DNA, or DNA to RNA, are indeed very clever. They can sense at several stages during synthesis whether anything is going wrong; for example, if they have added or are about to add the wrong base, according to the Watson–Crick rules of pairing. Also, there are ‘repair’ enzymes that go around correcting occasional mistakes of copying or ‘mismatches’. Thus, Nature goes to great lengths to avoid errors in the copying of DNA, even though the atoms in the DNA structure are actually quite tolerant of mismatch pairings. These enzymes are extremely efficient in doing their job, yet no one knows exactly how they work.
The evidence through DNA repair
1. Broken or mismatched DNA strands can lead to serious diseases and even death. It is essential that DNA damage is recognized and repaired quickly.
2. A team at Rockefeller University and Harvard Medical School found two essential proteins that act like "molecular tailors," able to snip out an error and sew it back up with the correct molecules.
3. These proteins, FANCI and FANCD2, repair inter-strand cross-links, "one of the most lethal types of DNA damage." This problem "occurs when the two strands of the double helix are linked together, blocking replication and transcription."
4. Each of your cells is likely to get 10 alarm calls a day for inter-strand cross-links.
5. FANCI and FANCD2 link together, join other members of the repair pathway, and are intimately involved in the excision and insertion steps.
6. One repair operation requires 13 protein parts.
7. "If any one of the 13 proteins in this pathway is damaged, the result is Fanconi anemia, a blood disorder that leads to bone marrow failure and leukemia, among other cancers, as well as many physiological defects." a. "Our results show that multiple steps of the essential S-phase ICL repair mechanism fail when the Fanconi anemia pathway is compromised."
8. Neither the scientific paper nor the press release mentions Darwin or any possible way this tightly-integrated system might have evolved.
9. The absolute necessity of FANCI and FANCD2 is obvious from this discovery, not only in one species but in all that have DNA. Their crucial role in the survival of species is undeniable.
10. They must have existed as perfectly functional units from the time of appearance of any species on this planet; otherwise existence would not be possible.
11. This implies creation, which further implies that God necessarily exists.
Reference: 1. Knipscheer et al., "The Fanconi Anemia Pathway Promotes Replication-Dependent DNA Interstrand Cross-Link Repair," Science, 18 December 2009: Vol. 326, no. 5960, pp. 1698-1701, DOI: 10.1126/science.1182372.
Argument from detection/correction codes
1. The GCL binary representation makes possible the existence of error detection/correction codes that operate along the strands of DNA.
2. "An error-control mechanism implies the organization of the redundancy in a mathematically structured way," and "the genetic code exhibits a strong mathematical structure that is difficult to put in relation with biological advantages other than error correction."
3. A peculiar and unique mathematical model accounts for the key properties of the genetic code that exhibits symmetry, organized redundancy, and a mathematical structure crucial for the existence of error-coding techniques operating along the DNA strands.
4. The DNA data tested using this model gave a strong indication that error-coding techniques do exist.
5. Such a wonderful design indicates purposeful creation that further indicates the existence of God.
6. God most probably exists.
The evidence of Rad51
1. The scientists from the Lawrence Berkeley National Lab, in their essay "Safeguarding genome integrity through extraordinary DNA repair," write: Homologous recombination is a complex mechanism with multiple steps, but also with many points of regulation to ensure accurate recombination at every stage. This could be why this method has been favored during evolution. The machinery that relocalizes the damaged DNA before loading Rad51 might have evolved because the consequences of not having it would be terrible.
2. If evolution is a chance process with no goal or purpose, it would not care whether something emerges or not. How can a mindless process "favor" a method? How would a mindless process "know" that the consequences of not having something would be terrible? How would that motivate a non-mind to produce machinery and complex mechanisms to avoid terrible consequences?
3. Thus, instead of saying 'Rad51 might have evolved,' it is clear that Rad51 was designed by an intelligent designer, since without such a complex mechanism, with multiple steps and many points of regulation to ensure accurate recombination at every stage, life could not exist.
4. The ability of Rad51 to carry out such extraordinary DNA repair proves the existence of an intelligent designer, whom all men call God.
5. God most probably exists.
DNA repair mechanisms make no sense under an evolutionary presupposition. Error correction requires error detection, and that requires the detection process to be able to compare the DNA as it is to the way it ought to be. DNA repair is regarded as one of the essential events in all life forms. 18 The stability of the genome is essential for the proper function and survival of all organisms. DNA damage is very frequent and appears to be a fundamental problem for life. DNA damage can trigger the development of cancer and accelerate aging. 19
Kunkel, T.A., DNA Replication Fidelity, J. Biological Chemistry 279:16895–16898, 23 April 2004.
This machinery keeps the error rate down to less than one error per 100 million letters
Maintaining the genetic stability that an organism needs for its survival requires not only an extremely accurate mechanism for replicating DNA, but also mechanisms for repairing the many accidental lesions that occur continually in DNA. Most such spontaneous changes in DNA are temporary because they are immediately corrected by a set of processes that are collectively called DNA repair. Of the thousands of random changes created every day in the DNA of a human cell by heat, metabolic accidents, radiation of various sorts, and exposure to substances in the environment, only a few accumulate as mutations in the DNA sequence. For example, we now know that fewer than one in 1000 accidental base changes in DNA results in a permanent mutation; the rest are eliminated with remarkable efficiency by DNA repair. The importance of DNA repair is evident from the large investment that cells make in DNA repair enzymes. For example, analysis of the genomes of bacteria and yeasts has revealed that several percent of the coding capacity of these organisms is devoted solely to DNA repair functions.
Without DNA repair, spontaneous DNA damage would rapidly change DNA sequences
Although DNA is a highly stable material, as required for the storage of genetic information, it is a complex organic molecule that is susceptible, even under normal cell conditions, to spontaneous changes that would lead to mutations if left unrepaired.
DNA damage is an alteration in the chemical structure of DNA, such as a break in a strand of DNA, a base missing from the backbone of DNA, or a chemically changed base. 15 Naturally occurring DNA damages arise more than 60,000 times per day per mammalian cell.
DNA damage appears to be a fundamental problem for life. DNA damages are a major primary cause of cancer. DNA damages give rise to mutations and epimutations that, by a process of natural selection, can cause progression to cancer. 16
Different pathways to repair DNA
DNA repair mechanisms fall into 2 categories
– Repair of damaged bases
– Repair of incorrectly base-paired bases during replication
Cells have multiple pathways to repair their DNA using different enzymes that act upon different kinds of lesions.
At least four excision repair pathways exist to repair single stranded DNA damage:
Nucleotide excision repair (NER) Base excision repair (BER) DNA mismatch repair (MMR) Repair through alkyltransferase-like proteins (ATLs)
In most cases, DNA repair is a multi-step process
– 1. An irregularity in DNA structure is detected
– 2. The abnormal DNA is removed
– 3. Normal DNA is synthesized
DNA bases are also occasionally damaged by an encounter with reactive metabolites produced in the cell (including reactive forms of oxygen) or by exposure to chemicals in the environment. Likewise, ultraviolet radiation from the sun can produce a covalent linkage between two adjacent pyrimidine bases in DNA to form, for example, thymine dimers. This type of damage occurs in the DNA of cells exposed to ultraviolet radiation (as in sunlight). A similar dimer will form between any two neighboring pyrimidine bases (C or T residues) in DNA. (see below)
If left uncorrected when the DNA is replicated, most of these changes would be expected to lead either to the deletion of one or more base pairs or to a base-pair substitution in the daughter DNA chain. (see below) The mutations would then be propagated throughout subsequent cell generations. Such a high rate of random changes in the DNA sequence would have disastrous consequences for an organism.
The DNA double helix is readily repaired. The double-helical structure of DNA is ideally suited for repair because it carries two separate copies of all the genetic information, one in each of its two strands. Thus, when one strand is damaged, the complementary strand retains an intact copy of the same information, and this copy is generally used to restore the correct nucleotide sequences to the damaged strand. An indication of the importance of a double-stranded helix to the safe storage of genetic information is that all cells use it; only a few small viruses use single-stranded DNA or RNA as their genetic material. The types of repair processes described in this section cannot operate on such nucleic acids, and once damaged, the chance of a permanent nucleotide change occurring in these single-stranded genomes of viruses is thus very high. It seems that only organisms with tiny genomes (and therefore tiny targets for DNA damage) can afford to encode their genetic information in any molecule other than a DNA double helix. Below, two of the most common pathways are shown. In both, the damage is excised, the original DNA sequence is restored by a DNA polymerase that uses the undamaged strand as its template, and a remaining break in the double helix is sealed by DNA ligase.
DNA ligase.
The reaction catalyzed by DNA ligase. This enzyme seals a broken phosphodiester bond. As shown, DNA ligase uses a molecule of ATP to activate the 5' end at the nick (step 1) before forming the new bond (step 2). In this way, the energetically unfavorable nick-sealing reaction is driven by being coupled to the energetically favorable process of ATP hydrolysis.
The two main pathways differ in the way in which they remove the damage from DNA. The first pathway is called
Base excision repair (BER) 9
It involves a battery of enzymes called DNA glycosylases, each of which can recognize a specific type of altered base in DNA and catalyze its hydrolytic removal. There are at least six types of these enzymes, including those that remove deaminated Cs, deaminated As, different types of alkylated or oxidized bases, bases with opened rings, and bases in which a carbon-carbon double bond has been accidentally converted to a carbon-carbon single bond.
How is an altered base detected within the context of the double helix? A key step is an enzyme-mediated "flipping-out" of the altered nucleotide from the helix, which allows the DNA glycosylase to probe all faces of the base for damage (see above image). It is thought that these enzymes travel along DNA using base-flipping to evaluate the status of each base. Once an enzyme finds the damaged base that it recognizes, it removes the base from its sugar. The "missing tooth" created by DNA glycosylase action is recognized by an enzyme called AP endonuclease (AP for apurinic or apyrimidinic, endo to signify that the nuclease cleaves within the polynucleotide chain), which cuts the phosphodiester backbone, after which the damage is removed and the resulting gap repaired (see figure below). Depurination, which is by far the most frequent type of damage suffered by DNA, also leaves a deoxyribose sugar with a missing base. Depurinations are directly repaired beginning with AP endonuclease.
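As a schematic recap of the BER steps just described, here is a toy sketch on a string representation of one strand, with 'U' standing for a deaminated C; the glycosylase, AP endonuclease, polymerase, and ligase steps are collapsed into comments:

[code]
# Schematic recap of the BER steps on a toy one-strand string, with 'U'
# standing for a deaminated C. The four enzymatic steps are collapsed
# into comments; nothing here models real enzyme mechanics.

def base_excision_repair(strand, damaged="U", correct="C"):
    s = list(strand)
    for i, base in enumerate(s):
        if base == damaged:
            # 1. DNA glycosylase flips the base out and removes it (AP site)
            # 2. AP endonuclease cuts the backbone at the "missing tooth"
            # 3. polymerase fills the gap from the complementary strand
            # 4. ligase seals the remaining nick
            s[i] = correct
    return "".join(s)

print(base_excision_repair("ATGUCGA"))  # ATGCCGA: the deaminated C is restored
[/code]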
While the BER pathway can recognize specific non-bulky lesions in DNA, it can correct only damaged bases that are removed by specific glycosylases. Similarly, the MMR pathway only targets mismatched Watson-Crick base pairs. 2
Molecular lesion A molecular lesion or point lesion is damage to the structure of a biological molecule such as DNA, enzymes, or proteins that results in reduction or absence of normal function or, in rare cases, the gain of a new function. Lesions in DNA consist of breaks and other changes in the chemical structure of the helix (see types of DNA lesions) while lesions in proteins consist of both broken bonds and improper folding of the amino acid chain. 6
DNA-N-glycosylases: Base excision repair (BER) involves a category of enzymes known as DNA-N-glycosylases. These enzymes can recognize a single damaged base and cleave the bond between it and the sugar in the DNA. The repair machinery then removes the damaged base, excises several bases around it, and replaces them with new bases, with a DNA polymerase adding to the 3' end and a ligase attaching to the 5' end.
DNA glycosylases are a family of enzymes involved in base excision repair, classified under EC number EC 3.2.2. Base excision repair is the mechanism by which damaged bases in DNA are removed and replaced. DNA glycosylases catalyze the first step of this process. They remove the damaged nitrogenous base while leaving the sugar-phosphate backbone intact, creating an apurinic/apyrimidinic site, commonly referred to as an AP site. This is accomplished by flipping the damaged base out of the double helix followed by cleavage of the N-glycosidic bond. Glycosylases were first discovered in bacteria, and have since been found in all kingdoms of life. 8
One example of DNA's automatic error-correction utilities are enough to stagger the imagination. There are dozens of repair mechanisms to shield our genetic code from damage; one of them was portrayed in Nature in terms that should inspire awe. 10
How do DNA-repair enzymes find aberrant nucleotides among the myriad of normal ones? One enzyme has been caught in the act of checking for damage, providing clues to its quality-control process.
From Nature's article: Structure of a repair enzyme interrogating undamaged DNA elucidates recognition of damaged DNA 11
How DNA repair proteins distinguish between the rare sites of damage and the vast expanse of normal DNA is poorly understood. Recognizing the mutagenic lesion 8-oxoguanine (oxoG) represents an especially formidable challenge, because this oxidized nucleobase differs by only two atoms from its normal counterpart, guanine (G). The X-ray structure of the trapped complex features a target G nucleobase extruded from the DNA helix but denied insertion into the lesion recognition pocket of the enzyme. Free energy difference calculations show that both attractive and repulsive interactions have an important role in the preferential binding of oxoG compared with G to the active site. The structure reveals a remarkably effective gate-keeping strategy for lesion discrimination and suggests a mechanism for oxoG insertion into the hOGG1 active site.
Of the four bases in DNA (C, G, A, and T) cytosine or C is always supposed to pair with guanine, G, and adenine, A, is always supposed to pair with thymine, T. The enzyme studied by Banerjee et al. in Nature is one of a host of molecular machines called BER glycosylases; this one is called human oxoG glycosylase repair enzyme (hOGG1), and it is specialized for finding a particular type of error: an oxidized G base (guanine). Oxidation damage can be caused by exposure to ionizing radiation (like sunburn) or free radicals roaming around in the cell nucleus. The normal G becomes oxoG, making it very slightly out of shape. There might be one in a million of these on a DNA strand. While it seems like a minor typo, it can actually cause the translation machinery to insert the wrong amino acid into a protein, with disastrous results, such as colorectal cancer. 12
The machine latches onto the DNA double helix and works its way down the strand, feeling every base on the way. As it proceeds, it kinks the DNA strand into a sharp angle. It is built to ignore the T and A bases, but whenever it feels a C, it knows there is supposed to be a G attached. The machine has precision contact points for C and G. When the C engages, the base paired to it is flipped up out of the helix into a slot inside the enzyme that is finely crafted to mate with a pure, clean G. If all is well, it flips the G back into the DNA helix and moves on. If the base is an oxoG, however, that base gets flipped into another slot further inside, where powerful forces yank the errant base out of the strand so that other machines can insert the correct one.
Now this is all wonderful stuff so far, but as with many things in living cells, the true wonder is in the details. The thermodynamic energy differences between G and oxoG are extremely slight – oxoG contains only one extra atom of oxygen – and yet this machine is able to discriminate between them to high levels of accuracy.
The author, David, says in the Nature article:
Structural biology: DNA search and rescue
DNA-repair enzymes amaze us with their ability to search through vast tracts of DNA to find subtle anomalies in the structure. The human repair enzyme 8-oxoguanine glycosylase (hOGG1) is particularly impressive in this regard because it efficiently removes 8-oxoguanine (oxoG), a damaged guanine (G) base containing an extra oxygen atom, and ignores undamaged bases.
The team led by Anirban Banerjee of Harvard, using a clever new stop-action method of imaging, caught this little enzyme in the act of binding to a bad guanine, helping scientists visualize how the machinery works. Some other amazing details are mentioned about this molecular proofreader. It checks every C-G pair, but slips right past the A-T pairs. The enzyme, “much like a train that stops only at certain locations,” pauses at each C and, better than any railcar conductor inspecting each ticket, flips up the G to validate it. Unless it conforms to the slot perfectly – even though G and oxoG differ in their match by only one hydrogen bond – it is ejected like a freeloader in a Pullman car and tossed out into the desert. David elaborates:
Calculations of differences in free energy indicate that both favourable and unfavourable interactions lead to preferential binding of oxoG over G in the oxoG-recognition pocket, and of G over oxoG in the alternative site. This structure [the image resolved by the scientific team] captures an intermediate that forms in the process of finding oxoG, and illustrates that the damaged base must pass through a series of ‘gates’, or checkpoints, within the enzyme; only oxoG satisfies the requirements for admission to the damage-specific pocket, where it will be clipped from the DNA. Other bases (C, A and T) may be rejected outright without extrusion from the helix because hOGG1 scrutinizes both bases in each pair, and only bases opposite a C will be examined more closely.
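The gate-keeping logic described in this passage can be caricatured in a few lines of code; the bases and pockets below are just strings, and the only thing the sketch preserves is the checkpoint ordering (pause at each C, extrude its partner, and admit only oxoG to the damage-specific pocket):

[code]
# Caricature of the hOGG1 gate-keeping logic described above. Bases are
# just strings; only the checkpoint ordering reflects the text.

def scan(duplex):  # duplex: list of (base, partner) pairs along one strand
    for i, (base, partner) in enumerate(duplex):
        if base != "C":
            continue               # A and T pairs are skipped outright
        # the partner base is flipped out of the helix for inspection
        if partner == "G":
            continue               # clean G: flipped back, scanning resumes
        if partner == "oxoG":
            print(f"position {i}: oxoG admitted to the damage pocket and excised")
        # any other base is rejected without entering the damage pocket

scan([("A", "T"), ("C", "G"), ("T", "A"), ("C", "oxoG"), ("C", "G")])
# -> position 3: oxoG admitted to the damage pocket and excised
[/code]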
Natural selection cannot act without accurate replication, yet the protein machinery for the level of accuracy required is itself built by the very genetic code it is designed to protect. That's a catch-22 situation. It would have been challenging enough to explain accurate transcription and translation alone by natural means, but as a consequence of UV radiation, the system would quickly have been destroyed through the accumulation of errors. So accurate replication and proofreading are required for the origin of life. How on earth could proofreading enzymes emerge, especially with this degree of fidelity, when they depend on the very information that they are designed to protect? Think about it... This is one more prima facie example of a chicken-and-egg situation. What is the alternative explanation to design? Proofreading DNA by chance? And a complex suite of translation machinery without a designer?
I enjoy learning about the wonder of these incredible mechanisms. If the apostle Paul could understand that creation demands a Creator, as he wrote in Romans 1:18, how much more can we today, with all the revelations of cell biology and molecular machines?
Since the editing machinery itself requires proper proofreading and editing during its manufacturing, how would the information for the machinery be transmitted accurately before the machinery was in place and working properly? Lest it be argued that the accuracy could be achieved stepwise through selection, note that a high degree of accuracy is needed to prevent ‘error catastrophe’ in the first place—from the accumulation of ‘noise’ in the form of junk proteins specified by the damaged DNA. 18
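The arithmetic behind the 'error catastrophe' point is easy to check. With a per-base error rate ε, the chance of copying a genome of length L without a single mistake is (1 − ε)^L, which collapses as L grows unless ε is very small. The rates and lengths below are illustrative round numbers, not measured values:

```python
# Chance of copying a genome of length L with no errors, given a
# per-base error rate eps: (1 - eps) ** L. Illustrative numbers only.

for eps in (1e-2, 1e-4, 1e-8):
    for L in (1_000, 1_000_000):
        p = (1 - eps) ** L
        print(f"eps={eps:.0e}, L={L:>9,}: P(error-free copy) = {p:.3g}")
```

At an error rate of one in ten thousand, even a modest genome is almost never copied cleanly; only with proofreading-level fidelity do long genomes survive replication intact.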
Depending on the species, this repair system can eliminate abnormal bases such as uracil, thymine dimers, 3-methyladenine, and 7-methylguanine. 14
Since many mutations are deleterious, DNA repair systems are vital to the survival of all organisms. Living cells contain several DNA repair systems that can fix different types of DNA alterations.
Nucleotide excision repair (NER)
Nucleotide excision repair is a DNA repair mechanism. DNA damage occurs constantly because of chemicals (e.g., intercalating agents), radiation, and other mutagens.
Nucleotide excision repair (NER) is a highly conserved DNA repair mechanism. NER systems recognize the damaged DNA strand, cleave it on both sides of the lesion, remove and newly synthesize the fragment. UvrB is a central component of the bacterial NER system participating in damage recognition, strand excision and repair synthesis. We have solved the crystal structure of UvrB in the apo and the ATP-bound forms. UvrB contains two domains related in structure to helicases, and two additional domains unique to repair proteins. The structure contains all elements of an intact helicase, and is evidence that UvrB utilizes ATP hydrolysis to move along the DNA to probe for damage. The location of conserved residues and structural comparisons allow us to predict the path of the DNA and suggest that the tight preincision complex of UvrB and the damaged DNA is formed by insertion of a flexible β-hairpin between the two DNA strands. 3
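A schematic rendering of the UvrB cycle that abstract describes, namely ATP-driven movement along the DNA with a β-hairpin probe testing each position, might look like the following sketch. The function name and the 'lesion' test are invented for illustration:

```python
# Schematic sketch of the UvrB probing cycle: ATP hydrolysis powers
# movement along the DNA, and a flexible beta-hairpin inserted between
# the strands tests each position. Names and the damage test are
# invented for illustration.

def probe_for_damage(dna, atp):
    """Translocate base by base while ATP lasts; return the first
    position where the hairpin probe detects a lesion, else None."""
    for pos, site in enumerate(dna):
        if atp <= 0:
            return None        # no ATP hydrolysis, no translocation
        atp -= 1               # one hydrolysis step per move
        if site == 'lesion':   # hairpin senses the damaged strand
            return pos         # tight preincision complex forms here
    return None

dna = ['ok', 'ok', 'lesion', 'ok']
print(probe_for_damage(dna, atp=10))  # -> 2
```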
DNA constantly requires repair due to damage that can occur to bases from a vast variety of sources, including chemicals but also ultraviolet (UV) light from the sun. Nucleotide excision repair (NER) is a particularly important mechanism by which the cell can prevent unwanted mutations by removing the vast majority of UV-induced DNA damage (mostly in the form of thymine dimers and 6-4 photoproducts). The importance of this repair mechanism is evidenced by the severe human diseases that result from inborn genetic mutations of NER proteins, including xeroderma pigmentosum and Cockayne syndrome. While the base excision repair machinery recognizes specific lesions in the DNA and corrects only damaged bases that can be removed by a specific glycosylase, the nucleotide excision repair enzymes recognize bulky distortions in the shape of the DNA double helix. Recognition of these distortions leads to the removal of a short single-stranded DNA segment that includes the lesion, creating a single-strand gap in the DNA that is subsequently filled in by DNA polymerase using the undamaged strand as a template. NER can be divided into two subpathways (global genomic NER and transcription-coupled NER) that differ only in their recognition of helix-distorting DNA damage. 4
Nucleotide excision repair (NER) is a particularly important excision mechanism that removes DNA damage induced by ultraviolet light (UV). 2 UV DNA damage results in bulky DNA adducts, mostly thymine dimers and 6-4 photoproducts. Recognition of the damage leads to removal of a short single-stranded DNA segment that contains the lesion. The undamaged single-stranded DNA remains, and DNA polymerase uses it as a template to synthesize a short complementary sequence. Final ligation to complete NER and form double-stranded DNA is carried out by DNA ligase. NER can be divided into two subpathways: global genomic NER (GG-NER) and transcription-coupled NER (TC-NER). The two subpathways differ in how they recognize DNA damage, but they share the same process for lesion incision, repair, and ligation.
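Putting the steps from the two descriptions above into a single sequence gives a minimal sketch of the NER pipeline: recognize the lesion, excise a short single-stranded segment around it, resynthesize from the undamaged strand, and ligate. The excision window and all names below are placeholders (real excised fragments run roughly a dozen nucleotides in bacteria and a few dozen in humans):

```python
# Minimal sketch of the NER sequence described above: recognize a
# bulky lesion ('X'), excise a short single-stranded segment around
# it, fill the gap from the undamaged strand, and ligate.
# The window size and all names are placeholders.

EXCISION_WINDOW = 2  # placeholder half-width around the lesion

def ner(damaged, template):
    """Repair `damaged` using the complementary `template` strand."""
    if 'X' not in damaged:
        return damaged                      # no lesion recognized
    i = damaged.index('X')
    lo = max(0, i - EXCISION_WINDOW)
    hi = min(len(damaged), i + EXCISION_WINDOW + 1)
    pair = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    # Dual incision removes damaged[lo:hi]; DNA polymerase fills the
    # gap by reading the undamaged template; DNA ligase seals the ends.
    patch = [pair[b] for b in template[lo:hi]]
    return damaged[:lo] + patch + damaged[hi:]

damaged  = list("ATGCXGCAT")
template = list("TACGACGTA")  # undamaged complementary strand
print("".join(ner(damaged, template)))  # -> ATGCTGCAT
```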
The importance of NER is evidenced by the severe human diseases that result from in-born genetic mutations of NER proteins. Xeroderma pigmentosum and Cockayne's syndrome are two examples of NER associated diseases.
Maintaining genomic integrity is essential for living organisms. NER is a major pathway allowing the removal of lesions which would otherwise accumulate and endanger the health of the affected organism. 5
Nucleotide excision repair (NER) is a mechanism to recognize and repair bulky DNA damage caused by chemical compounds, environmental carcinogens, and exposure to UV light. In humans, hereditary defects in the NER pathway are linked to at least three diseases: xeroderma pigmentosum (XP), Cockayne syndrome (CS), and trichothiodystrophy (TTD). The repair of damaged DNA involves at least 30 polypeptides within two different sub-pathways of NER, known as transcription-coupled repair (TCR-NER) and global genome repair (GGR-NER). TCR refers to the expedited repair of lesions located in the actively transcribed strand of genes by RNA polymerase II (RNAP II). In GGR-NER, the first step of damage recognition involves the XPC-hHR23B complex together with the XPE complex (in prokaryotes, the UvrAB complex). The following steps of GGR-NER and TCR-NER are similar.
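To summarize the division of labour in that paragraph, here is a small dispatch sketch. The complex names follow the quotation; the stalled-polymerase test and the function structure are simplifications of my own:

```python
# Illustrative dispatch between the two NER sub-pathways named above.
# Complex names come from the quoted text; the logic is a
# simplification, not the actual recognition chemistry.

def recognize_route(in_transcribed_strand, rnap_ii_stalled):
    """Pick the recognition route; downstream steps are shared."""
    if in_transcribed_strand and rnap_ii_stalled:
        return "TCR-NER: stalled RNA polymerase II flags the lesion"
    return "GGR-NER: XPC-hHR23B (with XPE) finds the lesion by scanning"

SHARED_STEPS = ["dual incision", "excision", "repair synthesis", "ligation"]

print(recognize_route(True, True))
print(recognize_route(False, False))
print("then, on either route:", " -> ".join(SHARED_STEPS))
```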