Defending the Christian Worldview, Creationism, and Intelligent Design

This is my personal virtual library, where I collect information that, in my view, points to the Christian faith, creationism, and Intelligent Design as the best explanation of the origin of the physical Universe, life, and biodiversity.




The genetic code, insurmountable problem for non-intelligent origin


Otangelo (Admin)

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin

The debate about whether DNA is a code or not began at least 15 years ago. In my understanding, DNA itself is NOT a code, but a semantophoretic molecule that stores instructional assembly information through the genetic code.
https://web.archive.org/web/20090302024700/http://www.freeratio.org/showthread.php?t=135497&page=1

http://evo2.org/dna-atheists/dna-code/
Codes always involve a system of symbols that represent ideas or plans.

According to the Field Museum, DNA base pairs are “codes, or instructions, that specify the characteristics of an organism, from a body’s sex to the color of a pea”
My comment: This very sentence is the cause of a lot of confusion. The information stored in genes, i.e. the sequence of codons that specifies the sequence of amino acids, is NOT the genetic code. The genetic code is the assignment of the 64 trinucleotide codons to 20 amino acids.
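To make the distinction concrete, here is a minimal sketch (a hypothetical Python illustration with only a handful of the 64 assignments shown): the dictionary is the code, that is, the fixed table of codon-to-amino-acid assignments, while the particular mRNA string handed to translate() is the message written using that code.

```python
# Minimal sketch distinguishing the "code" (the codon -> amino acid
# assignment table) from the "message" (a particular gene sequence).
# Only a few of the 64 assignments are shown, for illustration.

GENETIC_CODE = {          # the code: a fixed table of assignments
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA message three bases at a time and look up
    each codon in the assignment table."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = GENETIC_CODE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

# The message (this particular sequence) is not the code; the table above is.
print(translate("AUGUUUGGCAAAUGGUAA"))   # ['Met', 'Phe', 'Gly', 'Lys', 'Trp']
```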

Claim: DNA is a set of instructions only in the same sense that chemistry itself is a set of instructions. All molecules know or decode is the laws of physics.
Reply: The bits and bytes on a hard drive don’t “know” anything either; they simply obey the laws of physics. It’s a purely electro-mechanical process. But they still have to be programmed to do what they do. Computer programs don’t emerge naturally; they are designed. A book cannot be reduced to paper and ink.

Codes are generally expressed as binary relations or as geometric correspondences between a domain and a counterdomain; one speaks of mapping in the latter case. Thus, in the International Morse Code, 52 symbols consisting of sequences of dots and dashes map onto 52 symbols of the alphabet, numbers and punctuation marks; or in the genetic code, 61 of the possible symbol triplets of the RNA domain map onto a set of 20 symbols of the polypeptide counterdomain.

The data on your computer cannot be explained purely in terms of the materials your computer is made of, which is as good an illustration as any of why purely materialistic interpretations fail. Many years ago a discussion about this topic would have seemed hopelessly abstract to most people, but now we live in the information age. We all know exactly what information is, and we all understand that information is the entity that defines living things, man-made things, and all designs.


Codes are the product of a mind. A thinking entity outside the cell has to be responsible for the twofold system of DNA and protein, which use different languages and yet can communicate with each other. One would never dare to insist that random ordering could create the Morse code or the Braille reading system. That would be utterly irrational. DNA base sequencing cannot be explained by chance or physical necessity any more than the information in a newspaper headline can be explained by reference to the chemical properties of ink. Nor can the conventions of the genetic code, which determine the assignments between nucleotide triplets and amino acids during translation, be explained in this manner. The genetic code functions like a grammatical convention in a human language. The properties and shapes of building bricks do not determine their arrangement in the construction of a house or a wall. Similarly, the properties of biological building blocks do not determine the arrangement of monomers into functional, information-bearing DNA and RNA polymers, nor into protein strands.

The cell often employs a functional logic that mirrors our own, but exceeds it in the elegance of its execution. “It’s like we are looking at 8.0 or 9.0 versions of design strategies that we have just begun to implement. When I see how the cell processes information,” he said, “it gives me an eerie feeling that someone else figured this out before we got here.”

The standard genetic code showing the specific amino acids that DNA base triplets specify after they are transcribed and translated during gene expression.
Part of the ASCII code.

The genetic code assigns similar codons to amino acids with similar polar requirements. By chance, or by design?
In the figure, each triplet is colored by the polar requirement of the amino acid it encodes: hydrophobic side chains blue, intermediate ones gray, and very polar side chains red. The standard genetic code (SGC) is exceedingly highly ordered with respect to the polar requirement, with large coherent domains for hydrophobic, intermediate, and polar amino acids (polar amino acids are those with side chains that prefer to reside in an aqueous, i.e. water, environment). The SGC's division into a few coherent regions is especially striking.
https://www.biorxiv.org/content/10.1101/2020.02.20.958546v2.full?fbclid=IwAR2mR9y3NQsFcJCSP8Cgr5e-rIM2wzWJFT_2LfsfWRs4N1vX0b9-jsge4Jg
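As a rough illustration of the "coherent domains" claim, the following sketch builds the standard codon table and asks how often a single-base change in a codon lands on an amino acid of the same polarity class. The Woese polar-requirement values and the three-way class cut-offs used here are approximate, rounded figures chosen only for illustration; they are not taken from the paper linked above.

```python
# Sketch: does the standard genetic code place chemically similar amino acids
# on neighbouring codons?  Each amino acid gets an approximate Woese
# "polar requirement" value (values and class cut-offs are illustrative),
# and we count how often a single-base change stays within the same class.

BASES = "UCAG"
AAS   = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE  = {b1 + b2 + b3: AAS[16 * i + 4 * j + k]
         for i, b1 in enumerate(BASES)
         for j, b2 in enumerate(BASES)
         for k, b3 in enumerate(BASES)}          # '*' marks stop codons

POLAR_REQ = {  # approximate Woese polar requirement values
    'A': 7.0, 'R': 9.1, 'N': 10.0, 'D': 13.0, 'C': 4.8, 'Q': 8.6, 'E': 12.5,
    'G': 7.9, 'H': 8.4, 'I': 4.9, 'L': 4.9, 'K': 10.1, 'M': 5.3, 'F': 5.0,
    'P': 6.6, 'S': 7.5, 'T': 6.6, 'W': 5.2, 'Y': 5.4, 'V': 5.6,
}

def polarity_class(aa):
    """Crude three-way split mirroring the figure's colouring."""
    pr = POLAR_REQ[aa]
    return "hydrophobic" if pr < 6.0 else "intermediate" if pr < 9.0 else "polar"

same_class = total = 0
for codon, aa in CODE.items():
    if aa == '*':
        continue
    for pos in range(3):                       # every single-base neighbour
        for b in BASES:
            if b == codon[pos]:
                continue
            neighbour = CODE[codon[:pos] + b + codon[pos + 1:]]
            if neighbour == '*':
                continue
            total += 1
            same_class += polarity_class(aa) == polarity_class(neighbour)

print(f"single-base neighbours in the same polarity class: {same_class}/{total}"
      f" = {same_class / total:.2f}")
```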


The genetic code is to the genetic information on a strand of DNA as the Morse code is to a specific message received by a telegraph operator. Molecular biologists have failed to find any significant chemical interaction
between the codons on mRNA (or the anticodons on tRNA) and the amino acids on the acceptor arm of tRNA to which the codons correspond. This means that forces of chemical attraction between amino acids and these groups of bases do not explain the correspondences that constitute the genetic code.



1. The assignment of a word to represent something, like the word 'chair' to an object one sits on, is always of mental origin.
2. The translation of a word in one language into another language is always of mental origin. For example, the correspondence of the English word 'chair' to 'yizi' in Chinese can only be made by intelligence, upon common agreement of meaning.
3. In biology, the genetic code is the assignment and convention (a cipher) of 64 triplet codons corresponding to 20 amino acids. It functions as a higher-level constraint distinct from the laws of physics and chemistry, much like a grammatical convention in a human language.
4. Since intelligence is the only cause known to be able to do this, this assignment is best explained by the deliberate, arbitrary action of a non-human intelligent agency.

1. The origin of the genetic cipher 
1. Triplet codons must be assigned to amino acids to establish a genetic cipher. Nucleic-acid bases and amino acids do not recognize each other directly but have to interact via chemical intermediaries (tRNAs and aminoacyl-tRNA synthetases), so there is no obvious reason why particular triplets should go with particular amino acids.
2. Other translation assignments are conceivable, but whatever cipher is established, the right amino acids must be assigned to permit polypeptide chains that fold into active, functional proteins. Functional amino acid chains are rare in sequence space. There are two possibilities to explain the correct assignment of the codons to the right amino acids: chance and design. Natural selection is not an option, since DNA replication is not yet operating at the stage prior to a self-replicating cell, and this assignment had to be established before that stage.
3. If it were a lucky accident that happened by chance, luck would have had to hit the jackpot through trial and error among 1.5 × 10^84 possible genetic code tables, a number larger than the number of atoms in the whole universe. That puts any real possibility of chance providing the feat out of the question; using Borel's law, it is in the realm of impossibility. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is universal. Put simply, the chemical lottery lacks the time necessary to find the universal genetic code.
4. We have not even considered that there are also over 500 possible amino acids, which would have to be sorted out to arrive at just 20, and that only left-handed (L) amino acids and right-handed (D) sugars would have to be selected.
5. We know that minds invent languages, codes, translation systems, ciphers, and complex, specified information all the time.
6. To put it in other words: the task compares to inventing two languages, two alphabets, and a translation system, with the information content of a book (for example, Hamlet) being created and written in English and translated into Chinese, through the invention and application of an extremely sophisticated hardware system.
7. The genetic code and its translation system are best explained through the action of an intelligent designer.

The Genetic Code was most likely implemented by intelligence.
1. In communications and information processing, a code is a system of rules that converts information, such as a letter or a word, into another form (another word, letter, etc.).
2. In translation, 64 genetic codons are assigned to 20 amino acids. The genetic code refers to this assignment of the codons to the amino acids, and is thus the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, mapping.
4. The universal triple-nucleotide genetic code is the result either of a) random selection through evolution, or b) intelligent implementation.
5. We know by experience that performing value assignment and codification is always a process of intelligence with an intended result. Non-intelligence, i.e. matter, molecules, nucleotides, etc., has never been demonstrated to be able to generate codes, and has neither intent nor distant goals with the foresight to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.


“Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source — from a mind or personal agent.”
– Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, 117(2):213-239 (2004).

“As the pioneering information theorist Henry Quastler once observed, “the creation of information is habitually associated with conscious activity.” And, of course, he was right. Whenever we find information—whether embedded in a radio signal, carved in a stone monument, written in a book or etched on a magnetic disc—and we trace it back to its source, invariably we come to mind, not merely a material process. Thus, the discovery of functionally specified, digitally encoded information along the spine of DNA, provides compelling positive evidence of the activity of a prior designing intelligence. This conclusion is not based upon what we don’t know. It is based upon what we do know from our uniform experience about the cause and effect structure of the world—specifically, what we know about what does, and does not, have the power to produce large amounts of specified information.”
– Stephen Meyer

“A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. ... there is no known law of nature and no known sequence of events which can cause information to originate by itself in matter.”
Werner Gitt, In The Beginning Was Information (1997), pp. 64-67, 79, 107.
(The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)

“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
Donald E. Johnson – Bioinformatics: The Information in Life
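A back-of-the-envelope calculation illustrates the capacity point this quote is making: a triplet of four bases can carry log2(64) = 6 bits per symbol, a single amino acid about log2(20) ≈ 4.3 bits, and a doublet codon only 4 bits, which is why a doublet alphabet could not unambiguously name all 20 amino acids. The short sketch below simply prints these numbers; it is an illustration, not a quotation from Johnson.

```python
# Back-of-the-envelope illustration of the channel-capacity point in the
# quote above: how many bits can one symbol of each alphabet carry?
from math import log2

bits_per_codon      = log2(4 ** 3)   # 64 possible triplets -> 6.00 bits
bits_per_amino_acid = log2(20)       # 20 possible residues -> ~4.32 bits
bits_per_doublet    = log2(4 ** 2)   # 16 possible doublets -> 4.00 bits

print(f"triplet codon : {bits_per_codon:.2f} bits")
print(f"amino acid    : {bits_per_amino_acid:.2f} bits")
print(f"doublet codon : {bits_per_doublet:.2f} bits  (< 4.32, so a doublet "
      "alphabet could not name all 20 amino acids unambiguously)")
```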

“The genetic code’s error-minimization properties are far more dramatic than these (one in a million) results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means that of these 10^18 codes, few, if any, have an error-minimization capacity that approaches the code found universally throughout nature.”
Fazale Rana, The Cell’s Design, p. 175.

The genetic code could not be the product of evolution, since it had to be fully operational when life started (as did DNA replication, upon which evolution depends). The only alternative to design is that random, unguided events originated it.

Barbieri: Code Biology:
"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
https://www.sciencedirect.com/journal/biosystems/vol/164/suppl/C

[Comment on other biological codes]: "In signal transduction, in short, we find all the essential components of a code: (a) two independent worlds of molecules (first messengers and second messengers), (b) a set of adaptors that create a mapping between them, and (c) the proof that the mapping is arbitrary because its rules can be changed in many different ways."

Why should or would molecules designate, assign, dictate, ascribe, correspond, correlate, or specify anything at all? How does that make sense? This is not an argument from incredulity. The proposition defies reasonable principles and the known, limited, unspecific range of chance, physical necessity, mutations, and natural selection. What we need is a *plausible* account of how it came about in the first place.
It is in ANY scenario a far stretch to believe that unguided random events would produce a functional code system and arbitrary assignments of meaning. That is simply putting far too much faith into what molecules on their own are capable of doing.

RNAs (if they were extant prebiotically at all) would just lie around and then disintegrate within a short period of time. Even if we disregard that the prebiotic synthesis of RNAs HAS NEVER BEEN DEMONSTRATED IN THE LAB, they would not polymerize. Clay experiments have failed. Systems, given energy and left to themselves, DEVOLVE to give uselessly complex mixtures, “asphalts”. The literature reports (to our knowledge) exactly ZERO CONFIRMED OBSERVATIONS where molecular complexification emerged spontaneously from a pool of random chemicals. It is IMPOSSIBLE for any non-living chemical system to escape devolution and enter into the world of the “living”.

Alberts et al., Molecular Biology of the Cell, p. 367
The relationship between a sequence of DNA and the sequence of the corresponding protein is called the genetic code…the genetic code is deciphered by a complex apparatus that interprets the nucleic acid sequence.
Lewin, Genes VIII, pp. 21-22

…the conversion of the information in [messenger] RNA represents a translation of the information into another language that uses quite different symbols.

The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases 28 July 2020
One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated
https://www.nature.com/articles/s41598-020-69100-0

Genomics: Evolution of the Genetic Code 1
Understanding how this code originated and how it affects the molecular biology and evolution of life today are challenging problems, in part because it is so highly conserved — without variation to observe it is difficult to dissect the functional implications of different aspects of a character. 

It is tempting to think that a system so central to life should be elegant, but of course that’s not how evolution works; the genetic code was not designed by clever scientists, but rather built through a series of contingencies. The ‘frozen accident’, as it was described by Crick, that ultimately emerged is certainly non-random, but is more of a mishmash than an elegant plan, which led to new ideas about how the code may have evolved in a series of steps from simpler codes with fewer amino acids. So the code was not always thus, but once it was established before the last universal common ancestor of all extant life (LUCA) it has remained under very powerful selective constraints that kept the code frozen in nearly all genomes that subsequently diversified.

My comment: A series of contingencies! A mishmash rather than an elegant plan! Contingent means accidental, incidental, adventitious, casual, by chance. So, in other words, luck. A fortuitous accident. Is that a rational proposition?


My comment: Without stop codons, the translation machinery would not know where to end protein synthesis. There could never be functional proteins, and no life on Earth. At all.

These characteristics may render such changes more statistically probable, less likely to be deleterious, or both. However, most non-canonical genetic codes are inferred from DNA sequence alone, or occasionally DNA sequences and corresponding tRNAs.

Origin and evolution of the genetic code: the universal enigma, Eugene V. Koonin, February 2009
In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

Many of the same codons are reassigned (compared to the standard code) in independent lineages (e.g., the most frequent change is the reassignment of the stop codon UGA to tryptophan); this implies that there should be a predisposition towards certain changes, and at least one of these changes was reported to confer a selective advantage.

The origin of the genetic code is acknowledged to be a major hurdle in the origin of life, and I shall mention just one or two of the main problems. Calling it a ‘code’ can be misleading because of associating it with humanly invented codes which at their core usually involve some sort of pre-conceived algorithm; whereas the genetic code is implemented entirely mechanistically – through the action of biological macromolecules. This emphasises that, to have arisen naturally – e.g. through random mutation and natural selection – no forethought is allowed: all of the components would need to have arisen in an opportunistic manner.

Crucial role of the tRNA activating enzymes 
To try to explain the source of the code various researchers have sought some sort of chemical affinity between amino acids and their corresponding codons. But this approach is misguided:

First of all, the code is mediated by tRNAs, which carry the anticodon (complementary to the codon in the mRNA), rather than by the codon itself (in the DNA). So, if the code were based on affinities between amino acids and anticodons, it implies that the two-step process of transcription followed by translation cannot have arisen as a later improvement on a simpler direct system: the complex two-step process would need to have been there right from the start.
Second, the amino acid has no role in identifying the tRNA or the codon. (This can be seen from an experiment in which the amino acid cysteine was bound to its appropriate tRNA in the normal way, using the relevant activating enzyme, and was then chemically modified to alanine. When the altered aminoacyl-tRNA was used in an in vitro protein-synthesizing system (including mRNA, ribosomes etc.), the resulting polypeptide contained alanine, instead of the usual cysteine, wherever the codon UGU occurred in the mRNA. This clearly shows that it is the tRNA alone, with its appropriate anticodon matching the codon on the mRNA, that determines which amino acid is incorporated.) The association is made by an activating enzyme (aminoacyl-tRNA synthetase), which attaches each amino acid to its appropriate tRNA (clearly requiring this enzyme to correctly identify both components). There are 20 different activating enzymes, one for each type of amino acid.
Interestingly, the end of the tRNA to which the amino acid attaches has the same nucleotide sequence for all amino acids, which leads to a third point.
Third:  Interest in the genetic code tends to focus on the role of the tRNAs, but as just indicated that is only one half of implementing the code. Just as important as the codon-anticodon pairing (between mRNA and tRNA) is the ability of each activating enzyme to bring together an amino acid with its appropriate tRNA. It is evident that implementation of the code requires two sets of intermediary molecules: the tRNAs which interact with the ribosomes and recognise the appropriate codon on mRNA, and the activating enzymes which attach the right amino acid to its tRNA. This is the sort of complexity that pervades biological systems, and which poses such a formidable challenge to an evolutionary explanation for its origin. It would be improbable enough if the code were implemented by only the tRNAs which have 70 to 80 nucleotides; but the equally crucial and complementary role of the activating enzymes, which are hundreds of amino acids long, excludes any realistic possibility that this sort of arrangement could have arisen opportunistically.
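The two-step logic just described (synthetases charge the tRNAs; the ribosome then matches codons to anticodons without ever inspecting the attached amino acid) can be caricatured in a few lines of code. The following is a toy model with simplified, made-up anticodon assignments, not a description of real tRNA biochemistry; it only reproduces the logic of the cysteine-to-alanine experiment mentioned above: once the amino acid carried by a charged tRNA is altered, translation inserts the altered amino acid wherever the corresponding codon occurs.

```python
# Toy model of the two independent steps that implement the code, and of the
# cysteine -> alanine experiment: translation reads only the tRNA's anticodon,
# never the amino acid it happens to carry.

SYNTHETASE_RULES = {      # step 1: each activating enzyme pairs one amino acid
    "ACA": "Cys",         #         with the tRNA bearing this anticodon
    "AAA": "Phe",         # (simplified, exact-pairing anticodons for the toy)
    "CGC": "Ala",
}

def charge_trnas():
    """Aminoacyl-tRNA synthetases attach amino acids to their tRNAs."""
    return dict(SYNTHETASE_RULES)

def complement(base):
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def ribosome(mrna, charged_trnas):
    """Step 2: the ribosome matches each codon to an anticodon and takes
    whatever amino acid that tRNA carries; it never inspects the cargo."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        anticodon = "".join(complement(b) for b in reversed(codon))
        peptide.append(charged_trnas[anticodon])
    return peptide

trnas = charge_trnas()
mrna = "UGUUUU"                       # UGU (Cys codon) + UUU (Phe codon)
print(ribosome(mrna, trnas))          # ['Cys', 'Phe']

# Re-enact the experiment: chemically convert the Cys already attached to its
# tRNA into Ala, without touching the anticodon.
trnas["ACA"] = "Ala"
print(ribosome(mrna, trnas))          # ['Ala', 'Phe'] wherever UGU occurs
```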

Progressive development of the genetic code is not realistic
In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary, manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.

1. First, it would seem that the early codons need only have used two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognizing this difficulty, most researchers assume that the code used 3-base codons from the outset; which was remarkably fortuitous, or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
2. Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
3. Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
4. Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesized and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!

https://evolutionunderthemicroscope.com/ool02.html

Comparison of translation loads for standard and alternative genetic codes
The origin and universality of the genetic code is one of the biggest enigmas in biology. Soon after the genetic code of Escherichia coli was deciphered, it was realized that this specific code out of more than 10^84 possible codes is shared by all studied life forms (albeit sometimes with minor modifications). The question of how this specific code appeared and which physical or chemical constraints and evolutionary forces have shaped its highly non-random codon assignment is subject of an intense debate. In particular, the feature that codons differing by a single nucleotide usually code for either the same or a chemically very similar amino acid and the associated block structure of the assignments is thought to be a necessary condition for the robustness of the genetic code both against mutations as well as against errors in translation.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909233/

Was Wright Right? The Canonical Genetic Code is an Empirical Example of an Adaptive Peak in Nature; Deviant Genetic Codes Evolved Using Adaptive Bridges
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2924497/

The error minimization hypothesis postulates that the canonical genetic code evolved as a result of selection to minimize the phenotypic effects of point mutations and errors in translation. 

My comment: How can the authors claim that there was already translation, if translation depends on the genetic code already being set up?

It is likely that the code in its early evolution had few or even a minimal number of tRNAs that decoded multiple codons through wobble pairing, with more amino acids and tRNAs being added as the code evolved.

My comment: Why do the authors claim that the genetic code emerged based on evolutionary selective pressures, if at this stage there was no evolution AT ALL? Evolution starts with DNA replication, which DEPENDS on translation being already fully set up. Also, the origin of tRNAs is a huge problem for proponents of abiogenesis, because they are highly specific, and their biosynthesis in modern cells is a highly complex, multistep process requiring many complex enzymes.
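For readers unfamiliar with the wobble pairing mentioned in the quoted passage, here is a minimal sketch of Crick's classic wobble rules: strict Watson-Crick pairing at the first two codon positions, relaxed pairing between the codon's third position and position 34 of the anticodon. It is illustrative only; real decoding also depends on base modifications and context.

```python
# Sketch of Crick's wobble rules: one tRNA can decode several codons because
# the third codon position is read loosely by anticodon position 34.

STRICT = {"A": "U", "U": "A", "G": "C", "C": "G"}   # Watson-Crick pairs
WOBBLE = {                                          # anticodon pos. 34 -> allowed codon pos. 3
    "G": {"C", "U"},
    "C": {"G"},
    "A": {"U"},
    "U": {"A", "G"},
    "I": {"U", "C", "A"},                           # inosine
}

def codons_read_by(anticodon):
    """Enumerate the codons a tRNA with this 5'->3' anticodon can decode."""
    pos34, pos35, pos36 = anticodon                 # anticodon is antiparallel to the codon
    prefix = STRICT[pos36] + STRICT[pos35]          # codon positions 1 and 2 pair strictly
    return {prefix + third for third in WOBBLE[pos34]}

# A single tRNA-Gly with anticodon 5'-GCC-3' reads both GGC and GGU:
print(codons_read_by("GCC"))   # {'GGC', 'GGU'}
# A tRNA with inosine at the wobble position reads three codons:
print(codons_read_by("IAU"))   # {'AUU', 'AUC', 'AUA'}  (isoleucine codons)
```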

Insuperable problems of the genetic code initially emerging in an RNA World 2018 February
The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being, but principles of self-organisation that transcend Darwinian natural selection furnish an unexpectedly robust basis for a rapid, concerted transition to genetic coding from a peptide RNA world. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 

Now observe the colorful just-so stories that the authors come up with to explain the inexplicable:
We can now understand how the self-organised state of coding can be approached “from below”, rather than thinking of molecular sequence computation as existing on the verge of a catastrophic fall over a cliff of errors. In GRT systems, an incremental improvement in the accuracy of translation produces replicase molecules that are more faithfully produced from the gene encoding them. This leads to an incremental improvement in information copying, in turn providing for the selection of narrower genetic quasispecies and an incrementally better encoding of the protein functionalities, promoting more accurate translation.

My comment: This is an entirely unwarranted claim. It is begging the question. There was no translation at this stage, since translation depends on a fully developed and formed genetic code.

The vicious circle can wind up rapidly from below as a selfamplifying process, rather than precipitously winding down the cliff from above. The balanced push-pull tension between these contradictory tendencies stably maintains the system near a tipping point, where, all else being equal, informational replication and translation remain impedance matched – that is, until the system falls into a new vortex of possibilities, such as that first enabled by the inherent incompleteness of the primordial coding “boot block”. Bootstrapped coded translation of genes is a natural feature of molecular processes unique to living systems. Organisms are the only products of nature known to operate an essentially computational system of symbolic information processing. In fact, it is difficult to envisage how alien products of nature found with a similar computational capability, which proved to be necessary for their existence, no matter how primitive, would fail classification as a form of “life”.

My comment: I would rather say, it is difficult to envisage how such a complex system could get "off the hook" by natural, unguided means.
http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC5895081&blobtype=pdf

The lack of foundation in the mechanism on which are based the physico-chemical theories for the origin of the genetic code counterposed to the credible and natural mechanism suggested by the coevolution theory, 1 April 2016
The majority of theories advanced for explaining the origin of the genetic code maintain that the physico-chemical properties of amino acids had a fundamental role to organize the structuring of the genetic code ... but this does not seem to have been the case. The physico-chemical properties of amino acids played only a subsidiary role in organizing the code, and were important only if understood as a manifestation of the catalysis performed by proteins. The mechanism on which the majority of theories based on the physico-chemical properties of amino acids rely is not credible, or at least not satisfactory.
https://sci-hub.ren/10.1016/j.jtbi.2016.04.005

There are enough data to refute the possibility that the genetic code was randomly constructed (“a frozen accident”). For example, the genetic code clusters certain amino acid assignments. Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. Amino acids with similar physical properties tend to have similar codons.

[If the genetic code could be fully explained by] either bottom-up processes (e.g. unknown chemical principles that make the code a necessity), or bottom-up constraints (i.e. a kind of selection process that occurred early in the evolution of life, and that favored the code we have now), then we can dispense with the code metaphor. The ultimate explanation for the code has nothing to do with choice or agency; it is ultimately the product of necessity.

In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today?
https://uncommondescent.com/intelligent-design/is-the-genetic-code-a-real-code/

Genetic code: Lucky chance or fundamental law of nature?
It becomes clear that the information code is intrinsically related to the physical laws of the universe, and thus life may be an inevitable outcome of our universe. The lack of success in explaining the origin of the code and of life itself in the last several decades suggests that we miss something very fundamental about life, possibly something fundamental about matter and the universe itself. Certainly, the advent of the genetic code was no “play of chance”.

Open questions:
1. Did the dialects, i.e., the mitochondrial version, with the UGA codon (the stop codon in the universal version) codifying tryptophan and the AUA codon (isoleucine in the universal version) codifying methionine, and Candida cylindracea (a fungus), with the CUG codon (leucine in the universal version) codifying serine, appear accidentally or as a result of some kind of selection process?
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure? 
4. Why is the genetic code not overlapping, that is, why does the translation apparatus of the cell read the transcribed information in discrete steps of three bases rather than one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids? 
6. Is the existing distribution of codon degeneracy for particular amino acids accidental or some kind of selection process? 
7. Why were only 20 canonical amino acids selected for protein synthesis? Is this very choice of amino acids accidental, or the result of some kind of selection process?
8. Why should there be a genetic code at all?
9. Why should there be the emergence of a stereochemical association for a specific, arbitrary codon-anticodon set?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?

The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes:
“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion, or a “good compromise”?
An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

That frozen accident means that good old luck would have hit the jackpot through trial and error among 1.5 × 10^84 possible genetic codes, a number larger than the number of atoms in the whole universe. That puts any real possibility of chance providing the feat out of the question. It is, using Borel's law, in the realm of impossibility. The maximum time available for it to originate was estimated at 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.

Arzamastsev AA. The nature of optimality of DNA code. Biophys. Russ. 1997;42:611–4.
“the situation when Nature invented the DNA code surprisingly resembles designing a computer by man. If a computer were designed today, the binary notation would be hardly used. Binary notation was chosen only at the first stage, for the purpose to simplify at most the construction of decoding machine. But now, it is too late to correct this mistake”.
https://www.webpages.uidaho.edu/~stevel/565/literature/Genetic%20code%20-%20Lucky%20chance%20or%20fundamental%20law%20of%20nature.pdf

Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation April 18, 2018
Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular to enzyme-less reproduction, proliferation and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code.

Carl Woese (1967) reproduced in the seminal paper of Yarus et al. cited frequently above; 
“I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon.” 
https://arxiv.org/pdf/1804.05939.pdf

The genetic code is one in a million
if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
http://www.ncbi.nlm.nih.gov/pubmed/9732450
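The "one in a million" figure comes from Monte Carlo experiments of roughly the following kind, sketched here in a deliberately simplified form in the spirit of the study linked above: no transition/transversion weighting, no mistranslation weighting, and approximate polar-requirement values, so the printed numbers are only illustrative and will not reproduce the published one-in-a-million result.

```python
# Sketch of a Monte Carlo comparison between the standard code and random
# alternative codes.  Cost = mean squared change in an approximate Woese
# polar-requirement value over all single-base substitutions between sense
# codons; random codes keep the standard block structure (which codons are
# synonymous, and where the stops are) but shuffle which amino acid each
# block encodes.
import random

BASES = "UCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STD = {b1 + b2 + b3: AAS[16 * i + 4 * j + k]
       for i, b1 in enumerate(BASES) for j, b2 in enumerate(BASES)
       for k, b3 in enumerate(BASES)}

POLAR_REQ = {  # approximate values, illustrative only
    'A': 7.0, 'R': 9.1, 'N': 10.0, 'D': 13.0, 'C': 4.8, 'Q': 8.6, 'E': 12.5,
    'G': 7.9, 'H': 8.4, 'I': 4.9, 'L': 4.9, 'K': 10.1, 'M': 5.3, 'F': 5.0,
    'P': 6.6, 'S': 7.5, 'T': 6.6, 'W': 5.2, 'Y': 5.4, 'V': 5.6}

def cost(code):
    """Average squared polar-requirement change caused by single-base errors."""
    diffs = []
    for codon, aa in code.items():
        if aa == '*':
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                nb = code[codon[:pos] + b + codon[pos + 1:]]
                if nb != '*':
                    diffs.append((POLAR_REQ[aa] - POLAR_REQ[nb]) ** 2)
    return sum(diffs) / len(diffs)

def random_code():
    """Reassign the 20 amino acids randomly to the 20 synonymous codon blocks."""
    shuffled = dict(zip(sorted(POLAR_REQ), random.sample(sorted(POLAR_REQ), 20)))
    return {c: aa if aa == '*' else shuffled[aa] for c, aa in STD.items()}

random.seed(1)
std_cost = cost(STD)
better = sum(cost(random_code()) < std_cost for _ in range(10_000))  # modest sample
print(f"standard-code cost: {std_cost:.2f}")
print(f"random codes doing better: {better} / 10000")
```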

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/?report=classic

The Genetic Code Part II: Not Mundane and Not Evolvable
https://www.youtube.com/watch?v=oQ9tAL2AM6M

Hidden code in the protein code
Different codons for the same amino acid may affect how quickly mRNA transcripts are translated, and this pace can influence post-translational modifications. Despite being highly homologous, the mammalian cytoskeletal proteins beta- and gamma-actin contain notably different post-translational modifications: though both proteins are post-translationally arginylated, only arginylated beta-actin persists in the cell. This difference is essential for each protein's function.

To investigate whether synonymous codons might have a role in how arginylated forms persist, Kashina and colleagues swapped the synonymous codons between the genes for beta- and gamma-actin and found that the patterns of post-translational modification switched as well. Next, they examined translation rates for the wild-type forms of each protein and found that gamma-actin accumulated more slowly. Computational analysis suggested that differences between the folded mRNA structures might cause differences in translation speed. When the researchers added an antibiotic that slowed down translation rates, accumulation of arginylated actin slowed dramatically. Subsequent work indicated that N-arginylated proteins may, if translated slowly, be subjected to ubiquitination, a post-translational modification that targets proteins for destruction.

Thus, these apparently synonymous codons can help explain why some arginylated proteins but not others accumulate in cells. “One of the bigger implications of our work is that post-translational modifications are actually encoded in the mRNA,” says Kashina. “Coding sequence can define a protein's translation rate, metabolic fate and post-translational regulation.”
https://www.nature.com/articles/nmeth1110-874

Determination of the Core of a Minimal Bacterial Gene Set
Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes (which code for 13 protein complexes). Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell. These protein complexes could not emerge through evolution (mutations and natural selection), because evolution depends on DNA replication, which requires precisely these original genes and proteins (a chicken-and-egg problem). So the only mechanisms left are chance and physical necessity.
http://mmbr.asm.org/content/68/3/518.full.pdf

On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization
The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1894784/

Literature from those who posture in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. By totaling those, we find that the maximum elementary particle events in 30 billion years could only be 10^143. Yet, the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins that average 347 amino acids in length. The odds against just one specified protein of that length are 1:10^451.
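For what it is worth, the arithmetic in the paragraph above is internally consistent, as the following quick check of the exponents shows (the premises themselves are taken as stated, not verified here).

```python
# Quick check of the arithmetic in the paragraph above (exponents only).
from math import log10

max_events_exp = 80 + 45 + 18          # particles x events/sec x seconds
protein_space_exp = 347 * log10(20)    # one specific 347-residue sequence
                                       # out of 20^347 possibilities
print(f"maximum elementary-particle events: 10^{max_events_exp}")   # 10^143
print(f"20^347 ≈ 10^{protein_space_exp:.0f}")                       # ≈ 10^451
```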

Problem no.1
The genetic code system (a language) must be created, and the universal code is nearly optimal and maximally efficient

http://www.ncbi.nlm.nih.gov/pubmed/8335231
The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics.

Origin and evolution of the genetic code: the universal enigma
In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/

The genetic code is one in a million
if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
http://www.ncbi.nlm.nih.gov/pubmed/9732450

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/?report=classic

Problem no.2
The origin of the information to make the first living cells must be explained.

Determination of the Core of a Minimal Bacterial Gene Set
Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes (which code for 13 protein complexes). Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell. These protein complexes could not emerge through evolution (mutations and natural selection), because evolution depends on DNA replication, which requires precisely these original genes and proteins (a chicken-and-egg problem). So the only mechanisms left are chance and physical necessity.
http://mmbr.asm.org/content/68/3/518.full.pdf

Literature from those who posture in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. By totaling those, we find that the maximum elementary particle events in 30 billion years could only be 10^143. Yet, the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins that average 347 amino acids in length. The odds against just one specified protein of that length are 1:10^451.

Paul Davies once said:
How did stupid atoms spontaneously write their own software … ? Nobody knows … … there is no known law of physics able to create information from nothing.

Problem no.3
The genetic cipher

On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization
The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1894784/

The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes: “The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom ?  If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have knowledge to arrive at a optimal conclusion, or a " good compromise" ?  

An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

That frozen accident means that good old luck would have hit the jackpot through trial and error amongst 1.5 × 10^84 possible genetic codes. That is more than the estimated number of atoms in the whole universe. That puts any real possibility of chance providing the feat out of the question. It is, by Borel's law, in the realm of impossibility. The maximum time available for it to originate was estimated at 6.3 × 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
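For readers wondering where a figure like 1.5 × 10^84 comes from, one common way of obtaining it is to count every possible assignment of the 64 codons to 21 "meanings" (the 20 amino acids plus stop), requiring that each meaning be used at least once. The following Python sketch assumes that counting convention (the text does not say which convention its source used) and reproduces a number of that order by inclusion-exclusion.

# Python sketch: counting codon-to-meaning assignments by inclusion-exclusion.
# Assumption: a "code" assigns each of the 64 codons one of 21 meanings
# (20 amino acids + stop), and every meaning is used at least once.
from math import comb

codons, meanings = 64, 21
codes = sum((-1)**k * comb(meanings, k) * (meanings - k)**codons for k in range(meanings + 1))
print(f"{codes:.2e}")   # on the order of 1.5e84, the figure quoted above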

To put it in other words: the task compares to inventing two languages, two alphabets, and a translation system, and having the information content of a book (for example Hamlet) written in English translated into Chinese by an extremely sophisticated hardware system. The conclusion that an intelligent designer had to set up the system does not rest on missing knowledge (an argument from ignorance). We know that minds do invent languages, codes, translation systems, ciphers, and complex, specified information all the time. The genetic code and its translation system are best explained through the action of an intelligent designer.




https://reasonandscience.catsboard.com

Otangelo


Admin
The attribution of the design has to be to God or to purely materialistic mechanisms. The gigantic pill to swallow in the second case is the fact that the output is the product of code, and that the molecular machinery needed to replicate the code (for inheritance/perpetuation), transcribe it, translate it into protein with many intermediate steps requiring highly specific operations, and to repair it in the foreseen event that it is damaged (to preserve/protect it) or destroy it in the event that it suffers irreparable damage (to forestall cancer), is just too big to swallow. DNA had an intentional purpose. That's the only reasonable conclusion I can come to.

Origin and evolution of the genetic code: the universal enigma
https://reasonandscience.catsboard.com/t2001-origin-and-evolution-of-the-genetic-code-the-universal-enigma

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
https://reasonandscience.catsboard.com/t1404-the-genetic-code-is-nearly-optimal-for-allowing-additional-information-within-protein-coding-sequences

The genetic code cannot arise through natural selection
https://reasonandscience.catsboard.com/t1405-the-genetic-code-cannot-arise-through-natural-selection

The origin of the genetic cipher, the most perplexing problem in biology
https://reasonandscience.catsboard.com/t2267-the-origin-of-the-genetic-cipher-the-most-perplexing-problem-in-biology


“Evolution Of Genetic Code” Article Illustrates Fundamental Problem
https://uncommondescent.com/evolution/evolution-of-genetic-code-article-illustrates-fundamental-problem/

Large Numbers Of Exceptions To The Canonical Genetic Code
https://uncommondescent.com/intelligent-design/large-numbers-of-exceptions-to-the-canonical-genetic-code/

In the Lewis Carroll classic Through the Looking Glass, Humpty Dumpty states, “When I use a word, it means just what I choose it to mean — neither more nor less.” In turn, Alice (of Wonderland fame) says, “The question is, whether you can make words mean so many different things.” All organisms on Earth use a genetic code, which is the language in which the building plans for proteins are specified in their DNA. It has long been assumed that there is only one such “canonical” code, so each word means the same thing to every organism. While a few examples of organisms deviating from this canonical code had been serendipitously discovered before, these were widely thought of as very rare evolutionary oddities, absent from most places on Earth and representing a tiny fraction of species. Now, this paradigm has been challenged by the discovery of large numbers of exceptions from the canonical genetic code, published by a team of researchers from the U.S. Department of Energy Joint Genome Institute (DOE JGI) in the May 23, 2014 edition of the journal Science.

It has been 60 years since the discovery of the structure of DNA and the emergence of the central dogma of molecular biology, wherein DNA serves as a template for RNA and these nucleotides form triplets of letters called codons. There are 64 codons, and all but three of these triplets encode actual amino acids — the building blocks of protein. The remaining three are “stop codons” that bring the molecular machinery to a halt, terminating the translation of RNA into protein. Each has a given name: Amber, Opal, and Ochre. When an organism’s machinery reads the instructions in the DNA, builds a protein composed of amino acids, and reaches Amber, Opal, or Ochre, this triplet signals that it has arrived at the end of a protein.




https://reasonandscience.catsboard.com

Otangelo


Admin
The hardware and software to make proteins: what mechanism best explains their origin?

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#7010

What is commonly discussed in theism vs. atheism debates is where the information stored in DNA comes from. That is an enigma which the biological sciences have never addressed in a convincing manner. And science generally does not go further than hypothetical guesswork.

But far more than just the origin of the message needs to be explained: so does the origin of the instructions, the code in which the message is written.

Shakespeare's Hamlet came undoubtedly from Shakespeare's mind.
But the alphabet he used to convey his story was pre-existent. He learned it and used it to write down his play.

But where the alphabet came from is an entirely different issue.
So it is in genetics. DNA uses a genetic code, which is composed of 64 entries: codons, each made of three nucleotide letters that together form one genetic "word". Each of these is ascribed to one of the twenty amino acids. Since there are only twenty amino acids used to make proteins, several different codons can mean the same amino acid, so there is a redundancy, which is very useful since it makes the system more robust and error-tolerant. During the transcription and translation process, errors are minimized.
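That redundancy can be made concrete with a short Python sketch. The 64-character string below encodes the standard codon table (codons ordered by first, second, and third base over U, C, A, G; '*' marks the three stop codons), and the script counts how many codons map to each amino acid.

# Python sketch: how the 64 codons distribute over the 20 amino acids (redundancy/degeneracy).
from collections import Counter

BASES = "UCAG"
# Standard genetic code, codons ordered UUU, UUC, UUA, UUG, UCU, ... GGG; '*' = stop.
TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codon_to_aa = {
    b1 + b2 + b3: TABLE[16 * i + 4 * j + k]
    for i, b1 in enumerate(BASES)
    for j, b2 in enumerate(BASES)
    for k, b3 in enumerate(BASES)
}

degeneracy = Counter(codon_to_aa.values())
for aa, n in sorted(degeneracy.items()):
    print(aa, n)   # e.g. L, S and R each have 6 codons, M and W only 1, and '*' (stop) has 3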

Science has discovered that the genetic code is better suited for its task than at least one million other possible codes.

And the amino acid selection is likewise well suited for the purpose of constructing molecular machines, enzymes, and proteins.

Now that is another unresolved question: how did the genetic code "alphabet" emerge on the prebiotic earth?

How were the 64 genetic codons ascribed to 20 amino acids?

These questions are among the most enigmatic in the biological sciences, and they remain without good answers.

Besides the above-mentioned problems, which can be considered software problems, there is also the question of how the hardware emerged.

In order for the translation of messenger RNA into amino acids to occur, adapter molecules, transfer RNAs (tRNAs), are required.

Transfer RNA, and its biogenesis
https://reasonandscience.catsboard.com/t2058-transfer-rna-and-its-biogenesis

tRNAs are very specific and complex molecules, and their biosynthesis follows several steps, requiring a significant number of proteins and enzymes, which are themselves also enormously complex, not only in their structure but also in their own biosynthesis. So the question in the end arises: did natural processes have the foresight of the end product, tRNA, to make these highly specific, nanorobot-like molecular machines which remove, add, and modify the nucleotides? If not, how could they have arisen, since, without the end goal, there would be no function for them? These enzymes are all specifically made for the production of tRNAs. And tRNA is essential for life.

Another essential, central player, which works in an interdependent manner:

Aminoacyl-tRNA synthetases.
https://reasonandscience.catsboard.com/t2280-aminoacyl-trna-synthetases

The synthetases have several active sites that enable them to:

(1) recognize a specific amino acid,
(2) recognize a specific corresponding tRNA(with a specific anticodon),
(3) react the amino acid with ATP (adenosine triphosphate) to form an AMP (adenosine monophosphate) derivative, and then, finally,
(4) link the specific tRNA molecule in question to its corresponding amino acid. Current research suggests that the synthetases recognize particular three-dimensional or chemical features (such as methylated bases) of the tRNA molecule. In virtue of the specificity of the features they must recognize, individual synthetases have highly distinctive shapes that derive from specifically arranged amino-acid sequences. In other words, the synthetases are themselves marvels of specificity. (A toy sketch of these four steps follows below.)
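Purely as an illustration of the order of operations just listed, here is a toy Python sketch. The names and structure are invented for illustration; this is not a chemical model of the enzyme, only the sequence of checks it enforces.

# Python sketch: a toy model of the four synthetase steps listed above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TransferRNA:
    anticodon: str
    amino_acid: Optional[str] = None   # empty acceptor end until the synthetase charges it

def aminoacylate(enzyme_amino_acid, enzyme_anticodon, amino_acid, trna):
    # (1) recognize the specific amino acid
    if amino_acid != enzyme_amino_acid:
        raise ValueError("wrong amino acid rejected")
    # (2) recognize the specific tRNA (here reduced to its anticodon; real synthetases
    #     read several identity elements of the tRNA)
    if trna.anticodon != enzyme_anticodon:
        raise ValueError("wrong tRNA rejected")
    # (3) activate the amino acid with ATP to form an aminoacyl-AMP intermediate (implicit here),
    # (4) then transfer the amino acid onto the tRNA's acceptor end
    trna.amino_acid = amino_acid
    return trna

# a leucyl-tRNA-synthetase-like check; CAA is an anticodon that pairs with the leucine codon UUG
print(aminoacylate("Leu", "CAA", "Leu", TransferRNA(anticodon="CAA")))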

And there is, of course, the Ribosome, a veritable ultracomplex factory making proteins:

Ribosomes amazing nanomachines
https://reasonandscience.catsboard.com/t1661-translation-through-ribosomes-amazing-nano-machines

* Each cell contains around 10 million ribosomes, i.e. 7000 ribosomes are produced in the nucleolus each minute.
* Each ribosome contains around 80 proteins, i.e. more than 0.5 million ribosomal proteins are synthesized in the cytoplasm per minute.
* The nuclear membrane contains approximately 5000 pores. Thus, more than 100 ribosomal proteins are imported from the cytoplasm to the nucleus per pore and minute. At the same time 3 ribosomal subunits are exported from the nucleus to the cytoplasm per pore and minute. (A quick arithmetic cross-check of these figures follows below.)
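Using only the numbers quoted in the list above, the figures hang together arithmetically; a quick Python check:

# Python sketch: consistency check of the ribosome production figures quoted above.
ribosomes_per_minute = 7000      # ribosomes produced in the nucleolus per minute
proteins_per_ribosome = 80       # ribosomal proteins per ribosome
nuclear_pores = 5000             # pores in the nuclear membrane

proteins_per_minute = ribosomes_per_minute * proteins_per_ribosome
print(proteins_per_minute)                        # 560000, i.e. "more than 0.5 million" per minute
print(proteins_per_minute / nuclear_pores)        # 112, i.e. more than 100 proteins imported per pore per minute
print(ribosomes_per_minute * 2 / nuclear_pores)   # 2.8, i.e. roughly 3 subunits exported per pore per minute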

But these are just a few of the many players essential to make proteins:

The interdependent and irreducible structures required to make proteins
https://reasonandscience.catsboard.com/t2039-the-interdependent-and-irreducible-structures-required-to-make-proteins

https://reasonandscience.catsboard.com

Otangelo


Admin
Many atheists demonstrate a faulty understanding of how things in nature work.

Repeatedly, I have heard atheists say: the origin of life depends just on chemicals; it is basically chemical reactions that over time increased complexity. That is a foolish simplification. Life depends on three basic things which are essential: energy, matter, and information. While atheists are used to thinking that we are the simpletons, I regard it as more and more important to break down what happens in nature into analogies and a language that everyone can understand, in order to explain concepts that are implemented in such a complex manner that science is still far from fully understanding and describing what we see and observe in the natural world.

One common misconception is that natural principles are just discovered and described by us. Consider two cans of Coca-Cola: one with sugar, the other diet. Both bear information that we can describe; the information transmitted to us is that one can contains regular Coca-Cola and the other is diet. But that does not occur naturally. A chemist invented the formula for Coke and for Diet Coke, and that depends not on descriptive but on PREscriptive information.

The same occurs in nature. We discover that DNA contains a genetic code. But the rules upon which the genetic code operates are PREscriptive. The rules are arbitrary. The genetic code is CONSTRAINED to behave in a certain way. Chemical principles govern specific RNA interactions with amino acids. But principles that govern have to be set by? - yes, precisely what atheists try to avoid at any cost: INTELLIGENCE. There is no physical necessity that the triple nucleotides forming the codon CUU (cytosine, uracil, uracil) are assigned to the amino acid leucine. Intelligence assigns and sets rules.

For translation, each of these codons requires a tRNA molecule that has an anticodon with which it can stably base-pair with the messenger RNA (mRNA) codon, like lock and key. So on one side of the tRNA there is the anticodon sequence that matches CUU, and on the other side of the tRNA molecule there is a site to insert the assigned amino acid, leucine. And here comes the BIG question: how was that assignment set up? How did it come to be that a tRNA carrying the anticodon for CUU is matched with leucine? The two binding sites are distant from one another; there is no chemical reaction physically constraining that order or relationship. That is a BIG mystery, which science is attempting to explain by natural, unguided mechanisms, but without success. Here we have the CLEAR imprint of an intelligent mind that was necessary to set these rules. That led Eugene Koonin to confess in the paper "Origin and evolution of the genetic code: the universal enigma": It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

In the genetic code, there are 4^3 = 64 possible codons (tri-nucleotide sequences). Atheists also mock and claim that it is not justified to describe the genetic code as a language. But that is also not true. In the standard genetic code, three of these 64 mRNA codons (UAA, UAG, and UGA) are stop codons. These terminate translation by binding release factors rather than tRNA molecules; they instruct the ribosome to stop the polymerization of a given amino acid strand. Did unguided natural occurrences, in the vast sequence space of possibilities, find by a lucky accident the fact that an amino acid polymer forming a protein requires a defined, limited length, which has to be INSTRUCTED by the genetic instructions, and for that reason assign release factors rather than amino acids to specific codon sequences, in order to be able to instruct the termination of an amino acid string? That makes, frankly, no sense whatsoever. Not only that: this characterizes factually that the genetic code IS a language. That is described in the science paper "The genetic language: grammar, semantics, evolution" 2: "The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics."
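To make the termination point concrete, here is a minimal Python sketch of what a stop codon means operationally. The mRNA string and the tiny dictionary are illustrative only; the dictionary contains a handful of real assignments, not the full table.

# Python sketch: translation halts when a stop codon (UAA, UAG or UGA) is read.
MINI_CODE = {
    "AUG": "Met", "CUU": "Leu", "GGC": "Gly", "UGU": "Cys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):     # read the message three bases at a time
        meaning = MINI_CODE[mrna[i:i + 3]]
        if meaning == "STOP":                # in the real system, release factors act here
            break
        peptide.append(meaning)
    return peptide

print(translate("AUGCUUGGCUGUUAACUU"))       # ['Met', 'Leu', 'Gly', 'Cys'] -- nothing after the UAA stop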

Since there are 64 possible codons, logically there should be 64 tRNAs, but in most organisms there are just 45. Some tRNA species can pair with multiple synonymous codons, all of which encode the same amino acid. Movement ("wobble") of the base in the 5' anticodon position is necessary for the small conformational adjustments that affect the overall pairing geometry of tRNA anticodons.

These notions led Francis Crick to the creation of the wobble hypothesis, a set of four relationships explaining these naturally occurring attributes.

1. The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
2. When reading 5' to 3' the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many nucleotides the tRNA actually distinguishes.
If the first nucleotide in the anticodon is a C or an A, pairing is specific and acknowledges original Watson-Crick pairing, that is: only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific and in fact, two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble, in that if that is the first nucleotide in the anticodon then any of three bases in the original codon can be matched with the tRNA.
3. Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons and those anticodons differ in either the second or third position (first or second position in the codon) then a different tRNA is required for that anticodon.
4. The minimum requirement to satisfy all possible codons (61, excluding the three stop codons) is 32 tRNAs: that is, 31 tRNAs for the amino acids and one for the initiation codon. (A small sketch of these pairing rules follows below.)
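A minimal Python sketch of these pairing rules (standard Crick wobble, with inosine included; purely illustrative), showing how a single tRNA can read more than one codon:

# Python sketch: Crick wobble pairing between the first (5') anticodon base and the third (3') codon base.
WOBBLE = {              # anticodon 5' base -> codon 3' bases it can pair with
    "C": {"G"},
    "A": {"U"},
    "G": {"C", "U"},
    "U": {"A", "G"},
    "I": {"A", "C", "U"},   # inosine, a modified base found at the wobble position
}

def complement(base):
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def anticodon_reads(anticodon, codon):
    # codon positions 1 and 2 must pair strictly (Watson-Crick) with anticodon positions 3 and 2
    strict = all(anticodon[2 - i] == complement(codon[i]) for i in range(2))
    wobble = codon[2] in WOBBLE[anticodon[0]]
    return strict and wobble

# one tRNA with anticodon GAA reads both phenylalanine codons, UUU and UUC,
# but not the leucine codon UUA, which needs a different tRNA
print(anticodon_reads("GAA", "UUU"), anticodon_reads("GAA", "UUC"), anticodon_reads("GAA", "UUA"))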

Aside from the obvious necessity of wobble (our cells have a limited number of tRNA species, and wobble allows them to cover all the codons), wobble base pairs have been shown to facilitate many biological functions. This has another AMAZING implication, which points to an intelligent setup. The science paper "The genetic code is one in a million" confesses:
If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
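Here is a stripped-down Python sketch of the kind of shuffled-code comparison the cited study performs. It uses a much cruder yardstick than the paper (it only asks how often a single-nucleotide change leaves the amino acid assignment unchanged, rather than weighting changes by chemical similarity), and it scrambles assignments at the level of individual codons, so it illustrates the methodology rather than reproducing the one-in-a-million figure.

# Python sketch: compare the standard code with random reassignments of the same 64 meanings.
# Metric: fraction of all single-nucleotide changes that are synonymous (same amino acid).
import random

BASES = "UCAG"
TABLE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"  # standard code, '*' = stop
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]

def synonymous_fraction(assignment):
    same = total = 0
    for codon in CODONS:
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                total += 1
                same += assignment[codon] == assignment[mutant]
    return same / total

standard = dict(zip(CODONS, TABLE))
print("standard code:", round(synonymous_fraction(standard), 3))

random.seed(1)
scores = []
for _ in range(1000):                          # 1000 random scrambles of the same 64 meanings
    shuffled = list(TABLE)
    random.shuffle(shuffled)
    scores.append(synonymous_fraction(dict(zip(CODONS, shuffled))))
print("best of 1000 random scrambles:", round(max(scores), 3))

Even on this crude measure, the canonical block structure scores far above random scrambles of the same 64 meanings; the one-in-a-million result quoted above comes from the finer, chemically weighted comparison described in the paper.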

This all screams DESIGN!! But rather than science giving honor to God, scientists are obliged to confess ignorance, because pointing to God is not considered science, but religion.

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#7855


1. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/
2. http://www.ncbi.nlm.nih.gov/pubmed/8335231
3. https://en.wikipedia.org/wiki/Wobble_base_pair#:~:text=A%20wobble%20base%20pair%20is,hypoxanthine%2Dcytosine%20(I%2DC).
4. http://www.ncbi.nlm.nih.gov/pubmed/9732450

https://reasonandscience.catsboard.com

Otangelo


Admin
The arbitrariness of the genetic code
The genetic code has been regarded as arbitrary in the sense that the codon-amino acid assignments could be different than they actually are. This general idea has been spelled out differently by previous, often rather implicit accounts of arbitrariness. They have drawn on the frozen accident theory, on evolutionary contingency, on alternative causal pathways, and on the absence of direct stereochemical interactions between codons and amino acids. It has also been suggested that the arbitrariness of the genetic code justifies attributing semantic information to macromolecules, notably to DNA. I argue that these accounts of arbitrariness are unsatisfactory. I propose that the code is arbitrary in the sense of Jacques Monod’s concept of chemical arbitrariness: the genetic code is arbitrary in that any codon requires certain chemical and structural properties to specify a particular amino acid, but these properties are not required in virtue of a principle of chemistry. I maintain that the code’s chemical arbitrariness is neither sufficient nor necessary for attributing semantic information to nucleic acids.

In data processing, information systems that minimize and control errors are essential. Engineers, for example, work diligently to protect the integrity of data processed by the various terrestrial and satellite communications systems in place today. These systems and their associated machines enable reliable communications on a truly global scale. Advanced coding techniques have been developed to obtain reliable information processing. These techniques play an important role in maintaining the high reliability of data in spite of the many error-inducing characteristics of a typical communications system.

Several man-made coding techniques and information processing systems find their analogues in biochemistry and biological information processing. Robust information transmission and minimization of mutational errors are life-essential. Redundancy is the most basic property of any error-correction scheme, and remarkably, it exists within the genetic system. All methods of error-control coding are based on adding redundancy to the transmitted information. As the genetic information is redundant, and since the genetic code is also redundant itself, the existence of error-control mechanisms ensures a high degree of reliability in the transmission and expression of genetic information.
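As an engineering analogy only (the cell does not literally use a repetition code), the simplest redundancy-based error-correction scheme looks like this in Python: every bit is transmitted three times, and a majority vote on the receiving end corrects any single flipped bit.

# Python sketch: the simplest redundancy-based error correction, a 3x repetition code with majority vote.
def encode(bits):
    return "".join(b * 3 for b in bits)            # every bit is sent three times

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append("1" if triple.count("1") >= 2 else "0")   # majority vote corrects a single flip
    return "".join(out)

sent = encode("1011")            # '111000111111'
corrupted = sent[:4] + ("0" if sent[4] == "1" else "1") + sent[5:]   # flip one transmitted bit
print(decode(corrupted))         # '1011' -- the message survives the error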

1. https://sci-hub.ren/10.1023/b:biph.0000024412.82219.a6




https://reasonandscience.catsboard.com

Otangelo


Admin
More Non-Random DNA Wonders

(1)  The codon bases have a non-random correlation with the kind of amino acids which they code for.  The first of the three letters relates to the kind of amino acid the codon stands for, giving the language a consistent meaning.

This undoubtedly helps the error checking machinery just as the quickest kind of computer program to debug is one in which variable names include a consistent reference at the start classifying it as holding a date, integer, text, array, database value, or whatever.  Without this, you can’t debug it at a rapid pace because every variable name needs to be consciously checked.

Somehow it seems the codon table uses the rules of good programming:

[Figure: table of the 64 codons and their amino acid assignments]

(2) The effect of mistranslations is called the “load on the code”.  It is minimised by its current arrangement to such an extent that only 3 in 100,000 other possible mappings might have a safer error rate, depending on their deleterious effect on the overall DNA function, as a single change in the codon mapping would cause huge atomic changes throughout the length of the three billion base pair system.  Any changes to the mapping would need modifications to all the interpretation and duplication machinery, which seems geared up for this specific arrangement.

But this 3 in 100,000 statistic assumes all 100,000 alternatives already have the advantage of the type-significant first letter of the codon (detailed in (1) above).  Therefore if one were to include all the truly random arrangements – where the first letter was not weighted towards codon relevance – the disproportion would be vastly greater.

To give you an idea of how much greater, the number of completely random alternatives would be 64 × 63 × 62 … × 45, which, if we forget about start and stop codons, I work out as 47,732,870,256,486,900,000,000,000,000,000,000, or more than 47 billion trillion trillion. Subtract 36,267,774,588,438,900,000, representing the 3 results per 100,000 estimated as being more fault-tolerant than the current design, and you get a worse-performing set of 47 × 10^33.
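The product itself can be checked directly; a short Python sketch of the 64 × 63 × … × 45 figure quoted above:

# Python sketch: ordered assignments of 20 distinct codons drawn from 64,
# i.e. the product 64 * 63 * ... * 45 used in the paragraph above.
from math import perm

alternatives = perm(64, 20)    # 64 * 63 * ... * 45
print(f"{alternatives:,}")     # 47,732,870,256,486,9... -- the "more than 47 billion trillion trillion"
print(f"{alternatives:.1e}")   # about 4.8e+34, i.e. on the order of 47 x 10^33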

This means if you had a trillion planets around every star in the Universe, you could try a different arrangement on every planet, and have one chance – over all of these attempts combined – to get a more fault-tolerant system.

Or, you could try out a different mapping system on a different planet circling every known star in the Universe, each and every day for 3 billion years and stand a chance of getting a better mapping only once.  And for each test version, every day you’d need to create a complex life-form from scratch, and subject it to every imaginable adverse circumstance – seasonal, predatory, infectious, organ failure, sensory development etc., and gauge its reproductive success in only 24 hours before throwing it out and organising a new one.

Any rival mapping system would need to be evaluated regarding its effect on the speed of protein assembly, or the combined molecular effect of the billions of changes throughout the length of the chromosomes. All things considered, the system we have now must be hugely error-tolerant to allow life forms to remain unchanged for up to 400 million years, during all of which time the codon mapping had to remain constant.

(3) The coding system is given further weight by the discovery that within the ribosome, anticodons are enriched near the areas relative to their function, to a level such that the probability of this being a random setup is minuscule.  Not minuscule the way a likelihood of 6.9 is a very small step away from an impossibility level of 7, but less than .0000000000000000001 % or less than one millionth of a trillionth.

In other words, the ribosome behaves as if it’s already geared up and ready to work with the existing code – and is assumed to be one of the most ancient parts of the whole DNA engine.

https://reasonandscience.catsboard.com

Otangelo


Admin
We now come to the central question: how did specific associations between amino acids and nucleotides originate? It is clear that no crude picture of the process will work. Problems remain. Perhaps the most serious is the size problem: a messenger is considerably longer than a ribozyme, and protein enzymes are much longer than the short peptides that could be formed by using a ribozyme as a 'message'. The solution is not clear. Suppose that, in any cell, translation errors lead to the production of malfunctioning proteins. This represents a loss of efficiency, but not a fatal one. Suppose, however, that some malfunctional proteins are themselves used in translation; for example, they are assignment catalysts. Then a single error in one round of protein synthesis could cause several errors in the next round. If so, there would be an exponential increase in the frequency of errors: an error catastrophe.

https://reasonandscience.catsboard.com

Otangelo


Admin
1: RNA Building Blocks Are Hard to Synthesize and Easy to Destroy
2: Ribozymes Are Poor Substitutes for Proteins
3: An RNA-based Translation and Coding System Is Implausible
4: The RNA World Doesn’t Explain the Origin of Genetic Information

To claim that deterministic chemical affinities explain the origin of the genetic code lacks empirical foundation. In order for the translation system to be operational, and for the genetic code to bear any function, the whole suite of translation machinery described below would already have to be in place.

The discovery of thirty-one variant genetic codes in mitochondria and in a plethora of prokaryotes indicates that the chemical properties of the relevant monomers allow more than a single set of codon–amino acid assignments. That means: the chemical properties of amino acids and nucleotides do not determine a single universal genetic code; since there is not just one code, "it" cannot be inevitable.

DNA’s capacity to convey information actually requires freedom from chemical determinism or constraint, in particular in the arrangement of the nucleotide bases. If the bonding properties of nucleotides determined their arrangement, the capacity of DNA to convey information would be destroyed. In that case, the bonding properties of each nucleotide would determine each subsequent nucleotide and thus, in turn, the sequence of the molecular chain. Under these conditions, a rigidly ordered pattern would emerge, as required by the bonding properties, and then repeat endlessly, forming something like a crystal. If DNA manifested such redundancy, it would be impossible for it to store or convey function-bearing information. Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page.

There are no differential bonding affinities between oligonucleotides. There is not just an absence of differing bonding affinities; there are no bonds at all between the critical information-bearing bases in DNA. There are neither bonds nor bonding affinities—differing in strength or otherwise—that can explain the origin of the base sequencing that constitutes the information in the DNA molecule. Differing chemical attractions between nucleotide bases do not exist within the DNA molecule. All four bases are acceptable; none is chemically favored. This means there is nothing about either the backbone of the molecule or the way any of the four bases attach to it that makes any sequence more likely to form than another.

There are no significant differential affinities between any of the four bases and the binding sites along the sugar-phosphate backbone. The properties of nucleic acids indicate that all the combinatorially possible nucleotide patterns of a DNA are, from a chemical point of view, equivalent. Two features of DNA ensure that "self-organizing" bonding affinities cannot explain the specific arrangement of nucleotide bases in the molecule:
(1) there are no bonds between bases along the information-bearing axis of the molecule and
(2) there are no differential affinities between the backbone and the specific bases that could account for variations in sequence.

The ribosome and its two subunits, the over 200 assembly and scaffold proteins for the biogenesis of the ribosome, the initiation, elongation, and release factors, the signal recognition particle, the error check and repair machinery to ensure minimization of translation errors, the matching pool of tRNAs, the aminoacyl-tRNA synthetases, the mRNAs, and all twenty amino acids used in proteins would have to arise together.

Producing the molecular complexes necessary for translation requires coupling multiple tricks—multiple crucial reactions—in a closely integrated (and virtually simultaneous) way. True enzyme catalysts do this. RNA and small-molecule catalysts do not.



Signature in the Cell, Stephen Meyer, page 18:
As the information theorist Hubert Yockey observes, the “genetic code is constructed to confront and solve the problems of communication and recording by the same principles found…in modern communication and computer codes.”

There is no physical reason why any particular codon should be paired up with any specific amino acid. Any codon could have been assigned to any amino acid, since there are no direct physical interactions between them:
Chemical affinities between nucleotide codons and amino acids do not determine the correspondences between codons and amino acids that define the genetic code. From the standpoint of the properties of the constituents that comprise the code, the code is physically and chemically arbitrary. All possible codes are equally likely; none is favored chemically. . . . To claim that deterministic chemical affinities made the origin of this system inevitable lacks empirical foundation.

If there is no direct chemical interaction between the codon and the binding site of the amino acid on the tRNA, but there is an intermediate space, then there is no evidence that chemical interactions could have selected the assignment based on chemical affinities. Rather, this state of affairs is evidence that the "genetic code" is in fact a genuine, arbitrary code, such as a designer would create from scratch.

https://reasonandscience.catsboard.com

Otangelo


Admin
The Genetic Code was most likely implemented by intelligence.

1. In communications and information processing, a code is a system of rules to convert information, such as a letter or a word, into another form (as another word, letter, etc.).
2. In translation, 64 genetic codons are assigned to 20 amino acids. The genetic code refers to this assignment of the codons to the amino acids, and is thus the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, mapping, permuting.
4. The universal triple-nucleotide genetic code can be either (a) the result of random selection through evolution, or (b) the result of intelligent implementation.
5. We know by experience that performing value assignment and codification is always a process of intelligence with an intended result. Non-intelligence, aka matter, molecules, nucleotides, etc., has never been demonstrated to be able to generate codes, and has neither intent nor distant goals with the foresight to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.

The codon bases have a non-random correlation with the kind of amino acids which they code for. The first of the three letters relates to the kind of amino acid the codon stands for, giving the language a consistent meaning.

Any information stored in our genes is useless without its correct interpretation. The genetic code defines the rule set to decode this information.
https://www.nature.com/articles/s41598-020-69100-0

The order of the three input bases is arbitrary and interchangeable (i.e. the model does not include uneven distribution of assignment uncertainty due to a third base ‘wobble’). There is no codon ambiguity; each codon maps uniquely to one amino acid. To create signal-meaning pairs, for each selected amino acid to be transferred we had to determine its codon assignment according to the donor’s code.
https://www.nature.com/articles/s41598-018-21973-y

The Genetic Code (B): Basic Features and Codon Assignments
The assignment of codons to different amino acids was essentially completed by applying the trinucleotide binding technique discovered by Nirenberg and Leder to all the 64 possible synthetic ribotrinucleotides.
https://www.worldscientific.com/doi/abs/10.1142/9789812813626_0008

A survey of codon assignments for 20 amino acids.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC219908/

In translation, 64 genetic codons are ascribed to 20 amino acids

In the standard genetic code table, of the 64 triplets or codons, 61 codons correspond to the 20 amino acids
https://www.dovepress.com/synonymous-codons-influencing-gene-expression-in-organisms-peer-reviewed-fulltext-article-RRBC

The Universal Genetic Code and Non-Canonical Variants
Genetic code refers to the assignment of the codons to the amino acids, thus being the cornerstone template underling the translation process.
https://www.sciencedirect.com/topics/neuroscience/genetic-code

A new integrated symmetrical table for genetic codes
For the formation of proteins in living organism cells, it is found that each amino acid can be specified by either a minimum of one codon or up to a maximum of six possible codons. In other words, different codons specify the different number of amino acids. A table for genetic codes is a representation of translation for illustrating the different amino acids with their respectively specifying codons, that is, a set of rules by which information encoded in genetic material (RNA sequences) is translated into proteins (amino acid sequences) by living cells.  There are a total of 64 possible codons, but there are only 20 amino acids specified by them.
https://arxiv.org/ftp/arxiv/papers/1703/1703.03787.pdf

A specification often refers to a set of documented requirements to be satisfied by a material, design, product, or service. A specification is often a type of technical standard.
https://en.wikipedia.org/wiki/Specification_(technical_standard)

code is a set of rules that serve as generally accepted guidelines recommended for the industry to follow.
https://blog.nvent.com/erico-what-is-the-difference-between-a-code-standard-regulation-and-specification-in-the-electrical-industry/

Harper's illustrated Biochemistry 3th edition page 54
While the three letter genetic code could potentially accommodate more than 20 amino acids, the genetic code is redundant since several amino acids are specified by multiple codons.

Biology, Brooker 4th ed. page 243
The Genetic Code Specifies the Amino Acids
The sequence of three bases in most codons specifies a particular amino acid. For example, the codon CCC specifies the amino acid proline, whereas the codon GGC encodes the amino acid glycine.

Genomics: Evolution of the Genetic Code
The code is actually closer to a cipher than a code and individual species do not have a unique genetic code to be cracked; indeed one of the interesting characteristics of the code is that nearly all life shares exactly the same one, once called the ‘universal genetic code’
https://sci-hub.st/https://www.sciencedirect.com/science/article/pii/S0960982216309174
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message
https://en.wikipedia.org/wiki/Cipher

1. https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/genetic-code#:~:text=Abstract-,Genetic%20code%20refers%20to%20the%20assignment%20of%20the%20codons%20to,or%20%E2%80%9Ccanonical%E2%80%9D%20genetic%20code.




https://reasonandscience.catsboard.com

Otangelo


Admin
[Figure: amino acid polar requirement values plotted on the codon table]

Simply plotting these numbers on a codon table reveals the existence of a remarkable degree of order, much of which would be unexpected on the basis of amino acid properties as normally understood. For example, codons of the form NUN define a set of five amino acids, all of which have very similar polar requirements. Likewise, the set of amino acids defined by the NCN codons all have nearly the same unique polar requirement. The codon couplets CAY-CAR, AAY-AAR, and GAY-GAR each define a pair of amino acids (histidine-glutamine, asparagine-lysine, and aspartic acid-glutamic acid, respectively) that has a unique polar requirement. Only for the last of these (aspartic and glutamic acids), however, would the two amino acids be judged highly similar by more conventional criteria. Perhaps the most remarkable thing about polar requirement is that although it is only a unidimensional characterization of the amino acids, it still seems to capture the essence of the way in which amino acids, all of which are capable of reacting in varied ways with their surroundings, are related in the context of the genetic code. Also of note is the fact that the context in which polar requirement is defined, i.e., the interaction of amino acids with heterocyclic aromatic compounds in an aqueous environment, is more suggestive of a similarity in the way amino acids might interact with nucleic acids than of any similarity in the way they would behave in a proteinaceous environment

While it must be admitted that the evolutionary relationships among the AARSs bear some resemblance to the related amino acid order of the code, it seems unlikely that they are responsible for that order: the evolutionary wanderings of these enzymes alone simply could not produce a code so highly ordered, in both degree and kind, as we now know the genetic code to be.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC98992/




https://reasonandscience.catsboard.com

Otangelo


Admin



https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#8384

Special Issue "The Origin of the Genetic Code" 30 September 2020
https://www.mdpi.com/journal/ijms/special_issues/origin_genetic_code
The genetic code is the fundamental set of rules for decoding genetic information into proteins, with the 64 base triplets specifying amino acids and stop codons. However, the origin of the genetic code remains a mystery, despite numerous theories.

“A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. … there is no known law of nature and no known sequence of events that can cause the information to originate by itself in matter.”
Werner Gitt, In The Beginning Was Information (1997), pp. 64-67, 79, 107.
(The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)

“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
Donald E. Johnson – Bioinformatics: The Information in Life

“The genetic code’s error-minimization properties are far more dramatic than these (one in a million) results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means of 10^18 codes few, if any have an error-minimization capacity that approaches the code found universally throughout nature.”
Fazale Rana – From page 175; ‘The Cell’s Design’

Barbieri: Code Biology:
"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
https://www.sciencedirect.com/journal/biosystems/vol/164/suppl/C

[Comment on other biological codes]: "In signal transduction, in short, we find all the essential components of a code: (a) two independents worlds of molecules (first messengers and second messengers), (b) a set of adaptors that create a mapping between them, and (c) the proof that the mapping is arbitrary because its rules can be changed in many different ways."

Why should or would molecules designate, assign, dictate, ascribe, correspond, correlate, or specify anything at all? This is not an argument from incredulity. The proposition defies reasonable principles and the known, limited, and unspecific range of chance, physical necessity, mutations, and natural selection. What we need is a *plausible* account of how it came to be in the first place.
It is in ANY scenario a far stretch to believe that unguided random events would produce a functional code system and arbitrary assignments of meaning. That's simply putting far too much faith into what molecules on their own are capable of doing.

RNAs (if they were extant prebiotically at all) would just lie around and then disintegrate in a short period of time. Even setting aside the fact that the prebiotic synthesis of RNAs HAS NEVER BEEN DEMONSTRATED IN THE LAB, they would not polymerize. Clay experiments have failed. Systems, given energy and left to themselves, DEVOLVE to give uselessly complex mixtures, "asphalts". The literature reports (to our knowledge) exactly ZERO CONFIRMED OBSERVATIONS where molecular complexification emerged spontaneously from a pool of random chemicals. It is IMPOSSIBLE for any non-living chemical system to escape devolution and enter the world of the "living".

The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases 28 July 2020
One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated
https://www.nature.com/articles/s41598-020-69100-0

Genomics: Evolution of the Genetic Code 
https://www.sciencedirect.com/science/article/pii/S0960982216309174

Understanding how this code originated and how it affects the molecular biology and evolution of life today are challenging problems, in part because it is so highly conserved — without variation to observe it is difficult to dissect the functional implications of different aspects of a character. 

Code Biology 
http://www.codebiology.org/

"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
My comment:  Without stop codons, the translation machinery would not know where to end the protein synthesis, and there could/would never be functional proteins, and no life on earth. At all.

Origin and evolution of the genetic code: the universal enigma 2009 Feb Eugene V. Koonin
In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

Many of the same codons are reassigned (compared to the standard code) in independent lineages (e.g., the most frequent change is the reassignment of the stop codon UGA to tryptophan); this conclusion implies that there should be a predisposition towards certain changes. At least one of these changes was reported to confer a selective advantage.

The origin of the genetic code is acknowledged to be a major hurdle in the origin of life, and I shall mention just one or two of the main problems. Calling it a ‘code’ can be misleading because of associating it with humanly invented codes which at their core usually involve some sort of pre-conceived algorithm; whereas the genetic code is implemented entirely mechanistically – through the action of biological macromolecules. This emphasises that, to have arisen naturally – e.g. through random mutation and natural selection – no forethought is allowed: all of the components would need to have arisen in an opportunistic manner.

Crucial role of the tRNA activating enzymes
 https://evolutionunderthemicroscope.com/ool02.html
To try to explain the source of the code various researchers have sought some sort of chemical affinity between amino acids and their corresponding codons. But this approach is misguided:
First of all, the code is mediated by tRNAs which carry the anti-codon (in the mRNA) rather than the codon itself (in the DNA). So, if the code were based on affinities between amino acids and anti-codons, it implies that the process of translation via transcription cannot have arisen as a second stage or improvement on a simpler direct system - the complex two-step process would need to have arisen right from the start.
Second, the amino acid has no role in identifying the tRNA or the codon (This can be seen from an experiment in which the amino acid cysteine was bound to its appropriate tRNA in the normal way – using the relevant activating enzyme, and then it was chemically modified to alanine. When the altered aminoacyl-tRNA was used in an in vitro protein synthesizing system (including mRNA, ribosomes etc.), the resulting polypeptide contained alanine (instead of the usual cysteine) corresponding to wherever the codon UGU occurred in the mRNA. This clearly shows that it is the tRNA alone (with no role for the amino acid) with its appropriate anticodon that matches the codon on the mRNA.). This association is done by an activating enzyme (aminoacyl tRNA synthetase) which attaches each amino acid to its appropriate tRNA (clearly requiring this enzyme to correctly identify both components). There are 20 different activating enzymes - one for each type of amino acid.
Interestingly, the end of the tRNA to which the amino acid attaches has the same nucleotide sequence for all amino acids - which constitutes a third reason. 
Third:  Interest in the genetic code tends to focus on the role of the tRNAs, but as just indicated that is only one half of implementing the code. Just as important as the codon-anticodon pairing (between mRNA and tRNA) is the ability of each activating enzyme to bring together an amino acid with its appropriate tRNA. It is evident that implementation of the code requires two sets of intermediary molecules: the tRNAs which interact with the ribosomes and recognise the appropriate codon on mRNA, and the activating enzymes which attach the right amino acid to its tRNA. This is the sort of complexity that pervades biological systems, and which poses such a formidable challenge to an evolutionary explanation for its origin. It would be improbable enough if the code were implemented by only the tRNAs which have 70 to 80 nucleotides; but the equally crucial and complementary role of the activating enzymes, which are hundreds of amino acids long, excludes any realistic possibility that this sort of arrangement could have arisen opportunistically.

Progressive development of the genetic code is not realistic
In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary, manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.

1. First, it would seem that the early codons need have used only two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognizing this difficulty, most researchers assume that the code used 3-base codons from the outset; which was remarkably fortuitous or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
2. Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
3. Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
4. Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesized and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!
Comparison of translation loads for standard and alternative genetic codes   2010 Jun 14
The origin and universality of the genetic code is one of the biggest enigmas in biology. Soon after the genetic code of Escherichia coli was deciphered, it was realized that this specific code, out of more than 10^84 possible codes, is shared by all studied life forms (albeit sometimes with minor modifications). The question of how this specific code appeared and which physical or chemical constraints and evolutionary forces have shaped its highly non-random codon assignment is the subject of an intense debate. In particular, the feature that codons differing by a single nucleotide usually code for either the same or a chemically very similar amino acid, and the associated block structure of the assignments, is thought to be a necessary condition for the robustness of the genetic code both against mutations as well as against errors in translation.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909233/

Was Wright Right? The Canonical Genetic Code is an Empirical Example of an Adaptive Peak in Nature; Deviant Genetic Codes Evolved Using Adaptive Bridges
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2924497/
The error minimization hypothesis postulates that the canonical genetic code evolved as a result of selection to minimize the phenotypic effects of point mutations and errors in translation. 
My comment: How can the authors claim that there was already translation, if translation depends on the genetic code already being set up?
It is likely that the code in its early evolution had few or even a minimal number of tRNAs that decoded multiple codons through wobble pairing, with more amino acids and tRNAs being added as the code evolved.
My comment: Why do the authors claim that the genetic code emerged based on evolutionary selective pressures, if at this stage there was no evolution AT ALL? Evolution starts with DNA replication, which DEPENDS on translation being already fully set up. Also, the origin of tRNAs is a huge problem for proponents of abiogenesis, because they are highly specific, and their biosynthesis in modern cells is a highly complex, multistep process requiring many complex enzymes.

Insuperable problems of the genetic code initially emerging in an RNA World 2018 February
The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being, but principles of self-organisation that transcend Darwinian natural selection furnish an unexpectedly robust basis for a rapid, concerted transition to genetic coding from a peptide RNA world. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 

Now observe the colorful just-so stories that the authors come up with to explain the inexplicable:
We can now understand how the self-organised state of coding can be approached “from below”, rather than thinking of molecular sequence computation as existing on the verge of a catastrophic fall over a cliff of errors. In GRT systems, an incremental improvement in the accuracy of translation produces replicase molecules that are more faithfully produced from the gene encoding them. This leads to an incremental improvement in information copying, in turn providing for the selection of narrower genetic quasispecies and an incrementally better encoding of the protein functionalities, promoting more accurate translation.
My comment: This is an entirely unwarranted claim. It is begging the question. There was no translation at this stage, since translation depends on a fully developed and formed genetic code.
The vicious circle can wind up rapidly from below as a self-amplifying process, rather than precipitously winding down the cliff from above. The balanced push-pull tension between these contradictory tendencies stably maintains the system near a tipping point, where, all else being equal, informational replication and translation remain impedance matched – that is, until the system falls into a new vortex of possibilities, such as that first enabled by the inherent incompleteness of the primordial coding “boot block”. Bootstrapped coded translation of genes is a natural feature of molecular processes unique to living systems. Organisms are the only products of nature known to operate an essentially computational system of symbolic information processing. In fact, it is difficult to envisage how alien products of nature found with a similar computational capability, which proved to be necessary for their existence, no matter how primitive, would fail classification as a form of “life”.
My comment: I would rather say, it is difficult to envisage how such a complex system could get "off the hook" by natural, unguided means.
http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC5895081&blobtype=pdf

The lack of foundation in the mechanism on which are based the physico-chemical theories for the origin of the genetic code counterposed to the credible and natural mechanism suggested by the coevolution theory  1 April 2016
https://sci-hub.ren/10.1016/j.jtbi.2016.04.005
The majority of theories advanced for explaining the origin of the genetic code maintain that the physico-chemical properties of amino acids had a fundamental role in organizing the structure of the genetic code ... but this does not seem to have been the case. The physico-chemical properties of amino acids played only a subsidiary role in organizing the code – and one important only if understood as a manifestation of the catalysis performed by proteins. The mechanism on which the majority of theories based on the physico-chemical properties of amino acids rely is not credible, or at least not satisfactory.

There are enough data to refute the possibility that the genetic code was randomly constructed (“a frozen accident”). For example, the genetic code clusters certain amino acid assignments. Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. Amino acids with similar physical properties tend to have similar codons.
If the code can be explained by either bottom-up processes (e.g. unknown chemical principles that make the code a necessity) or bottom-up constraints (i.e. a kind of selection process that occurred early in the evolution of life, and that favored the code we have now), then we can dispense with the code metaphor. The ultimate explanation for the code would then have nothing to do with choice or agency; it would ultimately be the product of necessity.
In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today?
https://uncommondescent.com/intelligent-design/is-the-genetic-code-a-real-code/

Genetic code: Lucky chance or fundamental law of nature?
It becomes clear that the information code is intrinsically related to the physical laws of the universe, and thus life may be an inevitable outcome of our universe. The lack of success in explaining the origin of the code and life itself over the last several decades suggests that we are missing something very fundamental about life, possibly something fundamental about matter and the universe itself. Certainly, the advent of the genetic code was no “play of chance”.

Open questions:
1. Did the dialects appear accidentally or as a result of some kind of selection process? Examples: the mitochondrial version, in which the UGA codon (the stop codon in the universal version) codes for tryptophan and the AUA codon (isoleucine in the universal version) codes for methionine; and Candida cylindracea (a fungus), in which the CUG codon (leucine in the universal version) codes for serine.
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure? 
4. Why is the genetic code not overlapping, that is, why does the translation apparatus of a cell, which reads the information, work in discrete units of three bases rather than one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids? 
6. Is the existing distribution of codon degeneracy for particular amino acids accidental, or the result of some kind of selection process?
7. Why were only 20 canonical amino acids selected for the protein synthesis? 
8. Why should there be a genetic code at all?
9. Why should there be the emergence of a stereochemical association of a specific arbitrary codon-anticodon set?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?
11. Is this very choice of amino acids accidental, or the result of some kind of selection process?
12. Why don’t we find any protein sequences in the fossils of ancient organisms, which only have primary amino acids?
13. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62?
14. Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, coding space could have increased, and thus a much larger universe of possible proteins would have been possible.

The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes:
“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion, or a “good compromise”?
An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

That frozen accident means that good old luck would have had to hit the jackpot through trial and error amongst roughly 1.5 × 10^84 possible genetic codes – some ten thousand times more than the estimated number of atoms in the observable universe (about 10^80). That puts any real possibility of chance providing the feat out of the question; using Borel's law, it lies in the realm of impossibility. The maximum time available for the code to originate has been estimated at 6.3 × 10^15 seconds (roughly 200 million years). To search that space in the time available, natural selection would have to evaluate on the order of 10^68 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
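The arithmetic can be checked in a few lines. The figures for the number of possible codes and the available time are the ones quoted above; the short Python sketch below only divides one by the other:

```python
# Rough sanity check of the search-space arithmetic quoted above.
# The input figures come from the cited sources; this only does the division.

possible_codes = 1.5e84      # estimated number of alternative genetic codes
available_seconds = 6.3e15   # roughly 200 million years

codes_per_second = possible_codes / available_seconds
print(f"codes to evaluate per second: {codes_per_second:.1e}")
# -> about 2.4e+68 codes per second
```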

Arzamastsev AA. The nature of optimality of DNA code. Biophys. Russ. 1997;42:611–4.
“The situation when Nature invented the DNA code surprisingly resembles the designing of a computer by man. If a computer were designed today, binary notation would hardly be used. Binary notation was chosen only at the first stage, in order to simplify as much as possible the construction of the decoding machine. But now, it is too late to correct this mistake.”
https://www.webpages.uidaho.edu/~stevel/565/literature/Genetic%20code%20-%20Lucky%20chance%20or%20fundamental%20law%20of%20nature.pdf

Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation April 18, 2018
Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular to enzyme-less reproduction, proliferation and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code.

Carl Woese (1967), reproduced in the seminal paper of Yarus et al. cited frequently above:
“I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon.” 
https://arxiv.org/pdf/1804.05939.pdf

The genetic code is one in a million
if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
http://www.ncbi.nlm.nih.gov/pubmed/9732450
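My comment: The kind of comparison described in this paper is easy to sketch. The Python snippet below is not Freeland and Hurst's exact procedure (they used Woese's polar requirement and weighted errors by mistranslation biases); it is a simplified illustration that scores a code by the mean squared change in Kyte-Doolittle hydropathy over all single-base substitutions, then compares the standard code with random codes obtained by shuffling which amino acid each synonymous codon block stands for:

```python
import random
from itertools import product

bases = "UCAG"
# Standard code, codons enumerated UUU, UUC, UUA, UUG, UCU, ... (U, C, A, G order)
aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = ["".join(c) for c in product(bases, repeat=3)]
standard_code = dict(zip(codons, aa_string))

# Approximate Kyte-Doolittle hydropathy values (one simple amino acid property).
hydropathy = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
              "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
              "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
              "Y": -1.3, "V": 4.2}

def cost(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    total, count = 0.0, 0
    for codon in codons:
        aa1 = code[codon]
        if aa1 == "*":               # skip stop codons
            continue
        for pos in range(3):
            for b in bases:
                if b == codon[pos]:
                    continue
                aa2 = code[codon[:pos] + b + codon[pos + 1:]]
                if aa2 == "*":
                    continue
                total += (hydropathy[aa1] - hydropathy[aa2]) ** 2
                count += 1
    return total / count

def random_code():
    """Shuffle which amino acid each synonymous codon block codes for (stops fixed)."""
    amino_acids = sorted(set(aa_string) - {"*"})
    shuffled = dict(zip(amino_acids, random.sample(amino_acids, len(amino_acids))))
    return {c: (aa if aa == "*" else shuffled[aa]) for c, aa in standard_code.items()}

natural = cost(standard_code)
better = sum(cost(random_code()) < natural for _ in range(2000))
print(f"standard code cost: {natural:.2f}")
print(f"random codes better than the standard code: {better} / 2000")
```

Even this crude version typically finds only a small fraction of shuffled codes that outperform the standard one, which is the qualitative point the paper makes with a more refined cost function.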

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/?report=classic

Hidden code in the protein code
Different codons for the same amino acid may affect how quickly mRNA transcripts are translated, and this pace can influence post-translational modifications. Despite being highly homologous, the mammalian cytoskeletal proteins beta- and gamma-actin contain notably different post-translational modifications: though both proteins are post-translationally arginylated, only arginylated beta-actin persists in the cell. This difference is essential for each protein's function.

To investigate whether synonymous codons might have a role in how arginylated forms persist, Kashina and colleagues swapped the synonymous codons between the genes for beta- and gamma-actin and found that the patterns of post-translational modification switched as well. Next, they examined translation rates for the wild-type forms of each protein and found that gamma-actin accumulated more slowly. Computational analysis suggested that differences between the folded mRNA structures might cause differences in translation speed. When the researchers added an antibiotic that slowed down translation rates, accumulation of arginylated actin slowed dramatically. Subsequent work indicated that N-arginylated proteins may, if translated slowly, be subjected to ubiquitination, a post-translational modification that targets proteins for destruction.

Thus, these apparently synonymous codons can help explain why some arginylated proteins but not others accumulate in cells. “One of the bigger implications of our work is that post-translational modifications are actually encoded in the mRNA,” says Kashina. “Coding sequence can define a protein's translation rate, metabolic fate and post-translational regulation.”
https://www.nature.com/articles/nmeth1110-874

Determination of the Core of a Minimal Bacterial Gene Set
Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes (which code for 13 protein complexes). Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell. These protein complexes could not emerge through evolution (mutations and natural selection), because evolution depends on DNA replication, which requires precisely these original genes and proteins (a chicken-and-egg problem). So the only mechanisms left are chance and physical necessity.
http://mmbr.asm.org/content/68/3/518.full.pdf

On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization
The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1894784/

Literature from those who argue in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only about 10^18 seconds. Multiplying these together, the maximum number of elementary-particle events in 30 billion years could only be 10^143. Yet the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are about 1 in 10^451.
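The arithmetic behind these figures can be verified directly; the sketch below simply multiplies the quoted quantities and computes the exponent of 20^347, treating every position as an independent choice among 20 amino acids, which is the assumption behind the 1 in 10^451 figure:

```python
from math import log10

# Quantities as quoted above
particles = 1e80          # estimated elementary particles in the universe
events_per_second = 1e45  # fastest possible events per particle per second
seconds = 1e18            # ~30 billion years

max_events = particles * events_per_second * seconds
print(f"maximum elementary-particle events: 1e{log10(max_events):.0f}")   # -> 1e143

# Odds against one specified 347-amino-acid protein, assuming each position
# is an independent choice among 20 amino acids: 20^347
protein_exponent = 347 * log10(20)
print(f"1 in 10^{protein_exponent:.0f}")                                  # -> 1 in 10^451
```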



The structure of proteins is optimized to consist of a chain, often a very long chain, of components drawn from a set of just 20 different units, called amino acids. Within the DNA, each group of three letters, formed out of the four letters that are available, is called a codon and specifies one amino acid. With four letters, the number of possible three-letter words works out to 64. If the word had only two letters, there would be only 16 ways it could be formed, which is not enough to specify 20 amino acids. We hence need at least three letters in the word. And although 64 is a lot more than 20, three codons have special uses (as stop signals), while the remaining 61 provide alternative spellings for the most frequent amino acids – an insurance against errors when the code in the DNA is read. 1
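The counting is easy to reproduce; the short sketch below simply enumerates all words of length one to four over a four-letter alphabet:

```python
from itertools import product

bases = "UCAG"
for length in (1, 2, 3, 4):
    words = ["".join(w) for w in product(bases, repeat=length)]
    print(f"{length}-letter words: {len(words)}")
# 1-letter words: 4    (far too few for 20 amino acids)
# 2-letter words: 16   (still too few)
# 3-letter words: 64   (enough, with 3 stop signals and plenty of redundancy)
# 4-letter words: 256
```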

Living organisms have this mathematically elegant system implemented, raising the question: how did it originate?

1. https://www.thestatesman.com/supplements/science_supplements/ancestry-genetic-code-1502937176.html

Origin and evolution of the genetic code: the universal enigma - a review

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#8386

Origin and evolution of the genetic code: the universal enigma
Eugene V. Koonin

The genetic code is ordered, optimal, robust, highly non-random. It contains a block structure.  

The genetic code is nearly universal, and the arrangement of the codons in the standard codon table is highly non-random. The three main concepts on the origin and evolution of the code are 

1. The stereochemical theory, according to which codon assignments are dictated by physico-chemical affinity between amino acids and the cognate codons (anticodons);
My comment: Since there is no direct physical interaction between the codon/anticodon site and the amino acid attachment site at the other end of the tRNA, how could there be an affinity between the two sites? And even if there were affinity and complementarity between nucleotides and amino acids, how could that be demonstrated for the whole code?

2. The coevolution theory, which posits that the code structure coevolved with amino acid biosynthesis pathways;
My comment: So that means that these ultra-complex biosynthesis pathways evolved, full of proteins, without the machinery to make proteins yet established? That's a chicken-and-egg problem. There is no evidence for a less evolved code being able to synthesize proteins.
[Figure: Different steps in the evolution of the genetic code according to the co-evolution theory]

3. And the error minimization theory under which selection to minimize the adverse effect of point mutations and translation errors was the principal factor of the code’s evolution.
My comment: The error-minimization theory supposes that genetic codes with high error rates would somehow evolve less error-prone over time. There is no evidence for this claim. Errors only lead to more errors, not higher precision.

These theories are not mutually exclusive and are also compatible with the frozen accident hypothesis, i.e., the notion that the standard code might have no special properties but was fixed simply because all extant life forms share a common ancestor, with subsequent changes to the code, mostly, precluded by the deleterious effect of codon reassignment.
My comment: There is no evidence for, and even less plausibility to, such an assertion.

Mathematical analysis of the structure and possible evolutionary trajectories of the code shows that it is highly robust to translational misreading but there are numerous more robust codes, so the standard code potentially could evolve from a random code via a short sequence of codon series reassignments. Thus, much of the evolution that led to the standard code could be a combination of frozen accident with selection for error minimization although contributions from coevolution of the code with metabolic pathways and weak affinities between amino acids and nucleotide triplets cannot be ruled out. However, such scenarios for the code evolution are based on formal schemes whose relevance to the actual primordial evolution is uncertain. A real understanding of the code origin and evolution is likely to be attainable only in conjunction with a credible scenario for the evolution of the coding principle itself and the translation system.

The fundamental question is how these regularities of the standard code came into being, considering that there are more than 10^84 possible alternative code tables if each of the 20 amino acids and the stop signal are to be assigned to at least one codon.
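The "more than 10^84" figure follows from counting the ways to assign one of 21 meanings (20 amino acids plus the stop signal) to each of the 64 codons so that every meaning is used at least once. Counting these surjective assignments by inclusion-exclusion, as in the sketch below, gives about 1.5 × 10^84:

```python
from math import comb

codons = 64
meanings = 21  # 20 amino acids + stop signal

# Inclusion-exclusion count of surjective maps from 64 codons onto 21 meanings
surjections = sum((-1) ** k * comb(meanings, k) * (meanings - k) ** codons
                  for k in range(meanings + 1))
print(f"possible code tables: {float(surjections):.3e}")   # -> about 1.5e+84
```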

More specifically, the question is, what kind of interplay of chemical constraints, historical accidents, and evolutionary forces could have produced the standard amino acid assignment, which displays many remarkable properties. The features of the code that seem to require a special explanation include, but are not limited to, the block structure of the code, which is thought to be a necessary condition for the code’s robustness with respect to point mutations, translational misreading, and translational frame shifts;

Block structure and stability of the genetic code  24 December 2002
The maximum stability with respect to point mutations and shifts in the reading frame needs the fixation of the middle letters within codons in groups with different physico-chemical properties, thus, explaining a key feature of the universal genetic code. 2

The universal genetic code obeys mainly the principles of optimal coding. These results demonstrate the hierarchical character of optimization of the universal genetic code, with strictly optimal coding being evolved at the earliest stages of molecular evolution.
Question: How can evolution and molecular evolution even be a mechanism at this stage, when no DNA replication was yet established? Optimization is the action of making the best or most effective use of a situation or resource, and is commonly known to be an intelligence-driven process with specific goals in mind; it requires foresight to know what the goal is.

The link between the second codon letter and the properties of the encoded amino acid so that codons with U in the second position correspond to hydrophobic amino acids;
Observation: This implies ORDER. ORDER is the opposite of randomness. It is the arrangement or disposition of things in relation to each other according to a particular sequence, pattern, or method. Order always implies or suggests that intelligence had the goal to create order for specific purposes.
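This regularity can be checked directly against the standard codon table. The sketch below collects the amino acids encoded by all codons with U in the second position and lists their approximate Kyte-Doolittle hydropathy values (positive values indicate hydrophobic residues):

```python
from itertools import product

bases = "UCAG"
# Standard code, codons enumerated UUU, UUC, UUA, UUG, UCU, ... (U, C, A, G order)
aa_string = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
code = dict(zip(("".join(c) for c in product(bases, repeat=3)), aa_string))

# Approximate Kyte-Doolittle hydropathy values (positive = hydrophobic)
hydropathy = {"F": 2.8, "L": 3.8, "I": 4.5, "M": 1.9, "V": 4.2}

middle_u = sorted({aa for codon, aa in code.items() if codon[1] == "U"})
print(middle_u)  # -> ['F', 'I', 'L', 'M', 'V'], all strongly hydrophobic
for aa in middle_u:
    print(aa, hydropathy[aa])
```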

The relationship between the second codon position and the class of aminoacyl-tRNA synthetase

Evolution of the Aminoacyl-tRNA Synthetases and the Origin of the Genetic Code  2 November 1994
The rules which governed the development of the genetic code, and led to certain patterns in the coding catalog between codons and amino acids, would also have governed the subsequent evolution of the synthetases in the context of a fixed code, leading to patterns in synthetase distribution such as those observed. 3
My comment: Since when, and why, should molecules lying around on the early earth generate or obey rules governing the development of the genetic code? We know that the genetic code cannot be expressed unless the full set of aminoacyl-tRNA synthetases is present, able to select the respective amino acids and charge them onto the tRNAs. That is another demonstration that the translation mechanism had to emerge fully set up and operating from day one.

the negative correlation between the molecular weight of an amino acid and the number of codons allocated to it; the positive correlation between the number of synonymous codons for an amino acid and the frequency of the amino acid in proteins; the apparent minimization of the likelihood of mistranslation and point mutations; and the near optimality for allowing additional information within protein coding sequences.

It is assumed that there are only 4 nucleotides and 20 encoded amino acids (with the notable exception of selenocysteine and pyrrolysine, for which subsets of organisms have evolved special coding schemes).

Natural expansion of the genetic code 2006 4
In order to account for its universality, the code was thought to be frozen to its existing form once a certain level of cellular complexity was reached. The already improved accuracy of protein synthesis at that stage, along with any further structural and functional refinement of the translation apparatus from there on, would preclude additional codon reassignments because they would inevitably lead to disruption of an organism’s whole proteome; the vast production of misfolded and aberrant proteins would greatly challenge survival of any such organism.
My comment: This is a very interesting comment. If adding codon assignments beyond 20 amino acids inevitably means disruption of an organism's whole proteome, why should the same not be expected for a transition from, let's say, 15 amino acids to 17?

The correlation of mRNA codons with amino acids is the product of the interpretation of the code by the translational machinery, and therefore it is only static as long as the components of this machinery do not change and evolve. It is not surprising then that the documented codon reassignments can always be traced back to alterations in the components of the translational apparatus that are primarily involved in the decoding process: the aminoacyl-tRNA synthetase aaRSs, which ensure correct acylation of each tRNA species with its cognate amino acid; the tRNA molecules, whose anticodon base pairs with the correct mRNA codon by the rules of the wobble hypothesis at the ribosome ; and the peptide chain termination factors that recognize the termination codons.
My comment: This is another very relevant observation. If the genetic code changes, the entire translation machinery has to change with it (evolve through mutations?) in order to adapt to the changed codon assignments. That would require new information to change several interacting parts, like tRNAs, aminoacyl-tRNA synthetases, etc.

UGA is the only codon with an ambiguous meaning in organisms from all three domains of life; apart from functioning as a stop codon, an in-frame UGA also encodes selenocysteine (Sec), the 21st cotranslationally inserted amino acid, through a recoding mechanism that requires a tRNA with a UCA anticodon (tRNASec), a specialized translation elongation factor (SelB) and an mRNA stem-loop structure known as the selenocysteine insertion sequence element (SECIS). UAG is also ambiguous in the Methanosarcinaceae, where in addition to serving as a translational stop it also encodes pyrrolysine (Pyl), the 22nd cotranslationally inserted amino acid; in this case, a new tRNA synthetase, pyrrolysyl-tRNA synthetase (PylRS), is essential for this recoding event.

REWIRING THE KEYBOARD: EVOLVABILITY OF THE GENETIC CODE 5
The genetic code evolved in two distinct phases. First, the ‘canonical’ code emerged before the last universal ancestor; subsequently, this code diverged in numerous nuclear and organelle lineages.

Any change in the genetic code alters the meaning of a codon, which, analogous to reassigning a key on a keyboard, would introduce errors into every translated message. Although this might have been acceptable at the inception of the code, when cells relied on few proteins, the forces that act on modern translation systems are likely to be quite different from those that influenced the origin and early evolution of the code

The arbitrariness of the genetic code
Perhaps the most important implication concerns the notion of genetic information. Despite its vagueness, arbitrariness is thought to be useful in establishing how molecules like DNA might convey semantic genetic information 6
My comment: It's not only useful. It is essential if complex instructional information is to be generated at all.

The only generally accepted sense of “arbitrary“ seems to be that the assignments could be different than they actually are. Of course, this does not say much about the sense in which they could be different. A more substantial claim is, for example, that the genetic code could be different because an early version of the code became established by chance events rather than by selection or stereochemical factors

My comment:  In fact, there are just these two hypotheses. Chance - or design.

The argument has not been worked out, but it seems to be based on an analogy between chemical and linguistic arbitrariness. Linguistic arbitrariness expresses the fact that the linguistic properties of a word are usually not naturally related to its meaning. The phonetic form of ‘dog’ does not reflect a property of dogs. In Peircean terms, the relation between a word and its meaning is symbolic. Similarly, the genetic code’s arbitrariness is understood as the absence of a natural connection between codons and amino acids. Chemical arbitrariness arguably establishes a language-like symbolic relation between codons and amino acids. It is then thought legitimate to attribute meaning and semantic information to genes or its components. Words and letters are conventionally related to their meanings or to signs from other alphabets like the Morse signs. It is this conventional relation which makes letters and Morse signs symbolic. The thought seems to be that arbitrariness between molecular entities establishes a similar, symbolic kind of relation between them.  If one accepts that DNA and RNA contain information and that arbitrariness is essential for having information, one is committed to claim that at least they bear the relevant chemically arbitrary relations.



1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/
2. https://sci-hub.ren/10.1016/s0022-5193(03)00025-0
3. https://sci-hub.ren/10.1007/bf00166624
4. https://sci-hub.ren/10.1038/nchembio847
5. https://www.nature.com/articles/35047500
6. https://sci-hub.ren/10.1023/b:biph.0000024412.82219.a6



The error-minimization or adaptation theory
According to Sonneborn’s argument reviewed by Carl Woese, selection pressure acted on a primitive genetic code that led to the generation of a mature genetic code where mutations in codons produced few adverse outcomes in terms of functional proteins. This represents an error-minimization strategy. Woese admitted that the error-minimization scheme involved innumerable “trials and errors” so that it, in his opinion, “could never have evolved in this way”.

Others have defended the theory. Some ingenious ideas were admitted subsequently as “utterly wrong”. Interestingly, one investigation of the theoretical susceptibility of a million randomly generated codes to errors, through mutations, showed that the standard genetic code was among the least prone to error. This indicated that, if the initial genetic code was primitive and error-prone, then what is observed in nature is the best option. However, the question remains as to why only one single code survived. Why not several different ones? This rather stands as evidence that the Creator made a wise choice.

The ancestral translational machinery conceived in evolutionary schemes is, of necessity, very rudimentary, and thus highly prone to errors. This means that it would have been almost impossible to correctly translate any mRNA; it would have produced little more than statistical proteins (proteins with essentially random sequences). Yet through necessity, somehow, the codons of the ancestral code were gradually reconfigured in order to minimize translational error. The ‘somehow’ has been imagined as perhaps involving novel amino acids, existence of a positive feedback mechanism that would assign codons to amino acids with similar properties, direct templating between nucleic and amino acids, or other possibilities.

Vetsigian and Woese subsequently proposed that horizontal gene transfer (HGT) could possibly spread workable genetic codes across organisms, accounting for the near universality of the genetic code. However, HGT requires that the genetic codes of the host and the recipient species be similar enough for the new genetic code to work. There also needs to be evidence for a mechanism permitting transfer of genetic information in the ancient past.

The stereochemical theory
Over the past 60 years, several theories have been set forward which attempt to explain how information in the DNA translates to protein sequences. These are based on some sort of selective stereochemical complementarity or affinity between amino acids and nucleotides (base pair triplets). On a physico-chemical level, this is based on the negative charges of the nucleotide phosphates interacting with the positive charge of the basic amino acids. In Saxinger et al.’s study no conclusive selective binding occurred between certain amino acids and nucleotide triplets. More recently, Yarus et al. contended that coding triplets arose as essential parts of RNA-like amino acid-binding sites, but they could show this for only seven of the 20 (35%) canonical amino acids. However, they conceded that the code can change.

The take home implication is that different amino acids can be bound by different coding triplets, meaning that the code is not specific and thus meaningless historically. Overall, after decades of research, no evidence has been found which gives strong support to the stereochemical theory. Yarus’s group went on to argue that adaptation, stereochemical features and co-evolutionary changes were compatible and perhaps necessary in order to account for present codon characteristics. However, Barbieri has argued that there is “no real evidence in favour” of the stereochemical theory. This serves to illustrate the uncertainty prevailing.

The co-evolution theory
According to the co-evolution theory, the original genetic code was “excessively degenerate” meaning it could code for several amino acids. These originals were used in “inventive biosynthetic processes” to synthesize the other amino acids. The code then adapted to accommodate these new amino acids. Similarities in the codons of related amino acids were subject to computer analysis in order to determine if a better code could be found based on biosynthetically related amino acids. An extraordinary correlation was noted for the universal code, as against 32,000 randomly generated possibilities. Changing the pattern of relatedness among amino acids gave more codes equal to or of greater correlations than the universal code. However, the authors stated that these observations “cannot be used as proof for the biosynthetic theory of the genetic code”.

Less than half of the 20 canonical amino acids found in proteins can be synthesized from inorganic molecules. Furthermore, the amino acids that are missing (the so-called secondary amino acids) are also missing from material recovered from meteorites. This is problematic for evolution, for it implies that early life-forms on this planet could only use ten amino acids for protein construction, something which we don’t observe today, thereby greatly reducing the possible number of functional proteins.

The primary amino acids were coded by an ancestral genetic code, which then expanded to include all 20 canonical amino acids. The present code is a non-random structure, yet it is more robust as far as translational errors are concerned than the majority of alternative codes that can be generated conceptually according to accepted evolutionary trajectories. When the starting assumptions are altered so that the postulated codes start from an advantaged position, then higher levels of robustness are achieved. A better code could have been produced if evolution had continued, but it did not as the possibility of severe adverse effects was too great. 

Several questions present themselves here, however. Why don’t we find any protein sequences in the fossils of ancient organisms, which only have primary amino acids? The fact that no such proteins exist is strong proof against the evolutionary origin of the genetic code. We only find proteins made up of all 20 amino acids. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62? Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, coding space could have increased, and thus a much larger universe of possible proteins would have been possible.

An additional fundamental issue is that if life commenced in an RNA world, then amino acids could have been synthesized on the primitive codons associated with these molecules by primordial synthetases. How do similar coding rules now apply when codon recognition is performed by the anticodons of the tRNA with the assistance of the highly specific aminoacyl-tRNA-synthetases that attach to the amino acids? It has been suggested that perhaps there was a two-base code rather than a three-base one on account of the supposed limited number of amino acids available.

The accretion model of ribosomal evolution
The accretion model of ribosomal evolution is one of the most recent models and describes how the ribosome evolves from simple RNA and protein elements into an organelle complex in six major phases through accretion, recursively adding, iterative processes, subsuming and freezing segments of the rRNA. It is argued that the record of changes is held in rRNA secondary and three-dimensional structures. Patterns observed in extant rRNA found among organisms were used to generate rules supposedly governing the changes.

First, it is assumed that evolution occurred with changes moving from prokaryotes leading finally to the eukaryotes and with the apex reached with humans. Using this framework, a chronological sequence was constructed of rRNA segment additions to the core structure found in Escherichia coli. The six-phase process envisaged provided no evidence for the emergence of ancestral RNA. The proto-mRNA is seen simply as arising from a random population of appropriate molecules. This proto-mRNA, together with tRNA (formed through condensation of a cytosine:cytosine:adenine (CCA) sequence unit), gave rise to base-pair coding triplets (codons). The ribosomal units (small and large) are considered to have arisen from loops of the rRNA. The proposed RNA loops were ‘defect-laden’, which required a protection mechanism. During phase 2 the large ribosomal unit is thought of as a crude ribozyme almost as soon as it was a recognizable structure, catalyzing nonspecific, non-coded condensation of amino acids. Finally, the two developing ribosome units came together (phase 4) to form a complex structure recognizable as a ribosome. In the next phase (5), specific interactions began to occur between anticodons in tRNA and mRNA codons to produce functional proteins. In the final phase the genetic code was optimized.

This narrative suffers from major flaws, some of which also are inherent in previous models of the genetic code generation. No organisms have been found that contain ribosomes in any of these intermediary phases. If these intermediary phases are capable of ribosomal function, then why was it necessary to evolve further during additional steps? An insistent problem is how a genetic code could be generated that depends for its expression on proteins that can only be formed when it exists. Petrov et al. proposed a partial solution. The peptidyl transferase (enzyme) centre, an essential component of the ribosome, arose from an rRNA fragment. This means that its origin is conceived of as being in the RNA world. The peptidyl transferase centre is the place in the 50S LSU where peptide bond synthesis occurs. The machinery is very complex in extant organisms. In its original incarnation, the embryonic centre was less than 100 nucleotides long. The original RNA world quickly morphed into the familiar RNA/protein world. This argument is necessary as it “has proven experimentally difficult to achieve” a self-replicating RNA system. In a revealing aside, Fox even suggested that perhaps it is not necessary to validate the existence of the RNA world if it had a short life.

Some of the additional problems with an RNA world origin were noted by Strobel. An RNA commencement to life on Earth rests on the ability of RNA to both share the task of encoding and also to replicate information. This proposition depends on the abilities of RNA copying enzymes (ribozymes). However, such enzymes are unable to copy long templates and at a sufficient rate to overtake decomposition processes. Even greater issues are that there is no sensible resolution to the question of the origin of the activated nucleotides through abiotic processes needed for RNA formation, or of the problem as to how randomly assembled nucleotides achieved the ability to replicate. This has led some to conclude that “the model does not appear to be very plausible”. Nevertheless, undaunted, other possibilities have been invented.

https://creation.com/ribosomes-and-design

Eörs Szathmáry  Toward major evolutionary transitions theory 2.0  April 2, 2015
Stereochemical match is aided by codonic or anticodonic triplets in the corresponding binding sites although an open question is the accuracy when all amino acids and aptamers are present in the same milieu. Should this mechanism turn out to be robust, it offers a convenient road toward initial establishment of the code. The question “what for” remains, however.
https://www.pnas.org/content/112/33/10104


Mark Ridley, Evolution 3rd ed.
http://library.lol/main/F0C84F72B8E4C6D45DE7348D599AB035
In the chemical theory, each particular triplet would have some chemical affinity with its amino acid. GGC, for example, would react with glycine in some way that matched the two together. Several lines of evidence suggest this is wrong. One is that no such chemical relation has been found (and not for want of looking), and it is generally thought that one does not exist. Secondly, the triplet and the amino acid do not physically interact in the translation of the code. They are both held on a tRNA molecule, but the amino acid is attached at one end of the molecule, while the site that recognizes the codon on the mRNA is at the other end


[Figure: tRNA structure]


If the genetic code is not chemically determined, why is it the same in all species? The most popular theory is as follows. The code is arbitrary, in the same sense that human language is arbitrary. In English the word for a horse is “horse,” in Spanish it is “caballo,” in French it is “cheval,” in Ancient Rome it was “equus.” There is no reason why one particular sequence of letters rather than another should signify that familiar perissodactylic mammal. Therefore, if we find more than one people using the same word, it implies they have both learned it from a common source. It implies common ancestry. When the starship Enterprise boldly descends on one of those extragalactic planets where the aliens speak English, the correct inference is that the locals share a common ancestry with one of the English-speaking peoples of the Earth. If they had evolved independently, they would not be using English. All living species use a common, but equally arbitrary, language in the genetic code. The reason is thought to be that the code evolved early on in the history of life, and one early form turned out to be the common ancestor of all later species. (Notice that saying all life shares a common ancestor is not the same as saying life evolved only once.) The code is then what Crick (1968) called a “frozen accident.” 

My comment: Note the just-so assertion. The authors neglect that there was no evolution prior to DNA replication and life.

That is, the original coding relationships were accidental, but once the code had evolved, it would be strongly maintained. Any deviation from the code would be lethal. An individual that read GGC as phenylalanine instead of glycine, for example, would bungle all its proteins, and probably die at the egg stage. The universality of the genetic code is important evidence that all life shares a single origin. In Darwin’s time, morphological homologies like the pentadactyl limb were known; but these are shared between fairly limited groups of species (like all the tetrapods). Cuvier had arranged all animals into four large groups according to their homologies. For this reason, Darwin suggested that living species may have a limited number of common ancestors, rather than just one. Molecular homologies, such as the genetic code, now provide the best evidence that all life has a single common ancestor.
