ElShamah - Reason & Science: Defending ID and the Christian Worldview

Welcome to my library—a curated collection of research and original arguments exploring why I believe Christianity, creationism, and Intelligent Design offer the most compelling explanations for our origins. Otangelo Grasso



The genetic code, insurmountable problem for non-intelligent origin


https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin

Eugene V. Koonin: Origin and evolution of the genetic code: the universal enigma 2012 Mar 5
In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/

In a paper published by Omachi et al. (2023), the authors investigated the robustness of the standard genetic code (SGC) by exploring its position in a theoretical "fitness landscape," in which different genetic codes are evaluated for their ability to minimize the effects of mutations and translation errors. Using an advanced multicanonical Monte Carlo sampling technique, the authors sampled a much broader range of genetic codes than earlier studies, which often relied on biased evolutionary algorithms. They estimated that only about one in 10^20 random codes surpasses the SGC in robustness, a far rarer occurrence than previous estimates, which suggested one in a million codes. 1
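As a toy illustration of this kind of code-sampling experiment (not the authors' multicanonical method, and far too few samples to probe a one-in-10^20 tail), here is a minimal sketch. It assumes the classic block-permutation model of alternative codes, in which synonymous codon blocks are kept but the amino acid owning each block is shuffled, and uses the Kyte-Doolittle hydropathy scale as a stand-in for the amino-acid property measures used in the literature:

```python
import random
from itertools import product

BASES = "UCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
# Standard genetic code, codons ordered by (1st, 2nd, 3rd) base in UCAG order.
SGC = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
# Kyte-Doolittle hydropathy values, a simple stand-in amino-acid property.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def cost(code):
    """Mean squared hydropathy change over all single-base substitutions."""
    table = dict(zip(CODONS, code))
    total, n = 0.0, 0
    for codon, aa in table.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbour = table[codon[:pos] + base + codon[pos + 1:]]
                if neighbour == "*":
                    continue
                total += (HYDRO[aa] - HYDRO[neighbour]) ** 2
                n += 1
    return total / n

def random_code():
    """Shuffle which amino acid owns each synonymous codon block."""
    aas = sorted(set(SGC) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return "".join(perm.get(aa, "*") for aa in SGC)

random.seed(0)
sgc = cost(SGC)
better = sum(cost(random_code()) < sgc for _ in range(1000))
print(f"SGC cost {sgc:.2f}; {better} of 1000 random codes did better")
```

Under this crude measure the standard code already beats the overwhelming majority of block-permuted alternatives, which is the qualitative point of the sampling studies quoted here.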


Charles W. Carter, Jr. (2015): Inheritance, catalysis, and coding can be viewed fundamentally as problems of emerging specificity.  The universal genetic code is highly specific, and there has been no way to account for its gradual emergence by phenotypic selection from among more simply coded peptides. The absence of transitional links between earlier intermolecular interactions and the triplet code is a fundamental stumbling block that has continued to justify the questionable conclusion that a biologically sufficient set of functional RNA molecules arose by themselves, providing all informational continuity and catalysis [6] necessary to produce the code, without then leaving a trace behind in the phylogenetic record.
Life requires inheritance, catalysis, and coding

1. Creating a translation dictionary, for example from English to Chinese, always requires a translator who understands both languages.
2. Assigning words of one language to words of another language with the same meaning requires prior agreement on that meaning in order to establish the translation.
3. That is analogous to what we see in biology, where the ribosome translates the words of the genetic language, composed of 64 codon words, into the language of proteins, composed of 20 amino acids.
4. The origin of such complex communication systems is best explained by an intelligent designer.
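The 64-codon-to-20-amino-acid dictionary described in point 3 can be sketched as a simple lookup table. The table below is only a hypothetical fragment of the standard code (a handful of the 64 entries, using three-letter amino-acid abbreviations) for illustration:

```python
# A tiny fragment of the standard genetic code "dictionary":
# codon (mRNA triplet) -> amino acid, with stop codons marked.
CODON_TABLE = {
    "AUG": "Met", "UGU": "Cys", "UGC": "Cys", "GCU": "Ala",
    "UGG": "Trp", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read an mRNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUGUGCUUAA"))  # ['Met', 'Cys', 'Ala']
```

The stop entries also illustrate the point made later in this thread: without them, the reading process would have no defined end.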

Alberts et al.: Molecular Biology of the Cell, p. 367
The relationship between a sequence of DNA and the sequence of the corresponding protein is called the genetic code…the genetic code is deciphered by a complex apparatus that interprets the nucleic acid sequence. …the conversion of the information in [messenger] RNA represents a translation of the information into another language that uses quite different symbols.

Florian Kaiser: The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases 28 July 2020
One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated.
https://www.nature.com/articles/s41598-020-69100-0

Patrick J. Keeling:  Genomics: Evolution of the Genetic Code  September 26, 2016
Understanding how this code originated and how it affects the molecular biology and evolution of life today are challenging problems, in part because it is so highly conserved — without variation to observe it is difficult to dissect the functional implications of different aspects of a character. 

It is tempting to think that a system so central to life should be elegant, but of course that’s not how evolution works; the genetic code was not designed by clever scientists, but rather built through a series of contingencies. The ‘frozen accident’, as it was described by Crick, that ultimately emerged is certainly non-random, but is more of a mishmash than an elegant plan, which led to new ideas about how the code may have evolved in a series of steps from simpler codes with fewer amino acids. So the code was not always thus, but once it was established before the last universal common ancestor of all extant life (LUCA) it has remained under very powerful selective constraints that kept the code frozen in nearly all genomes that subsequently diversified.
https://sci-hub.st/https://www.sciencedirect.com/science/article/pii/S0960982216309174

My comment: A series of contingencies! A mishmash rather than an elegant plan! Contingent means accidental, incidental, adventitious, casual, chance; in other words, luck, a fortuitous accident. Is that a rational proposition?
The genetic codons are assigned to amino acids. Why should or would molecules designate, dictate, ascribe, correspond, correlate or specify anything at all? How does that make sense? The genetic code could not be the product of evolution, since it had to be fully operational when life started (and so did DNA replication, upon which evolution depends). The only alternative to design is that random unguided events originated it.

Marcello Barbieri Code Biology  February 2018
"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
https://www.sciencedirect.com/journal/biosystems/vol/164/suppl/C

[Comment on other biological codes]: "In signal transduction, in short, we find all the essential components of a code: (a) two independent worlds of molecules (first messengers and second messengers), (b) a set of adaptors that create a mapping between them, and (c) the proof that the mapping is arbitrary because its rules can be changed in many different ways."

RNAs (if they were extant prebiotically at all) would just lie around and then disintegrate within a short period of time (a month or so). Even if we disregard the fact that the prebiotic synthesis of RNAs HAS NEVER BEEN DEMONSTRATED IN THE LAB, they would not polymerize. Clay experiments have failed. And even IF they would bind in GC-rich configurations to small peptides, they would likewise simply lie around and disintegrate. In ANY scenario it is a far stretch to believe that unguided events would randomly produce codes. That is simply putting far too much faith into what molecules on their own are capable of doing.

My comment: Without stop codons, the translation machinery would not know where to end protein synthesis; there could never be functional proteins, and no life on earth at all. As for the known codon reassignments in non-canonical codes: certain characteristics may render such changes more statistically probable, less likely to be deleterious, or both. However, most non-canonical genetic codes are inferred from DNA sequence alone, or occasionally from DNA sequences and the corresponding tRNAs.
This is not an argument from incredulity. The proposition defies reasonable principles and the known, limited, unspecific range of chance, physical necessity, mutation and natural selection. What we need is a *plausible* account of how the code came to be in the first place. In ANY scenario it is a stretch to believe that unguided random events would produce a functional code system together with arbitrary assignments of meaning; that puts far too much faith in what molecules on their own are capable of doing. Systems, given energy and left to themselves, DEVOLVE to give uselessly complex mixtures, "asphalts". The literature reports (to our knowledge) exactly ZERO CONFIRMED OBSERVATIONS where molecular complexification emerged spontaneously from a pool of random chemicals. It is IMPOSSIBLE for any non-living chemical system to escape devolution and enter the world of the "living".

Eugene V. Koonin (2009): Many of the same codons are reassigned (compared to the standard code) in independent lineages (e.g., the most frequent change is the reassignment of the stop codon UGA to tryptophan); this implies that there should be a predisposition towards certain changes, and at least one of these changes was reported to confer a selective advantage.

The origin of the genetic code is acknowledged to be a major hurdle in the origin of life, and I shall mention just one or two of the main problems. Calling it a 'code' can be misleading because of associating it with humanly invented codes, which at their core usually involve some sort of pre-conceived algorithm; whereas the genetic code is implemented entirely mechanistically, through the action of biological macromolecules. This emphasises that, to have arisen naturally, e.g. through random mutation and natural selection, no forethought is allowed: all of the components would need to have arisen in an opportunistic manner.
Origin and evolution of the genetic code: the universal enigma 

Crucial role of the tRNA activating enzymes 
To try to explain the source of the code various researchers have sought some sort of chemical affinity between amino acids and their corresponding codons. But this approach is misguided:

1. The code is mediated by tRNAs, which carry the anticodon that pairs with the codon on the mRNA (not with the codon in the DNA itself). So, if the code were based on affinities between amino acids and anticodons, it implies that the two-stage process of transcription followed by translation cannot have arisen as a second stage or improvement on a simpler direct system: the complex two-step process would need to have arisen right from the start.
2. The amino acid has no role in identifying the tRNA or the codon. This can be seen from an experiment in which the amino acid cysteine was bound to its appropriate tRNA in the normal way, using the relevant activating enzyme, and then chemically modified to alanine. When the altered aminoacyl-tRNA was used in an in vitro protein-synthesizing system (including mRNA, ribosomes etc.), the resulting polypeptide contained alanine, instead of the usual cysteine, wherever the codon UGU occurred in the mRNA. This clearly shows that it is the tRNA alone, with its appropriate anticodon and no role for the amino acid, that matches the codon on the mRNA. The association of amino acid with tRNA is made by an activating enzyme (an aminoacyl-tRNA synthetase), which attaches each amino acid to its appropriate tRNA and must therefore correctly identify both components. There are 20 different activating enzymes, one for each type of amino acid.
A third difficulty for the affinity idea is that the end of the tRNA to which the amino acid attaches has the same nucleotide sequence for all amino acids.
3. Interest in the genetic code tends to focus on the role of the tRNAs, but as just indicated that is only one half of implementing the code. Just as important as the codon-anticodon pairing (between mRNA and tRNA) is the ability of each activating enzyme to bring together an amino acid with its appropriate tRNA. It is evident that implementation of the code requires two sets of intermediary molecules: the tRNAs which interact with the ribosomes and recognise the appropriate codon on mRNA, and the activating enzymes which attach the right amino acid to its tRNA. This is the sort of complexity that pervades biological systems, and which poses such a formidable challenge to an evolutionary explanation for its origin. It would be improbable enough if the code were implemented by only the tRNAs which have 70 to 80 nucleotides; but the equally crucial and complementary role of the activating enzymes, which are hundreds of amino acids long, excludes any realistic possibility that this sort of arrangement could have arisen opportunistically.
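The two intermediary roles just described can be sketched with toy data: an activating-enzyme step that charges each tRNA (here a bare anticodon) with an amino acid, and a ribosome step that matches anticodons to codons with no role for the amino acid itself. The last lines mimic the logic of the cysteine-to-alanine experiment from point 2: once a tRNA is (mis)charged, the ribosome blindly inserts whatever it carries. All names and the three-codon message are illustrative assumptions:

```python
# Watson-Crick pairing used for codon/anticodon matching.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon: str) -> str:
    """Reverse-complement of a codon, i.e. its pairing anticodon."""
    return "".join(PAIR[b] for b in reversed(codon))

# aaRS step: each tRNA (keyed by anticodon) is charged with an amino acid.
charged = {anticodon("UGU"): "Cys", anticodon("GCU"): "Ala",
           anticodon("AUG"): "Met"}

def translate(mrna: str, charged: dict) -> list:
    """Ribosome step: match codons to charged tRNAs by anticodon only."""
    return [charged[anticodon(mrna[i:i + 3])] for i in range(0, len(mrna), 3)]

print(translate("AUGUGUGCU", charged))  # ['Met', 'Cys', 'Ala']
# Chemically convert the amino acid carried by the Cys-tRNA into Ala:
charged[anticodon("UGU")] = "Ala"
print(translate("AUGUGUGCU", charged))  # ['Met', 'Ala', 'Ala']
```

The second printout shows alanine appearing at the UGU position, as in the experiment: codon recognition and amino-acid attachment are two separate mappings, each of which must be correct.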

The Genetic Code (2017): Progressive development of the genetic code is not realistic
In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary, manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.

1. First, it would seem that the early codons need have used only two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognizing this difficulty, most researchers assume that the code used 3-base codons from the outset; which was remarkably fortuitous or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
2. Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
3. Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
4. Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesized and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!

https://evolutionunderthemicroscope.com/ool02.html
Stefanie Gabriele Sammet: (2010): The origin and universality of the genetic code is one of the biggest enigmas in biology. Soon after the genetic code of Escherichia coli was deciphered, it was realized that this specific code out of more than 10^84 possible codes is shared by all studied life forms (albeit sometimes with minor modifications). The question of how this specific code appeared and which physical or chemical constraints and evolutionary forces have shaped its highly non-random codon assignment is subject of an intense debate. In particular, the feature that codons differing by a single nucleotide usually code for either the same or a chemically very similar amino acid and the associated block structure of the assignments is thought to be a necessary condition for the robustness of the genetic code both against mutations as well as against errors in translation.
Comparison of translation loads for standard and alternative genetic codes 
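The "more than 10^84 possible codes" figure quoted above can be reproduced combinatorially, assuming (as is common in this literature) that it counts the surjective assignments of 64 codons onto 21 meanings (20 amino acids plus a stop signal), via inclusion-exclusion:

```python
from math import comb

# Count the maps from 64 codons onto 21 meanings that use every meaning
# at least once (surjections), by inclusion-exclusion over the meanings
# that could be left out.
n_codons, n_meanings = 64, 21
surjections = sum((-1) ** k * comb(n_meanings, k) * (n_meanings - k) ** n_codons
                  for k in range(n_meanings + 1))
print(f"{surjections:.2e}")  # roughly 1.5e84
```

The result is about 1.5 x 10^84, matching the figure cited here and later in this thread.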

David M. Seaborg Was Wright Right? The Canonical Genetic Code is an Empirical Example of an Adaptive Peak in Nature; Deviant Genetic Codes Evolved Using Adaptive Bridges  2010 Aug 15
The error minimization hypothesis postulates that the canonical genetic code evolved as a result of selection to minimize the phenotypic effects of point mutations and errors in translation. 

My comment: How can the authors claim that there was already translation, if translation depends on the genetic code already being set up?

It is likely that the code in its early evolution had few or even a minimal number of tRNAs that decoded multiple codons through wobble pairing, with more amino acids and tRNAs being added as the code evolved.

My comment: Why do the authors claim that the genetic code emerged based on evolutionary selective pressures, if at this stage there was no evolution AT ALL? Evolution starts with DNA replication, which DEPENDS on translation being already fully set up. Also, the origin of tRNAs is a huge problem for proponents of abiogenesis, because they are highly specific, and their biosynthesis in modern cells is a highly complex, multistep process requiring many complex enzymes.

(2018) The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being, but principles of self-organisation that transcend Darwinian natural selection furnish an unexpectedly robust basis for a rapid, concerted transition to genetic coding from a peptide RNA world. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 

Only one fact concerning the RNA World can be established by direct observation: if it ever existed, it ended without leaving any unambiguous trace of itself. Having left no such trace, the latest time of its demise can thus be situated in the period of emergence of the current universal system of genetic coding, a transformative innovation that provided an algorithmic procedure for reproducibly generating identical proteins from patterns in nucleic acid sequences. 
Insuperable problems of the genetic code initially emerging in an RNA World 

Now observe the colorful just-so stories that the authors come up with to explain the inexplicable:
We can now understand how the self-organised state of coding can be approached “from below”, rather than thinking of molecular sequence computation as existing on the verge of a catastrophic fall over a cliff of errors. In GRT systems, an incremental improvement in the accuracy of translation produces replicase molecules that are more faithfully produced from the gene encoding them. This leads to an incremental improvement in information copying, in turn providing for the selection of narrower genetic quasispecies and an incrementally better encoding of the protein functionalities, promoting more accurate translation.

My comment: This is an entirely unwarranted claim. It is begging the question. There was no translation at this stage, since translation depends on a fully developed and formed genetic code.

The vicious circle can wind up rapidly from below as a self-amplifying process, rather than precipitously winding down the cliff from above. The balanced push-pull tension between these contradictory tendencies stably maintains the system near a tipping point, where, all else being equal, informational replication and translation remain impedance matched – that is, until the system falls into a new vortex of possibilities, such as that first enabled by the inherent incompleteness of the primordial coding “boot block”. Bootstrapped coded translation of genes is a natural feature of molecular processes unique to living systems. Organisms are the only products of nature known to operate an essentially computational system of symbolic information processing. In fact, it is difficult to envisage how alien products of nature found with a similar computational capability, which proved to be necessary for their existence, no matter how primitive, would fail classification as a form of “life”.

My comment: I would rather say, it is difficult to envisage how such a complex system could get "off the hook" by natural, unguided means.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2924497/

Massimo Di Giulio: The lack of foundation in the mechanism on which are based the physico-chemical theories for the origin of the genetic code counterposed to the credible and natural mechanism suggested by the coevolution theory (1 April 2016)
The majority of theories advanced for explaining the origin of the genetic code maintain that the physico-chemical properties of amino acids had a fundamental role in organizing the structure of the genetic code ... but this does not seem to have been the case. The physico-chemical properties of amino acids played only a subsidiary role in organizing the code, important only if understood as a manifestation of the catalysis performed by proteins. The mechanism on which the majority of theories based on the physico-chemical properties of amino acids rely is not credible, or at least not satisfactory.
https://sci-hub.ren/10.1016/j.jtbi.2016.04.005

There are enough data to refute the possibility that the genetic code was randomly constructed (“a frozen accident”). For example, the genetic code clusters certain amino acid assignments. Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. Amino acids with similar physical properties tend to have similar codons.

[If the genetic code can be explained by] either bottom-up processes (e.g. unknown chemical principles that make the code a necessity), or bottom-up constraints (i.e. a kind of selection process that occurred early in the evolution of life, and that favored the code we have now), then we can dispense with the code metaphor. The ultimate explanation for the code has nothing to do with choice or agency; it is ultimately the product of necessity.

In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today?
https://uncommondescent.com/intelligent-design/is-the-genetic-code-a-real-code/

Genetic code: Lucky chance or fundamental law of nature?
It becomes clear that the information code is intrinsically related to the physical laws of the universe, and thus life may be an inevitable outcome of our universe. The lack of success in explaining the origin of the code and life itself in the last several decades suggests that we miss something very fundamental about life, possibly something fundamental about matter and the universe itself. Certainly, the advent of the genetic code was no “play of chance”.

Open questions:
1. Did the dialects, i.e., the mitochondrial version, with the UGA codon (a stop codon in the universal version) codifying tryptophan and the AUA codon (isoleucine in the universal version) codifying methionine, and Candida cylindrica (fungus), with the CUG codon (leucine in the universal version) codifying serine, appear accidentally or as a result of some kind of selection process?
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure? 
4. Why is the genetic code not overlapping, that is, why does the translation apparatus of a cell read information in discrete steps of three bases rather than one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids? 
6. Is the existing distribution of codon degeneracy for particular amino acids accidental, or the result of some kind of selection process?
7. Why were only 20 canonical amino acids selected for protein synthesis? Is this very choice of amino acids accidental, or the result of some kind of selection process?
8. Why should there be a genetic code at all?
9. Why should there be the emergence of a stereochemical association of a specific, arbitrary codon-anticodon set?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?

The British biologist John Maynard Smith (The Major Transitions in Evolution, 1997) has described the origin of the code as the most perplexing problem in evolutionary biology. With his collaborator Eörs Szathmáry he writes:
“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have knowledge to arrive at an optimal conclusion, or a "good compromise"?
An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.
https://3lib.net/book/1102567/9707b4
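The capacity arithmetic in the passage above is easy to check: with a 4-letter alphabet, doublet codons can distinguish at most 4^2 = 16 meanings, too few for 20 amino acids plus a stop signal, while triplets give 4^3 = 64, and a binary code would need words of at least five symbols:

```python
# Number of distinct codons for a given alphabet size and codon width.
def capacity(alphabet_size: int, width: int) -> int:
    return alphabet_size ** width

print(capacity(4, 2))  # 16: doublets cannot cover 20 amino acids + stop
print(capacity(4, 3))  # 64: triplets cover them, with room for redundancy
print(capacity(2, 5))  # 32: a binary code would need 5-symbol words
```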

That frozen accident means that good old luck would have hit the jackpot through trial and error among 1.5 × 10^84 possible genetic codes, a number comparable to the number of atoms in the whole universe. That puts any real possibility of chance providing the feat out of the question; by Borel's law, it is in the realm of impossibility. The maximum time available for the code to originate was estimated at 6.3 × 10^15 seconds. Natural selection would therefore have to evaluate roughly 2 × 10^68 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
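The required search rate follows from a single division of the two figures stated above (total codes to search over the time available):

```python
# Search-rate check: codes to search divided by seconds available
# gives the evaluation rate a blind search would need to sustain.
total_codes = 1.5e84        # figure for possible genetic codes
seconds_available = 6.3e15  # stated maximum time window
rate = total_codes / seconds_available
print(f"{rate:.1e} codes per second")  # ~2.4e68
```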

Victor A. Gusev; A. A. Arzamastsev: The nature of optimality of the DNA code (1997)
“the situation when Nature invented the DNA code surprisingly resembles designing a computer by man. If a computer were designed today, the binary notation would be hardly used. Binary notation was chosen only at the first stage, for the purpose to simplify at most the construction of decoding machine. But now, it is too late to correct this mistake”.
https://www.webpages.uidaho.edu/~stevel/565/literature/Genetic%20code%20-%20Lucky%20chance%20or%20fundamental%20law%20of%20nature.pdf

Julian Mejia: Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation (April 18, 2018)
Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular to enzyme-less reproduction, proliferation and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code. Carl Woese (1967), reproduced in the seminal paper of Yarus et al.: "I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon."
https://arxiv.org/pdf/1804.05939.pdf

The Genetic Code Part II: Not Mundane and Not Evolvable
https://www.youtube.com/watch?v=oQ9tAL2AM6M

Monya Baker: Hidden code in the protein code, 28 October 2010
Different codons for the same amino acid may affect how quickly mRNA transcripts are translated, and this pace can influence post-translational modifications. Despite being highly homologous, the mammalian cytoskeletal proteins beta- and gamma-actin carry notably different post-translational modifications: though both proteins are post-translationally arginylated, only arginylated beta-actin persists in the cell. This difference is essential for each protein's function.

To investigate whether synonymous codons might have a role in how arginylated forms persist, Kashina and colleagues swapped the synonymous codons between the genes for beta- and gamma-actin and found that the patterns of post-translational modification switched as well. Next, they examined translation rates for the wild-type forms of each protein and found that gamma-actin accumulated more slowly. Computational analysis suggested that differences between the folded mRNA structures might cause differences in translation speed. When the researchers added an antibiotic that slowed down translation rates, accumulation of arginylated actin slowed dramatically. Subsequent work indicated that N-arginylated proteins may, if translated slowly, be subjected to ubiquitination, a post-translational modification that targets proteins for destruction.

Thus, these apparently synonymous codons can help explain why some arginylated proteins but not others accumulate in cells. “One of the bigger implications of our work is that post-translational modifications are actually encoded in the mRNA,” says Kashina. “Coding sequence can define a protein's translation rate, metabolic fate and post-translational regulation.”
https://www.nature.com/articles/nmeth1110-874

Problem no.1
The genetic code system (a language) must be created, and the universal code is nearly optimal and maximally efficient

V A Ratner [The genetic language: grammar, semantics, evolution] 1993 May;29
The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics.
http://www.ncbi.nlm.nih.gov/pubmed/8335231

S J Freeland The genetic code is one in a million 1998 Sep
if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
http://www.ncbi.nlm.nih.gov/pubmed/9732450

Shalev Itzkovitz The genetic code is nearly optimal for allowing additional information within protein-coding sequences 2007 Apr; 17
DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/?report=classic

Problem no.2
The origin of the information to make the first living cells must be explained.

Rosario Gil Determination of the Core of a Minimal Bacterial Gene Set , Sept. 2004
Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes (which code for 13 protein complexes). Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell. These protein complexes could not emerge through evolution (mutations and natural selection), because evolution depends on DNA replication, which requires precisely these original genes and proteins (a chicken-and-egg problem). So the only mechanisms left are chance and physical necessity.
http://mmbr.asm.org/content/68/3/518.full.pdf

Literature from those who argue in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. Multiplying those together, we find that the maximum number of elementary-particle events in 30 billion years could only be 10^143. Yet the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are 1 in 10^451.
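The arithmetic in the paragraph above is plain exponent addition and can be checked in a few lines (a minimal sketch; the input figures are the text's own estimates, not independently derived):

```python
# Check the exponent arithmetic quoted above: 10^80 particles, each undergoing
# at most 10^45 events per second, over 10^18 seconds (~30 billion years).
particles = 10**80          # estimated elementary particles in the universe
events_per_second = 10**45  # fastest possible events per particle per second
seconds = 10**18            # approximate seconds in 30 billion years

max_events = particles * events_per_second * seconds
# Multiplying powers of ten adds the exponents: 80 + 45 + 18 = 143.
print(max_events == 10**143)  # True
```

Since 10^143 is far smaller than the 10^451 odds quoted for a single protein, the event budget falls short by hundreds of orders of magnitude on these figures.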

Paul Davies once said:
How did stupid atoms spontaneously write their own software…? Nobody knows… there is no known law of physics able to create information from nothing.

Problem no.3 
The genetic cipher

Yuri I Wolf On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization 2007 May 31
The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1894784/

John Maynard Smith, British biologist: The Major Transitions in Evolution, 1997
Maynard Smith described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes:

“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “it was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion, or a “good compromise”?

An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.
https://3lib.net/book/1102567/9707b4

That frozen accident would mean that good old luck hit the jackpot through trial and error amongst 1.5 × 10^84 possible genetic codes. That is roughly the number of atoms in the whole universe. It puts any real possibility of chance providing the feat out of the question; using Borel's law, it lies in the realm of impossibility. The maximum time available for the code to originate was estimated at 6.3 × 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
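The time-budget claim can be sanity-checked with a one-line division (a sketch using only the figures quoted in this paragraph; note that dividing the full 1.5 × 10^84 code space by the available time yields a required search rate even larger than the 10^55 per second cited, which presumably derives from a smaller estimate of the searchable code space):

```python
# Divide the quoted code space by the quoted time window to get the search
# rate an exhaustive trial-and-error process would need.
code_space = 1.5e84       # possible genetic codes (figure quoted above)
time_available = 6.3e15   # estimated seconds available for the code to originate

rate_needed = code_space / time_available
print(f"{rate_needed:.1e} codes per second")  # 2.4e+68
```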

To put it in other words: the task compares to inventing two languages, two alphabets, and a translation system, and having the information content of a book (for example, Hamlet) written in English and translated into Chinese on an extremely sophisticated hardware system. The conclusion that an intelligent designer had to set up the system is not based on missing knowledge (an argument from ignorance). We know that minds invent languages, codes, translation systems, ciphers, and complex, specified information all the time. The genetic code and its translation system are best explained through the action of an intelligent designer.

The genetic code could not be the product of evolution, since it had to be fully operational when life started (and so did DNA replication, upon which evolution depends). The only alternative to design is that random, unguided events originated it.




The origin of the genetic code

1. Creating a translation dictionary, for example from English to Chinese, always requires a translator who understands both languages.
2. Assigning words of one language to words of another language with the same meaning requires prior agreement on meaning in order to establish translation.
3. That is analogous to what we see in biology, where the ribosome translates the words of the genetic language composed of 64 codon words to the language of proteins, composed of 20 amino acids. 
4. The origin of such complex communication systems is best explained by an intelligent designer.
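The dictionary analogy in the points above can be made concrete with a few entries from the standard codon table (an illustrative sketch; only a handful of the 64 assignments are shown, with the full table mapping 61 sense codons to 20 amino acids plus 3 stop codons):

```python
# A small excerpt of the standard genetic code: a lookup from three-letter
# codon "words" to amino acids, including degeneracy and stop signals.
CODON_TABLE = {
    "AUG": "Met",                         # methionine; also the start codon
    "UUU": "Phe", "UUC": "Phe",           # degeneracy: two codons, one amino acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read an mRNA string three letters at a time and look up each codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGAUAA"))  # ['Met', 'Phe', 'Gly']
```

The point of the analogy is that nothing in the chemistry of the string "AUG" dictates methionine; the pairing lives entirely in the lookup, which in the cell is physically implemented by tRNAs and aminoacyl-tRNA synthetases.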

Lumen Microbiology: Mechanisms of Microbial Genetics
Translation of the mRNA template converts nucleotide-based genetic information into the “language” of amino acids to create a protein product.
https://courses.lumenlearning.com/microbiology/chapter/protein-synthesis-translation/

Suggesting that an unguided physical process can create a semiotic code is like suggesting that a rainbow can write poetry: it is never going to happen! Physics and chemistry alone do not possess the tools to create a concept. The only cause capable of creating conceptual semiotic information is a conscious, intelligent mind.

1. DNA is not merely a molecule with a pattern; it is an information storage mechanism, using the genetic code.
2. All codes whose origin we know are created by a conscious mind.
3. Therefore DNA was designed by a mind, and language and information are proof of the action of a Superintelligence.
http://evo2.org/read-prove-god-exists/

Codes always have a mental origin
1. In cells, the genetic code assigns 61 sense codons and 3 stop codons to 20 amino acids, using the ribosome as a translation mechanism.
2. All codes require arbitrary values to be assigned and determined to represent something else.
3. All codes require a translation mechanism, adapter, key, or process of some kind to exist prior to translation.
4. Foreknowledge is required both a) to get a functional outcome through the information system, and b) to set up the entire system.
5. Therefore, the translation directing the making of proteins used in life was most probably designed.

Codes always come from intelligence
1. In cells, the genetic code is the assignment (a cipher) of 64 triplet codons to 20 amino acids.
2. A code is a system of rules in which symbols, letters, words, etc. are assigned to something else. Transmitting information, for example, can be done through the translation of the symbols of the alphabetic letters into the symbols of kanji, the logographic characters used in Japan. That requires a common agreement on meaning.
3. Therefore, the assignment of triplet codons (triplet nucleotides) to amino acids must be pre-established by a mind. The origin of the genetic code is best explained by an intelligent designer.

The genetic code was most likely implemented by intelligence.
1. In communications and information processing, a code is a system of rules for converting information, such as a letter or word, into another form (another word, letter, etc.).
2. In translation, 64 genetic codons are assigned to 20 amino acids. The code refers to this assignment of codons to amino acids, and is thus the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, mapping, permuting.
4. The universal triplet-nucleotide genetic code can be the result either of a) random selection through evolution, or b) intelligent implementation.
5. We know by experience that performing value assignment and codification is always a process of intelligence with an intended result. Non-intelligence, i.e. matter, molecules, nucleotides, etc., has never been demonstrated to generate codes, and has neither intent nor distant goals, nor the foresight to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.

The argument of the origin of codes
1. In cells, information is encoded through the genetic code, a set of rules stored in DNA as sequences of nucleotide triplets called codons. The information distributed along a strand of DNA is biologically relevant. In computerspeak, genetic data are semantic data. Consider the way in which the four bases A, G, C, and T are arranged in DNA. As explained, these sequences are like letters in an alphabet, and the letters may spell out, in code, the instructions for making proteins. A different sequence of letters would almost certainly be biologically useless. Only a very tiny fraction of all possible sequences spell out a biologically meaningful message. Codons are used to translate genetic information into amino acid polypeptide sequences, which make proteins (the molecular machines, the workhorses of the cell). The information sent through the system, and the communication channels that permit encoding, sending, and decoding, are implemented in life by over 25 extremely complex molecular machine systems. These systems also perform error checking and repair to maintain genetic stability, minimize replication, transcription, and translation errors, and permit organisms to pass genetic information accurately to their offspring and survive. This system had to be set up before life began, because life depends on it.
2. A code is a system of rules where symbols, letters, words, or even sounds, gestures, or images are assigned to something else. Translating information through a key, code, or cipher can be done, for example, through the translation of the symbols of the alphabetic letters into the symbols of kanji, the logographic characters used in Japan.
3. Intelligent design is the most case-adequate explanation for the origin of the sequence-specific digital information (the genetic text) necessary to produce a minimal proteome to kick-start life. The assembly information stored in genes, and the assignment of codons (triplet nucleotides) to amino acids, must be pre-established by a mind. Assignment, which means designating, ascribing, corresponding, or correlating the meaning of characters through a code system, where symbols of one language are assigned to symbols of another language that mean the same, requires a common agreement on meaning in order to establish communication through encoding, sending, and decoding. Semantics, syntax, and pragmatics are always set up by intelligence. The origin of such complex communication systems is best explained by an intelligent designer.

1. The origin of the genetic cipher
1. Triplet codons must be assigned to amino acids to establish a genetic cipher. Nucleic-acid bases and amino acids do not recognize each other directly but have to deal via chemical intermediaries (tRNAs and aminoacyl-tRNA synthetases); there is no obvious reason why particular triplets should go with particular amino acids.
2. Other translation assignments are conceivable, but whatever cipher is established, the right amino acids must be assigned to permit polypeptide chains that fold into active, functional proteins. Functional amino acid chains are rare in sequence space. There are two possibilities to explain the correct assignment of the codons to the right amino acids: chance and design. Natural selection is not an option, since DNA replication is not yet set up at the stage prior to a self-replicating cell, and this assignment had to be established before.
3. If it were a lucky accident that happened by chance, luck would have hit the jackpot through trial and error amongst 1.5 × 10^84 possible genetic code tables. That is roughly the number of atoms in the whole universe. It puts any real possibility of chance providing the feat out of the question; using Borel's law, it lies in the realm of impossibility. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is universal. Put simply, the chemical lottery lacks the time necessary to find the universal genetic code.
4. We have not even considered that there are also over 500 possible amino acids, which would have to be sorted out to get only 20, and that all L-amino acids and D-sugars would have to be selected...
5. We know that minds do invent languages, codes, translation systems, ciphers, and complex, specified information all the time.
6. To put it in other words: the task compares to inventing two languages, two alphabets, and a translation system, and having the information content of a book (for example, Hamlet) created and written in English and translated into Chinese, through the invention and application of an extremely sophisticated hardware system.
7. The genetic code and its translation system are best explained through the action of an intelligent designer.

The genetic piano
1. The work of the gene regulatory network “corresponds to a pianist playing a piece of music.” Like keys on a piano, DNA is the blueprint to make the proteins that cells require. Epigenetic information provides dynamic, flexible instructions as to how, where, and when the information stored in DNA will be expressed.
2. There must be an origin of the information required to produce function. Who is the pianist and who is the conductor? The environment cannot be the director. Heredity cannot be the musician; it has no foresight to orchestrate the collection of processes organized into a meaningful, functional outcome.
3. Science is supposed to seek efficient and adequate causes, not just-so stories or appeals to chance based on circular reasoning. The only alternative explanation is therefore intelligent design, with a known cause sufficient to produce functional, instructional information: an intelligent agent.


The semantic argument
1. The 64 triplet codons (three-letter words) stored in DNA have meaning (semantics). Arbitrarily, they are assigned to 20 amino acids, the building blocks of proteins (the codon UUA, uracil/uracil/adenine, codes for leucine).
2. Codons are therefore information-bearing molecules. They inform the translation machinery which amino acid has to be added to the nascent polypeptide chain to make functional proteins.
3. Information is a disembodied, abstract entity independent of its physical carrier. Information is neither classical nor quantum; it is independent of the properties of the physical systems used for its processing.
4. The set-up of an information system based on semiotic information is always traced back to an intelligent source that sets it up for purposeful, specific goals.
5. The origin of the genetic code, based on semiotics, is therefore best explained by intelligent design.
https://arxiv.org/pdf/1402.2414.pdf
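The information-bearing capacity mentioned in point 2 can be put in numbers (a back-of-the-envelope sketch; the figures follow directly from the 4-letter alphabet and the triplet reading frame):

```python
import math

# A 4-letter alphabet read in triplets yields 4^3 = 64 distinct codons,
# i.e. log2(64) = 6 bits of raw capacity per codon. Distinguishing the
# 20 amino acids plus a stop signal needs only log2(21) ≈ 4.39 bits;
# the surplus capacity is absorbed by degeneracy (synonymous codons).
codons = 4 ** 3
bits_per_codon = math.log2(codons)
bits_required = math.log2(21)

print(codons, bits_per_codon, round(bits_required, 2))  # 64 6.0 4.39
```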

The wobble hypothesis points to an intelligent setup!
1. In translation, the wobble hypothesis is a set of four relationships. The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
2. Reading 5' to 3', the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many nucleotides the tRNA actually distinguishes. If the first nucleotide in the anticodon is a C or an A, pairing is specific and follows original Watson-Crick pairing; that is, only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific and, in fact, two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble: if it is the first nucleotide in the anticodon, then any of three bases in the original codon can be matched with the tRNA.
3. Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons, and those anticodons differ in either the second or third position (first or second position in the codon), then a different tRNA is required for that anticodon.
4. The minimum requirement to satisfy all possible codons (61, excluding the three stop codons) is 32 tRNAs: 31 tRNAs for the amino acids and one initiator tRNA. Aside from the obvious utility of wobble, namely that cells have a limited number of tRNAs and wobble allows for broad specificity, wobble base pairs have been shown to facilitate many biological functions. This has another amazing implication, which points to intelligent setup. The paper "The genetic code is one in a million" concedes: if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
5. All of this literally screams of intelligent design!
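The wobble rules described in point 2 can be written down as a small lookup (an illustrative sketch using Crick's standard pairing sets: C and A pair strictly, G and U each read two codon endings, and inosine reads three):

```python
# Crick wobble rules: the 5' (wobble) base of the tRNA anticodon determines
# which codon third-position bases it can read.
WOBBLE_PAIRS = {
    "C": {"G"},            # strict Watson-Crick: one codon only
    "A": {"U"},            # strict: one codon only
    "G": {"C", "U"},       # wobble: two codons
    "U": {"A", "G"},       # wobble: two codons
    "I": {"U", "C", "A"},  # inosine: three codons
}

def codons_read(anticodon):
    """Return all mRNA codons (written 5'->3') that an anticodon (5'->3') reads."""
    complement = {"A": "U", "U": "A", "G": "C", "C": "G"}
    wobble, mid, last = anticodon
    # Codon positions 1 and 2 pair strictly (antiparallel) with anticodon
    # positions 3 and 2; only the codon's third position wobbles.
    prefix = complement[last] + complement[mid]
    return {prefix + third for third in WOBBLE_PAIRS[wobble]}

print(sorted(codons_read("GAA")))  # ['UUC', 'UUU'] — both Phe codons
```

A single tRNA-Phe with anticodon GAA thus covers both UUU and UUC, which is why fewer than 61 tRNAs suffice.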

1. First, it would seem that the early codons need have used only two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognizing this difficulty, most researchers assume that the code used 3-base codons from the outset; which was remarkably fortuitous or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
2. Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
3. Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
4. Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesized and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!

https://evolutionunderthemicroscope.com/ool02.html
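The doublet-versus-triplet arithmetic behind point 1 above is simple to state (a minimal sketch):

```python
# A doublet code over 4 bases yields 4^2 = 16 codons, too few for 20 amino
# acids; a triplet code yields 4^3 = 64, enough for all 20 plus stop signals,
# with room left over for degeneracy.
bases = 4
doublet_codons = bases ** 2
triplet_codons = bases ** 3

print(doublet_codons, triplet_codons)  # 16 64
assert doublet_codons < 20 <= triplet_codons
```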

1. D
2. D -> A & B & C
3. A & B & C -> requires Intelligence
4. Therefore Intelligence

A: Information, biosemiotics (instructional, complex mRNA codon sequences transcribed from DNA)
B: Translation mechanism (an adapter, key, or process of some kind existing prior to translation: the ribosome)
C: Genetic code
D: Functional proteins

1. Life depends on proteins (molecular machines) (D). Their function depends on the correct arrangement of a specified, complex sequence of amino acids.
2. That depends on the translation of genetic information (A) through the ribosome (B) and the genetic code (C), which assigns the 61 sense codons to 20 amino acids and reserves 3 codons as stop signals.
3. Instructional complex information (biosemiotics: semantics, syntax, and pragmatics (A)) is only generated by intelligent beings with foresight. Only intelligence with foresight can conceptualize and instantiate complex machines with specific purposes, like translation using adapter keys (ribosome, tRNA, aminoacyl-tRNA synthetases (B)). All codes require arbitrary values to be assigned, and determined by agency, to represent something else (genetic code (C)).
4. Therefore, proteins, being the product of semiotic/algorithmic information, including translation through the genetic code and the manufacturing system (information directing manufacturing), are most probably the product of a divine intelligent designer.
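The relationships in premises 1 and 2 can be made concrete with a short Python sketch (illustrative only: the table below is the standard genetic code, and the example mRNA string is invented for the demonstration):

```python
# Standard genetic code: 64 codons -> 20 amino acids (1-letter symbols) + '*' stops.
# Codons are enumerated with bases in the order U, C, A, G for each position.
BASES = "UCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {
    a + b + c: AMINO_ACIDS[16 * i + 4 * j + k]
    for i, a in enumerate(BASES)
    for j, b in enumerate(BASES)
    for k, c in enumerate(BASES)
}

def translate(mrna: str) -> str:
    """Translate an mRNA sequence codon by codon, halting at a stop codon."""
    peptide = []
    for pos in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[pos:pos + 3]]
        if aa == "*":          # UAA, UAG, UGA: release factors bind, not tRNAs
            break
        peptide.append(aa)
    return "".join(peptide)

# 61 sense codons carry amino acid assignments; 3 are stop signals:
sense = [c for c, aa in CODON_TABLE.items() if aa != "*"]
print(len(sense))                  # 61
print(translate("AUGCUUUGGUAA"))   # Met-Leu-Trp -> "MLW"
```

The point of the sketch is that translation is a pure symbol-lookup: nothing in the string "AUG" chemically resembles methionine; the dictionary supplies the assignment.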

The problem of translation through the Ribosome is threefold:

1. The origin of Information stored in the genome.

1. Semiotic functional information is not a tangible entity, and as such, it is beyond the reach of, and cannot be created by, any undirected physical process.
2. This is not an argument about probability. Conceptual semiotic information is simply beyond the sphere of influence of any undirected physical process. To suggest that a physical process can create semiotic code is like suggesting that a rainbow can write poetry... it is never going to happen!  Physics and chemistry alone do not possess the tools to create a concept. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
3. Since life depends on a vast quantity of semiotic information, life is no accident and provides powerful positive evidence that we have been designed. A scientist working at the cutting edge of our understanding of the programming information in biology described what he saw as an “alien technology written by an engineer a million times smarter than us”.

2. The origin of the adapter, key, or process of some kind to exist prior to translation = ribosome

1. Ribosomes have the function of translating genetic information into proteins. According to Craig Venter, the ribosome is “an incredibly beautiful complex entity” which requires a minimum of 53 proteins. It is nothing if not an editorial perfectionist… the ribosome exerts far tighter quality control than anyone ever suspected over its precious protein products… Ribosomes are molecular factories with complex, machine-like operations. They carefully sense, transfer, and process information, continually exchanging and integrating it during the various steps of translation, within themselves, at a molecular scale, and, amazingly, they even make decisions. They communicate in a coordinated manner, and information is integrated and processed to enable optimized ribosome activity. Strikingly, many of the ribosome's functional properties go far beyond the skills of a simple mechanical machine. They can halt the translation process on the fly and coordinate extremely complex movements. The whole system incorporates 11 ingenious error-check and repair mechanisms to guarantee faithful and accurate translation, which is life-essential.
2. For the assembly of this protein-making factory, consisting of multiple parts, the following is required: genetic information to produce the ribosome assembly proteins, chaperones, all ribosome subunits, and assembly cofactors; a full set of tRNAs; a full set of aminoacyl-tRNA synthetases; the signal recognition particle; elongation factors; mRNA; etc. The individual parts must be available, must precisely fit together, and their assembly must be coordinated. A ribosome cannot perform its function unless all subparts are fully set up and interlocked.
3. The making of a translation machine makes sense only if there is a source code and information to be translated. Eugene Koonin: “Breaking the evolution of the translation system into incremental steps, each associated with a biologically plausible selective advantage, is extremely difficult even within a speculative scheme, let alone experimentally.” Speaking of ribosomes: they are so well-structured that when broken down into their component parts by chemical catalysts (into long molecular fragments and more than fifty different proteins), they re-form into a functioning ribosome as soon as the divisive chemical forces have been removed, independent of any enzymes or assembly machinery, and carry on working. Design some machinery that behaves like this and I personally will build a temple to your name! Natural selection would not select for components of a complex system that would be useful only upon the completion of that much larger system. The origin of the ribosome is better explained by a brilliant, intelligent, and powerful designer than by mindless natural processes, chance, and/or evolution, since we observe all the time the capability of minds to produce machines and factories.

3. The origin of the genetic code

1. A code is a system of rules in which symbols, letters, words, etc. are assigned to something else. Transmitting information, for example, can be done by translating the symbols of alphabetic letters into the symbols of kanji, the logographic characters used in Japan. In cells, the genetic code is the assignment (a cipher) of 64 triplet codons to 20 amino acids and a set of stop signals.
2. Assigning meaning to characters through a code system, in which symbols of one language are assigned to symbols of another language that mean the same, requires a common agreement about meaning. The assignment of triplet codons (nucleotide triplets) to amino acids must be pre-established by a mind.
3. Therefore, the origin of the genetic code is best explained by an intelligent designer. 


Perry Marshall: Is DNA a Code?
http://evo2.org/dna-atheists/dna-code/
Codes always involve a system of symbols that represent ideas or plans.

According to the Field Museum, DNA base pairs are “codes, or instructions, that specify the characteristics of an organism, from a body’s sex to the color of a pea”
My comment: This very sentence is the cause of a lot of confusion. The information stored in genes, i.e. the sequence of codons, which instructs the sequence of amino acids, is NOT the genetic code. The genetic code is the assignment of 64 trinucleotide codons to 20 amino acids.

Claim: DNA is a set of instructions only in the same sense that chemistry itself is a set of instructions. All molecules know or decode is the laws of physics.
Reply: The bits and bytes on a hard drive don’t “know” anything either, they simply obey the laws of physics. It’s a purely electro-mechanical process. But they still have to be programmed to do what they do. Computer programs don’t emerge naturally, they are designed.  A book cannot be reduced to paper and ink.

Codes are generally expressed as binary relations or as geometric correspondences between a domain and a counterdomain; one speaks of mapping in the latter case. Thus, in the International Morse Code, 52 symbols consisting of sequences of dots and dashes map on 52 symbols of the alphabet, numbers and punctuation marks; or in the genetic code, 61 of the possible symbol triplets of the RNA domain map on a set of 20 symbols of the polypeptide counterdomain.

The data on your computer cannot be explained purely in terms of the materials your computer is made of, which is as good an illustration as any of why purely materialistic interpretations fail. Many years ago a discussion about this topic would have seemed hopelessly abstract to most people, but now we live in the information age. We all know exactly what information is, and we all understand that information is the entity that defines living things, man-made things, and all designs.

Codes are the product of a mind. A thinking entity outside the cell has to be responsible for both DNA and protein, which use different languages and yet can communicate with each other. One would never dare insist that random ordering could create the Morse code or the Braille reading method. That would be utterly irrational. DNA base sequencing cannot be explained by chance or physical necessity any more than the information in a newspaper headline can be explained by reference to the chemical properties of ink. Nor can the conventions of the genetic code that determine the assignments between nucleotide triplets and amino acids during translation be explained in this manner. The genetic code functions like a grammatical convention in a human language. The properties and shapes of building bricks do not determine their arrangement in the construction of a house or a wall. Similarly, the properties of biological building blocks do not determine the arrangement of monomers into functional, information-bearing DNA and RNA polynucleotides, nor into protein strands.
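The arbitrariness claimed here, that nothing in the symbols themselves fixes the assignments, can be illustrated with a toy sketch. The four codon/amino-acid pairs in `code_a` are just examples for the demonstration, not a claim about real chemistry:

```python
import random

# A code is a conventional mapping; nothing in the structure of a lookup table
# forces one assignment rather than another. Toy illustration with 4 codons:
code_a = {"CUU": "Leu", "GCU": "Ala", "AAA": "Lys", "UGG": "Trp"}

# Permute the amino-acid assignments: the result is an equally self-consistent
# cipher -- it simply "means" something different under the new convention.
rng = random.Random(0)
values = list(code_a.values())
rng.shuffle(values)
code_b = dict(zip(code_a.keys(), values))

def decode(code, codons):
    """Look each codon up in the given assignment table."""
    return [code[c] for c in codons]

msg = ["CUU", "UGG"]
print(decode(code_a, msg))  # ['Leu', 'Trp'] under the first convention
print(decode(code_b, msg))  # same codons, possibly different meaning under the permuted one
```

Both tables are formally valid codes; only a convention, not chemistry or logic, picks one over the other.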

The cell often employs a functional logic that mirrors our own, but exceeds it in the elegance of its execution. “It’s like we are looking at 8.0 or 9.0 versions of design strategies that we have just begun to implement. When I see how the cell processes information,” he said, “it gives me an eerie feeling that someone else figured this out before we got here.”

The attribution of the design has to be to God or to purely materialistic mechanisms. The gigantic pill to swallow in the second case is the fact that the output is the product of code, and that the molecular machinery needed to replicate the code (for inheritance/perpetuation), transcribe it, translate it into protein through many intermediate steps requiring highly specific operations, and to repair it in the event that it is damaged (to preserve/protect it), or destroy it in the event that it suffers irreparable damage (to forestall cancer), is just too much to swallow. DNA had an intentional purpose. That's the only reasonable conclusion I can come to.

In the Lewis Carroll classic Through the Looking-Glass, Humpty Dumpty states, “When I use a word, it means just what I choose it to mean — neither more nor less.” In turn, Alice (of Wonderland fame) says, “The question is, whether you can make words mean so many different things.” All organisms on Earth use a genetic code, which is the language in which the building plans for proteins are specified in their DNA. It has long been assumed that there is only one such “canonical” code, so each word means the same thing to every organism. While a few examples of organisms deviating from this canonical code had been serendipitously discovered before, these were widely thought of as very rare evolutionary oddities, absent from most places on Earth and representing a tiny fraction of species. Now, this paradigm has been challenged by the discovery of large numbers of exceptions from the canonical genetic code, published by a team of researchers from the U.S. Department of Energy Joint Genome Institute (DOE JGI) in the May 23, 2014 edition of the journal Science.

It has been 60 years since the discovery of the structure of DNA and the emergence of the central dogma of molecular biology, wherein DNA serves as a template for RNA, and these nucleotides form triplets of letters called codons. There are 64 codons, and all but three of these triplets encode actual amino acids, the building blocks of protein. The remaining three are “stop codons” that bring the molecular machinery to a halt, terminating the translation of RNA into protein. Each has a given name: Amber, Opal, and Ochre. When an organism's machinery reads the instructions in the DNA, builds a protein composed of amino acids, and reaches Amber, Opal, or Ochre, the triplet signals that it has arrived at the end of a protein.

Origin and evolution of the genetic code: the universal enigma
https://reasonandscience.catsboard.com/t2001-origin-and-evolution-of-the-genetic-code-the-universal-enigma

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
https://reasonandscience.catsboard.com/t1404-the-genetic-code-is-nearly-optimal-for-allowing-additional-information-within-protein-coding-sequences

The genetic code cannot arise through natural selection
https://reasonandscience.catsboard.com/t1405-the-genetic-code-cannot-arise-through-natural-selection

The origin of the genetic cipher, the most perplexing problem in biology
https://reasonandscience.catsboard.com/t2267-the-origin-of-the-genetic-cipher-the-most-perplexing-problem-in-biology

“Evolution Of Genetic Code” Article Illustrates Fundamental Problem
https://uncommondescent.com/evolution/evolution-of-genetic-code-article-illustrates-fundamental-problem/

Large Numbers Of Exceptions To The Canonical Genetic Code
https://uncommondescent.com/intelligent-design/large-numbers-of-exceptions-to-the-canonical-genetic-code/




Last edited by Otangelo on Sun Jul 18, 2021 7:05 pm; edited 11 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

The hardware & software to make proteins, what mechanism explains best its origin?

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#7010

What is commonly discussed in theism vs. atheism debates is where the information stored in DNA comes from. That is an enigma which the biological sciences have never addressed in a convincing manner, and science generally doesn't go further than hypothetical guesswork.

But far more than just the origin of the message, the instructions must be explained.

Shakespeare's Hamlet came undoubtedly from Shakespeare's mind.
But the alphabet he used to convey his story was pre-existent. He learned it and used it to write down his drama.

But where the alphabet came from, is an entirely different issue.
So it is in genetics. DNA uses a genetic code composed of 64 entries: codons, triplets of nucleotide letters that together form one genetic "word". Each of these is ascribed to one of the twenty amino acids, or to a stop signal. Since there are only twenty amino acids used to make proteins, several different codons can mean the same amino acid; this redundancy is very useful, since it makes the system more robust and error-tolerant. During the transcription and translation process, errors are minimized.
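That redundancy can be quantified directly from the standard table; here is a minimal sketch (the 64-character string encodes the standard code with codons enumerated in U, C, A, G order):

```python
# Quantifying the redundancy of the standard genetic code (illustrative sketch).
from collections import Counter

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
TABLE = {}
i = 0
for a in BASES:
    for b in BASES:
        for c in BASES:
            TABLE[a + b + c] = AA[i]
            i += 1

synonyms = Counter(TABLE.values())
print(synonyms["L"], synonyms["M"], synonyms["*"])  # 6 codons for Leu, 1 for Met, 3 stops

# What fraction of all single-nucleotide substitutions in sense codons
# leave the encoded amino acid unchanged (i.e. are silent)?
silent = total = 0
for codon, aa in TABLE.items():
    if aa == "*":
        continue
    for pos in range(3):
        for nb in BASES:
            if nb != codon[pos]:
                mut = codon[:pos] + nb + codon[pos + 1:]
                total += 1
                if TABLE[mut] == aa:
                    silent += 1
print(f"{silent}/{total} = {silent / total:.2f} of point mutations are silent")
```

Roughly a quarter of random point mutations turn out to be silent under the standard code, mostly because of the third-position degeneracy.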

Science has discovered that the genetic code is best suited for its task among at least one million other possible codes.

And the selection of amino acids is likewise the best suited for the purpose of constructing molecular machines, enzymes, and proteins.

Now that is another unresolved question: how did the genetic code "alphabet" emerge on the prebiotic earth?

How were the 64 genetic codons ascribed to 20 amino acids?

These questions are among the most enigmatic in the biological sciences, without good answers.

Besides the above-mentioned problems, which can be considered software problems, there is also the question of how the hardware emerged.

In order for the translation of messenger RNA into amino acids to occur, there are adapter molecules: transfer RNAs (tRNAs).

Transfer RNA, and its biogenesis
https://reasonandscience.catsboard.com/t2058-transfer-rna-and-its-biogenesis

tRNAs are very specific and complex molecules, and their biogenesis follows several steps, requiring a significant number of proteins and enzymes which are themselves enormously complex, not only in their structure but also in their own biogenesis. So the question in the end arises: did natural processes have the foresight of the end product, tRNA, to make these highly specific, nanorobot-like molecular machines which remove, add, and modify nucleotides? If not, how could they have arisen, since, without the end goal, there would be no function for them? These enzymes are all specifically made for the production of tRNAs. And tRNA is essential for life.

Another essential central player, which works in an interdependent manner:

Aminoacyl-tRNA synthetases.
https://reasonandscience.catsboard.com/t2280-aminoacyl-trna-synthetases

The synthetases have several active sites that enable them to:

(1) recognize a specific amino acid,
(2) recognize a specific corresponding tRNA(with a specific anticodon),
(3) react the amino acid with ATP (adenosine triphosphate) to form an AMP (adenosine monophosphate) derivative, and then, finally,
(4) link the specific tRNA molecule in question to its corresponding amino acid. Current research suggests that the synthetases recognize particular three-dimensional or chemical features (such as methylated bases) of the tRNA molecule. In virtue of the specificity of the features they must recognize, individual synthetases have highly distinctive shapes that derive from specifically arranged amino-acid sequences. In other words, the synthetases are themselves marvels of specificity.

And there is, of course, the Ribosome, a veritable ultracomplex factory making proteins:

Ribosomes amazing nanomachines
https://reasonandscience.catsboard.com/t1661-translation-through-ribosomes-amazing-nano-machines

* Each cell contains around 10 million ribosomes, i.e. 7000 ribosomes are produced in the nucleolus each minute.
* Each ribosome contains around 80 proteins, i.e. more than 0.5 million ribosomal proteins are synthesized in the cytoplasm per minute.
* The nuclear membrane contains approximately 5000 pores. Thus, more than 100 ribosomal proteins are imported from the cytoplasm to the nucleus per pore and minute. At the same time 3 ribosomal subunits are exported from the nucleus to the cytoplasm per pore and minute.
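Those throughput figures are mutually consistent, as a quick arithmetic cross-check shows (using only the numbers quoted above):

```python
# Sanity-checking the ribosome throughput figures quoted above.
ribosomes_per_min = 7000        # ribosomes produced in the nucleolus per minute
proteins_per_ribosome = 80      # ribosomal proteins per ribosome
pores = 5000                    # nuclear pores in the nuclear membrane

proteins_per_min = ribosomes_per_min * proteins_per_ribosome
print(proteins_per_min)                 # 560000 -> "more than 0.5 million" per minute
print(proteins_per_min / pores)         # 112.0  -> "more than 100" imported per pore and minute
print(ribosomes_per_min * 2 / pores)    # 2.8    -> ~3 subunits exported per pore and minute
```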

But these are just a few of the many players essential to make proteins:

The interdependent and irreducible structures required to make proteins
https://reasonandscience.catsboard.com/t2039-the-interdependent-and-irreducible-structures-required-to-make-proteins

https://reasonandscience.catsboard.com

Otangelo


Admin

Many atheists demonstrate a faulty understanding of how things in nature work.

Repeatedly, I have heard atheists say: the origin of life depends just on chemicals; it's basically chemical reactions that over time increased in complexity. That is a foolish simplification. Life depends on three basic things which are essential: energy, matter, and information. While atheists are used to thinking that we are the simpletons, I regard it as more and more important to break down what happens in nature into analogies and a language that everyone can understand, in order to explain concepts that are implemented in such a complex manner that science is still far from fully understanding and describing what we see and observe in the natural world.

One common misconception is that natural principles are merely discovered and described by us. Consider two cans of Coca-Cola: one with sugar, the other diet. Both bear information that we can describe; the labels transmit to us that one can contains Coca-Cola and the other Diet Coke. But that information does not occur naturally. A chemist invented the formula for making Coke and Diet Coke, and that depends not on descriptive but on PREscriptive information. The same occurs in nature. We discover that DNA contains a genetic code, but the rules upon which the genetic code operates are PREscriptive. The rules are arbitrary: the genetic code is CONSTRAINED to behave in a certain way. Chemical principles govern specific RNA interactions with amino acids, but principles that govern have to be set by? Yes, precisely what atheists try to avoid at any cost: INTELLIGENCE.

There is no physical necessity that the triplet of nucleotides forming the codon CUU (cytosine, uracil, uracil) be assigned to the amino acid leucine. Intelligence assigns and sets rules. For translation, each of these codons requires a tRNA molecule that has an anticodon with which it can stably base-pair with the messenger RNA (mRNA) codon, like lock and key. So on one side of the tRNA there is the anticodon sequence complementary to CUU, and on the other side of the tRNA molecule there is a site to attach the assigned amino acid, leucine. And here comes the BIG question: how was that assignment set up? How did it come to be that the tRNA reading CUU is charged with leucine? The two binding sites are distant from one another; there is no chemical reaction physically constraining that order or relationship. That is a BIG mystery, which science is attempting to explain by natural, unguided mechanisms, but without success. Here we have the CLEAR imprint of an intelligent mind that was necessary to set these rules.
That led Eugene Koonin to confess in the paper: "Origin and evolution of the genetic code: the universal enigma" :  It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

In the genetic code, there are 4^3 = 64 possible codons (tri-nucleotide sequences). Atheists also mock the description of the genetic code as a language and claim it is not justified. But that is not true. In the standard genetic code, three of these 64 mRNA codons (UAA, UAG, and UGA) are stop codons. These terminate translation by binding to release factors rather than tRNA molecules; they instruct the ribosome to stop the polymerization of a given amino acid strand. Did unguided natural occurrences, in the vast sequence space of possibilities, suddenly find by lucky accident the requirement that an amino acid polymer forming a protein must have a defined, limited size that has to be INSTRUCTED by the genetic instructions, and for that reason assign release factors rather than amino acids to specific codon sequences, in order to be able to instruct the termination of an amino acid string? That makes, frankly, no sense whatsoever. Not only that: this characterizes factually that the genetic code IS a language. That is described in the following science paper, The genetic language: grammar, semantics, evolution 2: The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by an alphabet, a grammar, a collection of punctuation marks and regulatory sites, and semantics.

Since there are 61 sense codons, logically there could be up to 61 tRNAs, yet most organisms have only around 45. Some tRNA species can pair with multiple synonymous codons, all of which encode the same amino acid. Movement ("wobble") of the base in the 5' anticodon position allows small conformational adjustments that affect the overall pairing geometry between the anticodon of the tRNA and the codon.

These notions led Francis Crick to formulate the wobble hypothesis, a set of four relationships explaining these naturally occurring attributes.

1. The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
2. When reading 5' to 3', the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many codons the tRNA can actually distinguish.
If the first nucleotide in the anticodon is a C or an A, pairing is specific and follows original Watson-Crick pairing; that is, only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific, and in fact two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble, in that if it is the first nucleotide in the anticodon, then any of three bases in the original codon can be matched with the tRNA.
3. Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons and those anticodons differ in either the second or third position (first or second position in the codon) then a different tRNA is required for that anticodon.
4. The minimum requirement to satisfy all possible codons (61, excluding the three stop codons) is 32 tRNAs: 31 tRNAs for the amino acids and one initiator tRNA.
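Crick's rules above amount to a small lookup table; here is a sketch (the anticodon examples GAA and IGC are standard textbook illustrations; 'I' is inosine, a modified base found in some tRNA anticodons):

```python
# Crick's wobble rules: which codon third-position bases the first (5')
# base of the anticodon can read.
WOBBLE = {
    "C": {"G"},            # strict Watson-Crick: one codon readable
    "A": {"U"},            # strict Watson-Crick: one codon readable
    "U": {"A", "G"},       # wobble: two codons readable
    "G": {"U", "C"},       # wobble: two codons readable
    "I": {"U", "C", "A"},  # inosine reads three codons
}

def codons_read(anticodon: str) -> list[str]:
    """Codons (5'->3') readable by an anticodon (given 5'->3'), with wobble
    at the codon's third position. Pairing is antiparallel, so anticodon
    positions 3 and 2 pair strictly with codon positions 1 and 2."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    stem = comp[anticodon[2]] + comp[anticodon[1]]
    return [stem + third for third in sorted(WOBBLE[anticodon[0]])]

print(codons_read("GAA"))  # a single tRNA reads both Phe codons, UUC and UUU
print(codons_read("IGC"))  # tRNA-Ala with inosine reads GCA, GCC and GCU
```

This is why fewer than 61 tRNA species suffice: one adapter can legitimately service several synonymous codons.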

Aside from the obvious utility of wobble (cells have a limited number of tRNAs, and wobble allows broader recognition), wobble base pairs have been shown to facilitate many biological functions. This has another AMAZING implication which points to an intelligent setup. The science paper The genetic code is one in a million confesses:
If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
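The kind of comparison behind that result can be sketched in a toy form. The sketch below permutes which amino acid each synonymous codon block encodes and scores every code by how strongly single-nucleotide changes shift Kyte-Doolittle hydropathy. The published studies use polar requirement and translation-error weightings, so the fraction printed here is only illustrative, not the "one in a million" figure itself:

```python
import random
from itertools import product

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(p) for p in product(BASES, repeat=3)]
STD = dict(zip(CODONS, AA))  # the standard genetic code

# Kyte-Doolittle hydropathy values for the 20 amino acids
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

def cost(code):
    """Mean squared hydropathy change over single-nucleotide substitutions
    between sense codons (lower = more error-tolerant)."""
    total = n = 0
    for codon in CODONS:
        if code[codon] == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = codon[:pos] + b + codon[pos + 1:]
                if code[mut] == "*":
                    continue
                total += (HYDRO[code[codon]] - HYDRO[code[mut]]) ** 2
                n += 1
    return total / n

def random_code(rng):
    """Keep the block structure (and stops) of the standard code, but shuffle
    which amino acid each block encodes (Freeland/Hurst-style randomization)."""
    aas = sorted(set(AA) - {"*"})
    shuffled = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else shuffled[a]) for c, a in STD.items()}

rng = random.Random(42)
std_cost = cost(STD)
costs = [cost(random_code(rng)) for _ in range(500)]
frac_better = sum(c < std_cost for c in costs) / len(costs)
print(std_cost, frac_better)  # the standard code outperforms the vast majority
```

Even this crude, unweighted measure already shows the standard code sitting far out in the tail of the distribution of shuffled codes.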

All of this screams DESIGN!! But rather than science giving honor to God, scientists are obliged to confess ignorance, because pointing to God is considered not science, but religion.

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#7855


1. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/
2. http://www.ncbi.nlm.nih.gov/pubmed/8335231
3. https://en.wikipedia.org/wiki/Wobble_base_pair#:~:text=A%20wobble%20base%20pair%20is,hypoxanthine%2Dcytosine%20(I%2DC).
4. http://www.ncbi.nlm.nih.gov/pubmed/9732450

https://reasonandscience.catsboard.com

Otangelo


Admin

ULRICH E. STEGMANN (2004): The arbitrariness of the genetic code

The genetic code has been regarded as arbitrary in the sense that the codon-amino acid assignments could be different than they actually are. This general idea has been spelled out differently by previous, often rather implicit accounts of arbitrariness. They have drawn on the frozen accident theory, on evolutionary contingency, on alternative causal pathways, and on the absence of direct stereochemical interactions between codons and amino acids. It has also been suggested that the arbitrariness of the genetic code justifies attributing semantic information to macromolecules, notably to DNA. I argue that these accounts of arbitrariness are unsatisfactory. I propose that the code is arbitrary in the sense of Jacques Monod’s concept of chemical arbitrariness: the genetic code is arbitrary in that any codon requires certain chemical and structural properties to specify a particular amino acid, but these properties are not required in virtue of a principle of chemistry. I maintain that the code’s chemical arbitrariness is neither sufficient nor necessary for attributing semantic information to nucleic acids.

In data processing systems, information schemes that minimize and control errors are essential. Engineers, for example, work diligently to protect the integrity of data processed by the various terrestrial and satellite communications systems in place today. These systems and associated machines enable reliable communications on a truly global scale. Advanced coding techniques have been developed to obtain reliable information processing; these techniques play an important role in maintaining the high reliability of data in spite of the many error-inducing characteristics of a typical communications system.

Several man-made coding techniques and information-processing systems find their analogues in biochemistry and biological information processing. Robust information transmission and minimization of mutational errors are life-essential. Redundancy is the most basic property of any error-correction scheme, and remarkably, it exists within the genetic system. All methods of error-control coding are based on adding redundancy to the transmitted information. Since the genetic information is redundant, and since the genetic code itself is also redundant, the existence of error-control mechanisms ensures a high degree of reliability in the transmission and expression of genetic information.
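The basic principle, that redundancy buys error tolerance, is easy to demonstrate with the simplest man-made scheme, a triple-repetition code. This is an engineering analogy only, not a model of any specific cellular mechanism:

```python
def encode(bits):
    """Triple-repetition code: each bit is transmitted three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three recovers the message
    as long as at most one symbol per group was corrupted."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] ^= 1                    # corrupt one symbol in transit
print(decode(sent) == message)  # True: the redundancy absorbed the error
```

The cost is bandwidth (three symbols per bit); the benefit is that a single corrupted symbol no longer corrupts the message, which is the same trade-off redundancy makes possible in the genetic system.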

1. https://sci-hub.ren/10.1023/b:biph.0000024412.82219.a6



Last edited by Otangelo on Tue Sep 13, 2022 4:55 pm; edited 2 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

More Non-Random DNA Wonders

(1)  The codon bases have a non-random correlation with the kind of amino acids which they code for.  The first of the three letters relates to the kind of amino acid the codon stands for, giving the language a consistent meaning.

This undoubtedly helps the error-checking machinery, just as the quickest kind of computer program to debug is one in which variable names include a consistent prefix classifying them as holding a date, integer, text, array, database value, or whatever. Without this, you can't debug at a rapid pace, because every variable name needs to be consciously checked.

Somehow it seems the codon table uses the rules of good programming:

[Image: table of codon-to-amino-acid assignments]
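The claimed non-randomness of the table can be probed numerically. A rough sketch (it checks, for each of the three codon positions, how much knowing that position narrows down the Kyte-Doolittle hydropathy of the encoded amino acid, by comparing within-group variance to the overall variance):

```python
from itertools import product
from statistics import pvariance, mean

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
TABLE = dict(zip(("".join(p) for p in product(BASES, repeat=3)), AA))

# Kyte-Doolittle hydropathy values for the 20 amino acids
HYDRO = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
         "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
         "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
         "Y": -1.3, "V": 4.2}

sense = {c: a for c, a in TABLE.items() if a != "*"}
overall = pvariance([HYDRO[a] for a in sense.values()])

within = {}
for pos in range(3):
    groups = {}
    for codon, a in sense.items():
        groups.setdefault(codon[pos], []).append(HYDRO[a])
    within[pos] = mean(pvariance(v) for v in groups.values())
    print(f"codon position {pos + 1}: mean within-group variance "
          f"{within[pos]:.2f} (overall {overall:.2f})")
```

In this simple measure the second codon position turns out to be the most informative about amino acid character (all NUN codons encode hydrophobic residues, all NAN codons polar ones), which still supports the broader point being made here: the assignments are organized, not scattered at random.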

(2) The effect of mistranslations is called the "load on the code". It is minimized by the current arrangement to such an extent that only about 3 in 100,000 other possible mappings might have a safer error rate, depending on their deleterious effect on overall DNA function, since a single change in the codon mapping would cause huge molecular changes throughout the length of the three-billion-base-pair system. Any change to the mapping would require modifications to all the interpretation and duplication machinery, which seems geared up for this specific arrangement.

But this 3 in 100,000 statistic assumes all 100,000 alternatives already have the advantage of the type-significant first letter of the codon (detailed in (1) above).  Therefore if one were to include all the truly random arrangements – where the first letter was not weighted towards codon relevance – the disproportion would be vastly greater.

To give you an idea of how much greater, the number of completely random alternatives would be 64 × 63 × 62 × … × 45, which, if we forget about start and stop codons, I work out as 47,732,870,256,486,900,000,000,000,000,000,000, or more than 47 billion trillion trillion. Subtract 36,267,774,588,438,900,000, representing the 3 results per 100,000 estimated as being more fault-tolerant than the current design, and you get a worse-performing set of roughly 47 × 10^33.
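That product is the number of ordered ways to pick 20 of the 64 codons, and the arithmetic can be checked in a few lines (this verifies the magnitude of the figure only; whether this is the right combinatorial model for "random codes" is a separate question):

```python
import math

# 64 * 63 * 62 * ... * 45  (twenty factors), the product quoted above
n = 1
for k in range(64, 44, -1):
    n *= k

# Same quantity via the standard library: ordered selections of 20 from 64
assert n == math.perm(64, 20)
print(f"{n:.3e}")  # about 4.77e34, i.e. "more than 47 billion trillion trillion"
```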

This means that if you had a trillion planets around every star in the universe, you could try a different arrangement on every planet and have one chance, over all of these attempts combined, to get a more fault-tolerant system.

Or you could try out a different mapping system on a different planet circling every known star in the universe, each and every day for 3 billion years, and stand a chance of getting a better mapping only once. And for each test version, every day, you'd need to create a complex life-form from scratch, subject it to every imaginable adverse circumstance (seasonal, predatory, infectious, organ failure, sensory development, etc.), and gauge its reproductive success in only 24 hours before throwing it out and organising a new one.

Any rival mapping system would need to be evaluated regarding its effect on the speed of protein assembly, as well as the combined molecular effect of the billions of changes throughout the length of the chromosomes. All things considered, the system we have now must be hugely error-tolerant to allow life forms to remain unchanged for up to 400 million years, during all of which time the codon mapping had to remain constant.

(3) The coding system is given further weight by the discovery that, within the ribosome, anticodons are enriched near the areas relevant to their function, to a level such that the probability of this being a random setup is minuscule. Not minuscule in the way a likelihood of 6.9 is a very small step away from an impossibility level of 7, but less than 0.0000000000000000001%, less than one millionth of a trillionth.

In other words, the ribosome behaves as if it’s already geared up and ready to work with the existing code – and is assumed to be one of the most ancient parts of the whole DNA engine.

https://reasonandscience.catsboard.com

Otangelo


Admin

We now come to the central question: how did specific associations between amino acids and nucleotides originate? It is clear that no crude picture of the process will work. Problems remain. Perhaps the most serious is the size problem: a messenger is considerably longer than a ribozyme, and protein enzymes much longer than the short peptides that could be formed by using a ribozyme as a 'message'. The solution is not clear. Suppose that, in any cell, translation errors lead to the production of malfunctioning proteins. This represents a loss of efficiency, but not a fatal one. Suppose, however, that some malfunctioning proteins are themselves used in translation; for example, they are assignment catalysts. Then a single error in one round of protein synthesis could cause several errors in the next round. If so, there would be an exponential increase in the frequency of errors: an error catastrophe.

https://reasonandscience.catsboard.com

Otangelo


Admin

1: RNA Building Blocks Are Hard to Synthesize and Easy to Destroy
2: Ribozymes Are Poor Substitutes for Proteins
3: An RNA-based Translation and Coding System Is Implausible
4: The RNA World Doesn’t Explain the Origin of Genetic Information

To claim that deterministic chemical affinities explain the origin of the genetic code lacks empirical foundation. In order for the translation system to be operational, and for the genetic code to bear any function, all of its interdependent components would have to be in place together.

The discovery of thirty-one variant genetic codes in mitochondria and in a plethora of prokaryotes indicates that the chemical properties of the relevant monomers allow more than a single set of codon–amino acid assignments. That means: the chemical properties of amino acids and nucleotides do not determine a single universal genetic code; since there is not just one code, “it” cannot be inevitable.

DNA’s capacity to convey information actually requires freedom from chemical determinism or constraint, in particular in the arrangement of the nucleotide bases. If the bonding properties of nucleotides determined their arrangement, the capacity of DNA to convey information would be destroyed. In that case, the bonding properties of each nucleotide would determine each subsequent nucleotide and thus, in turn, the sequence of the molecular chain. Under these conditions, a rigidly ordered pattern would emerge as required by the bonding properties and then repeat endlessly, forming something like a crystal. If DNA manifested such redundancy, it would be impossible for it to store or convey function-bearing information. Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page.

There are no differential bonding affinities between oligonucleotides. There is not just an absence of differing bonding affinities; there are no bonds at all between the critical information-bearing bases in DNA. There are neither bonds nor bonding affinities—differing in strength or otherwise—that can explain the origin of the base sequencing that constitutes the information in the DNA molecule. Differing chemical attractions between nucleotide bases do not exist within the DNA molecule. All four bases are acceptable; none is chemically favored. This means there is nothing about either the backbone of the molecule or the way any of the four bases attach to it that makes any sequence more likely to form than another.

There are no significant differential affinities between any of the four bases and the binding sites along the sugar-phosphate backbone. The properties of nucleic acids indicate that all the combinatorially possible nucleotide patterns of a DNA molecule are, from a chemical point of view, equivalent. Two features of DNA ensure that “self-organizing” bonding affinities cannot explain the specific arrangement of nucleotide bases in the molecule:
(1) there are no bonds between bases along the information-bearing axis of the molecule and
(2) there are no differential affinities between the backbone and the specific bases that could account for variations in sequence.

The ribosome and its two subunits, the over 200 assembly and scaffold proteins for ribosome biogenesis, the initiation, elongation, and release factors, the signal recognition particle, the error-check and repair machinery that minimizes translation errors, the matching pool of tRNAs, the aminoacyl-tRNA synthetases, the mRNAs, and all twenty amino acids used in proteins would have to arise together.

Producing the molecular complexes necessary for translation requires coupling multiple tricks—multiple crucial reactions—in a closely integrated (and virtually simultaneous) way. True enzyme catalysts do this. RNA and small-molecule catalysts do not.


The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes:
“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.”

To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have knowledge to arrive at an optimal conclusion, or a “good compromise”?
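The alphabet arithmetic in the passage above is easy to verify; a minimal check in plain Python (no assumptions beyond word length and alphabet size):

```python
# Coding capacity of an n-letter alphabet read in words of length k is n**k.
n_bases = 4

doublet_capacity = n_bases ** 2  # 4^2 = 16 codons: enough for ~16 amino acids
triplet_capacity = n_bases ** 3  # 4^3 = 64 codons: the actual triplet code
binary_triplet   = 2 ** 3        # a two-base alphabet read in triplets: only 8

print(doublet_capacity, triplet_capacity, binary_triplet)  # 16 64 8
```

This is why a doublet code caps out below twenty amino acids, while the triplet code gives 64 codons for 20 amino acids plus stop signals.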

An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

That frozen accident means that good old luck would have hit the jackpot through trial and error amongst 1.5 × 10^84 possible genetic codes—a number far exceeding the number of atoms in the whole universe. That puts any real possibility of chance providing the feat out of the question. It is, using Borel's law, in the realm of impossibility. The maximum time available for it to originate was estimated at 6.3 × 10^15 seconds. Natural selection would have to evaluate roughly 2 × 10^68 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
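The required search rate follows directly from the two figures stated above; a back-of-envelope check, not an independent estimate:

```python
possible_codes = 1.5e84   # possible genetic codes, as cited in the text
available_time = 6.3e15   # maximum available seconds, as cited in the text

# Codes that would have to be evaluated every second to exhaust the space in time.
required_rate = possible_codes / available_time
print(f"{required_rate:.1e} codes per second")  # ~2.4e+68 codes per second
```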

Put in other words: the task compares to inventing two languages, two alphabets, and a translation system, and having the information content of a book (for example, Hamlet) written in English translated into Chinese by an extremely sophisticated hardware system. The conclusion that an intelligent designer had to set up the system is not based on missing knowledge (an argument from ignorance). We know that minds do invent languages, codes, translation systems, ciphers, and complex, specified information all the time. The genetic code and its translation system are best explained through the action of an intelligent designer.

The attribution of the design has to be either to God or to purely materialistic mechanisms. The gigantic pill to swallow in the second case is the fact that the output is the product of a code, and that the molecular machinery needed to replicate the code (for inheritance/perpetuation), transcribe it, translate it into protein through many intermediate steps requiring highly specific operations, repair it in the foreseen event that it is damaged (to preserve/protect it), or destroy it in the event that it suffers irreparable damage (to forestall cancer) would all have had to arise unguided. DNA had an intentional purpose. That's the only reasonable conclusion I can come to.

Signature in the Cell, Stephen Meyer, page 18:
As the information theorist Hubert Yockey observes, the “genetic code is constructed to confront and solve the problems of communication and recording by the same principles found…in modern communication and computer codes.”

There is no physical reason why any particular codon should be paired up with any specific amino acid. Any codon could have been assigned to any amino acid, since there are no direct physical interactions between them:
Chemical affinities between nucleotide codons and amino acids do not determine the correspondences between codons and amino acids that define the genetic code. From the standpoint of the properties of the constituents that comprise the code, the code is physically and chemically arbitrary. All possible codes are equally likely; none is favored chemically. . . . To claim that deterministic chemical affinities made the origin of this system inevitable lacks empirical foundation.

If there is no direct chemical interaction between the codon and the binding site of the amino acid on the tRNA, but there is an intermediate space, then there is no evidence that chemical interactions could have selected the assignment based on chemical affinities. Rather, this state of affairs is evidence that the “genetic code” is in fact a genuine, arbitrary code, such as a designer would create from scratch.

https://reasonandscience.catsboard.com

Otangelo


Admin

The Genetic Code was most likely implemented by intelligence.

The Genetic Code was most likely implemented by intelligence.
1. In communications and information processing, a code is a system of rules to convert information—such as a letter or word—into another form (another word, letter, etc.).
2. In translation, 64 genetic codons are assigned to 20 amino acids. The genetic code refers to the assignment of the codons to the amino acids, thus being the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, mapping, permuting.
4. The universal triple-nucleotide genetic code can be the result either of a) random selection through evolution, or b) intelligent implementation.
5. We know by experience that performing value assignment and codification is always a process of intelligence with an intended result. Non-intelligence—matter, molecules, nucleotides, etc.—has never been demonstrated to be able to generate codes, and has neither intent nor distant goals with the foresight to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.

The codon bases have a non-random correlation with the kind of amino acids they code for.  The first of the three letters relates to the kind of amino acid the codon stands for, giving the language a consistent meaning.

Any information stored on our genes is useless without its correct interpretation. The genetic code defines the rule set to decode this information.
https://www.nature.com/articles/s41598-020-69100-0

The order of the three input bases is arbitrary and interchangeable (i.e. the model does not include uneven distribution of assignment uncertainty due to a third base ‘wobble’). There is no codon ambiguity; each codon maps uniquely to one amino acid. To create signal-meaning pairs, for each selected amino acid to be transferred we had to determine its codon assignment according to the donor’s code.
https://www.nature.com/articles/s41598-018-21973-y

The Genetic Code (B): Basic Features and Codon Assignments
The assignment of codons to different amino acids was essentially completed by applying the trinucleotide binding technique discovered by Nirenberg and Leder to all the 64 possible synthetic ribotrinucleotides.
https://www.worldscientific.com/doi/abs/10.1142/9789812813626_0008

A survey of codon assignments for 20 amino acids.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC219908/

In translation, 64 genetic codons are ascribed to 20 amino acids

In the standard genetic code table, of the 64 triplets or codons, 61 codons correspond to the 20 amino acids
https://www.dovepress.com/synonymous-codons-influencing-gene-expression-in-organisms-peer-reviewed-fulltext-article-RRBC

The Universal Genetic Code and Non-Canonical Variants
Genetic code refers to the assignment of the codons to the amino acids, thus being the cornerstone template underling the translation process.
https://www.sciencedirect.com/topics/neuroscience/genetic-code

A new integrated symmetrical table for genetic codes
For the formation of proteins in living organism cells, it is found that each amino acid can be specified by either a minimum of one codon or up to a maximum of six possible codons. In other words, different codons specify the different number of amino acids. A table for genetic codes is a representation of translation for illustrating the different amino acids with their respectively specifying codons, that is, a set of rules by which information encoded in genetic material (RNA sequences) is translated into proteins (amino acid sequences) by living cells.  There are a total of 64 possible codons, but there are only 20 amino acids specified by them.
https://arxiv.org/ftp/arxiv/papers/1703/1703.03787.pdf

A specification often refers to a set of documented requirements to be satisfied by a material, design, product, or service. A specification is often a type of technical standard.
https://en.wikipedia.org/wiki/Specification_(technical_standard)

code is a set of rules that serve as generally accepted guidelines recommended for the industry to follow.
https://blog.nvent.com/erico-what-is-the-difference-between-a-code-standard-regulation-and-specification-in-the-electrical-industry/

Harper's illustrated Biochemistry 3th edition page 54
While the three letter genetic code could potentially accommodate more than 20 amino acids, the genetic code is redundant since several amino acids are specified by multiple codons.

Biology, Brooker 4th ed. page 243
The Genetic Code Specifies the Amino Acids
The sequence of three bases in most codons specifies a particular amino acid. For example, the codon CCC specifies the amino acid proline, whereas the codon GGC encodes the amino acid glycine.
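As the textbook excerpts describe, the code is in effect a lookup from triplet to amino acid. A minimal illustrative sketch in Python (only a small, hand-picked subset of the standard table is shown; the codon-to-amino-acid facts used are standard):

```python
# Illustrative subset of the standard codon table (mRNA codons -> amino acids).
CODON_TABLE = {
    "CCC": "Pro",   # proline  (the Brooker example above)
    "GGC": "Gly",   # glycine  (the Brooker example above)
    "AUG": "Met",   # methionine / start codon
    "UGA": "Stop",  # one of the three stop signals
}

def translate(mrna: str) -> list:
    """Read an mRNA string three bases at a time and look up each codon."""
    return [CODON_TABLE.get(mrna[i:i + 3], "?") for i in range(0, len(mrna), 3)]

print(translate("AUGCCCGGCUGA"))  # ['Met', 'Pro', 'Gly', 'Stop']
```

The point of the sketch is that nothing in the dictionary's keys physically determines its values: the mapping is pure assignment, which is exactly the property the surrounding argument turns on.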

Genomics: Evolution of the Genetic Code
The code is actually closer to a cipher than a code and individual species do not have a unique genetic code to be cracked; indeed one of the interesting characteristics of the code is that nearly all life shares exactly the same one, once called the ‘universal genetic code’
https://sci-hub.st/https://www.sciencedirect.com/science/article/pii/S0960982216309174
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message
https://en.wikipedia.org/wiki/Cipher

1. https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/genetic-code#:~:text=Abstract-,Genetic%20code%20refers%20to%20the%20assignment%20of%20the%20codons%20to,or%20%E2%80%9Ccanonical%E2%80%9D%20genetic%20code.




https://reasonandscience.catsboard.com

Otangelo


Admin

The genetic code, insurmountable problem for non-intelligent origin

[Figure 2 (F2) from the article linked below: codon table]

Simply plotting these numbers on a codon table reveals the existence of a remarkable degree of order, much of which would be unexpected on the basis of amino acid properties as normally understood. For example, codons of the form NUN define a set of five amino acids, all of which have very similar polar requirements. Likewise, the set of amino acids defined by the NCN codons all have nearly the same unique polar requirement. The codon couplets CAY-CAR, AAY-AAR, and GAY-GAR each define a pair of amino acids (histidine-glutamine, asparagine-lysine, and aspartic acid-glutamic acid, respectively) that has a unique polar requirement. Only for the last of these (aspartic and glutamic acids), however, would the two amino acids be judged highly similar by more conventional criteria. Perhaps the most remarkable thing about polar requirement is that although it is only a unidimensional characterization of the amino acids, it still seems to capture the essence of the way in which amino acids, all of which are capable of reacting in varied ways with their surroundings, are related in the context of the genetic code. Also of note is the fact that the context in which polar requirement is defined, i.e., the interaction of amino acids with heterocyclic aromatic compounds in an aqueous environment, is more suggestive of a similarity in the way amino acids might interact with nucleic acids than of any similarity in the way they would behave in a proteinaceous environment

While it must be admitted that the evolutionary relationships among the AARSs bear some resemblance to the related amino acid order of the code, it seems unlikely that they are responsible for that order: the evolutionary wanderings of these enzymes alone simply could not produce a code so highly ordered, in both degree and kind, as we now know the genetic code to be.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC98992/




https://reasonandscience.catsboard.com

Otangelo


Admin

The genetic code, insurmountable problem for non-intelligent origin

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#8384

Special Issue "The Origin of the Genetic Code" 30 September 2020
https://www.mdpi.com/journal/ijms/special_issues/origin_genetic_code
The genetic code is the fundamental set of rules for decoding genetic information into proteins, with the 64 base triplets specifying amino acids and stop codons. However, the origin of the genetic code remains a mystery, despite numerous theories.

“A code system is always the result of a mental process (it requires an intelligent origin or inventor). It should be emphasized that matter as such is unable to generate any code. All experiences indicate that a thinking being voluntarily exercising his own free will, cognition, and creativity, is required. … There is no known law of nature and no known sequence of events that can cause the information to originate by itself in matter.”
Werner Gitt, 1997, In The Beginning Was Information, pp. 64–67, 79, 107.
(The retired Dr Gitt was a director and professor at the German Federal Institute of Physics and Technology (Physikalisch-Technische Bundesanstalt, Braunschweig), the Head of the Department of Information Technology.)

“Because of Shannon channel capacity that previous (first) codon alphabet had to be at least as complex as the current codon alphabet (DNA code), otherwise transferring the information from the simpler alphabet into the current alphabet would have been mathematically impossible”
Donald E. Johnson – Bioinformatics: The Information in Life

“The genetic code’s error-minimization properties are far more dramatic than these (one in a million) results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means of 10^18 codes few, if any have an error-minimization capacity that approaches the code found universally throughout nature.”
Fazale Rana – From page 175; ‘The Cell’s Design’
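Error-minimization comparisons of the kind Rana describes (in the style of Freeland and Hurst) can be sketched with a short Monte Carlo simulation. The sketch below is an illustration, not a reproduction of the published analysis: it uses approximate Woese polar-requirement values (an assumption of this sketch), a simple mean-squared-change cost over all single-base substitutions, and random rival codes generated by permuting the 20 amino acids among the standard code's synonymous blocks:

```python
import random
import statistics

# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... (bases T, C, A, G).
BASES = "TCAG"
SGC = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]

# Approximate Woese polar-requirement values (assumed here; published analyses
# use the measured values).
PR = {"A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6, "E": 12.5,
      "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1, "M": 5.3, "F": 5.0,
      "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2, "Y": 5.4, "V": 5.6}

def cost(code):
    """Mean squared polar-requirement change over all single-base substitutions."""
    table = dict(zip(CODONS, code))
    total = n = 0
    for codon, aa in table.items():
        if aa == "*":                      # skip stop codons
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = table[codon[:pos] + b + codon[pos + 1:]]
                if mut != "*":
                    total += (PR[aa] - PR[mut]) ** 2
                    n += 1
    return total / n

def random_code():
    """Permute the 20 amino acids among the SGC's synonymous blocks (stops fixed)."""
    aas = sorted(set(SGC) - {"*"})
    mapping = dict(zip(aas, random.sample(aas, len(aas))))
    return "".join(mapping.get(aa, "*") for aa in SGC)

random.seed(1)
sgc_cost = cost(SGC)
rand_costs = [cost(random_code()) for _ in range(1000)]
better = sum(c < sgc_cost for c in rand_costs)
print(f"SGC: {sgc_cost:.2f}  random mean: {statistics.mean(rand_costs):.2f}  "
      f"better than SGC: {better}/1000")
```

Even this crude measure typically finds that only a small fraction of block-permuted codes outperform the standard code; the published studies use much larger samples and refined error weighting to arrive at figures like "one in a million."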

Barbieri: Code Biology:
"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
https://www.sciencedirect.com/journal/biosystems/vol/164/suppl/C

[Comment on other biological codes]: "In signal transduction, in short, we find all the essential components of a code: (a) two independent worlds of molecules (first messengers and second messengers), (b) a set of adaptors that create a mapping between them, and (c) the proof that the mapping is arbitrary because its rules can be changed in many different ways."

Why should or would molecules designate, assign, dictate, ascribe, correspond, correlate, or specify anything at all? This is not an argument from incredulity. The proposition defies reasonable principles and the known, limited, unspecific range of chance, physical necessity, mutations, and natural selection. What is needed is a *plausible* account of how it came to be in the first place.
It is in ANY scenario a far stretch to believe that unguided random events would produce a functional code system and arbitrary assignments of meaning. That is simply putting far too much faith into what molecules on their own are capable of doing.

RNAs (if they were extant prebiotically at all) would just lie around and then disintegrate in a short period of time. Even setting aside that the prebiotic synthesis of RNAs HAS NEVER BEEN DEMONSTRATED IN THE LAB, they would not polymerize. Clay experiments have failed. Systems, given energy and left to themselves, DEVOLVE to give uselessly complex mixtures, “asphalts”. The literature reports (to our knowledge) exactly ZERO CONFIRMED OBSERVATIONS where molecule complexification emerged spontaneously from a pool of random chemicals. It is IMPOSSIBLE for any non-living chemical system to escape devolution to enter into the world of the “living”.

The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases 28 July 2020
One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated
https://www.nature.com/articles/s41598-020-69100-0

Genomics: Evolution of the Genetic Code 
https://www.sciencedirect.com/science/article/pii/S0960982216309174

Understanding how this code originated and how it affects the molecular biology and evolution of life today are challenging problems, in part because it is so highly conserved — without variation to observe it is difficult to dissect the functional implications of different aspects of a character. 

Code Biology 
http://www.codebiology.org/

"...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them."
My comment:  Without stop codons, the translation machinery would not know where to end the protein synthesis, and there could/would never be functional proteins, and no life on earth. At all.

Origin and evolution of the genetic code: the universal enigma 2009 Feb Eugene V. Koonin
In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

Many of the same codons are reassigned (compared to the standard code) in independent lineages (e.g., the most frequent change is the reassignment of the stop codon UGA to tryptophan), this conclusion implies that there should be predisposition towards certain changes; at least one of these changes was reported to confer selective advantage

The origin of the genetic code is acknowledged to be a major hurdle in the origin of life, and I shall mention just one or two of the main problems. Calling it a ‘code’ can be misleading because of associating it with humanly invented codes which at their core usually involve some sort of pre-conceived algorithm; whereas the genetic code is implemented entirely mechanistically – through the action of biological macromolecules. This emphasises that, to have arisen naturally – e.g. through random mutation and natural selection – no forethought is allowed: all of the components would need to have arisen in an opportunistic manner.

Crucial role of the tRNA activating enzymes
 https://evolutionunderthemicroscope.com/ool02.html
To try to explain the source of the code various researchers have sought some sort of chemical affinity between amino acids and their corresponding codons. But this approach is misguided:
First of all, the code is mediated by tRNAs, which carry the anti-codon (complementary to the codon in the mRNA, itself transcribed from the DNA) rather than the codon itself. So, if the code were based on affinities between amino acids and anti-codons, it implies that the process of translation via transcription cannot have arisen as a second stage or improvement on a simpler direct system - the complex two-step process would need to have arisen right from the start.
Second, the amino acid has no role in identifying the tRNA or the codon (This can be seen from an experiment in which the amino acid cysteine was bound to its appropriate tRNA in the normal way – using the relevant activating enzyme, and then it was chemically modified to alanine. When the altered aminoacyl-tRNA was used in an in vitro protein synthesizing system (including mRNA, ribosomes etc.), the resulting polypeptide contained alanine (instead of the usual cysteine) corresponding to wherever the codon UGU occurred in the mRNA. This clearly shows that it is the tRNA alone (with no role for the amino acid) with its appropriate anticodon that matches the codon on the mRNA.). This association is done by an activating enzyme (aminoacyl tRNA synthetase) which attaches each amino acid to its appropriate tRNA (clearly requiring this enzyme to correctly identify both components). There are 20 different activating enzymes - one for each type of amino acid.
Interestingly, the end of the tRNA to which the amino acid attaches has the same nucleotide sequence for all amino acids - which constitutes a third reason. 
Third:  Interest in the genetic code tends to focus on the role of the tRNAs, but as just indicated that is only one half of implementing the code. Just as important as the codon-anticodon pairing (between mRNA and tRNA) is the ability of each activating enzyme to bring together an amino acid with its appropriate tRNA. It is evident that implementation of the code requires two sets of intermediary molecules: the tRNAs which interact with the ribosomes and recognise the appropriate codon on mRNA, and the activating enzymes which attach the right amino acid to its tRNA. This is the sort of complexity that pervades biological systems, and which poses such a formidable challenge to an evolutionary explanation for its origin. It would be improbable enough if the code were implemented by only the tRNAs which have 70 to 80 nucleotides; but the equally crucial and complementary role of the activating enzymes, which are hundreds of amino acids long, excludes any realistic possibility that this sort of arrangement could have arisen opportunistically.

Progressive development of the genetic code is not realistic
In view of the many components involved in implementing the genetic code, origin-of-life researchers have tried to see how it might have arisen in a gradual, evolutionary, manner. For example, it is usually suggested that to begin with the code applied to only a few amino acids, which then gradually increased in number. But this sort of scenario encounters all sorts of difficulties with something as fundamental as the genetic code.

1. First, it would seem that the early codons need have used only two bases (which could code for up to 16 amino acids); but a subsequent change to three bases (to accommodate 20) would seriously disrupt the code. Recognizing this difficulty, most researchers assume that the code used 3-base codons from the outset; which was remarkably fortuitous or implies some measure of foresight on the part of evolution (which, of course, is not allowed).
2. Much more serious are the implications for proteins based on a severely limited set of amino acids. In particular, if the code was limited to only a few amino acids, then it must be presumed that early activating enzymes comprised only that limited set of amino acids, and yet had the necessary level of specificity for reliable implementation of the code. There is no evidence of this; and subsequent reorganization of the enzymes as they made use of newly available amino acids would require highly improbable changes in their configuration. Similar limitations would apply to the protein components of the ribosomes which have an equally essential role in translation.
3. Further, tRNAs tend to have atypical bases which are synthesized in the usual way but subsequently modified. These modifications are carried out by enzymes, so these enzymes too would need to have started life based on a limited number of amino acids; or it has to be assumed that these modifications are later refinements - even though they appear to be necessary for reliable implementation of the code.
4. Finally, what is going to motivate the addition of new amino acids to the genetic code? They would have little if any utility until incorporated into proteins - but that will not happen until they are included in the genetic code. So the new amino acids must be synthesized and somehow incorporated into useful proteins (by enzymes that lack them), and all of the necessary machinery for including them in the code (dedicated tRNAs and activating enzymes) put in place – and all done opportunistically! Totally incredible!


Comparison of translation loads for standard and alternative genetic codes   2010 Jun 14
The origin and universality of the genetic code is one of the biggest enigmas in biology. Soon after the genetic code of Escherichia coli was deciphered, it was realized that this specific code, out of more than 10^84 possible codes, is shared by all studied life forms (albeit sometimes with minor modifications). The question of how this specific code appeared, and which physical or chemical constraints and evolutionary forces have shaped its highly non-random codon assignment, is the subject of intense debate. In particular, the feature that codons differing by a single nucleotide usually code for either the same or a chemically very similar amino acid, and the associated block structure of the assignments, is thought to be a necessary condition for the robustness of the genetic code both against mutations and against errors in translation.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2909233/

Was Wright Right? The Canonical Genetic Code is an Empirical Example of an Adaptive Peak in Nature; Deviant Genetic Codes Evolved Using Adaptive Bridges
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2924497/
The error minimization hypothesis postulates that the canonical genetic code evolved as a result of selection to minimize the phenotypic effects of point mutations and errors in translation. 
My comment: How can the authors claim that there was already translation, if translation depends on the genetic code already being set up?
It is likely that the code in its early evolution had few or even a minimal number of tRNAs that decoded multiple codons through wobble pairing, with more amino acids and tRNAs being added as the code evolved.
My comment: Why do the authors claim that the genetic code emerged based on evolutionary selective pressures if, at this stage, there was no evolution AT ALL? Evolution starts with DNA replication, which DEPENDS on translation being already fully set up. Also, the origin of tRNAs is a huge problem for proponents of abiogenesis, because tRNAs are highly specific, and their biosynthesis in modern cells is a highly complex, multistep process requiring many complex enzymes.

Insuperable problems of the genetic code initially emerging in an RNA World 2018 February
The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being, but principles of self-organisation that transcend Darwinian natural selection furnish an unexpectedly robust basis for a rapid, concerted transition to genetic coding from a peptide RNA world. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 

Now observe the colorful just-so stories that the authors come up with to explain the inexplicable:
We can now understand how the self-organised state of coding can be approached “from below”, rather than thinking of molecular sequence computation as existing on the verge of a catastrophic fall over a cliff of errors. In GRT systems, an incremental improvement in the accuracy of translation produces replicase molecules that are more faithfully produced from the gene encoding them. This leads to an incremental improvement in information copying, in turn providing for the selection of narrower genetic quasispecies and an incrementally better encoding of the protein functionalities, promoting more accurate translation.
My comment: This is an entirely unwarranted claim. It is begging the question. There was no translation at this stage, since translation depends on a fully developed and formed genetic code.
The vicious circle can wind up rapidly from below as a self-amplifying process, rather than precipitously winding down the cliff from above. The balanced push-pull tension between these contradictory tendencies stably maintains the system near a tipping point, where, all else being equal, informational replication and translation remain impedance matched – that is, until the system falls into a new vortex of possibilities, such as that first enabled by the inherent incompleteness of the primordial coding “boot block”. Bootstrapped coded translation of genes is a natural feature of molecular processes unique to living systems. Organisms are the only products of nature known to operate an essentially computational system of symbolic information processing. In fact, it is difficult to envisage how alien products of nature found with a similar computational capability, which proved to be necessary for their existence, no matter how primitive, would fail classification as a form of “life”.
My comment: I would rather say, it is difficult to envisage how such a complex system could get "off the hook" by natural, unguided means.
http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC5895081&blobtype=pdf

The lack of foundation in the mechanism on which the physico-chemical theories for the origin of the genetic code are based, counterposed to the credible and natural mechanism suggested by the coevolution theory  1 April 2016
https://sci-hub.ren/10.1016/j.jtbi.2016.04.005
The majority of theories advanced for explaining the origin of the genetic code maintain that the physico-chemical properties of amino acids had a fundamental role in organizing the structure of the genetic code... but this does not seem to have been the case. The physico-chemical properties of amino acids played only a subsidiary role in organizing the code, important only if understood as a manifestation of the catalysis performed by proteins. The mechanism on which the majority of theories based on the physico-chemical properties of amino acids rely is not credible, or at least not satisfactory.

There are enough data to refute the possibility that the genetic code was randomly constructed (“a frozen accident”). For example, the genetic code clusters certain amino acid assignments. Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. Amino acids with similar physical properties tend to have similar codons.
If the code can be fully explained by either bottom-up processes (e.g. unknown chemical principles that make the code a necessity), or bottom-up constraints (i.e. a kind of selection process that occurred early in the evolution of life, and that favored the code we have now), then we can dispense with the code metaphor. The ultimate explanation for the code would have nothing to do with choice or agency; it would ultimately be the product of necessity.
In responding to the “code skeptics,” we need to keep in mind that they are bound by their own methodology to explain the origin of the genetic code in non-teleological, causal terms. They need to explain how things happened in the way that they suppose. Thus if a code-skeptic were to argue that living things have the code they do because it is one which accurately and efficiently translates information in a way that withstands the impact of noise, then he/she is illicitly substituting a teleological explanation for an efficient causal one. We need to ask the skeptic: how did Nature arrive at such an ideal code as the one we find in living things today?
https://uncommondescent.com/intelligent-design/is-the-genetic-code-a-real-code/

Genetic code: Lucky chance or fundamental law of nature?
It becomes clear that the information code is intrinsically related to the physical laws of the universe, and thus life may be an inevitable outcome of our universe. The lack of success in explaining the origin of the code and of life itself over the last several decades suggests that we miss something very fundamental about life, possibly something fundamental about matter and the universe itself. Certainly, the advent of the genetic code was no “play of chance”.

Open questions:
1. Did the code dialects appear accidentally or as the result of some kind of selection process? Examples include the mitochondrial version, in which the UGA codon (a stop codon in the universal version) codes for tryptophan and the AUA codon (isoleucine in the universal version) codes for methionine, and Candida cylindrica (a fungus), in which the CUG codon (leucine in the universal version) codes for serine.
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure? 
4. Why is the genetic code non-overlapping; that is, why does the translation apparatus of a cell read the message in discrete steps of three nucleotides rather than one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids? 
6. Is the existing distribution of codon degeneracy for particular amino acids accidental, or the result of some kind of selection process?
7. Why were only 20 canonical amino acids selected for the protein synthesis? 
8. Why should there be a genetic code at all?
9. Why did the stereochemical association of a specific, arbitrary codon-anticodon set emerge at all?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?
11. Is this very choice of amino acids accidental, or the result of some kind of selection process?
12. Why don’t we find any protein sequences in the fossils of ancient organisms, which only have primary amino acids?
13. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62?
14. Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, the coding space could have increased, making a much larger universe of possible proteins available.
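Questions 3, 13 and 14 above are at bottom combinatorial. A short calculation (a sketch assuming nothing beyond the four-letter alphabet and the requirement of 20 amino acids plus a stop signal) shows the coding capacity of singlet, doublet, triplet and quadruplet codons, and why three is the minimum word length:

```python
# Coding capacity of fixed-length codons over a 4-letter alphabet (A, U, G, C).
# A codon of length n can distinguish 4**n meanings; 20 amino acids plus at
# least one stop signal require 21 distinct meanings.

ALPHABET_SIZE = 4
REQUIRED_MEANINGS = 21  # 20 canonical amino acids + stop

for n in range(1, 5):
    capacity = ALPHABET_SIZE ** n
    enough = "yes" if capacity >= REQUIRED_MEANINGS else "no"
    print(f"codon length {n}: {capacity:4d} possible codons, covers 21 meanings: {enough}")

# Smallest codon length whose capacity reaches 21 meanings:
min_length = next(n for n in range(1, 10) if ALPHABET_SIZE ** n >= REQUIRED_MEANINGS)
print("minimum codon length:", min_length)  # 3
```

Doublets (16 codons) fall short of 21 meanings, triplets (64) overshoot, and quadruplets (256) would overshoot far more, which is the tension behind questions 13 and 14.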

The British biologist John Maynard Smith has described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes:
“The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion, or a “good compromise”?
An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly, but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance.

That frozen accident means that good old luck would have hit the jackpot through trial and error among roughly 1.5 × 10^84 possible genetic codes, a number that dwarfs even the estimated 10^80 atoms in the observable universe. That puts any real possibility of chance providing the feat out of the question; by Borel's criterion, it lies in the realm of impossibility. The maximum time available for the code to originate was estimated at 6.3 × 10^15 seconds. Natural selection would therefore have to evaluate roughly 10^68 codes per second to find the one that is universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
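The arithmetic behind this paragraph can be checked directly. The two inputs below are simply the figures quoted above (about 1.5 × 10^84 candidate codes and 6.3 × 10^15 seconds), and the division yields the search rate a blind trial-and-error process would have to sustain:

```python
# Back-of-the-envelope check of the search-rate figure quoted above:
# candidate codes divided by available seconds gives the evaluation
# rate a blind search would need to sustain.
import math

candidate_codes = 1.5e84      # possible genetic codes (figure quoted above)
available_seconds = 6.3e15    # estimated time window, in seconds

required_rate = candidate_codes / available_seconds
print(f"required evaluation rate: {required_rate:.1e} codes per second")
print(f"order of magnitude: 10^{math.floor(math.log10(required_rate))}")
```

The quotient is about 2.4 × 10^68 codes per second.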

Arzamastsev AA. The nature of optimality of DNA code. Biophys. Russ. 1997;42:611–4.
“The situation when Nature invented the DNA code surprisingly resembles the designing of a computer by man. If a computer were designed today, binary notation would hardly be used. Binary notation was chosen only at the first stage, in order to simplify as much as possible the construction of the decoding machine. But now it is too late to correct this mistake.”
https://pubmed.ncbi.nlm.nih.gov/9296623/

Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation April 18, 2018
Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular to enzyme-less reproduction, proliferation and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code.

Carl Woese (1967), as reproduced in the seminal paper of Yarus et al. cited frequently above:
“I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon.” 
https://arxiv.org/pdf/1804.05939.pdf

The genetic code is one in a million
if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
http://www.ncbi.nlm.nih.gov/pubmed/9732450
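The "one in a million" result comes from scoring the standard code against large samples of randomized alternatives. The following toy re-creation illustrates the method only: the published analysis used Woese's polar-requirement scale and translation-error weightings, whereas this sketch substitutes the Kyte-Doolittle hydropathy scale and an unweighted mean-squared-error score, and it randomizes codes by shuffling amino-acid assignments among the standard code's synonymous blocks (the usual randomization in this literature). The fraction it reports is therefore illustrative, not the paper's figure:

```python
# Toy Freeland/Hurst-style test: how often does a random code beat the
# standard code at damping the chemical effect of single-base changes?
# Assumptions (not from the paper above): Kyte-Doolittle hydropathy as the
# amino-acid property, unweighted mean squared difference as the score,
# random codes made by permuting amino acids among the standard synonymous
# codon blocks (stop codons held fixed).
import itertools, random, statistics

BASES = "UCAG"
# Standard genetic code, first base varying slowest ('*' = stop).
AA_STRING = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
             "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
STANDARD = dict(zip(CODONS, AA_STRING))

HYDROPATHY = {  # Kyte-Doolittle values
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def error_score(code):
    """Mean squared hydropathy change over all single-base substitutions
    between two sense codons (lower = more error-robust)."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                neighbor = codon[:pos] + base + codon[pos + 1:]
                aa2 = code[neighbor]
                if aa2 == "*":
                    continue
                total += (HYDROPATHY[aa] - HYDROPATHY[aa2]) ** 2
                count += 1
    return total / count

def random_code(rng):
    """Shuffle amino-acid identities among the standard synonymous blocks."""
    aas = sorted(set(AA_STRING) - {"*"})
    mapping = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: (aa if aa == "*" else mapping[aa]) for c, aa in STANDARD.items()}

rng = random.Random(0)
std = error_score(STANDARD)
scores = [error_score(random_code(rng)) for _ in range(2000)]
better = sum(s < std for s in scores)
print(f"standard-code score:  {std:.2f}")
print(f"median random score:  {statistics.median(scores):.2f}")
print(f"random codes beating the standard code: {better} of {len(scores)}")
```

Holding the block structure fixed and permuting only the amino-acid identities keeps the comparison conservative; allowing arbitrary codon-to-amino-acid maps enlarges the search space enormously and makes the standard code look rarer still.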

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/?report=classic

Hidden code in the protein code
Different codons for the same amino acid may affect how quickly mRNA transcripts are translated, and that this pace can influence post-translational modifications. Despite being highly homologous, the mammalian cytoskeletal proteins beta- and gamma-actin contain notably different post-translational modifications: though both proteins are actually post-translationally arginylated, only arginylated beta-actin persists in the cell. This difference is essential for each protein's function.

To investigate whether synonymous codons might have a role in how arginylated forms persist, Kashina and colleagues swapped the synonymous codons between the genes for beta- and gamma-actin and found that the patterns of post-translational modification switched as well. Next, they examined translation rates for the wild-type forms of each protein and found that gamma-actin accumulated more slowly. Computational analysis suggested that differences between the folded mRNA structures might cause differences in translation speed. When the researchers added an antibiotic that slowed down translation rates, accumulation of arginylated actin slowed dramatically. Subsequent work indicated that N-arginylated proteins may, if translated slowly, be subjected to ubiquitination, a post-translational modification that targets proteins for destruction.

Thus, these apparently synonymous codons can help explain why some arginylated proteins but not others accumulate in cells. “One of the bigger implications of our work is that post-translational modifications are actually encoded in the mRNA,” says Kashina. “Coding sequence can define a protein's translation rate, metabolic fate and post-translational regulation.”
https://www.nature.com/articles/nmeth1110-874

Determination of the Core of a Minimal Bacterial Gene Set
Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes (which code for 13 protein complexes). Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell with the following features. These protein complexes could not emerge through evolution (mutations and natural selection), because evolution depends on DNA replication, which requires precisely these original genes and proteins (a chicken-and-egg problem). So the only mechanisms left are chance and physical necessity.
http://mmbr.asm.org/content/68/3/518.full.pdf

On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization
The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1894784/

Literature from those who posture in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. Multiplying these together, we find that the maximum number of elementary-particle events in 30 billion years could only be 10^143. Yet the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins that average 347 amino acids in length. The odds against just one specified protein of that length are 1:10^451.
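Both orders of magnitude in this paragraph reproduce in a few lines, assuming the stated counts (10^80 particles, 10^45 events per second, 10^18 seconds) and 20 equiprobable amino acids at each position of a specified sequence:

```python
# Reproducing the two order-of-magnitude figures quoted above.
import math

# Upper bound on elementary-particle events in 30 billion years:
particles = 1e80          # estimated elementary particles in the universe
events_per_second = 1e45  # fastest physical events per particle per second
seconds = 1e18            # ~30 billion years in seconds
max_events = particles * events_per_second * seconds
print(f"maximum events: 10^{round(math.log10(max_events))}")  # 10^143

# Odds against one specified 347-residue protein, assuming 20 equiprobable
# amino acids at each position: 20^347.
odds_exponent = 347 * math.log10(20)
print(f"one specified 347-aa protein: 1 in 10^{odds_exponent:.0f}")  # ~10^451
```

The exponent 347 × log10(20) ≈ 451.5 matches the 1:10^451 figure quoted.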



Last edited by Otangelo on Mon Jul 18, 2022 8:03 pm; edited 5 times in total

https://reasonandscience.catsboard.com


The structure of proteins has been optimized to consist of a chain, often a very long chain, of components drawn from a set of just 20 different units, called amino acids. Within the DNA, each group of three letters, formed from the four letters available, is called a codon and is the template for the incorporation of one amino acid. With four letters there are 4^3 = 64 possible three-letter words. If the word had only two letters, there would be only 4^2 = 16 ways it could be formed, which is not enough to specify 20 amino acids. We hence need at least three letters in the word. And while 64 is a lot more than 20: three codons have special uses, and the remaining 61 provide alternate spellings for the most frequent amino acids, as insurance against errors when the code in the DNA is transcribed! 1

Living organisms have this mathematically elegant system implemented, raising the question: how did it originate?

1. https://www.thestatesman.com/supplements/science_supplements/ancestry-genetic-code-1502937176.html
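The redundancy described in the excerpt above (61 sense codons as "alternate spellings" for 20 amino acids) can be tallied directly from the standard codon table. This sketch builds the 64-entry table in the conventional order and counts how many codons serve each amino acid; the degeneracies run from 1 (Met, Trp) to 6 (Leu, Ser, Arg):

```python
# Degeneracy of the standard genetic code: how many codons per amino acid.
import itertools
from collections import Counter

BASES = "UCAG"
# Standard code with the first base varying slowest ('*' = stop).
AA_STRING = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
             "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
table = dict(zip(("".join(c) for c in itertools.product(BASES, repeat=3)),
                 AA_STRING))

degeneracy = Counter(aa for aa in table.values() if aa != "*")
sense_codons = sum(degeneracy.values())
stops = 64 - sense_codons

print(f"{sense_codons} sense codons, {stops} stop codons")
for aa, n in sorted(degeneracy.items(), key=lambda kv: (-kv[1], kv[0])):
    print(f"{aa}: {n} codon(s)")
```

This is the distribution behind open questions 5 and 6 earlier in the thread: the degeneracy number varies from one to six across the 20 amino acids.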


Origin and evolution of the genetic code: the universal enigma - a review

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#8386

Origin and evolution of the genetic code: the universal enigma
Eugene V. Koonin

The genetic code is ordered, optimal, robust, highly non-random. It contains a block structure.  

The genetic code is nearly universal, and the arrangement of the codons in the standard codon table is highly non-random. The three main concepts on the origin and evolution of the code are 

1. The stereochemical theory, according to which codon assignments are dictated by physico-chemical affinity between amino acids and the cognate codons (anticodons);
My comment: Since there is no direct physical interaction between the codon/anticodon site and the amino-acid attachment site at the other end of the tRNA, how could there be an affinity between the two sites? And even if there were affinity and complementarity between nucleotides and amino acids, how could that be demonstrated for the whole code?

2. The coevolution theory, which posits that the code structure coevolved with amino acid biosynthesis pathways;
My comment: So that means, that these ultra-complex biosynthesis pathways evolved, full of proteins, without the machinery to make proteins yet established? That's a chicken and egg problem. There is no evidence for a less evolved code being able to synthesize proteins.
[Figure: Different steps in the evolution of the genetic code according to the co-evolution theory]

3. And the error minimization theory under which selection to minimize the adverse effect of point mutations and translation errors was the principal factor of the code’s evolution.
My comment: The error-minimization theory supposes that genetic codes with high error rates would somehow become less error-prone over time. There is no evidence for this claim. Errors only lead to more errors, not to higher precision.

These theories are not mutually exclusive and are also compatible with the frozen accident hypothesis, i.e., the notion that the standard code might have no special properties but was fixed simply because all extant life forms share a common ancestor, with subsequent changes to the code, mostly, precluded by the deleterious effect of codon reassignment.
My comment: There is no evidence, and even less plausibility to such an assertion.

Mathematical analysis of the structure and possible evolutionary trajectories of the code shows that it is highly robust to translational misreading but there are numerous more robust codes, so the standard code potentially could evolve from a random code via a short sequence of codon series reassignments. Thus, much of the evolution that led to the standard code could be a combination of frozen accident with selection for error minimization although contributions from coevolution of the code with metabolic pathways and weak affinities between amino acids and nucleotide triplets cannot be ruled out. However, such scenarios for the code evolution are based on formal schemes whose relevance to the actual primordial evolution is uncertain. A real understanding of the code origin and evolution is likely to be attainable only in conjunction with a credible scenario for the evolution of the coding principle itself and the translation system.

The fundamental question is how these regularities of the standard code came into being, considering that there are more than 10^84 possible alternative code tables if each of the 20 amino acids and the stop signal are to be assigned to at least one codon.
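The "more than 10^84" figure follows from exactly the constraint stated here: count the maps from 64 codons onto 21 meanings (20 amino acids plus the stop signal) in which every meaning receives at least one codon. Inclusion-exclusion over the unused meanings gives the exact count, about 1.5 × 10^84:

```python
# Number of genetic code tables: surjections from 64 codons onto 21
# meanings (20 amino acids + stop), each meaning used at least once.
# Counted exactly by inclusion-exclusion over unused meanings.
from math import comb

CODONS, MEANINGS = 64, 21
surjections = sum((-1) ** k * comb(MEANINGS, k) * (MEANINGS - k) ** CODONS
                  for k in range(MEANINGS + 1))

print(f"possible code tables: {surjections:.3e}")
print("exceeds 10^84:", surjections > 10 ** 84)
```

This is the combinatorial origin of the ~1.5 × 10^84 figure quoted in this literature; it omits further biological constraints, which would change the count but not its astronomical scale.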

More specifically, the question is, what kind of interplay of chemical constraints, historical accidents, and evolutionary forces could have produced the standard amino acid assignment, which displays many remarkable properties. The features of the code that seem to require a special explanation include, but are not limited to, the block structure of the code, which is thought to be a necessary condition for the code’s robustness with respect to point mutations, translational misreading, and translational frame shifts;

Block structure and stability of the genetic code  24 December 2002
Maximum stability with respect to point mutations and shifts in the reading frame requires the fixation of the middle letters within codons in groups with different physico-chemical properties, thus explaining a key feature of the universal genetic code. 2

The universal genetic code mainly obeys the principles of optimal coding. These results demonstrate the hierarchical character of the optimization of the universal genetic code, with strictly optimal coding evolving at the earliest stages of molecular evolution.
Question: How can evolution and molecular evolution even be a mechanism at this stage, when DNA replication was not yet established? Optimization is the action of making the best or most effective use of a situation or resource, and is commonly understood to be an intelligence-driven process with specific goals in mind; it requires foresight to know what the goal is.

The link between the second codon letter and the properties of the encoded amino acid so that codons with U in the second position correspond to hydrophobic amino acids;
Observation: This implies ORDER. ORDER is the opposite of randomness. It is the arrangement or disposition of things in relation to each other according to a particular sequence, pattern, or method. Order always implies or suggests that intelligence had the goal to create order for specific purposes.

The relationship between the second codon position and the class of aminoacyl-tRNA synthetase

Evolution of the Aminoacyl-tRNA Synthetases and the Origin of the Genetic Code  : 2 November 1994
The rules which governed the development of the genetic code, and led to certain patterns in the coding catalog between codons and amino acids, would also have governed the subsequent evolution of the synthetases in the context of a fixed code, leading to patterns in synthetase distribution such as those observed. 3
My comment: Since when, and why, should molecules lying around on the early earth generate rules governing the development of the genetic code? We know that the genetic code cannot be expressed unless the full set of aminoacyl-tRNA synthetases is present, able to select the respective amino acids and charge them onto the tRNAs. That is another demonstration that the translation mechanism had to emerge fully set up and operating from day one.

The negative correlation between the molecular weight of an amino acid and the number of codons allocated to it; the positive correlation between the number of synonymous codons for an amino acid and the frequency of the amino acid in proteins; the apparent minimization of the likelihood of mistranslation and point mutations; and the near optimality for allowing additional information within protein-coding sequences.

It is assumed that there are only 4 nucleotides and 20 encoded amino acids (with the notable exception of selenocysteine and pyrrolysine, for which subsets of organisms have evolved special coding schemes).

Natural expansion of the genetic code 2006 4
In order to account for its universality, the code was thought to be frozen in its existing form once a certain level of cellular complexity was reached. The already improved accuracy of protein synthesis at that stage, along with any further structural and functional refinement of the translation apparatus from there on, would preclude additional codon reassignments because they would inevitably lead to disruption of an organism’s whole proteome; the vast production of misfolded and aberrant proteins would greatly challenge the survival of any such organism.
My comment: This is a very interesting comment. If adding codon assignments beyond 20 amino acids inevitably means the disruption of an organism’s whole proteome, why should the same not be expected if the transition were from, let's say, 15 amino acids to 17?

The correlation of mRNA codons with amino acids is the product of the interpretation of the code by the translational machinery, and therefore it is only static as long as the components of this machinery do not change and evolve. It is not surprising then that the documented codon reassignments can always be traced back to alterations in the components of the translational apparatus that are primarily involved in the decoding process: the aminoacyl-tRNA synthetase aaRSs, which ensure correct acylation of each tRNA species with its cognate amino acid; the tRNA molecules, whose anticodon base pairs with the correct mRNA codon by the rules of the wobble hypothesis at the ribosome ; and the peptide chain termination factors that recognize the termination codons.
My comment: This is another very relevant observation. If the genetic code changes, the entire translation machinery has to change with it (co-evolve through mutations?) in order to adapt to the changed code assignment. That would require new information to change several interacting parts, like the tRNAs and the aminoacyl-tRNA synthetases.

UGA is the only codon with an ambiguous meaning in organisms from all three domains of life; apart from functioning as a stop codon, an in-frame UGA also encodes selenocysteine (Sec), the 21st cotranslationally inserted amino acid, through a recoding mechanism that requires a tRNA with a UCA anticodon (tRNASec), a specialized translation elongation factor (SelB) and an mRNA stem-loop structure known as the selenocysteine insertion sequence element (SECIS). UAG is also ambiguous in the Methanosarcinaceae, where in addition to serving as a translational stop it also encodes pyrrolysine (Pyl), the 22nd cotranslationally inserted amino acid; in this case, a new tRNA synthetase, pyrrolysyl-tRNA synthetase (PylRS), is essential for this recoding event.

REWIRING THE KEYBOARD: EVOLVABILITY OF THE GENETIC CODE 5
The genetic code evolved in two distinct phases. First, the ‘canonical’ code emerged before the last universal ancestor; subsequently, this code diverged in numerous nuclear and organelle lineages.

Any change in the genetic code alters the meaning of a codon, which, analogous to reassigning a key on a keyboard, would introduce errors into every translated message. Although this might have been acceptable at the inception of the code, when cells relied on few proteins, the forces that act on modern translation systems are likely to be quite different from those that influenced the origin and early evolution of the code

The arbitrariness of the genetic code
Perhaps the most important implication concerns the notion of genetic information. Despite its vagueness, arbitrariness is thought to be useful in establishing how molecules like DNA might convey semantic genetic information 6
My comment:  It's not only useful. It is essential, if instructional complex information has to be generated at all.

The only generally accepted sense of “arbitrary” seems to be that the assignments could be different from what they actually are. Of course, this does not say much about the sense in which they could be different. A more substantial claim is, for example, that the genetic code could be different because an early version of the code became established by chance events rather than by selection or stereochemical factors.

My comment:  In fact, there are just these two hypotheses. Chance - or design.

The argument has not been worked out, but it seems to be based on an analogy between chemical and linguistic arbitrariness. Linguistic arbitrariness expresses the fact that the linguistic properties of a word are usually not naturally related to its meaning. The phonetic form of ‘dog’ does not reflect a property of dogs. In Peircean terms, the relation between a word and its meaning is symbolic. Similarly, the genetic code’s arbitrariness is understood as the absence of a natural connection between codons and amino acids. Chemical arbitrariness arguably establishes a language-like symbolic relation between codons and amino acids. It is then thought legitimate to attribute meaning and semantic information to genes or their components. Words and letters are conventionally related to their meanings or to signs from other alphabets, like Morse signs. It is this conventional relation which makes letters and Morse signs symbolic. The thought seems to be that arbitrariness between molecular entities establishes a similar, symbolic kind of relation between them. If one accepts that DNA and RNA contain information and that arbitrariness is essential for having information, one is committed to the claim that at least they bear the relevant chemically arbitrary relations.



1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/
2. https://sci-hub.ren/10.1016/s0022-5193(03)00025-0
3. https://sci-hub.ren/10.1007/bf00166624
4. https://sci-hub.ren/10.1038/nchembio847
5. https://www.nature.com/articles/35047500
6. https://sci-hub.ren/10.1023/b:biph.0000024412.82219.a6



Last edited by Otangelo on Tue Jan 26, 2021 4:50 pm; edited 2 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

The error-minimization or adaptation theory
According to Sonneborn’s argument reviewed by Carl Woese, selection pressure acted on a primitive genetic code that led to the generation of a mature genetic code where mutations in codons produced few adverse outcomes in terms of functional proteins. This represents an error-minimization strategy. Woese admitted that the error-minimization scheme involved innumerable “trials and errors” so that it, in his opinion, “could never have evolved in this way”.

Others have defended the theory, though some ideas once considered ingenious were subsequently admitted to be “utterly wrong”. Interestingly, one investigation of the theoretical susceptibility to error, through mutations, of a million randomly generated codes showed that the standard genetic code was among the least error-prone. This indicated that, if the initial genetic code was primitive and error-prone, then what is observed in nature is the best option. However, the question remains as to why only one single code survived. Why not several different ones? This rather stands as evidence that the Creator made a wise choice.
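The kind of comparison described above, pioneered by Freeland and Hurst, can be re-run as a toy calculation. The sketch below makes two simplifying assumptions not taken from the text: it uses Kyte–Doolittle hydropathy as the amino-acid property (the original studies used Woese’s “polar requirement”, so exact fractions differ), and it generates random codes by permuting the 20 amino acids among the synonymous-codon blocks of the standard code, keeping the stop codons fixed. It then checks whether the standard code is more robust to single-base mutations than the shuffled alternatives:

```python
# Toy error-minimization test: is the standard genetic code more robust to
# single-base substitutions than randomly shuffled codes?
# Assumptions (not from the original studies): Kyte-Doolittle hydropathy as
# the amino-acid property; random codes made by block permutation.
import random
from itertools import product

BASES = "UCAG"
# Standard code, codons ordered with the first base varying slowest.
AA_STRING = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
             "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
STANDARD = dict(zip(CODONS, AA_STRING))

# Kyte-Doolittle hydropathy values for the 20 canonical amino acids.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def error_score(code):
    """Mean squared hydropathy change over all single-base substitutions
    (stop codons excluded) -- a common proxy for mutational robustness."""
    total, count = 0.0, 0
    for codon in CODONS:
        if code[codon] == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                if code[mutant] == "*":
                    continue
                total += (HYDRO[code[codon]] - HYDRO[code[mutant]]) ** 2
                count += 1
    return total / count

def shuffled_code(rng):
    """Random code: permute the 20 amino acids among the 20 synonymous-codon
    blocks of the standard code, keeping stop codons fixed."""
    aas = sorted(set(AA_STRING) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else perm[a]) for c, a in STANDARD.items()}

rng = random.Random(0)
standard_score = error_score(STANDARD)
random_scores = [error_score(shuffled_code(rng)) for _ in range(1000)]
better = sum(s < standard_score for s in random_scores)
print(f"standard code score: {standard_score:.2f}")
print(f"random codes beating it: {better} / 1000")
```

With this toy measure the standard code scores well below the average random code; the one-in-a-million (or one-in-10^20) figures in the literature come from larger samples and different property scales, so this sketch only illustrates the method, not the exact numbers.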

The ancestral translational machinery conceived in evolutionary schemes is, of necessity, very rudimentary, and thus highly prone to errors. This means that it would have been almost impossible to correctly translate any mRNA, so it would have produced little more than statistical proteins (proteins with only random sequences). Yet through necessity, somehow, the codons of the ancestral code were gradually reconfigured in order to minimize translational error. The ‘somehow’ has been imagined as perhaps involving novel amino acids, the existence of a positive feedback mechanism that would assign codons to amino acids with similar properties, direct templating between nucleic acids and amino acids, or other possibilities.

Vetsigian and Woese subsequently proposed that horizontal gene transfer (HGT) could have spread workable genetic codes across organisms, accounting for the near universality of the genetic code. However, HGT requires that the genetic codes of the host and the recipient species be similar enough for the new genetic code to work. There would also need to be evidence for a mechanism permitting transfer of genetic information in the ancient past.

The stereochemical theory
Over the past 60 years, several theories have been set forward which attempt to explain how information in the DNA translates to protein sequences. These are based on some sort of selective stereochemical complementarity or affinity between amino acids and nucleotides (base pair triplets). On a physico-chemical level, this is based on the negative charges of the nucleotide phosphates interacting with the positive charge of the basic amino acids. In Saxinger et al.’s study no conclusive selective binding occurred between certain amino acids and nucleotide triplets. More recently, Yarus et al. contended that coding triplets arose as essential parts of RNA-like amino acid-binding sites, but they could show this for only seven of the 20 (35%) canonical amino acids. However, they conceded that the code can change.

The take-home implication is that different amino acids can be bound by different coding triplets, meaning that the code is not specific and thus historically meaningless. Overall, after decades of research, no evidence has been found which gives strong support to the stereochemical theory. Yarus’s group went on to argue that adaptation, stereochemical features and co-evolutionary changes were compatible and perhaps necessary in order to account for present codon characteristics. However, Barbieri has argued that there is “no real evidence in favour” of the stereochemical theory. This serves to illustrate the uncertainty prevailing.

The co-evolution theory
According to the co-evolution theory, the original genetic code was “excessively degenerate” meaning it could code for several amino acids. These originals were used in “inventive biosynthetic processes” to synthesize the other amino acids. The code then adapted to accommodate these new amino acids. Similarities in the codons of related amino acids were subject to computer analysis in order to determine if a better code could be found based on biosynthetically related amino acids. An extraordinary correlation was noted for the universal code, as against 32,000 randomly generated possibilities. Changing the pattern of relatedness among amino acids gave more codes equal to or of greater correlations than the universal code. However, the authors stated that these observations “cannot be used as proof for the biosynthetic theory of the genetic code”.

Less than half of the 20 canonical amino acids found in proteins can be synthesized from inorganic molecules. Furthermore, the amino acids that are missing (the so-called secondary amino acids) are also missing from material recovered from meteorites. This is problematic for evolution, for it implies that early life-forms on this planet could only use ten amino acids for protein construction, something which we don’t observe today, thereby greatly reducing the possible number of functional proteins.

The primary amino acids were coded by an ancestral genetic code, which then expanded to include all 20 canonical amino acids. The present code is a non-random structure, yet it is more robust as far as translational errors are concerned than the majority of alternative codes that can be generated conceptually according to accepted evolutionary trajectories. When the starting assumptions are altered so that the postulated codes start from an advantaged position, then higher levels of robustness are achieved. A better code could have been produced if evolution had continued, but it did not as the possibility of severe adverse effects was too great. 

Several questions present themselves here, however. Why don’t we find, in the fossils of ancient organisms, any protein sequences built only from primary amino acids? The fact that no such proteins exist is strong proof against the evolutionary origin of the genetic code. We only find proteins made up of all 20 amino acids. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62? Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, coding space could have increased, and thus a much larger universe of possible proteins could have been made possible.

An additional fundamental issue is that if life commenced in an RNA world, then amino acids could have been synthesized on the primitive codons associated with these molecules by primordial synthetases. How do similar coding rules now apply when codon recognition is performed by the anticodons of the tRNA with the assistance of the highly specific aminoacyl-tRNA-synthetases that attach to the amino acids? It has been suggested that perhaps there was a two-base code rather than a three-base one on account of the supposed limited number of amino acids available.

The accretion model of ribosomal evolution
The accretion model of ribosomal evolution is one of the most recent models. It describes how the ribosome evolved from simple RNA and protein elements into a complex organelle in six major phases, through accretion: recursively adding, subsuming and freezing segments of the rRNA in iterative processes. It is argued that the record of these changes is held in rRNA secondary and three-dimensional structures. Patterns observed in extant rRNA among organisms were used to generate rules supposedly governing the changes.

First, it is assumed that evolution occurred with changes moving from prokaryotes to eukaryotes, the apex being reached in humans. Using this framework, a chronological sequence was constructed of rRNA segment additions to the core structure found in Escherichia coli. The six-phase process envisaged provided no evidence for the emergence of ancestral RNA. The proto-mRNA is seen simply as arising from a random population of appropriate molecules. This proto-mRNA, together with tRNA, formed through condensation of a cytosine:cytosine:adenine (CCA) sequence unit, gave rise to base-pair coding triplets (codons). The ribosomal units (small and large) are considered to have arisen from loops of the rRNA. The proposed RNA loops were ‘defect-laden’, which required a protection mechanism. During phase 2 the large ribosomal unit is thought of as a crude ribozyme almost as soon as it was a recognizable structure, catalyzing nonspecific, non-coded condensation of amino acids. Finally, the two developing ribosome units came together (phase 4) to form a complex structure recognizable as a ribosome. In the next phase (5), specific interactions began to occur between anticodons in tRNA and mRNA codons to produce functional proteins. In the final phase the genetic code was optimized.

This narrative suffers from major flaws, some of which are also inherent in previous models of genetic code generation. No organisms have been found that contain ribosomes in any of these intermediary phases. If these intermediary phases were capable of ribosomal function, then why was it necessary to evolve further through additional steps? A persistent problem is how a genetic code could be generated that depends for its expression on proteins that can only be formed once it exists. Petrov et al. proposed a partial solution: the peptidyl transferase centre, an essential enzymatic component of the ribosome, arose from an rRNA fragment. This means its origin is conceived of as lying in the RNA world. The peptidyl transferase centre is the place in the 50S large subunit (LSU) where peptide bond synthesis occurs; in extant organisms the machinery is very complex. In its original incarnation, the embryonic centre was less than 100 nucleotides long. The original RNA world quickly morphed into the familiar RNA/protein world. This argument is necessary because it “has proven experimentally difficult to achieve” a self-replicating RNA system. In a revealing aside, Fox even suggested that perhaps it is not necessary to validate the existence of the RNA world if it had a short life.

Some additional problems with an RNA-world origin were noted by Strobel. An RNA commencement to life on Earth rests on the ability of RNA both to encode information and to replicate it. This proposition depends on the abilities of RNA copying enzymes (ribozymes). However, such enzymes are unable to copy long templates at a rate sufficient to outpace decomposition processes. Even greater issues are that there is no sensible resolution to the question of the abiotic origin of the activated nucleotides needed for RNA formation, or of how randomly assembled nucleotides achieved the ability to replicate. This has led some to conclude that “the model does not appear to be very plausible”. Nevertheless, undaunted, other possibilities have been invented.

https://creation.com/ribosomes-and-design


Eörs Szathmáry  Toward major evolutionary transitions theory 2.0  April 2, 2015
Stereochemical match is aided by codonic or anticodonic triplets in the corresponding binding sites although an open question is the accuracy when all amino acids and aptamers are present in the same milieu. Should this mechanism turn out to be robust, it offers a convenient road toward initial establishment of the code. The question “what for” remains, however.
https://www.pnas.org/content/112/33/10104


Mark Ridley, Evolution 3rd ed.
http://library.lol/main/F0C84F72B8E4C6D45DE7348D599AB035
In the chemical theory, each particular triplet would have some chemical affinity with its amino acid. GGC, for example, would react with glycine in some way that matched the two together. Several lines of evidence suggest this is wrong. One is that no such chemical relation has been found (and not for want of looking), and it is generally thought that one does not exist. Secondly, the triplet and the amino acid do not physically interact in the translation of the code. They are both held on a tRNA molecule, but the amino acid is attached at one end of the molecule, while the site that recognizes the codon on the mRNA is at the other end


[Image: tRNA diagram]


If the genetic code is not chemically determined, why is it the same in all species? The most popular theory is as follows. The code is arbitrary, in the same sense that human language is arbitrary. In English the word for a horse is “horse,” in Spanish it is “caballo,” in French it is “cheval,” in Ancient Rome it was “equus.” There is no reason why one particular sequence of letters rather than another should signify that familiar perissodactylic mammal. Therefore, if we find more than one people using the same word, it implies they have both learned it from a common source. It implies common ancestry. When the starship Enterprise boldly descends on one of those extragalactic planets where the aliens speak English, the correct inference is that the locals share a common ancestry with one of the English-speaking peoples of the Earth. If they had evolved independently, they would not be using English. All living species use a common, but equally arbitrary, language in the genetic code. The reason is thought to be that the code evolved early on in the history of life, and one early form turned out to be the common ancestor of all later species. (Notice that saying all life shares a common ancestor is not the same as saying life evolved only once.) The code is then what Crick (1968) called a “frozen accident.” 

My comment: Note the just-so assertion. The author neglects that there was no evolution prior to DNA replication and life.

That is, the original coding relationships were accidental, but once the code had evolved, it would be strongly maintained. Any deviation from the code would be lethal. An individual that read GGC as phenylalanine instead of glycine, for example, would bungle all its proteins, and probably die at the egg stage. The universality of the genetic code is important evidence that all life shares a single origin. In Darwin’s time, morphological homologies like the pentadactyl limb were known; but these are shared between fairly limited groups of species (like all the tetrapods). Cuvier had arranged all animals into four large groups according to their homologies. For this reason, Darwin suggested that living species may have a limited number of common ancestors, rather than just one. Molecular homologies, such as the genetic code, now provide the best evidence that all life has a single common ancestor.


The Genetic Code is... stored on one of the two strands of a DNA molecule as a linear, non-overlapping sequence of the nitrogenous bases Adenine (A), Guanine (G), Cytosine (C) and Thymine (T). These are the "alphabet" of letters that are used to write the "code words".
http://www.brooklyn.cuny.edu/bc/ahp/BioInfo/GP/GeneticCode.html

Given the different numbers of “letters” in the mRNA and protein “alphabets,” scientists theorized that combinations of nucleotides corresponded to single amino acids. Nucleotide doublets would not be sufficient to specify every amino acid because there are only 16 possible two-nucleotide combinations (4^2). In contrast, there are 64 possible nucleotide triplets (4^3), which is far more than the number of amino acids.
https://courses.lumenlearning.com/bccc-bio101/chapter/the-genetic-code/
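The counting argument above (and the quadruplet question raised earlier) is easy to verify by brute-force enumeration; a quick sketch:

```python
# Enumerate all possible "words" of length n over the 4-letter RNA alphabet:
# doublets give only 16 combinations (too few for 20 amino acids plus stop),
# triplets give 64, and hypothetical quadruplets would give 256.
from itertools import product

for n in (2, 3, 4):
    words = ["".join(w) for w in product("UCAG", repeat=n)]
    print(f"{n}-nucleotide words: {len(words)}")  # 4**n -> 16, 64, 256
```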

Jim: That we use the first letter of each chemical as shorthand does not take away the simple fact that these are chemicals
Reply: One common misconception is that natural principles are merely discovered and described by us. Consider two cans of Coca-Cola, one regular, the other diet. Both bear information that we can describe: one can contains regular Coke, the other diet. But that information does not occur naturally. A chemist invented the formulas for Coke and Diet Coke, and those depend not on descriptive but on PREscriptive information. The same occurs in nature. We discover that DNA contains a genetic code, but the rules upon which the genetic code operates are PREscriptive. The rules are arbitrary: the genetic code is CONSTRAINED to behave in a certain way. Chemical principles govern specific RNA interactions with amino acids, but principles that govern have to be set by? Yes, precisely what atheists try to avoid at any cost: INTELLIGENCE. There is no physical necessity for the nucleotide triplet forming the codon CUU (cytosine, uracil, uracil) to be assigned to the amino acid leucine. Intelligence assigns and sets rules. For translation, each of these codons requires a tRNA molecule bearing an anticodon that can stably base-pair with the messenger RNA (mRNA) codon, like lock and key. So at one end of the tRNA there is the anticodon sequence that pairs with CUU, and at the other end of the tRNA molecule there is the site where the assigned amino acid, leucine, is attached. And here comes the BIG question: How was that assignment set up? How did it come to be that a tRNA pairs the CUU codon with leucine? The two binding sites are distant from each other; no chemical reaction physically constrains that order or relationship. That is a BIG mystery, which science is attempting to explain naturally, but without success. Here we have the CLEAR imprint of an intelligent mind that was necessary to set these rules.
That led Eugene Koonin to confess in the paper: "Origin and evolution of the genetic code: the universal enigma" : It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

In the genetic code, there are 4^3 = 64 possible codons (tri-nucleotide sequences). Atheists also mock and claim that it is not justified to describe the genetic code as a language. But that is not true. In the standard genetic code, three of these 64 mRNA codons (UAA, UAG and UGA) are stop codons. These terminate translation by binding to release factors rather than tRNA molecules; they instruct the ribosome to stop polymerization of a given amino acid strand. Did unguided natural occurrences, in a vast sequence space of possibilities, stumble by lucky accident upon the fact that an amino acid polymer forming a protein requires a defined, limited size that has to be INSTRUCTED by the genetic instructions, and for that reason assign release factors rather than amino acids to specific codon sequences, in order to be able to instruct the termination of an amino acid string? That makes, frankly, no sense whatsoever. Not only that: this characterizes the fact that the genetic code IS a language. That is described in the following science paper: The genetic language: grammar, semantics, evolution 2
The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics.
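The “punctuation marks” the paper mentions can be made concrete with a minimal decoder. The sketch below uses the standard codon table: reading begins at an AUG start codon and a stop codon (UAA/UAG/UGA, mapped to '*') terminates the chain, just as release factors do at the ribosome. The example mRNA string is, of course, invented for illustration:

```python
# Minimal sketch of mRNA decoding with start/stop "punctuation".
from itertools import product

BASES = "UCAG"
# Standard code, codons ordered with the first base varying slowest;
# '*' marks the three stop codons.
AA = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

def translate(mrna):
    """Read the mRNA in non-overlapping triplets from the first AUG;
    a stop codon terminates the chain (release factor, not a tRNA)."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODE[mrna[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

print(translate("GGAUGCUUAAAUGAGG"))  # -> "MLK" (Met-Leu-Lys, stopped by UGA)
```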

1. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/
2. http://www.ncbi.nlm.nih.gov/pubmed/8335231


A response to: The Argument from Genetic Code (DNA) DEBUNKED | John Lennox & Ken Ham

https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#8986

https://www.youtube.com/watch?v=eJCmerK0DjQ


Axel Kramer:  So, after sitting through 15 min of Perry supposedly "debunking" the design argument, the only takeaway is a 'birds and the bees' talk somehow more unbearable than the one my parents dispensed.
Firstly, Stephen, it wasn't that long into the video that you committed the 'bandwagon' fallacy by referencing "97% of scientists" holding a materialistic worldview. An argument fails or succeeds on its own merit. It wasn't that long ago that the majority of scientists were lauding eugenics.

With that said, seeing as it was left to Perry to offer anything in the way of a counter-argument, I'll respond to his claims. In essence, his argument can be simplified as follows: We see forms of communication in nature, therefore DNA. That's it. It is hilariously erroneous to compare a flower-to-pollinator 'signal' to the sheer brilliance that is our genome. The likes of bees locate flowers predominantly by way of colour signals and electric fields. There is no code or information embedded within these 'signals'! This is both a false equivalence fallacy and a straw man.

But somehow, Perry outdoes himself by claiming that the genetic code is "incredibly simple" when compared to human languages. One has to stop here and wonder if Perry is either 1. A fraud. 2. Delusional. Or, 3. Assumes everyone watching this is an idiot.

Unlike human and programming languages, which essentially operate as a linear, one-dimensional, one-way, sequential code (like the lines of letters and words on this page), DNA information is overlapping, multi-layered and multi-dimensional; it reads both backward and forward, and the ‘junk’ is far more functional than the protein code, so there is no fossilized history of evolution. No human engineer has ever even imagined, let alone designed, an information storage device anything like it. Moreover, the vast majority of its content is meta-information: information about how to use information. Meta-information cannot arise by chance because it only makes sense in the context of the information it relates to. Finally, 95% of its functional information shows no sign of having been naturally selected; on the contrary, it is rapidly degenerating!

Furthermore, according to the authors of ENCODE, 95% of the functional transcripts (genic and UTR transcripts with at least one known function) show no sign of selection pressure (i.e. they are not noticeably conserved and are mutating at the average rate). This contradicts Charles Darwin’s theory that natural selection is the major cause of our evolution. It also creates an interesting paradox: cell architecture, machinery and metabolic cycles are all highly conserved (e.g. the human insulin gene has been put into bacteria to produce human insulin on an industrial scale), while most of the chromosomal information is freely mutating. How could this state of affairs be maintained for the supposed 3.8 billion years since bacteria first evolved? A better answer might be that life is only thousands, not billions of years old. It also looks like cells, not genes, are in control of life—the direct opposite of what neo-Darwinists have long assumed.

What of it, Jon? Why did you make the blatantly false claim of the genetic code (of even the simplest life forms, for that matter) as being "incredibly simple"? Why did you not mention the likes of meta-information?
But wait! It gets worse! The order of amino acids in proteins is determined by information coded on genes. There are over 1.51 × 10^84 possible genetic codes based on mapping 64 codons to 20 amino acids and a ‘stop’ signal (i.e. 64 → 21). The origin of code-based genetics is for evolutionists an utter mystery, since this requires a large number of irreducibly complex machines: ribosomes, RNA and DNA polymerases, aminoacyl-tRNA synthetases (aaRS), release factors, etc. These machines consist for the most part of proteins, which poses a paradox: dozens of unrelated proteins are needed (plus several special RNA polymers) to process the encoded information. Without them, the genetic code won’t work, but generating such proteins requires that the code already be functional.
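The 1.51 × 10^84 figure quoted above can be checked directly: it is, assuming the usual reading of the claim, the number of surjective maps from the 64 codons onto 21 meanings (20 amino acids plus stop), i.e. codes in which every meaning is used at least once, computed by inclusion-exclusion:

```python
# Count the maps from 64 codons onto 21 meanings (20 amino acids + stop)
# in which every meaning appears at least once, by inclusion-exclusion
# over the subsets of meanings that could be left out.
from math import comb

CODONS, MEANINGS = 64, 21
n_codes = sum((-1) ** k * comb(MEANINGS, k) * (MEANINGS - k) ** CODONS
              for k in range(MEANINGS + 1))
print(f"{n_codes:.2e}")  # roughly 1.5e+84
```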

This is one of many examples of the ‘chicken-and-egg’ dilemmas faced by materialists (interestingly, no mention of them in this video!). Another is the need for a reliable source of ATP for amino acids to polymerise into proteins: without the necessary proteins and genes already in place, such ATP molecules won’t be produced. In addition, any genetic replicator needs a reliable ‘feedstock’ of nucleotides and amino acids, but several of the metabolic processes used by cells are interlinked. For example, until various amino acid biosynthetic networks are functional, the nucleotides can’t be metabolized. These are some of the reasons we believe natural processes did not produce the genetic code step-wise.
Finally, I think it best we end on quotes from Jack Trevors and David Abel. In their paper “Chance and necessity do not explain the origin of life” (Cell Biology International 28), they state the following: “Thus far, no paper has provided a plausible mechanism for natural-process algorithm-writing.” Jack Trevors is also on record stating, “Genetic instructions don’t write themselves any more than a software program writes itself”.
So, what do you say in response, Jon?
TLDR: There is precisely nothing in the way of a naturalistic mechanism for the origin of DNA, cells, and the genetic code mentioned anywhere in this video - because there is nothing materialists can offer as a plausible explanation. That you go so far as to call it "incredibly simple" is astonishing, to say the least.
And finally, you'll find that Lennox is perfectly aware that the "letters" of DNA represent molecules. I can easily find quotes by secular scientists using the exact same language. All you succeed in displaying is a nauseating sense of pompousness.

Steve : 97% of scientists claim that we have evolved over time
Reply:  Not only is that an argument ad populum, but the origin of the genetic code is an abiogenesis problem, that has nothing to do with evolution.

Steve :  Argument from ignorance: Nobody knows how language naturally arises, therefore it cannot naturally arise.
Reply:  The genetic code is actually not a language but a translation program that assigns 64 codons to 20 amino acids. Translation programs and assignments of meaning always have an intelligent origin.

The Genetic Code was most likely implemented by intelligence.
1. In communications and information processing, a code is a system of rules for converting information, such as a letter or word, into another form (another letter, word, etc.).
2. In translation, 64 genetic codons are assigned to 20 amino acids. The code refers to this assignment of codons to amino acids, and is thus the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, mapping, permuting.
4. The universal triple-nucleotide genetic code can be the result either of a) random selection through evolution, or b) intelligent implementation.
5. We know by experience that performing value assignment and codification is always a process of intelligence with an intended result. Non-intelligence, i.e. matter, molecules, nucleotides, etc., has never been demonstrated to generate codes, and has neither intent nor distant goals, nor the foresight to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.

The argument of the origin of codes
1. In cells, information is encoded through the genetic code, a set of rules stored in DNA as sequences of nucleotide triplets called codons. The information distributed along a strand of DNA is biologically relevant; in computerspeak, genetic data are semantic data. Consider the way in which the four bases A, G, C, and T are arranged in DNA. As explained, these sequences are like letters in an alphabet, and the letters may spell out, in code, the instructions for making proteins. A different sequence of letters would almost certainly be biologically useless: only a very tiny fraction of all possible sequences spells out a biologically meaningful message. Codons are used to translate genetic information into amino acid polypeptide sequences, which make up proteins (the molecular machines, the workhorses of the cell). The information sent through the system, and the communication channels that permit encoding, sending, and decoding, depend in life on over 25 extremely complex molecular machine systems, which also perform error checking and repair to maintain genetic stability, minimize replication, transcription and translation errors, and permit organisms to pass genetic information accurately to their offspring and survive. This system had to be set up before life began, because life depends on it.
2. A code is a system of rules in which symbols (letters, words, or even sounds, gestures, or images) are assigned to something else. Translating information through a key, code, or cipher can be done, for example, by mapping the symbols of an alphabet to the symbols of kanji, the logographic characters used in Japan.
3. Intelligent design is the most causally adequate explanation for the origin of the sequence-specific digital information (the genetic text) necessary to produce a minimal proteome to kick-start life. The assembly information stored in genes, and the assignment of codons (nucleotide triplets) to amino acids, must be pre-established by a mind. Assignment, which means designating, ascribing, corresponding, or correlating the meaning of characters through a code system in which the symbols of one language are assigned to symbols of another language with the same meaning, requires a common agreement of meaning in order to establish communication through encoding, sending, and decoding. Semantics, syntax, and pragmatics are always set up by intelligence. The origin of such complex communication systems is best explained by an intelligent designer.
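The encoding/decoding described in the points above can be sketched as a lookup through a shared code table. This is a minimal illustrative sketch, not the biological mechanism: the five-entry table is a hand-picked subset of the standard codon table, and in the cell this lookup is carried out by tRNAs, aminoacyl-tRNA synthetases, and the ribosome, not by software.

```python
# A minimal sketch of decoding through a shared code table.
# Assumption: CODON_TABLE is a hand-picked subset of the standard
# genetic code, used only to illustrate symbol-to-meaning assignment.
CODON_TABLE = {
    "AUG": "Met",   # start codon, methionine
    "UUA": "Leu",   # leucine
    "CAU": "His",   # histidine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop signal
}

def decode(mrna):
    """Read an mRNA string three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        meaning = CODON_TABLE[mrna[i:i + 3]]
        if meaning == "STOP":
            break
        peptide.append(meaning)
    return peptide

print(decode("AUGUUACAUGGCUAA"))  # ['Met', 'Leu', 'His', 'Gly']
```

Note that nothing in the chemistry of the string "UUA" resembles leucine; the correspondence exists only in the table, which both sender and receiver must share.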

1. The origin of the genetic cipher 
1. Triplet codons must be assigned to amino acids to establish a genetic cipher. Nucleic-acid bases and amino acids do not recognize each other directly but have to interact via chemical intermediaries (tRNAs and aminoacyl-tRNA synthetases), so there is no obvious reason why particular triplets should go with particular amino acids.
2. Other translation assignments are conceivable, but whatever cipher is established, the right amino acids must be assigned to permit polypeptide chains that fold into active, functional proteins. Functional amino acid chains are rare in sequence space. There are two possibilities to explain the correct assignment of codons to the right amino acids: chance and design. Natural selection is not an option, since DNA replication was not yet in place before the first self-replicating cell, and this assignment had to be established before that point.
3. If it were a lucky accident, chance would have had to hit the jackpot through trial and error among roughly 1.5 × 10^84 possible genetic code tables, a number far exceeding the estimated 10^80 atoms in the observable universe. That puts any realistic chance of achieving the feat out of the question; by Borel's criterion, it lies in the realm of practical impossibility. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that is universal. Put simply, the chemical lottery lacks the time necessary to find the universal genetic code.
4. We have not even considered that there are also over 500 known amino acids, which would have to be sorted out to arrive at only 20, with exclusively left-handed amino acids and right-handed sugars selected.
5. We know that minds do invent languages, codes, translation systems, ciphers, and complex, specified information all the time. 
6. Put another way: the task compares to inventing two languages, two alphabets, and a translation system, and to creating the information content of a book (for example, Hamlet) in English and translating it into Chinese, through the invention and application of an extremely sophisticated hardware system.
7. The genetic code and its translation system are best explained through the action of an intelligent designer. 
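The 1.5 × 10^84 figure cited above can be reproduced combinatorially, on the assumption that it counts the surjective mappings from 64 codons onto 21 meanings (20 amino acids plus a stop signal), i.e. every meaning receives at least one codon; other counting conventions give different totals. The inclusion-exclusion sketch below computes that count exactly:

```python
from math import comb

# Count the ways to map 64 codons onto 21 meanings (20 amino acids
# + stop) so that every meaning gets at least one codon: the number of
# surjections, via inclusion-exclusion. This is one common way the
# ~1.5 x 10^84 figure is derived (an assumption; counting conventions
# for "possible code tables" vary in the literature).
codes = sum((-1) ** j * comb(21, j) * (21 - j) ** 64 for j in range(22))
print(f"{codes:.2e}")  # 1.51e+84
```

The exact integer arithmetic confirms the order of magnitude quoted in the argument: about 1.5 × 10^84 distinct code tables under this counting convention.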

Steve :  Language can emerge through co-evolution
Reply: How could consciousness, logic and language evolve from matter?
https://reasonandscience.catsboard.com/t1334-the-origin-of-language#6045

1. Minds exist which have and use objective logic.
2. Objective logic cannot be based on our subjective minds, a non-static universe or immaterial abstractions outside of a mind.
3. Objective logic depends on, and can only derive from, a pre-existing necessary first mind with objective logic.

In the comments section of the video:

Me:   
1. In communications and information processing, code is a system of rules to convert information—such as assigning the meaning of a letter, word, into another form, ( as another word, letter, etc. )
2. In translation, 64 genetic codons are assigned to 20 amino acids. The code refers to the assignment of the codons to the amino acids, and is thus the cornerstone template underlying the translation process.
3. Assignment means designating, dictating, ascribing, corresponding, correlating, specifying, representing, determining, or mapping.
4. The universal triple-nucleotide genetic code can be either a) the result of random selection through evolution, or b) the result of intelligent implementation.
5. We know from experience that value assignment and codification are always processes of intelligence aimed at an intended result. Non-intelligent causes (matter, molecules, nucleotides, etc.) have never been demonstrated to generate codes, and they have neither intent nor the foresight toward distant goals needed to produce specific outcomes.
6. Therefore, the genetic code is the result of an intelligent setup.

Oners82 @Otangelo Grasso  
Pretty much every biologist on the planet says you're wrong, and so do I. Your argument is just creationist BS that has been refuted a thousand times over. Premise 1 is false, premise 4 is a false dichotomy, premise 5 is BLATANTLY false, but even if your premises were true the conclusion still does not follow because your argument is invalid - it's a non sequitur. Try harder, creationist boy.

Otangelo Grasso @Oners82:    thats funny, premise 1 is word by word from wiki about code. And 5 is also not a false dichotomy, because there was no evolution prior dna replication. and you need a fully developed genetic code for life to start.  I don't have to try harder, you don't know what you are talking about, and your last sentence shows you are a ultracrepidarian.

Oners82 @Otangelo Grasso
"premise 1 is word by word from wiki about code."

LIAR. If you had just copied and pasted from Wiki without acknowledging it that would be bad enough because you would be guilty of plagiarism, but that isn't what you did. You DOCTORED the quote from Wiki because you are a devious POS so you changed the quote to suit your argument. That just makes you a dishonest, deceptive troll.

Also, that article on Wiki is BS anyway because the quote that you took and doctored is not sourced. It is just opinion, nothing more. The entire article only has two sources and one of those is a dead link, so it actually only has one source and that is just to another Wiki opinion, NOT a reliable, academic source.

I therefore stand by my claim that premise 1 is false, and I can tell you why if you like. It is so obvious that I shouldn't need to tell you why, but you are obviously not the sharpest tool in the toolbox so I'll explain why if you need me to.

"And 5 is also not a false dichotomy, because there was no evolution prior dna replication."

Non sequitur. You reasoning literally makes no sense because your reason does not justify your claim.
And your claim is false anyway because there WAS evolution prior to DNA replication, for example with RNA replication.

"you need a fully developed genetic code for life to start."

No, you don't. There is a reason why every biologist says you're wrong you uneducated fool...

" you don't know what you are talking about... you are a ultracrepidarian"

This coming from the guy with no education but thinks he knows better than the experts who spend their lives dedicated to understanding biology...
You are genuinely too stupid to see the irony in your statement, aren't you...
I am facepalming so hard right now you have no idea!

P.S. Your argument is still invalid even if most of your premises weren't false.

Otangelo Grasso @Oners82: 
Well, first of all, i did not copy word for word what wiki states, namely: In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium - but I paraphrased it: code is a system of rules to convert information—such as assigning the meaning of a letter, word, into another form - so again, your claim that no1 is false, is wrong. If you google: What does code mean? the first answer that comes up is: " a system of words, letters, figures, or other symbols substituted for other words, letters, etc., especially for the purposes of secrecy. ". That's precisely what the genetic code does. It assigns the meaning of 64 codons to 20 amino acids. That is not metaphorically so, but literally, and factually so. So you are WRONG. And NO. There was no evolution prior to DNA replication, because, quoting Paul Davies: 

Why Darwinian evolution does NOT explain the origin of life Sep 2, 2021
I think in all honesty a lot of people even confuse it the people who aren't familiar with the area that oh I presume Darwinian evolution sort of accounts for the origin of life but of course, you don't get an evolutionary process until you've got a self-replicating molecule. ( Darwin )  gave us a theory of evolution about how life has evolved but he uh didn't want to tangle with how you go from non-life to life and for me, that's a much bigger step. 

RNA replication is also a widely believed fairy tale, that you, uninformed, simply blindly believe. To quote Christian de Duve: 

The Beginnings of Life on Earth   September-October 1995 issue of American Scientist.
Contrary to what is sometimes intimated, the idea of a few RNA molecules coming together by some chance combination of circumstances and henceforth being reproduced and amplified by replication simply is not tenable. There could be no replication without a robust chemical underpinning continuing to provide the necessary materials and energy.

I utterly enjoy your name-calling and bitching about me, and at the same time exposing your lack of knowledge and understanding of the subject, holding to superficial uneducated beliefs, endorsing popularizers like Stephen and Jon Perry, well-known pseudo-science propagators,  and exposing the size and majesty of a true  Dunning Kruger. Facepalms.

https://reasonandscience.catsboard.com

Otangelo


Admin

Michael Yarus Evolution of the Standard Genetic Code 24 January 2021
https://link.springer.com/article/10.1007/s00239-020-09983-9

Laurence D Hurst: Protein evolution: causes of trends in amino-acid gain and loss 2006 Aug 24
https://pubmed.ncbi.nlm.nih.gov/16929253/

Regine Geyer: ​On the efficiency of the genetic code after frameshift mutations May 21, 2018
https://peerj.com/articles/4825/


Jean Lehmann Emergence of a Code in the Polymerization of Amino Acids along RNA Templates 2009 Jun 3
The origin of the genetic code in the context of an RNA world is a major problem in the field of biophysical chemistry.  

A major issue about the origin of the genetic system is to understand how coding rules were generated before the appearance of a family of coded enzymes, the aminoacyl-tRNA synthetases. Each of these ∼20 different enzymes has a binding pocket specific for one of the 20 encoded amino acids, and also displays an affinity for a particular tRNA, the adaptor for translation. These adaptors are characterized by their anticodons, a triplet of bases located on a loop. The synthetases establish the code by attaching specific amino acids onto the 3′ ends of their corresponding tRNAs, a two-step process called aminoacylation.

Although the molecular organization of the genetic code is now known in detail, there is still no agreement on the reason(s) for which it has emerged. Early studies have shown that the codon table is highly structured with respect to amino acid hydrophobicity properties, suggesting that basic physico-chemical considerations could contain the solution to this problem. More recent works have shown that this table is ordered with respect to features of the aminoacyl-tRNA synthetases and the tRNAs. For instance, the mechanisms of aminoacylation as well as identity elements on the tRNAs are specific to certain groups of codons. Although these facts are fundamental, and have inspired scenarios for the evolution and the expansion of the code, evolutionary considerations may not, in essence, provide an answer to the origin of the code (since it is a prerequisite for biological evolution).

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2685977/


A Thermodynamic Basis for Prebiotic Amino Acid Synthesis and the Nature of the First Genetic Code
https://www.liebertpub.com/doi/10.1089/ast.2008.0280

Consensus temporal order of amino acids and evolution of the triplet code
https://www.sciencedirect.com/science/article/abs/pii/S0378111900004765?via%3Dihub

Modern diversification of the amino acid repertoire driven by oxygen
https://www.pnas.org/doi/full/10.1073/pnas.1717100115

In the same sense as the word chair has symbolic meaning, so does the codon word UUA of the genetic code. While the word chair is assigned to mean an object with a separate seat for one person, typically with a back and four legs, the codon UUA means Leucine, one of the 20 amino acids used in life to make proteins, the workhorses of the cell. In both cases there is a meaning assignment. The genetic code is the bridge between the digital data on one side and the analog system on the other. On one side is the blueprint, and on the other, the machine; in the middle is the adapter, the genetic code system, which transforms the data into the machine.
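The parallel drawn here between the word "chair" and the codon UUA can be made concrete as two assignment tables of identical logical form. This is only an illustrative sketch of the analogy; the dictionary entries are informal glosses, not biochemical data.

```python
# Two assignment tables with the same logical form (illustrative only):
# an English word assigned to a meaning, and a codon assigned to an
# amino acid.
english = {
    "chair": "a seat for one person, typically with a back and four legs",
}
genetic = {
    "UUA": "Leucine",  # per the standard genetic code
}

# In neither case does the symbol physically resemble what it designates;
# the mapping itself serves as the adapter between the two domains.
print(english["chair"])
print(genetic["UUA"])
```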



1. The genetic code and genetic information are analogous to human language. Codons are words; sequences of codons (genes) are sentences. Both contain semantic meaning.
2. Codon words are assigned to, and code for, amino acids; genes, which are strings containing codified, complex specified information, instruct the assembly of proteins.
3. Semantic meaning is non-material. Therefore, the origin of the genetic code and of genetic information is non-material.
4. Instructional assembly information to make devices for specific purposes always comes from a mind. Therefore, genetic information comes from a mind.


Co-evolution Hypothesis of Codon Assignments 

The Co-evolution Hypothesis of Codon Assignments, first proposed by J. Tze-Fei Wong in 1975, offers a hypothesis on the origin of the genetic code. This hypothesis suggests that the genetic code and the biosynthetic pathways for amino acids evolved in tandem, shaping each other through a process of mutual adaptation. According to this model, the earliest form of life utilized a limited set of amino acids, which gradually expanded as new biosynthetic pathways emerged. As these pathways developed, they influenced the assignment of codons to specific amino acids, creating a feedback loop that drove the evolution of both the genetic code and the metabolic network.

This hypothesis provides a framework for understanding the fundamental processes that gave rise to life on Earth. The biosynthetic pathways and enzymes involved in amino acid production are essential for the emergence and sustenance of life. They form the backbone of protein synthesis, which is a cornerstone of all known biological systems. The Co-evolution Hypothesis suggests that these pathways were not merely a prerequisite for life but actively shaped the very language of genetics. It is important to note that while this hypothesis offers valuable insights, it is not without alternatives: some scientists propose different models for the origin of the genetic code, such as the Frozen Accident Hypothesis or the Stereochemical Hypothesis.

Interestingly, the existence of multiple, non-homologous pathways for amino acid biosynthesis across different organisms raises questions about the universality of these processes. This diversity could be interpreted as evidence for polyphyletic origins of life, challenging the notion of a single, universal common ancestor. The complexity and diversity of these essential biosynthetic pathways present a significant challenge to explanations relying solely on unguided, naturalistic processes.
The intricate interplay between genetic information and metabolic function, as proposed by the Co-evolution Hypothesis, suggests a level of coordination and specificity that is difficult to account for through random events alone. This complexity invites consideration of alternative explanations for the origin of life and the genetic code, potentially including directed or purposeful processes.

Unresolved Challenges in the Co-emergence Hypothesis of Codon Assignments

1. Interdependence of Genetic Code and Biosynthetic Pathways
The Co-emergence Hypothesis of Codon Assignments posits that the genetic code and amino acid biosynthetic pathways emerged together, mutually influencing one another. A fundamental challenge lies in explaining how these two highly complex systems could co-emerge without invoking a guided process. The specificity required for assigning codons to amino acids, in tandem with the development of the metabolic pathways needed to produce those amino acids, suggests a level of coordination that is difficult to attribute to naturalistic processes.

For example, the assignment of specific codons to newly synthesized amino acids implies a functional genetic code was already in place. However, this presupposes the simultaneous availability of both a codon recognition system (e.g., tRNAs and ribosomes) and the amino acid biosynthetic enzymes. The emergence of these interconnected systems, each dependent on the other for functionality, presents a significant conceptual problem.

Conceptual problem: Simultaneous Emergence and Functional Interdependence
- There is no known mechanism by which both the genetic code and biosynthetic pathways could emerge simultaneously without coordination.
- The challenge lies in explaining the origin of these interdependent systems in the absence of a pre-existing, functional framework.

2. Specificity and Precision in Codon Assignments
The Co-emergence Hypothesis suggests that as new amino acids emerged through biosynthetic pathways, they were incorporated into the genetic code through the assignment of specific codons. This process requires an extraordinary level of precision and specificity, as the incorrect assignment of codons could lead to dysfunctional proteins and hinder cellular function. The emergence of a highly specific and error-free codon assignment system under naturalistic conditions remains unexplained.

Moreover, the hypothesis presupposes that the translation machinery (e.g., tRNAs, aminoacyl-tRNA synthetases, and ribosomes) was capable of recognizing and correctly assigning codons to newly synthesized amino acids. The exact mechanisms by which such specificity and precision could be established and maintained from the earliest stages of life are not addressed by the Co-emergence Hypothesis.

Conceptual problem: Establishing and Maintaining Specificity
- The difficulty lies in explaining how a precise and functional codon assignment system could emerge without errors under naturalistic conditions.
- The origin of the translation machinery capable of recognizing and assigning codons with high fidelity remains unresolved.

3. Lack of Molecular Homology Among Biosynthetic Pathways
One of the key issues challenging the Co-emergence Hypothesis is the existence of multiple, non-homologous pathways for amino acid biosynthesis across different organisms. These diverse pathways often lack common ancestry at the molecular level, suggesting independent origins. This diversity challenges the idea that the genetic code and biosynthetic pathways co-emerged in a uniform, universal manner.

For instance, certain amino acids, such as tryptophan, are synthesized through completely different biosynthetic routes in different organisms. The lack of homology between these pathways raises questions about how a coherent genetic code could emerge if the biosynthetic mechanisms for producing its constituent amino acids were not universally shared.

Conceptual problem: Independent Origins of Biosynthetic Pathways
- The challenge is to explain how the genetic code could have co-emerged with biosynthetic pathways that are not homologous across different forms of life.
- The existence of diverse biosynthetic routes suggests that the genetic code may not have co-emerged with a single, universal metabolic network.

4. Feedback Mechanisms and Codon Reassignment
The Co-emergence Hypothesis implies that feedback mechanisms between amino acid availability and codon assignments played a crucial role in shaping the genetic code. However, the emergence of such feedback loops, where the genetic code and biosynthetic pathways influence each other, requires the existence of complex regulatory systems. Explaining the origin of these regulatory networks, which would need to operate effectively from the earliest stages of life, is a significant challenge.

Additionally, the process by which codon reassignments could occur without disrupting existing protein synthesis remains problematic. Codon reassignment would require not only changes in the genetic code but also corresponding changes in the translation machinery and amino acid biosynthesis, all of which would need to occur simultaneously to maintain cellular function.

Conceptual problem: Origin of Feedback Mechanisms and Codon Reassignment
- The challenge lies in explaining how feedback mechanisms that allow for codon reassignment could emerge without pre-existing regulatory systems.
- The simultaneous changes required in the genetic code, translation machinery, and metabolic pathways are difficult to account for within a naturalistic framework.

5. Inadequacy of Current Naturalistic Models
The complexity and interdependence observed in the Co-emergence Hypothesis highlight significant gaps in current naturalistic models. The hypothesis requires a level of coordination and precision in the simultaneous emergence of the genetic code and biosynthetic pathways that naturalistic processes struggle to explain. The lack of empirical evidence supporting the naturalistic formation of such complex systems under prebiotic conditions further underscores the limitations of existing models.

Current models often assume a gradual, stepwise accumulation of functional complexity. However, the Co-emergence Hypothesis suggests that both the genetic code and biosynthetic pathways needed to be functional from the outset, raising questions about the feasibility of such a scenario arising through natural, unguided processes.

Conceptual problem: Insufficiency of Existing Explanatory Frameworks
- There is a need for new hypotheses that can adequately account for the simultaneous emergence of complex, interdependent systems such as the genetic code and biosynthetic pathways.
- The lack of empirical support for the naturalistic origin of these systems under prebiotic conditions highlights the need for alternative explanations.

6. Open Questions and Future Research Directions
Several critical questions remain unanswered regarding the Co-emergence Hypothesis of Codon Assignments. How could a highly specific and interdependent genetic code and biosynthetic network emerge under prebiotic conditions? What mechanisms could facilitate the simultaneous development and integration of these systems? How can we reconcile the immediate functional necessity of both the genetic code and metabolic pathways with the challenges of their unguided origin?

Addressing these questions will require innovative research approaches that go beyond current naturalistic models. Experimental simulations, advanced computational modeling, and interdisciplinary studies combining insights from molecular biology, systems biology, and prebiotic chemistry may provide new perspectives on the origins of the genetic code. Additionally, exploring alternative theoretical frameworks that consider non-naturalistic explanations may offer a more comprehensive understanding of the origins of life.

Future research should focus on identifying plausible prebiotic conditions that could support the emergence of such complex systems. Investigating potential simpler precursors or analogs to the genetic code and biosynthetic pathways may provide insights into their origins. However, much work remains to develop coherent models that can adequately explain the co-emergence of these fundamental biological systems.

Conceptual problem: Need for Novel Hypotheses and Methodologies
- There is an urgent need for new research strategies and hypotheses that can address the origins of the genetic code and biosynthetic pathways.
- Developing comprehensive models that effectively explain the simultaneous emergence and integration of these systems remains a significant challenge.

21.2.3. Stereochemical Theory of Codon Assignment  

The Stereochemical Theory of Codon Assignment, initially proposed by Carl Woese in 1967, presents a hypothesis regarding the origin of the genetic code. This theory posits that the association between codons and amino acids arose from direct chemical interactions between nucleic acids and amino acids. According to this model, the physical and chemical properties of both nucleotides and amino acids played a determining role in establishing the codon-amino acid pairings we observe in modern organisms.

This hypothesis suggests that the genetic code's structure is not arbitrary but rather reflects inherent chemical affinities. The theory proposes that specific triplet sequences of nucleotides have a natural tendency to bind preferentially to certain amino acids due to their stereochemical compatibility. This intrinsic relationship would have been essential for the emergence of a functional translation system in early life forms. The Stereochemical Theory offers an elegant explanation for how the complex process of protein synthesis could have originated. It provides a potential mechanism for the initial establishment of codon-amino acid associations without requiring a pre-existing, sophisticated biological machinery, and it is central to proposals for how life could have transitioned from a hypothetical RNA world to the DNA-RNA-protein world we observe today.

However, while the Stereochemical Theory provides valuable insights, it is not the only proposed explanation for the origin of the genetic code. Alternative hypotheses, such as the Adaptive Theory or the Frozen Accident Theory, offer different perspectives on this fundamental question. The existence of multiple, competing theories underscores the complexity of the problem and the current limitations of our understanding. Interestingly, the diversity of codon assignments observed across different organisms, particularly in mitochondrial genomes, raises questions about the universality of the genetic code. This variation could be interpreted as evidence for multiple, independent origins of translation systems, challenging the concept of a single, universal common ancestor.

The specificity of codon-amino acid associations, as proposed by the Stereochemical Theory, presents a significant challenge to explanations relying solely on unguided, naturalistic processes. The precise matching between codons and amino acids, potentially based on complex stereochemical interactions, suggests a level of organization and specificity that is difficult to account for through random events alone. This complexity invites consideration of alternative explanations for the origin of the genetic code, potentially including directed or purposeful processes.

Unresolved Challenges in the Stereochemical Theory of Codon Assignment

1. Chemical Specificity of Codon-Amino Acid Interactions
The Stereochemical Theory of Codon Assignment suggests that codons and their corresponding amino acids are matched based on inherent chemical affinities. A significant challenge lies in identifying and demonstrating the precise stereochemical interactions that would have driven these specific pairings. While some studies have shown possible direct interactions between nucleotides and amino acids, the evidence is limited, and the proposed chemical affinities often do not account for the full range of codon assignments observed in the universal genetic code.

For instance, while certain codons have been experimentally shown to bind to their respective amino acids or their precursors, many codon-amino acid pairings do not exhibit such straightforward stereochemical relationships. This lack of universal applicability raises questions about the adequacy of the Stereochemical Theory in explaining the entirety of the genetic code.

Conceptual problem: Incomplete Chemical Affinities
- The challenge is to demonstrate consistent and universal chemical affinities between all codons and their corresponding amino acids.
- The lack of experimental evidence supporting the stereochemical basis for every codon-amino acid pairing undermines the theory's explanatory power.

2. Diversity and Variability of the Genetic Code
The Stereochemical Theory must contend with the fact that the genetic code is not entirely universal. Variations in codon assignments, particularly in mitochondrial genomes and some prokaryotes, challenge the idea that codon-amino acid pairings are solely determined by fixed chemical interactions. If the genetic code were based purely on stereochemistry, one would expect a more rigid and universally conserved codon assignment pattern. The observed variability suggests that factors other than stereochemical affinity may have influenced the development of the genetic code.

This variability in codon assignments across different species and organelles raises questions about the theory's ability to explain the origin of the genetic code in a diverse array of biological systems. It also suggests that other mechanisms, possibly including adaptive or functional considerations, may have played a role in shaping the genetic code.

Conceptual problem: Codon Assignment Variability
- The observed diversity in codon assignments across different organisms and organelles challenges the universality of the stereochemical interactions proposed by the theory.
- The theory must account for the variability in the genetic code while maintaining a coherent explanation for its origins.

3. Prebiotic Conditions and the Emergence of Specific Codon-Amino Acid Pairings
One of the critical challenges for the Stereochemical Theory is explaining how specific codon-amino acid pairings could have emerged under prebiotic conditions. The theory assumes that certain nucleotides and amino acids would naturally interact and form stable complexes, leading to the establishment of the genetic code. However, the conditions on the early Earth that would have facilitated such interactions are poorly understood, and it remains unclear whether the necessary concentrations of nucleotides and amino acids were present in the right environments.

Furthermore, the spontaneous formation of specific codon-amino acid pairs in the absence of a pre-existing translation system is highly speculative. The transition from these hypothetical interactions to a fully functional genetic code capable of directing protein synthesis represents a significant gap in the theory that has yet to be adequately addressed.

Conceptual problem: Prebiotic Plausibility
- The theory faces challenges in explaining how specific codon-amino acid interactions could have formed under plausible prebiotic conditions.
- The lack of evidence for the spontaneous formation of stable codon-amino acid complexes in early Earth environments raises questions about the theory's viability.

4. Transition from Stereochemical Interactions to a Functional Genetic Code
Even if stereochemical interactions between codons and amino acids existed, transitioning from these simple interactions to a fully functional genetic code capable of supporting life remains a significant conceptual hurdle. The genetic code not only requires specific codon-amino acid pairings but also complex translation machinery, including tRNAs, ribosomes, and aminoacyl-tRNA synthetases, all of which must work in concert to produce functional proteins.

The Stereochemical Theory does not adequately explain how these complex molecular systems could have co-emerged with the genetic code, nor does it provide a clear pathway from simple codon-amino acid affinities to the intricate translation processes observed in modern cells. The emergence of such a coordinated system under naturalistic conditions is difficult to account for, suggesting that additional factors or mechanisms may be necessary to bridge this gap.

Conceptual problem: Functional Integration
- The theory lacks a clear explanation for how simple stereochemical interactions could give rise to the complex, integrated system of protein synthesis.
- The transition from codon-amino acid affinities to a fully functional genetic code remains an unresolved challenge.

5. Insufficiency of Naturalistic Explanations
The Stereochemical Theory, while offering an intriguing hypothesis, falls short in providing a comprehensive naturalistic explanation for the origin of the genetic code. The theory assumes that the genetic code's structure is determined by intrinsic chemical properties, yet the complexity and specificity of the code suggest a level of organization that may not be fully accounted for by unguided chemical interactions alone.

The precise matching of codons to amino acids, the emergence of a functional translation system, and the observed variations in the genetic code across different organisms all point to the need for a more robust explanatory framework. Current naturalistic models, including the Stereochemical Theory, struggle to address these challenges satisfactorily, indicating that alternative explanations may be necessary to fully understand the origins of the genetic code.

Conceptual problem: Limitations of Naturalistic Models
- The complexity and specificity of the genetic code challenge the sufficiency of naturalistic explanations like the Stereochemical Theory.
- The theory's inability to account for the full range of codon assignments and the emergence of the translation machinery suggests the need for alternative hypotheses.

6. Open Questions and Future Research Directions
The Stereochemical Theory leaves several critical questions unanswered. How can we empirically demonstrate the existence of specific codon-amino acid affinities under prebiotic conditions? What mechanisms could explain the transition from simple chemical interactions to a functional genetic code? How do we reconcile the variability in codon assignments with the theory's premise of chemical specificity?

Future research should focus on experimental and computational approaches to test the validity of the Stereochemical Theory. Investigating the potential for specific nucleotide-amino acid interactions under controlled conditions, as well as exploring alternative scenarios for the origin of the genetic code, may provide new insights. Additionally, interdisciplinary studies combining chemistry, molecular biology, and prebiotic simulations will be crucial in addressing these unresolved challenges.

Conceptual problem: Need for Empirical Validation and Theoretical Refinement
- There is a pressing need for experimental evidence to support or refute the stereochemical basis of the genetic code.
- Developing a more comprehensive model that integrates stereochemical interactions with other potential mechanisms for codon assignment will be essential for advancing our understanding of the genetic code's origin.

21.2.4. Adaptive Theory of Codon Usage 

The Adaptive Theory of Codon Usage, proposed by Shigeru Osawa and Thomas H. Jukes in 1988, offers a distinct perspective on the evolution of the genetic code. This theory suggests that codon assignments have been shaped by selective pressures to optimize translational efficiency and accuracy. According to this model, the current genetic code is the result of a long evolutionary process that favored certain codon-amino acid pairings based on their functional advantages in protein synthesis.

This hypothesis proposes that the genetic code has evolved to minimize the impact of translation errors and to enhance the speed of protein production. It suggests that codons for similar amino acids are often adjacent in the genetic code, reducing the potential for detrimental mutations. Additionally, the theory posits that more frequently used amino acids are assigned to codons that are less prone to mistranslation.

The Adaptive Theory is essential for understanding the fine-tuning of genetic information processing in living organisms. It provides a framework for explaining the non-random patterns observed in codon usage across different species and even within individual genomes. This concept is particularly relevant when considering how organisms adapt to different environmental conditions, as codon usage can influence protein expression levels and cellular energetics.

While the Adaptive Theory offers valuable insights, it is not the sole explanation for codon assignment patterns. Other hypotheses, such as the Stereochemical Theory or the Coevolution Theory, provide alternative viewpoints on this fundamental aspect of molecular biology. The existence of multiple explanatory models highlights the complexity of the genetic code's origins and evolution. Notably, the observation of variant genetic codes, particularly in mitochondria and certain unicellular organisms, raises intriguing questions about the universality of codon assignments. These variations could be interpreted as evidence for independent evolutionary trajectories, potentially challenging the notion of a single, universal common ancestor for all life forms.

The intricate optimization of codon usage proposed by the Adaptive Theory presents a significant challenge to explanations relying solely on unguided, naturalistic processes. The precise balancing of multiple factors - including error minimization, translation speed, and metabolic efficiency - suggests a level of fine-tuning that is difficult to account for through random events alone. This complexity invites consideration of alternative explanations for the origin and evolution of the genetic code, potentially including directed or purposeful processes.

Unresolved Challenges in the Adaptive Theory of Codon Usage

1. Optimization of Codon Assignments
The Adaptive Theory posits that codon assignments have been optimized to reduce translation errors and enhance protein synthesis efficiency. However, the emergence of such precise optimization without guided processes remains a significant challenge. The theory suggests that selective pressures favored codon-amino acid pairings that minimize translation errors, but it is unclear how this optimization could have emerged gradually. For example, while some codons for similar amino acids are adjacent in the genetic code, this pattern is not consistently observed across all codons.

The intricate balance between minimizing translation errors and maximizing efficiency suggests a level of coordination that is difficult to attribute to unguided processes. The lack of consistent patterns across the entire genetic code raises questions about the theory's explanatory power.

Conceptual problem: Emergence of Optimization
- The challenge lies in explaining the stepwise emergence of optimized codon assignments without invoking guided processes.
- The lack of consistent patterns in codon adjacency and error minimization across the entire genetic code raises questions about the theory's explanatory power.

2. Variability in Codon Usage Across Organisms
The Adaptive Theory must account for the significant variability in codon usage observed across different species and even within individual genomes. This variability suggests that codon usage is not solely dictated by selective pressures for translational efficiency and accuracy. For example, certain organisms, such as those with highly specialized lifestyles or those inhabiting extreme environments, exhibit codon usage patterns that deviate significantly from the norm.

This variability challenges the idea that codon assignments have been universally optimized according to the principles proposed by the Adaptive Theory. Instead, it suggests that other factors, possibly including genetic drift, environmental constraints, and historical contingencies, may have played a more prominent role in shaping codon usage.

Conceptual problem: Inconsistent Codon Usage Patterns
- The variability in codon usage across different organisms undermines the theory's claim of universal optimization for translational efficiency.
- The theory must address the influence of other factors, such as genetic drift and environmental constraints, in shaping codon usage patterns.

3. Origin of Codon Assignments
The Adaptive Theory also faces the challenge of explaining how the initial codon assignments originated. It assumes that selective pressures gradually optimized codon usage but does not adequately address how the first codon-amino acid pairings were established in an already functioning translation system.

The theory needs to explain how the structure of the genetic code, which appears finely tuned for error minimization and efficiency, came into existence. The challenge lies in accounting for the initial formation of these codon-amino acid pairings within an already functional system, rather than through a gradual or stepwise process.

Conceptual problem: Origin of Initial Assignments
- The theory lacks a clear explanation for the origin of optimized codon assignments within an already existing system.
- The absence of a gradual or stepwise mechanism for the initial codon-amino acid pairings presents a significant challenge.

4. Functional Integration of the Genetic Code
Even if the Adaptive Theory can explain the optimization of codon usage, it must also account for the integration of these optimized codon assignments into a fully functional genetic code. The genetic code requires not only specific codon-amino acid pairings but also a coordinated translation system, including ribosomes, tRNAs, and aminoacyl-tRNA synthetases. The simultaneous development of these components in a way that maintains the proposed optimization presents a significant conceptual challenge.

The theory must also address how changes in codon usage patterns, driven by selective pressures, could be accommodated within the existing translation machinery without disrupting protein synthesis. The functional integration of optimized codon assignments into the broader context of cellular biochemistry remains an open question.

Conceptual problem: Coordination with Translation Machinery
- The theory needs to explain how optimized codon assignments were integrated into a functional genetic code with minimal disruption.
- The simultaneous development of codon optimization and translation machinery poses a significant challenge to naturalistic explanations.

5. Limitations of Naturalistic Models
The Adaptive Theory, while offering a plausible mechanism for codon usage optimization, struggles to provide a comprehensive naturalistic explanation for the origin and refinement of the genetic code. The theory assumes that selective pressures are sufficient to explain the intricate balance between error minimization, translation speed, and metabolic efficiency. However, the complexity and specificity of the genetic code suggest that additional factors may be required to fully account for its emergence.

The precise tuning of codon assignments, which appears necessary for optimal protein synthesis, raises the possibility that directed or purposeful processes could have played a role in the genetic code's development. The limitations of current naturalistic models, including the Adaptive Theory, highlight the need for alternative explanations that can better account for the observed complexity.

Conceptual problem: Insufficiency of Selective Pressures
- The complexity of the genetic code challenges the sufficiency of naturalistic explanations like the Adaptive Theory.
- The theory's reliance on selective pressures to explain codon usage optimization may not fully account for the observed specificity and fine-tuning.

6. Open Questions and Future Research Directions
The Adaptive Theory leaves several critical questions unanswered. How can we empirically test the proposed mechanisms of codon optimization? What role did environmental factors and genetic drift play in shaping codon usage patterns? How did the initial codon assignments emerge, and how were they integrated into a functional genetic code?

Future research should focus on experimental studies that investigate the selective pressures influencing codon usage in various organisms. Additionally, computational models that simulate the emergence of codon assignments under different environmental and genetic conditions may provide new insights. Interdisciplinary approaches combining molecular biology and biochemistry will be essential for addressing the unresolved challenges posed by the Adaptive Theory.

Conceptual problem: Need for Empirical Validation and Theoretical Expansion
- There is a pressing need for empirical studies to test the mechanisms of codon optimization proposed by the Adaptive Theory.
- Expanding the theory to incorporate additional factors, such as environmental influences and genetic drift, will be crucial for advancing our understanding of codon usage and the origin of the genetic code.

Unresolved Challenges in the Origin of the Genetic Code

1. Code Universality and Optimization
The genetic code is nearly universal across all domains of life and appears to be optimized for error minimization. This universality and optimization pose significant challenges to explanations of its unguided origin. For instance, the code's arrangement minimizes the impact of point mutations and translational errors, a feature that seems unlikely to have arisen by chance.

Conceptual problem: Spontaneous Optimization
- No clear mechanism for the emergence of a highly optimized code without guidance
- Difficulty explaining the origin of error-minimizing properties in the genetic code

2. tRNA-Amino Acid Assignment
The specific pairing of tRNAs with their corresponding amino acids is essential for the translation process. This precise assignment presents a significant challenge to explanations of unguided origin. For example, each of the 20 standard amino acids must be correctly paired with its corresponding tRNA(s), a level of specificity that is difficult to account for without invoking a coordinated system.

Conceptual problem: Arbitrary Associations
- Challenge in explaining the emergence of specific tRNA-amino acid pairings without guidance
- Lack of a clear pathway for the development of such precise molecular recognition

3. Codon Assignment
The assignment of specific codons to amino acids appears to be non-random, with similar amino acids often sharing related codons. This pattern of assignment poses challenges to explanations of its unguided origin. For instance, hydrophobic amino acids tend to share the second base in their codons, a feature that suggests some underlying organization.
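The second-base pattern mentioned above can be checked directly against the standard codon table. The following Python sketch (not part of the original argument) uses the Kyte-Doolittle hydropathy scale as a stand-in measure of hydrophobicity, an assumption the text does not specify; it verifies that every sense codon with U in the second position encodes a hydrophobic amino acid.

```python
BASES = "UCAG"

# Standard genetic code as a 64-character string in the conventional
# table order: first base varies slowest, third base fastest; '*' = stop.
AA = ("FFLLSSSSYY**CC*W"   # first base U
      "LLLLPPPPHHQQRRRR"   # first base C
      "IIIMTTTTNNKKSSRR"   # first base A
      "VVVVAAAADDEEGGGG")  # first base G
CODE = {a + b + c: AA[16*i + 4*j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

# Kyte-Doolittle hydropathy index (positive = hydrophobic).
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
              "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
              "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
              "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

# Amino acids encoded by codons whose second base is U:
second_u = {CODE[c] for c in CODE if c[1] == "U"}
print(sorted(second_u))                        # ['F', 'I', 'L', 'M', 'V']
print(all(HYDROPATHY[aa] > 0 for aa in second_u))  # True
```

Only Phe, Leu, Ile, Met, and Val occupy the second-position-U column, and all five score as hydrophobic on this scale, consistent with the non-random organization described above.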

Conceptual problem: Non-random Organization
- Difficulty in accounting for the non-random patterns in codon assignments without guidance
- Lack of explanation for the apparent logical structure in the genetic code

4. Simultaneous Emergence of Code and Translation Machinery
The genetic code is inseparable from the translation machinery that interprets it. This interdependence poses a significant challenge to explanations of gradual, step-wise origin. The code cannot function without ribosomes, tRNAs, and aminoacyl-tRNA synthetases, yet these components require the code to be produced.

Conceptual problem: Chicken-and-Egg Paradox
- Challenge in explaining the concurrent emergence of the code and its interpretation machinery
- Difficulty accounting for the origin of a system where each component seems to require the pre-existence of the others

5. Transition from RNA World
Many theories propose that the genetic code emerged from an RNA world. However, the transition from a hypothetical RNA-based system to the current DNA-RNA-protein system presents significant challenges. For example, the emergence of aminoacyl-tRNA synthetases, which are proteins, in an RNA-based world is difficult to explain.

Conceptual problem: System Transition
- No clear mechanism for transitioning from an RNA-based coding system to the current genetic code
- Difficulty explaining the origin of protein-based components essential for the modern genetic code

The origin of the translation code presents numerous challenges to unguided explanations. The complexity, specificity, and interdependence observed in this system raise significant questions about how such a sophisticated code could have emerged without guidance. Further research is needed to address these conceptual problems and provide a comprehensive explanation for the origin of the translation code.


University of Utah (2017): Reading the genetic code depends on context:  University of Utah biologists now suggest that connecting amino acids to make proteins in ribosomes, the cell's protein factories, may in fact be influenced by sets of three triplets - a "triplet of triplets" that provide crucial context for the ribosome. Biologists have long accepted that sets of three letters, called triplets or codons, are the fundamental unit of instruction telling the ribosome which particular amino acid to add to the growing protein chain. "We know it's a triplet code," says biologist Kelly Hughes. "That's been established since 1961. But there are certain things that happen in making protein from RNA that don't quite make sense." Hughes and Chevance worked with a gene in Salmonella that codes for the FlgM protein, which is a component of the bacteria's flagellum. A mutation that was defective in "reading" a specific codon in the flgM gene only affected FlgM protein production and not other genes that contained the same codon. "That got us thinking—why is that particular codon in the flgM gene affected and not the same codon in the other genes?" Hughes says. "That's when we started thinking about context." Changing the codon on one side of the defective codon resulted in a 10-fold increase in FlgM protein activity. Changing the codon on the other side resulted in a 20-fold decrease. And the two changes together produced a 35-fold increase. "We realized that these two codons, although separated by a codon, were talking to each other," Hughes says. "The effective code might be a triplet of triplets."



Fazale Rana: The Cell's Design: How Chemistry Reveals the Creator's Artistry, June 1, 2008

Page 172:

A Rational Response
A pilot flying his plane over the South Pacific sees an uncharted island in the distance. Deciding to explore, the pilot spirals the plane downward to take a closer look. As the plane descends, he spots large rocks on the island's shore arranged to spell SOS. The pilot then sees a grass hut located farther down the beach. Even before he sees the footprints in the sand, the pilot reaches for the transmitter and radios for help. Though SOS is not a word, most would agree that the pilot's plea was rational. He easily recognized the universal distress message. The pilot knew the improbability of wind and waves acting on the rocks along the shore to form the right letters. More importantly, based on experience, the pilot understood that the carefully arranged stones communicated meaningful information—they were a code that required an intelligent agent's design and implementation. The island's inhabitant spelled out SOS on the shore with the hope that whoever saw the intentionally placed rocks would know what he meant. That same type of evidence has been discovered inside the cell. Biochemical machinery is, in essence, information-based. And, the chemical information in the cell is encoded using symbols.

By itself, this information offers powerful evidence for an Intelligent Designer. But, recent discoveries go one step further. Molecular biologists studying the genetic code's origin have unwittingly stumbled across a "grass hut" in what may be the most profound evidence for intelligent activity—a type of fine-tuning in the code's rules. Just as the hut on the beach helped convince the pilot that someone was using carefully placed rocks to signal for help, the precision of the code adds confirmatory evidence that a mind programmed life's genetic code. The genetic code's carefully crafted rules supply it with a surprising capacity to minimize errors. These error-minimization properties allow the cell's biochemical information systems to make mistakes and still communicate critical information with high fidelity. It's as if the stranded island inhabitant could arrange the rocks in any three-letter combination and still communicate his desperate plight.

A Genetic SOS
At first glance, there appears to be a mismatch between the storage and functional expression of information in the cell. Clearly, a one-to-one relationship cannot exist between the four different nucleotides of DNA and the twenty different amino acids used to assemble polypeptides. The cell's machinery compensates for this mismatch by using groupings comprised of three nucleotides (codons) to specify the twenty amino acids. The cell uses a set of rules—the genetic code—to relate these nucleotide triplet sequences to the twenty amino acids used to make polypeptides. Codons represent the fundamental coding units. In the same way the stranded islander used three letters (SOS) to communicate, the genetic code uses three-nucleotide "characters" to signify an amino acid. For all intents and purposes, the genetic code is universal among all living organisms. It consists of sixty-four codons. Because the genetic code only needs to encode twenty amino acids, some of the codons are redundant. Different codons can code for the same amino acid. In fact, up to six different codons specify some amino acids. A single codon specifies others.
Table 9.1 describes the universal genetic code. It is presented in a conventional way, according to how the information appears in mRNA molecules after the information stored in DNA is transcribed. (In RNA uracil is used instead of thymine [T].) The first nucleotide of the coding triplet begins at what biochemists call the 5' end of the sequence. Each nucleotide in the codon's first position (5' end) can be read from the left-most column, and the nucleotide in the second position can be read from the row across the top of the table. The nucleotide in each codon's third position (the 3' end) can be read within each box. For example, the two codons, 5' UUU and 5' UUC, that specify phenylalanine (abbreviated Phe) are listed in the box located at the top left corner of the table. Interestingly, some codons (stop or nonsense codons) don't specify any amino acids. They always occur at the end of the gene informing the protein manufacturing machinery where the polypeptide chain ends. Stop codons serve as a form of "punctuation" for the cell's information system. (For example, UGA is a stop codon.) Some coding triplets (start codons) play a dual role in the genetic code. These codons not only encode amino acids but also "tell" the cell where a polypeptide begins. For example, the codon GUG not only encodes the amino acid valine, but it also specifies the beginning of a polypeptide chain. Start codons function as a sort of "capitalization" for the information system of the cell. The information content of DNA and proteins—the molecules that ultimately define life's most fundamental structures and processes—leads to the conclusion that an Intelligent Designer must have been responsible for biochemical systems. The existence of the genetic code makes this conclusion as rational as the pilot's actions when he radioed for a rescue team after spotting the message on the beach.
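The codon table described above can be written down and its redundancy tallied in a few lines. This is a minimal Python sketch added for illustration, not taken from the book; it encodes the standard code as a 64-character string in the conventional table order and counts how many codons specify each amino acid.

```python
from collections import Counter

BASES = "UCAG"
# Standard genetic code (RNA alphabet), conventional table order:
# first base varies slowest, third base fastest; '*' marks a stop codon.
AA = ("FFLLSSSSYY**CC*W"   # first base U
      "LLLLPPPPHHQQRRRR"   # first base C
      "IIIMTTTTNNKKSSRR"   # first base A
      "VVVVAAAADDEEGGGG")  # first base G
CODE = {a + b + c: AA[16*i + 4*j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

counts = Counter(CODE.values())
print(len(CODE))        # 64 codons in total
print(len(counts) - 1)  # 20 amino acids (excluding the stop symbol)
print(counts["L"])      # leucine is specified by 6 codons
print(counts["*"])      # 3 stop codons (UAA, UAG, UGA)
print(counts["W"])      # tryptophan by a single codon
```

The tally reproduces the redundancy pattern described in the text: up to six codons per amino acid (leucine, serine, arginine), a single codon for others (tryptophan, methionine), and three stop codons serving as punctuation.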

A Biochemical Grass Hut
The structure of rules for the genetic code reveals even further evidence that it stems from a Creator. A capacity to resist the errors that naturally occur as a cell uses or transmits information from one generation to the next is built into the code. Recent studies employing methods to quantify the genetic code's error-minimization properties indicate that the genetic code's rules have been carefully chosen and finely tuned.

The Potential to Be Wished Away
 Translating the stored information of DNA into the functional information of proteins is the code's chief function. Error minimization, therefore, measures the capability of the genetic code to execute its function. The failure of the genetic code to transmit and translate information with high fidelity can be devastating to the cell. A brief explanation of the effect mutations have on the cell shows the problem. A mutation refers to any change that takes place in the DNA nucleotide sequence. Several different types of changes to DNA sequences can occur, with substitution mutations being the most frequent. As a result of these mutations, a nucleotide(s) in the DNA strand is replaced by another nucleotide(s). For example, an A may be replaced by a G, or a C with a T. When substitutions occur, they alter the codon that houses the substituted nucleotide. And if the codon changes, then the amino acid specified by that codon also changes, altering the amino acid sequence of the polypeptide chain specified by the mutated gene.

This mutation can then lead to a distorted chemical and physical profile along the polypeptide chain. If the substituted amino acid has dramatically different physicochemical properties from the native amino acid, then the polypeptide folds improperly. An improperly folded protein has reduced or even lost function. Mutations can be deleterious because they hold the potential to significantly and negatively impact protein structure and function.

Taking a Closer Look
Simple inspection shows that the genetic code's redundancy is not haphazard but carefully thought out—even more so than a grass hut built beyond the reach of the waves. Deliberate rules were set up to protect the cell from the harmful effects of substitution mutations. For example, six codons encode the amino acid leucine (Leu). If at a particular amino acid position in a polypeptide, Leu is encoded by 5'CUU, substitution mutations in the 3' position from U to C, A, or G produce three new codons—5'CUC, 5'CUA, and 5'CUG, respectively—all of which code for Leu.

The net effect leaves the amino acid sequence of the polypeptide unchanged. And, the cell successfully avoids the negative effects of a substitution mutation. Likewise, a change of C in the 5' position to a U generates a new codon, 5'UUU, which specifies phenylalanine, an amino acid with physical and chemical properties similar to Leu. Changing C to an A or a G produces codons that code for isoleucine and valine, respectively. These two amino acids possess chemical and physical properties similar to leucine. Qualitatively, it appears as if the genetic code has been constructed to minimize the errors that could result from substitution mutations.
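The leucine example above can be verified mechanically by enumerating all nine single-nucleotide substitutions of 5'CUU against the standard codon table. This is a short illustrative Python sketch (the helper name `single_substitutions` is my own, not from the book):

```python
BASES = "UCAG"
# Standard genetic code in conventional table order ('*' = stop).
AA = ("FFLLSSSSYY**CC*W"
      "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR"
      "VVVVAAAADDEEGGGG")
CODE = {a + b + c: AA[16*i + 4*j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

def single_substitutions(codon):
    """All nine codons reachable from `codon` by one base change."""
    for pos in range(3):
        for base in BASES:
            if base != codon[pos]:
                yield codon[:pos] + base + codon[pos + 1:]

# Third-position changes to 5'CUU are all synonymous (still Leu):
print({CODE["CU" + b] for b in "CAG"})        # {'L'}
# First-position changes give chemically similar amino acids:
print(CODE["UUU"], CODE["AUU"], CODE["GUU"])  # F I V (Phe, Ile, Val)
for mutant in single_substitutions("CUU"):
    print(mutant, "->", CODE[mutant])
```

All three third-position substitutions leave leucine unchanged, and the first-position substitutions yield phenylalanine, isoleucine, and valine, matching the buffering behavior the passage describes.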

Calling in the Coordinates
Recently, scientists have worked to quantitatively evaluate the error-minimization capacity of the genetic code. One of the first studies to perform this analysis indicated that the universal genetic code found in nature could withstand the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with codon assignments different from the one found throughout nature.

This initial work, however, did not take into account the fact that some types of substitution mutations occur more frequently in nature than others. For example, an A-to-G substitution occurs more often than either an A-to-C or an A-to-T mutation. When researchers incorporated this correction into their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes and that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity.7 Nature's universal genetic code is truly one in a million! The genetic code's error-minimization properties are far more dramatic than these results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution with the naturally occurring genetic code lying outside the distribution (see figure 9.1). Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means that of the 10^18 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally throughout nature.
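The kind of analysis summarized above can be sketched computationally. The toy Monte Carlo below is not the published method: it uses the Kyte-Doolittle hydropathy scale as the amino-acid property (the cited studies used polar requirement and mutation-weighted error measures), permutes the 20 amino acids among the natural code's synonymous codon blocks, and counts how many random codes beat the natural one at minimizing the mean squared property change across single-base substitutions.

```python
import random

BASES = "UCAG"
# Standard genetic code in conventional table order ('*' = stop).
AA = ("FFLLSSSSYY**CC*W"
      "LLLLPPPPHHQQRRRR"
      "IIIMTTTTNNKKSSRR"
      "VVVVAAAADDEEGGGG")
CODE = {a + b + c: AA[16*i + 4*j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

# Kyte-Doolittle hydropathy index, standing in for the amino-acid property.
HYDROPATHY = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
              "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
              "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
              "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def ms_error(code):
    """Mean squared hydropathy change over all single-base substitutions
    between sense codons; substitutions to or from stops are skipped."""
    total = count = 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                target = code[codon[:pos] + base + codon[pos + 1:]]
                if target == "*":
                    continue
                total += (HYDROPATHY[aa] - HYDROPATHY[target]) ** 2
                count += 1
    return total / count

def shuffled_code(rng):
    """Random code with the same redundancy: permute the 20 amino acids
    among the natural code's synonymous codon blocks; stops stay put."""
    aas = sorted(set(CODE.values()) - {"*"})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: aa if aa == "*" else perm[aa] for c, aa in CODE.items()}

rng = random.Random(42)
natural = ms_error(CODE)
trials = 2000
beats = sum(ms_error(shuffled_code(rng)) < natural for _ in range(trials))
print(f"random codes better than the natural code: {beats}/{trials}")
```

Even under this crude measure the natural code typically outperforms the large majority of permuted codes; the much stronger figures quoted in the text (one in a million, and the distribution in figure 9.1) come from the cited studies' refined error measures, not from this sketch.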

Out of Harm's Way
Some researchers have challenged the optimality of the genetic code. But, the scientists who discovered the remarkable error-minimization capacity of the genetic code have concluded that the rules of the genetic code cannot be accidental.10 A genetic code assembled through random biochemical events could not possess near ideal error-minimization properties. Researchers argue that a force shaped the genetic code. Instead of looking to an intentional Programmer, these scientists appeal to natural selection. That is, they believe random events operated on by the forces of natural selection over and over again produced the genetic code's error-minimization capacity.

Natural Forces at Work
Even though some researchers think natural selection shaped the genetic code, other scientific work questions the likelihood that the genetic code could evolve. In 1968, Nobel laureate Francis Crick argued that the genetic code could not undergo significant evolution.12 His rationale is easy to understand. Any change in codon assignments would lead to changes in amino acids in every polypeptide made by the cell. This wholesale change in polypeptide sequences would result in a large number of defective proteins. Nearly any conceivable change to the genetic code would be lethal to the cell. The scientists who suggest that natural selection shaped the genetic code are fully aware of Crick's work. Still, they rely on evolution to explain the code's optimal design because of the existence of nonuniversal genetic codes. While the genetic code in nature is generally regarded as universal, some nonuniversal genetic codes exist—codes that employ slightly different codon assignments. Presumably, these nonuniversal codes evolved from the universal genetic code. Therefore, researchers argue that such evolution is possible. But, the codon assignments of the nonuniversal genetic codes are nearly identical to those of the universal genetic code with only one or two exceptions. Nonuniversal genetic codes can be thought of as deviants of the universal genetic code.
Does the existence of nonuniversal codes imply that wholesale genetic code evolution is possible? A careful study reveals that codon changes in the nonuniversal genetic codes always occur in relatively small genomes, such as those in mitochondria. These changes involve (1) codons that occur at low frequencies in that particular genome or (2) stop codons. Changes in assignment for these codons could occur without producing a lethal scenario because only a small number of polypeptides in the cell or organelle would experience an altered amino acid sequence. So it seems limited evolution of the genetic code can take place, but only in special circumstances.13 The existence of nonuniversal genetic codes does not necessarily justify an evolutionary origin of the amazingly optimal genetic code found in nature.

Is a Timely Rescue Possible?
Even if the genetic code could change over time to yield a set of rules that allowed for the best possible error-minimization capacity, is there enough time for this process to occur? Biophysicist Hubert Yockey addressed this question. He determined that natural selection would have to explore 1.40 × 10^70 different genetic codes to discover the universal genetic code found in nature. The maximum time available for it to originate was estimated at 6.3 × 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code. Other work places the genetic code's origin coincident with life's start. Operating within the evolutionary paradigm, a team headed by renowned origin-of-life researcher Manfred Eigen estimated the age of the genetic code at 3.8 ± 0.6 billion years.15 Current geochemical evidence places life's first appearance on Earth at 3.86 billion years ago.16 This timing means that the genetic code's origin coincides with life's start on Earth. It appears as if the genetic code came out of nowhere, without any time to search out the best option. In the face of these types of problems, some scientists suggest that the genetic code found in nature emerged from a simpler code that employed codons consisting of one or two nucleotides.17 Over time, these simpler genetic codes expanded to eventually yield the universal genetic code based on coding triplets. The number of possible genetic codes based on one- or two-nucleotide codons is far fewer than for codes based on coding triplets. This scenario makes code evolution much more likely from a naturalistic standpoint. One complicating factor for these proposals arises, however, from the fact that simpler genetic codes cannot specify twenty different amino acids. Rather, they are limited to sixteen at most.
Such a scenario would mean that the first life-forms had to make use of proteins that consisted of no more than sixteen different amino acids. Interestingly, some proteins found in nature, such as ferredoxins, are produced with only thirteen amino acids. On the surface, this observation seems to square with the idea that the genetic code found in nature arose from a simpler code. Yet, proteins like the ferredoxins are atypical. Most proteins require all twenty amino acids. This requirement, coupled with recent recognition that life in its most minimal form needs several hundred proteins (see chapter 3), makes these types of models for code evolution speculative at best. The optimal nature of the genetic code and the difficulty accounting for the code's origin from an evolutionary perspective work together to support the conclusion that an Intelligent Designer programmed the genetic code, and hence, life.
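Yockey's rate figure above is simple arithmetic and can be checked directly. A minimal sketch, using only the two numbers quoted in the text (1.40 × 10^70 candidate codes and 6.3 × 10^15 seconds):

```python
# Sanity check of Yockey's back-of-the-envelope estimate (figures from the text).
total_codes = 1.40e70   # genetic codes with the modern code's degeneracy pattern
max_seconds = 6.3e15    # estimated maximum time window available, in seconds

codes_per_second = total_codes / max_seconds
print(f"{codes_per_second:.2e} codes per second")  # about 2.2e54, i.e. roughly 10^55
```

The quotient is about 2.2 × 10^54, which rounds to the "roughly 10^55 codes per second" stated in the text.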

https://reasonandscience.catsboard.com

The Genetic Code (Sun Jul 31, 2022 7:12 am)

Otangelo


Admin



https://reasonandscience.catsboard.com/t2363-the-genetic-code-insurmountable-problem-for-non-intelligent-origin#9408



The genetic code is often conflated with the specified, information-bearing sequence of DNA or mRNA nucleotides, but it is a translation program. It is the crucial bridge that decodes the information stored in nucleotides into proteins, and it is a fundamental basis for all biology. Elucidating its origin is therefore of the utmost importance if we want to find out where life came from. While evolutionary pressures are routinely invoked to explain its emergence, that mechanism cannot legitimately be invoked here, since evolution depends on the genetic code and its translation machinery being fully implemented before life and self-replication can start.

B. Alberts, Molecular Biology of the Cell, 4th ed. (2003): Once an mRNA has been produced by transcription and processing, the information present in its nucleotide sequence is used to synthesize a protein. Transcription is simple to understand as a means of information transfer: since DNA and RNA are chemically and structurally similar, the DNA can act as a direct template for the synthesis of RNA by complementary base-pairing. As the term transcription signifies, it is as if a message written out by hand is being converted, say, into a typewritten text. The language itself and the form of the message do not change, and the symbols used are closely related. In contrast, the conversion of the information in RNA into protein represents a translation of the information into another language that uses quite different symbols. Moreover, since there are only four different nucleotides in mRNA and twenty different types of amino acids in a protein, this translation cannot be accounted for by a direct one-to-one correspondence between a nucleotide in RNA and an amino acid in protein. The nucleotide sequence of a gene, through the medium of mRNA, is translated into the amino acid sequence of a protein by rules that are known as the genetic code. The sequence of nucleotides in the mRNA molecule is read consecutively in groups of three. RNA is a linear polymer of four different nucleotides, so there are 4 × 4 × 4 = 64 possible combinations of three nucleotides: the triplets AAA, AUA, AUG, and so on. However, only 20 different amino acids are commonly found in proteins. The code is redundant and some amino acids are specified by more than one triplet. Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process. 54

B. Alberts Molecular Biology of the Cell 6th ed. (2015): Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process. AUG is the Universal Start Codon. Nearly every organism (and every gene) that has been studied uses the three ribonucleotide sequence AUG to indicate the "START" of protein synthesis (Start Point of Translation). 55

Furthermore, three codons are referred to as STOP codons: UAA, UAG, and UGA. These are used to terminate translation; they indicate the end of the gene's coding region.

Why and how should natural processes have "chosen" to insert punctuation signals, a universal start codon and stop codons, so that the ribosome "knows" where to start and end translation? This is essential for the machinery to begin translating at the correct place and in the correct reading frame.
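The codon arithmetic Alberts describes (4 × 4 × 4 = 64 triplets, three of them stops, with redundant assignments) can be tabulated mechanically. A minimal sketch that builds the standard code from a compact 64-letter string in UCAG ordering (the encoding string is an assumption to verify against any published codon table):

```python
from itertools import product
from collections import Counter

BASES = "UCAG"
# One-letter amino acids for the 64 codons in UCAG order; '*' marks a stop codon.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

# Map every triplet (UUU, UUC, UUA, ...) to its amino acid letter.
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

assert len(CODE) == 4 ** 3                       # 64 possible triplets
stops = sorted(c for c, aa in CODE.items() if aa == "*")
print("stop codons:", stops)                     # ['UAA', 'UAG', 'UGA']
print("sense codons:", 64 - len(stops))          # 61

# Redundancy: between 1 and 6 codons specify each of the 20 amino acids.
counts = Counter(aa for aa in CODE.values() if aa != "*")
print("amino acids encoded:", len(counts))       # 20
print("codons for leucine:", counts["L"])        # 6
print("start codon AUG encodes:", CODE["AUG"])   # M (methionine)
```

Running this reproduces the numbers in the quoted passages: 64 triplets, 61 sense codons, the three stop codons UAA, UAG, and UGA, and up to six synonymous codons per amino acid.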

1. The instantiation of communication systems depends on creating a language using symbols, with statistics, semantics, syntax, and pragmatics; it requires information storage, transmission, translation, and conversion, and eventually a transduction system.
2. Signal transmission is a fundamental property of all biological life forms. Cells use various kinds of molecular communication: cell signaling, signal transduction pathways, and genetic and epigenetic codes and languages.
3. Communication systems are always instantiated by thinking minds. Therefore, biological communication systems were intelligently designed.

The genetic code, insurmountable problem for non-intelligent origin Polype13
The figure above outlines the mechanism whereby the information corresponding to an (arbitrarily chosen) DNA sequence is transferred. Here the messenger RNA is assumed to be transcribed from the DNA strand marked by an asterisk. 56

Paul Davies, The fifth miracle (2000): page 105:
I have described life as a deal struck between nucleic acids and proteins. However, these molecules inhabit very different chemical realms; indeed, they are barely on speaking terms. This is most clearly reflected in the arithmetic of information transfer. The data needed to assemble proteins are stored in DNA using the four-letter alphabet A, G, C, T. On the other hand, proteins are made out of twenty different sorts of amino acids. Obviously, twenty into four won’t go. So how do nucleic acids and proteins communicate? Earthlife has discovered a neat solution to this numerical mismatch by packaging the bases in triplets. Four bases can be arranged in sixty-four different permutations of three, and twenty will go into sixty-four, with some room left over for redundancy and punctuation. The sequence of rungs of the DNA ladder thus determines, three by three, the exact sequence of amino acids in the proteins. To translate from the sixty-four triplets into the twenty amino acids means assigning each triplet (termed a codon) a corresponding amino acid. This assignment is called the genetic code. The idea that life uses a cipher was first suggested in the early 1950s by George Gamow, the same physicist who proposed the modern big-bang theory of the universe. As in all translations, there must be someone, or something, that is bilingual, in this case to turn the coded instructions written in nucleic acid language into a result written in amino-acid language. From what I have explained, it should be apparent that this crucial translation step occurs in living organisms when the appropriate amino acids are attached to the respective molecules of tRNA prior to the protein-assembly process.  This attachment is carried out by a group of clever enzymes  57

Job Merkel (2019): DNA translation: Everyone speaks a language. Animals speak a language. Computers speak a language. Even your cells speak a language. And like any language, we need to understand the basic rules before we can read and write with it. Four letters make up DNA's alphabet: Adenine (A), Cytosine (C), Guanine (G), and Thymine (T). But letters alone do not make a language. Conveniently, all of DNA's words are the same length: they are all three (3) letters long. Scientists call these three letters a codon. In the following chart, we'll see what these codons mean.

The genetic code, insurmountable problem for non-intelligent origin F2.medium

Each codon designates an amino acid. For example, the codon TAT codes for the amino acid Tyrosine. If we continue our analogy, this makes each codon a "word." These words are the basis of DNA translation. In DNA translation, DNA is converted into a specific sequence of amino acids. But words alone aren't enough to convey meaning. You need to string words together to form sentences. In the same way, amino acids combine through DNA translation to form proteins. These sentences need punctuation. Punctuation lets you know when a sentence begins, when it ends, and any pauses or gaps in between. DNA is no different. It uses specific codons to indicate the beginning or ending of a sentence. For example, the codon "ATG" indicates the beginning of an amino acid sequence. For this reason, scientists refer to ATG as the "START" codon. It is always at the beginning of a sentence. Without a START codon, your cells wouldn't know where to begin making proteins. There are also three codons that act as a "STOP" codon. These three codons (TGA, TAA, TAG) always indicate the end of a sentence. Without a STOP codon, your cells wouldn't know when to stop making a given protein. As a demonstration, here's what an example of a "sentence" might look like in DNA: ATG TAT CAG GGA TGA. This translates to: START - Tyrosine - Glutamine - Glycine - STOP. This would produce a protein made of three amino acids (Tyr-Gln-Gly). Most proteins are not this short; a hemoglobin subunit, for example, is 141 amino acids long. To continue the metaphor of language, sentences aren't the only part of a written document. Writers clump similar sentences together into paragraphs. And the same is true for proteins: individual units of protein may come together to form something larger than themselves. DNA acts as the alphabet, coding for amino acids in codons. These codons act as words to make proteins. These proteins act as sentences, and merge together to make larger structures.
These larger structures are your paragraphs. 58

1. There are two kinds of punctuation marks in the genetic code, the START and STOP codons, which signal the beginning and end of protein synthesis in all organisms. ATG is the "START" codon; it is always at the beginning of a sentence. Without a START codon, your cells wouldn't know where to begin making proteins. There are also three codons that act as a "STOP" codon. These three codons (TGA, TAA, TAG) always indicate the end of a sentence. Without a STOP codon, your cells wouldn't know when to stop making a given protein. As a demonstration, here's what an example of a "sentence" might look like in DNA: ATG TAT CAG GGA TGA. This translates to: START - Tyrosine - Glutamine - Glycine - STOP, producing a protein made of three amino acids (Tyr-Gln-Gly).
2. Without start and stop codon signals, there would be no way to begin or end the process of translation at the right places. (In vitro, ribosomes can translate RNA sequences without an initiation codon, as demonstrated in the experiments that led to the elucidation of the genetic code; in the cell, however, the start codon sets the reading frame.) If the normal AUG is missing, translation will start later at the next AUG. This will likely create a small or large deletion and may cause a frameshift. Termination of protein translation occurs when the translating ribosome reaches a stop codon that is recognized by a release factor. Each of the three stop codons, UAA, UGA and UAG, is used in all three domains of life. During protein synthesis, STOP codons cause the release of the new polypeptide chain from the ribosome. This occurs because there are no tRNAs with anticodons complementary to the STOP codons. Without stop codons, an organism is unable to produce specific proteins: the new polypeptide chain would just grow and grow until the cell bursts or there are no more available amino acids to add to it. A nonsense mutation occurs in DNA when a sequence change gives rise to a stop codon rather than a codon specifying an amino acid. The presence of the new stop codon results in the production of a shortened protein that is likely non-functional.
3. The START and STOP codons had to be part of the genetic code right from the beginning, or proteins could not be synthesized. Gradual evolutionary development is not feasible. The genetic code most certainly was designed.
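Merkel's "sentence" can be run through a toy translator. This is a minimal illustration, not a model of the ribosome: the dictionary deliberately contains only the handful of DNA codons used in the passage, and the helper function `translate` is a name introduced here for the sketch.

```python
# Toy DNA -> protein translator for the example "sentence" in the text.
# Only the codons mentioned in the passage are included (a deliberate simplification).
CODONS = {
    "ATG": "Met",   # methionine; also serves as the START signal
    "TAT": "Tyr",
    "CAG": "Gln",
    "GGA": "Gly",
    "TGA": "STOP", "TAA": "STOP", "TAG": "STOP",
}

def translate(dna: str) -> list[str]:
    """Scan for the START codon, then read triplets until a STOP codon."""
    start = dna.find("ATG")
    if start == -1:
        return []                    # no start codon: nothing is translated
    protein = []
    for i in range(start + 3, len(dna) - 2, 3):
        aa = CODONS[dna[i:i + 3]]
        if aa == "STOP":
            break                    # release the finished chain
        protein.append(aa)
    return protein

print(translate("ATGTATCAGGGATGA"))  # ['Tyr', 'Gln', 'Gly']
```

Note that in real genes the start codon also inserts an initiator methionine at the front of the chain; Merkel's sentence ignores that detail, and so does the toy above.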

The Genetic Code is not random, but arbitrary
Jacques Monod (1972): The genetic code, universal in the biosphere, seems to be chemically arbitrary, inasmuch as the transfer of information could just as well take place according to some other convention 56

ULRICH E. STEGMANN (2004): The genetic code has been regarded as arbitrary in the sense that the codon-amino acid assignments could be different than they actually are. The genetic code is arbitrary in that any codon requires certain chemical and structural properties to specify a particular amino acid, but these properties are not required in virtue of a principle of chemistry. It is arbitrary that a particular codon specifies one particular amino acid rather than a different one. The only generally accepted sense of “arbitrary” seems to be that the assignments could be different than they actually are. Chemical arbitrariness is similar or even equivalent to the absence of chemical necessity. 59

David L. Abel (2009): Anti-codons are at opposite ends of t-RNA molecules from amino acids. The linking of each tRNA with the correct amino acid depends entirely upon a completely independent family of tRNA aminoacyl synthetase proteins. Each of these synthetases must be specifically prescribed by separate linear digital programming. These symbol and coding systems not only predate human existence, they produced humans along with their anthropocentric minds. The nucleotide and codon syntax of DNA linear digital prescription has no physicochemical explanation. All nucleotides are bound with the same rigid 3’5’ phosphodiester bonds. The codon table is arbitrary and formal, not physical. Codon syntax communicates time-independent, non-physicodynamic “meaning” (prescription of biofunction). This meaning is realized only after abstract translation via a conceptual codon table. To insist that codon syntax only represents amino acid sequence in our human minds is not logically tenable. 60

Ludmila Lackova (2017):  Jacques Monod, declared the arbitrary nature of the genetic code explicitly: ‘‘There is no direct steric relationship between the coding triplet and the encoded amino acid. The code […] seems chemically arbitrary.’’ (Monod 1970, 123). Arbitrariness was defined by Ferdinand de Saussure as one of the three main principles of languages. According to de Saussure, a linguistic sign and any kind of sign ‘‘is arbitrary in that it has no natural connection’’ between the sign and its object (De Saussure 1916, 69).  It means that there is no natural direct connection between the word dog and its meaning (the object in general, in this case, any dog). In other words, what is referred to does not necessitate the form of what is referring to (the referent). It is important. In a similar way as the relation between a sign and its meaning is mediated in the natural language, the binding between the amino acid and the triplet in the genetic code is not direct, but mediated by the tRNA molecule. The tRNA has a place for amino acid attachment and on the opposite side a place for an anticodon. In both natural language and the genetic code, there is no physical connection between the two entities that enter into the relation; the connection is conventional or historical. For reasons described in the previous paragraph, in the field of biology and biosemiotics, it is used to consider the strings of nucleic bases as signs and the strings of amino acids as their meanings, in other words, nucleic bases should refer to amino acid in the same way as words refer to objects (meanings). The mediated connection between the two entities in the protein synthesis makes it tempting to consider them as signs and their meanings. Since amino acids in a form of a string are not the final product of protein synthesis and do not represent functional units, they cannot be considered as meaning of the genetic code. Amino acids as such have no direct function in a cell. 
They only provide a framework of the final protein that acts as functional unit and it is the shape of a protein that determines whether the protein can interact with other molecules and in which way. Not every shape of a protein has a function, but every function is provided by a shape, thus we suggest that shapes are the elementary meaning-carrying entities in a cell or in an organism. 61

Eugene V. Koonin (2017): The assignment of codons to amino acids across the code table is clearly nonrandom: Related amino acids typically occupy contiguous areas in the table. The second position of a codon is the most important specificity determinant, and three of the four columns of the codon table encode related, chemically similar amino acids. For example, all codons with a U in the second position correspond to hydrophobic amino acids. Even a simple qualitative examination shows that the code is robust to mutational or translational errors. Substitutions and translation errors in synonymous positions (typically, the third position in a codon) have no effect on the protein (although this does not necessarily imply such substitutions are selectively neutral), whereas substitutions in the first position most often lead to the incorporation of an amino acid similar to the correct one, thus decreasing the damage. 62
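Koonin's observation that every codon with U in the second position encodes a hydrophobic amino acid can be checked mechanically. A minimal sketch against the standard table, rebuilt from a compact 64-letter string in UCAG ordering (the table string and the membership of the hydrophobic set are assumptions drawn from standard references):

```python
from itertools import product

BASES = "UCAG"
# Standard code in UCAG order; '*' marks a stop codon.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

HYDROPHOBIC = set("FLIMV")  # Phe, Leu, Ile, Met, Val

# All 16 codons of the form N-U-N should map to hydrophobic residues.
second_u = {codon: aa for codon, aa in CODE.items() if codon[1] == "U"}
print(sorted(set(second_u.values())))                       # subset of F, L, I, M, V
print(all(aa in HYDROPHOBIC for aa in second_u.values()))   # True
```

All sixteen NUN codons indeed land on Phe, Leu, Ile, Met, or Val, illustrating why second-position errors tend to substitute a chemically similar amino acid.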

A: Information, Biosemiotics ( instructional complex mRNA codon sequences transcribed from DNA )
B: Translation mechanism ( adapter, key, or process of some kind to exist prior to translation = ribosome )
C: Genetic Code
D: Functional proteins

1. Life depends on proteins ( molecular machines ) (D). Their function depends on the correct arrangement of a specified complex sequence of amino acids.
2. That depends on the translation of genetic information (A) through the ribosome (B) and the genetic code (C), which assigns 61 sense codons to 20 amino acids and reserves 3 codons as stop signals
3. Instructional complex information (Biosemiotics: semantics, syntax, and pragmatics (A)) is only generated by intelligent beings with foresight. Only intelligence with foresight can conceptualize and instantiate complex machines with specific purposes, like translation using adapter keys (ribosome, tRNA, aminoacyl tRNA synthetases (B)). All codes require arbitrary values being assigned, and determined by agency, to represent something else (genetic code (C)).
4. Therefore, proteins, being the product of semiotic/algorithmic information (including translation through the genetic code) and of an information-directed manufacturing system, are most probably the product of a divine intelligent designer.

The Genetic Code is more robust than 1 million alternatives
Thomas Butler (2009): Almost immediately after its elucidation, attempts were made to explain the assignment of codons to amino acids. It was noticed that amino acids with related properties were grouped together, which would have the effect of minimizing translation errors. The canonical genetic code was compared to samples of randomly generated synthetic codes. Depending on the measure used to characterize or score the sampled codes, high degrees of optimality have been reported. For example, using an empirical measure of amino acid differences referred to below as the “experimental polar requirement”, Freeland and Hurst calculated that the genetic code is “one in a million”. More recently, it has been shown that when coupled to known patterns of codon usage, the canonical code and the codon usage are simultaneously optimized with respect to point mutations and to the rapid termination of peptides that are generated with frameshift errors. 63

S. J. Freeland  (1998): Statistical and biochemical studies of the genetic code have found evidence of nonrandom patterns in the distribution of codon assignments. It has, for example, been shown that the code minimizes the effects of point mutation or mistranslation: erroneous codons are either synonymous or code for an amino acid with chemical properties very similar to those of the one that would have been present had the error not occurred. If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors. 64

S. J. Freeland (2000): The canonical code is at or very close to a global optimum for error minimization across plausible parameter space. This result is robust to variation in the methods and assumptions of the analysis. Although significantly better codes do exist under some assumptions, they are extremely rare and thus consistent with reports of an adaptive code: previous analyses which suggest otherwise derive from a misleading metric. However, all extant, naturally occurring, secondarily derived, nonstandard genetic codes do appear less adaptive. 65

Subsequent efforts employing much more sophisticated models revealed even greater robustness of the code (Hani Goodarzi 2004)
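The Freeland-Hurst style comparison can be sketched as a small Monte Carlo experiment. The toy below scores a code by the mean squared change in Woese's "polar requirement" over all single-nucleotide substitutions between sense codons (an unweighted, MS0-style measure), then compares the canonical code against random codes that keep the synonym-block structure but shuffle which amino acid each block receives. The polar-requirement values are approximate literature values, and this sketch omits the mutation-bias and mistranslation weightings that produce the full "one in a million" figure, so the counts are only illustrative:

```python
import random
from itertools import product

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

# Approximate Woese polar-requirement values for the 20 amino acids.
PR = {"A": 7.0, "R": 9.1, "N": 10.0, "D": 13.0, "C": 4.8, "Q": 8.6,
      "E": 12.5, "G": 7.9, "H": 8.4, "I": 4.9, "L": 4.9, "K": 10.1,
      "M": 5.3, "F": 5.0, "P": 6.6, "S": 7.5, "T": 6.6, "W": 5.2,
      "Y": 5.4, "V": 5.6}

def score(code):
    """Mean squared polar-requirement change over all single-point substitutions
    between sense codons (stop codons are ignored, as in the MS0 measure)."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if mut == "*":
                    continue
                total += (PR[aa] - PR[mut]) ** 2
                n += 1
    return total / n

def random_code():
    """Keep the canonical synonym blocks, shuffle which amino acid each block gets."""
    aas = sorted(set(AA) - {"*"})
    relabel = dict(zip(aas, random.sample(aas, len(aas))))
    relabel["*"] = "*"
    return {codon: relabel[aa] for codon, aa in CODE.items()}

random.seed(0)
canonical = score(CODE)
samples = [score(random_code()) for _ in range(500)]
better = sum(s < canonical for s in samples)
print(f"canonical MS0 score: {canonical:.2f}")
print(f"random codes scoring better: {better} / {len(samples)}")
```

Even under this simplified, unweighted measure the canonical code outperforms the vast majority of shuffled codes; the weighted analyses quoted above sharpen that advantage to roughly one in a million.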

Shalev Itzkovitz: (2007): DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information. 66

Adapted from Fazale Rana, The Cell's Design (2008), page 172: The genetic code is fine-tuned to minimize errors. These error-minimization properties allow the cell to make mistakes and still communicate critical information with high fidelity. The cell uses groupings of three nucleotides (codons) to specify twenty amino acids, and a set of rules—the genetic code—to relate these nucleotide triplet sequences to the twenty amino acids used to make polypeptides. Codons represent the fundamental coding units. In the same way the stranded islander used three letters (SOS) to communicate, the genetic code uses three-nucleotide "characters" that are assigned to an amino acid. It consists of sixty-four codons. Because the genetic code only needs to encode twenty amino acids, some of the codons are redundant: different codons can code for the same amino acid. In fact, up to six different codons specify some amino acids, while a single codon specifies others. The universal genetic code, presented in a conventional way, follows how the information appears in mRNA molecules after the information stored in DNA is transcribed. (In RNA, uracil [U] is used instead of thymine [T].) The first nucleotide of the coding triplet begins at what biochemists call the 5' end of the sequence. Each nucleotide in the codon's first position (5' end) can be read from the left-most column, and the nucleotide in the second position can be read from the row across the top of the table. The nucleotide in each codon's third position (the 3' end) can be read within each box. For example, the two codons, 5' UUU and 5' UUC, that specify phenylalanine (abbreviated Phe) are listed in the box located at the top left corner of the table. Interestingly, some codons (stop or nonsense codons) don't specify any amino acids. They always occur at the end of the gene, informing the protein manufacturing machinery where the polypeptide chain ends.
Stop codons serve as a form of "punctuation" for the cell's information system. (For example, UGA is a stop codon.) Some coding triplets (start codons) play a dual role in the genetic code. These codons not only encode amino acids but also "tell" the cell where a polypeptide begins. For example, the codon GUG not only encodes the amino acid valine but can also specify the beginning of a polypeptide chain. Start codons function as a sort of "capitalization" for the information system of the cell. A capacity to resist the errors that naturally occur as a cell uses or transmits information from one generation to the next is built into the code. Recent studies employing methods to quantify the genetic code's error-minimization properties indicate that the genetic code's rules have been finely tuned. The failure of the genetic code to transmit and translate information with high fidelity can be devastating to the cell. A mutation refers to any change that takes place in the DNA nucleotide sequence. Several different types of changes to DNA sequences can occur, with substitution mutations being the most frequent. As a result of these mutations, one or more nucleotides in the DNA strand are replaced by others. For example, an A may be replaced by a G, or a C by a T. When substitutions occur, they alter the codon that houses the substituted nucleotide. And if the codon changes, then the amino acid specified by that codon may also change, altering the amino acid sequence of the polypeptide chain specified by the mutated gene. This mutation can then lead to a distorted chemical and physical profile along the polypeptide chain. If the substituted amino acid has dramatically different physicochemical properties from the native amino acid, then the polypeptide folds improperly. An improperly folded protein has reduced or even lost function. Mutations can be deleterious because they hold the potential to significantly and negatively impact protein structure and function.

Error minimization
Six codons encode the amino acid leucine (Leu). If at a particular amino acid position in a polypeptide Leu is encoded by 5'CUU, substitution mutations in the 3' position from U to C, A, or G produce three new codons—5'CUC, 5'CUA, and 5'CUG, respectively—all of which code for Leu. The net effect leaves the amino acid sequence of the polypeptide unchanged. And, the cell successfully avoids the negative effects of a substitution mutation. Likewise, a change of C in the 5' position to a U generates a new codon, 5'UUU, which specifies phenylalanine, an amino acid with physical and chemical properties similar to Leu. Changing C to an A or a G produces codons that code for isoleucine and valine, respectively. These two amino acids possess chemical and physical properties similar to leucine. Qualitatively, it appears as if the genetic code has been constructed to minimize the errors that could result from substitution mutations. Recently, scientists have worked to quantitatively evaluate the error-minimization capacity of the genetic code. One of the first studies to perform this analysis indicated that the universal genetic code found in nature could withstand the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with different codon assignments than the one found throughout nature.
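The leucine example above can be made concrete in a few lines. A minimal sketch, reusing the standard code table built from a compact 64-letter string in UCAG ordering (the table string is an assumption to check against any published codon chart):

```python
from itertools import product

BASES = "UCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AA[i] for i, c in enumerate(product(BASES, repeat=3))}

codon = "CUU"  # leucine (L)

# Third-position substitutions: every variant still codes for leucine.
third = {codon[:2] + b: CODE[codon[:2] + b] for b in BASES if b != "U"}
print(third)   # CUC, CUA, CUG all map to 'L'

# First-position substitutions: chemically similar hydrophobic amino acids instead.
first = {b + codon[1:]: CODE[b + codon[1:]] for b in BASES if b != "C"}
print(first)   # UUU -> F (Phe), AUU -> I (Ile), GUU -> V (Val)
```

The output mirrors the paragraph: third-position changes are silent, while first-position changes swap in phenylalanine, isoleucine, or valine, all hydrophobic neighbors of leucine.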

This initial work, however, did not take into account the fact that some types of substitution mutations occur more frequently in nature than others. For example, an A-to-G substitution occurs more often than either an A-to-C or an A-to-T mutation. When researchers incorporated this correction into their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes and that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity. Nature's universal genetic code is truly one in a million! The genetic code's error-minimization properties are far more dramatic than these results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution, with the naturally occurring genetic code lying outside the distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means that, of the 10^18 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally throughout nature. A genetic code assembled through random biochemical events could not possess near-ideal error-minimization properties.


Even if the genetic code could change over time to yield a set of rules that allowed for the best possible error-minimization capacity, is there enough time for this process to occur? Biophysicist Hubert Yockey addressed this question (H. Yockey, 2005): "Let us calculate the number of genetic codes with the codon-amino acid assignment typical of the modern standard genetic code... we have 1.40 × 10^70. One must presume that the modern genetic code did not originate from among codes awaiting assignment." [url=https://www.cambridge.org/br/academic/subjects/life-sciences/evolutionary-biology/information-theory-evolution-and-origin-life?format=HB&isbn=9780521802932#:~:text=Information Theory%2C Evolution and the,the algorithmic language of computers.]67[/url] Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code. A team headed by renowned origin-of-life researcher Manfred Eigen estimated the age of the genetic code at 3.8 ± 0.6 billion years. Current geochemical evidence places life's first appearance on Earth at 3.86 billion years ago. This timing means that the genetic code's origin coincides with life's start on Earth. It appears as if the genetic code came out of nowhere, without any time to search out the best option.

In the face of these problems, some scientists suggest that the genetic code found in nature emerged from a simpler code that employed codons consisting of one or two nucleotides. Over time, these simpler genetic codes expanded to eventually yield the universal genetic code based on coding triplets. The number of possible genetic codes based on one- or two-nucleotide codons is far smaller than for codes based on coding triplets. This scenario makes code evolution much more likely from a naturalistic standpoint.
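The arithmetic behind Yockey's search-rate figure can be made explicit. The sketch below assumes, for illustration only, Yockey's count of 1.40 × 10^70 candidate codes and a search window of roughly 50 million years (a generous reading of the uncertainty in the dates above; Yockey's own treatment differs in detail):

```python
# Back-of-envelope version of the search-rate argument.
# Assumptions (illustrative): 1.40e70 candidate codes (Yockey's count) and a
# window of ~50 million years in which the search could have taken place.
n_codes = 1.40e70                # genetic codes with SGC-like degeneracy
window_years = 5.0e7             # assumed time available to search
seconds = window_years * 3.15e7  # ~3.15e7 seconds per year
rate = n_codes / seconds         # codes that would need evaluation per second
print(f"required search rate: {rate:.1e} codes per second")  # on the order of 10^55
```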
One complicating factor for these proposals arises, however, from the fact that simpler genetic codes cannot specify twenty different amino acids. Rather, they are limited to sixteen at most. Such a scenario would mean that the first life forms had to make use of proteins that consisted of no more than sixteen different amino acids. Interestingly, some proteins found in nature, such as ferredoxins, are produced with only thirteen amino acids. On the surface, this observation seems to square with the idea that the genetic code found in nature arose from a simpler code. Yet, proteins like ferredoxins are atypical. Most proteins require all twenty amino acids. This requirement, coupled with the recent recognition that life in its most minimal form needs several hundred proteins, makes these types of models for code evolution speculative at best. The optimal nature of the genetic code and the difficulty of accounting for the code's origin from an evolutionary perspective work together to support the conclusion that an Intelligent Designer programmed the genetic code, and hence, life. 68

Exploiting the redundancy (or degeneracy) of the genetic code
D. L. Gonzalez (2019): Since the cardinality of the starting set of codons (64) is greater than the cardinality of the arriving set of amino acids (20 + 2), the mapping is necessarily degenerate. In other words, some amino acids are coded by two or more codons. 69

Tessa E.F. Quax et al. (2016): Because 18 of 20 amino acids are encoded by multiple synonymous codons, the genetic code is called “degenerate.” Because synonymous mutations do not affect the identity of the encoded amino acid, they were originally thought to have no consequences for protein function or organismal fitness and were therefore regarded as “silent mutations.” However, comparative sequence analysis revealed a non-random distribution of synonymous codons in genes of different organisms. Each organism seems to prefer a different set of codons over others; this phenomenon is called codon bias. It has been established that codon bias also influences protein folding and differential regulation of protein expression. Analysis of the tRNA content of organisms in all domains of life showed that they never contain a full set of tRNAs with anticodons complementary to the 61 different codons; for example, 39 tRNAs with distinct anticodons are present in the bacterium Escherichia coli, 35 in the archaeon Sulfolobus solfataricus, and 45 in the eukaryote Homo sapiens. 70
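The degeneracy figures quoted here are easy to verify directly from the standard codon table. A minimal Python check (table hard-coded in the conventional TCAG order) counts the synonymous codons per amino acid:

```python
from itertools import product
from collections import Counter

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ("*" = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
SGC = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AA))

counts = Counter(a for a in SGC.values() if a != "*")  # codons per amino acid
multi = [a for a, n in counts.items() if n > 1]        # amino acids with >1 codon
print(len(counts), "amino acids;", len(multi), "encoded by multiple codons")
```

Only methionine (ATG) and tryptophan (TGG) have a single codon; the remaining 61 sense codons cover the 20 amino acids, with leucine, serine, and arginine taking six codons each.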

M.Eberlin (2019): The redundancy is vital. The apparent overkill minimizes reading and transmitting errors so that the same amino acid is transferred to each generation. But if carefully inspected, the redundancies themselves don’t seem to be random, since they involve mainly changes in the third letter of each triplet. For example, the simplest amino acid, glycine, has four codons that specify it: GGA, GGC, GGG, and GGT. The only position that varies is the third, and any nucleotide in that position will still specify glycine. (There are other biological effects possible, though—for example, effects on the speed of protein synthesis and folding) Changes in the first and second letters are less common and are offset by the expression of amino acids with chemically similar properties and that don’t significantly alter the structure and properties of the final protein. For example, the CTT codon that codes for leucine becomes the chemically similar isoleucine when the C is replaced by A (ATT). Such redundancies establish a chemical buffer between amino acids when common errors occur. That is, the code of life has built-in safeguards against potentially damaging genetic typos. But that’s not the only purpose of the redundancy in our genetic code. The use of different codons to express a single amino acid also allows the speed of protein synthesis to be controlled. For example, four different codons may specify the same amino acid, but the four differ in their effects on how fast or slow a bond is made and the protein folds. This kinetic control gives each protein the exact amount of time it needs to form the correct 3-D shape. There are other nuances in our genetic code that seem to suggest foresight, such as the grouping of codons for amino acids with either acid or alkaline side chains. Hence, if environmental stimuli require exchanging an alkaline (basic) amino acid for an acidic amino acid in a protein, this exchange is aided by such grouping. 
Again, what a wonderful chemical trick! For example, a basic lysine coded by either AAA or AAG can easily be changed to the acidic glutamic acid by only a single letter substitution: GAA or GAG. Having such a flexible code helps the organism to stay alive. The code also anticipates and has safeguards against the most common single-point mutations. For instance, leucine is encoded by no less than six codons. The CTT codon encodes leucine, but all the third letter-mutation variations—CTC, CTA, and CTG—are “synonymous” and also encode leucine. First-letter mutations are rarer, and potentially more dangerous because they do change the amino acid specified—if C is exchanged for T, forming the TTT codon, a different amino acid (phenylalanine) will be expressed. But even for this, the genetic code has a safeguard: phenylalanine’s chemical properties are similar to leucine’s, so the protein will still retain its shape and function. If the first letter C in CTT (leucine) is replaced by A or G, something similar happens, since ATT (isoleucine) and GTT (valine) have physicochemical properties similar to leucine as well. 71
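Eberlin's buffering examples can be checked mechanically against the standard codon table. The sketch below (codon table hard-coded in the conventional TCAG order) lists the amino acids reached from CTT (leucine) by single-letter changes at the third versus the first position, and confirms that all four GGN codons specify glycine:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ("*" = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
SGC = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AA))

def point_mutants(codon, pos):
    """All single-base substitutions of codon at the given position (0-2)."""
    return [codon[:pos] + b + codon[pos + 1:] for b in BASES if b != codon[pos]]

glycine = {SGC["GG" + b] for b in BASES}           # GGN: third position fully redundant
third = {SGC[c] for c in point_mutants("CTT", 2)}  # CTT (Leu), third-position mutants
first = {SGC[c] for c in point_mutants("CTT", 0)}  # CTT (Leu), first-position mutants
print(glycine, third, first)
```

Third-position changes to CTT are all silent (still leucine), while first-position changes yield phenylalanine, isoleucine, and valine, the chemically similar hydrophobics the text describes.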

 

50. María A Sánchez-Romero: The bacterial epigenome 2020 Jan;18
51. Daniel J. Nicholson: On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology 2020
52. David F. Coppedge Cilia Are Antennas for Human Senses and Development October 26, 2007
53. E. Camprubí: The Emergence of Life  27 November 2019
54. B.Alberts: Molecular Biology of the Cell. 4th edition. 2003
55. B. Alberts Molecular Biology of the Cell 6th ed. 2015
56. J. Monod: Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology 12 September 1972
57. Paul Davies: The Fifth Miracle: The Search for the Origin and Meaning of Life 2000
58. Job Merkel: The Language of DNA 15 NOV, 2019
59. ULRICH E. STEGMANN: The arbitrariness of the genetic code 9 September 2003
60. David L. Abel: The Capabilities of Chaos and Complexity 9 January 2009
61. Ludmila Lackova: [url=https://pubmed.ncbi.nlm.nih.gov/28488159/#:~:text=Arbitrariness in the genetic code,between amino acids and nucleobases.]Arbitrariness is not enough: towards a functional approach to the genetic code[/url] 2 May 2017
62. Eugene V. Koonin: Origin and Evolution of the Universal Genetic Code 2017
63. Thomas Butler: Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement 17 June 2009
64. S J Freeland: The genetic code is one in a million 1998 Sep
65. S J Freeland: Early Fixation of an Optimal Genetic Code 01 April 2000
66. Shalev Itzkovitz: The genetic code is nearly optimal for allowing additional information within protein-coding sequences 2007 Apr; 17
67. H.Yockey: [url=https://www.cambridge.org/br/academic/subjects/life-sciences/evolutionary-biology/information-theory-evolution-and-origin-life?format=HB&isbn=9780521802932#:~:text=Information Theory%2C Evolution and the,the algorithmic language of computers.]Information theory, evolution, and the origin of life[/url] 2005
68. Fazale Rana: The Cell's Design: How Chemistry Reveals the Creator's Artistry 1 June 2008, page 172
69. D. L. Gonzalez: On the origin of degeneracy in the genetic code 18 October 2019
70. Tessa E.F. Quax: Codon Bias as a Means to Fine-Tune Gene Expression 2016 Jul 16
71. M. Eberlin: Foresight 2019



Last edited by Otangelo on Sun Jul 31, 2022 7:15 am; edited 2 times in total


David L. Abel (2014): The codon redundancy (“degeneracy”) found in protein-coding regions of mRNA also prescribes Translational Pausing (TP). When coupled with the appropriate interpreters, multiple meanings and functions are programmed into the same sequence of configurable switch-settings. This additional layer of prescriptive Information (PI) purposely slows or speeds up the translation-decoding process within the ribosome. Variable translation rates help prescribe functional folding of the nascent protein. Redundancy of the codon to amino acid mapping, therefore, is anything but superfluous or degenerate. Redundancy programming allows for simultaneous dual prescriptions of prescriptive Information and amino acid assignments. This allows both functions to be coincident and realizable. The prescriptive Information schema is a bona fide rule-based code, conforming to logical code-like properties. This prescriptive Information code is programmed into the supposedly degenerate redundancy of the codon table. Algorithmic processes play a dominant role in the realization of this multi-dimensional code.

Genomic prescriptions of bio functions are multi-dimensional. Within the genome domain, executable operations format, read, write, copy, and maintain digital Functional Information (FI). Bio-molecular machines are programmed to organize, regulate, and control metabolism. The genetic code is composed of data sets residing in the particular sequencing of nucleotides. A large percentage of protein-folding is assisted by chaperones, some of which are RNAs rather than protein chaperones. But the final fold is primarily constrained by the primary-structure of amino-acid sequence. Protein-coding sequencing significantly affects translation rate, folding, and function. Most protein functionality is dependent upon its three-dimensional conformation. These conformations are dependent upon folding mechanisms performed upon the nascent protein. Such folding mechanisms are linked directly to several cooperative translational processes. By “translational processes,” we mean processes that go beyond simply translating and linking the amino acids. This paper expands the understanding of translation processes to go beyond just the mechanistic interactions between the polypeptide and ribosome tunnel. The mRNA sequencing of codons itself determines the rate of translation (internal mechanism). The chaperone function occurs as an external mechanism. These mechanisms all work to contribute coherently to the folding process. The crucial point is that they are all dependent upon momentary pauses in the translation process. We collectively define these linked phenomena and their rate regulation as “co-translational pausing.” The dependency of folding on these multiple translation processes has been defined as “co-translational folding”. They reveal the ribosome, among other things, to be not only a machine, but an independent computer-mediated manufacturing system.

Nucleotide, and eventually amino acid, sequencing are both physicodynamically indeterminate (inert). Cause-and-effect physical determinism, in other words, cannot account for the programming of sequence-dependent biofunction. Nucleotide sequencing and consequent amino acid sequencing are formally programmed in both the nascent protein and in the chaperones that help determine folding.  The external mechanisms involve trigger factors. Prokaryotes employ chaperones,  ribosome tunnel interactions, and binding protein factors. The internal mechanisms involve mRNA interactions, codon sequences and tRNA availability. Translational pausing (TP) allows for momentary pauses enabling preliminary folding of the nascent protein. The particular redundancy of codons provides temporal regulation of the co-translational folding process.

Translational pausing of nascent proteins is linked to the arrangement of nucleotides in the mRNA. Pausing can be induced by mRNA structure,  signal recognition particle (SRP) binding, mRNA binding proteins, rare codons, and anti-Shine-Dalgarno (aSD) codon sequences. A common thread exists between the mechanical execution of the folding process (exit tunnel/factors/chaperones) to internal mRNA processes involved in folding of the nascent protein. We argue that the causal relationship to co-translational folding is due to a prescribed arrangement of codons within the mRNA. We base this on the fact that for trigger factors, chaperones, and binding proteins are all related to the nascent amino acid chain sequence. Amino acid sequence, by necessary consequence, points to mRNA sequences. We further posit that the interactions with translation pausing can be traced back to the specific arrangements of redundant codons in the mRNA, and ultimately to the genome. We propose that the pausing functions are facilitated by first generating a pause state in the translation of the mRNA codons within the ribosome. This gives protein factors, trigger factors and other chaperones the necessary time to mechanically perform folding operations. If the pausing effect was solely related to the amino acid chain sequence, then replacing codons with synonymous codons should still produce the same folded amino acid chain with the same translation speed. However, substitution of rare codons with synonymous codons did produce a change in speed and conformation changes.

Redundancy permits a secondary, superimposed code
Redundancy in the primary genetic code allows for additional independent codes. Coupled with the appropriate interpreters and algorithmic processors, multiple dimensions of meaning and function can be instantiated into the same codon string: a secondary code is superimposed upon the primary codonic prescription of amino acid sequence in proteins. Dual interpretations enable the assembly of the protein's primary structure while enabling additional folding controls via pausing of the translation process. TP provides for temporal control of the translation process, allowing the nascent protein to fold appropriately as per its defined function. The functionality of codonic redundancy belies the ill-advised label of "degeneracy." Multiple dimensions of independent coding by the same codon string have become apparent.

The ribosome can be thought of as an autonomous functional processor of data that it sees at its input. This data has been shown to be prescriptive information in the form of prescribed data, not just probabilistic combinatorial data. Choices must be made with intent to select the best branch of each bifurcation point, in advance of computational halting.

The arrangement of codons has embodied in it a prescribed sequential series of both amino acid code and time-based translation pausing code necessary for protein assembly and nascent pre-folding that defines protein functionality. The translation pausing coding schema follows distinct and consistent rules. These rules are logical and unambiguous. 72

The Wobble hypothesis points to an intelligent setup!
1. In translation, the wobble hypothesis is a set of four relationships. The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
2. When reading 5' to 3' the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many nucleotides the tRNA actually distinguishes.
If the first nucleotide in the anticodon is a C or an A, pairing is specific and acknowledges original Watson-Crick pairing, that is: only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific and in fact, two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble, in that if that is the first nucleotide in the anticodon then any of three bases in the original codon can be matched with the tRNA.
3. Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons and those anticodons differ in either the second or third position (first or second position in the codon) then a different tRNA is required for that anticodon.
4. The minimum requirement to satisfy all possible codons (61, excluding the three stop codons) is 32 tRNAs: 31 tRNAs for the amino acids plus one initiator tRNA. Beyond the obvious necessity of wobble (our cells carry a limited set of tRNAs, and wobble gives them broad specificity), wobble base pairs have been shown to facilitate many biological functions. This has another amazing implication that points to an intelligent setup. The paper "The genetic code is one in a million" concedes: If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
5. All of this points clearly to intelligent design!
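The figure of 32 tRNAs quoted in point 4 can be recovered with a simplified count in the spirit of Crick's original reasoning. The sketch below assumes only the basic wobble rules (anticodon wobble base G reads codons ending in T or C; wobble base U reads codons ending in A or G) and ignores inosine and base modifications, which change the bookkeeping in real cells:

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ("*" = stop)
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
SGC = dict(zip(("".join(c) for c in product(BASES, repeat=3)), AA))

# Within each "box" (fixed first two bases), one wobbling tRNA can cover the
# pyrimidine-ending pair (..T/..C) or the purine-ending pair (..A/..G), so each
# pair needs one tRNA per distinct sense amino acid it contains.
mins = 0
for b1, b2 in product(BASES, repeat=2):
    for pair in (("T", "C"), ("A", "G")):
        sense = {SGC[b1 + b2 + b3] for b3 in pair} - {"*"}
        mins += len(sense)
print(mins)  # prints 32 for the standard code
```

The count of 32 emerges because every pyrimidine-ending pair is synonymous (16 tRNAs), while the purine-ending pairs contribute 16 more: 13 synonymous pairs, two tRNAs for the split ATA (Ile)/ATG (Met) pair, and one for tryptophan.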

Several reading frames are explored
Susha Cheriyedath (2019): The genetic code can be read in multiple ways depending on where the reading starts. For example, if the base sequence is GGGAAACCC, reading could start from the first letter, G, and there will be 3 codons - GGG, AAA, and CCC. If reading starts at G in the second position, the string will have two codons - GGA and AAC. If reading starts at the third base G, 2 codons will again result - GAA and ACC. Thus, there are 3 ways of reading the code of every strand of genetic material. Each of these ways of reading a nucleotide sequence is known as a reading frame. Each reading frame will produce a different sequence of amino acids and hence proteins. Thus, in double-stranded DNA, there are 6 possible reading frames. 72a
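The GGGAAACCC example can be reproduced in a few lines. The sketch below enumerates the three forward frames and, assuming standard Watson-Crick complementarity, the three reverse-strand frames that give six in total:

```python
def frames(seq):
    """Codons for each of the three reading frames of one strand."""
    return [[seq[i:i + 3] for i in range(f, len(seq) - 2, 3)] for f in range(3)]

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq[::-1].translate(str.maketrans("ACGT", "TGCA"))

seq = "GGGAAACCC"
forward = frames(seq)                     # 3 frames on the given strand
all_six = forward + frames(revcomp(seq))  # 6 frames for double-stranded DNA
print(forward)  # [['GGG','AAA','CCC'], ['GGA','AAC'], ['GAA','ACC']]
```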

Several proteins can be produced from the same mRNA strand. Ribosomal frameshifting, which makes this possible, is promoted by a so-called pseudoknot structure together with a specific site in the mRNA known as a slippery sequence. To shift frames, the ribosome slips back one base and then proceeds to read the mRNA transcript in a different frame.

The triplet code goes "triplet of triplets"
University of Utah (2017): Connecting amino acids to make proteins in ribosomes may in fact be influenced by sets of three triplets – a “triplet of triplets” that provide crucial context for the ribosome. Hughes and Fabienne Chevance worked with a gene in Salmonella that codes for the FlgM protein, which is a component of the bacteria's flagellum. A mutation that was defective in "reading" a specific codon in the flgM gene only affected FlgM protein production and not other genes that contained the same codon. "That got us thinking—why is that particular codon in the flgM gene affected and not the same codon in the other genes?" Hughes says. "That's when we started thinking about context." Changing the codon on one side of the defective codon resulted in a 10-fold increase in FlgM protein activity. Changing the codon on the other side resulted in a 20-fold decrease. And the two changes together produced a 35-fold increase. "We realized that these two codons, although separated by a codon, were talking to each other," Hughes says. "The effective code might be a triplet of triplets." 73

Hydrophobicity and the Genetic Code
Jean Lehmann (1998): Observations on the hydrophobicity of the molecular compounds of the genetic coding system have long suggested the non-random organization of the genetic code. This parameter was measured for the four nucleotides and the 20 coded amino acids and a strong correlation was found between average ranking measures of the anticodon doublet (i.e. the first two bases, from 3' to 5') and the corresponding amino acids, with a few exceptions. Moreover, the four nucleotides can be ordered as [U, C, G, A], from the most hydrophilic (U) to the most hydrophobic (A). It was thus proposed to set the genetic code table in this order. The second position of the anticodon shows the best correlation property: the most hydrophilic amino acids are coded by A (anticodon U) and the most hydrophobic by U (anticodon A), which are at opposite ends of the list. 74

Carl R. Woese (2000): Simply plotting these numbers on a codon table reveals the existence of a remarkable degree of order, much of which would be unexpected on the basis of amino acid properties as normally understood. For example, codons of the form NUN define a set of five amino acids, all of which have very similar polar requirements. Likewise, the set of amino acids defined by the NCN codons all have nearly the same unique polar requirement. The codon couplets CAY-CAR, AAY-AAR, and GAY-GAR each define a pair of amino acids (histidine-glutamine, asparagine-lysine, and aspartic acid-glutamic acid, respectively) that has a unique polar requirement. Only for the last of these (aspartic and glutamic acids), however, would the two amino acids be judged highly similar by more conventional criteria. Perhaps the most remarkable thing about polar requirement is that although it is only a unidimensional characterization of the amino acids, it still seems to capture the essence of the way in which amino acids, all of which are capable of reacting in varied ways with their surroundings, are related in the context of the genetic code. Also of note is the fact that the context in which polar requirement is defined, i.e., the interaction of amino acids with heterocyclic aromatic compounds in an aqueous environment, is more suggestive of a similarity in the way amino acids might interact with nucleic acids than of any similarity in the way they would behave in a proteinaceous environment. While it must be admitted that the evolutionary relationships among the AARSs bear some resemblance to the related amino acid order of the code, it seems unlikely that they are responsible for that order: the evolutionary wanderings of these enzymes alone simply could not produce a code so highly ordered, in both degree and kind, as we now know the genetic code to be. 75

Origin of the Genetic Code
Over the decades, several hypotheses have been elaborated attempting to explain the origin of the genetic code. Carl Woese illustrated the problem as early as 1969, not long after the code was deciphered:

Let us try to gain a feeling for the present status of the "coding problem" through an analogy. Suppose we were given a particular extract from cells and we determined it to have the following property: When nucleoside triphosphates are added to the extract along with poly A, then poly T is synthesized, but when the poly A is replaced by poly T, poly C, or poly G, successively, one observes production of poly A, poly G, or poly C, respectively. Given these and certain other experiments, one would soon arrive at the notion of an input-output "code" for this system of the simple composition 

[Image: Woese's input-output code table, mapping the poly A, poly T, poly C, and poly G inputs to their complementary outputs]

Where to proceed from here is immediately obvious in this simple example. Why are these particular input units associated with these particular output units? Following this line of questioning, we would sooner or later discover that base pairing (postulated in another universe by Drs. Watson and Crick) lies behind all. Viewing our knowledge of the genetic code in the light of this analogy, we see that what is now possible is the construction of an "input-output" table, the catalog of codon assignments, for the system, but what remains unknown is why this particular set of relationships exists. The essence of the genetic code lies in those "forces" or processes that cause UUU to be assigned to phenylalanine, or cause translation to occur in the way that it does. And we have yet to gain the slightest appreciation for what these are. As will be seen, if it is not already obvious, this aspect of the problem is inseparable from the problem of how this biological information processing system we find in the cell could ever have arisen in the first place. 76

Ádám Radványi (2018) mentions the stereochemical, coding coenzyme handle, coevolution, four-column, error minimization, and frozen accident hypotheses. 77

Henri Grosjean (2016):  The earliest amino acids used to synthesize ancestral polypeptides were found in the prebiotic soup and selected according to their specific interactions with the ancestral codons (the ‘stereochemical hypothesis’). These steps were subsequently expanded through the co-evolution with the invention of biosynthetic pathways for new amino acids (the ‘amino acid metabolism hypothesis’) and the emergence of the corresponding primordial aminoacyl-tRNA synthetases able to fix these new amino acids on appropriated proto-tRNAs (‘co-evolution with tRNA aminoacylation systems’). Constant refinement at both the replication and translation levels allows to progressively minimize the impact of coding errors and to increase the diversity and functionality of proteins that can be made with a larger amino acid alphabet ‘error minimizing code hypothesis’. Finally, the code can further evolve by reassignment of unused, temporarily ambiguous, or less used codons for other canonical or even totally new amino acids (the ‘codon capture theory’). Finally, early horizontal transfer and collective evolution of the code through different subspecies have been emphasized. In other words, the present-day genetic code did not necessarily result solely from divergent evolution, but also from collective evolution via the development of an innovation-sharing process that allows the emergence of a quasi-universal genetic code among populations of species ‘speaking the same language. The take-home lesson of all the above information is that along the expansion of the genetic code, optimal stability of complementary codon-anticodon pairs appears to have been the main evolutionary force. 77

Grosjean repeatedly uses teleologically loaded terms. "Inventing," "constantly refining," "progressively minimizing the impact of coding errors," "reassigning," and "innovation-sharing" all denote goal-oriented actions that could not be performed by inanimate molecules.

The stereochemical hypothesis holds that codon assignments are dictated by a physico-chemical affinity between amino acids and their cognate codons (anticodons).
Soon after the genetic code was deciphered, this hypothesis was proposed by Carl Woese. He wrote (1967): I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon. 78 

Since there is no direct physical interaction between the codon/anticodon site and the site where the amino acid is attached to the tRNA, there could be no affinity between the two sites. David B. F. Johnson (2010): The stereochemical hypothesis postulates that the code developed from interactions between nucleotides and amino acids, yet supporting evidence in a biological context is lacking. Eugene V. Koonin (2017): Translation of the code does not involve direct recognition of the codons (or anticodons) by amino acids, which brings up a burning question: Why are the codon assignments what they are? In other words, why is it the case that, for instance, glycine is encoded by GGN codons rather than, say, CCN codons (the latter of which encode proline in the SGC)? The initial attempts for a direct experimental demonstration of the interaction between amino acids and the cognate codons or anticodons, by Woese and coworkers, were generally unconvincing, resulting in a long lull in the pursuit of the stereochemical account of the code. In our previous review of the code evolution, we presented several arguments to the effect that the statistical evidence from the aptamer experiments did not provide for direct conclusions about the stereochemical origin of the code. Notwithstanding the additional data and counterargument, our reasoning still appears to hold. 79

The coevolution hypothesis was first proposed by Tze-Fei Wong in 1975: The theory proposed that the structure of the genetic code was determined by the sequence of evolutionary emergence of new amino acids within the primordial biochemical system. 80

Carl Woese (1969): Knowing the codon to be a nucleotide triplet, we must ask why this number is three instead of some other number. In the past it has often been suggested that the number three derives from the fact that three is the minimum number necessary to supply sufficient information to encode the 20 amino acids. We shall point out the weakness of this sort of reasoning. If this were the explanation for size of the codon, one could question how the cell ever came to use 20 amino acids. 16 or less, say, would do nearly as well (particularly during the early phases of cellular evolution), and given an option as to codon size, it would seem easier to evolve a doublet code (which could handle up to 16 amino acids) than a triplet one. Further, were a doublet code to become established by evolution, it would hardly seem likely that it would be replaced by a triplet code (that in any case offered so little in the way of selective advantage) when the change-over process would be so disruptive, so lethal, to the cell. 76

Eugene V. Koonin (2017): Under this scenario, the code evolved from an ancestral version that included only simple amino acids produced abiogenically and then expanded to incorporate the more complex amino acids in parallel with the evolution of their respective biosynthetic pathways (i.e., there was code–pathway coevolution). The importance of biosynthetic pathways for code evolution is almost self-evident because amino acids could not be incorporated into the code unless they were available. Under this coevolution theory, the code evolved by subdivision: In the ancestral code, large blocks of codons encoded the same amino acid but were split to encode two amino acids upon the evolution of the respective metabolic pathways. 79

So that means that these ultra-complex biosynthesis pathways evolved, catalyzing their reactions with complex enzymes and proteins, before the machinery to make proteins was even established? How would that have been possible? It is a classic chicken-and-egg problem. Another question is: why do we not find proteins with just 16 amino acids or fewer, encoded by codes assigning just 16 amino acids using two-nucleotide codons (4^2 = 16), rather than the three-nucleotide codons (4^3 = 64) that compose the current codon table? The fact that there are none is evidence against this hypothesis.

Several questions present themselves here, however. Why don't we find any protein sequences in the fossils of ancient organisms that contain only primary amino acids? The fact that no such proteins exist is strong evidence against the evolutionary origin of the genetic code. Why are there no two- or four-letter codes? Why did the code not expand to quadruplets? With 4^4 = 256 possible codon quadruplets, the coding space could have increased, and a much larger alphabet of possible proteins could have emerged.
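The combinatorics behind these questions can be checked directly. The sketch below (plain Python; the "+ 1" simply adds a stop signal to the 20 canonical amino acids) tabulates the coding capacity of each codon length over a four-base alphabet:

```python
import math

BASES = 4          # A, T(U), G, C
AMINO_ACIDS = 20   # canonical amino acids

# Number of distinct codons for each codon length
for length in (1, 2, 3, 4):
    print(f"codon length {length}: {BASES}^{length} = {BASES**length} codons")

# A doublet code (16 codons) cannot cover 20 amino acids plus a stop signal;
# a triplet code (64 codons) can, with codons left over for the degeneracy
# (synonymous codons) observed in the standard code.
doublet_ok = BASES**2 >= AMINO_ACIDS + 1   # 16 < 21
triplet_ok = BASES**3 >= AMINO_ACIDS + 1   # 64 >= 21
print("doublet sufficient:", doublet_ok, "| triplet sufficient:", triplet_ok)

# Information-theoretic view: a triplet codon can carry log2(64) = 6 bits,
# of which distinguishing 21 meanings uses only log2(21) ~ 4.39 bits.
print(f"bits used per triplet codon: {math.log2(AMINO_ACIDS + 1):.2f} of "
      f"{math.log2(BASES**3):.0f}")
```

Nothing here settles why life uses triplets; it only makes explicit the capacity arithmetic that the quoted passages argue over.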

Irene A. Chen (2010): The development of ribosomes that can read quadruplet codons could trigger a giant leap in the complexity of protein sequences. Although the practical exploration of sequence space is still limited to an infinitesimal fraction of the total volume, a full quadruplet genetic code would essentially double the information-theoretic content of proteins. Analogous studies modifying the alphabet size of ribozymes suggest that increasing the information-theoretic content of the genetic code could permit a corresponding increase in functionality. Recent work has overcome major inefficiencies in the translation of programmable quadruplet codons, paving the way for studies on fundamental questions about the origin of the genetic code and the characteristics of alternate protein “universes”. 81

T. Mukai et al. (2018): It has long been possible to transiently produce proteins bearing noncoding amino acids, but stabilizing an expanded genetic code for sustained function in vivo requires an integrated approach: creating recoded genomes and introducing new translation machinery that function together without compromising viability or clashing with endogenous pathways. 82

Here, Mukai and colleagues give away a trade secret: the genetic code and the translation machinery have to function together. That means they have to be conceptualized and implemented together. They operate on the basis of integrated complexity, which also entails interdependence and irreducible complexity. The genetic code has no function unless there is machinery to decode and translate the information, and vice versa. Foresight is necessary to engineer a system in which software and hardware join forces to convey a functional output.

Dirson Jian Li (2021): The expansion of the genetic code along the roadmap can be explained by the coevolution of tRNAs with aminoacyl-tRNA synthetases (aaRSs). A comprehensive study of the evolution of the genetic code inevitably involves the origins of tRNAs and aaRSs. The evolution of tRNAs played a significant role in fixing the number of canonical amino acids at 20. The primordial translation mechanism was invented during the evolution of the genetic code. The tRNAs and ribosomes were indispensable in the senior stage of the primordial translation mechanism, as well as in the modern translation mechanism. 82a

Li shows that the translation forms an interdependent system, where all players, hardware, and software, had to be created together.

T. Mukai continues: A synthetic organism would have additional protein constituents, noncanonical amino acids (ncAAs) assigned to their own codon in the genetic code. This would be the dream of protein engineers; it would allow the design of proteins with novel properties based on the presence of new building blocks in addition to the 20 canonical amino acids. Progress along these lines is being made, as codons have been successfully reassigned to encode ncAAs in Escherichia coli.

Protein engineers would be required to design proteins and to reassign codon triplet words, giving them new meanings.

T. Mukai: Rewriting the genetic code involves (a) engineering orthogonal translational components, (b) engineering endogenous translational components, (c) metabolome engineering, (d) massive genome/chromosome engineering for modulating global codon usage, and (e) chemical synthesis or biosynthesis of ncAAs.

Mukai spells out what would be required to rewrite the genetic code, and to achieve the goals mentioned, he invokes the necessity of engineering five times. Evolution by natural selection does not engineer anything. Intelligence does.

1. Cells demonstrably operate based on engineering principles.
2. Engineering is always performed by intelligent designers.
3. Therefore, cells are most probably designed.


The error-minimization theory holds that selection to minimize the adverse effects of point mutations and translation errors was the principal factor in the code's evolution.
The theory supposes that genetic codes with high error rates would somehow evolve to become less error-prone over time. There is no evidence for this claim. Errors only lead to non-functional products, not higher precision. One would also have to presuppose a gradual evolution of the ribosome, which at first would not have performed the translation process with today's precision, accuracy, and error-minimization scheme of error checking and repair mechanisms; these, obviously, were also not there at the beginning and would have had to evolve gradually.
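The kind of robustness comparison behind these claims (for example, the Omachi et al. sampling cited above, or the earlier Freeland-Hurst-style studies) can be sketched as a toy calculation. The code below is an illustrative simplification of my own, not any published model: it keeps the standard code's degeneracy blocks, shuffles which amino acid each block encodes, and scores each code by the mean squared change in Kyte-Doolittle hydropathy (a stand-in for the amino-acid property measures used in the literature) across all single-nucleotide substitutions:

```python
import random
from itertools import product

BASES = "TCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
# Standard genetic code (one-letter amino acids, '*' = stop), TCAG order
SGC = dict(zip(CODONS, "FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
                       "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG"))

# Kyte-Doolittle hydropathy values, used as a stand-in amino-acid property
HYDRO = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9,
         'A': 1.8, 'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3,
         'P': -1.6, 'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5,
         'K': -3.9, 'R': -4.5}

def neighbors(codon):
    """All codons reachable by a single-nucleotide substitution."""
    for i in range(3):
        for b in BASES:
            if b != codon[i]:
                yield codon[:i] + b + codon[i + 1:]

def error_cost(code):
    """Mean squared property change over all single-mutation codon pairs."""
    diffs = []
    for c in CODONS:
        if code[c] == '*':
            continue
        for n in neighbors(c):
            if code[n] != '*':
                diffs.append((HYDRO[code[c]] - HYDRO[code[n]]) ** 2)
    return sum(diffs) / len(diffs)

def random_code(rng):
    """Shuffle which amino acid each synonymous block encodes, keeping the
    SGC's degeneracy structure and stop codons fixed."""
    aas = sorted(set(SGC.values()) - {'*'})
    perm = dict(zip(aas, rng.sample(aas, len(aas))))
    return {c: aa if aa == '*' else perm[aa] for c, aa in SGC.items()}

rng = random.Random(0)
sgc = error_cost(SGC)
costs = [error_cost(random_code(rng)) for _ in range(2000)]
better = sum(c < sgc for c in costs)
print(f"SGC cost: {sgc:.2f}; random codes beating the SGC: {better}/2000")
```

Under this crude measure the standard code typically scores far better than the bulk of random alternatives, which is the observation the error-minimization literature starts from; the toy model says nothing about how such a code could have been reached.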

In 1965, Carl Woese wrote: Sonneborn has recently suggested an ingenious evolutionary mechanism whereby the codon catalogue can be highly ordered, but the order not derive from any sort of molecular interactions.
Ordering in this case would result from selection pressure for a code which is the least sensitive to the lethality introduced by mutation. A scheme such as Sonneborn's would involve countless evolutionary trials and errors, and I feel that the possibilities for evolving into "blind alleys" (forms of the code having a far lower degree of order) so far outnumber the possibilities for evolving an optimal code (the one observed) that the latter could never have evolved in this way. However, it must be admitted that without a proper analysis of the Sonneborn model, such as a computer study, this counterargument remains feeble. Thus the question is not completely resolved at this time. 83

Almost 40 years later, in 2003, STEPHEN J. FREELAND admitted:  There remain ill-explored facets of the ‘error minimizing’ code hypothesis, however, including the mechanism and pathway by which an adaptive pattern of codon assignments emerged, the extent to which natural selection created synonym redundancy, its role in shaping the amino acid and nucleotide languages, and even the correct interpretation of the adaptive codon assignment pattern 84

J.Monod (1972): Indeed, mutations are known which, impairing the structure of certain components of the translation mechanism, thereby modify the interpretation of certain triplets and thus (with regard to the convention in force) commit errors which are exceedingly prejudicial to the organism. 85

If seemingly "small" translation errors lead to catastrophic outcomes, how much more so if the system were not yet fully developed, not yet operating with exquisite precision and accuracy, and without all error-check and repair mechanisms fully implemented?

Warren Shipton and David W. Swift expand further on the topic.

No hope to find a naturalistic explanation for the origin of the genetic code
Carl Woese (1969): In conclusion the evolution of the genetic code is the major remaining problem in the coding field. This problem is also the central one in the evolution of the first "modern" cell. At present we have very little concept of what the stages and events in this most intricate process were. Understanding in this area is probably more impeded by this lack of a concept than it is by a lack of facts. Barring miracles, the code's evolution should be a gradual step-wise process, utilizing and conforming to simple interactions between nucleic acids and polypeptides and/or their derivatives, and so readily understandable. 76

J.Monod (1972):The code is meaningless unless translated. The modern cell's translating machinery consists of at least fifty macromolecular components which are themselves coded in DNA: the code cannot be translated otherwise than by products of translation. It is the modern expression of omne vivum ex ovo. When and how did this circle become closed? It is exceedingly difficult to imagine. 85

John Maynard Smith (1997) described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes: “The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.” To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion or a “good compromise”? An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids.
How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance. 86

Victor A. Gusev Arzamastsev (1997): “the situation when Nature invented the DNA code surprisingly resembles designing a computer by man. If a computer were designed today, the binary notation would hardly be used. Binary notation was chosen only at the first stage, for the purpose of simplifying as much as possible the construction of the decoding machine. But now, it is too late to correct this mistake”. 87

Yuri I. Wolf (2007): The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies. 88

Eugene V. Koonin (2012): In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology. 

Eugene V. Koonin (2017): Certainly, there have been many developments with regard to the quantification of code properties, but the fundamental framework of evolutionary ideas laid out in the classic papers has not been overhauled. Unfortunately, this is due less to the successes of the early research than to the limited and questionable progress achieved over the next half-century. Notwithstanding the complete transformation of biology that occurred over these decades, we do not seem to be much closer to the solution. 89

Marcello Barbieri (2018): "...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them." 90

Julian Mejía (2018): Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular, to enzyme-less reproduction, proliferation and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code. Carl Woese (1967), reproduced in the seminal paper of Yarus et al. cited frequently above: “I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon.” 91

Charles W Carter (2018): The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 92

Florian Kaiser (2020): One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated 93

JOSEF BERGER (1976):  The complexity of the functional correlation between recent nucleic acids and proteins can e.g. give rise to the assumption that the genetic code (and life) could not originate on the Earth. It was Portelli (1975) who published the hypothesis that the genetic code could not originate during the history of the Earth. In his opinion the recent genetic code represents the informational message transmitted by living systems of the previous eyrie of the Universe. Our last assumption is also in agreement with Crick's (1968) hypothesis that the triplets are formed before the creation of life.  94a

What creates codes, algorithms, and translation systems?
David L. Abel (2009): Computational methods often employ genetic algorithms (GA’s). The GA search technique begins with a large random pool of representations of “potential solutions.” GA’s are not dealing with physicodynamic cause-and-effect chains. A representation of any kind cannot be reduced to inanimate physicality. Second, “potential solutions” are formal, not merely physical entities. The optimized solution was purposefully pursued at each iteration. The overall process was entirely goal-directed (formal). Real evolution has no goal. Fourth, a formal fitness function is used to define and measure the fittest solutions thus far to a certain formal problem.  Genetic algorithms are no model at all of natural process. GA’s are nothing more than multiple layers of abstract conceptual engineering. Like language, we may start with a random phase space of alphabetical symbols. But no meaning or function results without deliberate and purposeful selection of letters out of that random phase space. No abiotic primordial physicodynamic environment could have exercised such programming prowess. Neither physics nor chemistry can dictate formal optimization, any more than physicality itself generates the formal study of physicality. Human epistemological pursuits are formal enterprises of agent minds. Natural process genetic algorithms have not been observed to exist. The genetic algorithms of living organisms are just metaphysically presupposed to have originated through natural process. But genetic algorithms cannot be used to model spontaneous life origin through natural process because genetic algorithms are formal. 94
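Abel's description of what a GA actually requires can be made concrete with the most familiar toy example, a Dawkins-style "weasel" search. This is an illustrative sketch only (the target string, mutation rate, and population size are arbitrary choices of mine, not from any cited work): every formal ingredient Abel lists, the goal, the fitness function, and the selection rule, is supplied by the programmer before the "evolution" begins:

```python
import random

# Every formal element below is designer-supplied: the target (the goal),
# the fitness function (the measure of "fittest so far"), and the
# selection rule (keep the best of each generation).
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Formal fitness function: count matches against a pre-specified target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    """Randomly replace characters; the only 'chance' element in the loop."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else ch
                   for ch in s)

rng = random.Random(1)
parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    generation += 1
    offspring = [mutate(parent, 0.05, rng) for _ in range(100)]
    # Designer-supplied selection: keep the fittest, never lose ground
    parent = max(offspring + [parent], key=fitness)
print(f"target reached after {generation} generations")
```

Remove the target and the fitness function and the loop has no direction at all, which is exactly the point Abel is making: the steering is formal, not physicodynamic.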

S.C. Meyer, P.Nelson (2011): Persistent lack of progress on a scientific problem is exactly what one should expect when a causal puzzle has been fundamentally misconceived, or when the toolkit employed in causal explanation is too limited. Our knowledge of cause and effect, long understood to be the basis of all scientific inference and explanation, affirms that true codes—and the semantic relationships they embody—always arise from intelligent causes. If the genetic code as an effect gives evidence of irreducible semantic or functional mappings—i.e., if what we see operating in cells is not like a code, but genuinely is a code—then we should seek its explanation in the only cause “true and sufficient” to such effects: intelligence. 95
 
M.Eberlin (2019):The genetic information and the genetic code together include features like semantic logic and the meaningful ordering of characters—things not dictated by any laws of physics or chemistry 96

Order vs. Organization
David L. Abel (2009):  Organization ≠ order. Disorganization ≠ disorder. Spontaneous bona fide self-organization has never been observed. “Self-organization” is logically a nonsense term. Inanimate objects cannot organize themselves into integrated, cooperative, holistic schemes. Schemes are formal, not physical. To organize requires choice contingency, not just chance contingency and law-like necessity. Sloppy definitions lead to fallacious inferences, especially to category errors. Organization requires 1) decision nodes, 2) steering toward a goal of formal function, 3) algorithmic optimization, 4) selective switch-setting to achieve integration of a circuit, 5) choice with intent. The only entity that logically could possibly be considered to organize itself is an agent. But not even an agent self-organizes. Agents organize things and events in their lives. They do not organize their own molecular biology, cellular structure, organs and organ systems. Agents do not organize their own being. Agents do not create themselves. They merely make purposeful choices with the brains and minds with which they find themselves. Artificial intelligence does not organize itself either. It is invariably programmed by agents to respond in certain ways to various environmental challenges in the artificial life data base. Thus the reality of self-organization is highly suspect on logical and analytic grounds even before facing the absence of empirical evidence of any spontaneous formal self-organization. Certainly no prediction of bona fide self-organization from unaided physicodynamics has ever been fulfilled. Of course, if we fail through sloppy definitions to discern between self-ordering phenomena and organization, we will think that evidence of self-organization is abundant. We will point to hundreds of peer-reviewed papers with “self-organization” in their titles. 
But when all of these papers are carefully critiqued with a proper scientific skepticism, our embarrassment only grows with each exposure of the blatant artificial selection that was incorporated into each paper’s experimental design. Such investigator involvement is usually readily apparent right within Materials and Methods of the paper. 

Formalism vs. Physicality
When it comes to life-origin studies, we have to address how symbol selection in the genetic material symbol system came about objectively in nature. Life-origin science must address the derivation of objective organization and control in the first cells. How did prescriptive information and control arise out of the chaos of a primordial slime, vent interfaces in the ocean floor, or mere tide pools? We have no evidence whatsoever of formal organization arising spontaneously out of physical chaos or self-ordering phenomena. Chance and necessity has not been shown to generate the choice contingency required to program computational halting, algorithmic optimization, or sophisticated function. If chance and necessity, order and complexity cannot produce formal function, what does? Selection for potential utility is what optimizes algorithms, not randomness, and not fixed law. Utility lies in a third dimension imperceptible to chance and necessity. What provides this third dimension is when each token in a linear digital programming string is arbitrarily (non physicodynamically, formally) selected for potential function. The string becomes a cybernetic program capable of computation only when signs/symbols/tokens are arbitrarily chosen from an alphabet to represent utilitarian configurable switch settings. The choice represented by that symbol can then be instantiated into physicality using a dynamically inert configurable switch setting. At the moment the switch knob seen in Figure 4 is pushed, nonphysical formalism is instantiated into physicality. Then and only then does algorithmic programming become a physical reality. Once instantiated, we easily forget the requirement of instantiation of formal instructions and controls into the physical system to achieve engineering function. It was the formal voluntary pushing of the configurable switch knob in a certain direction that alone organized physicality.  
The selection of any combination of multiple switch settings to achieve degrees of organization is called programming. But purposefully flipping the very first binary configurable switch is the foundation and first step of any form of programming. Programming requires choice contingency. . No known natural process spontaneously compresses an informational message string. Any type of measurement is a formal function that cannot be reduced to physicodynamics. We do not plug initial conditions into the formal equations known as “the laws of physics.” We plug symbolic representations of those initial conditions into the laws of physics. Then we do formal mathematical manipulations of these equations to reliably predict physicodynamic interactions and outcomes. In this sense formalism governs physicality. The role that mathematics plays in physics is alone sufficient to argue for formalism’s transcendence over physicality. Just as it takes an additional dimension to measure the algorithmic compressibility of a sequence, it takes still another dimension to measure the formal utility of any sequence. Formalisms are abstract, conceptual, representational, algorithmic, choice-contingent, non physical activities of mind. Formalisms typically involve steering toward utility. Formalisms employ controls rather than mere physicodynamic constraints. Formalisms require obedience to arbitrarily prescribed rules rather than forced laws. Physicodynamics cannot visualize, let alone quantify formal utility. Formalisms cannot be produced by chance or necessity. Language, for example, uses arbitrary symbol selections from an alphabet of options. Logic theory uses rules, not laws, to judge inferences. Programming requires choice contingency at each decision node. Each logic gate and configurable switch must be deliberately set a certain way to achieve potential (not-yet-existent) computational halting. These are all formal functions, not spontaneous physicodynamic events. 
They are just as formal as mathematics. Decision nodes, logic gates, and configurable switches cannot be set by chance and/or necessity if sophisticated formal utility is expected to arise. They must be set with the intent to control and to program computational halting. Acknowledgment of the reality of formal controls was growing within the molecular biological community even prior to the now weekly new discoveries of extraordinarily sophisticated cybernetic mechanisms in cellular physiology. 97

These are Abel's key terms: Intelligence can utilize search techniques; it can instantiate purposefully pursued, goal-directed processes. It can define and measure the fittest solutions. It can implement abstract conceptual engineering. It can deliberately and purposefully select letters, set and select switches to achieve the integration of a circuit, and make choices with intent. It can formally and mathematically manipulate equations to reliably predict physicodynamic interactions and outcomes. The mind can instantiate abstract, conceptual, representational, algorithmic, choice-contingent, and non-physical affairs. A formalized setup requires obedience to arbitrarily prescribed rules rather than forced laws. Intelligence can deliberately set things a certain way to achieve potential (not-yet-existent) computational halting. Minds can implement events that are not spontaneous physicodynamic occurrences.

Neither chance nor laws of physics can do any of this. An intelligent mind is required to instantiate a system, where information, data based on digital codes, and data transmission can direct the assembly process of complex specified machines and factories for defined purposes.  

Open questions: 98
1. Did the code dialects appear accidentally or as a result of some kind of selection process? Examples: the mitochondrial version, in which the UGA codon (the stop codon in the universal version) codes for tryptophan and the AUA codon (isoleucine in the universal version) codes for methionine; and Candida cylindracea (a fungus), in which the CUG codon (leucine in the universal version) codes for serine.
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure?
4. Why is the genetic code not overlapping? That is, why does the translation apparatus of the cell read the information in discrete steps of three nucleotides, and not of one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids?
6. Is the existing distribution of codon degeneracy among particular amino acids accidental, or the result of some kind of selection process?
7. Why were only 20 canonical amino acids selected for protein synthesis?
8. Why should there be a genetic code at all?
9. Why should there be the emergence of a stereochemical association of a specific arbitrary codon-anticodon set?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?
11. Is this very choice of amino acids accidental or some kind of selection process?
12. Why don’t we find any protein sequences in the fossils of ancient organisms, which only have primary amino acids?
13. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62?
14. Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, coding space could have increased, and thus a much larger universe of possible proteins could have been made possible.
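Questions 5 and 6 refer to figures that can be read straight off the standard codon table. A small sketch (plain Python; the 64-letter string is the standard code with codons in TCAG order and '*' marking the three stop codons):

```python
from collections import Counter
from itertools import product

CODONS = ["".join(c) for c in product("TCAG", repeat=3)]
# Standard genetic code in TCAG order; '*' = stop codon
SGC = dict(zip(CODONS, "FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
                       "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG"))

# Count how many codons encode each amino acid (the degeneracy)
degeneracy = Counter(aa for aa in SGC.values() if aa != "*")
for aa, n in sorted(degeneracy.items(), key=lambda kv: (kv[1], kv[0])):
    print(f"{aa}: {n} codon(s)")
print("amino acids:", len(degeneracy),
      "| degeneracy range:", min(degeneracy.values()),
      "to", max(degeneracy.values()))
```

Running this confirms the figures the questions cite: 61 sense codons cover 20 amino acids, with degeneracy ranging from one codon (Met, Trp) up to six (Leu, Ser, Arg); why the distribution takes this particular shape is exactly what question 6 asks.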

72. David L Abel: Redundancy of the genetic code enables translational pausing  2014 May 20
72a.  Susha Cheriyedath: START and STOP Codons Feb 26, 2019
73. University of Utah Reading the genetic code depends on context APRIL 17, 2017
74. J Lehmann: Physico-chemical constraints connected with the coding properties of the genetic system 2000 Jan 21
75. Carl R. Woese: Aminoacyl-tRNA Synthetases, the Genetic Code, and the Evolutionary Process  2000 Mar; 6
76. CARL R. WOESE: The Biological Significance of the Genetic Code 1969
77. Ádám Radványi: The evolution of the genetic code: Impasses and challenges February 2018
77. Henri Grosjean: An integrated, structure- and energy-based view of the genetic code 2016 Sep 30
78. C R Woese: The molecular basis for the genetic code. 1967
79. Eugene V. Koonin: Origin and Evolution of the Universal Genetic Code 2017
80. TZE-FEI WONG: A Co-Evolution Theory of the Genetic Code 1975
81. Irene A. Chen:  An expanded genetic code could address fundamental questions about algorithmic information, biological function, and the origins of life 20 July 2010
82. Takahito Mukai: Rewriting the Genetic Code  July 11, 2017
82a. Dirson Jian Li: Formation of the Codon Degeneracy during Interdependent Development between Metabolism and Replication 20 December 2021
83. C R Woese: Order in the genetic code 1965 Jul;5
84. Stephen J. Freeland: The Case for an Error Minimizing Standard Genetic Code October 2003
85. J.Monod: Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology 12 September 1972
86. John Maynard Smith: The Major Transitions in Evolution 1997
87. Victor A. Gusev Arzamastsev: Genetic code: Lucky chance or fundamental law of nature? 1997
88. Yuri I Wolf On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization 2007 May 31
89. Eugene V. Koonin: Origin and evolution of the genetic code: the universal enigma 2012 Mar 5
90. Marcello Barbieri Code Biology  February 2018
91. Julian Mejía: Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation April 18, 2018
92. Charles W Carter: Insuperable problems of the genetic code initially emerging in an RNA World 2018 February
93. Florian Kaiser: The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases 28 July 2020
94a. Josef Berger: The Genetic Code and the Origin of Life 1976
94. David L. Abel: The Capabilities of Chaos and Complexity 9 January 2009
95. S.C. Meyer, P.A. Nelson: Can the Origin of the Genetic Code Be Explained by Direct RNA Templating? August 24, 2011
96. M.Eberlin Foresight 2019
97. David L. Abel: The Capabilities of Chaos and Complexity 9 January 2009
98. Victor A.Gusev: Genetic code: Lucky chance or fundamental law of nature? December 2004
