ElShamah - Reason & Science: Defending ID and the Christian Worldview

Otangelo Grasso: This is my library, where I collect information and present arguments developed by myself that lead, in my view, to the Christian faith, creationism, and Intelligent Design as the best explanation for the origin of the physical world.



The Cell factory maker, Paley's watchmaker argument 2.0


On the Origin of Life and the Virus world by the means of an Intelligent Designer

The Cell factory maker, Paley's watchmaker argument 2.0

https://www.amazon.com/Origin-Virus-World-Intelligent-Designer-ebook/dp/B0BJ9RYMD6?ref_=ast_sto_dp

https://reasonandscience.catsboard.com/t2809-on-the-origin-of-life-by-the-means-of-an-intelligent-designer

Index Page: On the origin of Cell factories by the means of an intelligent designer

Introduction
This book is about one of the deepest unsolved mysteries: the immensely difficult puzzle of the origin of life. The modern era of origin-of-life research began in 1953, when Watson and Crick discovered the structure of the DNA molecule and Miller and Urey performed their famous chemical experiment. In the at least 70 years of scientific inquiry and investigation since then, billions of dollars have been spent and millions of man-hours invested, yet no clear answer has emerged for the trajectory from non-life to life by chemical evolutionary means.

Kepa Ruiz-Mirazo (2014): We are still far from understanding which principles governed the transition from inanimate to animate matter. 15
Erik D. Andrulis (2011): Life is an inordinately complex unsolved puzzle. Despite significant theoretical progress, experimental anomalies, paradoxes, and enigmas have revealed paradigmatic limitations. 16
Yasuji Sawada (2020): The structures of biological organisms are miraculously complex and the functions are extremely multi-fold, far beyond imagination. Even global features such as metabolism, self-replication, evolution, or common cell structures make us wonder how they were created in the history of nature. 16
Tomislav Stolar (2019): The emergence of life is among the most complex and intriguing questions, and the majority of the scientific community agrees that we are far from being able to provide an answer to it. 15

Finding a compelling explanation of how life started matters deeply to us, since it is one of the questions that can give us a hint as to which worldview is true: naturalism or theism. Was a supernatural entity, an intelligent designer, involved? Creation has been branded by many as a myth: fairy tales of ancient sheep herders, of uneducated tribesmen who lack the credentials of today's scientists, who work actively in the field and perform the relevant experiments that can point toward the truth. Rather than revelation, science operates on experimentation. Is the hypothesis of Intelligent Design not, after all, the case-adequate answer and explanation?

Prof. Dr. Ruth E. Blake (2020): Investigating the origin of life requires an integrated and multidisciplinary approach including but not limited to biochemistry, microbiology, biophysics, geology, astrobiology, mathematical modeling, and astronomy. 14

Inquiry into the origin of life extends across several scientific disciplines: the research fields of synthetic organic chemists, biologists, biochemists, theoretical and evolutionary biologists, theoretical chemists, physical chemists, physicists, and earth and planetary scientists. Answering the question of the origin of life is maybe THE most challenging of all open scientific questions. Life is baffling. Cells are astonishingly complex. Life is staggeringly, breathtakingly complex, far beyond what any human technology has been able to come up with. The difference between life and non-life is mind-boggling. Cells are the most complex miniature chemical factories in the universe. They build their own building blocks, make their own energy, and host all the information to make all the awe-inspiringly complex machines, production lines, compartments, and information storage devices, as well as the envelope that hosts all this technology. Materials, energy, and information are joined to make a self-replicating factory. Any one of the three alone achieves nothing. All three have to be joined together at the same time, as a team, to climb mount insurmountable. Irreducibly complex, integrated, and interdependent factories are constructed by actualizing and instantiating complex instructional information that directs the making of the cell. Codified digital information directs and instantiates the equivalent analog device. But what came before the data? What instantiated transcription, and the code of translation? Where did all this come from?

Where did life come from? What is the origin of these masterfully crafted high-tech molecular machines, turbines, computers, and production lines? Science has faced, and still faces, monumental challenges in answering these questions. In this book, the author hopes to lead the reader on a journey, and through its demonstrations to the conclusion that intelligent design is clearly, and without doubt, the best explanation. The evidence against naturalistic origins does not come from outside critics like creationists, but from the inside out, from working scientists in the field. Peer-reviewed scientific papers demonstrate on their own that natural, unguided, non-intelligent causes do not sufficiently but only inadequately explain the origin of life, constantly attempting to resort to natural selection and evolution, a mechanism that does not belong to the origin of life; and they show how the evidence, especially on a molecular level, demonstrates overwhelmingly the signature and footprints of an intelligent, awe-inspiring, super-powerful designer. A mathematician, a physicist, a chemist, a biologist, and an engineer of the highest, finest, smartest, unfathomable order. One that created everything physical, instantiating and actualizing eternal energy and the laws that govern the universe, and creating life complex, sophisticated, organized, integrated, and interdependent from the start. Elegantly instantiating the most energy-efficient solutions, leaving nothing to chance, but designing and arranging every atom, placing it where it can perform its function and contribute to the operation of the system as a whole. This book will unravel how many scientists (mostly against their bias) have to admit to having no compelling answers in regard to the Origin of Life (OoL).
In the case of the origin of life, absence of evidence (that unguided random events could in principle have selected and instantiated the first self-replicating, living cells) is evidence of absence. The conclusion that an intelligent designer must have been involved in creating life is not an argument from ignorance, but an inference to the best explanation based on all the scientific information at our disposal today. The facts that lead to the cause, the evidence, are not provided by proponents of creationism and Intelligent Design but come directly from the scientific, peer-reviewed literature. The authors, in their overwhelming majority, do hold to the inference that natural, not supernatural, causes explain all the natural phenomena in question, but this is an inference that stands on philosophical, not scientific, ground. Rather than actually finding explanations based on naturalistic causes, the gap is becoming wider and wider: the more science advances and digs deeper into the intricacies, the more new levels of complexity are unraveled. In 1871, Nature magazine published an article about Ernst Haeckel's views on abiogenesis, describing biological cells as essentially nothing more than structureless bits of protoplasm without nuclei (protoplasm = an albuminoid, nitrogenous carbon compound), as primitive slime, whose vital properties could therefore be brought about simply and entirely by the peculiar and complex manner in which carbon, under certain conditions, can combine with the other elements. 1  This book aims, beyond what other books already published on the subject have done, to advance the subject further and bring lesser-known facts to light that illustrate the supreme sophistication of cells, and to show how Paley's watchmaker argument can be expanded into a new version, which we call: The factory maker argument.

Recent biochemistry books, like Bruce Alberts's classic Molecular Biology of the Cell 6, have opened up an entirely new world of miniaturized technology on a microscale, permitting us to learn about the intricacies and complexity of the inner workings of living cells: a journey into a microworld that is bewildering, full of unexpected, awe-inspiring high-tech devices and incredible worlds that scientists half a century ago never expected to encounter. There is a wealth of mind-boggling aspects of life, of superb cellular engineering and architecture, and of precise, exquisite operations that are not common knowledge. They point to a super-intellect, an engineer of unimaginable intelligence and power. One of our awe-inspiring discoveries over the years was the fact that cells are not merely analogous to factories, but are an entire city, park, or district of interlinked chemical high-tech factories in a literal sense, with superb capabilities, efficiency, and adaptive design. Dozens and dozens of scientific papers describe cells as literal minuscule autonomous self-replicating factories that are built and operate based on information and sophisticated interconnected signaling networks. There are two types of information: descriptive and prescriptive. Descriptive information is produced when someone encounters an artifact and describes it. Blueprints, floor plans, architectural drawings, and a recipe to make a cake all contain "know-how" information. It is instructional assembly information. It instructs, dictates, directs, orients, and tells the reader/receiver how to construct, how to assemble, how to make, and how to operate things. Interestingly, that is precisely the kind of information we encounter in the genome and in epigenetic information storage mechanisms. Some have described the genome as a blueprint, and others have compared it to a library: a scrambled and seemingly disorganized collection of genes, analogous to books.
Most of it is so-called junk DNA, while just a tiny part, a few percent, encodes proteins, the workhorses of the cell. Over time it has been unraveled more and more that this so-called junk DNA is actually not non-functional junk, a remnant from a long evolutionary past, but that it has a function. It regulates gene expression, amongst other tasks.


In the 150 years since Haeckel's day, the understanding on a molecular level of how cells operate and function has gone through a remarkable development. Scientific investigation has come to realize what was well expressed in an article in 2014:

The upward trend in the literature reveals that as technology allows scientists to investigate smaller and smaller cellular structures with increasing accuracy, they are discovering a level of molecular complexity that is far beyond what earlier generations had predicted. Molecular biologists often focus years of their research on individual molecules or pathways. 2

What science has come to discover is more and more new levels of complexity, down to the precise, complex, specified order, arrangement, and fine-tuning of the most basic molecules and building blocks used in life, down to atoms.
Bharath Ramsundar gave some data points in a remarkable article. He wrote in 2016:

One neuron contains roughly 175 trillion ( precisely ordered) atoms per cell. Assuming that the brain contains 100 billion neurons, the total brain contains roughly 2 * 10^25 atoms. ( That's more than the number of stars in the universe ) Both estimates are likely low. 3
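Ramsundar's total is a straightforward multiplication of the two quoted figures. As a quick sanity check, a minimal sketch using only the rounded numbers from the quote above:

```python
# Back-of-the-envelope check of the atom-count estimate quoted above.
# Figures are the rounded values from the article, not measurements.
atoms_per_neuron = 175e12   # ~175 trillion atoms in one neuron
neurons_in_brain = 100e9    # ~100 billion neurons in a human brain

total_atoms = atoms_per_neuron * neurons_in_brain
print(f"total atoms: {total_atoms:.2e}")  # total atoms: 1.75e+25
```

The product, about 1.75 × 10^25, rounds to the "roughly 2 * 10^25" figure given in the quote.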

Another example we find in David Goodsell's excellent book: Our Molecular Nature, page 26:
Dozens of enzymes are needed to make the DNA bases cytosine and thymine from their component atoms. The first step is a "condensation" reaction, connecting two short molecules to form one longer chain, performed by aspartate carbamoyltransferase. Other enzymes then connect the ends of this chain to form the six-sided ring of nucleotide bases, and half a dozen others shuffle atoms around to form each of the bases. In bacteria, the first enzyme in the sequence, aspartate carbamoyltransferase, controls the entire pathway. (In human cells, the regulation is more complex, involving the interaction of several of the enzymes in the pathway.) Bacterial aspartate carbamoyltransferase determines when thymine and cytosine will be made, through a battle of opposing forces. It is an allosteric enzyme, referring to its remarkable changes in shape (the term is derived from the Greek for "other shape"). The enzyme is composed of six large catalytic subunits and six smaller regulatory subunits. Take just a moment to ponder the immensity of this enzyme. The entire complex is composed of over 40,000 atoms, each of which plays a vital role. The handful of atoms that actually perform the chemical reaction are the central players. But they are not the only important atoms within the enzyme--every atom plays a supporting part. The atoms lining the surfaces between subunits are chosen to complement one another exactly, to orchestrate the shifting regulatory motions. The atoms covering the surface are carefully picked to interact optimally with water, ensuring that the enzyme doesn't form a pasty aggregate, but remains an individual, floating factory. And the thousands of interior atoms are chosen to fit like a jigsaw puzzle, interlocking into a sturdy framework. Aspartate carbamoyltransferase is fully as complex as any fine automobile in our familiar world.
And, just as manufacturers invest a great deal of research and time into the design of an automobile, enzymes like aspartate carbamoyltransferase have been finely tuned -- and here Goodsell adds just five words at the end of the sentence: "over the course of evolution." 4

After describing, like an epic, how this enzyme is masterfully crafted down to the atomic scale, resorting to the analogy of the manufacturing of an automobile, he concludes that this life-essential enzyme was finely tuned by a process that does not operate based on intelligence, namely evolution. You did read that right. Worse than that: aspartate carbamoyltransferase is part of the biosynthesis of pyrimidines, the nucleobases that make up RNA and DNA, life-essential molecules that had to be made before life started; as such, evolution cannot be invoked as the mechanism that synthesized them. There is a minimal number of proteins and enzymes required to start life, and since the synthesis of RNA and DNA is life-essential, the origin of aspartate carbamoyltransferase is a question that belongs to origin-of-life research.

There is more: remarkably, not only is this exquisitely engineered molecular machine that makes nucleobases finely and precisely adjusted to operate and perform its job on an atomic scale; its product is finely tuned as well to perform a life-essential function: forming Watson-Crick base pairing, giving rise to the famous DNA ladder, the information storage mechanism of life.

John D. Barrow reports in his book Fitness of the Cosmos for Life: Biochemistry and Fine-Tuning, on page 154:
Today, it is particularly striking to many scientists that cosmic constants, physical laws, biochemical pathways, and terrestrial conditions are just right for the emergence and flourishing of life. It now seems that only a very restricted set of physical conditions operative at several major junctures of emergence could have opened the gateways to life. Fine-tuning in biochemistry is represented by the strength of the chemical bonds that makes the universal genetic code possible. Neither transcription nor translation of the messages encoded in RNA and DNA would be possible if the strength of the bonds had different values. Hence, life, as we understand it today, would not have arisen.  As it happens, the average bond energy of a carbon-oxygen double bond is about 30 kcal per mol higher than that of a carbon-carbon or carbon-nitrogen double bond, a difference that reflects the fact that ketones normally exist as ketones and not as their enol-tautomers. If (in the sense of a “counterfactual variation”) the difference between the average bond energy of a carbon-oxygen double bond and that of a carbon-carbon and carbon-nitrogen double bond were smaller by a few kcal per mol, then the nucleobases guanine, cytosine, and thymine would exist as “enols” and not as “ketones,” and Watson–Crick base-pairing would not exist – nor would the kind of life we know. It looks as though this is providing a glimpse of what might appear (to those inclined) as biochemical fine-tuning of life. 5

The machinery that makes the building blocks of RNA and DNA, aspartate carbamoyltransferase, is finely tuned on an atomic level, and it depends on the information stored in DNA in order to be made. The nucleobases that make up DNA are also finely tuned in their atomic arrangement to permit the right forces to make Watson-Crick base pairing possible. And so the stable, life-essential information storage mechanism depends on the finely tuned aspartate carbamoyltransferase machine in order to be synthesized, and vice versa: a Catch-22 situation. If everything depends on everything else to be made, how did it all start?

To me, fine-tuning, interdependence, irreducible complexity, and complex specified order on an atomic scale are awe-inspiring evidence of creation. Maybe the skeptical reader will react with resistance to such evidence and will attempt to resort to naturalistic, non-engineered principles acting in nature. The author will provide many other examples that will make it very difficult, if not impossible, to keep asserting that non-intelligent mechanisms are a compelling explanation. For some, there is never enough evidence to conclude God. But others might come to realize that the teleological argument today is solidly grounded in scientific evidence, and that it is therefore reasonable to be an advocate of intelligent design.

Chapter 1


The aim of this book is not to point to a specific divinity, but to an intelligent designer as the best explanation of the origin of life and biodiversity. Identifying the creator, or which religion is true, is not within the scope of this book. The author relies on peer-reviewed scientific papers to make his case; providing these sources serves as the basis of the concluding arguments. A common criticism is that every scientific problem where God was inserted into a gap of knowledge has eventually been given a scientific explanation. The more science advances, the more it solves open questions, and religion recedes and hides in an ever-smaller pocket of ignorance. "In the past, we didn't know how lightning worked, and Zeus was invoked. Today we know better." We hear that frequently. The conflict has been portrayed as one between science and religion. Science is based on reason and the scientific method, which gives us empirical results; religion is based on blind faith and left to the "religtards". Reason-based thinking is supposedly based on apistevism, with no faith required (not considering that whatever position one takes, naturalism or theism, is a position one has to take on faith, since nobody has access to the realm beyond our physical universe; metaphysics and the realm of fundamental ontology are beyond scientific investigation). Religion vs. science is a false dichotomy. It is not rare that when I ask an atheist who denies a creator what he would like to replace God with, the answer is: science... not recognizing the logical fallacy that science is not a causal mechanism or agency, but a tool to explore reality. Kelly James Clark writes in his book Religion and the Sciences of Origins: Historical and Contemporary Discussions:

The deepest intellectual battle is not between science and religion (which, as we have seen, can operate with a great deal of accord), but between naturalism and theism—two broad philosophical (or metaphysical) ways of looking at the world. Neither view is a scientific view; neither view is based on or inferable from empirical data. Metaphysics, like numbers and the laws of logic, lies outside the realm of human sense experience. So the issue of naturalism versus theism must be decided on philosophical grounds. Metaphysical naturalism is the view that nothing exists but matter/energy in space-time. Naturalism denies the existence of anything beyond nature. The naturalist rejects God, and also such spooky entities as souls, angels, and demons. Metaphysical naturalism entails that there is no ultimate purpose or design in nature because there is no Purposer or Designer. On the other hand, theism is the view that the universe is created by and owes its sustained existence to a Supreme Being that exists outside the universe. These two views, by definition, contradict each other.  6

The proposition that no creator is required to explain our existence is what unites all nonbelievers: weak and strong atheists, agnostics, skeptics, and nihilists. When pressed hard on how that makes sense, the common cop-out is: We don't know what replaces God. Science is working on it.

God of the gaps
Isaac Newton and Laplace provide a classic example of a God-of-the-gaps argument. Newton's equations were a great tool to predict and explain the motions of the planets. Since there are several gravitational interactions between them, Newton suspected that gravity would disrupt their trajectories, and that God would intervene from time to time to solve the problem and bring them back on track. Legend tells us that Laplace, a French mathematician, was brought to Napoleon, who asked him about the absence of God in his theory: "M. Laplace, they tell me you have written this large book on the system of the universe, and have never even mentioned its Creator." To this, Laplace famously replied, "I had no need of that hypothesis." Newton used his lack of knowledge to insert God into the gap. This becomes problematic when, later, the gap is filled with a scientific explanation. Design advocates are frequently accused of using this fallacy. The accusation that God is a gap filler is a horse beaten ad nauseam. It's invoked in almost every theist-atheist debate when atheists are unable to successfully refute a theist claim. No, God is NOT a gap filler. God can be a logical inference based on the evidence observed and at hand in the natural world. If a theist said, "We don't know what caused 'x', therefore God," it would indeed be a God-of-the-gaps fallacy. What, however, can often be said today is: "Based on current knowledge, an intelligent creative agency is a better explanation than materialistic naturalism." If one is not arguing from ignorance, but rather reasoning from the available evidence to the best explanation, is it not rather ludicrous to accuse ID proponents of launching a 'god of the gaps' argument?

Paul Davies comments in The Goldilocks Enigma, on page 206:
The weak point in the “gaps” argument of the Intelligent Design movement is that there is no reason why biologists should immediately have all the answers anyway. Just because something can’t be explained in detail at this particular time doesn’t mean that it has no natural explanation: it’s just that we don’t know what it is yet. Life is very complicated, and unraveling the minutiae of the evolutionary story in detail is an immense undertaking. Actually, in some cases we may never know the full story. Because evolution is a process that operates over billions of years, it is entirely likely that the records of many designlike features have been completely erased. But that is no excuse for invoking magic to fill in the gaps. So could it be that life’s murky beginning is one of those “irreducible” gaps in which the actions of an intelligent designer might lie? I don’t think so. Let me repeat my warning. Just because we can’t explain how life began doesn’t make it a miracle. Nor does it mean that we will never be able to explain it—just that it’s a hard and complicated problem about an event that happened a long time ago and left no known trace. But I for one am confident that we will figure out how it happened in the not-too-distant future. 7

Isn't that interesting? While Davies accuses ID proponents of using gap arguments, he does precisely that himself, by inserting naturalism into gaps of knowledge: we don't know (yet), therefore natural processes must have done it. Why can someone not start with the presupposition that an eternal, powerful, intelligent, conscious creator must be at the bottom of reality, and use that as a starting point to investigate whether the God hypothesis withstands scrutiny?

Limited causal alternatives do not justify the claim of "not knowing"
Hosea 4:6 says: People are destroyed for lack of knowledge. Often, people, whether because of confirmation bias or bad will, blindly believe what others say, without scrutinizing on their own whether what they read and are told is true, or made up, unwarranted, and based on superficial and ultimately false, misleading information. Not only that: when questioned, they argue based on that badly researched information and expect others to take them seriously and regard them as knowledgeable. This is particularly frustrating when the interlocutor actually HAS investigated the issue in question in a duly serious fashion and knows the matter at hand, but encounters deaf ears. Scientists hate saying "we don't know." They prefer to stay silent until they do know, much less base their entire worldview on being "comfortable with not knowing". Confessing ignorance, when there are good reasons to confess it, is justifiable. But claiming not to know something, despite the evident facts readily at hand and the ability to come to informed, well-founded conclusions based on sound reasoning and known evidence, is not only willful ignorance but plain foolishness. Especially when the issues in the discussion are related to origins and worldviews, which ground how we see the world: past, present, future, morals, and so on. If there were hundreds of possible statements, then claiming not to know which makes the most sense could be justified. In the quest for God, there are just two possible explanations.

A God, or no God. That's the question
Either there is a God, a creator and causal agency of the universe, or there is not. God either exists or he doesn't. These are the only two possible explanations. The law of excluded middle is given the name of law for a reason: when we say something is either A or not A, there is no middle, no third option; it is one of the fundamental laws of logic. It's a dichotomy: it's either God or not God. When we reduce the "noise", we come down to the distinction between the two big competing worldviews. Either nature is the product of pointless stupidity of no existential value, or it is the display of God's sublime grandeur and intellect. How we answer the God question has significant implications for how we understand ourselves, our relation to others, and our place in the universe. Remarkably, however, many people [especially in the West, as in Europe, for example] don't give this question nearly the attention it deserves; they live as though it doesn't really matter to everyday life. Either our worldview is based on believing in naturalism and materialism, which means that the physical world had no causal agency, or our worldview is based on deism, theism, or creationism. Some posit a pantheistic principle (an impersonal spirit), but in the author's view, spirits are by definition personal. That is the dichotomy that simplifies our investigation a lot: either an intelligent designer, a powerful creator, a conscious being with will, foresight, aims, and goals exists, or not.

What's the Mechanism of Intelligent Design?
We don't know how exactly a mind might act in the world to cause change. As I sit here typing, my mind, mediated by my brain, sends signals to my arm, hand, and fingers, and writes this text through the keyboard of the computer. I cannot explain to you how exactly this process functions, but we know it happens. Consciousness can interact with the physical world and cause change. But how exactly that happens, we don't know. Why then should we expect to know how God created the universe? The hypothesis of intelligent design proposes an intelligent mental cause as the origin of the physical world. Nothing else.

Is the "God concept" illogical?
The author has seen several atheists claiming that they were never presented with a God concept that was possible, logical, plausible, and non-contradictory, and that God is therefore impossible. The author's description of God for this discussion is: God is a conscious mind. He is spaceless, timeless, immaterial, powerful, intelligent, and personal, and brought space, time, and matter into being. Consciousness encompasses the mind, "qualia", intellectual activity, calculating, thinking, forming abstract ideas, imagination, introspection, cognition, memories, awareness, experiencing, intentions, free volition, free creation, invention, and the generation of information. It classifies, recognizes, and judges behavior, good and evil. It is aware of beauty and feels sensations and emotions. There is nothing contradictory in this hypothesis, and it is therefore on the table as a possible option.

Did God create Ex-nihilo?
God could eternally be in possession of infinite potential energy and dispose of it whenever it suits him to use it. He can be an eternal mind of infinite knowledge, wisdom, and intelligence, who creates minds similar to his own, but with less and limited intelligence, and who also creates physical worlds/universes through his eternal power. Power comes from the Latin potere, which means to be able to do: able to provoke change. God is spirit, but has eternal power at his disposal to actualize in various forms. So basically, he is not creating the physical world from nothing, but from a potential at his disposal, which can manifest when he decides, precisely in the way that he wills and decides. So anything external to him is instantiated, upheld, and secured by his eternal power.

The potential energy was, and is, with God, and when God created our universe, in the first instant he focused and concentrated an enormous amount of the power or energy at his disposition into a single point, a singularity, which triggered the creation and stretching out of our universe. The temperatures, densities, and energies of the Universe would be unimaginably large and would coincide with the birth of time, matter, and space itself, and God subdued and ordained the energy to start obeying the laws of physics that he instantiated at the same time. We know that matter and energy are interchangeable. There had to be a connection between God and the Universe. God did not only create the universe; he sustains it permanently through his power and ordains the laws of physics to impose and secure that the universe works orderly, with stability, and in a predictable manner.

Intelligence vs no intelligence
How can we recognize the signature of (past) intelligent action? Contrasting and comparing "intended" versus "accidental" arrangements leads us to the notion of design. We have an extensive, experience-based understanding of the kinds of things that intelligent minds design to serve specific purposes and functions. We also know by experience the range of stochastic, accidental natural phenomena and causes, and what they can produce. Opponents commonly object that intelligent design makes no predictions. But since we already have background knowledge of what intelligent minds can do, we can attribute similar effects to similar causes. A physical system is composed of a specific arrangement of matter and parts: a machine, a car, a clock. When we describe it and quantify its size, structure, and motions, and annotate the materials used, that description contains information. When we arrange and distribute materials in a certain way for intended ends, we produce things for specific aims and call them designed. The question is, therefore: when we see things in nature that appear to have specific functions and give the impression of being designed, ARE they indeed the product of intentional design? Science has unraveled structures of mind-boggling organizational intricacy at the molecular level that leave us in awe, so sophisticated that our most advanced technology seems like kindergarten work by comparison. A proponent of naturalism has to posit that physical and chemical things emerged by chance and/or physical necessity, and biological systems by mindless evolutionary pressures. Organic structures like living cells present us with a degree of complexity that science has failed to explain stochastically, by unguided means.
No scientific experiment has come even close to synthesizing the basic building blocks of life and producing a self-replicating cell in the laboratory through self-assembly and autonomous organization. The total lack of any experimental evidence for the re-creation of life, let alone its spontaneous emergence, is a humiliating embarrassment to the proponents of naturalism and the whole so-called "scientific establishment" around it, because it undermines the worldview of those who want naturalism to be true. Everything we know tells us that machines, production lines, computers, energy-generating turbines, transistors, and interlinked factories are structures designed by an intelligence. The cooperation and interdependent action of proteins and co-factors in cells is stupendous and depends on very specifically controlled and arranged mechanisms, precise allosteric binding sites, and finely tuned forces. Accidents do not design machines. Intellect does. Intelligence leaves behind a characteristic signature. The action, or signature, of an intelligent designer can be detected when we see:

1. Implementing things conveying regular behavior, order, stability, and predictability.  The imposition of mathematical rules, laws, principles, and physical constants. 
2. Something purposefully and intentionally developed and made to accomplish specific goals. That specifically includes the generation of building blocks and energy, and the instantiation of instructional blueprints, floor plans, and complex specified information. If an arrangement of parts is (a) perceptible by a reasonable person as having a purpose and/or a function, and (b) can be used for the perceived purpose, then its purpose was designed by an intelligent mind.
3. Repeating a variety of complex actions with precision based on methods that obey instructions, governed by rules. 
4. An instructional complex blueprint (bauplan) or protocol to make objects (machines, factories, houses, cars, etc.) that are irreducibly complex, integrated, and interdependent: a system or artifact composed of several interlocked, well-matched, hierarchically arranged subsystems of parts, each contributing to a higher end of a complex system that would be useful only in the completion of that much larger system. The individual subsystems and parts are not self-sufficient, and their origin cannot be explained individually, since by themselves they would be useless. The cause must be intelligent and possess foresight, because the unity transcends every part and thus must have been conceived as an idea; by definition, only an idea can hold together elements without destroying or fusing their distinctness. An idea cannot exist without an intelligent thinker, so there must be an intelligent mind.
5. Artifacts that can be employed in different systems (a wheel is used in both cars and airplanes).
6. Things that are precisely adjusted and finely tuned to perform precise functions and purposes. 
7. Arrangement of materials and elements into details, colors, and forms to produce an object or work of art able to transmit a sense of beauty and elegance that pleases the aesthetic senses, especially sight. "I declare this world is so beautiful that I can hardly believe it exists."
8. Establishing a language, code, communication, and information transmission system, comprising: 1. a language; 2. the information (message) produced in that language; 3. an information storage mechanism (a hard disk, paper, etc.); 4. an information transmission system, that is, encoding, sending, and decoding; and eventually a fifth, sixth, and seventh (not essential): translation, conversion, and transduction.
9. Any scheme where instructional information governs, orchestrates, guides, and controls the performance of actions of constructing, creating, building, and operating. That includes operations and actions as adapting, choreographing, communicating, controlling product quality, coordinating, cutting, duplicating, elaborating strategies, engineering, error checking and detecting, and minimizing, expressing, fabricating, fine-tuning, foolproof checking, governing, guiding, implementing, information processing, interpreting, interconnecting, intermediating, instructing, logistic organizing, managing, monitoring, optimizing, orchestrating, organizing, positioning, monitoring and managing of quality, regulating, recruiting, recognizing, recycling, repairing, retrieving, shuttling, separating, self-destructing, selecting, signaling, stabilizing, storing, translating, transcribing, transmitting, transporting, waste managing. 
10. Designed objects exhibit "constrained optimization." The optimal or best-designed laptop computer is the one that best balances and compromises among multiple competing factors.
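Point 10, "constrained optimization", can be sketched as a toy calculation: the winning design maximizes a weighted balance of competing factors rather than any single factor. All the design names, scores, and weights below are invented purely for illustration, not taken from any real product comparison:

```python
# Toy "constrained optimization" sketch: the winning design is the one that
# best balances competing factors, not the one that maximizes any single one.
# All design names, scores, and weights here are invented for illustration.
designs = {
    "light-but-slow": {"speed": 2, "battery": 9, "weight": 9},
    "fast-but-heavy": {"speed": 9, "battery": 4, "weight": 2},
    "balanced":       {"speed": 7, "battery": 7, "weight": 6},
}
weights = {"speed": 0.4, "battery": 0.3, "weight": 0.3}

def score(attrs):
    """Weighted sum of the competing factors (higher is better)."""
    return sum(weights[k] * v for k, v in attrs.items())

best = max(designs, key=lambda name: score(designs[name]))
print(best)  # "balanced": the compromise beats both single-factor extremes
```

Note that neither extreme wins: the top score goes to the option that trades off all three factors at once, which is the sense of "constrained optimization" intended above.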

Now let's see whether we observe the signature of intelligent action in nature:

1. Paul Davies: The universe is governed by dependable, immutable, absolute, universal, mathematical laws of an unspecified origin.
Eugene Wigner: The mathematical underpinning of nature "is something bordering on the mysterious and there is no rational explanation for it."
Richard Feynman: Why nature is mathematical is a mystery...The fact that there are rules at all is a kind of miracle.
Albert Einstein: How can it be that mathematics, being, after all, a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?
Max Tegmark: Nature is clearly giving us hints that the universe is mathematical.
2. Basically, everything in biology is purposeful (in contrast to the structure of a rock, for example) and has a specific function. Examples abound, like co-factors and apo-proteins (lock and key). Cells are interlocked, irreducible factories where a myriad of proteins work together to self-sustain and perpetuate life: to replicate, reproduce, adapt, grow, remain organized, and store and use information to control metabolism, homeostasis, development, and change. A lifeless rock has no goal and no specific shape or form for a specific function; the forms of stones and mountains come in all manner of chaotic shapes, sizes, and physicochemical arrangements, and there is no goal-oriented interaction between one rock and another, no interlocking mechanical interaction.
3. A variety of biological events are performed in a repetitive manner, described in biomechanics, obeying complex biochemical and biomechanical signals. These include, for example, cell migration, cell motility, traction force generation, protrusion forces, stress transmission, mechanosensing and mechanotransduction, mechanochemical coupling in biomolecular motors, synthesis, sorting, storage, and transport of biomolecules
4. In living cells, information is encoded through at least 30 genetic and almost 30 epigenetic codes that form various sets of rules and languages. It is transmitted through a variety of means: the cell cilium as a center of communication, microRNAs influencing cell function, the nervous system, synaptic transmission, neuromuscular transmission, transmission between nerves and body cells, axons as wires, the transmission of electrical impulses by nerves between the brain and receptor/target cells, vesicles, exosomes, platelets, hormones, biophotons, biomagnetism, cytokines and chemokines, elaborate communication channels related to defense against microbial attack, and nuclei as modulators and amplifiers. These information transmission systems are essential for maintaining all biological functions: organismal growth and development, metabolism, regulating nutritional demands, controlling reproduction, homeostasis, constructing biological architecture, complexity, and form, controlling organismal adaptation, change, and regeneration/repair, and promoting survival.
5. Examples of convergent systems include echolocation in bats, oilbirds, and dolphins; the cephalopod eye structure, similar to the vertebrate eye; and the extraordinary similarity of the visual systems of the sand lance (a fish) and the chameleon (a reptile). Both the chameleon and the sand lance move their eyes independently of one another in a jerky manner, rather than in concert. Chameleons share their ballistic tongues with salamanders and sand lance fish. A variety of organisms, unrelated to each other, exhibit nearly identical convergent biological systems. This commonness makes little sense in light of evolutionary theory: if evolution is indeed responsible for the diversity of life, one would expect convergence to be extremely rare.
6. The Universe is wired in such a way that life in it is possible. That includes the fine-tuning of the Laws of physics, the physical constants, the initial conditions of the universe, the Big Bang, the cosmological constant, the subatomic particles, atoms, the force of gravity, Carbon nucleosynthesis, the basis of all life on earth, the Milky Way Galaxy, the Solar System, the sun, the earth, the moon, water, the electromagnetic spectrum, biochemistry. Hundreds of fine-tuning parameters are known. Even in biology, we find fine-tuning, like nucleobase isomeric arrangement that permits Watson-Crick base-pairing, cellular signaling pathways, photosynthesis, etc.
7. I doubt anyone would disagree with Ralph Waldo Emerson. Why should we expect beauty to emerge from randomness? If we are merely atoms in motion, the result of purely unguided processes, with no mind or thought behind us, then why should we expect to encounter beauty in the natural world, and to possess the ability to recognize beauty and distinguish it from ugliness? Beauty is a reasonable expectation if we are the product of design by a designer who appreciates beauty and the things that bring joy.
8. In the alphabet of the three-letter words found in cell biology are the organic bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Triplets of these bases make up the "dictionary" we call the genetic code of molecular biology. The code system enables genetic information to be codified and transmitted, which at the molecular level is conveyed through genes. Pelagibacter ubique, one of the smallest self-replicating free-living cells, has a genome size of 1.3 million base pairs and codes for about 1,300 proteins. The genetic information is sent through communication channels that permit encoding, sending, and decoding, performed by over 25 extremely complex molecular machine systems, which also carry out error checking and repair to maintain genetic stability, minimize replication, transcription, and translation errors, and permit organisms to pass genetic information accurately to their offspring and survive.
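The arithmetic behind this triplet "dictionary" is easy to verify: four bases read three at a time yield 4^3 = 64 possible codons, and each base can store at most log2(4) = 2 bits of raw information. A minimal sketch, using the Pelagibacter ubique genome size quoted above (the byte figure is raw storage capacity only, ignoring all biological coding details):

```python
# Counting the codon "dictionary" and the raw storage capacity of a small
# bacterial genome (genome size as quoted for Pelagibacter ubique above).
bases = "ACGT"                 # adenine, cytosine, guanine, thymine
codons = len(bases) ** 3       # triplet code: 4^3 = 64 possible three-letter words
print(codons)                  # 64

genome_bp = 1_300_000          # ~1.3 million base pairs
bits = genome_bp * 2           # each base stores log2(4) = 2 bits of raw capacity
print(bits // 8)               # 325000 bytes of raw capacity
```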
9. Science has unraveled that cells, strikingly, are cybernetic, ingeniously crafted cities full of interlinked factories. Cells contain information, which is stored in genes (books) and chromosomes (libraries). Cells have superb, fully automated information classification, storage, and retrieval programs (gene regulatory networks) that orchestrate strikingly precise and regulated gene expression. Cells also contain hardware, a masterful information-storage molecule (DNA); software more efficient than millions of alternatives (the genetic code); ingenious information encoding, transmission, and decoding machinery (RNA polymerase, mRNA, the ribosome); and highly robust signaling networks (hormones and signaling pathways), along with awe-inspiring error-checking and data-repair systems (for example, the mind-boggling endonuclease III, which error-checks and repairs DNA through electric scanning). These information systems prescribe, drive, direct, operate, and control interlinked, compartmentalized, self-replicating cell factory parks that perpetuate life and let it thrive. Large, high-tech, multimolecular, robot-like machines (proteins) and factory assembly lines of striking complexity (fatty acid synthase, non-ribosomal peptide synthase) are interconnected into large, functional metabolic networks. So that each protein is employed at the right place once synthesized, it is tagged with an amino acid sequence, and clever molecular taxis (the motor proteins dynein and kinesin, and transport vesicles) load and transport them to the right destination on awe-inspiring molecular highways (tubulins, actin filaments). All this, of course, requires energy. Responsible for energy generation are high-efficiency power turbines (ATP synthase), superb power-generating plants (mitochondria), and electric circuits (highly intricate metabolic networks). When something goes haywire, fantastic repair mechanisms stand ready:
there are protein-folding error-check and repair machines (chaperones); if molecules become non-functional, advanced recycling methods take over (endocytic recycling); and there are waste grinders and waste management (proteasome "garbage grinders").
10. Translation by the ribosome is a compromise between the opposing constraints of accuracy and speed.

Genesis or Darwin?
In the West, the tradition of the Judeo-Christian God revealed in the Bible shaped the worldview of many generations, and the greatest pioneers of science, like Kepler, Galileo, Newton, Boyle, and Maxwell, were Christians who firmly believed that a powerful creator instantiated the natural order. Butterfield puts it this way:

Until the end of the Middle Ages there was no distinction between theology and science. Knowledge was deduced from self-evident principles received from God, so science and theology were essentially the same fields. After the Middle Ages, the increasingly atheistic rejection of God by scientists led to the creation of materialist secular science in which scientists will continue to search for a natural explanation for a phenomenon based on the expectation that they will find one. 14

Naturalists hijack science by imposing philosophical naturalism
From 1860 to 1880, Thomas Huxley and members of a group called the "X Club" effectively hijacked science into a vehicle to promote materialism (the philosophy that everything we see is purely the result of natural processes apart from the action of any kind of god, and hence that science can allow only natural explanations). Huxley was a personal friend of Charles Darwin, who was more introverted, and Huxley aggressively fought the battle for him. Wikipedia has an interesting article worth reading titled "X Club." It reveals a lot about the attitudes, beliefs, and goals of this group. 8

The fact that science papers do not point to God does not mean that the evidence unraveled by science does not point to God. It only means that the philosophical framework of methodological naturalism that has surrounded science since its introduction in the 19th century is flawed, and should have been changed long ago with respect to historical science, which addresses questions of origins. Arbitrary a priori restrictions cause bad science, in which the evidence is not permitted to lead wherever it may. Proponents of design make only the limited claim that an act of intelligence is detectable in the organization of living things, and, using the very same methodology that materialists themselves use to identify an act of intelligence, design proponents have presented their evidence. In turn, their claim can be falsified with a single example of a dimensional semiotic system coming into existence without intelligence.

Sean Carroll writes:
Science should be interested in determining the truth, whatever that truth may be – natural, supernatural, or otherwise. The stance known as methodological naturalism, while deployed with the best of intentions by supporters of science, amounts to assuming part of the answer ahead of time. If finding truth is our goal, that is just about the biggest mistake we can make. 9

Scientific evidence is what we observe in nature. Understanding it, for example microbiological systems and processes, is the exercise and exploration of science. What we infer from observation, especially concerning the origin of a given phenomenon in nature, is philosophy, based on individual inductive and abductive reasoning. What looks like a compelling explanation to one person may not be compelling to someone else, who may infer the exact contrary. In short, the imposition of methodological naturalism is plainly question-begging, and thus an error of method. No one can know with absolute certainty that the design hypothesis is false. From this absence of absolute knowledge it follows that each person should be willing to accept at least the possibility that the design hypothesis is correct, however remote that possibility might seem. Once a person makes that concession, as every honest person must, the game is up.

In Genesis, all life forms were created, each creature according to its kind. It is stated as a fact. In 1837, Charles Darwin drew a simple tree in one of his notebooks, and above it he wrote: "I think."
What we see here is the difference between stating something as a fact and offering an uncertain, hypothetical conjecture. While Genesis, as the authoritative word of God, makes absolute claims, man, in his limitations, can only speculate, infer, and express what he thinks to be true based on circumstantial evidence. Darwin incorporated the idea in On the Origin of Species (1859), where he wrote:
The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. 10

Since then, science and biology textbooks have adopted Darwin's view of universal common descent, denying the veracity of the biblical Genesis account and replacing it with the evolutionary narrative to explain the diversity of life.
Creationist objections have often been rebutted by appeal to scientific consensus, claiming that evolution is a fact, while disregarding that the term "evolution" has to be defined before it is used. More than that, intelligent design has been branded pseudoscience and rejected by the scientific community.11

In nature, life comes only from life. That has never been disproven, and it should therefore, in our view, be the default position. In this book, we will show how eliminative induction refutes the origin and complexification of life by natural means, and how abductive reasoning to the best explanation leads to an intelligent designer as the best explanation of the origin and diversification of life. Over 70 years of origin-of-life research have led only to dead ends. Furthermore, every key branch of Darwin's tree of life is fraught with problems, starting with the root of the tree and continuing through every major transition: from the first to the last universal common ancestor, to the three domains of life, from unicellular to multicellular life. Wherever one looks, there are problems. "Nothing in biology makes sense except in the light of evolution" was Dobzhansky's famous dictum back in 1973. The author's view is that nothing in physics, chemistry, and biology makes sense except in the light of intelligent design by a superintelligent, powerful creator who made everything for his own purposes and his glory. Louis Pasteur famously stated: "Little science takes you away from God, but more of it takes you to him." The author agrees with Pasteur. For over 160 years, Darwin's theory of evolution has influenced and penetrated biological thinking, and it remains the dominant view of the history of life in academia and in general. Despite its popularity, the Bible, which disagrees with Darwin's view, is still believed to be true by a large percentage of the population in the United States. The Genesis account states that God created the universe and the world in literally six days, and each of its living kinds individually. The two accounts contradict each other and cannot both be true: if one is true, the other must be false. The dispute between them is an old one. Ultimately, each one of us has to find out individually what makes the most sense.
In this book, we use the approach of eliminative induction, Bayesian thinking, abductive reasoning, and inference to the best explanation to conclude that design tops naturalistic explanations like evolution when it comes to explaining the origin of life and biodiversity.

Intelligent design wins by eliminative induction, based on the conclusion that its competitors fail. Materialism explains basically nothing consistently with regard to origins, but rests on unwarranted consensus and scientific materialism, a philosophical framework that should never have been applied to the historical sciences. Evidence should be permitted to lead wherever it may. And, ultimately, an intelligent agency is the best explanation of origins.

And it wins by abductive reasoning, using inference to the best explanation and relying on positive evidence: basically all natural phenomena demonstrate the imprint and signature of intelligent input and setup. We see an unfolding plan; a universe governed by laws that follow mathematical principles, finely adjusted on all levels, from the Big Bang to stars, galaxies, and the earth, to permit life; and life itself governed by instructional complex information, stored in genes and epigenetically, which is encoded, transmitted, and decoded to build, control, and maintain molecular machines (proteins) built from integrated, functional, complex parts. These are literally nanorobots with internal communication systems, fully automated manufacturing production lines, transport carriers, turbines, transistors, computers, and factory parks, employed to give rise to a wide range of millions of species of unimaginably complex multicellular organisms. This book will focus on how the cell, the smallest unit of life, provides the most fascinating and illustrative evidence of design in nature, pointing to an intelligent designer.

Consensus in science
Nearly all (around 97%) of the scientific community accepts evolution as the dominant scientific theory of biological diversity. Atheists have used that figure in an attempt to bolster their worldview and reject intelligent design inferences. You can beat a scientific consensus with one fact, but you cannot convince an idiot about God's existence with a thousand facts. Much of the "settled science" that even geologists and other degree-holding scientists accept is not established fact; it is only the most widely accepted theory, and some actually ignore evidence that might support other and better inferences, because that evidence indicates something other than the consensus opinion on a subject. Never mind that almost all the most groundbreaking and world-changing scientific and mathematical breakthroughs, from Galileo to Newton, to Pasteur, Pascal, and Einstein, were made by people who rejected conventional wisdom or went well beyond what "everybody knows".

Jorge R. Barrio, writing in Consensus Science and the Peer Review (2009):
I recently reviewed a lecture on science, politics, and consensus that Michael Crichton—a physician, producer, and writer—gave at the California Institute of Technology in Pasadena, CA, USA on January 17, 2003. I was struck by the timeliness of its content. I am quite certain that most of us have been—in one way or another—exposed to the concept (and consequences) of “consensus science.” In fact, scientific reviewers of journal articles or grant applications—typically in biomedical research—may use the term (e.g., “....it is the consensus in the field...”) often as a justification for shutting down ideas not associated with their beliefs. It begins with Stump's appeal to authority. This is a common evolutionary argument, but the fact that a majority of scientists accept an idea means very little. Certainly, expert opinion is an important factor and needs to be considered, but the reasons for that consensus also need to be understood. The history of science is full of examples of new ideas that accurately described and explained natural phenomena, yet were summarily rejected by experts. Scientists are people with a range of nonscientific, as well as scientific influences. Social, career, and funding influences are easy to underestimate. There can be tremendous pressures on a scientist that has little to do with the evidence at hand. This certainly is true in evolutionary circles, where the pressure to conform is intense. 12

1. Ernst Haeckel: On the mechanical theory of life, and on spontaneous generation, 1871
2. Lindsay Brownell: Embracing cellular complexity, 17 July 2014
3. Bharath Ramsundar: The Ferocious Complexity Of The Cell, 2016
4. David Goodsell: Our Molecular Nature, 1996
5. John D. Barrow: Fitness of the Cosmos for Life, Biochemistry and Fine-Tuning
6. Kelly James Clark: Religion and the Sciences of Origins: Historical and Contemporary Discussions, 2014
7. Paul Davies: The Goldilocks Enigma: Why Is the Universe Just Right for Life?, April 29, 2008
8. Andreas Sommer: Materialism vs. Supernaturalism? "Scientific Naturalism" in Context, July 19, 2018
9. Sean Carroll: The Big Picture: On the Origins of Life, Meaning, and the Universe Itself, May 10, 2016
10. http://darwin-online.org.uk/content/frameset?itemID=F373&viewtype=image&pageseq=1
11. https://en.wikipedia.org/wiki/Intelligent_design
12. Jorge R. Barrio: Consensus Science and the Peer Review, 2009
13. https://www.amazon.com.br/Origins-Modern-Science-1300-1800/dp/0029050707
14. Prof. Dr. Ruth E. Blake: Special Issue, 15 February 2020
15. Kepa Ruiz-Mirazo: Prebiotic Systems Chemistry, 2014, https://pubs.acs.org/doi/10.1021/cr2004844



Last edited by Otangelo on Mon May 15, 2023 7:37 am; edited 111 times in total


Chapter 2 (posted Sun Aug 09, 2020)
Michael Crichton:
“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. Whenever you hear the consensus of scientists agrees on something or other, reach for your wallet because you're being had. Let's be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus. There is no such thing as consensus science. If it's consensus, it isn't science. If it's science, it isn't consensus. Period.” “I would remind you to notice where the claim of consensus is invoked. Consensus is invoked only in situations where the science is not solid enough. Nobody says the consensus of scientists agrees that E=mc2. Nobody says the consensus is that the sun is 93 million miles away. It would never occur to anyone to speak that way.” 13

The model of intelligent design makes predictions and is testable
Design can be tested using scientific logic. How? By the logic of mutual exclusion: design and non-design are mutually exclusive (it was one or the other), so we can use eliminative logic: if non-design is highly improbable, then design is highly probable. Thus, evidence against non-design (against the production of a feature by an undirected natural process) is evidence for design, and vice versa. The evaluative status of non-design (and thus of design) can be decreased or increased by observable empirical evidence, so a theory of design is empirically responsive and testable.
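The eliminative logic above can be illustrated with Bayes' rule, treating design and non-design as mutually exclusive and exhaustive hypotheses. The prior and likelihood numbers below are illustrative placeholders, not measured values; the sketch shows only the formal point that lowering P(evidence | non-design) necessarily raises P(design | evidence):

```python
# Toy Bayesian update over two mutually exclusive, exhaustive hypotheses
# (design vs. non-design). All numbers are illustrative placeholders.

def posterior(prior_design, p_e_given_design, p_e_given_nondesign):
    """P(design | evidence) by Bayes' rule, with P(non-design) = 1 - P(design)."""
    prior_nondesign = 1.0 - prior_design
    numerator = p_e_given_design * prior_design
    marginal = numerator + p_e_given_nondesign * prior_nondesign
    return numerator / marginal

# Neutral 50/50 prior; the evidence is judged far more probable under design.
p = posterior(0.5, 0.9, 0.001)
print(round(p, 4))  # 0.9989 -- as P(E | non-design) falls, P(design | E) rises
```

With equal priors and equal likelihoods the posterior stays at 0.5, so the update does no work unless the two hypotheses really do confer different probabilities on the evidence, which is exactly where the substantive debate lies.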

Observation: Intelligent agents act frequently with an end goal in mind, instantiating information storage devices like hard disks, and creating blueprints, instructional information, and codified descriptions of factories and machines. They also know how to instantiate information transmission machines, that encode, transmit, decode, or even translate that information, and subsequently, build factories that contain functional irreducibly complex machines made of multiple, integrated parts, and on top of that, assembly lines, where various machines are finely adjusted to each other, to produce useful end products, or intermediate products, that are later assembled in a system with higher functions. In our experience, such systems invariably originate from an intelligent source. No exception.

Hypothesis (Prediction): If structures in nature are found, that are analogous to hardware/software ( computers ) made by man, that direct the making of devices containing many parts arranged in intricate patterns, metabolic pathways similar to electronic circuits, and irreducible integrated systems and structures that perform specific functions, it is an indication that intelligence had to be present, instantiating these systems in the past.

Experiment: Scientific research has unraveled data storage mechanisms (genes); genetic and epigenetic codes and languages (the genetic code, and over 45 epigenetic languages); information transmission (encoding through RNA polymerase transcription, transmission through messenger RNA, and decoding/translation through ribosomes); the creation of molecular machines (proteins) and their ordering into metabolic circuits (analogous to production lines) and compartments (organelles); and irreducible chemical factories (cells). Cells could not maintain their basic functions without a minimum number of parts and complex, intertwined, interdependent structures. That indicates these biological machines and pathways had to emerge fully operational, all at once; a stepwise origin is not possible.

Conclusion: Unless someone can falsify the prediction and point out a non-intelligent source of high levels of instructional, complex, coded information, irreducibly complex and interdependent molecular systems, and complex metabolic circuits and biosynthesis pathways, their origin is best explained by the action of an intelligent agent. We do not need directly observed empirical evidence of the instantiation of these structures through an intelligent agency. What is observed (the evidence) points to the cause in the past. We can stick to inference to the best explanation.

What is life?
There are various definitions. The shortest version may be NASA's: the current working definition of life is “a self-sustaining chemical system capable of Darwinian evolution.” Another description: life is chemistry plus information.
A more comprehensive definition was given by Paul Davies in his book The Fifth Miracle: life is characterized by reproduction, metabolism, homeostasis, nutrition, complexity, organization, growth and development, information content, hardware/software entanglement, and permanence and change.1 To that, we would also add sensitivity and regulation. Of course, explaining the origin and emergence of all these features, processes, and properties constitutes a huge challenge. No wonder researchers in the field struggle to come up with a working hypothesis. Every single property alone raises a myriad of questions and demands an explanation.

The constraint of philosophical naturalism, and consensus science, leads to bad philosophical inferences
When opponents of creationism/intelligent design are asked how to explain the origin of natural phenomena, they are often caught in the act of citing a science paper that apparently conforms to their view but that they have neither read nor understood. Since science is grounded in philosophical materialism, they know there are no scientific papers that infer design. When asked to quote the relevant part of the paper, the part that supposedly provides convincing evidence that evolution is the best answer, they commonly do not answer, because they did not make the effort to analyze the proposed evidence carefully. That nicely shows their confirmation bias. They have already determined that evolution must be true, since it fits their preconceived and wished-for worldview, so all they do is try to fit everything they find into their naturalistic worldview, without carefully checking whether the evidence is compelling. Most scientific papers on evolution are perfect examples of how philosophical naturalism works and obliges the historical sciences in particular to wear blinkers. Since abiogenesis and evolution are the only possible naturalistic explanations for the origin of life and the biodiversity on earth, naturalism is presumed to be the answer right from the beginning. These papers start with evolution, end with evolution, and in the section of concluding remarks the philosophical inference is not rarely a high concentration of guesswork, ad hoc explanations, and fairy-tale stories.

In this unhinged rant, I lay out my accusation that most science-paper inferences are non sequiturs. Many are assertions that do not even have the decency to make their bad inferences implicitly and hidden; their authors are so certain that both the professional reader and the general public will swallow everything that they are totally unconcerned about the deceptive irrationality of the claims. I suspect that unwarranted assertions are made shamelessly, without concern that someone might protest and point the finger at them. If science writers and authors in any field other than biology did the same, they would be met with scorn and contempt. The “non-sequitur science author” does not care about what is true or false: the rhetorical goal is just to say whatever will accomplish the aim of having an explanation that conforms to the pre-established framework of philosophical naturalism. Abiogenesis, evolution, and naturalism must be true and are assumed a priori, and such authors do not much care whether what they are saying is plausible or not. It is this disregard for reality that becomes pernicious and corrupt. Much of the alienation is due to the will to keep jobs.

The science fathers were Christians
Augustine of Hippo (354 – 430), also known as Saint Augustine, a theologian and philosopher, wrote: “The very order, disposition, beauty, change and motion of the world and of all visible things silently proclaim that it could only have been made by God.” In the thirteenth century, Thomas Aquinas gave an argument from design in the “Fifth Way.” He wrote: We see that natural bodies work toward some goal, and do not do so by chance. Most natural things lack knowledge. But as an arrow reaches its target because it is directed by an archer, what lacks intelligence achieves goals by being directed by something intelligent. Therefore some intelligent being exists by whom all natural things are directed to their end, and this being we call God.

Who do you think coined the term scientist? It was William Whewell, an Anglican priest and theologian, who also invented the words physicist, cathode, anode, and many other scientific terms used today. Essentially, the very language used by scientists today was invented by a believer. 14

Here is a list of creationists who founded and established modern science. Due to space constraints, this list is kept short.
Galileo Galilei (1564 - 1642) Contributions: The law of falling bodies; Geometric and Military Compass; An Improved Telescope; The Case for Heliocentrism.
Johannes Kepler (1571-1630) Contributions: Physical Astronomy; Celestial Mechanics.
Blaise Pascal (1623-1662) Contributions: Hydrography; Barometer.
Robert Boyle (1627-1691) Contributions: Chemistry; Gas Dynamics.
Isaac Newton (1642-1727) Contributions: Calculus; Dynamics; Law of gravity; Reflecting telescope.
John Woodward (1665-1728) Contributions: Paleontology.
Carolus Linnaeus (1707-1778) Contributions: Systematic Biology; Classification System.
Leonhard Euler (1707-1783) Contributions: calculus, number theory, notation, optics, rational and fluid mechanics.
William Herschel (1738-1822) Contributions: Galactic Astronomy; Double stars.
Carl Friedrich Gauss (1777 - 1855) Contributions: Number theory, geometry, probability theory, geodesy, planetary astronomy, the theory of functions, and potential theory (including electromagnetism).
Michael Faraday (1791-1867) Contributions: Electro-Magnetics; Field Theory; Electronic Generator.
Charles Babbage (1792-1871) Contributions: Computer Science; Actuarial tables; Calculating machine.
James Joule (1818-1889) Contributions: Reversible Thermodynamics.
Gregor Mendel (1822-1884) Contributions: Genetics.
Louis Pasteur (1822-1895) Contributions: Pasteurization; Bacteriology; Biogenesis Law; Vaccination & Immunization; Fermentation Control.
Lord Kelvin (1824-1907) Contributions: Energetics; Thermodynamics; Absolute temperature scale; Trans-Atlantic Cable.
James Clerk Maxwell (1831-1879) Contributions: Statistical Thermodynamics; Electrodynamics.
Orville Wright (1871 - 1948) Contributions: Invented aviation (powered flight; first airplane); made improvements on his own invention.
Wernher von Braun (1912 - 1977) Contributions: Rocket Science; Space Exploration; Trip to the Moon; Moon landings.

Paley's watchmaker argument 2.0
In 1802, William Paley, a clergyman, apologist, and philosopher, presented the famous watchmaker analogy in his book Natural Theology: or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature. It became a classic of teleology, the argument from design. It was intended as an analogy related to the universe. In the author's opinion, however, it can be expanded to serve as the basis for teleological arguments in general, applied to physics, chemistry, biochemistry, and biology.

Paley wrote:
In crossing a heath, suppose I pitched my foot against a stone and were asked how the stone came to be there, I might possibly answer, that, for anything I knew to the contrary, it had lain there forever: nor would it perhaps be very easy to shew the absurdity of this answer. But suppose I had found a watch* upon the ground, and it should be inquired how the watch happened to be in that place, I should hardly think of the answer which I had before given, that, for anything I knew, the watch might have always been there. Yet why should not this answer serve for the watch, as well as for the stone? Why is it not as admissible in the second case, as in the first? For this reason, and for no other, viz. that, when we come to inspect the watch, we perceive (what we could not discover in the stone) that its several parts are framed and put together for a purpose, e.g. that they are so formed and adjusted as to produce motion, and that motion so regulated as to point out the hour of the day; that, if the several parts had been differently shaped from what they are, of a different size from what they are, or placed after any other manner, or in any other order, than that in which they are placed, either no motion at all would have been carried on in the machine, or none which would have answered the use, that is now served by it. 11,

Without knowing biology as we do today, Paley made an observation that is spot on and remains significant and correct when applied to molecular biology today: if the parts of a complex system were wrongly shaped, of inadequate size, placed in the wrong manner, or arranged in any order other than the functional one, most probably no motion or (intended) function would result. That applies to biological systems and the inner workings of cells. Each of these four points must be right, or no biological function is granted. Paley's argument can well be regarded as a precursor of Behe's argument of irreducible complexity, to which we will come later. We see the factory maker argument as an advance on Paley's watchmaker argument in light of the advance of science; a 2.0 of Paley's watchmaker, so to say. Two of the main tenets of intelligent design are the argument from specified complexity, popularized by Dembski, and Behe's argument from irreducible complexity. Combined, we have Paley's argument 2.0: specified complexity observed in genes is encoded, transmitted, decoded, and translated through the language of the genetic code, to dictate and direct the making, operation, and regulation of irreducibly complex molecular machines, robotic production lines, and interlinked chemical nano-factory parks.

Chapter 2

Living Cells are chemical factories
There is a common scheme to all life: information is transferred where needed and directs the cell's activities through the encoding, sending, decoding, and expressing of information. Almost everything is information-driven and depends on it. The culmination of our investigation is the discovery of a common scheme at the core of biology, related to the central dogma of molecular biology:

Life is based on enormously complex engineering principles where information manages, choreographs, orchestrates, and directs the making and employment of incredibly efficient nanomachines and chemical production lines, working with extraordinary efficiency and operating very close to thermodynamic perfection, using energy made by almost 100% efficient molecular energy turbines.

Cells have a codified description of themselves in digital form stored in genes and have the machinery to transform that blueprint, through information transfer from genotype to phenotype, into an identical representation in analog 3D form, the physical 'reality' of that description. Using Bayesian probability, or abductive reasoning, an intelligent cause is the best explanation.

To suggest that a physical, non-designed process can create instructional assembly information, a recipe, or a blueprint is like suggesting that throwing ink on paper will create a blueprint. It is never going to happen. On top of that, believing that information-transmission networks that encode, transmit, and decode that information will somehow emerge by chance, and that subsequently, with energy added, non-intelligent mechanisms will direct the assembly of complex machines, interconnect them, and produce a self-replicating factory through that information, is extremely unlikely. There might be a chance, you might say. If one won the lottery 1,000 times in a row, there might, yes, statistically be a chance, but it is so unlikely that it makes more sense to believe that someone cheated.
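The scale of the lottery illustration can be made concrete with a little arithmetic. The per-draw jackpot odds used below, roughly 1 in 300 million, are an assumption chosen only because they are on the order of large national lotteries; the point is the size of the exponent, not the exact figure:

```python
from math import log10

# Assumed single-draw jackpot odds (~1 in 300 million, an illustrative figure).
p_single = 1 / 300_000_000

# Probability of winning 1,000 consecutive independent draws.
# The raw product underflows ordinary floating point to 0.0,
# so we compute its base-10 logarithm instead.
log_p_1000 = 1000 * log10(p_single)

print(f"log10 of the probability: {log_p_1000:.0f}")
```

Since log10(300,000,000) is about 8.48, a thousand consecutive wins has probability around 1 in 10^8477, a number with thousands of digits, which is why the cheating hypothesis is the rational inference in the lottery case.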

Someone, presented with the argument, once said that it signals the death knell of atheism. According to Merriam-Webster, a death knell is an action or event presaging death or destruction.


1. Living cells are information-driven factories. They store very complex epigenetic and genetic information through the genetic code, over forty epigenetic languages, translation systems, and signaling networks. These information systems prescribe and instruct the making and operation of cells and multicellular organisms. The information drives the operation in a manner analogous to how software in a computer drives computer hardware; cells ARE computers in a literal sense, using Boolean logic, and their operation is close to thermodynamic perfection. Each cell hosts millions of interconnected molecular machines, production lines, and factories analogous to factories made by man. They are of unparalleled, gigantic complexity, able to constantly process a stream of data from the outside world through signaling networks. Cells operate robot-like, autonomously. They adapt production and recycle molecules on demand. The process of self-replication is the epitome of manufacturing advance and sophistication.
2. Humans routinely create blueprints containing instructional assembly information, and fabricate complex machines and interlinked factories based on these instructions, which produce goods for specific purposes.
3. Since the manufacturing process of biological cells is analogous, having a codified description of themselves in digital form stored in genes and using their machinery to transform that blueprint, through information transfer, into an identical representation in analog 3D form, the physical 'reality' of that description, that process is best explained by the action of an intelligent designer, who created life for his own purposes, for his own glory.
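The claim above that cells compute with Boolean logic can be illustrated with a standard systems-biology tool, a Boolean network model of gene regulation. The genes and update rules below are hypothetical, invented purely to show the technique; real models derive their rules from experimental data:

```python
# Minimal Boolean network sketch of gene regulation.
# Each gene is ON (True) or OFF (False); its next state is a Boolean
# function of the current states of its regulators.
# Genes A, B, C and their wiring are hypothetical illustrations.

def step(state):
    """Apply all update rules synchronously; return the next network state."""
    return {
        "A": state["C"],                     # C activates A
        "B": state["A"] and not state["C"],  # A activates B unless C represses it
        "C": state["A"] or state["B"],       # either A or B activates C
    }

state = {"A": True, "B": False, "C": False}
for _ in range(4):
    state = step(state)
    print(state)
```

Iterating the rules, the network settles into a fixed point (an "attractor"), loosely analogous to a stable gene-expression pattern; this is the sense in which Boolean logic gates are used to model regulatory circuits.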

If there is no creator, then all of physical reality, our universe, governed by physical laws and adjusted to host life with unfathomable precision, life itself, conscious beings like us, and advanced civilization, is the most astounding miracle ever: a colossal, universal accident. What are the odds? That's like looking at an AI robot and concluding that all that metal and plastic first formed spontaneously into functional subparts, and that a program coming from nowhere then directed the entire assembly, and it all jumped together to make an AI robot. Ha!!

Argument from analogy
John Frederick William Herschel, mathematician, astronomer, chemist, inventor (1830): If the analogy of two phenomena be very close and striking, while, at the same time, the cause of one is very obvious, it becomes scarcely possible to refuse to admit the action of an analogous cause in the other, though not so obvious in itself. 5

A metaphor (“a biological cell is like a production system”) suggests that similar behaviors are driven by similar causal mechanisms. We can build a bridge and connect the dots: there is an analogous situation in our world.

Hume: Look round the world: contemplate the whole and every part of it: you will find it to be nothing but one great machine, subdivided into an infinite number of lesser machines, which again admit of subdivisions to a degree beyond what human senses and faculties can trace and explain. All these various machines, and even their most minute parts, are adjusted to each other with an accuracy which ravishes into admiration all men who have ever contemplated them. The curious adapting of means to ends, throughout all nature, resembles exactly, though it much exceeds, the productions of human contrivance; of human designs, thought, wisdom, and intelligence. Since, therefore, the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble; and that the Author of Nature is somewhat similar to the mind of man, though possessed of much larger faculties, proportioned to the grandeur of the work which he has executed. By this argument a posteriori, and by this argument alone, do we prove at once the existence of a Deity, and his similarity to human mind and intelligence. 13

We, as humans, are equipped with intelligence and use it to think and conceptualize how to facilitate our life and comfort, and we create artifacts and technology for specific purposes all the time. It starts as a concept in our mind; we materialize the concept in the form of a blueprint, a drawing with all the minute details, adding a precise description with sizes and dimensions and specifying the materials to be employed. Once this is done, the blueprint goes to the factory, where the factory workers read the information and transform the plan into the 3D reality of the description. There is an equivalent to this at the microscopic level in biological systems. Note how intelligence is involved from beginning to end in the entire human manufacturing chain. In all intermediate steps, workers with intelligence are involved in the process. The blueprint requires a medium to be stored on; it can be paper, a hard disk, etc. The making of an information-storage device always requires intelligence. Then there has to be a language using codes to store the information in that medium. The invention of a language also depends on conscious, intelligent agents. Creating a system of information transfer, that is, encoding, sending, and decoding, also requires an intelligent agent with know-how. Receiving codified information and transforming it into the "real" thing, the 3D object, from digital to analog, also depends on conscious intellect. A high quantity of brainpower and IQ is involved in the entire process. A further step that adds complexity is when the information of the blueprint is stored in one language but the receiver speaks another, and it has to be translated. The making of a translation book depends on a translator who understands both languages, and eventually even different scripts, for example the Latin alphabet and kanji.
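The encode/transmit/decode/translate chain described above can be sketched in miniature. The snippet below uses a handful of real codon assignments from the standard genetic code (AUG = Met, UUU = Phe, AAA = Lys, UGG = Trp, UAA = stop), but it is only a toy illustration of the information flow, not a model of the actual cellular machinery:

```python
# Toy sketch of the information chain: a stored digital description (DNA)
# is transcribed into a messenger (mRNA), then translated (decoded)
# into the final product, using a tiny subset of the real genetic code.

CODON_TABLE = {  # a few real entries from the standard genetic code
    "AUG": "Met", "UUU": "Phe", "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def transcribe(dna):
    """DNA -> mRNA: simplified here to replacing T with U (coding-strand convention)."""
    return dna.replace("T", "U")

def translate(mrna):
    """mRNA -> protein: read codons three letters at a time until a STOP codon."""
    protein = []
    for i in range(0, len(mrna), 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

dna = "ATGTTTAAATGGTAA"             # gene encoding Met-Phe-Lys-Trp, then stop
print(translate(transcribe(dna)))   # -> Met-Phe-Lys-Trp
```

The same stored sequence passes through two distinct code systems (DNA to mRNA, mRNA to amino acids), which is the digital-to-analog, language-to-language transfer the paragraph describes.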

Are cells factories in a literal sense, or just metaphorically?
A common objection is that cells are not factories in a literal sense. The word factory comes from the Latin fabricare, which means to make, produce, manufacture. A factory or manufacturing plant is a site, usually consisting of buildings and machinery, or more commonly a complex of several buildings, where, in fully automated factories, for example, pre-programmed robots manufacture goods or operate machines processing one product into another. And that's PRECISELY what cells do. They produce other cells through self-replication, complex machine processing, computing, etc. They produce all organelles, proteins, membranes, and parts; they make a copy of themselves. Self-replication is a marvel of engineering, the most advanced method of manufacturing, and fully automated: no external help is required. Cells are not just equivalent to human-made factories. Cells are better described as an entire high-tech industrial park, an entire city of chemical factories, smart in the sense that they are fully interconnected and flexible. Cells use real-time production based on inventory data, leveraging it to adapt to a changing environment. They even change their surrounding environment to better fit their needs; that's called niche construction. Cells self-optimize performance based on input from the signaling network, self-adapt to and learn from new conditions in real or near-real time, and autonomously run entire production processes.
Cells employ far more sophisticated and complex manufacturing procedures than any human-made factory, producing the basic building blocks of life in the right quantities, strictly regulated and controlled, with sophisticated recycling mechanisms, and using these basic building blocks to make ultracomplex molecular machines, which perform all kinds of essential tasks and are interlinked into veritable production lines, producing all kinds of products necessary for maintaining all life-essential functions: reproduction, metabolism, nutrition, growth, development, permanence and change. On top of that, they self-replicate, which is the epitome of manufacturing advances and achievements, far from being realized by man-made factories. Furthermore, in contrast to human-made factories, cells use a reduced toolbox: basically just four classes of building blocks, nucleotides, amino acids, phospholipids, and carbohydrates. Compare that to the myriads of different building blocks that we usually use. Cells can contain over 2 billion individual molecular machines (proteins, in human cells), analogous to human-made factories, so we can draw an analogy. Cells contain literally billions of machines, not just machine-like things or things similar in some distant way. Cells are the MOST ADVANCED factory complexes full of machines in the universe, far more complex than ANY man-made factory. There is a vast scientific literature, including science papers and books, that describes cells as factories in a literal sense.

USP Brazil: The cell membrane separates the interior of all cells from the outside environment. That's the exterior factory wall that protects the factory. The nucleus is the Chief Executive Officer (CEO): it determines what proteins will be made and controls all cell activity. Plasma-membrane gates regulate what enters and leaves the cell, where cells make contact with the external environment. That's the shipping/receiving department; it also functions as the communications department, because it is where the cell contacts the external environment. The cytoplasm includes everything between the cell membrane and the nucleus. It contains various kinds of cell structures and is the site of most cell activity. The cytoplasm is similar to the factory floor, where most of the products are assembled, finished, and shipped. Mitochondria/chloroplasts: the power plant; they transform one form of energy into another. Mitochondrial membranes keep protein assembly lines together for efficient energy production. Membrane-enclosed vesicles form packages for cargo so that it may quickly and efficiently reach its destination. Internal membranes divide the cell into specialized compartments, each carrying out a specific function inside the cell. Those are the compartments of a manufacturing facility. The endoplasmic reticulum (ER) is the compartment where the assembly lines reside (where workers do their work). The Golgi apparatus: what happens to all the products that are built on the assembly line of a factory? The final touches are put on them in the finishing and packing department. Workers in this part of the plant are responsible for making minor adjustments to the finished products. Ribosomes build the proteins, equal to the workers on the assembly line. Signal-recognition particles (SRP) and signal receptors provide a variety of instructions informing the cell as to what destination and pathway the protein must follow.
That's the address on the parcel indicating where it has to be delivered. Kinesin motors: these are the cargo carriers in the cell. Those are the forklifts in a factory. Microtubules: they provide platforms for intracellular transport, amongst other things. Those are the internal factory highways. Lysosomes are capable of breaking down virtually all kinds of biomolecules, including proteins, nucleic acids, carbohydrates, lipids, and cellular debris. That's the maintenance crew: it gets rid of the trash and dismantles and disposes of the outmoded machinery. Hormones permit communication between cells. That's cell-phone-to-cell-phone communication. 2

Cells operate based on cleverly implemented high-performance manufacturing principles that have a lot in common with human designs. All cells operate on a small set of four basic classes of building blocks, which they themselves synthesize and which are frequently recycled: it is less energetically expensive to recycle them than to synthesize them from scratch. Proteins are constantly monitored, removed and degraded when needed, and subsequently replaced. Cells react with high sensitivity to the external milieu and are incredibly efficient at making products with high responsiveness, output, speed, and fidelity, low error rates, and adaptive flexibility.

Kumar Selvarajoo: Production Equipment Is Added, Removed, or Renewed Instantly (22 October 2008)
The capacity of the cell’s pathways can be adjusted almost immediately if the demand for its products changes. If the current capacity of a pathway is insufficient to meet demand, additional enzymes are “expressed” to generate more capacity within a certain range. Once the demand goes down, these enzymes are broken down again into their basic amino acids. This avoids waste as the released amino acids are then used for the synthesis of new proteins. At any moment, synthesis and breakdown for each enzyme happen in the cell. The constant renewal eliminates the need for other types of “machine maintenance.” Assembly and disassembly of the cell’s machines are so fast and frictionless that they allow a scheme of constant machine renewal. In some industrial manufacturing settings, we are also witnessing signs of the emergence of flexible capacity. Some of these companies do not repair their manufacturing equipment, but have it replaced. Take, for example, a contract manufacturer in Singapore that provides semiconductor assembly and test services for INTEL, AMD, and others. Its manufacturing equipment includes die bonders, wire bonders, and encapsulation and test equipment, all organized in pools. As soon as one machine goes down, the managers work with the equipment supplier to make a one-to-one replacement. All this goes very rapidly indeed. This policy makes sense because the low cost of a machine compared to the cost of downtime makes it economically feasible to have a couple of machines idle in the somewhat longer repair cycle. One can imagine this practice spreading as manufacturing equipment becomes more standardized and less expensive, and as the cost of a capacity shortage increases. In this scenario, machines are still repaired, although at the supplier site rather than on the manufacturing floor. The cell has pushed this principle even further. First, it does not even wait until the machine fails, but replaces it long before it has a chance to break down. 
And second, it completely recycles the machine that is taken out of production. The components derived from this recycling process can be used not only to create other machines of the same type, but also to create different machines if that is what is needed in the “plant.” This way of handling its machines has some clear advantages for the cell. New capacity can be installed quickly to meet current demand. At the same time, there are never idle machines around taking up space or hogging important building blocks. Maintenance is a positive “side effect” of the continuous machine renewal process, thereby guaranteeing the quality of output. Finally, the ability to quickly build new production lines from scratch has allowed the cell to take advantage of a big library of contingency plans in its DNA that allow it to quickly react to a wide range of circumstances.
https://ink.library.smu.edu.sg/lkcsb_research/1061/

Cell Metabolism as a production line system
Cells perform thousands of different metabolic reactions simultaneously; E. coli bacteria run 1,000–1,500 biochemical reactions at once. Just as in manufacturing, production-line-like operations in cells, where the operators are robot-like enzymes, transform the basic building blocks into final products like proteins, cell walls, organelles, and energy turbines. These pathways respond to environmental changes, control output, use feedback loops, and speed up or slow down depending on environmental pressures. Adaptation through quick reactions to external signals is a life-essential requirement for all life forms and had to be fully set up when life started. These advanced technological solutions are being copied by man; biomimetics is a fast-growing field.
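The feedback-loop behavior just described, pathways that speed up or slow down with demand, is commonly modeled as negative feedback (end-product inhibition). The rate constants below are hypothetical, chosen only to illustrate how such a loop holds output near a set point instead of letting it grow without bound:

```python
# Hypothetical negative-feedback sketch: the production rate falls as the
# product accumulates (end-product inhibition), while the product is also
# degraded, so its level settles at a steady state.

def simulate(steps=200, k_prod=1.0, k_inh=0.1, k_deg=0.05):
    """Iterate a simple discrete-time feedback loop; return product levels."""
    product = 0.0
    levels = []
    for _ in range(steps):
        production = k_prod / (1.0 + k_inh * product)  # inhibited by product
        product += production - k_deg * product        # production minus degradation
        levels.append(product)
    return levels

levels = simulate()
print(f"final level: {levels[-1]:.2f}")
```

With these made-up constants the steady state solves k_prod / (1 + k_inh * P) = k_deg * P, giving P = 10, and the simulation converges to that value; a pathway without the inhibition term would instead accumulate product indefinitely.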

Rebecca McClellan (2021): Stanford researchers gain insight into how cells avoid assembly-line mistakes. What she wrote could hardly have been written better by any intelligent-design science writer:

Molecular assembly lines maintain their precise control while shepherding growing molecules through a complex, multi-step construction process. Every cell is a master builder, able to craft useful and structurally complex molecules, time and again and with astonishingly few mistakes. There are thousands of these assembly lines in nature, and they all make unique compounds. Cells, for example, synthesize polyketides through molecular assembly lines called synthases. Each synthase contains anywhere between three and 30 “modules,” groups of active proteins, or enzymes, organized sequentially. Each module is a station in the assembly line that is responsible for adding a piece to a growing molecular chain and then installing chemical modifications to that unit. Passing from module to module, a polyketide grows in size and complexity until it eventually rolls off the conveyor belt in its final form. This assembly line, like others, always manages to push the growing molecule in the right direction, a feat that the laws of thermodynamics can’t fully explain. The assembly line looks like a BMW plant. These are amazingly complex molecular machines. There are so many components that have to come together at the right place and the right time, in a highly orchestrated way. Each module is made up of a pair of enzymes, each of which has a molecular arm that extends out from the module’s sides. It was widely thought that these arms mirror one another in their poses. One arm extends out while the second arm is flexed downward. The structure is the module in action, and the bent arm could be the key to the assembly line’s directionality. Each module can only work on two molecules at a time. It’s a “turnstile” mechanism, with each module closing itself off to incoming chains until it releases the one it’s working on.
This flexed arm acts as the arm of the turnstile. The turnstile arm appears to have two jobs. First, it acts as a gatekeeper and physically blocks incoming molecules from entering while one is being processed. Second, the contortion of the enzyme into that asymmetric pose requires energy, which gets stored in the flex of the arm. The relaxation of the arm back to its “normal” state, which releases the pent-up energy, helps propel the molecule under construction to the next stage of the assembly line. These enzymes are capturing energy in these amazing contortions, and they use that energy to power something else. 10

Isn't that an amazing example of ingenuity at its finest, at a molecular level? We see here assembly lines performing coordinated, machine-like operations in sequential order, depending on several modules operating together as a joint venture. Only intelligence invents machines and assembly lines. It has never been demonstrated otherwise.

The following would make a good sci-fi movie.
If we could create a high-tech reality of computerized robots and factories to supply all human needs, we could create a society where machines do all the work for us, leaving us time only to entertain ourselves, with almost no need to work and with financial transactions reduced to a bare minimum. Imagine we had a fully developed symbiotic relationship with Artificial Intelligence robots in constant direct telecommunication with our brains through an implanted Brain-Computer Interface (BCI). Eyeglasses would give us access to metaverses in 3D virtual space. Advanced robotic communication systems could constantly read our thoughts, desires, and needs, and take action accordingly. Suppose I needed to buy a domestic product, a hair dryer, for example. I would select the product in the metaverse, and AI would immediately discover and select the cheapest offer and the closest store nearby to deliver the product. I would buy the product on Amazon, and the delivery process would be fully automated through robots. There would be a fully automated production of crops, fruits, and vegetables: from making the fertilizers, transporting them to the land, seeding and planting crops and trees, to collecting, storing, and transporting the harvest for beneficiation in storage houses and factories that make end products for consumption, all fully automated through high-tech robots and machines. The same would hold for raising animals for food consumption. This machinery would also be self-sustaining, without external help, over long periods of time. Imagine a world where all our basic needs were supplied by robotic butlers. Advanced technology would constantly monitor our health, check fitness, detect diseases, and instantly recruit whatever was needed, all kinds of drugs, to cure and fix the problem.
For example, if my thought were to watch a movie, the system could turn on all audio-visual systems, and we would just have to sit on the sofa and entertain ourselves.

Our AI robots would be able to read our thoughts about the food that our body needs, and our desires and tastes, synthesize the optimal meal, and cook it. They would know where to find the products in the kitchen, and other robots would constantly monitor the supply chain; whenever any kind of food was missing, they would immediately put the supply chain into action to re-establish what is needed. We would live in houses that are constantly kept at an optimum of energy supply, water supply, and internal temperature. We would never need to turn anything on or off. The machines would do it all for us, based on their advanced knowledge of what we need. Other robots would be employed to remove waste from the house, and others to keep everything clean. Self-driving cars and airplanes would take us wherever we liked; they would be autonomous in their energy supply and would repair themselves. An advanced web of information exchange would have to be implemented, and a huge amount of information would be needed to instantiate such an advanced high-tech reality: information communication systems, and all the information to drive the robots. There would be fully autonomous factories producing autonomous robots and machines. And if a robot wreaked havoc for any reason, detection mechanisms would immediately find out what went wrong, and other robots would fix the problem. All the information contained in all the books in our world, and all extant internet communication networks and systems, would be just a fraction of the quantity needed to instantiate such a futuristic world.

But such a fully automated world already exists, on the molecular level. There, a microscopic world of sublime sophistication operates without any external or internal intelligence being involved in any of the processes. Its systems operate fully autonomously, without external direction or help. In the same sense that we as human beings require many things from the external environment to live, so do individual cells. We rely on external machines made by us, by smart engineers, to do many things for us, and these machines to a certain degree always also require an intelligent operator to run them. In contrast, cells are fully preprogrammed to do whatever they need to survive, without depending on external direction or intelligence. While in the macroworld of humans everything relies on human intelligence for its various processes to operate and be attended to, in the microworld all processes occur fully autonomously, with external intelligence absent. Everything is already preprogrammed to operate with maximal independence. The majority of cells in multicellular organisms operate in constant exchange and communication, where one cell supplies the needs of another. It is a web of interdependence; skin cells, for example, sense the outside world and adapt accordingly. While in any human factory many intelligent minds operate as nodes in communication with other minds, in a weblike fashion, where intelligence is involved at every level to achieve predetermined tasks and goals, in the cell everything is preprogrammed, and no mental intelligence is involved in any process at all. We are far from creating a society where all our needs are supplied by preprogrammed robots; perhaps hundreds of years of the brainpower of the smartest engineers of all sorts would be required to get closer to creating such an advanced civilization.
If human factories could evolve to subsequently produce better, more adapted products, that would add even further complexity and point to even more requirements of pre-programming to get the feat done. While the scientific consensus has been that evolutionary processes of adaptation are due to a non-designed selection process, the evidence actually points to a designed set-up: adaptation is pre-programmed.

Bruce Alberts (2017):  
The surface of our planet is populated by living things—curious, intricately organized chemical factories that take in matter from their surroundings and use these raw materials to generate copies of themselves. 6

B C Currell  et al. : The Molecular Fabric of Cells (1991):  
The central theme of both of these texts is to consider cells as biological factories. Cells are, indeed, outstanding factories. Each cell type takes in its own set of chemicals and makes its own collection of products. The range of products is quite remarkable and encompasses chemically simple compounds such as ethanol and carbon dioxide as well as extremely complex proteins, carbohydrates, lipids, nucleic acids, and secondary products. 7

Denton: Evolution: A Theory in Crisis (1986):
"We would see [in cells] that nearly every feature of our own advanced machines had its analog in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late-twentieth-century technology.
   “What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory that would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours. To witness such an act at a magnification of one thousand million times would be an awe-inspiring spectacle.” 8

Understanding how a factory operates requires knowledge of the tools and equipment available within the factory and how these tools are organized. We might anticipate that our biological factories will be composed of structural and functional elements.

Cells are full of robotic assembly lines: evolved, or created?
Industries experienced unprecedented development in the early 20th century, evolving their production and fabrication skills and capacities. Before the industrial revolution, things were made one at a time and were generally unique; no two items were exactly identical. Mechanical devices and their parts were made individually, by hand, in small craft shops. Today, in contrast, we have automated, high-output production capabilities in modern factories for the mass market. Early craftsmen worked without the benefit of substantial mechanization, and without it, making identical, or nearly identical, items is actually more difficult than making each one different. Craft production, however, has severe drawbacks. Some items benefit from being made to a standardized pattern. Wagons, for instance, benefit from using a standard track width so that they can all fit into the same set of ruts, and all of the arrows used with a particular bow must be as identical as possible in order to ensure maximum accuracy.
Standardization was probably one of the greatest innovations, and it helped the car industry in particular to advance considerably in a short period of time. The drawback of this process was that human workers had to do the assembly job, which means that each building step along the assembly line required intelligent human presence and intervention, from the command signal of the human brain to the physical transformation through manual work, in order to manufacture the required part and join it into the greater assemblage. All along the whole process, the formation of new information in the brain was required for each step of the task. Errors due to lapses of concentration were frequent, high fidelity of copies was not achievable, and manufacturing tolerances had to be loose.
With the advance of methods of mass manufacturing, factory workers started to be replaced with machines. Arms, for example, were made one by one, and when they broke, replacement parts were not readily available and could not easily be fitted. Craft-made items are more difficult to fix than standardized ones: if part of a craft-made item breaks, a new one must be fabricated to the same tolerances as the old part, while standardized parts are interchangeable by design. That changed with the concept of interchangeable parts, an innovation credited to Eli Whitney, an American inventor (1765–1825). The concept first took hold in the firearms industry, which started to produce identical parts for guns, which took less time and consequently reduced costs. Standardizing production also made it easier to fix a device once a part was broken. There was an evolution towards more advanced building techniques using standardized parts, which made the assembly process faster, more accurate, more precise, and cheaper, and the end product more reliable, durable, secure, and easier to fix. Observe carefully how arriving at this point required huge efforts of brainpower and the inventive capacity of many brilliant and skilled specialists: design innovation was achieved through intelligent minds. It was a gradual evolution towards more advanced fabrication processes, requiring time; many ideas did not stick and were discarded as not useful, some eventually even harmful, all requiring and coming from many inventors, engineers, and scientists.
 Mass production has many obvious advantages. When fully developed, it is much cheaper than craft production. Machines don't tire or get bored as human workers do, and in many cases, they can perform their functions hundreds or thousands of times faster than any human laborer. They churn out identical parts and products so that repairs can often be as simple as taking out a worn or broken part and putting in a new one- much cheaper than having to make the new part from scratch. Their products are also of much more uniform quality so that buyers can have much greater confidence that their purchase will perform as expected. Another, often underappreciated feature of mass production is that it allows for more thorough engineering. When each product is made one-off, it doesn't make much sense to pour huge amounts of effort into designing it to be perfect. With mass production, though, engineering costs can be spread over thousands or millions of units, which means that it can be cost-effective to incorporate some very sophisticated engineering designs.

The Cell factory maker, Paley's watchmaker argument 2.0 Abioge14
The modern assembly line and its basic concept are credited to Ransom Olds, who built the first mass-produced automobile, the Oldsmobile Curved Dash, beginning in 1901.

Olds's production line permitted production to increase fivefold, to a high rate of 20 cars daily. The car had a low price, was simple to assemble, and had nice features. Olds's assembly line was later copied by Ford, who made his own. Looking for ways to lower the cost of producing cars, Henry Ford built the first factory with a moving assembly line at the Highland Park Plant in Detroit, Michigan, in 1913. It cut the time to assemble a Ford Model T from twelve hours to six. This was already a big step forward in regard to quality control and fidelity to the source (copies faithful to the original standard). The invention of the assembly line was a further huge step in the direction of economy of time and cost, and of the capacity for mass production. Again: the assembly line came as a result of intensive research efforts, being the invention of highly trained, experienced, and intelligent craftsmen, inventors, engineers, and scientists, who spent huge amounts of time on experiments and on the refinement of an initially rudimentary idea. Assembling parts on a production line saves energy and production costs and gains volume in production, making the products more affordable to the masses and, last but not least, permitting more profit. In the 1940s, Delmar Harder created Ford's first automation department, exploring new ways of using autonomous machines on the production line. By the end of the decade, Ford had built a sheet metal stamping plant in Buffalo, New York, and installed hundreds of self-regulating machines. However, workers still played a central role on the assembly line. So that was a major evolutionary step, partially replacing human craft power with machines.

These machines were, however, not fully programmed to do their tasks but were guided by the intervention of operators, who directed the movements with joysticks, controlling cutting sizes, operating times, and so on. A further important step towards lower costs, faster production, and reliability came in 1961, when General Motors installed the first industrial robot in a car factory in New Jersey. Robotics continued to become more sophisticated throughout the decade, and in 1969 the American engineer Victor Scheinman invented the "Stanford Arm", an electronic arm directed and controlled by a computer. It was a huge step forward in the design of industrial robots. Fifty years after the introduction of assembly lines, the first industrial robots entered the scene of human manufacturing processes: a milestone in the achievements of Homo sapiens, capable of imagination, thought, and advanced intelligent design, and an example and celebration of what human minds are capable of inventing, creating, and realizing. A machine executing pre-programmed tasks without the continued intervention of external intelligence, fully automated, supplied with a stable energy source, working with high precision, high reliability, and low cost, transforming coded, specified computational information into physical work and, as a result, into useful tools necessary to build complex machines. It is able to take nearby parts and insert them in the right order, at the right place, with the right fit, or to shape the external structure of a building block in preparation for handing it over to another robot, to serve as a part in a machine as a whole.

When Henry Ford first introduced mass-production techniques to building cars, he followed the simplest possible method by making all of his cars identical right down to the color of paint. While this was very economical, it limited their marketing appeal. As long as Ford was the only mass producer of cars around, that wasn't such a big problem, but General Motors quickly moved in with a variety of models and colors and outcompeted Ford rather quickly. Still, though, even into the sixties each make of car had only a handful of models and the available options remained limited. Many desirable features had to be added by hand at the dealership. Since then, there has been a gradual increase in the number of models offered, and the number of available features has increased greatly. Typical 60's models sold hundreds of thousands of copies each year, and there were a limited number of body variations. Today manufacturers try to sell niche models which have annual production runs of only tens of thousands, often with greater variations in body style and available features.

The key to this increased market segmentation has been more flexible assembly lines. Lines in the '60s were really only capable of turning out a single design with a few variations of, for instance, engine types. Even this showed limited flexibility, as the engines were produced on a separate line and fitted into the car fully finished. Modern assembly lines, in contrast, use a mixture of multi-program robots and human workers to achieve tremendously greater variation. A single line can turn out both left and right-hand drive cars, models with a much wider selection of available features, and even several different models based around a common platform. A company like Saturn can take a customer's order for a car, including body type, features, and color, and program that data into a radio transponder which is placed on the chassis at the beginning of production. As the car reaches each stage in the assembly process, the automated equipment receives information from the transponder and decides what steps are necessary without further outside assistance. That type of flexibility promises only to increase in the future.  

The Cell factory maker, Paley's watchmaker argument 2.0 Maxres11

The best and most advanced result that intelligent and capable minds, hundreds of thousands of the most brilliant and inventive men and women from all over the globe, have been able to come up with after over one hundred years of technological advance and progress, and what is considered one of the greatest innovations of the 20th century, is the construction of complex factories with fully automated assembly lines. These use programmed robots in the manufacturing, assembly, quality-control, and packing processes of the most diverse products, in the most economical, efficient, and effective way possible, integrating different facilities and systems and using advanced statistical methods of quality control, making everything from cell phones to cars to power plants. Yet the constant intervention of intelligent brainpower is still required to get the whole process done and obtain the final products. The distribution of the products is based on complex distribution networks and companies, which all require huge, constant efforts of human intervention and brainpower.
 
 Amazingly, the highest degree of manufacturing performance, excellence, precision, energy efficiency, adaptability to external change, economy, refinement, and intelligence of production automation (at our scale = 100) is found in the proceedings adopted by biological cells, analogous to our factory. A cell uses a complex web of metabolic pathways, comparable to robotic assembly lines, each composed of chains of chemical reactions in which the product of one enzyme becomes the substrate of the next. In this maze of pathways there are many branch points where different enzymes compete for the same substrate. The system is so complex that elaborate controls are required to regulate when and how rapidly each reaction occurs. Like a factory production line, each enzyme catalyzes a specific reaction, using the product of the upstream enzyme and passing the result to the downstream enzyme. If just one of the enzymes is not present, or otherwise not functioning, the entire process doesn't work. We now know that nearly every major process in a cell is carried out by assemblies of 10 or more protein molecules. And, as it carries out its biological functions, each of these protein assemblies interacts with several other large complexes of proteins. Indeed, the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. Cells adopt the most advanced mass-craft production techniques, which yield products highly adaptable to the environment (microevolution) while being produced with high efficiency, advanced error-checking mechanisms, low energy consumption, and automation, and so are generally far more advanced, complex, and better structured and organized in every aspect than the most advanced robotic assembly facility ever created by man.
Unlike our own pseudo-automated assembly plants, where external controls are continually applied, the cell's manufacturing capability is entirely self-regulated. We advocate that this is strong evidence of a planning, super-intelligent mind, which conceptualized and created life right from scratch.
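The chain logic of a metabolic pathway, where the product of one enzyme is the substrate of the next and a single missing enzyme halts the whole line, can be loosely sketched in a few lines of code. The metabolite names S, A, B, C and the lookup-table "enzymes" are placeholders invented for illustration, not real biochemistry.

```python
# Toy model of a metabolic pathway: each "enzyme" accepts only its specific
# substrate (a lookup table here), and its product becomes the next enzyme's
# substrate. Names S, A, B, C are placeholders, not real chemistry.

enzyme_a = {"S": "A"}     # converts metabolite S into A
enzyme_b = {"A": "B"}     # converts A into B
enzyme_c = {"B": "C"}     # converts B into the end product C

def run_pathway(pathway, substrate):
    for enzyme in pathway:
        substrate = enzyme[substrate]   # KeyError if no enzyme fits the product
    return substrate

end_product = run_pathway([enzyme_a, enzyme_b, enzyme_c], "S")   # yields "C"

# Remove one enzyme and the whole chain stalls:
try:
    run_pathway([enzyme_a, enzyme_c], "S")
    stalled = False
except KeyError:
    stalled = True        # intermediate "A" has no matching enzyme
```

The sketch captures the dependency the text describes: the line only works as a complete chain, and every station presupposes the output of the one before it.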

Important considerations for an economical, effective, and proper material flow must be brought in when planning the concept and layout design of a new factory assembly line: for example, maximal flexibility in the line for demand and supply fluctuation, and planning deep enough to answer all possible aspects of a new line to get maximal efficiency afterward. There should be simple material delivery routes and pathways throughout the facility that connect the processes. There also needs to be a plan for flexibility and change, since volumes and demand are variable. Awareness of the many factors involved right from the planning process of the factory is key. Right-sized equipment and facilities must be planned and considered as well; all equipment and facilities should be designed to the demand rate, or takt time. Projects and facility designs that do not take these considerations into account start out great but quickly bog down in unresolved issues, lack of consensus, confusion, and delay. On a scale of 1 to 100 (1 being the lowest, 100 the highest), products made one at a time and generally unique are at the lowest end of manufacturing advance and evolution: scale = 1. The highest degree of manufacturing refinement and production technique is reached by a mix of so-called mass-craft: the key is the use of computers, multi-function robots, and similar machines to span the gap between flexible but labor-intensive craft production and cheap but inflexible mass production. This mixture of mass-production techniques with multifunction automation to produce customized products from an assembly-line-like factory is what we can refer to as mass-craft production. The application of computerization to mass production will be a new revolution comparable only to the industrial revolution. Mass production will be substantially replaced by niche and even personalized production.
This new mass-craft production will combine the mechanization and efficiency of mass production with the individualized products characteristic of handcrafting, with the lowest need for external informational input, the whole process fully programmed, permitting fast, highly efficient production at the lowest cost and with energy economy.





1. Paul Davies: The Fifth Miracle: The Search for the Origin and Meaning of Life
2. http://www.esalq.usp.br/lepse/imgs/conteudo_thumb/Cells-a-busy-factory.pdf
3. A. G. Cairns-Smith: Seven Clues to the Origin of Life, page 58
4. Bruce Alberts: Molecular Biology of the Cell, Sixth Edition
5. John Frederick William Herschel: A Preliminary Discourse on the Study of Natural Philosophy, page 149, 1830
6. Bruce Alberts: Molecular Biology of the Cell, Sixth Edition
7. B C Currell et al.: The Molecular Fabric of Cells, December 9, 1991
8. Denton: Evolution: A Theory in Crisis, 1986, pp. 328–329
9. James A Shapiro: How life changes itself: the Read-Write (RW) genome, 2013 Jul 8
10. Rebecca McClellan: Stanford researchers gain insight into how cells avoid assembly-line mistakes, November 4, 2021
11. William Paley: Natural Theology: or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature, 1802
12. Sonam Gurung: The exosome journey: from biogenesis to uptake and intracellular signalling, 2021 Apr 23
13. Dav



Last edited by Otangelo on Mon May 15, 2023 9:43 am; edited 114 times in total


Chapter 2


The highest degree of manufacturing performance, excellence, energy efficiency, adaptability to external change, economy, refinement, and intelligence of production automation (at our scale = 100) is found in the proceedings adopted by each living cell, analogous to our factory, and in biosynthesis pathways and processes in biology. Cells adopt the most advanced mass-craft production techniques, which yield products highly adaptable to the environment (microevolution) while being produced with high efficiency, advanced error-checking mechanisms, low energy consumption, and automation, and so are generally far more advanced, complex, and better structured and organized in every aspect than the most advanced robotic assembly facility ever created by man. We have seen that it took a century for thousands of brilliant men, the most educated in engineering skills and craftsmanship, to go from rudimentary one-by-one assembly to fully automated robotic production lines.
I advocate that this fact is strong evidence of a planning, super-intelligent mind, which conceptualized and created life right from scratch.

In response to DNA damage, repair proteins rapidly converge on the sites of DNA damage, become activated, and form “repair factories” where many lesions are apparently brought together and repaired. And nuclei often contain hundreds of discrete foci representing factories for DNA or RNA synthesis: mRNA is made by production factories and DNA by replication factories. Subnuclear structures (including Cajal bodies and interchromatin granule clusters) are sites where components involved in RNA processing are assembled, stored, and recycled. The high concentration of components in such “factories” ensures that the processes being catalyzed are rapid and efficient.

That is, the same information sequence can be used, via different splicing, to make over 300 different protein products! Cells have unmatched energy efficiency: they are approximately 10,000 times more energy-efficient than any nanoscale digital transistor. In one second, a cell performs about 10 million energy-consuming chemical reactions, which altogether require about one picowatt (one millionth of a millionth of a watt) of power. Cells show the highest adaptability of the manufacturing process to external changes and pressures, and fast repair of damaged or broken parts. They continually dismantle and reassemble their machines at different stages of the cell cycle and in response to environmental challenges, such as infections. Cells use a mixed strategy of prefabricating core elements of machines and then synthesizing additional, snap-on molecules that give each machine a precise function. Consider cellular transport systems: gated transport requires three basic components to work, an identification tag, a scanner (to verify the identification), and a gate (that is activated by the scanner). It is a high-efficiency signaling system with dedicated communication pathways. And as the result: the final product of the cell is a faithful copy, a daughter cell produced through replication. While human-made factories produce things different from themselves, and the product is far less complex than the factory that builds it, the cell's final product is a copy of itself, with slight modifications. In multicellular systems, when a cell divides into two, one daughter cell may go on to make a more specialized type of cell or even give rise to several different cell types, and the final product is far more complex physically than the cell from which it derived.
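The three-component gated-transport scheme just described (tag, scanner, gate) can be illustrated with a small, hedged sketch. The names here ("NLS" as an example tag, the nucleus list as the compartment) are illustrative labels for the logic, not an implementation of any real transport machinery.

```python
# Hedged sketch of gated transport: identification tag, scanner that
# verifies it, and a gate activated by the scanner. Names are illustrative.

AUTHORIZED_TAGS = {"NLS"}        # e.g. a nuclear localization signal

def scanner(cargo):
    """Verify the cargo's identification tag."""
    return cargo.get("tag") in AUTHORIZED_TAGS

def gate(cargo, compartment):
    """The gate admits cargo only when the scanner approves its tag."""
    if scanner(cargo):
        compartment.append(cargo["payload"])
        return True
    return False

nucleus = []
gate({"tag": "NLS", "payload": "transcription factor"}, nucleus)   # admitted
gate({"tag": None, "payload": "random protein"}, nucleus)          # rejected
```

The design mirrors the text: the gate itself makes no decision; it only acts on the scanner's verification of the tag, so all three components are needed for the system to sort anything at all.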

Preprogrammed robots that self-replicate, that self-repair through error-detection and repair mechanisms, that can thrive for generations without the need for external help, that can generate their own energy using solar light, adapt to a variety of environmental conditions, protect themselves from cold and heat, and evolve, that can communicate using advanced, complex languages with other robots of equal complexity, and that can sense the environment, move, walk, fly, and swim, are the epitome of technological sophistication. We humans, with all our advanced intelligence and the brainpower of generations, are galactic distances away from achieving, even in principle, the conceptualization, idealization, and manufacture of that kind of advanced autonomous device through engineering and computation.

Does the fact that cells self-replicate refute the claim that cells are factories?
The short answer is no. This is a self-defeating argument, because it fails to take into consideration that self-replication is the epitome of manufacturing advance and achievement, far from being realized by man-made factories. The fact that cells self-replicate substantiates and reinforces the inference of intelligent design enormously. There are currently 593 proteins known to be required to assure high fidelity of human DNA replication and to prevent disease. Isn't that stunning? Without these proteins operating in concert, our DNA would degenerate rapidly and completely within a matter of a few generations. The vital question is: how did these proteins, required for DNA replication, emerge in the first place? They point to intelligent design, performing what seems like a miracle with every cell division. If man were able to make self-replicating, fully automated robotic factories using in-situ resources, that would be a game-changing technology for all of humanity. This is a monumental challenge: the number of processes involved and of parts needed to build complex machinery is very large. We don't have any factories that run without human control or intervention. That would be a smart factory, a huge leap forward from more traditional automation to a fully connected and flexible system, one that can use a constant stream of data from connected operations and production systems to learn and adapt to new demands, as biological cells do. And even if we eventually get there one day, raw material inputs would still have to be managed by man. Cells have sophisticated gates in the membrane, which sort out what materials are permitted into the cell and what waste products are let out. They even have sophisticated machines on the membrane surface, like the amazing molecular assembly lines called nonribosomal peptide synthetases, each a protein nanofactory.
These detect, attract, and transform environmental iron into siderophores, a form of iron that can be mobilized, taken up, and imported into the cell to manufacture protein cofactors, the iron-sulfur clusters used as catalysts in the core pockets of enzymes. Each factory would also need the means to replicate and copy its information storage device, the hard-disk equivalent of the DNA molecule, together with its information content. That is staggeringly complex. Self-replication had to emerge and be implemented first, which raises the unbridgeable problem that DNA replication is irreducibly complex: evolution is not a capable driving force to produce the DNA replication complex, because evolution depends on cell replication through the very mechanism we are trying to explain. It takes proteins to make DNA replication happen. But it takes the DNA replication process to make proteins. That is a catch-22 situation.

But wait a minute! There ARE actually man-made self-replicating factories, at least in the abstract: John von Neumann's Universal Constructor is a self-replicating machine in a cellular automaton environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death. Von Neumann's goal was to specify an abstract machine that, when run, would replicate itself. In his design, the machine consists of three parts: a 'blueprint' for itself, a mechanism that can read any blueprint and construct the machine (sans blueprint) specified by that blueprint, and a 'copy machine' that can make copies of any blueprint. After the mechanism has been used to construct the machine specified by the blueprint, the copy machine is used to create a copy of that blueprint, and this copy is placed into the new machine, resulting in a faithful replication of the original machine.
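Von Neumann's three-part scheme (blueprint, constructor, copier) can be sketched in a few lines of code. This is only a toy illustration of the logic, not von Neumann's actual cellular-automaton construction; all names here are invented for the example:

```python
# Toy sketch of von Neumann's universal-constructor logic.
# Three parts: a blueprint (passive data), a constructor (builds a machine
# from a blueprint, without the blueprint), and a copier (duplicates the
# blueprint itself). Names are hypothetical, for illustration only.

def constructor(blueprint):
    """Build a machine (sans blueprint) described by the blueprint."""
    return {"parts": list(blueprint["parts"]), "blueprint": None}

def copier(blueprint):
    """Make a literal copy of the blueprint."""
    return {"parts": list(blueprint["parts"])}

def replicate(machine):
    """Construct a new machine, then insert a copy of the blueprint into it."""
    bp = machine["blueprint"]
    child = constructor(bp)          # step 1: build the machine from the blueprint
    child["blueprint"] = copier(bp)  # step 2: copy the blueprint into the child
    return child

parent = {
    "parts": ["constructor", "copier", "controller"],
    "blueprint": {"parts": ["constructor", "copier", "controller"]},
}
child = replicate(parent)
print(child == parent)  # prints True: the copy is indistinguishable from the original
```

The key move in the design is that the blueprint is used twice, in two different roles: once interpreted (by the constructor) and once merely copied (by the copier), which is why the replication does not regress infinitely.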

While in Paley's and, later, Darwin's time, cells were understood as simple, structureless protoplasm (Ernst Haeckel, 1871, in Nature), today we know better.

Carl Sagan, astronomer, "Life," in Encyclopaedia Britannica 1974, 893-894:
A living cell is a marvel of detailed and complex architecture. Seen through a microscope there is an appearance of almost frenetic activity. On a deeper level it is known that molecules are being synthesized at an enormous rate. The information content of a simple cell has been estimated as around 10^12 bits, comparable to about a hundred million pages of the Encyclopaedia Britannica.
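Sagan's order-of-magnitude comparison can be sanity-checked with rough figures; the characters-per-page and bits-per-character values below are assumptions of mine, not Sagan's:

```python
# Back-of-envelope check: 10^12 bits vs. ~10^8 encyclopedia pages.
cell_bits = 10**12       # Sagan's estimate of a simple cell's information content
chars_per_page = 2_000   # assumption: characters on a dense printed page
bits_per_char = 5        # assumption: ~log2 of a 32-symbol text alphabet

pages = cell_bits / (chars_per_page * bits_per_char)
print(f"{pages:.0e} pages")  # on the order of 10^8, i.e. about a hundred million
```

Under these assumed figures, 10^12 bits indeed corresponds to roughly a hundred million pages, matching the comparison in the quote.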

The origin of cell factories
The transition from non-life to life means going from non-evolving, non-replicating molecules to fully reproducing and evolving chemical factories. Establishing the core requirements of the smallest cell is a fundamental scientific challenge. What is can give us a clue to what was: extant life is the path to the past and to the most plausible cause. There are two ways to tackle the OOL problem, and they run in opposite directions. One is to figure out what the smallest unit of life was, what its system looked like, and to start from there to find out how this state of affairs could have come about. The second, the more common approach, is to try to find solutions to the origin of the basic constituents, and how they could have joined into a complexity that could be identified as the first life form.

Three different Origin of Life science approaches
Caleb Scharf et al. (2015): Approaches to the OoL can be broadly divided into three classes, which can be termed historical, synthetic, and universal.
Historical approaches are characterized by research to determine the path of events that led to biology on Earth (and elsewhere, to the degree that such studies are generalizable). For historical approaches, success is typically judged by explaining evidence left in the geological record or in the nature of biochemistry, or by constructing narratives that are consistent with this evidence—typically constrained by presumed “plausible” prebiological environmental conditions and available reagents.

Synthetic approaches are less concerned with how life arose historically and more with how to create the process de novo, either in simulation or in the laboratory. Success is measured in terms of being able to create a system with some desired set of properties, even if it does not resemble biological life in every respect. Synthetic approaches are not always concerned with prebiotic plausibility and thus can aim for something that differs markedly from “real” or “modern” biology in terms of composition. This includes much of the work in A-Life, as well as attempts to create chemically orthogonal “living” systems in vitro or in unicellular contexts. We offer a more fine-grained classification of synthetic approaches in the Outcomes below.

Finally, universal approaches are concerned primarily with questions about necessary and sufficient conditions: can life emerge on planets quite different from Earth, or even in simulated “universes” with quite different “physics” from ours? Are there deep theoretical principles through which central processes in the OoL can be understood, irrespective of the domain in which they occur? This category includes aspects of astrobiology, A-Life, systems science, and evolutionary theory.
5

Difficulties in top-down approaches: Could life have started simple?
The answer is: nothing is simple in biology. A top-down approach means starting with the most plausible model organism, one that most likely populated the earth right at the beginning when life started, at the root of the tree, and establishing its system. Once that model organism has been established, a bottom-up approach becomes feasible as a second step. One can deconstruct the model organism into its parts, survey their properties, and investigate possible prebiotic routes to obtain them. We can catalog the parts and properties shared by all life forms and investigate their possible emergence. That also opens the way to an approach from a systems biology perspective. Top-down starts with the big picture, providing the roadmap from the most likely first life form back to the origins of its constituents via data collection and analysis. It includes the elucidation of genomics, which is all the genetic information of an organism; transcriptomics, which measures mRNA transcript levels; proteomics, which quantifies protein abundance; metabolomics, which determines the abundance of small cellular metabolites; interactomics, which resolves the whole set of molecular interactions in cells; and fluxomics, which establishes dynamic changes of molecules within a cell over time. 8

LUCA, the last universal common ancestor
Two relevant points have never been demonstrated to be fact. The first is the claim of universal common descent: that all life forms descended from a universal common ancestor. The second is that this happened through unguided, unintelligent, purposeless, material processes, such as natural selection acting on random variations or mutations and other similarly naturalistic mechanisms, which are claimed to suffice to explain the origin of novel biological architecture and the appearance of design in complex organisms. Unless science has a complete catalog of all life forms and species, drawing trees of life is based on conjecture. Ed Yong brought that problem to the spotlight in a 2016 article for The Atlantic, when he wrote:

Around half of bacterial branches belong to a supergroup, which was discovered very recently and still lacks a formal name. Informally, it’s known as the Candidate Phyla Radiation. Within its lineages, evolution has gone to town, producing countless species that we’re almost completely ignorant about. In fact, this supergroup and “other lineages that lack isolated representatives clearly comprise the majority of life’s current diversity,” wrote Hug and Banfield.

“This is humbling,” says Jonathan Eisen from the University of California, Davis, “because holy **#$@#!,  we know virtually nothing right now about the biology of most of the tree of life.”
 59

When we start investigating the origin and diversification of life, we have to start right from the beginning. When it comes to the origin of life, there are two approaches.  

Scientists commonly publish speculative papers attempting to elucidate what the theoretical first life form was. Many elaborate on a Last Universal Common Ancestor (LUCA), a Last Bacterial Common Ancestor (LBCA), a Last Eukaryotic Common Ancestor (LECA), and a Common Ancestor of Archaea and Eukaryotes, all presupposing that the tree of life, and common ancestry, are true. That would give insight into the complexity of the most primitive life forms of the three domains of life. A Last Universal Common Ancestor (LUCA) would be the most recent common ancestor of all life on Earth, but it would be preceded by a First Universal Common Ancestor (FUCA). Since Woese et al. (1990) described the three domains of life, classical evolutionary theory has considered this last universal common ancestor (LUCA) as the branching point at which Bacteria, Archaea, and Eukarya separated on the tree, about 3.5 to 4 billion years ago. It is a theoretical construct. Nobody knows what it looked like. Rather than being the first self-replicating cell, LUCA is supposed to have been a population of organisms. What seems clear so far is, as Koonin wrote:
On the strength of combined evidence, it appears likely that the LUCA was a prokaryote-like organism (that is, like bacteria or archaea) of considerable genomic and organizational complexity 24

Juan A G Ranea and colleagues (2006): 
We know that the LUCA, or the primitive community that constituted this entity, was functionally and genetically complex. Life achieved its modern cellular status long before the separation of the three kingdoms. We can affirm that the LUCA held representatives in practically all the essential functional niches currently present in extant organisms, with a metabolic complexity similar to translation in terms of domain variety.  11

and Christos A Ouzounis and colleagues (2006):
.....a fairly complex genome similar to those of free-living prokaryotes, with a variety of functional capabilities including metabolic transformation, information processing, membrane/transport proteins, and complex regulation, shared between the three domains of life, emerges as the most likely progenitor of life on Earth, with profound repercussions for planetary exploration and exobiology. 12

DIANA YATES, UNIVERSITY OF ILLINOIS (2011) :
The Last Universal Common Ancestor had a complex cellular structure. New evidence suggests that LUCA was a sophisticated organism after all. 10

Eugene V. Koonin (2011):
Arguments for a LUCA that would be indistinguishable from a modern prokaryotic cell have been presented, along with scenarios depicting LUCA as a much more primitive entity (Glansdorff, et al., 2008).
The difficulty of the problem cannot be overestimated. Indeed, all known cells are complex and elaborately organized. The simplest known cellular life forms, the bacterial (and the only known archaeal) parasites and symbionts, clearly evolved by degradation of more complex organisms; however, even these possess several hundred genes that encode the components of a fully-fledged membrane; the replication, transcription, and translation machineries; a complex cell-division apparatus; and at least some central metabolic pathways. As we have already discussed, the simplest free-living cells are considerably more complex than this, with at least 1,300 genes. 

All the difficulties and uncertainties of evolutionary reconstructions notwithstanding, parsimony analysis combined with less formal efforts on the reconstruction of the deep past of particular functional systems leaves no serious doubts that LUCA already possessed at least several hundred genes.  In addition to the aforementioned “golden 100” genes involved in the expression, this diverse gene complement consists of numerous metabolic enzymes, including pathways of the central energy metabolism and the biosynthesis of amino acids, nucleotides, and some coenzymes, as well as some crucial membrane proteins, such as the subunits of the signal recognition particle (SRP) and the H+- ATPase.  

Koonin (2020): 
The presence of a highly complex virome implies the substantial genomic and pan-genomic complexity of the LUCA itself.  Thus, although important features of the LUCA remain to be clarified, we can conclude with reasonable confidence that it was a prokaryotic population with a pangenomic complexity comparable to that of the extant archaea and bacteria.  24

Madeline C. Weiss and colleagues  (2020):
The last universal common ancestor of all living organisms was a complex cell just as intricate as those of many modern bacteria and archaea. 27

Baidouri and colleagues (2020):
We conclude that LUCA, the cenancestor, was far more than a “half-alive” progenote, and show that it was a complex “prokaryotic” cell resembling modern archaea and bacteria. The complex phenotypic picture we depict of LUCA implies a complex genome, which is supported by our estimates of genome size and gene numbers. These results challenge the common assumption of increasing complexity through time, suggesting instead that cellular complexity arose near the very beginning of life and was retained or even lost through the evolution of the prokaryote lineage. We thus reveal LUCA as a complex cell possessing a genetic code more intricate than many modern bacteria and archaea. 28

Now that constitutes a considerable problem for naturalistic proposals. If LUCA was already sophisticated and complex, and if the evidence suggests that it had to be so when life started, how could there have been a more primitive life form preceding it, a First Universal Common Ancestor (FUCA)?
 
John D. Sutherland  (2017):
The latest list of genes thought to be present in LUCA is a long one. The presence of membranes, proteins, RNA and DNA, the ability to perform replication, transcription and translation, as well as harbouring an extensive metabolism driven by energy harvested from ion gradients using ATP synthase, reveal that there must have been a vast amount of evolutionary innovation between the origin of life and the appearance of LUCA. 25

But if that was the case, what did that first life form look like? The conundrum is that science is empty-handed when it comes to breaking down the features that compose life into a first life form simple enough that it would be feasible, even if only theoretically, to imagine complexification taking place: first the transition to a self-replicating living cell, and then further complexification into a population giving rise to the LUCA. Wherever scientists look, there are problems. Unless one resorts to God, who created life complex from the get-go. But that is, of course, a route scientists do not want to go.

Douglas J. Futuyma (1983):
“Organisms either appeared on the earth fully developed or they did not. If they did not, they must have developed from preexisting species by some process of modification. If they did appear in a fully developed state, they must indeed have been created by some omnipotent intelligence” 26

In fact, Futuyma's words underline a very important truth. He writes that when we look at life on Earth and see that life emerges all of a sudden, in complete and perfect form, then we have to admit that life was created and is not a result of chance. Once naturalistic explanations are shown to be invalid, creation is the only explanation left.

Nobody knows what LUCA and FUCA looked like
Science remains largely in the dark when it comes to pinpointing what exactly the first life form looked like. Speculation abounds. Whatever science paper on the topic one reads, the confusion becomes apparent. Patrick Forterre confessed in a 2015 paper, The universal tree of life: an update:

There is no protein or groups of proteins that can give the real species tree, i.e., allow us to recapitulate safely the exact path of life evolution. 56

LUCA, its form, constitution, timeline, and other characteristics have been the subject of intense discussion within the scientific community. Since the 1950s, after the discovery of the DNA structure, and throughout the seventies until the maturation of molecular biology, new hope emerged of elucidating the identity of LUCA, with the possibility of understanding its molecular constitution and makeup. But rather than coming closer to an answer that science broadly agrees on, new scientific papers are published with a certain frequency that claim to revolutionize the field with a completely new proposal that supposedly nobody thought of before. One problem is that all investigations start with the premise that life had a universal common ancestor, which gave rise to all three domains: bacteria, archaea, and eukaryotes. Most take the evolutionary framework, the tree of life down to its roots, for granted. But there are also those who are honest enough to admit the problems. One thing researchers constantly fiddle with is the branching point of the three domains. A glaring gap, one of the most strident, is the lack of evidence for the transition from prokaryotes to eukaryotes.

W. Ford Doolittle (2000):
Discoveries made in the past few years have begun to cast serious doubt on some aspects of the tree, especially on the depiction of the relationships near the root. The absence of a clear phylogeny (family tree) for microorganisms left scientists unsure about the sequence in which some of the most radical innovations in cellular structure and function occurred. For example, between the birth of the first cell and the appearance of multicellular fungi, plants, and animals, cells grew bigger and more complex, gained a nucleus and a cytoskeleton (internal scaffolding), and found a way to eat other cells. 31

Eric Bapteste and colleagues (2009):
Prokaryotic evolution and the tree of life are two different things, and we need to treat them as such, rather than extrapolating from macroscopic life to prokaryotes. 32

Olga Zhaxybayeva et al. (2004):
There was no single last common ancestor that contained all of the genes ancestral to those shared among the three domains of life. Each contemporary molecule has its own history and traces back to an individual molecular cenancestor. However, these molecular ancestors were likely to be present in different organisms at different times. 52

There are good reasons to conclude that.  

Eukaryotic cells are vastly more complex than prokaryotic cells, as evidenced by their endomembrane system 34 The unicellular green marine alga Ostreococcus tauri is the world's smallest free-living eukaryote known to date and encodes the fewest genes. It has been hypothesized, based on its small cellular and genome sizes, that it may reveal the “bare limits” of life as a free-living photosynthetic eukaryote, presumably having disposed of redundancies and presenting a simple organization and very little non-coding sequence. 35  It has a genome size of 12,560,000 base pairs, 8,166 genes, and 7,745 proteins 36

This speaks for itself. The genome of Ostreococcus tauri is roughly eight and a half times larger than that of Synechococcus spongiarum LMB bulk15N. Both have very few non-coding regions.
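Using the base-pair counts quoted in the text, the size ratio works out as follows (a quick sketch, not new data):

```python
# Genome sizes (base pairs) as cited in the text.
ostreococcus_tauri = 12_560_000        # smallest known free-living eukaryote
synechococcus_spongiarum = 1_470_000   # cyanobacterial symbiont, LMB bulk15N

ratio = ostreococcus_tauri / synechococcus_spongiarum
print(f"{ratio:.1f}x")  # ≈ 8.5x: the eukaryote's genome is close to an order of magnitude larger
```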

But the problems are not limited just to differences between bacteria and eukaryotes, but also between bacteria and archaea, as Celine Petitjean and colleagues outline in the following paper from 2018:
It has been difficult to determine whether Bacteria and Archaea share a common prokaryotic evolutionary regime. Phylogenomics suggests that the deepest split in the universal tree lies between the two prokaryotic domains, Bacteria and Archaea  33

E.Koonin (2020):
The nature of the replication and membrane machineries of LUCA remains unclear owing to the drastic differences between the respective systems of bacteria and archaea, the two primary domains of life 24

But once the presupposition of a universal common ancestor is removed, a different, alternative explanation can be investigated. Rather than holding at all costs to a nested hierarchy for which there is no evidence, it is possible and warranted to infer that a common designer created the three domains, and the basic life forms, from scratch, individually and separately, with an inbuilt mechanism of adaptation, speciation, and diversification to a limited degree.

Viruses
When we think about viruses, we think immediately of disease. Virus comes from the Latin word for poison. Viruses are something in between life and non-life. Wherever there is life, there are viruses. Interestingly, there are viral genes that are unique to viruses and not shared with cellular life forms. Life is interdependent with viruses. That creates another conundrum: life, in order to exist and perpetuate itself, depends on viruses, but viruses depend on a living host to survive. What came first? Open questions abound.

Koonin (2020):
The LUCA was not a homogenous microbial population but rather a community of diverse microorganisms, with a shared gene core that was inherited by all descendant life-forms and a diversified pangenome that included various genes involved in virus–host interactions, in particular multiple defence systems.  A common ancestor containing all the genes shared by the three domains of life has never existed 24

What a commendable confession. His conclusion creates a huge conundrum and new questions, since, if we are talking about a community of diverse microorganisms, we are no longer talking about a common universal ancestor. Diverse microorganisms mean they were already diverse, different species with variations. But why is that a problem, and why is that not the view shared by most scientists?

Mikhail Butusov ( 2013):
The compelling argument for the one origin of life theory is the uniformity of the genetic system based on the nucleic acids DNA and RNA and the energy system based on ATP known among all existing organisms. The likelihood that such complicated systems would have evolved twice and in parallel seems very slim, thus suggesting one origin of all life forms. 48

Once again, the evidence seems to falsify the claim that life started from a LUCA, but rather, was created, each of its kind, individually. 

Giant Viruses
To muddy the water even further, there are giant viruses.  Gustavo Caetano-Anolles, Professor of Bioinformatics, published a paper in 2016, where he writes:

The discovery of giant viruses with genome and physical size comparable to cellular organisms, remnants of protein translation machinery, and virus-specific parasites (virophages) have raised intriguing questions about their origin. Evidence advocates for their inclusion into global phylogenomic studies and their consideration as a distinct and ancient form of life. They likely represent a distinct form of life that either predated or coexisted with the last universal common ancestor (LUCA) and constitute a very crucial part of our planet’s biosphere. 60

In July 2013, scientists published their discovery of the Pandoravirus:
This virus contains an astounding minimum of 2.5 million bases, larger than some bacteria and eukaryotic cells. These 2.5 million bases encode 2,556 genes, only 7% of which match genes known to exist. This means that 93% of its genome had never been identified before.  63 64

The largest giant viruses to date are Pandoraviruses. A paper from 2014 reported:
The recently discovered Pandoraviruses exhibit micron-sized amphora-shaped particles and guanine–cytosine-rich genomes of up to 2.8 Mb.62

And Koonin, in 2018:
With virions measuring up to 1.5 μm and genomes of up to 2.5 Mb, the giant viruses break the now-outdated definition of a virus and extend deep into the genome size range typical of bacteria and archaea. Additionally, giant viruses encode multiple proteins that are universal among cellular life forms, particularly components of the translation system, the signature cellular molecular machinery. The evolutionary forces that led to the emergence of virus gigantism remain enigmatic. In the respective phylogenetic trees, the mimivirus did not belong within any of the three domains of cellular life (bacteria, archaea, or eukaryotes) but rather formed a distinct branch. These observations have triggered the “fourth domain hypothesis”.61

What are the oldest life forms?
Answering this question is of supreme scientific importance, a critical issue for understanding the origin of life on earth. It would permit solid top-down investigations of the OOL to begin. It helps us understand what difficulties chemistry would have had to overcome, and the degree of complexity that had to be achieved, in order to transition from chemistry to biology, from non-life to life. But these investigations result only in theoretical constructs and are, as such, speculative. An alternative is to take what might be one of the smallest free-living organisms, and also the oldest, and use it as a model to investigate the origin of life and the complexity of the first life form.

To pinpoint what the oldest life form was, based on scientific evidence, is difficult for various reasons: there are no unambiguous microfossils, the record of Eoarchean to Paleoarchean (ca. 3.85-3.2 Ga) rocks is very limited 67, data are lacking, and phylogenetics has failed to provide a clear picture, with incongruences and ambiguity the norm. Horizontal gene transfers confound matters further. Several lines of evidence are pursued to investigate the fossil record of cyanobacteria, but all are limited, and challenges remain. Still, there are some clues.

Prokaryotes, the simplest forms of cellular life, are increasingly supported by evolutionary studies as the oldest life forms, and are thus of utmost importance for OoL research. But prokaryotes (Archaea and Bacteria) are far more metabolically diverse and environmentally tolerant than multicellular eukaryotes (Eukarya) 51, which raises the question of whether the evidence warrants the conclusion that all bacteria have a common ancestor.

Timeline of the earliest evidence of life
Based on the evolutionary timeline, the earth was formed about 4.5 billion years ago. Within a window of about 200 million years, life supposedly emerged on the early earth 29.
 
An article in Nature magazine in 2018 dated the emergence of LUCA (the last universal common ancestor of cellular life) to before the end of the supposed late heavy bombardment (>3.9 Ga) 15

Wikipedia gives a list of several sites, dating the earliest evidence of life between 3.48 and 4.28 Gya 65
Zircons from Western Australia imply that life existed on Earth at least 4.1 Gya
It was claimed that traces of life were found in 3,950-million-year-old sedimentary rocks in Labrador, Canada 19
The earliest physical evidence so far found consists of microfossils in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in banded iron formation rocks at least 3.77 and possibly 4.28 Gya.
Biogenic graphite has been found in 3.7 Gya metasedimentary rocks from southwestern Greenland
Evidence of early life in rocks from Akilia Island, near the Isua supracrustal belt in southwestern Greenland, dating to 3.7 Gya, have shown biogenic carbon isotopes
Microbial mat fossils from 3.49 Gya Western Australian sandstone
The Pilbara region of Western Australia contains the Dresser Formation with rocks 3.48 Gya, including layered structures called stromatolites.
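The dated claims in the list above can be collected and ordered, oldest claim first (ages in Ga as cited; for the Nuvvuagittuq microfossils the upper estimate of 4.28 Ga is used):

```python
# Earliest-life claims cited above, with ages in billions of years (Ga).
claims = [
    ("Zircons, Western Australia", 4.10),
    ("Nuvvuagittuq Greenstone Belt microfossils", 4.28),   # upper estimate
    ("Labrador, Canada sedimentary rocks", 3.95),
    ("Isua/Akilia graphite and carbon isotopes, Greenland", 3.70),
    ("Microbial mat fossils, Western Australia", 3.49),
    ("Dresser Formation stromatolites, Pilbara", 3.48),
]

# Print oldest claim first.
for site, age in sorted(claims, key=lambda c: c[1], reverse=True):
    print(f"{age:.2f} Ga  {site}")
```

Sorting makes the spread visible: the claims span about 0.8 billion years, from 3.48 to 4.28 Ga.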

Are the first life forms traced back to submarine vents?
In 2017, the University of Leeds reported evidence of early life in Earth’s oldest hydrothermal vent precipitates, supposedly 3,770 million and even up to 4,290 million years old.  They described micrometer-scale haematite tubes and filaments with morphologies and mineral assemblages similar to those of filamentous microbes like Fe-oxidizing bacteria from modern hydrothermal vent precipitates.14

 If one can assume a continuity of microbial metabolisms from their inception to the present day, autotrophic archaeal methanogenesis along with bacterial homoacetogenesis constitute likely potential ancestral metabolisms in the alkaline hydrothermal theory for the origin of life. Some authors suggest that Acetothermia may be deeply branched on the tree of life.  We find a central role of bacteria belonging to the Firmicutes in the ecology of the Prony Bay Hydrothermal Field (PHF). These bacteria, along with members of the phyla Acetothermia and Omnitrophica, are identified as the first chimneys inhabitants before archaeal Methanosarcinales. 49 Phylogenetic analysis based on the concatenated sequences of proteins common among 367 prokaryotes suggests that Ca. ‘Acetothermum autotrophicum’ is one of the earliest diverging bacterial lineages. 50

Joana C. Xavier and colleagues wrote in a recent paper from 2021 about the Last Universal Bacterial ancestor:
Anaerobic members of Aquificae also show significant proximity to the root as judged by branch length. There are only three genomes of (anaerobic) Aquificae in our dataset, and all three belong to chemolithoautotrophs isolated from hydrothermal vents that can grow on H2 and CO. 53 One is Thermovibrio ammonificans sp.

Maybe Cyanobacteria? 
The most commonly mentioned and accepted evidence of the oldest life forms is the stromatolite remains in the supposedly 3,480-Myr-old Dresser Formation of the Pilbara Craton, Australia. 16  Stromatolites are formed by the interactions of several relevant bacterial groups, in particular cyanobacteria. 17 In 2016, Science magazine reported evidence of the oldest life forms found in Greenland rocks: fossilized stromatolites suggested to be 3.7 billion years old. 13 A microbial mat is a multi-layered sheet of prokaryotes, held together by a glue-like sticky substance that they secrete, called extracellular matrix. A stromatolite is a laminated organo-sedimentary structure formed when minerals are precipitated out of water by the prokaryotes in such a mat. Fossilized microbial mats, preserved as stromatolites, represent the earliest fossil record of life on Earth. 58 While it was traditionally claimed that cyanobacteria diversified from much older microorganisms, that view has changed more recently, and it is now argued that photosynthesis, a key feature of cyanobacteria, could be as old as life itself. 37 Several scientific papers have been published over the years on the phylogenetic tree of cyanobacteria, but with contradicting results. Fundamental questions remain about their origin, the timing and pattern of their diversification, and the origin of oxygenic photosynthesis 41

Gareth A. Coleman, in a 2020 science paper attempting to elucidate the oldest bacterial life forms, placed the last bacterial common ancestor between two major clades, Terrabacteria and Gracilicutes 20 Cyanobacteria and Chloroflexota belong to Terrabacteria. In the phylogenetic tree, cyanobacteria, Melainabacteria, Chloroflexi, Bacteroidetes, and Alphaproteobacteria belong to the deepest and oldest branches 18  A paper from 2011, which investigated the phylogenetic tree of over 1,200 cyanobacterial taxa, put Gloeobacter violaceus and Synechococcus spongiarum in the deepest branch. 21  Synechococcus represents the most dominant cyanobacterial clade in the world’s oceans, as it is responsible for ~20 to 40% of marine chlorophyll biomass and carbon fixation. The cyanobacterial symbionts “Candidatus Synechococcus spongiarum” are widely distributed and highly abundant in sponges around the world and show diversity at the clade level correlating with host sponge phylogeny. This mutually beneficial association between sponges and cyanobacteria is thought to be one of the oldest microbe-metazoan interactions. 22  Synechococcus spongiarum LMB bulk15N, as a representative, comes in at 1,470,000 base pairs, 1,444 proteins, and 1,530 genes 23, but, being a symbiont and not free-living, it is not the most suitable representative.  A paper from 2017 places Synechococcus JA-3-3Ab together with Gloeobacter violaceus very early in the phylogenetic tree. Since the anoxygenic Archaean atmosphere was very warm (about 50 to 70°C according to some calculations), the first cyanobacteria should have been thermophiles similar to unicellular Synechococcus sp. JA-2-3Ab 40  Its genome has a size of 2,932,766 base pairs, 2,748 proteins, and 2,898 genes.

Gloeobacter violaceus, a basal cyanobacteria
Most studies on the reconstruction of phylogenetic relationships focus on the cyanobacterium Gloeobacter violaceus, with its primitive photosynthetic apparatus. 40 It has a genome size of 4,659,019 base pairs, with 4,406 proteins and 4,430 protein-encoding genes. 42 Despite its comparatively large genome, a science paper from 2013 noted: Numerous phylogenetic papers proved its basal position among all of the organisms and organelles capable of plant-like photosynthesis (i.e., cyanobacteria, chloroplasts of algae and plants). Hence, G. violaceus PCC 7421 has become one of the key species in the evolutionary study of photosynthetic life. 43 Another paper, from 2018, outlined: It is known as the most primitive cyanobacterium due to features such as the absence of thylakoids, a circadian clock and its scarce morphological and reproductive differentiation, among others. 44 Despite being the most primitive, Gloeobacter violaceus has a rather large genome, about double the size of an average cyanobacterium's.

Nasim Rahmatpour and colleagues wrote in a recent paper from 2021: The phylum Cyanobacteria is composed of two extant groups, Gloeobacteria and Phycobacteria, which supposedly diverged around 2 billion years ago. Phycobacteria encompasses >99.9% of the known cyanobacterial diversity and is sometimes also referred to as the “crown Cyanobacteria”. Gloeobacteria, on the other hand, is rather enigmatic and has only two species described thus far: Gloeobacter violaceus and G. kilaueensis. 45 A paper from 2011 elucidated: there are several indications that G. violaceus may be considered the most primordial cyanobacterium yet studied, as suggested by comparative analysis with 14 other cyanobacterial genomes (Mulkidjanian et al., 2006). 46 A recent paper from 2020 pointed out: G. violaceus was the basal member among the cyanobacterial sequences sampled (Nelissen et al. 1995). Blank & Sánchez-Baracaldo (2010) confirmed this by analyzing the small and large subunits of rDNA and 137 protein sequences and emphasized that Gloeobacter violaceus was the earliest-branching, or basal, organism in Cyanobacteria. It is therefore likely that Gloeobacter spp. have retained ‘primitive’ or ancestral traits, and that such traits have undergone little change since being inherited from the common ancestor. It is important to point out that some of these traits might also be apomorphies, traits that are unique to Gloeobacter and not necessarily present in other Cyanobacteria, especially given its long history. Given its phylogenetic position, it is reasonable to infer traits that might have been present in ancestral lineages of Cyanobacteria. 47 The oldest should also be the most primitive and smallest life form, yet Gloeobacter violaceus has a genome twice the size of average-sized cyanobacteria, which creates a paradox.

As we will see, the three domains of life most likely did not share a common ancestor. The simplest life forms are bacteria, so the best candidate to select is a bacterial prokaryote.

What does science know about a supposed last bacterial common ancestor (LBCA)? 
A paper from 2022 confessed: 
The nature of the LBCA is unknown, especially the architecture of its cell wall. The lack of reliably affiliated bacterial fossils outside Cyanobacteria makes it elusive to decide the very nature of the LBCA. 55 

Another scientific paper, from 2021, provides a good framework for forming an idea of the minimal genome, proteome, metabolome, and interactome of a working prokaryotic cell that could be regarded as the LBCA:

The assumption that LBCA was anaerobic is supported by geochemical and phylogenomic evidence. Among all cells on Earth, bacteria are not only the most abundant, they comprise the most diverse domain in terms of physiology and metabolism and are generally regarded as ancient. Isotopic signatures trace autotrophy 3.9 billion years back in time. Phylogenomic reconstructions indicate that LUCA was a thermophilic anaerobe that lived from gases in a hydrothermal setting, notwithstanding contrasting views. Reconstructing the habitat and lifestyle of LBCA is, however, impaired by lateral gene transfer (LGT), which decouples physiological evolution from ribosomal phylogeny. Like LUCA, LBCA must have been an anaerobe, because the accrual of atmospheric oxygen occurred much later in Earth’s history, as a product of cyanobacterial metabolism. Although some details of Earth’s oxygenation continue to be debated, it is generally accepted that the Great Oxidation Event occurred ~2.4 billion years ago. The most important difference between anaerobes and aerobes is related to energy; anaerobic pathways such as fermentation, sulfate reduction, acetogenesis, and methanogenesis yield only a fraction of the energy of aerobic pathways, but this is compensated by the circumstance that the synthesis of biomass costs 13 times more energy per cell in the presence of O2 than under anoxic conditions. This is because, in the reaction of cellular biomass with O2, the thermodynamic equilibrium lies very far on the side of CO2. That is, the absence of O2 offers energetic benefits of the same magnitude as the presence of oxygen does.
Although the advent of O2 expanded routes for secondary metabolism, allowed novel O2-dependent steps in existing biosynthetic pathways, and allowed the evolution of new heterotrophic lifestyles by enabling the oxidation of unfermentable substrates, the advent of O2 did not alter the nature of life’s basic building blocks nor did it redesign their biosynthetic pathways. It did, however, promote LGT for genes involved in O2 utilization. In other words, the fundamentals of biochemistry, metabolism, and physiology were invented at a time when the Earth was anoxic.

Both from the geochemical and the biological standpoint, looking back into the earliest phases of evolution ca. 4 billion years ago is challenging. The geological challenge is that rocks of that age are generally rare, and those that bear traces of life are extremely scarce. The biological challenge is that Lateral Gene Transfer has reassorted genes across genomes for 4 billion years. As an alternative to reconstructing gene history, metabolic networks themselves harbor independent inroads to the study of early evolution. Metabolic networks represent the set of chemical transformations that occur within a cell, leading to both energy and biomass production. Genome-scale metabolic networks are inferred from a full genome and the corresponding full set of functional (metabolic) annotations, allowing for predictive models of growth and insights into physiology. Furthermore, metabolism itself is connected to the informational processing machine in the cell, because enzymes are coded in DNA, transcribed, and translated, while they also produce the building blocks of DNA and RNA and polymerize them. However, metabolism is much more versatile than information processing. Metabolic networks include multiple redundant paths, and in different species, different routes can lead to the same functional outcome. Because metabolism is far more variable across lineages than the information processing machinery, the genes coding for enzymes are not universal across genomes and are much more prone to undergo LGT than information processing genes are. This circumstance has impaired the use of metabolic enzymes for the study of early prokaryotic evolution.  53

1. About Life Detection
2. PAUL DAVIES: The Fifth Miracle The Search for the Origin and Meaning of Life 
3. https://www.youtube.com/watch?v=k92xoQJdifk
4. ANN GAUGER The White Space in Evolutionary Thinking APRIL 20, 2015 
5. Caleb Scharf: A Strategy for Origins of Life Research 2015 Dec 1
8. Weiwen Zhang: Integrating multiple 'omics' analysis for microbial biology: application and methodologies 2009 Nov 12 
10. Last Universal Common Ancestor had a complex cellular structure OCT 5, 2011 
11. Protein Superfamily Evolution and the Last Universal Common Ancestor (LUCA) 31 May 2006 
12. Christos A Ouzounis: A minimal estimate for the gene content of the last universal common ancestor--exobiology from a terrestrial perspective 
13. CAROLYN GRAMLING: Hints of oldest fossil life found in Greenland rocks 31 AUG 2016 
15. Holly C. Betts: Integrated genomic and fossil evidence illuminates life’s early evolution and eukaryote origins 2018 Aug 20 
16. Allen P Nutman: Rapid emergence of life shown by discovery of 3,700-million-year-old microbial structures 2016 Sep 22 
17. Karina Stucken: The Smallest Known Genomes of Multicellular and Toxic Cyanobacteria: Comparison, Minimal Gene Sets for Linked Traits and the Evolutionary Implications February 16, 2010
18. Kelsey R. Moore: An Expanded Ribosomal Phylogeny of Cyanobacteria Supports a Deep Placement of Plastids  12 July 2019
19. Takayuki Tashiro: Early trace of life from 3.95 Ga sedimentary rocks in Labrador, Canada 28 September 2017 
20. Gareth A. Coleman: A rooted phylogeny resolves early bacterial evolution 
21. Bettina E Schirrmeister: Evolution of cyanobacterial morphotypes 2011 Jul 1 
22. Beate M. Slaby Draft Genome Sequences of “Candidatus Synechococcus spongiarum,” Cyanobacterial Symbionts of the Mediterranean Sponge Aplysina aerophoba 27 April 2017 
23. Candidatus Synechococcus spongiarum LMB bulk15N 
24. Eugene V. Koonin: The LUCA and its complex virome 
25. John D. Sutherland: Studies on the origin of life — the end of the beginning  
26. Douglas Futuyma, Science on Trial (New York: Pantheon Books, 1983), p. 197,
27. Madeline C. Weiss: The physiology and habitat of the last universal common ancestor 25 JULY 2016 
28. Fouad El Baidouri Phenotypic reconstruction of the last universal common ancestor reveals a complex cell 2020.08.20 
29. Madeline C. Weiss: The last universal common ancestor between ancient Earth chemistry and the onset of genetics 2018 Aug 16 
30. Eugene V. Koonin Logic of Chance: The Nature and Origin of Biological Evolution  2011
31. W. ford doolittle: Uprooting the Tree of Life february 2000 
32. Eric Bapteste: Prokaryotic evolution and the tree of life are two different things 2009 Sep 29
33. Siri Kellner: Genome size evolution in the Archaea NOVEMBER 14 2018 
34. Josip Skejo: Evidence for a Syncytial Origin of Eukaryotes from Ancestral State Reconstruction 
35. Evelyne Derelle: Genome analysis of the smallest free-living eukaryote Ostreococcus tauri unveils many unique features 2006 Aug 1 
36. https://www.uniprot.org/proteomes/UP000009170
37. Hayley Dunning:  Photosynthesis could be as old as life itself 16 March 2021 
38. Jiří Komárek: Phylogeny and taxonomy of Synechococcus-like cyanobacteria October 14, 2020 
39. Nasim Rahmatpour: A novel thylakoid-less isolate fills a billion-year gap in the evolution of Cyanobacteria JULY 12, 2021
40. S. V. Shestakov: The origin and evolution of cyanobacteria 23 August 2017 
41. Catherine F.Demoulin: Cyanobacteria evolution: Insight from the fossil record 20 August 2019 
42. Yasukazu Nakamura: Complete Genome Structure of Gloeobacter violaceus PCC 7421, a Cyanobacterium that Lacks Thylakoids 
43. Jan Mareš: The Primitive Thylakoid-Less Cyanobacterium Gloeobacter Is a Common Rock-Dwelling Organism  June 18, 2013 
44. Gustavo Montejano: Gloeobacter violaceus: primitive reproductive scheme and its significance  December 2018 
45. Nasim Rahmatpour: Revisiting the early evolution of Cyanobacteria with a new thylakoid-less and deeply diverged isolate from a hornwort 
46. Sascha Rexroth: The Plasma Membrane of the Cyanobacterium Gloeobacter violaceus Contains Segregated Bioenergetic Domains
47. John A. Raven: Gloeobacter and the implications of a freshwater origin of Cyanobacteria 
48. Mikhail Butusov: The Role of Phosphorus in the Origin of Life and in Evolution 05 March 2013 
49. Céline Pisapia: Mineralizing Filamentous Bacteria from the Prony Bay Hydrothermal Field Give New Insights into the Functioning of Serpentinization-Based Subseafloor Ecosystems 
50. Hideto Takami: A Deeply Branching Thermophilic Bacterium with an Ancient Acetyl-CoA Pathway Dominates a Subsurface Ecosystem 
51. Carol E. Cleland: Pluralism or unity in biology: could microbes hold the secret to life? 
52. Olga Zhaxybayeva: Cladogenesis, coalescence and the evolution of the three domains of life .4 April 2004 
55. Raphaël R. Léonard: Was the Last Bacterial Common Ancestor a Monoderm after All? 18 February 2022 
56. Patrick Forterre: The universal tree of life: an update 
58. 110 Prokaryotic Diversity 
59.  Ed Yong: Most of the Tree of Life is a Complete Mystery 
60. Gustavo Caetano-Anolles: Giant viruses coexisted with the cellular ancestors and represent a distinct supergroup along with superkingdoms Archaea, Bacteria and Eukarya 
61. Eugene V. Koonin: Multiple evolutionary origins of giant viruses 2018 Nov 22
62. Matthieu Legendre: Thirty-thousand-year-old distant relative of giant icosahedral DNA viruses with a pandoravirus morphology 
63. NADÈGE PHILIPPE: Amoeba Viruses with Genomes Up to 2.5 Mb Reaching That of Parasitic Eukaryotes 19 Jul 2013
64. The Pandoravirus and the Tree of Life 
65. https://en.wikipedia.org/wiki/Abiogenesis
66. Stilianos Louca: A census-based estimate of Earth's bacterial and archaeal diversity February 4, 2019 
67. Józef Kazmierczak: Calcium in the Early Evolution of Living Systems



The first bacterial lineages to diverge were most similar to modern Clostridia
We started by focusing on the trees for the 146 LBCA protein families, and we analyzed the divergence accumulated from the bacterial root to each modern genome, measured as root-to-tip distance in terms of (i) sequence divergence (branch length) and (ii) node depth. The results identify clostridial genomes as the least diverged both in terms of sequence divergence and node depth. LBCA was autotrophic, gluconeogenetic, and rod-shaped. Our analyses of trees for all genes, not just those universally present in all genomes, point to Clostridia (a class within the phylum Firmicutes) as the modern bacterial group most similar to the first lineages, which diverged from LBCA. It is followed by Deltaproteobacteria. Anaerobic members of Aquificae also show significant proximity to the root as judged by branch length. There are only three genomes of (anaerobic) Aquificae in our dataset, and all three belong to chemolithoautotrophs isolated from hydrothermal vents that can grow on H2 and CO. One is Thermovibrio ammonificans sp. 56

Gareth A. Coleman et al. (2020): We predict that the last bacterial common ancestor was a free-living flagellated, rod-shaped cell featuring a double membrane with a lipopolysaccharide outer layer, a Type III CRISPR-Cas system, Type IV pili, and the ability to sense and respond via chemotaxis. Our analyses suggest that LBCA was a rod-shaped, motile, flagellated double-membraned cell. We recover strong support for central carbon pathways, including glycolysis, the tricarboxylic acid cycle (TCA) and the pentose phosphate pathway. We did not find unequivocal evidence for the presence of a carbon fixation pathway, although we found moderate support for components of both the Wood-Ljungdahl pathway and the reverse TCA cycle. Though not depicted here, our analyses suggest that the machinery for transcription, translation, tRNA and amino acid biosynthesis, homologous recombination, nucleotide excision and repair, and quorum sensing was also present in LBCA. We place the last bacterial common ancestor between two major clades, Terrabacteria and Gracilicutes, although we could not resolve the position of Fusobacteriota in relation to those major radiations. We have sampled only ~30,000 of the estimated 2-4 million prokaryotic species in the biosphere: there is much more diversity out there to discover. 57

But, after all, how simple can we go, and what is the best model candidate to study the origin of life? 
There are at least five organisms with very tiny genomes: the bacteria ‘Candidatus Sulcia muelleri’ (245,530 bp), ‘Candidatus Zinderia insecticola’ (208,564 bp), ‘Candidatus Carsonella ruddii’ (159,662 bp), ‘Candidatus Hodgkinia cicadicola’ (143,795 bp), and ‘Candidatus Tremblaya princeps’ (138,927 bp). They are not free-living, but obligate symbionts of insects, and are considered below living-cell status.

Mycoplasma is often mentioned as a reference point for the threshold between the living and the non-living. Mycoplasma genitalium is often held up as the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that only lives and survives within the body or cells of another organism (humans), and as such it imports many nutrients from its host. The host provides most of the nutrients such bacteria require, hence the bacteria do not need the genes for producing those compounds themselves, and do not require biosynthetic pathways of the same complexity as a free-living bacterium, which must manufacture all its nutrients.
But in Guinness World Records, another organism, the archaeon Nanoarchaeum equitans, is listed as the smallest entity universally recognized to be a living organism. Its genome is only 490,885 nucleotide bases long, which makes it the smallest non-viral genome ever sequenced. 39 It is, however, also a symbiont: it depends on the archaeon Ignicoccus to survive.

In 2009, a science article made the rounds: First-ever blueprint of a 'minimal cell' is more complex than expected. It reported: Researchers are providing the first comprehensive picture of a minimal cell, based on an extensive quantitative study of the biology of the bacterium that causes atypical pneumonia. The study uncovers fascinating novelties relevant to bacterial biology and shows that even the simplest of cells is more complex than expected. What are the bare essentials of life, the indispensable ingredients required to produce a cell that can survive on its own? Can we describe the molecular anatomy of a cell, and understand how an entire organism functions as a system? Mycoplasma pneumoniae is a small, single-cell bacterium that causes atypical pneumonia in humans. It is also one of the smallest prokaryotes (organisms whose cells have no nucleus) that do not depend on a host's cellular machinery to reproduce. It is complex enough to survive on its own, but small and, theoretically, simple enough to represent a minimal cell, and to enable a global analysis. When studying both its proteome and its metabolome, the scientists found many molecules to be multifunctional, with metabolic enzymes catalyzing multiple reactions, and other proteins each taking part in more than one protein complex. They also found that M. pneumoniae couples biological processes in space and time, with the pieces of cellular machinery involved in two consecutive steps in a biological process often being assembled together. 40

The simplest free-living bacterium known today is Pelagibacter ubique. It is one of the smallest and simplest self-replicating, free-living cells, and it has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes, 1,308,759 base pairs, and 1,354 encoded proteins. Phylogenetics led to the conclusion that its small genome devolved from a slightly larger common ancestor (~2,000 genes). On the evolutionary timescale, its common ancestor supposedly emerged about 1.3 billion years ago, that is, about 2.5 billion years later than the first cyanobacteria. At least we know that this size is feasible, because P. ubique exists to this day.

Karin Moelling (2019):  The smallest known bacteria are still rather large. One of the smallest known metabolically autonomous bacterial species is Pelagibacter ubique with about 1,400 genes. Genome reduction of Mycoplasma mycoides by systematic deletion of individual genes resulted in a synthetic minimal genome of 473 genes (Hutchison et al., 2016)

A science paper from 2019 made a census-based estimate of Earth's bacterial and archaeal diversity. According to the paper, they:
recovered 739,880 prokaryotic operational taxonomic units (OTUs), a commonly used measure of microbial richness. Using several statistical approaches, they estimate that there exist globally about 0.8–1.6 million prokaryotic OTUs.

A paper from 2016 predicted that Earth is home to as many as 1 trillion (10^12) microbial species, based on empirical and theoretical evidence and the largest molecular surveys compiled to date. 16

E. Camprubí et al. (2019):  Recently, the genomic exploration of various environments has yielded an expanded view of the large microbial diversity that comprises major lineages in the tree of life. Thus far, it remains unresolved whether (hyper-)thermophilic groups indeed represent early diverging lineages. For instance, preliminary phylogenetic analyses using concatenated ribosomal proteins in place of 16S rRNA genes indicate that a large radiation of predominantly uncultured mesophilic taxa, which are comprised of potentially symbiotic organisms with small cells and genomes and are referred to as the bacterial candidate phyla radiation (CPR) and the DPANN archaea, diversify near the base of the Bacteria and Archaea respectively. In contrast, the hyperthermophilic Aquificales and Thermotogales, as well as the Deinococcus-Thermus phylum, now fall within a more derived, larger cluster of major microbial groups as was suggested previously using more sophisticated models of evolution. If the root is placed between Bacteria and Archaea, this recent data does not, at face value, indicate a thermophilic common ancestor of Bacteria. However, both the placement of the root as well as the identity of early diverging lineages has to be confirmed using this updated dataset of microbial diversity in combination with more complex models of evolution, which are better suited to prevent phylogenetic artifacts such as long-branch attraction. These artifacts are expected to particularly affect microbial lineages comprised predominantly of symbiotic members such as the CPR bacteria and DPANN archaea, as genomes of known symbionts are often compositionally biased and characterized by faster evolutionary rates. Various attempts in this direction are currently being made but this is clearly a challenging undertaking due to large computational demands of using complex evolutionary models on big datasets. 
Yet, the further improvement of models of evolution and phylogenetic algorithms in combination with ever-increasing computational resources will certainly help to further refine the tree of life and get a better-supported location of the root. 38

Difficulties like the lack of data, the fact that the first life form supposedly lived about 4 billion years ago on Earth, and the existence of millions, perhaps even up to a trillion, different bacterial species make the search for the best candidate for the first bacterial organism very difficult. Pelagibacter ubique is the smallest known bacterium, but not the oldest. Gloeobacter violaceus is supposed to be the oldest, basal cyanobacterium, but it has a genome twice the size of average-sized cyanobacteria, and over three times the size of P. ubique's. The papers that deal with proposals of the earliest bacteria from submarine vents propose microorganisms at the level of phyla, like Aquificae, but tracking down an ancestral species that could serve as a representative is not possible. Nanoarchaeum equitans is the world-record holder as the smallest living organism, but a symbiont. Mycoplasma pneumoniae is very close in size to N. equitans but has no metabolic pathways to synthesize life-essential amino acids; it takes them up from its host. It has been investigated in three Science magazine articles 41,42,43 and could eventually serve as the basis for a top-down investigation. LUCA is a theoretical construct and would have had to be even more complex than the smallest bacteria.
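The genome sizes quoted in this chapter can be lined up directly to make the comparison concrete. This is a minimal sketch using only the base-pair counts already cited above; it simply checks the "over three times the size of P. ubique" claim:

```python
# Genome sizes in base pairs, as quoted earlier in this chapter.
genomes = {
    "Gloeobacter violaceus":        4_659_019,
    "Synechococcus sp. JA":         2_932_766,
    "Ca. S. spongiarum (symbiont)": 1_470_000,
    "Pelagibacter ubique":          1_308_759,
    "Nanoarchaeum equitans":          490_885,
}

p_ubique = genomes["Pelagibacter ubique"]
for name, bp in sorted(genomes.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} {bp:>9,} bp  ({bp / p_ubique:.2f}x P. ubique)")
# Gloeobacter works out to 3.56x P. ubique, i.e. "over three times".
```

Sorting by size also makes visible that the symbionts (S. spongiarum, N. equitans) sit well below the smallest free-living cell.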

The paper Essential metabolism for a minimal cell, from 2019, reported: In 2016, we developed a successful design of a minimal genome and created a living cell controlled by it. Starting with the gene sequence from the 1,079 kbp genome of the ruminant pathogen Mycoplasma mycoides capri serovar LC GM12, a minimal genome of 531 kbp was designed and constructed containing 473 genes (438 protein-coding genes and 35 genes for RNAs). The resulting strain has a genome smaller than that of any independently replicating cell found in nature and is considered to be our ‘working approximation to a minimal cell’. This achievement was the culmination of a series of breakthroughs in synthetic biology. This model organism can serve as the starting point for a deeper investigation into the requirements of a first life form. 44
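The genome reduction reported in the quoted passage is easy to check arithmetically. A small sketch using only the numbers given above (1,079 kbp down to 531 kbp; 438 protein-coding plus 35 RNA genes):

```python
# JCVI minimal-cell figures as quoted above.
original_kbp = 1079  # Mycoplasma mycoides capri serovar LC GM12
minimal_kbp = 531    # designed and constructed minimal genome

protein_genes = 438
rna_genes = 35
total_genes = protein_genes + rna_genes  # 473, matching the quote

removed = 1 - minimal_kbp / original_kbp
print(f"Genes retained: {total_genes}")
print(f"Genome removed: {removed:.1%}")  # about half of the genome (50.8%)
```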

William Martin and colleagues from the University of Düsseldorf's Institute of Molecular Evolution also give us an interesting number: The metabolism of cells contains evidence reflecting the process by which they arose. Here, we have identified the ancient core of autotrophic metabolism encompassing 404 reactions that comprise the reaction network from H2, CO2, and ammonia (NH3) to amino acids, nucleic acid monomers, and the 19 cofactors required for their synthesis. Water is the most common reactant in the autotrophic core, indicating that the core arose in an aqueous environment. Seventy-seven core reactions involve the hydrolysis of high-energy phosphate bonds, furthermore suggesting the presence of a non-enzymatic and highly exergonic chemical reaction capable of continuously synthesizing activated phosphate bonds. CO2 is the most common carbon-containing compound in the core. An abundance of NADH and NADPH-dependent redox reactions in the autotrophic core, the central role of CO2, and the circumstance that the core’s main products are far more reduced than CO2 indicate that the core arose in a highly reducing environment. The chemical reactions of the autotrophic core suggest that it arose from H2, inorganic carbon, and NH3 in an aqueous environment marked by highly reducing and continuously far-from-equilibrium conditions. 54 55
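The figures in the quotation also allow the share of energy-coupled reactions to be worked out. A sketch using only the quoted counts (404 core reactions, 77 of them hydrolyzing high-energy phosphate bonds):

```python
# Autotrophic-core counts as quoted from Martin and colleagues.
core_reactions = 404
phosphate_hydrolysis_reactions = 77

share = phosphate_hydrolysis_reactions / core_reactions
print(f"{share:.1%} of the core reactions hydrolyze "
      "high-energy phosphate bonds")  # 19.1%
```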

Supplementary Table 1 in the paper lists all 402 metabolic reactions:
Biosynthetic core comprising 402 metabolic reactions 

Spontaneous generation of life
The idea of the spontaneous generation of life is not new; it goes back to the Greek philosophers. Anaximander (610–546 BC) believed that all things emerged from the natural elements of the universe, making him perhaps the first advocate of the view that life did not originate from the hands of a deity, but naturally. Aristotle (384–322 BC) was another Greek philosopher who theorized about spontaneous generation. He wrote: Living things form quickly whenever this air and vital heat are enclosed in anything. When they are so enclosed, the corporeal liquids being heated, there arises as it were a frothy bubble. 49 From the time of the ancient Romans, through the Middle Ages, and until the late nineteenth century, it was generally accepted that some life forms arose spontaneously from non-living matter. Such "spontaneous generation" appeared to occur primarily in decaying matter. The hypothesis of spontaneous generation was finally laid to rest in 1859 by Louis Pasteur, who famously stated: "Life only comes from life", a view echoed by the German biologist Rudolf Virchow: Omnis cellula e cellula ('Every cell originates from a cell'). The French Academy of Sciences sponsored a contest for the best experiment either proving or disproving spontaneous generation. Pasteur boiled meat broth in a flask, heated the neck of the flask in a flame until it became pliable, and bent it into the shape of an S. Air could enter the flask, but airborne microorganisms could not; they would settle by gravity in the neck. As Pasteur had expected, no microorganisms grew. When Pasteur tilted the flask so that the broth reached the lowest point in the neck, where any airborne particles would have settled, the broth rapidly became cloudy with life. Pasteur had both refuted the theory of spontaneous generation and convincingly demonstrated that microorganisms are everywhere, even in the air. 50 What we know is that life comes from life. Cells self-replicate and make daughter cells. Cell membranes make other cell membranes. DNA is required to make the machinery that makes DNA. Nothing else has ever been observed, neither in nature nor in the lab. That means abiogenesis must have occurred by other means.

Chapter 3

The bottom-up approach
Attempts have been made to break life down into early stages, for which a bottom-up approach would be the route to go. There have been various hypotheses, but this approach is incapable of solving the origin-of-life problem, since it cannot elucidate the formation and selection process of the basic building blocks, genes, proteins, membranes, energy-generating systems, and metabolic pathways. Individually, on their own, each of them has no function. Chemistry and molecules do not possess an intrinsic "urge" to complexify, to form structures that start with chemistry and end with biology by evolutionary processes. The basic building blocks have to be selected from among an infinite number of possible combinations of atoms, molecules, and naturally occurring chemical structures. Chemical evolution is just one of those sciency, fancy-sounding terms that attempt to lend credibility to a concept but, on closer inspection, turn out to be void of evidence and raison d'être. There was no evolution prior to the replication of DNA. Period. Another problem is availability. Nitrogen and carbon dioxide have to be fixed in order to be used in the cell. Atmospheric nitrogen is a nonreactive compound of two atoms bound together as dinitrogen (N2) through one of the strongest bonds in nature, a triple covalent bond; it is basically useless for life. Nitrogen fixation converts dinitrogen into ammonia (NH3) to make it useful, an enzymatic, energy-demanding process that was not available prebiotically. Carbon dioxide in the atmosphere likewise has to be converted to organic compounds through carbon fixation, a process that also did not exist prebiotically. It is performed through six different carbon-fixation metabolic cycles, which are extremely complex multistep processes involving several enzymes.

Either life started all at once by the creative act of an intelligent designer, or, if natural explanations are invoked, the transition from non-life to life had to go through a long stepwise process. Robert Hazen makes this point in a paper from 2007:

The overarching problem with studying life's origins is that even the simplest known lifeform is vastly more complex than any non-living components that might have contributed to it. What now appears as a great divide between non-life and life reflects the fact that the chemical evolution of life must have occurred as a stepwise sequence of successively more complex stages of emergence. The challenge, therefore, is to establish a progressive hierarchy of emergent steps that leads from a pre-biotic ocean enriched in organic molecules, to functional clusters of molecules perhaps self-assembled or arrayed on a mineral surface, to self-replicating molecular systems that copied themselves from resources in their immediate environment, to encapsulation and eventually cellular life. 37

What are the possible mechanisms & causes to explain the origin of life?
Different mechanisms have been proposed as driving forces to explain the origin of life. There are basically four: 1. chemical evolution via natural selection; 2. physical necessity; 3. unguided random stochastic lucky events, spontaneous generation, or self-organization; and 4. intelligent design/creation.

Either life just coalesced from atomic building blocks through a random, fluke collision of disorderly pieces, emerging by "dumb, blind" mechanical processes as a fortuitous accident, self-organizing spontaneously and in an orderly manner without external direction, through unguided, non-designed, unintended stochastic coincidence and purely physico-chemical kinetic processes and reactions influenced by environmental parameters; or it arose through the direct intervention, creative force, and design activity of an intelligent cognitive agency, a powerful conscious creator with intentions, will, goals, and foresight.

There is nothing about inert chemicals and physical forces that says "we want to get life at the end of the abiogenesis process." Molecules have no "drive"; they do not "want" to find ways to harness energy and to become more efficient.

Evolution
One very persuasive explanation is chemical evolution. It is widespread and very common to see the attempt to smuggle the Darwinian dynamic of replication with heritable variation into the origin of life. Why not extend biological evolution to chemical prebiotic evolution, use it to explain the origin of life as well, and voilà? Trick done? "Evolutionary abiogenesis" claims that life originated from prebiotic molecules through a process of chemical evolutionary transformations on the early earth. Natural selection again as the hero on the block, parading on the red carpet, to be appreciated, acknowledged, and admired. Evolution based on natural selection does it all, and we all ought to believe it because that is what science is saying and proposing. There is a very clear no to that. Evolution through natural selection requires three processes: reproduction, variation, and inheritance. One cannot explain the origin of evolution through evolution. Biological evolution by natural selection does not and cannot explain the origin of life. Natural selection acts only on the random variation of alleles based on DNA replication; but the origin of genes and of replication, together with the origin of the entire self-replicating cell, is precisely what origin-of-life research has to explain.

As A. G. Cairns-Smith explains in: Genetic Takeover: And the Mineral Origins of Life (1982): At a very general level the doctrine of chemical evolution is simply: there was a prevital progression, a natural long-term trend analogous in a limited way to biological evolution, that proceeded from atoms to small molecules to larger molecules — and finally to systems able to reproduce and evolve under natural selection; and also: that the relevant molecules in prevital processes were, broadly speaking, the kinds of molecules relevant to life now. 52

Phillip E. Johnson puts it succinctly in Darwin on Trial. He writes: Darwin persuades us that the seemingly purposeful construction of living things can very often, and perhaps always, be attributed to the operation of natural selection.

If you have things that are reproducing their kind; 
if there are sometimes random variations, nevertheless, in the offspring; 
if such variations can be inherited; 
if some such variations can sometimes confer an advantage on their owners; 
if there is competition between the reproducing entities; 
if there is overproduction, so that not all will be able to produce offspring themselves; 

then these entities will get better at reproducing their kind. What is needed for natural selection are things that conform to those 'ifs'. Self-replicating cells are prerequisites for evolution. None of this was available prebiotically to explain the origin of the first life form.  51

The scientific paper "The scientific origin of life" (2000) states: We hypothesize that the origin of life, that is, the origin of the first cell, cannot be explained by natural selection among self-replicating molecules, as is done by the RNA-world hypothesis. The hypothesis espoused here states that it is virtually impossible that the highly complicated system of the cell developed gradually around simple self-replicating molecules (RNA-hypercycles or autocatalytic peptide networks) by means of natural selection, as is proposed by, for example, the RNA-world hypothesis. Despite searching quadrillions of molecules, it is clear that a spontaneous RNA replicator is unlikely to occur. Reports of nucleotide and peptide self-replication still depend upon human intervention (for instance, by changing the environmental conditions between two rounds of replication or by denaturing the double strands). The problem of denaturing the double-nucleotide strand in a nonenzymatic manner has been overlooked and has contributed to a failure to establish molecular self-replication. The first cell, life, was born, and natural selection (selection among variations on the theme of autonomous duplication) commenced. 10

Paul Davies also stated it very clearly in a YouTube video: Why Darwinian evolution does NOT explain the origin of life: Sep 2, 2021
I think, in all honesty, a lot of people confuse it, even people who aren't familiar with the area: oh, I presume Darwinian evolution sort of accounts for the origin of life. But of course, you don't get an evolutionary process until you've got a self-replicating molecule. (Darwin) gave us a theory of evolution about how life has evolved, but he didn't want to tangle with how you go from non-life to life, and for me, that's a much bigger step. 12

Paul Davies conceded in The Fifth Miracle, p. 138: “Unfortunately, before Darwinian evolution can start, a certain minimum level of complexity is required. But how was this initial complexity achieved? When pressed, most scientists wring their hands and mutter the incantation ‘Chance.’ So, did chance alone create the first self-replicating molecule?” 

Another science paper from 2015: This process ( abiogenesis) is still an unsolved problem. By itself, this transition is not an evolutionary one because, without hereditary replicators, no Darwinian evolution is possible. 13

And E. Koonin in his book The Logic of Chance (2012), p. 382: Evolution by natural selection and drift can begin only after replication with sufficient fidelity is established. Even at that stage, the evolution of translation remains highly problematic. The emergence of the first replicator system, which represented the “Darwinian breakthrough,” was inevitably preceded by a succession of complex, difficult steps for which biological evolutionary mechanisms were not accessible. The synthesis of nucleotides and (at least) moderate-sized polynucleotides could not have evolved biologically and must have emerged abiogenically—that is, effectively by chance abetted by chemical selection, such as the preferential survival of stable RNA species.

Jack W. Szostak: Functional primordial proteins presumably originated from random sequences

Physical necessity
Another alternative is to invoke physical necessity. That means that certain chemical interactions are constrained by the properties of the individual chemical constituents, which are observed to interact in specific ways. Among an infinite number of physical interactions, some combinations would have had to emerge that conferred some kind of advantage, leading toward structures that somehow, in the distant future, would lead to life. 

Luisi, The Emergence of Life; From Chemical Origins to Synthetic Biology, page 21: 
A deterministic answer assumes that the laws of physics and chemistry have causally and sequentially determined the obligatory series of events leading from inanimate matter to life – that each step is causally linked to the previous one and to the next one by the laws of nature. In principle, in a strictly deterministic situation, the state of a system at any point in time determines the future behavior of the system – with no random influences. To invoke a guided determinism toward the formation of life would only make sense if the construction of life was demonstrably a preferential, highly probable natural pathway. 15

And Peter Walde, in Prebiotic Chemistry: From Simple Amphiphiles to Protocell Models, wrote: Spontaneous self-assembly occurs when certain compounds associate through noncovalent hydrogen bonds, electrostatic forces, and nonpolar interactions that stabilize orderly arrangements of small and large molecules. The argument that chemical reactions in a primordial soup would not act upon pure chance, and that chemistry is not a matter of random chance and coincidence, is refuted by the fact that the information stored in DNA is not constrained by chemistry. Yockey shows that the rules of any communication system are not derivable from the laws of physics. He continues: “there is nothing in the physicochemical world that remotely resembles reactions being determined by a sequence and codes between sequences.” In other words, nothing in nonliving physics or chemistry obeys symbolic instructions. 7

Abel put it straight to the point in 2004: Selection pressure cannot select nucleotides at the digital programming level where primary structures form. Genomes predetermine the phenotypes that natural selection only secondarily favors. Contentions that offer nothing more than long periods of time offer no mechanism of explanation for the derivation of genetic programming. No new information is provided by such tautologies. The argument simply says it happened. As such, it is nothing more than blind belief. Science must provide a rational theoretical mechanism, empirical support, prediction fulfillment, or some combination of these three. If none of these three are available, science should reconsider that molecular evolution of genetic cybernetics is a proven fact and press forward with new research approaches which are not obvious at this time. 15

Unguided random accidental events
Iris Fry:  The role of natural selection in the origin of life  21 April 2010
Unlike living systems that are products of and participants in evolution, these prebiotic chemical structures were not products of evolution. Not being yet intricately organized, they could have emerged as a result of ordinary physical and chemical processes. 34

A. G. CAIRNS-SMITH Seven clues to the origin of life, (1990) page 36: And if you ask me how the next stage happened, how the smallish 'molecules of life' came together to make the first reproducing evolving being, I will reply: 'With time, and more time, and the resource of oceans.' I will sweep my arms grandly about. 'Because, you see, in the absence of oxygen the oceans would have accumulated "the molecules of life". The oceans would have been vast bowls of nutritious soup. Chance could do the rest.' 33

David E. Sadava: LIFE The Science of Biology, TENTH EDITION, (2012) page 3:  When we consider how life might have arisen from nonliving matter, we must take into account the properties of the young Earth’s atmosphere, oceans, and climate, all of which were very different than they are today. Biologists postulate that complex biological molecules first arose through the random physical association of chemicals in that environment. 35

Renato Fani: The Origin and Evolution of Metabolic Pathways: Why and How did Primordial Cells Construct Metabolic Routes? 15 September 2012
It is commonly assumed that early organisms arose and inhabited aquatic environments (oceans, rivers, ponds, etc.) rich in organic compounds spontaneously formed in the prebiotic world. 45

Wilhelm T. S. Huck (2019) Robustness, Entrainment, and Hybridization in Dissipative Molecular Networks, and the Origin of Life  
Life emerged spontaneously from the self-assembly, or spontaneous organization, of the organic products of reactions, occurring in complex mixtures of molecules formed abiotically from simple precursors and sequences of reactions.

Frozen accidents
Crick: The Origin of the Genetic Code,  1968:  Amino acids were substituted when they were able to confer a selective advantage until eventually the code became frozen in its present form. The evolution of the code sketched here has the property that it could produce a code in which the actual allocation of amino acid to codons is mainly accidental and yet related amino acids would be expected to have related codons. 46

Robert Root-Bernstein: Simultaneous origin of homochirality, the genetic code and its directionality 2007 Jul;29
Most theories of the origins of homochirality have assumed either the actions of a physical force (e.g. polarized light or magnetism) or a ‘‘frozen accident’’ that randomly set chiral preferences for one set of molecules. 47

Dmitry Yu Zubarev: Uncertainty of Prebiotic Scenarios: The Case of the Non-Enzymatic Reverse Tricarboxylic Acid Cycle: 26 January 2015:  We conclude that the rTCA cycle should have a low probability of a random realization. We also notice that its length and cost are close to the extreme values. Further selection into biological cycles may have occurred by other means, such as a frozen accident, that is, the selection and preservation of a particular pathway from the ensemble of possibilities due to an undetermined random event 48

Emergent properties: 
Another attempt would be to invoke simply emergent properties of basic chemical reactions. An emergent property is a property that a collection or complex system has, but that its individual members do not have. In biology, for example, the stomach is made of cells, including endocrine cells, which on their own do not have the capacity to take in and digest food; you need the whole stomach to digest food. Digestion is an emergent property of the stomach. The problem is that there is no mechanism that accounts for these emergent properties; one can only resort to unguided random events or physical necessity. That falls back, IMHO, on the two previously mentioned mechanisms, so emergence is not a genuine third option. In the end, we can invoke unguided stochastic events or physical necessity as possible alternatives, and there is a third: intelligent design. 

Time: the naturalist's friend? 
Can you imagine that this entire process could have come about by purely random, accidental events? It takes a big leap of faith to believe so. Yet that is precisely what atheists must go for when confronted with the analogous state of affairs observed in cells. So what is their justification? One answer often heard is time: millions, hundreds of millions, or even billions of years. But is time the naturalist's friend? It is not. Time does not complexify molecules; it disintegrates them. Long periods of time do not make life inevitable; they only make randomization more complete.

Time is not the atheist's friend. This is a frequently raised but unsophisticated argument for Darwinian evolution and the origin of life. You cannot just vaguely appeal to vast and unending amounts of time (and other probabilistic resources) and assume that Darwinian evolution, or whatever mechanism you propose for the origin of life, can produce anything "no matter how complex." Rather, you have to demonstrate that sufficient probabilistic resources or evolutionary mechanisms indeed exist to produce the feature.

Ilya Prigogine, Nobel Prize-winning chemist wrote:  The probability that at ordinary temperatures a macroscopic number of molecules is assembled to give rise to the highly ordered structures and to the coordinated functions characterizing living organisms is vanishingly small. The idea of spontaneous genesis of life in its present form is therefore highly improbable, even on the scale of the billions of years during which prebiotic evolution occurred 8

A. G. CAIRNS-SMITH Seven clues to the origin of life (1990) page 58:  Vast times and spaces do not make all that much difference to the level of competence that pure chance can simulate. Even to get 14 sixes in a row (with one dice following the rules of our game) you should put aside some tens of thousands of years. But for 7 sixes a few weeks should do, and for 3 sixes a few minutes. This is all an indication of the steepness of that cliff-face that we were thinking about: a three-step process may be easily attributable to chance while a similar thirty-step process is quite absurd. 
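Cairns-Smith's dice arithmetic can be checked directly. A minimal sketch, assuming one roll per second (the roll rate is our assumption; he does not state one), using the standard result that the expected number of rolls before a run of n consecutive sixes is (6^(n+1) - 6)/5:

```python
# Expected waiting time for a run of n consecutive sixes with a fair die,
# assuming one roll per second (our assumption, not Cairns-Smith's).
# Standard result for runs of successes with p = 1/6:
#   E_n = (6**(n+1) - 6) / 5 rolls.

def expected_rolls(n: int) -> int:
    """Expected number of rolls before n sixes in a row first appear."""
    return (6 ** (n + 1) - 6) // 5

SECONDS_PER_YEAR = 365 * 24 * 3600

for n in (3, 7, 14):
    secs = expected_rolls(n)  # at one roll per second, rolls = seconds
    if secs < 3600 * 24:
        print(f"{n:2d} sixes: ~{secs / 60:.0f} minutes")
    elif secs < SECONDS_PER_YEAR:
        print(f"{n:2d} sixes: ~{secs / (3600 * 24):.0f} days")
    else:
        print(f"{n:2d} sixes: ~{secs / SECONDS_PER_YEAR:.0f} years")
```

At one roll per second this gives roughly 4 minutes for 3 sixes, about 4 days for 7, and about 3,000 years for 14; a slower roll rate brings the last figure up to Cairns-Smith's "tens of thousands of years". The exact numbers matter less than the scaling he is illustrating: each additional required six multiplies the expected wait sixfold.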

Wald, G., Scientific American, 1954: "Given so much time, the "impossible" becomes possible. The possible probable. And the probable virtually certain. One only has to wait: Time itself performs the miracles."

Whenever we have presented that argument to atheists and unbelievers, we usually encountered easy dismissal, but this is the benchmark and bottom line, the overarching scheme of what goes on in biochemistry, biology, and life, and demands an adequate explanation.  How did this complexity come about?  Could it be the result of unguided, random, stochastic events that occurred about 4 billion years ago on the early earth? Science has been trying really hard to answer that question in the last hundred years, but its progress has been sobering. What happened? Rather than coming closer to answering successfully how life emerged on the early earth, science has discovered layers over layers of new complexity, and rather than closing the gaps, they have become larger and larger.

Abiogenesis research is a failure
Etiology, the science of causes, has led to dead ends in the search for the origin of the first living self-replicating cell: the hypothesis that natural, unguided random events were responsible founders on the realization that, without natural selection, chemistry produces results far too unspecific in a vast, basically limitless chemical space of possible sequence combinations of molecules. Despite this, popular science writers keep the zombie-science narrative artificially alive.
To tackle the problem of the origin of life, expertise from various fields and backgrounds has to be considered, coming from physicists and chemists, biochemists and biologists, engineers, geologists and bio-astrophysicists, informatics and computer experts, and paleontologists. 

A few of the many abiogenesis hurdles were outlined by Stanley Miller and Harold Urey in a paper published in 1959, "Intermediate Stages in Chemical Evolution": The major problems remaining for an understanding of the origin of life are (i) the synthesis of peptides, (ii) the synthesis of purines and pyrimidines, (iii) a mechanism by which "high-energy" phosphate or other types of bonds could be synthesized continuously, (iv) the synthesis of nucleotides and polynucleotides, (v) the synthesis of polypeptides with catalytic activity (enzymes), and (vi) the development of polynucleotides and the associated enzymes which are capable of self-duplication. This list of problems is based on the assumption that the first living organisms were similar in chemical composition and metabolism to the simplest living organisms still on the earth. 53

From time to time misleading articles in science journals pop up, claiming that scientists are coming close to solving the riddle. Here are a few examples:

Science magazine: 'RNA world' inches closer to explaining origins of life New synthesis path shows how conditions on early Earth could have given rise to two RNA bases 12 MAY 2016 23
At Phys.org: Chemists claim to have solved riddle of how life began on Earth MARCH 18, 2015 24
JAMES URTON, University Of Washington: Researchers Solve Puzzle of Origin of Life on Earth AUGUST 12, 2019 25
Physicist Lawrence Krauss promised: “We’re coming very close” to explaining the origin of life via chemical evolutionary models 26
Rutgers University: Scientists Have Discovered the Origins of the Building Blocks of Life March 16, 2020
BBC: Charles Darwin’s hunch about early life was probably right: The processes that happen in that warm little pond might happen so easily that they happen all the time – Lena Vincent

Several prominent origin-of-life researchers, however, have made opposing statements that portray the situation far more realistically. Basically, rather than being answered, the open questions raised by Miller, Urey, and others have remained unsolved. 

Graham Cairns-Smith: Genetic takeover page 64, 1988: the importance of this work lies, to my mind, not in demonstrating how nucleotides could have formed on the primitive Earth, but in precisely the opposite: these experiments allow us to see, in much greater detail than would otherwise have been possible, just why prevital nucleic acids are highly implausible. and page 66: Now you may say that there are alternative ways of building up nucleotides, and perhaps there was some geochemical way on the early Earth. But what we know of the experimental difficulties in nucleotide synthesis speaks strongly against any such supposition. However it is to be put together, a nucleotide is too complex and metastable a molecule for there to be any reason to expect an easy synthesis. 29

A. G. CAIRNS-SMITH  Seven Clues to the Origin of Life: A Scientific Detective Story 1990 page 14: The optimism ( about the origin of life) persists in many elementary textbooks. There is even, sometimes, a certain boredom with the question; as if it was now merely difficult because of an obscurity of view, a difficulty of knowing now the details of distant historical events. What a pity if the problem had really become like that! Fortunately, it hasn't. It remains a singular case (Sherlock Holmes' favorite kind): far from there being a million ways in detail in which evolution could have got underway, there seems now to have been no obvious way at all. The singular feature is in the gap between the simplest conceivable version of organisms as we know them, and components that the Earth might reasonably have been able to generate. This gap can be seen more clearly now. It is enormous. Evolution through natural selection depends on there being a modifiable hereditary memory - forms of that special kind that survive through making copies of copies..., Successions of machines that can remember like this, i.e. organisms, seem to be necessarily very complicated. Even man the engineer has never contrived such things. How could Nature have done so before its only engineer, natural selection, had had the means to operate? If life really did arise on the Earth ' through natural causes' then it must be that either there does not, after all, have to be a long-term hereditary memory for evolution, or organisms do not, after all, have to be particularly complex. Suddenly in our thinking we are faced with the seemingly unequivocal need for a fully working machine of incredible complexity: a machine that has to be complex, it seems, not just to work well but to work at all. Is there cause to complain about this official tourist route to the mountain? Is it just a garden path that we have been led along - easy walking, but never getting anywhere? I think it is. 
And I think we have been misled by what seem to be the two main clues: the unity of biochemistry and what is said to be the ease with which 'the molecules of life' can be made. 33

Robert Shapiro: A Replicator Was Not Involved in the Origin of Life 2008: A profound difficulty exists, however, with the idea of RNA, or any other replicator, at the start of life. Existing replicators can serve as templates for the synthesis of additional copies of themselves, but this device cannot be used for the preparation of the very first such molecule, which must arise spontaneously from an unorganized mixture. The formation of an information-bearing homopolymer through undirected chemical synthesis appears very improbable. 30 

Eugene V. Koonin: The Logic of Chance, page 252, 2012: "The origin of life is the most difficult problem that faces evolutionary biology and, arguably, biology in general. Indeed, the problem is so hard and the current state of the art seems so frustrating that some researchers prefer to dismiss the entire issue as being outside the scientific domain altogether, on the grounds that unique events are not conducive to scientific study. Despite many interesting results to its credit, when judged by the straightforward criterion of reaching (or even approaching) the ultimate goal, the origin of life field is a failure—we still do not have even a plausible coherent model, let alone a validated scenario, for the emergence of life on Earth. Certainly, this is due not to a lack of experimental and theoretical effort, but to the extraordinary intrinsic difficulty and complexity of the problem. A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle. The difficulties remain formidable. For all the effort, we do not currently have coherent and plausible models for the path from simple organic molecules to the first life forms. Most damningly, the powerful mechanisms of biological evolution were not available for all the stages preceding the emergence of replicator systems. Given all these major difficulties, it appears prudent to seriously consider radical alternatives for the origin of life." 27


Steve Benner:  Paradoxes in the origin of life 2014: Discussed here is an alternative approach to guide research into the origins of life, one that focuses on “paradoxes”, pairs of statements, both grounded in theory and observation, that (taken together) suggest that the “origins problem” cannot be solved.

Kenji Ikehara Evolutionary Steps in the Emergence of Life Deduced from the Bottom-Up Approach and GADV Hypothesis (Top-Down Approach) 2016 Mar; 6
(1) nucleotides have not been produced from simple inorganic compounds through prebiotic means and have not been detected in any meteorites, although a small quantity of nucleobases can be obtained.
(2) It is quite difficult or most likely impossible to synthesize nucleotides and RNA through prebiotic means.
(3) It must also be impossible to self-replicate RNA with catalytic activity on the same RNA molecule.
(4) It would be impossible to explain the formation process of genetic information according to the RNA world hypothesis, because the information is comprised of triplet codon sequence, which would never be stochastically produced by joining of mononucleotides one by one.
(5) The formation process of the first genetic code cannot be explained by the hypothesis either, because a genetic code composed of around 60 codons must be prepared to synthesize proteins from the beginning.
(6) It is also impossible to transfer catalytic activity from a folded RNA ribozyme to a protein with a tertiary structure. 

Edward J.Steele (2018):  The idea of abiogenesis should have long ago been rejected.…the dominant biological paradigm — abiogenesis in a primordial soup. The latter idea was developed at a time when the earliest living cells were considered to be exceedingly simple structures that could subsequently evolve in a Darwinian way. These ideas should of course have been critically examined and rejected after the discovery of the exceedingly complex molecular structures involved in proteins and in DNA. But this did not happen. Modern ideas of abiogenesis in hydrothermal vents or elsewhere on the primitive Earth have developed into sophisticated conjectures with little or no evidential support.  …independent abiogenesis on the cosmologically diminutive scale of oceans, lakes, or hydrothermal vents remains a hypothesis with no empirical support…The conditions that would most likely have prevailed near the impact-riddled Earth’s surface 4.1–4.23 billion years ago were too hot even for simple organic molecules to survive let alone evolve into living complexity. The requirement now, on the basis of orthodox abiogenic thinking, is that an essentially instantaneous transformation of non-living organic matter to bacterial life occurs, an assumption we consider strains credibility of Earth-bound abiogenesis beyond the limit. The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of super astronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure. 32

In 2009, a headline made the news; Wired magazine wrote: Life's First Spark Re-Created in the Laboratory. And Science magazine published an article by biotechnologist Craig Venter: Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome 20 What Craig Venter did was copy an existing bacterial genome and transplant it into another cell, resulting in a minimal genome unlike anything in nature. Claiming that this experiment recreated life in the laboratory was evidently a far stretch, and nothing of the sort. The challenge is to start with biomolecules lying around on the early earth and to sort, purify, and complexify them by naturally occurring mechanisms. (RNA and DNA are complex, specified molecules, each nucleotide made of a base, the sugar ribose forming the backbone, and phosphate; the three parts have to be joined together at the right prime positions. The same holds for the other three basic building blocks: amino acids, phospholipids, and carbohydrates. But more on that later.) Creating life out of non-living components has to start from scratch.

In 2014, a "Search for life" gathering in New York hosted Nobel Prize laureate and Professor of Genetics Jack W. Szostak. He boldly claimed that he expected to make "life in the lab" in three to five years. And more likely within three years. 16

The July-August 2019 edition of Harvard Magazine featured an article, "How Life Began: Jack Szostak's pursuit of the biggest questions on Earth". At the end of the article: “When Crick and Watson sat down and started making cardboard models of the structure of DNA, they had no idea that it would spawn an industry worth billions of dollars 70 years later,” John Sutherland notes. Szostak remains committed to chipping away at those big, challenging questions, continuing the work of decades. “I do hope to be able to build an evolving cellular system before I retire,” he says. He’s optimistic about his chances. “I think we’re getting there. There are a few more hard problems, and then I think everything will hopefully be solved in a couple of years.”   

Szostak did not fulfill his own prediction, but remained hopeful of being able to build an evolving cellular system before he retired, and expressed optimism about his chances. Since the early fifties, when Watson and Crick discovered the structure of DNA, science has moved forward in gigantic steps. We invented optical fibers, the passenger jet, computer programming languages, solar cells, the microchip, and the credit card. Today we do space tourism, man went to the moon, and billions of households have computers in their homes, interconnected worldwide. But when it comes to origin-of-life research, after seven decades and an industry spending billions every year, the result is best expressed by Steve Benner, former Harvard University professor, who wrote in 2012:


[Image: quotation from Steve Benner]

1. Denton: Evolution: A Theory in Crisis, 1986, pp. 249,  p. 329. 
5. [url=https://www.discovery.org/a/54/]Experimental Support for the Design Inference[/url] DECEMBER 27, 1987
6. Steven A Benner: Paradoxes in the origin of life 2015 Jan 22 
7. Peter Walde: Prebiotic Chemistry: From Simple Amphiphiles to Protocell Models: 12 February 2010 
https://www.amazon.com.br/Prebiotic-Chemistry-Simple-Amphiphiles-Protocell/dp/3642066143
8. https://libquotes.com/ilya-prigogine/quotes/macroscopic
9. Astrophysicist Breaks Down The Origins Of Life | Edge Of Knowledge | Ars Technica Apr 19, 2022 
10. M Vaneechoutte: The scientific origin of life. 
11. Juli Peretó: Darwinism and the Origin of Life 29 August 2012 
12. Paul Davies: Why Darwinian evolution does NOT explain the origin of life Sep 2, 2021 
13. Eörs Szathmáry: Toward major evolutionary transitions theory 2.0 
14. Jack W. Szostak: Functional proteins from a random-sequence library 5 APRIL 2001 
15. Abel: Chance and necessity do not explain the origin of life 2004 
16 . http://darwin-online.org.uk/content/frameset?itemID=F373&viewtype=image&pageseq=1
16. Suzan Mazur: Jack Szostak: 3 June 2014 
17. ERIN O'DONNELL: How Life Began Jack Szostak’s pursuit of the biggest questions on Earth JULY-AUGUST 2019 
18. Steve Benner:  Paradoxes in the origin of life. 2015 Jan 22 
19. Life's First Spark Re-Created in the Laboratory MAY 13, 2009 
20. J. CRAIG VENTER: Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome 20 May 2010 
21. Sara I. Walker: Re-conceptualizing the origins of life  2017 Dec 28 
22. Norio Kitadai:  Origins of building blocks of life: A review 12 August 2017
23. 'RNA world' inches closer to explaining origins of life: New synthesis path shows how conditions on early Earth could have given rise to two RNA bases 12 MAY 2016 
24. Bob Yirka, Phys.org:  Chemists claim to have solved riddle of how life began on Earth MARCH 18, 2015
25. JAMES URTON, University Of Washington: Researchers Solve Puzzle of Origin of Life on Earth AUGUST 12, 2019  
26. Krauss, Meyer, Lamoureux: What’s Behind it all? God, Science and the Universe.  on Mar 19, 2016 
27. Eugene V. Koonin: The Logic of Chance: The Nature and Origin of Biological Evolution  2012 
28. Steve Benner:  Paradoxes in the origin of life 2014 
29. A. G. Cairns-Smith:  Genetic Takeover: And the Mineral Origins of Life 
30. Robert Shapiro: A Replicator Was Not Involved in the Origin of Life  18 January 2008 
31. Kenji Ikehara: Evolutionary Steps in the Emergence of Life Deduced from the Bottom-Up Approach and GADV Hypothesis (Top-Down Approach) 2016 Jan 26 
32. Edward J.Steele: Cause of Cambrian Explosion - Terrestrial or Cosmic? August 2018 
33. A. G. CAIRNS-SMITH:  Seven Clues to the Origin of Life: A Scientific Detective Story 1990 
34. Iris Fry: The role of natural selection in the origin of life  21 April 2010 
35. David E. Sadava: LIFE The Science of Biology, TENTH EDITION, 2012 
36. Rutgers University: Scientists Have Discovered the Origins of the Building Blocks of Life March 16, 2020  
37. Robert M. Hazen: The Emergence of Chemical Complexity: An Introduction February 15, 2008 
38. E. Camprubí: The Emergence of Life  27 November 2019 
39. https://www.guinnessworldrecords.com/world-records/smallest-living-organism-
40. European Molecular Biology Laboratory: First-ever blueprint of 'minimal cell' is more complex than expected November 27, 2009 
41. Kühner et al. Proteome Organization in a Genome-Reduced Bacterium. Science, 2009; 
42. Yus et al. Impact of Genome Reduction on Bacterial Metabolism and Its Regulation. Science, 2009; 
43. Güell et al. Transcriptome Complexity in a Genome-Reduced Bacterium. Science, 2009; 
44. Marian Breuer: Essential metabolism for a minimal cell  Jan 18, 2019 
45. Renato Fani: The Origin and Evolution of Metabolic Pathways: Why and How did Primordial Cells Construct Metabolic Routes? 15 September 2012 
46. Crick: The Origin of the Genetic Code,  1968 
47. Robert Root-Bernstein: Simultaneous origin of homochirality, the genetic code and its directionality 2007 Jul;29 
48. Dmitry Yu Zubarev: Uncertainty of Prebiotic Scenarios: The Case of the Non-Enzymatic Reverse Tricarboxylic Acid Cycle: 26 January 2015 
49. Aristoteles: On the Generation of Animals
50. Russell Levine: The Slow Death of Spontaneous Generation (1668-1859) 
51. Phillip E. Johnson: Darwin on Trial Paperback  October 5, 2010
52. Cairns-Smith, A.G., Genetic Takeover: And the Mineral Origins of Life  1982
53. MILLER & UREY: Organic Compound Synthesis on the Primitive Earth: Several questions about the origin of life have been answered, but much remains to be studied 31 Jul 1959
54. Jessica Wimmer and William Martin: [url=https://blog.frontiersin.org/2022/01/19/frontiers-mi



Last edited by Otangelo on Mon Jul 25, 2022 9:02 am; edited 69 times in total

https://reasonandscience.catsboard.com


We are now 60 years into the modern era of prebiotic chemistry. That era has produced tens of thousands of papers attempting to define processes by which “molecules that look like biology” might arise from “molecules that do not look like biology” …. For the most part, these papers report “success” in the sense that those papers define the term…. And yet, the problem remains unsolved 18

J. Tour (2017): We synthetic chemists should state the obvious. The appearance of life on earth is a mystery. We are nowhere near solving this problem. The proposals offered thus far to explain life’s origin make no scientific sense. 58

To state it bluntly: the total lack of any experimental evidence leading to the re-creation of life, not to mention its spontaneous emergence, is a humiliating embarrassment for the proponents of naturalism and the whole so-called “scientific establishment” around it, because it undermines the worldview of those who want naturalism to be true. No scientific experiment has come even close to synthesizing the basic building blocks of life and producing a self-replicating cell in the laboratory through self-assembly and autonomous organization.

Life requires 1. Matter, 2. Energy, and 3. Information.
Emily Singer (2015): How Structure Arose in the Primordial Soup:  About 4 billion years ago, molecules began to make copies of themselves, an event that marked the beginning of life on Earth. A few hundred million years later, primitive organisms began to split into the different branches that make up the tree of life. In between those two seminal events, some of the greatest innovations in existence emerged: the cell, the genetic code, and an energy system to fuel it all. ALL THREE of these are ESSENTIAL to life as we know it, yet scientists know disappointingly little about how any of these remarkable biological innovations came about. 1

Singer outlines here a fundamental key point. Life depends on three factors: 1. matter, 2. energy, 3. information. A cell is made of matter, but it is not simply matter. A cell is an integrated system that has form. It is complex, compartmentalized, organized, and full of functional, interlocked, and interdependent parts. To get its form, atoms have to be arranged in a specific order; they form molecules. Life depends on four specified, complex building blocks: nucleotides, amino acids, phospholipids, and carbohydrates. Energy is the capacity to move matter. For life to thrive, the cell has to be brought into an energized state, out of equilibrium, and kept there. Energy in the form of ATP, the energy currency of the cell, is necessary to move atoms and molecules into the right place and to maintain the cell's inner operations. But making ATP is by no means an easy task: ingenious, high-tech solutions, such as molecular energy turbines and power plants, have to be implemented first. Information is non-physical; it dictates form. Prescribed information directs the particular arrangement or sequence of things. Cells are full of information, stored in genes and in epigenetic information-storage mechanisms, that dictates cellular behavior. The origin of all three demands an explanation in abiogenesis research. Could they have originated independently of one another? As we will see, cells are full of interdependencies. The cell is irreducibly complex.

There was no prebiotic selection to get the basic building blocks of life
This is perhaps the single most acute problem of abiogenesis. Molecules have nothing to gain by becoming the building blocks of life. They are "happy" to lie on the ground or float in the early ocean, and that's it. What natural mechanisms lack is goal-directedness, and that is a big problem for naturalistic explanations of the origin of life. There was a potentially unlimited variety of molecules on the prebiotic earth. Why should competition and selection among them have occurred at all, promoting a separation of the molecules used in life from those that are useless? Selection is too limited in scope, and powerless, to explain all of the living order: even the ability to maintain order in the short term, let alone the emergence, overall organization, and long-term persistence of life from non-living precursors. It is an error of false conceptual reduction to suppose that competition and selection are thereby the source of explanation for all relevant forms of order. Selecting the right materials is absolutely essential, but a prebiotic soup of impure chemical mixtures would never purify and select those required for life. Chemicals and physical reactions have no "urge" to join, group, and start interacting in a purposeful, goal-oriented way to produce molecules that would later perform specific functions and generate self-replicating chemical factories. Natural selection cannot be invoked before a system exists that is capable of accurately reproducing and self-replicating all its parts.

William Dembski: The problem is that nature has too many options and without design couldn’t sort through all those options. The problem is that natural mechanisms are too unspecific to determine any particular outcome. Natural processes could theoretically form a protein, but also compatible with the formation of a plethora of other molecular assemblages, most of which have no biological significance. Nature allows them full freedom of arrangement. Yet it’s precisely that freedom that makes nature unable to account for specified outcomes of small probability. Nature, in this case, rather than being intent on doing only one thing, is open to doing any number of things. Yet when one of those things is a highly improbable specified event, design becomes the more compelling, better inference. Occam's razor also boils down to an argument from ignorance: in the absence of better information, you use a heuristic to accept one hypothesis over the other. 2

A. G. CAIRNS-SMITH, Seven Clues to the Origin of Life (2000): It is one of the most singular features of the unity of biochemistry that this mere convention is universal. Where did such agreement come from? You see, non-biological processes do not as a rule show any bias one way or the other, and it has proved particularly difficult to see any realistic way in which any of the constituents of a 'prebiotic soup' would have had predominantly 'left-handed' or 'right-handed' molecules. It is thus particularly difficult to see this feature as having been imposed by initial conditions. 3

Intractable Mixtures and the Origin of Life (2007): A problem which is familiar to organic chemists is the production of unwanted byproducts in synthetic reactions. For prebiotic chemistry, where the goal is often the simulation of conditions on the prebiotic Earth and the modeling of a spontaneous reaction, it is not surprising – but nevertheless frustrating – that the unwanted products may consume most of the starting material and lead to nothing more than an intractable mixture, or "gunk". Whatever the exact nature of an RNA precursor which may have become the first self-replicating molecule, how could the chemical homogeneity which seems necessary to permit this kind of mechanism to even come into existence have been achieved? What mechanism would have selected for the incorporation of only threose, or ribose, or any particular building block, into short oligomers which might later have undergone chemically selective oligomerization? Virtually all model prebiotic syntheses produce mixtures. 6

Katarzyna Adamala (2014) makes the same point: Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult-to-characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues. 7

Robert Hazen (2007): Prebiotic processes produced a bewildering diversity of seemingly useless molecules; most of the molecular jumble played no obvious role. The emergence of concentrated suites of just the right mix thus remains a central puzzle in origin-of-life research. Life requires the assembly of just the right combination of small molecules into much larger collections - "macromolecules" with specific functions. Making macromolecules is complicated by the fact that for every potentially useful small molecule in the prebiotic soup, dozens of other molecular species had no obvious role in biology. Life is remarkably selective in its building blocks, whereas the vast majority of carbon-based molecules synthesized in prebiotic processes have no obvious biological use. Consequently, a significant challenge in understanding life's chemical emergence lies in finding mechanisms by which the right combination of small molecules was selected, concentrated and organized into the larger macromolecular structures vital to life. 9

Lena Vincent and colleagues (2021): The biggest outstanding problem in understanding the origins of life is how the components of prebiotic soup came to be organized in systems capable of emergent processes such as growth, self-propagation, information processing, and adaptive evolution. Given that prebiotic soups may have been composed of millions of distinct compounds, each at a low concentration, another mystery is how processes winnowed this molecular diversity down to the few compounds used by biology today, which are a tiny subset of the many compounds that would have arisen from abiotic processes. Undoubtedly, the best-characterized carbonaceous chondrite is the Murchison meteorite, whose organic components have been extensively catalogued. Amino, hydroxy, and carboxylic acids are among some of the important biologically relevant components, though it should be borne in mind that untargeted analyses suggest there may be several million relatively low molecular weight compounds present, and thus the compounds of biological relevance are only a small fraction of the non-biological suite. 32

Stuart Kauffman et al.: Understanding chemical evolution is difficult partly due to the astronomical number of possible molecules and chemical reactions. 35

1. Life requires a limited set of complex biomolecules, a universal convention and unity composed of the four basic building blocks of life (RNA and DNA, amino acids, phospholipids, and carbohydrates). These have a very specific, complex, functional composition and are made by cells in extremely sophisticated, orchestrated metabolic pathways, which did not exist on the early earth. For abiogenesis to be true, these biomolecules had to be prebiotically available, occurring naturally (in non-enzyme-catalyzed ways, by natural means), and then somehow join in an organized way to form the first living cells. They had to be available in large quantities and concentrated at one specific building site.
2. Making things for a specific purpose, for a distant goal, requires goal-directedness, and that is a big problem for naturalistic explanations of the origin of life. There was a potentially unlimited variety of molecules on the prebiotic earth. Competition and selection among them would never have occurred to promote a separation of the molecules used in life from those that are useless. Selection is too limited in scope, and powerless, to explain all of the living order: even the ability to maintain order in the short term, let alone the emergence, overall organization, and long-term persistence of life from non-living precursors. It is an error of false conceptual reduction to suppose that competition and selection are thereby the source of explanation for all relevant forms of the living order.
3. We know that a) unguided, random, purposeless events are extremely unlikely to make specific, purposeful elementary components for building large, integrated macromolecular systems, and b) intelligence has goal-directedness. Bricks do not form from clay by themselves, then line up to make walls; someone made them. Phospholipids do not form from glycerol, a phosphate group, and two fatty acid chains by themselves, then line up to make cell membranes; someone made them. That is the creator. 
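The scale of the improbability being claimed above can be made concrete with a back-of-the-envelope combinatorial calculation. The specific numbers below (a 150-residue protein chain, a single uniform random draw) are illustrative assumptions for this sketch, not figures taken from the sources quoted in this chapter:

```python
# Illustrative combinatorics: how many distinct linear sequences exist for a
# modest protein-sized polymer, and how small a fraction any one specific
# target sequence is. Chain length (150) and alphabet size (20 standard
# amino acids) are assumed values for illustration only.
from math import log10

alphabet = 20        # standard proteinogenic amino acid types
chain_length = 150   # a modest protein length (assumption)

total_sequences = alphabet ** chain_length
print(f"Distinct possible sequences: about 10^{log10(total_sequences):.0f}")

# Probability of hitting one particular target sequence in a single
# uniform random draw:
print(f"Chance per random trial: about 10^-{log10(total_sequences):.0f}")
```

The point of the sketch is only the order of magnitude: 20^150 is roughly 10^195 possibilities, so a single specified sequence occupies a vanishingly small fraction of the space, which is the "too many options" problem Dembski describes above.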

Undesired contamination, and mixtures
Another problem is contamination. Saidul Islam and colleagues explain: A central problem for the prebiotic synthesis of biological amino acids and nucleotides is to avoid the concomitant synthesis of undesired or irrelevant by-products. Additionally, multistep pathways require mechanisms that enable the sequential addition of reactants and purification of intermediates that are consistent with reasonable geochemical scenarios. To avoid the concomitant synthesis of undesired or irrelevant by-products alongside the desired biologically relevant molecules is one of the central challenges to the development of plausible prebiotic chemistry. Previous models have advocated that kinetically controlled, segregated syntheses (under different local geochemical conditions) are required to overcome the incompatibility of distinct reactions. However, these models are necessarily highly contingent on the rapid exploitation of reagents as and when they form. Accordingly, they are reliant on achieving a specific and controlled order of synthetic steps under geochemical constraints and also they are incompatible with the accumulation or purification of intermediates.36

What is available in a laboratory, namely chemists who can carefully set up a sequential biosynthesis process and use purified reactants, was not there on the prebiotic earth. In cells, all this is possible because it occurs in a protected environment. The elements are imported and often transformed into a biologically useful form, like the chelation of Fe on the surface of cell membranes, using sophisticated molecular machines such as enzymes and proteins.

Biomolecules decompose and degrade. They do not complexify
Even though the earth is an open system receiving energy from the sun, the second law still applies. Molecules have a natural tendency to disintegrate, not to complexify. The spontaneous synthesis of complex biomolecules therefore runs against what thermodynamics dictates.

Life in any form is a very serious enigma and conundrum. Whatever biochemical pathways, machinery, and enzymes are involved, it does something that should not, and honestly could not, ever "get off the ground": it SPONTANEOUSLY recruits Gibbs free energy from its environment so as to reduce its own entropy. That is tantamount to a rock continuously recruiting the wind to roll it up the hill, or a rusty nail "figuring out" how to spontaneously add layers of galvanizing zinc to itself to fight corrosion. Unintelligent simple chemicals can't self-organize into instructions for building solar farms (photosystems I and II), hydroelectric dams (ATP synthase), propulsion (motor proteins), self-repair (p53 tumor suppressor proteins), or self-destruction (caspases) in the event that these instructions become too damaged by the way the universe USUALLY operates. Abiogenesis is not an issue that scientists simply need more time to figure out, but a fundamental problem with materialism. 31

Steve Benner (2012): The Paradox at the Center of the Bio-origins Problem:  At the center of the problem of bio-origins lies a contrast between observations made routinely in two fields. In chemistry, when free energy is applied to organic matter without Darwinian evolution, the matter devolves to become more and more “asphaltic”, as the atoms in the mixture are rearranged to give ever more molecular species. Even nonchemists know of this observation, perhaps from having left cooking unattended in a kitchen. In the resulting “asphaltization”, what was life comes to display fewer and fewer characteristics of life. The paradox lies at the center of the bio-origins puzzle. Regardless of the organic materials or the kinds of energy present early on Earth, chemists expect that a natural devolution took them away from biology toward asphalt. 4

David Deamer (2017) reports the same problem: It is clear that non-activated nucleotide monomers can be linked into polymers under certain laboratory conditions designed to simulate hydrothermal fields. However, both monomers and polymers can undergo a variety of decomposition reactions that must be taken into account because biologically relevant molecules would undergo similar decomposition processes in the prebiotic environment. 5

Timothy R. Stout (2019): A Natural Origin-of-Life: Every Hypothetical Step Appears Thwarted by Abiogenetic Randomization:
Prebiotic processes naturally randomize their feedstock. This has resulted in the failure of every experimentally tested hypothetical step in abiogenesis, beginning with the 1953 Miller-Urey experiment and continuing to the present. Not a single step has been demonstrated that starts with appropriate supply chemicals, operates on the chemicals with a prebiotic process, and yields new chemicals that represent progress towards life and which can also be used in a subsequent step as produced. Instead, the products of thousands of experiments over more than six decades consistently exhibit either increased randomization over their initial composition or no change. We propose the following hypothesis of Abiogenetic Randomization as the root cause for most if not all of the failures:

1) prebiotic processes naturally form many different kinds of products; life requires a few very specific kinds.
2) The needs of abiogenesis spatially and temporally are not connected to and do not change the natural output of prebiotic processes.
3) Prebiotic processes naturally randomize feedstock. A lengthy passage of time only results in more complete randomization of the feedstock, not eventual provision of chemicals suitable for life. The Murchison meteorite provides a clear example of this.
4) At each hypothetical step of abiogenesis, the ratio of randomized to required products proves fatal for that step.
5) The statistical law of large numbers applies, causing incidental appearances of potentially useful products eventually to be overwhelmed by the overall, normal product distribution.
6) The principle of emergence magnifies the problems: the components used in the later steps of abiogenesis become so intertwined that a single-step first appearance of the entire set is required. Small molecules are not the answer. Dynamic self-organization requires from the beginning large proteins for replication, metabolism, and active transport. Many steps across the entire spectrum of abiogenesis are examined, showing how the hypothesis appears to predict the observed problems qualitatively. There is broad experimental support for the hypothesis at each observed step with no currently known exceptions.

Just as there are no betting schemes that allow a person to overcome randomness in a casino, there appear to be no schemes able to overcome randomness using prebiotic processes. We suggest that an unwillingness to acknowledge this has led to the sixty-plus years of failure in the field. There is a large body of evidence—essentially all experiments in abiogenesis performed since its inception sixty-plus years ago—that appears to be consistent with the hypothesis presented in this paper. Randomization prevails. 8
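Stout's fifth point, that the law of large numbers swamps incidental useful products, can be sketched with a toy simulation. Everything in the sketch is an illustrative assumption (10,000 equally likely product species, 10 of them arbitrarily marked "useful"); it shows only that, under a fixed random product distribution, the useful fraction settles at its low base rate rather than accumulating over time:

```python
# Toy Monte Carlo: a process emits one of many product species per trial.
# A few species are marked "useful". As trials accumulate, the useful
# fraction converges to its (low) base rate -- it does not grow over time.
# All numbers here are illustrative assumptions, not empirical values.
import random

random.seed(1)
n_species = 10_000           # distinct possible products (assumed)
useful = set(range(10))      # 10 species arbitrarily marked "useful"
base_rate = len(useful) / n_species   # 0.001

n_trials = 200_000
useful_count = 0
for _ in range(n_trials):
    product = random.randrange(n_species)   # uniform product distribution
    if product in useful:
        useful_count += 1

fraction = useful_count / n_trials
print(f"Useful fraction after {n_trials} trials: {fraction:.4f} "
      f"(base rate {base_rate})")
```

Running the loop longer only tightens the observed fraction around the base rate, which is the "overwhelmed by the overall, normal product distribution" behavior the passage describes; whether this toy model fairly represents prebiotic chemistry is, of course, exactly what the surrounding argument is about.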

Steven Benner (2012): RNA has been called a “prebiotic chemist's nightmare” because of its combination of large size, carbohydrate building blocks, bonds that are thermodynamically unstable in water, and overall intrinsic instability. Many bonds in RNA are thermodynamically unstable with respect to hydrolysis in water, creating a “water problem”. Finally, some bonds in RNA appear to be “impossible” to form under any conditions considered plausible for early Earth. In chemistry, when free energy is applied to organic matter without Darwinian evolution, the matter devolves to become more and more “asphaltic”, as the atoms in the mixture are rearranged to give ever more molecular species. In the resulting “asphaltization”, what was life comes to display fewer and fewer characteristics of life.

Biologists routinely observe the opposite. In the biosphere, when free energy is provided to organic matter that does have access to Darwinian evolution, that matter does not become asphaltic. Instead, “life finds a way” to exploit available raw materials, including atoms and energy, to create more of itself and, over time, better of itself. This observation is made across the Earth, from its poles to the equator, from high in the atmosphere to the deepest oceans, and in humidities that cover all but the very driest. The contrast between these commonplace observations in chemistry versus commonplace observations in biology embodies the paradox that lies at the center of the bio-origins puzzle. Regardless of the organic materials or the kinds of energy present early on Earth, chemists expect that a natural devolution took them away from biology toward asphalt. To escape this asphaltic fate, this devolution must have transited a chemical system that was, somehow, able to sustain Darwinian evolution. Otherwise, the carbon on Earth would have ended up looking like the carbon in the Murchison meteorite (or the La Brea tar pits without the fossils). 4

Chapter 4

The earth, and the atmosphere, just right for life
In 2019, NASA reported that over 4,000 exoplanets were known, and many more are certainly out there. One of the criteria for determining whether one of these could be habitable and sustain life is to elucidate the gas composition of its atmosphere. 33 The earth is the only known planet equipped with an atmosphere of the right mixture and composition of gases to sustain plant, animal, and human life. The strength of gravity at the earth's surface prevents the atmosphere from losing its water to space too fast. Atmospheric pressure enables our lungs to function and water to evaporate at an optimal rate to support life. The atmosphere's transparency allows an optimal range of life-giving solar radiation to reach the surface. Its capacity to hold water vapor provides for stable temperature and rainfall ranges. The levels of carbon dioxide and carbon monoxide must fall within a very narrow range to permit advanced life forms. 34 If the carbon dioxide level in the atmosphere were greater, runaway greenhouse effects would develop; if less, plants would be unable to maintain efficient photosynthesis. If the oxygen quantity in the atmosphere were greater, plants and hydrocarbons would burn up too easily; if less, advanced animals would have too little to breathe. The atmosphere also requires just the right quantity of carbon monoxide, the correct chlorine quantity, the correct water vapor level, the correct quantity of greenhouse gases, and the correct rate of change in greenhouse gases.

Essential elements and building blocks for the origin of life
Twenty-five elements are essential for life. Of these, six are the fundamental building blocks of life. They are, in order of least to most common: sulfur, phosphorus, oxygen, nitrogen, carbon, and hydrogen. The remaining 19 elements are defined as trace elements, which are important but required only in very small quantities. 20
We can call them the anthropic elements. About 96% of living matter is composed of the big four: carbon, nitrogen, oxygen, and hydrogen. Another 3.5% consists of major elements, and 0.5% of trace elements. The origin and source of the materials that compose the fundamental building blocks of life is a fundamental origin-of-life question.

Six of the anthropic elements in the periodic table are called biogenic elements. These are the most essential ones in the biochemical processes of life. They are also called organogenic elements (carbon, nitrogen, oxygen, hydrogen, phosphorus, and sulfur). They make up over 97.5% of the mass of all organisms. But life is only possible if these elements are combined into molecules. When carbon is linked to hydrogen atoms, the resulting molecules are called organic.

Paul G. Falkowski explains in a chapter of FUNDAMENTALS OF GEOBIOLOGY:
Under Earth’s surface conditions, the addition of hydrogen atoms to carbon requires the addition of energy, while the oxidation of carbon-hydrogen (C–H) bonds yields energy. Indeed the oxidation of C–H bonds form the basis of energy production for all life on Earth. 28

Tara Yarlagadda, in an article published in December 2021:
The most essential element for life is hydrogen, which is necessary for carbon fixation — the process where carbon dioxide gets converted into organic compounds used to store energy in living beings. “Without hydrogen, nothing happens at all, because hydrogen is required to get carbon from carbon dioxide incorporated into metabolism in the first place,”  29

Carbon and hydrogen combine into hydrocarbons, which make up the hydrocarbon chains in cell membranes. If we combine carbon with hydrogen and oxygen, we get carbohydrates like glucose, one of the life-essential quartet of building blocks. Join five elements, carbon, hydrogen, oxygen, nitrogen, and sulfur, and we get amino acids and proteins, the workhorses of the cell. And if we join carbon, hydrogen, oxygen, nitrogen, and phosphorus, we get RNA and DNA, the information storage and transmission molecules of life.


Energy cycles, how did they "take off"?
Several energy cycles constantly cycle and transform the elements through the earth's atmosphere, surface, and crust, sustaining a life-supporting planet and permitting the formation of minerals and the energy required for the emergence of advanced, technology-based civilizations like ours. The following biogeochemical cycles are essential for advanced life on earth: the Hydrologic Cycle (Water Cycle), the Carbon Cycle, the Oxygen Cycle, the Nitrogen Cycle, the Global Carbon Cycle, and the Phosphorus, Iron, and Trace Mineral cycles. Carbon, hydrogen, nitrogen, and oxygen are constantly recycled through the atmosphere and the earth's crust. Microorganisms and plants are essential to these cycles. As a science paper from 2014 points out:

Microbes are critical in the process of breaking down and transforming dead organic material into forms that can be reused by other organisms. This is why the microbial enzyme systems involved are viewed as key ‘engines’ that drive the Earth's biogeochemical cycles. 16

An energy cycle between photosynthesis and cellular respiration sustains life on earth. Advanced multicellular life forms, animals and humans, get the energy they need through respiration: we inhale oxygen, combine it with carbohydrates, and breathe out carbon dioxide, doing the reverse of photosynthesis. Photosynthesizers like cyanobacteria do the opposite: they split water, producing oxygen as a waste product, and use the energy generated in the process to take up carbon dioxide from the air and transform it into carbohydrates. In other words, oxygen breathers turn oxygen into carbon dioxide, while cyanobacteria, algae, and plants absorb the energy of sunlight and carbon dioxide (CO2) from the air and transform them into food (carbohydrates, sugars) and oxygen. And so the cycle closes.

Gonzalez, The privileged planet, page 36:
Earth’s ability to regulate its climate hinges on both water and carbon, not least because carbon dioxide and water vapor—and to a lesser extent, methane—are important atmospheric greenhouse gases. These life-essential vapors are freely exchanged among our planet’s living creatures, atmosphere, oceans, and solid interior. Moreover, carbon dioxide is highly soluble in water. Together, they create a unified climate feedback system and have kept Earth a lush planet. Indeed, it’s hard to ignore the need for the planetary environment to be so closely linked to the chemistry of life. We’re made from the dust of Earth and to dust we will return. Life, rocks, and the atmosphere interact in a complex web of feedback loops reminiscent of the classic dilemma of the chicken and the egg: Life needs a habitable planet to exist, but simple organisms seem to be necessary ingredients for making a habitable planet. 30
 
That raises the question of how the cycle took off. 

P.A. Trudinger writes in: Biogeochemical Cycling of Mineral-Forming Elements 1979 page 16:
Since a large number of elements are essential for all, or at least some, of the components of the biosphere, it is obvious that the biogeochemical cycles of these elements will be interdependent. Photosynthesis results in the evolution of oxygen, and the incorporation of carbon dioxide into the organic matter of living cells which, at the same time, incorporate nitrogen, sulfur, and phosphorus. Organic matter and O2 are used to drive independent cycles of sulfur, nitrogen, and carbon, each of which requires the participation of phosphorus. The latter three cycles also regenerate carbon, sulfur, and nitrogen in the form required for the initial photosynthetic cell production. The five element cycles are thus clearly interdependent and any change in one cycle will in the long term have a profound influence on the operation of the other four. 14

The Cell factory maker, Paley's watchmaker argument 2.0 Energy10
Interrelations between the cycles of carbon, sulfur, phosphorus, nitrogen, and oxygen.

This is a gigantic interdependent system, in which, if one part of the cycle is missing, the others break down. This interdependence is well expressed in the following science paper from 2013:
The productivity of plants and soil organisms strongly depends on nitrogen. This fact leads to a tight coupling of the terrestrial nitrogen and carbon cycles.  Nitrogen availability plays an important role in controlling the productivity, structure, and spatiotemporal dynamics of terrestrial ecosystems: perturbations in the nitrogen cycle will have repercussions in the carbon cycle, and vice versa. 15

Carbon
Carbon (C) is element number 6 in the periodic table. It makes up about 0.025 percent of Earth's crust. It is fundamental to building all biological molecules: it forms the backbone of all organic compounds used in life, that is, proteins, lipids, nucleic acids, and carbohydrates, and as such is essential. Carbon forms four covalent bonds, which are links formed by the sharing of electrons between two atoms.

The fact that carbon and higher elements exist in the universe is nothing short of a miracle, an extraordinary feat that depends on a very precisely adjusted and tuned process. As Bradford explained in a 2011 journal article:

If the strength of the strong nuclear force, gs, were changed by 1% the rate of the triple-alpha reaction would be affected so markedly that the production of biophilic1 abundances of either carbon or oxygen would be prevented. 37

Sir Fred Hoyle, Cambridge Astrophysicist (“The Universe: Past and Present Reflections”):
“From 1953 onward, Willy Fowler and I have always been intrigued by the remarkable relation of the 7.65 MeV energy level in the nucleus of carbon-12 to the 7.12 MeV level in oxygen-16. If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just where these levels are actually found to be. Another put-up job? Following the above argument, I am inclined to think so. A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature.” 38

As the Chemistry: Atoms First textbook explains:
Carbon’s small atomic radius allows the atoms to approach one another closely, giving rise to short, strong carbon-carbon bonds and stable carbon compounds. Silicon atoms, however, are bigger than carbon atoms, so silicon atoms generally cannot approach one another closely. These attributes enable carbon to form chains (straight, branched, and cyclic) containing single, double, and triple carbon-carbon bonds. This, in turn, results in an endless array of organic compounds containing any number and arrangement of carbon atoms. 10

Only carbon atoms can bond to form long chains (catenation) that do not break apart at higher temperatures, permitting polymerization into long polymers such as proteins and DNA. Life based on silicon has not been shown to be possible. Smithsonian Magazine puts it this way:
Silicon-based life in water, or on an oxygen-rich planet, would be all but impossible as any free silicon would react quickly and furiously to form silicate rock. And that’s pretty much the end of the story. 11

A finely tuned Carbon-cycle - is essential for life
Photosynthetic organisms like cyanobacteria remove and fix carbon from the atmosphere. Advanced life forms, like us, add carbon to the atmosphere: we respire oxygen and exhale carbon dioxide. Chloroplasts and mitochondria thus exert antagonistic effects on the composition of the air. That closes the carbon-oxygen cycle.

Paul G. Falkowski writes in Fundamentals of Geobiology:
The ‘geological’ or ‘slow’ carbon cycle is critical for maintaining Earth as a habitable planet, but the entry of these oxidized forms of carbon into living matter requires the addition of hydrogen atoms. By definition, the addition of hydrogen atoms to a molecule is a chemical reduction reaction. Indeed, the addition or removal of hydrogen atoms to and from carbon atoms (i.e., ‘redox’ reactions), is the core chemistry of life. The processes which drive these core reactions also form a second, concurrently operating global carbon cycle which is biologically catalyzed and operates millions of times faster than the geological carbon cycle. Approximately 75 to 80% of the carbon on Earth is found in an oxidized, inorganic form either as the gas carbon dioxide (CO2 ) or its hydrated or ionic equivalents.   26

The carbon cycle is the process through which all of the carbon atoms in the atmosphere, hydrosphere, crust, mantle, and the biomass of living organisms cycle through various biochemical pathways in precise ways. Cyanobacteria, algae, diatoms, plankton, and plants remove carbon from the atmosphere through photosynthesis, while other life forms return carbon to the atmosphere through the respiration of oxygen. Life is an integral part of these energy cycles, yet life depends on these very cycles in order to exist. That creates a catch-22 problem. Courtney White writes in the 2014 book Grass, Soil, Hope:
And what carbon does is cycle—a process essential to life on Earth. It’s a carefully regulated process too so that the planet can maintain critical balances. Call it the Goldilocks Principle: not too much carbon, not too little, but just the right amount. For instance, without CO2 and other greenhouse gases, Earth would be a frozen ball of rock. With too many greenhouse gases, however, Earth would be an inferno like Venus. “Just right” means balancing between the two extremes, which helps to keep the planet’s temperature relatively stable. It’s like the thermostat in your house. If it gets too warm, the cycle works to cool things off, and vice versa. When combined with water it forms sugars, fats, alcohols, fats, and terpenes. When combined with nitrogen and sulfur it forms amino acids, antibiotics, and alkaloids. With the addition of phosphorus, it forms DNA and RNA – the essential codes of life – as well as ATP, the critical energy-transfer molecule found in all living cells. The carbon atom is the essential building block of life. Every part of your body is made up of chains of carbon atoms, which is why we are known as “carbon-based life forms.” We are stardust. 12
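White's thermostat analogy can be illustrated with a toy negative-feedback model. The numbers below are invented for illustration only and are not climate data:

```python
# Toy "thermostat" model of the carbon-cycle feedback described above:
# when temperature drifts above the set point, the cycle works to cool
# things off, and vice versa. All constants are arbitrary illustration
# values, not measurements.
def regulate(temp, set_point=15.0, gain=0.5, steps=20):
    """Pull temp toward set_point by a fraction `gain` of the error each step."""
    history = [temp]
    for _ in range(steps):
        error = temp - set_point
        temp -= gain * error  # negative feedback: oppose the deviation
        history.append(temp)
    return history

# Both a too-hot and a too-cold start converge toward the set point.
hot = regulate(30.0)
cold = regulate(5.0)
```

Without the feedback term (gain = 0), any initial deviation simply persists; the regulation comes entirely from the correction opposing the error.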

Gonzalez writes in The Privileged Planet (2004), page 55:
Plate tectonics makes the carbon cycle possible which is essential to our planet’s habitability. This cycle is actually composed of a number of organic and inorganic subcycles, all occurring on different timescales. These cycles regulate the exchange of carbon-containing molecules among the atmosphere, ocean, and land. Photosynthesis, both by land plants and by phytoplankton near the ocean surface, is especially important since its net effects are to draw carbon dioxide from the atmosphere and make organic matter. Zooplankton consumes much of the organic matter produced on the sunlight-rich surface. The carbonate and silicate skeletons of the marine organisms settle obligingly on the ocean floor, to be eventually squirreled away beneath the continents. 13

Origin of carbon fixation.
Life hydrogenates carbon dioxide. In other words, to convert carbon dioxide into organic molecules, life attaches hydrogen atoms to CO2 (N. Lane, 2010). Carbon fixation can be described, in other words, as turning carbon "non-volatile". The origin of carbon fixation is one of, if not the most, fundamental biosynthesis processes in life, and it is intimately linked with the quest for the origin of metabolism. Many, aiming to solve the riddle of the origin of life, have resorted to "metabolism first" scenarios. All life on Earth depends on, and requires as a prerequisite, the fixation of inorganic carbon into organic molecules, wherein carbon dioxide from the atmosphere is converted into carbohydrates. Even the first life-form had to fix carbon from CO2. Only two biological mechanisms lead to the fixation of inorganic carbon: chemoautotrophy and photoautotrophy. Six biochemical pathways are known today to perform the reaction:

1. the reductive pentose phosphate cycle (Calvin cycle) in plants and cyanobacteria that perform oxygenic photosynthesis (by far the most important one)
2. the reductive citric acid cycle (rTCA cycle, also called the reductive tricarboxylic acid or Arnon-Buchanan cycle) in photosynthetic green sulfur bacteria and some chemolithoautotrophs
3. the 3-hydroxypropionate bi-cycle in photosynthetic green nonsulfur bacteria
4. the hydroxypropionate-hydroxybutyrate cycle in Crenarchaeota
5. the dicarboxylate-hydroxybutyrate cycle in Crenarchaeota
6. the reductive acetyl-CoA pathway

Each of these cycles has its own biochemical reactions, requiring its own enzymes and reducing power of a specific nature. There is no nested hierarchy, and no plausible evolutionary narrative, of how one common ancestor could have given rise to all the others. Most life forms today produce energy from sunlight (photosynthesis), and the energy produced is used to perform CO2 fixation through the so-called Calvin-Benson cycle (organisms using that process are called photoautotrophs). The product is glucose, from which the making of the building blocks of life starts. Based on the evolutionary narrative, oxygenic photosynthesis is a latecomer: prior to it, autotrophic bacteria and archaea supposedly used inorganic chemical compounds rather than sunlight to generate energy.

For the reader that is unfamiliar with biochemistry, an analogy might help to understand what a biosynthesis pathway is.

The Cell factory maker, Paley's watchmaker argument 2.0 Air-sp10

Imagine a robotic production line in a factory. It usually has a (varying) number of robots, lined up in a functional order. The production line is fed with raw materials, which go through a refining process. The first robot receives the raw material and performs the first processing step, resulting in the first intermediate or subproduct. When it is done, the line moves the subproduct further down to the next robot, which performs the second step, and the procedure repeats several times until the end product is manufactured. At the end, there is a fully formed subpart, like the door of a car. That door is part of a larger object, the finished car. The whole production line, and each robot in it, has to be designed, implemented, and put in the right place in the larger manufacturing system. Everything has to be engineered with a higher-end goal in mind. And there is an interdependence: if one of the robots ceases to work for some reason, the whole fabrication stops, and the finished car cannot be completed. A tiny malfunction of a single robot in the line can halt production. Like a factory production line, in biochemistry each enzyme catalyzes a specific reaction, using the product of the upstream enzyme and passing the result to the downstream enzyme, refining the substrate step by step until the final product, suitable for further goals, is obtained.
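The production-line analogy can be sketched in code as a chain of processing steps in which a single broken station halts the whole line. The station names are invented for illustration:

```python
# Sketch of the production-line analogy above: each "robot" (enzyme)
# receives the upstream product and hands its result downstream.
# If any single station is broken, no finished product emerges.
def station_cut(material):   return material + "->cut"
def station_weld(material):  return material + "->welded"
def station_paint(material): return material + "->painted"

def run_line(raw_material, stations):
    """Run raw_material through the stations in order; None marks a broken robot."""
    product = raw_material
    for station in stations:
        if station is None:   # one broken robot...
            return None       # ...and the whole line stops
        product = station(product)
    return product

run_line("steel", [station_cut, station_weld, station_paint])  # finished product
run_line("steel", [station_cut, None, station_paint])          # nothing at all
```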

The Cell factory maker, Paley's watchmaker argument 2.0 Rtca_c10

Above is an image of the reverse tricarboxylic acid (rTCA) cycle (also known as the reverse Krebs cycle). It is the center, the metabolic core of the cell. Morowitz writes in The Origin and Nature of Life on Earth, 2016:
The five “pillars of anabolism” are intermediates of the citric acid cycle, and starting points of all major pathways of anabolism (arrows). The precursors and their downstream products are acetate (fatty acid and isoprene alcohol lipids), pyruvate (alanine and its amino acid derivatives, sugars), oxaloacetate (aspartate and derivatives, pyrimidines), α-ketoglutarate (glutamate and derivatives, also pyrroles), and succinate (pyrroles). Molecules with homologous local chemistry are at opposite positions on the circle. Oxidation states of internal carbon atoms are indicated by color (red oxidized, blue reduced). (After Braakman and Smith, Creative Commons.).

The Krebs cycle synthesizes five compounds (acetate, pyruvate, oxaloacetate, succinate, and α-ketoglutarate) that are the standard universal precursors to all biosynthesis. The reductive TCA (rTCA) cycle, which runs counterclockwise in the figure, is used as a carbon fixation pathway in several clades of bacteria. 24

Nick Lane writes in Life Ascending: The Ten Great Inventions of Evolution (2010):
In reverse, the Krebs cycle sucks in carbon dioxide and hydrogen to form new organic molecules, all the basic building blocks of life. And instead of releasing energy as it spins, the reverse cycle consumes ATP. Provide it with ATP, carbon dioxide, and hydrogen, and the cycle spins out the basic building blocks of life, as if by magic. This reverse spinning of the Krebs cycle is not widespread even in bacteria, but it is relatively common in the bacteria that live in hydrothermal vents. It is plainly an important, if primitive, way of converting carbon dioxide into the building blocks of life. The pioneering Yale biochemist, Harold Morowitz, now at the Krasnow Institute for Advanced Study, Fairfax, Virginia, has been teasing out the properties of the reverse Krebs cycle for some years. In broad terms, his conclusion is that, given sufficient concentrations of all the ingredients, the cycle will spin on its own. It is bucket chemistry. If the concentration of one intermediate builds up, it will tend to convert into the next intermediate in succession. Of all possible organic molecules, those of the Krebs cycle are the most stable, and so the most likely to form. In other words, the Krebs cycle was not ‘invented’ by genes, it is a matter of probabilistic chemistry and thermodynamics. When genes evolved, later on, they conducted a score that already existed, just as the conductor of an orchestra is responsible for the interpretation–the tempo and the subtleties– but not the music itself. The music was there all along, the music of the spheres.  25

Did you read that carefully? Probabilistic chemistry and thermodynamics did the trick and came up with the brilliant "idea" of this sophisticated biochemical cycle all by themselves, by random happenstance. Awesome!! Can you believe this? We don't have enough faith to believe in blind, unguided, random self-assembly, as Lane would have us do.

The rTCA cycle is claimed to be close to the ancestral autotrophic carbon fixation pathway, as the following Nature science paper from 2017 states:
The reverse tricarboxylic acid (rTCA) cycle (also known as the reverse Krebs cycle) is a central anabolic biochemical pathway whose origins are proposed to trace back to geochemistry, long before the advent of enzymes, RNA or cells, and whose imprint remains intimately embedded in the structure of core metabolism. 22

It is only found in strictly anaerobic bacteria and archaea. It is the central hub from which all basic building blocks for life are made in all three domains of life. So the origin of the rTCA cycle is a central origin-of-life problem. Nine enzymes are used in the cycle, in a precise sequence:

1. malate dehydrogenase
2. fumarate hydratase (fumarase)
3. fumarate reductase
4. succinyl-CoA synthetase
5. 2-oxoglutarate:ferredoxin oxidoreductase
6. isocitrate dehydrogenase
7. aconitate hydratase (aconitase)
8. ATP citrate lyase
9. pyruvate:ferredoxin oxidoreductase (Fdred = reduced ferredoxin)
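The interdependence of this nine-enzyme sequence can be expressed as a simple completeness check. This is a sketch of the logical point (every step must be present), not of the biochemistry itself:

```python
# The nine rTCA enzymes listed above, in order. The point illustrated:
# the cycle only operates if every one of its steps is catalyzed.
RTCA_ENZYMES = [
    "malate dehydrogenase",
    "fumarate hydratase (fumarase)",
    "fumarate reductase",
    "succinyl-CoA synthetase",
    "2-oxoglutarate:ferredoxin oxidoreductase",
    "isocitrate dehydrogenase",
    "aconitate hydratase (aconitase)",
    "ATP citrate lyase",
    "pyruvate:ferredoxin oxidoreductase",
]

def cycle_runs(available_enzymes):
    """The cycle turns over only if all nine enzymes are available."""
    return all(enzyme in available_enzymes for enzyme in RTCA_ENZYMES)

cycle_runs(set(RTCA_ENZYMES))                          # complete set: runs
cycle_runs(set(RTCA_ENZYMES) - {"ATP citrate lyase"})  # one missing: halts
```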

Furthermore, the following enzymes and their functions are unique to the reverse TCA cycle:

5. 2-oxoglutarate:ferredoxin oxidoreductase
8. ATP citrate lyase
9. pyruvate:ferredoxin oxidoreductase

Libretexts: No. 8, ATP citrate lyase, is an enzyme that represents an important step in fatty acid biosynthesis. This step occurs because ATP citrate lyase is the link between the metabolism of carbohydrates (which yields energy) and the production of fatty acids. 39

Remember one of the design detection points: 5. Artifacts might be employed in different systems (a wheel is used in cars and in airplanes). ATP citrate lyase has two distinct functions: 1. it is an integral part of the rTCA and TCA cycles, contributing to producing fixed carbon, and 2. it contributes to fatty acid synthesis. Two birds with one stone.

Consider that these enzymes themselves contain fixed carbon. So in order to make fixed carbon, enzymes that contain fixed carbon are required. That creates a catch-22, or chicken-and-egg, situation. One of the relevant questions to ask is: why at all would prebiotic molecules, "happily" lying around on the ground or swimming in the prebiotic ocean, "aim" to complexify, self-assemble into complex enzymes, and join them together in a functionally logical way to form metabolic routes producing intermediate metabolites that would only bear function in a distant future, when complex cells would arise? So how do origin-of-life researchers attempt to solve this catch-22 problem? A science paper from 2010 proposes:

Inorganic carbon fixation proceeded on minerals and was based on catalysis by transition metal sulfides. Given the structural and catalytic similarity between the minerals themselves and the catalytic metal or Fe–S-containing centres of the enzymes or cofactors in the acetyl-CoA pathway, one attractive idea is that minerals catalyzed a primitive acetyl-CoA pathway 17

In another, more recent paper from 2017, Norio Kitadai et al. come up with a similar hypothesis: The reductive tricarboxylic acid (rTCA) cycle is among the most plausible candidates for the first autotrophic metabolism in the earliest life. Extant enzymes fixing CO2 in this cycle contain cofactors at the catalytic centers, but it is unlikely that the protein/cofactor system emerged at once in a prebiotic process. Here, we discuss the feasibility of non-enzymatic cofactor-assisted drive of the rTCA reactions in the primitive Earth environments, particularly focusing on the acetyl-CoA conversion to pyruvate. Based on the energetic and mechanistic aspects of this reaction, we propose that the deep-sea hydrothermal vent environments with active electricity generation in the presence of various sulfide catalysts are a promising setting for it to progress. Our view supports the theory of an autotrophic origin of life from primordial carbon assimilation within a sulfide-rich hydrothermal vent.

But in the concluding remarks of the paper, Kitadai had to admit:

Abiotic CO2 fixation is among the most fundamental steps for life to originate, but no geochemically feasible process that drives the reaction has been acknowledged.  It can also be envisioned that the geochemical CO2 fixation is a common phenomenon on terrestrial planets and satellites because hydrothermal activity is widespread in our solar system including on Europa, Enceladus, and the ancient Mars 23 

Wow. Really?! That is a non sequitur, a pseudo-scientific conclusion that does not follow when one takes into consideration how complex the cycle is, especially the sophisticated enzymes with metal centers that participate in the chemical reactions. The origin of each of these enzymes, and their connection in the right sequence, has to be explained in plausible narratives.

The Cell factory maker, Paley's watchmaker argument 2.0 Materi10

So minerals basically performed the first non-enzymatic reactions, and then, somehow, an unexplained, enigmatic transition to enzymatic reactions and metabolic production occurred. This is a deep gap, a chasm to bridge that runs like a red line across all origin-of-life scenarios. On one side, there is the hypothesized origin of the basic building blocks of life in various scenarios, like hydrothermal vents, meteorites, synthesis on clay, metals, etc. On the other side, there are modern, sophisticated molecular production lines: metabolic pathways employing extraordinarily efficient, complex, and precise molecular robot-like machines, correctly lined up in the right sequential order, operating in a joint venture to produce life-essential metabolites. There could hardly be a bigger chasm.

That was also brought up by Nick Lane: Perhaps the biggest problem is that the chemistry involved in these clever syntheses does not narrow the gap between prebiotic chemistry and biochemistry—it does not resemble extant biochemistry in terms of substrates, reaction pathways, catalysts or energy coupling. The difficult condensation reactions to form nucleotides and polymers including RNA, DNA and polypeptides are accomplished in water, using ATP. None of this bears any resemblance to cyanosulphidic protometabolism or wet-dry cycles in UV-seared volcanic pools. 40

The paper mentioned above from 2010 made reference to an earlier paper from 2004, which stated:
For anything like a cell ever to emerge, the building blocks of biochemistry would have to have a continuous source of reduced carbon and energy and would have to remain concentrated at their site of sustained synthesis over extended times. 18

But it was Leslie Orgel who nailed it. He wrote in a paper in 2008: Almost all proposals of hypothetical metabolic cycles have recognized that each of the steps involved must occur rapidly enough for the cycle to be useful in the time available for its operation. It is always assumed that this condition is met, but in no case have persuasive supporting arguments been presented. Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of the reverse citric acid cycle was present anywhere on the primitive Earth, or that the cycle mysteriously organized itself topographically on a metal sulfide surface? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own.

At the very least, six different catalytic activities would have been needed to complete the reverse citric acid cycle. It could be argued, but with questionable plausibility, that different sites on the primitive Earth offered an enormous combinatorial library of mineral assemblies, and that among them a collection of the six or more required catalysts could have coexisted. 19

1. Emily Singer: How Structure Arose in the Primordial Soup May 19, 2015
2. WILLIAM A. DEMBSKI: Naturalism’s Argument from Invincible Ignorance A Response to Howard Van Till SEPTEMBER 9, 2002 
3. Graham Cairns-Smith: Seven Clues to the Origin of Life: A Scientific Detective Story, page 40:  October 5, 2000
4. Steven A. Benner: Asphalt, Water, and the Prebiotic Synthesis of Ribose, Ribonucleosides, and RNA 2012 
5. David Deamer: The Role of Lipid Membranes in Life’s Origin 2017 Mar; 7 
6. Alan W. Schwartz: Intractable Mixtures and the Origin of Life 2007 
7. Katarzyna Adamala OPEN QUESTIONS IN ORIGIN OF LIFE: EXPERIMENTAL STUDIES ON THE ORIGIN OF NUCLEIC ACIDS AND PROTEINS WITH SPECIFIC AND FUNCTIONAL SEQUENCES BY A CHEMICAL SYNTHETIC BIOLOGY APPROACH February 2014 
8. Timothy R. Stout:  A Natural Origin-of-Life: Every Hypothetical Step Appears Thwarted by Abiogenetic Randomization May 5, 2019  
9. Robert M. Hazen: The Emergence of Chemical Complexity: An Introduction February 15, 2008 
10. Julia Burdge: Chemistry: Atoms First 3rd Edition 2017 
11. Dirk Schulze-Makuch: Silicon-Based Life, That Staple of Science Fiction, May Not Be Likely After All June 11, 2020 
12. Courtney White: Grass, Soil, Hope: A Journey Through Carbon Country 2014 
13. Gonzalez: The privileged planet 2004 
14. P.A. Trudinger: Biogeochemical Cycling of Mineral-Forming Elements 1979 
15. S. Zaehle: Terrestrial nitrogen–carbon cycle interactions at the global scale 2013 Jul 5 
16. Christos Gougoulias: The role of soil microbes in the global carbon cycle: tracking the below-ground microbial processing of plant-derived carbon for manipulating carbon dynamics in agricultural systems  2014 Mar 6
17. Ivan A. Berg: Autotrophic carbon fixation in archaea 
18. Michael J. Russell: The rocky roots of the acetyl-CoA pathway  July 2004 
19. Leslie E Orgel †: The Implausibility of Metabolic Cycles on the Prebiotic Earth  January 22, 2008 
20. Wikibooks: AP Biology/The Chemical Building Blocks of Life
21. 
22. 
23. Norio Kitadai: Origin of the Reductive Tricarboxylic Acid (rTCA) Cycle-Type CO2 Fixation: A Perspective 2017 Dec; 7 
24. Harold J. Morowitz: The Origin and Nature of Life on Earth: The Emergence of the Fourth Geosphere 
25. Nick Lane: Life Ascending: The Ten Great Inventions of Evolution Paperback  June 14, 2010 
26. Andrew H. Knoll: Fundamentals of Geobiology 
27. 
28. Paul G. Falkowski: [url=https://www.wiley.com/en-us/Fundamentals+of+Geobiology-p-9781118280874]Fundamentals of Geobiology[/url] 2012
29. TARA YARLAGADDA: HOW DID LIFE ARISE? NEW STUDY OFFERS FUNDAMENTAL EVIDENCE FOR A DISPUTED THEORY 14.12.2021  
30. Guillermo Gonzalez: The Privileged Planet: How Our Place in the Cosmos Is Designed for Discovery  February 1, 2004 
31. Bill Faint Bs in Biology at Harding University in a Facebook interaction
32. Lena Vincent: The Prebiotic Kitchen: A Guide to Composing Prebiotic Soup Recipes to Test Origins of Life Hypotheses  11 November 2021 
33. Sarah Wild Alternative Earths OCT. 15, 2019 
34. Edward W. Schwieterman: A Limited Habitable Zone for Complex Life 2019 June 10 
35. Stuart A. Kauffman: Theory of chemical evolution of molecule compositions in the universe, in the Miller-Urey experiment and the mass distribution of interstellar and intergalactic molecules 2019  
36. Saidul Islam: Prebiotic selection and assembly of proteinogenic amino acids and natural nucleotides from complex mixtures 16 January 2017 
37. R. A. W. Bradford: The Inevitability of Fine-Tuning in a Complex Universe 18 January 2011 
38. Sir Fred Hoyle:  The Universe: Past and Present Reflections November
39. [url=https://bio.libretexts.org/Bookshelves/Microbiology/Book%3A_Microbiology_(Boundless)/5%3A_Microbial_Metabolism/5.12%3A_Biosynthesis/5.12F%3A_Th



Chapter 4

Sara I. Walker (2017): The origin of life is widely regarded as one of the most important open problems in science. It is also notorious for being one of the most difficult. It is now almost 100 years since scientific efforts to solve the problem began in earnest, with the work of Oparin and Haldane.  ‘Bottom-up’ approaches have not yet generated anything nearly as complex as a living cell. At most, we are lucky to generate short polypeptides or polynucleotides or simple vesicles—a far cry from the complexity of anything living. 46

Kitadai (2018): To date, various environments have been proposed as plausible sites for life’s origin, including oceans, lakes, lagoons, tidal pools, submarine hydrothermal systems, etc. But no single setting can offer enough chemical and physical diversity for life to originate. 47

The more science unravels and the more we know, the more God stands out as the most plausible explanation for the origin of life.

Pier Luigi Luisi, in his book The Emergence of Life: From Chemical Origins to Synthetic Biology (2006), also makes reference to Orgel:
Orgel has argued forcibly against theories involving the organization of complex, small-molecule metabolic cycles, such as the reductive citric-acid cycle on mineral surfaces, having to make unreasonable assumptions about the catalytic properties of minerals and the ability of minerals to organize sequences of disparate reactions.  Orgel (2000) had already underlined that, in general, theories advocating the emergence of complex, self-organized biochemical cycles in the absence of genetic material are hindered, not only by the lack of empirical evidence but also by a number of unreasonable assumptions about the properties of minerals and other catalysts required spontaneously to organize such sets of chemical reactions. 45

And a more recent article from 2020, published in Quanta magazine, pointed out:
Because the TCA cycle feeds into so many vital processes in even the simplest cells, scientists suspect it was one of the early reactions to establish itself in the prebiotic soup. To reconstruct how it evolved, biochemists have generally tried to work backward by replacing the eight enzymes involved in the modern TCA cycle with transition metals, since those can act as catalysts for many reactions and should have been abundant. But the transition metals often failed to produce the desired intermediary molecules, or catalyzed their breakdown as fast as they made them, and the metals typically needed high temperatures or other extreme conditions to work. “Metals and harsh conditions can be good [at] accelerating the reactions yet also [promote] the destruction of the products,” said Juli Peretó, a professor of biochemistry and molecular biology at the University of Valencia in Spain. “This situation makes rather implausible or unrealistic some of the proposed schemes.” 20

The energy problem
Another relevant question is the source of energy. Where did it come from to drive the supposed prebiotic, non-enzymatic cycles? Jessica L. E. Wimmer and colleagues list the various hypotheses:

It has long been recognized that energy was required to promote reactions at metabolic origin, but the nature of that energy has been debated. Many possible environmental sources of energy at origins have been suggested, including pyrophosphate (PPi), cyclic polyphosphates, reduced phosphorous minerals, ultraviolet light, radioactive decay, lightning, geochemical pyrite synthesis, geochemical ion gradients, geoelectrical potential, bolide impacts, and heat. Modern cells in nature, however, harness none of those environmental energy sources, they harness redox reactions instead, and conserve energy for metabolic use in the chemically accessible currency of ATP or reduced ferredoxin. 44

Prebiotic processes are similar in character to dumping a tank of gasoline on a car and igniting it. By contrast, living cells have machinery that converts energy appearing in a specified form into ATP, the energy currency of the cell, which is useful for biotic processes. The form of energy to be converted into ATP varies among cellular types: UV light, visible light, methane, metallic ion flow, or digestible nutrients. Without machinery matched to the form of energy, energy tends either to have no effect or to act like a tank of gas dumped on a car. Unintelligent simple chemicals can't self-organize into instructions for building solar farms (photosystems I and II in photosynthesis), hydroelectric dams (ATP synthase), propulsion (motor proteins), self-repair (p53 tumor suppressor proteins), or self-destruction (caspases) in the event that these instructions become too damaged by the way the universe usually operates. Abiogenesis is not an issue that scientists simply need more time to figure out, but a fundamental problem with materialism.

A surprisingly honest admission of the implausibility of naturalistic metabolism-first scenarios came from Nature magazine in 2015:

The rTCA cycle that is found in bacteria is catalyzed by enzymes with high degrees of substrate selectivity. The reaction substrates and the reaction sequence of the enzymatic rTCA cycle are conserved (not evolved). On the other hand, the transformations of pre-biological chemistry are assumed to occur under the effect of chemical catalysts. The latter, however, are typically active with respect to certain types of chemical transformations and lack the high substrate selectivity characteristic of enzyme catalysts. The smallest supernetwork that includes the rTCA cycle is designated the rTCA supernetwork. It contains 175 molecules and 444 reactions. We conclude that the rTCA cycle should have a low probability of a random realization. We also notice that its length and cost are close to extreme values. Selection for the extreme values implies an optimization process. Is there any evidence so far that such optimization will inevitably lead to the rTCA cycle?

The failure to provide a plausible narrative based on naturalistic assumptions came when the authors resorted to a frozen accident:

Further selection into biological cycles may have occurred by other means, such as a frozen accident, that is, the selection and preservation of a particular pathway from the ensemble of possibilities due to an undetermined random event. 43

It seems frozen accidents are invoked when no reasonable explanation is left. This is a made-up assertion based on guessing and imagination, explaining nothing on the basis of evidence. The situation described above is equivalent to the following scenario: On the one side, you have an intelligent-agency-based system of irreducibly complex, tightly integrated, information-rich functional systems, with energy readily on hand and directed to the task, which routinely generates the sort of phenomenon being observed. On the other side, imagine a golfer who has played a golf ball through a 10-hole course. Can you imagine that the ball could also play itself around the course in his absence? Of course, we cannot rule out that natural forces, like wind, tornadoes, rains, or storms, could produce the same result, given enough time. The chances against it, however, are so immense that the suggestion implies that the non-living world had an innate desire to get through the 10-hole course.

Nitrogen
Nitrogen (N) has the atomic number 7 in the periodic table. In our atmosphere, two atoms bind to form N2, which makes up about 78% of the atmosphere. In order for matter to arrange itself into form, complexity, and ultimately life, the necessary basic building compounds had to be readily available and concentrated for the prebiotic chemical interactions to begin on the early earth. One of the essential building blocks required was nitrogen, in the form of either ammonia, nitrite, or nitrate. Ammonia is utilized for the synthesis of glutamine, the first organic nitrogen-containing molecule. Glutamine is the nitrogen donor for the synthesis of other amino acids and other building blocks of life. Elemental nitrogen is diatomic (each molecule contains two N atoms) in its pure form (N2); its atoms are triple-bonded to each other. This is one of the hardest chemical bonds of all to break. So, how can nitrogen be brought out of its tremendous reserves in the atmosphere and into a state where it can be used by living things? Today, microorganisms employ extremely complex and sophisticated biosynthesis pathways, using an enzyme called nitrogenase, to transform nitrogen gas in the atmosphere into ammonia (nitrogen gas is combined with hydrogen to produce ammonia), and the atmospheric nitrogen cycle keeps everything in balance. On the prebiotic earth, nitrogen-fixing microorganisms were not extant.
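Just how hard the N2 triple bond is to break can be illustrated with approximate textbook bond dissociation energies. This comparison is an illustrative sketch (the values are standard rounded figures, not taken from the source):

```python
# Approximate bond dissociation energies in kJ/mol (standard textbook values).
BOND_ENERGY = {
    "N#N (dinitrogen triple bond)": 945,
    "C=O (in carbon dioxide)": 799,
    "O=O (dioxygen double bond)": 498,
    "H-H (dihydrogen)": 436,
    "C-H (typical)": 413,
}

# Rank the bonds from hardest to easiest to break.
for bond, energy in sorted(BOND_ENERGY.items(), key=lambda kv: -kv[1]):
    print(f"{bond}: {energy} kJ/mol")
```

The N≡N bond sits near the top of the scale, roughly twice the energy of the O=O bond, which is why only energetic events such as lightning, or the nitrogenase enzyme, can cleave it.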

The availability of ammonia, among many other compounds, is an essential question to be answered by the origin-of-life research community.

N2 is in a kinetically stable form in the atmosphere. Its conversion into reactive forms, such as nitric oxide (NO) requires high temperatures, such as those produced by lightning. 1  Recent scientific studies seem to permit the inference that prebiotic N2 fixation is at least in theory possible.  Mateo-Marti et al. (2019) gave a good overview of the current scientific understanding of the subject:

Ammonia (NH3) or ammonium (NH4+), henceforth NH3/NH4+, are necessary precursors for reactions associated with prebiotic syntheses, such as the Strecker ( amino acid) synthesis. Nitrogen is included in all enzymes and genes. The atmosphere of Earth has about 80% of molecular nitrogen N2. Nitrogen fixation on Earth is nowadays predominantly biological and occurs by conversion of N2 to ammonia via enzyme-catalyzed reactions. N2 is exceptionally inert because of its triple bond and will thus not react easily with other chemicals. A prerequisite for the origin of life on Earth is the existence of some abiotic process that provides a source of fixed nitrogen, in a form that is biochemically usable. Because of the strong binding of nitrogen, some previously described natural abiotic nitrogen fixation mechanisms which have been postulated on Earth were very energetic, examples of them include lightning, volcanism, and meteoric impact on ancient oceans. It has also been argued that coronal mass ejection events from the young Sun, produced very energetic particles that initiated reactions converting molecular nitrogen, methane, and carbon dioxide into HCN, NO, and N2O in the early Earth. Other energy sources such as cosmic rays, corona, lightning discharge from thunderstorms, and heat from volcanoes have also been considered plausible processes with a minor role in nitrogen fixation.  Nitrogen fixation requires breaking the strong bonds that hold nitrogen atoms in pairs in gaseous phase in the atmosphere and using the resulting nitrogen atoms to create molecules such as ammonia, which is the building block of many complex organics, including proteins, DNA, RNA etc. Solar levels of UV radiation can fix atmospheric nitrogen within a few hours provided that pyrite acts as a catalyst. 
This process leads therefore to nitrogen sequestration and may have been active in the prebiotic era on Earth, as it may be active on other terrestrial planets with UV transparent atmospheres and catalytic minerals reducing the levels of nitrogen in the atmosphere and thus having an impact on the radiative balance of the planet. 2 

Mark Dörr and colleagues wrote a science paper in 2003 claiming a possible prebiotic formation of ammonia from dinitrogen on iron sulfide surfaces on the early earth. 5 Sandra Pizzarello wrote in a 2011 science paper that asteroids reaching the early earth could have been a source of ammonia. 3 András Stirling published a paper in 2016 mentioning hydrothermal vents as a possible source:

Our model provides a plausible mechanistic picture of how NH3 (ammonia) can form in hydrothermal vents which may have operated on the early Earth in the synthesis of prebiotic molecules. As ammonia is an essential prebiotic reactant, the present mechanistic picture provides further support for the role of iron sulfide minerals in the chemoautotrophic origin of life. 4

Eric S. Boyd wrote a paper in 2013 that mentioned various possible N sources:
On early Earth, fixed sources of N may have been supplied by abiotic processes such as electrical (i.e., lightning) based oxidation of N2 to nitric oxide (NO) or mineral (e.g., ferrous sulfide) based reduction of N2, nitrous oxide, or nitrite (NO−2)/nitrate (NO−3) to NH3. Abiotic sources of fixed N (e.g., NO, NO−2, NO−3, NH3) are thought to have become limiting to an expanding global biome, which may have precipitated the innovation of biological mechanisms to reduce N2. 6

In 2019, Sukrit Ranjan from MIT published a scientific study. Quoting a MIT news article:
Primitive ponds may have provided a suitable environment for brewing up Earth’s first life forms, more so than oceans. Shallow bodies of water, on the order of 10 centimeters deep, could have held high concentrations of what many scientists believe to be a key ingredient for jump-starting life on Earth: nitrogen. In shallow ponds, nitrogen, in the form of nitrogenous oxides, would have had a good chance of accumulating enough to react with other compounds and give rise to the first living organisms. In much deeper oceans, nitrogen would have had a harder time establishing a significant, life-catalyzing presence. If primitive life indeed sprang from a key reaction involving nitrogen, it may have happened in the deep ocean, where nitrogen, in the form of nitrogenous oxides, could have reacted with carbon dioxide bubbling forth from hydrothermal vents to form life’s first molecular building blocks.
There could have been enough lightning crackling through the early atmosphere to produce an abundance of nitrogenous oxides to fuel the origin of life in the ocean. This supply of lightning-generated nitrogenous oxides was relatively stable once the compounds entered the oceans. However,  two significant “sinks,” or effects could have destroyed a significant portion of nitrogenous oxides, particularly in the oceans. Nitrogenous oxides in water can be broken down via interactions with the sun’s ultraviolet light, and also with dissolved iron sloughed off from primitive oceanic rocks. Ultraviolet light and dissolved iron could have destroyed a significant portion of nitrogenous oxides in the ocean, sending the compounds back into the atmosphere as gaseous nitrogen. In the ocean, ultraviolet light and dissolved iron would have made nitrogenous oxides far less available for synthesizing living organisms. In shallow ponds, however, life would have had a better chance to take hold. That’s mainly because ponds have much less volume over which compounds can be diluted. As a result, nitrogenous oxides would have built up to much higher concentrations in ponds. Any “sinks,” such as UV light and dissolved iron, would have had less of an effect on the compound’s overall concentrations.  The more shallow the pond, the greater the chance nitrogenous oxides would have had to interact with other molecules, and particularly RNA, to catalyze the first living organisms. In environments any deeper or larger, nitrogenous oxides would simply have been too diluted, precluding any participation in origin-of-life chemistry. 12


The transition to enzymatic fixation of nitrogen
If lightning and other means were sufficient mechanisms to transform nitrogen gas into ammonia, why did diazotrophs (nitrogen-fixing bacteria) like cyanobacteria, and later rhizobia, which live in nodules on the roots of legumes, some woody plants, etc., and form a convenient symbiosis, evolve an extremely sophisticated and energy-consuming biosynthesis process to transform dinitrogen into ammonia through nitrogenase enzymes? If ammonia was available on the early earth, then biological nitrogen fixation becomes unnecessary, raising the question, at least before the switch from a reducing to an oxidizing atmosphere, of what selective pressure would “cause” it to evolve. The story goes as follows:

Concomitant decreases in abiotic N2 oxidation to NO led to a nitrogen crisis at ~3.5 Ga. Navarro-González argues that the nitrogen crisis could have ensued much later, even as late as 2.2 Ga. Abiotic sources of nitrogen produced through mechanisms such as lightning discharge or mineral-based catalysis are thought to have become limiting to an expanding global biome. Since extant nitrogenase functions to relieve N limitation in ecosystems, the imbalance in the supply and demand for fixed N is thought to have represented a strong selective pressure that may have precipitated the emergence of nitrogen fixation. Little direct evidence exists, however, with respect to the availability of ammonia or other reduced forms of nitrogen over the course of geological time, although several recent isotopic analyses of shale kerogens have suggested ample enough supply of ammonia to support nitrifying populations in the late archean, >2.5 Ga.

In the evolutionary timeline, cyanobacteria were already performing oxygenic photosynthesis ~3.5 Ga ago. Somehow, evolutionary pressures promoted the change from the primordial rTCA carbon fixation pathway to oxygenic photosynthesis [despite the fact that there is no homology or nested hierarchy among the enzymes employed in the two systems; science is in the dark about how oxygenic photosynthesis evolved]. If nitrogen fixation had to evolve during the period between ~3.5 Ga and 2.2 Ga, it would have had to overcome the fact that nitrogenase is inactivated by oxygen. Photosynthesis produces oxygen, and many of the byproducts of oxygen metabolism are toxic to nitrogenase. So there must be defense mechanisms to protect oxygen-sensitive nitrogenase from photosynthetic oxygen. There are several such mechanisms, but the most remarkable one is the separation of nitrogen fixation into cells called heterocysts, while photosynthesis happens in other cells, called vegetative cells. As such, cyanobacteria can be multicellular organisms and divide tasks by promoting an ultra-complex process of cellular differentiation. The formation of multicellular organisms from the assembly of single-celled ones constitutes one of the most striking and complex problems tackled by biology. Multicellularity involves at least three well-defined processes: cell-cell adhesion, intercellular communication, and cell differentiation. These had to emerge together, since if one is missing, the system does not work. Cyanobacteria would have had to anticipate the evolution of a biological circadian clock to separate oxygenic photosynthesis and nitrogen fixation temporally, and/or multicellularity and cellular differentiation to separate the two processes spatially, before evolving nitrogen fixation through nitrogenase enzymes.
Did cyanobacteria have the foresight that a natural nitrogen fixation bottleneck would occur, and therefore anticipate the evolution of protective mechanisms, only then to subsequently evolve nitrogen fixation through nitrogenase enzymes? That, obviously, makes no sense. As the evolutionary narrative and storytelling goes, there had to be "the worst environmental catastrophe ever, estimating that more than 99% of the organisms existing then died out" 40 because of the rise of oxygen. If evolution is all about survival of the fittest, why would microorganisms ever have transitioned from anaerobic to aerobic CO2 fixation through oxygenic photosynthesis? Another evolutionary paradox is that oxygenic photosynthesis supposedly evolved very early in the evolutionary timeline. As previously noted, cyanobacteria, which use the reaction to produce glucose, are supposed to have emerged among the earliest life forms, about 3.5 Gya.

Whatever process led to oxygenic photosynthesis, this energy transduction machine is undoubtedly the most complex in nature. In extant cyanobacteria, well over 100 genes are required for the construction of the protein scaffolds, as well as the enzymes required for biosynthesis of the prosthetic groups.


That contradicts the narrative from simple to complex. The same paradox applies to nitrogen fixation. Nitrogenase is a very complex enzyme system. It breaks molecular nitrogen's triple bond, one of the strongest bonds in nature, in a fashion similar to a sledgehammer, and the process is very energetically expensive. A 2019 article from Chemistry World, titled "The mysterious enzyme that can beat the world’s biggest chemical process when it comes to breaking the dinitrogen triple bond", describes the process:

Nitrogenase is actually a di-protein, or two-enzyme, assembly. It has a central catalytic protein, which is where N2 is reduced to ammonia, and it’s all satellited by a reductase. This protein delivers electrons to the catalytic protein. When these two transiently associate, an electron is transferred, one at a time, from this reducing protein to the catalytic protein. With that, we know that ATP has to be hydrolyzed to enable that transfer. But that’s only one electron transfer. Overall, eight of them must happen to accumulate enough electrons to fix N2 to ammonia. For each ammonia molecule produced, the enzyme chomps up a whopping 16 molecules of ATP, a high price in the biological energy currency. The fact that fixing nitrogen requires so much energy yet bacteria still do it shows just how important the process is. The actual nitrogen fixation happens within nitrogenase’s metal heart. The enzyme houses a cluster of iron and molybdenum atoms called FeMoco (for iron-molybdenum cofactor). 10
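The energy bookkeeping in the quote can be sketched numerically. This is a minimal sketch assuming the textbook overall stoichiometry N2 + 8 H+ + 8 e- + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi (eight single-electron transfers, two MgATP hydrolyzed per transfer); the function name is illustrative:

```python
# Energy bookkeeping for nitrogenase, using the textbook stoichiometry:
# N2 + 8 H+ + 8 e- + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi
ELECTRONS_PER_N2 = 8   # 6 to reduce N2 plus 2 for the obligatory H2 evolution
ATP_PER_ELECTRON = 2   # 2 MgATP hydrolyzed per single-electron transfer

def atp_cost(n2_molecules: int) -> int:
    """ATP molecules hydrolyzed to fix the given number of N2 molecules."""
    return n2_molecules * ELECTRONS_PER_N2 * ATP_PER_ELECTRON

print(atp_cost(1))  # 16 ATP per N2 fixed (yielding two NH3)
```

On this stoichiometry, fixing a single N2 costs 16 ATP, which underlines the quote's point that biological nitrogen fixation is extraordinarily expensive in the cell's energy currency.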

The reactive center contains the most complicated metallocluster known in nature: it is replete with metals and harbors a metal-coordinated carbide carbon atom, unique among all enzymes known so far. 15 And science is in the dark about how it could have evolved, as outlined above. Another science paper, dealing with the physiology and habitat of the last universal common ancestor, included nitrogenase in its hypothesis:

We identified 355 protein families that trace to LUCA by phylogenetic criteria. Their functions, properties, and prosthetic groups depict LUCA as anaerobic, CO2-fixing, H2-dependent with a Wood–Ljungdahl pathway, N2-fixing (through nitrogenase), and thermophilic. 13

It takes a lot of courage to propose that the origin of such an enormously complex, sophisticated, and energy-demanding enzyme was due to chemical, and not biological, evolution.

The nitrogen cycle, irreducible interdependence, and the origin of life


Figure: The reactions of the biological N-cycle.

The nitrogen cycle operates in five stages. In the first stage, fixation, nitrogen is fixed into ammonia. In the second stage, nitrification, ammonia is converted into nitrite and then to nitrate. In the third stage, denitrification, nitrate is changed back to either atmospheric dinitrogen or nitrous oxide, another gas. In the fourth stage, assimilation, nitrates are converted back to nitrites and finally to ammonia. This ammonia is used to produce amino acids, which are used to make proteins, nucleic acids, etc. The final stage in the cycle is decay.
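The five stages above can be modeled as a small directed graph of nitrogen pools, making concrete the claim (made later in this section) that the cycle is endless, with no real starting point. This is a minimal sketch; the pool names are simplified labels, not chemical formulas from the source:

```python
# The five-stage nitrogen cycle, modeled as stage: (substrate pool, product pool).
N_CYCLE = {
    "fixation":        ("N2 (atmosphere)", "NH3"),
    "nitrification":   ("NH3", "NO2-/NO3-"),
    "denitrification": ("NO2-/NO3-", "N2 (atmosphere)"),
    "assimilation":    ("NO2-/NO3-", "organic N (proteins, nucleic acids)"),
    "decay":           ("organic N (proteins, nucleic acids)", "NH3"),
}

substrates = {s for s, _ in N_CYCLE.values()}
products = {p for _, p in N_CYCLE.values()}

# Every pool a stage consumes is replenished by another stage, and every
# product feeds a downstream stage: the graph is closed, with no entry point.
print(substrates == products)  # True
```

The closure check is the point: remove any one stage and some pool is produced but never consumed (or consumed but never replenished), which is the interdependence argument the section goes on to make.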

The book Biology of the Nitrogen Cycle (2007) explains:
In biology, nitrogen undergoes a variety of oxidations and reductions that produce compounds with oxidation states ranging from +5 (as in nitrate, NO3-) to -3 (as in ammonia, NH3). These nitrogen-cycle redox reactions are performed in different ways by different organisms, and the reactions in total make up the biological N-cycle. 16
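The oxidation-state span the book describes can be tabulated for the main nitrogen species of the cycle. These are standard chemistry values; the table itself is an illustrative sketch, not from the book:

```python
# Oxidation state of nitrogen in the main redox species of the N-cycle.
OXIDATION_STATE = {
    "NO3- (nitrate)":       +5,
    "NO2- (nitrite)":       +3,
    "NO (nitric oxide)":    +2,
    "N2O (nitrous oxide)":  +1,
    "N2 (dinitrogen)":       0,
    "NH3 (ammonia)":        -3,
}

span = max(OXIDATION_STATE.values()) - min(OXIDATION_STATE.values())
print(span)  # 8: the cycle traverses eight units of oxidation state
```

The eight-unit span from +5 to -3 is what makes the N-cycle so much richer in redox chemistry than, say, the carbon cycle, and why so many distinct groups of microorganisms are needed to run it.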

Nitrogen fixation: Nitrogen fixation is the reduction of atmospheric nitrogen (N2) to ammonia (NH3). The nitrogen atoms in N2 are triply bonded to each other, and this molecule is very inert chemically. Abiotic chemical conversions of N2 either cannot occur (are disfavored energetically) or occur at exceedingly low rates under normal ambient conditions. Only rare intense bursts of energy such as lightning provide sufficient activation energy and highly reactive molecules that allow the formation of other compounds from N2. In this context, biological nitrogen fixation is truly remarkable. It provides almost the only natural entry into living systems from the huge reservoir of nitrogen in the atmosphere. 20
Nitrification is the step in the nitrogen cycle that links the oxidation of ammonia (produced from the degradation of organic matter) to the loss of fixed nitrogen in the form of dinitrogen gas. It is performed by a few different groups of microorganisms, including the ammonia-oxidizing bacteria, the ammonia-oxidizing archaea, and the nitrite-oxidizing bacteria. 17
Denitrification is the microbial process of reducing nitrate and nitrite to gaseous forms of nitrogen, principally nitrous oxide (N2O) and nitrogen (N2). A large range of microorganisms can denitrify. 18
Ammonification is a step of the nitrogen cycle during which microorganisms mineralize small organic molecules containing an amine group (such as amino acids, amino sugars, urea, and nucleotides) in order to liberate ammonium (NH4+). 19
Assimilation. Nitrogen occurs as both organic and inorganic nitrogen. Organic nitrogen occurs in living organisms and in detritus, or dead organic matter. Nitrogen assimilation is the formation of organic nitrogen compounds, like amino acids, from inorganic nitrogen compounds present in the environment.
Excretion and Decay. Diverse organisms break down waste products and recycle nitrogen. 

Bess Ward wrote in the book: Fundamentals in Geobiology, 2011 pg.45:

Several of the steps [ of the nitrogen cycle ] are tightly coupled and directly dependent upon each other.  9

The nitrogen cycle is driven by a considerable number of bacteria that work in a complementary manner. It is a lot more complex than the carbon cycle. There are a number of stages, which involve breaking down and building up nitrogen and its various compounds. There is no real starting point; it is an endless cycle. Potential gaps in the system cannot be reasonably bypassed by inorganic nature alone; it must have a degree of specificity. The removal of any one of the individual steps in the cycle would result in the loss of function of the system. The data suggest that the nitrogen cycle constitutes an interdependent system based on the above criteria.

Nitrogen levels in the atmosphere must be just right
Hugh Ross wrote in 2019 in an article:
Nitrogen is not a greenhouse gas, but its presence in the atmosphere enhances the greenhouse heat-trapping capability of both carbon dioxide and methane. If the quantity of nitrogen in Earth’s atmosphere were any greater, Earth’s surface temperature would be too hot for advanced life. If the quantity of nitrogen in Earth’s atmosphere were any lower, Earth’s surface temperature would be too cold for advanced life to survive.
 The ratio of molecular nitrogen to molecular oxygen determines whether lungs can efficiently function for many continuous years. The nitrogen buffers the oxygen. If the oxygen-to-nitrogen ratio in Earth’s atmosphere were even very slightly lower or very slightly higher than what it is, birds and mammals would not be able to sustain high activity levels for years on end. The quantity of nitrogen in Earth’s atmosphere determines the degree of nitrogen fixation that occurs. Less atmospheric nitrogen or more atmospheric nitrogen than what is now occurring would change the level of nitrogen fixation and lower the diversity of plant species. The bottom line: there are at least four different reasons why the amount of nitrogen in Earth’s atmosphere must be fine-tuned. This fine-tuning implies that both the primordial terrestrial nitrogen and the later delivery of nitrogen to Earth must be fine-tuned. Such fine-tuning is indicative of the supernatural handiwork of the Creator. 36


Oxygen
Oxygen (O) has the atomic number 8 in the periodic table. It is the most abundant element on earth. About 21% of our atmosphere is diatomic oxygen. It is vital, especially for advanced lifeforms. 35 Water is essential for life, and oxygen is essential for water. Hydrogen is needed in almost all organic compounds and biological structures, but hydrogen would easily be lost from the atmosphere of a rather small planet like ours. Only by binding hydrogen to oxygen to form water does a significant amount remain on earth.

We breathe oxygen to generate energy; stop breathing for a few minutes and see what happens. The gas composition of the early earth's atmosphere has been debated for many decades by the scientific community. One key issue in dispute is what level of oxygen existed on the early earth. If the prebiotic atmosphere were oxygenated, the prebiotic synthesis of the building blocks of life would not have been possible: organic molecules like RNA and DNA would have been susceptible to thermal oxidation and photo-oxidation and would readily have been destroyed. For this reason, most prebiotic scenarios posit that the prebiotic atmosphere did not contain oxygen (science posits a reducing atmosphere). Only much later, about 2.4-2.1 Gya, did the Great Oxidation Event (GOE) take place, in which oxygenic photosynthesis performed by cyanobacteria and phytoplankton increased oxygen levels to a weakly oxidized condition. The emergence of the multicellular Ediacaran fauna in an oxygenated "Canfield Ocean" (635-542 Mya) 29 supposedly increased atmospheric oxygen further, toward the stable 21% it has today, but the deep sea remained anoxic. From 0.54 Ga to the present, both the atmosphere and the ocean became oxygenated. 37
On the other hand, an atmosphere without oxygen would not have been able to protect life from lethal short-wavelength ultraviolet rays, which are hazardous to life. That is one of the several puzzling Catch-22, chicken-and-egg situations related to the origin of life.

Jacob Haqq-Misra and colleagues gave a good overview in a science paper published in 2011 about the ongoing debate. They wrote:
Over the past 40 years, both biologists and paleontologists have argued that free O2 must have been available, at least in small quantities, prior to the origin of oxygenic photosynthesis. These arguments have been considered highly speculative because free O2 is generally considered to have been absent near Earth's surface prior to the origin of oxygenic photosynthesis. Indeed, significant amounts of free O2 did not appear in the atmosphere until about 2.4 Ga. Before this major rise of atmospheric oxygen, even photosynthetically produced O2 would have existed only locally within surface water and in short-lived plumes of gas that escaped into the otherwise anoxic atmosphere.
Oxygen should have been produced abiotically in the sunlit portions of the early stratosphere. Following the atmospheric photochemistry, the production of O2 in the stratosphere begins with the photolysis of CO2, which is by far the most abundant oxygen-bearing species in these model atmospheres. 28

James F. Kasting writes in Fundamentals of Geobiology (2012) on page 102:
All current explanations for the rise of O2 are speculative, of course, and are likely to remain so in the near future. Better data on H2 fluxes from surface volcanoes and from submarine hydrothermal systems are needed, along with better models for Earth’s tectonic evolution.  21

Several scientific papers in the last few decades did not arrive at the consensus view, but instead reported low-level accumulation of O2, or evidence against anoxic conditions altogether.

Aleisha C. Johnson and colleagues published a scientific paper in 2021, reporting:
Evidence continues to emerge for the production and low-level accumulation of molecular oxygen (O2) at Earth’s surface before the Great Oxidation Event. Quantifying this early O2 has proven difficult. Archean O2 levels were vanishingly low according to our calculations but substantially above those predicted for an abiotic Earth system. 30

Science daily, for example, reported in 2011:
For decades, scientists believed that the atmosphere of early Earth was highly reduced, meaning that oxygen was greatly limited. Such oxygen-poor conditions would have resulted in an atmosphere filled with noxious methane, carbon monoxide, hydrogen sulfide, and ammonia. To date, there remain widely held theories and studies of how life on Earth may have been built out of this deadly atmosphere cocktail. Now, scientists at Rensselaer are turning these atmospheric assumptions on their heads with findings that prove the conditions on early Earth were simply not conducive to the formation of this type of atmosphere, but rather to an atmosphere dominated by the more oxygen-rich compounds found within our current atmosphere -- including water, carbon dioxide, and sulfur dioxide. "We can now say with some certainty that many scientists studying the origins of life on Earth simply picked the wrong atmosphere," said Bruce Watson, Institute Professor of Science at Rensselaer. 22

The paper referenced was published in Nature magazine in 2011: The oxidation state of Hadean magmas and implications for early Earth’s atmosphere:
If the evolution of Hadean melts was similarly constrained, then our results imply that the mantle reached its present-day oxidation state 4,350 Myr ago 23

Charlotte Price Persson reported in an article from 2016:
Earth had oxygen 800 million years earlier than thought. The atmosphere contained oxygen 3.8 billion years ago, raising new questions about the history of life on Earth. Until recently, scientists thought that oxygen first became a part of our atmosphere 2.2 billion years ago. But a new study pushes this further back in time, to about 3.8 billion years ago. “This strikes into a very sensitive part of science, in which there is relatively little evidence, and yet the entire scientific community doesn’t believe that there was oxygen at this time. I’ve struggled against many critical peers and it’s taken me over a year to get the article published,” says Frei, adding that he feels confident about the results. 24

Science Daily: Primordial Air May Have Been "Breathable" January 9, 2002
The Earth may have had an oxygen-rich atmosphere as long ago as three billion years and possibly even earlier, three leading geologists have claimed. Their theory challenges long-held ideas about when the Earth’s atmosphere became enriched with oxygen and pushes the likely date for formation of an atmosphere resembling today’s far back into the early history of the planet. The researchers’ theory has been lent additional weight by evidence from the Western Australian Pilbara region for the presence of sulphates in rocks up to 3.5 billion years old. These, too, could not have formed without an oxygen-rich atmosphere. 27

Fred Hoyle also wrote in his book Astronomical Origins of Life (2000):
The oldest surviving rocks have oxidation states that are indicative of an oxidising rather than a reducing atmosphere, at any rate at the time when the rocks condensed. 11

This is echoed by Leslie Orgel, who wrote in 1997:
Recent investigations indicate the earth's atmosphere was never as reducing as Urey and Miller presumed. 25

Even earlier, in 1976, Erich Dimroth and colleagues wrote in the paper:  Precambrian atmospheric oxygen: evidence in the sedimentary distributions of carbon, sulfur, uranium, and iron:
In general, we find no evidence in the sedimentary distributions of carbon, sulfur, uranium, or iron, that an oxygen-free atmosphere has existed at any time during the span of geological history recorded in well preserved sedimentary rocks. ‘The sedimentary distributions of carbon, sulfur, uranium, and ferric and ferrous iron depend greatly upon ambient oxygen pressure and should reflect any major change in proportion of oxygen in the atmosphere or hydrosphere. The similar distributions of these elements in sedimentary rocks of all ages are here interpreted to indicate the existence of a Precambrian atmosphere containing much oxygen.’ ‘We know of no evidence which proves orders-of-magnitude differences between Middle Archaean and subsequent atmospheric compositions, hydrospheric compositions, or total biomasses.’ 26

How could the atmosphere have been aerobic prior to the Great Oxidation Event?
As an alternative to oxygenic photosynthesis, atmospheric oxygen can be derived from the splitting of water molecules in ice by UV photons or energetic particle bombardment. 33

Bogdanov of St. Petersburg State University reports:
In the early stages of the Earth's development when its radioactivity was almost two orders of magnitude higher than at present, (water) radiolysis could be the principal source of atmospheric oxygen, which ensured the conditions for the origin and development of life on our globe. 34

From an anaerobic to an aerobic atmosphere
In October 2021, the International Society for the Study of the Origin of Life held a conference at the University of California, San Diego, where it was pointed out that:

the production rate and speciation of organic matter depends on the availability of H2O as well as total redox state of the whole atmosphere and ocean system. 39 

In other words, the atmosphere had to be totally oxygen-free, which entails that there was no ozone layer. DNA, however, is highly sensitive and easily destroyed when exposed to ultraviolet light at wavelengths near 0.25 µm.
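The severity of the UV problem can be illustrated with a simple back-of-the-envelope calculation (the bond energies below are rounded textbook values, used here only for illustration): a single photon at 0.25 µm carries roughly 5 eV, more than the energy of the covalent single bonds holding biomolecules together.

```python
# Energy of a 250 nm (0.25 micrometer) UV photon vs. typical covalent
# bond energies. A simple sketch; bond energies are rounded textbook values.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

wavelength = 250e-9                  # ~0.25 micrometers
photon_ev = H * C / wavelength / EV  # ~4.96 eV per photon

# Approximate single-bond energies in biomolecules, eV per bond:
bonds = {"C-C": 3.6, "C-N": 3.2, "C-O": 3.7}
# One such photon exceeds every one of these bond energies, which is
# why unshielded DNA and RNA are rapidly damaged by this wavelength.
```

This is why, without an ozone shield, organic molecules at the surface would be steadily photolyzed rather than accumulated.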

Harris Bernstein (2020): Sagan analyzed the flux of solar UV light that penetrated the earth’s primitive reducing atmosphere. His analysis indicated that unprotected microorganisms of the type existing today would receive a mean lethal dose at 2600 angstroms within 0.3 seconds and that this vulnerability could have posed a major problem during the early evolution of life. A protocell that has only one copy of each ssRNA (a haploid protocell) would be very vulnerable to damage, since damage to even one base in a ssRNA sequence might be lethal to the protocell by either blocking replication of the ssRNA or interfering with an essential ssRNA ribozyme function 49

If there was no oxygen and no ozone layer in the atmosphere during the first 1.5 billion years of the early Earth, and microorganisms survived only in environments protected from ultraviolet radiation, such as deep in the ocean, in lakes, or even in mud and ponds, then the question is: how could they have evolved photosynthesis, which depends on sunlight? They would have had to evolve UV protection and DNA repair mechanisms before these were of any use to them, and only then the machinery to transform sunlight into carbohydrates. In 2015, Science Daily reported that "Earth’s first bacteria made their own sunscreen". They wrote:

Earth in the days when life was just beginning had no protective ozone layer, so light-dependent, iron-oxidizing bacteria formed iron minerals around themselves to protect them from damaging ultraviolet rays. 38

That raises the question: how could they have become light-dependent phototrophs if they could not survive on the Earth's surface, exposed to sunlight and energy-rich UV radiation?

Reactive oxygen species (ROS) & the origin of life
In an aerobic, oxygen-rich atmosphere, chemical molecules on their way to becoming living cells would have faced yet another threat besides thermal oxidation and photo-oxidation: reactive oxygen species (ROS), also called radical species.

Jie Xu and colleagues wrote in a 2012 paper:
The radicals formed are highly reactive and capable of oxidizing organic molecules in close proximity. For example, hydroxyl radical (OH•) – the most studied radical species because of its prevalence in biological and environmental systems – reacts rapidly with carbohydrates, fatty acids, RNA, DNA, nucleic acids and other biological molecules leading to alteration or decomposition of these molecules. In general, the ability of minerals to produce ROS represents their potential to degrade biomolecules by oxidation, which may have had a direct or indirect impact on early life by affecting the organic inventory available for prebiotic synthesis and/or by affecting the stability of “protocell” amphiphilic membranes in contact with these mineral surfaces. Thus, ROS, in addition to other mineral properties such as hydrophilicity, hydrophobicity, surface charge, Hamaker constants, etc. may have influenced evolution of life on Earth and other solid terrestrial worlds. ROS could have affected the stability of the membranes of protocells and the earliest cells on Earth.

In the chapter "Coping with Reactive Oxygen Species" of the book Oxygen and the Evolution of Life, Heinz Decker writes on page 47:

Most of the life on Earth with which we are familiar evolved in the presence of dioxygen and had to adapt to this potentially dangerous substance. This was accomplished by evolving a large battery of antioxidant systems. Some of these systems are present in all life forms, from bacteria to mammals, indicating the appearance of at least traces of dioxygen early in the history of life. 14

Ireneusz Ślesak and colleagues write in a 2016 paper:
In many anaerobic prokaryotes, the superoxide reductases (SORs) have been identified as the main force in counteracting ROS toxicity. We found that 93% of the analyzed strict anaerobes possess at least one antioxidant enzyme, and 50% have a functional EAS, that is, consisting of at least two antioxidant enzymes: one for superoxide anion radical detoxification and another for hydrogen peroxide decomposition. The results presented here suggest that the last universal common ancestor (LUCA) was not a strict anaerobe. O2 could have been available for the first microorganisms before oxygenic photosynthesis evolved, however, from the intrinsic activity of EAS, not solely from abiotic sources. 31

Here we have further evidence that the atmosphere was aerobic right from the beginning: even anaerobic bacteria, which do not require oxygen for growth, had contact with an aerobic atmosphere and accordingly possessed protective mechanisms. Ślesak continues:

Recently several lines of evidence suggest that the appearance of oxygenic photosynthesis preceded the Great Oxidation Event of Earth's atmosphere, and as a consequence, oxygen oases, possibly even with micromolar O2 concentrations, were present in the Archean ocean.  Our results constitute additional biological support for the unorthodox hypothesis that enzymatic antioxidant systems (EAS) or a rudimentary equivalent may have been present in primordial organisms on early Earth, even before the appearance of oxygenic photosynthesis. The theoretical considerations presented here paradoxically indicate that ROS-scavenging reactions could themselves be an intracellular net source of O2/ROS inside hypothetical LUCA protocells.

Why would anaerobic bacteria evolve EAS in the absence of oxygen? That makes no sense. A picture arises in which oxygen was always part of the atmosphere, and life started out equipped from the beginning with enzymatic antioxidant systems (EAS) able to deal with ROS.

Hydrogen
Hydrogen (H) is the first atom in the periodic table, with the atomic number 1. It is a gas of diatomic molecules with the formula H2, and the most abundant chemical substance in the universe, constituting about 75% of all normal matter. Here are some facts given by Agata Blaszczak in a 2015 article:

The most abundant element in the universe, hydrogen was named after the Greek words hydro for "water" and genes for "forming." Hydrogen makes up more than 90 percent of all atoms, which equals three-quarters of the mass of the universe, according to the Los Alamos National Laboratory. Hydrogen is essential for life, and it is present in nearly all the molecules in living things, according to the Royal Society of Chemistry. The element also occurs in the stars and powers the universe through the proton-proton reaction and the carbon-nitrogen cycle. Stellar hydrogen fusion processes release huge amounts of energy as they combine hydrogen atoms to form helium, according to Los Alamos. 32

Water
Anders Nilsson et al. (2015): The structural origin of anomalous properties of liquid water: Water is the most important liquid for our existence and plays an essential role in physics, chemistry, biology and geoscience. What makes water unique is not only its importance but also the anomalous behaviour of many of its macroscopic properties. … If water would not behave in this unusual way it is most questionable if life could have developed on planet Earth. 42

Lisa Grossman (2011): WATER’S life-giving properties exist on a knife-edge. It turns out that life as we know it relies on a fortuitous but incredibly delicate balance of quantum forces. Water is one of the planet’s weirdest liquids, and many of its most bizarre features make it life-giving. For example, its higher density as a liquid than as a solid means ice floats on water, allowing fish to survive under partially frozen rivers and lakes. And unlike many liquids, it takes a lot of heat to warm water up even a little, a quality that allows mammals to regulate their body temperature. But computer simulations show that quantum mechanics nearly robbed water of these life-giving features. Most of them arise due to weak hydrogen bonds that hold H2O molecules together in a networked structure. For example, it is hydrogen bonds that hold ice molecules in a more open structure than in liquid water, leading to a lower density. By contrast, without hydrogen bonds, liquid molecules move freely and take up more space than in rigid solid structures.

Yet in simulations that include quantum effects, hydrogen bond lengths keep changing thanks to the Heisenberg uncertainty principle, which says no molecule can have a definite position with respect to the others. This destabilizes the network, removing many of water’s special properties. How water continues to exist as a network of hydrogen bonds, in the face of these destabilizing quantum effects, was a mystery. In 2009, theorist Thomas Markland, now at Stanford University, suggested a reason why water’s fragile structure does not break down completely. They calculated that the uncertainty principle should also affect the bond lengths within each water molecule, and proposed that it does so in such a way as to strengthen the attraction between molecules and maintain the hydrogen-bond network. “Water fortuitously has two quantum effects which cancel each other out.”

Until recently, though, there was no way to discover whether there is any variation in bond length within the water molecule. Now, Salmon’s team has done this. Their trick was to use so-called heavy water, in which the molecule’s two hydrogen atoms are replaced with deuterium. This isotope of hydrogen contains a neutron as well as a proton. The extra bulk makes it less vulnerable to quantum uncertainties. It’s like turning the quantum mechanics half off.
Salmon and colleagues shot beams of neutrons at different versions of water, and studied the way they bounced off the atoms – a precise way to measure bond lengths. They also substituted heavier oxygen atoms into both heavy and normal water, which allowed them to determine which bonds they were measuring. They found that the hydrogen-oxygen bonds were slightly longer than the deuterium-oxygen ones, which is what you would expect if quantum uncertainty was affecting water’s structure. “No one has ever really measured that before,” says Benmore. “Water fortuitously has two quantum uncertainty effects which cancel each other out.” We are used to the idea that the cosmos’s physical constants are fine-tuned for life. Now it seems water’s quantum forces can be added to this “just right” list. 41

Water is fine-tuned for life
Science News (2017): Water has some remarkable properties. It has almost the highest specific heat capacity of any substance, meaning that it can absorb a lot of heat-energy when its environment is hot, and release a lot of heat-energy when its environment is cold -- making it the primary regulator of surface temperature on Earth. It becomes less dense as it solidifies, so that water habitats in cold regions will remain partially liquid as solid ice floats to the top of the water column and forms a protective layer -- allowing aquatic inhabitants to survive frigid winters. When water molecules form they adopt a tetrahedral geometry, one of the consequences of which is formation of a partial electric dipole -- making water an important solvent. In fact, water is called the "universal solvent", because it dissolves more substances than any other liquid.

The source of many of water's remarkable attributes is its molecular geometry: the partial electric dipole formed by its two hydrogen moieties and one oxygen atom, such that the oxygen atom is slightly electronegative at one pole and the covalently bonded hydrogen atoms are slightly positively charged at the other. This causes water to form tetrahedrally oriented weak bonds with adjacent water molecules. It is for this reason that water molecules are strongly attracted to each other, meaning that it takes a good amount of energy to get them to separate, as in evaporation or the increased intermolecular distance associated with freezing. This weak bonding between hydrogen atoms and electronegative elements is called hydrogen bonding, which gives water its amazing qualities but is also involved in many other molecular interactions, such as holding DNA strands together to form the double helix, coordinating protein folding to enable nearly every function of the cell, and causing stable membranes to form through hydrophobic-hydrophilic interactions. It is such an important interaction that if the strength of hydrogen bonding were even slightly different in water, then life as we know it would not be possible. 48

1. Navarro-Gonzalez, Rafael: Prebiotic nitrogen fixation by lightning in carbon dioxide-nitrogen-hydrogen mixtures relevant to the early Earth's atmosphere 4 February, 2021 
2. Mateo-Marti: Pyrite-induced UV-photocatalytic abiotic nitrogen fixation: implications for early atmospheres and Life 25 October 2019 
3. Sandra Pizzarello: Abundant ammonia in primitive asteroids and the case for a possible exobiology February 28, 2011 
4. András Stirling: [url=https://pubs.acs.org/doi/10.1021/acs.inorgchem.5b02911]Prebiotic NH3 Formation: Insights from Simulations[/url]
5. Mark Dörr: A Possible Prebiotic Formation of Ammonia from Dinitrogen on Iron Sulfide Surfaces† 2003 
6. Eric S. Boyd: New insights into the evolutionary history of biological nitrogen fixation 2013 Aug 5 
7. Paul G Falkowski: Electrons, life and the evolution of Earth's oxygen cycle 2008 Aug 27 
8. Donald E. Canfield: The Evolution and Future of Earth's Nitrogen Cycle March 14, 2011 
9. Andrew H. Knoll: Fundamentals of Geobiology 30 March 2012 
10. Katrina Krämer: Nitrogenase 22 February 2019 
11. F. Hoyle: Astronomical Origins of Life: Steps Towards Panspermia 2000 
12. Jennifer Chu: Earliest life may have arisen in ponds, not oceans April 12, 2019 
13. Madeline C. Weiss: The physiology and habitat of the last universal common ancestor 25 JULY 2016 
14. Heinz Decker: [url=https://link.springer.com/book/10.1007/978-3-642-13179-0]Oxygen and the Evolution of Life[/url] 2011 
15. Yilin Hu: Annual Review of Biochemistry February 1, 2016 
16. Hermann Bothe: The book Biology of the Nitrogen Cycle 2007 
17. B. B. Ward: [url=https://www.sciencedirect.com/science/article/pii/B9780080454054002809]Nitrification[/url]
18. U. Skiba: [url=https://www.sciencedirect.com/science/article/pii/B9780080454054002640]Denitrification[/url]
19. Nicolas Romillac: Ammonification 2019 
20. Moselio Schaechter: Encyclopedia of Microbiology  Third Edition • 2009 
21. James F. Kasting Fundamentals of Geobiology  page 102 
22. Setting the stage for life: Scientists make key discovery about the atmosphere of early Earth  November 30, 2011:
23. Dustin Trail: The oxidation state of Hadean magmas and implications for early Earth’s atmosphere 30 November 2011 
24. Charlotte Price Persson: [url=https://sciencenordic.com/denmark-earth-life/earth-had-oxygen-800-million-years-earlier-than-thought/1431149]Earth had oxygen 800 million years earlier than thought[/url] 21 March 2016 
25. Leslie E. Orgel: The Origin of Life on Earth 1997
26. Erich Dimroth:  Precambrian atmospheric oxygen: evidence in the sedimentary distributions of carbon, sulfur, uranium, and iron 12 April 1976 
27. Science Daily: Primordial Air May Have Been "Breathable" January 9, 2002
28. Jacob Haqq-Misra: Availability of O2 and H2O2 on Pre-Photosynthetic Earth 
29. R. BUICK: Did the Proterozoic ‘Canfield Ocean’ cause a laughing gas greenhouse? 09 May 2007 
30. ALEISHA C. JOHNSON: Reconciling evidence of oxidative weathering and atmospheric anoxia on Archean Earth 29 Sep 2021 
31. Ireneusz Ślesak: Enzymatic Antioxidant Systems in Early Anaerobes: Theoretical Considerations 2016 May 1 
32. Agata Blaszczak: Facts About Hydrogen January 23, 2015 
33. Dale P. Cruikshank: Generating an Atmosphere 24 DECEMBER 2010 
34. Bogdanov, R.: Water radiolysis, a possible source of atmospheric oxygen 2002 
35. Christopher T. Reinhard: Earth’s oxygen cycle and the evolution of animal life July 25, 2016 
36. Hugh Ross: Origin of Our Amazing Nitrogen April 1, 2019 
37. Lars E. P. Dietrich: The co-evolution of life and Earth 2006 Jun 6 
38. Earth’s first bacteria made their own sunscreen October 26, 2015 
39. Y. Ueno: Revisiting Redox State of the Early Earth's Atmosphere and Prebiotic Synthesis 
40. Mikhail Butusov: The Role of Phosphorus in the Origin of Life and in Evolution  05 March 2013
41. Lisa Grossman: Water's quantum weirdness makes life possible 19 October 2011
42. Anders Nilsson: The structural origin of anomalous properties of liquid water 08 December 2015
43. Dmitry Yu Zubarev: Uncertainty of Prebiotic Scenarios: The Case of the Non-Enzymatic Reverse Tricarboxylic Acid Cycle: 26 January 2015 
44.  Jessica L. E. Wimmer: Energy at Origins: Favorable Thermodynamics of Biosynthetic Reactions in the Last Universal Common Ancestor (LUCA) 13 December 2021
45. PIER LUIGI LUISI: THE EMERGENCE OF LIFE From Chemical Origins to Synthetic Biology  2006
46. Paul C. W. Davies The algorithmic origins of life 2013 Feb 6
47. Kamila B. Muchowska: Metals promote sequences of the reverse Krebs cycle 2017 Nov;1
48. Science News: Scientists Finally Measure the Strength of the Bonds That Hold Together Water  May 17, 2017
49. Harris Bernstein: Origin of DNA Repair in the RNA World October 12th, 2020



Last edited by Otangelo on Thu Jun 30, 2022 3:01 pm; edited 92 times in total


Chapter 4

Martin Chaplin (2007): "...if the hydrogen bond strength was slightly different from its natural value then there may be considerable consequences for life. At the extremes water would not be liquid on the surface of Earth at its average temperature if the hydrogen bonds were 7% stronger or 29% weaker. The temperature of maximum density naturally occurring at about 4°C would disappear if the hydrogen bonds were just 2% weaker. Major consequences for life are found if the hydrogen bonds did not have their natural strength. Even very slight strengthening of the hydrogen bonds may have substantial effects on normal metabolism. Water ionization becomes much less evident if the hydrogen bonds are just a few percent stronger but pure water contains considerably more H+ ions if they are a few percent weaker. The important alkali metal ions Na+ and K+ lose their distinctive properties if the hydrogen bonds are 11% stronger or 11% weaker respectively. Hydration of proteins and nucleic acids depends importantly on the relative strength of the biomolecule-water interactions as compared with the water-water hydrogen bond interactions. Stronger water hydrogen bonding leads to water molecules clustering together and so not being available for biomolecular hydration. Generally the extended denatured forms of proteins become more soluble in water if the hydrogen bonds become substantially stronger or weaker. If the changes in this bonding are sufficient, present natural globular proteins cannot exist in liquid water. The overall conclusion of this investigation is that water’s hydrogen bond strength is poised centrally within a narrow window of its suitability for life." 66

This places hydrogen bond strength, particularly of water, in the "fine-tuning category" with several other constants and force interactions of nature that, if they were to be even slightly different from their given values, would prohibit the formation of a universe with even nominal degrees of complexity.

Temperatures on early earth
David Deamer (2021) mentions a paper by Paul Knauth (2003): by analyzing certain minerals, Knauth estimated that the global temperature at the time of life's origin was in the range of 55 to 85 degrees C. 63 64 A more recent paper (2017) reports: "Our findings indicate a cooling of Earth's surface temperature from ∼75 °C in the Archean (∼3,000 Ma) to ∼35 °C in the Devonian (∼420 Ma), consistent with previous geological and enzyme-based results." 65

That creates a paradox:

The faint young sun paradox
René Heller and colleagues explain the problem in a recent scientific paper from 2021:
Geological evidence suggests liquid water near the Earth’s surface as early as 4.4 gigayears ago when the faint young Sun only radiated about 70% of its modern power output. At this point, the Earth should have been a global snowball if it possessed atmospheric properties similar to those of the modern Earth. 62

Martin Schwarzschild, the German-American astrophysicist, and the British astronomer Fred Hoyle independently came to the same conclusion in the 1950s: about 4 Gya, the Earth received roughly 30% less solar luminosity than it does now. If the Earth had been that cold, it could not have hosted liquid water, nor permitted life to emerge and flourish. Scientists have scratched their heads for decades, unable to solve the paradox. Many hypotheses have been modeled and refined over time. Carl Sagan and George Mullen made substantial attempts to solve the riddle in the seventies. They suggested that ammonia, as a greenhouse gas, could have contributed to heating the Earth and keeping water liquid. That idea did not last long: ammonia is destroyed by solar UV radiation.
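The scale of the problem can be sketched with a simple Stefan-Boltzmann equilibrium estimate (a deliberately simplified sketch that ignores greenhouse feedbacks and albedo changes; the parameter values are standard textbook figures, not taken from the papers cited here):

```python
# Radiative equilibrium temperature of a gray Earth:
#   T_eq = (S * (1 - albedo) / (4 * sigma)) ** 0.25
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_NOW = 1361.0    # modern solar constant, W m^-2
ALBEDO = 0.3      # modern Bond albedo, assumed constant here

def t_eq(solar_constant, albedo=ALBEDO):
    """Equilibrium surface temperature in kelvin, no greenhouse effect."""
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_modern = t_eq(S_NOW)         # ~255 K with today's Sun
t_hadean = t_eq(0.70 * S_NOW)  # ~233 K with a 30% fainter young Sun
greenhouse = 33.0              # modern greenhouse warming, K
```

Even crediting the young Earth with the full modern greenhouse warming of about 33 K, its surface would sit near 266 K, below the freezing point of water: the paradox in quantitative form.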

Evidence suggests that there was liquid water 4.3 Gya. 3 In Heller's paper from 2021, mentioned above, he provides the following hypothesis as a source of heat, narrated by Jonathan O'Callaghan in an article published in Quanta Magazine:
Shortly after the moon formed, it was likely 15 times closer to Earth than it is today. The moon’s gravity would have had a huge impact, creating enormous tidal waves that towered 2 kilometers above any magma or liquid-water oceans present. It would also have pushed and pulled Earth’s interior, generating extreme tidal heating that increased the planet’s temperature. While not enough to solve the faint young sun paradox on its own, the moon could have given Earth a vital boost over our planet’s first 100 million to 300 million years, increasing Earth’s temperature by several degrees and helping to drive volcanic activity across the surface. 2

Several different hypotheses have been elaborated to solve the problem. James C. G. Walker, for example, proposed a "negative feedback mechanism for the long-term stabilization of earth's surface temperature" in a paper published in 1981. 4 But even in 2011, M. T. Rosing and colleagues conceded in an article published in Nature that the faint young Sun paradox remains. 5

O'Callaghan, mentioned above, at the end of the article, brings the following interesting fact to the reader's attention:
For planets in other solar systems, the faint young sun problem complicates the question of extraterrestrial life. In December 2020, Tyrrell calculated that Earth’s continuing habitability is mostly due to chance. He created a computer model of 100,000 planets. Each started out as habitable. He then subjected each planet to 100 simulations of various climate feedback scenarios. For 91% of the planets, not a single simulation kept the planet habitable over geological timescales. “Earth’s success was not an inevitable outcome but rather was contingent,” he wrote. “It could have gone either way.” Thus, in order for exoplanets to have the potential to develop life, perhaps they need to have the right ingredients in just the right circumstances — like Earth.
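The structure of Tyrrell's experiment can be conveyed with a toy Monte Carlo model (purely illustrative: the climate dynamics, feedback strengths, and planet counts below are invented assumptions for the sketch, not Tyrrell's actual model or results):

```python
import random

# Toy version of the experiment: many planets, each rerun several times
# with random climate perturbations opposed by a planet-specific
# stabilizing feedback. All numbers here are illustrative assumptions.
random.seed(42)

def rerun_habitable(feedback, steps=200, limit=30.0):
    """One climate history: random forcing vs. stabilizing feedback.
    Returns True if the temperature anomaly never leaves the band."""
    t = 0.0
    for _ in range(steps):
        t += random.gauss(0.0, 3.0) - feedback * t
        if abs(t) > limit:
            return False
    return True

def survival_fraction(n_planets=300, n_reruns=20):
    """Fraction of planets that stay habitable in at least one rerun."""
    survived = 0
    for _ in range(n_planets):
        feedback = random.uniform(0.0, 0.2)  # luck of the draw per planet
        if any(rerun_habitable(feedback) for _ in range(n_reruns)):
            survived += 1
    return survived / n_planets

frac = survival_fraction()
```

The point of the design is the same as in Tyrrell's study: whether a planet survives depends partly on its drawn feedback strength and partly on the random history it happens to experience, so habitability is contingent rather than guaranteed.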

What Tyrrell is saying is that there was no physical necessity: In the discussion section of the paper, he writes:
If Earth’s climate system had been subjected, for instance, to different magnitudes of volcanic supereruptions or different timings of impacting asteroids then the outcome could have been different.

In other words, other conditions were equally likely as those that were actualized and permitted life to emerge. Our atmosphere is indeed finely adjusted and balanced to permit life. Several factors need to be just right. Nitrogen and carbon dioxide need to stand in the correct ratio in the atmosphere for life to emerge, and nitrogen had to be available in fixed form. Volcanoes and geysers had to produce carbon dioxide at rather low levels, because at high levels it would have been toxic. A water cycle had to guarantee precipitation. Atmospheric temperatures had to be stable during the day, and visible light and infrared radiation had to fall within the just-right, life-permitting range. The Earth has a very thin atmosphere, with just the right density to maintain the presence of liquid, solid, and gaseous water necessary for life. The atmosphere's pressure enables our lungs to function and water to evaporate at an optimal rate to support life. Its transparency allows an optimal range of life-giving solar radiation to reach the surface, and its capacity to hold water vapor provides for stable temperature and rainfall ranges. The atmosphere also requires the correct quantity, and the correct rate of change, of greenhouse gases. If the atmospheric pressure on Earth were ten times smaller, body fluids would vaporize at 38 °C. The oxygen level (about 21%) is about right: above 25%, land vegetation could hardly survive and burning forests would be far more common, while with too little oxygen, advanced life forms could not survive. In fact, both atmospheric pressure and oxygen levels are controlled by complex feedback mechanisms and interactions between the hydrosphere, the biosphere, and the material making up the Earth's crust. The Earth also had to be large enough to exert sufficient gravity to keep atmospheric molecules from escaping into space. 6

1. The values, constants, and parameters for a life-permitting atmosphere on Earth must fall within a finite range for biological life to be possible.
2. These fine-tuned parameters could have taken any of an infinite number of different values.
3. The probability of the life-permitting values occurring by chance approaches zero and is, in practical terms, factually zero.
4. The best explanation is an intelligent agent who had a goal in mind, namely to create contingent beings, and who designed our life-permitting atmosphere.

Major elements essential for life to start
Seven elements are classified as major elements required for life: calcium, phosphorus, potassium, chlorine, sulfur, sodium, and magnesium. A common thread runs through all of them: every element necessary for the cell's survival and operation has to be taken up by the cell through molecular micro-machines. Complex membrane-embedded protein channels, ion pumps, ion exchangers, transporters, importers, translocons and translocases, symporters and antiporters, and ligands & membrane signal receptors control the intracellular levels of each element. There cannot be too much or too little; there has to be a careful balance, a homeostatic milieu that is just right. It has to be monitored, and many of the players that instantiate this balance depend on one another. It is a carefully orchestrated joint venture in which, if one of the players fails, the system fails and the cell might die. Each cell hosts millions of pumps embedded in its plasma membrane.

Alan R. Kay (2017): The monovalent inorganic ions, sodium (Na+), potassium (K+), and chloride (Cl−) are, next to water, the second most abundant components of cells. These ions play central roles in the energetics of cells and in determining the osmotic stability of cells. 29

Helen Greenwood Hansma (2004): The transport of ions across the membranes of cells and organelles is a prerequisite for many of life’s processes. Transport often involves very precise selectivity for specific ions. Recently, atomic-resolution structures have been determined for channels or pumps that are selective for sodium, potassium, calcium, and chloride: four of the most abundant ions in biology. The flow of ions across the cell membrane is essential to many of life's processes. Ion pumps build gradients across the membrane, which are then used as an energy source by ion channels and other transport proteins to pump nutrients into cells, generate electrical signals, regulate cell volume, and secrete electrolytes across epithelial layers. Life depends on the continued flow of ions into and out of cells. But the cell membrane presents a serious energy barrier to an ion crossing it. This is because ions are energetically more stable in water than in the oily substance of the membrane interior: Outside the membrane, polar water molecules point their charged edges toward an oppositely charged ion, but inside the membrane, such stabilizing interactions are reduced. The resulting energy difference is so large that the predominant ions in biological systems—Na+, K+, Ca2+, and Cl−—would essentially never cross the membrane unaided. Ion pumps, ion exchangers, and ion channels (membrane proteins that we refer to here as the ion-transport proteins) are used by the cell to transport ions across membranes. 26

Wilfred D. Stein (2015):  Sodium, potassium, calcium, protons, chloride, and bicarbonate ions (and many others) are all needed by cells and pass in and out of cells rapidly, in a controlled fashion, well adjusted to cellular requirements. 30

Let's take a closer look at each element, one by one.

Calcium
Calcium, with the atomic number 20, is one of the major and most abundant elements in the Earth’s crust. 9 Calcium is a necessary component of all life forms and is needed in large quantities. In view of the importance of calcium (Ca2+) as a universal intracellular regulator and its essential role in cell signaling and communication in many intra- and extracellular biological processes, it is remarkable how little it is mentioned in the origins (evolution/ID) debate. The origin of life cannot be elucidated without taking into consideration, and explaining, how the calcium signaling machinery and cell homeostasis appeared. J. K. Jaiswal wrote in a scientific article from 2001:

Calcium is among the most commonly used ions, in a multitude of biological functions, so much so that it is impossible to imagine life without calcium.  It appears that during the origin and early evolution of life the Ca2+ ion was given a unique opportunity to be used in several biological processes because of its unusual physical and chemical properties. It is difficult to find a physiological process in a cell that is not dependent on calcium. 7

Shemarova (2005): The first forms of life required an effective calcium (Ca2+) homeostatic system, which maintained intracellular Ca2+ at comfortably low concentrations—somewhere around 100 nanomolar, this being ∼10,000–20,000 times lower than that in the extracellular milieu. Damage the ability of the plasma membrane to maintain this gradient and calcium will flood into the cell, precipitating calcium phosphate, damaging the ATP-generating machinery, and kill the cell. In order to maintain such a low cytosolic calcium concentration, Ca2+ ions thus have to be transported against a steep concentration gradient. 8
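The energetic cost of the gradient Shemarova describes can be estimated from the electrochemical potential difference. Here is a back-of-the-envelope sketch, assuming an extracellular Ca2+ concentration of ~2 mM and a membrane potential of -70 mV; both are typical textbook values assumed for illustration, not figures from the quoted paper.

```python
import math

R = 8.314     # gas constant, J/(mol*K)
T = 298.0     # temperature, K
F = 96485.0   # Faraday constant, C/mol

# From the quote: cytosolic Ca2+ ~100 nM, ~10,000-20,000x lower than outside.
# Illustrative assumptions: extracellular Ca2+ ~2 mM, membrane potential -70 mV.
c_in, c_out = 100e-9, 2e-3   # mol/L
z, v_m = 2, -0.070           # ion charge; membrane potential, V (inside negative)

chemical = R * T * math.log(c_out / c_in) / 1000   # kJ/mol, concentration term
electrical = z * F * abs(v_m) / 1000               # kJ/mol, pushing a 2+ ion out of a
                                                   # negatively charged interior
total = chemical + electrical
print(f"~{total:.0f} kJ per mol of Ca2+ exported")
```

At roughly 38 kJ per mole of Ca2+ exported, the cost is comparable to the free energy released by ATP hydrolysis under cellular conditions, which is why export has to be driven by dedicated pumps rather than by diffusion.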

The making of a power gradient (a thermodynamically uphill process) is always an engineering achievement, and a great deal of knowledge, planning, and intelligence is required for its setup. Hydroelectric dams are highly complex and are always the result of years of planning by the most skilled, educated, and knowledgeable engineers of large companies. As with many human inventions, the engineering solutions discovered by man have been employed in nature, at least since life began, in a far more elaborate and sophisticated way. So inanimate chemistry had the innate drive, through trial and error, to produce a cell membrane and, amongst tons of other things, a Ca2+ gradient through highly complex calcium channels, keeping a 10,000-fold higher concentration of calcium outside the cell than inside the cytosol, in order to create an environment suited for a protocell to keep its vital functions and not die? Why would chemical elements do that? Did they have the innate drive and goal to become alive and to maintain an ambiance prerequisite for life, the homeostasis of various elements?

Phosphorus
Phosphorus (P) has the atomic number 15 and is found in the Earth's crust at a concentration of about 1 gram per kilogram. It is required for energy production, DNA synthesis, and protein synthesis. 32

Balkrishna Tiwari (2014): Phosphorus plays a very important role in the synthesis of nucleic acids, phospholipids, and many biochemical intermediates of the cell. Its role in cellular signaling and maintenance of biochemical energy pool makes it a very essential macronutrient for life. Inorganic phosphate (Pi) or orthophosphate is the only form of phosphorus that can be directly used by the living cell but it is limited in many ecosystems. 

Prof. Dr. Ruth E. Blake (2020): From structural to functional, informational, and energetic roles, Phosphorus is absolutely essential to life. 21  

Norio Kitadai (2017): It constitutes biomolecules that play central roles in replication and information (RNA and DNA), metabolism (ATP, NADPH, and other coenzymes), and structure (phospholipids). 58

Radosław W. Piast (2020): All life on Earth uses one universal biochemistry stemming from one universal common ancestor of all known living organisms. One of the most striking features of this universal biochemistry is its utter dependence on phosphate group transfer between biochemical molecules. Both nucleic acid and peptide biological synthesis relies heavily on phosphate group transfer. Such dependents strongly indicate very early incorporation of phosphate chemistry in the origin of life. Perhaps as early as prebiotic soup stage. 22

Mikhail Butusov wrote in the book: The Role of Phosphorus in the Origin of Life and in Evolution:
Phosphorus, in the form of phosphate, has played an important role in the origin and evolution of life on several different levels. It was, most likely, a key component in the early precursors of RNA. It plays an essential role in both the genetics and the energy systems of all living cells as well as in the cell membrane of all modern cells.  Phosphorus has also had a decisive role in forming the climatic and atmospheric conditions that set the boundary conditions for evolution and led to us humans and the world we know now. 10

Stanley Miller gave a sobering verdict, based on his investigation, regarding a prebiotic source of phosphate:

There are no known efficient prebiotic synthesis of high-energy phosphates or phosphate esters. There is no known robust synthesis of polyphosphates or even pyrophosphate, thereby raising the question of whether polyphosphates were used in prebiotic reactions and indeed if the pre-RNA world had informational macromolecules that contained phosphate at all. These results suggest that it may not be possible to produce adequate concentrations of high-energy phosphates using electric discharges or volcanic sources. We recognize that we may have missed some high-energy compounds in these experiments so this statement needs to be taken with some reservation. It is perhaps significant that there have been few experiments in the last 20 years attempting to produce high-energy phosphates. This suggests that robust syntheses may not be possible. 59

De Duve (2005): “How did nature choose phosphates?” Unless one believes in intelligent design, fitness does not account for use, except through a process of selective optimization. But phosphate must have entered metabolism before replication and its correlates, mutation and selection, came on the scene, presumably with RNA. There must be a chemical explanation for nature’s choice of phosphates. As I have discussed elsewhere (de Duve, 1991, 2001), this explanation is far from obvious. 61

There was an attempt to solve the problem raised by Miller. As Sci-News put it: chemical reactions that make the building blocks of living things need a lot of phosphorus, but phosphorus is scarce. 12

Jonathan D. Toner and colleagues supposedly found an answer to this problem in certain types of lakes. They write (2019): Phosphate is generally limited to micromolar levels in the environment because it precipitates with calcium as low-solubility apatite minerals. This disparity between laboratory conditions and environmental constraints is an enigma known as “the phosphate problem.” Here we show that carbonate-rich lakes are a marked exception to phosphate-poor natural waters. In principle, modern carbonate-rich lakes could accumulate up to ∼0.1 molal phosphate under steady-state conditions of evaporation and stream inflow because calcium is sequestered into carbonate minerals. This prevents the loss of dissolved phosphate to apatite precipitation. Even higher phosphate concentrations (>1 molal) can form during evaporation in the absence of inflows. On the prebiotic Earth, carbonate-rich lakes were likely abundant and phosphate-rich relative to the present day because of the lack of microbial phosphate sinks and enhanced chemical weathering of phosphate minerals under relatively CO2-rich atmospheres. 11

And co-author David Catling claimed: The extremely high phosphate levels in these lakes and ponds would have driven reactions that put phosphorus into the molecular building blocks of RNA, proteins, and fats, all of which were needed to get life going 60

How to put phosphorus into the molecular building blocks without the complex cellular machinery is another feat entirely, and it remains unexplained. Miller was also pessimistic about that. He wrote: Phosphate is an unlikely reagent for the prebiotic world, and this may also apply to the pre-RNA world.

The phosphorus cycle
Living organisms utilize inorganic phosphate from the ecosystem and return it in the form of organic phosphorus. At this level of the phosphorus cycle, microbes contribute significantly by adapting various mechanisms to mineralize dissolved organic phosphorus (DOP), which contributes a major part of the total dissolved phosphorus pool in oceanic, freshwater, and terrestrial ecosystems. Dissolved organic phosphorus contributes >80% of the total pool of dissolved phosphorus in the North Atlantic Ocean. 14

Libretexts explains: Rocks are a reservoir for phosphorus, and these rocks have their origins in the ocean. Phosphate-containing ocean sediments form primarily from the bodies of ocean organisms and from their excretions. However, volcanic ash, aerosols, and mineral dust may also be significant phosphate sources. This sediment then is moved to land over geologic time by the uplifting of Earth’s surface.  Marine birds play a unique role in the phosphorous cycle. These birds take up phosphorous from ocean fish. Their droppings on land (guano) contain high levels of phosphorous. 14

Bacteria use sophisticated mechanisms to sense, acquire, and import phosphate, and to maintain intracellular amounts at optimal levels.
Juan Francisco Martín and colleagues explain in a scientific paper from 2021: Bacteria transport inorganic phosphate by the high-affinity phosphate transport system PstSCAB, and the low-affinity PitH transporters. The PstSCAB system consists of four components. PstS is the phosphate-binding protein and discriminates between arsenate and phosphate. 15

Vanessa R. Pegos and colleagues write in a scientific publication from 2017: Bacteria have developed specialized systems for phosphorus uptake such as the low-affinity transporter, PitA, and the Phosphate Specific Transporter (Pst), an ATP-Binding Cassette transporter (ABC transporter). Structurally, the Pst system consists of two transmembrane proteins, two associated cytoplasmic ATPases and a periplasmic protein responsible for the affinity and specificity of the system. 16 

Armen Y. Mulkidjanian and colleagues (2019): A topologically closed membrane is a ubiquitous feature of all cellular life forms. This membrane is not a simple lipid bilayer enclosing the innards of the cell: far from that, even in the simplest cells, the membrane is a biological device of a staggering complexity that carries diverse protein complexes mediating energy-dependent – and tightly regulated - import and export of metabolites and polymers 17

Angus Menuge asks in his book: Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105: Hence a chicken and egg paradox: a lipid membrane would be useless without membrane proteins but how could membrane proteins have evolved in the absence of functional membranes? 18

Joseph Panno Ph.D. writes in: THE CELL Evolution of the First Organism, page 17: The cell membrane, far from being featureless, contains a molecular forest that gives the cell its eyes, its ears, and the equipment it needs to capture food and to communicate with other cells. Phospholipids, the main component in cell membranes, are composed of a polar head group (usually an alcohol), a phosphate, glycerol, and two hydrophobic fatty acid tails. Fat that is stored in the body as an energy reserve has a structure similar to a phospholipid, being composed of three fatty acid chains attached to a molecule of glycerol. The third fatty acid takes the place of the phosphate and head group of a phospholipid. Sugars are polymerized to form chains of two or more monosaccharides. Disaccharides (two monosaccharides), and oligosaccharides (about 3–12 monosaccharides), are attached to proteins and lipids destined for the cell surface. Polysaccharides, such as glycogen and starch, may contain several hundred monosaccharides and are stored in cells as an energy reserve. 19

So there is a further catch-22 problem: cell membranes require phosphorus, but the uptake of phosphorus into the cell, to make daughter cells whose membranes use phosphorus, requires pre-existing cell membranes with phosphorus import channels. Cell membranes only come from cell membranes. A cell cannot produce the cell membrane de novo from scratch; it inherits it. Daughter cell membranes come only from mother cell membranes. The mother cell grows to twice its starting size, expands its membrane, and once it reaches the right size, it splits. The process is called binary fission and is an enormously complex process, mediated by a multiprotein complex called the divisome. 20

Potassium
Potassium (K) has the atomic number 19 and is found in nature as ionic salts, dissolved in seawater, and in many minerals. Helen Greenwood Hansma and colleagues explain:

All types of living cells have high intracellular potassium concentrations [K+]. When and how did this high [K+] appear? This is a mystery. Maintaining the K+ gradient across the cell membrane is energetically expensive. Ribosomes require K+ and are essential for life. Many other key cellular processes also require K+. 23

Alexey Rozov and colleagues write in a science article from 2019:
Potassium ions are required for subunits association and stabilization of tRNAs, rRNAs, and r-proteins. These results shed light on the role of metal ions in the ribosome architecture and function, thereby expanding our view on fundamental aspects of protein synthesis. 24

D. V. Dibrova et al. explain in a science article from 2014: Cell cytoplasm of archaea, bacteria, and eukaryotes contains substantially more potassium than sodium, and potassium cations are specifically required for many key cellular processes, including protein synthesis. we have argued that the first cells could emerge in the pools and puddles at the periphery of primordial anoxic geothermal fields, where the elementary composition of the condensed vapor would resemble the internal milieu of modern cells. Marine and freshwater environments generally contain more sodium than potassium. Therefore, to invade such environments, while maintaining excess of potassium over sodium in the cytoplasm, primordial cells needed means to extrude sodium ions. 25

Evidently, if life was created from the get-go, then the intelligent designer had no such problems as those described by Dibrova. The authors also do not take into consideration that a fully operational cell membrane and the membrane pumps guaranteeing a homeostatic intracellular milieu are essential from the get-go, including the membrane channels that keep and control all the ion levels as they have to be. It is not feasible that such a state of affairs could emerge through slow evolutionary processes. This is an all-or-nothing business.

Sodium-potassium pumps
Libretext explains: The sodium-potassium pump system moves sodium and potassium ions against large concentration gradients. It moves two potassium ions into the cell where potassium levels are high and pumps three sodium ions out of the cell and into the extracellular fluid. Three sodium ions bind with the protein pump inside the cell. The carrier protein then gets energy from ATP and changes shape. In doing so, it pumps the three sodium ions out of the cell. At that point, two potassium ions from outside the cell bind to the protein pump. The potassium ions are then transported into the cell, and the process repeats. The sodium-potassium pump is common to all cellular life. It helps maintain cell potential and regulates cellular volume. 27
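The stoichiometry described above (three Na+ out, two K+ in, one ATP per cycle) can be checked with a quick free-energy calculation. The concentrations and membrane potential below are typical animal-cell textbook values, assumed here purely for illustration:

```python
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday constant

# Typical animal-cell values (illustrative assumptions):
na_in, na_out = 12e-3, 145e-3     # Na+ concentrations, mol/L
k_in, k_out = 140e-3, 4e-3        # K+ concentrations, mol/L
v_m = -0.070                      # membrane potential, V (inside negative)

def transport_cost(c_from, c_to, z, dv):
    """Free energy (J/mol) to move an ion from c_from to c_to across a potential step dv."""
    return R * T * math.log(c_to / c_from) + z * F * dv

# 3 Na+ moved out: from the negative interior to the outside (+70 mV step for a cation)
cost_na = 3 * transport_cost(na_in, na_out, z=1, dv=-v_m)
# 2 K+ moved in: from outside into the negative interior (-70 mV step, partly downhill)
cost_k = 2 * transport_cost(k_out, k_in, z=1, dv=v_m)

total_kj = (cost_na + cost_k) / 1000
print(f"~{total_kj:.0f} kJ per mol of pump cycles")
```

The roughly 44 kJ per mole of cycles sits just below the ~50 kJ/mol typically released by ATP hydrolysis in vivo (again an assumed textbook figure), so one ATP per cycle suffices, but with little margin to spare.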

Of course, instantiating homeostasis in the cell requires energy. How could the early Earth have instantiated such energy gradients on its own, going against the direction of thermodynamics?

Sodium
Sodium (Na) has the atomic number 11 in the periodic table. It is the sixth most abundant element in the Earth's crust and exists in numerous minerals. Sodium affects cell membrane permeability and other cell membrane functions. 32

Atsuo Nishino et al. explain:
Every cell within living organisms actively maintains an intracellular sodium (Na+) concentration that is 10-12 times lower than the extracellular concentration. The cells then utilize this transmembrane Na+ concentration gradient as a driving force to produce electrical signals, sometimes in the form of action potentials. 28

D. V. Dibrova (2018): While maintaining excess of potassium over sodium in the cytoplasm, primordial cells needed means to extrude sodium ions. 67

Chlorine
Chlorine (Cl) has the atomic number 17. Teresia Svensson explains in a scientific article from 5 January 2021:

Chlorine (Cl) is one of the 20 most abundant elements on earth and has various essential functions for living organisms. During the past decades, there have been several unexpected discoveries regarding the terrestrial chlorine (Cl) cycle. Enzymatic control of chlorination processes has been described, and the genetic capacity to carry out chlorination is widespread among prokaryotes and eukaryotes alike. The extensive natural chlorination processes in soil suggest that the Cl turnover likely is linked to common ecosystem processes. 31

Dr. Lawrence Wilson writes in an article from 2019:

This is a fascinating element that is found in all living tissue.  Chlorine is essential for the function of cleansing the body of debris.  It is also exchanged in the stomach to produce hydrochloric acid, a very necessary acid for protein digestion. Chlorine is a member of a group of elements called halogens.  Others in this group are fluoride, iodine, and bromine.  The body maintains a delicate balance between all these elements. Today too much chlorine, bromine, and fluoride are overwhelming the iodine and causing deficiencies in our bodies. The deficiency of this element is non-existent, unlike all the other electrolytes.  The reason is that chlorine is part of salt (NaCl).  Most people eat too much, rather than too little table salt, as it is found in almost all prepared and processed food items today.  Thus we do not focus on this element in terms of deficiencies. In contrast, excessive exposure to chlorine is a severe problem.  Too much table salt and chlorinated water are the main sources. Some bleached flour products are also sources.  Environmental contamination of the food, water, and air are constant sources of this element, which is highly toxic in these forms. 

Metallomics
There are several "omics" sciences: the genome, the epigenome, the glycome, the lipidome, the mobilome, the transcriptome, the metabolome, the proteome, the interactome, the signalosome, and, perhaps less known, the metallome. The latter denotes the ensemble of research activities related to metals of biological interest. 34 Metals play unique and critical roles in biology, promoting structures and chemistries that would not otherwise be available to proteins alone. 40

Cells need to have mechanisms for uptake, biosynthesis, keeping a homeostatic milieu, an appropriate concentration range, and expelling waste products of all life-essential metals. Keeping homeostasis requires a truly masterful, careful act of balance and regulation.  Systems that are inappropriately regulated can result in potentially disastrous consequences.

The trace elements
Not all trace elements used in biology are essential for life to start. Iodine (I), atomic number 53, is essential for thyroid hormones, which are critical cell-signaling molecules. 45 Fluorine (F), atomic number 9, is important for the maintenance of the solidity of bones. 46 The ease of the change in the oxidation state of vanadium (V), atomic number 23, is employed by bacteria and cyanobacteria, as well as by eukarya (algae and fungi), in respiratory and enzymatic functions. 47 Tin (Sn), atomic number 50, generates a wide variety of biological activities deriving from its chemical character. 48 A few organisms use silicon (Si), atomic number 14: diatoms, radiolaria, and siliceous sponges use biogenic silica as a structural material for their skeletons. These elements, however, seem not to play an essential role in the origin of life.

Pallavee Srivastava et al. give us a good introduction: Metals such as calcium, cobalt, chromium, copper, iron, potassium, magnesium, manganese, sodium, nickel, and zinc are essential and serve as micronutrients. These metals act as the redox centers for metalloproteins such as cytochromes, blue copper proteins, and iron-sulfur proteins, which play a vital role in electron transport. As the transition metals exist in numerous oxidation states, they efficiently act as electron carriers during redox reactions of electron transport chains to generate chemical energy [2, 3]. Metal ions also function as cofactors and confer catalytic potential and stability to proteins. Both essential and nonessential metals at high concentrations disrupt the cell membrane, alter enzymatic specificity, hinder cellular functions, and damage DNA [5]. Thus, as any disturbance in metal ion homeostasis could produce toxic effects on cell viability, the concentrations of metals within cells are stringently controlled. An increase in the ambient metal concentration leads to activation of metal resistance mechanisms to overcome metal stress. Metal homeostasis has been well studied in bacteria and eukaryotes and is attributed to differential regulation of transporters like P-type ATPases, ABC transporters, cation diffusion facilitators (CDFs), and metallochaperones in response to metals. 44

Boron:
Boron (B), atomic number 5, is a trace mineral, a micronutrient with diverse and vitally important roles in metabolism that render it necessary, as recent research suggests, for the evolution of life on Earth. Boron boosts magnesium absorption, and influences the formation and activity of key biomolecules, such as S-adenosyl methionine (SAM-e) and nicotinamide adenine dinucleotide (NAD+). 33 Boron is cycled through the atmosphere, hydrosphere, lithosphere, and biosphere by a variety of processes. It is essential for the growth and development of marine algae and cyanobacteria, and is an essential nutrient for terrestrial plants. 37 It mainly exists as uncharged boric acid. Boric acid is imported into cells by channel proteins like BOR1 transporters. Plants actively regulate the intracellular localization and abundance of transport proteins to maintain boron homeostasis. 42

Nickel:
Nickel (Ni) atomic number 28, is a component of the active sites of several archaeal and bacterial anaerobic enzymes essential for bioenergy processes such as H2 and CO oxidation and CO2 fixation. 35 It is incorporated in essential cofactors of a group of unrelated metalloenzymes catalyzing central reactions in energy and nitrogen metabolism, in detoxification processes, and in a side reaction of the methionine salvage pathway 38 Nickel-requiring microorganisms have elaborate systems to maintain the homeostasis of nickel ions in cells by regulation of nickel uptake, storage and efflux utilization (as enzyme factors). 43

Cobalt:
Cobalt (Co) has the atomic number 27. It is the active center of vitamin B12, is involved in methyl transfer reactions, and is used in isomerases, methyltransferases, dehalogenases, and other biochemistries. It helps the synthesis of DNA and, as such, is essential for life. 36 The available information is still very limited, and at the present stage, it is impossible to rationalize it into a detailed molecular mechanism.

Thomas Eitinger (2013): Nickel and cobalt ions are required by prokaryotes for incorporation into diverse enzymes involved in central metabolic reactions. The metal ions are taken up from the environment by selective high-affinity primary and secondary active transport systems. These systems discriminate between Ni2+ and Co2+ on the one side and other divalent transition metal ions on the other side. Despite similar properties of Ni2+ and Co2+, a couple of primary and secondary uptake systems exist that efficiently distinguish even between those two cations. Relevant transporters include canonical and energy-coupling factor-type ATP-binding cassette importers and members of the nickel/cobalt transporter family and relatives of the latter. Recent advances include the discovery of active transport of nickel and cobalt across the outer membrane of gram-negative bacteria by TonB-dependent systems and the tentative identification of an unusual metallophore for nickel uptake. Export systems that avoid cellular overload with nickel and cobalt ions comprise multimeric systems of the resistance-nodulation-cell division family, secondary active systems from different subgroups, and P-type ATPases. 39

Copper
Copper (Cu) atomic number 29. Copper ions can participate in a wide spectrum of interactions with proteins to drive diverse structures and biochemical reactions. Typical Cu containing enzymes are Cox, NADH dehydrogenase-2, and tyrosinases that reside in the cytoplasmic membrane or periplasm. 40 Copper, while toxic in excess, is an essential micronutrient in all kingdoms of life due to its essential role in the structure and function of many proteins. Pseudomonas aeruginosa OprC is an outer membrane, TonB-dependent transporter that is conserved in many Proteobacteria and which mediates acquisition of both reduced and oxidized ionic copper via an unprecedented CxxxM-HxM metal-binding site.  41

Iron
Iron (Fe) has the atomic number 26 and is essential in all life forms. Iron-sulfur proteins drive fundamental processes in cells, notably electron transfer and CO2 fixation. 49

Deepika M. De Silva et al. (1996): Iron serves essential functions in both prokaryotes and eukaryotes, and cells have highly specialized mechanisms for acquiring and handling this metal. Organisms use a variety of transition metals as catalytic centers in proteins, including iron, copper, manganese, and zinc. Iron is well suited to redox reactions due to its capability to act as both an electron donor and acceptor. In eukaryotic cells, iron is a cofactor for a wide variety of metalloproteins involved in energy metabolism, oxygen binding, DNA biosynthesis and repair, synthesis of biopolymers, cofactors, and vitamins, drug metabolism, antioxidant function, and many others. Because iron is so important for survival, organisms utilize several techniques to optimize uptake and storage to ensure maintenance of sufficient levels for cellular requirements. However, the redox properties of iron also make it extremely toxic if cells have excessive amounts. Free iron can catalyze the formation of reactive oxygen species such as the hydroxyl radical, which in turn can damage proteins, lipids, membranes, and DNA. Cells must maintain a delicate balance between iron deficiency and iron overload that involves coordinated control at the transcriptional, post-transcriptional, and post-translational levels to help fine tune iron utilization and iron trafficking. 50

Robert J. P. Williams (2000): The origin of life required two processes that dominated:
(1) the generation of a proton gradient and
(2) linking this gradient to ATP production in part and in part to uptake of essential chemicals and rejection of others. The generation of a proton gradient required especially appropriate amounts of iron (Fe2+), levels for electron transfer, and the ATP production depended on controlling H+, Mg2+ and phosphate in the cytoplasm. 51

Sulfur
Sulfur (S) has the atomic number 16 and is an essential element for all life forms on earth. Methionine and cysteine are incorporated into proteins, while homocysteine and taurine, though used in biological systems, are not. John T. Brosnan gives an overview:

Methionine is the initiating amino acid in the synthesis of eukaryotic proteins; N-formyl methionine ( which has a formyl group added)  serves the same function in prokaryotes. Because most of these methionine residues are subsequently removed, it is apparent that their role lies in the initiation of translation, not in protein structure. 

Libretexts informs: Amino acids cysteine and methionine contain most of the sulfur, and the element is present in all polypeptides, proteins, and enzymes that contain these amino acids. Disulfide bonds (S-S bonds) between cysteine residues in peptide chains are very important in protein assembly and structure. These covalent bonds between peptide chains confer extra toughness and rigidity.  Many important cellular enzymes use prosthetic groups ending with -SH moieties to handle reactions involving acyl-containing biochemicals: two common examples from basic metabolism are coenzyme A and alpha-lipoic acid. Two of the 13 classical vitamins, biotin and thiamine, contain sulfur, with the latter being named for its sulfur content. Inorganic sulfur forms a part of iron-sulfur clusters as well as many copper, nickel, and iron proteins. Most pervasive are the ferredoxins, which serve as electron shuttles in cells. In bacteria, the important nitrogenase enzymes contains an Fe–Mo–S cluster and is a catalyst that performs the important function of nitrogen fixation, converting atmospheric nitrogen to ammonia that can be used by microorganisms and plants to make proteins, DNA, RNA, alkaloids, and the other organic nitrogen compounds necessary for life 52

Microorganisms require sulfur for growth, and obtain it either from inorganic sulfate or from organosulfur compounds such as sulfonates, sulfate esters, or sulfur-containing amino acids. Transport of sulfate into the cell is catalyzed either by ATP binding cassette (ABC)-type transporters (SulT family) or by major facilitator superfamily-type transporters (SulP family). 53

The Sulfur cycle
The sulfur cycle is the collection of processes by which sulfur moves to and from minerals (including the waterways) and living systems. Such biogeochemical cycles are important in geology because they affect many minerals. Biochemical cycles are also important for life because sulfur is an essential element, being a constituent of many proteins and cofactors. 54

Stefan M.Sievert (2007): The ocean represents a major reservoir of sulfur on Earth, with large quantities in the form of dissolved sulfate and sedimentary minerals (e.g., gypsum and pyrite). Sulfur occurs in a variety of valence states, ranging from –2 (as in sulfide and reduced organic sulfur) to +6 (as in sulfate). Sulfate is the most stable form of sulfur on today’s oxic Earth; weathering and leaching of rocks and sediments are its main sources to the ocean. Sulfur is an essential element for life. At any given time, only a small fraction is bound in biomass. Sulfur makes up about 1% of the dry weight of organisms, where it occurs mainly as constituents of protein (primarily the S-containing amino acids, cysteine and methionine), but also in coenzymes (e.g., coenzyme A, biotin, thiamine) in the form of iron-sulfur clusters in metalloproteins, and in bridging ligands (molecules that bind to proteins, for example, in cytochrome c oxidase). Microorganisms can use inorganic sulfur, mainly sulfate, to form these organic compounds in an energy-dependent process referred to as assimilation. However, animals are dependent on preformed organic sulfur compounds to satisfy their sulfur needs. In addition to assimilation, many bacteria and archaea can use sulfur in energy-yielding reactions, called dissimilatory sulfur metabolism. These latter processes are essential for the cycling of sulfur on our planet.  The global sulfur cycle depends on the activities of metabolically and phylogenetically diverse microorganisms, most of which reside in the ocean. Although sulfur rarely becomes a limiting nutrient, its turnover is critical for ecosystem function. 57

The sulfur cycle depends on microorganisms, but all life forms depend on the sulfur cycle. Which came first?

Molybdenum
Russell Westerholm (2013): Minerals containing the elements boron and molybdenum are key in assembling atoms into life-forming molecules. Boron minerals help carbohydrate rings to form from pre-biotic chemicals, and then molybdenum takes that intermediate molecule and rearranges it to form ribose, and hence RNA. This raises problems for how life began on Earth, since the early Earth is thought to have been unsuitable for the formation of the necessary boron and molybdenum minerals. It is thought that the boron minerals needed to form RNA from pre-biotic soups were not available on early Earth in sufficient quantity, and the molybdenum minerals were not available in the correct chemical form. "It’s only when molybdenum becomes highly oxidised that it is able to influence how early life formed. This form of molybdenum couldn’t have been available on Earth at the time life first began, because three billion years ago, the surface of the Earth had very little oxygen." 55

1. René Heller: Habitability of the early Earth: liquid water under a faint young Sun facilitated by strong tidal heating due to a closer Moon  24 November 2021 
2. Jonathan O'Callaghan: A Solution to the Faint-Sun Paradox Reveals a Narrow Window for Life January 27, 2022 
3. S J Mojzsis: Oxygen-isotope evidence from ancient zircons for liquid water at the Earth's surface 4,300 Myr ago 2001 
4. James C. G. Walker: A negative feedback mechanism for the long-term stabilization of Earth's surface temperature  20 October 1981 
5. M. T. Rosing: Faint young Sun paradox remains 2011
6. François Forget: On the probability of habitable planets 2013 
7. J K JAISWAL: Calcium – how and why?  September 2001 https://pubmed.ncbi.nlm.nih.gov/11568481/
8. I. V. Shemarova: Evolution of mechanisms of Ca2+-signaling: Role of calcium ions in signal transduction in prokaryotes January 2005 
9. Józef Kazmierczak: Calcium in the Early Evolution of Living Systems: A Biohistorical Approach 2013 
10. Mikhail Butusov: The Role of Phosphorus in the Origin of Life and in Evolution 05 March 2013 
11. Jonathan D. Toner: A carbonate-rich lake solution to the phosphate problem of the origin of life December 30, 2019 
12. First Life Forms on Earth May Have Evolved in Carbonate, Phosphate-Rich Lakes Dec 31, 2019
13. Balkrishna Tiwari: Regulation of Organophosphate Metabolism in Cyanobacteria.  October 31, 2014
14. Phosphorus Cycle Apr 6, 2022
15. Juan Francisco Martín: Molecular Mechanisms of Phosphate Sensing, Transport and Signalling in Streptomyces and Related Actinobacteria 23 January 2021
16. Vanessa R. Pegos: Structural features of PhoX, one of the phosphate-binding proteins from Pho regulon of Xanthomonas citri May 22, 2017
17. Armen Y. Mulkidjanian: Co-evolution of primordial membranes and membrane proteins 2009 Sep 28
18. Angus Menuge: Agents Under Fire: Materialism and the Rationality of Science 1 July 2004
19. Ph.D. Panno: The Cell: Evolution of the First Organism  1 august 2004
20. Andrea Casiraghi: Targeting Bacterial Cell Division: A Binding Site-Centered Approach to the Most Promising Inhibitors of the Essential Protein FtsZ 2020 Feb 7
21. Prof. Dr. Ruth E. Blake: Special Issue 15 February 2020
22. Radosław W. Piast: Small Cyclic Peptide for Pyrophosphate Dependent Ligation in Prebiotic Environments 2 July 2020
23. Helen Greenwood Hansma: Potassium at the Origins of Life: Did Biology Emerge from Biotite in Micaceous Clay?  17 May 2022
24. Alexey Rozov: Importance of potassium ions for ribosome structure and function revealed by long-wavelength X-ray diffraction  2019 Jun 7
25. D V Dibrova: Ancient Systems of Sodium/Potassium Homeostasis as Predecessors of Membrane Bioenergetics 2015 May;8
26. ERIC GOUAUX: Principles of Selective Ion Transport in Channels and Pumps 2 Dec 2005
27. LibreTexts: Sodium-Potassium Pump https://bio.libretexts.org/Bookshelves/Introductory_and_General_Biology/Book%3A_Introductory_Biology_(CK-12)/02%3A_Cell_Biology/2.16%3A_Sodium-Potassium_Pump
28. Atsuo Nishino: Evolutionary History of Voltage-Gated Sodium Channels 2018
29. Alan R. Kay*: How Cells Can Control Their Size by Pumping Ions  08 May 2017
30. Wilfred D. Stein: Channels, Carriers, and Pumps, Second Edition: An Introduction to Membrane Transport 2015
31. Teresia Svensson: Chlorine cycling and the fate of Cl in terrestrial environments 05 January 2021
32. Dr. Lawrence Wilson: MINERALS FOR LIFE, A BASIC INTRODUCTION 2019
33. Lara Pizzorno: Nothing Boring About Boron 2015 Aug; 14
34. Takafumi Hirata: Earth Metallomics : new approach to decode origin and evolution of life
35. Juan C Fontecilla-Camps: Nickel and the origin and early evolution of life 16 March 2022
36. Michael J. Russell: Cobalt: A must-have element for life and livelihood January 13, 2022
37. Carl J. Carrano: Boron and Marine Life: A New Look at an Enigmatic Bioelement 08 May 2009
38. Thomas Eitinger: Encyclopedia of Metalloproteins pp 1515–1519 Nickel Transporters 2013
39. Thomas Eitinger: Nickel, Cobalt Transport in Prokaryotes  05 December 2013
40. Richard A. Festa: Copper: an Essential Metal in Biology 2013 Jul 22
41. Satya Prathyusha Bhamidimarri: Acquisition of ionic copper by a bacterial outer membrane protein April 06, 2021.
42. Akira Yoshinari: Insights into the Mechanisms Underlying Boron Homeostasis in Plants 2017 Nov 17
43. Tianfan Cheng: Exploration into the nickel ‘microcosmos’ in prokaryotes 15 March 2016
44. Pallavee Srivastava: Mechanisms of Metal Resistance and Homeostasis in Haloarchaea 21 Feb 2013
45. Susan J Crockford: Evolutionary roots of iodine and thyroid hormones in cell-cell signaling 2009 Jun 23
46. Chemical properties of fluorine - Health effects of fluorine - Environmental effects of fluorine
47. Dieter Rehder: The role of vanadium in biology 2015 Jan 22
48. Arakawa: [Biological activity of tin and immunity] 1997 Jan;3
49. Eloi Camprubi: Iron catalysis at the origin of life 2017 May 3.
50.  Deepika M. De Silva: Molecular mechanisms of iron uptake in eukaryotes 1996
51. Robert J. P. Williams: Calcium Homeostasis and Its Evolution
52. Libretext: Sulfur
53. M A Kertesz: Bacterial transporters for sulfate and organosulfur compounds Apr-May 2001
54. The sulfur cycle
55. Russell Westerholm: Earth Life Began on Mars; Red Planet May Have Had Building Blocks for RNA and DNA First  Aug 29, 2013
56. John T Brosnan: The sulfur-containing amino acids: an overview 2006 Jun;13
57. Stefan M. Sievert: The Sulfur Cycle 2007
58. Norio Kitadai: Origins of building blocks of life: A review 2018
59. S L Miller: Are polyphosphates or phosphate esters prebiotic reagents? 1995
60. Hannah Hickey: Life could have emerged from lakes with high phosphorus December 30, 2019
61. Christian de Duve: Singularities: Landmarks on the Pathways of Life 2005
62. John Rennie: New Clues to Chemical Origins of Metabolism at Dawn of Life October 12, 2020 
63. Paul Knauth: High Archean climatic temperature inferred from oxygen isotope geochemistry of cherts in the 3.5 Ga Swaziland Supergroup, South Africa MAY 01, 2003
64. David Deamer: Where Did Life Begin? Testing Ideas in Prebiotic Analogue Conditions 2021
65. Amanda K. Garcia: Reconstructed ancestral enzymes suggest long-term cooling of Earth’s photic zone since the Archean April 17, 2017
66. Martin Chaplin: Water’s Hydrogen Bond Strength 2007
67. D. V. Dibrova: Ancient Systems of Sodium/Potassium Homeostasis as Predecessors of Membrane Bioenergetics 2018



Last edited by Otangelo on Fri Jul 22, 2022 1:35 pm; edited 94 times in total



Chapter 5

Origin of the building blocks of life
For life to begin, the various organic molecules had to be recruited abiotically. No biological process existed yet to manufacture life's own building blocks. In modern cells, RNA, amino acids, lipids, and carbohydrates are all synthesized by complex metabolic networks, but no such molecular machinery was lying around on the prebiotic earth to do the job in advance. The first challenge in explaining the origin of life is understanding where these building blocks came from, and how they were selected.

The origin of the basic building blocks of life is a fundamental OOL (origin-of-life) problem. One important question is how they were selected prebiotically.

Robert M. Hazen gave his version of how it might have happened in the book Fundamentals of Geobiology (2012):
The emergence of natural selection: Molecular selection, the process by which a few key molecules earned key roles in life’s origins, proceeded on many fronts. Some molecules were inherently unstable or highly reactive and so they quickly disappeared from the scene. Other molecules easily dissolved in the oceans and so were effectively removed from contention. Still, other molecular species may have sequestered themselves by bonding strongly to surfaces of chemically unhelpful minerals or clumped together into tarry masses of little use to emerging biology. In every geochemical environment, each kind of organic molecule had its dependable sources and its inevitable sinks. For a time, perhaps for hundreds of millions of years, a kind of molecular equilibrium was maintained as the new supply of each species was balanced by its loss. Such equilibrium features nonstop competition among molecules, to be sure, but the system does not evolve.  Competition drives the emergence of natural selection. Such behavior appears to be inevitable in any self-replicating chemical system in which resources are limited and some molecules have the ability to mutate. Over time, more efficient networks of autocatalytic molecules will increase in concentration at the expense of less efficient networks. In such a competitive milieu, the emergence of increasing molecular complexity is inevitable; new chemical pathways overlay the old. So it is that life has continued to evolve over the past four billion years of Earth's history.3

This is evolutionary storytelling, a classical example of pseudo-science, where guesswork is sold as science. Each member of the quartet that forms the building blocks of life, nucleotides, amino acids, phospholipids, and carbohydrates, is specified and complex. Sophisticated, integrated cellular machinery in modern cells synthesizes them from scratch, importing materials from the surrounding environment and processing them in very complex metabolic pathways, whose core systems are conserved across life and have changed little over time. After use, the molecules are broken down and either recycled or discarded as waste products. One of the big intractable OOL problems is that scientific investigations and experiments have failed to demonstrate that random chemical interactions could assemble functional biomolecules without guidance.

In an article published in 2008, Craig Venter wrote:
To me the key thing about Darwinian evolution is selection. Biology is a hundred percent dependent on selection. No matter what we do in synthetic biology, synthetic genomes, we're doing selection. It's just not natural selection anymore. It's an intelligently designed selection, so it's a unique subset. But selection is always part of it. 1

What natural mechanisms lack is goal-directedness, and that is a big problem for naturalistic explanations of the origin of life. There was a potentially unlimited variety of molecules on the prebiotic earth. Why should competition and selection among them have occurred at all, so as to separate the molecules that are used in life from those that are useless? Selection is an inadequate mechanism to explain all of the living order: it cannot even account for the maintenance of order in the short term, let alone the emergence, overall organization, and long-term persistence of life from non-living precursors. It is an error of false conceptual reduction to suppose that competition and selection will thereby be the source of explanation for all relevant forms of order. Selecting the right materials is absolutely essential, but a prebiotic soup of impure chemical mixtures has never been demonstrated to purify, select, and accumulate spontaneously, at a specific building site, those building blocks that are required for life. Chemical and physical reactions have no "urge" to join, group, and start interacting in a purposeful, goal-oriented process that produces specified complex molecules, performs specific integrated functions, and generates self-replicating chemical cell factories. Even more, huge quantities of the same molecules, millions if not billions of them, would have had to be produced in a repetitive, orchestrated manner: nucleotides with the same selected ribose backbone sugar and the same nucleobases, purines and pyrimidines.

Graham Cairns-Smith: Genetic Takeover, page 70, 1988:
Suppose that by chance some particular coacervate droplet in a primordial ocean happened to have a set of catalysts, etc. that could convert carbon dioxide into D-glucose. Would this have been a major step forward towards life? Probably not. Sooner or later the droplet would have sunk to the bottom of the ocean and never have been heard of again. It would not have mattered how ingenious or life-like some early system was; if it lacked the ability to pass on to offspring the secret of its success then it might as well never have existed. So I do not see life as emerging as a matter of course from the general evolution of the cosmos, via chemical evolution, in one grand gradual process of complexification. 2

A frequent cop-out is to say: even if we take your unknowns as true unknowns, or even unknowable, the answer is always going to be "We don't know yet." Scientists hate confessing "we don't know". The scientific mind is all about gaining knowledge and diminishing ignorance. Admitting ignorance when there is good reason for it is fine. But professing ignorance despite the evident facts at hand, when one has the ability to come to informed, well-founded conclusions based on sound reasoning, known facts, background information, repeated experiments, and evidence, is not only willful ignorance but plain foolishness. If there were hundreds of possible explanations, claiming not to know which makes the most sense could be justified. In our case, IMHO, it is just this: either an intelligent designer was involved in the selective process, or not. That's it. There is a wealth of evidence in the natural world that can lead us to informed, well-justified conclusions. We know, for example, that nature's course is to act according to the laws of thermodynamics, and molecules disintegrate. Applying eliminative induction permits us to infer logically: since random natural events have been shown to be incapable, the alternative, a guiding and selecting agency with foresight, goals, and intentions, is the best case-adequate explanation.

Origin of the organic compounds on the prebiotic earth:
The question of the origin of organic compounds on the early earth is fundamental, and it remains unsolved and debated today. Miller and Urey tackled it in 1959: Oparin further proposed that the atmosphere was reducing in character and that organic compounds might be synthesized under these conditions. This hypothesis implied that the first organisms were heterotrophic, that is, that they obtained their basic constituents from the environment instead of synthesizing them from carbon dioxide and water. Various sources of energy acting on carbon dioxide and water failed to give reduced carbon compounds except when contaminating reducing agents were present. The one exception to this was the synthesis of formic acid and formaldehyde in very small yields by the use of 40-million-electron-volt helium ions from a 60-inch cyclotron. While the simplest organic compounds were indeed synthesized, the yields were so small that this experiment can be best interpreted to mean that it would not have been possible to synthesize compounds nonbiologically as long as oxidizing conditions were present on the earth. 55

In 2022, a science news report claimed: "Likely energy source behind first life on Earth found ‘hiding in plain sight’". 56 That means that in the nearly 70 years since Miller's investigation, science did not solve the issue. But have Jessica Wimmer and Prof. William Martin? They claim: The new findings uncover a natural tendency of metabolism to unfold under the environmental conditions of H2 producing submarine hydrothermal vents. No light or other source of radiation was required. Just H2 and CO2 in the dark. Our calculations indicate whether a reaction can go forward. Whether or not reactions will go forward depends on the presence of catalysts, which are abundant at H2 producing hydrothermal vents. The role of inorganic catalysts in early metabolic evolution is currently being studied by many groups as they help to bridge the gap between chemical reactions on the early Earth and enzymatic reactions as they occur in modern cells. One important piece of evidence for the nature of energy at origins has been hiding in plain sight: the central hub of reactions that make up the life process itself. The driving force behind metabolic energy release ultimately traces to a steady geochemical interface of H2 and CO2, a chemical mixture that is brimming with energy like a fresh battery. That energy is released in the 400 reactions comprising the core of a universally conserved set of biosynthetic pathways, a thermodynamic imprint of the environment at which life arose, uncovering surprising clues about the source of energy at origins.

The authors presuppose that these 400 reactions were extant. How do they know this? How do they know that, in the absence of enzymes, non-enzymatic reactions stood in for them? That is putting the cart before the horse. In fact, Leslie Orgel had already rejected such speculative hypotheses in 2008. 58

Going from prebiotic to biotic synthesis: a major unsolved open question
How did chemistry transition to biochemistry and biology? Supposed chemical evolution to biological evolution? An inanimate mixture of chemical compounds to life? A bunch of building blocks to a fully assembled, operational, self-replicating chemical factory? Abiogenesis research has focused in the last decades on attempting to elucidate the prebiotic origin of the basic building blocks of life. But that is a far cry from explaining the transition from the elements on earth to unimaginably complex self-replicating cells. There is a huge chasm in between that is often not recognized. On the one side, there was supposedly prebiotic synthesis and recruitment of the building blocks of life, energy in some form, self-replicating RNAs, and subsequently the aleatory emergence of semantophoretic, information-bearing genes. On the other side, there are fully operational, self-replicating living cells, with fully developed, integrated metabolic pathways that synthesize the cell's building blocks, with nano-turbines that generate ATP, and with genes that store the information on how to make cells and pass it from generation to generation. Between the two lies a huge, uncrossable gulf.
 
Lynn Margulis: To go from a bacterium to people is less of a step than to go from a mixture of amino acids to a bacterium.

Panspermia is one of many proposals for the origin of the building blocks of life: they were supposedly "imported" from meteorites, comets, or interplanetary dust particles. The Miller/Urey experiment attempted, with spark-discharge experiments, to make just one class of building blocks, amino acids, with poor results (nonetheless, it is even today heralded as proof that abiogenesis is possible). These building blocks had to be made in sufficient concentrations on earth, not be annihilated by UV radiation, overcome the concomitant synthesis of undesired or irrelevant by-products, and avoid breakdown, since protein chains are hydrolyzed in water. Somehow these molecules had to begin to react chemically and self-organize into an interdependent complex form, and, once the first self-replicating living cell existed, find nutrition in the form of glucose or starch (another huge problem, since these carbohydrates are synthesized by photosynthesis, which depends on cyanobacteria or algae that were not yet around).

Let's suppose the basic building blocks were there, fully ready for action, and things could have had a go naturally. There still had to be a transition from recruiting single complex monomers, amino acids only in the levorotatory (left-handed) form, and the sugars of RNA, DNA, and carbohydrates only in the dextrorotatory (right-handed) form, to forming, all at the same time, bioactive chains. In the case of amino acids: first dipeptides, then polypeptides. In the case of RNA and DNA: first constructing single nucleotides, that is, joining the sugar and the base, then adding phosphate; making all four information-bearing bases, purines and pyrimidines, in sufficient quantity; fine-tuning them to match in size and form so they can do informational Watson–Crick base-pairing; and joining them into polynucleotides. Sugars would have to form disaccharides, and then polysaccharides (carbohydrates). The first process would have had no enzymes at hand, and the delivery of ATP energy for chemical processes would also have had to be provided by recruiting ATP from the environment (in cells, ATP is always made by ATP synthase proteins). The challenge of organizing these molecules into extremely complex, integrated metabolic structures would be enormous.

While the formation of amino acids from inorganic chemicals is not a problem from a thermodynamic standpoint, joining them to make polypeptides, catenating them into the polymer strands that form proteins, is. Forming peptide bonds is an endergonic process: energy is consumed. In extant cells, that energy is supplied during the manufacturing process in the ribosome. On the prebiotic earth, such energy was not freely at hand to drive the reaction that joins amino acids, and the ultrasophisticated ribosome machinery was not there. The same situation holds for the polymerization of RNA and DNA. Let's even suppose that a prebiotic energy source was at its disposal. That is far from solving the problem. In cells, energy is employed precisely in the metabolic reactions and proteins where it is needed. Imagine an explosion: lots of energy, but also a lot of destruction. The same applies to cells. In most discussions of organic synthesis on the primitive Earth, electric discharges, ultraviolet radiation, and thermal sources have been singled out as possible energy sources. 11 Energy has to be generated, in the form of ATP, and be funneled precisely to the place where it is needed. The proposed sources are simply too unspecific.
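The thermodynamic point can be made quantitative with a back-of-the-envelope calculation. The sketch below is not taken from the sources above; it assumes a commonly cited textbook figure of roughly +3.5 kcal/mol of free energy per peptide bond in water, and computes the resulting equilibrium constant K = e^(−ΔG/RT):

```python
import math

# Illustrative sketch, not from the quoted sources: the free-energy cost of
# one peptide bond in water (~ +3.5 kcal/mol) is an assumed textbook figure.
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # temperature, K (25 degrees C)
dG = 3.5       # assumed free-energy cost per peptide bond in water, kcal/mol

# K = exp(-dG/RT): equilibrium ratio of dipeptide to free amino acids.
K = math.exp(-dG / (R * T))
print(f"Equilibrium constant per bond: K = {K:.2e}")  # on the order of 1e-3

# For a chain of n bonds the unfavorable factor compounds multiplicatively.
n = 100
print(f"Equilibrium factor for a {n}-bond chain: {K**n:.1e}")
```

With these assumed numbers, each bond is disfavored by a factor of a few hundred at equilibrium, and a 100-bond chain by that factor raised to the 100th power, which is why uncompensated polymerization in water is considered thermodynamically prohibitive.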

There would also have to be a transition to a point where cells started creating all life-essential molecules with their own metabolic machinery. That entails that, for some miraculous reason, protocells would simply start "ignoring" the fully formed building blocks lying around on earth or floating in ponds and oceans (the ones previously recruited for the first cells). Rather than evolving membrane import channels for readily available building blocks from the environment, they would evolve the machinery to select basic elements and compounds such as sulfur, iron, phosphorus, calcium, and ammonia, import them, transform them into usable form with extraordinarily complex molecular machines, and, as a next step, turn these raw materials into the building blocks essential for life. Why would cells evolve such elaborate, complex processes if ready-made materials were still lying around nearby?

Fred Hoyle's junkyard analogy is a classic. He writes in The Intelligent Universe, 1983, pg. 18:
"The popular idea that life could have arisen spontaneously on Earth dates back to experiments that caught the public imagination earlier this century. If you stir up simple nonorganic molecules like water, ammonia, methane, carbon dioxide and hydrogen cyanide with almost any form of intense energy, ultraviolet light for instance, some of the molecules reassemble themselves into amino acids, a result demonstrated about thirty years ago by Stanley Miller and Harold Urey. The amino acids, the individual building blocks of proteins can therefore be produced by natural means. But this is far from proving that life could have evolved in this way. No one has shown that the correct arrangements of amino acids, like the orderings in enzymes, can be produced by this method. No evidence for this huge jump in complexity has ever been found, nor in my opinion will it be. Nevertheless, many scientists have made this leap-from the formation of individual amino acids to the random formation of whole chains of amino acids like enzymes-in spite of the obviously huge odds against such an event having ever taken place on the Earth, and this quite unjustified conclusion has stuck. In a popular lecture I once unflatteringly described the thinking of these scientists as a "junkyard mentality". As this reference became widely and not quite accurately quoted I will repeat it here. A junkyard contains all the bits and pieces of a Boeing 747, dismembered and in disarray. A whirlwind happens to blow through the yard. What is the chance that after its passage a fully assembled 747, ready to fly, will be found standing there? So small as to be negligible, even if a tornado were to blow through enough junkyards to fill the whole Universe." 4

Amino acids
Amino acids are monomers (a monomer is a molecule that can react with other monomer molecules to form a larger polymer chain or three-dimensional network 49). They are among the principal components that make up proteins. 22 amino acids encode proteins: the 20 of the standard genetic code, plus 2 more, selenocysteine (an essential component of selenoproteins, which are involved in a variety of cellular and metabolic processes 43) and pyrrolysine, which are restricted to a very small number of organisms and proteins. 42

Scitable gives us a short description of AAs:
Chemically, an amino acid is a molecule that has a carboxylic acid group 38 (COOH) and an amino group 39 (NH2) that are each attached/bonded to a central carbon atom, also called the α carbon. Each of the 20 amino acids has a specific side chain, known as an R group, that is also attached to the α carbon, 40 as is a hydrogen atom. The R groups, 41 the variable groups or side-chains, have a variety of shapes, sizes, charges, and reactivities. This allows amino acids to be grouped according to the chemical properties of their side chains. For example, some amino acids have polar side chains that are soluble in water; examples include serine, threonine, and asparagine. Other amino acids avoid water and are called hydrophobic, such as isoleucine, phenylalanine, and valine. The amino acid cysteine has a chemically reactive side chain that can form bonds with another cysteine. 23

The Cell factory maker, Paley's watchmaker argument 2.0 Aminoa10

A single amino acid, the subunit monomer of polypeptides and proteins.
NH2 is the amine group and the blue -COOH is the carboxylic acid group. The green R is a side-chain that is different for each of the 20 or so amino acids found in proteins. Attribution: Marc T. Facciotti (own work)

Alpha-amino acids are monomer units that are polymerized into polymer strands via head-to-tail linkage, strands that can fold (depending on the sequence) into complex functional 3D shapes. Once folded, often jointly with other polymer strands, they form secondary, tertiary, and quaternary structures and catalytic pockets, where in many cases complex metal clusters perform the catalytic reaction by which a compound is manufactured.
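The side-chain grouping described above can be sketched as a small lookup table. This is only an illustrative subset drawn from the examples in the quoted text, not all 20 amino acids; the function name `classify` is ours:

```python
# Illustrative subset of the proteinogenic amino acids, grouped by the
# side-chain (R group) properties named in the quoted text above.
SIDE_CHAIN_GROUPS = {
    "polar (water-soluble)": ["serine", "threonine", "asparagine"],
    "hydrophobic (water-avoiding)": ["isoleucine", "phenylalanine", "valine"],
    "reactive (forms disulfide bonds)": ["cysteine"],
}

def classify(amino_acid: str) -> str:
    """Return the side-chain group of an amino acid, if it is in this subset."""
    for group, members in SIDE_CHAIN_GROUPS.items():
        if amino_acid in members:
            return group
    return "not classified in this subset"

print(classify("serine"))   # polar (water-soluble)
print(classify("valine"))   # hydrophobic (water-avoiding)
```

The point of the table is that the chemical behavior of a protein is determined by which of these side-chain classes appear at which positions in the chain.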

Origin of the proteinogenic (protein-creating) amino acids used in life
There are many hypotheses about how the amino acids used in life could have originated. They are divided into terrestrial and extraterrestrial origins. Terrestrial proposals include spark discharge, irradiation (UV, X-ray, etc.), shock heating, and hydrothermal vents. Extraterrestrial amino acids have been observed in various types of carbonaceous chondrites, comets, and micrometeorites. Norio Kitadai and colleagues gave a good overview in their scientific paper Origins of building blocks of life: A review, from 2017. 54 As we will see, none of these proposals withstands scrutiny. Wherever one looks, there are problems.

Extraterrestrial origins
Norio Kitadai et al. 2017: 

To date, over 80 kinds of amino acids have been identified in carbonaceous chondrites, including 12 protein-amino acids of Ala, Asp, Glu, Gly, Ile, Leu, Phe, Pro, Ser, Thr, Tyr, and Val. 

Among the problems: these amino acids always come in mixtures with non-proteinogenic amino acids, and they are chirally mixed (both L- and D-handed forms).

Panspermia
One hypothesis is that amino acids, among other bio-friendly molecules, were made in space and delivered to our planet by meteorites, comets, interplanetary dust particles, etc. This is called panspermia ("seeds everywhere in the universe"). There are good reasons to reject the idea. Nir Goldman and colleagues published an article in Nature magazine on the subject in 2010. They wrote:

Delivery of prebiotic compounds to early Earth from an impacting comet is thought to be an unlikely mechanism for the origins of life because of unfavorable chemical conditions on the planet and the high heat from impact.  5

Hugh Ross pointed out what seems to be one of the main problems:
What happens to comets and their supply of these molecules when they pass through Earth’s atmosphere and when they strike the planetary surface presents a big problem. Calculations and measurements show that both events generate so much heat (atmosphere passage generates 500°+ Centigrade while the collision generates 1,000°+ Centigrade) that they break down the molecules into components useless for forming the building blocks of life molecules. In 1974, comet 81P Wild passed within 500,000 miles of Jupiter, which caused the comet to be perturbed into orbiting within the inner solar system. This new orbit enabled NASA to send the Stardust Spacecraft to the comet in 2004 to recover samples, which were returned to Earth and analyzed for organic molecules. The only amino acid indisputably detected in the sample was glycine at an abundance level of just 20 trillionths of a mol per cubic centimeter. 7

Amino acids found in meteorites are racemic; that is, they come in both right-handed and left-handed forms. Life uses 100% left-handed amino acids. 10
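The size of this homochirality problem can be quantified with a simple probability sketch. As an illustration (not a claim from the cited sources), assume each residue of a chain is drawn independently from a racemic, 50/50 pool; the chance of the chain coming out entirely left-handed is then (1/2)^n, and the chain length of 100 is an arbitrary illustrative choice:

```python
# Back-of-the-envelope sketch: probability that a chain assembled by random,
# independent draws from a racemic (50% L, 50% D) pool is entirely left-handed.
# The chain length of 100 residues is an illustrative assumption.
chain_length = 100
p_all_left = 0.5 ** chain_length
print(f"P(all-L chain of {chain_length} residues) = {p_all_left:.1e}")  # ~7.9e-31
```

Even for a modest 100-residue chain, the odds under this naive model are about 1 in 10^30, which is why racemic starting material is regarded as a serious obstacle.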

Recently a new discovery made the rounds. Yasuhiro Oba and colleagues reported in Nature Communications:
the detection of nucleobases in three carbonaceous meteorites using state-of-the-art analytical techniques optimized for small-scale quantification of nucleobases down to the range of parts per trillion (ppt). 8

Liz Kruesi reported in an article in science news about the finding:
Space rocks that fell to Earth within the last century contain the five bases that store information in DNA and RNA. These “nucleobases” — adenine, guanine, cytosine, thymine and uracil — combine with sugars and phosphates to make up the genetic code of all life on Earth. The discovery adds to evidence that suggests life’s precursors originally came from space, the researchers say. Scientists have detected bits of adenine, guanine and other organic compounds in meteorites since the 1960s. Researchers have also seen hints of uracil, but cytosine and thymine remained elusive, until now. “We’ve completed the set of all the bases found in DNA and RNA and life on Earth, and they’re present in meteorites,” says astrochemist Daniel Glavin of NASA’s Goddard Space Flight Center in Greenbelt, Md. 9

But here comes the cold shower, from the same article:

In the new analysis, the researchers measured more than a dozen other life-related compounds, including isomers of the nucleobases.

That means the meteoritic nucleobases are mixed with other isomeric molecules that are not used in living cells. That raises the question of how those used in life could have been joined, concentrated, selected, and sorted out from those not used in life.

And Pierazzo and colleagues wrote in the paper Amino acid survival in large cometary impacts:
It is clear that there are substantial uncertainties in estimates for both exogenous and endogenous sources of organics, as well as the dominant sinks. All of the likely mechanisms described here lead to extremely low global concentrations of amino acids, emphasizing the need for substantial concentration mechanisms, or for altogether different approaches to the problem of prebiotic chemical synthesis. 6

This is what Stanley Miller had to say in an interview conducted in October 1996:
The amount of useful compounds you are going to get from meteorites is very small. The dust and comets may provide a little more. Comets contain a lot of hydrogen cyanide, a compound central to prebiotic synthesis of amino acids as well as purines. Some HCN came into the atmosphere from comets. Whether it survived impact, and how much, are open to discussion. I'm skeptical that you are going to get more than a few percent of organic compounds from comets and dust. It ultimately doesn't make much difference where it comes from. I happen to think prebiotic synthesis happened on the Earth, but I admit I could be wrong. There is another part of the story. In 1969 a carbonaceous meteorite fell in Murchison Australia. It turned out the meteorite had high concentrations of amino acids, about 100 ppm, and they were the same kind of amino acids you get in prebiotic experiments like mine. This discovery made it plausible that similar processes could have happened on primitive Earth, on an asteroid, or for that matter, anywhere else the proper conditions exist. 20

What about the synthesis of amino acids in hydrothermal vents?
Hugh Ross and Fazale Rana explain:
Laboratory experiments simulating a hot, chemically harsh environment modeled after deep-sea hydrothermal vents indicate that amino acids, peptides, and other biomolecules can form under such conditions. However, a team led by Stanley Miller has found that at 660 °F (350 °C), a temperature that the vents can and do reach, the amino acid half-life in a water environment is only a few minutes. (In other words, half the amino acids break down in just a few minutes.) At 480 °F (250 °C) the half-life of sugars measures in seconds. For a nucleobase to function as a building block for DNA or RNA it must be joined to a sugar. For polypeptides (chains of amino acids linked together by peptide bonds but with much lower molecular weight than proteins) the half-life is anywhere from a few minutes to a few hours. 21
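To put those half-lives in perspective, a simple first-order decay model shows how fast a pool of molecules vanishes. This is only an illustrative sketch under the assumption of clean exponential decay; the function name and the five-minute figure plugged in are ours, chosen to match the "few minutes" quoted above:

```python
# Fraction of a compound surviving after time t, given half-life t_half
# (both in the same units), assuming simple first-order decay.
def surviving_fraction(t, t_half):
    return 0.5 ** (t / t_half)

# With an assumed 5-minute half-life at vent temperatures, after one hour
# only about 1 part in 4000 of the original amino acid pool remains:
print(surviving_fraction(60, 5))   # 0.5**12, roughly 0.000244
```

Whatever the exact half-life, the point stands: on these timescales destruction vastly outpaces any plausible rate of synthesis.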

Punam Dalai and colleagues report:
The thermal and chemical gradients at hydrothermal vents on the Earth’s surface may have played an important role in thermodynamically favorable reactions for organic synthesis. These reactions may have been catalyzed by transition metal–sulfide minerals such as pyrite. However, destructive free radicals are also generated photocatalytically at the surface of these sulfides and at the surfaces of the ultramafic minerals that constitute peridotite and komatiite. 58

The Miller-Urey experiment
Miller and Urey performed the legendary in vitro spark-discharge experiments in 1953, attempting to produce amino acids under presumed primitive-earth conditions. 13 With that experiment, conducted only a few weeks after Watson and Crick announced the DNA structure, the modern era in the study of the origin of life began. The hope was that it would pave the road to naturalistic answers to life's origins. It has been widely heralded as evidence for the origin of the set of amino acids used in life, and on its basis many claim even today that abiogenesis became a plausible explanation for the origin of life. Fifty years later, in 2003, Jeffrey L. Bada and Antonio Lazcano commemorated the Miller-Urey experiment in an article published in Science magazine. They wrote:

But is the “prebiotic soup” theory a reasonable explanation for the emergence of life? Contemporary geoscientists tend to doubt that the primitive atmosphere had the highly reducing composition used by Miller in 1953. 14

Miller himself pointed out that:
There is no agreement on the composition of the primitive atmosphere; opinions vary from strongly reducing (CH4 + N2, NH3 + H2O, or CO2 + H2 + N2) to neutral (CO2 + N2 + H2O) 53

We don't know what the conditions were, so every laboratory experiment is handicapped from the start. Nobody knows the influencing factors (geology, temperature, electromagnetic radiation, and so on) that contributed to the physico-chemical complexity of the early earth and shaped its chemistry. There are several other fatal flaws. In the laboratory, there is always a team of researchers tweaking and fine-tuning the experiment toward the desired outcome. On the prebiotic earth, there was no such thing.

Consider what the researchers had to do to set up the experiment: Jeffrey L. Bada and colleagues explained:
Numerous steps in the protocol described here are critical for conducting Miller-Urey type experiments safely and correctly. First, all glassware and sample handling tools that will come in contact with the reaction flask or sample need to be sterilized. Sterilization is achieved by thoroughly rinsing the items in question with ultrapure water and then wrapping them in aluminum foil, prior to pyrolyzing at 500 °C in air for at least 3 hr. Once the equipment has been pyrolyzed and while preparing samples for analysis, care must be taken to avoid organic contamination. The risk of contamination can be minimized by wearing nitrile gloves, a laboratory coat, and protective eyewear. Be sure to work with samples away from one's body as common sources of contamination include fingerprints, skin, hair, and exhaled breath. Avoid contact with wet gloves and do not use any latex or Nylon materials. 19
This is just the first step. Bada continues:  There are many additional notes worth keeping in mind when carrying out various steps in the protocol outlined here.

You know where this goes. This has nothing to do with what happened on the early earth. There were no repeated test runs, no trial and error to arrive at optimal conditions: a bit more atmospheric pressure here, a slightly different gas composition there, a change in the electromagnetic radiation or the temperature variations. There are basically innumerable possible atmospheric conditions, and having the just-right atmosphere depends on many different factors. One can point to the number of earth-like candidates in the universe and claim that one, by chance, could have had the right conditions. While nobody knows the odds, some scientific papers have calculated numbers that are far from what could be considered a reasonably probable chance. 12

In 2008, after Miller's death, Adam P. Johnson and colleagues reexamined the boxes containing the dried residues from the apparatus of Miller's second (the volcanic) experiment of 1953. 15 Miller had identified five different amino acids, plus several unknowns, in the extracts from this apparatus. Johnson et al., however, identified 22 amino acids and five amines, several of which had not been identified previously in Miller's experiments. In 2011, the same researchers extended their analysis of Miller's old flasks to include those from a spark-discharge experiment conducted in 1958. They reported:

The samples contained a large assortment of amino acids and amines, including numerous sulfur amino acids. This mixture might have been prominent on a regional scale (for example, near volcanoes), where these gases may have played a vital role in the localized synthesis of some of the first terrestrial organic compounds. 16 

The U.S. Geological Survey informs that:
Ninety-nine percent of the gas molecules emitted during a volcanic eruption are water vapor (H2O), carbon dioxide (CO2), and sulfur dioxide (SO2). The remaining one percent is comprised of small amounts of hydrogen sulfide, carbon monoxide, hydrogen chloride, hydrogen fluoride, and other minor gas species. 17

This composition is not conducive to producing amino acids on the early earth, unless the emissions were different some 3.9 Gya ago. Were they?

The gases released into the atmosphere by high-temperature volcanic eruptions have been dominated by H2O, CO2, and SO2 since at least 3600 Ma, and probably since at least ∼3900 Ma. Mantle-derived volcanic gases that entered the atmosphere from high-temperature volcanism would have provided low, but not zero, yields of prebiotic molecules during that interval. 18

Carol Turse reported in a 2013 science paper:
Variations of Miller’s experiments, some completed by Miller himself, have been conducted that include aspects of hydrothermal vents, neutral atmospheres, reducing H2S atmospheres, as well as volcanic conditions. In each of these variations, amino acids or organic precursors of amino acids are produced at some level. 51

But the following proteinogenic amino acids were never produced in any of the experiments: cysteine, histidine, lysine, asparagine, pyrrolysine, proline, glutamine, arginine, threonine, selenocysteine, tryptophan, and tyrosine. 52

Homochirality
The ribose backbone of DNA and RNA has to be in the right-handed chiral form (chirality, from Greek, means handedness); amino acids must be left-handed; phospholipid backbones must likewise be homochiral. That is essential for life, and its origin in biological systems demands an explanation. In nature, carbon compounds come in mixed chiral forms. In the cell, complex protein machinery fabricates the materials in the correct enantiomeric form, as life requires. Change Laura Tan and Rob Stadler put it succinctly in The Stairway to Life:

In all living systems, homochirality is produced and maintained by enzymes, which are themselves composed of homochiral amino acids that were specified through homochiral DNA and produced via homochiral messenger RNA, homochiral ribosomal RNA, and homochiral transfer RNA. No one has ever found a plausible abiotic explanation for how life could have become exclusively homochiral.

In order for proteins to fold into functional 3D structures, their building blocks, the amino acids, must be homochiral. As Wikibooks explains:
A tetrahedral carbon atom with four distinct groups is called asymmetric, or chiral. The ability of a molecule to rotate plane-polarized light to the left, L (levorotatory), or right, D (dextrorotatory), gives it its optical and stereochemical fingerprint. 50

Biologically synthesized amino acids, for instance, occur exclusively in their left-handed (levorotatory, L) form, while the sugar of the nucleic acid backbone, ribose, is exclusively right-handed (dextrorotatory, D), and the phospholipid glycerol backbones of archaea and of bacteria are each exclusively homochiral (bacteria and eukarya have membranes comprised of phospholipids with backbones of left-handed configurational stereochemistry, whereas archaea contain backbones of right-handed stereochemistry). An amino acid is not the same as its mirror image; analogously, a left shoe will not fit properly on one's right foot no matter how someone rotates it.

Daniel P. Glavin and colleagues elucidated in a scientific paper from 2020:
The observed homochirality in all life on Earth, that is, the predominance of “left-handed” or l-amino acids and “right-handed” or d-sugars, is a unique property of life that is crucial for molecular recognition, enzymatic function, information storage, and structure and is thought to be a prerequisite for the origin or early evolution of life. 32

Racemic, mixed proteins are non-functional. If a chemist cooks up a batch of amino acids or their precursor molecules in a laboratory, the result will always be a racemic mixture of left and right. Proteins made from mixtures of left- and right-handed amino acids do not form well-defined tertiary and quaternary structures. Ribose must have been in its right-handed form for the first RNA molecules to fold into functional structures, which cannot occur with random mixtures of right- and left-handed nucleotides. Chemical reactions starting from racemic mixtures always yield racemic products. The chemistry explaining how exclusively left-handed and exclusively right-handed molecules could have formed is one of the biggest open questions, a profound mystery persisting for over 150 years, since Louis Pasteur discovered the right- and left-handed chiral forms (the levorotatory and the dextrorotatory form). Hypotheses on the origin of homochirality in the living world can be classified into two major types: biotic and abiotic. The abiotic appearance of chiral materials leads to deeper questions: how can such an asymmetry originate in a universe whose physical laws are essentially mirror-symmetric? Scientists do not know how the left-handed amino acids, the right-handed sugars, and the exclusively one-handed backbones of phospholipids could have been instantiated prebiotically by accident. Since there was no prebiotic natural selection, the only alternative to conscious choice is an unguided, random, non-designed coincidence. And homochirality had to arise for amino acids, sugars, and phospholipids simultaneously. Today, life uses its complex molecular machinery to instantiate just the right- or left-handed molecules.
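The scale of the problem can be illustrated with a back-of-the-envelope calculation. This is only a sketch under the simplifying assumption that each monomer in a growing chain is an independent 50/50 draw from a racemic pool; the function name is ours:

```python
# Probability that a random chain of n residues, drawn from a racemic
# (50/50) mixture, is homochiral: either all left-handed OR all right-handed.
def p_homochiral(n):
    return 2 * 0.5 ** n

print(p_homochiral(10))    # about 0.002 -- roughly 1 chain in 500
print(p_homochiral(100))   # about 1.6e-30 -- effectively never
```

Under these assumptions, even a modest 100-residue polymer assembled at random from a racemic mixture is homochiral with a probability of roughly 10^-30, which is why an amplification mechanism is considered indispensable.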

Benjamin List and David MacMillan were awarded the Nobel Prize in Chemistry in 2021 for asymmetric organocatalysis. A science news article quoted List, along with the chemist Claudia Felser:

“Why in the world is biology single-handed? Why do we have this preference in nature? We don’t know,” List said. “This handedness is transferred in the catalytic reaction onto the substrates so that you get more of these handed molecules. It’s a great gift, I would say, that nature provides these molecules for us.”

“Chirality, for me, is the most interesting question in physics and chemistry and maybe even in biology,” Felser says, adding that today’s announcement could be “inspiring for the younger generations to look more for symmetry violations in nature”. 34

Homochirality: its origin is a longstanding unresolved scientific issue
Donna G. Blackmond explains in her paper published in 2010:
There is one general feature of the molecules constituting all known living systems on Earth, and in particular of biopolymers, which needs to be explained within the problem of origins: their homochirality. Most molecules of life are homochiral, that is, they possess the same handedness or chirality. Homochirality of biological molecules is a signature of life. The chirality or sense of handedness of the amino acid molecules is an important problem.
The figure below shows two versions, or enantiomers, of the amino acid alanine. Each contains exactly the same number of elements with the same types of chemical bonds, and yet they are the mirror image of each other. A molecule that is not superimposable on its mirror image is chiral. When a molecule with a definite sense of handedness reacts chemically with one that is symmetric (or otherwise does not have a particular handedness), the left- and right-handed amino acids have similar properties. Likewise, the chemical properties of an interaction between two left-handed molecules or two right-handed molecules are the same. However, neither of these interactions is the same as when left- and right-handed molecules are interacting with each other. Hence, the handedness of biological molecules such as amino acids or nucleotides plays a role in their functionality.

“Symmetry breaking” is the term used to describe the occurrence of an imbalance between left and right enantiomeric molecules. This imbalance is traditionally measured in terms of the enantiomeric excess, or ee = ([R] − [L]) / ([R] + [L]), where [R] and [L] are the concentrations of the right- and left-handed molecules, respectively. Proposals for how an imbalance might have come about may be classified as either terrestrial or extraterrestrial, and then subdivided into either random or deterministic (sometimes called “de facto” and “de lege” respectively). A trivial example is that any collection of an odd number of enantiomeric molecules has, by definition, broken symmetry. Fluctuations in the physical and chemical environment could result in transient fluctuations in the relative numbers of left- and right-handed molecules. However, any small imbalance created in this way should average out as the racemic state unless some process intervenes to sustain and amplify it. Thus, whether or not the imbalance in enantiomers came about by chance, arising on earth or elsewhere, an amplification mechanism remains the key to increasing enantiomeric excess and ultimately to approaching the homochiral state. 22

The Cell factory maker, Paley's watchmaker argument 2.0 6FjIcEc
The two mirror-image enantiomers of the amino acid alanine

A. G. Cairns-Smith exposes the problem in his book Seven Clues to the Origin of Life, on page 40:
A particularly clear case is in the universal choice of only 'left-handed' amino acids for making proteins, when, as far as one can see, 'right-handed' ones would have been just as good. Let me clarify this.
Molecules that are at all complex are usually not superposable on their mirror images. There is nothing particularly strange about this: it is true of most objects. Your right hand, for example, is a left hand in the mirror. It is only rather symmetrical objects that do not have 'right-handed' and 'left-handed' versions. When two or more objects have to be fitted together in some way their 'handedness' begins to matter. If it is a left hand it must go with a left glove. If a nut has a right-hand screw, then so must its bolt. In the same sort of way the socket on an enzyme will generally be fussy about the 'handedness' of a molecule that is to fit it. If the socket is 'left-handed' then only the 'left-handed' molecule will do. So there has to be this kind of discrimination in biochemistry, as in human engineering, when 'right-handed' and 'left-handed' objects are being dealt with. And it is perhaps not surprising that the amino acids for proteins should have a uniform 'handedness'. There could be a good reason for that, as there is good reason to stick to only one 'handedness' for nuts and bolts. But whether, in such cases, to choose left or right, that is pure convention. It could be decided by the toss of a coin.
24

A. G. Cairns-Smith, Genetic Takeover, page 53:
It is commonly believed that proteins of a sort or nucleic acids of a sort (or both) would have been necessary for the making of those first systems that could evolve under natural selection and so take off from the launching platform provided by prevital chemical processes. We have already come to a major difficulty here: Much of the point of protein and the whole point of nucleic acid would seem to be lost unless these molecules have appropriate secondary/tertiary structures, and that is only possible with chirally defined units. As we saw, the ‘abiotic‘ way of circumventing this problem (by prevital resolution of enantiomers) seems hopelessly inadequate, and ‘biotic’ mechanisms depend on efficient machinery already in action. 25

Sean Henahan interviewed Dr. Stanley L. Miller in 1998. To the question "What about the even balance of L and D (left and right oriented) amino acids seen in your experiment, unlike the preponderance of L seen in nature? How have you dealt with that question?", Miller answered:

All of these pre-biotic experiments yield a racemic mixture, that is, equal amounts of D and L forms of the compounds. Indeed, if your results are not racemic, you immediately suspect contamination. The question is how did one form get selected. In my opinion, the selection comes close to or slightly after the origin of life. There is no way in my opinion that you are going to sort out the D and L amino acids in separate pools. My opinion or working hypothesis is that the first replicated molecule had effectively no asymmetric carbon. 30

Some claim that the problem of the origin of chiral molecules has been solved (May 2022), but as far as the scientific literature shows, that is not the case. Following are a few quotes:
Homochirality is a common feature of amino acids and carbohydrates, and its origin is still unknown. (September 24, 2020) 26
The origin of homochirality in L-amino acid in proteins is one of the mysteries of the evolution of life. (30 November 2018) 27
How L-chiral proteins emerged from demi-chiral mixtures is unknown. The lack of understanding of the origins of the breaking of demi-chirality found in the molecules of life on Earth is a long-standing problem. (December 26, 2019) 28
How homochirality concerning biopolymers (DNA/RNA/proteins) could have originally occurred (i.e., arisen from a non-life chemical world, which tended to be chirality-symmetric) is a long-standing scientific puzzle. (January 8, 2020) 29

Why only left-handed, and not right-handed amino acids? 
Apparently, there is no deeper functional reason or justification that makes it necessary for them to be left-handed rather than right-handed; right-handed amino acids are no less stable and no more reactive. 45 All that is needed is purity: one handedness, not a mixture of both.

Viviane Richter wrote an article for Cosmos magazine in 2015:

It didn’t have to be that way. When life first emerged, why did it choose left and not right? Steve Benner believes biology picked left by chance. Malcolm Walter, an astrobiologist at the Australian Centre for Astrobiology at the University of New South Wales agrees. He also doubts we’ll ever come up with a definitive answer for why biology decided to be a lefty. “It’s going to remain speculative for a very long time – if not forever!” 44

From prebiotic to biotic chirality determination
In cells, the left-handedness of amino acids is established by a group of enzymes called aminotransferases through a transamination reaction. The reaction involves the transfer of an amino group 37, for example by one of these enzymes, aspartate transaminase (AST), from a donor, such as the amino acid aspartate, to the alpha-carbon of an alpha-keto acid 35, the acceptor; once the alpha-keto acid receives that amino group, it is converted into an amino acid (the product), in this case glutamate. An example of an alpha-keto acid is alpha-ketoglutarate (AKG), a key molecule in the Krebs (tricarboxylic acid, TCA) cycle that helps determine the overall rate of the citric acid cycle of the organism. 36 By losing its amino group, aspartate is transformed into oxaloacetate; by receiving one, alpha-ketoglutarate is transformed into glutamate. To perform this reaction, AST requires pyridoxal 5′-phosphate (P5P) as an essential cofactor for maximum enzyme activity. P5P is the active metabolite of vitamin B6 (it is used in hundreds of enzymes) and serves as a molecular shuttle for ammonia and electrons between the amino donor and the amino acceptor. Eighteen different proteinogenic amino acids can serve as the starting point of the reaction, which can run anabolically, to make amino acids, or catabolically, to produce waste products such as the nitrogenous waste urea, which is excreted as a toxic end product. AST has high specificity for alpha-ketoglutarate.
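The amino-group swap described above can be sketched as a simple mapping. This is a toy illustration only (the names are ours): the real reaction proceeds through the P5P/PLP cofactor and its stereospecific chemistry, none of which is modeled here.

```python
# Toy model of AST-catalyzed transamination: the donor amino acid loses
# its amino group (becoming its corresponding alpha-keto acid), while the
# acceptor alpha-keto acid gains it (becoming the corresponding amino acid).
AMINO_TO_KETO = {"aspartate": "oxaloacetate", "glutamate": "alpha-ketoglutarate"}
KETO_TO_AMINO = {keto: amino for amino, keto in AMINO_TO_KETO.items()}

def transaminate(donor, acceptor):
    """Swap an amino group from an amino acid donor to an alpha-keto acid acceptor."""
    return AMINO_TO_KETO[donor], KETO_TO_AMINO[acceptor]

print(transaminate("aspartate", "alpha-ketoglutarate"))
# ('oxaloacetate', 'glutamate')
```

The mapping makes the reaction's symmetry visible: run in one direction it builds glutamate, and run in reverse it regenerates aspartate, which is why the same enzyme serves both anabolic and catabolic roles.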

This is a complex process. The literature on ASTs spans approximately 60 years, and much fundamental mechanistic information on PLP-dependent reactions has been gained from its study. 47 Yet even in 2019 the mechanism was still not fully understood, despite AST being "one of the most studied enzymes of this category". 46

Aspartate Aminotransferase
Since left-handedness is life-essential and AST is a key metabolic enzyme, its origin has to be ancient: it would have to belong to the minimal proteome and enzymatic setup of the first life forms. It is found in species from bacteria to eukaryotes.

The authors Mei Han and colleagues reported in a scientific paper from 2021:
Aspartate aminotransferase is present in all free-living organisms. AST is a much-conserved enzyme found in both prokaryotes and eukaryotes and is closely linked to purine’s biosynthesis salvage pathway as well as the glycolytic and oxidative phosphorylation pathways. 48


1. Craig Venter: Life: What A Concept! 2008 
2. A. G. Cairns-Smith:  Genetic Takeover: And the Mineral Origins of Life 
3. Robert M. Hazen: Fundamentals of Geobiology 2012 
4. Fred Hoyle: The Intelligent Universe   1983
5. Nir Goldman: Synthesis of glycine-containing complexes in impacts of comets on early Earth 12 September 2010 
6. PIERAZZO  Amino acid survival in large cometary impacts 1999 
7. Hugh Ross: Could Impacts Jump-Start the Origin of Life? November 8, 2010 
8.Yasuhiro Oba: Identifying the wide diversity of extraterrestrial purine and pyrimidine nucleobases in carbonaceous meteorites 26 April 2022 
9. Liz Kruesi: All of the bases in DNA and RNA have now been found in meteorites 
10. Jamie E. Elsila: Meteoritic Amino Acids: Diversity in Compositions Reflects Parent Body Histories 2016 Jun 22 
11. E A Martell: Radionuclide-induced evolution of DNA and the origin of life 
12. Brian C. Lacki: THE LOG LOG PRIOR FOR THE FREQUENCY OF EXTRATERRESTRIAL INTELLIGENCES September 21, 2016  
13. Stanley L. Miller A Production of Amino Acids Under Possible Primitive Earth Conditions May 15, 1953 
14. JEFFREY L. BADA: Prebiotic Soup--Revisiting the Miller Experiment 
15. ADAM P. JOHNSON: The Miller Volcanic Spark Discharge Experiment 17 Oct 2008 
16. Eric T. Parker: Primordial synthesis of amines and amino acids in a 1958 Miller H2S-rich spark discharge experiment March 21, 2011 
17. [url=https://www.usgs.gov/faqs/what-gases-are-emitted-kilauea-and-other-active-volcanoes#:~:text=Ninety%2Dnine percent of the,and other minor gas species.]What gases are emitted by Kīlauea and other active volcanoes? [/url]
18. J W Delano: Redox history of the Earth's interior since approximately 3900 Ma: implications for prebiotic molecules Aug-Oct 2001 
19. Eric T. Parker: Conducting Miller-Urey Experiments 2014 Jan 21 
20. Dr. Stanley L. Miller: From Primordial Soup to the Prebiotic Beach An interview with exobiology pioneer 
21. Hugh Ross, Fazale Rana,  Origins of Life, page 73 
22. Donna G. Blackmond: The Origin of Biological Homochirality 2010 May; 2 
23. https://www.nature.com/scitable/definition/amino-acid-115/
24. A. G. CAIRNS-SMITH Seven clues to the origin of life, page 58 
25. A. G. CAIRNS-SMITH genetic takeover 1988  
26. Shubin Liu: Homochirality Originates from the Handedness of Helices September 24, 2020 
27. Tadashi Ando: Principles of chemical geometry underlying chiral selectivity in RNA minihelix aminoacylation 30 November 2018 
28. Jeffrey Skolnick:  On the possible origin of protein homochirality, structure, and biochemical function December 26, 2019 
29. Yong Chen: The origin of biological homochirality along with the origin of life January 8, 2020 
30. From Primordial Soup to the Prebiotic Beach An interview with exobiology pioneer, Dr. Stanley L. Miller, 
31. Change Laura Tan, Rob Stadler: The Stairway To Life: An Origin-Of-Life Reality Check  March 13, 2020 
32. Daniel P. Glavin:  The Search for Chiral Asymmetry as a Potential Biosignature in our Solar System November 19, 2019 
33. 
34. Davide Castelvecchi: ‘Elegant’ catalysts that tell left from right scoop chemistry Nobel 06 October 2021
35. About: Keto acid: 
36. Nan Wu: Alpha-Ketoglutarate: Physiological Functions and Applications 2016 Jan 24 
37. Daniel Nelson: Amino Group: Definition And Examples  2, November 2019 
38. What is Carboxylic Acid? 
39. Introduction to Amines – Compounds Containing Nitrogen 
40. alpha carbon: 
41. [url=http://www.chem.ucla.edu/~harding/IGOC/R/r_group.html#:~:text=R group%3A An abbreviation for,halogens%2C oxygen%2C or nitrogen.]R-group: [/url]
42. Guillaume Borrel: Unique Characteristics of the Pyrrolysine System in the 7th Order of Methanogens: Implications for the Evolution of a Genetic Code Expansion Cassette 
43. Rare, but essential – the amino acid selenocysteine June 19, 2017 
44. Viviane Richter [url= https://cosmosmagazine.com/science/biology/why-the-building-blocks-in-our-cells-turned-to-the-left/]Why building blocks in our cells turned left[/url] 10 August 2015
45. https://www.scripps.edu/newsandviews/e_20040920/onpress.html
46. Kumari Soniya: Transimination Reaction at the Active Site of Aspartate Aminotransferase: A Proton Hopping Mechanism through Pyridoxal 5′-Phosphate 
47. Michael D. Toney: [url= https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3946379/]Aspartate Aminotransferase: an old dog teaches new tricks [/url]2013 Oct 9.
48. Mei Han: l-Aspartate: An Essential Metabolite for Plant Growth and Stress Acclimation 2021 Apr; 26 
49. Amino acids :https://bio.libretexts.org/Courses/University_of_California_Davis/BIS_2A%3A_Introductory_Biology_(Easlon)/Readings/04.3%3A_Amino_Acids
50. https://en.wikibooks.org/wiki/Structural_Biochemistry/Volume_5#Modified_Amino_Acids
51. Carol Turse: Simulations of Prebiotic Chemistry under Post-Impact Conditions on Titan 2013 Dec 17 
52. Miller–Urey experiment https://en.wikipedia.org/wiki/Miller%E2%80%93Urey_experiment
53. Stanley L. Miller: [url= https://global.oup.com/us/companion.websites/fdscontent/uscompanion/us/pdf/Rigoutsos/I-SampleChap.pdf]Prebiotic Chemistry on the Primitive Earth[/url] 2006
54. Norio Kitadai: Origins of building blocks of life: A review 29 July 2017
55. STANLEY L. MILLER AND HAROLD C. UREY: Organic Compound Synthesis on the Primitive Earth: Several questions about the origin of life have been answered, but much remains to be studied 31 Jul 1959
56. Jessica Wimmer and William Martin: Likely energy source behind first life on Earth found ‘hiding in plain sight’ January 19, 2022
57. Leslie E Orgel †: The Implausibility of Metabolic Cycles on the Prebiotic Earth  January 22, 2008 
58. Punam Dalai: [url=https://pubs.geoscienceworld.org/msa/eleme



Soo Yeon Jeong and colleagues described the enzyme in a scientific paper from 2019: L-aspartate aminotransferase (AST) is highly conserved across species and plays essential roles in varied metabolic pathways. It also regulates the cellular level of amino acids by catalyzing amino acid degradation and biosynthesis. AST generally forms a homodimer consisting of two active sites in the vicinity of subunit interfaces; these active sites bind to its cofactor PLP and substrate independently. Each subunit is composed of three parts: large domain, small domain, and N-terminal arm. The active site is situated in the cavity formed by two large domains and one small domain. 17

Proteopedia informs: It is a homodimer that is 413 amino acids long and serves a critical role in amino acid and carbohydrate metabolism 18

This enzyme operates with high specificity. Soo Yeon Jeong and colleagues even report in their concluding remarks: "We observed the mode of intercommunication during catalytic reactions between two protomers of the dimer." Several life-essential enzymes operate on the basis of intrinsic signaling and communication (ribosomes, aminoacyl-tRNA synthetases), which indicates the high sophistication of these molecular machines. Pyridoxal 5′-phosphate, which performs various key functions, is furthermore an integral part of the enzymatic reaction, which indicates interdependence. Taken together, this is remarkable evidence of an intelligently designed setup.

How were the 20 proteinogenic amino acids selected on early earth?
Science has no clear answer as to how and why this specific collection of amino acids came to be incorporated into the genetic code to make proteins. Why 20 (in some rare cases, 22), and not more or fewer, considering that many different ones could have been chosen? The proteinogenic amino acids contain on average 19 atoms each.


Stanley Miller wrote in his 1981 science paper, "Reasons for the Occurrence of the Twenty Coded Protein Amino Acids":

There are only twenty amino acids that are coded for in protein synthesis, along with about 120 that occur by post-translational modifications. Yet there are over 300 naturally-occurring amino acids known, and thousands of amino acids are possible. The question then is: why were these particular 20 amino acids selected during the process that led to the origin of the most primitive organism and during the early stages of Darwinian evolution? Why are β, γ, and δ amino acids absent? The selection of α-amino acids for protein synthesis and the exclusion of the β, γ, and δ amino acids raises two questions. First, why does protein synthesis use only one type of amino acid and not a mixture of various α, β, γ, δ… acids? Second, why were the α-amino acids selected? The present ribosomal peptidyl transferase has specificity for only α-amino acids. Compounds with a more remote amino group reportedly do not function in the peptidyl transferase reaction. The ribosomal peptidyl transferase has a specificity for L-α-amino acids, which may account for the use of a single optical isomer in protein amino acids. The chemical basis for the selection of α-amino acids can be understood by considering the deleterious properties that β, γ, and δ amino acids give to peptides or have for protein synthesis. 1

[Image: β-amino acid structures]

The question is not only why this number of amino acids, and no more or fewer, was selected for the amino acid "alphabet", but also how they could have been selected from a prebiotic soup, from ponds and puddles, or even from the Archean ocean.
The ribosome core that performs the polymerization, or catenation, of amino acids, joining one amino acid monomer to the next, is the ribosomal peptidyl transferase center, and it only incorporates alpha-amino acids, as Joongoo Lee and colleagues explain in a scientific article from 2020:

Ribosome-mediated polymerization of backbone-extended monomers into polypeptides is challenging due to their poor compatibility with the translation apparatus, which evolved to use α-L-amino acids. Moreover, mechanisms to acylate (or charge) these monomers to transfer RNAs (tRNAs) to make aminoacyl-tRNA substrates is a bottleneck. The shape, physiochemical, and dynamic properties of the ribosome have been evolved to work with canonical α-amino acids 11

No physical requirement dictates that a ribosome could not be constructed to incorporate β, γ, or δ amino acids. Indeed, scientists working on polymer engineering are designing ribosomes that use an expanded amino acid alphabet. A 3D printer must be fed with specifically designed polymer filaments that it can process in order to print objects based on the software information that dictates the product's form. If someone tries to feed it inadequate raw materials, the printer will not be able to perform the job it was designed for. The ribosome is a molecular 3D nanoprinter, as Jan Mrazek and colleagues elucidate in a science paper published in 2014:

Structural and functional evidence point to a model of vault assembly whereby the polyribosome acts like a 3D nanoprinter to direct the ordered translation and assembly of the multi-subunit vault homopolymer, a process which we refer to as polyribosome templating. 12 In the ribosome, likewise, the reaction center is specifically adjusted to perform its reaction with the specific set of α-amino acids.

The materials that a machine is fed with, and the machine itself, both have to be designed from scratch in order to function properly; neither works without the adequacy of the other. There is a clear interdependence, which indicates that the amino acid alphabet was selected to work with the ribosome as we know it.

From Georgia Tech: The preference for the incorporation of the biological amino acids over non-biological counterparts also adds to possible explanations for why life selected for just 20 amino acids when 500 occurred naturally on the Hadean Earth. "Our idea is that life started with the many building blocks that were there and selected a subset of them, but we don't know how much was selected on the basis of pure chemistry or how many biological processes did the selecting. Looking at this study, it appears today's biology may reflect these early prebiotic chemical reactions more than we had thought," said Loren Williams, professor in Georgia Tech's School of Chemistry and Biochemistry 4

The authors mention 500 amino acids as supposedly extant on the early earth. Perhaps this number comes from a scientific article about nonribosomal peptides (NRPs), which cites the same figure of 500. Areski Flissi and colleagues write:

Secondary metabolites (nonribosomal peptides) are produced by bacteria and fungi. In fact, >500 different building blocks, called monomers, are observed in these peptides, such as derivatives of the proteinogenic amino acids, rare amino acids, fatty acids or carbohydrates. In addition, various types of bonds connect their monomers such as disulfide or phenolic bonds. Some monomers can connect with up to five other monomers, making cycles or branches in the structure of the NRPs. 5

Stuart A. Kauffman (2018) gives us an entirely different perspective. He wrote on page 22, in the Discussion section: "Using the PubChem dataset and the Murchison meteorite mass spectroscopy data we could reconstruct the time evolution and managed to calculate the time of birth of amino acids, which is about 165 million years after the start of evolution" (they mean after the Big Bang), "a mere blink of an eye in cosmological terms. All this puts the Miller-Urey experiment in a very different perspective. The results suggest that the main ingredients of life, such as amino acids, nucleotides and other key molecules came into existence very early, about 8-9 billion years before life." 6

Why should the number of possible amino acids on the early earth be restricted to 500? In fact, as Allison Soult, a chemist from the University of Kentucky, wrote: any (large) number of amino acids can possibly be imagined. 7

Steven Benner goes along with the same reasoning (2008): Conceptually, the number of compounds in gas clouds, meteorites, Titan, and laboratory simulations of early Earth is enormous, too many for any but a super-human imagination to start puzzling over. Each of those n compounds (where n is a large number) can react with any of the other compounds (for the mathematically inclined, this gives n² reactions). Of course, each of these n² products can react further. Thus, any useful scientific method must begin by constraining the enormity of possibilities that observations present to focus the minds of us mortal scientists. 19

This number is de facto limitless. The universe could in principle produce an unlimited variety of amino acids. The R side chains can occur in any isomeric combination: right-handed or left-handed, with one or two functional groups, with cyclic (cyclobutane, cyclopentane, and cyclohexane) and/or branched structures; they can be amphoteric, aliphatic, aromatic, polar, uncharged, or positively or negatively charged, and so on. Furthermore, the carbon atom bonded to the functional group, such as a carbonyl, is known as the α carbon; the next carbons are labeled β, γ, δ, and so on, in Greek alphabetical order. It is conceivable that the protein alphabet could have been made of β-peptides; nothing physically constrains amino acids from taking these alternative configurations. In fact, bioactive peptides that use β-amino acids are known to form polymer sequences. 3 Every synthetic chemist will confirm this. There is also no plausible reason why only hydrogen, carbon, nitrogen, oxygen, and sulfur should or could be used out of the 118 elements extant in the universe. If the number of possible amino acid combinations to form a set is limitless, then the chance of randomly selecting a specific set of amino acids for specific functions is practically zero. It would never have happened by non-designed means.
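The scale of this selection problem can be made concrete with a short calculation. This is only a sketch: the pool sizes are illustrative assumptions (500 being the Hadean-Earth figure cited above), and since the argument holds that the true pool is effectively unbounded, these counts are lower bounds on the search space.

```python
# Counting how many distinct 20-member amino acid "alphabets" could be drawn
# from a finite candidate pool. The pool sizes are illustrative assumptions.
import math

for n in (300, 500, 1000):
    n_sets = math.comb(n, 20)  # binomial coefficient: n choose 20
    print(f"{n} candidate amino acids -> {float(n_sets):.2e} possible 20-member sets")
```

Even the smallest pool yields vastly more candidate alphabets than undirected chemistry could ever sample, which is the quantitative point behind the paragraph above.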

Science Daily reported in 2018 that quantum chemistry had supposedly solved the mystery of why there are these 20 amino acids in the genetic code. They wrote: "The newer amino acids had become systematically softer, i.e., more readily reactive or prone to undergo chemical changes. The transition from the dead chemistry out there in space to our own biochemistry here today was marked by an increase in softness and thus an enhanced reactivity of the building blocks." 14 The pertinent follow-up questions are: why were the soft amino acids added to the toolbox in the first place, and what exactly were these readily reactive amino acids supposed to react with?
They answered: "At least some of the new amino acids, especially methionine, tryptophan, and selenocysteine, were added as a consequence of the increase in the levels of oxygen in the biosphere. This oxygen promoted the formation of toxic free radicals, which exposes modern organisms and cells to massive oxidative stress. The new amino acids underwent chemical reactions with the free radicals and thus scavenged them in an efficient manner. The oxidized new amino acids, in turn, were easily repairable after oxidation, but they protected other and more valuable biological structures, which are not repairable, from oxygen-induced damage. Hence, the new amino acids provided the remote ancestors of all living cells with a very real survival advantage that allowed them to be successful in the more oxidizing, 'brave' new world on Earth. With this in view, we could characterize oxygen as the author adding the very final touch to the genetic code."

There are several problems with this hypothesis:
1. If the prebiotic atmosphere were oxygenated, organic molecules like RNA and DNA would have been susceptible to thermal oxidation and photo-oxidation and would readily have been destroyed.
2. Twelve of the proteinogenic amino acids were never produced in any lab experiment. 15
3. There was no selection process extant to sort out the amino acids best suited for use in life (the set that is used outperforms some 2 million possible alternative amino acid "alphabets").
4. There was no concentration process to collect the amino acids at one specific assembly site.
5. There was no enantiomer selection process.
6. They would have disintegrated rather than complexified.
7. There was no process to purify them.

John Maynard Smith, a British biologist, wrote in The Major Transitions in Evolution (1997): Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of "It was a good idea at the time." (A good idea of whom?) If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life's employing many varieties of amino acid, because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. 16 Do random chemical reactions have the knowledge to arrive at an optimal conclusion or a "good compromise"?

No, of course, chemical reactions have no knowledge, no know-how, no foresight, no goals. 
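The coding-capacity arithmetic behind the quoted passage can be checked directly. A minimal sketch, using only the figures mentioned in the quote:

```python
# A codon alphabet of B bases with codon length L can distinguish B**L symbols.
import math

def n_codons(bases: int, length: int) -> int:
    """Number of distinct codons of the given length."""
    return bases ** length

print(n_codons(4, 3))  # triplet code: 64 codons, enough for 20 AAs plus stop signals
print(n_codons(4, 2))  # doublet code: 16 codons, at most 16 amino acids
# A binary (2-base) code would need codons of length 5 to cover 20 amino acids,
# since 2**4 = 16 < 20 <= 32 = 2**5:
print(math.ceil(math.log(20, 2)))
```

This is why a doublet code caps out at sixteen amino acids, exactly as the quote observes.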

Optimality of the amino acid set that is used to encode proteins 
Gayle K. Philip (2011): The last universal common ancestor of contemporary biology (LUCA) used a precise set of 20 amino acids as a standard alphabet with which to build genetically encoded protein polymers. Many alternatives were also available, which highlights the question: what factors led biological evolution on our planet to define its standard alphabet? Here, we demonstrate unambiguous support that the standard set of 20 amino acids represents the possible spectra of size, charge, and hydrophobicity more broadly and more evenly than can be explained by chance alone. 2

We know that conscious, intelligent agents with foresight are able to conceptualize and visualize, a priori, a system of building blocks, like Lego bricks, with a set of properties that optimally perform an intended function or task; and we know that intelligent agents can then physically instantiate the 3D object previously conceptualized.

Lego bricks in their present form were launched in 1958. The interlocking principle with its tubes makes it unique and offers unlimited building possibilities. It's just a matter of getting the imagination going – and letting a wealth of creative ideas emerge through play. 8

Amino acids are analogous to Lego bricks. Bricks to build a house are made with the right stability, size, materials, and insulating capacity to maintain adequate, narrow-range temperatures inside the house. Glass is made transparent to serve as windows. (Rare-earth) metals, plastic, rubber, etc. are made to serve as building blocks of complex machines. A mix of atoms will never organize by itself into the building blocks of a higher-order, complex, integrated system based on functional, well-integrated, and matching sub-parts. But that is precisely what nature needed in order to complexify into the integrated, systems-level organization of cells and multicellularity. We know the limited range of unguided random processes, and we know the nearly limitless range of engineering solutions that capable intelligent agents can instantiate.

Gayle K. Philip continues: We performed three specific tests: we compared (in terms of coverage) the full set of 20 genetically encoded amino acids for size, charge, and hydrophobicity with equivalent values calculated for a sample of 1 million alternative sets (each also comprising 20 members). Results showed that the standard alphabet exhibits better coverage (i.e., greater breadth and greater evenness) than any random set for each of size, charge, and hydrophobicity, and for all combinations thereof. Results indicate that life genetically encodes a highly unusual subset of amino acids relative to any random sample of what was prebiotically plausible. A maximum of 0.03% of random sets out-performed the standard amino acid alphabet in two properties, while no single random set exhibited greater coverage in all three properties simultaneously. These results combine to present a strong indication that the standard amino acid alphabet, taken as a set, exhibits strongly nonrandom properties. Random chance would be highly unlikely to represent the chemical space of possible amino acids with such breadth and evenness in charge, size, and hydrophobicity (properties that define what protein structures and functions can be built). It is remarkable that such a simple starting point for analysis yields such clear results.

If the set exhibits nonrandom properties, and random chance is highly unlikely, where does that optimality come from? It cannot be due to physical necessity: matter has no necessity to sort out a set of building blocks for distant goals. Evolution and natural selection are hopelessly inadequate mechanisms that were not at play at that stage. The only option left is intelligent design.

Melissa Ilardo (2015) : We compared the encoded amino acid alphabet to random sets of amino acids. We drew 10^8 random sets of 20 amino acids from our library of 1913 structures and compared their coverage of three chemical properties: size, charge, and hydrophobicity, to the standard amino acid alphabet. We measured how often the random sets demonstrated better coverage of chemistry space in one or more, two or more, or all three properties. In doing so, we found that better sets were extremely rare. In fact, when examining all three properties simultaneously, we detected only six sets with better coverage out of the 10^8 possibilities tested. Sets that cover chemistry space better than the genetically encoded alphabet are extremely rare and energetically costly. The amino acids used for constructing coded proteins may represent a largely global optimum, such that any aqueous biochemistry would use a very similar set. 9
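The logic of this kind of comparison can be sketched as a toy Monte Carlo. Everything below is an illustrative assumption: the amino acid "library" is synthetic random data, and "coverage" is simplified to breadth (range) per property, whereas the published work used a curated structure library and also scored evenness.

```python
# Toy Monte Carlo in the spirit of the coverage test: draw random 20-member
# sets from a synthetic library and count how often one beats an evenly
# spread reference set on all three property ranges at once.
import random

random.seed(42)
N_LIB, SET_SIZE, TRIALS = 200, 20, 10_000

# Synthetic library: each "amino acid" is a (size, charge, hydrophobicity) tuple.
library = [tuple(random.random() for _ in range(3)) for _ in range(N_LIB)]

def breadth(aa_set, prop):
    """Range (max - min) of one property across a set: a crude coverage score."""
    vals = [aa[prop] for aa in aa_set]
    return max(vals) - min(vals)

# Reference set: members spread evenly along the size axis, standing in for
# the unusually even standard alphabet.
by_size = sorted(library, key=lambda aa: aa[0])
reference = by_size[:: N_LIB // SET_SIZE][:SET_SIZE]
ref_scores = [breadth(reference, p) for p in range(3)]

better = 0
for _ in range(TRIALS):
    candidate = random.sample(library, SET_SIZE)  # one random alphabet per trial
    if all(breadth(candidate, p) > ref_scores[p] for p in range(3)):
        better += 1

print(f"{better} of {TRIALS} random sets beat the reference on all 3 properties")
```

With real property data and an evenness term added, this is the shape of the calculation behind figures like "six better sets out of 10^8".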

That is impressive and remarkable: it means that only about one in 16 million sets is better suited for the task. Another noteworthy paper was written by Andrew J. Doig in 2016. He wrote:

Why the particular 20 amino acids were selected to be encoded by the Genetic Code remains a puzzle. They were selected to enable the formation of soluble structures with close-packed cores, allowing the presence of ordered binding pockets. Factors to take into account when assessing why a particular amino acid might be used include its component atoms, functional groups, biosynthetic cost, use in a protein core or on the surface, solubility and stability. Applying these criteria to the 20 standard amino acids, and considering some other simple alternatives that are not used, we find that there are excellent reasons for the selection of every amino acid. Rather than being a frozen accident, the set of amino acids selected appears to be near ideal. 10 The last sentence is noteworthy: "the set of amino acids selected appears to be near ideal." It remains a puzzle, as do so many other things in biology, only for those who build their inferences on a constrained set of possible explanations, in which an intelligent causal agency is excluded a priori. Selecting things for specific goals is a conscious process that requires intelligence: an attribute that chance alone lacks, but that an intelligent creator can employ to create life.

Biosynthetic cost: Protein synthesis takes a major share of the energy resources of a cell. Leu costs only 1 ATP, but its isomer Ile costs 11. Why would life ever, therefore, use Ile instead of Leu, if they have the same properties? Larger is not necessarily more expensive; Asn and Asp cost more in ATP than their larger alternatives Gln and Glu, and large Tyr costs only two ATP, compared to 15 for small Cys. The high cost of sulfur-containing amino acids is notable.

This is indeed completely counterintuitive and does not conform to naturalistic predictions.

Burial and surface: Proteins have close-packed cores with the same density as organic solids and side chains fixed into a single conformation. A solid core is essential to stabilize proteins and to form a rigid structure with well-defined binding sites. Nonpolar side chains have therefore been selected to stabilize close-packed hydrophobic cores. Conversely, proteins are dissolved in water, so other side chains are used on a protein surface to keep them soluble in an aqueous environment.

The problem here is that molecules, even an arrangement of correctly selected varieties of amino acids, would have borne no function until life began. Functional subunits of proteins, or even fully operational proteins on their own, would only have had a function once life had begun and the cell's intrinsic operations were underway. It is as if molecules had an inherent drive to contribute to getting life going, which is of course absurd. The only rational alternative is that a powerful creator had the foresight and knew which arrangement and selection of amino acids would fit and work to make life possible.

Which amino acids came first? It is plausible that the first proteins used a subset of the 20 and a simplified Genetic Code, with the first amino acids acquired from the environment.

Why is it plausible? It is not merely implausible; it is plainly impossible. The genetic code could not have emerged gradually, and there is no known explanation for how it did emerge. The author also ignores that the whole process of protein synthesis requires all its parts to be fully operational right from the beginning. A gradual development by evolutionary selective forces is highly unlikely.

Energetics of protein folding: Folded proteins are stabilized by hydrogen bonding, removal of nonpolar groups from water (hydrophobic effect), van der Waals forces, salt bridges, and disulfide bonds. Folding is opposed by loss of conformational entropy, where rotation around bonds is restricted, and introduction of strain. These forces are well balanced so that the overall free energy changes for all the steps in protein folding are close to zero.

Foresight and superior knowledge would be required to know how to obtain a protein fold that bears function, with the forces naturally balanced so that the overall energy change is close to zero. In a more recent paper, Christopher Mayer-Bacon (2021) wrote: Three fundamental physicochemical properties of size, charge, and hydrophobicity have received the most attention to date in identifying how the standard amino acid alphabet appears most clearly unusual. The standard amino acid alphabet appears more evenly distributed across a broader range of values than can reasonably be explained by chance. This model indicates a probability of approximately one in two million that an amino acid set would exhibit better coverage by chance. 13

How is ammonium introduced to synthesize amino acids?

Unsolved issues about the origin of amino acids on early earth:
How did unguided, non-designed coincidence select the right amino acids from among the over 300 (known; the number is theoretically limitless) that occur naturally on earth? All life on Earth uses the same 20 (in some cases, 22 genetically encoded) amino acids to construct its proteins, even though this represents a small subset of the amino acids available in nature.
How were twenty (+2) amino acids, and not more or fewer, selected to make proteins?
How was the concomitant synthesis of undesired or irrelevant by-products avoided?
How were bifunctional monomers (molecules with two functional groups, so they can combine with two others) selected, and monofunctional monomers (with only one functional group) sorted out?
How were β, γ, δ… amino acids sorted out?
How did a prebiotic synthesis of biological amino acids avoid the concomitant synthesis of undesired or irrelevant by-products?
How could achiral precursors of amino acids have produced and concentrated only left-handed amino acids? ( The homochirality problem )?
How did the transition from prebiotic enantiomer selection to the enzymatic reaction of transamination occur that had to be extant when cellular self-replication and life began?
How did ammonia (NH3), the precursor for amino acid synthesis, accumulate on prebiotic earth, if the lifetime of ammonia would be short because of its photochemical dissociation?
How could prebiotic events have delivered organosulfur compounds required in a few amino acids used in life, if in nature sulfur exists only in its most oxidized form (sulfate or SO4), and only some unique groups of procaryotes mediate the reduction of SO4 to its most reduced state (sulfide or H2S)?
How did natural events have foreknowledge that the selected amino acids are best suited to enable the formation of soluble structures with close-packed cores, allowing the presence of ordered binding pockets inside proteins?
How did nature select the set of amino acids which appears to be near-optimal in regard to size, charge, and hydrophobicity more broadly and more evenly than in 16 million alternative sets?
How did Amino acid synthesis regulation emerge? Biosynthetic pathways are often highly regulated such that building blocks are synthesized only when supplies are low.
How did the transition from prebiotic synthesis to the synthesis through metabolic pathways of amino acids occur? A minimum of 112 enzymes is required to synthesize the 20 (+2) amino acids used in proteins.

1. S L Miller: Reasons for the occurrence of the twenty coded protein amino acids 1981 
2. Gayle K. Philip: Did evolution select a nonrandom "alphabet" of amino acids? 2011 Mar 24
3. Chiara Cabrele: Peptides Containing β-Amino Acid Patterns: Challenges and Successes in Medicinal Chemistry September 10, 2014 
4. Pre-Life Building Blocks Spontaneously Align in Evolutionary Experiment 
5. Areski Flissi: Norine: update of the nonribosomal peptide resource 
6. Stuart A. Kauffman: Theory of chemical evolution of molecule compositions in the universe, in the Miller-Urey experiment and the mass distribution of interstellar and intergalactic molecules  30 Nov 2019
7. LibreTexts: Amino Acids
8. https://web.archive.org/web/20150905173143/http://www.lego.com/en-us/aboutus/lego-group/the_lego_history
9. Melissa Ilardo: Extraordinarily Adaptive Properties of the Genetically Encoded Amino Acids 24 March 2015 
10. Andrew J. Doig: Frozen, but no accident – why the 20 standard amino acids were selected 2 December 2016
11. Joongoo Lee: Ribosome-mediated polymerization of long chain carbon and cyclic amino acids into peptides in vitro 27 August 2020 
12. Jan Mrazek: Polyribosomes Are Molecular 3D Nanoprinters That Orchestrate the Assembly of Vault Particles 2014 Oct 30 
13. Christopher Mayer-Bacon: Evolution as a Guide to Designing xeno Amino Acid Alphabets 10 March 2021 
14. Quantum chemistry solves mystery why there are these 20 amino acids in the genetic code February 1, 2018 
15. Miller–Urey experiment 
16. John Maynard Smith: The Major Transitions in Evolution 1997
17. Soo Yeon Jeong: Crystal structure of L-aspartate aminotransferase from Schizosaccharomyces pombe August 29, 2019 
18. https://proteopedia.org/wiki/index.php/Aspartate_Aminotransferase#cite_note-AST_Structure-3
19. Steven A. Benner: Life, the Universe and the Scientific Method 2009



Last edited by Otangelo on Sat Sep 03, 2022 4:34 am; edited 9 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

Nucleotides
DNA (deoxyribonucleic acid) is one of the most intriguing and fascinating biomolecules found in nature. It forms the famous double helix, which is elegant and beautiful, and it is made of nucleotide monomers, the molecules that make up the "alphabet" that specifies biological heredity. Life is information-driven. Specified complex information stored in genes dictates, instructs, and directs the making of very complex molecular machines, autonomous robotic production lines, and chemical cell production plants; it also directs and orders the cell's work and operation, and is as such of central importance in all life forms. Whoever wants to find answers about how life started needs to find compelling explanations for how RNA and DNA first emerged on earth. The information stored in DNA is transcribed into RNA (ribonucleic acid) and finally translated to make proteins. RNA has several other important roles in the cell. Interestingly, some viruses use RNA to store information.

Nucleic acid research started in 1871, with a short passage in Miescher's essay "Über die chemische Zusammensetzung der Eiterzellen" ("On the chemical composition of pus cells"). 31 He characterized this substance as nitrogen-containing and very rich in phosphorus. The following decades were marked by efforts to resolve the molecular structure of this "nuclein". James Watson and Francis Crick discovered the structure of the DNA molecule in 1953. RNA is built from (almost) the same four-letter alphabet as DNA. It is more fragile; as such, it could also serve as an information carrier, but a less adequate one long-term. In all known living beings, genetic information flows from DNA to RNA to proteins. The work of Watson and Crick on the structure of DNA was performed with some access to the X-ray crystallography of Maurice Wilkins and Rosalind Franklin at King's College London. This information was critical for their further progress. They obtained it as part of a report by Franklin to the Medical Research Council. Combining all of this work led to the deduction that DNA exists as a double helix. The report was by no means secret, but it put the critical data on the parameters of the helix (base spacing, helical repeat, number of units per turn of the helix, and diameter of the helix) in the hands of two who had contributed none of those data. With this information, they could begin to build realistic models. The big problem was where to put the purine and pyrimidine bases. Details of the diffraction pattern indicated two strands and indicated that the relatively massive phosphate-ribose backbones must be on the outside, leaving the bases in the center of the double helix.

RNA and DNA are chemically unlikely molecules. Their nucleotide monomers are composed of three parts: a nitrogenous base, a five-carbon sugar (pentose), and a phosphate group. DNA uses thymine as a base where RNA uses uracil. The monomers are joined into polymers through their phosphate groups. In the genome, they form double strands with Watson-Crick base-pairing.

How could RNA have been synthesized prebiotically?
A number of reasons have been given why a prebiotic synthesis of RNA, and even more so of DNA, is exceedingly difficult. In cells, the synthesis of RNA and DNA requires extremely complex, energy-demanding, finely adjusted, monitored, and controlled anabolic pathways. Since these were not extant prebiotically, RNA would have had to be synthesized spontaneously on the early earth by abiotic, non-enzymatic pathways. This is one of the major unsolved origin-of-life problems, among many others. Krishnamurthy points out that "there has been some common ground on what would be needed for organic synthesis of DNA/RNA (for example, the components of ribose and nucleobases to come from formaldehyde, cyanide and their derivatives) but none of the various approaches has found universal acceptance within the origins of life community at large." 26

Over the last decades, extraterrestrial sources like meteorites and interplanetary dust particles, hydrothermal vents in the deep ocean, warm little ponds, and a prebiotic soup have been a few of the proposals. High-energy precursors to produce purines and pyrimidines would have had to be produced in sufficient quantities and concentrated at a potential building site of the first cells. As we will see, this places an unrealistic demand on lucky accidents, and, de facto, there is no known prebiotic route by which this plausibly happened by unguided means.

An article published in 2014 summarizes the current status quo: The first, and in some ways the most important, problem facing the RNA World is the difficulty of prebiotic synthesis of RNA. This point has been made forcefully by Shapiro and has remained a focal point of the efforts of prebiotic chemists for decades. The ‘traditional’ thinking was that if one could assemble a ribose sugar, a nucleobase, and a phosphate, then a nucleotide could arise through the creation of a glycosidic bond and a phosphodiester bond. If nucleotides were then chemically activated in some form, then they could polymerize into an RNA chain. Each of these synthetic events poses tremendous hurdles for the prebiotic Earth, not to mention the often-invoked critique of the inherent instability of RNA in an aqueous solution. Thus, the issue arises of whether there could have been a single environment in which all these steps took place. Benner has eloquently noted that single-pot reactions of sufficient complexity lead to ‘asphaltization’ (basically, the production of intractable ‘goo’). 2 

Steve Benner (2013): The late Robert Shapiro found RNA so unacceptable as a prebiotic target as to exclude it entirely from any model for the origin of life. Likewise, Stanley Miller, surveying the instability of carbohydrates in water, concluded that ‘‘neither ribose nor any other carbohydrate could possibly have been a prebiotic genetic molecule’’ (Larralde et al., 1995). Many have attempted to awaken from the RNA nightmare by proposing alternative biomolecules to replace ribose, RNA nucleobases, and/or the RNA phosphate diester linkages, another source of prebiotic difficulty. These have encountered chemical challenges of their own. 47

Steve Benner (2012): Gerald Joyce has called RNA a “prebiotic chemist's nightmare” because of its combination of large size, carbohydrate building blocks, bonds that are thermodynamically unstable in water, and overall intrinsic instability. No experiments have joined together those steps (to make RNAs) without human intervention. Further, many steps in the model have problems. Some are successful only if reactive compounds are presented in a specific order in large amounts. Failing controlled addition, the result produces complex mixtures that are inauspicious precursors for biology, a situation described as the “asphalt problem”. Many bonds in RNA are thermodynamically unstable with respect to hydrolysis in water, creating a “water problem”. Finally, some bonds in RNA appear to be “impossible” to form under any conditions considered plausible for early Earth. 48

De Duve confesses: "Unless we accept intelligent design, it is clear that the RNA precursors must have arisen spontaneously as a result of existing conditions." 21 The problem is that science is clueless about how nucleotides could have been formed prebiotically.

Prof. Dr. Oliver Trapp (2019): Many questions arising from the RNA world hypothesis have not yet been answered. Among these are the transition from RNA to DNA and the pre-eminence of D-ribose in all coding polymers of life. 10

DNA and RNA: The only possible information storage molecules?
Steven A. Benner (2005): Starting in the 1980s, some synthetic biologists began to wonder whether DNA and RNA were the only molecular structures that could support genetics on Earth or elsewhere.   This knowledge, and the fact that the Watson–Crick model proposed no particular role for the phosphates in molecular recognition, encouraged the inference that the backbone could be changed without affecting pairing rules. The effort to synthesize non-ionic backbones changed the established view of nucleic acid structure. Nearly 100 linkers were synthesized to replace the 2′-deoxyribose sugar. Nearly all analogues that lacked the REPEATING CHARGE showed worse rule-based molecular recognition. Even with the most successful uncharged analogues (such as the polyamide-linked nucleic-acid analogues (PNA)) molecules longer than 15 or 20 building units generally failed to support rule-based duplex formation. In other uncharged systems, the breakdown occurs earlier. The repeating charge in the DNA backbone could no longer be viewed as a dispensable inconvenience. The same is true for the ribose backbone of RNA: The backbone is not simply scaffolding to hold the nucleobases in place; it has an important role in the molecular recognition that is central to genetics.  17

Lack of natural selection
The idea that nucleotides were readily lying around on the early earth, just waiting to be picked up and concentrated at the building site of life, was mocked by Leslie Orgel as 'the Molecular Biologist's Dream'. This is maybe the most stringent problem of prebiotic nucleotide synthesis: the materials on the prebiotic earth were a mess of mixtures of lifeless chemicals, and nothing restricted the formation of a great diversity of nucleotides with differing sugar moieties. There was no natural selection. Many science papers simply ignore this and resort nonetheless to a little magic of selective pressure. It is like going from Frankenstein's monster to man: some patchwork here and there, and chance does the rest and figures things out. Szostak and colleagues were well aware of the problem. They wrote:

There are many nucleobase variations such as 8-oxo-purine, inosine, and the 2-thio-pyrimidines, as well as sugar variants including arabino-, 2′- deoxyribo-, and threonucleotides. The likely presence of byproducts leads to a significant problem with regard to the emergence of the RNA world, since the initially synthesized oligonucleotides would be expected to be quite heterogeneous in composition. How could such a heterogeneous mixture of oligonucleotides give rise to the relatively homogeneous RNAs that are thought to be required for the evolution of functional RNAs such as ribozymes? 30

So, in 2020, they presented a model, ignoring the point made by Benner and others that such molecules simply disintegrate and randomize. They proposed that "many versions of nucleotides merged to form patchwork molecules with bits of both modern RNA and DNA, as well as largely defunct genetic molecules, such as ANA. These chimeras, like the monstrous hybrid lion, eagle and serpent creatures of Greek mythology, may have been the first steps toward today's RNA and DNA." 29 Rather than focusing "on the consequences of coexisting activated arabino- and 2′-deoxy-nucleotides for nonenzymatic template-directed primer extension", the authors need to provide a plausible trajectory for how natural selection pressures could have separated out the non-canonical nucleotides to achieve a homogeneous state of affairs, in which only the RNA and DNA nucleotides used in life polymerize. Often, the key questions get lost in the midst of the confusing technical jargon.

The nucleobases
The nucleobases are key components of RNA and DNA. The bases are divided into purines (adenine (A) and guanine (G)) and pyrimidines (cytosine (C) and thymine (T) in DNA, and cytosine (C) and uracil (U) in RNA). While purines have a double-ring structure with nine atoms, pyrimidines have a single-ring structure with six atoms. The structural difference between the two sugars is that ribonucleic acid contains a hydroxyl (-OH) group, whereas deoxyribonucleic acid contains only a hydrogen atom in place of this hydroxyl group.

Purines
Purines are one of the two classes of compounds used to build the semantophoretic molecules RNA and DNA that store genetic information. Adenine and guanine are made of two nitrogen-containing rings.

Adenine
One of the earliest experiments attempting to synthesize adenine under prebiotic conditions was made by Oró in 1961, where he presented evidence for the "synthesis of adenine from aqueous solutions of ammonium cyanide at temperatures below 100°." 18 In 1966, J. P. Ferris and L. E. Orgel pointed out what the Achilles heel of Oró's experiment was: "Adenine was formed in only 0.5% yield in Oro’s experiment; most of the cyanide formed an intractable polymer." 19 Evidently, there was no prebiotic natural selection to sort out those bases that could later be used as nucleobases from those with no function.

Shapiro pointed out that: Useful yields of adenine cannot be obtained except in the presence of 1.0 M or stronger ammonia. The highest reasonable concentration of ammonia or ammonium ion that can be postulated in oceans and lakes on the primitive earth is about 0.01 M. Orgel  has put forward the following prerequisite for the very first information system: 'its monomeric components must have been abundant components of a prebiotic mixture of organic compounds.' Adenine does not seem to meet this requirement. The instability of adenine on a geological time scale makes its widespread prebiotic accumulation unlikely. Adenine synthesis requires unreasonable Hydrogen cyanide concentrations. Adenine plays an essential role in replication in all known living systems today and is prominent in many other aspects of biochemistry. Despite this, a consideration of its intrinsic chemical properties suggests that it did not play these roles at the very start of life. These properties include the low yields in known syntheses of adenine under authentic prebiotic conditions, its susceptibility to hydrolysis and to reaction with a variety of simple electrophiles, and its lack of specificity and strength in hydrogen bonding at the monomer and mixed oligomer level. 14

Elsewhere, Shapiro addressed an eventual extraterrestrial source: The isolation of adenine and guanine from meteorites has been cited as evidence that these substances might have been available as “raw material” on prebiotic Earth (18). However, acid hydrolyses have been needed to release these materials, and the amounts isolated have been low 5

In a recent paper from 2018, Annabelle Biscans mentions other routes investigated: Miyakawa et al. suggest that purines could have formed in the atmosphere in the absence of hydrogen cyanide. They reported that guanine could have been generated from a gas mixture (nitrogen, carbon monoxide, and water) after cometary impacts. It has also been proposed that adenine was formed in the solar system (outside of Earth) and brought to Earth by meteorites, given that adenine has been found in significant quantity in carbonaceous chondrites. She concludes: Despite great efforts and impressive advancements in the study of nucleoside and nucleotide abiogenesis, further investigation is necessary to explain the gaps in our understanding of the origin of RNA. 20

Guanine
In 1984, Yuasa reported a 0.00017% yield of guanine after electrical discharge experiments. However, it is unknown whether the guanine simply resulted from a contaminant of the reaction. S. L. Miller and colleagues performed experiments in 1999 in which trace amounts of guanine formed by the polymerization of ammonium cyanide (0.0007% and 0.0035%, depending on temperature), indicating that guanine could arise in frozen regions of the primitive earth. 22

Abby Vogel Robinson reported in 2010: For scientists attempting to understand how the building blocks of RNA originated on Earth, guanine -- the G in the four-letter code of life -- has proven to be a particular challenge. While the other three bases of RNA -- adenine (A), cytosine (C) and uracil (U) -- could be created by heating a simple precursor compound in the presence of certain naturally occurring catalysts, guanine had not been observed as a product of the same reactions.

Pyrimidines
Pyrimidine bases are the second of the quartet that makes up the DNA that stores genetic information. Uracil (thymine in DNA) and cytosine are made of one nitrogen-containing ring. In 2009, Sutherland and Szostak published a paper on a high-yielding route to activated pyrimidine nucleotides under conditions thought to be prebiotic, claiming it to be "an encouraging step toward the greater goal of a plausible prebiotic pathway to RNA and the potential for an RNA world." 27 Robert Shapiro disagrees:

Although as an exercise in chemistry this represents some very elegant work, this has nothing to do with the origin of life on Earth whatsoever.  The chances that blind, undirected, inanimate chemistry would go out of its way in multiple steps and use of reagents in just the right sequence to form RNA is highly unlikely. 28

Cytosine
Scientists have failed to produce cytosine in spark-discharge experiments.

Robert Shapiro (1999): The formation of a substance in an electric spark discharge conducted in a simulated early atmosphere has also been regarded as a positive indication of its prebiotic availability. Again, low yields of adenine and guanine have been reported in such reactions, but no cytosine. The failure to isolate even traces of cytosine in these procedures signals the presence of some problem with its synthesis and/or stability. The deamination of cytosine and its destruction by other processes such as photochemical reactions place severe constraints on prebiotic cytosine syntheses.  12

Rich Deem (2001):  
Cytosine has never been found in any meteorites.
Cytosine is not produced in electric spark discharge experiments using simulated "early earth atmosphere."
Synthesis based upon cyanoacetylene requires the presence of large amounts of methane and nitrogen; however, it is unlikely that significant amounts of methane were present at the time life originated.
Synthesis based upon cyanate is problematical, since it requires concentrations in excess of 1 M (molar). When concentrations of 0.1 M (still unrealistically high) are used, no cytosine is produced.
Synthesis based upon cyanoacetaldehyde and urea suffers from the problem of deamination of the cytosine in the presence of high concentrations of urea (low concentrations produce no cytosine). In addition, cyanoacetaldehyde is reactive with a number of prebiotic chemicals, so would never attain reasonable concentrations for the reaction to occur. Even without the presence of other chemicals, cyanoacetaldehyde has a half-life of only 31 years in water.
Cytosine deaminates with an estimated half-life of 340 years, so would not be expected to accumulate over time.
Ultraviolet light on the early earth would quickly convert cytosine to its photohydrate and cyclobutane photodimers (which rapidly deaminate). 49

Uracil
In 1961, Sidney Fox and colleagues synthesized uracil under "thermal conditions which yield other materials of theoretical prebiochemical significance. The conditions studied in the synthesis of uracil included temperatures in the range of 100° to 140°C, heating periods of from 15 minutes to 2 hours". 33 Other attempts to provide plausible prebiotic scenarios for the non-enzymatic synthesis of nucleotides and nucleobases continue to the present day. In 2019, Okamura and colleagues published a paper on pyrimidine nucleobase synthesis whose concluding remarks are noteworthy:

We show that the cascade reaction proceeds under one-pot conditions in a continuous manner to provide SMePy 6. Importantly the key intermediate SMePy 6 gives rise not only to canonical but also to non-canonical bases arguing for the simultaneous prebiotic formation of a diverse set of pyrimidines under prebiotically plausible conditions.

This highlights a general problem mentioned before: chemical reactions very commonly result in a mixture of molecules, most of which are not relevant for abiogenesis. There was no mechanism to sort out those detrimental to the process towards life. 32

Fast decomposition rate
Adenine deaminates at 37°C with a half-life of 80 years (the half-life is the time in which a substance loses half of its physiological activity by decomposition). At 100°C its half-life is 1 year. At 100°C, the half-life of guanine is 10 months, that of uracil is 12 years, and that of thymine 56 years. For the decomposition of a nucleobase, this is very short. For nucleobases to accumulate in prebiotic environments, they must be synthesized at rates that exceed their decomposition. Therefore, adenine and the other nucleobases would never accumulate in any kind of "prebiotic soup." 14
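These half-lives translate into survival fractions via ordinary first-order decay arithmetic, N(t) = N0 · 0.5^(t/t½). The sketch below is a minimal illustration: the half-life values are the ones quoted above, while the 1,000-year observation window is an arbitrary choice for the example.

```python
# First-order decay: the surviving fraction after time t is 0.5 ** (t / half_life).
half_lives_years = {
    "adenine @ 37 C": 80.0,
    "adenine @ 100 C": 1.0,
    "guanine @ 100 C": 10.0 / 12.0,  # 10 months
    "uracil @ 100 C": 12.0,
    "thymine @ 100 C": 56.0,
}

def surviving_fraction(half_life_years: float, t_years: float) -> float:
    """Fraction of an initial pool left after t_years of first-order decay."""
    return 0.5 ** (t_years / half_life_years)

for base, hl in half_lives_years.items():
    print(f"{base}: {surviving_fraction(hl, 1000.0):.3e} of the pool left after 1,000 years")
```

Even for thymine, the most stable of the quartet at 100°C, only a vanishing fraction of an unreplenished pool remains after a geologically trivial thousand years.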

A paper published in 2015 points out that: 

Nucleotide formation and stability are sensitive to temperature. Phosphorylation of nucleosides in the laboratory is slower at low temperatures, taking a few weeks at 65 ◦C compared with a couple of hours at 100 ◦C. The stability of nucleotides, on the other hand, is favored in warm conditions over high temperatures. If a WLP is too hot (>80 ◦C), any newly formed nucleotides within it will hydrolyze in several days to a few years. At temperatures of 5 ◦C to 35 ◦C that either characterize more-temperate latitudes or a post snowball Earth, nucleotides can survive for thousand-to-million-year timescales. However, at such temperatures, nucleotide formation would be very slow.  25

That means that in hot environments nucleotides might form, but they decompose fast; in cold environments they might not degrade as fast, but they take a very long time to form. Nucleotides would have to be generated by prebiotic environmental synthesis processes at a far higher rate than they are decomposed and destroyed, and then accumulated and concentrated at one specific construction site. Putting that into perspective, P. ubique, the smallest known free-living cell, has a genome size of 1.3 million nucleotides. The best-studied mechanism relevant to the prebiotic synthesis of ribose is the formose reaction, which is very complex and depends on the presence of a suitable inorganic catalyst. Ribose is merely an intermediate product among a broad suite of compounds, including sugars with more or fewer carbons. And there would have been no way to activate phosphate in order to drive the energetically costly polymerization reaction.
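The synthesis-versus-decomposition balance described above can be made concrete with a toy first-order model: if nucleotides are produced at a constant rate k_syn and hydrolyze with rate constant λ = ln 2 / t½, the pool does not grow without bound but levels off at a steady state of k_syn/λ. The rates below are purely hypothetical placeholders, chosen only to illustrate the shape of the trade-off, not values from the cited literature.

```python
import math

def steady_state_pool(k_syn: float, half_life_years: float) -> float:
    """Steady state of dN/dt = k_syn - lam * N, where lam = ln(2) / half-life.

    k_syn: synthesis rate in molecules per year (hypothetical).
    Returns the limiting pool size k_syn / lam, in molecules."""
    lam = math.log(2) / half_life_years
    return k_syn / lam

# Hypothetical scenarios: a hot pond (fast synthesis, fast hydrolysis)
# versus a cold setting (slow synthesis, slow hydrolysis).
hot = steady_state_pool(k_syn=1e6, half_life_years=3.0)
cold = steady_state_pool(k_syn=1e3, half_life_years=1e5)
print(f"hot pond steady state:  {hot:.2e} molecules")
print(f"cold pond steady state: {cold:.2e} molecules")
```

Either way, the pool saturates at a level fixed by the ratio of the two rates; accumulation beyond that ceiling requires some additional concentration mechanism.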

Extraterrestrial nucleobase sources
In April 2022, Nature magazine announced the identification of nucleobases in carbonaceous meteorites. Guanine and adenine had previously been detected in Murchison meteorite extracts, and now various pyrimidine nucleobases such as cytosine, uracil, and thymine were found, along with structural isomers such as isocytosine, imidazole-4-carboxylic acid, and 6-methyluracil. The authors came to the conclusion that "a diversity of meteoritic nucleobases could serve as building blocks of DNA and RNA on the early Earth". 23 A NASA article echoed the authors' conclusion: "This discovery demonstrates that these genetic parts are available for delivery and could have contributed to the development of the instructional molecules on early Earth." 24 The fatal blow is the fact that the nucleobases relevant for life always come mixed together with isomers that are irrelevant. There was no prebiotic selection to sort out and concentrate exclusively those relevant for life.

Selecting the nucleobases used in life
Maybe you are familiar with the concept of "sequence space". It refers to the fact that there is a huge combinatorial space of possibilities for putting an amino acid strand together, but only a very limited number of sequences bear function, or eventually fold into 3D forms and become functional proteins. That makes it only remotely possible that random chance joined functional sequences together on the early earth. Analogously, the same goes for the "structure space" of the four macromolecular "bricks" or building blocks used in life. Adenine, for example, one of the five nucleobases used in RNA and DNA, is a purine, made of carbon, hydrogen, and nitrogen atoms. It has a six-membered nitrogen ring fused to a five-membered nitrogen ring. The thymine nucleobase is a pyrimidine and has just a one-ring structure, built from carbon, hydrogen, nitrogen, and oxygen atoms. There is no physical law that restricts these molecules to this isomeric ring structure and atomic composition. But in structure space, only a very small set of nucleobases, with a specified chemical arrangement, bears function. How was the functional nucleobase quintet selected prebiotically?
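The combinatorics behind "sequence space" are easy to state: with a four-letter nucleotide alphabet there are 4^n distinct sequences of length n. The sketch below is only illustrative; the comparison with Earth's age is a back-of-envelope figure, not a number from the cited literature.

```python
# A four-letter nucleotide alphabet yields 4**n possible sequences of length n.
def sequence_space(n: int) -> int:
    """Number of distinct nucleotide sequences of length n."""
    return 4 ** n

for n in (10, 50, 100):
    print(f"length {n:3d}: 4^{n} = {sequence_space(n):.3e} sequences")

# For scale: ~4.5 billion years is only ~1.4e17 seconds, so even sampling
# one new 100-mer every second would explore a vanishing sliver of the space.
seconds_of_earth_history = 4.5e9 * 365.25 * 24 * 3600
print(f"seconds in Earth's history: {seconds_of_earth_history:.1e}")
```

Already at length 100 the space (about 1.6 × 10^60 sequences) dwarfs any conceivable prebiotic sampling budget.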

H. James Cleaves 2nd (2015): ‘‘Structure space’’ represents the number of molecular structures that could exist given specific defining parameters: for example, the total organic structure space, the drug-like structure space, the amino acid structure space, and so on. Many of these chemical spaces are very large. For example, the total number of possible stable drug-like organic molecules may be on the order of 10^33 to 10^180. The number of known naturally occurring or synthetic molecules is much smaller. As of July 2009, there were 49,037,297 unique organic and inorganic chemical substances registered with the Chemical Abstracts Service. As a final comparison, a recent exploration of the organic contents of methanol extracts of the Murchison meteorite using high-resolution mass spectrometry revealed a complex though relatively small set of compounds, ranging from 100,000 to perhaps 10,000,000. Clearly, nature is constrained in its exploration of the vastness of chemical space by the reaction mechanisms available to it at any given point in time and the physicochemical stability of the resulting structures in their environmental context.

The number of molecules that could fulfill the minimal requirements of being ‘‘nucleic acid-like’’ is remarkably large and in principle limitless, though reasonable arguments could probably be made as to why monomers cannot contain more than some given number of carbon atoms.

A variety of structural isomers of RNA could potentially function as genetic platforms. Ribonucleosides may have competed with a multitude of alternative structures whose potential proto-biochemical roles and abiotic syntheses remain to be explored. The candidates are constrained by the rules of organic chemistry, though the set of possible molecules could still be very large. If there were alternative molecules that could better fulfill these criteria, then extant genetic systems could be considered suboptimal. It is of interest to understand whether biology’s solution to these various problems is optimal, suboptimal, or arbitrary. To date, no one-pot reaction has yielded either the purine or pyrimidine ribonucleosides directly from likely prevalent prebiotic starting materials. Enumeration of the riboside BC5H9O4 space gives some appreciation of the size and dimensionality of nucleic acid-like molecule space and allows some consideration of the optimality or arbitrariness of biology’s choice of this particular isomer.

With respect to the atom choice explored here (using only carbon, hydrogen, and oxygen), we note first that C, H, and O are among the most cosmo- and geochemically abundant elements and that CHO isomers are in principle derivable from formose-type chemistry, which allows an obvious linkage to abiotic geochemistry. The evaluation of the BC5H9O4 isomer space must thus be viewed as a first practical example of an exploration of what is a much larger chemical space. Limiting the search to structural isomers with the molecular formula of the core sugar of RNA (BC5H9O4, where B= a nitrogenous base), the range and variety of possible structures is enumerated precisely with structure generation software. This gives a glimpse of what abiotic chemistry could produce.

The structural space explored here is restricted to the molecular formula of the core RNA riboside but nonetheless includes a large number of possible isomers. In the formula range from BC3H7O2 to BC5H9O4 (RNA’s) there are likely scores of valid formulas. These could collectively produce many thousands of structurally sound isomers. In turn, each of these isomers could yield many stereo- and macromolecular linkage isomers, leading ultimately to perhaps billions of nucleic acid polymer types potentially capable of supporting base-pairing. It is likely that only a subset of these structural and stereoisomers would lead to stable base-pairing systems 50

Andro C. Rios (2014): The native bases of RNA and DNA are prominent examples of the narrow selection of organic molecules upon which life is based. How did nature “decide” upon these specific heterocycles? Evidence suggests that many types of heterocycles could have been present on the early Earth. The prebiotic formation of polymeric nucleic acids employing the native bases remains a challenging problem. Hypotheses have proposed that the emerging RNA world may have included many types of nucleobases. This is supported by the extensive utilization of non-canonical nucleobases in extant RNA and the resemblance of many of the modified bases to heterocycles generated in simulated prebiotic chemistry experiments. Nucleobase modification is a ubiquitous post-transcriptional activity found across all domains of life. These transformations are vital to cellular function since they modulate genetic expression 9

If we consider that basically any of the basic compounds and atoms extant on the early earth or in meteorites could have been incorporated into macromolecules, and that a wide array of different ring structures and isomeric conformations could have formed nucleobases, for example, then it becomes clear that the structure space is basically limitless.

1. On the early earth, in the existing "structure space", a limitless number of different molecules could have been generated by natural processes, like lightning, hydrothermal vents, volcanic gas eruptions, etc.  
2. Life uses exclusively a quartet of specified complex macromolecules, that are synthesized in modern cells by complex metabolic pathways, that were not extant, prebiotically. 
3. Selecting a specific set of complex macromolecules out of unlimited "structure space" by unguided means is theoretically remotely possible, but de facto, impossible. Therefore, these molecules were not selected naturally. They were designed.   


The difficulty of getting ribose prebiotically
One of the most debated questions concerns the availability and synthesis of prebiotic ribose. A pentose sugar is a 5-carbon monosaccharide. Pentoses form two groups: aldopentoses and ketopentoses. The pentose sugars found in nucleotides are aldopentoses; deoxyribose and ribose are two of these sugars. Ribose is a monosaccharide containing five carbon atoms, and D-ribose is present in six different forms.
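The count of competing sugar forms follows from elementary stereochemistry: each independent stereocenter doubles the number of possible stereoisomers (2^n). An open-chain aldopentose has three stereocenters, giving 2^3 = 8 stereoisomers (the D- and L-forms of ribose, arabinose, xylose, and lyxose), and ring closure adds a fourth, anomeric center. A minimal sketch of this standard arithmetic:

```python
def stereoisomer_count(n_stereocenters: int) -> int:
    """Maximum number of stereoisomers for n independent stereocenters (2**n)."""
    return 2 ** n_stereocenters

open_chain = stereoisomer_count(3)  # aldopentose: C2, C3, C4 are chiral
furanose = stereoisomer_count(4)    # ring closure adds the anomeric carbon C1
print(f"open-chain aldopentoses: {open_chain} stereoisomers")
print(f"furanose ring forms:     {furanose} stereoisomers")
```

So even before considering ring size or non-sugar alternatives, any unguided synthesis must pick one form out of more than a dozen closely related stereoisomers.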

The formose reaction
Jim Cleaves II (2011): The formose reaction, discovered by Butlerow in 1861, is a complex autocatalytic set of condensation reactions of formaldehyde to yield sugars and other small sugar-like molecules. 46
Gaspar Banfalvi (2020): Among the best-known nonenzymatic pathways to ribose formation, we find the formose (from the words formaldehyde and aldose) or Butlerow reaction. 44
Gerald F. Joyce (2012): The classical prebiotic synthesis of sugars is by the polymerization of formaldehyde (the “formose” reaction). It yields a very complex mixture of products, including only a small proportion of ribose. This reaction does not provide a reasonable route to the ribonucleotides. A number of other studies have addressed the problems presented by the lack of specificity of the formose reaction and by the instability of ribose. 43

S. Islam (2017): Several problems have been recognized for ribose synthesis via the formose reaction. The formose reaction is very complex. It depends on the presence of a suitable inorganic catalyst. Ribose is merely an intermediate product among a broad suite of compounds including sugars with more or fewer carbons. The reality of the formose reaction is that it descends into an inextricable mixture. The vast array of sugars produced is overwhelming, and the intrinsic lack of selectivity for ribose is its undoing. Ultimately, the formose reaction produces a disastrously complex mixture of linear and branched aldo- and keto-sugars in racemic form. The consequences of such uncontrolled reactivity are that ribose is formed in less than 1% yield among a plethora of isomers and homologs. The instability of ribose prevents its accumulation and requires it to undergo extremely rapid onward conversion to ribonucleosides before the free sugar is lost to rapid degradation. 36

Irina V Delidovich and colleagues (2014): The classical formose reaction (FR) is hardly applicable for any practical purposes outside of the history of chemical science. The typical “sugary substance” formed as a result of the catalytic oligomerization of formaldehyde nowadays known as “formose” comprises dozens of straight-chain and branched monosaccharides, polyols, and polyhydroxycarbonic acids.

There are no further alternatives: either chance "chose", through fortuitous random events, the five-membered ribofuranose ring as the backbone for DNA and RNA, or it was a choice made by intelligence with specific purposes. Which is more plausible and probable? The formose reaction requires a high concentration of formaldehyde, which, however, readily undergoes a variety of reactions in aqueous solutions. Another problem is that ribose is unstable and rapidly decomposes even at low temperature and neutral pH, as well as in water. Furthermore, as Stanley Miller and his colleagues reported: Ribose and other sugars have surprisingly short half-lives for decomposition at neutral pH, making it very unlikely that sugars were available as prebiotic reagents. 4

Leslie Orgel (2004): We conclude that some progress has been made in the search for an efficient and specific prebiotic synthesis of ribose and its phosphates. However, in every scenario, there are still a number of obstacles to the completion of a synthesis that yields significant amounts of sufficiently pure ribose in a form that could readily be incorporated into nucleotides. 34

Cairns-Smith (1990): Sugars are particularly trying. While it is true that they form from formaldehyde solutions, these solutions have to be far more concentrated than would have been likely in primordial oceans. And the reaction is quite spoilt in practice by just about every possible sugar being made at the same time - and much else besides. Furthermore, the conditions that form sugars also go on to destroy them. Sugars quickly make their own special kind of tar - caramel - and they make still more complicated mixtures if amino acids are around.

There have been a wide variety of attempts and proposals to try to solve the riddle, but up to date, without success. Science magazine (2016): Ribose is the central molecular subunit in RNA, but the prebiotic origin of ribose remains unknown. 35 Annabelle Biscans (2018): Even if some progress has been made to understand ribose formation under prebiotic conditions, each suggested route presents obstacles, limiting ribose yield and purity necessary to form nucleotides. A selective pathway has yet to be elucidated. 6

RNA and DNA use a five-membered ribose ring structure as their backbone. Rings containing six carbons instead of five do not possess the capability of efficient informational Watson–Crick base-pairing. Therefore, these systems could not have acted as functional competitors of RNA in a genetic system, even though such six-carbon alternatives should have had a comparable chance of being formed under the conditions that formed RNA. The reason for their failure is that six-carbon, six-membered-ring sugars are too bulky to adapt to the requirements of Watson–Crick base-pairing within oligonucleotide duplexes. In sharp contrast, an entire family of nucleic acid alternatives in which each member comprises repeating units of one of the four possible five-carbon sugars (ribose being one of them) turns out to be a highly efficient informational base-pairing system. But why and how would natural, non-designed events on the early earth select what works? Observe Albert Eschenmoser's endnote in his Science paper: Optimization, not maximization, of base-pairing strength was a determinant of RNA's selection. 8 But how and why would unintended events select something that, on its own, has no function? These ring structures would simply lie around and then soon disintegrate. The smuggling in of evolutionary jargon is widespread for lack of any alternative. The authors omit and do not ask these relevant questions. That permits keeping the naturalistic paradigm alive. But it should be evident how nonsensical such evolutionary claims and inferences are.

Ribose - the best alternative 
Prof. Gaspar Banfalvi (2006): Ribose was not randomly selected but the only choice, since β-D-ribose fits best into the structure of physiological forms of nucleic acids. 16

Ribose is the sugar of choice for nucleic acids, yet because it is difficult to imagine it forming under plausible prebiotic conditions, and because it has a short lifetime, origin-of-life researchers have searched diligently for alternatives, like glycerol, that might have served as scaffolding for prebiotic chemicals prior to the emergence of DNA. Unfortunately, they don’t work. Steven Benner: over 280 alternative molecules have been tested, and they just do not work at all; those that might be better than ribose are implausible under prebiotic conditions. “Ribose is actually quite good – uniquely good,” he said. Deal with it: one’s chemical evolution model is going to have to include ribose. That means figuring out how it can form, how it can avoid destruction in water, and how it can avoid clumping into useless globs of tar. (RNA, the main player in the leading “RNA World” scenario for the origin of life, uses ribose; DNA uses a closely related sugar, deoxyribose.) 38

Various possible ribose configurations: 

The Cell factory maker, Paley's watchmaker argument 2.0 Ribose11
Ribose can exist in various forms: α-D-ribose and β-D-ribose (the right-handed chiral forms, dextrorotatory) or α-L-ribose and β-L-ribose (the left-handed chiral forms, levorotatory). It can form α-nucleosides or β-nucleosides, and adopt envelope or twisted conformations.

The Cell factory maker, Paley's watchmaker argument 2.0 Ribose10
Ribose conformations and configurations. (a) Major conformers of cyclopentane. (b) Envelope and twisted conformers of tetrahydrofuran. (c) D-configuration as well as α and β anomeric configurations of D-ribose. (d) Twisted conformations in ribose, C3′-endo in A-DNA and C2′-endo conformations in B-DNA. 44

Selecting β-nucleosides
Life uses mostly β-nucleosides rather than α-nucleosides (which are extremely rare in biological systems). In β-nucleosides, the ribose or deoxyribose is linked to the nucleobase through a β-glycosidic bond, which means that the nucleobase at C1′ is cis with respect to the hydroxymethyl group at C4′, known as the β-configuration. In α-nucleosides, the nucleobase and the hydroxymethyl group of the ribose or deoxyribose are in a trans relationship.

The Cell factory maker, Paley's watchmaker argument 2.0 Alpha_10

Configuration of β-nucleosides and α-nucleosides

Life uses exclusively right-handed homochiral β-D-ribonucleotides. Roger D. Blandford (2020): The homochirality of the sugars has important consequences for the stability of the helix and, hence, on the fidelity or error control of the genetic code.  45

Prof. Gaspar Banfalvi (2006): Bases in α-anomeric position are unable to base-pair, eliminating the possibility of helix formation. 16

Tan, Change; Stadler, Rob. The Stairway To Life:
In all living systems, homochirality is produced and maintained by enzymes, which are themselves composed of homochiral amino acids that were specified through homochiral DNA and produced via homochiral messenger RNA, homochiral ribosomal RNA, and homochiral transfer RNA. No one has ever found a plausible abiotic explanation for how life could have become exclusively homochiral. 50

Emily Singer (2016): At a chemical level, a deep bias permeates all of biology. The molecules that make up DNA and other nucleic acids such as RNA have an inherent “handedness.” These molecules can exist in two mirror-image forms, but only the right-handed version is found in living organisms. Handedness serves an essential function in living beings; many of the chemical reactions that drive our cells only work with molecules of the correct handedness. DNA takes on this form for a variety of reasons, all of which have to do with intermolecular forces. 42

Abiogenesis researchers are in the dark when it comes to explaining how the molecules of life, among them the life-essential RNAs and DNAs, could have been selected without a mental agency.

Phosphorus
Phosphorus is the third essential element making up the structures of DNA and RNA. It is ideally suited to form a stable backbone for the DNA molecule: phosphate can form two phosphodiester bonds with two sugars at the same time and thereby connect two nucleotides. Phosphorus compounds, however, are difficult to dissolve, and that would have been a problem in both aquatic and terrestrial environments. Phosphoesters form the backbone of DNA molecules.

Libretexts explains: A phosphodiester bond occurs when exactly two of the hydroxyl groups in phosphoric acid react with hydroxyl groups on other molecules to form two ester bonds. Phosphodiester bonds are central to all life on Earth, as they make up the backbone of the strands of nucleic acid. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar molecule and the 5' carbon atom of another, deoxyribose in DNA and ribose in RNA. Strong covalent bonds form between the phosphate group and two ribose 5-carbon rings over two ester bonds. On the prebiotic earth, however, there would have been no known way to activate phosphate in order to drive this energetically costly reaction.

Added to this is the fact that phosphorus concentrations on earth are very low. Kitadai (2017): So far, no geochemical process that led to abiotic production of polyphosphates in high yield on the Earth has been discovered. 39 The phosphate is connected to the ribose, which is connected to the nitrogenous base. Each of the three parts of a nucleotide must be just right in size and form, and they must fit together. The bonds must have the right forces in order to form the spiral DNA molecule. And enough units of the four bases would have had to be concentrated at the same place on the prebiotic earth to form a self-replicating RNA molecule, if the RNA world hypothesis is supposed to be true.

The Albert team explains: A nucleotide is differentiated from a nucleoside by one phosphate group. Accordingly, a nucleotide can also be a nucleoside monophosphate. If more phosphates bond to the nucleotide (nucleoside monophosphate), it can become a nucleoside diphosphate (if two phosphates bond) or a nucleoside triphosphate (if three phosphates bond), such as adenosine triphosphate (ATP). 40 Adenosine triphosphate, or ATP, is the energy currency of the cell, a crucial component of respiration and photosynthesis, among other processes.

The base, sugar, and phosphate need to be joined together correctly, which involves two endothermic condensation reactions, meaning the system has to absorb energy from its surroundings. In other words, compared with polymerization to make proteins, nucleotides are even harder to synthesize and easier to destroy; in fact, to date there are no reports of nucleotides arising from inorganic compounds in primeval-soup experiments.
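The mono-/di-/triphosphate naming scheme just quoted can be written out as a tiny lookup. This is a sketch I am adding for illustration only; the function name and structure are my own, not from the quoted source:

```python
# Naming scheme from the passage above: a nucleoside with n phosphate
# groups attached is called a nucleoside mono-/di-/triphosphate.
SUFFIX = {1: "monophosphate", 2: "diphosphate", 3: "triphosphate"}

def nucleotide_name(nucleoside: str, n_phosphates: int) -> str:
    """Return the conventional name for a nucleoside carrying n phosphates."""
    return f"{nucleoside} {SUFFIX[n_phosphates]}"

print(nucleotide_name("adenosine", 3))  # prints "adenosine triphosphate", i.e. ATP
```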

Why phosphorus?
The selection of phosphorus as a backbone of RNA and DNA was a very smart choice. F H Westheimer (1987): The existence of a genetic material such as DNA requires a compound for a connecting link that is at least divalent. In order that the resulting material remain within a membrane, it should always be charged, and therefore the linking unit should have a third, ionizable group. The linkage is conveniently made by ester bonds, but, in order that the ester be hydrolytically stable, that charge should be negative and should be physically close to the ester groups. All of these conditions are met by phosphoric acid, and no alternative is obvious. Furthermore, phosphoric acid can form monoesters of organic compounds that can decompose by a mechanism other than normal nucleophilic attack, a mechanism that allows them sufficient reactivity to function in intermediary metabolism. 15

Bonding ribose to the nucleobase, to get nucleosides
Supposing that the parts were available, they would have had to be joined together at the same assembly site and sorted out from non-functional molecules. Joining all three components involves two difficult reactions: formation of a glycosidic bond with the right stereochemistry linking the nucleobase and ribose, and phosphorylation of the resulting nucleoside. Nucleosides lack a phosphate at the C5′ position, and there are no known ways of bringing about this thermodynamically uphill reaction in aqueous solution: purine nucleosides have been made by dry-phase synthesis, but not even this method has been successful for condensing pyrimidine bases and ribose to give nucleosides.

John D. Sutherland (2010): It has normally been assumed that ribonucleotides arose on the early Earth through a process in which ribose, the nucleobases, and phosphate became conjoined. However, under plausible prebiotic conditions, condensation of nucleobases with ribose to give β-ribonucleosides is fraught with difficulties. The reaction with purine nucleobases is low-yielding and the reaction with the canonical pyrimidine nucleobases does not work at all. The route as operated thus far in the laboratory is associated with several steps, and the conditions for these steps are different. Furthermore, purification in between certain steps was carried out to make analysis of the chemistry easier. Clearly, these issues need to be addressed before the synthesis can be seen as geochemically plausible. 3

Brian J. Cafferty (2015): The coupling of ribose with a base is the first step to form RNA, and even those engrossed in prebiotic research have difficulty envisioning that process, especially for purines and pyrimidines. 40 
Terence N. Mitchell (2008): Nucleosides are formed by linking an organic base (guanine, adenine, uracil or cytosine) to a sugar (here D-ribose). This reaction looks simple, but how it could have occurred by an enzyme-free prebiotic synthesis, in particular involving pyrimidine bases, is an open question. 41 Fazale Rana (2011): In order for a molecule to be a self-replicator, it has to be a homopolymer, whose backbone must have the same repetitive units; they must be identical. In the prebiotic world, for what reason would the generation of a homopolymer be useful? 37

Consider that only random, non-designed events could account for this assembly, which seems rationally extremely unlikely, if not impossible; the chance of it occurring by coincidence is extremely remote. Whatever the mode of joining base and sugar, it had to be between the correct nitrogen atom of the base and the correct carbon atom of the sugar. The prebiotic synthesis of simple RNA molecules would therefore require an inventory of ribose and nucleobases. Assembly of these components into proto-RNA would further require a mechanism to link the ribose and nucleobase together in the proper configuration to form polymers, and then to activate the combined molecule (called a nucleoside) with a pyrophosphate or some other functional component that would promote bond formation between nucleosides. There have been many imaginative ideas and attempts at a solution, all unsuccessful. In most cases, the nucleoside components generated in experiments attempting to join the bases to the ribose backbone represent only a minor fraction of the full suite of compounds produced, so the synthesis of a nucleoside would require either that the components be further purified or that some mechanism exist to selectively bring the components together out of a complex mixture. How would non-designed random events be able to attach the nucleobases to the ribose, repetitively, at the same correct position?

From nucleosides to nucleotides
Activated monomers are essential because polymerization reactions occur in an aqueous medium and are therefore energetically uphill in the absence of activation. A plausible energy source for polymerization remains an open question. Condensation reactions driven by cycles of anhydrous conditions and hydration would seem to be one obvious possibility, but they are limited by the lack of specificity of the chemical bonds that are formed. 51

Libretexts: Phosphodiester bonds are central to most life on Earth, as they make up the backbone of the strands of DNA. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar molecule and the 5' carbon atom of another, deoxyribose in DNA and ribose in RNA. Strong covalent bonds form between the phosphate group and two 5-carbon ring carbohydrates (pentoses) over two ester bonds. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off energy required to drive the enzyme-catalyzed reaction. When a single phosphate or two phosphates known as pyrophosphates break away and catalyze the reaction, the phosphodiester bond is formed. Hydrolysis of phosphodiester bonds can be catalyzed by the action of phosphodiesterases which play an important role in repairing DNA sequences. 52

1. R. Shapiro: Life: What A Concept! (https://jsomers.net/life.pdf) 2008, page 84
2. Paul G. Higgs: The RNA World: molecular cooperation at the origins of life 11 November 2014
3. John D. Sutherland: Ribonucleotides 2010 Mar 10
4. Stanley L. Miller: Rates of decomposition of ribose and other sugars: Implications for chemical evolution August 1995
5. Irina V. Delidovich: Catalytic Formation of Monosaccharides: From the Formose Reaction towards Selective Synthesis 2014
6. Annabelle Biscans: Exploring the Emergence of RNA Nucleosides and Nucleotides on the Early Earth 2018 Dec
7. Cornelia Meinert: Ribose and related sugars from ultraviolet irradiation of interstellar ice analogs 2016 Apr 8
8. Albert Eschenmoser: Chemical Etiology of Nucleic Acid Structure 25 Jun 1999
9. Andro C. Rios: On the Origin of the Canonical Nucleobases: An Assessment of Selection Pressures across Chemical and Early Biological Evolution 2014 Oct 1
10. Prof. Dr. Oliver Trapp: Direct Prebiotic Pathway to DNA Nucleosides 26 May 2019
11.
12. Robert Shapiro: Prebiotic cytosine synthesis: A critical analysis and implications for the origin of life April 13, 1999
13. Abby Vogel Robinson: Study: Adding UV light helps form "Missing G" of RNA building blocks June 14, 2010
14. R. Shapiro: The prebiotic role of adenine: a critical analysis 1995 Jun
15. F. H. Westheimer: Why nature chose phosphates 1987 Mar 6
16. Prof. Gaspar Banfalvi: Why Ribose Was Selected as the Sugar Component of Nucleic Acids 28 Mar 2006
17. Steven A. Benner: SYNTHETIC BIOLOGY 01 July 2005
18. J. Oró: Synthesis of adenine from ammonium cyanide June 1960
19. James P. Ferris and L. E. Orgel: Studies in Prebiotic Synthesis. I. Aminomalononitrile and 4-Amino-5-cyanoimidazole 1966 Aug 20
20. Annabelle Biscans: Exploring the Emergence of RNA Nucleosides and Nucleotides on the Early Earth 6 November 2018
21. Christian de Duve: Singularities: Landmarks on the Pathways of Life 2005
22. Guanine
23. Yasuhiro Oba: Identifying the wide diversity of extraterrestrial purine and pyrimidine nucleobases in carbonaceous meteorites 26 April 2022
24. Anil Oza: Could the Blueprint for Life Have Been Generated in Asteroids? Apr 26, 2022
25. Ben K. D. Pearce: Origin of the RNA world: The fate of nucleobases in warm little ponds October 2, 2017
26. R. Krishnamurthy: Experimentally investigating the origin of DNA/RNA on early Earth 12 December 2018
27. J. D. Sutherland and Jack W. Szostak: Chemoselective Multicomponent One-Pot Assembly of Purine Precursors in Water November 2, 2010
28. James Urquhart: Insight into RNA origins May 13, 2009
29. Caitlin McDermott-Murphy: First building blocks of life on Earth may have been messier than previously thought January 22, 2020
30. Jack W. Szostak: A Model for the Emergence of RNA from a Prebiotically Plausible Mixture of Ribonucleotides, Arabinonucleotides, and 2′-Deoxynucleotides January 8, 2020
31. Florian M. Kruse: Prebiotic Nucleoside Synthesis: The Selectivity of Simplicity 19 May 2020
32. Hidenori Okamura: A one-pot, water compatible synthesis of pyrimidine nucleobases under plausible prebiotic conditions 07 Jan 2019
33. Sidney W. Fox: Synthesis of Uracil under Conditions of a Thermal Model of Prebiological Chemistry 16 Jun 1961
34. Leslie E. Orgel: Prebiotic chemistry and the origin of the RNA world Mar-Apr 2004
35. Cornelia Meinert: Ribose and related sugars from ultraviolet irradiation of interstellar ice analogs 2016 Apr 8
36. Saidu Islam: Prebiotic Systems Chemistry: Complexity Overcoming Clutter 13 April 2017
37. Fazale Rana: Creating Life in the Lab: How New Discoveries in Synthetic Biology Make a Case for the Creator February 1, 2011
38. Origin-of-Life Expert Jokes about Becoming a Creationist 11/05/2004
39. Norio Kitadai: Origins of building blocks of life: A review July 2018
40. Brian J. Cafferty: Was a Pyrimidine-Pyrimidine Base Pair the Ancestor of Watson-Crick Base Pairs? Insights from a Systematic Approach to the Origin of RNA 23 April 2015
41. Terence N. Mitchell: The "RNA World" 2008
42. Emily Singer: New Twist Found in the Story of Life's Start October 11, 2016
43. Gerald F. Joyce: The Origins of the RNA World 2012 May
44. Gaspar Banfalvi: Ribose Selected as Precursor to Life 30 Jan 2020
45. Roger D. Blandford: The Chiral Puzzle of Life 2020
46.
47.
48. Steven A. Benner: Asphalt, Water, and the Prebiotic Synthesis of Ribose, Ribonucleosides, and RNA March 28, 2012
49. Rich Deem: Origin of life: latest theories/problems (https://web.archive.org/web/20210615084624/https://www.godandscience.org/evolution/rnamodel.html)



Last edited by Otangelo on Sun Jul 17, 2022 7:38 am; edited 112 times in total

Prebiotic phosphodiester bond formation
An often-cited claim is that RNA polymerization could be performed on clay. Robert Shapiro wrote a critique of prebiotic proposals of clay-catalyzed oligonucleotide synthesis (2006):
An extensive series of studies on the polymerization of activated RNA monomers has been carried out by Ferris and his collaborators. A recent publication from this group concluded with the statement: “The facile synthesis of relatively large amounts of RNA oligomers provides a convenient route to the proposed RNA world. The 35–40 oligomers formed are both sufficiently long to exhibit fidelity in replication as well as catalytic activity”. The first review cited above had stated this more succinctly: “The generation of RNAs with chain lengths greater than 40 oligomers would have been long enough to initiate the first life on Earth”. Do natural clays catalyze this reaction? The attractiveness of this oligonucleotide synthesis rests in part on the ready availability of the catalyst. Montmorillonite is a layered clay mineral rich in silicate and aluminum oxide bonds. It is widely distributed in deposits on the contemporary Earth. If the polymerization of RNA subunits was a common property of this native mineral, the case for RNA at the start of life would be greatly enhanced. However, the “[c]atalytic activity of native montmorillonites before being converted to their homoionic forms is very poor”. The native clays interfere with phosphorylation reactions. This handicap was overcome in the synthetic experiments by titrating the clays to a monoionic form, generally sodium, before they were used. Even after this step, the activity of the montmorillonite depended strongly on its physical source, with samples from Wyoming yielding the best results. Eventually the experimenters settled on Volclay, a commercially processed Wyoming montmorillonite provided by the American Colloid Company. 25
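As a back-of-the-envelope side calculation of my own (not from Shapiro or Ferris), the size of the sequence space behind the 35–40-mer threshold quoted above can be computed directly; even granting the polymerization chemistry, a random process samples only a vanishing fraction of these sequences:

```python
# Number of distinct RNA sequences of length n over the 4-letter
# alphabet {A, U, G, C}: 4 ** n.
def sequence_space(n: int) -> int:
    return 4 ** n

for n in (35, 40):
    print(f"length {n}: {sequence_space(n):.3e} possible sequences")
```

For length 40 this is 4^40 = 2^80, on the order of 10^24 distinct sequences.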

Selecting the binding locations
Once the three components had been synthesized prebiotically, they would have had to be separated from the confusing jumble of similar molecules nearby and become sufficiently concentrated in order to move to the next steps: joining them to form nucleosides and nucleotides.

The phosphate/ribose backbone of DNA is hydrophilic (water-loving), so it orients itself outward toward the solvent, while the relatively hydrophobic bases bury themselves inside. 

Xaktly explains: Additionally, the geometry of the deoxyribose-phosphate linkage allows for just the right pitch, or distance between strands in the helix, a pitch that nicely accommodates base pairing. 23
Many things come together to create the beautiful right-handed double-helix structure. Production of a mixture of D- and L-sugars produces nucleotides that do not fit together properly, yielding a very open, weak structure that cannot survive to replicate, catalyze, or synthesize other biological molecules. 23

Eduard Schreiner (2011): In DNA the atoms C1', C3', and C4' of the sugar moiety are chiral, while in RNA the presence of an additional OH group renders C2' of the ribose chiral as well. 24

Biological systems exclusively use D-ribose, whereas abiotic experiments synthesize right- and left-handed ribose in equal amounts. The pre-biological building blocks of life didn’t exhibit such an overwhelming bias: some were left-handed and some right. So how did right-handed RNA emerge from a mix of molecules? Some kind of symmetry-breaking process leading to enantioenriched biomonomers would have had to exist, but none is known. Gerald Joyce wrote a science paper, published in Nature magazine in 1984, whose findings suggested that in order for life to emerge, something first had to crack the symmetry between left-handed and right-handed molecules, an event biochemists call “breaking the mirror.” Since then, scientists have largely focused their search for the origin of life’s handedness in the prebiotic worlds of physics and chemistry, not biology, but with no success.

So what is the cop-out? Pure chance! Luck did the job. That is the only thinkable explanation once God's guiding hand is excluded. How could that be a satisfying answer in the face of such immense odds? De Duve thought it conceivable that the molecules were short enough for all possible sequences, or almost all, to be realized (by way of their genes) and submitted to natural selection; this is the way he thought Intelligent Design could be dismissed. Coming from a Nobel prize winner in medicine, this makes one wonder, to say the least. De Duve dismissed intelligent design and replaced it with natural selection without providing a shred of evidence, based on pure guesswork and speculation.
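To put rough numbers on these odds, here is a minimal sketch assuming an idealized racemic pool in which each monomer is incorporated as D or L independently with probability 1/2 (a simplifying assumption of mine, not a claim from the cited papers):

```python
# Probability that an n-mer assembled from an idealized racemic (50/50)
# pool is homochiral (all-D or all-L), with independent incorporation:
# two favorable outcomes out of 2**n equally likely chains.
def homochiral_probability(n: int) -> float:
    return 2 * 0.5 ** n

for n in (10, 50, 100):
    print(f"n = {n:3d}: P = {homochiral_probability(n):.2e}")
```

Under this toy model, a homochiral 100-mer already has a chance below 10^-29, which is the kind of figure the "breaking the mirror" problem refers to.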

Prebiotic base-pairing
In order to create the stable genome that was necessary for life to start, bases need to be paired between pyrimidines and purines. In molecular biology, complementarity describes a relationship between two structures, each following the lock-and-key principle. Complementarity is the basic principle of DNA replication and transcription, as it is a property shared between two DNA or RNA sequences such that, when they are aligned antiparallel to each other, the nucleotide bases at each position in the sequences are complementary, much like looking in a mirror and seeing the reverse of things. This complementary base pairing is essential for cells to copy information from one generation to the next. There is no reason why these structures could or would have emerged in this functional, complex configuration by random trial and error. A paper in Nature magazine from 2016 demonstrates the complete lack of explanations despite decades of attempts to solve the riddle. Brian J. Cafferty and colleagues write:

The RNA World hypothesis presupposes that abiotic reactions originally produced nucleotides, the monomers of RNA and universal constituents of metabolism. However, compatible prebiotic reactions for the synthesis of complementary (that is, base pairing) nucleotides and mechanisms for their mutual selection within a complex chemical environment have not been reported. Despite decades of effort, the chemical origin of nucleosides and nucleotides (that is, nucleobases glycosylated with ribose and phosphorylated ribose) remains an unsolved problem.   They then proceed: Here we show that two plausible prebiotic heterocycles, melamine, and barbituric acid, form glycosidic linkages with ribose and ribose-5-phosphate in water to produce nucleosides and nucleotides in good yields. The data presented here demonstrate the efficient single-step syntheses of complementary nucleosides and nucleotides, starting with the plausible proto-nucleobases melamine and BA and ribose or R5P. 22

The problem with such experiments is that they start with ribose 5-phosphate (R5P), which is already a complex molecule that was not available on the prebiotic earth. And even once all the parts were available, they would have had to be joined together at the same assembly site and sorted out from non-functional molecules.
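The antiparallel complementarity described in this section can be stated as a tiny function (an illustrative sketch in Python, not anything from the quoted paper; canonical RNA pairing A–U and G–C is assumed):

```python
# Watson-Crick complements for RNA. Because paired strands run antiparallel,
# the complementary strand is read in the reverse direction.
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the strand that base-pairs with seq when aligned antiparallel."""
    return "".join(PAIRS[base] for base in reversed(seq))

print(reverse_complement("AUGC"))  # prints "GCAU"
```

Applying the function twice returns the original sequence, which is the formal sense in which complementarity lets one strand serve as a template for copying the other.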

Nucleotide biosynthesis regulation
Rani Gupta (2021):  Nucleotide biosynthesis is regulated by feedback inhibition, feed-forward activation as well as by cross-regulation. Nucleotide analogs, precursor/substrate analogs and inhibitors of folic acid pathway can inhibit nucleotide biosynthesis. 15

Since biosynthesis regulation had to be extant in LUCA, researchers have to explain the emergence of all these complex feedback systems before life started, without invoking natural selection and evolution. Instantiating systems that can monitor, fine-tune, and regulate complicated production systems is a major challenge that depends on knowledge, on pre-set specific targets, and on foresight of what is intended to be achieved.

Srivatsan Raman (2014): Microbes can be made to produce industrially valuable chemicals in high quantities by engineering their central metabolic pathways. Through iterations of genetic diversification and selection, we increased the production of naringenin and glucaric acid 36- and 22-fold, respectively. Engineering biosynthetic pathways for chemical production requires extensive optimization of the host cellular metabolic machinery. Because it is challenging to specify a priori an optimal design, metabolic engineers often need to construct and evaluate a large number of variants of the pathway. We report a general strategy that combines targeted genome-wide mutagenesis to generate pathway variants with evolution to enrich for rare high producers.  Because artificial selection tends to amplify unproductive cheaters, we devised a negative selection scheme to eliminate cheaters while preserving library diversity. 16

Engineering, selecting, optimizing, specifying an optimal design, evaluating, elaborating strategies, goal-oriented elimination and preservation, and identifying are all activities that require mental elaboration and are best assigned to an intelligent setup. Daniel Charlier's scientific paper (2018) about the crossroads of arginine and pyrimidine biosynthesis in E. coli bacteria gives us insight into how cells tackle this task. He writes:

In all organisms, carbamoylphosphate (CP) ( which is the second intermediate product in pyrimidine synthesis ) is a precursor common to the synthesis of arginine and pyrimidines. In Escherichia coli and most other Gram-negative bacteria, CP is produced by a single enzyme, carbamoylphosphate synthase (CPSase). This particular situation poses a question of basic physiological interest: what are the metabolic controls coordinating the synthesis and distribution of this high-energy substance in view of the needs of both pathways? The study of the mechanisms has revealed unexpected moonlighting gene regulatory activities of enzymes and functional links between mechanisms as diverse as gene regulation and site-specific DNA recombination. At the level of enzyme production, various regulatory mechanisms were found to cooperate in a particularly intricate transcriptional control of a pair of tandem promoters. Transcription initiation is modulated by an interplay of several allosteric DNA-binding transcription factors using effector molecules from three different pathways (arginine, pyrimidines, purines), nucleoid-associated factors (NAPs), trigger enzymes (enzymes with a second unlinked gene regulatory function), DNA remodeling (bending and wrapping), UTP-dependent reiterative transcription initiation, and stringent control by the alarmone ppGpp. At the enzyme level, CPSase activity is tightly controlled by allosteric effectors originating from different pathways: an inhibitor (UMP) and two activators (ornithine and IMP) that antagonize the inhibitory effect of UMP. Furthermore, it is worth noticing that all reaction intermediates in the production of CP are extremely reactive and unstable, and protected by tunneling through a 96 Å long internal channel. 17

The instantiation of complex network systems that autonomously coordinate, regulate, cooperate, modulate, remodel, control, and protect (all processes aimed at specific results) requires careful planning and engineering skills. Among the ten things that can safely be attributed as signatures of intelligent setup and design are artifacts whose use can be employed in different systems. In the above case, one metabolic network is used to manufacture different end-products, all needed for the overarching function of the system.

In what environment did life start?
Many proposals have been made for where the origin of life might have happened; we will mention the main ones. Clean, non-toxic water is life-essential, and so is an energy source. Darwin suggested a ‘warm little pond’. Popular ideas today include hydrothermal vents at mid-ocean ridges, submarine hot springs, and hydrogel environments. Others suggest panspermia: life originated in space and was subsequently seeded on earth. Further proposals include electric spark discharges, underwater volcanoes, and nuclear geyser systems.

S. Maruyama outlines a few requirements for life to start in a 2019 article:

1. A powerful energy source is required. Energy from the Sun alone is not enough to break down inorganic compounds such as nitrogen (N2), carbon dioxide (CO2), and water (H2O) and convert them to complex organic molecules. However, a natural nuclear reactor would provide more than enough energy to drive the required reactions.
2. A supply of major elements is another condition for the formation of life, as most organisms on earth are made up of carbon, hydrogen, oxygen, and nitrogen.
3. A ready supply of nutrients that sustain life, i.e. rocks on the Earth’s surface rich in iron and phosphorus as well as rarer elements like potassium and uranium.
4. A high concentration of gases containing compounds such as ammonia and methane is another key requirement. An enclosed space, such as an underground chamber occurring in the plumbing of a nuclear geyser, would collect a sufficient concentration of these gases.
5. Dry-wet cycles in the environment. Alternating between hydration and dehydration can drive simple building blocks to condense into more complex organic molecules, such as RNA.
6. Clean, non-toxic water. The early oceans were highly acidic and very salty, and life would not have emerged or survived there. This suggests that life would have formed in a watery environment on land, such as a pool or wetland. The water must also have been poor in sodium and rich in potassium: modern cells contain little sodium, suggesting that life formed in an environment where sodium was relatively unavailable.
7. A cycle between day and night, allowing variations in temperature, with low and high temperatures driving different types of reactions and encouraging self-replication of DNA sequences.
8. A diverse environment is necessary for life to emerge. Variations in pH, salinity, and temperature help to drive different types of reactions, leading to more complex and varied organic molecules. Diverse environments would occur across the Earth’s landmass through plate tectonics, but not within its oceans. 8

Bernard Marty and colleagues address point 6 in the paper Salinity of the Archaean oceans from analysis of fluid inclusions in quartz (2017):
Data for FIs in Archaean quartz samples from different localities present chemical variations which are consistent with mixings between several hydrothermal end-members and a common component regarded as Archaean seawater. The latter appears to have a salinity (Cl) comparable to that of modern seawater. We conclude that the present data appear to exclude a salinity twice higher than the modern value. Overall, the oceanic chloride budget appears to be in a steady-state on the long term, at least since 600 Ma. Major unknowns are the fluxes at mid-ocean ridges, which could be significant in either way. These uncertainties leave room for the case of a salinity (e.g., Cl) of the Archaean oceans comparable to the modern one.
9

Dr. Stanley L. Miller mentions other reasons against hydrothermal vents (1994):
What about submarine vents as a source of prebiotic compounds? I have a very simple response to that. Submarine vents don't make organic compounds, they decompose them. Indeed, these vents are one of the limiting factors on what organic compounds you are going to have in the primitive oceans. At the present time, the entire ocean goes through those vents in 10 million years. So all of the organic compounds get zapped every ten million years. That places a constraint on how much organic material you can get. Furthermore, it gives you a time scale for the origin of life. If all the polymers and other goodies that you make get destroyed, it means life has to start early and rapidly. If you look at the process in detail, it seems that long periods of time are detrimental, rather than helpful. 10
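Miller's 10-million-year figure can be sanity-checked with simple arithmetic. The ocean mass and vent flux used below are commonly cited round numbers, assumed here for illustration; they do not come from the interview:

```python
# Back-of-envelope check of Miller's claim that the entire ocean
# cycles through hydrothermal vents roughly every 10 million years.
OCEAN_MASS_KG = 1.4e21        # approximate total mass of Earth's oceans (assumption)
VENT_FLUX_KG_PER_YR = 1.4e14  # water processed by high-temperature vents per year (assumption)

cycling_time_yr = OCEAN_MASS_KG / VENT_FLUX_KG_PER_YR
print(f"Full-ocean cycling time: {cycling_time_yr:.1e} years")  # ~1.0e7 years
```

On these assumed figures, the whole ocean passes through the vents in about ten million years, which is the destruction timescale Miller describes for dissolved organics.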

Norio Kitadai gives a resume in a review:

Prebiotic soup: The best-known theory is the prebiotic soup hypothesis proposed by Oparin in 1924. In this theory, organic compounds were created in a reductive atmosphere from the action of sunlight and lightning. The compounds were then dissolved in the primitive ocean, concentrated, and underwent polymerization until they formed “coacervate” droplets. The droplets grew by fusion with other droplets, were split into daughter droplets by the action of tidal waves, and developed the ability to catalyze their own replication through natural selection, which eventually led to the emergence of life.

The hot springs hypothesis: In this hypothesis, fluctuating volcanic hot spring pools play a central role, where lipid-encapsulated polymers would be synthesized by cycles of hydration and dehydration to form protocells. Progenote populations would undergo selection and distribution, construct niches in new environments, and enable a sharing network effect that can collectively evolve them into the first microbial communities. 18

Hydrothermal origin of life: The discovery of thermophilic organisms in association with deep-sea hydrothermal systems in the late 1970s led to a new idea that life might have originated in hydrothermal systems on the primitive Earth. The perceived benefits afforded to early life in this environment include protection from intense asteroid bombardment and UV radiation, and a source of thermal and chemical energy, along with potentially catalytic minerals.

Extraterrestrial origin of life: Another important source of organic compounds on the primitive Earth is delivery by extraterrestrial objects (meteorites, comets, and interplanetary dust particles (IDPs)). Carbonaceous chondrites contain a wide variety of organic compounds including amino acids, purines, pyrimidines, sugar-like compounds, and long-chain monocarboxylic acids with amphiphilic properties. These compounds could have been used as a component of primitive life. 19

What are the supposed steps from non-life to life?
Yasuji Sawada (2020): 1. Non-equilibrium geophysical and chemical fluctuations 2. Accumulation of amino acids, sugars, fats, and nucleic acids, and various chemical reactions 3. Vesicle formation by amphiphilic molecules, with reactions for RNA formation inside 4. Formation of self-replicative DNA networks inside vesicles 5. Exponential increase in the number of living cells by cell division. 20

Hurdles in origin of life experiments
Lena Vincent (2021): Origins of life research programs can generally be characterized based on whether they aim to address the historical question of the emergence of biochemistry on Earth or the ahistorical question of how life as a general phenomenon arises. A research program can only be said to solve the ahistorical problem if it uses mixtures and conditions that might realistically occur in at least one natural environment, somewhere in the Universe. Engineering life in an artificial lab setting would not explain how life could emerge spontaneously. Furthermore, whether research is at the more historical or ahistorical ends of the spectrum, there will be a trade-off between inferred geological realism and expediency. In practice, all experiments sacrifice some degree of realism; the original Miller–Urey experiment is a case in point. 21

General hurdles
Prebiotic synthesis entails a number of different difficulties.

1. Natural selection
There was no natural selection on the early earth. In the living world, complex molecular machines are pre-programmed to make the building blocks of life precisely as needed. The nucleic acids form a limited set, as do the 20 amino acids, and they come in the functional enantiomeric form; those that are wrong, like right-handed amino acids, are sorted out by the cell machinery. The same applies to the five information-bearing nucleobases, phospholipids, and carbohydrates. None of these building blocks was readily available prebiotically. There was a jumble, a chaotic mess of all sorts of molecules without any order. How did unguided processes winnow this molecular diversity down to the few compounds used in biological systems today, which are a tiny subset of the many compounds that would have arisen from abiotic processes? Consequently, it is important to understand how complex mixtures of dilute organic molecules generated by non-designed processes could have been “tamed” to give rise to the specified chemistry of metabolism (Vincent 2021)

2. Time
Time is not the naturalist's friend. Certain classes of enzymes speed up chemical reactions billions of times. Without the OMP decarboxylase enzyme, a reaction “absolutely essential” in creating the building blocks of DNA and RNA would take 78 million years in water. Some chemical reactions are so unspecific that getting the right one by unguided trial and error, however much time and however many attempts are allowed, can easily run into odds that exceed the number of atoms in the entire universe (10^80).
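The scale of such odds can be illustrated with a short calculation. The 100-residue chain length and the uniform-random assembly model below are assumptions chosen purely for illustration, not figures from the text:

```python
from math import log10

# Odds of assembling one specific 100-residue chain by drawing
# uniformly at random from the 20 proteinogenic amino acids.
SEQUENCE_LENGTH = 100  # assumed chain length for illustration
ALPHABET = 20          # proteinogenic amino acids

log_odds = SEQUENCE_LENGTH * log10(ALPHABET)  # log10 of 20^100
print(f"Roughly 1 in 10^{log_odds:.0f} random trials")  # ~10^130
print("Atoms in the observable universe: ~10^80")
```

Even this modest example already dwarfs the 10^80 atoms available, which is the point of the odds comparison above.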

3. Getting pure materials.
Evidently, what chemists do in the lab, namely using pure reagents, was not what happened on the early earth. Contaminated, impure mixtures of chemicals were the state of affairs. In order to recreate what was going on back then, chemists would have to reproduce the situation on the early earth as closely as possible, which includes using contaminated chemicals.

4. Getting free Gibbs energy 
Spontaneous prebiotic reactions would have had to "invent" ways to recruit Gibbs free energy from the environment so as to reduce their own entropy. That is like rocks continuously recruiting forces to roll uphill, or a rusty nail "figuring out" how to spontaneously add layers of galvanizing zinc to itself to fight corrosion. These reactions would also have to find ways to "funnel" this energy and direct it to where it is required, to achieve the precise reactions necessary for self-organization and, in the end, free-living, self-replicating biological systems.

5. Activation and repetitive processes
Monomers need to be activated in order for polymerization and catenation, which make amino acid strands and genes, to be possible. That demands a repetitive, ordered process, where the bond-forming reactions happen repeatedly at the same place in the molecules. In RNA and DNA polymerase complexes, and in the ribosome, sophisticated molecular machines perform these reactions with exquisite precision and efficiency. Science has failed to explain how that could have happened on the early earth.

6. Information
Specified complex information, digital data, stored in genes through the language of the genetic code, dictates and directs the making of irreducibly complex molecular machines, robotic molecular production lines, and chemical cell factories. Scientific investigation has not brought light to the origin of prebiotic information: It remains a mystery. The origin of the genetic code is also enigmatic.

7. Polymerization
Kepa Ruiz-Mirazo (2014): Ways of efficient prebiotic polycondensation of amino acids and nucleotides, in heterogeneous aqueous solutions or at interfaces with water-based media, are needed to explain the emergence, without the aid of biological catalysts, of the first functional biopolymers (e.g., polypeptides and polynucleotides). 11

8. Eigen's paradox is one of the most intractable puzzles in the study of the origins of life. The error threshold concept is thought to limit the size of self-replicating molecules to perhaps a few hundred digits, yet almost all life on earth requires much longer molecules to encode its genetic information. This problem is handled in living cells by enzymes that repair mutations, allowing the encoding molecules to reach sizes on the order of millions of base pairs. These large molecules must, of course, encode the very enzymes that repair them, and herein lies Eigen's paradox, first put forth by Manfred Eigen in his 1971 paper (Eigen 1971). Simply stated, Eigen's paradox amounts to the following: without error-correction enzymes, the maximum size of a replicating molecule is about 100 base pairs, yet for a replicating molecule to encode error-correction enzymes, it must be substantially larger than 100 bases. This is a chicken-or-egg paradox with an especially difficult solution: which came first, the large genome or the error-correction enzymes?
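The error-threshold relation behind Eigen's paradox can be sketched numerically. The relation L_max ≈ ln(s)/u and the example error rates below are standard textbook approximations, assumed here for illustration:

```python
from math import log

def max_genome_length(error_rate: float, superiority: float = 10.0) -> float:
    """Approximate error-threshold genome length, L_max ~ ln(s)/u (Eigen 1971 form).

    error_rate: per-base copying error probability u (assumption).
    superiority: selective advantage s of the master sequence (assumption).
    """
    return log(superiority) / error_rate

# Non-enzymatic template copying: error rates around 1e-2 are often assumed,
# capping replicators at a few hundred bases.
print(max_genome_length(1e-2))  # ~230 bases
# Enzymatic proofreading (error rates near 1e-8) permits megabase-scale genomes.
print(max_genome_length(1e-8))  # ~2.3e8 bases
```

The gap between the two outputs is the paradox in numeric form: the long genome needed to encode proofreading enzymes can only be maintained once those enzymes already exist.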

9. Muller's Ratchet The theory of Muller's Ratchet predicts that small asexual populations are doomed to accumulate ever-increasing deleterious mutation loads as a consequence of the magnified power of genetic drift and mutation that accompanies small population size. Evolutionary theory predicts that mutational decay is inevitable for small asexual populations, provided deleterious mutation rates are high enough. Such populations are expected to experience the effects of Muller's Ratchet where the most-fit class of individuals is lost at some rate due to chance alone, leaving the second-best class to ultimately suffer the same fate, and so on, leading to a gradual decline in mean fitness. The mutational meltdown theory built upon Muller's Ratchet to predict a synergism between mutation and genetic drift in promoting the extinction of small asexual populations that are at the end of a long genomic decay process. Since deleterious mutations are harmful by definition, accumulation of them would result in loss of individuals and a smaller population size. Small populations are more susceptible to the ratchet effect and more deleterious mutations would be fixed as a result of genetic drift. This creates a positive feedback loop that accelerates extinction of small asexual populations. This phenomenon has been called mutational meltdown.
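The ratchet can be illustrated with a toy simulation. The population size, mutation rate, and per-mutation fitness cost below are arbitrary assumptions for demonstration, not parameters from the theory:

```python
import random

def ratchet(pop_size=50, mut_rate=0.3, generations=200, seed=1):
    """Toy Muller's Ratchet: a small asexual population accumulates
    irreversible deleterious mutations; once the least-loaded class is
    lost by drift, it can never be recovered without recombination."""
    rng = random.Random(seed)
    loads = [0] * pop_size  # deleterious mutations carried by each individual
    for _ in range(generations):
        # Parents are sampled with fitness-weighted drift (cost 2% per mutation);
        # each offspring may also gain a new, irreversible mutation.
        weights = [0.98 ** load for load in loads]
        parents = rng.choices(loads, weights=weights, k=pop_size)
        loads = [p + (1 if rng.random() < mut_rate else 0) for p in parents]
    return min(loads), sum(loads) / pop_size

best, mean = ratchet()
print(f"Least-loaded class after 200 generations: {best}, mean load: {mean:.1f}")
```

In runs like this the mutation-free class is quickly lost and the minimum load only climbs, the "click" of the ratchet that the paragraph above describes.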

10. Protected environments
If these chemical reactions had happened in places exposed to UV radiation, no deal. If it was too cold or too hot, too acidic or too alkaline, or in the wrong atmospheric conditions, no deal. Shapiro (2006): Prebiotic syntheses conducted in the laboratory often involve multistep procedures, with purified reagents and very different conditions permitted at each new step. The extensive purification procedures and changes of locale that would be needed to produce comparable results on the early Earth are seldom discussed but must be taken into account when attempting to judge the plausibility of the entire sequence. 1

11. The right sequence of reactions
In metabolic pathways in the cell, the enzymes, our sophisticated molecular robots, are lined up in the right sequence. Once a manufacturing step by enzyme one is concluded and the intermediate product is ready, it is handed over to enzyme two, which uses the product of the previous enzyme to perform the subsequent manufacturing step. If the enzyme sequence were wrong, no deal: the entire manufacturing process in the production line breaks down. On the prebiotic earth, natural catalysts, like ions, clays, etc., had to replace enzymatic reactions. How could the right sequence have been performed? It is far from realistic to believe that order, timing, and the right subsequent reactions could have been achieved by random, chaotic events without direction.

12. How the components of prebiotic soup came to be organized in systems
The biggest outstanding problem in understanding the origins of life is how the components of the prebiotic soup came to be organized into systems capable of emergent processes such as growth, self-propagation, information processing, and adaptive evolution. This is all the more daunting given that prebiotic soups may have been composed of millions of distinct compounds, each at a low concentration.

13. Irreducible complexity
The cell is an irreducible, minimal entity of life. The individual parts by themselves bear no function unless integrated into a higher-order system. A complex specified intermediate state or product that is not functionally useful would never be selected, and would not emerge by non-designed events. A minimal amount of instructional complex information is required for a gene to produce useful proteins. A minimal size of a protein is necessary for it to be functional. A minimal number of parts must be integrated into metabolic systems to make all the building blocks and metabolites used in life. A fully developed energy-generating system, and a genome with all the required information, are necessary to sustain the basic functions of a living cell.

14. Homeostasis
Freeman Dyson, Origins of Life, page 73: The essential characteristic of living cells is homeostasis, the ability to maintain a steady and more-or-less constant chemical balance in a changing environment.  12

The control of metabolism is a fundamental requirement for all life, with perturbations of metabolic homeostasis underpinning numerous disease-associated pathologies. Any incomplete metabolic network without the control mechanisms in place to maintain homeostasis would mean disease and cell death. A minimal metabolic network and its control mechanisms had to be in place from the beginning, which means a gradualistic explanation of the origin of biological cells and life is unrealistic. Life is an all-or-nothing business.

The difficult ( if not impossible) task of prebiotic RNA and DNA synthesis on early earth
Kepa Ruiz-Mirazo (2013): The chemistry of nucleotides seems to be, by far, the most complicated one among the different bio-monomers, involving many independent reactions that had to be optimized and coupled in order to give an overall efficient process. This is the reason why, up to now, modular approaches have failed to solve the question of whether it was possible that RNA monomers could be synthesized on the early Earth 13

ROBERT SHAPIRO clarifies some important points. He was interviewed by J.Craig Venter in 2008:

 I then spent decades running a laboratory in DNA chemistry, and so many people were working on DNA synthesis — which has been put to good use as you can see — that I decided to do the opposite, and studied the chemistry of how DNA could be kicked to Hell by environmental agents. Among the most lethal environmental agents I discovered for DNA — pardon me, I'm about to imbibe it — was water. Because water does nasty things to DNA. For example, there's a process called DNA deamination, where it kicks off part of the coding part of DNA from the units — that was discovered in my laboratory. Another thing water does is help the information units fall off of DNA, which is called depurination and ought to apply to only one class of the subunits — but works under physiological conditions for the pyrimidines as well, and I helped elaborate the mechanism by which water helped destroy that part of DNA structure. 

Since then, so-called prebiotic chemistry, which is of course falsely named, because we have no reason to believe that what they're doing would ever lead to life — I just call it 'investigator influenced abiotic organic chemistry' — has fallen into the same trap. In the proceedings of the National Academy of Sciences about two months ago there was a paper — I think it was theoretical — they showed that in certain hydro-thermal events, convection forces and other attractive forces, about which I am unable to comment, would serve to concentrate organic molecules so that organic molecules would get much more concentrated in the bottom of this than they would in the ordinary ocean. Very nice, perhaps it's a good place for the origin of life, and interesting finding, but then there was another commentary paper in the Proceedings by another invited commentator, who said,
Great advance for RNA world because if you put nucleotides in, they'll be concentrated enough to form RNA; and if you put RNA in, the RNA will come together and form aggregates, giving you much more chance of forming a ribosome or whatever. I looked at the paper and thought, How did nucleotides come in? How did RNA come in? How did anything come in? The point is, you would take whatever mess prebiotic chemistry gives you and you would concentrate that mess so it's relevant to RNA or the origin of life — it's all in the eye of the beholder. And almost all of prebiotic chemistry is like this; they take chemicals of their own selection.

People were talking about Steve Benner and his borate paper where he selected, of his own free will, the chemical formaldehyde, the chemical acid-aldehyde, and the mineral borate, and he decided to mix them together and got a product that he himself said was significant in leading to the origin of RNA world, and I, looking at the same thing, see only the hands of Steve Benner reaching to the shelf of organic chemicals, picking formaldehyde, and from another shelf, picking acid-aldehyde, etc. Excluding them carefully. Picking a mineral that occurs only in selective places on the Earth and putting it in heavy doses. And at the end getting a complex of ribose and borate, which by itself would be of no use for making RNA, because the borate loves to hold onto the ribose, and as long as it holds onto the ribose it can't be used to make RNA. If it lets go of the ribose, then the ribose becomes vulnerable to destruction by all the other environmental agents. The half-life of pure ribose in solution, a different experiment and a very good one, by Stanley Miller is of the order of one or two hours, and all of the other sugars prominent in Earth biology have similar instability.

I was publishing papers like this and I got the reputation, or the nickname in the laboratory of the prebiotic chemist, of 'Dr. No'. If someone wanted a paper murdered, send it to me as a referee. At some point, someone said, Shapiro, you've got to be positive somewhere. So how did life start? And do we have any examples of authentic abiotic chemistry, not subject to investigator interference? The only true samples we have are those meteorites, which are scooped up quickly and often fallen in an unspoiled place — there was a famous meteorite that fell in France in a sheep field in the 1840s and led to dreadful chemistry of people seeing all sorts of biomolecules in it, not surprisingly. But if you took pristine meteorites and look inside, what you see are a predominance of simple organic compounds. The smaller the organic compound, the more likely it is to be present. The larger it is, the less likely it is to be present. Amino acids, yes, but the simplest ones. Over a hundred of them. All the simplest ones, some of which, coincidentally, overlap the unique set of 20 that coincide with Earth life, but not containing the larger amino acids that overlap with Earth life. 

And no sample of a nucleotide, the building block of RNA or DNA, has ever been discovered in a natural source apart from Earth life. Or even take off the phosphate, one of the three parts, and no nucleoside has ever been put together. Nature has no inclination whatsoever to build nucleosides or nucleotides that we can detect, and the pharmaceutical industry has discovered this. Life had to start with the mess — a miscellaneous mixture of organic chemistry to begin with. How do you organize this? You have to have a preponderance of some chemicals or lacking others would be against the second law of thermodynamics — it violates a concept that as a non-physicist that I barely grasp called 'entropy'.

In the simplest case, and there may be many more elaborate cases, they found that the energy wouldn't be released unless some chemical transformations took place. If the chemical transformations took place then the energy was released, a lot of it is heat. If this just went on continuously, all you do is use up the energy. Release all of it and you've converted one chemical to another. Big deal. To get things interesting, you have to close the cycle where the chemicals can be recycled by processes of their own, and then go through it again, releasing more energy. And once you have that, you can then develop nodes — because organic chemistry is very robust, there are reaction pathways leading everywhere, which is why it's such a mess.

One doesn't need a freak set of perhaps a hundred consecutive reactions that will be needed to make an RNA, and life becomes a probable thing that can be generated through the action of the laws of chemistry and physics, provided certain conditions are met. You must have the energy. It's good to have some container or compartment because if your products just diffuse away from each other and get lost and cease to react with one another you'll eventually extinguish the cycle. You need a compartment, you need a source of energy, you need to couple the energy to the chemistry involved, and you need sufficiently rich chemistry to allow for this network of pathways to establish itself. Having been given this, you can then start to get evolution.


Shapiro wrote in: A Skeptic's Guide to the Creation of Life on Earth 1986, p.186:

In other words,' I said, `if you want to create life, on top of the challenge of somehow generating the cellular components out of non-living chemicals, you would have an even bigger problem in trying to fit the ingredients together in the right way.' `Exactly! ... So even if you could accomplish the thousands of steps between the amino acids in the Miller tar — which probably didn't exist in the real world anyway — and the components you need for a living cell — all the enzymes, the DNA, and so forth — you'd still be immeasurably far from life. ... the problem of assembling the right parts in the right way at the right time and at the right place, while keeping out the wrong material, is simply insurmountable. 5

A. Graham Cairns-Smith also lists several hurdles that would have to be overcome in his book: Genetic takeover, page 64:

What is missing from this story of the evolution of life on earth is the original means of producing such sophisticated materials as RNA. The main problem is that the replication of RNA depends on a clean supply of rather complicated monomers—activated nucleotides. What was required to set the scene for an RNA world was a highly competent, long-term means of production of at least two nucleotides. In practice the discrimination required to make nucleotide parts cleanly, or to assemble them correctly, still seems insufficient. 

The implausibility of prevital nucleic acid If it is hard to imagine polypeptides or polysaccharides in primordial waters it is harder still to imagine polynucleotides. But so powerful has been the effect of Miller’s experiment on the scientific imagination that to read some of the literature on the origin of life (including many elementary texts) you might think that it had been well demonstrated that nucleotides were probable constituents of a primordial soup and hence that prevital nucleic acid replication was a plausible speculation based on the results of experiments. There have indeed been many interesting and detailed experiments in this area. But the importance of this work lies, to my mind, not in demonstrating how nucleotides could have formed on the primitive Earth, but in precisely the opposite: these experiments allow us to see, in much greater detail than would otherwise have been possible, just why prevital nucleic acids are highly implausible. Let us consider some of the difficulties to make RNA & DNA

1. as we have seen, it is not even clear that the primitive Earth would have generated and maintained organic molecules. All that we can say is that there might have been prevital organic chemistry going on, at least in special locations.
2. high-energy precursors of purines and pyrimidines had to be produced in a sufficiently concentrated form (for example at least 0.01 M HCN).
3. the conditions must now have been right for reactions to give perceptible yields of at least two bases that could pair with each other.
4. these bases must then have been separated from the confusing jumble of similar molecules that would also have been made, and the solutions must have been sufficiently concentrated.
5. in some other locations a formaldehyde concentration of above 0.01 M must have built up.
6. this accumulated formaldehyde had to oligomerize to sugars.
7. somehow the sugars must have been separated and resolved, so as to give a moderately good concentration of, for example, D-ribose.
8. bases and sugars must now have come together.
9. they must have been induced to react to make nucleosides. (There are no known ways of bringing about this thermodynamically uphill reaction in an aqueous solution: purine nucleosides have been made by dry-phase synthesis, but not even this method has been successful for condensing pyrimidine bases and ribose to give nucleosides.)
10. Whatever the mode of joining base and sugar, it had to be between the correct nitrogen atom of the base and the correct carbon atom of the sugar. This junction will fix the pentose sugar as either the α- or β-anomer of either the furanose or pyranose forms. For nucleic acids, it has to be the β-furanose. (In the dry-phase purine nucleoside syntheses referred to above, all four of these isomers were present, with never more than 8% of the correct structure.)
11. phosphate must have been, or must now come to have been, present at reasonable concentrations. (The concentrations in the oceans would have been very low, so we must think about special situations — evaporating lagoons, etc.)
12. the phosphate must be activated in some way — for example as a linear or cyclic polyphosphate — so that (energetically uphill) phosphorylation of the nucleoside is possible.
13. to make standard nucleotides only the 5’-hydroxyl of the ribose should be phosphorylated. (In solid-state reactions with urea and inorganic phosphates as a phosphorylating agent, this was the dominant species to begin with. Longer heating gave the nucleoside cyclic 2’,3’-phosphate as the major product, although various dinucleotide derivatives and nucleoside polyphosphates are also formed.)
14. if not already activated — for example as the cyclic 2’,3’-phosphate — the nucleotides must now be activated (for example with polyphosphate) and a reasonably pure solution of these species created of reasonable concentration. Alternatively, a suitable coupling agent must now have been fed into the system.
15. the activated nucleotides (or the nucleotides with coupling agent) must now have polymerized. Initially this must have happened without a pre-existing polynucleotide template (this has proved very difficult to simulate); but more important, it must have come to take place on pre-existing polynucleotides if the key function of transmitting information to daughter molecules was to be achieved by abiotic means. This has proved difficult too. Orgel & Lohrmann give three main classes of problem.
(i) While it has been shown that adenosine derivatives form stable helical structures with poly(U) — they are in fact triple helixes — and while this enhances the condensation of adenylic acid with either adenosine or another adenylic acid — mainly to di(A) — stable helical structures were not formed when either poly(A) or poly(G) were used as templates.
(ii) It was difficult to find a suitable means of making the internucleotide bonds. Specially designed water-soluble carbodiimides were used in the experiments described above, but the obvious pre-activated nucleotides — ATP or cyclic 2’,3’-phosphates — were unsatisfactory. Nucleoside 5’-phosphorimidazolides were more successful, but these now involve further steps, and a supply of imidazole, for their synthesis.
(iii) Internucleotide bonds formed on a template are usually a mixture of 2’–5’ and the normal 3’–5’ types. Often the 2’–5’ bonds predominate, although it has been found that Zn²⁺, as well as acting as an efficient catalyst for the template-directed oligomerization of guanosine 5’-phosphorimidazolide, also leads to a preference for the 3’–5’ bonds.
16. the physical and chemical environment must at all times have been suitable — for example the pH, the temperature, the M²⁺ concentrations.
17. all reactions must have taken place well out of the ultraviolet sunlight; that is, not only away from its direct, highly destructive effects on nucleic acid-like molecules, but away too from the radicals produced by the sunlight, and from the various longer lived reactive species produced by these radicals.
18. unlike polypeptides, where you can easily imagine functions for imprecisely made products (for capsules, ion-exchange materials, etc), a genetic material must work rather well to be any use at all — otherwise it will quickly let slip any information that it has managed to accumulate.
19. what is required here is not some wild one-off freak of an event: it is not true to say ‘it only had to happen once’. A whole set-up had to be maintained for perhaps millions of years: a reliable means of production of activated nucleotides at the least.



As the difficulties accumulate the stakes get higher: success would be all the more resounding, but it becomes less likely. Sooner or later it becomes wiser to put your money elsewhere.
 2

M. Gargaud and colleagues detail the size of the problem:

One of the principal problems concerning the hypothesis of the RNA world is that it appears quite unlikely that a prebiotic environment could have existed containing the mixture of activated nucleotides favoring the formation and replication of ribozymes, as well as their evolution through natural selection. Even if there were several candidate reactions for the efficient prebiotic synthesis of nucleic bases, access to monomeric nucleotides by chemical pathways in fact comes up against several obstacles. If one goes no further than mimicking the biochemical pathway, the first difficulty that occurs is that of synthesizing ribose, which is formed in just negligible quantities within the complex mixture obtained by polymerization of formaldehyde, and, what is more, has a limited lifetime. The bond between a nucleic base and ribose that produces a nucleoside is then a  very difficult reaction. There still remains the matter of obtaining a nucleotide by phosphorylation, which leads to mixtures because three positions remain available on the ribose, and then there is its activation. So there are two possibilities, either to envisage an easier pathway for the prebiotic synthesis of nucleotides or to squarely reject RNA as the initial bearer of information, in favor of an alternative bearer that has not left any evolutionary traces. 4

Unsolved issues regarding nucleic acid synthesis 
How would the early Earth have generated and maintained organic molecules? All that can be said is that there might have been prebiotic organic chemistry going on, at least in special locations.
How would prebiotic processes have purified the grossly impure starting molecules for RNA and DNA? They would have been present in complex mixtures that contained a great variety of reactive molecules.
How did the synthesis of the nitrogenic nucleobases in prebiotic environments occur?
How did fortuitous accidents select the five just-right nucleobases of DNA and RNA: two purines and three pyrimidines?
How did unguided, non-designed events select purines with two rings, whose nine ring atoms comprise 5 carbon and 4 nitrogen atoms, amongst almost unlimited possible configurations?
How did lucky coincidence pick pyrimidines with a single ring, whose six ring atoms comprise 4 carbon and 2 nitrogen atoms, amongst an unfathomable number of possible configurations?
How did random trial and error foresee that this specific atomic arrangement of the nucleobases is required to get the right strength of the hydrogen bond to join the two DNA strands and form Watson–Crick base-pairing?
How did mechanisms without external direction foresee that this specific atomic arrangement would convey one of, if not the best possible genetic system to store information?
How would these functional bases have been separated from the confusing jumble of similar molecules that would also have been made?
How were high-energy precursors of purines and pyrimidines produced in sufficiently concentrated form and brought together at the assembly site?
How could the adenine-uracil interaction function in any specific recognition scheme under the chaotic conditions of a "prebiotic soup" considering that its interaction is weak and nonspecific?
How could the ribose 5 carbon sugar rings which form the RNA and DNA backbone have been selected, if 6 or 4 carbon rings, or even more or less, are equally possible but non-functional?
How would the functional ribose molecules have been separated from the non-functional sugars?
How could right-handed configurations of RNA and DNA have been selected in a racemic pool of right and left-handed molecules? Ribose must have been in its D form to adopt functional structures ( The homochirality problem )
How was exclusively β-D-ribofuranose chosen in nucleic acids over pyranose, given that the latter species is substantially more stable at equilibrium?
How were the correct nitrogen atom of the base and the correct carbon atom of the sugar selected to be joined together?
How could random events have brought all three parts together and bonded them in the right position (probably over one million nucleotides would have been required)?
How could prebiotic reactions have produced functional nucleosides? (There are no known ways of bringing about this thermodynamically uphill reaction in aqueous solution)
How could prebiotic glycosidic bond formation between nucleosides and the base have occurred if they are thermodynamically unstable in water, and overall intrinsically unstable?
How could RNA nucleotides have accumulated if they degrade at warm temperatures over periods ranging from nineteen days to twelve years? These are extremely short survival times for the four RNA nucleotide building blocks.
How was phosphate, the third component, concentrated to reasonable levels? (The concentrations in the oceans or lakes would have been very low.)
How would prebiotic mechanisms phosphorylate the nucleosides at the correct site (the 5' position) if, in laboratory experiments, the 2' and 3' positions were also phosphorylated?
How could phosphate have been activated? In order to drive the energetically costly nucleotide polymerization reaction, the (energetically uphill) phosphorylation of the nucleoside had to be possible.
How was the energy supply accomplished to make RNA? In modern cells, energy is consumed to make RNA.
How could a transition from prebiotic to biochemical synthesis have occurred? A huge gap and an enormous transition would still lie ahead to arrive at a fully functional, interlocked, and interdependent metabolic network.
How could RNA have formed if water is required to make it, yet RNA cannot emerge in water and cannot replicate with sufficient fidelity in water without sophisticated repair mechanisms in place?
How would the transition from prebiotic RNA synthesis to the highly regulated cellular metabolic synthesis have occurred? The pyrimidine synthesis pathway requires six regulated steps, seven enzymes, and energy in the form of ATP.
The starting material for purine biosynthesis is Ribose 5-phosphate, a product of the highly complex pentose phosphate pathway, which uses 12 enzymes. De novo purine synthesis pathway requires ten regulated steps, eleven enzymes, and energy in the form of ATP.
How did formaldehyde concentration of above 0.01 M build up?
How did accumulated formaldehyde oligomerise to sugars?
How were they induced to react to make nucleosides? (There are no known ways of bringing about this thermodynamically uphill reaction in aqueous solution: purine nucleosides have been made by dry-phase synthesis, but not even this method has been successful for condensing pyrimidine bases and ribose to give nucleosides.)
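The enzymatic figures quoted in the list above (seven enzymes for pyrimidine synthesis, eleven for purine synthesis, twelve for the pentose phosphate pathway that supplies ribose 5-phosphate) can be tallied in a minimal sketch; the numbers come from the text itself, not from an external database:

```python
# Enzyme counts for the cellular routes to nucleotides,
# using only the figures quoted in the text above.
enzymes_per_pathway = {
    "pyrimidine de novo synthesis": 7,   # six regulated steps, ATP-dependent
    "purine de novo synthesis": 11,      # ten regulated steps, ATP-dependent
    "pentose phosphate pathway": 12,     # supplies ribose 5-phosphate
}

# Minimum number of dedicated enzymes before finished nucleotide
# monomers are even available.
total_enzymes = sum(enzymes_per_pathway.values())
print(total_enzymes)  # 30
```

The tally simply makes the scale of the gap explicit: the regulated cellular route to nucleotides runs through some thirty dedicated enzymes, none of which was available prebiotically.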

1. Robert Shapiro: Small Molecule Interactions were Central to the Origin of Life 2006
2. A. Graham Cairns-Smith: Genetic Takeover: And the Mineral Origins of Life 1988
3. 
4. M. Gargaud: Young Sun, Early Earth  and the Origins of Life 2012
5. Robert Shapiro: Origins : A Skeptic's Guide to the Creation of Life on Earth  January 1, 1986
6. 
7. Lena Vincent: The Prebiotic Kitchen: A Guide to Composing Prebiotic Soup Recipes to Test Origins of Life Hypotheses 11 November 2021
8. S.Maruyama: The origin of life: The conditions that sparked life on Earth December 23, 2019
9. Bernard Marty: Salinity of the Archaean oceans from analysis of fluid inclusions in quartz 12 December 2017
10. Dr. Stanley L. Miller: From Primordial Soup to the Prebiotic Beach: An interview with exobiology pioneer 1994
11. Kepa Ruiz-Mirazo: Prebiotic Systems Chemistry: New Perspectives for the Origins of Life December 30, 2011
12. Freeman Dyson:  Origins of Life 2nd Edition 2010
13. Kepa Ruiz-Mirazo: Prebiotic Systems Chemistry: New Perspectives for the Origins of Life October 31, 2013
14. Yasuji Sawada: A Thermodynamic Approach towards the Question “What is Cellular Life?” 20 Jul 2020
15. Rani Gupta: Nucleotide Biosynthesis and Regulation 21 April 2021
16. Srivatsan Raman: Evolution-guided optimization of biosynthetic pathways December 1, 2014
17. Daniel Charlier: Regulation of carbamoylphosphate synthesis in Escherichia coli: an amazing metabolite at the crossroad of arginine and pyrimidine biosynthesis 20 September 2018
18. David Deamer: The Hot Spring Hypothesis for an Origin of Life 2020 Mar 25
19. Norio Kitadai: Origins of building blocks of life: A review 2017
20. M. S. Dodd: Evidence for early life in Earth’s oldest hydrothermal vent precipitates 2017
21. Lena Vincent: The Prebiotic Kitchen: A Guide to Composing Prebiotic Soup Recipes to Test Origins of Life Hypotheses 11 November 2021
22. Brian J. Cafferty: Spontaneous formation and base pairing of plausible prebiotic nucleotides in water 25 April 2016
23. Jim Cleaves II: Encyclopedia of Astrobiology: Formose Reaction 2011
24. Steven A. Benner: The "Strong" RNA World Hypothesis: Fifty Years Old 2013
25. A. G. Cairns-Smith Seven Clues to the Origin of Life: A Scientific Detective Story  1990



Carbohydrates
Carbohydrates (carbo = carbon + hydro = water) result from the chemical bonding of hydrogen, oxygen, and carbon atoms. Steve Benner (2010): Carbohydrates have a C:H:O ratio of 1:2:1 and an aldehyde or ketone group 1. They are subdivided into monosaccharides (simple sugars, normally containing 3 to 7 carbon atoms), disaccharides (two monosaccharides joined together), and polysaccharides (long chains of monosaccharides joined together).

The most common carbohydrates are six-carbon (hexose) and five-carbon (pentose) sugars. Carbohydrates are building blocks of RNA and DNA, and as such, their origin on the early Earth is of fundamental importance for origin-of-life questions. Their six-carbon versions, like glucose, are also an important source of energy through oxidation. Most carbon used in biology today is sourced by photoautotrophs such as cyanobacteria, which fix carbon dioxide (CO2) using energy from the sun through photosynthesis and synthesize sugars like glucose.
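The 1:2:1 ratio Benner mentions can be checked directly against the molecular formulas of the sugars named in this section, glucose (C6H12O6) and ribose (C5H10O5); a minimal sketch:

```python
# Simple carbohydrates fit the empirical formula Cn(H2O)n,
# i.e. a 1:2:1 ratio of carbon to hydrogen to oxygen atoms.
def is_carbohydrate_ratio(c, h, o):
    """True if atom counts match the 1:2:1 C:H:O carbohydrate ratio."""
    return h == 2 * c and o == c

print(is_carbohydrate_ratio(6, 12, 6))  # glucose, a hexose -> True
print(is_carbohydrate_ratio(5, 10, 5))  # ribose, a pentose -> True
print(is_carbohydrate_ratio(2, 6, 1))   # ethanol (C2H6O)   -> False
```

The formula Cn(H2O)n is also why the name reads "hydrate of carbon": each carbon formally carries one water unit.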

Geoffrey Zubay, Origins of Life on the Earth and in the Cosmos (2000): In the prebiotic world, it seems likely that one-carbon chemistry also dominated the synthesis of carbon compounds that were crucial to the origin of life. However, in this case, the key precursor carbon was probably formaldehyde (CH2O) formed in the atmosphere or in the lithosphere. Formaldehyde always has been considered to be the most likely precursor of carbohydrates in the prebiotic world. It has the advantage that it is a high-energy compound for which routes of synthesis in the atmosphere have been found. A major stumbling block for a long time was that no feasible routes from formaldehyde to ribose had been found. Yields always were very small and ribose always constituted a very minor component of the mixture of sugars that were usually formed. In the 1860s Butlerov discovered that when an aqueous solution of formaldehyde is warmed in the presence of a calcium hydroxide suspension, a mixture of sugars is produced. The process is referred to as the formose reaction.  The formose reaction starts from a concentrated solution of formaldehyde (usually 1 to 2%). Within a matter of minutes at temperatures around 55°C a broad array of sugars and other products is formed. Could this procedure be used without modification for the prebiotic synthesis of ribose? This is very unlikely because ribose usually constitutes 1% or less of the reaction products. 2

J. Oró (1990): Under the slightly basic conditions of the Butlerow synthesis, a complex mixture of more than 50 different pentoses, hexoses, and many other sugars is obtained. Furthermore, under the reaction conditions in which it is formed, ribose tends to decompose into acidic compounds. Accordingly, at the present time it is difficult to understand how ribose could have accumulated and separated from other sugars of abiotic origin in the prebiotic environment 3

Carl Sagan (1992) Sources of organic molecules on the early Earth divide into three categories: delivery by extraterrestrial objects; organic synthesis driven by impact shocks; and organic synthesis by other energy sources (such as ultraviolet light or electrical discharges). Estimates of these sources for plausible end-member oxidation states of the early terrestrial atmosphere suggest that the heavy bombardment before 3.5 Gyr ago either produced or delivered quantities of organics comparable to those produced by other energy sources. 6

Carbohydrates come in left- and right-handed chiral forms. Ribose, the backbone sugar of RNA and DNA, is right-handed. Kitadai (2017) lists several scientific papers that address long-recognized problems with ribose synthesis via the formose reaction 4

The three-carbon glycerol backbone of phospholipids likewise comes in two chiral forms: in archaea it has one handedness (glycerol-1-phosphate), while in all other organisms it has the opposite one (glycerol-3-phosphate), and each group uses its form exclusively.

Extraterrestrial sources
R. Shapiro (2007): Some writers have presumed that all of life's building blocks could be formed with ease in Miller-type experiments and were present in meteorites and other extraterrestrial bodies. This is not the case. A careful examination of the results of the analysis of several meteorites led the scientists who conducted the work to a different conclusion: inanimate nature has a bias toward the formation of molecules made of fewer rather than greater numbers of carbon atoms, and thus shows no partiality in favor of creating the building blocks of our kind of life. (When larger carbon-containing molecules are produced, they tend to be insoluble, hydrogen-poor substances that organic chemists call tars.) I have observed a similar pattern in the results of many spark discharge experiments. 46

Daniel P. Glavin (2018): Asteroids, comets, and their fragments including meteorites, micrometeorites, and interplanetary dust particles (IDPs) also serve as delivery vehicles for organic matter. At present, about 4 × 10^7 kg of extraterrestrial material, ranging in size from meter-sized meteorites down to micron-sized IDPs, rains down on the Earth every year. Therefore, exogenous delivery may have been an important source of organic carbon on the early Earth, including complex prebiotic molecules available for the origin of life 7

The narrative is well exposed in the recapitulation and closing thoughts of Maheen Gull's paper (2021):
" There is a strong possibility of the emergence of life on the early Earth as a consequence of the abiotic synthesis of organic molecules via interstellar chemical reactions and their subsequent delivery to early Earth via meteoritic bombardments, followed by concentration, self-assembly reactions catalyzed by the minerals present on the early Earth, and finally the proliferation of the early life on the Earth." 11

Daniel Segré (1999): Assuming that a significant amount of the organic substances in cometary and meteoritic infall survived atmospheric entry, most of the material would presumably enter the oceans and be released over a period of many years. One mechanism for the release of organic components from extraterrestrial infall is thermal extraction. Primordial membranes would need to continuously add amphiphilic components in order to accommodate the growth and replication of the encapsulated macromolecular system or of the lipid aggregate itself. The challenge is to find a plausible synthetic pathway for hydrocarbons with 10 or more carbons in their chains. Such chains must also have modifications, e.g. chain branching, that will allow them to be fluid at the permissive environmental temperature. 35

Much is made of the Murchison meteorite. Here's a quick calculation: Assuming no contamination from its collision with Earth, "sugar-related" compounds were measured at 60 parts per million (ppm), about the same as amino acids (quotes, because not all are related to biology, and all are racemic). Using a small amino acid like alanine (MW = 89 g/mol) and assuming you would need at least a 0.1 molar concentration in the ocean (V = 1.35 × 10^21 liters) to get peptides, the mass of meteorites required to deliver enough alanine would be 2 × 10^26 g. (The mass of the Earth is about 6 × 10^24 kg, that is, 6 × 10^27 g.) The Murchison meteorite weighs about 100 kg. Thus the Earth would have had to be struck by 2 × 10^21 meteorites the size of Murchison. The age of the Earth is about 1.5 × 10^17 seconds. Thus 13,000 meteorites would have to strike during every second of Earth's existence. That is far too much to be plausible. 12
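The back-of-envelope estimate above can be redone step by step. The inputs below are exactly the figures quoted in the paragraph (60 ppm abundance, alanine at 89 g/mol, a 0.1 M target concentration, an ocean volume of 1.35 × 10^21 L, a ~100 kg Murchison mass, and Earth's age of ~1.5 × 10^17 s); this sketch only checks the arithmetic, it introduces no new data:

```python
# Redo the meteorite-delivery arithmetic from the text.
ALANINE_MW = 89.0        # g/mol, small amino acid used as the stand-in
TARGET_CONC = 0.1        # mol/L assumed necessary for peptide formation
OCEAN_VOLUME = 1.35e21   # liters
ABUNDANCE = 60e-6        # 60 ppm amino-acid content by mass
MURCHISON_MASS = 1e5     # grams (~100 kg)
EARTH_AGE_S = 1.5e17     # seconds

alanine_needed = TARGET_CONC * ALANINE_MW * OCEAN_VOLUME  # grams of alanine
meteorite_mass = alanine_needed / ABUNDANCE               # grams of meteorites
n_meteorites = meteorite_mass / MURCHISON_MASS            # Murchison-sized bodies
rate_per_second = n_meteorites / EARTH_AGE_S              # impacts per second

print(f"{meteorite_mass:.1e} g of meteorites")        # ~2e26 g
print(f"{n_meteorites:.1e} Murchison-sized impacts")  # ~2e21
print(f"{rate_per_second:.0f} impacts per second")    # ~13,000
```

The arithmetic reproduces the figures quoted in the text: roughly 2 × 10^26 g of Murchison-like material, or about 13,000 such impacts per second over Earth's whole history.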

Kepa Ruiz-Mirazo (2013): Both long-chain monocarboxylic acids and polycyclic aromatic hydrocarbons (PAHs) with amphiphilic properties were extracted from the Murchison meteorite. Deamer et al. have shown that these materials are able to form vesicle-like structures under specific conditions. Their formation could have occurred by irradiation of interstellar matter with UV light, as revealed by experimental irradiation of simulated cometary and interstellar ice. 36

In 2012, ScienceDaily reported that "researchers at NASA's Jet Propulsion Laboratory are creating concoctions of organics, or carbon-bearing molecules, on ice in the lab, then zapping them with lasers. Their goal: to better understand how life arose on Earth." The abstract of the paper started as follows:

Understanding the evolution of organic molecules in ice grains in the interstellar medium (ISM) under cosmic rays, stellar radiation, and local electrons and ions is critical to our understanding of the connection between ISM and solar systems. Our study is aimed at reaching this goal of looking directly into radiation-induced processing in these ice grains. 9

So what did they produce? Murthy S. Gudipati writes: The organics looked at in the study are called polycyclic aromatic hydrocarbons, or PAHs for short. These carbon-rich molecules can be found on Earth as combustion products: for example, in barbecue pits, candle soot and even streaming out of the tail pipe of your car. They have also been spotted throughout space in comets, asteroids and more distant objects. 8 That has hardly anything to do with the organic building blocks needed to start life.

Phospholipids
A factory requires a building that protects its workers, manufacturing processes, machines, etc. from the hostile external conditions of the environment (rain, storms, winds, and so on) and that acts like a security guard: it lets only certain things enter and leave the cell/factory, makes sure the supplies the factory needs come in, keeps out unwanted things that would harm the factory, and maintains the right internal conditions. A factory is also subdivided into different compartments for the division of labor.

The first life form must have had membranes with similar active stratagems. Cell membranes control which substances go in and out, such as nutrients and ions, using various transport mechanisms. They also host proteins that permit cell communication through signaling. Biological membranes are essential for internal compartmentalization, especially in eukaryotic cells, which are far more complex than prokaryotes. Cell membranes also sustain energy gradients, which are necessary to generate energy, much as a hydroelectric plant needs a dam (the cell membrane acts as the dam that makes the gradient possible). Cell membranes are furthermore essential for keeping a homeostatic milieu (pH, fluid balance, cell-size control, etc.). Cell membranes, membrane proteins, and an internal homeostatic milieu form an interdependent system that had to be fully functional all at once. A gradual emergence would produce non-functional intermediate states.

Lipids can be distinguished between mono - or diacyl glycerols (“incomplete lipids”, ILs) or phospholipids (“complete lipids”, CLs). 28 

The Cell factory maker, Paley's watchmaker argument 2.0 Osc_mi11
All unstructured text is available under the Creative Commons Attribution-ShareAlike License;

Their head groups typically consist of a phosphate group bound to a glycerol backbone. They link to the tails that are usually long, linear fatty acids. These chains have normally a length of between sixteen to eighteen carbon atoms. In the picture, there are two different fatty acids, one saturated and one unsaturated, bonded to the glycerol molecule. The unsaturated fatty acid has a slight kink in its structure due to the double bond.

David W. Deamer (2010): In contemporary cells, a fundamental role of membrane boundaries is to provide a selective permeability barrier that is necessary for separating the cytoplasm from the external environment. The transmembrane transport of nutrients and ionic solutes is mediated by a variety of membrane-associated proteins that act as channels, carriers and active transporters (pumps). Membrane receptors provide a sensor mechanism that permits communication between the intracellular milieu and the outside world. Membranes also capture light energy and redox energy by using pigment systems and electron transport to generate electrochemical proton gradients as a source of free energy. All of these functions require membrane-associated proteins which were presumably absent in the first forms of cellular life. It, therefore, seems likely that the membrane boundaries of the earliest cells simply provided a selective permeability barrier that permitted the permeation of essential nutrients but retained polymeric products of primitive biosynthesis. 19

Martin M Hanczyc: (2017): The membrane defines the unit cell and its internal volume. This barrier also acts to preserve the integrity of the cell in varying environments. But membranes are more than just passive containers. They mediate the interactions of cells with the environment including the harvesting of energy, material, and other resources, and the interaction with other cells including potential pathogens. Such functionalities are essential mechanisms for cells to avoid equilibrium and death. The material and informational flux through a cell is often controlled by various proteins and lipid conjugates integrated into the membrane. For example, transmembrane proteins govern signal transduction pathways. 5 19

Cell (cytoplasmic) membranes are made of phospholipids, which consist of fatty acids attached to a glycerol backbone. Their polar head group is hydrophilic (water-loving) and their fatty acid tails are hydrophobic (water-repelling), which makes the molecules amphiphilic. They spontaneously form bilayers in aqueous environments. Today's cell membranes, together with all their membrane proteins, form an enormously complex system. Scientific hypotheses hold that they are the product of spontaneous formation and self-assembly, or of a long period of evolution preceded by simpler systems, but there is no consensus on what such supposed "proto-cell" membranes could have looked like. Is emergence through a gradual process plausible?

Saturated vs unsaturated 
Membrane fluidity is life-essential.  A.J.M.Driessen (2014): A vital function of the cell membrane in all living organisms is to maintain the membrane permeability barrier and fluidity. 32 S.Ballweg (2016): The maintenance of a fluid lipid bilayer is key for membrane integrity and cell viability. 33 

David Deamer (2017): Saturated hydrocarbon chains would “freeze” into gels at ordinary temperature ranges, so adding unsaturated cis double bonds near the center of the chain solves this problem. 37

Libretext: In their saturated form, the fatty acids in phospholipid tails are saturated with bound hydrogen atoms; there are no double bonds between adjacent carbon atoms. This results in tails that are relatively straight. In contrast, unsaturated fatty acids do not contain a maximal number of hydrogen atoms, although they do contain some double bonds between adjacent carbon atoms; a double bond results in a bend of approximately 30 degrees in the string of carbons. Thus, if saturated fatty acids, with their straight tails, are compressed by decreasing temperatures, they press in on each other, making a dense and fairly rigid membrane. If unsaturated fatty acids are compressed, the “kinks” in their tails elbow adjacent phospholipid molecules away, maintaining some space between the phospholipid molecules. This “elbow room” helps to maintain fluidity in the membrane at temperatures at which membranes with saturated fatty acid tails in their phospholipids would “freeze” or solidify. The relative fluidity of the membrane is particularly important in a cold environment. A cold environment tends to compress membranes composed largely of saturated fatty acids, making them less fluid and more susceptible to rupturing. Many organisms (fish are one example) are capable of adapting to cold environments by changing the proportion of unsaturated fatty acids in their membranes in response to the lowering of the temperature. 30

Besides helping maintain fluidity, unsaturated lipids help decrease sensitivity to oxidative degradation of lipids and help increase lifespan. 

Homeoviscous adaptation
The bilayer cell membrane is unstable unless sophisticated mechanosensing and signaling pathways provide a mechanism of adaptation that controls its properties.
R.Ernst (2016): Biological membranes are complex and dynamic assemblies of lipids and proteins. Bacteria, fungi, reptiles, and fish do not control their body temperature and must adapt their membrane lipid composition in order to maintain membrane fluidity in the cold. This adaptive response was termed homeoviscous adaptation. The most common structure, the lamellar lipid bilayer, has various physicochemical properties including phase behavior, different degrees of fluidity/viscosity, membrane thickness, and bending rigidity that are determined both by the molecular composition and membrane curvature. A cell must monitor membrane properties to mount adaptive responses and maintain organelle identities. Lipids have a pivotal role in membrane remodeling processes and their biosynthesis and turnover are tightly regulated.

Eukaryotic cells and their organelles synthesize hundreds to thousands of lipid molecules differing in their molecular structures, physicochemical properties, and molar abundances. This stunning diversity derives from the combinatorial complexity of the lipid ‘building blocks’. Glycerophospholipids and sphingolipids have a modular design featuring two apolar hydrocarbon chains (or acyl chains) and a hydrophilic headgroup. The proportion of saturated and unsaturated acyl chains in membrane lipids is a key factor determining lipid packing, membrane viscosity, and water permeability. Bacteria, cyanobacteria, fungi, plants, and fish that do not control their body temperature increase the proportion of unsaturated acyl chains in membrane lipids to maintain fluidity in the cold. However, temperature is not the only factor that explains the unsaturation level of biological membranes. In homeotherms, such as mammals, large variations exist between the acyl chain profiles of several tissues, suggesting that this profile endows cellular membranes with specific properties. Thus, eukaryotic cells establish lipid gradients, with sterols and saturated acyl chains being gradually enriched along the secretory pathway at the expense of monounsaturated acyl chains.
43

Observe how the authors describe glycerophospholipids and sphingolipids as having a "modular design".

Doris Berchtold (2012): As TORC2 regulates sphingolipid metabolism, our discoveries reveal a  homeostasis mechanism in which TORC2 responds to plasma membrane stress to mediate compensatory changes in cellular lipid synthesis and hence modulates the composition of the plasma membrane. The components of this pathway and their involvement in signaling after  membrane stretch are evolutionarily conserved 45

Natalia Soledad Paulucci (2021): It is vitally important that bacteria maintain the fluidity of their membranes at optimal values to ensure physiological homeostasis and the integrity of all the processes that occur in them. This fluidity control process, called homeoviscous adaptation, was first demonstrated in E. coli (Sinensky 1974) by observing that membrane fluidity remains relatively constant at various temperatures. Underlying the process of homeoviscous adaptation is the stress-triggered catalytic activity of membrane-bound enzymes and/or membrane sensors related to signal transduction mechanisms. Thus, the membrane remodeling in composition and organization may operate as an on/off switch on the controlling mechanisms. 44

The evidence indicates that maintaining a homeostatic internal milieu independently of external environmental variations is vital, and that it depends on a control process of membrane fluidity (homeoviscous adaptation), which in turn depends on complex membrane-bound enzymes and/or membrane sensors tied to signal transduction mechanisms. That points to an interdependent, irreducibly complex system: an interplay of phospholipid biosynthesis directed by, and depending on, signals transmitted through these signaling pathways. Since synthesizing unsaturated chains depends on complex enzymatic processes that were not available prebiotically, the question is how they could have emerged prebiotically.

Membrane structures
Researchers are looking for viable primitive self-assembling membranes, supposing that the early Earth hosted simple amphiphilic molecules such as fatty acids and phospholipids. They hypothesize that these might have been sufficient for the formation of primitive membranes through self-assembly. Is that so?

The Cell factory maker, Paley's watchmaker argument 2.0 Cell_m10
Membrane structures.
Top, an archaeal phospholipid: 1, isoprene chains; 2, ether linkages; 3, L-glycerol moiety; 4, phosphate group.
Middle, a bacterial or eukaryotic phospholipid: 5, fatty acid chains; 6, ester linkages; 7, D-glycerol moiety; 8, phosphate group. 
Bottom: 9, lipid bilayer of bacteria and eukaryotes; 10, lipid monolayer of some archaea. 
I, the copyright holder of this work, release this work into the public domain. This applies worldwide.

David Deamer (2017): An amphiphile is defined as a molecule having both a non-polar (hydrophobic or “water-fearing”)  hydrocarbon moiety ( tail)  and a polar (hydrophilic or “water-loving”) head group. The simplest amphiphiles are fatty acids having a single hydrocarbon chain and a carboxylic acid head group.  10

The Cell factory maker, Paley's watchmaker argument 2.0 Phosph10
Creative Commons CC0 License

Fatty acids
Fatty acids (FA) are constituents of the phospholipids that make up cell membranes. Their derivatives also serve other functions in the cell, such as cell signaling and energy supply.

The Cell factory maker, Paley's watchmaker argument 2.0 Fatty_11
Libretext: Fatty acids consist of a carboxylic acid group and a long hydrocarbon chain, which can either be unsaturated or saturated. A saturated fatty acid tail only consists of carbon-carbon single bonds, and an unsaturated fatty acid has at least one carbon-carbon double or triple bond. 29  Creative Commons Attribution-ShareAlike License;

Different membrane structures between bacteria, archaea, and eukaryotes
Archaeal phospholipids are chemically distinct from those that are present in bacterial and eukaryotic membranes; the glycerine moieties possess opposite chiralities, and the corresponding biosynthetic enzymes are either unrelated or are, at least, not orthologous 48

Jonathan Lombard (2012): Two different, albeit structurally similar, kinds of phospholipids exist in nature. Bacteria and eukaryotes have the same membrane biochemistry, with ester-linked fatty acid phospholipids that are based on glycerol-3-phosphate (G3P). These G3P phospholipids were thought to be universal, but the surprise came when pioneering studies of archaeal biochemistry showed that archaeal phospholipids are made of glycerol-1-phosphate (G1P) that is ether-linked to isoprenoid chains. This chemical disparity mirrors the use of different phospholipid biosynthesis pathways in archaea and bacteria, and in particular the use of a distinctive glycerol phosphate dehydrogenase to synthesize G1P. When they were discovered, these archaeal pathways were considered to be unique and non-homologous to those of bacteria and eukaryotes.

Juli Peretó (2004): The two key dehydrogenase enzymes that produce G1P and G3P, G1PDH and G3PDH, respectively, are not homologous. 47

There are often exceptions to the norm, and conversely: J. Lombard (2012): The presence of fatty acids in archaea was described several decades ago, and is often neglected, as is the presence of archaeal-like ether lipids in some bacteria 14

Eugene V. Koonin (2005) Waechtershaeuser suggests that the LUCA was a form of life that existed in two dimensions only and that could synthesize both lipid (and, implicitly, cell wall) types, followed by differential loss. Differential loss explains all of the differences between archaebacteria and eubacteria, but makes the (primitive?) LUCA the biochemically most-potent organism that ever lived, with functionally redundant parallel pathways for a plethora of essential functions (lipids, cell walls, DNA replication). 20

Eukaryotic membranes
Daniel Segré (2001): A present-day eukaryotic cell incorporates three primary classes of lipid: phospholipids, sphingolipids and sterols. When these are further differentiated with respect to head groups, hydrocarbon chains and linking bonds, hundreds of different membrane lipids can be defined. 35

J.Lombard (2012): Eukaryotic membranes have typical bacterial-like phospholipids. By contrast, the apparent conservation of the isoprenoid biosynthesis mevalonate pathway in archaea and eukaryotes, and its loss in most bacteria, could support a relationship between archaea and eukaryotes. However, recent phylogenomic analyses show that there are major differences between the archaeal and eukaryotic mevalonate pathways; archaea have the most divergent pathway, whereas eukaryotes and several bacteria appear to have retained the ancestral version. This suggests that eukaryotes inherited their membranes directly from bacteria or from a common ancestor of bacteria and eukaryotes to the exclusion of archaea. This is at odds with the classical Woesian three-domain phylogeny rooted on the bacterial branch. With regard to the eukaryotes, this phylogeny implies that the last common ancestor of archaea and eukaryotes would have had either an archaeal-like membrane that was subsequently replaced by bacterial-like phospholipids in eukaryotes, or an ancestral mixed membrane with both G1P and G3P phospholipids that evolved towards a modern archaeal-like membrane in archaea and towards a bacterial-like membrane in eukaryotes after the divergence of both lineages (the pre-cell-like model). Both options are problematic. Unless considering massive horizontal transfer of all the necessary genes, the mixed membrane model implies the less parsimonious assumption that bacterial-like membranes evolved twice from the ancestral mixed membrane, in bacteria and eukaryotes independently. The fact that no archaeal-to-bacterial membrane transition has been identified so far also undermines the hypothesis that an archaeal-like membrane was secondarily replaced in eukaryotes.  14

There are no difficulties in the hypothesis that an intelligent designer created the three domains separately and independently, which removes the necessity of finding plausible transition routes.

Gáspár Jékely (2006) admitted: If one assumes that none of the two membrane forms could have evolved gradually from the other one or from a mixed membrane, the conclusion that eu- and archaebacterial membranes originated independently is inevitable. 34

Prebiotic synthesis of fatty acids
Kepa Ruiz-Mirazo (2013): The Fischer−Tropsch synthesis, which is known to produce long hydrocarbon chains from carbon monoxide and hydrogen gases in the presence of a metal catalyst at high temperatures, is considered a possible source of fatty acids and fatty alcohols. In addition to the classical results on those lines by Oró and co-workers, more recently Simoneit and coworkers have conducted this reaction by heating oxalic acid solutions at temperatures that simulate the conditions at deep-sea hydrothermal vents. At the optimal temperature (150−250 °C), the lipid components ranged from C12 to more than C33 and included n-alcohols, n-alkanoic acids, n-alkyl formates, n-alkanals, n-alkanones, n-alkanes, and n-alkenes, all with essentially no carbon number preference. 36

David Deamer (2017): For stability in an aqueous cytosol of a living cell, the hydrocarbon chains must be in the range of 14 to 18 carbons in length.  37

The Fischer–Tropsch synthesis produces hydrocarbon chains of widely varying lengths, but stability in an aqueous cytosol requires chains of 14 to 18 carbons. Most of the product range therefore falls outside the viable window, and nothing in the reaction selects for it.
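To make the mismatch concrete, here is a rough back-of-envelope sketch in Python, assuming (as the quoted experiments suggest) that Fischer–Tropsch products are spread with essentially no carbon-number preference across C12 to C33:

```python
# Fischer-Tropsch products reported at 150-250 C: chains from C12 to C33+,
# with "essentially no carbon number preference" (Ruiz-Mirazo 2013).
ft_products = range(12, 34)  # C12 through C33

# Deamer (2017): stability in an aqueous cytosol needs 14-18 carbons.
viable = [n for n in ft_products if 14 <= n <= 18]

# Fraction of the product range that falls inside the stability window,
# under the (assumed) uniform-distribution reading of the quote.
fraction = len(viable) / len(ft_products)
print(viable, round(fraction, 2))  # [14, 15, 16, 17, 18] 0.23
```

On that assumption, only about a quarter of the product range lies inside the stability window, and the synthesis itself provides no mechanism to enrich for it.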

Biotic synthesis of fatty acids
Dr. Peter Reilly: The standard way for cells to synthesize fatty acids is through the fatty acid synthesis cycle, using eight enzymes (acyl-CoA synthase, acyl-CoA carboxylase, acyltransferase, ketoacyl synthase, ketoacyl reductase, hydroxyacyl dehydratase, enoyl reductase, and thioesterase) together with acyl carrier protein. 21

Differences in the biosynthesis of fatty acids in bacteria and eukaryotes, and isoprenoids in archaea
Yosuke Koga (2012): In prokaryotic and eukaryotic cells, the (fatty acid) carbon chains are mainly linear but archaeal lipids are branched every fourth carbon, with a single methyl group linked to these carbon atoms. The unique structure of archaeal lipids and their stereospecificity was hypothesized to be responsible for the ability of these organisms to resist and thrive under extreme environmental conditions 16

C. de Carvalho ( 2018):The pathway for FA biosynthesis is highly conserved within the kingdoms of life, starting with the formation of malonyl-CoA by carboxylation of acetyl-CoA and further condensation of malonyl-CoA with acetyl-CoA with the release of CO2.  Studies on de novo synthesis of FA in Archae are rare. Archaeal membrane phospholipids are considered to incorporate isoprenoids instead of FA 15

The building blocks of isoprenoids (used in archaea) are universal five-carbon subunits, isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP), which are isomers. The biosynthetic pathways leading to IPP and DMAPP vary among organisms. To date, three distinct pathways are known. 17

How did the transition occur from the prebiotic synthesis of lipid precursors to the biosynthesis of fatty acids in bacteria and eukaryotes, and of isoprenoids in archaea? This is a huge step, a large gap, that remains unexplained.

Glycerol
Maheen Gull (2021): Glycerol is the structural backbone of lipid molecules (triacylglycerols). It is synthesized from sn-glycerol-3-phosphate in the presence of an enzyme called glycerol-3-phosphate phosphatase. In order to understand how life started, we need to understand the prebiotic origin of glycerol. One of the first questions for the origin/prebiotic synthesis of glycerol is the ‘site of origin’. For instance, the environments that can lead to the formation of glycerol rely on reduced carbon species (as glycerol is even more reduced in oxidation state than formaldehyde) and generally UV-rich sources for polymerization. This is in contrast to the formation of fatty acids, which has generally been considered as a product of hydrothermal systems (which are generally H2O and hydrocarbon-rich) or Fischer–Tropsch type reactions. Therefore the combination of such systems is questionable owing to the significant differences between these environments, such as pressure, temperature, and pH. The classical formose reaction has been reported to produce certain chemical derivatives of glycerol such as 2-hydroxymethyl glycerol, along with pentaerythritol. 11

The authors then cite several sources and conclude: The above-mentioned experiments under simulated astrophysical environments, i.e., very low temperatures (typically, <20 K), very low pressures (typically, <10−8 mbar), and high doses of ionizing radiation (typically, UV, extreme UV, or X-ray photons, high-energy electrons, or high-energy protons) plausibly show a universal process in space for the formation of glycerol. To this end, extraterrestrial and terrestrial sources may have both been sources of glycerol on the early Earth.

Prebiotic origin of glycerol and glycerol precursors
M. Fiore (2022): The prebiotic synthesis yields racemic glycerol phosphate mixtures, while biotic syntheses are catalyzed by specific enzymes, producing either sn-glycerol-1-phosphate (G1P, in archaea) or sn-glycerol-3-phosphate (G3P, in bacteria and eukaryotes). 39

What science papers can do is outline the differences between the synthesis of G1P in archaea and G3P in bacteria, but there is no detailed explanation of the transition from prebiotic to biotic synthesis (nor of the mechanisms and forces that promoted that transition), nor of why the divergent biosynthesis pathways in archaea and bacteria/eukaryotes emerged.

The unsolved problem of symmetry breaking from prebiotic racemic mixtures to the homochiral phospholipids used in life
Emiliano Altamura (2020): In a series of papers published in 1848, Louis Pasteur argued that the crystals, composed of the same molecules, were bearing different symmetries. When combined in what is now called a racemic mixture, the different molecules cancelled each other’s ability to rotate the direction of uniformly polarized light. At the time, Pasteur probably ignored the fact that he was giving birth to one of the major questions of natural sciences: given that racemic mixtures are produced in any achiral environment, and that both mirror-imaged molecular forms, now called enantiomers, have, to the limits of detection, exactly identical energies and reactivity, how did the biological homochiral world emerge from the primitive inanimate and achiral environment? In other words, the big question is not the appearance of chiral molecules, but how the population symmetry of dissymmetric objects was broken, that is, the fact that dissymmetric objects of the same potential energy became strongly unequally populated. 23

Victor Sojo (2014): Homochirality, the exclusive prevalence of one chemical structure over its otherwise identical mirror image or enantiomer, the single-handedness of optically asymmetric chemical structures, is present and ubiquitous in all major groups of biological macromolecules. Terrestrial life’s preference for one isomer over its mirror image in D-sugars and L-amino acids has both fascinated and puzzled biochemists for over a century. A conclusive explanation for the evolutionary origin and maintenance of homochirality is still lacking. Although the phospholipid glycerol headgroups of archaea and bacteria are both exclusively homochiral, the stereochemistries between the two domains are opposite. The question, if any, lies in why nature went in one specific direction towards L-amino acids and D-sugars, rather than the opposite.  13

The author attributes this dual homochirality to "a simple evolutionary choice". The problem with this reasoning is that (a) there was no evolution operating when that "choice" had to be made, and (b) evolution cannot be anthropomorphized; it does not make choices. Naturalistic/evolutionary explanations are entirely inadequate.
  
Sojo continues: The carbonyl center of dihydroxyacetone phosphate (DHAP), from which both G1P and G3P are formed, is prochiral: hydrogenation from one side of the double bond produces G1P, while reacting from the opposite side gives G3P. At the atomic level, the amino acids of the active site of G3PDH face the pro-S hydrogen of NADH, whereas the G1PDH active site has been recently reported to exhibit a pro-R geometry. The idea of a nonstereospecific GP-synthase is difficult to reconcile with biochemical knowledge of the enzymes that catalyze these reactions.

Phospholipid biosynthesis is thus very different between bacteria and archaea.

The cell machinery is programmed to synthesize the glycerol moiety in either a right- or a left-handed configuration. How was the gap bridged between meteorites delivering glycerol-bearing organic compounds to the early Earth and the complex biosynthesis performed by sophisticated enzyme machines? Victor Sojo: Free-solution chemistry is not directly comparable to enzymatic catalysis. The only satisfying answer seems to be that the machinery was designed from scratch. Life started fully developed, in the hands of the designer, who perhaps knew that homo sapiens evolutionis scientificus was coming and, in order to leave him with question marks over his Darwinian theory, gave archaea and bacteria opposite lipid chiral directionality.

Emiliano Altamura (2020): Although the preparation of enantiopure phospholipid esters has been extensively reviewed during the past forty years, to the best of our knowledge, no large-scale synthesis of racemic phospholipids has ever been reported. Generally speaking, here we have concluded that racemic and scalemic (enantioenriched) lipids, in particular POPC (phosphatidylcholine, an important phospholipid for biophysical experiments), form stable membranes essentially as well as homochiral lipids do. 23

Why are phospholipid membranes homochiral?
The biological significance of this phenomenon, an entrenched general sign of living matter, is rarely discussed. 25 There must be a logic behind the fact that cell membranes are chiral. In mammalian cells, chiral recognition is a factor in mediating cell viability. 24

John Harden (2009): Chiral lipids display piezoresponses while their racemic mixture does not. It demonstrates an important role played by lipid chirality in lyotropic phases and in membranes: it makes lamellar lyotropic phases piezoelectric. 26

Prebiotic origin of glycerol phosphates
Maheen Gull  (2021): Glycerol phosphates (GP) play a central role in modern biochemistry. These compounds are directly associated with crucial life processes, such as cellular respiration and cell structure. For a better understanding of the origin of early membranes, it is essential to understand the prebiotic syntheses of GP, which also are critical to the synthesis of phospholipids, an essential component of cell membranes in almost all organisms . GP links via a phosphate diester bond to form a ‘head group’ that is the polar/hydrophilic part of the phospholipid molecule.  Prebiotic syntheses of GP have been reported previously by using ammonium phosphates to phosphorylate glycerol with condensation agents at 85 °C, under simulated hydrothermal conditions and by using various minerals and clays as catalysts by employing various non-aqueous solvents, by using high energy phosphates such as amidophosphates, and by the formation of activated phosphate, e.g., imidazole phosphate, which then reacts with the organic compounds. In addition, the syntheses of GP from the meteoritic mineral schreibersite have been reported.

The above-mentioned methods face challenges, such as the use of non-aqueous solvents that may not have been prebiotically prominent, high-energy conditions that may degrade organic substrates, and the use of high-energy phosphates, which are uncommon in the rock record. 42

From prebiotic synthesis to the biotic synthesis of glycerol phosphates
Maheen Gull (2021): In bacteria and eukaryotes, glycerol kinase (GK) catalyzes the synthesis of G3P from glycerol; there are two biosynthetic pathways to obtain G3P, from either DHAP or glycerol. In archaea, G1P synthesis is catalyzed by G1P dehydrogenase (G1PDH); there is only one pathway to obtain G1P, from DHAP.

Prebiotic origin of phospholipids
Juli Peretó (2004): The origin of cell membranes is a major unresolved issue. 47 Origin-of-life investigators have tried to explain the prebiotic origin of the required compounds and how they could assemble into amphiphile bilayers, so that the first cell membranes could self-assemble unguided and host the building blocks needed to kick-start life. There have been several hypotheses invoking extraterrestrial sources, such as carbonaceous chondrites and asteroids. Most of these proposals are oversimplified, and large explanatory gaps remain. In cells, membranes are generated from other membranes, never created from scratch. Usually the hypotheses run from simple to complex: first simple lipid droplets, then micelles, and finally closed bilayer vesicles. Fatty acids usually form micellar structures, while phospholipids, as bilayer-formers, yield more stable vesicles than fatty acids. 38

How could one go from simple chemical compounds and their self-assembly to the complex biosynthesis pathways, requiring multiple sophisticated enzymes, that diverge in the three domains of life? There is also the fact that there was no prebiotic selection process for enantiomer handedness. If there was a cenancestor with a heterochiral membrane, how and why did a transition occur (racemic → scalemic → enantiopure) toward divergent chiral forms in bacteria and archaea?

Michele Fiore (2016): An implicit assumption behind this analysis is that prebiotically formed amphiphiles (“pre-Darwinian” amphiphiles, before proto-cellular replication set in), which assemble into membranes and close into semi-permeable boundaries of vesicular compartments (with a void volume inside), must be racemic (if chiral). 28

Sean F. Jordan (2018): Phospholipids are arguably too complex to have been formed via prebiotic chemical syntheses. 18

David W. Deamer (2010): Phospholipids spontaneously form bilayer vesicles having dimensions in the range of bacterial cells. Lipid bilayer vesicles are commonly referred to as liposomes, and such self-assembled membrane structures can be used as models of the earliest cell membranes. A variety of membranous structures can also be prepared from single-chain amphiphiles such as fatty acids. Such vesicles are plausible models for the formation of early cellular compartments. An important aspect of this argument is that the prebiotic availability of such amphiphiles has been established. Carbonaceous meteorites contain a rich mixture of organic compounds that were synthesized abiotically in the early solar system, and this mixture can be used as a guide to the kinds of organics likely to be available on the early Earth, either delivered during late accretion or synthesized at the Earth’s surface. 19

Drake Lee  (2018): The origin of fatty acids on the prebiotic Earth is important as they likely formed the encapsulating membranes of the first protocells. Carbon-rich meteorites (i.e., carbonaceous chondrites) such as Murchison and Tagish Lake are well known to contain these molecules, and their delivery to the early planet by intense early meteorite bombardments constitutes a key prebiotic source. We collect the fatty acid abundances measured in various carbonaceous chondrites from the literature and analyze them for patterns and correlations. Fatty acids in meteorites include straight-chain and branched-chain monocarboxylic and dicarboxylic acids up to 12 carbons in length—fatty acids with at least 8 carbons are required to form vesicles, and modern cell membranes employ lipids with ∼12–20 carbons.  Straight-chain monocarboxylic acids (SCMA) are the dominant fatty acids in meteorites, followed by branched-chain monocarboxylic acids (BCMA). Vesicles can be composed of a single fatty acid type as short as 8 carbons in length. Meteorites contain fatty acids 2–12 carbons in length. Therefore vesicles could indeed form directly out of meteorite-delivered fatty acids. 22
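The overlap between meteoritic chain lengths and the requirements just cited can be sketched as a simple range comparison; the ranges come from the quote, and the comparison itself is merely illustrative:

```python
# Lee (2018): meteoritic fatty acids span 2-12 carbons; vesicle formation
# requires at least 8 carbons; modern membranes use lipids of ~12-20 carbons.
meteoritic = range(2, 13)  # C2 through C12

vesicle_forming = [n for n in meteoritic if n >= 8]
modern_overlap = [n for n in meteoritic if 12 <= n <= 20]

print(vesicle_forming)  # [8, 9, 10, 11, 12]
print(modern_overlap)   # [12]
```

So while vesicles could form from the longer meteoritic chains, only the very longest of them (C12) even touches the range modern cell membranes employ.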

Benoit E. Prieur (1995): Of all the questions pertaining to the origins of life, the prebiotic synthesis of fatty acids has given scientists the most difficulty. The chemistry is not easy, but we do know that any prebiotic synthesis would have to be simple, fast, and possible in vast quantities. 27

Prebiotic Synthesis of Complete and Incomplete Phospholipids
Michele Fiore (2016): The prebiotic synthesis of phospholipids can be divided into two steps: first, the formation of incomplete lipids (ILs), the critical step being the acylation of glycerol, and second, the phosphorylation of these into complete lipids (CLs). Both the acylation of glycerol and the phosphorylation are condensation reactions, which require the elimination of one molecule of water. 28

Prebiotic phospholipid bond formation
Libretext: The fatty acids are attached to the glycerol at the 1 and 2 positions on glycerol through ester bonds. The third oxygen on glycerol is bonded to phosphoric acid through a phosphate ester bond (oxygen–phosphorus double-bond oxygen). In addition, there is usually a complex amino alcohol also attached to the phosphate through a second phosphate ester bond. The phosphate group has a negatively charged oxygen and a positively charged nitrogen, making this group ionic. In addition, there are other oxygens of the ester groups, which make one whole end of the molecule strongly ionic and polar. 31

Sutter M (2015): Phospholipid ethers are complex molecules, and their synthesis in the laboratory requires several steps, including protections and deprotections of the glycerol backbone and the polar head. 40 Evidently, no such conditions existed on the early Earth.

No prebiotic explanation for the origin of complete lipids (CLs)
Michele Fiore (2016):  One of the challenges in this field is to discover plausible reaction pathways that allow the synthesis of complete lipids (CLs) from simple polyols (glyceraldehyde or glycerol), long alkyl chains (primary alkanols or fatty acids), in the presence of a reactive phosphorous source. An important approach for establishing an evolvable chemical system is to supply a population of vesicles with amphiphilic components that insert into the membrane of existing vesicles, leading to vesicle growth and division, thus to the growth in population size and an evolution of “shape replicating” compartments (vesicles). To achieve this, the amphiphiles that are supplied should have a critical vesicle concentration (cvc) similar or somewhat higher than that of the amphiphiles composing the vesicles. Once inserted, the added amphiphiles, if chemically different from those in the vesicles, should eventually be transformed into “first generation” amphiphiles without diffusing out of the vesicles. Otherwise, they would form a separate set of de novo vesicles upon chemical transformation.

This is a major hurdle in the evolutionary transition from fatty acid vesicles to phospholipid vesicles, which requires esterification of fatty acids with, for example, phosphoglycerol. Fatty acids need 10^5-fold higher minimal concentrations to form vesicles than phospholipids, and the average residence time of fatty acids in membranes is much shorter than that of phospholipids. As a result, any chemical reaction involving fatty acids would take place outside the vesicles, thereby interrupting the evolution of the parent vesicles’ contents. 28
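To convey the scale of that concentration problem, here is a minimal numeric sketch; the baseline phospholipid critical vesicle concentration (cvc) is an assumed order of magnitude chosen for illustration, and only the 10^5-fold ratio comes from the quoted text:

```python
# Illustrative only: phospholipid cvc is an ASSUMED order of magnitude;
# the 1e5-fold ratio is the figure given by Fiore (2016).
phospholipid_cvc = 1e-8                  # mol/L, assumed for illustration
fatty_acid_cvc = phospholipid_cvc * 1e5  # 10^5-fold higher minimal concentration

print(fatty_acid_cvc)  # 0.001 mol/L
```

Whatever the exact baseline, the ratio means fatty acids must be present at concentrations five orders of magnitude above what phospholipids require before vesicles form at all.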

The transition from the prebiotic to biotic synthesis and formation of phospholipid cell membranes
Gáspár Jékely (2006) asks: Did the last common ancestor have a biological membrane? The last common ancestor was associated with a hydrophobic layer with two hydrophilic sides (an inside and an outside) that had a full-fledged and asymmetric protein insertion and translocation machinery and served as a permeability barrier for protons and other small molecules. It is difficult to escape the conclusion that the last common ancestor had a closed biological membrane from which all cellular membranes evolved. The universal presence of two transmembrane proteins, the F0F1-ATPase and SecY, seems to suggest that the universal ancestor was a membrane-bound cell. 34

Kepa Ruiz-Mirazo (2013): The structure of most lipids and surfactant compounds (e.g., phospholipids, glycolipids, cholesterol, etc.) is in general quite complex, and the probability that they were formed prebiotically seems rather low. It is considered very improbable that fatty acids, glycerol, and phosphate (i.e., the standard molecular components of a phospholipid) could have been present together in high enough concentrations on the primordial Earth. In living organisms, cellular division occurs very regularly, after a growth phase, but this is a genetically controlled process, which relies on a complex membrane of diverse composition and, once more, on a suite of concerted macromolecular mechanisms in action. 36

The degradation problem
Michele Fiore (2016): Lipids are chemically and thermally relatively labile over geological timescales. Extracts from the remnants of extraterrestrial objects that entered the Earth’s atmosphere (meteorites), or from samples taken by a lander instrument (on planets, moons, asteroids, and comets) are expected to contain at best degradation products of lipids, viz. alkanes, long-chain alcohols, polyols, and carboxylic acids.

Three chemically distinct starting ingredients were prerequisites: (a) a source of long-chain “fatty” acids, aldehydes, or alcohols; (b) a polyol scaffold like glycerol that can bear one or two lipophilic chains; and (c) a source of phosphate, such as inorganic orthophosphate or glycerophosphate, for the direct synthesis of complete lipids (CLs). 28


1. Steven A. Benner: Planetary Organic Chemistry and the Origins of Biomolecules 2010 Jul; 2
2. Geoffrey Zubay: Origins of Life on the Earth and in the Cosmos  2000
3. J Oró: The origin and early evolution of life on Earth 1990
4. Norio Kitadai: Origins of building blocks of life: A review  2017
5. Martin M Hanczyc:  Primordial membranes: more than simple container boundaries 2017
6. Carl Sagan: Endogenous production, exogenous delivery and impact-shock synthesis of organic molecules: an inventory for the origins of life 09 January 1992
7. Daniel P.Glavin: Chapter 3 - The Origin and Evolution of Organic Matter in Carbonaceous Chondrites and Links to Their Parent Bodies 2018
8. ScienceDaily: How life arose on Earth: Researchers brew up organics on ice September 18, 2012
9. Murthy S. Gudipati: IN-SITU PROBING OF RADIATION-INDUCED PROCESSING OF ORGANICS IN ASTROPHYSICAL ICE ANALOGS—NOVEL LASER DESORPTION LASER IONIZATION TIME-OF-FLIGHT MASS SPECTROSCOPIC STUDIES 2012 August 17
10. David Deamer: The Role of Lipid Membranes in Life’s Origin 17 January 2017
11. Maheen Gull: The Role of Glycerol and Its Derivatives in the Biochemistry of Living Organisms, and Their Prebiotic Origin and Significance in the Evolution of Life 10 January 2021
12. Lena Vincent: The Prebiotic Kitchen: A Guide to Composing Prebiotic Soup Recipes to Test Origins of Life Hypotheses 11 November 2021
13. Victor Sojo: On the Biogenic Origins of Homochirality 27 November 2014
14. Jonathan Lombard: The early evolution of lipid membranes and the three domains of life  2012 Jun 11
15. Carla C. C. R. de Carvalho: The Various Roles of Fatty Acids 2018 Oct; 23
16. Yosuke Koga: Thermal adaptation of the archaeal and bacterial lipid membranes 2012 Aug 15.
17. Samta Jain: Biosynthesis of archaeal membrane ether lipids 2014 Nov 26
18. Sean F. Jordan: Isoprenoids enhance the stability of fatty acid membranes at the emergence of life potentially leading to an early lipid divide 18 October 2019
19. David W. Deamer: Membrane Self-Assembly Processes: Steps Toward the First Cellular Life 13 October 2010
20. Eugene V. Koonin  On the origin of genomes and cells within inorganic compartments 2005 Dec; 21
21. Dr. Peter Reilly: Biosynthesis of Fatty Acids  2021
22. Drake Lee: Meteoritic Abundances of Fatty Acids and Potential Reaction Pathways in Planetesimals Sept. 24, 2018
23. Emiliano Altamura: Racemic Phospholipids for Origin of Life Studies 3 July 2020
24. Kohei Sato: Chiral Recognition of Lipid Bilayer Membranes by Supramolecular Assemblies of Peptide Amphiphiles May 31, 2019
25. Ekaterina V. Malyshko: Chiral Dualism as a Unifying Principle in Molecular Biophysics 8 February 2021
26. John Harden: Chirality of lipids makes fluid lamellar phases piezoelectric  2009 Jan 7.
27. Benoit E. PRIEUR: ORIGIN of FATTY ACIDS  1995
28. Michele Fiore: Prebiotic Lipidic Amphiphiles and Condensing Agents on the Early Earth 28 March 2016
29. Libretext: Lipids
30. Libretexts: Components and Structure - Membrane Fluidity
31. Libretext: Phosphoglycerides or Phospholipids
32. Arnold J. M. Driessen: Biosynthesis of archaeal membrane ether lipids  26 November 2014
33. Stephanie Ballweg: Control of membrane fluidity: the OLE pathway in focus October 27, 2016
34. Gáspár Jékely: Did the last common ancestor have a biological membrane? 2006 Nov 27
35. Daniel Segré: The Lipid World February 2001
36. Kepa Ruiz-Mirazo: Prebiotic Systems Chemistry: New Perspectives for the Origins of Life October 31, 2013
37. David Deamer: The Role of Lipid Membranes in Life’s Origin   17 January 2017
38. Augustin Lopez: Chemical Analysis of Lipid Boundaries after Consecutive Growth and Division of Supported Giant Vesicles 2020 Nov 20
39. Michele Fiore: Synthesis of Phospholipids Under Plausible Prebiotic Conditions and Analogies with Phospholipid Biochemistry for Origin of Life Studies 10 May 2022
40. Marc Sutter: Glycerol Ether Synthesis: A Bench Test for Green Chemistry Concepts and Technologies July 21, 2015
41. Dr. Peter Reilly: Biosynthesis of Fatty Acids  2021
42. Maheen Gull: Catalytic Prebiotic Formation of Glycerol Phosphate Esters and an Estimation of Their Steady State Abundance under Plausible Early Earth Conditions 17 November 2021
43. Robert Ernst: Homeoviscous Adaptation and the Regulation of Membrane Lipids 4 December 2016
44. Natalia Soledad Paulucci: Membrane Homeoviscous Adaptation in Sinorhizobium Submitted to a Stressful Thermal Cycle Contributes to the Maintenance of the Symbiotic Plant–Bacteria Interaction 17 December 2021
45. Doris Berchtold: TOR complex 2 regulates plasma membrane homeostasis Mai 2012
46. Robert Shapiro: A simpler origin for life 2007 Jun;2
47. Juli Peretó: Ancestral lipid biosynthesis and early membrane evolution 2004 Sep;29
48. Eugene V. Koonin: Inventing the dynamo machine: the evolution of the F-type and V-type ATPases November 2007



Last edited by Otangelo on Tue Jun 28, 2022 2:28 pm; edited 22 times in total


What came first: Lipid membranes, or membrane proteins?
Eugene V. Koonin (2009): A topologically closed membrane is a ubiquitous feature of all cellular life forms. This membrane is not a simple lipid bilayer enclosing the innards of the cell: far from that, even in the simplest cells, the membrane is a biological device of a staggering complexity that carries diverse protein complexes mediating energy-dependent – and tightly regulated - import and export of metabolites and polymers. Despite the growing understanding of the structural organization of membranes and molecular mechanisms of many membrane proteins, the origin(s) of biological membranes remain obscure. 7

Armen Y. Mulkidjanian (2010): The origins of membrane proteins are inextricably coupled with the origin of lipid membranes. Indeed, membrane proteins, which contain hydrophobic stretches and are generally insoluble in water, could not have evolved in the absence of functional membranes, while purely lipid membranes would be impenetrable and hence useless without membrane proteins. The origins of biological membranes – as complex cellular devices that control the energetics of the cell and its interactions with the surrounding world – remain obscure. 8

Eugene V. Koonin: The origin of the cellular membrane itself seems to involve a catch-22: for a membrane to function in a cell, it must be endowed with at least a minimal repertoire of transport systems but it is unclear how such systems could evolve in the absence of a membrane. 6

The challenge to start harvesting energy
Geoffrey Zubay (2000):  Metabolism depends on factors that are external to the organism. The living system must extract nutrients from the environment and convert them to a biochemically useful form. In the next phase of the metabolism, which is internal, small molecules are synthesized and degraded. 19

Jeremy England (2020), EVERY LIFE IS ON FIRE: How Thermodynamics Explains the Origins of Living Things: A spring must first be brought to a compressed state, ready to burst apart forcefully when properly triggered. When glass and dishes are thrown to the ground, the stored energy is released, but they get smashed, broken, or damaged. Accordingly, people can eat sugar, but not dynamite; plants love sunlight, but not intense gamma rays. Life needs access to energy, but it has to absorb it in specific ways that are conducive to activating “healthy” motions while avoiding “unhealthy” ones. To get a little more technical, it helps to remember that living things are in highly specialized, exceptionally rare configurations of their constituent parts that would not easily be discovered by a random and unbiased search of the space of their possible arrangements. 17

Addy Pross (2012): Organized complexity and one of the most fundamental laws of the universe—the Second Law of Thermodynamics—are inherently adversarial. Nature prefers chaos to order, so disorganization is the natural order. Within living systems, however, the highly organized state that is absolutely essential for viable biological function is somehow maintained with remarkable precision. The living cell is able to maintain its structural integrity and its organization through the continual utilization of energy, which is in fact part of the cell’s modus operandi. So there is no thermodynamic contradiction in life’s organized high-energy state, just as there is no contradiction in a car being able to drive uphill in opposition to the Earth’s gravitational pull, or a refrigerator maintaining a cool interior despite the constant flow of heat into that interior from the warmer exterior. Both the car driving uphill and the refrigerator with its cold interior can maintain their energetically unstable states through the continual utilization of energy. In the car’s case, the burning of gasoline in the engine is the energy source, while in the case of the refrigerator, the energy source is the electricity supply that operates the compressor. In an analogous manner, the body can maintain its highly organized state through the continual utilization of energy from some external source—the chemical energy inherent within the foods we eat or, in the case of plants, the solar energy captured by the chlorophyll pigment found in all plants. But how the initial organization associated with the simplest living system came about originally is a much tougher question. 15

ATP - the Miracle molecule
Geoffrey Zubay (2000): The compound adenosine triphosphate (ATP) is the main source of chemical energy used by living systems. Through hydrolysis, ATP is converted into adenosine diphosphate (ADP) and inorganic phosphate ion (Pi), and in the process a great deal of free energy is made available to drive other reactions. 19

Daniel Zuckerman: ATP is the most important energized molecule in the cell. ATP is an activated carrier that stores free energy because it is maintained out of equilibrium with its hydrolysis products, ADP and Pi. There is a strong tendency for ATP to become hydrolyzed (split up) into ADP and Pi, and any process that couples to this reaction can be "powered" by ATP - even if that process would be unfavorable on its own. 14

Libretext: ATP is an unstable molecule that hydrolyzes to ADP and inorganic phosphate when it is in equilibrium with water. The high energy of this molecule comes from the two high-energy phosphate bonds. The bonds between phosphate molecules are called phosphoanhydride bonds. They are energy-rich and contain a ΔG of -30.5 kJ/mol. 12

Yijie Deng (2021): Adenosine triphosphate (ATP) is the key energy source for all living organisms, essential to fundamental processes in all cells, from metabolism to DNA replication and protein synthesis. Cellular power consumption varies significantly, from approximately 0.8 and 0.2 million ATP/s for a tested strain during the lag and stationary phases, respectively, to 6.4 million ATP/s during the exponential phase, indicating ~8–30-fold changes in metabolic rate among different growth phases. Bacteria turn over their cellular ATP pool a few times per second during the exponential phase and slow this rate by ~2–5-fold in the lag and stationary phases. 3
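The fold-change figures in the Deng quote can be checked with simple arithmetic. The following snippet is only an illustrative sanity check, using nothing but the numbers quoted above:

```python
# ATP consumption rates quoted from Deng (2021), in molecules per second per cell
exponential = 6.4e6  # exponential phase
lag = 0.8e6          # lag phase
stationary = 0.2e6   # stationary phase

# Fold-change of metabolic rate between the fastest and slowest growth phases
print(exponential / lag)         # -> 8.0  (lower bound of the quoted ~8-30-fold range)
print(exponential / stationary)  # -> 32.0 (upper bound, quoted as ~30-fold)
```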

ATP was not extant prebiotically, so there had to be a trajectory from non-ATP energy sources to ATP. Prebiotic energy sources included sunlight, chemical compounds, electric discharges, cosmic rays, radioactivity, volcanoes, steam vents, and hydrothermal vents.


Cell membranes, proton gradients, and the origin of life
Every factory needs energy to do its work and fuel for its machines to operate. So does the living chemical cell factory. It was a long road for researchers to discover how cells tackle this problem and make their energy. It turns out that the solution is unusual and unexpected: a masterpiece of sophisticated engineering.

Leslie E. Orgel (1999): One day a young scientist unknown to me made an appointment to talk about a theoretical matter that he thought would interest me. He was Peter Mitchell, and he wanted to talk about his ideas on how living cells derive energy: his novel chemiosmotic hypothesis. According to Mitchell's ideas, metabolic energy was used to pump protons across a biological membrane thus establishing a concentration gradient. It was the return of protons down the gradient that led to the synthesis of ATP. Energy could, in principle, be obtained by transporting an ion across a membrane from a more concentrated to a less concentrated solution. His ideas seemed bizarre to most of his contemporaries. They might well have asked, “Are you serious, Dr. Mitchell?” Of course, he was and he was right. 13

Alicia Kowaltowski (2015): Peter Mitchell was awarded the 1978 Nobel Prize in Chemistry for his discovery of the chemiosmotic mechanism of ATP synthesis, a hypothesis he first published in 1961. All groups of lifeforms present today have the genes necessary to build ATP synthases. 18

Kevin Drum (2016): A proton gradient is a complex and highly unusual way of providing energy, but it’s also nearly universal in modern life, suggesting that it goes back to the very beginnings of life. But if it’s so unusual, how did it get its start? 4

Nick Lane (2017): Chemiosmotic coupling – the harnessing of electrochemical ion gradients across membranes to drive metabolism – is as universally conserved as the genetic code. It requires not only a rotor-stator ATP synthase but also (apparently) ion-tight lipid membranes and complex proton pumps to generate electrochemical ion gradients. The claim that the last universal common ancestor had chemiosmotic coupling is treated with reservation. All that might seem too complex to be primitive, and so it is understandable that most researchers have put the vexed question of its origins aside until more is known. Nonetheless, the fact remains that the ATP synthase is as universally conserved across life as the ribosome itself, and shares the same deep split between the bacteria and archaea. Some form of chemiosmotic coupling probably evolved very early in the history of life, arguably before LUCA; the question is how, and why? 5

Nick Lane (2010): Proton gradients are strictly necessary to the origin of life. The proton gradients that power respiration are as universal as the genetic code itself, giving an insight into the origin of life and the singular origin of complexity. There is a proton gradient across a membrane. It works much like a hydroelectric dam. The energy released by the oxidation of food (via a series of steps) is used to pump protons across a membrane — the dam — creating, in effect, a proton reservoir on one side of the membrane. The flow of protons through amazing protein turbines embedded in this membrane powers the synthesis of ATP in much the same way that the flow of water through mechanized turbines generates electricity. The flow of protons through the membrane turbines rotates the stalk of the ATP synthase, and the conformational changes induced by this rotation catalyze ATP synthesis.  How do bacteria keep their insides different from the outside? Membrane proteins can create gradients across a membrane, and these gradients can in turn power work. Although cells can generate sodium, potassium, or calcium gradients, proton gradients rule supreme. Protons power respiration not only in mitochondria but also in bacteria and archaea. Proton gradients are equally central to all forms of photosynthesis, as well as to bacterial motility (via the famous flagellar motor, a rotary motor similar to the ATP synthase) and homeostasis (the import and export of many molecules in and out of the cell is coupled directly to the proton gradient). 1

Lane comes to the conclusion: The idea that LUCA was chemiosmotic is not actually particularly challenging, as LUCA certainly had genes and proteins, and the ATP synthase is no more complex than the ribosome. It is a product of natural selection, and presumably the recruitment of subunits with pre-existing functions. 5 He references Eugene Koonin's paper published in 2007, where Koonin writes: We propose that these ATPases originated from membrane protein translocases, which, themselves, evolved from RNA translocases. We suggest that in these ancestral translocases, the position of the central stalk was occupied by the translocated polymer. 6

Is it plausible to believe that RNA translocases would be the product of prebiotic non-designed processes?

Effrosyni Papanikou (2007): The Sec machinery is essential for life. All cells must traffic proteins across their membranes. This essential process is responsible for the biogenesis of membranes and cell walls, motility, and nutrient scavenging and uptake. The translocase is an impressively dynamic nanomachine, the central component that catalyzes transmembrane crossing. This complex, multi-stage reaction involves a cascade of inter- and intramolecular interactions that select, sort and target polypeptides to the membrane, and use energy to promote the movement of these polypeptides across — or their lateral escape and integration into — the phospholipid bilayer, with high fidelity and efficiency. Metabolic energy in the form of both ATP and the proton motive force is used to power pre-protein movement through the translocase machine. 10

ATP synthase, which makes ATP, is thus explained by evolution from a translocase, which itself requires ATP in order to be made. That is a catch-22 situation.

Is a transition from a proton gradient in inorganic compartments in hydrothermal vents to membrane-based proton gradients a plausible hypothesis? It does not seem so. Here is why:

Inorganic compartments versus membrane-bounded cells as the means for confining the LUCA 
Eugene V. Koonin (2005): It has been repeatedly argued that the complex molecular composition inferred for LUCA could not have been attained without prior evolution of biogenic-membrane-bounded cells, mainly because (i) compartmentalization is a prerequisite for the evolution of any complex system; and (ii) certain key membrane-associated enzymes, such as the signal recognition particle (SRP) and the proton ATPase, are conserved in eubacteria and archaebacteria. The model of a compartmentalized, but inorganically confined LUCA obviates the first problem. However, the second problem – the conservation of certain membrane-associated functions in all modern forms of life – is more challenging. The ubiquity of the SRP (with its notable RNA component) and the proton ATPase across genomes, together with the clear split between archaebacterial–eukaryotic and eubacterial versions, suggests that these complexes were present in LUCA. Because the SRP inserts proteins into hydrophobic layers and ATPase requires a hydrophobic layer to function, this would seem to imply the existence of membranes in LUCA, apparently in contradiction to arguments concerning the late and independent emergence of lipid biosynthetic pathways. The essential distinction to be made is between a ‘hydrophobic layer’ and a ‘biogenic membrane’. The latter requires elaborate suites of lineage-specific enzymes (given the unrelated isoprene ether versus fatty acid ester chemistries of the membrane lipids in archaebacteria and eubacteria, respectively). 9

J. Baz Jackson (2016):  The hypothesis that a natural pH gradient across inorganic membranes lying between the ocean and fluid issuing from hydrothermal alkali vents provided energy to drive chemical reactions during the origin of life has an attractive parallel with chemiosmotic ATP synthesis in present-day organisms. However, such natural pH gradients are unlikely to have played a part in life’s origin. There is as yet no evidence for thin inorganic membranes holding sharp pH gradients in modern hydrothermal alkali vents at Lost City near the Mid-Atlantic Ridge. Proposed models of non-protein forms of the H+-pyrophosphate synthase that could have functioned as a molecular machine utilizing the energy of a natural pH gradient are unsatisfactory. Some hypothetical designs of non-protein motors utilizing a natural pH gradient to drive redox reactions are plausible but complex, and such motors are deemed unlikely to have assembled by chance in prebiotic times. Small molecular motors comprising a few hundred atoms would have been unable to function in the relatively thick (>1 μm) inorganic membranes that have hitherto been used as descriptive models for the natural pH gradient hypothesis. 11

Tan, Change; Stadler, Rob: The Stairway To Life (2020): The cell uses the proton gradient to charge “batteries” such as adenosine triphosphate (ATP), hence the “coupling” part of chemiosmotic coupling. ATP is a nearly universal battery in life. Once charged, it can be “plugged into” a wide variety of molecular machines to perform a wide variety of functions: activating amino acids for protein synthesis, copying DNA, untangling DNA, breaking bonds, transporting molecules, or contracting muscles. Like rechargeable batteries, ATP cycles frequently between powering gadgets and recharging. Although a human body contains only about sixty grams of ATP at any given moment, it is estimated that humans regenerate approximately their own weight in molecules of ATP every day. Because chemiosmotic coupling is essential for life and is highly conserved across all of life, abiogenesis must include a purely natural means to arrive at chemiosmotic coupling. This requires a membrane, a mechanism for pumping protons across the membrane, and a mechanism for producing or “recharging” ATP. The challenge is particularly onerous because these three components are highly complex in all of life and are interdependent to provide energy for life. In other words, the pumping of protons is of no use unless the membrane is there to maintain a gradient of protons. The membrane has no function for energy generation unless there is a mechanism for pumping protons across it. Similarly, the method of ATP production is of no use without a proton gradient across a membrane. 2
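The turnover claim in the Tan and Stadler quote is easy to verify with rough arithmetic. In the snippet below, the 60 g ATP pool comes from the quote, while the 70 kg body mass is my own illustrative assumption:

```python
# Rough check of the "own body weight in ATP per day" claim quoted above.
atp_pool_kg = 0.060     # ~60 g of ATP present in the body at any moment (from the quote)
body_mass_kg = 70.0     # assumed adult body mass (illustrative, not from the quote)
seconds_per_day = 24 * 3600

recharge_cycles = body_mass_kg / atp_pool_kg      # ADP -> ATP recharges of the pool per day
print(round(recharge_cycles))                     # -> 1167 cycles per day
print(round(seconds_per_day / recharge_cycles))   # -> 74 (each ATP recycled roughly every 74 s)
```

So a body mass of ATP per day requires the small standing pool to be recharged on the order of a thousand times daily, consistent with the "rechargeable battery" picture in the quote.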

Rob Stadler (2022): The energy density required by life is about 100,000,000 times that which can be produced by the pH gradients of the vents. The small compartments in the rock structure have “membranes” that are far too thick for energy harnessing. And they would still require complex molecular machinery to make use of the free pH gradient. Energy harnessing in even the simplest forms of life requires extreme complexity and exhibits circular causality. Advocates for abiogenesis desperately seek to sidestep this complexity, but their best approach thus far requires placing blind faith in the wonders of natural selection. 23

Michael Marshall (2020): The biggest problem for the alkaline vent hypothesis is its most unique element, which at first sight seems the most convincing: the idea that a natural proton gradient could supply the energy to kick-start metabolism. This idea is a brilliant intuitive leap, but there is no experimental evidence. All life does use proton gradients, but all life also uses ribosomes and nobody thinks ribosomes were present at the very beginning. The problem is twofold. First, we do not know that there are sharp proton gradients within alkaline vents like Lost City. Instead, alkali may slowly blend into acid over the length of each chimney, in which case the proton gradient will be too gentle to generate useful power. Second, the enzymes that life uses, including the one that makes ATP, are big and complex. So far, nobody has found a simpler version that works and could plausibly have formed. This absence is glaring, just as the lack of a self-replicating RNA has been a problem for the RNA World. In the last few years Russell has tried to solve this problem. It seems unlikely that the first life used ATP itself, as the adenosine part of the molecule is elaborate. However, the key part is the chain of phosphates, and these ‘polyphosphate’ chains may simply have formed on their own. Indeed, Harold Morowitz pointed out in 1992 that many microorganisms make polyphosphates and used them to store chemical energy. Russell now suspects that the first life used the simplest possible polyphosphate: pyrophosphate, which is simply two phosphates strung together. To integrate pyrophosphate into his scheme, he has abandoned the idea of iron sulfide bubbles. ‘There were a lot of people who loved that because it looked like a cell,’ he says. However, he now thinks the pores in the rocks of alkaline vents were lined with multiple thin layers of ‘green rust’. Most of us have seen green rust, for instance on old iron ships that have long been exposed to seawater. 
It is a compound of iron, hydrogen, oxygen, and other chemicals, and Russell found it often formed when he tried to simulate conditions in the vents. He posits that these layers of green rust within the rock pores were the first cell membranes. This may seem an odd addition – surely the pores themselves were suitable containers for proto-life? – but Russell thinks the film of green rust could have been how the first life harnessed proton gradients to make pyrophosphate, without an enzyme. His proposal is that the proton gradient over the green rust membrane pulled phosphates and protons into narrow gaps in the green rust crystals, where they fused to form pyrophosphate. This was then released into the gaps between green rust layers, where it drove the synthesis of other biological molecules. It is an ingenious idea, which he is now trying to test. ‘If we can’t show in three years that that works, then we’re in dead trouble,’ he says. However, Russell is facing a new obstacle. In 2019 he lost his long-standing position at NASA’s Jet Propulsion Laboratory, and he is now living in Italy. He’s trying to get the experiments done at European Universities. Meanwhile, genetics has yielded a startlingly powerful piece of evidence in favor of the alkaline vent hypothesis.

In 2016, Martin’s team published a detailed reconstruction of the Last Universal Common Ancestor (LUCA) from which all modern organisms are descended. They did so by examining the genes of 1930 microorganisms, searching for genes that they all shared – which probably existed in LUCA. This was not easy, because microbes sometimes take a gene from an unrelated species; a process called horizontal gene transfer. This can make a gene appear universal and ancestral, when it actually evolved later and then spread. After the team had cleaned up the data as best they could, they were left with 355 gene families that seemingly existed in LUCA. These suggested that LUCA lived somewhere hot – which is compatible with an alkaline vent but doesn’t prove it – and that it used the Wood–Ljungdahl pathway to make biological molecules, as Martin predicted. Furthermore, it seems LUCA had the equipment to harness a proton gradient, but not to generate one – which fits with the idea that it relied on a natural proton gradient in a vent. This latter finding is striking, but must be taken with a pinch of salt because of the horizontal gene transfer problem. The alkaline vent hypothesis is beautiful and detailed and lines up with microbiology. But that doesn’t make it true. Plenty of beautiful, plausible seeming ideas have turned out to be wrong. It is not at all clear that the hypothesis can surmount its many problems. However, several of its key elements are so compelling that the true theory must surely incorporate them, or find some other solution to the problems they address. A source of chemical energy to fuel metabolism is obviously crucial, but possibly so is the ability to harness, or even generate, a proton gradient. Finally, it is striking that the hypothesis attempts to make two of the components of life – a metabolic cycle and a compartment – at once. It is this more holistic approach that is arguably most significant. 
Rather than trying to do everything with RNA, or with proteins, Russell’s idea endeavors to build something that looks more like a complete cell. In this, if nothing else, it hints at a better explanation for life on Earth. In the twenty-first century, many researchers followed Russell’s example and stopped trying to do everything with one kind of chemical. Instead, they would find ways to create all of life’s components at once. Even if Russell’s hypothesis turns out to be wrong, his work clearly foreshadowed this new approach. 24

Serpentinization
M. J. RUSSELL (2010): The alkaline nature of hydrothermal effluent in serpentinizing systems creates naturally formed pH and redox gradients across the precipitate at the vent-ocean interface, which could readily have served as the geochemical template upon which biological chemiosmotic harnessing evolved. More generally, hydrothermal vent precipitates provide natural three-dimensional microcompartments within which the products of serpentinization-driven organic synthesis could have been retained, so that the path to chemical complexity would not be faced with otherwise insurmountable problems of diffusion to the ocean. 26

Dr. Hideshi Ooka (2018): Deep-sea environments can harness thermal and chemical energy, and this may be used to drive specific chemical reactions such as CO2 reduction. This is possible due to the material properties of the chimney wall, namely their electrical conductivity and thermal insulation. These features allow the thermal and chemical gradients to be maintained, leading to variations in reaction environments which would likely create an environment suitable for the generation of a specific CO2 reduction product. In this way, hydrothermal vents occupy a large chemical reaction space, and may “search” for the optimum spatial and physicochemical environments for CO2 reduction. 27

D. Deamer (2019): Assumptions and conjectures not tested by experiments or observation:

- The vent minerals can catalyze the reduction of CO2 by the dissolved hydrogen. 
- A substantial pH gradient develops between the alkaline fluid and relatively acidic seawater. 
- The porous minerals compartments can concentrate potential reactants and maintain products within their volume. 
- The reduced carbon compounds can act as a substrate to initiate a primitive version of metabolism. 
- A lipid film seals the otherwise porous mineral membranes so that pH gradients can be maintained. 
- The pH gradients can serve as a source of chemiosmotic energy coupled to pyrophosphate bond synthesis.

As is proper in scientific research, recent papers from other experts express skepticism about whether CO2 can actually be reduced under vent conditions, as well as the conclusions that follow from that assumption. For instance, Jackson (2016) makes the point that mineral membranes are much too thick to function as a chemiosmotic membrane because chemiosmosis requires a thin membrane to harvest the energy of a pH gradient. An analogy that may help clarify this point is to imagine a turbine and generator capturing the energy of a 10-meter waterfall over a cliff (a “sharp” gradient) and then putting the same generator into a river that falls the same distance—10 meters— but over a distance of one kilometer (analogous to a thick membrane). Virtually no energy can be captured from the slow-moving river. 25
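Jackson's "thick membrane" objection can be made quantitative with the standard chemiosmotic relation, in which each pH unit of gradient corresponds to about 59 mV of electrochemical potential at 25 °C. In the sketch below, the physical constants are standard, but the membrane thicknesses (a ~5 nm lipid bilayer versus a 1 μm mineral wall) are typical literature values I have assumed for illustration, not figures from the quote:

```python
# Physical constants (CODATA values, rounded)
R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # absolute temperature, K (25 degrees C)
F = 96485.0  # Faraday constant, C/mol

# Electrochemical potential per pH unit: 2.303*R*T/F volts (~59 mV at 25 C)
mv_per_ph_unit = 2.303 * R * T / F * 1000.0
print(round(mv_per_ph_unit))  # -> 59 (mV per pH unit)

# The same ~3 pH-unit gradient (~0.18 V) gives very different field strengths
# across a thin biological membrane versus a thick inorganic wall:
voltage = 3 * mv_per_ph_unit / 1000.0  # volts
bilayer = 5e-9   # ~5 nm lipid bilayer (assumed typical thickness)
mineral = 1e-6   # ~1 um inorganic membrane (assumed, per Jackson's ">1 um" figure)
print(voltage / bilayer)  # field across the bilayer, V/m (~3.6e7)
print(voltage / mineral)  # field across the mineral wall, V/m (~1.8e5, 200x weaker)
```

This is the numerical content of the waterfall analogy: the same potential drop spread over a 200-fold greater distance yields a 200-fold weaker driving force.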

Phosphate’s Role in Primitive Metabolism? 
ATP is an extraordinary molecule that has permeated all life as an energy currency, yet the origin of this particular molecule is a major gap in our understanding. Cells use ATP to gather energy from a source such as photosynthesis or respiration and then deliver it to energy-requiring reactions in the rest of the cytoplasm. The chemical-energy content of ATP resides in the pyrophosphate bond that links its second and third phosphate groups. This is called a high-energy bond because of its relatively large energy content, expressed as kilocalories per mole of ATP (kcal/mol) or as kilojoules per mole (kJ/mol) in international units. To give a sense of the amount of energy available in the bond: a calorie was originally defined as the amount of heat that raises the temperature of one gram (approximately one cubic centimeter) of water by one degree Celsius. (This is simplified, but the technical definition is not necessary here.) A kilocalorie will therefore raise the temperature of a liter of water (one thousand grams) by one degree C. If one mole of ATP (507 grams) is dissolved in one liter of water and allowed to hydrolyze, the temperature of the water will increase by approximately 7 degrees C. This energy value, by the way, is the chemist's version, measured under carefully defined conditions of temperature and concentration. In a living cell, the conditions are quite different and ATP makes a greater amount of energy available, closer to 10 kcal per mole. ATP hydrolysis is the primary source of the heat used by mammals and birds to maintain their body temperature at a fixed point above that of the environment.
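The arithmetic in the paragraph above can be reproduced directly: converting the standard free energy of ATP hydrolysis (about -30.5 kJ/mol, the figure quoted earlier from Libretext) into kilocalories, and then into a temperature rise for one liter of water. This is an idealized calculation that, like the paragraph itself, treats the released free energy as pure heat:

```python
# Free energy of ATP hydrolysis (magnitude), in kJ/mol, as quoted earlier
kj_per_mol = 30.5
kcal_per_mol = kj_per_mol / 4.184  # thermochemical calorie: 1 kcal = 4.184 kJ
print(round(kcal_per_mol, 1))  # -> 7.3 (kcal/mol)

# One kilocalorie warms one liter (1000 g) of water by 1 degree C, so one mole
# of ATP hydrolyzed in one liter of water warms it by approximately:
delta_t_celsius = kcal_per_mol / 1.0  # liters of water
print(round(delta_t_celsius))  # -> 7 (degrees C), matching the paragraph
```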


Nonsense remains nonsense, even when spoken by world-famous scientists
Natalie Wolchover: A New Physics Theory of Life January 22, 2014
MIT physicist Jeremy England has proposed the provocative idea that life exists because the law of increasing entropy drives matter to acquire life-like physical properties. England, an assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said. 16

Richard Terrile, NASA mission scientist: “Put those ingredients (for the origin of life) together on Earth and you get life within a billion years.” 20

Wentao Ma (2017): When two or more functional RNAs emerged, for their efficient cooperation, there should have been a selective pressure for the emergence of protocells. 21

TED talk (2011): Lee Cronin: So many people think that life took millions of years to kick in; we're proposing to do it in just a few hours, once we've set up the right chemistry.
Chris Anderson: So when do you think that will happen?
Lee Cronin: Hopefully within the next two years. 22

Jacques Monod, Chance and Necessity (1972): "In the final analysis, language too was a product of chance."

Jessica L. E. Wimmer et al. (2021): Simple spontaneous geochemical reactions gave rise to the enzymatically catalyzed reaction network of microbial metabolism: a highly organized set of specific organic reactions that provides the amino acids, nucleotides and cofactors to sustain ribosomal protein synthesis and growth.

The Cell factory maker, Paley's watchmaker argument 2.0 Lennox12

England did not address the question of the origin of the cell's machinery that harnesses energy, such as ATP synthase, nor how the proton gradient could have developed prebiotically. In biological processes, ATP is directed very precisely to where energy is required. No explanation was provided for how such a state of affairs could first have originated.

Open questions in prebiotic cell membrane synthesis
How could simple amphiphiles (molecules containing a nonpolar hydrophobic region and a polar hydrophilic region, which self-assemble in aqueous solutions to form distinct structures such as micelles) have been available in the prebiotic inventory, if there has never been evidence for this? Furthermore, no sources are known of compounds with hydrocarbon chains sufficiently long to form stable membranes.
How could prebiotic mechanisms have transported and concentrated organic compounds to the pools and construction site?
How could membranous vesicles have self-assembled in complex mixtures of organic compounds and ionic solutes, when science has no solution to this question?
How could there have been a prebiotic route to lipid compositions that provide a membrane barrier sufficient to maintain proton gradients, which are absolutely necessary for the generation of energy?
How to explain that lipid membranes would be useless without membrane proteins, while membrane proteins could not have emerged or evolved in the absence of functional membranes?
How did prebiotic processes select hydrocarbon chains in the required range of 14 to 18 carbons in length? There was no physical necessity to form carbon chains of the right length, nor any hindrance to joining chains of varying lengths, so chains of any size could have existed on the early earth.
How could there have been an "urge" for prebiotic compounds to add unsaturated cis double bonds near the center of the chain?
What feasible prebiotic route of phospholipid synthesis could lead to the complex metabolic phospholipid and fatty-acid synthesis pathways, performed by multiple enzyme-catalyzed steps, which had to be fully operational in LUCA?
How would random events start attaching two fatty acids to glycerol by ester or ether bonds, rather than just one, as necessary for cell-membrane stability?
How would random events start to produce biological membranes, which are not composed of pure phospholipids but are instead mixtures of several phospholipid species, often with a sterol admixture such as cholesterol? There is no feasible prebiotic mechanism to produce the right mixtures.
How did unguided events produce homeostasis, the essential ability of living cells to maintain a steady and more-or-less constant chemical balance in a changing environment? The first forms of life required an effective Ca2+ homeostatic system, which maintained intracellular Ca2+ at comfortably low concentrations, some 10,000–20,000 times lower than in the extracellular milieu. There was no mechanism to generate this gradient.
How was the transition made from supposedly simple vesicles on the early earth to the ultracomplex membrane synthesis of modern cells, which would have had to exist in the last universal common ancestor and requires at least 70 enzymes?

1. Nick Lane: Why Are Cells Powered by Proton Gradients? 2010
2. Change Laura Tan, Rob Stadler: The Stairway To Life, March 13, 2020
3. Yijie Deng: Measuring and modeling energy and power consumption in living microbial cells with a synthetic ATP reporter, May 17, 2021
4. Kevin Drum: Proton Gradients and the Origin of Life, July 25, 2016
5. Nick Lane: Proton gradients at the origin of life, May 15, 2017
6. Eugene V. Koonin: Inventing the dynamo machine: the evolution of the F-type and V-type ATPases, November 2007
7. Eugene V. Koonin: Co-evolution of primordial membranes and membrane proteins, September 28, 2009
8. Armen Y. Mulkidjanian: Structural Bioinformatics of Membrane Proteins, 2010
9. Eugene V. Koonin: On the origin of genomes and cells within inorganic compartments, October 11, 2005
10. Effrosyni Papanikou: Bacterial protein secretion through the translocase nanomachine, November 2007
11. J. Baz Jackson: Natural pH Gradients in Hydrothermal Alkali Vents Were Unlikely to Have Played a Role in the Origin of Life, August 17, 2016
12. Libretext: ATP/ADP
13. Leslie E. Orgel: Are you serious, Dr Mitchell? November 4, 1999
14. Daniel Zuckerman: Synthesis of ATP by ATP synthase
15. Addy Pross: What is Life? How Chemistry Becomes Biology, 2012
16. Natalie Wolchover: A New Physics Theory of Life, January 22, 2014
17. Jeremy England: Every Life Is on Fire: How Thermodynamics Explains the Origins of Living Things, 2020
18. Alicia Kowaltowski: Redox Reactions and the Origin of Life, May 29, 2015
19. Geoffrey Zubay: Origins of Life on the Earth and in the Cosmos, 2000
20. Cited by Paul Davies in: The Fifth Miracle, page 245, 2000
21. Wentao Ma: What Does “the RNA World” Mean to “the Origin of Life”? December 2017
22. Lee Cronin: Making matter come alive, September 9, 2011
23. Rob Stadler: Energy Harnessing and Blind Faith in Natural Selection, July 29, 2022
24. Michael Marshall: The Genesis Quest, 2020
25. David W. Deamer: Assembling Life: How Can Life Begin on Earth and Other Habitable Planets? 2019
26. M. J. Russell: Serpentinization as a source of energy at the origin of life, 2010
27. Hideshi Ooka: Electrochemistry at Deep-Sea Hydrothermal Vents: Utilization of the Thermodynamic Driving Force towards the Autotrophic Origin of Life, December 9, 2018



Last edited by Otangelo on Sun Aug 07, 2022 4:31 pm; edited 14 times in total

https://reasonandscience.catsboard.com

Chapter 6

Linking the building blocks
During DNA replication, in order to make daughter cells, DNA monomers are linked together to form genomes and chromosomes identical to those of the mother cell; DNA is the information-bearing molecule of life. Using the RNA polymerase machine complex, DNA is transcribed to messenger RNA (mRNA): long strands of joined RNA monomers that store the "message" sent to the ribosome, where it is translated. The ribosome, following the instructions from the mRNA, polymerizes amino acids into strands that fold to become proteins, the workhorses of the cell. The sequence of the mRNA dictates the sequence of amino acids, obtained through translation using the genetic code: three nucleotides form a codon "word" that is assigned to one of the 20 amino acids used in life. In modern cells, ultra-complex machinery does this polymerization work, but prebiotically these machines were not extant. That raises the question: how did the first RNA, DNA, and amino-acid polymer strands emerge prebiotically? The synthesis of proteins and nucleic acids from small-molecule precursors represents one of the most difficult challenges to the model of pre-biological (chemical) evolution.

Tan, Change; Stadler, Rob. The Stairway To Life (2020): Consistent Linkage of Building Blocks in living organisms, RNA, DNA, and proteins are chains of monomers that are linked together with perfect consistency, like boxcars perfectly aligned on the tracks and interconnected to form a long train. This “homolinkage” of long biopolymers is very difficult to achieve abiotically, even in modern laboratories run by human intellect. Abiotic chemical reactions to link chains of monomers end up looking more like a train derailment unless complex and highly controlled chemical reactions are employed to connect each monomer correctly 1

Prebiotic RNA and DNA polymerization
Another major problem facing origin-of-life research is how to explain the transition from monomer ribonucleotides to polynucleotides. The emergence and existence of catalytic polymers are fundamental, so a plausible account of how polymerization could have occurred on the prebiotic earth is another essential question that has not been elucidated. Initially, this could not have happened with a pre-existing polynucleotide template. In the case of RNA, not only must phosphodiester links be repeatedly forged, but they must ultimately connect the 5′-oxygen of one nucleotide to the 3′-oxygen, and not the 2′-oxygen, of the next nucleotide. How would random events attach a phosphate group to the right position of a ribose molecule to provide the necessary chemical activity? Pierre-Alain Monnard (2012): A fundamental requirement of the RNA world hypothesis is a plausible nonenzymatic polymerization of ribonucleotides that could occur in the prebiotic environment, but the nature of this process is still an open issue. 6

In present-day cells, polymerization is carried out by enzymes with high efficiency and specificity. Enzymes are genetically encoded polymers requiring complex, protein-based synthetic machinery.
Observe what Dr. Pierre-Alain Monnard et al. (2012) write: Selection toward highly efficient catalytic peptides, which eventually resulted in present-day enzymes, could have started at a very early stage of chemical evolution. 31

This is an entirely unsupported claim. In living organisms today, adenosine-5'-triphosphate (ATP) is used for the activation of nucleoside phosphate groups, but ATP would not have been available for prebiotic syntheses. Joyce and Orgel note the possible use of minerals for polymerization reactions, but then express their doubts about this possibility.

Robert P. Bywater (2012): Despite the wide repertoire of chemical and biological properties of RNA, which make it such an appealing contender for being the first type of molecular species to usher in life onto this planet, there is no explanation for how such a complex chemical species could have arisen in the absence of sophisticated chemical machinery. The generation of complex chemicals requires many millions of cycles of synthesis, partial degradation, concentration, selection, and reannealing in combinatorially new ways such that sufficiently diverse species could be produced and reproduced, from which particularly suitable entities survived 32

Geoffrey Zubay, Origins of Life on the Earth and in the Cosmos (2000): Once the mononucleotide has been made, it must be converted to an activated derivative suitable for incorporation into a polynucleotide chain. In biochemical pathways, the nucleoside triphosphate derivative is usually used. The triphosphate derivative has more than enough chemical energy to power the formation of the phosphodiester linkages found in polynucleotides so there is no thermodynamic problem here. However, these compounds are not very reactive. In biosystems, sophisticated polymerases are essential to catalyze the polymerization of nucleoside triphosphates. Orgel and others have searched for other forms of activated nucleotides that would be reactive under mild conditions and would not require any more than a divalent cation catalyst. Their extensive search has led them to the use of imidazole-activated mononucleotides. Compounds of this type can be synthesized very efficiently by organo-chemical methods but a satisfactory prebiotic route for their synthesis has not been discovered. Because the reaction of imidazole and a mononucleotide involves the loss of a water molecule, a remote possibility is that the phosphorimidazolide is formed under dehydrating conditions. The formation of activated nucleotides by a prebiotically plausible route remains a most challenging problem. 25

Libretext: Phosphodiester bonds are central to all life on Earth as they make up the backbone of the strands of nucleic acid. In DNA and RNA, the phosphodiester bond is the linkage between the 3' prime carbon atom of one sugar molecule and the 5' prime carbon atom of another, deoxyribose in DNA and ribose in RNA. In modern cells, in order for the phosphodiester bond to be formed and the nucleotides to be joined, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off energy required to drive the enzyme-catalyzed reaction. Once a single phosphate or two phosphates (pyrophosphates) break apart and participate in a catalytic reaction, the phosphodiester bond is formed. 4 

Saidul Islam (2017): Laboratory-based chemical syntheses of ribonucleotides do most, if not all, require manipulation of sugars and nucleobases with protecting group strategies to overcome the thermodynamic and kinetic pitfalls that prevent their fusion. 2

Deamer (2010): The general problem regarding the condensation of small organic molecules to form macromolecules in an aqueous environment is the thermodynamically unfavorable process of water removal. In the current biosphere, these types of reactions are catalyzed by enzymes and energetically driven by pyrophosphate hydrolysis. 4

Deamer, interviewed by Suzan Masur (2014): In a solution of monomers, such as monomers of RNA or DNA in solution, the laws of thermodynamics do not allow them to polymerize because there is a tremendous energy barrier to getting them to form bonds. 44

Weber, Arthur L. (1998): Obviously, biocatalysts and energy-rich inorganic phosphorus species were not extant on the Earth before life began. 6 In all cases, the starting problem in a prebiotic synthesis would be the fact that materials would consist of an enormous number of disparate molecules lying around unordered, which would have had to be separated and sorted out.

Allaboutscience: The intrinsic nature of the phosphodiester bonds is also finely-tuned. For instance, the phosphodiester linkage that bridges the ribose sugar of RNA could involve the 5’ OH of one ribose molecule with either the 2’ OH or 3’ OH of the adjacent ribose molecule. RNA exclusively makes use of 5’ to 3’ bonding. There are no explanations of how the right position could have been selected abiotically in a repeated manner in order to produce functional polynucleotide chains.  As it turns out, the 5’ to 3’ linkages impart far greater stability to the RNA molecule than do the 5’ to 2’ bonds. Nucleotides can polymerize via condensation reactions.  The activated nucleotides (or the nucleotides with coupling agent) now had to be polymerized. 7

Arthur V. Chadwick, Ph.D. (2005): When produced and condensed with a nucleotide base, a mixture of optical isomers results, only one of which is relevant to pre-biological studies. Polymerization of nucleotides is inhibited by the incorporation of such an enantiomorph. While only 3'-5' polymers occur in biological systems, 5'-5' and 2'-5' polymers are favored in pre-biological type synthetic reactions. 13

A recent paper from Steven Benner and co-workers (2022) claimed: This study shows that various mafic rock glasses almost certainly present on the surface of the Hadean Earth catalyze the formation of polyribonucleic acid in water starting from nucleoside triphosphates. 21

RNA spontaneously forms on basalt lava glass in the presence of nucleoside triphosphates. This is a simple reaction, expected to happen spontaneously under various conditions. Nucleoside triphosphates, however, were not around prebiotically: only living cells synthesize them, through very complex biosynthesis pathways that require long chains of enzyme-catalyzed reactions and a supply of energy, neither of which was available on the early earth.

The Cell factory maker, Paley's watchmaker argument 2.0 Steve_10
Steven Benner (2008): The red bonds in RNA are each unstable in water. Each of these bonds represents a problem for the prebiotic synthesis of RNA in water, even after the building blocks are in hand, since the synthesis of these bonds requires the loss of water. Further, even if the RNA could be made, the red bonds would break in water. In modern life, damage done by water to RNA and DNA is repaired. Such repair systems were presumably not present prebiotically. Another paradox. In water, adenine, guanine, and cytosine all eventually lose their NH2 units, the phosphate backbone of RNA hydrolyzes, and the nucleobases will fall off of ribose. 22

Steven Benner (2012): Current experiments suggest that RNA molecules that catalyze the degradation of RNA are more likely to emerge from a library of random RNA molecules than RNA molecules that catalyze the template-directed synthesis of RNA, especially given cofactors (e.g., Mg2+). This could, of course, be a serious (and possibly fatal) flaw to the RNA-first hypothesis for bio-origins. 45

Steven Benner (2014): The Water Paradox: Water is commonly viewed as essential for life, and theories of water are well known to support this as a requirement. So are biopolymers, like RNA, DNA, and proteins. However, these biopolymers are corroded by water. For example, the hydrolytic deamination of DNA and RNA nucleobases is rapid and irreversible, as is the base-catalyzed cleavage of RNA in water. This allows us to construct a paradox: RNA requires water to function, but RNA cannot emerge in water, and does not persist in water without repair. Any solution to the “origins problem” must manage the paradox forced by pairing this theory and this observation; life seems to need a substance (water) that is inherently toxic to polymers (e.g. RNA) necessary for life 24

Westheimer (1987): Although RNA is a phosphodiester and carries a negative charge, it is relatively susceptible to hydrolysis; the rate of its spontaneous reaction with water, extrapolated to room temperature, is about 100 times greater than that of DNA 30

The instability problem
Pekka Teerikorpi (2009): A further problem in the accumulation of long RNA polymers is their inherent instability. RNA polymers are very easily broken into parts by hydrolysis, and their functional sequence could have been easily lost via multiple copying mistakes or mutations. Considering all these chemical obstacles, it seems that the whole reaction cascade for the formation of functional polynucleotides (including the synthesis of the nucleoside bases and ribose, assembly of nucleosides, their phosphorylation and activation, and finally, the polymerization and stabilization of the polymers) has been very difficult in the prebiotic conditions. These processes seem so unlikely that it has been proposed that some other information storing and transfer mechanisms preceded the RNA world and then “guided” the formation (or provided catalysts for) the RNA-based world. But it is not easy to explain how the transfer from a more primitive genetic system into RNA could have happened. 36

Phosphodiester bonds
Activated monomers are essential because polymerization reactions occur in an aqueous medium and are therefore energetically uphill in the absence of activation. A plausible energy source for polymerization remains an open question. Condensation reactions driven by cycles of anhydrous conditions and hydration would seem to be one obvious possibility but seem limited by the lack of specificity of the chemical bonds that are formed. 3

Libretexts: Phosphodiester bonds are central to most life on Earth, as they make up the backbone of the strands of DNA. In DNA and RNA, the phosphodiester bond is the linkage between the 3' carbon atom of one sugar molecule and the 5' carbon atom of another, deoxyribose in DNA and ribose in RNA. Strong covalent bonds form between the phosphate group and two 5-carbon ring carbohydrates (pentoses) over two ester bonds. In order for the phosphodiester bond to be formed and the nucleotides to be joined, the tri-phosphate or di-phosphate forms of the nucleotide building blocks are broken apart to give off energy required to drive the enzyme-catalyzed reaction. When a single phosphate or two phosphates known as pyrophosphates break away and catalyze the reaction, the phosphodiester bond is formed. Hydrolysis of phosphodiester bonds can be catalyzed by the action of phosphodiesterases which play an important role in repairing DNA sequences. 8

Prebiotic phosphodiester bond formation
An often-cited claim is that RNA polymerization could be performed on clay. Robert Shapiro (2006) wrote a critique of prebiotic proposals of clay-catalyzed oligonucleotide synthesis:
An extensive series of studies on the polymerization of activated RNA monomers has been carried out by Ferris and his collaborators. A recent publication from this group concluded with the statement: “The facile synthesis of relatively large amounts of RNA oligomers provides a convenient route to the proposed RNA world. The 35–40 oligomers formed are both sufficiently long to exhibit fidelity in replication as well as catalytic activity”. The first review cited above had stated this more succinctly: “The generation of RNAs with chain lengths greater than 40 oligomers would have been long enough to initiate the first life on Earth”. Do natural clays catalyze this reaction? The attractiveness of this oligonucleotide synthesis rests in part on the ready availability of the catalyst. Montmorillonite is a layered clay mineral-rich in silicate and aluminum oxide bonds. It is widely distributed in deposits on the contemporary Earth. If the polymerization of RNA subunits was a common property of this native mineral, the case for RNA at the start of life would be greatly enhanced. However, the “[c]atalytic activity of native montmorillonites before being converted to their homoionic forms is very poor”. The native clays interfere with phosphorylation reactions. This handicap was overcome in the synthetic experiments by titrating the clays to a monoionic form, generally sodium, before they were used. Even after this step, the activity of the montmorillonite depended strongly on its physical source, with samples from Wyoming yielding the best results. Eventually the experimenters settled on Volclay, a commercially processed Wyoming montmorillonite provided by the American Colloid Company 9

Selecting the binding locations
Even if the three components had been synthesized prebiotically, they would have had to be separated from the confusing jumble of similar molecules nearby and become sufficiently concentrated before the next steps could occur: joining them to form nucleosides and then nucleotides.

The phosphate/ribose backbone of DNA is hydrophilic (water-loving), so it orients itself outward toward the solvent, while the relatively hydrophobic bases bury themselves inside. 

Xaktly explains: Additionally, the geometry of the deoxyribose-phosphate linkage allows for just the right pitch, or distance between strands in the helix, a pitch that nicely accommodates base pairing. 10
Many things must come together to create the beautiful right-handed double-helix structure. Production of a mixture of d- and l-sugars produces nucleotides that do not fit together properly, yielding a very open, weak structure that cannot survive to replicate, catalyze, or synthesize other biological molecules.

Eduard Schreiner (2011): In DNA the atoms C1', C3', and C4' of the sugar moiety are chiral, while in RNA the presence of an additional OH group renders also C2' of the ribose chiral.11

Rob Stadler (2021): Even in a very short DNA of just two nucleotides, there are dozens of incorrect possible arrangements of the components and only one correct arrangement. The probability of consistent arrangement decreases exponentially as the DNA lengthens. If natural processes could polymerize these monomers, the result would be chaotic “asphalt,” not highly organized, perfectly consistent biopolymers. Think about it — if monomers spontaneously polymerized within cells, the cell would die because all monomers would be combined into useless random arrangements. 18
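Stadler's point about the probability of consistent linkage decreasing exponentially can be illustrated with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not measured chemistry: suppose each junction between nucleotides can form in one of three regiochemistries (2′-5′, 3′-5′, 5′-5′) and with either a D- or L-monomer, only one of the six outcomes per junction being the biologically correct all-D, 3′-5′ linkage.

```python
# Illustrative assumption: 3 linkage regiochemistries x 2 chiral forms
# per junction, of which exactly one is the "correct" biological option.
correct_per_junction = 1 / 6

def p_all_correct(n_monomers):
    """Probability that an unguided chain of n monomers is uniformly
    3'-5'-linked and homochiral, under the toy model above."""
    junctions = n_monomers - 1
    return correct_per_junction ** junctions

# Even short chains become overwhelmingly unlikely to be consistent:
for n in (2, 10, 40):
    print(n, p_all_correct(n))
```

Under these assumed odds, a 40-mer has a probability of roughly 6^-39 of being uniformly linked, which is the exponential decay the quotation describes.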

Pier Luigi Luisi (2014): Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult-to-characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues. 19

Stephen D. Fried (2022):The condensation reaction of nucleotides presents several challenges. First, the selective incorporation of 5′–3′ phosphodiester linkages represents a regioselectivity challenge, given the simultaneous presence of 2′ hydroxyls. Second, nucleobases possess numerous nucleophilic functional groups, which must compete with the 5′ and 3′ hydroxyl groups on the sugar as the donor to the phosphate group in condensation. Hence, the creation of linear polymers (as opposed to the combinatorially more facile highly branched structures) poses a statistical challenge. Third, the double negative charge present on terminal monophosphates render them quite unreactive without activation or catalysis. 48

Pekka Teerikorpi (2009): As described by Gerald Joyce (The Scripps Research Institute, La Jolla), a leading student of prebiotic RNA chemistry, the lack of specificity has indeed been a major problem of prebiotic reactions. The spontaneous reactions starting from hydrogen cyanide, or from cyanoacetylene, cyanate, and urea can lead to a number of different nucleobase analogs. But of all the analogs, only adenine and guanine purines, and cytosine and uracil pyrimidines were eventually used by nature for the formation of the functional nucleosides. In the composition of the nucleosides in prebiotic conditions, the existing bases could have been connected to the ribose components, just as well, both in α- and β-configuration, and the furanose (four-carbon) ring of ribose could have formed just as well in L and D isoforms (left- and right-handed). Ribose sugar could also have formed a five-carbon (pyranose) ring by binding the 5' and 1' carbons. Prebiotic polymerization reactions between all different nucleotide analogs and isoforms would have also led to a wide variety of different phosphate linkages between different carbon atoms of the ribose. Altogether, these reactions would have easily used different purine and pyrimidine variants, bound with different derivatives of different cyclic sugars, formed both in L- and D-configurations. These very random nucleoside analogs could then have been phosphorylated at different carbon positions, and then again, the randomly phosphorylated nucleotide analogs could have been connected to each other in a number of different ways as shown with light lettering in Fig. 30.5. None of these alternatives would have produced functional RNA polymers. Only the correctly formed and polymerized nucleotides would have been functional templates for replication via complementary base pairing. 
We do not understand how life, in the absence of any selective enzyme reactions, choose to use exactly these nucleotide components and their specific isoforms, or how it could control the formation of the phosphodiester bonds to occur only between the 5' and 3' carbons of the nucleotides. 36

The homochirality problem
Biological systems exclusively use D-ribose, whereas abiotic experiments synthesize right- and left-handed ribose in equal amounts. The pre-biological building blocks of life did not exhibit such an overwhelming bias: some were left-handed and some right-handed. So how did right-handed RNA emerge from such a mix of molecules? Some kind of symmetry-breaking process leading to enantioenriched biomonomers would have had to exist, but none is known. Gerald Joyce published a science paper in Nature magazine in 1984. G. F. Joyce (1984): This inhibition raises an important problem for many theories of the origin of life. 42 His findings suggested that in order for life to emerge, something first had to crack the symmetry between left-handed and right-handed molecules, an event biochemists call “breaking the mirror.”

To polymerize proteins, it is essential that only left-handed amino acids are added to the chain. The same applies to ribozymes. First, a prebiotic ribozyme (able to catalyze its self-replication as a template) would have had to emerge spontaneously from a racemic pool of left- and right-handed RNA, selecting only enantiomerically pure monomers for incorporation into the chain. Second, within a huge sequence space of nonfunctional sequences, one that bears function would have to be selected. And once it began performing template-directed reactions, it would have only a racemic mixture at its disposal: monomers of the opposite handedness to the template would be incorporated as chain terminators at the 2′(3′) end of the products. This would end the sequence, and no copy of itself would be produced.
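The scale of the chance-assembly problem for a homochiral chain can be sketched numerically. Assuming an ideal 50:50 racemic pool and independent incorporation of each monomer (both simplifying assumptions), the probability that an n-mer comes out entirely of one specified handedness falls off as (1/2)^n:

```python
# Illustrative: probability that an n-mer drawn from an ideal 50:50
# racemic pool is homochiral (all D), assuming independent incorporation.
def p_homochiral_D(n):
    return 0.5 ** n

# The odds collapse rapidly with chain length:
for n in (10, 50, 100):
    print(n, p_homochiral_D(n))
```

For a 100-mer the probability is below 10^-30 under these assumptions, which is why some symmetry-breaking mechanism is considered indispensable.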

Stu Borman (2014): No known modern-day RNA-based enzyme can assemble RNA from a racemic soup of left- and right-handed RNA building blocks, the form in which RNA likely would have existed prior to the origin of an RNA world. To develop such a ribozyme, chemical biologist Gerald F. Joyce and postdoc Jonathan T. Sczepanski of Scripps Research Institute California used directed evolution. Like modern RNAs, the new ribozyme has d chirality. But unlike them, it catalyzes the template-directed poly­merization of RNAs of opposite handedness, the joining together of l-RNA building blocks bound to an l-RNA template. It ignores d-RNA building blocks that may be around.

Gerald F. Joyce (2014): Thirty years ago it was shown that the non-enzymatic, template-directed polymerization of activated mononucleotides proceeds readily in a homochiral system, but is severely inhibited by the presence of the opposing enantiomer. This finding poses a severe challenge for the spontaneous emergence of RNA-based life.  It is commonly thought that the earliest RNA polymerase and its substrates would have been of the same handedness, but this is not necessarily the case. Replicating D- and L-RNA molecules may have emerged together, based on the ability of structured RNAs of one-handedness to catalyze the templated polymerization of activated mononucleotides of the opposite handedness.  41

The evident problem is outlined a bit later in Stu Borman's article: the study does not directly address how a cross-chiral ribozyme that itself has pure chirality “could have emerged de novo from an achiral mix of nucleotides.” Early-world cross-chiral systems “would at some point have to transition to today’s homochiral systems,” and it is difficult to envisage how that could occur.

Since then, scientists have largely focused their search for the origin of life’s handedness in the prebiotic worlds of physics and chemistry, not biology - but with no success. So what is the cop-out? Pure chance! Luck did the job. That is the only thinkable explanation once God's guiding hand is excluded. How could that be a satisfying answer in the face of the immense odds? De Duve thought it conceivable that the molecules were short enough for all possible sequences, or almost all, to be realized (by way of their genes) and submitted to natural selection. This is the way he thought Intelligent Design could be dismissed. Coming from a Nobel prize winner in medicine, this makes one wonder, to say the least. De Duve dismissed intelligent design and replaced it with natural selection without providing any evidence - a claim based on pure guesswork and speculation.

Amino acid peptide bond formation in prebiotic conditions
The formation of proteins in modern cells depends on bonding one amino acid to another, an incredibly precise and efficient reaction catalyzed by ribosomes. The linking bonds of these polymers are peptide and ester bonds. The polymerization reaction is thermodynamically uphill, with hydrolysis being favored. The monomers are chemically activated by the input of metabolic energy so that polymerization is spontaneous in the presence of the enzymes or ribosomes that catalyze polymerization. 

Pier Luigi Luisi (2014): There are no methods described in the literature to efficiently generate long polypeptides, and we also lack a theory for explaining the origin of some macromolecular sequences instead of others. 19
Hui Huang (2019): The viewpoint of amino acids reacting to produce peptides and proteins has always been an unsatisfactory explanation since producing polypeptides via spontaneous reaction of amino acids in aqueous solution is extremely difficult. 17
David Deamer (2017): A plausible mechanism for the synthesis of peptide bonds and ester bonds on the prebiotic Earth continues to be a major gap in our understanding of the origin of life. 12
Elizabeth C. Griffith (2012): Polymer formation in aqueous environments would most likely have been necessary on early Earth because the liquid ocean would have been the reservoir of amino acid precursors needed for protein synthesis. 15
Fabian Sauer (2021): A critical point in prebiotic reactions is often the required high concentration of reactants, which cannot be reconciled with a dilute ocean or pond on early Earth. 16

Arthur V. Chadwick, Ph.D. (2005): Given an ocean full of small molecules of the types likely to be produced on pre-biological earth with the types of processes postulated by the origin of life enthusiasts, we must next approach the question of polymerization. This question poses a two-edged sword: We must first demonstrate that macromolecule synthesis is possible under pre-biological conditions, then we must construct a rationale for generating macromolecules rich in the information necessary for usefulness in a developing precell. There are many different problems confronted by any proposal. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors the depolymerization of any molecules that might be formed. Careful experiments done in an aqueous solution with very high concentrations of amino acids demonstrate the impossibility of significant polymerization in this environment. 13

The water paradox
Water molecules not only serve as a solvent and reactant but can also promote hydrolysis, which counteracts the formation of essential organic molecules. This conundrum constitutes one of the central issues in origin-of-life research.

Michael Marshall (2020): Although water is essential for life, it is also destructive to life’s core components. There’s a fundamental problem: life’s cornerstone molecules break down in water. This is because proteins, and nucleic acids such as DNA and RNA, are vulnerable at their joints. Proteins are made of chains of amino acids, and nucleic acids are chains of nucleotides. If the chains are placed in water, it attacks the links and eventually breaks them. In carbon chemistry, “water is an enemy to be excluded as rigorously as possible”, wrote the late biochemist Robert Shapiro in his totemic 1986 book Origins, which critiqued the primordial ocean hypothesis. This is the water paradox. Today, cells solve it by limiting the free movement of water in their interiors. Everything is incredibly scaffolded in cells, and it’s scaffolded in a gel, not a water bag. 20

Martina Preiner (2020): Water is essential for all known forms of life. As the solvent for life, it provides protons (H+) and hydroxyl groups (OH–) for myriad reactions but it creates a central problem when it comes to life’s origin: hydrolysis. Water molecules dissociate chemical bonds and thereby break larger molecules or polymers into their monomeric components. In free solution, condensation reactions that generate water are thermodynamically unfavorable. Both protons and hydroxide ions can catalyze hydrolysis reactions, making them highly pH-dependent processes. Water molecules can easily cleave ester and amide bonds and thus hydrolyze nucleic acids and proteins or they affect the half-life of reactants. 23

Steven Benner (2012): The “water problem”. Many bonds in RNA are thermodynamically unstable with respect to hydrolysis in water. Thus, even if these are made in water, they will fall apart. Indeed, examples of RNA molecules that catalyze the template-directed synthesis of RNA are not accepted as a “final proof” of the RNA-first hypothesis in part because they work at high concentrations of Mg2+, which in turn catalyzes hydrolysis of product RNA. 45

G. Waechtershaeuser (1998): Under the dilute aqueous conditions most relevant for the origin of life, activation of the amino acids by coupling with hydrolysis reactions notably of inorganic polyphosphates has been suggested. It is, however, not clear how under hot aqueous conditions such hydrolytically sensitive coupling compounds, if geochemically available at all, could resist rapid equilibration. 14

The Cell factory maker, Paley's watchmaker argument 2.0 Peptid11
Formation of a peptide bond  Creative Commons CC0 License

1. The synthesis of proteins and nucleic acids from small-molecule precursors, and the formation of amide bonds without the assistance of enzymes, represent one of the most difficult challenges to the model of pre-vital (chemical) evolution and for theories of the origin of life.
2. The best one can hope for from such a scenario is a racemic polymer of proteinous and non-proteinous amino acids with no relevance to living systems.
3. Polymerization is a reaction in which water is a product. Thus it will only be favored in the absence of water. The presence of precursors in an ocean of water favors the depolymerization of any molecules that might be formed.
4. Even if there were billions of simultaneous trials as the billions of building-block molecules interacted in the oceans, or on the thousands of kilometers of shorelines that could provide catalytic surfaces or templates, and even if, as is claimed, there was no oxygen on the prebiotic earth, there would then have been no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if a sequence producing a functional folding protein arose, by itself, not inserted in a functional way into a cell, it would have had no function at all. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destination, where they are utilized. Obviously, none of this was extant on the early earth.
5. To form a chain, it is necessary to react bi-functional monomers, that is, molecules with two functional groups, so that each can combine with two others. If a uni-functional monomer (with only one functional group) reacts with the end of a chain, the chain can grow no further at that end. Even a small fraction of uni-functional molecules would prevent long polymers from forming. But all ‘prebiotic simulation’ experiments produce at least three times more uni-functional molecules than bifunctional molecules.
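The chain-capping argument in point 5 can be sketched numerically. The following is a minimal Monte Carlo illustration, not a chemical model: it assumes a chain grows at one end and that each added monomer is drawn from a pool with a fixed monofunctional fraction (the ~0.75 value reflects the "three times more uni-functional" ratio cited above).

```python
import random

def grow_chain(f_mono, rng):
    """Grow a polymer one monomer at a time; a monofunctional monomer
    leaves no free functional group, so it caps the chain."""
    length = 1
    while True:
        length += 1
        if rng.random() < f_mono:  # capped: growth stops at this end
            return length

rng = random.Random(42)
# Monofunctional fractions to compare; 0.75 corresponds to the 3:1 ratio
# reported for 'prebiotic simulation' experiments (illustrative assumption).
for f in (0.75, 0.5, 0.1):
    lengths = [grow_chain(f, rng) for _ in range(100_000)]
    mean = sum(lengths) / len(lengths)
    print(f"monofunctional fraction {f}: mean chain length ≈ {mean:.1f}")
```

The expected length is 1 + 1/f, so with a 75% monofunctional fraction chains average only about two to three units, far short of a functional biopolymer.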

The RNA world
The term “RNA world” was coined in the paper Origin of life: The RNA world, published in 1986 by Walter Gilbert 29. It is probably not only the most extensively investigated hypothesis for the origin of life, but also the most popular, hailed by many as the most plausible account of how cells emerged on the early earth and kickstarted life. 

For example, Harold S Bernhardt (2012), in a paper titled The RNA world hypothesis: the worst theory of the early evolution of life (except for all the others), still wrote in his concluding remarks: 
I have argued that the RNA world hypothesis, while certainly imperfect, is the best model we currently have for the early evolution of life. 33 Others express the same opinion. For example:
Florian Kruse (2019): The RNA world hypothesis is the central consensus in the origins of life research, although many questions arising from this hypothesis have not yet been answered. 35
Jessica C. Bowman (2015): An RNA World that predated the modern world of polypeptide and polynucleotide is one of the most widely accepted models in origin of life research.  

The RNA World Hypothesis is actually a group of related models, with a variety of assumptions and definitions. In all variations of the RNA World Hypothesis, RNA enzymes (ribozymes) predate protein enzymes. Ribozymes performed a variety of catalytic functions in the RNA World, from metabolite biosynthesis to energy conversion. The defining ribozyme of the RNA World, which unites all RNA World models, performed template-directed synthesis of RNA: in the RNA World, RNA self-replicated. 36

Not all researchers are however that enthusiastic. Italian chemist and OoL researcher Pier Luigi Luisi, for example, interviewed by Suzan Mazur, responded (2012): The most popular view of Origin of Life, by way of the RNA world, to me and to many others is and always has been a fantasy. This is the theory by which self-replicating RNA arose by itself. Self-replicating also means Darwinian evolution. This, according to the story, produces ribozymes, nucleic acid also capable of catalysis. Ribozymes capable of catalyzing the synthesis of DNA and protein. How did self-replicating RNA arise? And, even granted that, how do we go from this to our DNA/protein cells? It is all in the air, still. 43

Life would coincide with the appearance of a first self-replicating entity: a jack-of-all-trades super-RNA molecule which, as a world first, would rule and dominate, somehow performing both genetics and catalysis, the two functions today divided between DNA and proteins. It would generate and process information, replicate, carry out metabolic transformations (as proteins do), and evolve through natural selection. 

Harris Bernstein (2020): In early protocellular organisms the genome is thought to have consisted of ssRNAs (genes) that formed folded structures with catalytic activity (ribozymes) 34

Such a transitional state of affairs, from non-life to life, has never been observed. The imagination has been led very far. According to the narrative, after the emergence of replicating molecules (replicases) that could self-replicate, short amino acid peptides from the prebiotic soup would have joined the RNAs, giving rise to an RNA-peptide world and enhancing its catalytic efficiency. Westheimer (1987) hypothesized that the greater structural variety of amino acids permitted better catalytic properties in protein enzymes than in those composed of RNA 30. This would in turn have given rise to the much more complex DNA–RNA–protein interdependence, with genetic information directing the making and operation of proteins. Subsequent generations would undergo further mutations, creating metabolic networks and promoting growth and division; the fittest would survive, gaining new abilities and growing more complex, evolving into a progenote, a cenancestor, a first and a last universal common ancestor, which would then give rise to the three domains of life. 

The Cell factory maker, Paley's watchmaker argument 2.0 Rna_wo10
Creative Commons CC0 License
Hannes Mutschler (2019): A schematic representation of the classical RNA world hypothesis. 
Initially, synthesis and random polymerization of nucleotides result in pools of nucleic acid oligomers, in which template-directed non-enzymatic replication may occur. Recombination reactions result in the generation of longer oligomers. Both long and short oligomers can fold into structures of varying complexity, resulting in the emergence of functional ribozymes. As complexity increases, the first RNA replicase emerges, and encapsulation results in protocells with distinct genetic identities capable of evolution. In reality, it is likely that multiple processes occurred in parallel, rather than in a strictly stepwise manner, and encapsulation may have occurred at any stage. 34

Museum of science: Until relatively recently, it was thought that proteins were the only biological molecules capable of catalysis.  In the early 1980s, however, research groups led by Sidney Altman and Thomas Cech independently found that RNAs can also act as catalysts for chemical reactions. This class of catalytic RNAs is known as ribozymes, and the finding earned Altman and Cech the 1989 Nobel Prize in Chemistry. 39

Could RNA substitute proteins in an RNA world?
RNA can perform various catalytic functions. RNA riboswitches regulate gene expression; the ribosome performs the peptidyl-transfer reaction; self-splicing Group I intron ribozymes remove intron sequences from genes; RNA ligases and polymerase ribozymes break and form phosphodiester bonds; and so on. The thing is, ribozymes are extremely good and specialized at what they do. In modern cells they are all encoded in DNA and preordained to do what they do with specificity. How would they emerge spontaneously from a messy primordial soup by random chance?

Timothy J. Wilson (2020): What is arguably the most important reaction in the cell, the condensation of amino acids to form polypeptides by the peptidyl transferase activity of the ribosome, is catalyzed by RNA in the large subunit. Another example is the splicing of mRNA, where the U2/U6 snRNA complex is a ribozyme. RNase P is a ribozyme that processes the 5' end of tRNA in all domains of life. Some of the small nucleolytic ribozymes are widespread, such as the hammerhead and twister ribozymes. RNA can accelerate phosphoryl transfer reactions by a millionfold or more. This is achieved by one or other of two main broad strategies. The group I self-splicing introns use divalent metal ions to organize the active center, activate the nucleophile, and stabilize the transition state, and the group II introns and RNase P also appear to function as metalloenzymes. By contrast, the nucleolytic ribozymes use general acid-base catalysis, most frequently utilizing nucleobases. Even though the natural pKa values of the nucleobases are either low (adenine and cytosine) or high (guanine and uracil), generally resulting in a low fraction of active catalyst at physiological pH, a ribozyme like twister has an active center that imposes an in-line geometry for attack by the O2' nucleophile, stabilizes the phosphorane transition state, and performs nucleobase-mediated general acid-base catalysis to achieve a substantial rate acceleration. Peptidyl transferase activity in the large ribosomal subunit does not use nucleobase-mediated catalysis, but the reaction appears to involve proton transfer mediated by a 2'-hydroxyl of tRNA. 

This demonstrates how all extant ribozymes are highly complex and specified to perform their designated catalytic functions with high specificity, precision, and efficiency. All enzymes that use metal co-factors require enormously complex biosynthesis pathways that resemble robotic production lines: they orchestrate the synthesis of the co-factors and their precise insertion into the reaction centers, the pockets where the catalytic reactions occur. How could the origin of such a state of affairs be explained by chance events? 

Timothy J. Wilson continues: According to the simplest version of the RNA world hypothesis (W. Gilbert, 1986) ribozymes would have catalyzed all cellular chemical reactions in a primitive metabolism. This would have required RNA to catalyze a far wider range of chemistry than we currently are aware of in nature, and it would have required relatively difficult reactions such as carbon–carbon bond formation. Many of the reactions available to the organic chemist for this purpose would be highly improbable for RNA catalysts. 49

Limited catalytic possibilities of RNAs
Wan, C. (2022): An essential component of an RNA world scenario would be an RNA “replicase” – a ribozyme capable of self-replication as well as copying other RNA sequences. While such a replicase has not been found in nature 27

Ronald R. Breaker (2020): Only a few classes of ribozymes are known to contribute to the task of promoting biochemical transformations. The RNA World hypothesis encompasses the notion that earlier forms of life made use of a much greater diversity of ribozymes and other functional RNAs to guide complex metabolic states long before proteins had emerged in evolution. 47

Jessica C. Bowman (2015): Although RNA in extant biology is seen to catalyze only RNA cutting and ligation along with peptidyl transfer (within the ribosome), a wide variety of chemical transformations can be catalyzed by ribozymes selected in vitro. 36

Charles W Carter, Jr (2017): Catalytic RNA itself cannot fulfill the tasks now carried out by proteins. The term “catalytic RNA” overlooks three fundamental problems: 1) it vastly overestimates the potential catalytic proficiency of ribozymes (Wills 2016); and fails to address either 2) the computational essence of translation or 3) the requirement that catalysts not only accelerate, but more importantly, synchronize chemical reactions whose spontaneous rates at ambient temperatures differ by more than 10^20-fold 33

Selecting ribozymes in the laboratory
Timothy J. Wilson (2020):  To explore what might be possible by way of RNA-mediated catalysis of novel chemical reactions there have been many investigations in which in vitro evolution methods have been used to select RNA species that will accelerate a given reaction from a random pool of sequences. These experiments have generally been carried out in a similar manner in which one reactant is tethered to an RNA oligonucleotide whose sequence has been partially or totally randomized, while the other is linked to biotin. If an RNA within the pool can catalyze formation of a bond between the reactants this connects the RNA to the biotin, allowing it to be isolated by binding to streptavidin. This can then be amplified and a second round of selection performed. Something like 15–20 such cycles will be performed after which the reactant will be disconnected from the RNA to see if it will catalyze a reaction in trans. Clearly this strategy is limited to bond-forming reactions, and we can divide this into reactions leading to the formation of C-C, C-N, and C-S bonds. 

Carbon–carbon bond formation. Ribozymes have been selected that can catalyze C-C bonds by the non-natural Diels– Alder cycloaddition reaction, the aldol reaction and related Claisen condensation. 
Carbon–nitrogen bond formation. Selected ribozymes catalyzing C-N bond formation include one that alkylates itself at a specific guanine N7, amide and peptide bond formation and glycosidic bond formation. Very recently Höbartner and colleagues have selected an RNA that catalyzes methyl transfer from O6 -methylguanine to adenine N1. 
Carbon–sulfur bond formation. C-S bond formation has been demonstrated by selected RNA species catalyzing Michael addition and CoA acylation. The estimated rate enhancements vary, being strongly dependent on the estimation of the uncatalyzed rate, but are frequently around 1000-fold. They are probably relatively unsophisticated catalysts. It is likely to be much easier to find an RNA that can exploit metal ions in catalysis than one that uses nucleobases as chemical participants for example. 49

T. M. Tarasow (1997): Carbon–carbon bond formation and the creation of asymmetric centres are both of great importance biochemically, but have not yet been accomplished by RNA catalysis. A selection for Diels–Alderase (DAase) activity was carried out with a library of 10^14 unique sequences. The RNA molecules were constructed with a contiguous 100-nucleotide randomized region 44

That means nature would have had to search a pool of 10^14 sequences to find one able to catalyze a carbon–carbon bond formation. Trying one sequence per second, a prebiotic soup would need roughly three million years just to exhaust a pool of that size. And 10^14 is itself a vanishingly small sample of the full sequence space of 100-nucleotide RNAs, which comprises 4^100, about 1.6 x 10^60, possibilities; even the roughly 4.3 x 10^17 seconds since the Big Bang would not begin to cover it at one trial per second. That far stretches plausibility. 
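The search-time arithmetic can be checked directly. A small sketch: the 10^14 library size and the 100-nucleotide randomized region are taken from the Tarasow quote above; the seconds-per-year conversion is standard.

```python
SECONDS_PER_YEAR = 3.156e7                  # ~365.25 days per year
age_universe_s = 13.7e9 * SECONDS_PER_YEAR  # ≈ 4.3e17 seconds since the Big Bang
library = 1e14                              # unique sequences in the selection pool
space = 4 ** 100                            # all possible 100-nt RNA sequences

print(f"age of the universe: {age_universe_s:.1e} s")
print(f"years to test 1e14 sequences at one per second: {library / SECONDS_PER_YEAR:.1e}")
print(f"fraction of 100-mer sequence space sampled by 1e14: {library / space:.1e}")
```

At one trial per second the 10^14 pool takes about 3 x 10^6 years, and it still samples only ~10^-46 of the full 100-mer sequence space.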

Requirement of cofactors and coenzymes for ribozyme function
The Achilles heel of all these experiments is that there was no prebiotic selection, and biotin is vitamin B7, an enzyme co-factor. The synthesis of biotin is very complex, depending on a series of enzymes (BioC, BioH, BioF, BioA, BioD, and BioB) and cofactors (SAH, S-adenosylhomocysteine; SAM, S-adenosyl-L-methionine; AMTOD, S-adenosyl-2-oxo-4-thiomethylbutyrate; 5′-DOA, 5′-deoxyadenosine). Evidently, these enzymes and cofactors were not swimming in the prebiotic soup, synthesizing biotin ready to be linked to random nucleotide chains.

Daniel N. Frank (1997): Despite the occurrence of a wide variety of structures and mechanisms among catalytic RNAs (ribozymes), most are metalloenzymes that require divalent metal cations for catalytic function. The ribozyme RNase P for example absolutely requires divalent metal ions for catalytic function. Multiple Mg2+ ions contribute to the optimal catalytic efficiency of RNase P, and it is likely that the tertiary structure of the ribozyme forms a specific metal-binding pocket for these ions within the active site. Divalent metals are thought to play two critical roles in ribozyme function. First, they promote the proper folding of RNA tertiary structures. Second, metals can participate directly in catalysis by activating nucleophiles, stabilizing transition states, and stabilizing leaving groups  51

Gerald F. Joyce (2018): Divalent metal cations appear to be essential for efficient RNA copying, but the poor affinity of the catalytic metal for the reaction center means that very high concentrations of these ions are required, which causes problems for both the RNA (degradation, hydrolysis of activated monomers) and for the fatty acid–based membranes. RNA polymerase enzymes solve these problems by binding and precisely positioning the metal ion for catalysis . A prebiotically plausible means of achieving effective metal ion catalysis at low ambient concentration would greatly simplify the development of model protocells. 38

Coenzymes and cofactors are molecules that help an enzyme or protein function appropriately. Coenzymes are organic molecules that quite often bind loosely to the active site of an enzyme and aid in substrate recruitment. Cofactors are "helper molecules", inorganic or organic in nature; they include metal ions and are often required to increase the rate of catalysis of a given reaction catalyzed by the specific enzyme. These coenzymes and cofactors play an integral role in a number of cellular metabolism reactions, serving both structural and functional roles to aid in catalysis. 28

1. Change Laura Tan, Rob Stadler: The Stairway To Life: An Origin-Of-Life Reality Check  March 13, 2020 
2. Saidul Islam: Prebiotic Systems Chemistry: Complexity Overcoming Clutter 13 April 2017
3. David Deamer: Bioenergetics and Life's Origins 2010 Feb; 2
4. Phosphoester Formation
5. David Deamer: Bioenergetics and Life's Origins  January 13, 2010
6. Weber, Arthur L.: Prebiotic Polymer Synthesis and the Origin of Glycolytic Metabolism 1998-01-01
7. All about science
8. Libretext: Phosphoester Formation
9. Robert Shapiro: Small Molecule Interactions Were Central to the Origin of Life Review 2006
10. Xaktly: DNA & RNA: The foundation of life on Earth
11. Eduard Schreiner: Stereochemical errors and their implications for molecular dynamics simulations 2011
12. David Deamer: The Role of Lipid Membranes in Life’s Origin 2017 Jan 17
13. Arthur V. Chadwick, Ph.D.: Abiogenic Origin of Life: A Theory in Crisis 2005
14. G. Waechtershaeuser: Peptides by Activation of Amino Acids with CO on (Ni,Fe)S Surfaces: Implications for the Origin of Life 31 JULY 1998
15. Elizabeth C. Griffith: In situ observation of peptide bond formation at the water–air interface August 6, 2012
16. Fabian Sauer: From amino acid mixtures to peptides in liquid sulphur dioxide on early Earth 2021 Dec 10
17. Hui Huang, Siwei Yang: Photocatalytic Polymerization from Amino Acid to Protein by Carbon Dots at Room Temperature October 22, 2019
18. Rob Stadler: Long Story Short — A Strikingly Unnatural Property of Biopolymers December 1, 2021
19. Pier Luigi Luisi: Open questions in origin of life: experimental studies on the origin of nucleic acids and proteins with specific and functional sequences by a chemical synthetic biology approach February 2014
20. Michael Marshall: How the first life on Earth survived its biggest threat — water 09 December 2020
21. Steven A. Benner: Catalytic Synthesis of Polyribonucleic Acid on Prebiotic Rock Glasses 8 Jun 2022
22. Steven A. Benner: Life, the Universe and the Scientific Method 2008
23. Martina Preiner: The ambivalent role of water at the origins of life 16 May 2020
24. Steven A. Benner: Paradoxes in the Origin of Life 5 December 2014
25. Geoffrey Zubay: Origins of Life on the Earth and in the Cosmos 2000
26. S W FOX: A Theory of Macromolecular and Cellular Origins 1965
27. Wan, C.: Evolution and Engineering of RNA-based Macromolecular Machines 2022
28. 
29. Walter Gilbert: Origin of life: The RNA world 20 February 1986
30. F H Westheimer: Why nature chose phosphates  1987 Mar 6
31. Dr. Rafał Wieczorek: Formation of RNA Phosphodiester Bond by Histidine-Containing Dipeptides 18 December 2012
32. Robert P. Bywater: On dating stages in prebiotic chemical evolution 15 February 2012
33. Charles W Carter, Jr: Interdependence, Reflexivity, Fidelity, Impedance Matching, and the Evolution of Genetic Coding  24 October 2017
34. Harris Bernstein: Origin of DNA Repair in the RNA World October 12th, 2020
35. Prof. Dr. Oliver Trapp: Direct Prebiotic Pathway to DNA Nucleosides 26 May 2019
36. Pekka Teerikorpi: The Evolving Universe and the Origin of Life: The Search for Our Cosmic Roots 2009
37. Jonathan Wells: The Politically Incorrect Guide to Darwinism and Intelligent Design August 21, 2006
38. Gerald F. Joyce: Protocells and RNA Self-Replication 2018
39. Exploring life's origins
40. Stu Borman: Ribozyme May Hint At The Origin Of Life November 17, 2014 
41. Gerald F. Joyce A cross-chiral RNA polymerase ribozyme 29 October 2014
42. G. F. Joyce: Chiral selection in poly(C)-directed synthesis of oligo(G) 16 August 1984
43. Suzan Mazur: Pier Luigi Luisi: Origin of Life Mindstorms Needed 19 December 2012
44. T M Tarasow: RNA-catalysed carbon-carbon bond formation 1997 Sep 4
45. Steven A. Benner: Asphalt, Water, and the Prebiotic Synthesis of Ribose, Ribonucleosides, and RNA March 28, 2012
46. Steven A. Benner Paradoxes in the Origin of Life 5 Dec. 2014
47. Ronald R. Breaker: Imaginary Ribozymes 2020 Aug 21
48. Stephen D. Fried: Peptides before and during the nucleotide world: an origins story emphasizing cooperation between proteins and nucleic acids 09 February 2022
49. Timothy J Wilson: The potential versatility of RNA catalysis 2021 May 5



If the RNA world were true, ribozymes would have had to catalyze a wide range of reactions that were subsequently taken over by proteins. About half would obtain the necessary catalytic activity only by recruiting and employing cofactors and coenzymes (which, as we have seen, also depend on complex biosynthesis pathways; an alternative non-enzymatic emergence would be very unlikely). It is as if a software engineer had to learn to become a mechanical engineer and assemble complex machines. There is no evidence that RNAs, made of just four different building blocks (the four nucleobases), were ever able to catalyze the widely different enzymatic and metabolic reactions required for life to thrive. There is no evidence that prebiotic shuffling somehow created a pool of millions of complex nucleotides, all with the same repetitive configuration of purine and pyrimidine bases. There is no chemical logic that makes it plausible that a gradual chemical evolutionary process would have promoted the emergence of an RNA world, followed by an RNA-peptide world. A wide, unexplained gap separates the RNA world from the modern DNA–RNA–protein state of affairs in cells. Can this gap be bridged with new hypotheses and advances in abiogenesis research? That looks rather unlikely. Only time will tell. 
 
Solving a chicken & egg problem?
Supposedly, the RNA world hypothesis solved a long-standing chicken-and-egg, or Catch-22, problem. (J. Wells [2006]: In Joseph Heller’s novel about World War II, Catch-22, an aviator could be excused from combat duty for being crazy. But a rule specified that he first had to request an excuse, and anyone who requested an excuse from combat duty was obviously not crazy, so such requests were invariably denied. The rule that made it impossible to be excused from combat duty was called “Catch-22.”) In 1965, Sidney Fox asked in a scientific article: How, when no life existed, did substances come into being which today are absolutely essential to living systems, yet which can only be formed by those systems? He was referring to a problem outlined by Jordana Cepelewicz (2017): For scientists studying the origin of life, one of the greatest chicken-or-the-egg questions is: Which came first — proteins or nucleic acids like DNA and RNA? 33 The problem arises because DNA and RNA direct the synthesis of enzymes and proteins, but proteins carry out the synthesis of RNA and DNA. 

Jessica C. Bowman (2015) claimed: The RNA World Hypothesis resolves the putative chicken and egg dilemma: which came first, polynucleotide or polypeptide? The simultaneous emergence from whole cloth of two functional biopolymers, one encoding the other, seems improbable. A single type of ancestral biopolymer (polynucleotide), performing multiple roles, appears to be characterized by high parsimony. A ‘‘Polymer Transition’’, a progression of biology from one polymer type (polynucleotide) to two polymer types (polynucleotide and polypeptide), is consistent with an expectation that ancient biology transitioned from simple to complex. 32

Under naturalism, the only possible explanation for the complexity seen in biochemistry is gradualism: from the simple to the complex, gradually moving from chemistry to biology. A sudden appearance of all the complex, interdependent intricacies observed in living cells cannot be explained by a biochemical big bang; it is untenable. Therefore, under the framework of philosophical naturalism, answers have to be found that conform to the gradualistic scenario. Only the model of intelligent design permits the hypothesis of instant creation by an intelligent agency. Giving up gradualism means giving up naturalism. 

Eugene V Koonin (2007): The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. 13

Paul C. W. Davies (2013): Because of the organizational structure of systems capable of processing algorithmic (instructional) information, it is not at all clear that a monomolecular system, where a single polymer plays the role of catalyst and informational carrier, is even logically consistent with the organization of information flow in living systems because there is no possibility of separating information storage from information processing (that being such a distinctive feature of modern life). As such, digital-first systems (as currently posed) represent a rather trivial form of information processing that fails to capture the logical structure of life as we know it.  The real challenge of life's origin is thus to explain how instructional information control systems emerge naturally and spontaneously from mere molecular dynamics. 15

Self-replication in the RNA world
Despite its current celebrity status, there are several reasons to doubt that RNAs would self-assemble into ribozyme polymerases with function-bearing sequences, obtain the right chemical structures with autocatalytic properties, and be apt to start self-replication. 

Harold S Bernhardt (2012): The following objections have been raised to the RNA world hypothesis: (i) RNA is too complex a molecule to have arisen prebiotically; (ii) RNA is inherently unstable; (iii) catalysis is a relatively rare property of long RNA sequences only; and (iv) the catalytic repertoire of RNA is too limited. 12

Since the various problems of prebiotic RNA monomers have been outlined in the previous chapter, we will address now the issues with RNA self-replication.  Steven Benner (2013): Catalysis and genetics place contradicting demands on any single molecular system asked to do both. For example, catalytic molecules should fold, to surround a transition state. Genetic molecules should not fold, to allow them to template the synthesis of their complements. Catalytic molecules should have many building blocks, to create versatile catalytic potential. Genetic molecules should have few building blocks, to ensure that they are copied with high fidelity 11

Jack W Szostak (2012): The first RNA World models were based on the concept of an RNA replicase - a ribozyme that was a good enough RNA polymerase that it could catalyze its own replication. Although several RNA polymerase ribozymes have been evolved in vitro, the creation of a true replicase remains a great experimental challenge. 6

Hannes Mutschler (2019): It has not yet been possible to demonstrate robust and continuous RNA self-replication from a realistic feedstock (i.e. activated mono- or short mixed-sequence oligonucleotides). In the case of ribozymes, only ‘simple’ ligation or recombination-based RNA replication from defined oligonucleotides has been demonstrated. Such systems have only a limited ability to transmit heritable information and so are not capable of open-ended evolution — the ability to indefinitely increase in complexity like living systems. Open-ended evolution requires that a replicase must at least be able to efficiently copy generic sequences longer than that required to encode its own function. RNA in isolation (including ribozymes) is simply not sufficient to catalyze its own replication, and substantial help from either other molecules or the environment is essential. 7

The annealing problem
Jordana Cepelewicz (2019): As a first step toward making a copy of itself, a single strand of RNA can take up complementary nucleotide building blocks from its surroundings and stitch them together. But the paired RNA strands then tend to bind to each other so tightly that they don’t unwind without help, which prevents them from acting as either catalysts or templates for further RNA strands. “It’s a real challenge,” Sutherland said. “It’s held the field back for a long time.” 8

Gerald F. Joyce (2018): Because the product of template copying is a double-stranded RNA, there must be some means of either strand separation or strand displacement synthesis. Transient temperature fluctuations could lead to thermal strand separation, but long RNA duplexes (≥30 base pairs) are difficult to denature thermally. 10

In modern cells, dedicated enzymes resolve annealed duplexes: ribonuclease H degrades the RNA strand of RNA:DNA hybrids, and helicases unwind double-stranded RNA. Evidently, neither was around on the prebiotic earth.

The Eigen paradox
Jaroslaw Synak (2022): Another challenge is the maintenance of genetic information in RNA sequences over many rounds of imperfect replication. In order to survive, an RNA polymerase must be copied faster than it is hydrolyzed, and accurately enough to preserve its function. In the early stages of molecular evolution, due to the lack of reliable replication mechanisms, the mutation rate was likely very high and the critical amount of information could not have been stored in long RNA sequences; on the other hand, short ones could not be efficient enzymes. Maynard Smith estimated that the maximum length of the RNA replicase is approximately 100 nucleotides, assuming nonenzymatic replication with a per-base copying fidelity of up to 0.99. In order to further increase this length, the copying fidelity would have to be increased, which requires the presence of specific enzymes. This is known as Eigen’s paradox and is often equivalently formulated as: no enzymes without a large genome and no large genome without enzymes. 5

Indeed. But the problem is not only to maintain genetic information but for the first replicator to obtain it in the first place!
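Maynard Smith's ~100-nucleotide figure follows from the standard error-threshold relation L_max ≈ ln(s) / (1 − q), where q is the per-base copying fidelity and s the selective superiority of the master sequence. The sketch below takes s = e (an illustrative assumption, so that ln(s) = 1):

```python
import math

def max_length(per_base_fidelity, superiority=math.e):
    """Eigen error-threshold estimate: L_max ≈ ln(s) / (1 - q).
    With s = e this reduces to 1 / (1 - q)."""
    return math.log(superiority) / (1.0 - per_base_fidelity)

for q in (0.99, 0.999, 0.9999):
    print(f"per-base fidelity {q}: max maintainable length ≈ {max_length(q):.0f} nt")
```

With q = 0.99 the estimate is ~100 nucleotides, matching the figure quoted above; each extra order of magnitude in fidelity buys one order of magnitude in maintainable length, which is precisely what would require enzymes.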

Natalia Szostak (2017): Researchers have performed many attempts to create RNA polymerase ribozyme, recently resulting in a cross-chiral RNA polymerase ribozyme and a system of cooperative RNA replicators, as well as RNA polymerase ribozyme that is able to synthesize structured functional RNAs, including aptamers and ribozymes. However, these molecules are too large to be maintained in a quasispecies population, as they exceed the 100 nucleotide error threshold, which is the maximum length polynucleotide molecule that can be accurately replicated without high fidelity polymerases. Eigen suggested hypercycles as a solution to the error threshold problem mentioned above. However, even if the traditional hypercycle model formulation based on ordinary differential equations is ecologically stable, it is proved to be evolutionarily unstable. To evolve life as we know it, separation of the roles performed by replicases, information storage, and replication of the information, into two molecules appears to be one of the crucial events that had to occur early in the stages leading to life. 4

A remarkable admission!

Sami EL Khatib (2021): Countless challenges face an RNA self-replication cycle. For it to be a fully chemically and enzymatically free reaction, the cycle loses rate and fidelity, so much so that it does not even reach the critical threshold for the sustenance of life: the RNA nucleotides break apart faster than new nucleotides are incorporated. This is the case even when experimenting with modern substrates, which do not leak out of cells and are very polar with the familiar triphosphate ester. That is advantageous for the modern cell, where enzymes catalyze the release of diphosphate, but not for a primitive cell, where substrates are found in the environment and require continuous dynamic exchange. The RNA world hypothesis has been criticized mostly because of the belief that long RNA sequences are needed for the catalytic function of RNA. These sequences are enormous, being needed to isolate the catalytic and binding functions of the overall ribozyme. For example, the best ribozyme replicase created so far, which is able to replicate an impressive 95-nucleotide stretch of RNA, is ~190 nucleotides in length, by far too large a number to have arisen in any random assembly. Thus in vitro selection experiments had to be designed in which 10^13 to 10^15 randomized RNA molecules are required as the starting point for the isolation of ribozymic and/or binding activity. This clearly contradicts the probable prebiotic situation. 14

Eugene Koonin (2012): The primary incentive behind the theory of self-replicating systems that Manfred Eigen outlined was to develop a simple model explaining the origin of biological information and, hence, of life itself. Eigen’s theory revealed the existence of the fundamental limit on the fidelity of replication (the Eigen threshold): If the product of the error (mutation) rate and the information capacity (genome size) is below the Eigen threshold, there will be stable inheritance and hence evolution; however, if it is above the threshold, the mutational meltdown and extinction become inevitable (Eigen, 1971). The Eigen threshold lies somewhere between 1 and 10 mutations per round of replication; regardless of the exact value, staying above the threshold fidelity is required for sustainable replication and so is a prerequisite for the start of biological evolution. Indeed, the very origin of the first organisms presents at least an appearance of a paradox because a certain minimum level of complexity is required to make self-replication possible at all; high-fidelity replication requires additional functionalities that need even more information to be encoded. However, the replication fidelity at a given point in time limits the amount of information that can be encoded in the genome. What turns this seemingly vicious circle into the (seemingly) unending spiral of increasing complexity is the Darwin-Eigen cycle. The crucial question in the study of the origin of life is how the Darwin-Eigen cycle started—how was the minimum complexity that is required to achieve the minimally acceptable replication fidelity attained? In even the simplest modern systems, such as RNA viruses with the replication fidelity of only about 10^-3 and viroids that replicate with the lowest fidelity among the known replicons (about 10^-2), replication is catalyzed by complex protein polymerases.
The replicase itself is produced by translation of the respective mRNA(s), which is mediated by the immensely complex ribosomal apparatus. Hence, the dramatic paradox of the origin of life is that, to attain the minimum complexity required for a biological system to start on the Darwin-Eigen spiral, a system of a far greater complexity appears to be required. How such a system could evolve is a puzzle that defeats conventional evolutionary thinking, all of which is about biological systems moving along the spiral; the solution is bound to be unusual.

When considering the origin of the first life forms, one faces the proverbial chicken-and-egg problem: What came first, DNA or protein, the gene or the product? In that form, the problem might be outright unsolvable due to the Darwin-Eigen paradox: To replicate and transcribe DNA, functionally active proteins are required, but production of these proteins requires accurate replication, transcription, and translation of nucleic acids. If one sticks to the triad of the Central Dogma, it is impossible to envisage what could be the starting material for the Darwin-Eigen cycle. Even removing DNA from the triad and postulating that the original genetic material consisted of RNA (thus reducing the triad to a dyad), although an important idea, does not help much because the paradox remains. For the evolution toward greater complexity to take off, the system needs to somehow get started on the Darwin-Eigen cycle before establishing the feedback between the (RNA) templates (the information component of the replicator system) and proteins (the executive component). The brilliantly ingenious and perhaps only possible solution was independently proposed by Carl Woese, Francis Crick, and Leslie Orgel in 1967–68: neither the chicken nor the egg, but what is in the middle—RNA alone. The unique property of RNA that makes it a credible—indeed, apparently, the best—candidate for the central role in the primordial replicating system is its ability to combine informational and catalytic functions. Thus, it was extremely tempting to propose that the first replicator systems—the first life forms—consisted solely of RNA molecules that functioned both as information carriers (genomes and genes) and as catalysts of diverse reactions, including, in particular, their own replication and precursor synthesis. 
This bold speculation has been spectacularly boosted by the discovery and subsequent study of ribozymes (RNA enzymes), which was pioneered by the discovery by Thomas Cech and colleagues in 1982 of the autocatalytic cleavage of the Tetrahymena rRNA intron, and by the demonstration in 1983 by Sydney Altman and colleagues that RNAse P is a ribozyme. Following these seminal discoveries, the study of ribozymes has evolved into a vast, expanding research area. 

Despite all invested effort, the in vitro evolved ribozymes remain (relatively) poor catalysts for most reactions; the lack of efficient, processive ribozyme polymerases seems particularly troubling. An estimate based on the functional tolerance of well-characterized ribozymes to mutations suggests that, at a fidelity of 10^-3 errors per nucleotide per replicase cycle (roughly, the fidelity of the RNA-dependent RNA polymerases of modern viruses), an RNA “organism” with about 100 “genes” the size of a tRNA (80 nucleotides) would be sustainable. Such a level of fidelity would require only an order of magnitude improvement over the most accurate ribozyme polymerases obtained by in vitro selection 3
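Koonin's threshold argument is essentially arithmetic: inheritance is sustainable only while the expected number of mutations per genome per replication (per-site error rate times genome length) stays within the Eigen threshold window. A minimal sketch in Python; the cutoff of 1 used here is a conservative choice from Eigen's 1-10 window, and the function names are illustrative:

```python
# Eigen error threshold: stable inheritance requires that the expected number
# of mutations per genome per replication round stays below a threshold of
# roughly 1-10. We use 1 as a conservative cutoff (an assumption).

def mutations_per_round(error_rate: float, genome_size: int) -> float:
    """Expected mutations per replication = per-site error rate * genome length."""
    return error_rate * genome_size

def below_eigen_threshold(error_rate: float, genome_size: int,
                          threshold: float = 1.0) -> bool:
    return mutations_per_round(error_rate, genome_size) < threshold

# Koonin's RNA "organism": 100 genes of 80 nt = 8,000 nt, copied at a
# viral-like error rate of 10^-3 per site -> 8 expected mutations per round,
# above the conservative cutoff of 1 but still inside Eigen's 1-10 window.
print(mutations_per_round(1e-3, 100 * 80))   # 8.0
print(below_eigen_threshold(1e-3, 500))      # True: a 500 nt genome passes easily
```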

Lack of prebiotic RNA repair mechanisms
Harris Bernstein (2020): Persistence and replication of even the simplest forms of RNA life must have depended on preserving the information content of the RNA genome from damage (a form of informational noise). Damage to the RNA genome likely occurred in a variety of ways including spontaneous hydrolysis, exposure to UV light and exposure to reactive chemicals. 9

Where did the energy come from?
Jack Szostak, interviewed by Suzan Mazur (2014): The problem is RNA falls apart. The activated nucleotides we use to do the non-enzymatic replication -- they react with water, so they fall apart. There needs to be a way to bring energy back into the system to essentially keep the battery charged. To keep all the nucleotides activated and to keep things running. 1

PHILIP BALL (2020): It’s an alluring picture – catalytic RNAs appear by chance on the early Earth as molecular replicators that gradually evolve into complex molecules capable of encoding proteins, metabolic systems and ultimately DNA. But it’s almost certainly wrong. For even an RNA-based replication process needs energy: it can’t shelve metabolism until later. And although relatively simple self-copying ribozymes have been made, they typically work only if provided with just the right oligonucleotide components to work on. What’s more, sustained cycles of replication and proliferation require special conditions to ensure that RNA templates can be separated from copies made on them. Perhaps the biggest problem is that self-replicating ribozymes are highly complex molecules that seem very unlikely to have randomly polymerized in a prebiotic soup. And the argument that they might have been delivered by molecular evolution merely puts the cart before the horse. The problem is all the harder once you acknowledge what a complex mess of chemicals any plausible prebiotic soup would have been. It’s nigh impossible to see how anything lifelike could come from it without mechanisms for both concentrating and segregating prebiotic molecules – to give RNA-making ribozymes any hope of copying themselves rather than just churning out junk, for example. In short, once you look at it closely, the RNA world raises as many questions as it answers.  The best RNA polymerase the researchers obtained this way had a roughly 8% chance of inserting any nucleotide wrongly, and any such error increased the chance that the full chain encoded by the molecule would not be replicated. What’s more, making the original class I ligase was even more error-prone and inefficient – there was a 17% chance of an error on each nucleotide addition, plus a small chance of a spurious extra nucleotide being added at each position. 
These errors would be critical to the prospects of molecular evolution since there is a threshold error rate above which a replicating molecule loses any Darwinian advantage over the rest of the population – in other words, evolution depends on good enough replication. Fidelity of copying could thus be a problem, hitherto insufficiently recognized, for the appearance of a self-sustaining, evolving RNA-based system: that is, for an RNA world. Maybe this obstacle could have been overcome in time. But my hunch is that any prebiotic molecule will have been too inefficient, inaccurate, dilute and noise-ridden to have cleared the hurdle. 2
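Ball's fidelity figures can be turned into a back-of-the-envelope calculation: if each nucleotide is copied independently with error probability p, the chance of an error-free copy of a length-L template is (1 - p)^L. A sketch under that independence assumption (a simplification; real polymerases also make insertions and deletions):

```python
# Probability that an entire template of length L is copied without error,
# assuming an independent per-nucleotide error probability p.

def error_free_copy_probability(p: float, length: int) -> float:
    return (1.0 - p) ** length

# Ball's figures: ~8% error per nucleotide for the evolved polymerase ribozyme,
# ~17% for the original class I ligase. For a 100 nt template:
print(error_free_copy_probability(0.08, 100))  # ~2.4e-4
print(error_free_copy_probability(0.17, 100))  # ~8e-9
```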

An RNA world could not explain the origin of the genetic code
Susan Lindquist (2010): An RNA-only world could not explain the emergence of the genetic code, which nearly all living organisms today use to translate genetic information into proteins. The code takes each of the 64 possible three-nucleotide RNA sequences and maps them to one of the 20 amino acids used to build proteins. 29

Without amino acids, there could not be an assignment. 64 trinucleotide codons are assigned to 20 amino acids. Both had to be present, in order for this assignment to occur. But having both would not be enough either. In reality, the entire system had to be created from the get-go, fully developed, and operational from the beginning. That led Fujio Egami (1981) to present a " working hypothesis on the interdependent genesis of nucleotide bases, protein amino acids, and the primitive genetic code: the primitive genetic code was dependent upon the concentration of different nucleotide bases and amino acids coexisting in the primeval environments and upon the selective affinity between bases and amino acids." 30
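The many-to-one structure of the code that Lindquist describes is easy to make concrete: three positions with four bases each give 4^3 = 64 codons, mapped onto 20 amino acids plus stop signals. A short illustration; the leucine and stop codon sets below are from the standard genetic code:

```python
from itertools import product

# Enumerate all triplet codons: 4 bases at 3 positions -> 4^3 = 64.
bases = "UCAG"
codons = ["".join(c) for c in product(bases, repeat=3)]
print(len(codons))  # 64

# The mapping onto amino acids is many-to-one (standard genetic code):
# leucine has six codons, methionine (AUG) and tryptophan (UGG) one each,
# and three codons are stop signals, leaving 61 sense codons for 20 amino acids.
leucine = {"UUA", "UUG", "CUU", "CUC", "CUA", "CUG"}
stops = {"UAA", "UAG", "UGA"}
print(len(codons) - len(stops))  # 61 sense codons
```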
 
Charles W Carter, Jr (2017): Computational and structural modeling argue that some mutual, interdependent processes embedded information into proteins and nucleic acids.

While talking about evolutionary modification, Egami was probably not aware that an interdependent system cannot be the product of gradual evolutionary change. An interdependent system must be right all at once, right from the start. Stepwise, gradual evolution of the system is not possible, since intermediate stages would confer neither advantage nor function. Mapping nucleotides to amino acids is only possible if all players are in place. 31

The RNA-peptide world
The RNA-peptide world tries to build a bridge between the replication-first and metabolism-first scenarios, extending the RNA world by combining it with catalytic peptides and a primitive metabolism.

Stephen D. Fried (2022): Diverse lines of research in molecular biology, bioinformatics, geochemistry, biophysics, and astrobiology provide clues about the progression and early evolution of proteins, and lend credence to the idea that early peptides served many central prebiotic roles before they were encodable by a polynucleotide template, in a putative ‘peptide-polynucleotide stage’. 23

The presupposition is that prebiotic chemical conditions permitted the emergence of activated ribonucleotides and amino acids. The proposal hypothesizes that RNAs started to interact with small peptides (short amino acid strands) right from the beginning, rather than everything starting exclusively with RNAs that would only later transition to a mutually beneficial interaction with amino acids. In modern cells, the DNA that stores the genetic data using the genetic code is transcribed into messenger RNA (mRNA), which is subsequently translated by the ribosome apparatus into functional amino acid sequences that form polypeptides and, in the end, proteins. The core problem is the origin of the codon-amino acid assignment through the genetic code. The RNA-peptide world attempts to explain how this state of affairs arose: an RNA-peptide world constitutes the first step on the way to the current solution, in which sophisticated translation is performed by the ribosome.

Charles W. Carter, Jr. (2015): In the RNA-world scenario, the necessary catalysts were initially entirely RNA-based and did not include genetically encoded proteins. In the RNA-peptide world, the idea that coded peptides functioned catalytically in the early stages of the origin of life directly contradicts the second central tenet of the “RNA World” scenario. The important distinction between this scenario and the RNA World hypothesis is that the requisite specificity is low in the initial stages of the former but unacceptably high in the latter. Low specificity processes occur with greater frequency and hence are more likely to have occurred first. The unavailability of activated amino acids was the most critical barrier to the emergence of protein synthesis. 16

Dave Speijer (2015): The “RNA world” hypothesis is seen as one of the main contenders for a viable theory on the origin of life. Relatively small RNAs have catalytic power, RNA is everywhere in present-day life, the ribosome is seen as a ribozyme, and rRNA and tRNA are crucial for modern protein synthesis. However, this view is incomplete at best. The modern protein-RNA ribosome most probably is not a distorted form of a “pure RNA ribosome” evolution started out with. Though the oldest center of the ribosome seems “RNA only”, we cannot conclude from this that it ever functioned in an environment without amino acids and/or peptides. Very small RNAs (versatile and stable due to base-pairing) and amino acids, as well as dipeptides, coevolved. Remember, it is the amino group of aminoacylated tRNA that attacks peptidyl-tRNA, destroying the bond between peptide and tRNA. This activity of the amino acid part of aminoacyl-tRNA illustrates the centrality of amino acids in life. With the rise of the “RNA world” view of early life, the pendulum seems to have swung too much towards the ribozymatic part of early biochemistry. The necessary presence and activity of amino acids and peptides is in need of highlighting. We argue that an RNA world completely independent of amino acids never existed.

Indeed, I agree that an RNA world never existed. But did an RNA-peptide world?

Speijer: The idea of an independent RNA world without oligopeptides or amino acids stabilizing structures and helping in catalysis does not seem a viable concept. On the other hand, the idea of catalytic protein existing without RNA storing the polypeptide sequences, which have catalytic activity, and organizing the production of these sequences, also does not seem a viable concept. Here we argue for a “coevolutionary” theory in which amino acids and (very small) peptides, as well as small RNAs, existed together and where their separate abilities not only reinforced each other’s survival but allowed life to climb the ladder of complexity more quickly.

Every naturalistic approach works only from the simple to the complex in a slow, gradual manner. Even if the path is not linear but full of ups and downs, the outcome is always more functional complexity at the end. That is Speijer's proposal as well: "Starting with small molecules (easily) derived from prebiotic chemistry, we will try to reconstruct a possible history in which every stage of increased complexity arises from the previous more simple stage because specific nucleotide/amino acid (RNA/peptide) interactions allowed it to do so." Observe how Speijer introduces teleonomy into the explanation, as if RNAs and amino acids operated or behaved with the "aim" or purpose of maintaining a state of affairs that was not even there yet. RNAs and amino acids on their own are not alive. They are molecules used in biology, and molecules have no innate drive or "urge" to maintain a specific state of affairs that would favor a future outcome: the gradual complexification that would, in the end, result in self-replicating cells.

Speijer: We now come to a crucial and, we have to admit, somewhat theoretical juncture: coevolution is illustrated by the presumption that RNAs could not persist without peptide protection, that very short (very early) peptides were made more abundant by RNA producing them, and that they co-evolved, forming longer RNAs and peptides. This would constitute an RNA/peptide world of ribozymes and short oligopeptides. These oligopeptides had RNA protection functions (DADVDGD being the obvious ancestor sequence of the universal RNA polymerase active site sequence NADFDGD). This motif (Asn-Ala-Asp-Phe-Asp-Gly-Asp) is a specific stretch of amino acids that is central in all cellular life. RNA polymerases catalyze the transcription from DNA to mRNA. Dennis R. Salahub (2008): Most known RNA polymerases (RNAPs) share a universal heptapeptide, called the NADFDGD motif. The crystal structures of RNAPs indicate that in all cases this motif forms a loop with an embedded triad of aspartic acid residues. This conserved loop is the key part of the active site. 17

The odds of getting this sequence by random shuffling are one in 20^7, roughly one in 1.3 × 10^9: a pool of the 20 amino acids used in life would have to be sampled over a billion times to hit this specified functional sequence. And the motif is embedded in a much longer polymer that must itself be functional, embedded and working in a joint venture with the other subunit strands of RNA polymerase. A far-fetched scenario.
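The arithmetic behind this estimate, for checking (assumptions: 20 amino acids, 7 positions, one specific target motif, uniform random sampling):

```python
# Size of the sequence space for one specific 7-residue motif (e.g. NADFDGD)
# drawn uniformly at random from the 20 canonical amino acids.
motif_length = 7
alphabet_size = 20
sequence_space = alphabet_size ** motif_length
print(sequence_space)      # 1280000000, about 1.3 x 10^9
print(1 / sequence_space)  # ~7.8e-10 chance per random draw
```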

Kunnev (2018): The hypothesis assumes that ribonucleotides would polymerize, leading to very short RNAs from 2 to about 40 bases. The polymerization would incorporate random sequences and random 3D structures. The process would preserve mostly stable ones. Wet-dry cycles could facilitate the process of RNA polymerization. Compartmentalization is another important factor since most of the described events are unlikely to occur in very low concentrations. Some level of environmental separation would be expected, for example, micro-chambers in the porous surface of rocks, or lipid vesicles, or both. Surface adsorption might have facilitated RNA-RNA interactions, RNA-lipid interactions, and some beneficial chemical reactions. Thus, clay surfaces have been shown to promote encapsulation of RNA into vesicles, which grow by incorporating fatty acid supplied as micelles and can divide without dilution of their contents. At temperatures between 1°C and the denaturation temperature (about 55°C), short random RNA oligos would get stabilized via intra- and intermolecular hybridization based on Watson-Crick base pairing, forming complexes of various 3D shapes and sizes. Larger hybridized regions would confer greater stability and would be selected for. Highly self-complementary RNAs would be unlikely to exist, forcing intermolecular hybridization of short sequences and the emergence of complexes of several RNA oligos. The formation of RNA complexes also assumes a thermal cycle that would drive the process by sequential denaturation (~55–100°C) and re-annealing (<55°C) phases. Frequent repetition of the thermal cycle and stability selection would favor accumulation of complexes with a higher degree of complementarity and higher GC content. Non-enzymatic aminoacylation between the 2′ or 3′ positions of ribose and activated amino acids could occur.
In addition, ribozymes capable of amino acid transfer from one RNA to another have been selected under laboratory conditions and similar molecules could have participated in aminoacylation of RNAs. Aminoacylated RNAs would be involved in complex formation, bringing some of the aminoacylated RNA 3′-ends in close proximity. This would promote peptide bond formation between two adjacent amino acids, most likely with the assistance of wet/dry natural cycles. All amino acids would have statistically equal probability to aminoacylate RNA. At that stage, any RNA molecule could be aminoacylated and could serve as a template. 18
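The stability-versus-GC-content selection Kunnev invokes can be illustrated with the Wallace rule, a rough melting-temperature estimate for short DNA oligonucleotide duplexes (Tm ≈ 2(A+T) + 4(G+C) °C). RNA duplexes differ quantitatively, but the qualitative trend is the same: G:C pairs, with three hydrogen bonds, stabilize a duplex more than A:T or A:U pairs with two. A sketch:

```python
# Wallace rule: a rough melting-temperature (Tm) estimate for short DNA
# oligonucleotide duplexes, Tm = 2*(A+T) + 4*(G+C) in Celsius. Applied here
# only to illustrate the GC-content trend, not as a quantitative RNA model.

def wallace_tm(seq: str) -> int:
    seq = seq.upper().replace("U", "T")  # treat an RNA sequence as its DNA analogue
    at = sum(seq.count(b) for b in "AT")
    gc = sum(seq.count(b) for b in "GC")
    return 2 * at + 4 * gc

print(wallace_tm("AUAUAUAUAU"))  # 20: an AU-rich 10-mer melts easily
print(wallace_tm("GCGCGCGCGC"))  # 40: a GC-rich 10-mer is markedly more stable
```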

That means any available amino acid nearby could become involved in the reaction, including amino acids not used in life, and it could attach anywhere on the RNA molecule. There is likewise no restriction on the possible RNA configurations or on the nucleobases involved: no mechanism would exclude nucleobases other than those used in life from the reactions. The result would simply be a disordered, random accumulation of RNA-peptides.

Kunnev: We presume that following this initial stage all components of the translation system would co-evolve in a stepwise way. Specialization of ribosomal Large Subunit—LSU will start with evolution of peptidyl transferase center (PTC). The evolution of peptides to proteins would occur from small motif to domains and finally— folded proteins. 

Felix Müller (2022): The ability to grow peptides on RNA with the help of non-canonical vestige nucleosides offers the possibility of an early co-evolution of covalently connected RNAs and peptides, which then could have dissociated at a higher level of sophistication to create the dualistic nucleic acid–protein world that is the hallmark of all life on Earth. It is difficult to imagine how an RNA world with complex RNA molecules could have emerged without the help of proteins and it is hard to envision how such an RNA world transitions into the modern dualistic RNA and protein world, in which RNA predominantly encodes information whereas proteins are the key catalysts of life. 22

When it comes to elucidating the trajectory from these small RNA-peptides to fully developed proteins, this story is very "sketchy" and superficial. That is a common modus operandi: upholding a narrative that, on closer inspection, does not withstand scrutiny.

Charles Carter, structural biologist (2017): For life to take hold, the mystery polymer would have had to coordinate the rates of chemical reactions that could differ in speed by as much as 20 orders of magnitude. 24

Marcel Filoche (2019): Enzymes speed up biochemical reactions at the core of life by as much as 15 orders of magnitude. Yet, despite considerable advances, the fine dynamical determinants at the microscopic level of their catalytic proficiency are still elusive. Rate-promoting vibrations in the picosecond range, specifically encoded in the 3D protein structure, are localized vibrations optimally coupled to the chemical reaction coordinates at the active site. Remarkably, our theory also exposes a hitherto unknown deep connection between the unique localization fingerprint and a distinct partition of the 3D fold into independent, fold-spanning subdomains that govern long-range communication. The universality of these features is demonstrated on a pool of more than 900 enzyme structures, comprising a total of more than 10,000 experimentally annotated catalytic sites. Our theory provides a unified microscopic rationale for the subtle structure-dynamics-function link in proteins. The intricate networks of metabolic cascades that power living organisms ultimately rest on the exquisite ability of enzymes to increase the rate of chemical reactions by many orders of magnitude. Although many molecular machines contain intrinsically disordered domains, the 3D fold is central to enzyme functioning. In particular, increasing evidence is accumulating in the literature in favor of the existence of specific fold-encoded motions believed to govern the relevant collective coordinate(s) that are coupled to the chemical transformation. These motions typically correspond to localized vibrations of the protein scaffold that contribute to the catalytic reaction, i.e., modes that, if impeded, would lead to a deterioration of the catalytic efficiency.

The more a machine's function depends on a precise setup and arrangement within very narrow tolerances, the greater the effort that has to be undertaken to achieve the required precision, demanding engineering solutions in which nothing can be left to chance. That is precisely the case with proteins. There is an extraordinarily limited tolerance within which proteins have to be engineered and designed, a requirement for achieving the necessary catalytic functions. That sets the bar for the cause that instantiated this state of affairs very high, and random events are entirely inadequate to clear it! The situation becomes even worse when we consider what Mathieu E. Rebeaud described as (2021): the challenge of reaching and maintaining properly folded and functional proteomes. Most proteins must fold to their native structure in order to function, and their folding is largely imprinted in their primary amino acid sequence. However, many proteins, especially large multidomain polypeptides, or certain protein types such as all-beta or repeat proteins, tend to misfold and aggregate into inactive species that may also be toxic. Life met this challenge by evolving molecular chaperones that can minimize protein misfolding and aggregation, even under stressful out-of-equilibrium conditions favoring aggregation. 25

Hays S. Rye (2013): Protein folding is a spontaneous process that is essential for life, yet the concentrated and complex interior of a cell is an inherently hostile environment for the efficient folding of many proteins. Some proteins—constrained by sequence, topology, size, and function—simply cannot fold by themselves and are instead prone to misfolding and aggregation. This problem is so deeply entrenched that a specialized family of proteins, known as molecular chaperones, assists in protein folding. The bacterial chaperonin GroEL, along with its co-chaperonin GroES, is probably the best-studied example of this family of protein-folding machines. 27

Chaperones have no function unless there are misfolded proteins that need to be refolded in order to work. But non-functional proteins accumulating in the cell would be toxic waste and eventually kill the cell. So this creates another chicken & egg problem: what came first, protein synthesis, or chaperones helping proteins to fold correctly? Consider as well that, as Jörg Martin puts it (2000): The intracellular assembly of GroEL-type chaperonins appears to be a chaperone-dependent process itself and requires functional preformed chaperonin complexes!! 26 There are machines in the cell that help other machines fold correctly, and these machines are themselves dependent on other such machines in order to fold and operate properly! Amazing!

Thorsten Hugel (2020): In a living cell, protein function is regulated in several ways, including post-translational modifications (PTMs), protein-protein interaction, or by the global environment (e.g. crowding or phase separation). While site-specific PTMs act very locally on the protein, specific protein interactions typically affect larger (sub-)domains, and global changes affect the whole protein non-specifically. Herein, we directly observe protein regulation under three different degrees of localization, and present the effects on the Hsp90 chaperone system at the levels of conformational steady states, kinetics and protein function. Interestingly using single-molecule FRET, we find that similar functional and conformational steady states are caused by completely different underlying kinetics. We disentangle specific and non-specific effects that control Hsp90’s ATPase function, which has remained a puzzle up to now. Lastly, we introduce a new mechanistic concept: functional stimulation through conformational confinement. Our results demonstrate how cellular protein regulation works by fine-tuning the conformational state space of proteins. 28

Susan Lindquist (2010): Cells also require a ubiquitin-proteasome system, targeting terminally misfolded proteins for degradation, and with translocation machineries to get proteins to their proper locations. These protein folding agents constitute a large, diverse, and structurally unrelated group. Many are upregulated in response to heat and are therefore termed heat shock proteins (HSPs).  HSP90 is one of the most conserved HSPs, present from bacteria to mammals, and is an essential component of the protective heat shock response. The role of HSP90, however, extends well beyond stress tolerance. Even in nonstressed cells, HSP90 is highly abundant and associates with a wide array of proteins (known as clients) that depend on its chaperoning function to acquire their active conformations. 20% of yeast proteins are influenced by Hsp90 function, making it the most highly connected protein in the yeast genome, and GroES mediates the folding of ~10% of proteins in E. coli.29

Short RNA-peptides, or peptides on their own, are not functional and are useless in a supposed "proto-cell" unless they have the right size and sequence, able to fold into a functional 3D conformation. In the face of this evidence, theorizing intermediate states and transitions of growing size and complexity over long periods of time until a functional state of affairs is achieved is untenable; it contradicts the evidence just described. Sophisticated, exquisite mechanisms have to be instantiated from the get-go to guarantee the correct setup and folding of full-length proteins. Such a hypothesized transition would never work: these RNA-peptides would simply lie around and, sooner or later, disintegrate. Explanations that exclude an intelligent agent are entirely inadequate to account for the origin of these high-tech engineering marvels implemented on a molecular scale!

George Church, Professor of Genetics, described the ribosome as "the most complicated thing that is present in all organisms". The peptidyl transferase center (PTC) is the core of the ribosome, where peptide bond formation occurs: the central catalytic reaction of protein synthesis, and as such of particular importance. The process is so intriguingly complex that a science paper in 2015 had to admit: "The detailed mechanism of peptidyl transfer, as well as the atoms and functional groups involved in this process are still in limbo." 19 The PTC is a ribozyme, which means it is composed of ribosomal RNAs (rRNAs). Francisco Prosdocimi (2020): The PTC region has been considered crucial in the understanding about the origins of life. It has been described as the most significant trigger that engendered a mutualistic behavior between nucleic acids and peptides, allowing the emergence of biological systems. The emergence of this proto-PTC is a prerequisite to couple a chemical symbiosis between RNAs and peptides. Of 1434 complete sequences of 23S ribosomal RNAs analyzed, it was demonstrated that site A2451 of the 23S rRNA, which is the catalytic site of the PTC, is essential for the peptide bond to occur and is absolutely preserved in each and every analyzed sequence. The PTC is known to be a flexible and efficient catalyst as it is capable of recognizing different, specific substrates (20 different amino acids bind to aminoacyl-tRNAs) and polymerizing proteins at a similar rate. 20
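The kind of claim Prosdocimi makes, absolute conservation of site A2451 across 1434 sequences, is straightforward to check computationally once an alignment is available. A toy sketch; the sequences below are hypothetical stand-ins, not real rRNA data:

```python
# Toy check of conservation at one alignment column, in the spirit of the
# A2451 analysis. Hypothetical sequences only.

def column_conservation(alignment, column):
    """Fraction of sequences sharing the most common residue at `column`."""
    residues = [seq[column] for seq in alignment]
    most_common = max(set(residues), key=residues.count)
    return residues.count(most_common) / len(residues)

toy_alignment = ["GAUCA", "GAUCA", "GCUCA", "GAUCA"]
print(column_conservation(toy_alignment, 0))  # 1.0 -> absolutely conserved site
print(column_conservation(toy_alignment, 1))  # 0.75 -> variable site
```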

Sávio T. Farias (2014): Studies reveal that the PTC has a symmetrical structure comprising approximately 180 nucleotides. Molecular structure models suggest that the catalytic portion of the 23S rRNA entities of the symmetrical region possesses the common stem-elbow-stem (SES) structural motif. 21

Let us suppose that this structure emerged in an RNA-peptide world. Let us even set aside the fact that finding a functional sequence of 180 nucleotides by chance means searching a sequence space of 4^180 (roughly 10^108), a search far beyond any plausible number of prebiotic chemical trials; for comparison, even a universe about 13.8 billion years old (~4 × 10^17 seconds) in which every atom (10^80) changes its state at the maximum rate of 10^40 times per second allows at most about 10^137 events in total. Even if we had such a core PTC, it would have no function whatsoever unless all the other players were in place to perform translation from RNA to amino acids, with the genetic code implemented and the entire chain from DNA to mRNA leading into the events of translation. All these proposals, the RNA world and the RNA-peptide world, are pipe dreams called theories when they are no more than ideas born of fertile minds rather than results grounded in scientific evidence, experimentation, and laboratory tests. They are invented scenarios, born of the need to keep an explanatory framework based on philosophical naturalism and to find answers that do not invoke a supernatural entity. All these proposals have been shown to be inadequate and doomed to failure. Biological cells are too complicated, sophisticated, integrated, and functional to warrant the belief that they could have originated by unguided means, and the ribosome is a prime example on which to rest this conclusion.
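The numbers in this argument can be checked explicitly. Assumed inputs: a universe age of ~13.8 billion years, ~10^80 atoms, and a very generous 10^40 state changes per atom per second:

```python
import math

# Upper bound on the total number of events in the universe's history.
seconds = 13.8e9 * 3.156e7            # seconds since the Big Bang, ~4.4e17
max_events = seconds * 1e80 * 1e40    # ~4.4e137 total events (generous bound)

# Sequence space of a 180-nucleotide RNA: 4 bases per position.
space = 4 ** 180
print(f"{max_events:.1e}")        # ~4.4e+137
print(round(math.log10(space)))   # 108 -> the space holds ~10^108 sequences
```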

1. Suzan Mazur: The Origin of Life Circus: A How To Make Life Extravaganza  November 30, 2014
2. PHILIP BALL: Flaws in the RNA world  12 FEBRUARY 2020
3. Eugene Koonin: The Logic of Chance: The Nature and Origin of Biological Evolution August 31, 2011
4. Natalia Szostak: Simulating the origins of life: The dual role of RNA replicases as an obstacle to evolution July 10, 2017
5. Jaroslaw Synak: RNA World Modeling: A Comparison of Two Complementary Approaches 11 April 2022
6. Jack W Szostak: The eightfold path to non-enzymatic RNA replication 03 February 2012
7. Hannes Mutschler: The difficult case of an RNA-only origin of life AUGUST 28 2019
8. Jordana Cepelewicz: Origin-of-Life Study Points to Chemical Chimeras, Not RNA September 16, 2019
9. Harris Bernstein: Origin of DNA Repair in the RNA World October 12th, 2020
10. Gerald F. Joyce: Protocells and RNA Self-Replication 2018
11. Steven A. Benner: The ‘‘Strong’’ RNA World Hypothesis: Fifty Years Old 2013 Apr;13
12. Harold S Bernhardt: The RNA world hypothesis: the worst theory of the early evolution of life (except for all the others) 2012 Jul 13
13. Eugene V Koonin: On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization 2007 May 31
14. Sami EL Khatib: Assumption and Criticism on RNA World Hypothesis from Ribozymes to Functional Cells March 12, 2021
15. Paul C. W. Davies: The algorithmic origins of life 2013 Feb 6
16. Charles W. Carter, Jr. What RNA World? Why a Peptide/RNA Partnership Merits Renewed Experimental Attention 23 January 2015
17. Dennis R. Salahub: Characterization of the active site of yeast RNA polymerase II by DFT and ReaxFF calculations 08 April 2008
18. Dimiter Kunnev: Possible Emergence of Sequence Specific RNA Aminoacylation via Peptide Intermediary to Initiate Darwinian Evolution and Code Through Origin of Life 2018 Oct 2;8
19. Hadieh Monajemi: The P-site A76 2′-OH acts as a peptidyl shuttle in a stepwise peptidyl transfer mechanism 2015
20. Francisco Prosdocimi: The Ancient History of Peptidyl Transferase Center Formation as Told by Conservation and Information Analyses 2020 Aug 5
21. Sávio T.Farias: Origin and evolution of the Peptidyl Transferase Center from proto-tRNAs 2014
22. Felix Müller: A prebiotically plausible scenario of an RNA–peptide world 11 May 2022
23. Stephen D. Fried: Peptides before and during the nucleotide world: an origins story emphasizing cooperation between proteins and nucleic acids 09 February 2022
24. Jordana Cepelewicz: The End of the RNA World Is Near, Biochemists Argue December 19, 2017
25. Mathieu E. Rebeaud:  On the evolution of chaperones and cochaperones and the expansion of proteomes across the Tree of Life May 17, 2021
26. Jörg Martin: Assembly and Disassembly of GroEL and GroES Complexes 2000
27. Hays S. Rye: GroEL-Mediated Protein Folding: Making the Impossible, Possible 2013 Sep 25
28. Thorsten Hugel: Controlling protein function by fine-tuning conformational flexibility 2020 Jul 22
29. Susan Lindquist: HSP90 at the hub of protein homeostasis: emerging mechanistic insights 2010 Jul;11
30. F Egami: A working hypothesis on the interdependent genesis of nucleotide bases, protein amino acids, and primitive genetic code 1981 Sep;11
31. Charles W Carter, Jr: Interdependence, Reflexivity, Fidelity, Impedance Matching, and the Evolution of Genetic Coding 24 October 2017
32. Jessica C. Bowman: The Ribosome Challenge to the RNA World  20 February 2015
33. Jordana Cepelewicz: Life’s First Molecule Was Protein, Not RNA, New Model Suggests November 2, 2017

https://reasonandscience.catsboard.com

The Cell factory maker, Paley's watchmaker argument 2.0 - Chapter 7 - Sun Jul 10, 2022 11:48 am

Otangelo


Admin

Chapter 7

https://reasonandscience.catsboard.com/t2809-on-the-origin-of-life-by-the-means-of-an-intelligent-designer#9369

Biosemiotic information
So far, I have dealt mostly with the physical aspect of life and the origin of the basic building blocks. In this chapter, we will take a closer look at a fundamental and essential aspect of life: the information stored in biomolecules. Life is more than physics and chemistry. In a conversation with J. England, Paul Davies succinctly described life as chemistry + information 1. Witzany (2015) gave a similar description: "Life is physics and chemistry and communication." 2 It's even more than just information. Life employs advanced languages, analogous to human languages.

Paul Davies (2013): Chemistry is about substances and how they react, whereas biology appeals to concepts such as information and organization. Informational narratives permeate biology. DNA is described as a genetic "database", containing "instructions" on how to build an organism. The genetic "code" has to be "transcribed" and "translated" before it can act. And so on. If we cast the problem of life's origin in computer jargon, attempts at chemical synthesis focus exclusively on the hardware – the chemical substrate of life – but ignore the software – the informational aspect. To explain how life began we need to understand how its unique management of information came about. In the 1940s, the mathematician John von Neumann compared life to a mechanical constructor, and set out the logical structure required for a self-reproducing automaton to replicate both its hardware and software. But Von Neumann's analysis remained a theoretical curiosity. Now a new perspective has emerged from the work of engineers, mathematicians and computer scientists, studying the way in which information flows through complex systems 3

Sungchul Ji (2006): Biological systems and processes cannot be accounted for solely on the basis of the laws of physics and chemistry. They require in addition the principles of semiotics, the science of symbols and signs, including linguistics. It was von Neumann who first recognized the interrelationship required for self-replication: symbol-matter complementarity. Linguistics provides a fundamental principle to account for the structure and function of the cell. Cell language has counterparts to 10 of the 13 design features of human language characterized by Hockett and Lyon. 4

Cells are information-driven factories
Specified complex information observed in biomolecules dictates and directs the making of irreducibly complex molecular machines, robotic molecular production lines, and chemical cell factories. In other words: cells carry a codified description of themselves in digital form, stored in genes, and have the machinery to transform that blueprint, through information transfer from genotype to phenotype, into an identical representation in analog 3D form, the physical 'reality' of that description. No law of physics or chemistry is known to specify that A should represent, or be assigned to mean, B. The cause of a machine's or a factory's functionality has only ever been found in the mind of the engineer, and nowhere else.

Paul Davies (1999): How did stupid atoms spontaneously write their own software … ? Nobody knows … … there is no known law of physics able to create information from nothing. 5

Timothy R. Stout (2019): A living cell may be viewed as an information-driven machine. 6

David L Abel (2005): An algorithm is a finite sequence of well-defined, computer-implementable instructions. Genetic algorithms instruct sophisticated biological organization. A linear, digital, cybernetic string of symbols representing syntactic, semantic, and pragmatic prescription.  Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel.   Prescriptive sequences are called "instructions" and "programs." They are algorithmically complex sequences. They are cybernetic. 7

G. F. Joyce (1993): A blueprint cannot produce a car all by itself without a factory and workers to assemble the parts according to the instructions contained in the blueprint; in the same way, the blueprint contained in RNA cannot produce proteins by itself without the cooperation of other cellular components which follow the instructions contained in the RNA. 8

Claus Emmeche (1991): Biological systems start from the (digital) axioms and definitions and develop an analogic three-dimensional geometry: an instance of the morphology of life. 9

Is the claim that DNA stores information just a metaphor? 
There has been a long-standing dispute: Is DNA a code? Does DNA store information in a literal sense, or is that just a metaphor? Many have objected, claiming that DNA and its information content can be described as storing information, or using a code, only in a metaphorical sense, not literally. Some have also claimed that DNA is just chemistry. This has caused a great deal of confusion.

Sergi Cortiñas Rovira (2008): The most popular metaphor is the one of information (DNA = information). It is an old association of ideas that dates back to the origins of genetics, when research was carried out into the molecule (initially thought to be proteins) that should have contained the information to duplicate cells and organisms. In this type of popularisation model, DNA was identified with many everyday-use objects able to store information: a computer file of living beings, a database for each species, or a library with all the information about an individual. To Dawkins, the human DNA is a “user guide to build a living being” or “the architect’s designs to build a building”. 10

Massimo Pigliucci (2010): Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of "blueprints" for the construction of organisms. Modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists' use of the language of information and blueprints to make their spurious case. In this article we illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public. We will see that analogies between living organisms and machines or programs (what we call "machine-information metaphors") are in fact highly misleading in several respects.

This is the claim. How does Pigliucci justify his accusation? He continues:

"Direct encoding systems", such as human-designed software, suffer from "brittleness", that is, they break down if one or a few components stop working, as a result of the direct mapping of instructions to outcomes. If we think of living organisms as based on genetic encoding systems—like blueprints—we should also expect brittleness at the phenotypic level which, despite the claims of creationists and ID supporters that we have encountered above, is simply not observed. Indeed, the fact that biological organisms cannot possibly develop through a type of direct encoding of information is demonstrated by calculations showing that the gap between direct genetic information (about 30,000 protein-coding genes in the human genome) and the information required to specify the spatial position and type of each cell in the body is of several orders of magnitude. Where does the difference come from? An answer that is being explored successfully is the idea that the information that makes development possible is localized and sensitive (as well as reactive) to the conditions of the immediate surroundings. In other words, there is no blueprint for the organism, but rather each cell deploys genetic information and adjusts its status to signals coming from the surrounding cellular environment, as well as from the environment external to the organism itself. 11

The answer to this claim is a resounding no. What defines organismal architecture and body plans, phenotypic complexity, anatomical novelty, and the capacity for adaptation is preprogrammed, prescribed, instructional complex information encoded through (at least) 33 variations of genetic codes and 45 epigenetic codes, together with complex communication networks using signaling that act on a structural level in an integrated, interlocked fashion, pre-programmed to respond to nutritional demands and environmental cues and to control reproduction, homeostasis, metabolism, defense systems, and cell death. So the correct answer is that the phenomena Pigliucci describes, including the fact that genes alone do not explain phenotype, are explained not by denying that genes literally store information, but by recognizing that even more prescribing, instructional information is in operation, also on the epigenetic level. Pigliucci's claims mislead entirely, in the opposite direction of the truth.

Pigliucci argues that phenotypes are fault-tolerant—to use software engineering terminology—because they are not brittle: giving up talk of blueprints and computer programs immediately purchases an understanding of why living organisms are not, in fact, irreducibly complex.

Agreed, the metaphor of a blueprint or computer program might be faulty, or not fully up to the task of describing what goes on in biological information systems. But that is not because such terms fail to describe the state of affairs literally; it is because they do not fully convey the sophistication, the superb information engineering feat, at work in living things, compared to which everything we intelligent human agents have come up with is pale, rudimentary, and primitive.

Richard Dawkins (2008): After the seventh minute of his speech, Dawkins admits: "Can you think of any other class of molecule that has that property, of folding itself up into a uniquely characteristic enzyme, of which there is an enormous repertoire, capable of catalyzing an enormous repertoire of chemical reactions, and this in itself absolutely determined by a digital code?" 12

Hubert Yockey (2005): Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies. 13

Barry Arrington (2013): Here’s an example of an arbitrary arrangement of signs: DOG. This is the arrangement of signs English speakers use when they intend to represent Canis lupus familiaris. In precise semiotic parlance, the word “dog” is a “conventional sign” for Canis lupus familiaris among English speakers. Here, “conventional” is used in the sense of a “convention” or an agreement. In other words, English speakers in a sense “agree” that “dog” means Canis lupus familiaris.

Now, the point is that there is nothing inherent in a dog that requires it to be represented in the English language with the letters “D” followed by “O” followed by “G.”  If the rules of the semiotic code (i.e., the English language) were different, the identical purpose could be accomplished through a different arrangement of signs.  We know this because in other codes the same purpose is accomplished with vastly different signs.  In French the purpose is accomplished with the following arrangement of signs:  C H I E N.  In Spanish the purpose is accomplished with the following arrangement of signs:  P E R R O.  In German the purpose is accomplished with the following arrangement of signs:  H U N D.

In each of the semiotic codes the purpose of signifying an animal of the species Canis lupus familiaris is accomplished through an arbitrary set of signs.  If the rules of the code were different, a different set of signs would accomplish the identical purpose.  For example, if, for whatever reason, English speakers were collectively to agree that Canis lupus familiaris should be represented by “B L I M P,” then “blimp” would accomplish the purpose of representing Canis lupus familiaris just as well as “dog.”

How does this apply to the DNA code? The arrangement of signs constituting a particular instruction in the DNA code is arbitrary in the same way that the arrangement of signs for representing Canis lupus familiaris is arbitrary. For example, suppose in a particular strand of DNA the arrangement “AGC” means “add amino acid X.” There is nothing about amino acid X that requires the instruction “add amino acid X” to be represented by “AGC.” If the rules of the code were different, the same purpose (i.e., instructing the cell to “add amino acid X”) could be accomplished using “UAG” or any other combination. Thus, the sign AGC is “arbitrary” in the sense UB was using the word.

Why is all of this important to ID? It is important because it shows that the DNA code is not analogous to a semiotic code. It is isometric with a semiotic code. In other words, the digital code embedded in DNA is not “like” a semiotic code, it “is” a semiotic code. This in turn is important because there is only one known source for a semiotic code: intelligent agency. Therefore, the presence of a semiotic code embedded within the cells of every living thing is powerful evidence of design, and the burden is on those who would deny design to demonstrate how a semiotic code could be developed through blind chance or mechanical law or both. 14

DNA is a semantophoretic molecule (a biological macromolecule that stores genetic information). RNA and DNA are analogous to a computer hard disk. DNA monomers are joined into long strings (like the wagons of a train) made up of the four nucleobases (adenine, guanine, cytosine, and thymine; uracil in RNA). The aperiodic sequence of nucleotides carries instructional information that directs the assembly and polymerization of amino acids in the ribosome, forming polymer strands that make up proteins, the molecular workers of the cell.

No one who understands the subject should argue that the information stored in DNA is so called merely as a “metaphor”, meaning that it only ‘looks like’ coded information and information processing but is not really so. That claim is blatantly false. The trinucleotide codon "words" lined up in the sequence of nucleotides stored in DNA work exactly in parallel to the way the alphabetic letters in this sentence are arranged and work. The words that I write here have symbolic meanings that you can look up in a dictionary, and I have strung them together in a narrative sequence to tell you a story about biological information. Each codon of the genetic code has a symbolic meaning that a cell (and you) can look up in a ‘dictionary’, the genetic code table, and codons are strung together in sequences that have meaning for the workings of the cell. The cell exercises true information storage, retrieval, and processing, resulting in the functional proteins required to make a living organism, and no person educated in biology would deny it.

DNA and RNA are the hardware, and the specified complex sequence of nucleotides is the software. That information is conveyed using a genetic code, a set of rules in which meaning is assigned to trinucleotide codon words. The information in DNA is first transcribed to messenger RNA (mRNA), which acts like a courier carrying a message from A to B, and then translated in the ribosome. A set of three nucleotides (a trinucleotide) forms a codon. Life uses 64 codon "words" that are assigned, or mapped, to 20 (in certain cases 22) amino acids. Origin-of-life researchers are confronted with the problem of explaining the origin of the complex, specified (or instructional assembly) information stored in DNA, and on top of that, the origin of the genetic code. These are two often conflated but very distinct problems, and much of the confusion comes from the ambiguity in the term "genetic code". Here is a quote from Francis Crick, who seems to have coined the term: Unfortunately the phrase "genetic code" is now used in two quite distinct ways. Laymen often use it to mean the entire genetic message in an organism. Molecular biologists usually mean the little dictionary that shows how to relate the four-letter language of the nucleic acids to the twenty-letter language of the proteins, just as the Morse code relates the language of dots and dashes to the twenty-six letters of the alphabet… The proper technical term for such a translation is, strictly speaking, not a code but a cipher. In the same way, the Morse code should really be called the Morse cipher. I did not know this at the time, which was fortunate because "genetic code" sounds a lot more intriguing than "genetic cipher".

The specification, from triplet codon to amino acid, is called a cipher. It is like a translation from one language to another. We can use, for example, Google Translate: we type the English word "language", and the program translates it into the German "Sprache", equivalent to "language" in English. As in all translation, there must be someone or something that is bilingual, in this case to turn the coded instructions written in nucleic-acid language into a result written in amino-acid language. In cells, the adaptor molecule tRNA performs this task. One end of the tRNA mirrors the codons on the messenger RNA, and the other end is attached to the amino acid that is coded for. The correct amino acid is attached to the correct tRNA by an enzyme called aminoacyl-tRNA synthetase. This raises an even tougher problem concerning the coding assignments, i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids do not recognize each other directly but have to deal via the tRNA chemical intermediary, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance? That is what Crick proposed. How could that not be called an "ad hoc" assertion, made in the face of no other reasonable or likely explanation - unless, of course, we permit the divine into the picture?

One problem deals with sequence specificity, the other with mapping, or assigning, the meaning of one biomolecule to another: the DNA codon TTA (Thymine - Thymine - Adenine) is assigned to the amino acid leucine (Leu). That means that when an mRNA strand carrying the corresponding codon (UUA) enters the ribosome translation machine, specialized molecules (tRNAs, aminoacyl-tRNA synthetases, etc.) are recruited, leucine is picked and added to the growing, elongating polymer strand being formed in the ribosome, which will, in the end, fold into a very specific, functional 3D configuration and be part of a protein bearing a precise function in the cell. As the instructions of a floor plan or a blueprint direct the making of a machine, so does the information (conveyed in the sequence of trinucleotide codons) direct the making of molecular machines. There is a precise 1:1 analogy here. But it goes further than that. Individual machines often operate in a joint venture with other machines, composing production lines, being part of a team that constructs products that are still just intermediate products, which only later are assembled together with other intermediate products to form a functional device of highly integrated complexity. Metadata is necessary, that is, diverse levels of information that operate together. DNA contains coding and non-coding genes. Non-coding genes are employed in the gene regulatory network. They dictate the timeframe in which genes are expressed and orchestrate the spatiotemporal pattern by which individual cells, or in multicellular organisms the embryo, develop. This is the second level of DNA information.
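The codon-to-amino-acid assignment described above can be sketched as a decode-table lookup. The following is a minimal illustrative sketch, not a model of actual cellular translation; only a handful of the 64 codons of the standard genetic code table are included:

```python
# A minimal sketch of the codon "cipher": a decode table mapping mRNA
# triplet codons to amino acids. Only a few of the 64 codons are shown.
CODON_TABLE = {
    "UUA": "Leu", "UUG": "Leu",                   # leucine (six codons in total)
    "AUG": "Met",                                 # methionine, also the start codon
    "UGG": "Trp",                                 # tryptophan
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",  # stop codons
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three bases at a time, look each codon up
    in the decode table, and stop at a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "Stop":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUAUGGUAA"))  # ['Met', 'Leu', 'Trp']
```

The point the sketch makes concrete is that the mapping lives entirely in the table: swap the table entries and the same chemistry would yield a different protein, which is exactly the arbitrariness discussed above.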

Paul Davies (2013): The significant property of biological information is not its complexity, great though that may be, but the way it is organised hierarchically. In all physical systems there is a flow of information from the bottom upwards, in the sense that the components of a system serve to determine how the system as a whole behaves.  3

So why has this been such a conundrum? Because many want to avoid the design inference at all costs. Hume wrote in the form of a dialogue between three characters: Philo, Cleanthes, and Demea. The design argument is spoken by Cleanthes in Part II of the Dialogues: The curious adapting of means to ends, throughout all nature, resembles exactly, though it much exceeds, the productions of human contrivance, of human design, thought, wisdom and intelligence. Since therefore the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble; and that the Author of Nature is somewhat similar to the mind of man, though possessed of much larger faculties, proportioned to the grandeur of the work executed. By this argument a posteriori, and by this argument alone, do we prove at once the existence of a Deity, and his similarity to human mind and intelligence.

Does DNA store prescriptive, or descriptive information? 
One common misconception is that natural principles are merely discovered and described by us; in other words, that life is just chemistry, and we simply describe the state of affairs going on there. Consider two cans of Coca-Cola, one regular, the other diet. Both bear information that we can describe: the information transmitted to us that one can contains Coca-Cola and the other Diet Coke. But that information does not occur naturally. A chemist invented the formula for Coke and for Diet Coke, and that depends not on descriptive but on PREscriptive information. The same occurs in nature. We discover that DNA contains a genetic code, but the rules upon which the genetic code operates are prescriptive. The rules are arbitrary; the genetic code is constrained to behave in a certain way. What genes store is information organized like a library (the genome) storing many books (genes), each containing either the instructions, the know-how, to make proteins, or, in the non-coding section, regulatory elements (promoters, enhancers, silencers, insulators, microRNAs (miRNAs), etc.) that work like a program, directing and controlling the operation of the cell, for example determining when a gene has to be expressed (when the information in a gene has to be transcribed and translated). This is information that prescribes how to assemble and operate the cell factory, so it is prescriptive information.

How exactly is information related to biology?
It is related in several ways; I will address two of them. DNA contains information in the sense that its nucleotide sequences, or arrangements of characters, instruct how to produce a specific amino acid chain that will fold into a functional form. DNA base sequences convey instructions. They perform functions and produce specific effects. Thus, they possess not only statistical information but instructional assembly information.

Instructional assembly information
Paul Davies, The Origin of Life (2003), page 18: Biological complexity is instructed complexity or, to use modern parlance, it is information-based complexity. Inside each and every one of us lies a message. It is inscribed in an ancient code, its beginnings lost in the mists of time. Decrypted, the message contains instructions on how to make a human being. The message isn't written in ink or type, but in atoms, strung together in an elaborately arranged sequence to form DNA, short for deoxyribonucleic acid. It is the most extraordinary molecule on Earth. Although DNA is a material structure, it is pregnant with meaning. The arrangement of the atoms along the helical strands of your DNA determines how you look and even, to a certain extent, how you feel and behave. DNA is nothing less than a blueprint, or more accurately an algorithm or instruction manual, for building a living, breathing, thinking human being. We share this magic molecule with almost all other life forms on Earth. From fungi to flies, from bacteria to bears, organisms are sculpted according to their respective DNA instructions. Each individual's DNA differs from others in their species (with the exception of identical twins), and differs even more from that of other species. But the essential structure – the chemical make-up, the double helix architecture – is universal. 15

Tan, Change; Stadler, Rob (2020): In DNA and RNA, no chemical or physical forces impose a preferred sequence or pattern upon the chain of nucleotides. In other words, each base can be followed or preceded by any other base without bias, just as the bits and bytes of information on a computer are free to represent any sequence without bias. This characteristic of DNA and RNA is critical—in fact, essential—for DNA and RNA to serve as unconstrained information carriers. However, this property also obscures any natural explanation for the information content of life—the molecules themselves provide no explanation for the highly specific sequence of nucleotides required to code for specific biologic functions. Only two materialistic explanations have been proposed for the information content of life: fortuitous random arrangements that happen to be functional or the combination of replication, random mutations, and natural selection to improve existing functionality over time. 16
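The point that each nucleotide position is an unconstrained choice among four bases can be quantified. A small illustrative calculation, under the idealized assumption of equal, unbiased base usage:

```python
import math

# With no chemical bias constraining which base follows which, each
# position in a strand can hold any of 4 bases: log2(4) = 2 bits per
# nucleotide, exactly like an unconstrained digital storage medium.
bits_per_base = math.log2(4)
print(bits_per_base)  # 2.0

# The number of distinct sequences of length n is 4**n; for a 10-mer:
n = 10
print(4 ** n)             # 1048576 possible 10-base sequences
print(n * bits_per_base)  # 20.0 bits of raw storage capacity
```

This is the sense in which DNA is an "unconstrained information carrier": the capacity grows exponentially with length, and nothing in the chemistry singles out any one of those sequences.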


A section of Alosa pseudoharengus (a fish) mitochondrion DNA. This reference sequence continues on all the way up to 16,621 “letters.” Each nucleotide is a physical symbol vehicle in a material symbol system. The specific selection of symbols and their syntax (particular sequencing) prescribes needed three-dimensional molecular structures and metabolic cooperative function prior to natural selection’s participation. (Source: http://www.genome.jp/dbget-bin/www_bget?refseq+NC_009576).

David L. Abel (2009): The figure above  shows the prescriptive coding of a section of DNA. Each letter represents a choice from an alphabet of four options. The particular sequencing of letter choices prescribes the sequence of triplet codons and ultimately the translated sequencing of amino acid building blocks into protein strings. The sequencing of amino acid monomers (basically the sequencing of their R groups) determines minimum Gibbs-free-energy folding into secondary and tertiary protein structure. It is this three-dimensional structure that provides “lock-and-key” binding fits, catalysis, and other molecular machine formal functions. The sequencing of nucleotides in DNA also prescribes highly specific regulatory micro RNAs and other epigenetic factors. Thus linear digital instructions program cooperative and holistic metabolic proficiency. 17

George M Church (2012): DNA is among the densest and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing. 18
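The idea of DNA as a digital storage medium can be illustrated with a toy round-trip encoder. Note that this two-bits-per-base mapping is an illustrative assumption for the sketch, not Church's actual scheme (his encoding used one bit per base, with redundancy for error tolerance):

```python
# A simplified sketch of storing digital data in DNA, assuming an
# arbitrary 2-bits-per-base mapping (00->A, 01->C, 10->G, 11->T).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a base sequence, two bits per nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Invert encode(): read bases back into bits, then into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)          # CAGACGGC
print(decode(strand))  # b'Hi'
```

The round trip works only because writer and reader share the same decode table in advance, the same precondition Marshall describes below for any communication system.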

Peter R. Wills (2016): The biological significance of DNA lies in the role it plays as a carrier of information, especially across generations of reproducing organisms, and within cells as a coded repository of system specification and stability. 19

David L Abel (2005): Genes are not analogous to messages; genes are messages. 7

Leroy Hood (2003): The value of having an entire genome sequence is that one can initiate the study of a biological system with a precisely definable digital core of information for that organism — a fully delineated genetic source code: genes that encode the protein and RNA molecular machines of life, and the regulatory networks that specify how these genes are expressed in time, space and amplitude. 20

Information related to the genetic code
Information is divided into five levels. These can be illustrated with a STOP sign.
The first level, statistics, tells us the STOP sign is one word and has four letters. It is related to the improbability of a sequence of symbols (or the uncertainty to obtain it).
The second level, syntax, requires the information to fall within the rules of grammar such as correct spelling, word, and sentence usage. The word STOP is spelled correctly.
The third level, semantics, provides meaning and implications. The STOP sign means that when we walk or drive and approach the sign we are to stop moving, look for traffic and proceed when it is safe.
The fourth level, pragmatics, is the application of the coded message. It is not enough to simply recognize the word STOP and understand what it means; we must actually stop when we approach the sign.
The fifth level, apobetics, is the overall purpose of the message. The STOP signs are placed by our local government to provide safety and traffic control.

The code in DNA completely conforms to all five of these levels of information.

Perry Marshall, Evolution 2.0: The alphabet (symbols), syntax (grammar), and semantics (meaning) of any communication system must be determined in advance before any communication can take place. Otherwise, you could never be certain that what the transmitter is saying is the same as what the receiver is hearing. It’s like when you visit a Russian website and your browser doesn’t have the language plug-in for Russian. The text just appears as a bunch of squares. You would never have any idea if the Russian words were spelled right. When a message’s meaning is not yet decided, it requires intentional action by conscious agents to reach a consensus. The simple process of creating a new word in English, like a blog, requires speakers who agree on the meaning of the other words in their sentences. Then they have to mutually agree to define the new word in a specific way. Once a word is agreed upon, it is added to the dictionary. The dictionary is a decode table for the English language. Even if noise might occasionally give you a real word by accident, it could never also tell you what that word means. Every word has to be defined by mutual agreement and used in the correct context in order to have meaning. 21

Okay, you probably wish you could see an example of how that works in the cell, right? Let's make an analogy. Suppose you have a recipe for spaghetti with a special tomato sauce written in a Word document saved on your computer. You have a Japanese friend and communicate with him only through Google Translate. Now he wants to try out that recipe and asks you to send him a copy. So you write an email, attach the Word document, and send it to him. When he receives it, he uses Google Translate and gets the recipe in Japanese, written in kanji, the logographic Japanese characters he understands. With the information at hand, he can make the spaghetti with that fine special tomato sauce exactly as described in the recipe. For that communication to happen, you use the 26 letters of the alphabet at your end to write the recipe, and your friend has 2,136 kanji characters that permit him to understand the recipe in Japanese. Google Translate does the translation work.

While the recipe is written in a Word document saved on your computer, in the cell the recipe (the instructions, or master plan) for the construction of proteins, the life-essential molecular machines and veritable workhorses of the cell, is written in genes through DNA. While you use the 26 letters of the alphabet to write your recipe, the cell uses DNA and its four monomer "letters", the deoxyribonucleotides. Kanji has 2,136 characters, the alphabet has 26, and binary computer code has just two symbols, 0 and 1. The language of DNA is digital, but not binary: where binary encoding has only 0 and 1 to work with (two symbols, hence "binary"), DNA uses four different organic bases, which are adenine (A), guanine (G), cytosine (C), and thymine (T). DNA stores genetic information in codons, the equivalent of words, each consisting of an array of three DNA nucleotides. While you used sentences to write the spaghetti recipe, the equivalent sentences in the cell are called genes, written through codon "words". With four possible nucleobases, three nucleotides give 4^3 = 64 different possible "words" (tri-nucleotide sequences). In the standard genetic code, three of these 64 codons (UAA, UAG, and UGA) are stop codons. There has to be a mechanism to extract the information in the genome and send it to the ribosome, the factory that makes proteins, which sits at another place in the cell, free-floating in the cytoplasm. The message contained in the genome is transcribed by a very complex molecular machine called RNA polymerase. It makes a transcript, a copy of the message in the genome, and that transcript, called messenger RNA (mRNA), is sent to the ribosome. In communications and information processing, a code is a system of rules to convert information, such as a letter or word, into another form (another word, letter, etc.). In translation, the 64 genetic codons are assigned to 20 amino acids.
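The codon arithmetic above can be checked with a short Python sketch. The miniature codon table below is only an illustrative subset of the standard genetic code (the real table assigns all 64 codons); the `translate` helper is invented for illustration.

```python
from itertools import product

BASES = "ACGU"  # RNA alphabet; in DNA, thymine (T) takes the place of uracil (U)

# All possible triplet "words": 4^3 = 64 codons
codons = ["".join(p) for p in product(BASES, repeat=3)]
assert len(codons) == 64

# Illustrative subset of the standard genetic code (codon -> amino acid);
# "*" marks the three stop codons UAA, UAG, UGA.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna):
    """Read an mRNA string three letters at a time until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":          # stop codon: release the finished chain
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("AUGUUUGGCAAAUAA"))  # Met-Phe-Gly-Lys
```

The ribosome, of course, does far more than this lookup suggests; the sketch only illustrates the codon-to-amino-acid assignment itself.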
The genetic code refers to the assignment of codons to amino acids, and is thus the cornerstone template underlying the translation process. Assignment means designating, ascribing, corresponding, and correlating. The ribosome does basically what Google Translate does. But while Google Translate just renders the recipe in another language, and our Japanese friend still has to cook the spaghetti, the ribosome actually produces the end product, proteins, in one step. Imagine the brainpower involved in the entire process, from inventing the recipe to the spaghetti arriving on the table of your Japanese friend. What is involved?

1. Your imagination of the recipe
2. Inventing an alphabet, a language
3. Inventing the medium to write down the message
4. Inventing the medium to store the message
5. Storing the message in the medium
6. Inventing the medium ( the machine) to extract the message
7. Inventing the medium to send the message
8. Inventing the second language (Japanese)
9. Inventing the translation code/cipher from your language to Japanese
10. Making the machine that performs the translation
11. Programming the machine to know both languages, to make the translation
12. Making (performing) the translation
13. Making the spaghetti on the other end using the recipe in Japanese

1. Creating a recipe to make a cake is always a mental process. Creating a blueprint to make a machine is always a mental process.
2. To suggest that a physical process can create instructional assembly information, a recipe or a blueprint, is like suggesting that throwing ink on paper will create a blueprint. It is never going to happen!
3. Physics and chemistry alone do not possess the tools to create a concept, or functional complex machines made of interlocked parts for specific purposes.
4. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
5. DNA stores codified information to make proteins, and cells, which are chemical factories in a literal sense.

Information is not physical
Robert Alicki (2014): Information is a disembodied abstract entity independent of its physical carrier. Information is always tied to a physical representation: it is represented by engraving on a stone tablet, a spin, a charge, a hole in a punched card, a mark on paper, or some other equivalent. Information is neither classical nor quantum; it is independent of the properties of the physical systems used for its processing. 3

Paul C. W. Davies (2013): The key distinction between the origin of life and other ‘emergent’ transitions is the onset of distributed information control, enabling context-dependent causation, where an abstract and non-physical systemic entity (algorithmic information) effectively becomes a causal agent capable of manipulating its material substrate. Biological information is functional due to the right sequence. A variety of terms have been employed for measuring functional biological information: complex specified information (CSI), Functional Sequence Complexity (FSC), and instructional complex information. I like the term instructional because it defines accurately what is being done, namely instructing the right sequence of amino acids to make proteins, and also the sequence of messenger RNA, which is used for gene regulation and a variety of as yet unexplored functions. Another term is prescriptive information (PI). It, too, describes accurately what genes do: they prescribe how proteins have to be assembled. But it also smuggles in a meaning that is highly disputed between proponents of intelligent design and of unguided evolution: prescribing implies that an intelligent agency preordained the nucleotide sequence in order for it to be functional. 22

David L Abel (2012): Biological information frequently manifests its “meaning” through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. In addition to algorithm execution, there needs to be an assembly algorithm. Any manufacturing engineer knows that nothing (in production) is built without plans that precisely define orders of operations to properly and economically assemble components to build a machine or product. There must, by necessity, be an order of operations to construct biological machines. This is because biological machines are neither chaotic nor random, but are functionally coherent assemblies of proteins/RNA elements. An algorithm is a set of rules or procedures that precisely defines a finite sequence of operations. These instructions prescribe a computation or action that, when executed, will proceed through a finite number of well-defined states that lead to specific outcomes. One of the greatest enigmas of molecular biology is how codonic linear digital programming is not only able to anticipate what the Gibbs free energy folding will be, but actually prescribes that eventual folding through its sequencing of amino acids. Much the same as a human engineer, the nonphysical, formal PI instantiated into linear-digital codon prescription makes use of physical realities like thermodynamics to produce the needed globular molecular machines. 23

An algorithm is a finite sequence of well-defined, computer-implementable instructions resulting in precise intended functions. A prescriptive algorithm in a biological context can be described as performing control operations using rules, axioms, and coherent instructions. These instructions are performed using a linear, digital, cybernetic string of symbols representing syntactic, semantic, and pragmatic prescriptive information. Cells host algorithmic programs for cell division and cell death, enzymes pre-programmed to perform DNA splicing, and programs for dynamic changes of gene expression in response to a changing environment. Cells use pre-programmed adaptive responses to genomic stress, pre-programmed genes for fetal development regulation, temporal programs for genome replication, pre-programmed animal genes dictating behaviors including reflexes and fixed action patterns, pre-programmed biological timetables for aging, etc. A programming algorithm is like a recipe that describes the exact steps needed to solve a problem or reach a goal. We've all seen food recipes: they list the ingredients needed and a set of steps for how to make a meal. An algorithm is just like that. A programming algorithm describes how to do something, and it will be done exactly that way every time.
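The recipe analogy can be made concrete with a minimal sketch: a fixed, finite list of well-defined steps that produces the same sequence of states every time it is executed. The step names are, of course, invented for illustration.

```python
# A toy "algorithm as recipe": a finite, ordered list of well-defined steps.
# Executing it always walks the same states in the same order.
RECIPE = [
    ("boil", "water"),
    ("add", "spaghetti"),
    ("simmer", "tomato sauce"),
    ("combine", "spaghetti + sauce"),
]

def execute(recipe):
    """Run the steps in order and return a log of the states visited."""
    log = []
    for step, (action, target) in enumerate(recipe, start=1):
        log.append(f"step {step}: {action} {target}")
    return log

for line in execute(RECIPE):
    print(line)
```

Running `execute` twice on the same recipe yields the identical log, which is the point of the analogy: an algorithm is deterministic and finite.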

Albert Voie (2006): Life expresses both function and sign systems. Due to the abstract character of function and sign systems, life is not a subsystem of natural laws. This suggests that our reason is limited in respect to solving the problem of the origin of life and that we are left accepting life as an axiom. Memory-stored controls transform symbols into physical states. Von Neumann made no suggestion as to how these symbolic and material functions in life could have originated. He felt, "That they should occur in the world at all is a miracle of the first magnitude."

The Cell factory maker, Paley's watchmaker argument 2.0 Von_ne11

No natural law restricts the possibility-space of a written or spoken text. Languages are indeed abstract and non-physical, and it is really easy to see that they are subsystems of the mind and belong to another category of phenomena than subsystems of the laws of nature, such as molecules. Another similar set of subsystems is functional objects. In the engineering sense, a function is a goal-oriented result coming from an intelligent entity. The origin of a machine cannot be explained solely as a result of physical or chemical events. Machines can go wrong and break down - something that does not happen to laws of physics and chemistry. In fact, a machine can be smashed and the laws of physics and chemistry will go on operating unfailingly in the parts remaining after the machine ceases to exist. Engineering principles create the structure of the machine which harnesses energy based on the laws of physics for the purposes the machine is designed to serve. Physics cannot reveal the practical principles of design or coordination which are the structure of the machine. The cause leading to a machine’s functionality is found in the mind of the engineer and nowhere else.

In life, there is interdependency between biological sign systems, data, and the construction, machine assembly, and operation, that is directed by it. The abstract sign-based genetic language stores the abstract information necessary to build functional biomolecules. 

Von Neumann believed that life was ultimately based on logic. Von Neumann’s abstract machine consisted of two central elements: a Universal Computer and a Universal Constructor. The Universal Constructor builds another Universal Constructor based on the directions contained in the Universal Computer. When finished, the Universal Constructor copies the Universal Computer and hands the copy to its descendant. As a model of a self-replicating system, it has its counterpart in life where the Universal Computer is represented by the instructions contained in the genes, while the Universal Constructor is represented by the cell and its machinery. 24

On the one side, there is the computer storing the data; on the other, the construction machines. The construction machines build/replicate and make another identical construction machine, based on the data stored in the computer. Once finished, the construction machines copy the computer and the data and hand them down to the descendant. As a model of a self-replicating system, it has its counterpart in life, where the computer is represented by the instructions contained in the genes, while the construction machines are represented by the cell and its machinery that transcribes, translates, and replicates the information stored in genes. RNA polymerase transcribes, and the ribosome translates, the information stored in DNA, producing a faithful reproduction of the cell and all the machinery inside it. Once done, the genome is replicated and handed over to the descendant cell: the mother cell has produced a daughter cell. The entire process of self-replication is data-driven and based on a sequence of events that can only be instantiated by understanding and knowing the right sequence of events. There is an interdependence of data and function. The function is performed by machines that are constructed based on the data instructions. The cause capable of instantiating such a sequence of events and state of affairs has never been shown to be anything other than a mind.
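Von Neumann's scheme, a constructor that builds hardware from a stored description and then hands a copy of the description to its offspring, can be sketched in a few lines of Python. The class and part names are illustrative only, not a model of real cellular machinery.

```python
class Machine:
    """Toy von Neumann replicator: 'hardware' is built from a stored
    description (the tape), and the tape is copied into each descendant."""

    def __init__(self, tape):
        self.tape = tape                   # the stored description (the 'computer')
        self.parts = self.construct(tape)  # the 'hardware' built from it

    @staticmethod
    def construct(tape):
        # Build components exactly as directed by the description.
        return [f"part:{symbol}" for symbol in tape]

    def replicate(self):
        # 1) construct a new machine from the description,
        # 2) hand the descendant its own copy of the description.
        return Machine(list(self.tape))

mother = Machine(["ribosome", "polymerase", "membrane"])
daughter = mother.replicate()
assert daughter.tape == mother.tape    # description copied to the descendant
assert daughter.parts == mother.parts  # identical machinery rebuilt from it
```

The two asserts capture the interdependence the text describes: the machinery is built from the data, and the data only persists because the machinery copies it.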

Timothy R. Stout (2019): A body of information is stored in a genome within the cell. Cellular “hardware” then reads, decodes, and uses the information. The information drives the operation in a manner analogous to how software in a computer drives computer hardware. In both cases, proper information needs to be available for use by functioning hardware which in turn is controlled by it. The gradual step-by-step developmental processes characteristic of evolution are not compatible with the first appearance of a computer. There is a minimum amount of functioning information required for computer operation. The information and hardware must interact with each other in a very intricate, intertwined manner. The minimum amounts required for each are staggeringly complex. In industry, a computer needs to be designed before it is fabricated. The probability is virtually zero for an unguided, random combination of logic gates to form a functioning computer, complete with internal memory, memory address logic, data registers, a central processing unit, data input and output components, control signal inputs and outputs, and connections between internal components. Beyond this, there are no known means for random combinations of logic to generate a body of information tailored to work with a specific form of computer hardware. There are no known means for such information to be stored for use by the computer and to be accessible by it. Computers are the product of deliberate intelligent action, not random processes. Since computers and living cells are both information-driven machines, this suggests the possibility that the difficulties facing initial computer fabrication could also apply to initial cell fabrication. If this suggestion proves valid, it poses serious issues concerning the adequacy of natural processes to account for the information-driven physical life we see around us.
There is another aspect of this problem that has particular significance. In industry, both computers and processor-driven applications ranging from microwave ovens to self-driving automobiles start with a predefined system specification.

Typically, this will define an overall task for the machine to accomplish. Some tasks may be done in hardware or software. Typically, the software is cheaper and more readily adapts to a wide range of possible variations in operation. However, hardware is faster and requires minimal input to trigger its operation. The specification determines whether a particular task is to be done in hardware or software. It also determines how the software and the hardware interact with each other to accomplish a given task. A major objective of the system specification is to define a software specification describing what the software needs to do and a hardware specification defining what the hardware needs to do. In industry, separate hardware and software design engineering teams then design a product meeting their specified goals. In an ideal world, the system specification will be so complete and accurate and the proficiency of the software and hardware engineers in implementing their specifications will likewise be so complete and accurate that the system will work the first time the power is turned on and the two are brought together. In real life, this is not typical.

If a living cell is more complex than a physical computer and if debug of computer design typically is an extremely difficult task, this suggests that a living cell must have its origin in a being so intelligent that it can anticipate all of the behaviors of the various arrangements of building block amino acids and nucleotides. The first cell must appear in working form without needing debug. This is particularly the case since special test equipment for identifying design problems would not be available in a prebiotic scenario. Although mutation and natural selection can have use in adapting an already living cell to changing environmental conditions, they appear inadequate to meet the requirements of initial cellular appearance. Slight modification of an existing, already working design is trivial compared to the difficulties of implementing an initial design. During my experience as an industrial design engineer, I was active on many design projects that were canceled for various reasons. I have worked on designs that were ready for a prototype to be built, but funds were not provided to make it. There is a difference between having a paper design, no matter how good it might be, and actually having resources to build the product. It is insufficient for an intelligent being to design a living cell capable of survival in the environment in which it will appear. Since the design specification appears outside of natural law, its physical implementation must also take place outside of natural law. Natural processes have no ability to implement non-material plans. The actual appearance on Earth of a living cell required an intelligent being to work outside of natural law in order to arrange molecules and atoms into dynamic relationships with each other in accordance with a predefined specification, one which was developed through intelligence and apart from natural processes. 
There is a word we use to call an extremely intelligent being who can move molecules and atoms into predetermined, dynamic relationships at will—God. This paper has plausibly demonstrated how unsuppressed, unbiased scientific observation leads to a Being with the characteristics of God as the source of the physical life we see around us. 25

Instructional assembly information has always a mental origin
David L Abel: (2009): Even if multiple physical cosmoses existed, it is a logically sound deduction that linear-digital genetic instructions using a representational material symbol system (MSS) cannot be programmed by the chance and/or fixed laws of physicodynamics. This fact is not only true of the physical universe but would be just as true in any imagined physical multiverse. Physicality cannot generate non-physical Prescriptive Information (PI). Physicodynamics cannot practice formalisms (The Cybernetic Cut). Constraints cannot exercise formal control unless those constraints are themselves chosen to achieve formal function. 26

Edward J. Steele (2018): The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of super astronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure. It would thus seem reasonable to go to the biggest available “venue” in relation to space and time. A cosmological origin of life thus appears plausible and overwhelmingly likely to us. 27

Katarzyna Adamala (2014):  There is a conceptual problem, namely the emergence of specific sequences among a vast array of possible ones, the huge “sequence space”, leading to the question “why these macromolecules, and not the others?” One of the main open questions in the field of the origin of life is the biogenesis of proteins and nucleic acids as ordered sequences of monomeric residues, possibly in many identical copies. The first important consideration is that functional proteins and nucleic acids are chemically speaking copolymers, i.e., polymers formed by several different monomeric units, ordered in a very specific way. 

Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult-to-characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues. The copolymeric nature of proteins and nucleic acids challenges our understanding of the origin of life also from a theoretical viewpoint. The number of all possible combinations of the building blocks (20 amino acids, 4 nucleotides) forming copolymers of even moderate length is ‘astronomically’ high, and the total number of possible combinations is often referred to as the “sequence space”. Simple numerical considerations suggest that the exhaustive exploration of the sequence spaces, both for proteins and nucleic acids, was physically not possible in the early Universe, both for lack of time and for limited chemical material. There are no methods described in the literature to efficiently generate long polypeptides, and we also lack a theory for explaining the origin of some macromolecular sequences instead of others.

The theoretical starting point is the fact that the number of natural proteins on Earth, although apparently large, is only a tiny fraction of all the possible ones. Indeed, there are thought to be roughly 10^13 proteins of all sizes in extant organisms. This number, however, is negligible when compared to the number of all theoretically possible different proteins. The discrepancy between the actual collection of proteins and all possible ones becomes clear if one considers that the number of all possible 50-residue peptides that can be synthesized with the standard 20 amino acids is 20^50, namely about 10^65. Moreover, the number of theoretically possible proteins increases with length, so that the related sequence space is beyond contemplation; in fact, if we take into account living organisms, where the average length of proteins is much greater, the number of possible different proteins becomes even bigger. The difference between the number of possible proteins (i.e. the sequence space) and the number of those actually present in living organisms is comparable, in a figurative way, to the difference that exists between a drop of water and an entire ocean. This means that there is an astronomically large number of proteins that have never been subjected to the long pathway of natural evolution on Earth: the “Never Born Proteins” (NBPs). 28
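The sequence-space arithmetic quoted above is easy to verify: the number of distinct 50-residue peptides over a 20-letter amino-acid alphabet is 20^50, on the order of 10^65.

```python
import math

ALPHABET = 20   # standard proteinogenic amino acids
LENGTH = 50     # residues in the peptide

sequence_space = ALPHABET ** LENGTH       # exact integer: 20^50
print(math.log10(sequence_space))         # ~65.05, i.e. about 10^65

# Compare with the quoted estimate of ~10^13 extant proteins:
fraction_explored = 1e13 / sequence_space
print(fraction_explored)                  # a vanishingly small fraction
```

The comparison in the last two lines mirrors the "drop of water versus ocean" figure of speech in the quoted passage.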


1. Paul Davies & Jeremy England:  The Origins of Life: Do we need a new theory for how life began? at 15:30 Life = Chemistry + information  Jun 25, 2021
2. Guenther Witzany: Life is physics and chemistry and communication 2014 Dec 31
3. Paul Davies: The secret of life won't be cooked up in a chemistry lab
4. SUNGCHUL JI: The Linguistics of DNA: Words, Sentences, Grammar, Phonetics, and Semantics  06 February 2006
5. Paul Davies: Life force 18 September 1999
6. Timothy R. Stout: Information-Driven Machines and Predefined Specifications: Implications for the Appearance of Organic Cellular Life April 8, 2019
7. David L Abel: Three subsets of sequence complexity and their relevance to biopolymeric information 11 August 2005
8. G. F. Joyce, L. E. Orgel: Prospects for Understanding the Origin of the RNA World 1993
9. Claus Emmeche: FROM LANGUAGE TO NATURE - the semiotic metaphor in biology 1991
10. Sergi Cortiñas Rovira: Metaphors of DNA: a review of the popularisation processes  21 March 2008
11. Massimo Pigliucci:  Why Machine-Information Metaphors are Bad for Science and Science Education 2010
12. Richard Dawkins on the origins of life (1 of 5) Sep 29, 2008
13. Hubert P. Yockey: Information Theory, Evolution, and the Origin of Life 2005
14. Barry Arrington A Dog Is A Chien Is A Perro Is A Hund February 11, 2013
15. Paul Davies: The Origin of Life January 31, 2003
16. Change Laura Tan, Rob Stadler: The Stairway To Life: An Origin-Of-Life Reality Check  March 13, 2020 
17. David L. Abel: The Capabilities of Chaos and Complexity 9 January 2009
18. George M Church: Next-generation digital information storage in DNA 2012 Aug 16
19. Peter R. Wills: DNA as information 13 March 2016
20. Leroy Hood: The digital code of DNA 2003 Jan 23
21. P.Marshall:  Evolution 2.0:Breaking the Deadlock Between Darwin and Design September 1, 2015
22. Paul C. W. Davies: The algorithmic origins of life 06 February 2013
23. David L Abel: Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems 2012 Mar 14
24. Albert Voie: Biological function and the genetic code are interdependent 2006
25. Timothy R. Stout: Information-Driven Machines and Predefined Specifications: Implications for the Appearance of Organic Cellular Life April 8, 2019
26. David L Abel: The Universal Plausibility Metric (UPM) & Principle (UPP) 2009; 6: 27
27. Edward J. Steele: Cause of Cambrian Explosion -Terrestrial or Cosmic? 2018
28. Katarzyna Adamala OPEN QUESTIONS IN ORIGIN OF LIFE: EXPERIMENTAL STUDIES ON THE ORIGIN OF NUCLEIC ACIDS AND PROTEINS WITH SPECIFIC AND FUNCTIONAL SEQUENCES BY A CHEMICAL SYNTHETIC BIOLOGY APPROACH February 2014



Last edited by Otangelo on Sat Jul 23, 2022 5:48 pm; edited 16 times in total

Sir Fred Hoyle (1981): The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem.

It's easy to frame a deceitful answer to it. Start with much simpler, much smaller enzymes, which are sufficiently elementary to be discoverable by chance; then let evolution in some chemical environment cause the simple enzymes to change gradually into the complex ones we have today. The deceit here comes from omitting to explain what is in the environment that causes such an evolution. The improbability of finding the appropriate orderings of amino acids is simply being concealed in the behavior of the environment if one uses that style of argument. I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe.  So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature. The thought occurred to me one day that:

The human chemical industry doesn't chance on its products by throwing chemicals at random into a stewpot. To suggest to the research department at DuPont that it should proceed in such a fashion would be thought ridiculous.
Wasn't it even more ridiculous to suppose that the vastly more complicated systems of biology had been obtained by throwing chemicals at random into a wildly chaotic astronomical stewpot? By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes. And given a knowledge of the appropriate ordering of amino acids, it would need only a slightly superhuman chemist to construct the enzymes with 100 percent accuracy. It would need a somewhat more superhuman scientist, again given the appropriate instructions, to assemble it himself, but not a level of scale outside our comprehension. Rather than accept the fantastically small probability of life having arisen through the blind forces of nature, it seemed better to suppose that the origin of life was a deliberate intellectual act. By "better" I mean less likely to be wrong. Suppose a spaceship approaches the earth, but not close enough for the spaceship's imaginary inhabitants to distinguish individual terrestrial animals. They do see growing crops, roads, bridges, however, and a debate ensues. Are these chance formations or are they the products of intelligence? Taking the view, palatable to most ordinary folk but exceedingly unpalatable to scientists, that there is an enormous intelligence abroad in the universe, it becomes necessary to write blind forces out of astronomy.

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language super calculating intellects use: Some super calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course, you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.

A common-sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question. 29

Robert T. Pennock (2001): Once considered a crude method of problem-solving, trial-and-error has so risen in the estimation of scientists that it is now regarded as the ultimate source of wisdom and creativity in nature. The probabilistic algorithms of computer science all depend on trial-and-error. So too, the Darwinian mechanism of mutation and natural selection is a trial-and-error combination in which mutation supplies the error and selection the trial. An error is committed after which a trial is made. But at no point is complex specified information generated. All historical, observational, testable and repeatable examples PROVE information and operational functionality come from intelligent sources. "The inadequacy of proposed materialistic causes forms only a part of the basis of the argument for intelligent design. We know from broad and repeated experience that intelligent agents can and do produce information rich systems: we have positive experience based on knowledge of a cause that is sufficient to generate new specified information, namely, intelligence. We are not ignorant of how information arises. According to information theorist Henry Quastler...'the creation of new information is habitually associated with conscious activity' "....I described indirect evidence which is a recognized form of proof for a causal agent...if you have no theory which explains the formation of complex specified information or functional operational activity without an intelligent origin then you cannot dismiss a known cause for such phenomena. Seen or unseen such phenomena require a sufficient cause. 30

Paul Davies (2013): If life is more than just complex chemistry, its unique informational management properties may be the crucial indicator of this distinction, which raises the all-important question of how the informational properties characteristic of living systems arose in the first place. This key question of origin may be satisfactorily answered only by first having a clear notion of what is meant by “biological information”. While standard information-theoretic measures, such as Shannon information, have proved useful, biological information has an additional quality which may roughly be called “functionality” – or “contextuality” – that sets it apart from a collection of mere bits as characterized by Shannon Information content. Biological information shares some common ground with the philosophical notion of semantic information (which is more commonly – and rigorously – applied in the arena of “high-level” phenomena such as language, perception, and cognition). We, therefore, identify the transition from non-life to life with a fundamental shift in the causal structure of the system, specifically, a transition to a state in which algorithmic information gains direct, context-dependent, causal efficacy over matter. 22

Perry Marshall: Evolution 2.0 (2015) page 170: Information possesses another very interesting property that distinguishes it from matter and energy. That property is freedom of choice. In communication, your ability to choose whether “1 = on and 0 = off” or “1 = off and 0 = on” is the most elementary example of the human capacity to choose. Mechanical encoders and decoders can’t make choices, but their very existence shows that the choice was made. By definition, none of these decisions can be derived from the laws of physics because they are freely chosen. In the history of the computer industry, somewhere along the way, somebody got to decide that 1 = “on” and 0 = “off.” Then everyone else decided to adopt that standard. Physics and chemistry alone want us to be fat, lazy, and unproductive. Gravity pulls us down. Entropy makes us old and tired. Clocks wind down. Cars rust. Signals get static. LPs scratch. Desks become cluttered. Bedrooms get strewn with dirty clothes. Choice rises up against this. Evolution 2.0, far from mindless, is literally mind over matter. The unfit adapt. Order and structure increase. Cells exert control over their environments. That means materialism cannot explain the origin of information, the nature of information, or the ability to create a code or language from scratch. It can’t explain thought, feeling, mind, will, or communication. 21

Paul Davies The Fifth Miracle: The Search for the Origin and Meaning of Life (1998) page 82:   The theory of self-organization as yet gives no clue how the transition is to be made between spontaneous, or self-induced, organization—which in even the most elaborate nonbiological examples still involves relatively simple structures—and the highly complex, information-based, genetic organization of living things. An explanation of this genetic takeover must account for more than merely the origin of nucleic acids and their potent entanglement with proteins at some later stage. It is not enough to know how these giant molecules arose or started to interact. We also need to know how the system’s software came into existence. Indeed, we need to know how the very concept of software control was discovered. But how did all these optimized and stringent codes of biological communication come about? Here we are up against one of the great enigmas of biology, which thus far has defied all efforts of penetration. Darwinian evolutionary theory offers no help here. Its principle of natural selection or survival of the fittest, as even staunch evolutionists do concede, entails a tautology: it identifies the fittest with the survivors, and so boils down to no more than a "survival of the survivors." Outside biology, the question how codes get to be hardly comes up. It is usually uncalled for; the origins are obvious in the case of man-made codes. Take our telegraph code, for instance, or the codes bankers or military men use. Those are concocted more or less arbitrarily, arising almost fully fledged from the cryptographer's mind. But no one but the most die-hard creationist would believe that the codes of biological communication—the natural codes—have such one-fell-swoop beginnings. 
The question of the origins of natural codes has haunted two generations of information theorists interested in human languages and, regarding biomolecular communication, the question has loomed large ever since the genetic code was found. Indeed, one cannot help being drawn to something that speaks so eloquently for the unity of life and the early origin of its master code, like the fact that all living beings use the same four-letter alphabet for their genetic information, the same twenty-letter alphabet for their proteins, and the same code to translate from one language to the other. Each of the 64 possible triplets of the 4 RNA bases here is a "codon" that assigns one of the 20 amino acids in the polypeptide chain. Such a code and the hardware that executes it are surely too complex to have started with one swoop.  42 

Paul Davies (2013): The way life manages information involves a logical structure that differs fundamentally from mere complex chemistry. Therefore chemistry alone will not explain life's origin, any more than a study of silicon, copper, and plastic will explain how a computer can execute a program. 22

Davies does not like the idea of God. Therefore, he attempts to substitute God's efficacious mind with physics: "Our work suggests that the answer will come from taking information seriously as a physical agency, with its own dynamics and causal relationships existing alongside those of the matter that embodies it – and that life's origin can ultimately be explained by importing the language and concepts of biology into physics and chemistry, rather than the other way round."

But he confesses in The Fifth Miracle, on page 258: “We are still left with the mystery of where biological information comes from.… If the normal laws of physics can’t inject information, and if we are ruling out miracles, then how can life be predetermined and inevitable rather than a freak accident? How is it possible to generate random complexity and specificity together in a lawlike manner? We always come back to that basic paradox” 34 and in his book The Origin of Life, on page 17: "A law of nature could not alone explain how life began, because no conceivable law would compel a legion of atoms to follow precisely a prescribed sequence of assemblage." 31

So he is well aware that physical laws are an inadequate explanation. So why does Davies not go with God? In a conversation on Unbelievable? he confessed: "I don't like the idea of miracles. I don't like the idea of a god who sort of meddles in the affairs of the world, and also I don't like the idea of a God who's sitting around for all eternity and then made the big bang go bang at some free moment. And so, I think, however, we live in a universe that is remarkable in many ways. The laws of physics themselves: where did they come from? Why do they have the form that they do? Most of my colleagues just accept them as a brute fact, but it seems to me that there should be a deeper level of explanation. I'm not sure that God is an appropriate term for that, but if part of what is involved in the laws of physics is something like a life principle, then what we're talking about is something that explains the order of the universe and our place within it. What is undeniable: we can't be a scientist without supposing that there is a rational order in nature that is at least in part intelligible to us. A God, or some agency, or something supernatural, something that transcends, who somehow then zeroes in and does something, gives me an experience, or a miracle or something extraordinary happening around me: that's not a type of god that I feel very comfortable with, you know, a God who, as one English bishop once described it, is like the laser-beam god. So, you know, what would convince me? It would have to be some direct experience, and at this stage, I haven't had that." 32
 
So it is very clear that Davies declines to infer God not on rational grounds, but because of his personal (perhaps emotional) preferences.

Davies believes (2019): "The laws of nature as we know them today are insufficient to explain what life is and how it came about. We need to find new laws, he says, or at least new principles, which describe how information courses around living creatures. Those rules may not only nail down what life is but actively favor its emergence. “I really think we need new physics to understand how information couples to matter and makes a difference in the world,” he says." 33

I would call that "physical laws of the gaps": we don't know how information came to direct the making of living cells, but eventually, one day, we will find out, and it will be a new principle of physics that we have not yet discovered.

Prescriptive information is non-physical and always comes from a mind
1. Genetic and epigenetic information is characterized by prescriptive, codified information, which results in functional outcomes due to the right particular specified complex sequence: the proper order of triplet codons and, ultimately, the translated sequencing of amino acid building blocks into protein strings, as well as highly specific regulatory microRNAs and other epigenetic factors.
2. Algorithms prescribing functional instructions, digital programming, and the use of symbols and coding systems are abstract and non-physical, and always originate from thought—from conscious or intelligent activity.
3. Therefore, genetic and epigenetic information comes from an intelligent mind. Since there was no human mind present to create life, it must have come from a supernatural agency.

Conceptual information always has a mental origin
1. Life depends on a vast quantity of semiotic information.
2. Semiotic functional information is not a tangible entity, and as such, it is beyond the reach of, and cannot be created by, any undirected physical process. This is not an argument about probability. Conceptual semiotic information is simply beyond the sphere of influence of any undirected physical process. To suggest that a physical process can create semiotic code is like suggesting that a rainbow can write poetry... it is never going to happen! Physics and chemistry alone do not possess the tools to create a concept. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
3. Life is no accident and provides powerful positive evidence that we have been designed. One scientist working at the cutting edge of our understanding of the programming information in biology described what he saw as an “alien technology written by an engineer a million times smarter than us”.

The "Cosmic Limit", or the shuffling possibilities of our universe
We need to consider the number of opportunities that the event in question could have had to occur. We have to consider the upper bound on the probabilistic resources that were theoretically available to produce the event by unguided occurrences.

The number of atoms in the entire universe = 1 x 10^80
The estimate of the age of the universe is 13.7 billion years. In seconds, that would be = 1 x 10^16
The fastest rate at which an atom can interact with another atom = 1 x 10^43 per second
Therefore, the maximum number of possible events in a universe 13.7 billion years old (10^16 seconds), where every atom (10^80) is changing its state at the maximum rate of 10^43 times per second during the entire time period of the universe, is 10^139.

By this calculation, all atoms in the universe would shuffle simultaneously, during the entire lifespan of the universe, at the fastest possible rate. It provides us with a measure of the probabilistic resources of our universe: there could have been a maximum of 10^139 events (the number of possible shuffling events in the entire history of our universe).
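The arithmetic above can be sketched in a few lines. The three figures are the ones assumed in the text; working with exponents of ten, multiplication becomes simple addition of exponents:

```python
# The three estimates used in the text, as exponents of 10:
ATOMS_EXP = 80      # ~10^80 atoms in the observable universe
SECONDS_EXP = 16    # ~10^16 seconds (the text's figure for 13.7 billion years)
RATE_EXP = 43       # ~10^43 state changes per atom per second

# Multiplying powers of ten means adding their exponents:
max_events_exp = ATOMS_EXP + SECONDS_EXP + RATE_EXP
print(f"Maximum possible events: 10^{max_events_exp}")  # Maximum possible events: 10^139
```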

If the first proteins on the early earth were to originate without intelligent input, the only alternative is random events. How can we calculate the odds? What is the chance, or likelihood, that a minimal proteome of the smallest free-living cell could emerge by chance? Let us suppose that the 20 amino acids used in life were separated, purified, and concentrated, and were the only ones available to interact with each other, excluding all others. What would be the improbability of getting a functional sequence? If we had to select a chain of two amino acids bearing a function, there would be 20 possible alternatives at each of the 2 positions, and just one of the combinations would provide a functional outcome. So the odds are 20^2 = 20 x 20 = 400: one in 400 possible options will be functional. If the chain has 3 amino acids, the odds are 20^3 = 20 x 20 x 20 = 8,000: one in 8,000 options will be functional. And so on. As we can see, the odds against getting a functional sequence very quickly become very large.
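The toy model above can be written out directly, assuming, as the text does, a fixed 20-letter amino-acid alphabet and exactly one functional sequence per length:

```python
# Size of the sequence space for a chain of length n over a 20-letter
# amino-acid alphabet (the text's toy model: one functional sequence per length).
def sequence_space(n, alphabet=20):
    return alphabet ** n

for n in (2, 3, 5):
    print(n, sequence_space(n))
# length 2 -> 400, length 3 -> 8,000, length 5 -> 3,200,000 possibilities
```

The exponential growth is the whole point: each added position multiplies the space by 20.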

David T.F Dryden (2008): A typical estimate of the size of sequence space is 20^100 (approx. 10^130) for a protein of 100 amino acids in which any of the normally occurring 20 amino acids can be found. This number is indeed gigantic. 34

That means that just finding one protein of 100 amino acids by random shuffling could require nearly exhausting the probabilistic resources of our universe. In other words, it is far more probable that such an extraordinarily improbable event would never have occurred. (For a more extended explanation, see S. Meyer, Signature in the Cell, page 194.)

The simplest free-living bacterium is Pelagibacter ubique. It is known to be one of the smallest and simplest self-replicating, free-living cells. It has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1,308,759 base pairs, which code for 1,354 proteins. They survive without any dependence on other life forms. Incidentally, these are also the most “successful” organisms on Earth: they make up about 25% of all microbial cells. If a chain could link up, what is the probability that the code letters might by chance be in some order which would be a usable gene, usable somewhere—anywhere—in some potentially living thing? If we take a model size of 1,200,000 base pairs, the chance of getting the sequence randomly would be 1 in 4^1,200,000, or about 1 in 10^722,000. 35
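The conversion from a base-4 exponent to a base-10 exponent can be checked with a one-line logarithm; the 1,200,000-base-pair genome size is the model figure from the text:

```python
import math

# How 4^1,200,000 translates into a power of ten:
# 4^n = 10^(n * log10(4))
genome_length = 1_200_000                 # model genome size in base pairs
exp10 = genome_length * math.log10(4)
print(f"4^{genome_length:,} = 10^{exp10:,.0f}")  # 10^722,472, i.e. roughly 10^722,000
```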

The odds or uncertainty to:
have a functional proteome by unguided means (in the case of Pelagibacter, the smallest known bacterium and life form, 1,350 proteins of 300 amino acids average size): 10^722,000
connect all 1,350 proteins (each, on average, 300 amino acids in length) in the right, functional order: about 4^3,600
have both, a minimal proteome and a minimal interactome: about 10^725,600

It could require up to 5,220 universes like ours, each exhausting its shuffling resources during its 13.7-billion-year history, to find one genome and interactome with the right sequence for Pelagibacter ubique. We have elucidated the probabilistic resources needed to encounter one functional sequence of a minimal interactome of P. ubique. Another question is the likelihood of such an event occurring.
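The "5,220 universes" figure is simply the ratio of the two exponents, taking the text's figures of 10^725,600 required trials and 10^139 available events per universe:

```python
# Exponents of 10, per the figures given in the text:
required_exp = 725_600   # 10^725,600 trials for a minimal proteome + interactome
resources_exp = 139      # 10^139 possible events per universe

# How many universe-lifetimes of shuffling would be needed:
universes = required_exp / resources_exp
print(f"{universes:,.0f} universes")  # 5,220 universes
```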


Software programming always requires intelligence
One could say: but there is still a chance that natural events would encounter a functional sequence. In fact, in the best case, given the probabilistic resources, even extrapolating beyond those of our universe, it could still be possible that the very first shuffling event would give a positive result. No matter how large the odds, or how unlikely an event, such an event would still be theoretically possible.

The probability of picking 7 from a hat of 10 numbers is 1/10. The probability of picking 7 from a hat of 1,000 numbers is 1/1,000. The probability of picking 7 from a hat of 1,000,000 numbers is 1/1,000,000. The probability of picking 7 from 10^756,000 numbers (a 1 followed by 756,000 zeroes) is 1/10^756,000. So if finding a functional interactome would require exhausting the probabilistic resources of 5,220 universes, and we have just ours, then the odds would be 1/5,220. The larger the number, the greater the unlikelihood of picking the right one. But what if the number is infinitely large, or limitless? The probability of picking 7 from an infinitely large pool is indistinguishable from 0.
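The shrinking probabilities in the hat example can be reproduced directly; the final line also illustrates why numbers on the order of 1/10^756,000 are treated as indistinguishable from zero (they are far below anything a standard floating-point number can represent):

```python
# Probability of drawing the one right number from pools of growing size:
for pool in (10, 1_000, 1_000_000):
    print(pool, 1 / pool)

# For a pool of 10^756,000 the probability cannot even be represented as a
# 64-bit float: already 10^-400 underflows to exactly 0.0.
print(10.0 ** -400)  # 0.0
```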

So what is the criterion by which chance becomes a less plausible explanation than design? We need a threshold upon which we can say: in the face of the unlikelihood of event X, the intelligent-selection inference is the more plausible explanation. Suppose someone played the lottery 200 times and won every time. What would be more likely: that he was cheating, or that he was lucky?

S. Meyer, Signature in the Cell (2009): Imagine that after starting my demonstration with Scrabble letters, I left the classroom for a few minutes and instructed my students to continue picking letters at random and writing the results on the board in my absence. Now imagine that upon my return they showed me a detailed message on the blackboard such as Einstein’s famous dictum: “God does not play dice with the universe.” Would it be more reasonable for me to suppose that they had cheated (perhaps, as a gag) or that they had gotten lucky? Clearly, I should suspect (strongly) that they had cheated. I should reject the chance hypothesis. Why? I should reject chance as the best explanation not only because of the improbability of the sequence my students had generated but also because of what I knew about the probabilistic resources available to them. If I had made a prediction before I had left the room, I would have predicted that they could not have generated a sequence of that length by chance alone in the time available. But even after seeing the sequence on the board, I still should have rejected the chance hypothesis. Indeed, I should know that my students did not have anything like the probabilistic resources to have a realistic chance of generating a sequence of that improbability by chance alone. In one hour my students could not have generated anything but a minuscule fraction of the total possible number of 40-character sequences corresponding to the length of the message they had written on the board. The odds that they could have produced that sequence—or any meaningful sequence at all of that length—in the time available by choosing letters at random was exceedingly low. They simply did not have time to sample anything close to the number of 40-character sequences that they would have needed to have a 50 percent chance of generating a meaningful sequence of that length.
Thus, it would be much more likely than not that they would not produce a meaningful sequence of that length by chance in the time available and, therefore, it was also vastly more likely than not that something other than chance had been in play. 36

Hubert P. Yockey (1977): The Darwin-Oparin-Haldane “warm little pond” scenario for biogenesis is examined by using information theory to calculate the probability that an informational biomolecule of reasonable biochemical specificity, long enough to provide a genome for the “protobiont”, could have appeared in the primitive soup. Certain old untenable ideas have served only to confuse the solution to the problem. Negentropy is not a concept because entropy cannot be negative. The role that negentropy has played in previous discussions is replaced by “complexity” as defined in information theory. A satisfactory scenario for spontaneous biogenesis requires the generation of “complexity”, not “order”. Previous calculations based on simple combinatorial analysis overestimate the number of sequences by a factor of 10^5. The number of cytochrome c sequences is about 3.8 × 10^61. The probability of selecting one such sequence at random is about 2.1 × 10^-65. The primitive milieu will contain a racemic mixture of the biological amino acids and also many analogs and non-biological amino acids. Taking into account only the effect of the racemic mixture, the longest genome which could be expected with 95% confidence in 10^9 years corresponds to only 49 amino acid residues. This is much too short to code a living system, so evolution to higher forms could not get started. Geological evidence for the “warm little pond” is missing. It is concluded that belief in currently accepted scenarios of spontaneous biogenesis is based on faith, contrary to conventional wisdom. 37

We also know that blueprints containing complex instructional assembly information, which dictate the fabrication of complex machines, robotic production lines, computers, transistors, turbines, energy plants, and interlinked factories that produce goods for specific purposes based on those instructions, are always the result of intelligent setup.

A statistical impossibility is a probability that is so low as to not be worthy of mentioning. Sometimes Borel's law is mentioned, putting an upper boundary at 10^50, although the cutoff is inherently arbitrary. While not truly impossible, the probability is low enough so as not to bear mention in a rational, reasonable argument. 38

1. The more statistically improbable something is, the less sense it makes to believe that it just happened by blind chance.
2. Statistically, it is extremely unlikely that a primordial genome, proteome, metabolome, and interactome of the first living cell arose by random, unguided events.
3. Furthermore, we see purposeful design in the setup of biochemistry and biology.
4. Without a mind knowing which sequences are functional in a vast space of possible sequences, random luck is probabilistically doomed in the search for those that confer function.

Designing and implementing hardware always requires intelligence
The DNA molecule, genes, and genomes are hardware, analogous to a computer hard disk. They are semantophoretic (information-bearing) molecules. Since they are complex macromolecules, their individual parts have to be selected and joined in the right fashion. The problem can be compared to drug design.

Florian Lauck (2013): A common process in drug design is to systematically search portions of chemical space for new possible and probable lead or drug molecules. In practice, this is realized with different techniques for constructing a new or altering a known bioactive molecule. Some sort of template or target usually guides this process.  It can be integrated into other techniques as a source of compounds or can stand for itself implementing unique search strategies for new drug candidates. 

Chemical space encompasses the lot of all possible molecules. In the context of biological systems, it is usually used to describe ‘‘all the small carbon-based molecules that could in principle be created’’. Despite the limitations introduced with the terms small and carbon-based, this space is vast. It encompasses all molecules that occur naturally in biological systems, as well as artificially created substances that have an effect on an organism, such as medicinal drugs. An efficient way of modeling chemical space is via a combinatorial approach. We define combinatorial chemical space as a tuple of atomic or molecular building blocks and connection rules. The nature of the building blocks determines the kind of connection between them; the rules determine how building blocks relate and in which case connections are allowed. The evaluation of a combinatorial space, through enumeration or search, then yields actual molecular graphs that can be translated into molecules, the so-called products. In this way, huge chemical libraries can be represented only by the building blocks and the connection rules. There are essentially two options for evaluation: systematic enumeration and searching with on-the-fly molecule construction. We will discuss later that the former is only feasible for small chemical spaces or when heavy constraints are imposed, because it quickly leads to a combinatorial explosion. The latter is much more efficient and can be realized with short computation times. Several methods have been published, implementing an efficient, and in some cases exhaustive, search in combinatorial spaces. In nature, several examples of combinatorial chemical spaces exist. The two probably most important and famous ones describe DNA with four nucleotides as building blocks and the creation of phosphoester bonds as connection rule and proteins with 20 amino acids as building blocks and amide bond formation as connection rule. 
As building blocks can be combined in arbitrary order, the resulting space is huge. 39
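A minimal sketch of the combinatorial-space idea described in the quote above: a set of building blocks plus a connection rule, which can either be enumerated exhaustively (feasible only for tiny spaces) or merely counted. The block names are chosen for illustration, and the "connection rule" here simply allows any linear ordering:

```python
from itertools import product

# Four nucleotide-like building blocks; connection rule: any linear ordering.
blocks = ["A", "C", "G", "T"]

# Systematic enumeration works only for tiny spaces:
dimers = ["".join(p) for p in product(blocks, repeat=2)]
print(len(dimers))  # 16 possible two-block products

# Counting shows why enumeration quickly becomes infeasible (combinatorial explosion):
for length in (10, 20, 100):
    print(length, len(blocks) ** length)
# Already at length 20 the space holds 4^20, about 1.1 trillion products.
```

This is why, as the quote notes, searching with on-the-fly construction is preferred over exhaustive enumeration for all but heavily constrained spaces.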

W. Patrick Walters (1998): There are perhaps millions of chemical ‘libraries’ that a trained chemist could reasonably hope to synthesize. Each library can, in principle, contain a huge number of compounds – easily billions.  A ‘virtual chemistry space’ exists that contains perhaps 10^100 possible molecules 40

We know by repeated experience that humans know how to design drugs, and create them for specific purposes. 

1. Chemical space encompasses the lot of all possible molecules. Despite the limitations introduced with the terms small and carbon-based, this space is vast. Unless the search of combinatorial space is restricted to small chemical spaces, or heavy constraints are imposed, it quickly leads to a combinatorial explosion. There are perhaps millions of chemical ‘libraries’ that a trained chemist could reasonably hope to synthesize. Each library can, in principle, contain a huge number of compounds – easily billions. A ‘virtual chemistry space’ exists that contains perhaps 10^100 possible molecules. In nature, several examples of combinatorial chemical spaces exist. The two probably most important and famous ones describe DNA, with four nucleotides as building blocks and the creation of phosphodiester bonds as connection rule, and proteins, with 20 amino acids as building blocks and amide bond formation as connection rule. As building blocks can be combined in arbitrary order, the resulting space is huge.
2. In drug design, researchers employ systematic search, utilizing different techniques, using targets ( or goals) as a guide, integrating techniques, implementing search strategies, modeling, evaluating, enumerating, and implementing efficient methods of search.
3. Hume: "Since the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble; and that the Author of nature is somewhat similar to the mind of man; though possessed of much larger faculties, proportioned to the grandeur of the work which he has executed. By this argument a posteriori, and by this argument alone, do we prove at once the existence of a Deity, and his similarity to human mind and intelligence." If humans must employ their minds and various intelligence-based techniques to sort out which molecules in a vast "chemical space" bear the desired function, the same must apply to the quartet of macromolecules used in life, and to the genome, proteome, and interactome necessary to create self-replicating, living cells.

Designing and implementing a hardware/software system always requires intelligence
Brian R. Johnson (2010): All the chemical compounds within a cell, and the stable organizational relationships between them, form the hardware of the biological system. This includes the cell membranes, organelles, and even the DNA molecules. These structures and processes encompass information stored in genes, as well as information inherent in their organization. 41

The problem is twofold: 1. explaining the origin of the hardware, and 2. explaining the origin of the software, that is, the instructional complex information stored in the DNA molecule through its specified, complex nucleotide sequence.

Paul Davies: the fifth miracle (2000) page 53: Pluck the DNA from a living cell and it would be stranded, unable to carry out its familiar role. Only within the context of a highly specific molecular milieu will a given molecule play its role in life. To function properly, DNA must be part of a large team, with each molecule executing its assigned task alongside the others in a cooperative manner. Acknowledging the interdependability of the component molecules within a living organism immediately presents us with a stark philosophical puzzle. If everything needs everything else, how did the community of molecules ever arise in the first place?

On page 62, Davies continues: We need to explain the origin of both the hardware and software aspects of life, or the job is only half finished. Explaining the chemical substrate of life and claiming it as a solution to life’s origin is like pointing to silicon and copper as an explanation for the goings-on inside a computer. It is this transition where one should expect to see a chemical system literally take on “a life of its own”, characterized by informational dynamics which become decoupled from the dictates of local chemistry alone (while of course remaining fully consistent with those dictates). Thus the famed chicken-or-egg problem (a solely hardware issue) is not the true sticking point. Rather, the puzzle lies with something fundamentally different, a problem of causal organization having to do with the separation of informational and mechanical aspects into parallel causal narratives. The real challenge of life’s origin is thus to explain how instructional information control systems emerge naturally and spontaneously from mere molecular dynamics. 42

Daniel J. Nicholson (2019): Following the Second World War, the pioneering ideas of cybernetics, information theory, and computer science captured the imagination of biologists, providing a new vision of the machine conception
of the cell (MCC) that was translated into a highly successful experimental research program, which came to be known as ‘molecular biology’. At its core was the idea of the computer, which, by introducing the conceptual distinction between ‘software’ and ‘hardware’, directed the attention of researchers to the nature and coding of the genetic instructions (the software) and to the mechanisms by which these are implemented by the cell’s macromolecular components (the hardware). 43

1. There is a vast "structure space", or "chemical space". A 'virtual chemistry space' exists that contains perhaps 10^100 possible molecules. There was almost no limit to the possible molecular compositions, the "combination space" of elementary particles bumping into and eventually joining each other to form molecules of every sort. And there was no goal-oriented mechanism to select the "bricks" used in life and produce each of them uniformly, in the millions.
2. Even if that hurdle had been overcome and, let us say, a specified set of 20 selected amino acids, left-handed and purified, able to polymerize on their own, had been available, together with a natural mechanism to perform the shuffling process, the "sequence space" would have comprised 10^756,000 possible sequences among which the functional one would have had to be selected. The shuffling resources of 5,220 universes like ours would eventually have been exhausted to generate a functional interactome.
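The scale of these numbers is easy to check. Below is a minimal sketch in Python, assuming the standard alphabet of 20 amino acids; the implied chain length for the 10^756,000 figure is back-calculated here purely for illustration:

```python
import math

AMINO_ACIDS = 20  # size of the standard proteinogenic alphabet

def log10_sequence_space(n_residues):
    """log10 of the number of distinct chains of n_residues amino acids.

    20**n overflows floating point for large n, so work in log space.
    """
    return n_residues * math.log10(AMINO_ACIDS)

# A single modest 100-residue protein already has ~10^130 possible sequences:
print(round(log10_sequence_space(100)))          # 130

# Chain length implied by a sequence space of 10^756,000 (illustrative only):
print(round(756_000 / math.log10(AMINO_ACIDS)))  # ~581,000 residues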

The four interdependent requirements to have an information transmission system
P. Marshall (2015): Imagine that you’re building the world’s first DVD player. What must you have before you can turn it on and watch a movie for the first time? A DVD. How do you get a DVD? You need a DVD recorder first.
How do you make a DVD recorder? First, you have to define the language that gets written on the DVD, then build hardware that speaks that language. Language must be defined first.
Our DVD recorder/player problem is an encoding-decoding problem, just like the information in DNA. You’ll recall that communication, by definition, requires four things to exist:

1. A code
2. An encoder that obeys the rules of a code
3. A message that obeys the rules of the code
4. A decoder that obeys the rules of the code

These four things—language, a transmitter of language, message, and receiver of language—all have to be precisely defined in advance before any form of communication can be possible at all. The rules of any communication system are always defined in advance by a process of deliberate choices. There must be a pre-arranged agreement between sender and receiver, otherwise, communication is impossible. If you leave out any of these things—code, encoder, message, decoder—your system doesn’t work. There are no known exceptions to this.  43a
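Marshall's four requirements can be made concrete with a toy sketch. The table below is an arbitrary, hypothetical code; the point is only that encoder and decoder must share the same pre-agreed table, or the round trip fails:

```python
# 1. A code: a pre-agreed, arbitrary mapping of symbols to signals.
CODE = {"A": "00", "B": "01", "C": "10", "D": "11"}
DECODE = {signal: symbol for symbol, signal in CODE.items()}

def encode(message):
    """2. An encoder that obeys the rules of the code."""
    return "".join(CODE[ch] for ch in message)

def decode(signal):
    """4. A decoder that obeys the same rules as the encoder."""
    chunks = (signal[i:i + 2] for i in range(0, len(signal), 2))
    return "".join(DECODE[c] for c in chunks)

message = "BAD"  # 3. a message built only from symbols the code defines
assert decode(encode(message)) == message  # round trip needs the shared CODE
```

Nothing about the table `{"A": "00", ...}` is forced by physics; any table works, but sender and receiver must hold the same one before the first message is sent, which is the point the text is making.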

A software program stored on a computer is of no use unless employed to perform specific functions. The information stored in genes is used to synthesize proteins and to express microRNAs that control gene expression, and a large part of noncoding DNA instructs the operation of epigenetic processes. Genes control the development, maintenance, and reproduction of organisms. The expression of genes requires an information transmission system. So far, we have dealt only with the software/hardware aspect, meaning by hardware the DNA molecule. But the hardware of a computer encompasses much more, namely the entire system: there is also the processor, the circuit board, the monitor, the power supply, etc. In the cell, expression and replication of the genetic information require complex molecular protein machines, namely the DNA replication, transcription, and translation machinery. None of this has a function, nor can life perpetuate itself, if one of these pieces of machinery is missing.

Information is what is conveyed or represented by a particular arrangement or sequence of things. To have an information transmission system, the following things are indispensable (if any of them is missing, information transmission cannot be established; all have to be precisely defined in advance before any form of communication is possible at all):

A) A language; B) the information (message) produced in that language; C) an information storage mechanism (a hard disk, paper, etc.); D) an information transmission system, that is, encoding, sending, and decoding; and eventually E), F), and G) (not essential): translation, conversion, and transduction.

1. The rules or protocol of any informational communication system must be preestablished and agreed in advance between those that communicate with each other, by establishing in common agreement the meaning whereby symbols, letters, words, waves or frequency variations, sounds, pulses, or a combination of these are assigned to something else; otherwise the transmission of information is not possible. A message can only be created once a language has been established. A code is an abstract, immaterial, nonphysical set of rules. Statistics, semantics, syntax, and pragmatics are used according to combinatorial, context-dependent, and content-coherent rules.
2. This set of rules, codes, or language permits the production of a blueprint, which contains instructional complex information that makes it possible to produce goods for specific purposes and to control or maintain the operation of factories.
3. Then there has to be a device, that is, a hard disk, paper, or other hardware, upon which the information can be recorded.
4. And there has to be a system to encode, send, and decode the message.
5. Eventually, during transmission, the information can be translated from one language to another. That requires a system of translation, a cipher. It’s like when you visit a Russian website and your browser has the language plug-in for Russian. The means of conveying meaning in the Russian and English languages must be established in advance, that is, the alphabet (symbols), syntax (grammar), and semantics (meaning), before any translation can take place. Otherwise, it would never be certain that what the transmitter is communicating is the same as what the receiver is understanding.
6. Eventually, signal conversion (digital-analog conversion, modulators, amplifiers)
7. Eventually, signal transduction, converting nonelectrical signals into electrical signals
8. Communication requires the setup of the entire interdependent system. One has no function without the other players in place. Therefore, the information transmission system in cells was probably designed. 

A) The DNA language
Cells store a genetic language. Marshall Nirenberg, American biochemist and geneticist, received the Nobel Prize in 1968 for "breaking the genetic code" and describing how it operates in protein synthesis 47 He wrote in 1967: The genetic language now is known, and it seems clear that most, if not all, forms of life on this planet use the same language, with minor variations. 44

Patricia Bralley (1996): The cell's molecules correspond to different objects found in natural languages. A nucleotide corresponds to a letter, a codon to either a phoneme (the smallest unit of sound) or a morpheme (the smallest unit of meaning), a gene to a word or simple sentence, an operon to a complex sentence, a replicon to a paragraph, and a chromosome to a chapter. The genome becomes a complete text. Küppers (1990) emphasizes the thoroughness of the mapping and notes that it presents a hierarchical organization of symbols. Like human language, molecular language possesses syntax. Just as the syntax of natural language imposes a grammatical structure that allows words to relate to one another in only specific ways, biological symbols combine in a specific structural manner. The fact that phonemes have no inherent meaning yet are noninterchangeable (just as within a gene a thymine cannot be replaced by a cytosine without risk of fatal mutation) gives rise to the linguistic property of duality. Duality uses two discrete combinatorial systems: One combines meaningless sounds into meaningful morphemes, while the second combines meaningful morphemes into words and ultimately sentences. Because each discrete combinatorial system combines a finite set of elements into any number of larger structures, duality is an economical and powerful way to produce an infinity of meaningful forms from a few elements. It is a strategy also used by the cell: four nucleotides combine into 64 codons; codons combine into many different genes. 45

V A Ratner (1993): The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics. 46

Sedeer el-Showk (2014): The genetic code combines redundancy and utility in a simple, elegant language. Four letters make up the genetic alphabet: A, T, G, and C. In one sense, a gene is nothing more than a sequence of those letters, like TTGAAGCATA…, which has a certain biological meaning or function. The beauty of the system emerges from the fact that there are 64 possible words but they only need 21 different meanings: 20 amino acids plus a stop sign. That creates the first layer of redundancy, since codons can be synonyms. Just like ‘cup’ and ‘glass’ mean (essentially) the same thing, two different codons can refer to the same amino acid; for example, GAG and GAA both mean ‘glutamic acid’. Synonymous codons offer some protection against mutation. If the last letter of a GAA happened to mutate into a G in a gene, it would still get a glutamic acid at that point, since GAA and GAG are synonyms. 46
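The synonym example can be illustrated directly. The handful of codon assignments below are the standard, real ones; a mutation in the third position of GAA is silent, while one in the first position is not:

```python
# A small fragment of the standard genetic code (real assignments).
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",   # synonymous codons for glutamic acid
    "GAU": "Asp", "GAC": "Asp",   # synonymous codons for aspartic acid
    "AAA": "Lys", "AAG": "Lys",   # lysine
    "UAA": "STOP",                # one of the three standard stop signals
}

def translate(mrna):
    """Read an mRNA string three letters (one codon) at a time."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

# Third-position mutation GAA -> GAG is silent (same amino acid):
assert translate("GAA") == translate("GAG") == ["Glu"]
# First-position mutation GAA -> AAA is not (glutamic acid becomes lysine):
assert translate("AAA") == ["Lys"]
```

Note that DNA uses T where mRNA uses U; the table above is written in mRNA letters, matching the codon notation used throughout this section.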

Spell-checking is the task of finding misspellings in a text. It can be done by professionals or by software. In cells, mismatch repair enzymes act as "spell checkers," identifying and fixing errors in the DNA code after replication and recombination. Eric Alani, Cornell Research: These proteins ensure that the error rate when a cell replicates its DNA is one hundred to a thousand-fold lower than if the spell-checker system were not in place. 47
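The quoted improvement is straightforward arithmetic. The baseline error rate below is an assumption, an order of magnitude commonly cited for replication after polymerase proofreading, used here only to show what a 100- to 1000-fold reduction means in practice:

```python
baseline = 1e-7          # assumed errors per base after proofreading (illustrative)
with_repair_low = baseline / 100     # 100-fold improvement  -> ~1e-9 per base
with_repair_high = baseline / 1000   # 1000-fold improvement -> ~1e-10 per base

# For a 4.6-million-base genome (roughly E. coli sized, also an assumption),
# expected errors per replication drop from about 0.46 to well under 0.005:
genome = 4.6e6
print(genome * baseline)          # ~0.46 expected errors without mismatch repair
print(genome * with_repair_high)  # ~0.00046 expected errors with it
```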

B) The information (message) produced upon the DNA language
The DNA alphabet can be used to write a message, a recipe, or a blueprint; it can be copied, edited, read, transmitted, replicated, transcribed, or translated. In cells, DNA contains the instructions for making complicated living beings. 48

C) DNA: the most sophisticated information storage mechanism
DNA is an unequalled, masterful information-storage molecule.

Dawkins, The Blind Watchmaker (1986): pp. 116–117: There is enough information capacity in a single human cell to store the Encyclopaedia Britannica, all 30 volumes of it, three or four times over. ... There is enough storage capacity in the DNA of a single lily seed or a single salamander sperm to store the Encyclopaedia Britannica 60 times over. Some species of the unjustly called ‘primitive’ amoebas have as much information in their DNA as 1,000 Encyclopaedia Britannicas. 49

Perry Marshall, Evolution 2.0 (2015) page 192: Ultra-High-Density Data Storage and Compression:  Your cells contain at least 92 strands of DNA and 46 double-helical chromosomes. In total, they stretch 6 feet (1.8 meters) end to end. Every human DNA strand contains as much data as a CD. Every DNA strand in your body stretched end to end would reach from Earth to the sun and back 600 times. When you scratch your arm, the dead skin cells that flake off contain more information than a warehouse of hard drives. Cells store data at millions of times more density than hard drives, 10^21 bits per gram. Not only that, they use that data to store instructions vastly more effectively than human-made programs; consider that Windows takes 20 times as much space (bits) as your own genome. We don’t quite know how to quantify the total information in DNA. The genome is unfathomably more elegant, more sophisticated, and more efficient in its use of data than anything we have ever designed. Even with the breathtaking pace of Moore’s Law—the principle that data density doubles every two years and its cost is cut in half—it’s hard to estimate how many centuries it may take for human technology to catch up. Hopefully, the lessons we learn from DNA can speed our efforts. A single gene can be used a hundred times by different aspects of the genetic program, expressed in a hundred different ways (248). The same program provides unique instructions to the several hundred different types of cells in the human body; it dictates their relationships to each other in three-dimensional space to make organs, as well as in a fourth dimension, the timeline of growth and development. It knows, for instance, that boys’ voices need to change when they’re 13 and not when they’re 3. It’s far from clear how this information is stored and where it all resides. Confining our understanding of DNA data to computer models is itself a limiting paradigm. 
This is all the more reason why our standard for excellence ought to be the cell and not our own technology. 21
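Marshall's comparison of a DNA strand to a CD can be checked with back-of-envelope arithmetic. The genome length below is the commonly cited round figure; everything else follows from four bases each carrying log2(4) = 2 bits:

```python
import math

bases = 4                         # A, C, G, T
bits_per_base = math.log2(bases)  # 2 bits of raw capacity per base
bp_per_genome = 3.2e9             # commonly cited human genome length (assumption)

genome_bytes = bp_per_genome * bits_per_base / 8
print(f"{genome_bytes / 1e6:.0f} MB")  # 800 MB: roughly one CD's worth of data
```

This is raw symbol capacity only; as the quote notes, how much of the genome's functional information this captures is a separate and open question.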

DNA storage points to an intelligent setup
1. In the scientific journal Nature, in January 2013, Nick Goldman et al. reported the successful use of DNA to store large amounts of data.
2. “Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totaling 739 kilobytes of hard-disk storage and with an estimated Shannon information of 5.2 × 10^6 bits into a DNA code, synthesized this DNA, sequenced it, and reconstructed the original files with 100% accuracy. Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term, and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade.”
3. "DNA-based storage has potential as a practical solution to the digital archiving problem and may become a cost-effective solution for rarely accessed archives," said Goldman.
4. DNA far surpasses any current human-made technology and can last for thousands of years. To get a handle on this, consider that 1 petabyte is equivalent to 1 million gigabytes of information storage. This paper reports an information storage density of 2.2 petabytes per gram.
5. Scientists needed many decades to discover this incredibly effective design of DNA, made, as they say, by nature. The discovery of the complex design of DNA required intelligence. How can somebody deny a superior intelligence that designed such a high-density information storage mechanism, necessary for the survival of all species?
6. The most plausible explanation is that a super-intelligent computer engineer implemented DNA as an information storage carrier for life.
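The density figure in point 4 can likewise be unpacked. The hard-drive weight and capacity used for comparison below are hypothetical round numbers, chosen only to give a feel for the ratio:

```python
PB_per_gram = 2.2        # DNA storage density reported in the paper (PB/gram)
GB_per_PB = 1e6          # 1 petabyte = 1 million gigabytes, as stated above
dna_GB_per_gram = PB_per_gram * GB_per_PB        # 2.2 million GB per gram

# Hypothetical comparison: a 2 TB drive weighing 500 g stores 4 GB per gram.
hdd_GB_per_gram = 2000 / 500
print(round(dna_GB_per_gram / hdd_GB_per_gram))  # 550000: about half a million
                                                 # times denser than that drive
```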

29. Sir Fred Hoyle: The Universe: Past and Present Reflections November 1981
30. Robert T. Pennock: Intelligent Design Creationism and Its Critics: Philosophical, Theological, and Scientific Perspectives 2001
31. Paul Davies: The Origin of Life  January 31, 2003
32. Paul Davies & Jeremy England:  The Origins of Life: Do we need a new theory for how life began? Jun 25, 2021
33. Paul Davies: 'I predict a great revolution': inside the struggle to define life 2019
34. David T.F Dryden: How much of protein sequence space has been explored by life on Earth? 15 April 2008
35. Evolution: Possible, or impossible? Probability and the First Proteins
36. Steve Meyer, Signature in the Cell 2009
37. Hubert P.Yockey: A calculation of the probability of spontaneous biogenesis by information theory 7 August 1977
38. M. Émile Borel: Les probabilités dénombrables et leurs applications arithmétiques. 8 novembre 1908
39. Florian Lauck: Coping with Combinatorial Space in Molecular Design October 2013
40. W.Patrick Walters: Virtual screening—an overview 1 April 1998
41. Brian R. Johnson: Self-organization, Natural Selection, and Evolution: Cellular Hardware and Genetic Software  December 2010
42. Paul Davies: The FIFTH MIRACLE: The Search for the Origin and Meaning of Life  March 16, 2000
43. Daniel J. Nicholson Is the cell really a machine? 4 June 2019
43a. P. Marshall: Evolution 2.0: Breaking the Deadlock Between Darwin and Design September 1, 2015
44. Marshall W. Nirenberg: Will Society Be Prepared? 11 August 1967
45. Patricia Bralley: An introduction to molecular linguistics February 1996
46. V A Ratner: The genetic language: grammar, semantics, evolution 1993 May;29
47. Eric Alani: DNA Spell Checkers
48. Libretexts: Genetic Information
49. Richard Dawkins: The blind watchmaker  1 January 1986



Last edited by Otangelo on Mon Jul 25, 2022 8:55 am; edited 16 times in total

D) The information transmission system
To make proteins, and to direct and deliver them to the right place where they are needed, an exquisitely engineered molecular arrangement of at least 25 dauntingly complex biosynthesis and production-line-like manufacturing steps is required. Each step requires superbly, ingeniously crafted molecular machines composed of numerous subunits and co-factors, which themselves require the very processing procedure described here in order to be manufactured, which makes their origin an irreducible catch-22 problem. DNA expression is highly regulated. The first living organism, let's suppose it was similar to bacteria, had to be able to adapt to the environment and to generate coordinated responses to environmental challenges. This is done through sophisticated gene regulatory networks. After they receive signals from the environment, they respond accordingly and put the cellular machinery into operation.

María A. Sánchez-Romero (2019): Epigenetic signals control DNA–protein interactions and can cause phenotypic change in the absence of mutation. A nearly universal mechanism of epigenetic signalling is DNA methylation. In bacteria, DNA methylation has roles in genome defence, chromosome replication, and segregation, nucleoid organization, cell cycle control, DNA repair, and regulation of transcription. In many bacterial species, DNA methylation controls reversible switching (phase variation) of gene expression, a phenomenon that generates phenotypic cell variants. The formation of epigenetic lineages enables the adaptation of bacterial populations to harsh or changing environments and modulates the interaction of pathogens with their eukaryotic hosts.

Once the gene that has to be expressed has been selected, a very complex sequence of molecular processes begins: transcription, capping, elongation, splicing, cleavage, polyadenylation, and termination; handing the product over for translation and starting protein synthesis (translation); completion of protein synthesis; and folding. The last step is targeting. Claudia Y. Janda (2010): Targeting proteins to appropriate subcellular compartments is a crucial process in all living cells. Secretory and membrane proteins usually contain an amino-terminal signal peptide, which is recognized by the signal recognition particle (SRP) when nascent polypeptide chains emerge from the ribosome. 50

Daniel J. Nicholson (2020): Gene expression is an extremely intricate process. Consider how it gets started: an inducer, which can be an intracellular or extracellular signal, triggers a chain of biochemical reactions that causes proteins called activators to bind to specific sites in the DNA known as enhancers. Upon binding, the activators interact with other proteins that recruit RNA polymerase and its associated transcription factors to the promoter region of the target gene, where it begins the process of transcription. Numerous additional steps need to be strictly followed after transcription, including RNA processing and export, translation, and protein folding and sorting. The point is that for even a single protein to be successfully expressed in the cell, a huge number of molecules need to interact with one another in exactly the right way, at exactly the right time, and in exactly the right order. And it should not be surprising that the likelihood that all of this happens in a perfectly efficient and precisely timed fashion (as one would expect of the programmatic execution of an algorithmic sequence of coded instructions) is virtually zero once we take into account the random and ferocious buffeting that all molecules are subject to inside the cell by virtue of their size. 51

The cell's GPS system
Imagine a self-driving car. Even today, self-driving cars that are safe enough for the street are not market-ready. The amount of information processing required to direct a car from A to B on a highway is monumental, and developing software able to direct cars autonomously is a truly extraordinary feat, which has so far required thousands of highly skilled software engineers. But the program alone is not enough. The car itself must be interface-compatible, able to interpret the input and directions from the program and transmit them to the wheels, etc. Up to now, the human brain does what humans try to delegate to autonomous cars.

1. The cell has sophisticated cargo loading, transport, and delivery systems. It is not only analogous to a man-made transport system but operates literally as such.
2. Cargo loading, transport, and delivery require specialists that operate and execute the delivery based on advanced planning, logistic software systems, GPS positioning, and information management, monitoring, and control systems.
3. Once proteins are synthesized, they are tagged through the signal recognition particle (which is life-essential), carried on molecular cargo carriers (dynein and kinesin motor proteins) on molecular highways (tubulins), and delivered at the destination where the protein cargo will be employed for operation.
4. The programming of all those steps requires, with high probability, an intelligent programmer (God).

David F. Coppedge (2007): Most signal-relay stations we know about were intelligently designed. Signal without recognition is meaningless.  Communication implies a signaling convention (a “coming together” or agreement in advance) that a given signal means or represents something: e.g., that S-O-S means “Send Help!”   The transmitter and receiver can be made of non-sentient materials, but the functional purpose of the system always comes from a mind.  The mind uses the material substances to perform an algorithm that is not itself a product of the materials or the blind forces acting on them.  Signal sequences may be composed of mindless matter, but they are marks of a mind behind intelligent design. 52

In living cells, information is encoded through at least 30 genetic and almost 50 epigenetic codes that form various sets of rules and languages. They are transmitted through a variety of means: transcription, translation, and DNA replication; the cell cilia as centers of communication; microRNAs influencing cell function; the nervous system, with synaptic transmission, neuromuscular transmission, transmission between nerves and body cells, axons as wires, and the transmission of electrical impulses by nerves between the brain and receptor/target cells; vesicles, exosomes, platelets, hormones, biophotons, biomagnetism, cytokines and chemokines; elaborate communication channels related to defense against microbial attacks; and nuclei as modulators-amplifiers. These information transmission systems are essential for keeping up all biological functions: organismal growth and development, metabolism, regulating nutritional demands, controlling reproduction, homeostasis, constructing biological architecture, complexity, and form, controlling organismal adaptation, change, and regeneration/repair, and promoting survival.

Besides the information transmission system by which DNA makes proteins, there is a most amazing and advanced information transmission system in operation in each of our cells, which works through light. The more sophisticated and fast an information transmission system is, the more intelligence is required to design and implement it. Light fidelity, or Li-Fi, is a 5th-generation cutting-edge technology, the fastest information transmission system so far invented by man. Life uses not only light but, it has been proposed, quantum entanglement to transmit information, which occurs essentially instantly. It is logical, therefore, to infer that a super-intelligent agency created life's awesome high-speed internet on a molecular level.

E. Camprubí (2019): The organization of various biological forms and their interrelationships, vis-à-vis biochemical and molecular networks, is characterized by the interlinked processes of flow of information between the information-bearing macromolecular semantides, namely DNA and RNA, and proteins. This flow takes place through transcription (of DNA to RNA) and translation (of the message from RNA to proteins) via the algorithm that is the genetic code. 53

E) An information translation system
The translation of a word from one language to another is always of mental origin. For example, the assignment of the word chair, in English, to yizi, in Chinese, can only be made by intelligence upon common agreement of meaning. In biology, the genetic code is the assignment (a cipher) of 64 triplet codons to 20 amino acids.


The irreducible interdependence of information generation and transmission systems
1. Codified information transmission system depends on:
a) A language, where a symbol, letters, words, waves or frequency variations, sounds, pulses, or a combination of these are assigned to something else. Assigning meaning to characters through a code system requires a common agreement on meaning. Statistics, semantics, syntax, and pragmatics are used according to combinatorial, context-dependent, and content-coherent rules.
b) An information storage system,
c) Information encoded through that code,
d) An information transmission system, that is encoding, transmitting, and decoding.
e) Eventually translation ( the assignment of the meaning of one language to another )
f)  Eventually conversion ( digital-analog conversion, modulators, amplifiers)
g) Eventually transduction converting the nonelectrical signals into electrical signals
2. In living cells, information is encoded through at least 30 genetic and almost 30 epigenetic codes that form various sets of rules and languages. They are transmitted through a variety of means: the cell cilia as centers of communication; microRNAs influencing cell function; the nervous system, with synaptic transmission, neuromuscular transmission, transmission between nerves and body cells, axons as wires, and the transmission of electrical impulses by nerves between the brain and receptor/target cells; vesicles, exosomes, platelets, hormones, biophotons, biomagnetism, cytokines and chemokines; elaborate communication channels related to defense against microbial attacks; and nuclei as modulators-amplifiers. These information transmission systems are essential for keeping up all biological functions: organismal growth and development, metabolism, regulating nutritional demands, controlling reproduction, homeostasis, constructing biological architecture, complexity, and form, controlling organismal adaptation, change, and regeneration/repair, and promoting survival.
3. The origin of such complex communication systems is best explained by an intelligent designer. Since no humans were involved in creating these complex computing systems, a suprahuman super-intelligent agency must have been the creator of the communication systems used in life.

The implementation of sophisticated technology always requires intelligent technicians
Technological principles are implemented in the genetic system, and in biology as a whole, far beyond what we humans, even today, are able to instantiate and achieve. Without a doubt, the mind that implemented and created all this is the ultimate technologist: the supreme information theorist, polymer scientist, energy engineer, electrical engineer, operations manager, architect, systems chemist, photochemist, synthetic chemist, and molecular biologist, to name just a few of the wide-ranging capabilities that he has displayed in his creation. It is staggering to realize that God has implemented far more creative technological solutions than any human engineer, past or present, ever has or ever could.

Obviously, physical, lifeless, non-cognitive principles, dead natural molecules, could not have become the ultimate computer engineer! Natural selection is an inadequate and clearly false scientific explanation. It could not be a significant part of the answer to the question of where these high-tech solutions came from. Unguided natural selection is traditionally viewed as a tinkerer, akin to what takes place during the execution of a computer program. Such algorithmic processes have never been shown to become implemented and to start their powerful information processing and transmission on their own. It is, without exception, human intelligence that has demonstrated itself capable of implementing exceptional technological innovations through computer programs that have brought significant innovations into our lives. Pointing to the existence of such an algorithmic process arising by natural, non-intelligent means is entirely misplaced.

Consider a computer programmed to play chess. Thanks to the algorithmic process taking place within that computer, it may well outplay any human opponent. Interestingly, however, that chess-playing computer does not know that a human game called chess even exists, let alone that it is playing the game at any given moment. A software program only does what all software programs do: it executes preprogrammed operations. It reduces any problem to a set of ‘dumb’ calculations. But at the beginning of such a program, there is always a software engineer who implemented the algorithm and the program. Just as a chess computer does not know it is playing chess, a living organism does not know that it is utilizing instructional assembly information to produce proteins, that it is activating its immune system to protect itself from viruses and other environmental dangers, or that it is using a highly sophisticated flagellum motor, similar to an outboard engine, to swim toward a food source. Furthermore, there is the mind, a new dimension of reality, one that seemingly extends beyond the physical. The significance of the creation of the human mind is hard to overstate. It means that through our eyes we can see, conceptually, using our minds, which we have no clue how came about. Our mind has enabled us to be conscious, and to discover and comprehend the natural world, the natural laws, calculus, and math, using language and applying the laws of logic consciously. Our intelligent minds have paved the path to scientific discovery; we have managed to uncover and exploit the secrets of the metaphysical, physical, chemical, and biological world.

Remarkably, our minds have allowed us to begin to investigate, understand, and marvel at the mindless natural world. A material system would never have been capable of undergoing such an extraordinarily innovative process, implementing data, directing the making of molecular machines and self-replicating cell factories, and becoming conscious on its own! Mindless natural laws cannot program a computer to play chess, much less if there were no hardware, no computer, on which that program could have been installed! Matter could not have implemented and stored algorithmic data in DNA by a random process without there being a suitable material substrate, in this case a simple life form storing the DNA molecule, on which the blind algorithmic process could operate. How did that initial life form, the equivalent of the computer in the chess analogy, come to be? Hidden within natural law there could not exist the primal means by which nature was able to create such a system, one with the extraordinary potential to discover itself! Agency, mind, technological innovation, scientific discovery, whether natural, whether human, all had to arise out of a much greater, much more powerful, intelligent mind. The creation of matter and mind was certainly instantiated by a much greater, eternal, powerful mind.

The Genetic Code
The Genetic Code is often confused with the specified, information-bearing sequence of DNA or mRNA nucleotides, but it is in fact a translation program: the crucial bridge that decodes the information stored in nucleotides into proteins. It is a fundamental basis for all biology. Elucidating its origin is therefore of the utmost importance if we want to find out where life came from. While evolutionary pressures are routinely invoked to explain its emergence, that mechanism cannot be invoked here, since evolution depends on the genetic code and its translation machinery being fully implemented before life and self-replication can start. 

B. Alberts Molecular Biology of the Cell 4th ed. (2003): Once an mRNA has been produced by transcription and processing, the information present in its nucleotide sequence is used to synthesize a protein. Transcription is simple to understand as a means of information transfer: since DNA and RNA are chemically and structurally similar, the DNA can act as a direct template for the synthesis of RNA by complementary base-pairing. As the term transcription signifies, it is as if a message written out by hand is being converted, say, into a typewritten text. The language itself and the form of the message do not change, and the symbols used are closely related. In contrast, the conversion of the information in RNA into protein represents a translation of the information into another language that uses quite different symbols. Moreover, since there are only four different nucleotides in mRNA and twenty different types of amino acids in a protein, this translation cannot be accounted for by a direct one-to-one correspondence between a nucleotide in RNA and an amino acid in protein. The nucleotide sequence of a gene, through the medium of mRNA, is translated into the amino acid sequence of a protein by rules that are known as the genetic code. The sequence of nucleotides in the mRNA molecule is read consecutively in groups of three. RNA is a linear polymer of four different nucleotides, so there are 4 × 4 × 4 = 64 possible combinations of three nucleotides: the triplets AAA, AUA, AUG, and so on. However, only 20 different amino acids are commonly found in proteins. The code is redundant and some amino acids are specified by more than one triplet. Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process. 54

B. Alberts Molecular Biology of the Cell 6th ed. (2015): Each group of three consecutive nucleotides in RNA is called a codon, and each codon specifies either one amino acid or a stop to the translation process. AUG is the Universal Start Codon. Nearly every organism (and every gene) that has been studied uses the three ribonucleotide sequence AUG to indicate the "START" of protein synthesis (Start Point of Translation). 55

Furthermore, three codons are referred to as STOP codons: UAA, UAG, and UGA. These are used to terminate translation; they indicate the end of the gene's coding region.

Why and how would natural processes have "chosen" to insert a punctuation signal, a universal start codon, and stop codons, so that the ribosome "knows" where to start and end translation? This is essential for the machinery to begin translating at the correct place.

1. The instantiation of communication systems depends on creating a language using symbols (statistics, semantics, syntax, and pragmatics), together with information storage, transmission, translation, conversion, and eventually a transduction system.
2. Signal transmission is a fundamental property in all biological life forms. Cells use various kinds of molecular communication, cell signaling, signal transduction pathways, genetic and epigenetic codes and languages.  
3. Communication systems are always instantiated by thinking minds. Therefore, biological communication systems were intelligently designed.

The Cell factory maker, Paley's watchmaker argument 2.0 Polype13
The figure above outlines the mechanism whereby the information corresponding to an (arbitrarily chosen) DNA sequence is transferred. Here the messenger RNA is assumed to be transcribed from the DNA strand marked by an asterisk. 56

Paul Davies, The fifth miracle (2000): page 105:
I have described life as a deal struck between nucleic acids and proteins. However, these molecules inhabit very different chemical realms; indeed, they are barely on speaking terms. This is most clearly reflected in the arithmetic of information transfer. The data needed to assemble proteins are stored in DNA using the four-letter alphabet A, G, C, T. On the other hand, proteins are made out of twenty different sorts of amino acids. Obviously, twenty into four won’t go. So how do nucleic acids and proteins communicate? Earthlife has discovered a neat solution to this numerical mismatch by packaging the bases in triplets. Four bases can be arranged in sixty-four different permutations of three, and twenty will go into sixty-four, with some room left over for redundancy and punctuation. The sequence of rungs of the DNA ladder thus determines, three by three, the exact sequence of amino acids in the proteins. To translate from the sixty-four triplets into the twenty amino acids means assigning each triplet (termed a codon) a corresponding amino acid. This assignment is called the genetic code. The idea that life uses a cipher was first suggested in the early 1950s by George Gamow, the same physicist who proposed the modern big-bang theory of the universe. As in all translations, there must be someone, or something, that is bilingual, in this case to turn the coded instructions written in nucleic acid language into a result written in amino-acid language. From what I have explained, it should be apparent that this crucial translation step occurs in living organisms when the appropriate amino acids are attached to the respective molecules of tRNA prior to the protein-assembly process.  This attachment is carried out by a group of clever enzymes  57

Job Merkel (2019): DNA translation: Everyone speaks a language. Animals speak a language. Computers speak a language. Even your cells speak a language. And like any language, we need to understand the basic rules before we can read and write with it. Four letters make up DNA’s alphabet: Adenine (A), Cytosine (C), Guanine (G), and Thymine (T). But letters alone do not make a language. Conveniently, all of DNA’s words are the same length. They are all three (3) letters long. Scientists call these three letters a codon. In the following chart, we’ll see what these codons mean.

The Cell factory maker, Paley's watchmaker argument 2.0 F2.medium

Each codon designates an amino acid. For example, the codon TAT codes for the amino acid Tyrosine. If we continue our analogy, this makes each codon a “word.” These words are the basis of DNA translation. In DNA translation, DNA is converted into a specific sequence of amino acids. But words alone aren’t enough to convey meaning. You need to string words together to form sentences. In the same way, amino acids combine together through DNA translation to form proteins. These sentences need punctuation. Punctuation serves to let you know when a sentence begins, when it ends, and any pauses or gaps in-between. DNA is no different. It uses specific codons to indicate the beginning or ending of a sentence. For example, the codon “ATG” indicates the beginning of an amino acid sequence. For this reason, scientists refer to ATG as the “START” codon. It is always at the beginning of a sentence. Without a START codon, your cells wouldn’t know where to begin making proteins. There are also three codons that act as a “STOP” codon. These three codons (TGA, TAA, TAG) always indicate the end of a sentence. Without a STOP codon, your cells wouldn’t know when to stop making a given protein. As a demonstration, here’s what an example of a “sentence” might look like in DNA: ATG TAT CAG GGA TGA. This translates to: START - Tyrosine - Glutamine - Glycine - STOP. This would produce a protein made of 3 amino acids (Tyr-Gln-Gly). Most proteins are not this short. For example, a hemoglobin subunit is 141 amino acids long. To continue the metaphor of language, sentences aren’t the only part of a written document. Writers clump similar sentences together into paragraphs. And the same is true for proteins. Individual units of protein may come together to form something larger than themselves. DNA acts as the alphabet, coding for amino acids in codons. These codons act as words to make proteins. These proteins act as sentences, and merge together to make larger structures. 
These larger structures are your paragraphs. 58

1. There are two kinds of punctuation marks in the genetic code, the START and STOP codons, which signal the beginning and end of protein synthesis in all organisms. ATG is the “START” codon; it is always at the beginning of a sentence. Without a START codon, your cells wouldn’t know where to begin making proteins. Three codons act as “STOP” codons (TGA, TAA, TAG); they always indicate the end of a sentence. Without a STOP codon, your cells wouldn’t know when to stop making a given protein. As a demonstration, here is what an example “sentence” might look like in DNA: ATG TAT CAG GGA TGA. This translates to: START - Tyrosine - Glutamine - Glycine - STOP, producing a short protein of three amino acids (Tyr-Gln-Gly).
2. Without start and stop codon signals, there would be no way to begin or end the process of translation. Ribosomes can translate RNA sequences without the need for an initiation codon, as demonstrated in experiments leading to the elucidation of the genetic code. If AUG is missing, it will start later at the next AUG. This will likely create a small or big deletion and may cause a frameshift. Termination of protein translation occurs when the translating ribosome reaches a stop codon that is recognized by a release factor. Each of the three stop codons, UAA, UGA and UAG, is used in all three domains of life. During protein synthesis, STOP codons cause the release of the new polypeptide chain from the ribosome. This occurs because there are no tRNAs with anticodons complementary to the STOP codons. Without stop codons, an organism is unable to produce specific proteins. The new polypeptide (protein) chain will just grow and grow until the cell bursts or there are no more available amino acids to add to it. A nonsense mutation occurs in DNA when a sequence change gives rise to a stop codon rather than a codon specifying an amino acid. The presence of the new stop codon results in the production of a shortened protein that is likely non-functional.
3. The START and STOP codons had to be part of the genetic code right from the beginning, or proteins could not be synthesized. Gradual evolutionary development is not feasible. The genetic code most certainly was designed.
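The start/stop punctuation described in the points above can be sketched in a few lines of Python. This is a minimal illustration, not a model of real ribosome kinetics; the helper names are ours, and the codon table is the standard one. Note one simplification in the quoted example: the ATG start codon also encodes methionine, so the translated chain actually begins with Met.

```python
# Minimal sketch of translation "punctuation": scan a DNA coding strand for
# the ATG start codon, then read codon by codon until a stop codon appears.
# Standard genetic code packed in TCAG order; '*' marks the three stop codons.
CODE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"

def amino_acid(codon):
    """One-letter amino acid (or '*' for stop) encoded by a DNA codon."""
    i, j, k = (BASES.index(b) for b in codon)
    return CODE[16 * i + 4 * j + k]

def translate(dna):
    """Translate from the first ATG up to (not including) the first stop."""
    start = dna.find("ATG")
    if start == -1:            # no start codon: the ribosome never initiates
        return ""
    peptide = []
    for pos in range(start, len(dna) - 2, 3):
        aa = amino_acid(dna[pos:pos + 3])
        if aa == "*":          # stop codon: no matching tRNA, chain released
            break
        peptide.append(aa)
    return "".join(peptide)

# Merkel's example "sentence": ATG TAT CAG GGA TGA
print(translate("ATGTATCAGGGATGA"))    # -> MYQG (Met-Tyr-Gln-Gly)
```

Deleting the final TGA from the input lets the loop run to the end of the available sequence, mirroring the runaway-chain scenario described above for a missing stop codon.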

The Genetic Code is not random, but arbitrary
Jacques Monod (1972): The genetic code, universal in the biosphere, seems to be chemically arbitrary, inasmuch as the transfer of information could just as well take place according to some other convention 56

Ulrich E. Stegmann (2004): The genetic code has been regarded as arbitrary in the sense that the codon-amino acid assignments could be different than they actually are. The genetic code is arbitrary in that any codon requires certain chemical and structural properties to specify a particular amino acid, but these properties are not required in virtue of a principle of chemistry. It is arbitrary that a particular codon specifies one particular amino acid rather than a different one. The only generally accepted sense of “arbitrary” seems to be that the assignments could be different than they actually are. Chemical arbitrariness is similar or even equivalent to the absence of chemical necessity. 59

David L. Abel (2009): Anti-codons are at opposite ends of t-RNA molecules from amino acids. The linking of each tRNA with the correct amino acid depends entirely upon a completely independent family of tRNA aminoacyl synthetase proteins. Each of these synthetases must be specifically prescribed by separate linear digital programming. These symbol and coding systems not only predate human existence, they produced humans along with their anthropocentric minds. The nucleotide and codon syntax of DNA linear digital prescription has no physicochemical explanation. All nucleotides are bound with the same rigid 3’5’ phosphodiester bonds. The codon table is arbitrary and formal, not physical. Codon syntax communicates time-independent, non-physicodynamic “meaning” (prescription of biofunction). This meaning is realized only after abstract translation via a conceptual codon table. To insist that codon syntax only represents amino acid sequence in our human minds is not logically tenable. 60

Ludmila Lackova (2017):  Jacques Monod, declared the arbitrary nature of the genetic code explicitly: ‘‘There is no direct steric relationship between the coding triplet and the encoded amino acid. The code […] seems chemically arbitrary.’’ (Monod 1970, 123). Arbitrariness was defined by Ferdinand de Saussure as one of the three main principles of languages. According to de Saussure, a linguistic sign and any kind of sign ‘‘is arbitrary in that it has no natural connection’’ between the sign and its object (De Saussure 1916, 69).  It means that there is no natural direct connection between the word dog and its meaning (the object in general, in this case, any dog). In other words, what is referred to does not necessitate the form of what is referring to (the referent). It is important. In a similar way as the relation between a sign and its meaning is mediated in the natural language, the binding between the amino acid and the triplet in the genetic code is not direct, but mediated by the tRNA molecule. The tRNA has a place for amino acid attachment and on the opposite side a place for an anticodon. In both natural language and the genetic code, there is no physical connection between the two entities that enter into the relation; the connection is conventional or historical. For reasons described in the previous paragraph, in the field of biology and biosemiotics, it is used to consider the strings of nucleic bases as signs and the strings of amino acids as their meanings, in other words, nucleic bases should refer to amino acid in the same way as words refer to objects (meanings). The mediated connection between the two entities in the protein synthesis makes it tempting to consider them as signs and their meanings. Since amino acids in a form of a string are not the final product of protein synthesis and do not represent functional units, they cannot be considered as meaning of the genetic code. Amino acids as such have no direct function in a cell. 
They only provide a framework of the final protein that acts as functional unit and it is the shape of a protein that determines whether the protein can interact with other molecules and in which way. Not every shape of a protein has a function, but every function is provided by a shape, thus we suggest that shapes are the elementary meaning-carrying entities in a cell or in an organism. 61

Eugene V. Koonin (2017): The assignment of codons to amino acids across the code table is clearly nonrandom: Related amino acids typically occupy contiguous areas in the table. The second position of a codon is the most important specificity determinant, and three of the four columns of the codon table encode related, chemically similar amino acids. For example, all codons with a U in the second position correspond to hydrophobic amino acids. Even a simple qualitative examination shows that the code is robust to mutational or translational errors. Substitutions and translation errors in synonymous positions (typically, the third position in a codon) have no effect on the protein (although this does not necessarily imply such substitutions are selectively neutral), whereas substitutions in the first position most often lead to the incorporation of an amino acid similar to the correct one, thus decreasing the damage. 62
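Two of Koonin's observations can be checked mechanically against the standard codon table. The sketch below is our own illustration (the sign of the Kyte-Doolittle hydropathy scale is used informally as a stand-in for "hydrophobic"): it verifies that every codon with U (written T in DNA notation) in the second position encodes one of the hydrophobic amino acids F, L, I, M, V, and that single-base substitutions in the third codon position are silent far more often than those in the first position.

```python
from itertools import product

# Standard genetic code packed in TCAG order; '*' marks the stop codons.
CODE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
TABLE = dict(zip(CODONS, CODE))

# Koonin's first observation: U (= T in DNA) in the second position always
# yields a hydrophobic amino acid.
second_position_t = {TABLE[c] for c in CODONS if c[1] == "T"}
print(sorted(second_position_t))       # -> ['F', 'I', 'L', 'M', 'V']

# Robustness to error: how often is a single-base substitution silent,
# depending on which codon position it hits?
def synonymous_fraction(position):
    silent = total = 0
    for c in CODONS:
        for b in BASES:
            if b == c[position]:
                continue
            mutant = c[:position] + b + c[position + 1:]
            total += 1
            silent += TABLE[c] == TABLE[mutant]
    return silent / total

# Third-position changes are silent far more often than first-position ones.
print(round(synonymous_fraction(0), 3), round(synonymous_fraction(2), 3))
```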

A: Information, Biosemiotics ( instructional complex mRNA codon sequences transcribed from DNA )
B: Translation mechanism ( adapter, key, or process of some kind to exist prior to translation = ribosome )
C: Genetic Code
D: Functional proteins

1. Life depends on proteins ( molecular machines ) (D). Their function depends on the correct arrangement of a specified complex sequence of amino acids.
2. That depends on the translation of genetic information (A) through the ribosome (B) and the genetic code (C), which assigns 61 sense codons (including the AUG start codon) to 20 amino acids and reserves 3 codons as stop signals
3. Instructional complex information (biosemiotics: semantics, syntax, and pragmatics (A)) is only generated by intelligent beings with foresight. Only intelligence with foresight can conceptualize and instantiate complex machines with specific purposes, like translation using adapter keys (ribosome, tRNA, aminoacyl-tRNA synthetases (B)). All codes require arbitrary values to be assigned, and determined by agency, to represent something else (genetic code (C)).
4. Therefore, proteins, being the product of semiotic/algorithmic information, including its translation through the genetic code and the manufacturing system (information directing manufacturing), are most probably the product of a divine intelligent designer.

The Genetic Code is more robust than 1 million alternatives
Thomas Butler (2009): Almost immediately after its elucidation, attempts were made to explain the assignment of codons to amino acids. It was noticed that amino acids with related properties were grouped together, which would have the effect of minimizing translation errors. The canonical genetic code was compared to samples of randomly generated synthetic codes. Depending on the measure used to characterize or score the sampled codes, high degrees of optimality have been reported. For example, using an empirical measure of amino acid differences referred to below as the “experimental polar requirement”, Freeland and Hurst calculated that the genetic code is “one in a million”. More recently, it has been shown that when coupled to known patterns of codon usage, the canonical code and the codon usage are simultaneously optimized with respect to point mutations and to the rapid termination of peptides that are generated with frameshift errors. 63

S. J. Freeland  (1998): Statistical and biochemical studies of the genetic code have found evidence of nonrandom patterns in the distribution of codon assignments. It has, for example, been shown that the code minimizes the effects of point mutation or mistranslation: erroneous codons are either synonymous or code for an amino acid with chemical properties very similar to those of the one that would have been present had the error not occurred. If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors. 64
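Freeland and Hurst's procedure can be imitated in a highly simplified form: randomly reassign which amino acid each synonymous codon block encodes, score every code by the mean squared change in amino-acid character caused by single-base substitutions, and ask how often a random code beats the canonical one. The sketch below is our own illustration, not their method: it substitutes Kyte-Doolittle hydropathy for their "polar requirement" measure and ignores their mutation-bias weightings, so the resulting frequency is only indicative.

```python
import random
from itertools import product

# Standard genetic code packed in TCAG order; '*' marks the stop codons.
CODE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"
CODONS = ["".join(c) for c in product(BASES, repeat=3)]
CANONICAL = dict(zip(CODONS, CODE))

# Kyte-Doolittle hydropathy values, a rough stand-in for polar requirement.
HYDRO = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
         "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
         "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
         "K": -3.9, "R": -4.5}

def cost(table):
    """Mean squared hydropathy change over all single-base substitutions."""
    total = n = 0
    for c in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == c[pos]:
                    continue
                m = c[:pos] + b + c[pos + 1:]
                if table[c] == "*" or table[m] == "*":
                    continue           # ignore substitutions involving stops
                total += (HYDRO[table[c]] - HYDRO[table[m]]) ** 2
                n += 1
    return total / n

def random_code(rng):
    """Shuffle which amino acid each synonymous block encodes; the block
    structure (degeneracy pattern) and the stop codons stay fixed."""
    aas = sorted(set(CODE) - {"*"})
    relabel = dict(zip(aas, rng.sample(aas, len(aas))))
    relabel["*"] = "*"
    return {c: relabel[a] for c, a in CANONICAL.items()}

rng = random.Random(0)                 # fixed seed for reproducibility
canonical_cost = cost(CANONICAL)
costs = [cost(random_code(rng)) for _ in range(1000)]
better = sum(c < canonical_cost for c in costs)
print(f"canonical: {canonical_cost:.2f}; {better}/1000 random codes beat it")
```

Even with this crude stand-in measure, the canonical code scores well below the average of the shuffled codes; the published "one in a million" figure comes from the refined measure and error weightings described in the quotes above.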

S. J. Freeland (2000): The canonical code is at or very close to a global optimum for error minimization across plausible parameter space. This result is robust to variation in the methods and assumptions of the analysis. Although significantly better codes do exist under some assumptions, they are extremely rare and thus consistent with reports of an adaptive code: previous analyses which suggest otherwise derive from a misleading metric. However, all extant, naturally occurring, secondarily derived, nonstandard genetic codes do appear less adaptive. 65

Subsequent efforts employing much more sophisticated models revealed even greater robustness of the code (Hani Goodarzi 2004)

Shalev Itzkovitz (2007): DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information. 66

Adapted from Fazale Rana The Cell's Design (2008) Page 172: The genetic code is fine-tuned to minimize errors. These error-minimization properties allow the cell to make mistakes and still communicate critical information with high fidelity. The cell uses groupings of three nucleotides (codons) to specify twenty amino acids. The cell uses a set of rules—the genetic code—to relate these nucleotide triplet sequences to the twenty amino acids used to make polypeptides. Codons represent the fundamental coding units. In the same way that the stranded islander used three letters (SOS) to communicate, the genetic code uses three-nucleotide "characters" that are assigned to an amino acid. It consists of sixty-four codons. Because the genetic code only needs to encode twenty amino acids, some of the codons are redundant. Different codons can code for the same amino acid. In fact, up to six different codons specify some amino acids. A single codon specifies others. The universal genetic code is conventionally presented according to how the information appears in mRNA molecules after the information stored in DNA is transcribed. (In RNA uracil is used instead of thymine [T].) The first nucleotide of the coding triplet begins at what biochemists call the 5' end of the sequence. Each nucleotide in the codon's first position (5' end) can be read from the left-most column, and the nucleotide in the second position can be read from the row across the top of the table. The nucleotide in each codon's third position (the 3' end) can be read within each box. For example, the two codons, 5' UUU and 5' UUC, that specify phenylalanine (abbreviated Phe) are listed in the box located at the top left corner of the table. Interestingly, some codons (stop or nonsense codons) don't specify any amino acids. They always occur at the end of the gene, informing the protein manufacturing machinery where the polypeptide chain ends. 
Stop codons serve as a form of "punctuation" for the cell's information system. (For example, UGA is a stop codon.) Some coding triplets (start codons) play a dual role in the genetic code. These codons not only encode amino acids but also "tell" the cell where a polypeptide begins. For example, the codon GUG not only encodes the amino acid valine but also specifies the beginning of a polypeptide chain. Start codons function as a sort of "capitalization" for the information system of the cell. A capacity to resist the errors that naturally occur as a cell uses or transmits information from one generation to the next is built into the code. Recent studies employing methods to quantify the genetic code's error-minimization properties indicate that the genetic code's rules have been finely tuned. The failure of the genetic code to transmit and translate information with high fidelity can be devastating to the cell. A mutation refers to any change that takes place in the DNA nucleotide sequence. Several different types of changes to DNA sequences can occur, with substitution mutations being the most frequent. As a result of these mutations, a nucleotide(s) in the DNA strand is replaced by another nucleotide(s). For example, an A may be replaced by a G, or a C by a T. When substitutions occur, they alter the codon that houses the substituted nucleotide. And if the codon changes, then the amino acid specified by that codon also changes, altering the amino acid sequence of the polypeptide chain specified by the mutated gene. This mutation can then lead to a distorted chemical and physical profile along the polypeptide chain. If the substituted amino acid has dramatically different physicochemical properties from the native amino acid, then the polypeptide folds improperly. An improperly folded protein has reduced or even lost function. Mutations can be deleterious because they hold the potential to significantly and negatively impact protein structure and function.

Error minimization
Six codons encode the amino acid leucine (Leu). If at a particular amino acid position in a polypeptide, Leu is encoded by 5'CUU, substitution mutations in the 3' position from U to C, A, or G produce three new codons—5'CUC, 5'CUA, and 5'CUG, respectively—all of which code for Leu. The net effect leaves the amino acid sequence of the polypeptide unchanged. And, the cell successfully avoids the negative effects of a substitution mutation. Likewise, a change of C in the 5' position to a U generates a new codon, 5'UUU, which specifies phenylalanine, an amino acid with physical and chemical properties similar to Leu. Changing C to an A or a G produces codons that code for isoleucine and valine, respectively. These two amino acids possess chemical and physical properties similar to leucine. Qualitatively, it appears as if the genetic code has been constructed to minimize the errors that could result from substitution mutations. Recently, scientists have worked to quantitatively evaluate the error-minimization capacity of the genetic code. One of the first studies to perform this analysis indicated that the universal genetic code found in nature could withstand the potentially harmful effects of substitution mutations better than all but 0.02 percent (1 out of 5,000) of randomly generated genetic codes with different codon assignments than the one found throughout nature.
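The leucine example above can be verified by brute force against the standard table (a quick sketch of our own; the helper function is hypothetical, the codon table is the standard one):

```python
# Check the 5'CUU (DNA: CTT) example: enumerate all nine single-base neighbors.
# Standard genetic code packed in TCAG order; '*' marks the stop codons.
CODE = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASES = "TCAG"

def amino_acid(codon):
    i, j, k = (BASES.index(b) for b in codon)
    return CODE[16 * i + 4 * j + k]

codon = "CTT"                                   # 5'CUU in mRNA: leucine (L)
for pos in range(3):
    for b in BASES:
        if b != codon[pos]:
            mutant = codon[:pos] + b + codon[pos + 1:]
            print(f"position {pos + 1}: {mutant} -> {amino_acid(mutant)}")
```

First-position mutants give F, I, V (phenylalanine, isoleucine, valine), the chemically similar substitutes described above; third-position mutants CTC, CTA, CTG all still read L; only second-position hits (P, H, R) change the chemistry sharply.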

This initial work, however, did not take into account the fact that some types of substitution mutations occur more frequently in nature than others. For example, an A-to-G substitution occurs more often than either an A-to-C or an A-to-T mutation. When researchers incorporated this correction into their analysis, they discovered that the naturally occurring genetic code performed better than one million randomly generated genetic codes and that the genetic code in nature resides near the global optimum for all possible genetic codes with respect to its error-minimization capacity. Nature's universal genetic code is truly one in a million! The genetic code's error-minimization properties are far more dramatic than these results indicate. When the researchers calculated the error-minimization capacity of the one million randomly generated genetic codes, they discovered that the error-minimization values formed a distribution with the naturally occurring genetic code lying outside the distribution. Researchers estimate the existence of 10^18 possible genetic codes possessing the same type and degree of redundancy as the universal genetic code. All of these codes fall within the error-minimization distribution. This means that, of the 10^18 possible genetic codes, few, if any, have an error-minimization capacity that approaches the code found universally throughout nature. A genetic code assembled through random biochemical events could not possess near-ideal error-minimization properties.

In 1968, Nobel laureate Francis Crick argued that the genetic code could not undergo significant evolution. Any change in codon assignments would lead to changes in amino acids in every polypeptide made by the cell. This wholesale change in polypeptide sequences would result in a large number of defective proteins. Nearly any conceivable change to the genetic code would be lethal to the cell. The scientists who suggest that natural selection shaped the genetic code are fully aware of Crick's work. Still, they rely on evolution to explain the code's optimal design because of the existence of nonuniversal genetic codes. While the genetic code in nature is generally regarded as universal, some non-universal genetic codes exist—codes that employ slightly different codon assignments. Presumably, these nonuniversal codes evolved from the universal genetic code. Therefore, researchers argue that such evolution is possible. But, the codon assignments of the nonuniversal genetic codes are nearly identical to those of the universal genetic code with only one or two exceptions. Non-universal genetic codes can be thought of as deviants of the universal genetic code. Does the existence of nonuniversal codes imply that wholesale genetic code evolution is possible? A careful study reveals that codon changes in the nonuniversal genetic codes always occur in relatively small genomes, such as those in mitochondria. These changes involve (1) codons that occur at low frequencies in that particular genome or (2) stop codons. Changes in assignment for these codons could occur without producing a lethal scenario because only a small number of polypeptides in the cell or organelle would experience an altered amino acid sequence. So it seems limited evolution of the genetic code can take place, but only in special circumstances. The existence of nonuniversal genetic codes does not necessarily justify an evolutionary origin of the amazingly optimal genetic code found in nature.

Even if the genetic code could change over time to yield a set of rules that allowed for the best possible error-minimization capacity, is there enough time for this process to occur? Biophysicist Hubert Yockey addressed this question. (H. Yockey (2005): Let us calculate the number of genetic codes with the codon-amino acid assignment typical of the modern standard genetic code. We have: 1.40 × 10^70. One must presume that the modern genetic code did not originate from among codes awaiting assignment. 67) Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code. A team headed by renowned origin-of-life researcher Manfred Eigen estimated the age of the genetic code at 3.8 ± 0.6 billion years. Current geochemical evidence places life's first appearance on Earth at 3.86 billion years ago. This timing means that the genetic code's origin coincides with life's start on Earth. It appears as if the genetic code came out of nowhere, without any time to search out the best option. In the face of these types of problems, some scientists suggest that the genetic code found in nature emerged from a simpler code that employed codons consisting of one or two nucleotides. Over time, these simpler genetic codes expanded to eventually yield the universal genetic code based on coding triplets. The number of possible genetic codes based on one or two nucleotide codons is far fewer than for codes based on coding triplets. This scenario makes code evolution much more likely from a naturalistic standpoint. One complicating factor for these proposals arises, however, from the fact that simpler genetic codes cannot specify twenty different amino acids. Rather, they are limited to sixteen at most. Such a scenario would mean that the first life forms had to make use of proteins that consisted of no more than sixteen different amino acids. 
Interestingly, some proteins found in nature, such as ferredoxins, are produced with only thirteen amino acids. On the surface, this observation seems to square with the idea that the genetic code found in nature arose from a simpler code. Yet, proteins like ferredoxins are atypical. Most proteins require all twenty amino acids. This requirement, coupled with the recent recognition that life in its most minimal form needs several hundred proteins, makes these types of models for code evolution speculative at best. The optimal nature of the genetic code and the difficulty of accounting for the code's origin from an evolutionary perspective work together to support the conclusion that an Intelligent Designer programmed the genetic code, and hence, life. 68
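Yockey's figure of 1.40 × 10^70 can be reproduced as a straightforward multinomial count over the degeneracy pattern of the standard code. The sketch below is our reconstruction, not Yockey's own derivation; in particular, treating the three stop codons as fixed rather than as one interchangeable block is an assumption we make because it reproduces his published figure.

```python
from math import factorial as f

# Degeneracy of the standard code: 3 amino acids with six codons each
# (Leu, Ser, Arg), 5 with four (Val, Pro, Thr, Ala, Gly), 1 with three (Ile),
# 9 with two (Phe, Tyr, Cys, His, Gln, Asn, Lys, Asp, Glu), 2 with one
# (Met, Trp), plus the 3 stop codons: 18 + 20 + 3 + 18 + 2 + 3 = 64.
block_sizes = [6] * 3 + [4] * 5 + [3] + [2] * 9 + [1] * 2

# Number of codon-to-amino-acid assignments sharing this degeneracy pattern:
# 64! divided by the factorial of each amino acid's block size.
denominator = 1
for n in block_sizes:
    denominator *= f(n)
codes = f(64) // denominator

print(f"{codes:.2e}")                  # roughly 1.4e70, Yockey's figure
```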

Exploiting the redundancy (or degeneracy) of the genetic code
D. L. Gonzalez (2019): Since the cardinality of the starting set of codons (64) is greater than the cardinality of the arriving set of amino acids (20 + 2), the mapping is necessarily degenerate. In other words, some amino acids are coded by two or more codons. 69

Tessa E. F. Quax et al. (2016): Because 18 of 20 amino acids are encoded by multiple synonymous codons, the genetic code is called “degenerate.” Because synonymous mutations do not affect the identity of the encoded amino acid, they were originally thought to have no consequences for protein function or organismal fitness and were therefore regarded as “silent mutations.” However, comparative sequence analysis revealed a non-random distribution of synonymous codons in genes of different organisms. Each organism seems to prefer a different set of codons over others; this phenomenon is called codon bias. It has been established that codon bias also influences protein folding and differential regulation of protein expression. Analysis of the tRNA content of organisms in all domains of life showed that they never contain a full set of tRNAs with anticodons complementary to the 61 different codons; for example, 39 tRNAs with distinct anticodons are present in the bacterium Escherichia coli, 35 in the archaeon Sulfolobus solfataricus, and 45 in the eukaryote Homo sapiens. 70

M.Eberlin (2019): The redundancy is vital. The apparent overkill minimizes reading and transmitting errors so that the same amino acid is transferred to each generation. But if carefully inspected, the redundancies themselves don’t seem to be random, since they involve mainly changes in the third letter of each triplet. For example, the simplest amino acid, glycine, has four codons that specify it: GGA, GGC, GGG, and GGT. The only position that varies is the third, and any nucleotide in that position will still specify glycine. (There are other biological effects possible, though—for example, effects on the speed of protein synthesis and folding) Changes in the first and second letters are less common and are offset by the expression of amino acids with chemically similar properties and that don’t significantly alter the structure and properties of the final protein. For example, the CTT codon that codes for leucine becomes the chemically similar isoleucine when the C is replaced by A (ATT). Such redundancies establish a chemical buffer between amino acids when common errors occur. That is, the code of life has built-in safeguards against potentially damaging genetic typos. But that’s not the only purpose of the redundancy in our genetic code. The use of different codons to express a single amino acid also allows the speed of protein synthesis to be controlled. For example, four different codons may specify the same amino acid, but the four differ in their effects on how fast or slow a bond is made and the protein folds. This kinetic control gives each protein the exact amount of time it needs to form the correct 3-D shape. There are other nuances in our genetic code that seem to suggest foresight, such as the grouping of codons for amino acids with either acid or alkaline side chains. Hence, if environmental stimuli require exchanging an alkaline (basic) amino acid for an acidic amino acid in a protein, this exchange is aided by such grouping. 
Again, what a wonderful chemical trick! For example, a basic lysine coded by either AAA or AAG can easily be changed to the acidic glutamic acid by only a single letter substitution: GAA or GAG. Having such a flexible code helps the organism to stay alive. The code also anticipates and has safeguards against the most common single-point mutations. For instance, leucine is encoded by no less than six codons. The CTT codon encodes leucine, but all the third letter-mutation variations—CTC, CTA, and CTG—are “synonymous” and also encode leucine. First-letter mutations are rarer, and potentially more dangerous because they do change the amino acid specified—if C is exchanged for T, forming the TTT codon, a different amino acid (phenylalanine) will be expressed. But even for this, the genetic code has a safeguard: phenylalanine’s chemical properties are similar to leucine’s, so the protein will still retain its shape and function. If the first letter C in CTT (leucine) is replaced by A or G, something similar happens, since ATT (isoleucine) and GTT (valine) have physicochemical properties similar to leucine as well. 71
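Eberlin's mutation-buffer point can be illustrated with a short sketch. The dictionary below is a small, hand-picked subset of the standard codon table (DNA sense-strand spelling, as used in the text), and the helper name `point_mutations` is illustrative, not from any library:

```python
# Small subset of the standard codon table covering the examples in the text.
CODE = {
    "GGA": "Gly", "GGC": "Gly", "GGG": "Gly", "GGT": "Gly",
    "CTT": "Leu", "CTC": "Leu", "CTA": "Leu", "CTG": "Leu",
    "ATT": "Ile", "GTT": "Val", "TTT": "Phe",
    "AAA": "Lys", "AAG": "Lys", "GAA": "Glu", "GAG": "Glu",
}

def point_mutations(codon, position):
    """All single-nucleotide substitutions of `codon` at `position` (0-based)."""
    return [codon[:position] + b + codon[position + 1:]
            for b in "ACGT" if b != codon[position]]

# Third-position changes to GGA are silent: every variant still encodes glycine.
print([CODE.get(c) for c in point_mutations("GGA", 2)])   # ['Gly', 'Gly', 'Gly']

# First-position changes to CTT swap leucine for chemically similar residues.
print([CODE.get(c) for c in point_mutations("CTT", 0)])   # ['Ile', 'Val', 'Phe']
```

Every third-position variant of GGA is silent, while every first-position variant of CTT exchanges leucine for a residue with similar physicochemical properties, which is exactly the buffering Eberlin describes.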

 

50. María A. Sánchez-Romero: The bacterial epigenome. January 2020
51. Daniel J. Nicholson: On Being the Right Size, Revisited: The Problem with Engineering Metaphors in Molecular Biology. 2020
52. David F. Coppedge: Cilia Are Antennas for Human Senses and Development. October 26, 2007
53. E. Camprubí: The Emergence of Life. 27 November 2019
54. B. Alberts: Molecular Biology of the Cell. 4th edition. 2003
55. B. Alberts: Molecular Biology of the Cell. 6th edition. 2015
56. J. Monod: Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology. 12 September 1972
57. Paul Davies: The Fifth Miracle: The Search for the Origin and Meaning of Life. 2000
58. Job Merkel: The Language of DNA. 15 November 2019
59. Ulrich E. Stegmann: The arbitrariness of the genetic code. 9 September 2003
60. David L. Abel: The Capabilities of Chaos and Complexity. 9 January 2009
61. Ludmila Lackova: Arbitrariness is not enough: towards a functional approach to the genetic code. 2 May 2017
62. Eugene V. Koonin: Origin and Evolution of the Universal Genetic Code. 2017
63. Thomas Butler: Extreme genetic code optimality from a molecular dynamics calculation of amino acid polar requirement. 17 June 2009
64. S. J. Freeland: The genetic code is one in a million. September 1998
65. S. J. Freeland: Early Fixation of an Optimal Genetic Code. 1 April 2000
66. Shalev Itzkovitz: The genetic code is nearly optimal for allowing additional information within protein-coding sequences. April 2007
67. H. Yockey: Information theory, evolution, and the origin of life. 2005
68. Fazale Rana: The Cell's Design: How Chemistry Reveals the Creator's Artistry. 1 June 2008. Page 172
69. D. L. Gonzalez: On the origin of degeneracy in the genetic code. 18 October 2019
70. Tessa E. F. Quax: Codon Bias as a Means to Fine-Tune Gene Expression. 16 July 2016
71. M. Eberlin: Foresight. 2019



Last edited by Otangelo on Mon Jul 25, 2022 8:47 am; edited 63 times in total



https://www.news-medical.net/life-sciences/START-and-STOP-Codons.aspx

David L. Abel (2014): The codon redundancy (“degeneracy”) found in protein-coding regions of mRNA also prescribes Translational Pausing (TP). When coupled with the appropriate interpreters, multiple meanings and functions are programmed into the same sequence of configurable switch-settings. This additional layer of prescriptive Information (PI) purposely slows or speeds up the translation-decoding process within the ribosome. Variable translation rates help prescribe functional folding of the nascent protein. Redundancy of the codon to amino acid mapping, therefore, is anything but superfluous or degenerate. Redundancy programming allows for simultaneous dual prescriptions of prescriptive Information and amino acid assignments. This allows both functions to be coincident and realizable. The prescriptive Information schema is a bona fide rule-based code, conforming to logical code-like properties. This prescriptive Information code is programmed into the supposedly degenerate redundancy of the codon table. Algorithmic processes play a dominant role in the realization of this multi-dimensional code.

Genomic prescriptions of bio functions are multi-dimensional. Within the genome domain, executable operations format, read, write, copy, and maintain digital Functional Information (FI). Bio-molecular machines are programmed to organize, regulate, and control metabolism. The genetic code is composed of data sets residing in the particular sequencing of nucleotides. A large percentage of protein-folding is assisted by chaperones, some of which are RNAs rather than protein chaperones. But the final fold is primarily constrained by the primary-structure of amino-acid sequence. Protein-coding sequencing significantly affects translation rate, folding, and function. Most protein functionality is dependent upon its three-dimensional conformation. These conformations are dependent upon folding mechanisms performed upon the nascent protein. Such folding mechanisms are linked directly to several cooperative translational processes. By “translational processes,” we mean processes that go beyond simply translating and linking the amino acids. This paper expands the understanding of translation processes to go beyond just the mechanistic interactions between the polypeptide and ribosome tunnel. The mRNA sequencing of codons itself determines the rate of translation (internal mechanism). The chaperone function occurs as an external mechanism. These mechanisms all work to contribute coherently to the folding process. The crucial point is that they are all dependent upon momentary pauses in the translation process. We collectively define these linked phenomena and their rate regulation as “co-translational pausing.” The dependency of folding on these multiple translation processes has been defined as “co-translational folding”. They reveal the ribosome, among other things, to be not only a machine, but an independent computer-mediated manufacturing system.

Nucleotide, and eventually amino acid, sequencing are both physicodynamically indeterminate (inert). Cause-and-effect physical determinism, in other words, cannot account for the programming of sequence-dependent biofunction. Nucleotide sequencing and consequent amino acid sequencing are formally programmed in both the nascent protein and in the chaperones that help determine folding.  The external mechanisms involve trigger factors. Prokaryotes employ chaperones,  ribosome tunnel interactions, and binding protein factors. The internal mechanisms involve mRNA interactions, codon sequences and tRNA availability. Translational pausing (TP) allows for momentary pauses enabling preliminary folding of the nascent protein. The particular redundancy of codons provides temporal regulation of the co-translational folding process.

Translational pausing of nascent proteins is linked to the arrangement of nucleotides in the mRNA. Pausing can be induced by mRNA structure,  signal recognition particle (SRP) binding, mRNA binding proteins, rare codons, and anti-Shine-Dalgarno (aSD) codon sequences. A common thread exists between the mechanical execution of the folding process (exit tunnel/factors/chaperones) to internal mRNA processes involved in folding of the nascent protein. We argue that the causal relationship to co-translational folding is due to a prescribed arrangement of codons within the mRNA. We base this on the fact that for trigger factors, chaperones, and binding proteins are all related to the nascent amino acid chain sequence. Amino acid sequence, by necessary consequence, points to mRNA sequences. We further posit that the interactions with translation pausing can be traced back to the specific arrangements of redundant codons in the mRNA, and ultimately to the genome. We propose that the pausing functions are facilitated by first generating a pause state in the translation of the mRNA codons within the ribosome. This gives protein factors, trigger factors and other chaperones the necessary time to mechanically perform folding operations. If the pausing effect was solely related to the amino acid chain sequence, then replacing codons with synonymous codons should still produce the same folded amino acid chain with the same translation speed. However, substitution of rare codons with synonymous codons did produce a change in speed and conformation changes.

Redundancy permits a secondary, superimposed code
Redundancy in the primary genetic code allows for additional independent codes. Coupled with the appropriate interpreters and algorithmic processors, multiple dimensions of meaning and function can be instantiated into the same codon string. A secondary code is superimposed upon the primary codonic prescription of amino acid sequence in proteins. Dual interpretations enable the assembly of the protein's primary structure while enabling additional folding controls via pausing of the translation process. TP provides for temporal control of the translation process, allowing the nascent protein to fold appropriately as per its defined function. The functionality of codonic redundancy denies the ill-advised label of “degeneracy.” Multiple dimensions of independent coding by the same codon string have become apparent.

The ribosome can be thought of as an autonomous functional processor of data that it sees at its input. This data has been shown to be prescriptive information in the form of prescribed data, not just probabilistic combinatorial data. Choices must be made with intent to select the best branch of each bifurcation point, in advance of computational halting.

The arrangement of codons has embodied in it a prescribed sequential series of both amino acid code and time-based translation pausing code necessary for protein assembly and nascent pre-folding that defines protein functionality. The translation pausing coding schema follows distinct and consistent rules. These rules are logical and unambiguous. 72

The Wobble hypothesis points to an intelligent setup!
1. In translation, the wobble hypothesis is a set of four relationships. The first two bases in the codon create the coding specificity, for they form strong Watson-Crick base pairs and bond strongly to the anticodon of the tRNA.
2. When reading 5' to 3' the first nucleotide in the anticodon (which is on the tRNA and pairs with the last nucleotide of the codon on the mRNA) determines how many nucleotides the tRNA actually distinguishes.
If the first nucleotide in the anticodon is a C or an A, pairing is specific and acknowledges original Watson-Crick pairing, that is: only one specific codon can be paired to that tRNA. If the first nucleotide is U or G, the pairing is less specific and in fact, two bases can be interchangeably recognized by the tRNA. Inosine displays the true qualities of wobble, in that if that is the first nucleotide in the anticodon then any of three bases in the original codon can be matched with the tRNA.
3. Due to the specificity inherent in the first two nucleotides of the codon, if one amino acid is coded for by multiple anticodons and those anticodons differ in either the second or third position (first or second position in the codon) then a different tRNA is required for that anticodon.
4. The minimum requirement to satisfy all possible codons (61, excluding the three stop codons) is 32 tRNAs: 31 tRNAs for the amino acids plus one initiator tRNA. Beyond the obvious necessity of wobble (cells carry a limited set of tRNAs, and wobble provides the needed breadth of recognition), wobble base pairs have been shown to facilitate many biological functions. This has another AMAZING implication, which points to intelligent setup. The science paper "The genetic code is one in a million" concedes: If we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
5. All of this clearly points to intelligent DESIGN!
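The pairing relationships in points 1 and 2 can be condensed into a small lookup table; the sets below simply restate Crick's wobble rules as given above (RNA alphabet), with `WOBBLE` as an illustrative name:

```python
# The 5' (first) anticodon base determines how many codon third-position
# bases a single tRNA can read, per the wobble rules stated above.
WOBBLE = {
    "C": {"G"},               # specific Watson-Crick pairing: one codon ending
    "A": {"U"},               # specific Watson-Crick pairing: one codon ending
    "U": {"A", "G"},          # reads two codon endings
    "G": {"C", "U"},          # reads two codon endings
    "I": {"A", "C", "U"},     # inosine: true wobble, reads three codon endings
}

for base in "CAUGI":
    print(f"anticodon 5'-{base}: reads {len(WOBBLE[base])} codon ending(s)")
```

This is why the tRNA counts cited earlier (39 in E. coli, 35 in S. solfataricus, 45 in H. sapiens) can all fall well below 61, with 32 as the theoretical minimum.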

Several reading frames are explored
Susha Cheriyedath (2019): The genetic code can be read in multiple ways depending on where the reading starts. For example, if the base sequence is GGGAAACCC, reading could start from the first letter, G, and there will be 3 codons - GGG, AAA, and CCC. If reading starts at G in the second position, the string will have two codons - GGA and AAC. If reading starts at the third base G, 2 codons will again result - GAA and ACC. Thus, there are 3 ways of reading the code of every strand of genetic material. These different ways of reading a nucleotide sequence are known as reading frames. Each reading frame will produce a different sequence of amino acids and hence a different protein. Thus, in double-stranded DNA, there are 6 possible reading frames. 72a
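The frame counting in this passage can be reproduced in a few lines, using the GGGAAACCC example from the text; `frames` and `reverse_complement` are illustrative helper names, not library functions:

```python
def frames(seq):
    """Codons of the three forward reading frames of `seq`."""
    return [[seq[i:i + 3] for i in range(f, len(seq) - 2, 3)] for f in range(3)]

def reverse_complement(seq):
    """Opposite strand, read 5' to 3' (DNA alphabet)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

seq = "GGGAAACCC"
print(frames(seq))
# [['GGG', 'AAA', 'CCC'], ['GGA', 'AAC'], ['GAA', 'ACC']]  (frames 1-3)
print(frames(reverse_complement(seq)))   # frames 4-6, from the opposite strand
```

The three forward frames match the codon lists in the quote, and the reverse-complement strand supplies the other three, giving six frames in total for double-stranded DNA.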

Several proteins can be produced from the same mRNA strand. Ribosomal frameshifting, which makes this possible, is promoted by a so-called pseudoknot structure together with a specific site in the mRNA known as a slippery sequence. To shift frames, the ribosome slips back one base and then proceeds to read the mRNA transcript in a different frame.

The triplet code goes "triplet of triplets"
University of Utah (2017): Connecting amino acids to make proteins in ribosomes may in fact be influenced by sets of three triplets – a “triplet of triplets” that provide crucial context for the ribosome. Hughes and Fabienne Chevance worked with a gene in Salmonella that codes for the FlgM protein, which is a component of the bacteria's flagellum. A mutation that was defective in "reading" a specific codon in the flgM gene only affected FlgM protein production and not other genes that contained the same codon. "That got us thinking—why is that particular codon in the flgM gene affected and not the same codon in the other genes?" Hughes says. "That's when we started thinking about context." Changing the codon on one side of the defective codon resulted in a 10-fold increase in FlgM protein activity. Changing the codon on the other side resulted in a 20-fold decrease. And the two changes together produced a 35-fold increase. "We realized that these two codons, although separated by a codon, were talking to each other," Hughes says. "The effective code might be a triplet of triplets." 73

Hydrophobicity and the Genetic Code
JEAN LEHMANN (1998): Observations on the hydrophobicity of the molecular compounds of the genetic coding system have long suggested the non-random organization of the genetic code. This parameter was measured for the four nucleotides and the 20 coded amino acids and a strong correlation was found between average ranking measures of the anticodon doublet (i.e. the first two bases, from 3' to 5') and the corresponding amino acids, with a few exceptions. Moreover, the four nucleotides can be ordered as [U, C, G, A], from the most hydrophilic (U) to the most hydrophobic (A). It was thus proposed to set the genetic code table in this order. The second position of the anticodon shows the best correlation property: the most hydrophilic amino acids are coded by A (anticodon U) and the most hydrophobic by U (anticodon A), which are at opposite ends of the list. 74

Carl R. Woese (2000): Simply plotting these numbers on a codon table reveals the existence of a remarkable degree of order, much of which would be unexpected on the basis of amino acid properties as normally understood. For example, codons of the form NUN define a set of five amino acids, all of which have very similar polar requirements. Likewise, the set of amino acids defined by the NCN codons all have nearly the same unique polar requirement. The codon couplets CAY-CAR, AAY-AAR, and GAY-GAR each define a pair of amino acids (histidine-glutamine, asparagine-lysine, and aspartic acid-glutamic acid, respectively) that has a unique polar requirement. Only for the last of these (aspartic and glutamic acids), however, would the two amino acids be judged highly similar by more conventional criteria. Perhaps the most remarkable thing about polar requirement is that although it is only a unidimensional characterization of the amino acids, it still seems to capture the essence of the way in which amino acids, all of which are capable of reacting in varied ways with their surroundings, are related in the context of the genetic code. Also of note is the fact that the context in which polar requirement is defined, i.e., the interaction of amino acids with heterocyclic aromatic compounds in an aqueous environment, is more suggestive of a similarity in the way amino acids might interact with nucleic acids than of any similarity in the way they would behave in a proteinaceous environment. While it must be admitted that the evolutionary relationships among the AARSs bear some resemblance to the related amino acid order of the code, it seems unlikely that they are responsible for that order: the evolutionary wanderings of these enzymes alone simply could not produce a code so highly ordered, in both degree and kind, as we now know the genetic code to be. 75

Origin of the Genetic Code
Over the decades, several hypotheses have been elaborated, attempting to explain the origin of the genetic code. Carl Woese illustrated the problem already in 1969, not long after the code was deciphered:

Let us try to gain a feeling for the present status of the "coding problem" through an analogy. Suppose we were given a particular extract from cells and we determined it to have the following property: When nucleoside triphosphates are added to the extract along with poly A, then poly T is synthesized, but when the poly A is replaced by poly T, poly C, or poly G, successively, one observes production of poly A, poly G, or poly C, respectively. Given these and certain other experiments, one would soon arrive at the notion of an input-output "code" for this system of the simple composition 

(Figure: input-output table of codon assignments for the system described in Woese's analogy)

Where to proceed from here is immediately obvious in this simple example. Why are these particular input units associated with these particular output units? Following this line of questioning, we would sooner or later discover that base pairing (postulated in another universe by Drs. WATSON and CRICK) lies behind all. Viewing our knowledge of the genetic code in the light of this analogy, we see that what is now possible is the construction of an "input-output" table, the catalog of codon assignments, for the system, but what remains unknown is why this particular set of relationships exists. The essence of the genetic code lies in those "forces" or processes that cause UUU to be assigned to phenylalanine, or cause translation to occur in the way that it does. And we have yet to gain the slightest appreciation for what these are. As will be seen, if it is not already obvious, this aspect of the problem is inseparable from the problem of how this biological information processing system we find in the cell could ever have arisen in the first place. 76

Ádám Radványi (2018) mentions the stereochemical, the coding coenzyme handle, the coevolution, the four-column, the error-minimization, and the frozen-accident hypotheses. 77

Henri Grosjean (2016): The earliest amino acids used to synthesize ancestral polypeptides were found in the prebiotic soup and selected according to their specific interactions with the ancestral codons (the ‘stereochemical hypothesis’). These steps were subsequently expanded through the co-evolution with the invention of biosynthetic pathways for new amino acids (the ‘amino acid metabolism hypothesis’) and the emergence of the corresponding primordial aminoacyl-tRNA synthetases able to fix these new amino acids on appropriate proto-tRNAs (‘co-evolution with tRNA aminoacylation systems’). Constant refinement at both the replication and translation levels allows to progressively minimize the impact of coding errors and to increase the diversity and functionality of proteins that can be made with a larger amino acid alphabet (the ‘error minimizing code hypothesis’). Finally, the code can further evolve by reassignment of unused, temporarily ambiguous, or less used codons for other canonical or even totally new amino acids (the ‘codon capture theory’). Finally, early horizontal transfer and collective evolution of the code through different subspecies have been emphasized. In other words, the present-day genetic code did not necessarily result solely from divergent evolution, but also from collective evolution via the development of an innovation-sharing process that allows the emergence of a quasi-universal genetic code among populations of species ‘speaking the same language’. The take-home lesson of all the above information is that along the expansion of the genetic code, optimal stability of complementary codon-anticodon pairs appears to have been the main evolutionary force. 77

Grosjean repeatedly uses teleologically loaded terms. "Inventing," "constantly refining," "progressively minimizing the impact of coding errors," "reassigning," and "innovation-sharing" are all goal-oriented actions that could not be performed by inanimate molecules.

The stereochemical hypothesis holds that codon assignments are dictated by physico-chemical affinity between amino acids and their cognate codons (anticodons).
Soon after the genetic code was deciphered, this hypothesis was proposed by Carl Woese. He wrote (1967): I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon. 78 

Since there is no direct physical interaction between the codon/anticodon site and the site on the tRNA where the amino acid is attached, there can be no affinity between the two sites. David B. F. Johnson (2010): The stereochemical hypothesis postulates that the code developed from interactions between nucleotides and amino acids, yet supporting evidence in a biological context is lacking. Eugene V. Koonin (2017): Translation of the code does not involve direct recognition of the codons (or anticodons) by amino acids, which brings up a burning question: Why are the codon assignments what they are? In other words, why is it the case that, for instance, glycine is encoded by GGN codons rather than, say, CCN codons (the latter of which encode proline in the SGC)? The initial attempts for a direct experimental demonstration of the interaction between amino acids and the cognate codons or anticodons, by Woese and coworkers, were generally unconvincing, resulting in a long lull in the pursuit of the stereochemical account of the code. In our previous review of the code evolution, we presented several arguments to the effect that the statistical evidence from the aptamer experiments did not provide for direct conclusions about the stereochemical origin of the code. Notwithstanding the additional data and counterargument, our reasoning still appears to hold. 79

The coevolution hypothesis was first proposed by Tze-Fei Wong in 1975: The theory proposed that the structure of the genetic code was determined by the sequence of evolutionary emergence of new amino acids within the primordial biochemical system. 80

Carl Woese (1969): Knowing the codon to be a nucleotide triplet, we must ask why this number is three instead of some other number. In the past it has often been suggested that the number three derives from the fact that three is the minimum number necessary to supply sufficient information to encode the 20 amino acids. We shall point out the weakness of this sort of reasoning. If this were the explanation for size of the codon, one could question how the cell ever came to use 20 amino acids. 16 or less, say, would do nearly as well (particularly during the early phases of cellular evolution), and given an option as to codon size, it would seem easier to evolve a doublet code (which could handle up to 16 amino acids) than a triplet one. Further, were a doublet code to become established by evolution, it would hardly seem likely that it would be replaced by a triplet code (that in any case offered so little in the way of selective advantage) when the change-over process would be so disruptive, so lethal, to the cell. 76

Eugene V. Koonin (2017): Under this scenario, the code evolved from an ancestral version that included only simple amino acids produced abiogenically and then expanded to incorporate the more complex amino acids in parallel with the evolution of their respective biosynthetic pathways (i.e., there was code–pathway coevolution). The importance of biosynthetic pathways for code evolution is almost self-evident because amino acids could not be incorporated into the code unless they were available. Under this coevolution theory, the code evolved by subdivision: In the ancestral code, large blocks of codons encoded the same amino acid but were split to encode two amino acids upon the evolution of the respective metabolic pathways. 79

So that means that these ultra-complex biosynthetic pathways, catalyzing their reactions with complex enzymes and proteins, evolved before the machinery to make proteins was established? How would that have been possible? It is a classic chicken-and-egg problem. Another question is: Why do we not find proteins built from just 16 amino acids or fewer, encoded by codes assigning only 16 amino acids with two-nucleotide codons (4^2 = 16), rather than the three-nucleotide codons (4^3 = 64) that compose the current codon table? The fact that there are none is evidence against this hypothesis.

Several questions present themselves here, however. Why don't we find any protein sequences in the fossils of ancient organisms that contain only primary amino acids? The absence of such proteins is strong evidence against the evolutionary origin of the genetic code. Why are there no two- or four-letter codes? Why did the code not expand to quadruplets? With 4^4 = 256 possible codon quadruplets, the coding space would have increased greatly, and a much larger alphabet of possible proteins could have emerged.
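The coding-capacity comparison in these questions reduces to simple arithmetic: an n-base codon over the four-letter nucleotide alphabet yields 4^n distinct codons:

```python
# Codon space sizes for codon widths 1 through 4 (four-base alphabet).
for n in (1, 2, 3, 4):
    print(f"{n}-base codons: {4 ** n} possibilities")
# A doublet code gives only 16 slots (too few for 20 amino acids plus stops);
# the triplet code gives 64; a quadruplet code would give 256.
```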

Irene A. Chen (2010): The development of ribosomes that can read quadruplet codons could trigger a giant leap in the complexity of protein sequences. Although the practical exploration of sequence space is still limited to an infinitesimal fraction of the total volume, a full quadruplet genetic code would essentially double the information-theoretic content of proteins. Analogous studies modifying the alphabet size of ribozymes suggest that increasing the information-theoretic content of the genetic code could permit a corresponding increase in functionality. Recent work has overcome major inefficiencies in the translation of programmable quadruplet codons, paving the way for studies on fundamental questions about the origin of the genetic code and the characteristics of alternate protein “universes”. 81

T. Mukai et al. (2018): It has long been possible to transiently produce proteins bearing noncoding amino acids, but stabilizing an expanded genetic code for sustained function in vivo requires an integrated approach: creating recoded genomes and introducing new translation machinery that function together without compromising viability or clashing with endogenous pathways. 82

Here, Mukai and colleagues give away a trade secret: the genetic code and the translation machinery have to function together. That means they have to be conceptualized and implemented together. They operate on the basis of integrated complexity, which entails interdependence and irreducible complexity. The genetic code has no function unless there is machinery to decode and translate the information, and vice versa. Foresight is necessary to engineer a system where software and hardware join forces to produce a functional output.

Dirson Jian Li (2021): The expansion of the genetic code along the roadmap can be explained by the coevolution of tRNAs with aminoacyl-tRNA synthetases (aaRSs). A comprehensive study of the evolution of the genetic code inevitably involves the origins of tRNAs and aaRSs. The evolution of tRNAs played a significant role in fixing the number of canonical amino acids at 20. The primordial translation mechanism was invented during the evolution of the genetic code. The tRNAs and ribosomes were indispensable in the senior stage of the primordial translation mechanism, as well as in the modern translation mechanism. 82a

Li shows that translation forms an interdependent system, in which all players, hardware and software, had to be created together.

T. Mukai continues: A synthetic organism would have additional protein constituents, noncanonical amino acids (ncAAs) assigned to their own codon in the genetic code. This would be the dream of protein engineers; it would allow the design of proteins with novel properties based on the presence of new building blocks in addition to the 20 canonical amino acids. Progress along these lines is being made, as codons have been successfully reassigned to encode ncAAs in Escherichia coli.

Protein engineers would be required to design such proteins and to reassign codon triplet words, giving them a new meaning.

T. Mukai: Rewriting the genetic code involves (a) engineering orthogonal translational components, (b) engineering endogenous translational components, (c) metabolome engineering, (d) massive genome/chromosome engineering for modulating global codon usage, and (e) chemical synthesis or biosynthesis of ncAAs.

Mukai lists what would be required to rewrite the genetic code, and in doing so invokes the necessity of engineering five times. Evolution by natural selection does not engineer anything. Intelligence does.

1. Cells demonstrably operate based on engineering principles.
2. Engineering is always performed by intelligent designers.
3. Therefore, cells are most probably designed.


The error-minimization theory holds that selection to minimize the adverse effects of point mutations and translation errors was the principal factor in the code's evolution.
The theory supposes that genetic codes with high error rates would somehow become less error-prone over time. There is no evidence for this claim. Errors only lead to non-functional products, not to higher precision. One would also have to presuppose a gradual evolution of the ribosome, which itself would initially not have performed translation with today's precision and accuracy, nor with the error-minimization scheme of error checking and repair mechanisms, which obviously were also not there at the beginning and would have had to evolve gradually.

In 1965, Carl Woese wrote: Sonneborn has recently suggested an ingenious evolutionary mechanism whereby the codon catalogue can be highly ordered, but the order not derive from any sort of molecular interactions.
Ordering in this case would result from selection pressure for a code which is the least sensitive to the lethality introduced by mutation. A scheme such as Sonneborn's would involve countless evolutionary trials and errors, and I feel that the possibilities for evolving into "blind alleys" (forms of the code having a far lower degree of order) so far outnumber the possibilities for evolving an optimal code (the one observed) that the latter could never have evolved in this way. However, it must be admitted that without a proper analysis of the Sonneborn model, such as a computer study, this counterargument remains feeble. Thus the question is not completely resolved at this time. 83

Almost 40 years later, in 2003, Stephen J. Freeland admitted: There remain ill-explored facets of the ‘error minimizing’ code hypothesis, however, including the mechanism and pathway by which an adaptive pattern of codon assignments emerged, the extent to which natural selection created synonym redundancy, its role in shaping the amino acid and nucleotide languages, and even the correct interpretation of the adaptive codon assignment pattern 84

J.Monod (1972): Indeed, mutations are known which, impairing the structure of certain components of the translation mechanism, thereby modify the interpretation of certain triplets and thus (with regard to the convention in force) commit errors which are exceedingly prejudicial to the organism. 85

If seemingly "small" translation errors lead to catastrophic outcomes, how much worse would the situation be if the system were not yet fully developed, not yet operating with exquisite precision and accuracy, and without all error-checking and repair mechanisms fully implemented?

Warren Shipton and David W. Swift expand further on the topic.

No hope to find a naturalistic explanation for the origin of the genetic code
Carl Woese (1969): In conclusion the evolution of the genetic code is the major remaining problem in the coding field. This problem is also the central one in the evolution of the first "modern" cell. At present we have very little concept of what the stages and events in this most intricate process were. Understanding in this area is probably more impeded by this lack of a concept than it is by a lack of facts. Barring miracles, the code's evolution should be a gradual step-wise process, utilizing and conforming to simple interactions between nucleic acids and polypeptides and/or their derivatives, and so readily understandable. 76

J.Monod (1972):The code is meaningless unless translated. The modern cell's translating machinery consists of at least fifty macromolecular components which are themselves coded in DNA: the code cannot be translated otherwise than by products of translation. It is the modern expression of omne vivum ex ovo. When and how did this circle become closed? It is exceedingly difficult to imagine. 85

John Maynard Smith (1997) described the origin of the code as the most perplexing problem in evolutionary biology. With collaborator Eörs Szathmáry he writes: “The existing translational machinery is at the same time so complex, so universal, and so essential that it is hard to see how it could have come into existence, or how life could have existed without it.”

To get some idea of why the code is such an enigma, consider whether there is anything special about the numbers involved. Why does life use twenty amino acids and four nucleotide bases? It would be far simpler to employ, say, sixteen amino acids and package the four bases into doublets rather than triplets. Easier still would be to have just two bases and use a binary code, like a computer. If a simpler system had evolved, it is hard to see how the more complicated triplet code would ever take over. The answer could be a case of “It was a good idea at the time.” A good idea of whom? If the code evolved at a very early stage in the history of life, perhaps even during its prebiotic phase, the numbers four and twenty may have been the best way to go for chemical reasons relevant at that stage. Life simply got stuck with these numbers thereafter, their original purpose lost. Or perhaps the use of four and twenty is the optimum way to do it. There is an advantage in life’s employing many varieties of amino acid because they can be strung together in more ways to offer a wider selection of proteins. But there is also a price: with increasing numbers of amino acids, the risk of translation errors grows. With too many amino acids around, there would be a greater likelihood that the wrong one would be hooked onto the protein chain. So maybe twenty is a good compromise. Do random chemical reactions have the knowledge to arrive at an optimal conclusion or a “good compromise”?

An even tougher problem concerns the coding assignments—i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids don’t recognize each other directly but have to deal via chemical intermediaries, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance. 86
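The numbers game behind the doublet-versus-triplet question can be checked directly: a doublet code over four bases yields only 4^2 = 16 codons, too few for 20 amino acids, while a binary code would need five-symbol codons. A minimal sketch (the helper function is ours, purely illustrative):

```python
def min_codon_length(n_bases: int, n_amino_acids: int = 20) -> int:
    """Smallest codon length L such that n_bases**L >= n_amino_acids."""
    length = 1
    while n_bases ** length < n_amino_acids:
        length += 1
    return length

print(4 ** 2)               # 16: a doublet code falls short of 20 amino acids
print(min_codon_length(4))  # 3: triplets are the shortest sufficient codons
print(min_codon_length(2))  # 5: a binary code would need 5-symbol codons
```

This only shows that triplets are the minimal sufficient choice given four bases and twenty amino acids; it says nothing about why those two alphabet sizes were fixed in the first place, which is the question the text raises.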

Victor A. Gusev Arzamastsev (1997): “the situation when Nature invented the DNA code surprisingly resembles the designing of a computer by man. If a computer were designed today, binary notation would hardly be used. Binary notation was chosen only at the first stage, in order to simplify as much as possible the construction of the decoding machine. But now, it is too late to correct this mistake”. 87

Yuri I. Wolf (2007): The origin of the translation system is, arguably, the central and the hardest problem in the study of the origin of life, and one of the hardest in all evolutionary biology. The problem has a clear catch-22 aspect: high translation fidelity hardly can be achieved without a complex, highly evolved set of RNAs and proteins but an elaborate protein machinery could not evolve without an accurate translation system. The origin of the genetic code and whether it evolved on the basis of a stereochemical correspondence between amino acids and their cognate codons (or anticodons), through selectional optimization of the code vocabulary, as a "frozen accident" or via a combination of all these routes is another wide open problem despite extensive theoretical and experimental studies. 88

Eugene V. Koonin (2012): In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made. Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology. 89

Eugene V. Koonin (2017): Certainly, there have been many developments with regard to the quantification of code properties, but the fundamental framework of evolutionary ideas laid out in the classic papers has not been overhauled. Unfortunately, this is due less to the successes of the early research than to the limited and questionable progress achieved over the next half-century. Notwithstanding the complete transformation of biology that occurred over these decades, we do not seem to be much closer to the solution. 79

Marcello Barbieri (2018): "...there is no deterministic link between codons and amino acids because any codon can be associated with any amino acid.  This means that the rules of the genetic code do not descend from chemical necessity and in this sense they are arbitrary." "...we have the experimental evidence that the genetic code is a real code, a code that is compatible with the laws of physics and chemistry but is not dictated by them." 90

Julian Mejía (2018): Due to the complexity of such an event, it is highly unlikely that this information could have been generated randomly. A number of theories have attempted to address this problem by considering the origin of the association between amino acids and their cognate codons or anticodons. There is no physical-chemical description of how the specificity of such an association relates to the origin of life, in particular to enzyme-less reproduction, proliferation, and evolution. Carl Woese recognized this early on and emphasized the problem, still unresolved, of uncovering the basis of the specificity between amino acids and codons in the genetic code. Carl Woese (1967), reproduced in the seminal paper of Yarus et al. cited frequently above: “I am particularly struck by the difficulty of getting [the genetic code] started unless there is some basis in the specificity of interaction between nucleic acids and amino acids or polypeptide to build upon.” 91

Charles W Carter (2018): The hypothetical RNA World does not furnish an adequate basis for explaining how this system came into being. The preservation of encoded information processing during the historically necessary transition from any ribozymally operated code to the ancestral aaRS enzymes of molecular biology appears to be impossible, rendering the notion of an RNA Coding World scientifically superfluous. Instantiation of functional reflexivity in the dynamic processes of real-world molecular interactions demanded of nature that it fall upon, or we might say “discover”, a computational “strange loop” (Hofstadter, 1979): a self-amplifying set of nanoscopic “rules” for the construction of the pattern that we humans recognize as “coding relationships” between the sequences of two types of macromolecular polymers. However, molecules are innately oblivious to such abstractions. Many relevant details of the basic steps of code evolution cannot yet be outlined. 92

Florian Kaiser (2020): One of the most profound open questions in biology is how the genetic code was established. The emergence of this self-referencing system poses a chicken-or-egg dilemma and its origin is still heavily debated 93

Josef Berger (1976): The complexity of the functional correlation between recent nucleic acids and proteins can e.g. give rise to the assumption that the genetic code (and life) could not originate on the Earth. It was Portelli (1975) who published the hypothesis that the genetic code could not originate during the history of the Earth. In his opinion the recent genetic code represents the informational message transmitted by living systems of the previous eyrie of the Universe. Our last assumption is also in agreement with Crick's (1968) hypothesis that the triplets are formed before the creation of life. 94a

What creates codes, algorithms, and translation systems?
David L. Abel (2009): Computational methods often employ genetic algorithms (GA’s). The GA search technique begins with a large random pool of representations of “potential solutions.” GA’s are not dealing with physicodynamic cause-and-effect chains. A representation of any kind cannot be reduced to inanimate physicality. Second, “potential solutions” are formal, not merely physical entities. The optimized solution was purposefully pursued at each iteration. The overall process was entirely goal-directed (formal). Real evolution has no goal. Fourth, a formal fitness function is used to define and measure the fittest solutions thus far to a certain formal problem.  Genetic algorithms are no model at all of natural process. GA’s are nothing more than multiple layers of abstract conceptual engineering. Like language, we may start with a random phase space of alphabetical symbols. But no meaning or function results without deliberate and purposeful selection of letters out of that random phase space. No abiotic primordial physicodynamic environment could have exercised such programming prowess. Neither physics nor chemistry can dictate formal optimization, any more than physicality itself generates the formal study of physicality. Human epistemological pursuits are formal enterprises of agent minds. Natural process genetic algorithms have not been observed to exist. The genetic algorithms of living organisms are just metaphysically presupposed to have originated through natural process. But genetic algorithms cannot be used to model spontaneous life origin through natural process because genetic algorithms are formal. 94

S.C. Meyer, P.Nelson (2011): Persistent lack of progress on a scientific problem is exactly what one should expect when a causal puzzle has been fundamentally misconceived, or when the toolkit employed in causal explanation is too limited. Our knowledge of cause and effect, long understood to be the basis of all scientific inference and explanation, affirms that true codes—and the semantic relationships they embody—always arise from intelligent causes. If the genetic code as an effect gives evidence of irreducible semantic or functional mappings—i.e., if what we see operating in cells is not like a code, but genuinely is a code—then we should seek its explanation in the only cause “true and sufficient” to such effects: intelligence. 95
 
M. Eberlin (2019): The genetic information and the genetic code together include features like semantic logic and the meaningful ordering of characters—things not dictated by any laws of physics or chemistry. 96

Order vs. Organization
David L. Abel (2009):  Organization ≠ order. Disorganization ≠ disorder. Spontaneous bona fide self-organization has never been observed. “Self-organization” is logically a nonsense term. Inanimate objects cannot organize themselves into integrated, cooperative, holistic schemes. Schemes are formal, not physical. To organize requires choice contingency, not just chance contingency and law-like necessity. Sloppy definitions lead to fallacious inferences, especially to category errors. Organization requires 1) decision nodes, 2) steering toward a goal of formal function, 3) algorithmic optimization, 4) selective switch-setting to achieve integration of a circuit, 5) choice with intent. The only entity that logically could possibly be considered to organize itself is an agent. But not even an agent self-organizes. Agents organize things and events in their lives. They do not organize their own molecular biology, cellular structure, organs and organ systems. Agents do not organize their own being. Agents do not create themselves. They merely make purposeful choices with the brains and minds with which they find themselves. Artificial intelligence does not organize itself either. It is invariably programmed by agents to respond in certain ways to various environmental challenges in the artificial life data base. Thus the reality of self-organization is highly suspect on logical and analytic grounds even before facing the absence of empirical evidence of any spontaneous formal self-organization. Certainly no prediction of bona fide self-organization from unaided physicodynamics has ever been fulfilled. Of course, if we fail through sloppy definitions to discern between self-ordering phenomena and organization, we will think that evidence of self-organization is abundant. We will point to hundreds of peer-reviewed papers with “self-organization” in their titles. 
But when all of these papers are carefully critiqued with a proper scientific skepticism, our embarrassment only grows with each exposure of the blatant artificial selection that was incorporated into each paper’s experimental design. Such investigator involvement is usually readily apparent right within Materials and Methods of the paper. 

Formalism vs. Physicality
When it comes to life-origin studies, we have to address how symbol selection in the genetic material symbol system came about objectively in nature. Life-origin science must address the derivation of objective organization and control in the first cells. How did prescriptive information and control arise out of the chaos of a primordial slime, vent interfaces in the ocean floor, or mere tide pools? We have no evidence whatsoever of formal organization arising spontaneously out of physical chaos or self-ordering phenomena. Chance and necessity has not been shown to generate the choice contingency required to program computational halting, algorithmic optimization, or sophisticated function. If chance and necessity, order and complexity cannot produce formal function, what does? Selection for potential utility is what optimizes algorithms, not randomness, and not fixed law. Utility lies in a third dimension imperceptible to chance and necessity. What provides this third dimension is when each token in a linear digital programming string is arbitrarily (non physicodynamically, formally) selected for potential function. The string becomes a cybernetic program capable of computation only when signs/symbols/tokens are arbitrarily chosen from an alphabet to represent utilitarian configurable switch settings. The choice represented by that symbol can then be instantiated into physicality using a dynamically inert configurable switch setting. At the moment the switch knob seen in Figure 4 is pushed, nonphysical formalism is instantiated into physicality. Then and only then does algorithmic programming become a physical reality. Once instantiated, we easily forget the requirement of instantiation of formal instructions and controls into the physical system to achieve engineering function. It was the formal voluntary pushing of the configurable switch knob in a certain direction that alone organized physicality.  
The selection of any combination of multiple switch settings to achieve degrees of organization is called programming. But purposefully flipping the very first binary configurable switch is the foundation and first step of any form of programming. Programming requires choice contingency. No known natural process spontaneously compresses an informational message string. Any type of measurement is a formal function that cannot be reduced to physicodynamics. We do not plug initial conditions into the formal equations known as “the laws of physics.” We plug symbolic representations of those initial conditions into the laws of physics. Then we do formal mathematical manipulations of these equations to reliably predict physicodynamic interactions and outcomes. In this sense formalism governs physicality. The role that mathematics plays in physics is alone sufficient to argue for formalism’s transcendence over physicality. Just as it takes an additional dimension to measure the algorithmic compressibility of a sequence, it takes still another dimension to measure the formal utility of any sequence. Formalisms are abstract, conceptual, representational, algorithmic, choice-contingent, non physical activities of mind. Formalisms typically involve steering toward utility. Formalisms employ controls rather than mere physicodynamic constraints. Formalisms require obedience to arbitrarily prescribed rules rather than forced laws. Physicodynamics cannot visualize, let alone quantify formal utility. Formalisms cannot be produced by chance or necessity. Language, for example, uses arbitrary symbol selections from an alphabet of options. Logic theory uses rules, not laws, to judge inferences. Programming requires choice contingency at each decision node. Each logic gate and configurable switch must be deliberately set a certain way to achieve potential (not-yet-existent) computational halting. These are all formal functions, not spontaneous physicodynamic events.
They are just as formal as mathematics. Decision nodes, logic gates, and configurable switches cannot be set by chance and/or necessity if sophisticated formal utility is expected to arise. They must be set with the intent to control and to program computational halting. Acknowledgment of the reality of formal controls was growing within the molecular biological community even prior to the now weekly new discoveries of extraordinarily sophisticated cybernetic mechanisms in cellular physiology. 97
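Abel's point about decision nodes can be put in numbers: n independent binary switches define 2^n configurations, and undirected chance samples them uniformly, so the probability of hitting one particular functional setting collapses exponentially. A minimal sketch (the switch counts are arbitrary illustrations, not figures from Abel):

```python
# Probability that pure chance sets n binary configurable switches
# to one specific target configuration in a single trial
def p_random_hit(n_switches: int) -> float:
    return 2.0 ** -n_switches

print(p_random_hit(10))  # ~9.8e-04
print(p_random_hit(40))  # ~9.1e-13: already vanishing for 40 switches
```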

These are Abel's key terms: intelligence can utilize search techniques; it can instantiate purposefully pursued, goal-directed processes; it can define and measure the fittest solutions; it can implement abstract conceptual engineering; it can deliberately and purposefully select letters; it can set and select switches to achieve the integration of a circuit; and it can make choices with intent. It can formally and mathematically manipulate equations to reliably predict physicodynamic interactions and outcomes. The mind can instantiate abstract, conceptual, representational, algorithmic, choice-contingent, non-physical affairs. A formalized setup requires obedience to arbitrarily prescribed rules rather than forced laws. Intelligence can deliberately set switches a certain way to achieve potential (not-yet-existent) computational halting. Minds can implement what spontaneous physicodynamic events cannot.

Neither chance nor laws of physics can do any of this. An intelligent mind is required to instantiate a system, where information, data based on digital codes, and data transmission can direct the assembly process of complex specified machines and factories for defined purposes.  

Open questions: 98
1. Did the dialects, i.e., the mitochondrial version, in which the UGA codon (the stop codon in the universal version) codifies tryptophan and the AUA codon (isoleucine in the universal version) codifies methionine, and Candida cylindrica (a fungus), in which the CUG codon (leucine in the universal version) codifies serine, appear accidentally or as a result of some kind of selection process?
2. Why is the genetic code represented by the four bases A, T(U), G, and C? 
3. Why does the genetic code have a triplet structure?
4. Why is the genetic code not overlapping? That is, why does the translation apparatus of a cell, which transcribes information, read it in discrete units of three bases, and not of one?
5. Why does the degeneracy number of the code vary from one to six for various amino acids?
6. Is the existing distribution of codon degeneracy for particular amino acids accidental, or the result of some kind of selection process?
7. Why were only 20 canonical amino acids selected for protein synthesis?
8. Why should there be a genetic code at all?
9. Why should there be the emergence of a stereochemical association of a specific, arbitrary codon-anticodon set?
10. Aminoacyl-tRNA synthetases recognize the correct tRNA. How did that recognition emerge, and why?
11. Is this very choice of amino acids accidental, or the result of some kind of selection process?
12. Why don’t we find any protein sequences in the fossils of ancient organisms, which only have primary amino acids?
13. Why didn’t the genetic code keep on expanding to cover more than 20 amino acids? Why not 39, 48 or 62?
14. Why did codon triplets evolve, and why not quadruplets? With 4^4 = 256 possible codon quadruplets, the coding space could have increased, and thus a much larger universe of possible proteins could have been made possible.
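Questions 5 and 6 refer to the degeneracy distribution of the standard code, which can be tallied directly; the class membership below follows the standard codon table.

```python
# Standard genetic code: synonymous-codon count -> amino acids in that class
degeneracy = {
    1: ["Met", "Trp"],
    2: ["Phe", "Tyr", "His", "Gln", "Asn", "Lys", "Asp", "Glu", "Cys"],
    3: ["Ile"],
    4: ["Val", "Pro", "Thr", "Ala", "Gly"],
    6: ["Leu", "Ser", "Arg"],
}

amino_acids = sum(len(aas) for aas in degeneracy.values())
sense_codons = sum(k * len(aas) for k, aas in degeneracy.items())

print(amino_acids)       # 20 canonical amino acids
print(sense_codons)      # 61 sense codons
print(sense_codons + 3)  # plus 3 stop codons = 64 = 4^3
```

The tally closes exactly (61 sense codons plus 3 stop codons fill the 64-codon space), which is precisely the distribution, varying from one to six synonymous codons per amino acid, that questions 5 and 6 ask to be explained.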

72. David L. Abel: Redundancy of the genetic code enables translational pausing (May 20, 2014)
72a. Susha Cheriyedath: START and STOP Codons (February 26, 2019)
73. University of Utah: Reading the genetic code depends on context (April 17, 2017)
74. J. Lehmann: Physico-chemical constraints connected with the coding properties of the genetic system (January 21, 2000)
75. Carl R. Woese: Aminoacyl-tRNA Synthetases, the Genetic Code, and the Evolutionary Process (March 2000)
76. Carl R. Woese: The Biological Significance of the Genetic Code (1969)
77. Ádám Radványi: The evolution of the genetic code: Impasses and challenges (February 2018)
77a. Henri Grosjean: An integrated, structure- and energy-based view of the genetic code (September 30, 2016)
78. C. R. Woese: The molecular basis for the genetic code (1967)
79. Eugene V. Koonin: Origin and Evolution of the Universal Genetic Code (2017)
80. Tze-Fei Wong: A Co-Evolution Theory of the Genetic Code (1975)
81. Irene A. Chen: An expanded genetic code could address fundamental questions about algorithmic information, biological function, and the origins of life (July 20, 2010)
82. Takahito Mukai: Rewriting the Genetic Code (July 11, 2017)
82a. Dirson Jian Li: Formation of the Codon Degeneracy during Interdependent Development between Metabolism and Replication (December 20, 2021)
83. C. R. Woese: Order in the genetic code (July 1965)
84. Stephen J. Freeland: The Case for an Error Minimizing Standard Genetic Code (October 2003)
85. J. Monod: Chance and Necessity: An Essay on the Natural Philosophy of Modern Biology (September 12, 1972)
86. John Maynard Smith: The Major Transitions in Evolution (1997)
87. Victor A. Gusev Arzamastsev: Genetic code: Lucky chance or fundamental law of nature? (1997)
88. Yuri I. Wolf: On the origin of the translation system and the genetic code in the RNA world by means of natural selection, exaptation, and subfunctionalization (May 31, 2007)
89. Eugene V. Koonin: Origin and evolution of the genetic code: the universal enigma (March 5, 2012)
90. Marcello Barbieri: Code Biology (February 2018)
91. Julian Mejía: Origin of Information Encoding in Nucleic Acids through a Dissipation-Replication Relation (April 18, 2018)
92. Charles W. Carter: Insuperable problems of the genetic code initially emerging in an RNA World (February 2018)
93. Florian Kaiser: The structural basis of the genetic code: amino acid recognition by aminoacyl-tRNA synthetases (July 28, 2020)
94. David L. Abel: The Capabilities of Chaos and Complexity (January 9, 2009)
94a. Josef Berger: The Genetic Code and the Origin of Life (1976)
95. S. C. Meyer, P. A. Nelson: Can the Origin of the Genetic Code Be Explained by Direct RNA Templating? (August 24, 2011)
96. M. Eberlin: Foresight (2019)
97. David L. Abel: The Capabilities of Chaos and Complexity (January 9, 2009)
98. Victor A. Gusev: Genetic code: Lucky chance or fundamental law of nature? (December 2004)



Last edited by Otangelo on Fri Sep 09, 2022 9:28 am; edited 8 times in total


Chapter 8

The origin of Viruses
Peter Pollard (2014): Viruses are vital to our very existence. No one seems to stick up for the good guys that keep ecosystems diverse and balanced. Phage species richness is immense with a predicted 100 million phage species.
The sheer number of these good viruses is astonishing. Their concentration in a productive lake or river is often 100 million per millilitre – that’s more than four times the population of Australia squeezed into a quarter of a teaspoon of water. Globally the oceans contain 10^30 viruses.

Curtis A. Suttle (2005): If the viruses were stretched end to end they would span ~10 million light years, ~100 times the distance across our own galaxy. The carbon they contain is equivalent to that in ~75 million blue whales (~10% carbon, by weight). This makes viruses the most abundant biological entity in the water column of the world’s oceans, and the second largest component of biomass after prokaryotes. 29
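Suttle's end-to-end figure is easy to sanity-check. Assuming a typical virion length of ~100 nm (our assumption; actual virion sizes vary widely), 10^30 viruses stretch to roughly 10^23 m:

```python
# Back-of-envelope check of Suttle's ~10 million light year figure
VIRION_LENGTH_M = 100e-9          # assumed ~100 nm per virion (typical phage scale)
N_VIRUSES = 1e30                  # Suttle's estimate for the world's oceans
METERS_PER_LIGHT_YEAR = 9.461e15

total_length_m = N_VIRUSES * VIRION_LENGTH_M
light_years = total_length_m / METERS_PER_LIGHT_YEAR
print(f"{light_years:.1e} light years")  # ~1.1e+07, i.e. ~10 million
```

Under that assumed virion length, the arithmetic lands on the order of 10 million light years, matching the quoted estimate.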

Pollard continues: What are viruses? Viruses are not living organisms. They are simply bits of genetic material (DNA or RNA) covered in protein, that behave like parasites. They attach to their target cell (the host), inject their genetic material, and replicate themselves using the host cells’ metabolic pathways, as you can see in the figure below. Then the new viruses break out of the cell — the cell explodes (lyses), releasing hundreds of viruses. Viruses are very picky about who they will infect. Each viral type has evolved to infect only one host species. Viruses that infect bacteria dominate our world. A virus that infects one species of bacteria won’t infect another bacterial species, and definitely can’t infect you. We have our own suite of a couple of dozen viral types that cause us disease and death. Algae and plants are primary producers, the foundation of the world’s ecosystems. Using sunlight they turn raw elements like carbon dioxide, nitrogen and phosphorus into organic matter. In turn, they are eaten by herbivores, which are in turn eaten by other animals, and so on. Energy and nutrients are passed on up the food chain until animals die. But what ensures that the primary producers get the raw elements they need to get started? The answer hinges on the viruses’ relationship with bacteria. A virus doesn’t go hunting for its prey. It relies on randomly encountering a host — it’s a numbers game. When the host, such as a bacterial cell, grows rapidly, that number increases. The more of a bacterial species there is, the more likely it will come into contact with its viral nemesis — “killing the winner”. This means that no single bacterial species dominates an ecosystem for very long. In freshwater, for example, you see very high rates of bacterial growth. You would think this high bacterial production would become part of the food chain and end up as fish food. But that is rarely the case.
We now realise that the bacteria actually disappear from these ecosystems. So where do the bacteria go? The answer lies in the interaction between bacteria and viruses. When a virus bursts open a bacterial cell its “guts” are spewed back into the water along with all the new viruses. The cell contents then become food for the neighboring bacteria, thereby stimulating their growth. These bacteria increase in numbers and upon coming into contact with their viral nemesis they, too, become infected and lyse. This process of viral infection, lysis, and nutrient release occur over and over again. Bacteria are, in effect, cannibalizing each other with the help of their associated viruses. Very quickly, the elements that support the food web are put back into circulation with the help of viruses.

It’s the combination of high bacterial growth and viral infection that keeps ecosystems functioning. Thus viruses are a critical part of inorganic nutrient recycling. So while they are tiny and seem insignificant, viruses actually play an essential global role in the recycling of nutrients through food webs.
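The boom-and-bust "killing the winner" cycle described above can be sketched as a toy predator-prey simulation. The model below is a minimal Lotka-Volterra-style sketch; all rate constants (growth, adsorption, burst size, decay) are illustrative assumptions of mine, not numbers taken from the quoted text.

```python
# Toy "kill the winner" model: a bacterial bloom is cut down once its
# phage, spreading by random mass-action encounters, catches up.
# All rate constants are illustrative assumptions, not measured values.

def simulate(hours=30.0, dt=0.001):
    r = 1.0        # bacterial growth rate (per hour)
    a = 1e-9       # adsorption rate (per virion per cell, ml/hour)
    burst = 50     # new virions released per lysed cell
    decay = 0.1    # viral decay rate (per hour)
    B, V = 1e5, 1e4          # initial cells/ml and virions/ml
    history = []
    for _ in range(int(hours / dt)):
        infections = a * B * V           # encounters are a numbers game
        dB = r * B - infections          # growth minus lysis
        dV = burst * infections - decay * V
        B += dB * dt
        V += dV * dt
        history.append((B, V))
    return history

hist = simulate()
peak = max(b for b, v in hist)
print(f"peak bacteria: {peak:.2e}/ml, final: {hist[-1][0]:.2e}/ml")
```

Running it shows the quoted dynamic: the winning strain blooms, its phage multiplies on the dense host population, and the bloom collapses, so no single species dominates for long.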

The origin of viruses, essential agents for life, is another mystery besides the origin of life
This is a major conundrum. Researchers struggle to find a coherent narrative to explain the fact that life depends on viruses and vice versa. If we want to elucidate how life began, it is important to take a closer look at the origin of viruses as well. Viruses are rarely in the spotlight when it comes to elucidating biological origins. Unjustifiably so, since they are essential for life.

What came first, cells or viruses? 
This is a classical chicken & egg problem: Gladys Kostyrka (2016): Cells depend on viruses, but viruses depend on cells as a host for replication. What came first? How could viruses play critical roles in the OL [origin of life] if life relies on cellular organization and if viruses are defined as parasites of cells? In other words, how could viruses play a role in the emergence of cellular life if the existence of cells is a prerequisite for the existence of viruses? 16

Colin Hill (2021): From the giant, ameba-infecting marine viruses to the tiny Porcine circovirus harboring only two genes, viruses and their cellular hosts are ecologically and evolutionarily intertwined. 28

C.Arnold (2014): Koonin believes in a Virus World. According to him, the ancestors of modern viruses emerged when all life was still a floating stew of genetic information, amino acids, and lipids. The earliest pieces of genetic material were, according to him, likely short pieces of RNA with relatively few genes that often parasitized other floating bits of genetic material to make copies of themselves. These naked pieces of genetic information swapped genes at a primeval genetic flea market, appropriating hand-me-downs from other elements and discarding genes that were no longer needed.

This appears to be one of the many attempts to make sense of the origin of viruses, but it is not convincing. Recent evidence puts a question mark on such hypotheses. C.Arnold continues: The largest virus ever discovered, pithovirus is more massive than even some bacteria. Most viruses copy themselves by hijacking their host's molecular machinery. But pithovirus is much more independent, possessing some replication machinery of its own. Pithovirus's relatively large number of genes also differentiated it from other viruses, which are often genetically simple—the smallest have a mere four genes. Pithovirus has around 500 genes, and some are used for complex tasks such as making proteins and repairing and replicating DNA. "It was so different from what we were taught about viruses," Abergel said. The stunning find, first revealed in 2014, isn't just expanding scientists' notions of what a virus can be. It is reframing the debate over the origins of life. The ancestors of modern viruses are far from being evolutionary laggards.

The origin of their complexity demands as much an explanation as the origin of the first cells. C.Arnold again: The predominant theories for the origin of viruses propose that they emerged either from a type of degenerate cell that had lost the ability to replicate on its own or from genes that had escaped their cellular confines. Giant viruses, first described in 2003, began to change that line of thinking for some scientists. These novel entities represented an entirely new kind of virus. Indeed, the first specimen—isolated from an amoeba living in a cooling tower in England—was so odd that it took scientists years to understand what they had. The scientists named it mimivirus, for MImicking MIcrobe virus, because amoebae appear to mistake it for their typical bacterial meal. Then, with a staggeringly high number of genes, approximately 2,500, pandoravirus seemed to herald an entirely new class of viral life. "More than 90 percent of its genes do not resemble anything else found on Earth."

If viruses developed from cells, they should be less diverse because cells would contain the entire range of genes available to viruses. Viruses are also more diverse when it comes to reproduction. "Cells only have two main ways of replicating their DNA. Viruses, on the other hand, have many more methods at their disposal." 30

Hugh Ross (2020): Without viruses, bacteria would multiply and, within a relatively short time period, occupy every niche and cranny on Earth’s surface. The planet would become a giant bacterial slime ball. Those sextillions of bacteria would consume all the resources essential for life and die. Viruses keep Earth’s bacterial population in check. They break up and kill bacteria at the just-right rates and in the just-right locations so as to maintain a population and diversity of bacteria that is optimal for both the bacteria and all the other life forms. It is important to note that all multicellular life depends on bacteria being present at the optimal population level and optimal diversity. We wouldn’t be here without viruses! Viruses also play a crucial role in Earth’s carbon cycle. They and the bacterial fragments they create are carbonaceous substances. Through their role in precipitation, they collect as vast carbonaceous sheets on the surfaces of the world’s oceans. These sheets or mats of viruses and bacterial fragments sink slowly and eventually land on the ocean floors. As they are sinking they provide important nutrients for deep-sea and benthic (bottom-dwelling) life. Plate tectonics drive much of the viral and bacterial fragments into Earth’s crust and mantle where some of that carbonaceous material is returned to the atmosphere through volcanic eruptions. 20

Virus-archaea interactions play a central role in global biogeochemical cycles. Ramesh K Goel (2021): Viruses play vital biogeochemical and ecological roles by (a) expressing auxiliary metabolic genes during infection, (b) enhancing the lateral transfer of host genes, and (c) inducing host mortality. Even in harsh and extreme environments, viruses are major players in carbon and nutrient recycling from organic matter. 21

Eugene V. Koonin (2020): Lytic infections (involving the replication of a viral genome) of cellular organisms, primarily bacteria, by viruses play a central role in the biological matter turnover in the biosphere. Considering the enormous abundance and diversity of viruses and other mobile genetic elements (MGEs), and the ubiquitous interactions between MGEs and cellular hosts, a thorough investigation of the evolutionary relationships among viruses and MGEs is essential to advance our understanding of the evolution of life. 14

Eugene V. Koonin (2013): Virus killing of marine bacteria and protists largely determines the composition of the biota, provides a major source of organic matter for consumption by heterotrophic organisms, and also defines the formation of marine sediments through the deposition of skeletons of killed plankton organisms such as foraminifera and diatoms. 18

Rachel Nuwer (2020): If all viruses suddenly disappeared, the world would be a wonderful place for about a day and a half, and then we’d all die – that’s the bottom line. The vast majority of viruses are not pathogenic to humans, and many play integral roles in propping up ecosystems. Others maintain the health of individual organisms – everything from fungi and plants to insects and humans. “We live in a balance, in a perfect equilibrium.” In 2018, for example, two research teams independently made a fascinating discovery. A gene of viral origin encodes for a protein that plays a key role in long-term memory formation by moving information between cells in the nervous system. 2

P. Forterre (2008): Historically, three hypotheses have been proposed to explain the origin of viruses: (1) they originated in a precellular world (‘the virus-first hypothesis’); (2) they originated by reductive evolution from parasitic cells (‘the reduction hypothesis’); and (3) they originated from fragments of cellular genetic material that escaped from cell control (‘the escape hypothesis’). All these hypotheses had specific drawbacks. The virus-first hypothesis was usually rejected firsthand since all known viruses require a cellular host. The reduction hypothesis was difficult to reconcile with the observation that the most reduced cellular parasites in the three domains of life, such as Mycoplasma in Bacteria, Microsporidia in Eukarya, or Nanoarchaea in Archaea, do not look like intermediate forms between viruses and cells. Finally, the escape hypothesis failed to explain how such elaborate structures as complex capsids and nucleic acid injection mechanisms evolved from cellular structures since we do not know any cellular homologs of these crucial viral components. 

Much like the concept of prokaryotes became the paradigm on how to think about bacterial evolution, the escape hypothesis became the paradigm favored by most virologists to solve the problem of virus origin. This scenario was chosen mainly because it was apparently supported by the observation that modern viruses can pick up genes from their hosts. In its classical version, the escape theory suggested that bacteriophages originated from bacterial genomes and eukaryotic viruses from eukaryotic genomes. This led to a damaging division of the virologist community into those studying bacteriophages and those studying eukaryotic viruses, ‘phages’ and viruses being somehow considered to be completely different entities. The artificial division of the viral world between ‘viruses’ and bacteriophages also led to much confusion on the nature of archaeal viruses. Indeed, although most of them are completely unrelated to bacterial viruses, they are often called ‘bacteriophages’, since archaea (formerly archaebacteria) are still considered by some biologists as ‘strange bacteria’. For instance, archaeal viruses are grouped with bacteriophages in the drawing that illustrates viral diversity in the last edition of the Virus Taxonomy Handbook. Hopefully, these outdated visions will finally succumb to the accumulating evidence from molecular analyses. 

Viruses Are Not Derived from Modern Cells 
Abundant data are now already available to discredit the escape hypothesis in its classical adaptation of the prokaryote/eukaryote paradigm. This hypothesis indeed predicts that proteins encoded by bacterial viruses (avoiding the term bacteriophage here) should be evolutionarily related to bacterial proteins, whereas proteins encoded by viruses infecting eukaryotes should be related to eukaryotic proteins. This turned out to be wrong since, with a few exceptions (that can be identified as recent transfers from their hosts), most viral encoded proteins have either no homologs in any cell or only distantly related homologs. In the latter cases, the most closely related cellular homolog is rarely from the host and can even be from cells of a domain different from the host. More and more biologists are thus now fully aware that viruses form a world of their own, and that it is futile to speculate on their origin in the framework of the old prokaryote/ eukaryote dichotomy.

A more elaborate version has been proposed by William Martin and Eugene Koonin, who suggested that life originated and evolved in the cell-like mineral compartments of a warm hydrothermal chimney. In that model, viruses emerged from the assemblage of self-replicating elements using these inorganic compartments as the first hosts. The formation of true cells occurred twice independently only at the end of the process (and at the top of the chimney), producing the first archaea and bacteria. The latter escaped from the same chimney system as already fully elaborated modern cells. In the model, viruses first co-evolved with acellular machineries producing nucleotide precursors and proteins.

The emergence of the RNA world involves at least the existence of complex mechanisms to produce ATP, RNA, and proteins. This means an elaborated metabolism to produce ribonucleotide triphosphate (rNTP) and amino acids, RNA polymerases, and ribosomes, as well as an ATP-generating system. If such a complex metabolism was present, it appears unlikely that it was unable to produce lipid precursors, hence membranes. If this is correct, then ‘modern’ viruses did not predate cells but originated in a world populated by primitive cells. 

Viruses and the Origin of DNA 
Considering the possibility that at least some DNA viruses originated from RNA viruses, it has been suggested that DNA itself could have appeared in the course of virus evolution (in the context of competition between viruses and their cellular hosts). Indeed, DNA is a modified form of RNA, and both viruses and cells often chemically modify their genomes to protect themselves from nucleases produced by their competitor. It is usually considered that DNA replaced RNA in the course of evolution simply because it is more stable (thanks to the removal of the reactive oxygen in the 2′ position of the ribose) and because cytosine deamination (producing uracil) can be corrected in DNA (where uracil is recognized as an alien base) but not in RNA. 25

Anyone who studies biochemistry knows the enormous complexity of ribonucleotide reductase enzymes, which remove the oxygen from the 2′ position of ribose, the backbone of RNA, to transform RNA building blocks into DNA building blocks. There is no scientific explanation for how RNA could have transitioned to DNA, nor for the origin of the ultra-complex machinery needed to catalyze the required reactions. Molecules have no goals and no foresight. They did not contemplate the advantage in stability of transitioning to DNA. Nothing about inert chemicals and physical forces says: we want to become part of a living, self-replicating entity called a cell at the end of a chemical evolutionary process. Molecules have no "drive"; they do not urge or "want" to find ways to become information-bearing biomolecules, to harness energy as ATP, to become more efficient, or to become part of a molecular machine and, in the end, a complex organism. There is a further hurdle to overcome. More and more biologists are now fully aware that viruses form a world of their own. Proteins encoded by bacterial viruses are not related to bacterial proteins. Modern viruses exhibit very different types of genomes (RNA, DNA, single-stranded, double-stranded), including highly modified DNA, whereas all modern cellular organisms have double-stranded DNA genomes. So the question becomes how viruses that have a DNA genome originated, given that they had an origin independent of living cells. Even more: P. Forterre (2008): Many DNA viruses encode their own enzymes for deoxynucleotide triphosphate (dNTP) production: ribonucleotide reductases (the enzymes that produce deoxyribonucleotides from ribonucleotides) and thymidylate synthases (the enzymes that produce deoxythymidine monophosphate (dTMP) from deoxyuridine monophosphate (dUMP)).
That means ribonucleotide reductase (RNR) enzymes would have evolved independently, in a convergent manner, twice!

Forterre continues: The replacement of RNA by DNA as cellular genetic material would have thus allowed genome size to increase, with a concomitant increase in cellular complexity (and efficiency) leading to the complete elimination of RNA cells by the ancestors of modern DNA cells. This traditional textbook explanation has been recently criticized as incompatible with Darwinian evolution since it does not explain what immediate selective advantage allowed the first organism with a DNA genome to predominate over former organisms with RNA genomes. Indeed, the newly emerging DNA cell could not have immediately enlarged its genome and could not have benefited straight away from a DNA repair mechanism to remove uracil from DNA. Instead, if the replacement of RNA by DNA occurred in the framework of the competition between cells and viruses, either in an RNA virus or in an RNA cell, modification of the RNA genome into a DNA genome would have immediately produced a benefit for the virus or the cell. It has been argued that the transformation of RNA genomes into DNA genomes occurred preferentially in viruses because it was simpler to change in one step the chemical composition of the viral genome than that of the cellular genomes (the latter interacting with many more proteins). Furthermore, modern viruses exhibit very different types of genomes (RNA, DNA, single-stranded, double-stranded), including highly modified DNA, whereas all modern cellular organisms have double-stranded DNA genomes. This suggests a higher degree of plasticity for viral genomes compared to cellular ones. The idea that DNA originated first in viruses could also explain why many DNA viruses encode their own enzymes for deoxynucleotide triphosphate (dNTP) production: ribonucleotide reductases (the enzymes that produce deoxyribonucleotides from ribonucleotides) and thymidylate synthases (the enzymes that produce deoxythymidine monophosphate (dTMP) from deoxyuridine monophosphate (dUMP)).
Because in modern cells, dTMP is produced from dUMP, the transition from RNA to DNA likely occurred in two steps, first with the appearance of ribonucleotide reductase and production of U-DNA (DNA containing uracil), followed by the appearance of thymidylate synthases and formation of T-DNA (DNA containing thymine). The existence of a few bacterial viruses with U-DNA genomes has been taken as evidence that they could be relics of this period of evolution. If DNA first appeared in the ancestral virosphere, one also has to explain how it was later on transferred to cells. One scenario posits the co-existence for some time of an RNA cellular chromosome and a DNA viral genome (episome) in the same cell, with the progressive transfer of the information originally carried by the RNA chromosome to the DNA ‘plasmid’ via retro-transposition. 25

Viruses, the most abundant biological entities on earth
Steven W. Wilhelm (2012): Viruses are the most abundant life forms on Earth, with an estimated 10^31 total viruses globally. 19

Eugene V. Koonin (2020): Viruses appear to be the dominant biological entities on our planet, with the total count of virus particles in aquatic environments alone at any given point in time reaching the staggering value of 10^31, a number that is at least an order of magnitude greater than the corresponding count of cells. The genetic diversity of viruses is harder to assess, but, beyond doubt, the gene pool of viruses is, in the least, comparable to that of hosts. The estimates of the number of distinct prokaryotes on earth differ widely, in the range of 10^7 to 10^12, and accordingly, estimation of the number of distinct viruses infecting prokaryotes at 10^8 to 10^13 is reasonable. Even assuming the lowest number in this range and even without attempting to count viruses of eukaryotes, these estimates represent vast diversity. Despite the rapid short-term evolution of viruses, the key genes responsible for virion formation and virus genome replication are conserved over the long term due to selective constraints. Genetic parasites inescapably emerge even in the simplest molecular replicator systems and persist through their subsequent evolution. Together with the ubiquity and enormous diversity of viruses in the extant biosphere, these findings lead to the conclusion that viruses and other mobile genetic elements (MGEs) played major roles in the evolution of life ever since its earliest stages. 14

G.Witzany (2015): If we imagine that 1 ml of seawater contains one million bacteria and ten times more viral sequences, it can be determined that 10^31 bacteriophages infect 10^24 bacteria per second. 15
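The 10^31 figure quoted above is consistent with simple back-of-envelope arithmetic. The per-millilitre density and ocean volume below are rough, commonly cited magnitudes assumed here for illustration; they are not taken from the cited papers.

```python
# Order-of-magnitude check on the ~10^31 global virus count.
# Assumed inputs: ~10^7 virions per ml of seawater (a commonly cited
# rough density) and ~1.3 billion km^3 of ocean water.
viruses_per_ml = 1e7
ocean_km3 = 1.3e9
ml_per_km3 = 1e15          # 1 km^3 = 10^12 litres = 10^15 ml
total_viruses = viruses_per_ml * ocean_km3 * ml_per_km3
print(f"estimated marine virions: {total_viruses:.1e}")   # ~1.3e31

# Koonin's point that virions outnumber cells by at least 10x,
# using the assumed ~10^6 bacteria per ml from the Witzany quote:
cells_per_ml = 1e6
total_cells = cells_per_ml * ocean_km3 * ml_per_km3
print(f"virion-to-cell ratio: {total_viruses / total_cells:.0f}")
```

The product lands within a factor of a few of 10^31, which is why the estimate is so widely repeated despite the crudeness of the inputs.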

No common ancestor for Viruses
Eugene V. Koonin (2020): In the genetic space of viruses and mobile genetic elements (MGEs), no genes are universal or even conserved in the majority of viruses. Viruses have several distinct points of origin, so there has never been a last common ancestor of all viruses. 14

Viruses and the tree of life (Virology blog 2009): Viruses are polyphyletic (a group whose members come from multiple ancestral sources): In a phylogenetic tree, the characteristics of members of taxa are inherited from previous ancestors. Viruses cannot be included in the tree of life because they do not share characteristics with cells, and no single gene is shared by all viruses or viral lineages. Viruses are polyphyletic – they have many evolutionary origins. Viruses don’t have a structure derived from a common ancestor. Cells obtain membranes from other cells during cell division. According to this concept of ‘membrane heredity’, today’s cells have inherited membranes from the first cells. Viruses have no such inherited structure. They play an important role in regulating population and biodiversity. 7

Eugene V. Koonin (2017): The entire history of life is the story of virus-host coevolution. Therefore the origins and evolution of viruses are an essential component of this process. A signature feature of the virus state is the capsid, the proteinaceous shell that encases the viral genome. Although homologous capsid proteins are encoded by highly diverse viruses, there are at least 20 unrelated varieties of these proteins. Viruses are the most abundant biological entities on earth and show a remarkable diversity of genome sequences, replication and expression strategies, and virion structures.  Virus genomes typically consist of distinct structural and replication modules that recombine frequently and can have different evolutionary trajectories. 

The importance of the admission that viruses do not share a common ancestor cannot be overstated. Researchers also admit that, under a naturalistic framework, the origin of viruses remains obscure and has not found an explanation. One reason is that viruses depend on a cell host in order to replicate. Another is that the virus capsid shells that protect the viral genome are unique; there is no counterpart in cellular life. A science paper that I quote below describes capsids with a "geometrically sophisticated architecture not seen in other biological assemblies". This seems to be interesting evidence of design. The claim that their origin has something to do with evolution is also misleading: evolution plays no role in explaining either the origin of life or the origin of viruses. The fact that "no single gene is shared by all viruses or viral lineages" prohibits drawing a tree of viruses leading to a common ancestor.

Edward C. Holmes (2011): The discovery of mimivirus has undoubtedly had a major impact on theories of viral origins. More striking is that most (∼70% at the time of writing) mimivirus genes have no known homologs, in either virus or cellular genomes, so their origins are unknown. More importantly, the discovery of mimivirus highlights our profound ignorance of the virosphere. It is therefore a truism that a wider sampling of viruses in nature is likely to tell us a great deal more about viral origins. Although perhaps less lauded, the discovery of conserved protein structures among diverse viruses with little if any primary sequence similarity has even grander implications for our understanding of viral origins. 23

Capsid-encoding organisms in contrast to ribosome-encoding organisms
Eugene V. Koonin (2014): Viruses were defined as one of the two principal types of organisms in the biosphere, namely, as capsid-encoding organisms in contrast to ribosome-encoding organisms, i.e., all cellular life forms. Structurally similar, apparently homologous capsids are present in a huge variety of icosahedral viruses that infect bacteria, archaea, and eukaryotes. These findings prompted the concept of the capsid as the virus “self” that defines the identity of deep, ancient viral lineages. This “capsidocentric” perspective on the virus world is buttressed by observations on the extremely wide spread of certain capsid protein (CP) structures that are shared by an enormous variety of viruses, from the smallest to the largest ones, that infect bacteria, archaea, and all divisions of eukaryotes. The foremost among such conserved capsid protein structures is the so-called jelly roll capsid (JRC) protein fold, which is represented, in a variety of modifications, in extremely diverse icosahedral (spherical) viruses that infect hosts from all major groups of cellular life forms. In particular, the presence of the double-beta-barrel JRC (JRC2b) in a broad variety of double-stranded DNA (dsDNA) viruses infecting bacteria, archaea, and eukaryotes has been touted as an argument for the existence of an “ancient virus lineage,” of which this type of capsid protein is the principal signature. Under this approach, viruses that possess a single beta-barrel JRC (JRC1b)—primarily RNA viruses and single-stranded DNA (ssDNA) viruses— could be considered another major viral lineage. A third lineage is represented by dsDNA viruses with icosahedral capsids formed by the so-called HK97-like capsid protein (after bacteriophage HK97, in which this structure was first determined), with a fold that is unrelated to the jelly roll fold. 
This assemblage of viruses is much less expansive than those defined by either JRC2b or JRC1b, but nevertheless, it unites dsDNA viruses from all three domains of cellular life. The capsid-based definition of a virus does capture a quintessential distinction between the two major empires of life forms, i.e., viruses and cellular life forms.    27
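As background to the icosahedral capsid architectures discussed above: such capsids are conventionally classified by the Caspar-Klug triangulation number, a standard result of structural virology that is not stated in the quote itself. A lattice indexed by integers (h, k) has T = h^2 + hk + k^2, and the shell is built from 60T copies of the capsid protein; bacteriophage HK97, named in the quote, builds a T = 7 capsid of 420 subunits.

```python
# Caspar-Klug triangulation numbers: an icosahedral capsid indexed by
# integers (h, k) has T = h^2 + h*k + k^2 and is built from 60*T
# quasi-equivalent copies of the capsid protein.
def triangulation_number(h: int, k: int) -> int:
    return h * h + h * k + k * k

for h, k in [(1, 0), (1, 1), (2, 0), (2, 1), (3, 1)]:
    T = triangulation_number(h, k)
    print(f"(h,k)=({h},{k})  T={T:>2}  subunits={60 * T}")
```

This geometric rule explains why icosahedral capsids come in discrete size classes even when, as the quote stresses, the underlying protein folds (jelly roll, HK97-like) are unrelated.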

Viruses with a different genetic alphabet
Stephen Freeland (2022): The genetic material of more than 200 bacteriophage viruses uses 1-aminoadenine (Z) instead of adenine (A). This minor difference in chemical structures is nevertheless a fundamental deviation from the standard alphabet of four nucleobases established by biological evolution at the time of life's Last Universal Common Ancestor (LUCA). Placed into broader context, the finding illustrates a deep shift taking place in our understanding of the chemical basis for biology. 26

What is the best explanation for viral origin?
Edward C. Holmes (2011): The central debating point in discussions of the origin of viruses is whether they are ancient, first appearing before the last universal cellular ancestor (LUCA), or evolved more recently, such that their ancestry lies with genes that “escaped” from the genomes of their cellular host organisms and subsequently evolved independent replication. The escaped gene theory has traditionally dominated thinking on viral origins, in large part because viruses are parasitic on cells now and it has been argued that this must always have been the case. However, there is no gene shared by all viruses, and recent data are providing increasingly strong support for a far more ancient origin. 23

Koonin mentions three possible scenarios for their origin. One of them: Eugene V. Koonin (2017): The virus-first hypothesis, also known as the primordial virus world hypothesis, regards viruses (or virus-like genetic elements) as intermediates between prebiotic chemical systems and cellular life and accordingly posits that virus-like entities originated in the precellular world.

The second: The regression hypothesis, in contrast, submits that viruses are degenerated cells that have succumbed to obligate intracellular parasitism and in the process shed many functional systems that are ubiquitous and essential in cellular life forms, in particular the translation apparatus. The third, the escape hypothesis postulates that viruses evolved independently in different domains of life from cellular genes that embraced selfish replication and became infectious. 17

The second and third are questionable in the face of the fact that evolution would sort out degenerated cell parts that harm survival. The hypothesis that these parts would become parasites runs counter to the evolutionary paradigm, since evolution is about the survival of the fittest, not about evolving parasites that would kill their host cells. Furthermore, if viruses were not extant right from the beginning, how would ecological homeostasis be guaranteed?

Koonin agrees that the first is the most plausible. He writes:  The diversity of genome replication-expression strategies in viruses, contrasting the uniformity in cellular organisms, had been considered to be most compatible with the possibility that the virus world descends directly from a precellular stage of evolution, and an updated version of the escape hypothesis states that the first viruses have escaped not from contemporary but rather from primordial cells, predating the last universal cellular ancestor. The three evolutionary scenarios imply different timelines for the origin of viruses but offer little insight into how the different components constituting viral genomes might have combined to give rise to modern viruses.

The conclusion that can be drawn is that viruses co-emerged with life, and that this occurred multiple times. If emerging just once is already extremely unlikely based on the odds, how much more so multiple times?

Koonin continues: A typical virus genome encompasses two major functional modules, namely, determinants of virion formation and those of genome replication. Understanding the origin of any virus group is possible only if the provenances of both components are elucidated. Given that viral replication proteins often have no closely related homologs in known cellular organisms, it has been suggested that many of these proteins evolved in the precellular world or in primordial, now extinct, cellular lineages. The ability to transfer the genetic information encased within capsids—the protective proteinaceous shells that comprise the cores of virus particles (virions)—is unique to bona fide viruses and distinguishes them from other types of selfish genetic elements such as plasmids and transposons. Thus, the origin of the first true viruses is inseparable from the emergence of viral capsids. Studies on the origin of viral capsids are severely hampered by the high sequence divergence among these proteins.

Analysis of the available sequences and structures of major capsid proteins (CP) and nucleocapsid (NC) proteins encoded by representative members of 135 virus taxa (117 families and 18 unassigned genera) allowed us to attribute structural folds to 76.3% of the known virus families and unassigned genera. The remaining taxa included viruses that do not form viral particles (3%) and viruses for which the fold of the major virion proteins is not known and could not be predicted from the sequence data (20.7%). The former group includes capsidless viruses of the families Endornaviridae, Hypoviridae, Narnaviridae, and Amalgaviridae, all of which appear to have evolved independently from different groups of full-fledged capsid-encoding RNA viruses. The latter category includes eight taxa of archaeal viruses with unique morphologies and genomes, pleomorphic bacterial viruses of the family Plasmaviridae, and 19 diverse taxa of eukaryotic viruses. It should be noted that, with the current explosion of metagenomics studies, the number and diversity of newly recognized virus taxa will continue to rise. Although many of these viruses are expected to have previously observed CP/NC protein folds, novel architectural solutions doubtlessly will be discovered as well. 17

Gladys Kostyrka (2016): To French molecular biologist and microbiologist Patrick Forterre, viruses could not exist without cells, because he endorses their definition as intracellular obligate parasites. However, this does not mean that viruses did not exist prior to DNA cells. On the basis of comparative sequence analyses of proteins and nucleic acids from viruses and their cellular hosts, Forterre hypothesized that viruses originated before DNA cells and before LUCA (the Last Universal Cellular Ancestor). Forterre’s hypothesis was first formulated in the 1990s and was inspired by protein phylogenies. “Comparative sequence analyses of type II DNA topoisomerases and DNA polymerases from viruses, prokaryotes and eukaryotes suggest that viral genes diverged from cellular genes before the emergence of the last common ancestor (LCA) of prokaryotes and eukaryotes”. At least some viruses originated not from the known cellular domains – Bacteria, Eukarya, and Archaea – but before these three domains were formed. In other words, these viruses must have originated before LUCA. There are several genes shared by many groups of viruses with extremely diverse replication-expression strategies, genome sizes and host ranges. In other words, there are several “hallmark genes”, coding for several hallmark proteins present in many viruses. Yet these genes and proteins are not supposed to be shared by viruses that do not have the same origin, given their diversity. This “key observation” of several hallmark viral genes is thus problematic. It is even more problematic if one takes into account the fact that these genes are not found in any cellular life forms. It is then highly improbable that these viral hallmark genes were originally cellular genes that were transferred to viruses. Koonin assumes that these genes originated in a primordial viral world and were conserved.
“The simplest explanation for the fact that the hallmark proteins involved in viral replication and virion formation are present in a broad variety of viruses but not in any cellular life forms seems to be that the latter actually never possessed these genes. Rather, the hallmark genes, probably, antedate cells and descend directly from the primordial pool of virus-like genetic elements” 17

If Koonin's hypothesis were correct, these nucleotides would have required foresight to assemble into genes that would later become virions, which depend on cell hosts. That seems untenable. The evidence is better interpreted by the creationist model: it coincides with the hypothesis that God created each species/kind and virus separately. Multiple origination events by natural means, together with the emergence of symbiotic and parasitic relationships, multiply the odds against naturalistic proposals, making them more and more untenable.

Achieving the same function through different molecular assembly routes refutes an evolutionary-naturalistic origin of viruses
Eugene V. Koonin (2015): The ability to form virions is the key feature that distinguishes viruses from other types of mobile genetic elements, such as plasmids and transposons. The origin of bona fide viruses thus appears to be intimately linked to and likely concomitant with the origin of the capsids. However, tracing the provenance of viral capsid proteins (CPs) proved to be particularly challenging because they typically do not display sequence or structural similarity to proteins from cellular life forms. Over the years, a number of structural folds have been discovered in viral CPs. Strikingly, morphologically similar viral capsids, in particular, icosahedral, spindle-shaped and filamentous ones, can be built from CPs which have unrelated folds. Thus, viruses have found multiple solutions to the same problem. Nevertheless, the process of de novo origin of viral CPs remains largely enigmatic.  9

Stephen J. Gould (1990):…No finale can be specified at the start, none would ever occur a second time in the same way, because any pathway proceeds through thousands of improbable stages. Alter any early event, ever so slightly, and without apparent importance at the time, and evolution cascades into a radically different channel.10

Fazale Rana (2001): Gould’s metaphor of “replaying life’s tape” asserts that if one were to push the rewind button, erase life’s history, and let the tape run again, the results would be completely different.  The very essence of the evolutionary process renders evolutionary outcomes as nonreproducible (or nonrepeatable). Therefore, “repeatable” evolution is inconsistent with the mechanism available to bring about biological change. 12

William Schopf (2002): Because biochemical systems comprise many intricately interlinked pieces, any particular full-blown system can only arise once…Since any complete biochemical system is far too elaborate to have evolved more than once in the history of life, it is safe to assume that microbes of the primal LCA cell line had the same traits that characterize all its present-day descendants. 11 13

Hugh M. B. Harris (2021): Viruses are ubiquitous. They infect almost every species and are probably the most abundant biological entities on the planet, yet they are excluded from the Tree of Life (ToL). Viruses may well be essential for ecosystem diversity. 1

Matti Jalasvuori (2012): Viruses play a vital role in all cellular and genetic functions, and we can therefore define viruses as essential agents of life. Viruses provide the largest reservoir of genes known in the biosphere, but these were not ‘stolen’ from the host. Such capsids cannot be of host origin. It is well accepted by virologists that viruses often contain many complex genes (including core genes) that cannot be attributed to having been derived from host genes. 3

Julia Durzyńska (2015): Many attempts have been made to define the nature of viruses and to uncover their origin. As the origin of viruses and that of living cells are most probably interdependent, we decided to present ideas concerning the nature of the cellular last universal common ancestor (LUCA). Many viral particles (virions) contain specific viral enzymes required for replication. A few years ago, a new division of all living organisms into two distinct groups was proposed: ribosome-encoding organisms (REOs) and capsid-encoding organisms (CEOs). 4

Eugene V. Koonin (2012): Probably an even more fundamental departure from the three-domain schema is the discovery of the Virus World, with its unanticipated, astonishing expanse and equally surprising evolutionary connectedness. Virus-like parasites inevitably emerge in any replicator system, so THERE IS NO EXAGGERATION IN THE STATEMENT THAT THERE IS NO LIFE WITHOUT VIRUSES. And in quite a meaningful sense, not only viruses taken together, but also major groups of viruses, seem to be no less (if not more) fundamentally distinct than the three (or two) domains of cellular life forms, given that viruses employ different replication-expression cycles, unlike cellular life forms, which, in this respect, are all the same. 5

Shanshan Cheng (2013): Viral capsid proteins protect the viral genome by forming a closed protein shell around it. Most of the currently known viral shells with solved structures are spherical in shape and observe icosahedral symmetry. Comprised of a large number of proteins, such large, symmetrical complexes assume a geometrically sophisticated architecture not seen in other biological assemblies. Geometry of the complex architecture aside, another striking feature of viral capsid proteins lies in the folded topology of the monomers, with the canonical jelly-roll β-barrel being the most prevalent (though not the only) core structural motif among the capsid proteins that make up these viral shells of varying sizes. Our study provided support for the hypothesis that viral capsid proteins, which are functionally unique in viruses in constructing protein shells, are also structurally unique in terms of their folding topology. 6

Eugene V. Koonin (2020): In a seminal 1971 article, Baltimore classified all then-known viruses into six distinct classes that became known as Baltimore classes (BCs) (a seventh class was introduced later), on the basis of the structure of the virion's nucleic acid (traditionally called the virus genome):

The seven Baltimore classes (BCs)
For each BC, the processes of replication, transcription, translation, and virion assembly are shown by color-coded arrows (see the inset). Host enzymes that are involved in virus genome replication or transcription are prefixed with “h-,” and in cases when, in a given BC, one of these processes can be mediated by either a host- or a virus-encoded enzyme, the latter is prefixed with “v-.” Otherwise, virus-encoded enzymes are not prefixed. CP, capsid protein; DdDp, DNA-directed DNA polymerase; DdRp, DNA-directed RNA polymerase; gRNA, genomic RNA; RdRp, RNA-directed RNA polymerase; RT, reverse transcriptase; RCRE, rolling-circle replication (initiation) endonuclease.

1. Double-stranded DNA (dsDNA) viruses, with the same replication-expression strategy as in cellular life forms
2. Single-stranded DNA (ssDNA) viruses that replicate mostly via a rolling-circle mechanism
3. dsRNA viruses
4. Positive-sense RNA [(+)RNA] viruses that have ssRNA genomes with the same polarity as the virus mRNA(s)
5. Negative-sense RNA [(−)RNA] viruses that have ssRNA genomes complementary to the virus mRNA(s)
6. RNA reverse-transcribing viruses that have (+)RNA genomes that replicate via DNA intermediates synthesized by reverse transcription of the genome
7. DNA reverse-transcribing viruses replicating via reverse transcription but incorporating into virions a dsDNA or an RNA-DNA form of the virus genome.
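The seven classes above amount to a small lookup table from virion nucleic acid to replication-expression strategy. A minimal sketch in Python (the field names and condensed wording are mine, summarizing the list above, not an official encoding):

```python
# Minimal lookup table for the seven Baltimore classes (BCs).
# Keys are class numbers; descriptions condense the list in the text.
BALTIMORE_CLASSES = {
    1: {"genome": "dsDNA", "strategy": "DNA-directed, as in cellular life forms"},
    2: {"genome": "ssDNA", "strategy": "mostly rolling-circle replication"},
    3: {"genome": "dsRNA", "strategy": "virus-encoded RdRp"},
    4: {"genome": "(+)ssRNA", "strategy": "RdRp; genome has the same polarity as mRNA"},
    5: {"genome": "(-)ssRNA", "strategy": "RdRp; genome complementary to mRNA"},
    6: {"genome": "(+)ssRNA", "strategy": "reverse transcription via DNA intermediate"},
    7: {"genome": "dsDNA or RNA-DNA", "strategy": "reverse transcription"},
}

def genome_type(bc: int) -> str:
    """Return the virion nucleic-acid type for a Baltimore class."""
    return BALTIMORE_CLASSES[bc]["genome"]
```

Note that the classification keys on the chemistry of the packaged genome, not on virion morphology, which is why unrelated capsid architectures can share a Baltimore class.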

Evidence supports monophyly for some of the BCs but refutes it for others. Generally, the evolution of viruses and MGEs is studied with methods of molecular evolutionary analysis that are also used for cellular organisms. However, the organizations of the genetic spaces dramatically differ between viruses and their cellular hosts.

The Cell factory maker, Paley's watchmaker argument 2.0 Baltim10
An illustration of the "pathways" each Baltimore group goes through to synthesize mRNA, and of the six "super viral hallmark genes" found in virus genomes across the seven Baltimore classes.

Eugene V. Koonin (2020): The origins of the super virus hallmark genes (VHGs) appear to be widely different. In particular, RdRps, RTs, and RCREs most likely represent the heritage of the primordial replicator pool, as indicated by the absence of orthologs of these proteins in cellular life forms. At the top of the megataxonomy are the four effectively independent realms that, however, are connected at an even higher rank of unification through the super-VHG domains. 14

The International Committee on Taxonomy of Viruses (ICTV) classifies viruses into seven orders:

Herpesvirales, large eukaryotic double-stranded DNA viruses;
Caudovirales, tailed double-stranded DNA viruses typically infecting bacteria;
Ligamenvirales, linear double-stranded DNA viruses infecting archaea;
Mononegavirales, nonsegmented negative (or antisense) strand single-stranded RNA viruses of plants and animals;
Nidovirales, positive (or sense) strand single-stranded RNA viruses of vertebrates;
Picornavirales, small positive strand single-stranded RNA viruses infecting plants, insects, and animals;
Tymovirales, monopartite positive single-stranded RNA viruses of plants.

In addition to these orders, there are ICTV families, some of which have not been assigned to an ICTV order. Only those ICTV viral families with more than a few members present in our dataset are explored. 22

Structure and Assembly of Complex Viruses
Carmen San Martin (2013): Viral particles consist essentially of a proteinaceous capsid protecting a genome and involved also in many functions during the virus life cycle. In simple viruses, the capsid consists of a number of copies of the same, or a few different proteins organized into a symmetric oligomer. Structurally complex viruses present a larger variety of components in their capsids than simple viruses. They may contain accessory proteins with specific architectural or functional roles; or incorporate non-proteic elements such as lipids. They present a range of geometrical variability, from slight deviations from the icosahedral symmetry to complete asymmetry or even pleomorphism. Putting together the many different elements in the virion requires an extra effort to achieve correct assembly, and thus complex viruses require sophisticated mechanisms to regulate morphogenesis. This chapter provides a general view of the structure and assembly of complex viruses.

A viral particle consists essentially of a proteinaceous capsid with multiple roles in the protection of the viral genome, cell recognition and entry, intracellular trafficking, and controlled uncoating. Viruses adopt different strategies to achieve these goals. Simple viruses generally build their capsids from a number of copies of the same, or a few different proteins, organized into a symmetric oligomer. In the case of complex viruses, capsid assembly requires further elaborations. What are the main characteristics that define a structurally complex virus? Structural complexity on a virus often, but not necessarily, derives from the need to house a large genome, in which case a larger capsid is required. However, capsid or genome sizes by themselves are not determinants of complexity. For example, flexible filamentous viruses can reach lengths in the order of microns, but most of their capsid mass is built by a single capsid protein arranged in a helical pattern. On the other hand, architecturally complex viruses such as HIV have moderate-sized genomes (7–10 kb of single-stranded (ss) RNA). Structurally complex viruses incorporate a larger variety of components into their capsids than simple viruses. They may contain accessory proteins with specific architectural or functional roles or incorporate non-proteic elements such as lipids. 24

Forming viral symmetric shells
Roya Zandi (2020): The process of formation of virus particles, in which the protein subunits encapsidate the genome (RNA or DNA) to form a stable, protective shell called the capsid, is an essential step in the viral life cycle. The capsid proteins of many small single-stranded RNA viruses spontaneously package their wild-type (wt) genome and other negatively charged polyelectrolytes, a process basically driven by the electrostatic interaction between positively charged protein subunits and negatively charged cargo. Regardless of the virion size and assembly procedures, most spherical viruses adopt structures with icosahedral symmetry. How exactly capsid proteins (CPs) assemble to assume a specific size and symmetry has been investigated for over half a century now. As the self-assembly of virus particles involves a wide range of thermodynamic parameters, different time scales, and an extraordinary number of possible pathways, the kinetics of assembly has remained elusive, linked to Levinthal's paradox for protein folding. The role of the genome in the assembly pathways and the structure of the capsid is even more intriguing. The kinetics of virus growth in the presence of RNA is at least 3 orders of magnitude faster than that of empty capsid assembly, indicating that the mechanism of assembly of CPs around RNA might be quite different. Some questions then naturally arise: What is the role of RNA in the assembly process, and by what means does RNA preserve assembly accuracy at fast assembly speed? Two different mechanisms for the role of the genome have been proposed: (i) en masse assembly and (ii) nucleation and growth.

The assembly interfaces in many CPs are principally short-ranged hydrophobic in character, whereas there is a strong electrostatic, nonspecific long-ranged interaction between RNA and CPs. To this end, the positively charged domains of CPs associate with the negatively charged RNA quite fast and form an amorphous complex. Hydrophobic interfaces then start to associate, which leads to the assembly of a perfect icosahedral shell. Based on the en masse mechanism, the assembly pathways correspond to situations in which intermediates are predominantly disordered. They found that, at neutral pH, a considerable number of CPs were rapidly (∼28 ms) adsorbed to the genome, which more slowly (∼48 s) self-organized into compact but amorphous nucleoprotein complexes (NPC). By lowering the pH, they observed a disorder−order transition as the protein−protein interaction became strong enough to close up the capsid and to overcome the high energy barrier separating NPCs from virions. 8
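The icosahedral symmetry discussed above is conventionally described by the Caspar-Klug triangulation number T = h^2 + hk + k^2, with a closed shell containing 60T coat-protein subunits. This quasi-equivalence rule is standard structural virology rather than something taken from the quoted paper; a quick sketch:

```python
def triangulation_number(h: int, k: int) -> int:
    """Caspar-Klug triangulation number T = h^2 + h*k + k^2."""
    return h * h + h * k + k * k

def capsid_subunits(h: int, k: int) -> int:
    """A closed icosahedral T-number shell contains 60 * T protein subunits."""
    return 60 * triangulation_number(h, k)
```

A T=1 shell (h=1, k=0) has 60 subunits, the smallest possible icosahedral capsid; a T=3 shell (h=1, k=1), common among small ssRNA viruses, has 180.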

1. Hugh M. B. Harris: A Place for Viruses on the Tree of Life (2021)
2. Rachel Nuwer: Why the world needs viruses to function (2020)
3. Matti Jalasvuori: Viruses: Essential Agents of Life (2012)
4. Julia Durzyńska: Viruses and cells intertwined since the dawn of evolution (2015)
5. Eugene V. Koonin: The Logic of Chance: The Nature and Origin of Biological Evolution (2012)
6. Shanshan Cheng: Viral Capsid Proteins Are Segregated in Structural Fold Space (2013)
7. Viruses and the tree of life (2009)
8. Roya Zandi: How a Virus Circumvents Energy Barriers to Form Symmetric Shells (2020)
9. Eugene V. Koonin: Evolution of an archaeal virus nucleocapsid protein from the CRISPR-associated Cas4 nuclease (2015)
10. Stephen J. Gould: Wonderful Life: The Burgess Shale and the Nature of History (1990)
11. J. William Schopf: Life's Origin (2002)
12. Fazale Rana: Repeatable Evolution or Repeated Creation? (2001)
13. Fazale Rana: Newly Discovered Example of Convergence Challenges Biological Evolution (2008)
14. Eugene V. Koonin: Global Organization and Proposed Megataxonomy of the Virus World (2020)
15. G. Witzany: Viruses are essential agents within the roots and stem of the tree of life (2010)
16. Gladys Kostyrka: What roles for viruses in origin of life scenarios? (2016)
17. Eugene V. Koonin: Multiple origins of viral capsid proteins from cellular ancestors (2017)
18. Eugene V. Koonin: A virocentric perspective on the evolution of life (2013)
19. Steven W. Wilhelm: Ocean viruses and their effects on microbial communities and biogeochemical cycles (2012)
20. Hugh Ross: Viruses and God's Good Designs (2020)
21. Ramesh K. Goel: Viruses and Their Interactions With Bacteria and Archaea of Hypersaline Great Salt Lake (2021)
22. Rob Phillips: A comprehensive and quantitative exploration of thousands of viral genomes (2018)
23. Edward C. Holmes: What Does Virus Evolution Tell Us about Virus Origins? (2011)
24. Carmen San Martin: Structure and Assembly of Complex Viruses (2013)
25. P. Forterre: Origin of Viruses (2008)
26. Stephen Freeland: Undefining life's biochemistry: implications for abiogenesis (2022)
27. Eugene V. Koonin: Virus World as an Evolutionary Network of Viruses and Capsidless Selfish Elements (2014)
28. Hugh M. B. Harris: A Place for Viruses on the Tree of Life (2021)
29. Curtis A. Suttle: Viruses in the sea (2005)
30. Carrie Arnold: Could Giant Viruses Be the Origin of Life on Earth? (2014)



Last edited by Otangelo on Fri Aug 05, 2022 8:08 am; edited 9 times in total

Muller's Ratchet: Another hurdle in the hypothetical origin of life scenarios
E. V. Koonin (2017): Both the emergence of parasites in simple replicator systems and their persistence in evolving life forms are inevitable because the putative parasite-free states are evolutionarily unstable. 3

E. V. Koonin (2016): In the absence of recombination, finite populations are subject to irreversible deterioration through the accumulation of deleterious mutations, a process known as Muller’s ratchet, that eventually leads to the collapse of a population via mutational meltdown. 2

Dana K Howe (2008): The theory of Muller's Ratchet predicts that small asexual populations are doomed to accumulate ever-increasing deleterious mutation loads as a consequence of the magnified power of genetic drift and mutation that accompanies small population size. Evolutionary theory predicts that mutational decay is inevitable for small asexual populations, provided deleterious mutation rates are high enough. Such populations are expected to experience the effects of Muller's Ratchet, where the most-fit class of individuals is lost at some rate due to chance alone, leaving the second-best class to ultimately suffer the same fate, and so on, leading to a gradual decline in mean fitness. The mutational meltdown theory built upon Muller's Ratchet to predict a synergism between mutation and genetic drift in promoting the extinction of small asexual populations that are at the end of a long genomic decay process. Since deleterious mutations are harmful by definition, their accumulation would result in loss of individuals and a smaller population size. Small populations are more susceptible to the ratchet effect, and more deleterious mutations would be fixed as a result of genetic drift. This creates a positive feedback loop that accelerates the extinction of small asexual populations. This phenomenon has been called mutational meltdown. From the onset, there would have had to be a population of diversified microbes: not just the population of one progenitor, but varieties with different genetic make-ups, internally compartmentalized, able to perform horizontal gene transfer and recombination. Unless these preconditions were met, the population would die. 1
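The ratchet Howe describes can be illustrated with a toy Wright-Fisher simulation. All parameters and the implementation are my own illustration, not taken from the paper: each asexual genome is reduced to a count of irreversible deleterious mutations, and without recombination or back-mutation the least-loaded class, once lost to drift, can never be recovered.

```python
import random

def mullers_ratchet(pop_size=50, mut_rate=0.3, s=0.02, generations=500, seed=1):
    """Toy Wright-Fisher model of Muller's ratchet in a small asexual population.

    Each genome is just a count of deleterious mutations; fitness declines
    by a factor (1 - s) per mutation.  With no recombination and no
    back-mutation, the minimum load in the population can only ratchet up."""
    random.seed(seed)
    pop = [0] * pop_size            # everyone starts mutation-free
    min_load = []
    for _ in range(generations):
        weights = [(1 - s) ** m for m in pop]
        # each offspring picks a parent in proportion to fitness, then
        # acquires a new deleterious mutation with probability mut_rate
        pop = [random.choices(pop, weights=weights)[0]
               + (1 if random.random() < mut_rate else 0)
               for _ in range(pop_size)]
        min_load.append(min(pop))
    return min_load

history = mullers_ratchet()  # the best class only ever gets worse
```

Because offspring can only match or exceed their parent's load, the recorded minimum load is monotonically non-decreasing: each loss of the best class is one irreversible "click" of the ratchet.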

A plurality of ancestors
The origin of life did not coincide with the organismal LUCA; rather, a profound gap in time, biological evolution, geochemical change, and surviving evidence separates the two. After life emerged from prebiotic processes, diversification ensued and the initial self-replicating and evolving living systems occupied a wide range of available ecological niches. From this time until the existence of the organismal LUCA, living systems, lineages and communities would have come and gone, evolving via the same processes that are at work today, including speciation, extinction, and gene transfer.  4
Eugene V. Koonin (2020): The LUCA was not a homogenous microbial population but rather a community of diverse microorganisms, with a shared gene core that was inherited by all descendant life-forms and a diversified pangenome that included various genes involved in virus–host interactions, in particular multiple defense systems. 8

Horizontal Gene transfer, and the Origin of Life
Gregory P Fournier (2015): The genomic history of prokaryotic organismal lineages is marked by extensive horizontal gene transfer (HGT) between groups of organisms at all taxonomic levels. These HGT events have played an essential role in the origin and distribution of biological innovations. Analyses of ancient gene families show that HGT existed in the distant past, even at the time of the organismal last universal common ancestor (LUCA). Mobile genetic elements, including transposons, plasmids, bacteriophage, and self-splicing molecular parasites, have played a crucial role in facilitating the movement of genetic material between organisms. Ancient HGT during Hadean/Archaean times is more difficult to study than more recent transfers, although it has been proposed that its role was even more pronounced during earlier times in life’s history.  

Aude Bernheim (2019): None of the strains encode all defense systems. However, if these strains are mixed as part of a population, the pan-genome of this population would encode an ‘immune potential’ that encompasses all of the depicted systems. As these systems can be readily available by HGT, given the high rate of HGT in defense systems, the population in effect harbors an accessible reservoir of immune systems that can be acquired by population members. When the population is subjected to infection, this diversity ensures that at least some population members would encode the appropriate defense system, and these members would survive and form the basis for the perpetuation of the population 5
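Bernheim's population-level "immune potential" can be sketched as a toy model (the strain names, system subsets, and counts below are hypothetical, chosen only to make the point): no single strain encodes every defense system, yet the pan-genome of the mixed population covers them all, so for any given phage some member survives.

```python
# Toy model of a population-level 'immune potential'.
# Strain names and their defense-system subsets are hypothetical.
DEFENSE_SYSTEMS = {"RM", "CRISPR-Cas", "Abi", "BREX", "DISARM", "TA"}

strains = {
    "A": {"RM", "CRISPR-Cas"},
    "B": {"Abi", "BREX"},
    "C": {"DISARM", "TA", "RM"},
}

def survivors(countering_system: str) -> set:
    """Strains encoding the system that counters a given phage survive."""
    return {name for name, systems in strains.items()
            if countering_system in systems}

# No single strain carries the full arsenal, but the pan-genome does:
pan_genome = set().union(*strains.values())
```

Whatever system a particular phage is vulnerable to, `survivors()` is non-empty, and (per the quote) HGT lets the surviving members re-seed the population with that system afterwards.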

Eugene V. Koonin (2014): Recombinases derived from unrelated mobile genetic elements have essential roles in both prokaryotic and vertebrate adaptive immune systems. 7

From the onset, there would have had to be a population of diversified microbes: not just the population of one species of progenitor, but varieties with different genetic make-ups, able to perform Horizontal Gene Transfer (HGT) and recombination. Also, there had to be transposons, viral sequences, plasmids, viruses, mobile genetic elements, parasites, etc. Unless these preconditions were met, the population would go extinct.

The virome of the Last Universal Common Ancestor (LUCA)
Eugene V. Koonin (2020): Given that all life forms are associated with viruses and/or other mobile genetic elements, there is no doubt that the LUCA was a host to viruses. Even a conservative version of this reconstruction suggests a remarkably complex virome that already included the main groups of extant viruses of bacteria and archaea. The presence of a highly complex virome implies the substantial genomic and pan-genomic complexity of the LUCA itself. Viruses and other mobile genetic elements (MGEs) are involved in parasitic or symbiotic relationships with all cellular life forms. Genetic parasites must have been inalienable components of life from its very beginnings. Unlike cellular life forms, viruses employ all existing types of nucleic acids as replicating genomes packaged into virions. This diversity of replication and expression strategies has been captured in a systematic form in the ‘Baltimore classification’ of viruses. There are four realms (the highest rank in virus taxonomy) of viruses that are monophyletic (a group comprising a common ancestor and all its descendants) with respect to their core gene sets and partially overlap with the Baltimore classification: Riboviria, Monodnaviria, Duplodnaviria and Varidnaviria. Riboviria includes viruses with positive-sense, negative-sense, and double-stranded RNA (dsRNA) genomes as well as reverse-transcribing viruses with RNA and DNA genomes. Members of this realm are unified by the homologous RNA-dependent RNA polymerases (RdRPs) and reverse transcriptases (RTs).

The accessory genes that are present in each strain in addition to the core genome and collectively account for the bulk of the pangenome include diverse anti-parasite defense systems, genes involved in inter-microbial conflicts, such as antibiotic production and resistance, and integrated mobile genetic elements (MGEs). Given that genetic parasites are intrinsic components of any replicator system, this pangenome structure should necessarily have been established at the earliest stages of cellular evolution. Thus, we can conclude with reasonable confidence that it was a prokaryotic population with a pangenomic complexity comparable to that of the extant archaea and bacteria. The LUCA virome was likely dominated by dsDNA viruses. More specifically, several groups of tailed dsDNA viruses (Duplodnaviria) indicate that (at least) this realm of viruses had already reached considerable diversity prior to the radiation of archaea and bacteria.   Each virus genome includes two major functional modules, one for virion formation (morphogenetic module) and one for genome replication. The two modules rarely display congruent histories over long evolutionary spans and are instead exchanged horizontally between different groups of viruses through recombination, continuously producing new virus lineages. 

LUCA's ancestral virome was likely dominated by dsDNA viruses from the realms Duplodnaviria and Varidnaviria. LUCA was not a homogenous microbial population but rather a community of diverse microorganisms, with a shared gene core that was inherited by all descendant life-forms and a diversified pangenome that included various genes involved in virus-host interactions, in particular multiple defense systems. 8

Koonin mentions that Duplodnaviria and Varidnaviria, the latter subdivided into three classes (DJR MCP viruses, Sphaerolipoviridae, and Portogloboviridae), were extant in the LUCA.

Aude Bernheim (2019): For a microorganism to be protected against a wide variety of viruses, it should encode a broad defense arsenal that can overcome the multiple types of viruses that can infect it. Owing to the selective advantage that defense systems provide, they are frequently gained by bacteria and archaea through horizontal gene transfer (HGT). Faced with viruses that encode counter-defense mechanisms, bacteria and archaea cannot rely on a single defense system and thus need to present several lines of defense as a bet-hedging strategy of survival. Given their selective advantage in the arms race against viruses, one might expect that defense systems, once acquired (either through direct evolution or via HGT), would accumulate in prokaryotic genomes and be selected for. Surprisingly, this is not the case, as defense systems are known to be frequently lost from microbial genomes over short evolutionary time scales, suggesting that they can impose selective disadvantages in the absence of infection pressure. Competition studies between strains encoding defense systems, such as CRISPR–Cas or Lit Abi, and cognate defense-lacking strains have demonstrated the existence of a fitness cost in the absence of phage infection. Access to a diverse set of defense mechanisms is essential in order to combat the enormous genetic and functional diversity of viruses. 5

Felix Broecker (2019): Cellular organisms have co-evolved with various mobile genetic elements (MGEs), including transposable elements (TEs), retroelements, and viruses, many of which can integrate into the host DNA. MGEs constitute ∼50% of mammalian genomes, >70% of some plant genomes, and up to 30% of bacterial genomes. The recruitment of transposable elements (TEs), viral sequences, and other MGEs for antiviral defense mechanisms has been a major driving force in the evolution of cellular life. 6

The immune system and defense mechanisms of the LUCA
The LUCA would already have needed sophisticated, complex, and advanced immune and defense systems to protect itself from invaders: viruses, plasmids, and phages. It had to, because viruses are sophisticated too. The various defense systems of bacteria serve as a good model to exemplify this.

The Cell factory maker, Paley's watchmaker argument 2.0 Bacter11
Defense systems: Bacteria have many ways to fend off phage infection, such as blocking adsorption, injection, or assembly; or through cell suicide or RM systems. These are all innate mechanisms that defend bacteria against phages in general rather than targeting a particular type of phage. CRISPR is an adaptive immune system. It defends bacteria against specific phages and adapts to recognize new threats.

Deep in the ocean, in our own gut, and everywhere in between, phage viruses infect all sorts of bacteria. Phages carry around DNA (or RNA) genomes as blueprints for their assembly and spread. However, phages cannot reproduce on their own. They must take resources from bacteria during the process of infection and use these resources to produce more phages.

THE PHAGE INFECTION PROCESS
When a phage lands on a bacterium, it “locks onto” the bacterial surface using its legs, or tail fibers. Then, the phage's DNA travels from the storage space in its head, down a long tube (the tail), and into the bacterial cell. After a phage injects its DNA into a bacterium, the bacterium's own protein machinery replicates the phage genome. The bacterium then follows the instructions in the phage DNA and generates more heads, tails, tail fibers, and other phage pieces. These “body parts” assemble to form new phages. Once the phages reach a critical mass, they explode out of, or lyse, the bacterium and escape into the environment, much like a balloon popping. Each time a new bacterial cell is infected, a new swarm of phages is produced. The infection can thereby spread exponentially, quickly taking over communities of bacteria. This is known as the lytic cycle. In contrast, so-called lysogenic phages hide their DNA inside an infected bacterium's genome. They lie in wait and only produce more phages at a later time, when conditions are right.
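The exponential spread described above can be put in numbers with an idealized model (assuming every released phage immediately finds a fresh host, which real host availability caps; the burst size used in the example is illustrative):

```python
def phage_population(burst_size: int, cycles: int, start: int = 1) -> int:
    """Phage count after a number of lytic cycles, assuming every phage
    finds a fresh host and each lysis releases `burst_size` new phages
    (an idealization: real spread is limited by host availability)."""
    return start * burst_size ** cycles
```

With an illustrative burst size of 100, a single phage yields 100 ** 4 = 100,000,000 phages after only four lytic cycles, which is why an unchecked infection can overrun a bacterial community so quickly.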

HOW DO BACTERIA DEFEND AGAINST VIRUSES?
If phages are so good at killing bacteria, how are there any bacteria left on Earth? Bacteria have a whole range of tricks to fend off phages. Each of their defense systems works a bit differently. Perhaps the most obvious defense is to prevent phages from landing on bacterial cells in the first place. The bacterial cell membrane is covered with different receptors, proteins, or groups of proteins that recognize molecules from the outside environment and transmit signals into the bacterium. Phages have taken advantage of bacterial receptors for their own purposes. They attach to their bacterial hosts by sticking to receptors in a process called adsorption. Some bacteria keep phages from binding to their surfaces by altering the structure of their receptors or introducing physical barriers to prevent attachment. Alternatively, some bacteria block phage DNA from entering the cytoplasm (the cell interior). In order to inject their DNA into a bacterial cell, phages must penetrate the cell membrane. By adding new proteins into this membrane, bacteria can clog the phage entryway and block DNA injection. When phage DNA makes it into the cytoplasm, some bacteria can prevent the genome from being replicated, while others prevent replicated phage genomes from being loaded into capsids and bursting out of the cell. Many bacteria deploy restriction-modification (RM) systems to destroy phage DNA that is injected into the cell. These defense systems are composed of scissor-like proteins called restriction enzymes. These enzymes cut phage DNA apart, thereby destroying the instructions for making more phages.

To prevent their own DNA from being damaged by restriction enzymes, bacteria add protective chemicals called methyl groups to their genomes. Restriction enzymes ignore methylated DNA and don’t cut it up. Thus, methylation keeps the bacterial genome safe. If a phage manages to bypass all these safeguards, the bacterium’s last line of defense is cell suicide. This “altruistic” act kills the individual bacterium but prevents the production of more phage copies that could go on to infect neighboring cells. One common version of this process is known as abortive infection.

BEYOND INNATE DEFENSES
All the defense systems described above are considered innate defenses. This means that they generally evolve slowly, act quickly during infection, and defend against phages in general rather than against any one specific phage. Almost every bacterium has some form of innate defense.

About half of bacteria also have an adaptive immune mechanism called a CRISPR-Cas system, which defends against specific types of phages. This system can adapt, rapidly generating immunity against new phage challengers. 23

Bacteriophage resistance mechanisms
D. Trasanidou (2019): As a response to the constant threat of phage infection, a diverse arsenal of defence mechanisms has evolved in bacterial hosts. Because phages evolve rapidly to counter these immune systems (Drake et al. 1998), the hosts need to constantly evolve new means of self-protection, leading to a perennial arms-race between hosts and their phages (Forterre and Prangishvili 2009). The defence systems evolved by bacteria provide both innate and adaptive immunity against phage infection. Innate immunity systems interfere at different levels of the phage's infection cycle via receptor masking, superinfection exclusion (Sie), restriction–modification (R-M), bacteriophage exclusion (BREX), toxin–antitoxin (TA) modules, abortive infection (Abi), prokaryotic Argonautes (pAgos), production of anti-phage chemicals and defence island system associated with restriction–modification (DISARM) systems 24

Anna Lopatina (2020): In general, bacteria are known to resist phage infections by mutating or altering their surface receptors, targeting the phage nucleic acids, producing small molecules that poison phage replication, or committing suicide upon phage infection. 11 

Tina Y. Liu (2020): All cells must defend against infection by harmful genetic elements, like viruses or transposons. Prokaryotes use a multitude of different strategies to combat their viruses, which are called phages. These include, but are not limited to, adsorption and injection blocking, abortive infection, toxin-antitoxin, restriction-modification, and CRISPR-Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated) systems 19

Luciano Marraffini (2019): Everywhere bacteria are found, they coexist with their respective phages, undergoing continuous cycles of infection. As a consequence, in order to survive and thrive, bacteria have developed an arsenal of anti-phage mechanisms. The diversity and sophistication of bacteria’s anti-phage mechanisms are astounding. The Red Queen hypothesis states that an organism must constantly evolve to maintain its relative fitness in the face of a predator. In the context of the bacteria-phage relationship, this means that bacteria continuously evolve and update anti-phage mechanisms, while phages adapt to overcome these mechanisms. Competitive bacteria-phage coevolution, often referred to as an “evolutionary arms race”, has produced a multitude of bacterial defense mechanisms that act to inhibit every stage of the phage life cycle. As a result of this arms race, bacteria and phages coevolve, and seem to exist in stable equilibria without dramatic fluctuations or extinction events in natural environments. Key to this arms race is the propensity of bacterial defense systems to spread through horizontal gene transfer. Whereas this in principle could lead to an extensive proliferation of defense mechanisms to provide more protection to the host population, bacteria tend to have only a subset of the available diversity of anti-phage mechanisms. This is in part due to fitness costs associated with carrying defense systems. Therefore, bacteria, even in the context of a race for survival against their parasites, must tune the trade-off between the cost of carrying anti-phage systems and the benefit of resisting phage infection.

Infection begins with the binding to specific surface proteins or cell wall components of the host cell, an event that is followed by the injection of the phage’s genome. Consequently, bacteria use both broad and phage-specific mechanisms to prevent phage adsorption and injection. Many bacteria spend much of their life cycle embedded in biofilms, an extracellular matrix made up of polymers where bacteria live in close proximity, often on surfaces. Biofilms protect bacteria in various ways. Bacteria within biofilms can conditionally survive and grow in the presence of phage: cells inside the colony divide and are shielded by peripheral cells that get infected, and the biofilm structure prevents phage access to the biofilm interior. This protection depends on the presence of the host protein curli, which forms amyloid fibres that promote the formation of an extracellular matrix and a dense cell packing. In addition to the protective shield provided by biofilms, Gram-negative bacteria can secrete outer membrane vesicles (OMVs), spherical structures made up of outer membrane components and periplasmic cargo which pinch off the cell. Since they contain exposed outer membrane proteins that can act as phage receptors, OMVs can act as decoys, sequestering extracellular phage. One report showed that pre-incubation with OMVs reduced T4 infectivity in E. coli.

Another mechanism to prevent adsorption is the introduction of mutations within receptor genes that affect the protein or its expression. This is a common mode of resistance that is perhaps best exemplified by the identification of mutations in LamB, the phage lambda receptor, in resistant E. coli cells. More recently it was found that receptor expression can be modulated by lysogenic phages via Sie. The P. aeruginosa prophage D3112 expresses the protein Tip, which interacts with the ATPase PilB to prevent type IV pili extension. D3112 and other phages that use these pili as receptors are therefore unable to infect D3112 lysogens. Indeed, a systematic screen of P. aeruginosa Sie mechanisms identified many prophages interfering with either type IV pilus function, or with the O-antigen, another typical P. aeruginosa phage receptor in the surface polysaccharide. 9

Simon J. Labrie et al. (2010): Bacteriophages (or phages) are now widely recognized to outnumber bacteria by an estimated tenfold. The remarkable diversity of phages is best illustrated by the frequency of novel genes that are found in newly characterized phage genomes. Such natural variation is a reflection of the array of bacterial hosts that are available to phages and the high evolution rate of phages when facing selective pressure created by antiphage barriers. In most environments, a large pool of phages and hosts are involved in continuous cycles of co-evolution, in which emerging phage-insensitive hosts help to preserve bacterial lineages, whereas counter-resistant phages threaten such new bacterial strains. Phages and phage resistance mechanisms, therefore, have key roles in regulating bacterial populations in most, if not all, habitats.

Preventing phage adsorption 
Adsorption of phages to host receptors is the initial step of infection and, perhaps, one of the most intricate events, as phages must recognize a particular host-specific cell component. Phages are faced with an astonishing diversity in the composition of host membranes and cell walls. Furthermore, bacteria have a range of barriers to prevent phage adsorption. These adsorption-blocking mechanisms can be divided into at least three categories: the blocking of phage receptors, the production of extracellular matrix, and the production of competitive inhibitors.  10

Luciano Marraffini (2019): Bacteria can also prevent adsorption by hiding or masking surface receptors. For example, in Pseudomonas aeruginosa, type IV pili can be glycosylated to prevent the binding of several pilus-specific phages. Receptors can also be blocked by polysaccharide capsules, which shield the whole bacterial surface. The polysialic acid capsule of E. coli K1 prevents phage T7 attachment to its receptor, lipopolysaccharide (LPS), thereby reducing infectivity. In response, phages can have enzymes in their tails that degrade various capsules, giving rise to an evolutionary arms race that results in the extreme diversification of capsule synthesis and hydrolyzing enzyme genes of the host and phage, respectively. Finally, surface proteins can also hide phage receptors. E. coli lytic phage T5 uses the outer membrane iron uptake protein FhuA as its receptor and expresses the lipoprotein Llp to mask it. This prevents additional T5 particles, and possibly other phages that use FhuA as receptor, such as T1 and phi80, from entering and disturbing T5’s infection cycle. This phenomenon is an example of superinfection exclusion (Sie), a process where intracellular phages, including prophages, block the infection of the same (homotypic Sie) or a different (heterotypic Sie) phage. 9

Blocking of phage receptors. 
Simon J. Labrie et al. (2010): To limit phage propagation, bacteria can adapt the structure of their cell surface receptors or their three-dimensional conformation. For example, Staphylococcus aureus produces a cell-wall anchored virulence factor, immunoglobulin G-binding protein A, which binds to the Fc fragment of immunoglobulin G. It has been shown that phage adsorption improves when the bacteria produce less protein A, indicating that the phage receptor is masked by this protein. Phage T5, which infects Escherichia coli, produces a lipoprotein (Llp) that blocks its own receptor, ferrichrome-iron receptor (FhuA). Llp is expressed at the beginning of the infection, thereby preventing superinfection. This protein also protects newly synthesized phage T5 virions from inactivation by binding free receptors that are released from lysed cells. Host cells also use lipoproteins to inhibit phages, as seen in E. coli F+ strains. The outer-membrane protein TraT, encoded by the F plasmid, masks or modifies the conformation of outer-membrane protein A (OmpA), which is a receptor for many T-even-like E. coli phages. Bordetella spp. use phase variation to alter their cell surface, which is necessary for the colonization and survival of the bacteria. The production of many adhesins and toxins is under the control of the BvgAS two-component regulatory system. Bvg+ Bordetella spp. cells express colonization and virulence factors, including adhesins, toxins and a type III secretion system, that are not expressed in the Bvg– phase. The phage receptor, pertactin autotransporter (Prn), is expressed only in the Bvg+ phase, thus the efficiency of infection of the Bordetella phage BPP-1 is 1 million-fold higher for Bvg+ cells than for Bvg– cells. Interestingly, although this receptor is absent from Bvg– cells, the phage BPP-1 is still able to infect them, albeit at a much lower rate, indicating that this phage has evolved a strategy to overcome the absence of its primary receptor. 
Some phages that infect Bordetella spp. use a newly discovered family of genetic elements known as diversity generating retroelements to promote genetic variability. These phages switch hosts through a template-dependent, reverse-transcriptase-mediated process that introduces nucleotide substitutions in the variable region of the phage gene mtd, which encodes major tropism determinant protein, the protein that is responsible for host recognition. Comparative genome analyses have revealed putative diversity-generating retroelement systems in other phages, including in those that infect Bifidobacterium spp.

Production of extracellular matrix. 
The production of structured extracellular polymers can promote bacterial survival in various ecological niches by protecting the bacteria against harsh environmental conditions and, in some cases, providing a physical barrier between phages and their receptors. Some phages also specifically recognize these extracellular polymers and even degrade them. Polysaccharide-degrading enzymes can be classified into two groups: hydrolases (also known as the polysaccharases) and lyases. The lyases cleave the linkage between the monosaccharide and the C4 of uronic acid and introduce a double bond between the C4 and C5 of uronic acid. The hydrolases break the glycosyl–oxygen bond in the glycosidic linkage. These viral enzymes are found either bound to the phage structure (connected to the receptor-binding complex) or as free soluble enzymes from lysed bacterial cells. Alginates are exopolysaccharides that are mainly produced by Pseudomonas spp., Azotobacter spp. and some marine algae. An increased phage resistance was observed for alginate-producing Azotobacter spp. cells. However, phage F116, which targets Pseudomonas spp., produces an alginate lyase, facilitating its dispersion in the alginate matrix as well as reducing the viscosity of this matrix. It was proposed that alginate is involved in the adsorption of phage 2 and φPLS-I, which also target Pseudomonas spp., as an alginate-deficient mutant was phage resistant.

Hyaluronan (also known as hyaluronic acid) is composed of alternating N-acetylglucosamine and glucuronic-acid residues and is produced by pathogenic streptococci as a constituent of their capsule. This virulence factor helps bacterial cells to escape the immune system by interfering with defense mechanisms that are mediated by antibodies, complement, and phagocytes. Interestingly, genes encoding hyaluronan-degrading enzymes (known as hyaluronidases) are often found in the prophages that are inserted into the genomes of pathogenic bacterial strains. Not only are these prophage-encoded enzymes able to destroy the bacterial hyaluronan, but they also degrade human hyaluronan, helping the bacteria to spread through connective tissues. Both virulent and temperate streptococcal phages possess hyaluronidase, but the quantity of enzyme produced by temperate phages is several orders of magnitude higher than the quantity produced by virulent phages, therefore enabling the temperate phages to cross the hyaluronan barrier. Cell surface glycoconjugates of E. coli strains and Salmonella spp. serovars are extremely diverse. At least two serotype-specific surface sugars are produced by E. coli isolates: the lipopolysaccharide O antigen and the capsular polysaccharide K antigen. Phages have co-evolved with that diversity, and some are specific to these antigens. Capsular-negative mutants are insensitive to K antigen-specific phages. A similar observation was made with Salmonella phage P22, which recognizes the O antigen. Furthermore, the P22 tail spike also possesses an endoglycosidase activity, enabling the phage to cross the 100 nm O antigen layer. Phage Φv10, which specifically binds to the O antigen of E. coli O157:H7, possesses an O-acetyltransferase that modifies the O157 antigen to block adsorption of Φv10 and similar phages.

Preventing phage DNA injection
Superinfection exclusion (Sie) systems are proteins that block the entry of phage DNA into host cells, thereby conferring immunity against specific phages. These proteins are predicted to be membrane-anchored or associated with membrane components. The genes encoding these proteins are often found in prophages, suggesting that in many cases Sie systems are important for phage–phage interactions rather than phage–host interactions. Many different Sie systems have been identified, although only a few have been characterized. 10

Luciano Marraffini (2019): Blocking the entry of phage DNA into the cytoplasm is another mechanism of preventing phage infections. The E. coli prophage HK97 confers both homotypic and heterotypic (against the closely related phage HK75) Sie through the expression of gp15. This is an inner membrane (transmembrane) protein that interacts with the host glucose transporter PtsG, and most likely disrupts its association with phage components required for translocating the viral genome across the inner membrane, thereby preventing the transfer of DNA into the cytoplasm. Another recent example of a heterotypic Sie mechanism preventing DNA injection is found in the mycobacteriophage Fruitloop. During the lytic cycle, Fruitloop gp52 inactivates Wag31, an essential mycobacterial protein involved in cell wall synthesis at the cell poles. This prevents DNA injection by an unrelated group of mycobacteriophages that rely on Wag31, including the phages Hedgerow and Rosebush. 9

Sie systems in Gram-negative bacteria. 
Simon J. Labrie et al. (2010): Coliphage T4, a well-characterized virulent phage, has two Sie systems, encoded by imm and sp. These systems cause rapid inhibition of DNA injection into cells, preventing subsequent infection by other T-even-like phages. The Imm and Sp systems act separately and have different mechanisms of action. Imm prevents the transfer of phage DNA into the bacterial cytoplasm by changing the conformation of the injection site. Imm has two non-conventional transmembrane domains and is predicted to be localized to the membrane, but Imm alone does not confer complete phage immunity and must be associated with another membrane protein to exert its function and achieve complete exclusion. The membrane protein Sp inhibits the activity of the T4 lysozyme (which is encoded by gp5), thereby presumably preventing the degradation of peptidoglycan and the subsequent entry of phage DNA. The T4 lysozyme is found at the extremity of the tail and creates holes in the host cell wall, facilitating the injection of phage DNA into the cell. The Sim and SieA systems are associated with the prophages that are found in several Enterobacteriaceae species and have been well characterized, although the molecular mechanisms of their blocking activities are not yet fully understood. To exert its activity, Sim must be processed at its amino terminus in a SecA-dependent manner. The resulting 24 kDa Sim protein confers resistance against coliphages P1, c1, c4 and vir mutants. The only evidence that has led to the proposal that Sim blocks DNA entry is that phage adsorption is not affected by the presence of this protein and a bacterium can be successfully transformed with the phage genome. Finally, SieA is found in the inner membrane of Salmonella enterica subsp. enterica serovar Typhimurium carrying lysogenic phage P22 and prevents the infection of phages L, MG178, and MG40.
Notably, it was initially believed that SieB was also involved in superinfection exclusion, but it was later shown to cause phage-abortive infection.

Sie systems in Gram-positive bacteria. 
To date, only a few examples of mechanisms that inhibit phage DNA injection have been identified in Gram-positive bacteria. Most were identified in Lactococcus lactis, a species used in industrial milk fermentation processes. The best-characterized system is Sie2009, which was identified in the genome of the temperate lactococcal phage Tuc2009 and then subsequently found in other prophages in the genomes of several L. lactis strains. Most lactococcal prophages (including Tuc2009) belong to the P335 lactococcal phage group, and Sie2009 from these phages confers resistance to a genetically distinct group of lactococcal phages (the 936 group). The 936 group is the predominant group of L. lactis-specific phages found in the dairy industry. Lactococcal Sie systems are predicted to be localized to the membrane, and they provide resistance by inhibiting the transfer of phage DNA into host cells. Finally, a Sie-like system was recently found in a prophage of Streptococcus thermophilus, another bacterial species used in industrial milk fermentation processes. Prophage TP-J34 encodes a signal-peptide-bearing 142-amino-acid lipoprotein (LTP) that blocks the injection of phage DNA into the cell. Surprisingly, this system confers resistance to some lactococcal phages when transformed into L. lactis.

Cutting phage nucleic acids
Restriction–modification systems.
Many, if not all, bacterial genera possess restriction–modification (R–M) systems. Their activities are due to several heterogeneous proteins that have been classified into at least four groups (type I–type IV). The principal function of the R–M system is thought to be protecting the cell against invading DNA, including viruses. When unmethylated phage DNA enters a cell harboring an R–M system, it will either be recognized by the restriction enzyme and rapidly degraded or, to a lesser extent, methylated by a bacterial methylase to avoid restriction, thereby leading to the initiation of the phage’s lytic cycle. The fate of phage DNA is determined mainly by the processing rates of these two enzymes. As the restriction enzyme is often more active than the methylase, the incoming phage DNA is usually cleaved, although the host DNA is always protected by the methylase activity. Moreover, methylases are usually more specific for hemimethylated DNA (that is, DNA containing methyl groups on only one of the two DNA strands). When the phage DNA is methylated, the new virions become insensitive to the cognate restriction enzyme and readily infect neighboring cells containing the same R–M system. The phage will remain insensitive until it infects a bacterium that does not encode the same methylase gene, in which case the new virions will become unmethylated again and will therefore be sensitive once again to the R–M system of the original bacterium. To cope with these R–M systems, phages have evolved several anti-restriction strategies. One of these strategies is the absence of endonuclease recognition sites in their genomes through the accumulation of point mutations. For example, the polyvalent Staphylococcus phage K has no Sau3A sites (which have a 5′-GATC-3′ recognition sequence) in its double-stranded-DNA genome. The antiviral efficiency of an R–M system is directly proportional to the number of recognition sites in a viral double-stranded-DNA genome.
Furthermore, some phages have overcome R–M systems through the acquisition of the cognate methylase gene in their genomes. Perhaps the most striking example of an anti-restriction system is found in phage T4. The genome of this virulent phage contains the unusual base hydroxymethylcytosine (HMC) instead of the cytosine that is found in the host DNA. This modification allows phage T4 DNA to be impervious to R–M systems that recognize specific sequences containing a cytosine. In this co-evolutionary arms race, some bacteria have acquired the ability to attack modified phage DNA. In contrast to classical R–M systems, modification-dependent systems (MDSs) are specific for either methylated or hydroxymethylated DNA. Only a few MDS enzymes have been thoroughly characterized, such as DpnI from Streptococcus pneumoniae and McrA, McrBC and Mrr from E. coli. Interestingly, phage T4 is also resistant to MDS enzymes, because its HMC residues are glucosylated. In yet another twist, E. coli CT596 is able to attack glucosylated DNA, as it possesses a two-component system consisting of glucose-modified restriction S (GmrS) and GmrD proteins encoded by a prophage. This system specifically recognizes and cleaves DNA containing glucosylated HMC but has no effect on unglucosylated DNA. Some T4-like phages have a gene encoding internal protein I (IPI), which is specifically designed to disable the GmrS–GmrD system. During infection, mature IPI (IPI*) is injected into the host cell along with the phage genome. According to its structure, IPI* may interact with the GmrS–GmrD complex to inactivate its restriction activity. Interestingly, some bacterial strains have found ways to bypass IPI* by using a single, fused polypeptide.
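The point that restriction efficiency scales with the number of recognition sites, and that a phage with no sites (like phage K with Sau3A’s 5′-GATC-3′ motif) simply offers the enzyme nothing to cut, can be sketched in a few lines. The sequences below are invented toy fragments, not real genomes:

```python
# Toy sketch: count restriction-enzyme recognition sites in a DNA string.
# A genome with zero sites escapes that enzyme entirely; more sites mean
# more chances to be cut before the methylase can protect the DNA.

def recognition_sites(genome: str, site: str = "GATC") -> list[int]:
    """Return 0-based positions of every occurrence of the recognition site."""
    positions = []
    start = genome.find(site)
    while start != -1:
        positions.append(start)
        start = genome.find(site, start + 1)
    return positions

host_like = "ATGATCGGGATCTTA"  # two GATC sites -> cut if unmethylated
phage_k_like = "ATGTTCGGGTTCTTA"  # no GATC sites -> invisible to Sau3A

print(recognition_sites(host_like))    # [2, 8]
print(recognition_sites(phage_k_like))  # []
```

This mirrors the selection pressure described above: each point mutation that removes a GATC motif removes one opportunity for the restriction enzyme to destroy the phage genome.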

Luciano Marraffini (2019): Restriction-modification (RM) systems are a ubiquitous and extremely diverse mode of anti-phage defense. They are normally made up of two activities: a restriction endonuclease and a methyltransferase (the modification component). The restriction endonuclease recognizes short DNA motifs, usually 4–8 base-pairs long, and cuts the phage DNA. These DNA motifs exist in both the bacterial host and invading phage, but the host protects its genome by using the methyltransferase to modify its own DNA to avoid recognition by the restriction enzyme. An invading phage is usually not methylated, and will therefore be cut upon injection. RM systems are classified into four major types, based on their mechanism of action and subunit composition. Both type I and III systems translocate along DNA and cleave away from the recognition sites. Type II systems, known for their use in molecular cloning, cleave within or near the recognition site. Type IV systems lack a methylase and contain only a restriction endonuclease, which cleaves only modified DNA. Finally, there are examples of “inverted” RM systems that do not belong to any of these types. The phage ϕC31 can propagate in Streptomyces coelicolor A3(2) harboring the four-gene “phage growth limiting” (pgl) locus, but only mounts one cycle of infection. The released phages are unable to reinfect Pgl+ hosts, presumably due to the action of the methyltransferase PglX, which modifies new phage DNA to make it susceptible to restriction in the next Pgl+ host by an unknown mechanism.

RM systems and DNA modifications exemplify an elaborate “arms race” between E. coli and phage T4. T4 contains hydroxymethylcytosine (HMC) instead of cytosine in its DNA, inhibiting all type I–III RM systems that recognize sites containing cytosine. To counter this, E. coli uses McrBC, a type IV system specific for HMC-containing DNA. In response, T4 can glycosylate its DNA, which impairs McrBC activity. Against this, E. coli has evolved an additional type IV system, the GmrS–GmrD system, that can cleave glycosylated DNA.

The CRISPR–Cas system
Tina Y. Liu (2020): CRISPR-Cas systems stand out as the only known RNA-programmed pathways for detecting and destroying bacteriophages and plasmids. Class 1 CRISPR-Cas systems, the most widespread and diverse of these adaptive immune systems, use an RNA-guided multi-protein complex to find foreign nucleic acids and trigger their destruction. These multisubunit complexes target and cleave DNA and RNA, and regulatory molecules control their activities. CRISPR-Cas loci constitute the only known adaptive immune system in bacteria and archaea. They typically include an array of repeat sequences (CRISPRs) with intervening “spacers” matching sequences of DNA or RNA from viruses or other mobile genetic elements, and a set of genes encoding CRISPR-associated (Cas) proteins.

Transcription across the CRISPR array produces a precursor crRNA (pre-crRNA) that is processed by nucleases into small, non-coding CRISPR RNAs (crRNAs). Each crRNA molecule assembles with one or more Cas proteins into an effector complex that binds crRNA-complementary regions in foreign DNA or RNA. The effector complex then triggers degradation of the targeted DNA or RNA using either an intrinsic nuclease activity or a separate nuclease in trans.
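The repeat-spacer-repeat architecture described above can be pictured with a short sketch: processing a pre-crRNA amounts to splitting the array on the repeat sequence, leaving one spacer per crRNA. This is a minimal, assumption-laden illustration; the repeat and spacer strings are invented placeholders (real repeats are ~29 bp and spacers ~32 bp), and real processing is done by nucleases, not string operations.

```python
# Toy sketch of crRNA biogenesis: a CRISPR array alternates a fixed
# repeat with unique spacers; releasing the spacers is modeled here
# as splitting the array string on the repeat. Sequences are invented.

def extract_spacers(array: str, repeat: str) -> list[str]:
    """Split the array on the repeat and keep the spacer segments between."""
    pieces = array.split(repeat)
    return [p for p in pieces if p]  # drop empty strings flanking the repeats

repeat = "GTTTTAGA"                     # stands in for the CRISPR repeat
spacers = ["ACGTACGTAC", "TTGGCCAATT"]  # stand in for unique phage-derived spacers
array = repeat + spacers[0] + repeat + spacers[1] + repeat

print(extract_spacers(array, repeat))  # ['ACGTACGTAC', 'TTGGCCAATT']
```

Each extracted spacer corresponds to one crRNA guide, which then assembles with Cas proteins into an effector complex.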

Giedrius Gasiunas (2012): The silencing of invading nucleic acids is executed by ribonucleoprotein complexes preloaded with small, interfering CRISPR RNAs (crRNAs) that act as guides for targeting and degradation of foreign nucleic acid. The Cas9–crRNA complex of the Streptococcus thermophilus CRISPR3/Cas system introduces a double-strand break at a specific site in DNA containing a sequence complementary to crRNA. DNA cleavage is executed by Cas9, which uses two distinct active sites, RuvC and HNH, to generate site-specific nicks on opposite DNA strands. Results demonstrate that the Cas9–crRNA complex functions as an RNA-guided endonuclease with RNA-directed target sequence recognition and protein-mediated DNA cleavage. 20
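The core idea of Gasiunas's description, that the crRNA finds its target by complementarity, can be sketched as a toy search. This is only an illustrative model under stated assumptions: the sequences are invented, and real Cas9 additionally requires a PAM motif and cleaves at a defined offset, details omitted here.

```python
# Minimal sketch of crRNA-guided target recognition: locate the region
# of target DNA complementary to the crRNA spacer. PAM checking and the
# actual RuvC/HNH nicking chemistry are not modeled. Sequences invented.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def find_target(crrna_spacer: str, target_dna: str) -> int:
    """Position of the region complementary to the crRNA spacer, or -1 if absent."""
    return target_dna.find(revcomp(crrna_spacer))

phage_dna = "AAACCCGGGTTTAAA"
protospacer = "CCCGGGTTT"            # the region of the phage genome to target
crrna_spacer = revcomp(protospacer)  # the crRNA guide is its complement

print(find_target(crrna_spacer, phage_dna))  # 3
```

A return value of -1 corresponds to a phage the cell has no memory of: with no complementary spacer, the effector complex finds nothing to cut.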

J.Cepelewicz (2020): CRISPR acts like an adaptive immune system; it enables bacteria that have been exposed to a virus to pass on a genetic “memory” of that infection to their descendants, which can then mount better defenses against a repeat infection. It’s a system that works so well that an estimated half of all bacterial species use CRISPR. Researchers have uncovered dozens of other systems that bacteria use to rebuff phage invasions. But in laboratory studies, bacteria primarily develop what’s known as surface-based phage resistance. Mutations change receptor molecules on the surface of the bacterial cell, so that the phage can no longer recognize and invade it.

The strategy is akin to shutting a door and throwing away the key: It offers the bacteria complete safety from infection by the virus. But that protection comes at a significant price, because it also disrupts whatever nutrient uptake, waste disposal, communication task or other cellular function the receptor would have been providing — taking a constant toll on a cell’s fitness.

In contrast, CRISPR only drags on a cell’s resources when it’s active, during a viral infection. Even so, CRISPR represents a riskier gambit: It doesn’t start to work until phages have already entered the cell, meaning that there’s a chance the viruses could overcome it. And CRISPR doesn’t just attack viral DNA; it can also prevent bacteria from taking up beneficial genes from other microbes, like those that confer antibiotic resistance. What factors affect the trade-offs in costs and fitness? For the past six years, Edze Westra, an evolutionary ecologist at the University of Exeter in England, has led a team pursuing the answer to that question. In 2015, they discovered that nutrient availability and phage density affected whether Pseudomonas bacteria relied on surface-based or CRISPR-based resistance. In environments poor in resources, receptor modifications were more burdensome, so CRISPR became a better bargain. When resources were plentiful, bacteria grew more densely and phage epidemics became more frequent. Bacteria then faced greater selective pressure to close themselves off from infection entirely, and so they shut down receptors to gain surface-based resistance. This explained why surface-based resistance was so common in laboratory cultures. Growing in a test tube rich in nutrients, “these bacteria are on a holiday,” Westra said. “They are having a terrific time.”

Still, these rules weren’t cut and dried. Plenty of bacteria in natural high-nutrient environments use CRISPR, and plenty of bacteria in natural low-nutrient environments don’t. “It’s all over the place,” Westra said. “That told us that we were probably still missing something.”

How Biodiversity Reshapes the Battle
Then one of Westra’s graduate students, Ellinor Opsal, proposed another potential factor: the diversity of the biological communities in which bacteria live. This factor is harder to study, but scientists had previously observed that it could affect phage immunity in bacteria. For example, in 2005, James Bull, a biologist at the University of Texas, Austin, and William Harcombe, his graduate student at the time (now at the University of Minnesota), found that E. coli bacteria didn’t evolve immunity to a phage when a second bacterial species was present. Similarly, Britt Koskella, an evolutionary biologist at the University of California, Berkeley, and one of her graduate students, Catherine Hernandez, reported last year that phage resistance failed to arise in Pseudomonas bacteria living on their actual host (a plant), though they always gained immunity in a test tube. Could the diversity of the surroundings influence not just whether or not resistance to phages evolved, but the nature of that resistance?

To find out, Westra’s team performed a new set of experiments: Instead of altering the nutrient conditions for Pseudomonas bacteria growing with phages, they added three other bacterial species — species that competed against Pseudomonas for resources but weren’t targeted by the phage. Left to themselves, Pseudomonas would normally develop surface-based mutations. But in the company of rivals, they were far more likely to turn to CRISPR. Further investigation showed that the more complex community dynamics had shifted the fitness costs: The bacteria could no longer afford to inactivate receptors because they not only had to survive the phage, but also had to outcompete the bacteria around them. These results from Westra’s group dovetail with earlier findings that phages can produce greater diversity in bacterial communities. “Now, that diversity is actually feeding back to the phage side of things” by affecting phage resistance, Koskella said. “It’s neat to see that coming full circle.” By understanding that kind of feedback loop, she added, “we can start to ask more general questions about the impacts that phages have in a community context.”

For one, the bacteria’s shift toward a CRISPR-based phage response had another, broader effect. When Westra’s group grew Pseudomonas in moth larvae hosts, they found that the bacteria with surface-based resistance were less virulent, killing the larvae much more slowly than the bacteria with active CRISPR systems did. 18

Discovering CRISPR
S.H. Sternberg (2015): The CRISPR locus was first identified in Escherichia coli as an unusual series of 29-bp repeats separated by 32-bp spacer sequences (Ishino et al., 1987). 21 Carl Zimmer tells us the story (2015): The scientists who discovered CRISPR had no way of knowing that they had discovered something so revolutionary. They didn’t even understand what they had found. In 1987, Yoshizumi Ishino and colleagues at Osaka University in Japan published the sequence of a gene called iap belonging to the gut microbe E. coli. To better understand how the gene worked, the scientists also sequenced some of the DNA surrounding it. They hoped to find spots where proteins landed, turning iap on and off. But instead of a switch, the scientists found something incomprehensible. Near the iap gene lay five identical segments of DNA. DNA is made up of building blocks called bases, and the five segments were each composed of the same 29 bases. These repeat sequences were separated from each other by 32-base blocks of DNA, called spacers. Unlike the repeat sequences, each of the spacers had a unique sequence.

This peculiar genetic sandwich didn’t look like anything biologists had found before. When the Japanese researchers published their results, they could only shrug. “The biological significance of these sequences is not known,” they wrote. It was hard to know at the time if the sequences were unique to E. coli, because microbiologists only had crude techniques for deciphering DNA. But in the 1990s, technological advances allowed them to speed up their sequencing. By the end of the decade, microbiologists could scoop up seawater or soil and quickly sequence much of the DNA in the sample. This technique — called metagenomics — revealed those strange genetic sandwiches in a staggering number of species of microbes. They became so common that scientists needed a name to talk about them, even if they still didn’t know what the sequences were for. In 2002, Ruud Jansen of Utrecht University in the Netherlands and colleagues dubbed these sandwiches “clustered regularly interspaced short palindromic repeats” — CRISPR for short.

Jansen’s team noticed something else about CRISPR sequences: They were always accompanied by a collection of genes nearby. They called these genes Cas genes, for CRISPR-associated genes. The genes encoded enzymes that could cut DNA, but no one could say why they did so, or why they always sat next to the CRISPR sequence. Three years later, three teams of scientists independently noticed something odd about CRISPR spacers. They looked a lot like the DNA of viruses. “And then the whole thing clicked,” said Eugene Koonin. At the time, Koonin, an evolutionary biologist at the National Center for Biotechnology Information in Bethesda, Md., had been puzzling over CRISPR and Cas genes for a few years. As soon as he learned of the discovery of bits of virus DNA in CRISPR spacers, he realized that microbes were using CRISPR as a weapon against viruses.

Koonin knew that microbes are not passive victims of virus attacks. They have several lines of defense. Koonin thought that CRISPR and Cas enzymes provide one more. In Koonin’s hypothesis, bacteria use Cas enzymes to grab fragments of viral DNA. They then insert the virus fragments into their own CRISPR sequences. Later, when another virus comes along, the bacteria can use the CRISPR sequence as a cheat sheet to recognize the invader.
Scientists didn’t know enough about the function of CRISPR and Cas enzymes for Koonin to make a detailed hypothesis. But his thinking was provocative enough for a microbiologist named Rodolphe Barrangou to test it. To Barrangou, Koonin’s idea was not just fascinating, but potentially a huge deal for his employer at the time, the yogurt maker Danisco. Danisco depended on bacteria to convert milk into yogurt, and sometimes entire cultures would be lost to outbreaks of bacteria-killing viruses. Now Koonin was suggesting that bacteria could use CRISPR as a weapon against these enemies.

To test Koonin’s hypothesis, Barrangou and his colleagues infected the milk-fermenting microbe Streptococcus thermophilus with two strains of viruses. The viruses killed many of the bacteria, but some survived. When those resistant bacteria multiplied, their descendants turned out to be resistant too. Some genetic change had occurred. Barrangou and his colleagues found that the bacteria had stuffed DNA fragments from the two viruses into their spacers. When the scientists chopped out the new spacers, the bacteria lost their resistance. Barrangou, now an associate professor at North Carolina State University, said that this discovery led many manufacturers to select for customized CRISPR sequences in their cultures, so that the bacteria could withstand virus outbreaks. “If you’ve eaten yogurt or cheese, chances are you’ve eaten CRISPR-ized cells,” he said.

In 2007, Blake Wiedenheft joined Doudna’s lab as a postdoctoral researcher, eager to study the structure of Cas enzymes to understand how they worked. Doudna agreed to the plan — not because she thought CRISPR had any practical value, but just because she thought the chemistry might be cool. “You’re not trying to get to a particular goal, except understanding,” she said. As Wiedenheft, Doudna and their colleagues figured out the structure of Cas enzymes, they began to see how the molecules worked together as a system. When a virus invades a microbe, the host cell grabs a little of the virus’s genetic material, cuts open its own DNA, and inserts the piece of virus DNA into a spacer. As the CRISPR region fills with virus DNA, it becomes a molecular most-wanted gallery, representing the enemies the microbe has encountered. The microbe can then use this viral DNA to turn Cas enzymes into precision-guided weapons. The microbe copies the genetic material in each spacer into an RNA molecule. Cas enzymes then take up one of the RNA molecules and cradle it. Together, the viral RNA and the Cas enzymes drift through the cell. If they encounter genetic material from a virus that matches the CRISPR RNA, the RNA latches on tightly. The Cas enzymes then chop the DNA in two, preventing the virus from replicating.

CRISPR, microbiologists realized, is also an adaptive immune system. It lets microbes learn the signatures of new viruses and remember them. And while we need a complex network of different cell types and signals to learn to recognize pathogens, a single-celled microbe has all the equipment necessary to learn the same lesson on its own. But how did microbes develop these abilities? Ever since microbiologists began discovering CRISPR-Cas systems in different species, Koonin and his colleagues have been reconstructing the systems’ evolution. CRISPR-Cas systems use a huge number of different enzymes, but all of them have one enzyme in common, called Cas1. The job of this universal enzyme is to grab incoming virus DNA and insert it in CRISPR spacers. Recently, Koonin and his colleagues discovered what may be the origin of Cas1 enzymes.

Along with their own genes, microbes carry stretches of DNA called mobile elements that act like parasites. The mobile elements contain genes for enzymes that exist solely to make new copies of their own DNA, cut open their host’s genome, and insert the new copy. Sometimes mobile elements can jump from one host to another, either by hitching a ride with a virus or by other means, and spread through their new host’s genome.

1. Dana K. Howe: Muller's Ratchet and compensatory mutation in Caenorhabditis briggsae mitochondrial genome evolution (2008)
2. Eugene V. Koonin: Inevitability of Genetic Parasites (September 26, 2016)
3. Eugene V. Koonin: Inevitability of the emergence and persistence of genetic parasites caused by evolutionary instability of parasite-free states (December 4, 2017)
4. Gregory P. Fournier: Ancient horizontal gene transfer and the last common ancestors (April 22, 2015)
5. Aude Bernheim: The pan-immune system of bacteria: antiviral defence as a community resource (November 6, 2019)
6. Felix Broecker: Evolution of Immune Systems From Viruses and Transposable Elements (January 29, 2019)
7. Eugene V. Koonin: Evolution of adaptive immunity from transposable elements combined with innate immune systems (December 2014)
8. Eugene V. Koonin: The LUCA and its complex virome (July 14, 2020)
9. Luciano Marraffini: (Ph)ighting phages – how bacteria resist their parasites (February 13, 2020)
10. Simon J. Labrie: Bacteriophage resistance mechanisms (March 29, 2010)
11. Anna Lopatina: Abortive Infection: Bacterial Suicide as an Antiviral Immune Strategy (September 29, 2020)
12. Aryn A. Price et al.: Harnessing the Prokaryotic Adaptive Immune System as a Eukaryotic Antiviral Defense (February 3, 2016)
13. Devashish Rath: The CRISPR-Cas immune system: Biology, mechanisms and applications (October 2015)
14. Dipali G. Sashital: The Cas4-Cas1-Cas2 complex mediates precise prespacer processing during CRISPR adaptation (April 25, 2019)
15. Simon A. Jackson: CRISPR-Cas: Adapting to change (April 7, 2017)
16. M. P. Terns et al.: Three CRISPR-Cas immune effector complexes coexist in Pyrococcus furiosus (June 2015)
17. Carl Zimmer: Breakthrough DNA Editor Born of Bacteria (February 6, 2015)
18. Jordana Cepelewicz: Biodiversity Alters Strategies of Bacterial Evolution (January 6, 2020)
19. Tina Y. Liu: Chemistry of Class 1 CRISPR-Cas effectors: Binding, editing, and regulation (October 16, 2020)
20. Giedrius Gasiunas: Cas9–crRNA ribonucleoprotein complex mediates specific DNA cleavage for adaptive immunity in bacteria (September 4, 2012)
21. Samuel H. Sternberg et al.: Surveillance and Processing of Foreign DNA by the Escherichia coli CRISPR-Cas System (November 5, 2015)
23. Jennifer Doudna: CRISPR in Nature
24. D. Trasanidou: Keeping CRISPR in check: diverse mechanisms of phage-encoded anti-CRISPRs (May 11, 2019)



Last edited by Otangelo on Tue Sep 27, 2022 9:29 pm; edited 22 times in total

https://reasonandscience.catsboard.com

Koonin and his colleagues discovered that one group of mobile elements, called casposons, makes enzymes that are pretty much identical to Cas1. In a new paper in Nature Reviews Genetics, Koonin and Mart Krupovic of the Pasteur Institute in Paris argue that the CRISPR-Cas system got its start when mutations transformed casposons from enemies into friends. Their DNA-cutting enzymes became domesticated, taking on a new function: to store captured virus DNA as part of an immune defense. While CRISPR may have had a single origin, it has blossomed into a tremendous diversity of molecules. Koonin is convinced that viruses are responsible for this. Once they faced CRISPR’s powerful, precise defense, the viruses evolved evasions. Their genes changed sequence so that CRISPR couldn’t latch onto them easily. And the viruses also evolved molecules that could block the Cas enzymes. The microbes responded by evolving in their turn. They acquired new strategies for using CRISPR that the viruses couldn’t fight. Over many thousands of years, in other words, evolution behaved like a natural laboratory, coming up with new recipes for altering DNA. 17

Konstantin Severinov (2020): CRISPR-Cas are diverse (two classes, six types) prokaryotic adaptive immunity systems that protect cells from phages and other mobile genetic elements (MGEs). They consist of CRISPR arrays and CRISPR-associated cas genes. The total number of spacers in an array varies from one to several hundred. 22

Understanding CRISPR-Cas9
Imagine a company tasked with installing a security system in its headquarters, based on biometrics. Biometrics comes from the Greek words “bios” (life) and “metrikos” (measure). It involves implementing a system that analyzes the biological characteristics of people for identity verification or identification. To distinguish employees who are permitted to enter the building from those who are not welcome, there must first be data collection and storage of that information in a memory bank. Every time someone arrives at the building, they go through the security check, and the data they provide are compared to the data in the memory bank. If there is a match, the person is permitted to enter; otherwise, entry is denied.

Analogously, cells are capable of doing almost the same thing, with a few differences. They have an ingenious security-check system based on enemy recognition: they build a sophisticated data bank of past invaders and employ it to recognize future invasions and annihilate them.
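The logic of such a security check, enroll once, then verify every arrival against the stored records, can be sketched in a few lines. All names and "signatures" below are invented for illustration:

```python
# Minimal sketch of a memory-bank security check: signatures are enrolled
# once, and every later arrival is compared against the stored records.
# The class name and example signatures are invented for illustration.

class SecurityCheck:
    def __init__(self):
        self.memory_bank = set()  # stored signatures of known entries

    def enroll(self, signature):
        """Data collection: store a signature in the memory bank."""
        self.memory_bank.add(signature)

    def verify(self, signature):
        """Compare a presented signature to the memory bank."""
        return signature in self.memory_bank

check = SecurityCheck()
check.enroll("employee-042")
print(check.verify("employee-042"))  # True: match found, entry permitted
print(check.verify("stranger-007"))  # False: no match, entry denied
```

Note one inversion in the cellular version: for CRISPR, a match in the memory bank marks the arrival as a known enemy to be destroyed, not a guest to be admitted.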

A roadmap of CRISPR-Cas adaptation and defense. 
In the example illustrated, a bacterial cell is infected by a bacteriophage. The first stage of CRISPR-Cas defense is CRISPR adaptation. This involves the incorporation of small fragments of DNA from the invader into the host CRISPR array. This forms a genetic “memory” of the infection. The memories are stored as spacers (colored squares) between repeat sequences (R), and new spacers are added at the leader-proximal (L) end of the array. The Cas1 and Cas2 proteins, encoded within the cas gene operon, form a Cas1-Cas2 complex (blue)—the “workhorse” of CRISPR adaptation. In this example, the Cas1-Cas2 complex catalyzes the addition of a spacer from the phage genome (purple) into the CRISPR array. The second stage of CRISPR-Cas defense involves transcription of the CRISPR array and subsequent processing of the precursor transcript to generate CRISPR RNAs (crRNAs). Each crRNA contains a single spacer unit that is typically flanked by parts of the adjoining repeat sequences (gray). Individual crRNAs assemble with Cas effector proteins (light green) to form crRNA-effector complexes. The crRNA-effector complexes catalyze the sequence-specific recognition and destruction of foreign DNA and/or RNA elements. This process is known as interference.


Dipali G Sashital (2019): Within this system, the CRISPR locus is programmed with ‘spacer’ sequences that are derived from foreign DNA and serve as a record of prior infection events. 14 CRISPR-Cas9 in a bacterium acts as an adaptive immune response: the cell remembers when a virus has infected it in the past, keeps a little of the viral DNA stored in a memory bank (the spacers), and uses it so that if the same species of virus infects the cell again, the injected DNA can be compared to the sequences in the data bank, recognized, responded to quickly and effectively, and destroyed.

Devashish Rath (2015): The CRISPR-Cas mediated defense process can be divided into three stages. 

1. Adaptation or spacer acquisition, where a short fragment of invading DNA is inserted into the CRISPR locus for future recognition of that invader;
2. Expression, in which the system gets ready for action by expressing the cas genes and transcribing the CRISPR into a long precursor CRISPR RNA (pre-crRNA). The pre-crRNA is subsequently processed into mature crRNA by Cas proteins and accessory factors.
3. Target interference where these effector complexes vigilantly scan for and degrade invading genetic material previously identified by—and integrated into—the CRISPR-Cas system 13
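The three stages above can be caricatured as a tiny pipeline. This is an illustrative sketch only: sequences are plain strings, the spacer length and all function names are invented, and only the adapt/express/interfere control flow is modeled.

```python
# Toy model of the three CRISPR-Cas stages: adaptation, expression,
# interference. Biochemical details (PAMs, processing, Cas complexes)
# are deliberately omitted; only the control flow is shown.

SPACER_LEN = 8  # arbitrary toy length, not the biological 32 bp

def adapt(crispr_array, invader_dna):
    """Stage 1: insert a fragment of the invader into the CRISPR array."""
    crispr_array.append(invader_dna[:SPACER_LEN])

def express(crispr_array):
    """Stage 2: transcribe and process the array into mature crRNA guides."""
    return list(crispr_array)  # each spacer becomes one guide

def interfere(crRNAs, incoming_dna):
    """Stage 3: scan incoming DNA; report a match for degradation."""
    return any(guide in incoming_dna for guide in crRNAs)

array = []                                # naive cell: empty memory
phage = "ATGCCGTAAGGCTTACCGGA"
print(interfere(express(array), phage))   # False: first infection not recognized
adapt(array, phage)                       # a survivor records a spacer
print(interfere(express(array), phage))   # True: reinfection is recognized
```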

CRISPR memory update
https://www.youtube.com/watch?v=piHaA1nBsDY

1. Naive adaptation
- The MGE enters RecBCD and one strand is secured by the Chi site
- Cas1-Cas2 is recruited to the PAM site
2. Prespacer trimming
- DNA Pol III trims the prespacer
- DNA Pol III cuts the PAM off the prespacer, which is then integrated at the S-site.
3. Spacer integration
4. Repeat duplication
- The repeat is split into two single strands; one strand is used to form a new repeat, with the spacer separating the two. DNA polymerase then adds the complementary strand to both single-stranded repeats, restoring double-stranded repeats. The new spacer is in place.
5. Interference
- A spacer is taken to form a crRNA and inserted in CASCADE
6. Primed adaptation
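In data-structure terms, steps 3 and 4 above amount to inserting the new spacer at the leader-proximal end of the array and duplicating the adjoining repeat. A minimal sketch, in which "L" stands for the leader, "R" for a repeat unit, and the spacer names are invented for illustration:

```python
# Toy CRISPR array: a leader followed by alternating repeats (R) and
# spacers. New spacers are added at the leader-proximal end, and the
# first repeat is duplicated so the array stays R-spacer-R-...-R.

def integrate_spacer(array, spacer):
    """Insert a new spacer just after the leader, duplicating the repeat."""
    leader, rest = array[0], array[1:]
    # rest begins with a repeat; the new spacer gets a fresh copy of it
    return [leader, rest[0], spacer] + rest

array = ["L", "R"]                     # empty array: leader plus one repeat
array = integrate_spacer(array, "s1")  # ["L", "R", "s1", "R"]
array = integrate_spacer(array, "s2")  # ["L", "R", "s2", "R", "s1", "R"]
print(array)                           # newest spacer sits nearest the leader
```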

Comment: This is a logical sequence with a clear end goal: to protect bacteria from invading MGEs. Foresight is necessary to instantiate such an interdependent, integrated system, which requires all sequential operating steps to be fully implemented, instantiated, and operational, with each player, each enzyme and protein, operating in a coordinated fashion, in a joint venture, in a precisely orchestrated manner. If one member of the chain is missing, or not apt to exercise its precise function accurately, the entire system breaks down.

CRISPR adaptation

Konstantin Severinov (2020): CRISPR adaptation is a multistep process that comprises selection, generation, and incorporation of prespacers into arrays. It is a highly regulated process. 22

Devashish Rath (2015): The adaptation phase provides the genetic memory that is a prerequisite for the subsequent expression and interference phases that neutralize the re-invading nucleic acids. Conceptually, the process can be divided into the following steps:

1. Protospacer selection from the injected foreign phage mobile genetic element (capture from the invading phage DNA) by the Cas1-Cas2 protein complex (in many cases associated with other protein complexes),
2. Generation of spacer material followed by transport to a CRISPR array,
3. Integration of the spacer into the CRISPR array followed by DNA gap filling to duplicate the associated repeat. 


Naïve CRISPR adaptation
Acquisition of spacers from MGEs that are not already cataloged in host CRISPRs is termed naïve CRISPR adaptation. For naïve CRISPR adaptation, prespacer substrates are generated from foreign material and loaded onto Cas1-Cas2. The main known source of these precursors is the host RecBCD complex.

Stalled replication forks that occur during DNA replication can result in double-strand breaks (DSBs), which are repaired through RecBCD-mediated unwinding and degradation of the dsDNA ends back to the nearest Chi sites (In Escherichia coli, acquisition of new spacers largely depends on RecBCD-mediated processing of double-stranded DNA breaks occurring primarily at replication forks, and that the preference for foreign DNA is achieved through the higher density of Chi sites on the self chromosome, in combination with the higher number of forks on the foreign DNA. This explains the strong preference to acquire spacers both from high copy plasmids and from phages). During this repair process, RecBCD produces single-stranded DNA (ssDNA) fragments, which have been proposed to subsequently anneal to form partially duplexed prespacer substrates for Cas1-Cas2. The greater number of active origins of replication and the paucity of Chi sites on MGEs, compared with the host chromosome, bias naïve adaptation toward foreign DNA. Furthermore, RecBCD recognizes the unprotected dsDNA ends that are commonly present in phage genomes upon injection or before packaging, which theoretically provides an additional phage-specific source of naïve prespacer substrates. Despite the role of RecBCD in substrate generation, naïve CRISPR adaptation can occur in its absence, albeit with reduced bias toward foreign DNA. Thus, events other than double-strand breaks (DSBs) might also stimulate naïve CRISPR adaptation, such as R-loops that occur during plasmid replication, lagging ends of incoming conjugative elements, and even CRISPR-Cas–mediated spacer integration events themselves. Furthermore, we do not know whether all CRISPR-Cas systems have an intrinsic bias toward production of prespacers from foreign DNA. 
In high-throughput studies of native systems, the frequency of acquisition of spacers from host genomes is likely to be underestimated, because the autoimmunity resulting from self-targeting spacers means that these genotypes are typically lethal. For example, in the S. thermophilus type II-A system, spacer acquisition appears biased toward MGEs, yet nuclease-deficient Cas9 fails to discriminate between host and foreign DNA. It is unknown whether CRISPR adaptation in type II systems is reliant on DNA break repair. Further studies in a range of host systems are required to clarify how diverse CRISPR-Cas systems balance the requirement for naïve production of prespacers from MGEs against the risk of acquiring spacers from host DNA.
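The self/non-self bias described above is statistical: RecBCD degrades DNA from a break back to the nearest Chi site, so sparse Chi sites and many replication forks mean more prespacer material. A back-of-the-envelope sketch; the densities and fork counts below are invented for illustration, not measured values:

```python
# Toy model of Chi-site-based self/non-self discrimination.
# RecBCD degrades from a double-strand break until it hits a Chi site,
# so expected prespacer yield scales with the average Chi-free run
# length times the number of active replication forks.

def expected_prespacer_yield(chi_density_per_kb, active_forks):
    """Mean Chi-free run length (kb) times number of replication forks."""
    mean_run_kb = 1.0 / chi_density_per_kb
    return mean_run_kb * active_forks

# Host chromosome: dense Chi sites, few replication forks (made-up numbers).
host = expected_prespacer_yield(chi_density_per_kb=0.2, active_forks=2)
# High-copy plasmid or phage: sparse Chi sites, many forks (made-up numbers).
foreign = expected_prespacer_yield(chi_density_per_kb=0.02, active_forks=20)

print(f"host yield: {host:.0f}, foreign yield: {foreign:.0f}")
print(f"bias toward foreign DNA: {foreign / host:.0f}x")
```

Even with these invented numbers, the point survives: the bias toward foreign DNA needs no explicit self/non-self label, only a difference in Chi density and fork count.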

22. Konstantin Severinov: Detection of CRISPR adaptation (February 3, 2020)




This book is about one of the deepest unsolved mysteries: the immensely difficult puzzle of the origin of life. Watson and Crick discovered the structure of the DNA molecule in the early fifties, and Miller & Urey performed their chemical experiments in 1953, which started the modern era of investigation of the origin of life. Huge sums of money have been spent, and incalculable man-hours invested, to solve the mystery of life's origin, but this effort has not brought clear answers about the trajectory from non-life to life by unguided natural means. Investigators have come up with numerous hypotheses, but still lack even a clear model of that trajectory. In popular media, the impression is nourished that solving the problem of the origin of life is just a matter of time. "Science is working on it," so they tell us. The truth is that there is widespread ignorance of the size of the problem, not acknowledged beyond the narrow circle of specialists.

In this work, Otangelo Grasso presents the results of many years of investigation into biochemistry and the origin of life, arguing that we can advance Paley's watchmaker argument to the factory maker argument:

Cells have a codified description of themselves, stored in digital form in genes, and have the machinery to transform that blueprint, through information transfer from genotype to phenotype, into an identical representation in analog 3D form: the physical 'reality' of that description. The cause leading to the functionality of a machine or factory has only ever been found in the mind of an engineer, and nowhere else.

This book is divided into the following main sections: Cells are chemical factories (Chapter 1), setting up a framework to investigate the origin of life (The Methods [2, 3]), the prebiotic origin of the four building blocks of life (The Materials [4-7]), the origin of biological information storage, transmission, and systems of expression (8), the origin of the virus world (9), and some notes on why Intelligent Design is the most plausible explanation for the origin of life and viruses (10).

https://pdfs.semanticscholar.org/52ab/5a2c76cee9dbba55b80f3db64671a80b6e88.pdf
