ElShamah Ministries: Defending the Christian Worldview and Creationism

Otangelo Grasso: This is my personal virtual library, where I collect information which, in my view, leads to the Christian faith, creationism, and Intelligent Design as the best explanation of the origin of the physical Universe, life, and biodiversity.



Refuting Darwin, confirming design


Fri Oct 21, 2022 2:21 pm


Refuting Darwin, confirming design

https://reasonandscience.catsboard.com/t3264-refuting-darwin-confirming-design

Unraveling the real mechanisms giving rise to biological adaptation, development, complex organismal forms, anatomical novelty,  and biodiversity

Introduction

Making sense of the vast diversity of life remains, together with the origin of life, one of the greatest intellectual challenges, if not the greatest. The question of whether evolution is true is more than a scientific one. It is a battle that goes beyond science: a culture war between naturalism/strong atheism and creationism/theism. If the literal interpretation of the Genesis account in the Bible is true, then Darwin's Theory of Evolution is false, and vice-versa.

Frank Zindler, President of American Atheists,  in 1996: 
The most devastating thing though that biology did to Christianity was the discovery of biological evolution. Now that we know that Adam and Eve never were real people the central myth of Christianity is destroyed. If there never was an Adam and Eve there never was an original sin. If there never was an original sin there is no need of salvation. If there is no need of salvation there is no need of a Savior. And I submit that puts Jesus, historical or otherwise, into the ranks of the unemployed. I think that evolution is absolutely the death knell of Christianity.

Conservative Protestants in the 1920s also saw themselves in the midst of a great culture war, with the Bible (depicted here as the Rock of Gibraltar) coming under fierce attack by “battle-ships of unbelief.”

So what is it? Leaving the Bible aside, the dispute is not one of religion versus science, but of case-adequate inferences based on scientific evidence versus unwarranted conclusions. The big question is: Is evolution supported by science, as the scientific establishment and the consensus among professional biologists advocate, calling Darwin's theory and its recently modified versions an undisputable scientific fact, or does the data point in another direction?

But we can ask a deeper question and dissect the issue to its core: Which of the two has more creative power: design or non-design? Intelligence or non-intelligence? Agency or non-agency?


Claim: Herbert Spencer: Those who cavalierly reject the Theory of Evolution as not being adequately supported by facts, seem to forget that their own theory is supported by no facts at all. Like the majority of men who are born to a given belief, they demand the most rigorous proof of any adverse belief, but assume that their own needs none.

Richard Dawkins: "It is absolutely safe to say that, if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I'd rather not consider that)."1

John Joe McFadden (2008):  Quite simply, Darwin and Wallace destroyed the strongest evidence left in the 19th century for the existence of a deity. Biologists have since used Darwin's theory to make sense of the natural world. Contrary to the arguments of creationists, evolution is no longer just a theory. It is as much a fact as gravity or erosion. 23

Reply: Opinions like Richard Dawkins's have contributed to stigmatizing the proposition of intelligent design as pseudo-science, or as unscientific altogether. But is that justified? Many books have been published on the subject, and articles are frequently written defending both positions. Those who advocate in favor often point to the fact that the majority of biologists are on their side and argue that because there is widespread consensus, it must be true. Sailing against the wind is always energy-consuming and difficult. Why do it, then? Because the truth matters. Many today lose faith in a creator because they are not educated enough to make up their minds based on the scientific evidence, and allow themselves to be influenced by those who claim that the evidence favors evolution. I have investigated the topic for many years and have permitted the evidence to lead me wherever it leads. I do not intend this book to become one more on the shelf, among many other anti-evolution books that point out why evolution fails.
Maybe you think I am one more lonely zealot, deluded by blind belief in what Genesis says and swimming against the current of contemporary scientific advances and the consensus among professional biologists in the field, so that what I have to say is irrelevant and can easily be dismissed. I am, however, not alone. My findings parallel what the most prestigious investigators in the field are saying. Of course, they will not conclude in the end: ..... and therefore, God!!... since they are committed to philosophical naturalism (God cannot be permitted to get a foot in the door; after all, the proposition "God did it" is unscientific and cannot be tested), but they do admit that the traditional evolutionary view has problems. And that is already a big deal.

In November 2016, there was a three-day conference in London, a scientific discussion meeting organized by the Royal Society: New trends in evolutionary biology: biological, philosophical, and social science perspectives. On the website, they wrote: Developments in evolutionary biology and adjacent fields have produced calls for revision of the standard theory of evolution 2

Paul Nelson and David Klinghoffer reported about the meeting: ID proponents point to a chasm that divides how evolution and its evidence are presented to the public, and how scientists themselves discuss it behind closed doors and in technical publications. This chasm has been well hidden from laypeople, yet it was clear to anyone who attended the Royal Society conference, as did a number of ID-friendly scientists. The opening presentation at the Royal Society by one of those world-class biologists, Austrian evolutionary theorist Gerd Müller, underscored exactly Meyer’s contention. Dr. Müller opened the meeting by discussing several of the fundamental "explanatory deficits" of “the modern synthesis,” that is, textbook neo-Darwinian theory. According to Müller, the as yet unsolved problems include those of explaining:

Phenotypic complexity (the origin of eyes, ears, and body plans, i.e., the anatomical and structural features of living creatures); Phenotypic novelty, i.e., the origin of new forms throughout the history of life (for example, the mammalian radiation some 66 million years ago, in which the major orders of mammals, such as cetaceans, bats, carnivores, enter the fossil record, or even more dramatically, the Cambrian explosion, with most animal body plans appearing more or less without antecedents); and finally: Non-gradual forms or modes of transition, where you see abrupt discontinuities in the fossil record between different types. As Müller has explained in a 2003 work (“On the Origin of Organismal Form,” with Stuart Newman), although “the neo-Darwinian paradigm still represents the central explanatory framework of evolution, as represented by recent textbooks” it “has no theory of the generative.” In other words, the neo-Darwinian mechanism of mutation and natural selection lacks the creative power to generate the novel anatomical traits and forms of life that have arisen during the history of life. Yet, as Müller noted, the neo-Darwinian theory continues to be presented to the public via textbooks as the canonical understanding of how new living forms arose. The conference did an excellent job of defining the problems that evolutionary theory has failed to solve, but it offered little, if anything, by way of new solutions to those longstanding fundamental problems. 3

While in the first chapters I will touch on the grounds on which natural selection and genetic drift fail to explain complex organismal form, and fail even to have predictive power, the focus of this book is to unravel and bring to light what science has uncovered in the last few years regarding the real mechanisms that define phenotypic complexity and architecture. It is a game-changer. It describes complexity that was neither imagined nor expected until a short while ago, and new layers of biological sophistication that go far beyond genetics, leading to a systems perspective in which all actors at the molecular, cell, tissue, organ, organ-system, organism, and ecological levels are taken into account, so as to reach a conclusion that does justice to the evidence and the facts.

As you will probably notice, since I am a machine designer by profession, my approach to investigating and making inferences based on the evidence seen in biological systems will be from an engineering standpoint. After studying biology, it has become clear to me that there are many parallels between the engineering principles applied in human-made artifacts and devices, like computers, hardware, software, information transmission systems, machines, robotics, automation, production lines, energy turbines, waste disposal mechanisms, recycling methods, and factories, and the inner workings of the cell and of multicellular organisms. Cells are not merely analogous to human-made chemical factories full of machines; they are chemical factories in a literal sense.

What is the best scientific method and approach to investigate the origin of species? 

Many books are dedicated to providing positive evidence for evolution and resort to a variegated toolbox to do so. LibreTexts, for example, elucidates: "Fossils are a window into the past. They provide clear evidence that evolution has occurred."4 Furthermore, they mention comparative anatomy, homologous structures, comparative embryology, vestigial structures, comparative genomics, and evidence from biogeography. A very common method is to infer evolution from phylogenetic comparisons: measuring physical features and similarities between organisms, phylogenetic trees, the reconstruction of organismal relationships based on gene and protein trees, nested hierarchies, cladograms, and evolutionary physiology. Seeing all these different faculties, one can easily be persuaded that these are adequate tools permitting secure, case-adequate conclusions and inferences that portray the real picture of the historical facts. But is it so? How can we be certain? Observe how the word "compare" runs like a red thread through almost all of these disciplines. Is doing phylogenetic and physiological comparisons and drawing phylogenetic trees the right approach?
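
To make concrete what such comparison-based inference amounts to, here is a deliberately toy sketch: pairwise similarity between aligned sequences is scored, and the most similar pair would be drawn as nearest neighbors on a tree. The species names and sequences below are invented for illustration; real phylogenetics pipelines use alignment algorithms and statistical models far beyond this.

```python
from itertools import combinations

# Toy illustration (not a real phylogenetics pipeline): score pairwise
# similarity between aligned sequences and pick the most similar pair.
# The species names and sequences are made up for demonstration.
sequences = {
    "species_A": "ATGGCCATT",
    "species_B": "ATGGCCGTT",
    "species_C": "TTGACCATG",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Score every pair; the pair with the highest identity would be drawn
# as nearest neighbors on a phylogenetic tree.
scores = {
    (n1, n2): identity(sequences[n1], sequences[n2])
    for n1, n2 in combinations(sequences, 2)
}
closest_pair = max(scores, key=scores.get)
print(closest_pair)  # ('species_A', 'species_B')
```

The point of the sketch is only to show that the method is, at bottom, a similarity comparison; whether similarity licenses an inference of common descent is exactly the question at issue.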

Why is the emergence of new body forms, cell shapes, organs, and functions not analyzed with regard to function and interdependence, rather than just phylogeny? Cells of multicellular organisms work in an interdependent fashion. A Merkel sensory cell has a function only when interconnected with the brain. A taste bud cell only functions if connected to the nerve that leads to the right area of the cortex to produce the sensation of taste. This easily leads to the conclusion that, back in the tree of life, there had to be a crucial point where the development from unicellular to multicellular life, and the further branching that produced new phyla and traits, required the emergence of new genes, instructing systems to change, able to produce all at once new body limbs, forms, and organs that are interdependent. That would also mean the addition or mutation of multiple genes, with multiple new instructions, all at once through Darwin's natural selection. A hard sell..... But since Darwin's theory cannot be questioned, genetic phylogeny comparisons continue to be the norm.


Almost a hundred years ago the outstanding American biologist E. B. Wilson wrote: "the key to every biological problem must finally be sought in the cell; for every living organism is, or at some time has been, a cell." This position remains unshakeable, despite the impressive successes of molecular biology and genetics.

In my understanding, the key questions will be answered at the molecular level. Whatever method is used to investigate the origin of biodiversity, the key answers have to be found by investigating the cell.

M.J. BEHE: (1987):  In order to say that some function is understood, every relevant step in the process must be elucidated. The relevant steps in biological processes occur ultimately at the molecular level, so a satisfactory explanation of a biological phenomenon such as sight, digestion, or immunity, must include a molecular explanation. It is no longer sufficient, now that the black box of vision has been opened, for an ‘evolutionary explanation’ of that power to invoke only the anatomical structures of whole eyes, as Darwin did in the 19th century and as most popularizers of evolution continue to do today. Anatomy is, quite simply, irrelevant. So is the fossil record. It does not matter whether or not the fossil record is consistent with evolutionary theory, any more than it mattered in physics that Newton’s theory was consistent with everyday experience. The fossil record has nothing to tell us about, say, whether or how the interactions of 11-cis-retinal with rhodopsin, transducin, and phosphodiesterase could have developed step-by-step. Neither do the patterns of biogeography matter, or of population genetics, or the explanations that evolutionary theory has given for rudimentary organs or species abundance.5

M. W. Kirschner (2005): To understand novelty in evolution, we need to understand organisms down to their individual building blocks, down to the workings of their deepest components, for these are what undergo change.6
 
Is a gene-centered view of evolution still warranted?

M. Lewis: In the 1960s and 1970s, a scientific shift occurred and evolutionary biologists began viewing genes as the fundamental unit of selection. Noted evolutionary theorist Richard Dawkins wrote the revolutionary, and now classic, book The Selfish Gene in 1976, explaining the new genetic view and making it more accessible to lay people. The controversy between purist gene selectionism and the Multilevel Selection Theory (MST) may seem theoretical, but the reasoning behind the two perspectives profoundly changes the way scientists understand evolutionary changes. The claim now becomes a question: Survival of the fittest what? Gene? Organism? Or group? The gene selectionist perspective proposed by Dawkins and others is the predominant view among modern evolutionary biologists. The main premise relies on the concept of the gene as being the ultimate, fundamental unit of natural selection. By the basic principles of natural selection, genes that are more successful at replicating themselves will, by default, become more numerous in the population. Therefore, a gene that happens to increase the general fitness of the individual in which it is located will be more likely to be passed down to the next generation.9

Are genes alone, or various integrated players on intra and extra-cellular systems levels responsible for defining phenotype, and organismal architecture? David Haig (2012): Gene selectionism is the conceptual framework that views genes as the ultimate beneficiaries of adaptations and organisms or groups as the means for genes’ ends. Rival conceptual frameworks exist. Multi-level selection theory views genes as the lowest level of a nested hierarchy in which each level is subject to selection and each level can be the beneficiary of adaptations. Developmental systems theory similarly denies a privileged role for genes in development and evolution. In this framework, many things other than genes are inherited and many things other than genes have a causal role in development. It is the entire developmental system, including developmental resources of the environment, that reconstructs itself from generation to generation. 10

Matt Ridley (2016): The gene-centered view of evolution that Dawkins championed and crystallized is now central both to evolutionary theorizing and to lay commentaries on natural history such as wildlife documentaries. Genes that cause birds and bees to breed survive at the expense of other genes. No other explanation makes sense, although some insist that there are other ways to tell the story. What stood out was Dawkins’s radical insistence that the digital information in a gene is effectively immortal and must be the primary unit of selection. No other unit shows such persistence — not chromosomes, not individuals, not groups and not species. 11

The extended evolutionary synthesis

Paul C. W. Davies (2013): The algorithm for building an organism is not only stored in a linear digital sequence (tape), but also in the current state of the entire system (e.g. epigenetic factors such as the level of gene expression, post-translational modifications of proteins, methylation patterns, chromatin architecture, nucleosome distribution, cellular phenotype and environmental context). The algorithm itself is therefore highly delocalized, distributed inextricably throughout the very physical system whose dynamics it encodes.12

The gene-centric view has come under increasing pressure and scrutiny in recent times. Kevin Laland (2014): Mainstream evolutionary theory has come to focus almost exclusively on genetic inheritance and processes that change gene frequencies. Yet new data pouring out of adjacent fields are starting to undermine this narrow stance. An alternative vision is beginning to crystallize, in which the processes by which organisms grow and develop are recognized as causes of evolution. We have worked intensively to develop a broader framework, termed the extended evolutionary synthesis (EES), and to flesh out its structure, assumptions and predictions. In essence, this synthesis maintains that important drivers of evolution, ones that cannot be reduced to genes, must be woven into the very fabric of evolutionary theory. We hold that organisms are constructed in development, not simply ‘programmed’ to develop by genes. The number of biologists calling for change in how evolution is conceptualized is growing rapidly. Strong support comes from allied disciplines, particularly developmental biology, but also genomics, epigenetics, ecology and social science. We contend that evolutionary biology needs revision if it is to benefit fully from these other disciplines. The data supporting our position gets stronger every day. Yet the mere mention of the EES often evokes an emotional, even hostile, reaction among evolutionary biologists. Too often, vital discussions descend into acrimony, with accusations of muddle or misrepresentation. Perhaps haunted by the spectre of intelligent design, evolutionary biologists wish to show a united front to those hostile to science. Some might fear that they will receive less funding and recognition if outsiders — such as physiologists or developmental biologists — flood into their field.
However, another factor is more important: many conventional evolutionary biologists study the processes that we claim are neglected, but they comprehend them very differently.

In our view, this ‘gene-centric’ focus fails to capture the full gamut of processes that direct evolution. Missing pieces include how physical development influences the generation of variation (developmental bias); how the environment directly shapes organisms’ traits (plasticity); how organisms modify environments (niche construction); and how organisms transmit more than genes across generations (extra-genetic inheritance).  Valuable insight into the causes of adaptation and the appearance of new traits comes from the field of evolutionary developmental biology (‘evo-devo’). Particularly thorny is the observation that much variation is not random because developmental processes generate certain forms more readily than others. ‘Extra-genetic inheritance’ includes the transmission of epigenetic marks (chemical changes that alter DNA expression but not the underlying sequence) that influence fertility, longevity and disease resistance across taxa. It also encompasses those structures and altered conditions that organisms leave to their descendants through their niche construction — from beavers’ dams to worm-processed soils.13

Gerd B. Müller (2017): These examples of conceptual change in various domains of evolutionary biology represent only a condensed segment of the advances made since the inception of the MS theory some 80 years ago. Relatively minor attention has been paid to the fact that many of these concepts, which are in full use today, sometimes contradict or expand central tenets of the MS theory. Given proper attention, these conceptual expansions force us to consider what they mean for our present understanding of evolution. Obviously, several of the cornerstones of the traditional evolutionary framework need to be revised and new components incorporated into a common theoretical structure.
Although today's organismal systems biology is mostly rooted in biophysics and biological function, its endeavors are profoundly integrative, aiming at multiscale and multilevel explanations of organismal properties and their evolution. Instead of chance variation in DNA composition, evolving developmental interactions account for the specificities of phenotypic construction.14

Qiaoying Lu (2017): Advocates of an ‘extended evolutionary synthesis’ have claimed that standard evolutionary theory fails to accommodate epigenetic inheritance. The opponents of the extended synthesis argue that the evidence for epigenetic inheritance causing adaptive evolution in nature is insufficient. We argue that the evolutionary gene, when being materialized, need not be restricted to nucleic acids but can encompass other heritable units such as epialleles. If the evolutionary gene is understood more broadly, and the notions of environment and phenotype are defined accordingly, current evolutionary theory does not require a major conceptual change in order to incorporate the mechanisms of epigenetic inheritance.15

Comment: I am convinced that the gene-centric view is false. Epigenetic mechanisms, in a joint venture with genetic information, are responsible for phenotypes and biocomplexity. Evolutionary theory has been, and can be further, stretched to incorporate new scientific findings, and it can still be claimed (as has already been done): evolution did it. Whether this is a warranted claim is the theme of this book. I am convinced that the reader, after "digesting" the information presented here, will come to the conclusion that a conceptual change is in fact on the table - refuting Darwin, confirming design.

Analyzing the origin of biological form from an engineering perspective

1. Cells are marvelous factories that display technologically advanced solutions. They use information and codes to instruct how to make and operate things: highly specialized machines to make energy turbines, molecular machines to make their own materials (the basic building blocks of life), and mechanisms to self-replicate. Minute tolerances are involved in their production and assembly, and exquisite precision is required in the synchronization of their operation.
2. Engineering comes from the Latin word ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise, invent". Science has discovered that cells operate based on engineering principles, such as integral control and robustness, implemented in diverse intracellular systems. As such, cells display inventions and innovations superior to ours, and new scientific fields, like biomimetics, take advantage of this.
3. Engineering requires engineers, who use their intelligence to invent and employ superior technological solutions for complex functions. Therefore, the best explanation for living cells is an intelligent designer.

Marvelous Factories: The Ingenious Cells

Cells, the building blocks of life,
A world of wonders, beyond our sight,
Factories so small, yet so grand,
With intricacies we can't understand.

They use codes and information,
To operate with precise coordination,
Molecular machines, oh, so fine,
Creating materials, a design divine.

Energy turbines, highly specialized,
Self-replicating, they're so prized,
Ingenium, the Latin word for "cleverness",
Cells display it with flawless finesse.

Engineering, the art of invention,
Cells are masters, beyond our comprehension,
Robust and integral, their control,
Operating with perfection, a wondrous role.

Science reveals their engineering feats,
Innovations beyond our wildest treats,
Biomimetics, a field so new,
Inspired by cells, it's awe-inspiring too.

For engineering requires a mind,
An intelligent designer, we can find,
Cells are evidence of superior creation,
With precision and complexity, beyond imagination.

So let's marvel at these factories small,
Ingenious cells, the greatest of all,
A testament to design, so divine,
A masterpiece of intelligence, a gift so fine.

What Have the Principles of Engineering Taught Us about Biological Systems? 
Engineering principles such as integral control and robustness were found to be implemented in diverse biological systems. Nature has so far proved to be a superior inventor and innovator over us. While it is fruitful to comprehend biological complexity in terms of engineering principles, perhaps a fascinating question in the near future would be ‘‘what can biological systems teach us about engineering (and physics and mathematics)?’’

While biological systems appear to be ad hoc in many ways, the more we begin to understand them, the more we begin to see engineering principles of abstraction, modularity, redundancy, self-diagnosis, and hierarchy. By viewing seemingly random biological design ‘‘decisions’’ through an engineering lens, we have found powerful patterns, intricate mechanical mechanisms, and evolved modularity.

Biology is transforming engineering, as evidenced by the new discipline of Biologically Inspired Engineering, which seeks to leverage biological principles to develop new engineering innovations.

Natural designs are simple, functional, and remarkably elegant. Biology is a great source of innovative design inspiration. By examining the structure, function, growth, origin, evolution, and distribution of living entities, biology contributes a whole different set of tools and ideas that a design engineer wouldn't otherwise have. Biology has greatly influenced engineering. The intriguing and awesome achievements of the natural world have inspired engineering breakthroughs that many take for granted, such as airplanes, pacemakers, and Velcro. One cannot simply dismiss engineering breakthroughs utilizing biological organisms or phenomena as chance occurrences. 16

For a complete understanding of the biological processes that orchestrate adaptation to the environment and define with striking precision the development of body architecture (organismal development, cell and tissue shape, organization, body form, homeostasis, and responses to external cues and to attacks by invaders such as viruses), it is necessary to understand as many integrative elements of biological systems as possible. Complex pattern formation involves numerous intricate biomolecular mechanisms that lead to the superb formation of tissue structures. That includes providing information that gives mechanical cues directing intra- and extracellular shape changes and movements, at the level of individual cells but also of the tissue substratum as a whole. Answering the question of how cells, tissues, and organisms develop and take form precedes the question of WHETHER evolution by non-intelligent, unguided means provides the best, most compelling answers, and WHETHER evolutionary changes permit a purely blind primary macroevolutionary transition: the morphogenesis of an entire organism moving from one species to another at a first-degree speciation level, where novel features arise, like wings, eyes, ears, legs, and arms. The truth is that science is still far from having a substantive answer to that question. But what we do know permits us to come to informed conclusions.

Biodiversity and complex organismal architecture are explained by trillions of bits: incredible amounts of data far beyond our imagination. Instructions, complex codified specifications, information. Algorithms masterfully encoded in various genetic and sophisticated epigenetic languages and communication channels, and in various signaling networks. Neurotransmitters passing through nanotubes between cells; communication through vesicles and, amazingly, even light photons. Genes, and especially various striking epigenetic signaling and bioelectric codes operating through signaling networks, provide cues to molecules and macromolecular complexes, and ingenious scaffold networks interpret and react in a variety of ways upon decoding and processing those instructions. Since signaling pathways work in an extraordinarily precise, synergistic, integrated way with the transcriptional regulatory network, and with complex short- and long-range cross-talk between cells, these instructions, crucial for advanced life forms, seem to be the result of preloaded information able to operate fully developed right from the beginning, since a step-wise implementation is, from an engineering standpoint, not possible. These superb information networks only operate in an integrated fashion and had to be "born", fully set up, right from the start. Codes, systems of rules that convert information such as letters and words into another form, and ciphers translating one language into another, are always traced back to an intelligent setup. What we see in biochemistry is incredibly complex instructional codified information being stored through the genetic code (codons) in a masterful information-storage molecule (DNA), encoded and transcribed (RNA polymerase), sent (mRNA), and decoded and translated (ribosome), along with epigenetic codes and languages and several signaling pathways.
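
The codon-based decoding just described can be illustrated with a minimal sketch. The translate function and the handful of codon assignments below are a deliberately tiny subset of the real 64-entry codon table, included only to show the decoding step, not to model real translation machinery.

```python
# Illustrative subset of the genetic code (the real table maps all 64
# codons to 20 amino acids plus stop signals).
CODON_TABLE = {
    "ATG": "M",                          # methionine, also the start signal
    "AAA": "K",                          # lysine
    "TTT": "F",                          # phenylalanine
    "GGC": "G",                          # glycine
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna: str) -> str:
    """Read a coding-strand DNA sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" = not in subset
        if amino_acid == "*":  # stop signal ends translation
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGAAATTTTAA"))  # ATG AAA TTT TAA -> "MKF"
```

The lookup-table structure is the point: the mapping from codon to amino acid is a convention carried by the translation machinery, not a chemical necessity of the DNA letters themselves.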
The morphogenesis of organismal structure and shape is driven by two groups of mechanisms: first, the various instructional codes and languages, using molecules that provide complex cues for action based on information conveyed through signaling; and second, force-generating molecules that are precisely directed through those signals and are responsible for fantastic cell morphogenesis. Blueprints, instructional information, and master plans that permit the striking autonomous assembly and control of complex proteins (molecular machines) and exquisite factory parks (interconnected cells) are always traced back to an intelligent source that made both for purposeful, specific goals. That brings us unambiguously to a superpowerful, unfathomably intelligent designer as the source. The core inference of this book is:

1. Biological sciences have discovered in recent decades that major morphological innovation, development, and body form are based on at least 19 different but integrative mechanisms: the interplay of genes with the gene regulatory network; transposons and retrotransposons; so-called junk DNA; gene splicing and recombination; at least 49 epigenetic informational code and language systems, some, like the glycan (sugar) code on the exterior side of the cell membrane, far more complex than the genetic code; post-translational modifications (PTMs) of histones; hormones; ion channels and electromagnetic fields that are not specified by nuclear DNA; membrane targets and patterns; cytoskeletal arrays; centrosomes; and inheritance by cell memory, which is not defined through DNA sequences alone.
2. These varied mechanisms orchestrate gene expression, generate cell types and patterns, perform various tasks essential to cell structure and development, carry out important tasks of organismal development, affect gene transcription, switch protein-coding genes on or off, determine the shape of the body, regulate genes, provide critical structural information and spatial coordinates for embryological development, influence the form of a developing organism and the arrangement of different cell types during embryogenesis, organize the body axes, and act as chemical messengers for development.
3. Neo-Darwinism and the Modern Synthesis have traditionally proposed a gene-centric view, a scientific metabiological proposal going back to Darwin's "On the Origin of Species", where natural selection was first proposed as the mechanism of biodiversity and, later, gene variation as defining how bodies are built and organized. Not even recently proposed alternatives, like the Third Way, neutral theory, inclusive fitness theory, saltationism, saltatory ontogeny, mutationism, genetic drift, or combined frameworks like the extended evolutionary synthesis (EES), do full justice to the organizational, physiological hierarchy and complexity that empirical science has come to discover and unravel.
4. Only a holistic view, namely structuralism and systems biology, takes into account in an integrated fashion all the influences that form cell phenotype and size, organismal development (evo-devo), growth, and inheritance, providing adequate descriptions of the recent scientific evidence. I call this view hereafter Systems Design (SD).

A Symphony of Biology: Unraveling Complexity

In the world of biology, a complex dance,
Of mechanisms and codes, a captivating trance,
From genes and networks, to retrotransposons,
Junk DNA, and splicing, a symphony of connections.

Epigenetic codes, a language so grand,
Glycan codes on membranes, complex and unplanned,
Post-transcriptional modifications abound,
Histones, hormones, with roles profound.

Ion channels and fields, beyond the genes,
Inheritance by cell memory, a tale unseen,
Membrane targets and patterns, shaping cells,
Cytoskeletal arrays, with stories to tell.

These mechanisms, they orchestrate,
Gene expression, they regulate,
Cell types and patterns, they generate,
Critical tasks, they fascinate.

Traditionally, a gene-centric view,
Darwin's proposal, old and true,
But recent discoveries, so vast and grand,
Challenging the theories, that once did stand.

Neo-Darwinism, the Modern Synthesis,
Only partial, with some hypotheses amiss,
Extended evolutionary synthesis, a new proposal,
But still, there's more, a holistic disposal.

Structuralism and systems biology,
A Systems Design, a novel analogy,
Taking into account, the complexity untold,
Of organizational hierarchy, and mysteries unfold.

So let's embrace the wonder, of science's quest,
Unraveling the mysteries, with zest,
For life's complexity, a marvel to behold,
In the world of biology, stories yet untold.

A major paradigm shift is more than due. Scientists would do well to start taking the design inference seriously. The same naturalistic philosophical framework that is applied to operational science (where it is justified) is applied to the historical sciences. Design by an intelligent agent as a possible cause of the origin of life and of species has traditionally been excluded a priori. The game was rigged before it was played. Consequently, only naturalistic candidates have been admitted, and they will be the winners, no matter how counterintuitive or unsupported by the evidence. Dobzhansky, a famous geneticist, wrote that "nothing in biology makes sense except in the light of evolution". Today, so I am convinced: "Nothing in biology makes sense except in the light of systems design."

Proponents of evolution claim that life diversified starting from a universal common ancestor that supposedly originated between 3.5 and 4 billion years ago. The trajectory that led to LUCA, the Last Universal Common Ancestor, is not known. But once a small population of this organism existed, it started to replicate with small variations and diversified, giving rise through evolutionary pressures to the three domains of life: bacteria, archaea, and eukaryotes. Carl Simpson (2011) describes what followed next: the origin of multicellularity (animals, fungi, plants), the origin of sex, obligatory social groups of animals (e.g. wasps), the origin of language and of different communication systems in the animal world, and the dominance of tool use and conscious planning (man).17

Traditionally, the view has been that small changes gave rise to big changes over millions and billions of years: micro leads to macroevolution, an uninterrupted process from less to more complex, with valleys in the trajectory and various mass extinction events on the road. Before we talk about what this book attempts to refute, however, we need to clarify what is meant by evolution. Stephen C. Meyer clarifies that evolution has various meanings.

What are the mechanisms of evolution? 

The mechanisms are: natural selection, mutations, gene flow, genetic drift, biased variation, movable elements, non-random mating (including sexual selection), and recombination.

What is fact and undisputed with regard to evolution:
1. Change over time; history of nature; any sequence of events in nature. 
2. Changes in the frequencies of alleles in the gene pool of a population. 
3. Limited common descent: the idea that particular groups of organisms have descended from a common ancestor. 
4. The mechanisms responsible for the change required to produce limited descent with modification, chiefly natural selection acting on random variations or mutations. 

What is not fact:
5. Universal common descent: the idea that all organisms have descended from a single common ancestor. 
6. The idea that all organisms have descended from common ancestors solely through unguided, unintelligent, purposeless, material processes such as natural selection acting on random variations or mutations; that the mechanisms of natural selection, random variation and mutation, and perhaps other similarly naturalistic mechanisms, are completely sufficient to account for the appearance of design in living organisms.34

I will focus on points 5 and 6. In dispute are the claims that diversification started with a universal common ancestor and that there is a tree of life. Maybe you are reading my book with the intention of knocking down the forthcoming arguments, believing there is no way to succeed in refuting Darwin's theory and its more recent versions. So here is a question for you: Is what you believe to be true about evolution actually true? Or do you just believe it is true because you were taught so, without ever investigating whether there is empirical proof? I challenge the claim that Darwin's tree of life ever existed. NEVER, in over 160 years since Darwin's "On the Origin of Species" was published, has even ONE among the hundreds of thousands, if not millions, of science papers provided ONE DEMONSTRATION, an empirically verifiable, replicable experiment, that any of the proposed evolutionary mechanisms, namely natural selection, mutations, gene flow, genetic drift, biased variation, movable elements, non-random mating (including sexual selection), or recombination, could produce a primary macroevolutionary transition beyond speciation and population differentiation. Not one science paper has EVER provided empirical proof or demonstration that one organism can morph into a completely different one with new body features, new functions, and an entirely new architecture. You will probably object that it took long periods of time, and that what we are currently able to observe warrants the extrapolation. But does it really? That is what this book is about.

Over the years, I have participated in many ID/evolution debates, in written form, in YouTube live debates, on message boards, and especially on social media. One of the common tactics adopted by ID proponents is to expose why evolutionary arguments have no empirical support, presenting evidence that falsifies the theory. That is certainly a warranted approach, and the evidence is plentiful. It is, however, not complete without demonstrating what replaces evolution, that is, what the actual mechanisms are. Doing that is a task that takes considerably more effort and investigation. The first time I found some indicative answers to that question was in Stephen Meyer's best-selling book Darwin's Doubt. I took that as a starting point to dig deeper in an attempt to find more answers. And I did, as I will show in this book. There are many different mechanisms involved in constructing complex organismal forms and body architecture. One reason this is rarely brought up as an argument is that it is an exceedingly difficult question to answer. It involves unraveling and describing very complex biological systems on both the micro and the macro scale, from cells to organs, organ systems, organisms, and even ecosystems. It requires knowledge of systems biology and of various fields of biology, in particular cell biology, biochemistry, genomics, development, and evolutionary biology. It is hard to find people with knowledge in that broader sense, especially when most scientists specialize and concentrate on investigating one specific open issue. Some invest a lifetime in understanding and describing one single protein, like topoisomerase, or the ribosome.

Before one can ask how things came to be and originated, one needs to understand how biological systems work, replicate, and transmit inheritance, and how their operations are controlled, adapt, and develop. Answering how things came to be depends on knowing how things are made, a question answered by operational science. Only then do we have the premise and the necessary knowledge and tools to ask the follow-up questions of origins. What adds difficulty is that there is not one way, but many different ways in which organisms develop from an embryo/zygote to an adult. Studying and understanding the development of just one organism takes tremendous effort and time, and is a challenging investigation.

The origin of organismal architecture through evolution is not even close to an established fact

A growing number of evolutionary biologists today admit their ignorance on the question of the origin of biological form, and openly confess their skepticism of, and disagreement with, evolution as an all-encompassing, satisfying explanation. For example:
 
R. DeSalle (2002): It remains a mystery how the undirected process of mutation, combined with natural selection, has resulted in the creation of thousands of new proteins with extraordinarily diverse and well-optimized functions. This problem is particularly acute for tightly integrated molecular systems that consist of many interacting parts . . . It is not clear how a new function for any protein might be selected for unless the other members of the complex are already present, creating a molecular version of the ancient evolutionary riddle of the chicken and the egg. 18 

E.K. Balon (2004): In the last 25 years, criticism of most theories advanced by Darwin and the neo-Darwinians has increased considerably, and so did their defense. Darwinism has become an ideology, while the most significant theories of Darwin were proven unsupportable.19

Hiroshi Akashi et.al.,(2006): Although mutation, genetic drift, and natural selection are well established as determinants of genome evolution, the importance (frequency and magnitude) of parameter fluctuations in molecular evolution is less understood. The magnitude, timescale, and genomic breadth of fluctuations in molecular evolutionary forces remain to be studied systematically. Such knowledge is critical for modeling the causes of molecular evolution and is necessary for designing tests of adaptive and deleterious evolution and methods for phylogenetic inference and ancestral state reconstruction. 20 

E.V. Koonin (2007): Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity.  The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin's original proposal, remains the dominant description of biological evolution.   The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable. 21

Stuart Pivar (2010): No coherent causative model of morphogenesis has ever been presented.22 

E. V. Koonin (2010): The summary of the state of affairs on the 150th anniversary of the Origin is somewhat shocking: in the post-genomic era, all major tenets of the Modern Synthesis are, if not outright overturned, replaced by a new and incomparably more complex vision of the key aspects of evolution. So, not to mince words, the Modern Synthesis is gone. The idea of evolution being driven primarily by infinitesimal heritable changes in the Darwinian tradition has become untenable. 23 

J. S. Turner (2010): Although I touch upon ID obliquely from time to time, I do so not because I endorse it, but because it is mostly unavoidable. ID theory is essentially warmed-over natural theology, but there is, at its core, a serious point that deserves serious attention. ID theory would like us to believe that some overarching intelligence guides the evolutionary process: to say the least, that is unlikely. Nevertheless, how design arises remains a very real problem in biology.24 

C. S. Roberts (2012): The three limitations of Darwin's theory concern the origin of DNA, the irreducible complexity of the cell, and the paucity of transitional species. Because of these limitations, the author predicts a paradigm shift away from evolution to an alternative explanation. The intellectual problem is that it remains a suspect theory >150 years after the publication of The Origin of Species (1859). 25

D. E. K. Ferrier (2016): There is uncertainty in our understanding of homeobox gene cluster evolution at present. This relates to our still rudimentary understanding of the dynamics of genome rearrangements and evolution over the evolutionary timescales being considered when we compare lineages from across the animal kingdom.26


1. Richard Dawkins:  IN SHORT: NONFICTION April 9, 1989
2. New trends in evolutionary biology: biological, philosophical and social science perspectives
3. Paul Nelson and David Klinghoffer: Scientists Confirm: Darwinism Is Broken December 13, 2016
4. Libretext: Evidence for Evolution
5. MICHAEL J. BEHE: Experimental Support for the Design Inference DECEMBER 27, 1987
6. Dr. Marc W. Kirschner: The Plausibility of Life: Resolving Darwin's Dilemma 2005
9. Michaela Lewis: Understanding Evolution: Gene Selection
10. David Haig: The strategic gene 30 March 2012
11. Matt Ridley: In retrospect: The Selfish Gene 27 January 2016
12. Paul C. W. Davies: The algorithmic origins of life 2013 Feb 6
13. Kevin Laland: Does evolutionary theory need a rethink? 08 October 2014
14. Gerd B. Müller: Why an extended evolutionary synthesis is necessary 18 August 2017
15. Qiaoying Lu: The Evolutionary Gene and the Extended Evolutionary Synthesis 12 May 2018
16. J.K. Stroble Nagel: Function-Based Biology Inspired Concept Generation  March 1st, 2010
17. Carl Simpson: The Miscellaneous Transitions in Evolution April 2011
18. R. DeSalle: Molecular Systematics and Evolution: Theory and Practice 2002
19. Eugene K Balon: Evolution by epigenesis: farewell to Darwinism, neo- and otherwise 2004 May-Aug
20. Hiroshi Akashi: Molecular Evolution in the Drosophila melanogaster Species Subgroup: Frequent Parameter Fluctuations on the Timescale of Molecular Divergence 2006 Mar
21. Eugene V Koonin: The Biological Big Bang model for the major transitions in evolution 20 August 2007
22. Stuart Pivar: The origin of the vertebrate skeleton  16 August 2010
23. Eugene V. Koonin:  The Origin at 150: is a new evolutionary synthesis in sight? 2010 Nov 1.
24. J. Scott Turner: The Tinkerer's Accomplice: How Design Emerges from Life Itself 30 September 2010
25. Charles Stewart Roberts: Comments on Darwinism 2012 Jan
26. D. E. K. Ferrier: Evolution of Homeobox Gene Clusters in Animals: The Giga-Cluster and Primary vs. Secondary Clustering 14 April 2016
27. John J. Welch: What’s wrong with evolutionary biology?  20 December 2016
28. Ryohei Seki et al.: Functional roles of Aves class-specific cis-regulatory elements on macroevolution of bird-specific features 06 February 2017
29. Phys.Org.: Sweeping gene survey reveals new facets of evolution MAY 28, 2018
30. Sebastian Kittelmann et al.: Gene regulatory network architecture in different developmental contexts influences the genetic basis of morphological evolution May 3, 2018
31. M. Linde‑Medina: On the problem of biological form 5 May 2020
32. Alison Caldwell, PhD: A simple rule drives the evolution of useless complexity December 9, 2020
33. Gerd B. Müller: Why an extended evolutionary synthesis is necessary 18 August 2017
34. Stephen C. Meyer: The Meanings of Evolution 




Last edited by Otangelo on Sat Apr 08, 2023 8:19 am; edited 68 times in total


J. J. Welch (2016): There have been periodic claims that evolutionary biology needs urgent reform. Irrespective of the content of the individual critiques, the sheer volume and persistence of the discontent must be telling us something important about evolutionary biology. Broadly speaking, there are two possibilities, both dispiriting. Either (1) the field is seriously deficient, but it shows a peculiar conservatism and failure to embrace ideas that are new, true and very important; or (2) something about evolutionary biology makes it prone to the championing of ideas that are new but false or unimportant, or true and important, but already well studied under a different branding. It has been argued here that the discontent is better understood as stemming from a few inescapable properties of living things, which lead to disappointment with evolutionary biology, and a nagging feeling that reform must be overdue. Evolutionary biology, like history, but unlike other natural sciences, raises issues of purpose and agency, alongside those of complexity and generality.27 

Ryohei Seki et al. (2017): It has been argued for several decades that the phenotypic variations within and between species can be established by modification of cis-regulatory elements, which can alter the tempo and mode of gene expression. Nevertheless, we still have little knowledge about the genetic basis of macroevolutionary transitions that produced the phenotypic novelties that led to the great leap of evolution and adaptation to new environment. Although numerous efforts have been made to study the evolutionary roles of newly evolved genes in a limited number of model species, little is known about how the genetic changes underlying the major transitions occurred in the deep time, and how they were maintained through long-term macroevolution.28

And Phys.Org (2018): The most extensive genetics study ever completed, published in the Journal of Human Evolution, revealed NO genetic evidence for evolution. The author, an avid proponent of evolution, was reduced to the following conclusions: And yet—another unexpected finding from the study—species have very clear genetic boundaries, and there's nothing much in between. "If individuals are stars, then species are galaxies," said Thaler. "They are compact clusters in the vastness of empty sequence space." The absence of "in-between" species is something that also perplexed Darwin, he said.29 

S. Kittelmann et.al., (2018) A major goal of biology is to identify the genetic causes of organismal diversity. 30
M. Linde‑Medina (2020): At present, the problem of biological form remains unsolved. 31

Joseph Thornton, PhD, professor of human genetics and ecology and evolution (2020): How complexity evolves is one of the great questions of evolutionary biology 32


I often hear atheists arguing: "Creationists have no clue how evolution works." The mechanisms supposedly involved are common knowledge. After quoting the above science papers, one is, IMHO, warranted to ask: who has demonstrated that it is indeed unguided evolutionary pressures that explain the origin of millions of different species on earth?

Notwithstanding these facts, many continue believing and claiming, beyond a shadow of a doubt, that evolution is a settled fact. After all, the professionals in the field say so. But isn't that in reality just the successful result of the indoctrination of many generations? Today, more and more biologists are starting to recognize that the construction of phenotypic form and architecture depends on mechanisms that science is far from having fully explored and unraveled. In this book, I will mention and describe almost 50 epigenetic codes and languages; for most of them, where and how the information is stored remains a mystery to science. Some researchers recognize this and are trying to incorporate these recently unraveled mechanisms into an expanded evolutionary framework, like Gerd B. Müller, an Austrian biologist, who writes in the science paper "Why an extended evolutionary synthesis is necessary":

These examples of conceptual change in various domains of evolutionary biology represent only a condensed segment of the advances made since the inception of the MS theory some 80 years ago. Relatively minor attention has been paid to the fact that many of these concepts, which are in full use today, sometimes contradict or expand central tenets of the MS theory. Given proper attention, these conceptual expansions force us to consider what they mean for our present understanding of evolution. Obviously, several of the cornerstones of the traditional evolutionary framework need to be revised and new components incorporated into a common theoretical structure. Although today's organismal systems biology is mostly rooted in biophysics and biological function, its endeavors are profoundly integrative, aiming at multiscale and multilevel explanations of organismal properties and their evolution. Instead of chance variation in DNA composition, evolving developmental interactions account for the specificities of phenotypic construction. 33




One can always bend, stretch, evolve, and change the theory of evolution, incorporate new mechanisms, and claim that evolution through unguided genetic and epigenetic changes did it. The challenge is: where is the line of separation beyond which it is justified to bring intelligence into the picture as a required ingredient to explain these things? I think the two already known and well-established ID tenets, specified complexity and irreducible complexity, do that just fine and demonstrate why ID is the superior hypothesis. The fact is that the instantiation of information-storage systems, languages, codes, information instantiated by using these codes and languages, information-transmission systems, encoding, transmitting, decoding, transcribing, translating, transduction, and so on requires a mind; so does the making of irreducibly complex, integrated machines, production lines, and factories based on information-transmission systems that are the link between the cell's hardware and software, and the cellular machinery that is made, operated, and controlled upon them. The greater the number of different software/hardware systems unraveled on the genetic and epigenetic levels, operating on the intra- and extracellular (systems) level, the stronger the evidence that unguided, non-intelligent natural mechanisms are inadequate and do not suffice to explain the phenomena in question, and the more ID becomes the better, more plausible, and more probable explanation. As this book will show, we are already deep in this territory.
To get this message out, and to have it acknowledged by a wider audience, is the goal of this book.

My answer to Frank Zindler, which I intend to support and substantiate with this book, is this: The most devastating thing that biology has done to naturalism is the failed claim of chemical and biological evolution. Now that we know that Adam and Eve were real people, the central creation narrative of Christianity is confirmed. If there was an Adam and Eve, there was an original sin. If there was an original sin, there is need of salvation. If there is need of salvation, there is need of a Savior. And I submit that puts Jesus, historical or otherwise, into the ranks of the necessary. I think that the failure of abiogenesis and evolution is absolutely the death knell of naturalism.

1

What is natural selection? 

Merriam-Webster defines selection as "the act or process of selecting; the state of being selected; one that is selected: choice". Making choices is always attributed to intelligent action. Darwin, however, coined the term "natural selection" to mean something different. Many think that natural selection actively selects favorable traits in a population. But in fact, as EvolutionShorts explains, it is a passive process that does not involve organisms "trying" to adapt. The concept of the organism becoming more suited to its current environment is roughly the basis of adaptive evolution. This is a fundamental principle of natural selection, rather than the specific desires of species.

R. Carter: 'Natural selection' properly defined simply means 'differential reproduction', meaning some organisms leave more progeny than others based on the mutations they carry and the environment in which they live. 1

Paul R. Ehrlich (1988): In modern evolutionary genetics, natural selection is defined as the differential reproduction of genotypes (individuals of some genotypes have more offspring than those of others). Natural selection would be occurring if, in a population of jungle fowl (the wild progenitors of chickens), single-comb genotypes were more reproductively successful than pea-comb genotypes. Note that the emphasis is not on survival (as it was in Herbert Spencer's famous phrase "survival of the fittest") but on reproduction.2
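The definitions above — natural selection as nothing more than "differential reproduction" — can be illustrated with a toy simulation. The sketch below (my own illustration, not from any of the quoted authors; all parameter values are hypothetical) tracks the frequency of one genotype that leaves, on average, 10% more offspring than the other. No trait is invented; an existing variant merely becomes more common:

```python
import random

def simulate_selection(p0=0.5, fitness_a=1.1, fitness_b=1.0,
                       pop_size=1000, generations=50, seed=42):
    """Toy model of differential reproduction: genotype A leaves on
    average more offspring than genotype B, so its frequency rises.
    Returns the final frequency of genotype A."""
    random.seed(seed)
    p = p0  # current frequency of genotype A
    for _ in range(generations):
        # expected share of A in the next generation, weighted by fitness
        w_bar = p * fitness_a + (1 - p) * fitness_b  # mean fitness
        p_expected = p * fitness_a / w_bar
        # finite population: sample the next generation individual by individual
        count_a = sum(1 for _ in range(pop_size) if random.random() < p_expected)
        p = count_a / pop_size
    return p

print(round(simulate_selection(), 2))  # frequency of A after 50 generations
```

With equal fitness values the frequency only drifts randomly around its starting point, which is the passive character of the process that the quoted definitions emphasize.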

Natural selection is not an acting force; it is passive. It does not invent anything new.
E. Osterloff:  Natural selection is a mechanism of evolution. Organisms that are more adapted to their environment are more likely to survive and pass on the genes that aided their success. This process causes species to change and diverge over time. 3

David Stack (2021): Natural selection was the term Darwin used to describe both the mechanism and the effect of the evolutionary process by which favorable or advantageous traits and characteristics are preserved and unfavorable or disadvantageous ones discarded. The “selection” process is “natural” in the sense that it occurs without any conscious intervention (there is no “selector”) in response to an ongoing “struggle for life.” Traits and characteristics favorable to survival in that struggle are preserved and developed. This, for Darwin, is the basis of evolution. Key to the process is inheritance, but, as he was writing without knowledge of modern genetics, Darwin’s presentation of natural selection did not include any detailed understanding of how inheritance worked. 4

Francisco J. Ayala (2007): With Darwin’s discovery of natural selection, the origin and adaptations of organisms were brought into the realm of science. The adaptive features of organisms could now be explained, like the phenomena of the inanimate world, as the result of natural processes, without recourse to an Intelligent Designer.

Variation. Organisms (within populations) exhibit individual variation in appearance and behavior.  These variations may involve body size, hair color, facial markings, voice properties, or number of offspring.  On the other hand, some traits show little to no variation among individuals—for example, number of eyes in vertebrates.
Inheritance.  Some traits are consistently passed on from parent to offspring.  Such traits are heritable, whereas other traits are strongly influenced by environmental conditions and show weak heritability.
High rate of population growth. Most populations have more offspring each year than local resources can support, leading to a struggle for resources.  Each generation experiences substantial mortality.
Differential survival and reproduction.  Individuals possessing traits well suited for the struggle for local resources will contribute more offspring to the next generation.


Is there evidence for natural selection?

According to Darwin's theory, the main drivers of evolution are natural selection, genetic drift, and gene flow. Natural selection depends on variation through random mutations, on inheritance, and on differential survival and reproduction (reproductive success, which permits new traits to spread in the population). The genetic modification is supposed to be due to survival of the fittest, in other words: 1. higher survival rates upon specific gene-induced phenotype adaptations to the environment, and 2. higher reproduction rates upon specific evolutionary genetic modifications. Keep in mind that these are two different, distinct factors. It is a fact that harmful variants, where a mutation negatively influences the health, fitness, or reproductive ability of organisms, diminish; they are sorted out, or die through disease. In that regard, natural selection is a fact. That says nothing, however, about an organism gaining more fitness (reproductive success) through the evolution of new advantageous traits.

In an interview in 1999, Mayr stated: “Darwin showed very clearly that you don't need Aristotle's teleology because natural selection applied to bio-populations of unique phenomena can explain all the puzzling phenomena for which previously the mysterious process of teleology had been invoked”. 5

Definitions of fitness:

J. Dekker (2007): 1. The average number of offspring produced by individuals with a certain genotype, relative to the numbers produced by individuals with other genotypes. 2. The relative competitive ability of a given genotype conferred by adaptive morphological, physiological, or behavioral characters, expressed and usually quantified as the average number of surviving progeny of one genotype compared with the average number of surviving progeny of competing genotypes; a measure of the contribution of a given genotype to the subsequent generation relative to that of other genotypes
A condition necessary for evolution to occur is variation in the fitness of organisms according to the state they have for a heritable character. Individuals in the population with certain characters must be more likely to reproduce, that is, more fit. Organisms in a population vary in reproductive success.
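Dekker's first definition is concrete enough to express in a few lines of code. The genotype labels and offspring counts below are invented for illustration; the sketch only shows how relative fitness and a selection coefficient are computed from such counts:

```python
# Dekker's definition 1, in code: relative fitness is a genotype's average
# offspring count scaled by the most successful genotype in the population.
# Genotypes and offspring counts are hypothetical illustration values.
offspring = {
    "AA": [4, 5, 3, 4],
    "Aa": [3, 4, 4, 3],
    "aa": [2, 1, 2, 3],
}

absolute = {g: sum(n) / len(n) for g, n in offspring.items()}  # mean offspring
w_max = max(absolute.values())
relative = {g: a / w_max for g, a in absolute.items()}         # w, fittest = 1
s = {g: 1 - w for g, w in relative.items()}                    # selection coefficient
```

With these numbers, AA is the reference genotype (w = 1.0) and aa carries a selection coefficient of 0.5; note that nothing in this bookkeeping identifies which mutation, if any, caused the difference in offspring counts.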

Three Components of Fitness.  These different components are in conflict with each other, and any estimate of fitness must consider all of them:
1.  Reproduction
2.  Struggle for existence with competitors
3.  Avoidance of predators  6

S. El-Showk (2012): The common usage of the term “fitness” is connected with the idea of being in shape and associated physical attributes like strength, endurance or speed; this is quite different from its use in biology. To an evolutionary biologist, fitness simply means reproductive success and reflects how well an organism is adapted to its environment. The main point is that fitness is simply a measure of reproductive success and so won’t always depend on traits such as strength and speed; reproductive success can also be achieved by mimicry, colorful displays, sneak fertilization and a host of other strategies that don’t correspond to the common notion of “physical fitness”.

What then are we to make of the phrase “survival of the fittest”? Fitness is just book-keeping; survival and differential reproduction result from natural selection, which actually is a driving mechanism in evolution. Organisms which are better suited to their environment will reproduce more and so increase the proportion of the population with their traits. Fitness is simply a measurement of survival (which is defined as reproductive success); it’s not the mechanism driving survival.  Organisms (or genes or replicators) don’t survive because they are fit; rather, they are considered fit because they survived. 7

The environment is not stable; it changes. To test the theory, science would need to know which traits of each species are favored in a specific environment, as well as adaptation rates, mutational diversity, and other spatiotemporal parameters, including population density, mutation rate, and the relative expansion speed and spatial dimensions of the population. When the attempt is made to define with more precision what is meant by the degree of adaptation and fitness, we come across very thorny and seemingly intractable problems.

As Evolution Berkeley explains: Of course, fitness is a relative thing. A genotype's fitness depends on the environment in which the organism lives. The fittest genotype during an ice age, for example, is probably not the fittest genotype once the ice age is over. Fitness is a handy concept because it lumps everything that matters to natural selection (survival, mate-finding, reproduction) into one idea. The fittest individual is not necessarily the strongest, fastest, or biggest. A genotype's fitness includes its ability to survive, find a mate, produce offspring — and ultimately leave its genes in the next generation. 8

Can fitness be measured? 

Claim: Adam Eyre-Walker (2007): All organisms undergo mutation, the effects of which can be broadly divided into three categories. First, there are mutations that are harmful to the fitness of their host; these mutations generally either reduce survival or fertility. Second, there are ‘neutral’ mutations, which have little or no effect on fitness. Finally, there are advantageous mutations, which increase fitness by allowing organisms to adapt to their environment. Although we can divide mutations into these three categories, there is, in reality, a continuum of selective effects, stretching from those that are strongly deleterious, through weakly deleterious mutations, to neutral mutations and then on to mutations that are mildly or highly adaptive. The relative frequencies of these types of mutation are called the distribution of fitness effects (DFE) 9
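The "continuum of selective effects" Eyre-Walker describes is typically modeled by drawing selection coefficients from a continuous distribution. The sketch below is a toy model with invented parameters (a gamma-shaped deleterious tail, a rare exponential beneficial tail, and an arbitrary neutrality band), not an empirically fitted DFE:

```python
import random

random.seed(1)

# Toy DFE: mostly deleterious effects with a long, zero-skewed gamma tail,
# plus a rare exponential beneficial tail. All parameters are invented.
def draw_s(p_beneficial=0.01):
    if random.random() < p_beneficial:
        return random.expovariate(1 / 0.01)    # small positive effects
    return -random.gammavariate(0.2, 0.1)      # negative effects, skewed to 0

NEUTRAL_BAND = 1e-4   # |s| below this counts as "effectively neutral" (arbitrary)

draws = [draw_s() for _ in range(100_000)]
deleterious = sum(s < -NEUTRAL_BAND for s in draws)
neutral = sum(abs(s) <= NEUTRAL_BAND for s in draws)
beneficial = sum(s > NEUTRAL_BAND for s in draws)
```

Even in such a toy model, where the boundary of the "neutral" class is drawn is an arbitrary modeling choice, which is itself part of the measurement problem the quotes below describe.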

R. G. Brajesh et al. (2019): Mutations occur spontaneously during the course of reproduction of an organism. Mutations that impart a beneficial characteristic to the organism are selected and consequently, the frequency of the mutant allele increases in the population. Mutations can be single base changes called point mutations like substitutions, insertions, deletions, as well as gross changes like chromosome recombination, duplication, and translocation 10

Reply: A theory must be able to make predictions, and it must be testable. How can it be tested that random mutations give rise to higher fitness and higher reproduction in the individuals carrying the new allele variation favored by natural selection, so that it spreads in the population, and how can the results be quantified? This seems in fact to be a core issue. The environmental conditions of a population (the weather, food resources, temperatures, etc.) are random. How do random events, like weather conditions, together with random mutations in the genome, provoke a fitness increase in an organism and a survival advantage over the other individuals without the mutation?

T. Bataillon (2014): The rates and properties of new mutations affecting fitness have implications for a number of outstanding questions in evolutionary biology. Obtaining estimates of mutation rates and effects has historically been challenging, and little theory has been available for predicting the distribution of fitness effects (DFE). Future work should be aimed at identifying factors driving the observed variation in the DFE. What can we say about the DFE of new mutations? For the DFE of beneficial mutations, experimentally inferred distributions seem to support theory for the most part. Otherwise, the DFE has largely been unexplored, and there is a need to extend both theory and experiment in this area. 11

Christopher J Graves (2019): When fitness effects are invariant across a lineage, the long-term fate of an allele can be deduced in a relatively straightforward manner from its recursive effects on survival and reproduction across descendent carriers. In other cases, the evolutionary success of an allele is not an obvious consequence of its effects on individuals. For example, variable environments can cause the same allele to have differing effects on fitness depending on an individuals’ environmental context. 18

V. Ž. Alif et al. (2021): The concept of fitness is central to evolutionary theory. Natural selection maximizes fitness, which is therefore a driving force of evolution as well as a measure of evolutionary success. One definition of fitness is how good an individual is at spreading its genes into future generations, relative to all other individuals in the population. A universal definition of fitness in mathematical terms that applies to all population structures and dynamics is, however, not agreed on. Fitness is also difficult to measure accurately. One way to measure long-term fitness is by calculating the individual’s reproductive value, which represents the expected number of allele copies an individual passes on to distant future generations. However, this metric of fitness is scarcely used, because estimating an individual’s reproductive value requires long-term pedigree data, which is rarely available in wild populations, where following individuals from birth to death is often impossible. Wild study systems therefore use short-term fitness metrics as proxies, such as the number of offspring produced. 19

The above concessions demonstrate that a key question, namely how mutations in fact affect fitness, has not been answered. I go further and say: Darwin's theory can in reality be neither tested nor quantified. The unknown factors in each case are too many. The environment varies; population sizes undergo large seasonal fluctuations, providing more opportunities for mutations when the population is large and a greater probability of fixation of a given mutation during the recurring bottlenecks; and population and species behavior vary too. It cannot be defined what influence a given environment exercises on specific animals and traits, nor how environmental influence would change the fitness and reproductive success of each distinct animal species, nor how the reproductive success conferred by new traits would change as the environment changes. Whether a gene variant spreads or not would depend, theoretically, on an incredibly complex web of factors: the species' ecology, its physical and social environment, altered nutrient conditions, and sexual behavior. A further complicating factor is that high social rank is associated with high levels of both copulatory behavior and the production of offspring, a finding widespread in the study of animal social behavior.

Since alpha males have on average higher reproductive success than other males, outcompeting weaker individuals and gaining preferential access to mates, a beneficial mutation arising in a weaker male (or a deleterious one arising in an alpha) faces an additional hurdle: the alphas can still outperform their rivals and win the battle for reproduction, so selection has one more obstacle to overcome before the new variant can spread in the population. This says nothing of the further fact that it would have to be determined which gene loci are responsible for sexual selection and behavior; only mutations that influence sexual behavior would affect fitness in the struggle to contribute more offspring to the next generation. It is in practice impossible to isolate these factors, determine which are of selective importance, quantify them, plug them into (usually, in this context) a mixed multivariate computational model, see what is statistically significant, and get meaningful, real-life results. The varying factors are too many and non-predictive. Darwin's idea therefore depends on a variable, unquantifiable multitude of factors that cannot be known and cannot be tested, which turns the theory at best into a non-testable hypothesis, which then remains just that: a hypothesis. Since Darwin's idea cannot be tested, it is by definition unscientific.

If fitness is a relative thing, it cannot be detected and proven that natural selection is the mechanism by which variations that produce more offspring arise and spread in the population. Therefore, mutations and natural selection cannot be demonstrated to have the claimed effects. What is the relation between mutations in the genome and the number of offspring? Which mutations are responsible for the number of offspring produced? If the theory of evolution is true, there must be a detectable mechanism that determines, induces, or regulates the number of offspring based on specific genetic mutations, and specific sections of the genome would have to be responsible for this regulation.

There are specific regions in the genome responsible for each mechanism of reproduction, be it sexual or asexual, namely for:

1. Regulation and programming of sexual attraction ( hormones, pheromones, instinct, etc.)
2. Frequency of sexual intercourse and reproduction
3. The regulation of the number of offspring produced

What influence do environmental pressures have on these three points? What pressures induced organisms to evolve sexual and asexual reproduction? Are the three mechanisms mentioned not amazingly varied and differentiated, with each species having individual, species-specific mechanisms? Some species produce an enormous number of offspring, which helps the survival of the species, while others, such as whales, have a very low reproduction rate. How could environmental pressures have induced this amazing variation, and why? On the molecular level, too, enormous differences exist from one species to the next. How could accidental mutations have been the basis for all this variation? Would there not have to be SPECIFIC environmental pressures resulting in the selection of SPECIFIC traits, based on mutations (in the genome or epigenome) that provide survival advantage and fitness AND higher reproduction rates of the organism at the same time?

What is the chance, that random mutations provoke positive phenotypic differences, that help the survival of the individual? What kind of environmental factors influence the survival of a species? What kind of mutations must be selected to guarantee a higher survival rate?

The lack of predictive power of natural selection is due to differing environmental conditions that make it impossible to quantify its effects and measure their outcome.

Ivana Cvijović (2015): Temporal fluctuations in environmental conditions can have dramatic effects on the fate of each new mutation, reducing the efficiency of natural selection and increasing the fixation probability of all mutations, including those that are strongly deleterious on average. This makes it difficult for a population to maintain specialist adaptations, even if their benefits outweigh their costs. Temporally varying selection pressures are neglected throughout much of population genetics, despite the fact that truly constant environments are rare. The fate of each mutation depends critically on its fitness in each environment, the dynamics of environmental changes, and the population size. We still lack both a quantitative and conceptual understanding of more significant fluctuations, where selection in each environment can lead to measurable changes in allele frequency. 20
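Cvijović's point about temporally varying selection can be made concrete with a toy Wright-Fisher simulation (all parameters invented, not taken from the paper): a constantly beneficial allele almost always fixes from a 10% starting frequency, while an allele whose effect flips sign with the environment (average s = 0) is left largely to drift, so its fate depends on the timing of the environmental flips rather than on its average effect:

```python
import random

random.seed(2)

N = 100      # haploid population size (invented)
P0 = 0.1     # initial allele frequency
REPS = 300   # replicate populations per scenario

def wright_fisher(s_at_gen):
    """One replicate: selection then binomial drift; True if the allele fixes."""
    count = int(P0 * N)
    gen = 0
    while 0 < count < N:
        s = s_at_gen(gen)
        p = count / N
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))            # selection step
        count = sum(random.random() < p_sel for _ in range(N))   # drift step
        gen += 1
    return count == N

# Scenario A: the allele's effect flips sign every 5 generations (average s = 0).
fluctuating = lambda g: 0.2 if (g // 5) % 2 == 0 else -0.2
# Scenario B: the same magnitude of benefit, but constant in time.
constant = lambda g: 0.2

fixed_fluct = sum(wright_fisher(fluctuating) for _ in range(REPS))
fixed_const = sum(wright_fisher(constant) for _ in range(REPS))
```

Same allele-frequency machinery, same average selection coefficient magnitude, very different outcomes: which is exactly why a single "fitness" number for a mutation in a changing environment is so hard to pin down.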

L.Bromham (2017): The search for simple unifying theories in macroevolution and macroecology seems unlikely to succeed given the vast number of factors that can influence a particular lineage’s evolutionary trajectory, including rare events and the weight of history. Patterns in biodiversity are shaped by a great many factors, both intrinsic and extrinsic to organisms. Both evidence and theory suggests that one such factor is variation in the mutation rate between species. But the explanatory power of the observed relationship between molecular rates and biodiversity is relatively modest, so it does not provide anything like the predictive power that might be hoped for in a unifying theory. However, we feel that the evidence is growing that, in addition to the many and varied influences on the generation of diversity, the differential rate supply of variation through species-specific differences in mutation rate has some role to play in generating different rates of diversification.21

Z. Patwa (2008): To date, the fixation probability of a specific beneficial mutation has never been experimentally measured. 22
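Patwa's observation is striking because the theoretical prediction itself is simple: Haldane's classical approximation puts the fixation probability of a new beneficial mutation near 2s. What has never been measured experimentally is routine in silico; the toy Wright-Fisher sketch below (invented population size and selection coefficient) estimates the probability by brute-force replication:

```python
import random

random.seed(3)

N = 200       # haploid population size (invented)
S = 0.1       # selective advantage of the new mutation (invented)
REPS = 1000   # replicate introductions of a single mutant copy

def new_mutation_fixes():
    """Follow one new copy of the beneficial allele to fixation or loss."""
    count = 1
    while 0 < count < N:
        p = count / N
        p_sel = p * (1 + S) / (p * (1 + S) + (1 - p))
        count = sum(random.random() < p_sel for _ in range(N))
    return count == N

estimate = sum(new_mutation_fixes() for _ in range(REPS)) / REPS
haldane = 2 * S   # classical branching-process approximation, here 0.2
```

For small s the estimate lands in the neighborhood of 2s, but the exact value depends on the reproduction model, and thousands of replicate populations are needed for a stable estimate: precisely the kind of control that is unavailable in a real population.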

More problems: R. G. Brajesh (2019): The genotypic mutational space of an organism is so vast, even for the tiniest of organisms like viruses, or even for one gene, that it becomes experimentally intractable. Hence, studies have been limited to only small parts of the genome. For example, experiments have attempted to map the functional effect of mutations at important active-site residues in proteins: Lunzer et al. engineered the IMDH enzyme to use NADP as cofactor instead of NAD, and obtained the fitness landscape in terms of the mutational steps. Other experiments have attempted to ascertain how virulence is affected by mutations at certain important loci in viruses. However, due to the scale of the genotypic mutational space, it has been extremely difficult to experimentally obtain fitness landscapes of larger multicomponent systems and to study the statistical properties of these landscapes, like the distribution of fitness effects (DFE). Attempts have also been made to back-calculate the underlying DFE by experimentally observing how frequently new beneficial mutations emerge and of what strength, but the final results were inconclusive. As a result, how beneficial, neutral, and deleterious mutations and their effects are distributed when the organism's genotype is at different locations on the fitness landscape has remained largely intractable. 23

And more problems: Adam Eyre-Walker (2007): The DFE of deleterious mutations, in particular the proportion of weakly deleterious mutations, determines a population's expected drift load, that is, the reduction in fitness due to multiple small-effect deleterious mutations that individually are close enough to neutral to occasionally escape selection, but can collectively have important impacts on fitness. The DFE of new mutations influences many evolutionary patterns, such as the expected degree of parallel evolution, the evolutionary potential and capacity of populations to respond to novel environments, the evolutionary advantage of sex, and the maintenance of variation in quantitative traits, to name a few. Thus, an understanding of the DFE of mutations is a pivotal part of our understanding of the process of evolution. Furthermore, the available data suggest that some aspects of the DFE of advantageous mutations are likely to differ between species. 9

Conclusion: The positive effects of natural selection on differential reproduction cannot be tested, since too many unknown variables have to be included, and that cannot lead to meaningful, quantifiable results that permit a clear picture. 

D.Coppedge (2021): The central concept of natural selection cannot be measured. This means it has no scientific value. 24

Large-scale evolution by natural selection is a non-testable hypothesis
1. P. R. Ehrlich (1988): In modern evolutionary genetics, natural selection is defined as the differential reproduction of genotypes (individuals of some genotypes have more offspring than those of others), based on the mutations they carry and the environment in which they live. Organisms that are better suited to their environment will reproduce more and so increase the proportion of the population with their traits. (More reproduction of a genotype = survival of the fittest = the measure of fitness.)
2. T. Bataillon (2014): Obtaining estimates of mutation rates and effects has historically been challenging. I. Cvijović (2015): The fate of each mutation depends critically on its fitness in each environment, the dynamics of environmental changes, and the population size. We still lack both a quantitative and conceptual understanding of more significant fluctuations, where selection in each environment can lead to measurable changes in allele frequency. C. J. Graves (2019): Variable environments can cause the same allele to have differing effects on fitness depending on an individual’s environmental context. V. Ž. Alif (2021): Fitness is difficult to measure accurately. The metric is scarcely used because estimating an individual’s reproductive value requires long-term pedigree data, which is rarely available in wild populations, where following individuals from birth to death is often impossible. D. Coppedge (2021): The central concept of natural selection cannot be measured. This means it has no scientific value.
3. The key question, namely how mutations in fact affect fitness, has not been answered. Darwin's theory can be neither tested nor quantified. The unknown factors are too many; the environment varies, and population and species behavior vary too. It cannot be defined what influence a given environment exercises on specific animals and traits, nor how environmental influence would change the fitness and reproductive success of each distinct animal species. Large-scale evolution is at best a non-testable hypothesis, which then remains just that: a hypothesis. Since Darwin's idea cannot be tested, it is, by definition, unscientific, and anyone claiming that natural selection explains biodiversity makes that claim based on blind confidence and belief, not evidence.

Unverifiable Evolution: A Hypothesis Beyond Testability

Large-scale evolution, a hypothesis non-testable,
Factors complex, unknown, and variable, make it unstable.
Fitness, mutation rates, and effects are hard to gauge,
Estimating reproductive value is a challenging stage.

Darwin's theory, difficult to quantify,
Unknowns abound, making it hard to verify.
Variations in the environment, species behavior change,
Influence on specific traits, a puzzle, so strange.

The key question remains unanswered still,
How mutations affect fitness, is a complex skill.
Natural selection, a concept hard to measure,
Lacking scientific value, is a challenging endeavor.

A hypothesis at best, an untestable claim,
Based on belief, not evidence, it may aim.
Blind confidence, not scientific proof,
Biodiversity's explanation is still aloof.


No evidence of natural selection contributing to the increase in organismal complexity

And if that was not already bad news, it gets worse than that: M.Lynch (2007): Myth: Natural selection promotes the evolution of organismal complexity. Reality: There is no evidence at any level of biological organization that natural selection is a directional force encouraging complexity. What is in question is whether natural selection is a necessary or sufficient force to explain the emergence of the genomic and cellular features central to the building of complex organisms. 25

Molly K. Burke et al. (2010): "Genomic changes caused by epigenetic mechanisms tend to fail to fixate in the population, which reverts back to its initial pattern." That is not all that fails to fixate. Despite decades of sustained selection in relatively small, sexually reproducing laboratory populations, selection did not lead to the fixation of newly arising unconditionally advantageous alleles. This is notable because in wild populations we expect the strength of natural selection to be less intense and the environment unlikely to remain constant for ~600 generations. Consequently, the probability of fixation in wild populations should be even lower than its likelihood in these experiments. 26

Ben Bradley (2022): As soon as contemporary scientists accept that, as per Darwin’s argument in Origin, natural selection does not cause, but results from the ordinary activities of organisms, contemporary evolutionary theorists must address a new foundational challenge: the need to construct a viable, evidence-based picture of the natural world. 27

Comment: In other words, natural selection is not an actor but a reactor. It is not a protagonist; it passively "selects", unconsciously and without "intention" giving "preference" to those alleles that are somehow beneficial and are therefore favored to spread through the population and become dominant variants. It does not "invent" anything new. But that is precisely what is required if the tree of life is to be true. It has to add de novo genes from scratch, with new information that directs the making of new organismal structures, like limbs, eyes, ears, different cells, organs, and new body plans and forms.

Adam Levy (2019): The ability of organisms to acquire new genes is testament to evolution’s “plasticity to make something seemingly impossible, possible”, says Yong Zhang, a geneticist at the Chinese Academy of Sciences’ Institute of Zoology in Beijing, who has studied the role of de novo genes in the human brain. But researchers have yet to work out how to definitively identify a gene as being de novo, and questions still remain over exactly how — and how often — they are born.28

Comment:  So Levy confesses, in 2019, there is no answer to this all-relevant question of whether evolution can generate a gene de novo - it has yet to be worked out. Wow...

But then, Levy makes the following claim at the end of the article: Although de novo genes remain enigmatic, their existence makes one thing clear: evolution can readily make something from nothing. “One of the beauties of working with de novo genes,” says Casola, “is that it shows how dynamic genomes are.”

Comment: Remarkable. On the one hand, in the article, Levy admits that researchers do not know (yet) how evolution can generate genes de novo, but in the end, surprise surprise (not): evolution can readily make something from nothing.....  

Gene duplications

Gene duplication will get you a new allele, but not a novelty - additional new information - which is what evolution needs.

Levy continues: Gene duplication occurs when errors in the DNA-replication process produce multiple instances of a gene. Over generations, the versions accrue mutations and diverge, so that they eventually encode different molecules, each with their own function. Since the 1970s, researchers have found a raft of other examples of how evolution tinkers with genes — existing genes can be broken up or ‘laterally transferred’ between species. All these processes have something in common: their main ingredient is existing code from a well-oiled molecular machine.

Comment: J.Dulle: Duplicating existing information cannot produce new information.  Just as saying, “duplicating a gene does not increase the net information content of the cell” three times does not triple the information content of the sentence, duplicating a gene cannot increase the information content of the cell.  Gene duplication cannot help an organism perform some new function.  Trying to get new biological information/function by duplicating an existing gene is like thinking you can obtain an engine for your car by making a second steering wheel! 29
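Dulle's analogy can be given a rough quantitative form using compression as a crude proxy for information content (a standard, if imperfect, stand-in for algorithmic information): duplicating a sequence barely increases its compressed size, because the second copy is fully predictable from the first. The "gene" below is a random string, not a real sequence:

```python
import random
import zlib

random.seed(4)

# A stand-in "gene": 2,000 random DNA letters (illustrative, not a real gene).
gene = "".join(random.choice("ACGT") for _ in range(2000))

original_size = len(zlib.compress(gene.encode()))
duplicated_size = len(zlib.compress((gene + gene).encode()))
extra = duplicated_size - original_size   # bytes the duplicate "adds"
```

The duplicate adds only a handful of bytes to the compressed size; by this proxy, whatever novel function a diverged duplicate eventually acquires must come from subsequent changes, not from the duplication event itself.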

The following papers challenge the claim that gene duplication events could account for evolutionary novelties:

M. Hurles (2004): A duplicated gene newly arisen in a single genome must overcome substantial hurdles before it can be observed in evolutionary comparisons. First, it must become fixed in the population, and second, it must be preserved over time. Population genetics tells us that for new alleles, fixation is a rare event, even for new mutations that confer an immediate selective advantage. Nevertheless, it has been estimated that one in a hundred genes is duplicated and fixed every million years, although it should be clear from the duplication mechanisms described above that it is highly unlikely that duplication rates are constant over time. However, once fixed, three possible fates are typically envisaged for our gene duplication. Despite the slackened selective constraints, mutations can still destroy the incipient functionality of a duplicated gene: for example, by introducing a premature stop codon or a mutation that destroys the structure of a major protein domain. 30

A. K. Holloway (2007): The fate of gene duplicates subjected to diversifying selection was tested experimentally in a bacterial system. In a striking contradiction to our model, no such conditions were found. The fitness cost of carrying both plasmids increased dramatically as antibiotic levels were raised, and either the wild-type plasmid was lost or the cells did not grow. 31

J. Esfandiar (2010): Although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms. Gene duplication and subsequent evolutionary divergence certainly adds to the size of the genome and in large measure to its diversity and versatility. However, in all of the examples given above, known evolutionary mechanisms were markedly constrained in their ability to innovate and to create any novel information. This natural limit to biological change can be attributed mostly to the power of purifying selection, which, despite being relaxed in duplicates, is nonetheless ever-present.32

Comment: After gene duplication and the emergence of a divergent gene, complementary changes in the regulation of that new gene's expression would have to be instantiated in parallel. New gene products require a rewiring of the gene regulatory architecture to function optimally and be integrated into the existing cellular networks. Not only does new information have to be added to the genome; on top of the gene itself, the gene regulatory program has to be reprogrammed with new instructions on when to express the new gene. Neofunctionalization of the new gene would depend on the right timing of expression. That requires, as well, the addition of new transcription factor binding sites at the right places in the genome.

This is pointed out in the following quote:

Johan Hallin (2019): One category of molecular changes that appears to play a key role in the evolution of genes that originate from gene duplication (duplicates or paralogs) are regulatory changes, i.e., changes in the gene itself or elsewhere in the genome that determine when, where, and at what level a gene is transcribed and translated. The immediate effect of gene duplication could favor gene retention or loss, or if the expression change is effectively neutral, the duplicate could remain neutral for extended periods of time. 33

Alternative evolutionary forces to natural selection

Michael Lynch (2007): First, evolution is a population-genetic process governed by four fundamental forces. Darwin articulated one of those forces, the process of natural selection. The remaining three evolutionary forces are nonadaptive in the sense that they are not a function of the fitness properties of individuals: mutation is the ultimate source of variation on which natural selection acts, recombination assorts variation within and among chromosomes, and genetic drift ensures that gene frequencies will deviate a bit from generation to generation independent of other forces. Given the century of work devoted to the study of evolution, it is reasonable to conclude that these four broad classes encompass all of the fundamental forces of evolution. 34

Eugene V Koonin (2009):“Evolutionary-genomic studies show that natural selection is only one of the forces that shape genome evolution and is not quantitatively dominant, whereas non-adaptive processes are much more prominent than previously suspected.” There’s quite a lot of this sort of thing around these days, and we confidently predict a lot more in the near future. There is no consistent tendency of evolution towards increased genomic complexity, and when complexity increases, this appears to be a non-adaptive consequence of evolution under weak purifying selection rather than an adaptation. 35

Random genetic drift

H. Allen Orr (2008): Until the 1960s almost all biologists assumed that natural selection drives the evolution of most physical traits in living creatures,  but a group of population geneticists led by Japanese investigator Motoo Kimura sharply challenged that view. Kimura argued that molecular evolution is not usually driven by “positive” natural selection—in which the environment increases the frequency of a beneficial type that is initially rare. Rather, he said, nearly all the genetic mutations that persist or reach high frequencies in populations are selectively neutral—they have no appreciable effect on fitness one way or the other. (Of course, harmful mutations continue to appear at a high rate, but they can never reach high frequencies in a population and thus are evolutionary dead ends.) Since neutral mutations are essentially invisible in the present environment, such changes can slip silently through a population, substantially altering its genetic composition over time. The process is called random genetic drift; it is the heart of the neutral theory of molecular evolution. By the 1980s many evolutionary geneticists had accepted the neutral theory. But the data bearing on it were mostly indirect; more direct, critical tests were lacking. Two developments have helped fix that problem. First, population geneticists have devised simple statistical tests for distinguishing neutral changes in the genome from adaptive ones. Second, new technology has enabled entire genomes from many species to be sequenced, providing voluminous data on which these statistical tests can be applied. The new data suggest that the neutral theory underestimated the importance of natural selection. 8
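Kimura's core claim has a simple quantitative signature that a toy simulation can reproduce: under pure drift, with no selection term at all, a neutral allele fixes with probability equal to its starting frequency. The parameters below are invented for illustration:

```python
import random

random.seed(5)

N = 100      # haploid population size (invented)
P0 = 0.2     # starting frequency of the selectively neutral allele
REPS = 800   # replicate populations

def neutral_run():
    """Pure drift: binomial resampling of the population, no selection step."""
    count = int(P0 * N)
    while 0 < count < N:
        p = count / N
        count = sum(random.random() < p for _ in range(N))
    return count == N

fix_frac = sum(neutral_run() for _ in range(REPS)) / REPS
# Neutral theory predicts fix_frac to be close to P0, i.e. about 0.2 here.
```

With a starting frequency of 0.2, roughly a fifth of the replicate populations end with the allele fixed, regardless of any property of the allele itself: drift alone reshapes the genetic composition of the population over time.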

P. Gibson (2013): In conclusion, numerical simulation shows that realistic levels of biological noise result in a high selection threshold. This results in the ongoing accumulation of low-impact deleterious mutations, with deleterious mutation count per individual increasing linearly over time. Even in very long experiments (more than 100,000 generations), slightly deleterious alleles accumulate steadily, causing eventual extinction. These findings provide independent validation of previous analytical and simulation studies. Previous concerns about the problem of accumulation of nearly neutral mutations are strongly supported by our analysis. Indeed, when numerical simulations incorporate realistic levels of biological noise, our analyses indicate that the problem is much more severe than has been acknowledged, and that the large majority of deleterious mutations become invisible to the selection process. 14
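Gibson et al.'s headline result, the roughly linear accumulation of below-threshold deleterious mutations, can be reproduced qualitatively with a toy simulation. The population size, mutation rate, and per-mutation fitness cost below are illustrative stand-ins, not the paper's settings:

```python
import math
import random

def poisson(lam):
    """Knuth's algorithm for Poisson-distributed random integers."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def mutation_accumulation(pop=500, gens=100, u=1.0, s=1e-5, seed=2):
    """Each generation, parents are chosen with fitness weights (one
    mutation costs only s = 1e-5, far below the noise floor of a
    population this size), and every offspring gains ~Poisson(u) new
    slightly deleterious mutations."""
    random.seed(seed)
    counts = [0] * pop
    means = []
    for _ in range(gens):
        weights = [math.exp(-s * c) for c in counts]   # near-flat fitness
        parents = random.choices(counts, weights=weights, k=pop)
        counts = [c + poisson(u) for c in parents]
        means.append(sum(counts) / pop)
    return means

means = mutation_accumulation()
print(f"mean mutation load: gen 1 = {means[0]:.1f}, gen 100 = {means[-1]:.1f}")
```

With the cost per mutation (1e-5) far below what selection can "see" in a population of 500, the mean load climbs by about one mutation per generation, i.e. linearly, which is the qualitative pattern the paper reports.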

E. V. Koonin (2022): Modern evolutionary theory, steeped in population genetics, gives a detailed and arguably, largely satisfactory account of microevolutionary processes: that is, evolution of allele frequencies in a population of organisms under selection and random genetic drift. However, this theory has little to say about the actual history of life, especially the emergence of new levels of biological complexity, and nothing at all about the origin of life. The preponderance of neutral and slightly deleterious changes provides for evolution by genetic drift whereby a population moves on the same level or even slightly downward on the fitness landscape, potentially reaching another region of the landscape where beneficial mutations are available. 15

Jerry A. Coyne (2009): Both drift and natural selection produce genetic change that we recognize as evolution. But there’s an important difference. Drift is a random process, while selection is the antithesis of randomness. … As a purely random process, genetic drift can’t cause the evolution of adaptations. It could never build a wing or an eye. That takes nonrandom natural selection. What drift can do is cause the evolution of features that are neither useful nor harmful to the organism 16

Michael Lynch (2007): Contrary to popular belief, evolution is not driven by natural selection alone. Many aspects of evolutionary change are indeed facilitated by natural selection, but all populations are influenced by non-adaptive forces of mutation, recombination, and random genetic drift. These additional forces are not simple embellishments around a primary axis of selection, but are quite the opposite—they dictate what natural selection can and cannot do … A central point to be explained is that most aspects of evolution at the genome level cannot be fully explained in adaptive terms, and moreover, that many features could not have emerged without a near-complete disengagement of the power of natural selection. This contention is supported by a wide array of comparative data, as well as by well-established principles of population genetics” 19

George Ellis (2018): If most of the variation found in evolutionary lineages is a product of random genetic drift, how does apparent design arise? It surely can’t be an accidental by-product of random events – that was the whole point of Darwin’s momentous discovery (Darwin 1872) of a mechanism to explain apparent design that is so apparent in all of nature. On the face of it, Lynch, Myers, and Moran seem to be saying the ID people are right: evolution cannot adapt life to its environment, because random effects dominate.

Horizontal DNA transfer

Libretext: Horizontal gene transfer (HGT) is the introduction of genetic material from one species to another species by mechanisms other than the vertical transmission from parent(s) to offspring. These transfers allow even distantly-related species (using standard phylogeny) to share genes, influencing their phenotypes. It is thought that HGT is more prevalent in prokaryotes, but that only about 2% of the prokaryotic genome may be transferred by this process. 25



Last edited by Otangelo on Sat Apr 08, 2023 8:59 am; edited 29 times in total


Re: Refuting Darwin, confirming design (Mon Oct 24, 2022 7:12 pm)

Otangelo (Admin)

Irreducible complexity falsifies evolution

What function could the heart exercise without blood? Or pacemaking cells without a heart whose rhythm they set? What good are the subunits of ATP synthase without the other subunits that make up the molecular energy turbine? What good is ATP synthase without a proton gradient and the electron transport chain? What good is a nucleotide without its base? An atom without an electron? It seems that large structures with specific functions can only exist if all the smaller parts necessary to make up that larger system are in place, and the smaller parts would have no function on their own.

Functional parts are only meaningful within a whole; in other words, it is the whole that gives meaning to its parts. Natural selection would not select components of a complex system that would be useful only upon the completion of that much larger system. It cannot select when the usefulness is conveyed only many steps later. Why would natural selection preserve an intermediate biosynthesis product, which by itself is of no use to the organism, unless that product keeps going through all the necessary steps, up to the point of being ready to be assembled into a larger system? Never do we see blind, unguided processes leading to complex functional systems with integrated parts contributing to an overarching design goal. A minimal amount of instructional complex information is required for a gene to produce useful proteins, and a minimal size is necessary for a protein to be functional. Thus, before a region of DNA contains the requisite information to make useful proteins, natural selection has no positive trait to select and can play no role in guiding its evolution.

The argument from irreducible complexity is obvious and clear. Subparts, like a piston in a car engine, are only designed when there is a goal: they will be mounted with specific fitting sizes and correct materials and have a specific function in the machine as a whole. Individually they have no function. The same holds in biological systems, which work as factories (cells) or machines (cells host a large number of the most varied molecular machines, comparable to factory production lines). In photosynthesis, for example, chlorophyll has no function individually, only when inserted into the light-harvesting complex to catch photons and direct their excitation energy, by Förster resonance energy transfer, to the reaction centers of Photosystems I and II. Foreplanning is absolutely essential. This simple fact makes the concept of irreducible complexity obvious. Nonetheless, people argue all the time that it is a debunked argument. Why? That would require genetic mutations and natural selection to have a realistic probability of generating interdependent individual parts able to perform new functions, even though the individual parts have no function unless interconnected.

To No. 3: A. Y. Mulkidjanian (2007): The principle of evolutionary continuity was succinctly formulated by Albert Lehninger in his Biochemistry textbook. An adaptation that does not increase fitness is no longer selected for and eventually gets lost in evolution (in the current view, only those adaptations that effectively decrease fitness end up getting lost). Hence, any evolutionary scenario has to invoke – at each and every step – only such intermediate states that are functionally useful (or at least not harmful). 21

1. In biology, many complex elementary components are necessary to build large integrated macromolecular systems like multi-protein complexes (RNA polymerase), 3D printers (the ribosome), organelles (mitochondria), etc., and their making requires complex multistep enzyme-catalyzed biosynthesis pathways. These elementary components are only useful in the completion of that much larger system. Not infrequently, these biosynthetic pathways produce intermediate products that, left without further processing, are either a) nonfunctional, or b) harmful, killing the cell (for example, reactive oxygen species (ROS) in the biosynthesis pathway of chlorophyll b).
2. A minimal amount of prescribed, pre-programmed, instructional complex information stored in genes is required to instruct the making of a) functional elementary components and b) the assembly instructions to integrate them into complex macromolecular systems. Natural selection would not fix an allele variant that instructs the making of an intermediate, nonfunctional, or harmful elementary component, and would play no role in guiding its evolution. Foreknowledge is required to implement a biosemiotic information system (itself irreducibly complex) that directs the making of functional elementary components and their assembly into the entire complex integrated system.
3. Therefore, the origin of biological systems based on biosemiotic instructions is best explained by a brilliant, super-powerful mind with foresight and intent, and not by undirected evolutionary pressures.

Irreducible Complexity: A Design Inference for Biological Systems

In the realm of biology, complexity abounds,
From proteins to organelles, intricate compounds,
Multistep pathways, enzyme-catalyzed,
Building macromolecules, so finely devised.

Proteins like RNA polymerase, so grand,
Ribosomes, 3D printers, at Nature's command,
But intermediate products, sometimes arise,
Nonfunctional or harmful, a cell's demise.

Prescribed, pre-programmed, genes hold the key,
Instructional information, so complex, you see,
Guiding the making of functional parts,
Assembly instructions, for life to impart.

Natural selection, a force in its place,
But fixing harmful variants, it would not embrace,
Foreknowledge is needed, to navigate,
Biosemiotic system, complex and innate.

The origin of life, a question profound,
Based on biosemiotic instructions, it's found,
A brilliant, powerful mind with foresight and aim,
Design and intent, the evolutionary claim.

So let us ponder, the wonders of life,
Complexity and intricacy, amid nature's strife,
Irreducible complexity, a design we infer,
A testament to a Creator's wisdom, that we concur.






What are the boundaries/limits of beneficial mutations? 

Natural selection does not create or add anything. The innovations that permit organisms to evolve have to come from variations/mutations of pre-existing traits in the genome. Accidental mutations would have to supply the innovation that natural selection then selects and fixes in the genome. There would not only have to be variation but also an increase in genome size. The smallest known free-living bacterium today, Pelagibacter ubique, has a genome of 1.3 million nucleotides. If we suppose that the Last Universal Common Ancestor had the genome size of P. ubique, its genome would have had to grow roughly 2,300-fold to reach the 3 billion nucleotides of the human genome.
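The scale of that gap is one line of arithmetic, using the figures given above:

```python
p_ubique = 1.3e6   # nucleotides in the Pelagibacter ubique genome
human = 3.0e9      # nucleotides in the human genome

fold = human / p_ubique
print(f"{fold:,.0f}-fold increase required")
```

The result, roughly 2,300-fold, is the factor quoted in the text.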

Behe's second book, The Edge of Evolution (2008) 72, published after Darwin's Black Box (1996), generated a great deal of discussion.

Aditi Gupta (2016): Genome sizes vary widely, from 250 bases in viroids to 670 billion bases in some amoebas. This remarkable variation in genome size is the outcome of complex interactions between various evolutionary factors such as mutation rate and population size. While comparative genomics has uncovered how some of these evolutionary factors influence genome size, we still do not understand what drives genome size evolution. Specifically, it is not clear how the primordial mutational processes of base substitutions, insertions, and deletions influence genome size evolution in asexual organisms. 53

D. Joseph (2021): “Genomes are the genetic specifications that allow life to exist. Specifications are obviously inherently SPECIFIC. This means that random changes in specifications will disrupt information with a very high degree of certainty. This has become especially clear ever since the publication of the ENCODE results, which show that very little of our genome is actually ‘junk DNA’. The ENCODE project also shows that most nucleotides play a role in multiple overlapping codes, making any beneficial mutations which are not deleterious at some level vanishingly rare. In the abstract of the paper titled “Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation”, the authors describe why these overlapping genetic codes present a profoundly serious challenge to evolutionary theory. 54

John C. Sanford (2013): “There is growing evidence that much of the DNA in higher genomes is poly-functional, with the same nucleotide contributing to more than one type of code. Such poly-functional DNA should logically be multiply-constrained in terms of the probability of sequence improvement via random mutation. We describe a model of this relationship, which relates the degree of poly-functionality and the degree of constraint on mutational improvement. We show that: 

a) the probability of beneficial mutation is inversely related to the degree that a sequence is already optimized for a given code; 
b) the probability of beneficial mutation drastically diminishes as the number of overlapping codes increases. 

The growing evidence for a high degree of optimization in biological systems, and the growing evidence for multiple levels of poly-functionality within DNA, both suggest that mutations that are unambiguously beneficial must be especially rare. The theoretical scarcity of beneficial mutations is compounded by the fact that most of the beneficial mutations that do arise should confer extremely small increments of improvement in terms of total biological function. This makes such mutations invisible to natural selection. Beneficial mutations that are below a population's selection threshold are effectively neutral in terms of selection, and so should be entirely unproductive from an evolutionary perspective. We conclude that beneficial mutations that are unambiguous (not deleterious at any level), and useful (subject to natural selection), should be extremely rare.” 55
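Sanford's two claims can be illustrated with a toy independence model: assume that, per code, a random change is beneficial with probability b, neutral with probability z, and deleterious otherwise, and that a change is "unambiguously beneficial" only if it helps at least one code and harms none. The values b = 0.01 and z = 0.10 below are arbitrary illustrations, not estimates from the paper:

```python
def p_unambiguously_beneficial(b, z, k):
    """P(random change helps >= 1 code and harms none) when a site takes
    part in k overlapping codes, each treated as independent (a toy
    assumption). (b + z)^k is the chance no code is harmed; subtracting
    z^k removes the all-neutral case, leaving at least one benefit."""
    return (b + z) ** k - z ** k

for k in (1, 2, 4, 8):
    print(f"{k} overlapping codes: P = {p_unambiguously_beneficial(0.01, 0.10, k):.3e}")
```

Even in this crude sketch, the probability of an unambiguously beneficial change falls by orders of magnitude as the number of overlapping codes grows, which is the direction of Sanford's point (b).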

Gert Korthof (2007): The book The Edge of Evolution is principally about the probability of new protein-protein binding sites arising by chance and necessity. Experimental evidence (mostly chloroquine resistance) shows such protein-protein binding sites to be difficult to evolve by chance mechanisms. He says the empirical extrapolation of the "edge" of evolution is that no more than two coordinated protein-protein binding sites could have evolved in a lineage in all the time available on earth. The flagellum has perhaps dozens of such sites. It is a quantitative argument. 75

M.Behe: Edge of evolution (2008): Recall the example of sickle cell disease. The sickle cell mutation is both a life saver and a life destroyer. It fends off malaria but can lead to sickle cell disease. However, hemoglobin C-Harlem has all the benefits of sickle, but none of its fatal drawbacks. So in western and central Africa, a population of humans that had normal hemoglobin would be worst off, a population that had half normal and half sickle would be better off, and a population that had half normal and half C-Harlem would be best of all. But if that’s the case, why bother with sickle hemoglobin? Why shouldn’t evolution just go from the worst to the best case directly? Why not just produce the C-Harlem mutation straightaway and avoid all the misery of sickle? The problem with going straight from normal hemoglobin to hemoglobin C-Harlem is that, rather than walking smoothly up the stairs, evolution would have to jump a step. C-Harlem differs from normal hemoglobin by two amino acids. In order to go straight from regular hemoglobin to C-Harlem, the right mutations would have to show up simultaneously in positions 6 and 73 of the beta chain of hemoglobin. Why is that so hard? Switching those two amino acids at the same time would be very difficult for the same reason that developing resistance to a cocktail of drugs is difficult for malaria—the odds against getting two needed steps at once are the multiple of the odds for each step happening on its own. What are those odds? Very low. The human genome is composed of over three billion nucleotides. Yet only a hundred million nucleotides seem to be critical, coding for proteins or necessary control features. 
The mutation rate in humans (and many other species) is around this same number; that is, approximately one in a hundred million nucleotides is changed in a baby compared to its parents (in other words, a total of about thirty changes per generation in the baby’s three-billion-nucleotide genome, one of which might be in coding or control regions).  In order to get the sickle mutation, we can’t change just any nucleotide in human DNA; the change has to occur at exactly the right spot. So the probability that one of those mutations will be in the right place is one out of a hundred million. Put another way, only one out of every hundred million babies is born with a new mutation that gives it sickle hemoglobin. Over a hundred generations in a population of a million people, we would expect the mutation to occur once by chance. That’s within the range of what can be done by mutation/selection. To get hemoglobin C-Harlem, in addition to the sickle mutation we have to get the other mutation in the beta chain, the one at position 73. The odds of getting the second mutation in exactly the right spot are again about one in a hundred million. So the odds of getting both mutations right, to give hemoglobin C-Harlem in one generation in an individual whose parents have normal hemoglobin, are about a hundred million times a hundred million (10^16 ). On average, then, nature needs about that many babies in order to find just one that has the right double mutation. With a generation time of ten years and an average population size of a million people, on average it should take about a hundred billion years for that particular mutation to arise—more than the age of the universe. 

Hemoglobin C-Harlem would be advantageous if it were widespread in Africa, but it isn’t. It was discovered in a single family in the United States, where it doesn’t offer any protection against malaria for the simple reason that malaria has been eradicated in North America. Natural selection, therefore, may not select the mutation, and it may easily disappear by happenstance if the members of the family don’t have children, or if the family’s children don’t inherit a copy of the C-Harlem gene. It’s well known to evolutionary biologists that the majority even of helpful mutations are lost by chance before they get an opportunity to spread in the population. 7 If that happens with C-Harlem, we may have to wait for another hundred million carriers of the sickle gene to be born before another new C-Harlem mutation arises. 74
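Behe's back-of-envelope arithmetic can be retraced directly, using exactly the numbers he states in the passage above:

```python
p_one_site = 1e-8             # chance a baby carries one specific new mutation
p_both = p_one_site ** 2      # both specific mutations at once -> 1e-16

births_needed = 1 / p_both    # ~1e16 births for one C-Harlem double mutant
population = 1e6              # average population size, as stated
generation_years = 10         # generation time, as stated

years = births_needed / population * generation_years
print(f"{years:.1e} years")   # ~1e11 years, i.e. about 100 billion
```

The product of the two per-site odds is what drives the result: halving the number of required simultaneous changes from two to one collapses the waiting time from ~1e11 years to something achievable within a few thousand years in the same population.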

Of course, providing such a powerful argument demonstrating the edge/limit of evolution, would not keep the opponents silent. Sean Carroll, an evolutionary developmental biologist, wrote a critical response in the magazine Science, named "God as Genetic Engineer", to which Behe responded on his Amazon blog. The link can be accessed in the bibliography of this chapter. 73 

Gunter Bechly (2018): Michael Behe discussed the waiting time problem as a problem for Darwinism in his book The Edge of Evolution. He didn't make a mathematical calculation; he looked at the empirical data from malaria drug resistance. What he found is that resistance to most malaria drugs developed very quickly, within a few years, because only point mutations were necessary. But in the case of the drug chloroquine it took several decades, and the reason, discovered later, was that coordinated mutations were needed: two mutations, each neutral on its own, had to come together to produce resistance against chloroquine. He then simply transposed the data: comparing the vast population size of malaria microbes with the much smaller population sizes and longer generation times of vertebrates, he arrived at the hypothesis that in vertebrates a single coordinated change would need longer than the existence of the whole universe, 10 to the power of 15 years. This would of course be a problem: in human evolution we have all these nice fossils, so if a single coordinated change would take longer than the age of the universe, it would be game over. So of course evolutionary biologists tried to refute him, and indeed in 2008 Durrett and Schmidt published a paper in Genetics where they claimed his result was completely unrealistic. They made a mathematical calculation based on the methodological apparatus of population genetics and simulations, and they arrived at a figure of 216 million years.
Wonderful, this is much shorter than Behe's figure. The problem is that we have only 6 million years available since the splitting of the human lineage from the chimp lineage. So that is what evolutionary biologists themselves say is the time needed for a single coordinated mutation. And you have to keep in mind that this is a mathematical model, which always involves simplifications, and simplifications may involve errors. So what is more likely: that the empirical data from malaria drug resistance are closer to the truth, or the mathematical simulation? I would suggest that the 10 to the power of 15 figure is closer to the real constraint in nature. But either way, we arrive at times that are much too long for evolution to occur. 76

J. B. Fischer (2007): What about a case where 10 mutations are needed before there is a benefit? If each mutation by itself is neutral, natural selection has nothing to act on. Then the probability of all ten specific mutations ending up in one organism, even if they are acquired sequentially over many generations, is vanishingly small. Once a structure already exists, natural selection can fine-tune it. However, in some cases, natural selection is not sufficient, because multiple mutations are required, which are not beneficial in themselves. They are only beneficial after the basic structure is completed and functioning. Small mutations happen, which cause changes within a species. However, natural selection cannot have been responsible for the huge differences between the major groups of living things with their vastly different body structures. If evolution is the only cause of the diversity of life, then pathways must exist where multiple mutations are each beneficial in themselves. Until recent advances in DNA research and biochemistry, it has not been possible to propose a detailed, step-by-step, beneficial pathway to a new biological system. 73

The waiting time problem in a model hominin population

Rick Durrett (2008): We now show that two coordinated changes that turn off one regulatory sequence and turn on another without either mutant becoming fixed are unlikely to occur in the human population. Theorem 1 predicts a mean waiting time of 216 million years. 78

John Sanford (2015): Biologically realistic numerical simulations revealed that a population of this type required inordinately long waiting times to establish even the shortest nucleotide strings. To establish a string of two nucleotides required on average 84 million years. To establish a string of five nucleotides required on average 2 billion years. We found that waiting times were reduced by higher mutation rates, stronger fitness benefits, and larger population sizes. However, even using the most generous feasible parameters settings, the waiting time required to establish any specific nucleotide string within this type of population was consistently prohibitive.77

John Sanford (2016): Our paper shows that the waiting time problem cannot honestly be ignored. Even given best-case scenarios, using parameter settings that are grossly overgenerous (for example, rewarding a given string by increasing total fitness 10 percent), waiting times are consistently prohibitive. This is even for the shortest possible words. Establishment of just a two-letter word (two specific mutations within a hominin population of ten thousand) requires at least 84 million years. A three-letter word requires at least 376 million years. A six-letter word requires over 4 billion years. An eight-letter word requires over 18 billion years (again, see Table 2 in the paper). The waiting time problem is so profound that even given the most generous feasible timeframes, evolution fails. The mutation/selection process completely fails to reproducibly and systematically create meaningful strings of genetic letters in a pre-human population.79
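The waiting-time behaviour these studies describe can be demonstrated qualitatively with a tiny Monte Carlo model: a neutral, fixed-size population waits for one individual to carry both of two specific mutations. The parameters are deliberately unrealistic (population 200, per-site mutation rate 5e-3 instead of ~1e-8) purely so the run finishes in seconds; this is a sketch of the concept, not a reproduction of the cited simulations:

```python
import random

def wait_for_two_mutations(n=200, u=5e-3, seed=3, max_gens=100000):
    """Generations until one individual carries BOTH target mutations.
    Reproduction is neutral (each offspring copies a random parent), and
    each birth may hit target site A and/or target site B with rate u."""
    random.seed(seed)
    pop = [(False, False)] * n
    for gen in range(1, max_gens + 1):
        nxt = []
        for _ in range(n):
            a, b = random.choice(pop)      # neutral reproduction
            if random.random() < u:
                a = True                   # hit at target site A
            if random.random() < u:
                b = True                   # hit at target site B
            if a and b:
                return gen                 # first double mutant found
            nxt.append((a, b))
        pop = nxt
    return None

gens_waited = wait_for_two_mutations()
print(f"double mutant first appeared in generation {gens_waited}")
```

Shrinking u toward realistic per-site rates makes the waiting time explode rapidly, which is the scaling behind the million- and billion-year figures quoted above.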

Sophisticated mechanisms prevent cells from accumulating harmful mutations

Imagine altering a blueprint that instructs how to make all the complicated parts of a complex factory and how they have to be assembled and joined so that the factory achieves its intended end function: inserting parts of the wrong sizes, replacing one kind of material with another, changing the assembly instructions so that, in the end, the machines cannot perform their intended function. It would have catastrophic consequences. Sometimes even swapping one tiny component for another leaves a factory totally unable to exercise its intended functions. The chance that a random change would improve the functioning of the factory, rather than wreaking havoc, is negligible. In the same sense, any random mutation in the genome is likely to result in a protein that functions improperly or not at all. Most mutations are detrimental, causing genetic disorders, cancer, or death.

DNA and RNA error checking and repair: What causal mechanism explains best their origin?

NIWRAD: Molecular biology shows that many complex control and repair mechanisms work inside the cell to recover genetic errors. For example, there are at least three major DNA repair mechanisms. Without such mechanisms, life would be impossible because the internal entropy of the cell would be too high and destructive. Each of them involves the complex and coordinated action of several enzymes/proteins. Random mutation and natural selection is a process that needs errors, and at the same time this process is supposed to create the mechanisms that eliminate them? The bottom line is that repair mechanisms are incompatible with Darwinism in principle. Since sophisticated repair mechanisms do exist in the cell after all, the thing to discard in order to avoid the contradiction is necessarily the Darwinist dogma. 60

J. M. Fischer: Some of the sophisticated and overlapping repair mechanisms found for DNA include:

1. A proofreading system that catches almost all errors
2. A mismatch repair system to back up the proofreading system
3. Photoreactivation (light repair)
4. Removal of methyl or ethyl groups by O6 – methylguanine methyltransferase
5. Base excision repair
6. Nucleotide excision repair
7. Double-strand DNA break repair
8. Recombination repair
9. Error-prone bypass 40

Harmful mutations happen constantly. Without repair mechanisms, life would be very short indeed and might not even get started because mutations often lead to disease, deformity, or death. So even the earliest, “simple” creatures in the evolutionist’s primeval soup or tree of life would have needed a sophisticated repair system. But the (sophisticated repair) mechanisms not only remove harmful mutations from DNA, they would also remove mutations that are believed to build new parts. So there is the problem of the evolution of (sophisticated repair) mechanisms that prevent evolution, all the way back to the very origin of life. 58

A. B. Williams (2016): Cells can revert the large variety of DNA lesions that are induced by endogenous and exogenous genotoxic attacks through a variety of sophisticated DNA-repair machineries. Nucleotide excision repair (NER) removes a variety of helix-distorting lesions such as those typically induced by UV irradiation, whereas base excision repair (BER) targets oxidative base modifications. Mismatch repair (MMR) scans for nucleotides that have been erroneously inserted during replication. DNA DSBs that are typically induced by IR are resolved either by nonhomologous end joining (NHEJ) or by homologous recombination (HR), whereas RECQ helicases assume various roles in genome maintenance during recombination repair and replication. 64

Remarkable DNA repair capabilities of Deinococcus radiodurans and E.Coli

Extreme Genome Repair (2009): If its naming had followed, rather than preceded, molecular analyses of its DNA, the extremophile bacterium Deinococcus radiodurans might have been called Lazarus. After shattering of its 3.2 Mb genome into 20–30 kb pieces by desiccation or a high dose of ionizing radiation, D. radiodurans miraculously reassembles its genome such that only 3 hr later fully reconstituted nonrearranged chromosomes are present, and the cells carry on, alive as normal 65

T. Devitt (2014): John R. Battista, a professor of biological sciences at Louisiana State University, showed that E. coli could evolve to resist ionizing radiation by exposing cultures of the bacterium to the highly radioactive isotope cobalt-60. “We blasted the cultures until 99 percent of the bacteria were dead. Then we’d grow up the survivors and blast them again. We did that twenty times,” explains Cox. The result was E. coli capable of enduring as much as four orders of magnitude more ionizing radiation, making them similar to Deinococcus radiodurans, a desert-dwelling bacterium found in the 1950s to be remarkably resistant to radiation. That bacterium is capable of surviving more than one thousand times the radiation dose that would kill a human. 66

1. Organisms are constantly exposed to different environments and, in order to survive, must be able to adapt to external conditions.
2. Life, in order to perpetuate itself, has to replicate. That includes DNA, which must be replicated with extreme accuracy. Somehow, the cell knows when DNA is accurately replicated and when it is not. Extremely complex quality-control mechanisms are in place, constantly monitoring the process. At least three error-check-and-repair mechanisms keep errors during replication down to about 1 in 10 billion nucleotides replicated.
3. These repair mechanisms, sophisticated proteins, are also encoded in DNA. So proteins are required to error check and repair DNA but accurately replicated DNA is necessary to make the proteins that repair DNA.
4. That is an all-or-nothing business. Therefore, these sophisticated systems had to emerge all at once, and require a designer.  
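The "1 error in 10 billion" figure at point 2 is commonly explained as the product of several layered fidelity mechanisms acting in series. The factors below are widely cited textbook orders of magnitude (they vary by source and organism), not measurements from this text:

```python
# Layered fidelity of DNA replication, order-of-magnitude sketch:
raw_base_selection = 1e-5       # error rate of base pairing by the polymerase alone
proofreading_factor = 1e-2      # improvement from 3'->5' exonuclease proofreading
mismatch_repair_factor = 1e-3   # improvement from post-replication mismatch repair

final_error_rate = raw_base_selection * proofreading_factor * mismatch_repair_factor
print(final_error_rate)         # on the order of 1e-10: ~1 error per 10 billion bases
```

Because the layers multiply, removing any one of them degrades overall fidelity by several orders of magnitude, which is the sense in which the text calls the system all-or-nothing.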

The existence of multiple layers of repair mechanisms that prevent random mutations in DNA has been termed the ‘mutation protection paradox’.

W. DeJong (2011): Both digital codes in computers and nucleotide codes in cells are protected against mutations. Mutation protection affects the random change and selection of digital and nucleotide codes. Our mutation protection perspective enhances the understanding of the evolutionary dynamics of digital and nucleotide codes and its limitations, and reveals a paradox between the necessity of dysfunctioning mutation protection for evolution and its disadvantage for survival. Our mutation protection perspective suggests new directions for research into mutational robustness. Unbounded random change of nucleotide codes through the accumulation of irreparable, advantageous, code expanding, inheritable mutations at the level of individual nucleotides, as proposed by evolutionary theory, requires the mutation protection at the level of the individual nucleotides and at the higher levels of the code to be switched off or at least to dysfunction. Dysfunctioning mutation protection, however, is the origin of cancer and hereditary diseases, which reduce the capacity to live and to reproduce. Our mutation protection perspective of the evolutionary dynamics of digital and nucleotide codes thus reveals the presence of a paradox in evolutionary theory between the necessity and the disadvantage of dysfunctioning mutation protection. This mutation protection paradox, which is closely related with the paradox between evolvability and mutational robustness, needs further investigation. 59

Peto's paradox

Marc Tollis (2017): In a multicellular organism, cells must go through a cell cycle that includes growth and division. Every time a human cell divides, it must copy its six billion base pairs of DNA, and it inevitably makes some mistakes. These mistakes are called somatic mutations (mutations in cells of the body other than sperm and egg cells). Some somatic mutations may occur in genetic pathways that control cell proliferation, DNA repair, apoptosis, telomere erosion, and growth of new blood vessels, disrupting the normal checks on carcinogenesis. If every cell division carries a certain chance that a cancer-causing somatic mutation could occur, then the risk of developing cancer should be a function of the number of cell divisions in an organism’s lifetime. Therefore, large-bodied and long-lived organisms should face a higher lifetime risk of cancer simply due to the fact that their bodies contain more cells and will undergo more cell divisions over the course of their lifespan. However, a 2015 study that compared cancer incidence from zoo necropsy data for 36 mammals found that a higher risk of cancer does not correlate with increased body mass or lifespan. In fact, the evidence suggested that larger long-lived mammals actually get less cancer. This has profound implications for our understanding of how the cancer problem is solved.
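The naive quantitative expectation Tollis describes (cancer risk scaling with the number of cell divisions) can be sketched with a toy calculation. The per-division probability and the division counts below are illustrative assumptions chosen only to show the shape of the argument, not measured values:

```python
import math

def naive_lifetime_risk(divisions, p_per_division):
    """P(at least one cancer-initiating event in `divisions` divisions).

    Computed as 1 - (1 - p)^N, via expm1/log1p for numerical stability."""
    return -math.expm1(divisions * math.log1p(-p_per_division))

# Hypothetical per-division probability of a cancer-initiating event.
P = 1e-16

# Illustrative lifetime division counts (assumed, order-of-magnitude only).
for species, divisions in [("mouse", 1e12), ("human", 1e16), ("elephant", 1e18)]:
    print(f"{species:>8}: ~{divisions:.0e} divisions -> naive risk "
          f"{naive_lifetime_risk(divisions, P):.4f}")
```

Under these toy numbers the model predicts that elephants should almost certainly develop cancer while mice almost never should; the observed flat incidence across body sizes is exactly the mismatch that Peto's paradox names.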

When individuals in populations are exposed to the selective pressure of cancer risk, the population must evolve cancer suppression as an adaptation or else suffer fitness costs and possibly extinction. Discovering the mechanisms underlying these solutions to Peto’s Paradox requires the tools of numerous subfields of biology including genomics, comparative methods, and experiments with cells. For instance, genomic analyses revealed that the African savannah elephant (Loxodonta africana) genome contains 20 copies, or 40 alleles, of the most famous tumor suppressor gene TP53. The human genome contains only one TP53 copy, and two functional TP53 alleles are required for proper checks on cancer progression. When cells become stressed and incur DNA damage, they can either try to repair the DNA or they can undergo apoptosis, or self-destruction. The protein produced by the TP53 gene is necessary to turn on this apoptotic pathway. Humans with one defective TP53 allele have Li Fraumeni syndrome and a ~90% lifetime risk of many cancers, because they cannot properly shut down cells with DNA damage. Meanwhile, experiments revealed that elephant cells exposed to ionizing radiation behave in a manner consistent with what you would expect with all those TP53 copies—they are much more likely to switch on the apoptotic pathway and therefore destroy cells rather than accumulate carcinogenic mutations. 61

Comment: How does the author explain the origin of these protective mechanisms? He claims: "The solution to Peto’s Paradox is quite simple: evolution". This is an ad-hoc assertion, and it raises the question: How could complex multicellular organisms have evolved if these cancer protection mechanisms were not in place before the transition occurred, since otherwise these organisms would have gone extinct? And how would cancer protection mechanisms have been selected if, before multicellularity arose, these complex systems conveyed no function? 1. Multicellularity, 2. genome maintenance mechanisms, cell-cycle arrest mechanisms, and apoptosis, and 3. p53 transcription factors would have had to evolve together, since they work as a system. The problem becomes even greater considering that animals with large body size supposedly evolved independently many times across the history of life, so these mechanisms would have had to be recruited multiple times. The paradox is only solved if we hypothesize that large animals were created independently by God, equipped with tumor suppressor mechanisms from the get-go.

A. B. Williams (2016): The loss of p53 is a major driver of cancer development mainly because, in the absence of this “guardian of the genome,” cells are no longer adequately protected from mutations and genomic aberrations. Intriguingly, the evolutionary occurrence of p53 homologs appears to be associated with multicellularity. With the advent of metazoans, genome maintenance became a specialized task with distinct requirements in germ cells and somatic tissues. With the central importance of p53 in controlling genome instability–driven cancer development, it might not be surprising that p53 controls DNA-damage checkpoints and impacts the activity of various DNA-repair systems. 64

B. J. Aubrey (2016): The fundamental biological importance of the Tp53 gene family is highlighted by its evolutionary conservation for more than one billion years dating back to the earliest multicellular organisms. The TP53 protein provides essential functions in the cellular response to diverse stresses and safeguards maintenance of genomic integrity, and this is manifest in its critical role in tumor suppression. The importance of Tp53 in tumor prevention is exemplified in human cancer where it is the most frequently detected genetic alteration. This is confirmed in animal models, in which a defective Tp53 gene leads inexorably to cancer development, whereas reinstatement of TP53 function results in regression of established tumors that had been initiated by loss of TP53. 

TP53: Tumor Suppression and Transcriptional Regulation

Following activation, the TP53 protein functions predominantly as a transcription factor. The TP53 protein forms a homotetramer that binds to specific Tp53 response elements in genomic DNA to direct the transcription of a large number of protein-coding genes. The requirement for TP53 transcriptional activity in tumor suppression has been examined by systematically mutating the transactivation domains of the TP53 protein, rendering it either partially or wholly transcriptionally defective. Importantly, mutations resulting in complete loss of TP53 transcriptional activity ablate its ability to prevent tumor formation, supporting the concept that transcriptional regulation is central to the tumor-suppressor function. TP53-mediated tumor suppression is governed by transcriptional regulation.

TP53-mediated transcriptional regulation varies according to the type of stress stimulus and type of cell, so that appropriate corrective processes can be implemented. For example, minor DNA damage may institute cell-cycle arrest and activate DNA-repair mechanisms, whereas stronger TP53-activating signals induce senescence or apoptosis. Accordingly, the TP53 transcriptional response varies depending on the nature of the activating signal and the type of cell. The number of known or suspected TP53 target genes has increased into the thousands with dramatic differences in transcriptional responses observed among different cell types, different TP53-inducing stress stimuli, and varying time points following TP53 activation. These studies paint an increasingly complex picture of the modes by which TP53 can regulate gene expression. For example, before TP53 activation, a subset of target genes is transcriptionally repressed by the TP53 protein. More recently appreciated functions of the TP53 protein include widespread binding and modulation of enhancer regions throughout the genome and transcriptional activation of noncoding RNAs. Interestingly, the TP53-activated long noncoding RNA, lincRNA-p21, exerts widespread suppression of gene expression. The list of proposed TP53 target genes is vast and they are known to influence diverse cellular processes, including apoptosis, cell-cycle arrest, senescence, DNA-damage repair, metabolism, and global regulation of gene expression, each of which could potentially contribute to its tumor-suppressor function.

The p53 pathway responds to various cellular stress signals (the input) by activating p53 as a transcription factor (increasing its levels and protein modifications) and transcribing a programme of genes (the output) to accomplish a number of functions. Together, these functions prevent errors in the duplication process of a cell that is under stress, and as such the p53 pathway increases the fidelity of cell division and prevents cancers from arising. 63

K. D. Sullivan (2017): The p53 polypeptide contains several functional domains that work coordinately, in a context-dependent fashion, to achieve DNA binding and transactivation. (the increased rate of gene expression) 68

K. Kamagata (2020): Interactions between DNA and DNA-binding proteins play an important role in many essential cellular processes. A key function of the DNA-binding protein p53 is to search for and bind to target sites incorporated in genomic DNA, which triggers transcriptional regulation. How do p53 molecules achieve “rapid” and “accurate” target search in living cells?  The genome encompasses DNA sequences that encode genes, and gene editing is the genetic engineering of a specific DNA sequence, including insertion, deletion, modification, and replacement. The main player in genome editing is a type of protein that can bind to DNA, known as DNA-binding proteins. DNA-binding proteins include enzymes, which can cut DNA or ligate two DNA molecules, and transcription factors, which can activate or deactivate gene expression. These proteins are classified into DNA sequence-specific and nonspecific binders. The transcription factor p53 can induce multiple tumor suppression functions, such as cell cycle arrest, DNA repair, and apoptosis. p53 is presumed to solve the target search problem by utilizing 3D diffusion, 1D diffusion along DNA, and intersegmental transfer between two DNAs in the cell. 67
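Kamagata's point about combining 3D diffusion (jumping between sites) with 1D diffusion (sliding along DNA) can be illustrated with a toy Monte Carlo search. The lattice size, target position, and sliding length below are arbitrary assumptions, and the deterministic forward scan is a crude stand-in for 1D diffusion; the sketch only shows why scanning a stretch of DNA per binding event beats blind rebinding:

```python
import random

random.seed(1)

GENOME = 10_000   # toy genome length in lattice sites (arbitrary)
TARGET = 4_321    # target site (arbitrary)

def search_jump_only():
    """3D-diffusion-only search: every binding event lands on a random site."""
    events = 0
    while True:
        events += 1
        if random.randrange(GENOME) == TARGET:
            return events

def search_slide_and_jump(slide_len=100):
    """Facilitated search: after each random landing, scan `slide_len`
    adjacent sites (a stand-in for 1D sliding) before detaching."""
    events = 0
    while True:
        events += 1
        pos = random.randrange(GENOME)
        if any((pos + step) % GENOME == TARGET for step in range(slide_len)):
            return events

TRIALS = 200
jump = sum(search_jump_only() for _ in range(TRIALS)) / TRIALS
slide = sum(search_slide_and_jump() for _ in range(TRIALS)) / TRIALS
print(f"mean binding events, jumping only : {jump:.0f}")
print(f"mean binding events, slide + jump : {slide:.0f}")
```

Counting the expensive binding events, the slide-and-jump strategy needs roughly GENOME/slide_len events instead of roughly GENOME, which is the qualitative speedup usually attributed to facilitated diffusion.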

E. Senturk (2016): Genomic stability is a critical requirement for cell survival and the prevention of tumorigenesis. In order to ensure that mutations that result from DNA damage are not passed on to daughter generations, the cell must pause and repair the damage. The cellular response pathway is a network that involves sensors of damage that ultimately transmit signals to mediator proteins that regulate the transcription of effector proteins that play an important role in arresting the cell cycle. In the cell cycle, transitions (G1/S, intra S, G2/M) that lead from DNA replication to mitosis are monitored for successful completion. In the event of DNA damage, genotoxic stress, or ribonucleotide depletion, cell cycle checkpoints prevent progression to the next phase of the cell cycle until the damage is repaired, the stress is removed, or nutrients are replenished. Other pathways may be activated that result in programmed cell death if the damage is irreparable. When there are defects in the cell cycle checkpoints, gene mutations, chromosome damage, and aneuploidy can result and ultimately, cell transformation can be a consequence of such defects.

p53, a transcription factor and tumor suppressor protein, can regulate the expression of proteins that play critical roles in growth arrest and apoptosis (programmed cell death). p53 plays a critical role both in the G1/S checkpoint, in which cells arrest prior to DNA replication and have a 2N content of DNA, and in the G2/M checkpoint, in which arrest occurs before mitosis and cells have a 4N content of DNA. The activation of p53 following DNA damage results in the expression of many proteins which are important in cell cycle arrest, repair, and apoptosis. Cells in which p53 is deleted or mutated lose the G1 checkpoint and no longer arrest at the G1/S transition. Although they maintain a G2 arrest, this arrest can decay over time thus allowing cells to enter mitosis with unrepaired DNA damage and mutations that increase the risk of progression to malignancy. People in which one allele of the p53 gene is mutated, are susceptible to sarcomas, leukemias, brain and adrenal tumors. In these tumors the remaining allele of p53 is often deleted (loss of heterozygosity) highlighting the importance of the role of p53 in genomic stability. 69

W. Feroz (2020):  “The protein P53 is a transcription factor encoded by the gene TP53 which is the most commonly mutated tumor suppressor gene in human cancers, it performs multiple regulatory functions by receiving information, modulating and relaying the information, carrying out multiple downstream signals such as cellular senescence, cell metabolism, inflammation, autophagy, and other biological processes which control the survival and death of abnormal cells”.  “P53 also plays a crucial role in determining cell’s response to various cellular stress like DNA damage, nutrient deficiency, and hypoxia by inducing gene transcription, which controls the process of cell cycle and programmed cell death (apoptosis)”. Generally, in a cell, P53 is an unstable protein that is present in meagre amounts inside the cell because it is continuously degraded by MDM2 proteins. P53 has a complex array of functions.
P53 plays a central role in the DNA damage response and is considered the “Guardian of the Genome”. The DNA damage response depends on the nature of the stress signal, the cell type, and the timing and intensity of the stress signal. “DNA damage promotes Post-translational modifications (PTMs) on P53”, “whereas oncogenic stress activates Alternative reading frame (ARF) tumour suppressor protein to inhibit MDM2”. In response, P53 can activate cell cycle arrest, repair the damaged DNA, activate specific cell death pathways, and induce metabolic changes in the cell. “DNA damage causes P53 activation which induces an array of genes spanning multiple functions; using various genetic studies the best known P53 targets” are

1. DNA damage response genes,
2. cell cycle arrest genes,
3. genes involved in apoptosis,
4. metabolism, and
5. post-translational regulators of P53

An expression profiling study identified many target genes of P53, whose number ranged from fewer than 100 to more than 1500 depending on the conditions of P53 activation and the approaches used for data processing; “the main drawback was that they could not differentiate among the direct and indirect targets of P53”. 70

Comment: The transcription factor p53 actively searches the genome for targets to be expressed. This is a goal-oriented process implemented to activate processes that prevent the origination of cancer. Various players are required that work as a system; it is team play. The p53 transcription factor has to perform a “rapid” and “accurate” target search, recognize the target, and bind to the DNA sequence so it can be expressed. Most importantly, before p53 can act like a switch commanding "on" the gene sequences to be expressed, those sequences must already be there, that is, the actors that are recruited to perform DNA repair or apoptosis (cell death). It is an all-or-nothing business to convey the function of suppressing the development and growth of tumors, and consequently death. In other words, this is an irreducibly complex system in which p53 would be functionless unless the actors it acts upon were already there.

The concepts of machine and factory error monitoring, checking, and repair are all tasks performed with goal-directedness, intent, and purpose
 
1. Repairing things that are broken, malfunctioning, or instantiating complex systems that autonomously prevent things to break are always actions performed by agents with intentions, volition, goal-orientedness, foresight, understanding, and know-how.
2. Man-made machines almost always require direct intelligent intervention by technicians to recognize errors, find which parts of a machine are broken, know how to remove and replace them without breaking surrounding parts of the device, and know how to construct the part that has to be replaced with fidelity, and re-insert and re-connect it where the part was removed. The entire process is complex, demanding know-how, and depends on a high quantity of intelligence in performing all involved actions.
3. Man has not been able to create a fully autonomous, preprogrammed machine or factory that can monitor all manufacturing processes for quality and errors, verify the correct performance of all devices involved and whether the products meet the required quality standard, and, if something goes awry, repair and re-establish the normal function of whatever was broken or malfunctioning without external intervention.
4. C.H. Loch writes in the science paper: "Organic Production Systems: What the Biological Cell Can Teach Us About Manufacturing" (2004): Biological cells are preprogrammed to use quality-management techniques used in manufacturing today. The cell invests in defect prevention at various stages of its replication process, using 100% inspection processes, quality assurance procedures, and foolproofing techniques. An example of the cell inspecting each and every part of a product is DNA proofreading. As the DNA gets replicated, the enzyme DNA polymerase adds new nucleotides to the growing DNA strand, limiting the number of errors by removing incorrectly incorporated nucleotides with a proofreading function. Following is an impressive example:  Unbroken DNA conducts electricity, while an error blocks the current. Some repair enzymes exploit this. One pair of enzymes lock onto different parts of a DNA strand. One of them sends an electron down the strand. If the DNA is unbroken, the electron reaches the other enzyme and causes it to detach. I.e. this process scans the region of DNA between them, and if it’s clean, there is no need for repairs. But if there is a break, the electron doesn’t reach the second enzyme. This enzyme then moves along the strand until it reaches the error, and fixes it. This mechanism of repair seems to be present in all living things, from bacteria to man. Know-how is needed: 

a. to know that something is broken (DNA damage sensing) 
b. to identify where exactly it is broken 
c. to know when to repair it (e.g. one has to stop/or put on hold some other ongoing processes, in other words, one needs to know lots of other things, one needs to know the whole system, otherwise one creates more damage…) 
d. to know how to repair it (to use the right tools, materials, energy, etc, etc, etc ) 
e. to make sure that the repair was performed correctly. (this can be observed in DNA repair as well)

5. On top of that: Cells do not even wait until a protein machine fails, but replace it long before it has a chance to break down. Furthermore, it completely recycles the machine that is taken out of production. The components derived from this recycling process can be used not only to create other machines of the same type but also to create different machines if that is what is needed in the “plant.” This way of handling its machines has some clear advantages for the cell. New capacity can be installed quickly to meet current demand. At the same time, there are never idle machines around taking up space or hogging important building blocks. Maintenance is a positive “side effect” of the continuous machine renewal process, thereby guaranteeing the quality of output. Finally, the ability to quickly build new production lines from scratch has allowed the cell to take advantage of a big library of contingency plans in its DNA that allow it to quickly react to a wide range of circumstances, as we will describe next.
6. The more sophisticated, advanced, autonomous, complex, and information-driven machines or factories are, the more they carry the hallmark of design. The very concepts of error monitoring, checking, repair, and preemptive replacement to avoid future breakdowns are tasks performed with goal-directedness and purpose. Biological cells are far more advanced than any machine or factory ever devised and invented by man. It is therefore rational and warranted to infer that biological cells were designed.
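The electron-relay damage check described in point 4 can be caricatured in a few lines of code. This is purely illustrative: lesions are marked 'X', the "electron" either reaches the far clamp (region intact) or stops at the first lesion, and "repair" is modeled as writing a placeholder base ('A'), whereas real repair restores the correct original base:

```python
def scan_and_repair(strand, left, right):
    """Scan strand[left:right] for lesions ('X') and fix each one found.

    Returns the number of repairs performed. Models the described relay:
    if the probe signal reaches the far enzyme unimpeded, the region is
    clean; otherwise the enzyme walks to the first lesion, repairs it,
    and the scan repeats until the signal passes cleanly."""
    repairs = 0
    while True:
        lesion = next((i for i in range(left, right) if strand[i] == "X"), None)
        if lesion is None:       # signal reached the far clamp: region intact
            return repairs
        strand[lesion] = "A"     # placeholder repair (assumption, see lead-in)
        repairs += 1

dna = list("ACGTXACGTAXCGT")
fixed = scan_and_repair(dna, 0, len(dna))
print("repairs:", fixed, "->", "".join(dna))
# prints: repairs: 2 -> ACGTAACGTAACGT
```

The point of the sketch is the control flow: detection, localization, and repair are distinct steps, and the "region is clean" verdict is only reached after every lesion in the scanned span has been dealt with.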

Genetic entropy: Random mutations deteriorate the genome

M. Lynch (2003): Although uncertainties remain with respect to the form of the mutational-effect distribution, a great deal of evidence from several sources strongly suggests that the overall effects of mutations are to reduce fitness. Indirect evidence comes from asymmetrical responses to artificial selection on life history traits, suggesting that variance for these traits is maintained by downwardly skewed distributions of mutational effects. More direct evidence comes from spontaneous mutation accumulation (MA) experiments in Drosophila, Caenorhabditis elegans, wheat, yeast, Escherichia coli, and different MA experiments in Arabidopsis. All of these experiments detected downward trends in MA line population mean fitness relative to control populations as generations accrued. As far as we know, there is no case of even a single MA line maintained by bottlenecking that showed significantly higher fitness than its contemporary control populations. 48

M.C. Whitlock (2004): The overall effect of mutation on a population is strongly dependent on the population size. A large population has many new mutations in each generation, and therefore the probability is high that it will obtain new favorable mutations. This large population also has effective selection against the bad mutations that occur; deleterious mutations in a large population are kept at a low frequency within a balance between the forces of selection and those of mutation. A population with relatively fewer individuals, however, will have lower fitness on average, not only because fewer beneficial mutations arise, but also because deleterious mutations are more likely to reach high frequencies through random genetic drift. This shift in the balance between fixation of beneficial and deleterious mutations can result in a decline in the fitness of individuals in a small population and, ultimately, may lead to the extinction of that population. As such, a change in population size may determine the ultimate fate of a species affected by anthropogenic change.49
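Whitlock's point about population size can be made concrete with Kimura's classic diffusion approximation for the fixation probability of a new mutation. The formula and the numbers below are a standard population-genetics sketch (assuming a semidominant mutation, effective size equal to census size, and initial frequency 1/(2N)), not figures taken from the quoted paper:

```python
import math

def p_fix(N, s):
    """Kimura's fixation probability for a new semidominant mutation with
    selection coefficient s in a diploid population of size N:
        u = (1 - e^(-2s)) / (1 - e^(-4Ns)),
    with the neutral limit u = 1/(2N) when s = 0."""
    if s == 0:
        return 1.0 / (2 * N)
    # expm1 keeps the ratio accurate for tiny s and avoids overflow tricks.
    return math.expm1(-2 * s) / math.expm1(-4 * N * s)

S = -0.001  # slightly deleterious mutation (illustrative)
for N in (100, 1_000, 100_000):
    print(f"N={N:>7}: P(fix) = {p_fix(N, S):.3e}   "
          f"(neutral baseline 1/(2N) = {1 / (2 * N):.3e})")
```

With these numbers, a mutation costing 0.1% in fitness fixes almost as often as a neutral one in a population of 100 (drift dominates), but essentially never in a population of 100,000, which is the size dependence Whitlock describes.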




J. C. Sanford (2022): Genetic Entropy is the genetic degeneration of living things. Genetic entropy is the systematic breakdown of the internal biological information systems that make life alive. Genetic entropy results from genetic mutations, which are typographical errors in the programming of life (life’s instruction manuals). Mutations systematically erode the information that encodes life’s many essential functions. Biological information consists of a large set of specifications, and random mutations systematically scramble these specifications – gradually but relentlessly destroying the programming instructions essential to life. Genetic entropy is most easily understood on a personal level. In our bodies there are roughly 3 new mutations (word-processing errors) every cell division. Our cells become more mutant, and more divergent from each other, every day. By the time we are old, each of our cells has accumulated tens of thousands of mutations. Mutation accumulation is the primary reason we grow old and die. This level of genetic entropy is easy to understand. There is another level of genetic entropy that affects us as a population. Because mutations arise in all of our cells, including our reproductive cells, we pass many of our new mutations to our children. So mutations continuously accumulate in the population – with each generation being more mutant than the last. So not only do we undergo genetic degeneration personally, we also are undergoing genetic degeneration as a population. This is essentially evolution going the wrong way. Natural selection can slow down, but cannot stop, genetic entropy on the population level. Apart from intelligence, information and information systems always degenerate. This is obviously true in the human realm, but is equally true in the biological realm (contrary to what evolutionists claim). The more technical definition of entropy, as used by engineers and physicists, is simply a measure of disorder.
Technically, apart from any external intervention, all functional systems degenerate, consistently moving from order to disorder (because entropy always increases in any closed system). For the biologist it is more useful to employ the more general use of the word entropy, which conveys that since physical entropy is ever-increasing (disorder is always increasing), therefore there is universal tendency for all biological information systems to degenerate over time - apart from intelligent intervention.47
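The per-person arithmetic in Sanford's quote is easy to reproduce. The ~3 mutations per cell division comes from the quoted text; the number of divisions a cell lineage undergoes per year is a purely illustrative assumption (it varies enormously between tissues):

```python
MUT_PER_DIVISION = 3       # figure from the quote above
DIVISIONS_PER_YEAR = 100   # hypothetical average for an actively dividing lineage

# Cumulative somatic mutations in one cell lineage at different ages.
for age in (1, 20, 50, 80):
    total = MUT_PER_DIVISION * DIVISIONS_PER_YEAR * age
    print(f"age {age:>2}: ~{total:,} accumulated somatic mutations in the lineage")
```

Even these rough numbers land in the "tens of thousands by old age" range the quote mentions (3 × 100 × 80 = 24,000).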


23. James A. Shapiro: Evolution: A View from the 21st Century 2011
40. James Shapiro: Physiology of the read–write genome 9 March 2014
41. Spetner, Lee: The Evolution Revolution: Why Thinking People Are Rethinking the Theory of Evolution 2014
42. James A. Shapiro: How life changes itself: The Read–Write (RW) genome 2013 Sep
43. Science Daily: Study challenges evolutionary theory that DNA mutations are random  January 12, 2022
44. J. Grey Monroe: Mutation bias reflects natural selection in Arabidopsis thaliana 12 January 2022
45. Ryan M. Hull: Environmental change drives accelerated adaptation through stimulated copy number variation June 27, 2017
47. J. C. Sanford: Genetic Entropy 2022
48. Michael Lynch: Toward a Realistic Model of Mutations Affecting Fitness 2003 Mar
49. Michael C. Whitlock: Fixation of New Mutations in Small Populations 2004
50. Uncommon descent: Some Thoughts From A Reader On Behe’s Vindication At Lehigh February 18, 2021
51. Sean W Buskirk: Adaptive evolution of nontransitive fitness in yeast Dec 29, 2020
52. Natural History Museum: What is natural selection?
53. Aditi Gupta: Evolution of Genome Size in Asexual Digital Organisms 16 May 2016
54. D. Joseph: GENETIC DEGENERATION—EVIDENCE FOR INDEPENDENT ORIGINS  August 15, 2021
55. John C. Sanford: Multiple Overlapping Genetic Codes Profoundly Reduce the Probability of Beneficial Mutation 2013
58. John Michael Fischer: Debunking Evolution 2022
59. William DeJong: The Evolutionary Dynamics of Digital and Nucleotide Codes: A Mutation Protection Perspective Feb 13, 2011
60. Niwrad: The Darwinism Contradiction Of Repair Systems  September 15, 2009
61. Marc Tollis: Peto’s Paradox: how has evolution solved the problem of cancer prevention? 13 July 2017
62. J. Scott Turner: The Tinkerer's Accomplice: How Design Emerges from Life Itself 30 September 2010
63. Brandon J. Aubrey: Tumor-Suppressor Functions of the TP53 Pathway 2016
64. A.B. Williams: p53 in the DNA-Damage-Repair Process 2016 May; 6
65. Rodrigo S. Galhardo: Extreme Genome Repair 2012 Apr 4. 
66. T. Devitt: In the lab, scientists coax E. coli to resist radiation damage March 17, 2014
67. Kiyoto Kamagata: How p53 Molecules Solve the Target DNA Search Problem: A Review 2020 Feb; 21
68. Kelly D Sullivan: Mechanisms of transcriptional regulation by p53 2017 Nov 10
69. Emir Senturk: p53 and Cell Cycle Effects After DNA Damage 2016 Jan 14
70. Wasim Feroz: Exploring the multiple roles of guardian of the genome: P53 16 November 2020
71. Rick Durrett: Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution 2008 Nov
72. Michael Behe: The Edge of Evolution: The Search for the Limits of Darwinism 2008
73. Joseph B Fischer: Response to Critics, Part 2: Sean Carroll Jun 27, 2007
74. Michael Behe: The Edge of Evolution: The Search for the Limits of Darwinism 17 June 2008
75. Gert Korthof: Either Design or Common Descent 22 July 2007
76. Uncommon descent: Fossil Discontinuities: A Refutation Of Darwinism And Confirmation Of Intelligent Design  October 13, 2018
77. John Sanford: The waiting time problem in a model hominin population 17 September 2015
78. Rick Durrett: Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution 2008 Nov  3
79. John Sanford: The Origin of Man and the “Waiting Time” Problem August 10, 2016
80. Gerd B. Muller: The extended evolutionary synthesis: its structure, assumptions and predictions 9 July 2015

J. C. Sanford, Genetic Entropy (2005), p. 47: Natural selection has a fundamental problem. It involves the enormous chasm that exists between genotypic change (a molecular mutation) and phenotypic selection (on the level of the whole organism). There needs to be selection for billions of almost infinitely subtle and complex genetic differences on the molecular level. But this can only be done by controlling reproduction on the level of the whole organism. When Mother Nature selects for or against an individual within a population, she has to accept or reject a complete set of 6 billion nucleotides—all at once! It’s either take the whole book or have nothing of it. In fact, Mother Nature never sees the individual nucleotides. She sees the whole organism. She never has the luxury of seeing, or selecting for, any particular nucleotide. We start to see what a great leap of faith is required to believe that by selecting or rejecting a whole organism, Mother Nature can precisely control the fate of billions of individual misspellings within the assembly manual.

Nearly-neutral mutations have infinitesimally small effects on the genome as a whole. Mutations at all near-neutral nucleotide positions are automatically subject to random drift, meaning they are essentially immune to selection. Their fitness effects are so miniscule that they are masked by even the slightest fluctuations, or noise, in the biological system. —Genetic Entropy, J. C. Sanford,  p. 72

Noise always remains a severe constraint to natural selection. Under artificial conditions, plant and animal breeders have been able to very successfully select for a limited number of traits. They have done this by employing intelligent design to deliberately minimize noise. They have used blocking techniques, replication, statistical analysis, truncation selection, and highly controlled environments. Natural selection does none of this. It is, by definition, a blind and uncontrolled process, subject to unconstrained noise and unlimited random fluctuations. —Genetic Entropy, J. C. Sanford,  p. 97

The problem of near-neutrality is much more severe for beneficial mutations than for deleterious mutations. Essentially every beneficial mutation must fall within Kimura’s “no-selection zone.” All such mutations can never be selected for. —Genetic Entropy, J. C. Sanford,  p. 136

There is only one evolutionary mechanism. That mechanism is mutation/selection (the Primary Axiom). There is no viable alternative mechanism for the spontaneous generation of genomes. It is false to say that mutation/selection is only one of various mechanisms of evolution. There are several types of mutations and there are several types of selection, but there is still only one basic evolutionary mechanism (mutation/selection). The demise of the Primary Axiom leaves evolutionary theory without any viable mechanism. Without any naturalistic mechanism, evolution is not significantly different from any faith-based religion. —Genetic Entropy, J. C. Sanford,  p. 205 - 206 47

Carter, R. (2012): “When living things reproduce, they make a copy of their DNA and pass this to their progeny. From time to time, mistakes occur, and the next generation does not have a perfect copy of the original DNA. These copying errors are known as mutations. Most people think that ‘natural selection’ can dispose of harmful mutations by eliminating individuals that carry them. But ‘natural selection’ properly defined simply means ‘differential reproduction’, meaning some organisms leave more progeny than others based on the mutations they carry and the environment in which they live. Moreover, reproductive success is only affected by mutations that have a significant effect. Unless mutations cause a noticeable reduction in reproductive rates, the organisms that carry them will be just as successful in leaving offspring as all the others. In other words, if the mutations aren’t ‘bad’ enough, selection can’t ‘see’ them, cannot eliminate them, and the mutations will accumulate. The result is ‘genetic entropy’. Each new generation carries all the mutations of previous generations plus their own. Over time, all these very slightly harmful mutations build up to a point that, in combination, they start to have serious effects on reproductive fitness. The downward spiral becomes unstoppable because every member of the population has the same problem: natural selection can’t choose between ‘fit’ and ‘less fit’ individuals if every member of the population is, more or less, equally mutated. The population descends into sickness and finally becomes extinct. There’s simply no way to stop it.” 56
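Carter's verbal argument is essentially Muller's ratchet: in a small asexual population, mutations individually too mild for selection to "see" still accumulate. The toy Wright-Fisher simulation below (population size, mutation rate, and fitness cost are all illustrative assumptions) shows the least-loaded class being lost and the mean mutation load climbing; it illustrates the dynamic, not the quantitative claims:

```python
import random

random.seed(0)

N = 50             # small asexual population (assumed)
U = 0.5            # chance of one new deleterious mutation per birth (assumed)
S = 0.002          # fitness cost per mutation, individually near-neutral
GENERATIONS = 300

pop = [0] * N      # deleterious-mutation count carried by each individual

def next_generation(pop):
    # Parents reproduce in proportion to fitness (1 - S)^mutations ...
    weights = [(1 - S) ** k for k in pop]
    parents = random.choices(pop, weights=weights, k=N)
    # ... and each offspring may gain one new mutation.
    return [k + (1 if random.random() < U else 0) for k in parents]

for _ in range(GENERATIONS):
    pop = next_generation(pop)

mean_load = sum(pop) / N
print(f"mean load after {GENERATIONS} generations: {mean_load:.1f} mutations")
print(f"least-loaded individual carries {min(pop)} mutations (ratchet clicks)")
```

Because selection against any single mutation is weaker than drift at this population size, the mutation-free class is quickly lost and cannot be regenerated, so the minimum load only ratchets upward; with larger N or larger S the same code shows selection holding the load near mutation-selection balance instead.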

In 2021, Rob Stadler wrote at Uncommon Descent, a website serving the Intelligent Design community: The paper "Adaptive evolution of nontransitive fitness in yeast" demonstrates by experimental evolution that evolution actually favors devolution, resulting in a less fit organism. [For me, experimental evolution provides the highest level of confidence among all evidence for evolution, because it is repeatable, directly observable, and prospectively designed, making it possible to reduce the influence of bias and assumptions.]

Where did this paper come from? The Department of Biological Sciences at Lehigh University. So, you think, it must have come from our own Mike Behe — that would only make sense. But no, this paper was authored by those who oppose Mike at his own university! Remember, these same authors wrote a scathing review of Darwin Devolves in the journal Evolution. Now, somehow, they must hold their position of opposing Darwin Devolves, while presenting compelling evidence to support Darwin Devolves. Quite a conundrum! 50

In the Discussion, at the end of the paper, the authors confess:
Sean W Buskirk (2020): Our results show that the continuous action of selection can give rise to genotypes that are less fit compared to a distant ancestor. 51

Question: Let's suppose there was a first Last Universal Common Ancestor (LUCA), or a small population of it. How did it overcome deleterious mutations in order not to go extinct? 

Selection is the result of mechanisms built into organisms that allow them to adapt to the environment. 

"When we see species variation, it is evidence of superior design, of built-in adaptation, not of the power of unguided evolution. Adaptation is one thing; innovation is an entirely different one."

Evidence in recent years has demonstrated that mutations can be far from random: they can be orchestrated by cells to adapt to environmental conditions. As such, adaptation is a purposefully designed, pre-programmed process under the cell's control and regulation, allowing it to react to environmental conditions. This goes diametrically against the orthodox evolutionary view that mutations are mere random accidents.

Spetner, Lee (2014): In my book Not By Chance! I introduced a hypothesis suggesting that much of the evolution we actually observe is the result of organisms’ built-in capability to respond adaptively to environmental inputs. I called it the nonrandom evolutionary hypothesis (NREH). This kind of evolution relies on events that are epigenetic in the broad sense. The type of evolution I have suggested is driven by nonrandom epigenetic change triggered by environmental inputs. I have suggested that an environmental change can cause the genome of an individual to be altered to effect an adaptive response to the change, and this altered form of the genome can be inherited. It is generally recognized that environmental inputs can stimulate epigenetic events, but it is not so generally recognized that a significant fraction of these are adaptive to the environment that did the stimulation. Animals and plants have the built-in ability to respond adaptively to environmental stimuli. This capability enables these plants and animals to adapt quickly to a changing environment. The ability to respond requires that an organism be able to perceive a change in the environment and have a mechanism whereby that perception leads to the activation of a latent gene or other genetic resources, which in turn leads to a phenotypic change that will grant the organism an advantage in the new environment. 

In the last several decades there were already some biologists who felt that neo-Darwinian theory could not account for large-scale evolution (Ho and Saunders 1979, Shapiro 1992, 2009, Johnston and Gottlieb 1990). Noble (2013) claims the central assumptions of neo-Darwinism “have been disproved.” I showed (Spetner 1997) that 

(1) speciation by the neo-Darwinian process is so highly improbable that it should be considered impossible, and 
(2) when random mutations were shown to produce some microevolution, they were not the kind of mutations that could lead to Common Descent even if they were to operate over an unlimited span of time. 

Random point mutations, which neo-Darwinian evolution holds are the source of novelty in evolution, have not been shown to add any information to the genome. Usually, they have been seen to have lost information. I have stated (Spetner 1997) that no random point mutation has been observed that adds information to the genome, and the statement still holds. Some biologists are now beginning to realize that the genetic changes required for evolution have to be non-random. Large nonrandom genetic changes are indeed known to occur, and these changes are under cellular control. An environmental change can be a long-term challenge, and the organism can respond through a heritable change that will serve to adapt it and its progeny to the new environment. The organism can do this through an inherent, built-in capability to alter its genome to enable it to respond to the change. The cell may have other tricks it can do as well to accomplish the same purpose. This capability has some similarity to its ability to exercise its short-term control. James Shapiro has suggested that cells have the capability of doing their own genetic engineering. This capability is built into the cell, which enables organisms to alter their genome to adapt to a changing environment. Organisms thus have the capability to adapt quickly to a new environment. The genetic rearrangements that will reveal the adaptive genes are known to be triggered by inputs from the environment.  41

James Shapiro (2011): Stated in terms of an electronic metaphor, the view of traditional genetics and conventional evolutionary theory is that the genome is a read-only memory (ROM) system subject to change by stochastic damage and copying errors. For over six decades, however, an increasingly prevalent alternative view has gained prominence. The alternative view has its basis in cytogenetic and molecular evidence. This distinct perspective treats the genome as a read-write (RW) memory system subject to nonrandom change by dedicated cell functions. The radical difference between the ROM and RW views of genomic information storage is basic to a 21st Century understanding of all aspects of genome action in living cells. 23

We can distinguish at least seven distinct but interrelated genomic functions essential for survival, reproduction, and evolution: 

1. DNA condensation and packaging in chromatin 
2. Correctly positioning DNA-chromatin complexes through the cell cycle 
3. DNA replication once per cell cycle 
4. Proofreading and repair  
5. Ensuring accurate transmission of replicated genomes at cell division 
6. Making stored data accessible to the transcription apparatus at the right time and place 
7. Genome restructuring when appropriate 

In all organisms, functions 1 through 6 are critical for normal reproduction, and quite a few organisms also require function 7 during their normal life cycles. We humans, for instance, could not survive if our lymphocytes (immune system cells) were incapable of restructuring certain regions of their genomes to generate the essential diversity of antibodies needed for adaptive immunity. In addition, function 7 is essential for evolutionary change. 23

On a side note: one of the hallmarks of design is a specified complex object that performs multiple necessary/essential functions simultaneously, like a Swiss Army multi-tool. Machines and tools of this kind perform functions with multiple meaningful, significant outcomes and purposes: they can operate forward and in reverse, and they incorporate interdependent manufacturing processes (one-pot reactions) to achieve a specific functional outcome. DNA, incorporating the seven life-essential functions mentioned above simultaneously, fits that description perfectly, and is therefore one of the many reasons why DNA points to intelligent design.  

James Shapiro (2011): Discoveries in cytogenetics, molecular biology, and genomics have revealed that genome change is an active cell-mediated physiological process. This is distinctly at variance with the pre-DNA assumption that genetic changes arise accidentally and sporadically. The discovery that DNA changes arise as the result of regulated cell biochemistry means that the genome is best modelled as a read–write (RW) data storage system rather than a read-only memory (ROM). Cells have a broad variety of natural genetic engineering (NGE) functions for transporting, diversifying and reorganizing DNA sequences in ways that generate many classes of genomic novelties; natural genetic engineering functions are regulated and subject to activation by a range of challenging life history events; cells can target the action of natural genetic engineering functions to particular genome locations by a range of well-established molecular interactions, including protein binding with regulatory factors and linkage to transcription; and genome changes in cancer can usefully be considered as consequences of the loss of homeostatic control over natural genetic engineering functions.40

James Shapiro (2013): It is essential for scientists to keep in mind the astonishing reliability and complexity of living cells. Even the smallest cells contain millions of different molecules combined into an integrated set of densely packed and continuously changing macromolecular structures. Depending upon the energy source and other circumstances, these indescribably complex entities can reproduce themselves with great reliability at times as short as 10–20 minutes. Each reproductive cell cycle involves literally hundreds of millions of biochemical and biomechanical events. We must recognize that cells possess a cybernetic capacity beyond our ability to imitate. Therefore, it should not surprise us when we discover extremely dense and interconnected control architectures at all levels. Simplifying assumptions about cell informatics can be more misleading than helpful in understanding the basic principles of biological function. Two dangerous oversimplifications have been (i) to consider the genome as a mere physical carrier of hypothetical units called “genes” that determine particular cell or organismal traits, and (ii) to think of the genome as a digitally encoded Read-Only Turing tape that feeds instructions to the rest of the cell about individual characters. As we are learning from the ENCODE project data, the vast majority of genomic DNA, including many so-called “non-coding” (nc) segments, participate in biologically specific molecular interactions. Moreover, the term “gene” is a theoretical construct whose functional properties and physical structure have never been possible to define rigorously. It is telling that genome sequence annotators used to call protein-coding regions (chiefly in prokaryotic DNA) “genes,” but now use the more neutral terms CDS, for “coding sequence.” The Turing tape idea falls short, as we will see, because it does not take into account direct physical participation of the genome in reproductive and regulatory interactions. 
The concept of a Read-Only Turing genome also fails to recognize the essential Write capability of a universal Turing machine, which fits remarkably well with the ability of cells to make temporary or permanent inscriptions in DNA  42

Gerd B. Müller et al. (2015): Developmental, or phenotypic, plasticity is the capacity of an organism to change its phenotype in response to the environment. Plasticity is ubiquitous across all levels of biological organization. While the evolution of plasticity has been studied for decades, there is renewed interest in plasticity as a cause, and not just a consequence, of phenotypic evolution. For example, plasticity facilitates colonization of novel environments, affects population connectivity and gene flow, contributes to temporal and spatial variation in selection and may increase the chance of adaptive peak shifts, radiations and speciation events. Particularly contentious is the contribution of plasticity to evolution through phenotypic and genetic accommodation. Phenotypic accommodation refers to the mutual and often functional adjustment of parts of an organism during development that typically does not involve genetic mutation. Genetic accommodation may provide a mechanism for rapid adaptation to novel environments. 80 

Ryan M. Hull (2017): The assertion that adaptation occurs purely through natural selection of random mutations is deeply embedded in our understanding of evolution. However, we have demonstrated that a controllable mechanism exists in yeast for increasing the mutation rate in response to at least 1 environmental stimulus and that this mechanism shows remarkable allele selectivity. Cells have a remarkable and unexpected ability to alter their own genome in response to the environment. Evidence for adaptation through genome-wide nonrandom mutation is substantial. 45

More recent scientific investigations have further bolstered this finding. A science news article reported:

Science Daily (2022): Mutations occur when DNA is damaged and left unrepaired, creating a new variation. The scientists wanted to know whether mutation was purely random or something deeper. What they found was unexpected: mutations are very non-random, and non-random in a way that benefits the plant. It's a totally new way of thinking about mutation. Arabidopsis thaliana, or thale cress, is a small, flowering weed considered the "lab rat among plants" because of its relatively small genome of around 120 million base pairs; humans, by comparison, have roughly 3 billion base pairs. It is a model organism for genetics, and lab-grown plants yield many variations. Work began at the Max Planck Institute, where researchers grew specimens in a protected lab environment, which allowed plants with defects that might not have survived in nature to survive in a controlled space. Sequencing of those hundreds of Arabidopsis thaliana plants revealed more than 1 million mutations. Within those mutations a nonrandom pattern was revealed, counter to what was expected. At first glance, what we found seemed to contradict the established theory that initial mutations are entirely random and that only natural selection determines which mutations are observed in organisms. Instead of randomness they found patches of the genome with low mutation rates. In those patches, they were surprised to discover an over-representation of essential genes, such as those involved in cell growth and gene expression. These are the really important regions of the genome: the areas that are the most biologically important are the ones being protected from mutation. These areas are also the most sensitive to the harmful effects of new mutations, so DNA damage repair seems to be particularly effective in these regions. The way DNA was wrapped around different types of proteins was a good predictor of whether a gene would mutate or not. 
It means we can predict which genes are more likely to mutate than others, and it gives us a good idea of what's going on. The findings add a surprising twist to Charles Darwin's theory of evolution by natural selection, because they reveal that the plant has evolved to protect its genes from mutation to ensure survival. The plant has evolved a way to protect its most important places from mutation. 22

J. Grey Monroe (2022): The random occurrence of mutations with respect to their consequences is an axiom upon which much of biology and evolutionary theory rests. This simple proposition has had profound effects on models of evolution developed since the modern synthesis, shaping how biologists have thought about and studied genetic diversity over the past century. From this view, for example, the common observation that genetic variants are found less often in functionally constrained regions of the genome is believed to be due solely to selection after random mutation. Yet, emerging discoveries in genome biology inspire a reconsideration of classical views. It is now known that nucleotide composition, epigenomic features and bias in DNA repair can influence the likelihood that mutations occur at different places across the genome. At the same time, we have learned that specific gene regions and broad classes of genes, including constitutively expressed and essential housekeeping genes, can exist in distinct epigenomic states. This could in turn provide opportunities for adaptive mutation biases to evolve by coupling DNA repair with features enriched in constrained loci. Indeed, evidence that DNA repair is targeted to genic regions and active genes has been found. 

While it will be important to test the degree and extent of mutation bias beyond Arabidopsis, the adaptive mutation bias described here provides an alternative explanation for many previous observations in eukaryotes. Our discovery yields a new account of the forces driving patterns of natural variation, challenging a long-standing paradigm regarding the randomness of mutation.44

Adaptation is an engineered process, which does not happen by accident. The cell receives macroscopic signals from the environment and responds with adaptive, nonrandom mutations. The capacity of mammals and other multicellular organisms to adapt to changing environmental conditions is extraordinary. In order to effectively produce and secrete mature proteins, cellular mechanisms for monitoring the environment are essential. Exposure of cells to various environmental stresses causes accumulation of unfolded proteins and results in the activation of a well-orchestrated set of pathways during a phenomenon known as the unfolded protein response (UPR). Cells have powerful quality control networks consisting of chaperones and proteases that cooperate to monitor the folding states of proteins and to remove misfolded conformers through either refolding or degradation. Free-living organisms, which are more directly exposed to environmental fluctuations, must often survive even harsher folding stresses. These stresses not only disrupt the folding of newly synthesized proteins but can also cause misfolding of already folded proteins. In living organisms, robustness is provided by homeostatic mechanisms. At least five epigenetic mechanisms are responsible for these life-essential processes:

- heat shock factors (HSFs)
- The unfolded protein response (UPR)
- nonhomologous end-joining and homologous recombination
- The DNA Damage Response
- The Response to Oxidative Stress

The cell modulates signaling pathways at the transcriptional, post-transcriptional, and post-translational levels. Complex signaling pathways contribute to the maintenance of systemic homeostasis. Homeostasis is the mechanistic foundation of living organisms.

Homeostasis, from the Greek words for "same" and "steady," refers to any process that living things use to actively maintain fairly stable conditions necessary for survival. It is also synonymous with robustness and adaptability.
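The logic of homeostasis as just defined, actively returning a regulated variable to a stable value after a disturbance, can be sketched as a minimal negative-feedback loop. This is a toy control-theory illustration, not a model of any specific biological pathway; the setpoint, gain, and disturbance values are arbitrary assumptions:

```python
def homeostat(setpoint=1.0, gain=0.5, steps=60, disturb_at=10, disturbance=0.4):
    """Minimal negative-feedback loop: each step the controller corrects a
    fraction (gain) of the deviation from the setpoint, so a perturbation
    decays geometrically instead of persisting."""
    x = setpoint
    trace = []
    for t in range(steps):
        if t == disturb_at:
            x += disturbance          # external shock to the regulated variable
        x += gain * (setpoint - x)    # corrective response toward the setpoint
        trace.append(x)
    return trace

trace = homeostat()
print(f"peak deviation from setpoint:  {max(abs(v - 1.0) for v in trace):.3f}")
print(f"final deviation from setpoint: {abs(trace[-1] - 1.0):.2e}")
```

The trace shows the variable jumping at the disturbance and then converging back to the setpoint, which is the defining behavior of any homeostatic (self-regulating) system.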

This essential characteristic of living cells, homeostasis, is the ability to maintain a steady and more-or-less constant chemical balance in a changing environment. Cell survival requires appropriate proportions of molecular oxygen and various antioxidants. Reactive products of oxygen, called reactive oxygen species (ROS), are amongst the most potent and omnipresent threats faced by cells. Cells damaged by ROS, and irreversibly infected, functionless, and/or potentially oncogenic cells, are destined for persistent inactivation or elimination, respectively. If mechanisms that trigger controlled and programmed cell death (apoptosis) are not present from day one, the organism cannot survive and dies. Simply put, the principle is that all of a multicellular organism's cells are prepared to commit suicide when needed for the benefit of the organism as a whole. They eliminate themselves in a very carefully programmed way so as to minimize damage to the larger organism. On average, in human adults, about 50-70 billion cells die per day. We shed 30,000 to 50,000 skin cells every minute.

1. The control of metabolism is a fundamental requirement for all life, with perturbations of metabolic homeostasis underpinning numerous disease-associated pathologies.
2. Any incomplete metabolic network without the control mechanisms in place to achieve homeostasis would mean disease and cell death.
3. A minimal metabolic network and its control mechanisms had to be in place from the beginning, which means a gradualistic explanation of the origin of biological cells, and of life, is unrealistic. 
Life is an all-or-nothing business and points to a creative act of God.


Metabolic Symphony: The Dance of Life

Metabolism, the conductor of life's symphony,
Controlling the dance, with perfect harmony,
A fundamental requirement for every living cell,
Maintaining homeostasis, where all functions dwell.

Perturbations disrupt, diseases arise,
Metabolic imbalance, a perilous guise,
Without control mechanisms, all in place,
Cell death and disease, a daunting race.

A network incomplete, without the reins,
Life's delicate balance, it must sustain,
Minimal metabolic pathways, a grand design,
From the very beginning, a feat divine.

Gradualistic explanations fall short,
Life's origin, a complex, intricate court,
All-or-nothing, a truth to behold,
Creative act of God, a story untold.

The symphony of life, a majestic play,
Controlled metabolism, in every way,
A testament to wisdom, foresight, and might,
Life's masterpiece, a gift of divine light.

So let us marvel at the wonders of life,
Metabolic symphony, with rhythms so rife,
A tribute to a Creator, whose hand we see,
In every cell's dance, in perfect harmony.

The following molecules and elements must stay in a finely tuned order and balance for life to survive:
- Halogens like chlorine, fluoride, iodine, and bromine: the body needs to maintain a delicate balance between all these elements.
- Molybdenum (Mo) and iron (Fe) are essential micronutrients required for crucial enzyme activities and mutually impact each other's homeostasis; that is, they are interdependent in maintaining homeostatic levels. 
- Potassium plays a key role in maintaining cell function, and it is important in maintaining fluid and electrolyte balance. Potassium-40 is probably the most dangerous light radioactive isotope, yet the one most essential to life. Its abundance must be balanced on a razor’s edge.
- The ability of cells to maintain a large gradient of calcium across their outer membrane is universal. All biological cells have a low cytosolic (the liquid found inside cells) calcium concentration, and can and must keep it low even when the free calcium outside is up to 20,000 times more concentrated! 
- Nutrient uptake and homeostasis must be adjusted to the needs of the organism according to developmental stage and environmental conditions.
- Magnesium is the second most abundant cellular cation after potassium. Its concentration is essential to regulate numerous cellular functions and enzymes.
- Iron is required for the survival of most organisms, including bacteria, plants, and humans. Its homeostasis in mammals must be fine-tuned to avoid iron deficiency with reduced oxygen transport. 
- Phosphate, as a cellular energy currency, drives most biochemical reactions defining living organisms, and thus its homeostasis must be tightly regulated. 
- Zinc (Zn) is an essential heavy metal that is incorporated into a number of human Zn metalloproteins. Zn plays important roles in nucleic acid metabolism, cell replication, and tissue repair and growth, and contributes to intracellular metal homeostasis. 
- Selenium homeostasis and antioxidant selenoproteins in the brain: a lack of finely tuned balance has implications for disorders of the central nervous system.
- Copper ion homeostasis is maintained through regulated expression of genes involved in copper ion uptake. 
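The 20,000-fold calcium gradient mentioned in the list above can be put in quantitative terms with the Nernst equation, which gives the membrane potential at which such a gradient would sit at electrochemical equilibrium. This is a back-of-the-envelope sketch; the temperature and the exact concentration ratio are illustrative assumptions taken from the text, not measured values:

```python
import math

# Physical constants and assumed conditions
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # approximate body temperature, K (assumption)
F = 96485.0    # Faraday constant, C/mol
z = 2          # charge of the Ca2+ ion
ratio = 20000  # assumed [Ca2+] outside / [Ca2+] inside, from the text above

# Nernst equation: equilibrium potential E = (R*T)/(z*F) * ln(ratio)
E_volts = (R * T) / (z * F) * math.log(ratio)
print(f"Nernst potential for a 20,000-fold Ca2+ gradient: {E_volts * 1000:.0f} mV")
```

The result, on the order of 130 mV, illustrates how steep an electrochemical gradient cells maintain continuously across their membranes.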

In the early 1960s, Ernest Nagel and Carl Hempel showed that self-regulated systems are teleological.

In his book The Tinkerer’s Accomplice: How Design Emerges from Life Itself, J. S. Turner writes on page 12:
Although I touch upon ID obliquely from time to time, I do so not because I endorse it, but because it is mostly unavoidable. ID theory is essentially warmed-over natural theology, but there is, at its core, a serious point that deserves serious attention. ID theory would like us to believe that some overarching intelligence guides the evolutionary process: to say the least, that is unlikely. Nevertheless, how design arises remains a very real problem in biology.  My thesis is quite simple: organisms are designed not so much because natural selection of particular genes has made them that way, but because agents of homeostasis build them that way. These agents’ modus operandi is to construct environments upon which the precarious and dynamic stability that is homeostasis can be imposed, and design is the result.62

Comment: Turner does not identify these agents, but Wikipedia describes agents as conscious beings who act with specific goals in mind. In the case of life, this agent made it possible for biological cells to actively maintain fairly stable levels of the various metabolites and molecules necessary for survival. Upon careful examination of the evidence in nature, we are once more justified to infer an intelligent designer as the most case-adequate explanation for the origin of homeostasis and of the capacity for adaptation, commonly called evolution, in all living organisms.



Last edited by Otangelo on Sat Apr 08, 2023 9:32 am; edited 11 times in total


Re: Refuting Darwin, confirming design, posted by Otangelo (Admin), Fri Oct 28, 2022 5:14 am

M.Syvanen (2012): The flow of genes between different species represents a form of genetic variation. When I first started writing in 1982 about the implications of horizontal gene transfer (HGT) for macroevolutionary trends, the existence of the phenomenon was not yet accepted, given that evidence came only from isolated examples of bacterial plasmid transfer events and a case involving a retrovirus in mammals. There was great explanatory power in a theory that incorporated the notion of horizontally transferred genes as a source of genetic variation upon which natural selection acts. Reports of HGT in nature trickled in, but investigators debated the significance of HGT and the methodology for identifying it. In fact, only in the past 14 years has the number of examples of naturally occurring HGT become too large to ignore. With the recent availability of genome sequence data, the pace of discovery has picked up and interest in the phenomenon has increased. We now know that the ability of genes to function perfectly well across species boundaries has resulted in a significant horizontal flow of genes. Whether the genes are transferred by transposons, viruses, bacteria, or other vectors, or perhaps through direct contact or initial hybridization-like events, the horizontal flow of genes is a part of the story of life. Although the phenomenon of HGT is now widely accepted, current theoretical constructs remain quite resistant to many of its deeper implications. The first area I touch upon concerns the role of phylogenetic trees as a model for biological history. A second and related area concerns the continued speculation about and search for what is called the last universal common ancestor (LUCA) and the evolutionary significance of the biological unities. A third question involves the rethinking of higher taxonomic nomenclature. 
The chaotic phylogenetic relationships among plants remain unresolved, and a consideration of horizontal gene flow could help solve the puzzle. 24

Shelly Hamilich (2022): Our results suggest that horizontal gene transfer between hosts and their microbiota is a significant and active evolutionary mechanism that contributed new traits to plants and their commensal microbiota. 26

Comment: Coppedge rightly points out that "Information shared is not the same as information innovated, nor is borrowing a book as difficult as writing one." 27

Rama P. Bhatia (2022): the fitness effects of horizontally transferred genes are highly dependent on the environment. 28

Comment: The same problem that besets natural selection applies to HGT: environmental factors influencing fitness effects would have to be taken into account in order to measure or calculate the influence of HGT on fitness, and since that variable is stochastic and cannot be measured, it becomes de facto impossible to determine to what degree HGT influences fitness in populations in their natural environment.  

Irreducible complexity 

C.Darwin (1860): “If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find no such case.” 30

Irreducible complexity, a term popularized by Michael Behe in his well-known book Darwin's Black Box, falsifies the claim that evolution explains the origin of biocomplexity and organismal form. If natural selection makes intelligent design superfluous, is it capable of instantiating, by slight, successive modifications over time, proteins, cell types, organs, and organ systems that have a function only in cooperation with other functional parts of the organism, or with the organism as a whole? Does it select for structures, functions, traits, or what? 

Behe, Darwin's Black Box (1996), page 39: By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. An irreducibly complex biological system, if there is such a thing, would be a powerful challenge to Darwinian evolution.22

An irreducibly complex system is characterized by five points:
1. a single system composed of several well-matched, interacting parts
2. that contribute to the basic function
3. the removal of any one of the parts causes the system to effectively cease functioning
4. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system
5. any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.

1. Robert Carter: Genetic entropy and simple organisms 25 October 2012
2. Paul R. Ehrlich Natural Selection 1988
3. Natural History Museum: What is natural selection?
4. David Stack: Charles Darwin: Theory of Natural Selection 01 January 2021
5. Ernst Mayr: WHAT EVOLUTION IS A Conversation With Ernst Mayr [12.31.99]
6. J.Dekker: Natural Selection and its Four Conditions 2007
7. S.El-Showk: Natural selection: On fitness 2012
8. Evolution.Berkley: Evolutionary fitness
9. Adam Eyre-Walker: The distribution of fitness effects of new mutations August 2007
10. R. G. Brajesh: Distribution of fitness effects of mutations obtained from a simple genetic regulatory network model  08 July 2019
11. Thomas Bataillon: Effects of new mutations on fitness: insights from models and data 2014 Jul
18. Christopher J Graves: Variability in fitness effects can preclude selection of the fittest 2019 Sep 30
19. Vita Živa Alif: What is the best fitness measure in wild populations? A case study on the power of short-term fitness proxies to predict reproductive value November 19, 2021.
20. Ivana Cvijović: Fate of a mutation in a fluctuating environment August 24, 2015
21. Xia Hua: Darwinism for the Genomic Age: Connecting Mutation to Diversification  07 February 2017
22. Z Patwa: The fixation probability of beneficial mutations 29 July 2008
23. R. G. Brajesh: Distribution of fitness effects of mutations obtained from a simple genetic regulatory network model 08 July 2019
24. David F. Coppedge Evolutionary Fitness Is Not Measurable November 20, 2021
25. Michael Lynch: The frailty of adaptive hypotheses for the origins of organismal complexity May 15, 2007
26. Molly K Burke et al.: Genome-wide analysis of a long-term evolution experiment with Drosophila 2010 Sep 30
27. Ben Bradley: Natural selection according to Darwin: cause or effect? 11 April 2022
28. Adam Levy: How evolution builds genes from scratch 16 October 2019
29. J.Dulle: The (In)adequacy of Darwinian Evolution
30. Matthew Hurles: Gene Duplication: The Genomic Trade in Spare Parts July 13, 2004
31. Alisha K Holloway: Experimental evolution of gene duplicates in a bacterial plasmid model 2007 Feb
32. Joseph Esfandiar: Is gene duplication a viable explanation for the origination of biological information and complexity? 22 December 2010
33. Johan Hallin: Regulation plays a multifaceted role in the retention of gene duplicates November 22, 2019
34. Michael Lynch: The Origins of Genome Architecture 2007
35. Eugene V Koonin: Darwinian evolution in the light of genomics 2009 Mar


12. H. Allen Orr: Testing Natural Selection  2008
13. FRANCISCO J. AYALA: Darwin’s Greatest Discovery: Design Without Designer 2007
14. Paul Gibson : Can Purifying Natural Selection Preserve Biological Information? – May 2013
15. Eugene V. Koonin :Toward a theory of evolution as multilevel learning February 4, 2022
16. Jerry A. Coyne, Why Evolution is True, p. 123. 2009
20. George Ellis: Controversy in Evolutionary Theory: A multilevel view of the issues. 2018
21. Armen Y. Mulkidjanian: Physico-Chemical and Evolutionary Constraints for the Formation and Selection of First Biopolymers: Towards the Consensus Paradigm of the Abiogenic Origin of Life 21 September 2007
22. M.Behe: Darwin's Black Box: The Biochemical Challenge to Evolution 1996
24. Michael Syvanen: Evolutionary Implications of Horizontal Gene Transfer 21 August 2012
25. Libretext: Horizontal Gene Transfer
26. Shelly Hamilich: Widespread horizontal gene transfer between plants and their microbiota August 26, 2022
27. David Coppedge: Gene Sharing Is More Widespread than Thought, with Implications for Darwinism September 20, 2022
28. Rama P. Bhatia: Environment and the Evolutionary Trajectory of Horizontal Gene Transfer April 01, 2022



30. Charles Darwin: Origin of Species: second British edition (1860), page 189
46. Ina Huang: Can we decide the direction of evolution? December 11, 2015





Did life start polyphyletic, diversified, or monophyletic, with a universal common ancestor? 

Most, if not all, science papers today start with the premise that universal common ancestry and the tree of life are true. Darwin's hypothesis, now more than 160 years old, is not questioned; it is the basis upon which all evolutionary investigations are made, accepted dogma and the starting point. The question asked is not: Is the tree of life true, and is universal common ancestry (UCA) true? but rather: How does the evidence fit into the tree? Even among some creationists and intelligent design proponents, universal common ancestry is accepted as a plausible and justifiable possibility. But is it so? And what happens when the nested hierarchy is not detectable and the evidence is lacking? It is simply swept under the rug and ignored. There are many such cases, and scientific researchers keep finding more of them.

If one looks into the scientific literature, however, nothing is certain. O. Zhaxybayeva (2004): There is disagreement on the location of the root of the tree of life; different studies place the root:

1. within the bacterial domain or on the branch that leads to the bacterial domain; 
2. within the eukaryotic domain; 
3. within the archaeal domain; 
4. or yield inconclusive results. The timing of the organismal cenancestor is another unresolved question.41

In the meantime, the Genesis account of the special creation of each kind is often questioned or discarded from the outset. Some argue that Genesis need not be taken literally: it is perfectly fine, they say, to regard it as an ancient Near Eastern myth that uses figurative speech and allegories never intended to be taken literally. Since Genesis is a several-thousand-year-old book, a myth from ancient Near Eastern tribes, perhaps even a copy of still older Mesopotamian texts from the Babylonians, it supposedly deserves neither trust nor a literal reading, especially when talking snakes and angels with fiery, turning swords are mentioned. Yet the Bible contains the highest information/semantic content in world literature. Genesis 1:1, "In the beginning, God created the heavens and the earth", informs us in one short sentence about our origins. In information theory, semantics can be defined as the "weight of meaning" per sentence or per paragraph. There are thousands, maybe hundreds of thousands, of books about origins, the beginning of the universe, life, and biodiversity, but none provide conclusive answers. All that scientific investigators can say in regard to our origins is: "probably, we suppose, we imagine, we theorize, we hypothesize, most likely, we suggest, it seems, it appears", etc. That extends through ALL of evolutionary biology. The Bible, on the other hand, describes the origin of life in definitive terms: according to Genesis, God created life during the creation week from the start and diversified it.

Gen. 1.11: Then God said, "Let the land produce vegetation: seed-bearing plants and trees on the land that bear fruit with seed in it, according to their various kinds." And it was so.
Gen. 1.21: So God created the great creatures of the sea and every living and moving thing with which the water teems, according to their kinds, and every winged bird according to its kind. And God saw that it was good.
Gen. 1.24: And God said, "Let the land produce living creatures according to their kinds: livestock, creatures that move along the ground, and wild animals, each according to its kind." And it was so.
Gen. 1.26/27: Then God said, "Let us make man in our image, in our likeness, and let them rule over the fish of the sea and the birds of the air, over the livestock, over all the earth, and over all the creatures that move along the ground." 27 So God created man in his own image, in the image of God he created him; male and female he created them.

It is important to outline: We have in Genesis a complete account of origins. We can take two stances towards the Genesis account: 1. We believe it, or 2. We don't. Darwin contradicts the Genesis account, claiming that a Universal Common Ancestor existed about 3.5 billion years ago and gave rise to all biological forms and diversity. Which of the two accounts is most likely true?

Alberts (2022): The living world consists of three major divisions, or domains: eukaryotes, bacteria, and archaea. The great variety of living creatures that we see around us are eukaryotes. The name is from Greek, meaning “truly nucleated”, reflecting the fact that the cells of these organisms have their DNA enclosed in a membrane-bound organelle called the nucleus. Visible by simple light microscopy, this feature was used in the early twentieth century to classify living organisms as either eukaryotes (those with a nucleus) or prokaryotes (those without a nucleus). We now know that prokaryotes comprise two of the three major domains of life, bacteria, and archaea. Eukaryotic cells are typically much larger than those of bacteria and archaea; in addition to a nucleus, they typically contain a variety of membrane-bound organelles that are also lacking in prokaryotes. The genomes of eukaryotes also tend to run much larger—containing more than 20,000 genes for humans and corals, for example, compared with 4000–6000 genes for the typical bacteria or archaea. In addition to plants and animals, the eukaryotes include fungi (such as mushrooms or the yeasts used in beer- and bread-making), as well as an astonishing variety of single-celled, microscopic forms of life. 29

Was there a First and a Last Universal Common Ancestor?

In his book, Darwin suggests that all living organisms are related by ascendency, and therefore they are all derived from ancestral species, which migrate around the world and diversify, generating the amazing biodiversity of organisms (Darwin, 1859).

K. Padian (2008): A sketch Darwin made soon after returning from his voyage on HMS Beagle (1831–36) showed his thinking about the diversification of species from a single stock (see Figure). This branching, extended by the concept of common descent, eventually formed an entire 'tree of life', developed enthusiastically by his German disciple Ernst Haeckel in the decades following the Origin. 1

Refuting Darwin, confirming design 41586_2008_Article_BF451632a_Figc_HTML

Charles Darwin's 1837 sketch of the diversification of species from a single stock. Credit: CAMBRIDGE UNIV. LIB./DARWIN-ONLINE.ORG.UK

D.Moran: Though the tree of life idea had been used to visualize taxonomy by Carl Linnaeus, it became foundational as a tool for the development of Darwin's evolutionary hypothesis. Lines connecting groups of organisms branched off to more specific and supposedly related forms. Darwin saw the connections made between groups, and the position of species within a group, as the result of shared similarities through ancestral descent. His theory was one attempt at explaining how those relationships might have come to exist. Ancestry was presumed to give rise to multiple lineages that diverged to create new life forms. Natural selection was the driving force for the divergence of species from a common ancestor. Natural variation within a type of organism was the generator of novel traits. Together, variation and selection would prove how life evolved to its current state.

M.A. Ragan (2009): The rapid growth of genome-sequence data since the mid-1990s is now providing unprecedented detail on the genetic basis of life, and not surprisingly is catalyzing the most fundamental re-evaluation of origins and evolution since Darwin’s day. Several papers in this theme issue argue that Darwin’s tree of life is now best seen as an approximation—one quite adequate as a description of some parts of the living world (e.g. morphologically complex eukaryotes), but less helpful elsewhere (e.g. viruses and many prokaryotes); indeed, one of our authors goes farther, proclaiming the “demise” of Darwin’s tree as a hypothesis on the diversity and seeming naturalness of hierarchical arrangements of groups of living organisms. 16

10 reasons to refute the claim of  Universal Common Ancestry 

1. The DNA replication machinery is not homologous in the 3 domains of life. 

The bacterial core replisome enzymes do not share a common ancestor with the analogous components in eukaryotes and archaea. L.S. Kaguni (2016): Genome sequencing of cells from the three domains of life, bacteria, archaea, and eukaryotes, reveals that most of the core replisome components evolved twice, independently. Thus, the bacterial core replisome enzymes do not share a common ancestor with the analogous components in eukaryotes and archaea, while the archaea and eukaryotic core replisome machinery share a common ancestor. An exception to this are the clamps and clamp loaders, which are homologous in all three domains of life.2

2. Bacteria and Archaea differ strikingly in the chemistry of their membrane lipids. 

S. Jain (2014): The composition of the phospholipid bilayer is distinct in archaea when compared to bacteria and eukarya. In archaea, isoprenoid hydrocarbon side chains are linked via an ether bond to the sn-glycerol-1-phosphate backbone. In bacteria and eukarya on the other hand, fatty acid side chains are linked via an ester bond to the sn-glycerol-3-phosphate backbone. 3 Cell membrane phospholipids are synthesized by different, unrelated enzymes in bacteria and archaea, and yield chemically distinct membranes. Bacteria and archaea have membranes made of water-repellent fatty molecules. Bacterial membranes are made of fatty acids bound to the phosphate group, while archaeal membranes are made of isoprenes bonded to phosphate in a different way. This leads to something of a paradox: since a supposed last universal common ancestor (LUCA) already had an impermeable membrane for exploiting proton gradients, why would its descendants have independently evolved two different kinds of impermeable membrane?

Franklin M. Harold (2014): Membranes also pose one of the most stubborn puzzles in all of cell evolution. Shortly after the discovery of the Archaea, it was realized that these organisms differ strikingly from the Bacteria in the chemistry of their membrane lipids. Archaea make their plasma membranes of isoprenoid subunits, linked by ether bonds to glycerol-1-phosphate; by contrast, Bacteria and Eukarya employ fatty acids linked by ester bonds to glycerol-3-phosphate. There are a few partial exceptions to the rule. Archaeal membranes often contain fatty acids, and some deeply branching Bacteria, such as Thermotoga, favor isoprenoid ether lipids (but even they couple the ethers to glycerol-3-phosphate). This pattern of lipid composition, which groups Bacteria and Eukarya together on one side and Archaea on the other, stands in glaring contrast to what would be expected from the universal tree, which puts Eukarya with the Archaea 14

3. Sequences of glycolytic enzymes differ between Archaea and Bacteria/Eukaryotes. 

S. F. Alnomasy (2017): Some archaeal enzymes have some similarities with bacteria, but most archaeal enzymes have no similarity with the classical glycolytic pathways in Bacteria. 4

Keith A. Webster (2003): There is no evidence of a common ancestor for any of the four glycolytic kinases or of the seven enzymes that bind nucleotides. Genetic, protein and DNA analysis, together with major differences in the biochemistry and molecular biology of all three domains – Bacteria, Archaea and Eukaryota – suggest that the three fundamental cell types are distinct and evolved separately (i.e. Bacteria are not actually pro-precursors of the eukaryotes, which have sequence similarities in particular parts of their biochemistry between both Bacteria or Archaea).  Only a relatively small percentage of genes in Archaea have sequence similarity to genes in Bacteria or Eukaryota. Furthermore, most of the cellular events triggered by intracellular Ca2+ in eukaryotes do not occur in either Bacteria or Archaea. 15

4. There are at least six distinct autotrophic carbon fixation pathways.

If common ancestry were true, an ancestral Wood–Ljungdahl pathway should have become life's one and only principle for biomass production. W. Nitschke (2013): At least six distinct autotrophic carbon fixation pathways have been elucidated during the past few decades 5 Since the claim is that this is how life began fixing carbon, and the first carbon fixation pathways were anaerobic, this represents a major puzzle for proponents of common ancestry, who are led to wonder why an ancestral Wood–Ljungdahl pathway has not become life's one and only principle for biomass production. What is even more puzzling is the fact that searches of the genomes of acetogens for enzymes clearly homologous to those of the methanogenic C1-branch came up empty-handed, with one notable exception: the initial step of CO2 reduction, which is in both cases catalyzed by a molybdo/tungstopterin enzyme from the complex iron-sulfur molybdoenzyme (CISM) superfamily. So the carbon fixation pathways share only some of the same enzymes. This points clearly to a common designer choosing different routes for the same reaction while using partially convergent design. Similarities between living organisms could exist because they have been designed by the same intelligence, just as we can recognize a Norman Foster building by his characteristic style, or a painting by Van Gogh. We expect to see repeated motifs and re-used techniques in different works by the same artist/designer.

5. There is a sharp divide in the organizational complexity of the cell between eukaryotes and prokaryotes

E. V. Koonin (2010): There is a sharp divide in the organizational complexity of the cell between eukaryotes, which have complex intracellular compartmentalization, and even the most sophisticated prokaryotes (archaea and bacteria), which do not. The compartmentalization of eukaryotic cells is supported by an elaborate endomembrane system and by the actin-tubulin-based cytoskeleton. There are no direct counterparts of these organelles in archaea or bacteria. The other hallmark of the eukaryotic cell is the presence of mitochondria, which have a central role in energy transformation and perform many additional roles in eukaryotic cells, such as in signaling and cell death. 6

6. A typical eukaryotic cell is about 1,000-fold bigger by volume than a typical bacterium or archaeon

E. V. Koonin (2010): The origin of eukaryotes is a huge enigma and a major challenge for evolutionary biology. There is a sharp divide in the organizational complexity of the cell between eukaryotes, which have complex intracellular compartmentalization, and even the most sophisticated prokaryotes (archaea and bacteria), which do not. A typical eukaryotic cell is about 1,000-fold bigger by volume than a typical bacterium or archaeon, and functions under different physical principles: free diffusion has little role in eukaryotic cells, but is crucial in prokaryotes. The compartmentalization of eukaryotic cells is supported by an elaborate endomembrane system and by the actin-tubulin-based cytoskeleton. There are no direct counterparts of these organelles in archaea or bacteria. The other hallmark of the eukaryotic cell is the presence of mitochondria, which have a central role in energy transformation and perform many additional roles in eukaryotic cells, such as in signaling and cell death. 6

7.  Horizontal gene transfer (HGT)

E. V. Koonin (2012): Subsequent massive sequencing of numerous, complete microbial genomes have revealed novel evolutionary phenomena, the most fundamental of these being: pervasive horizontal gene transfer (HGT), in large part mediated by viruses and plasmids, that shapes the genomes of archaea and bacteria and call for a radical revision (if not abandonment) of the Tree of Life concept 7

8. RNA Polymerase differences

RNA polymerase differences: Prokaryotes contain only three different promoter elements: the -10 and -35 promoters, and upstream elements. Eukaryotes contain many different promoter elements: the TATA box, initiator elements, the downstream core promoter element, the CAAT box, and the GC box, to name a few. Eukaryotes have three types of RNA polymerase, I, II, and III, while prokaryotes have only one type. Eukaryotes form an initiation complex with the various transcription factors, which dissociate after initiation is completed. There is no such structure in prokaryotes. Another main difference between the two is that transcription and translation occur simultaneously in prokaryotes, whereas in eukaryotes the RNA is first transcribed in the nucleus and then translated in the cytoplasm. RNAs from eukaryotes undergo post-transcriptional modifications including capping, polyadenylation, and splicing. These events do not occur in prokaryotes. mRNAs in prokaryotes tend to contain many different genes on a single mRNA, meaning they are polycistronic. Eukaryotic mRNAs are monocistronic. Termination in prokaryotes is done by either rho-dependent or rho-independent mechanisms. In eukaryotes transcription is terminated by two elements: a poly(A) signal and a downstream terminator sequence. 8

9. Ribosome and ribosome biogenesis differences

Ribosome and ribosome biogenesis differences: Although we could identify E. coli counterparts with comparable biochemical activity for 12 yeast ribosome biogenesis factors (RBFs), only 2 are known to participate in bacterial ribosome assembly. This indicates that the recruitment of individual proteins to this pathway has been largely independent in the bacterial and eukaryotic lineages. The bacterial version of a universal ribosomal protein tends to be remarkably different from its archaeal equivalent, the same being true, even more dramatically, for the aminoacyl-tRNA synthetases. In both cases, in a sequence alignment, a position constant in composition in the Bacteria tends to be so in its archaeal homolog as well, but the archaeal and bacterial compositions for that position often differ from each other. Moreover, among the aminoacyl-tRNA synthetases, a total lack of homology between large (and characteristic) sections of the bacterial version of a molecule and its archaeal counterpart is common. 9 

10. The replication promoters of bacterial and yeast genes have different structures

A. C. Leonard (2013): Like the origins of DNA replication, the promoters of bacterial and yeast genes have different structures, are recognized by different proteins, and are not exchangeable. The absolute incompatibility between prokaryote (e.g., E. coli) and eukaryote (e.g., yeast) origins of replication and promoters, as well as DNA replication, transcription, and translation machineries, stands as a largely unrecognized challenge to the evolutionary view that the two share a common ancestor. 10

No common ancestor for Viruses

Eugene V. Koonin (2020): In the genetic space of viruses and MGEs (mobile genetic elements), no genes are universal or even conserved in the majority of viruses. Viruses have several distinct points of origin, so there has never been a last common ancestor of all viruses. 11

Viruses and the tree of life (2009): Viruses are polyphyletic: In a phylogenetic tree, the characteristics of members of taxa are inherited from previous ancestors. Viruses cannot be included in the tree of life because they do not share characteristics with cells, and no single gene is shared by all viruses or viral lineages. While cellular life has a single, common origin, viruses are polyphyletic – they have many evolutionary origins. Viruses don’t have a structure derived from a common ancestor.  Cells obtain membranes from other cells during cell division. According to this concept of ‘membrane heredity’, today’s cells have inherited membranes from the first cells.  Viruses have no such inherited structure.  They play an important role by regulating population and biodiversity. 12

Eugene V. Koonin (2017): The entire history of life is the story of virus–host coevolution. Therefore the origins and evolution of viruses are an essential component of this process. A signature feature of the virus state is the capsid, the proteinaceous shell that encases the viral genome. Although homologous capsid proteins are encoded by highly diverse viruses, there are at least 20 unrelated varieties of these proteins. Viruses are the most abundant biological entities on earth and show remarkable diversity of genome sequences, replication and expression strategies, and virion structures. Evolutionary genomics of viruses revealed many unexpected connections but the general scenario(s) for the evolution of the virosphere remains a matter of intense debate among proponents of the cellular regression, escaped genes, and primordial virus world hypotheses. A comprehensive sequence and structure analysis of major virion proteins indicates that they evolved on about 20 independent occasions. Virus genomes typically consist of distinct structural and replication modules that recombine frequently and can have different evolutionary trajectories. The present analysis suggests that, although the replication modules of at least some classes of viruses might descend from primordial selfish genetic elements, bona fide viruses evolved on multiple, independent occasions throughout the course of evolution by the recruitment of diverse host proteins that became major virion components. 13

Comment: The importance of the admission that viruses do not share a common ancestor cannot be overstated. Researchers also admit that, under a naturalistic framework, the origin of viruses remains obscure and unexplained. One reason is that viruses depend on a cell host in order to replicate. Another is that the virus capsid shells that protect the viral genome are unique; there is no counterpart in cellular life. A science paper that I quote below describes capsids with a "geometrically sophisticated architecture not seen in other biological assemblies". This seems to be interesting evidence of design. The claim that their origin has something to do with evolution is also misleading: evolution plays no role in explaining either the origin of life or the origin of viruses. The fact that "no single gene is shared by all viruses or viral lineages" prohibits drawing a tree of viruses leading to a common ancestor.

D M Raup (1983):  Life forms are made possible by the remarkable properties of polypeptides. It has been argued that there must be many potential but unrealized polypeptides that could be used in living systems. The number of possible primary polypeptide structures with lengths comparable to those found in living systems is almost infinite. This suggests that the particular subset of polypeptides of which organisms are now composed is only one of a great many that could be associated in viable biochemistries. There is no taxonomic category available to contain all life forms descended from a single event of life origin. Here, we term such a group, earthly or otherwise, a bioclade. If more than one bioclade survives, life is polyphyletic. If only one survives, it is monophyletic. We conclude that multiple origins of life in the early Precambrian is a reasonable possibility.21

W. Ford Doolittle (2007): Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation. Pattern pluralism (the recognition that different evolutionary models and representations of relationships will be appropriate, and true, for different taxa or at different scales or for different purposes) is an attractive alternative to the quixotic pursuit of a single true TOL.20

Douglas L. Theobald (2010): In all cases tried, with a wide variety of evolutionary models (from the simplest to the most parameter rich), the multiple-ancestry models for shuffled data sets are preferred by a large margin over common ancestry models (LLR on the order of a thousand), even with the large internal branches. 17

C. P. Kempes (2021): We argue for multiple forms of life realized through multiple different historical pathways. From this perspective, there have been multiple origins of life on Earth—life is not a universal homology. By broadening the class of originations, we significantly expand the data set for searching for life.  We define life as the union of two crucial energetic and informatic processes producing an autonomous system that can metabolically extract and encode information from the environment of adaptive/survival value and propagate it forward through time. We provide a new perspective on the origin of life by arguing that life has emerged many times on Earth and that there are many forms of extant life coexisting on a variety of physical substrates. The ultimate theory of life will certainly have ingredients from abstract theories of engineering, computation, physics, and evolution, but we expect will also require new perspectives and tools, just as theories of computation have.  It should be able to highlight life as the ultimate homoplasy (convergence) rather than homology, where life is discovered repeatedly from many different trajectories.

A science forum was held at Arizona State University in February 2011, where the following dialogue between Dawkins and Venter was reported:

Venter: I'm not so sanguine as some of my colleagues here that there's only one life form on this planet. We have a lot of different types of metabolism, different organisms. I wouldn't call you the same life form as the one we have that lives in pH 12 base, which would dissolve your skin if we dropped you in it. The same genetic code? Well, you don't have the same genetic code. In fact, the mycoplasmas use a different genetic code, and it would not work in your cells. So there are a lot of variations on the theme.
Dawkins: But you're not saying it belongs to a different tree of life from me.
Venter: Well, I think the tree of life is an artifact of some early scientific studies that aren't really holding up. So there may be a bush of life, though I don't like that word either. There is not a tree of life. In fact, from our deep sequencing of organisms in the ocean, out of the roughly 60 million different unique gene sets we now have, we found 12 that looked like a very, very deep branching, perhaps a fourth domain of life. That is obviously extremely rare, showing up in only those few sequences, but it's still DNA-based. But given the diversity we have in the DNA world, I'm not saying we're ready to throw out the DNA world. 18 19

From the Last Universal Common Ancestor, LUCA, to Eukaryotic cells

C. Woese (2002): The evolution of modern cells is arguably the most challenging and important problem the field of Biology has ever faced. 40

G. E. Mikhailovsky (2021): It is puzzling why life on Earth consisted of prokaryotes for up to 2.5 ± 0.5 billion years (Gy) before the appearance of the first eukaryotes. This period, from LUCA (Last Universal Common Ancestor) to LECA (Last Eukaryotic Common Ancestor), we have named the Lucacene, to suggest all prokaryotic descendants of LUCA before the appearance of LECA. The structural diversity of eukaryotic organisms is very large, while the morphological diversity of prokaryotic cells is immeasurably lower. 37

Eukaryotes

As everywhere in evolutionary biology, the claim is that things went from less complex to more complex over long periods of time.

D. A. Peattie:  The eukaryote has structural features that allow it to communicate better than prokaryotes, features that permit cellular aggregation and multicellular life. In contrast, the more primitive prokaryotes are less well-equipped for intercellular communication and cannot readily organize into multicellular organisms. Not only do eukaryotic cells allow larger and more complex organisms to be made, but they are themselves larger and more complex than prokaryotic cells. Whether eukaryotic cells live singly or as part of a multicellular organism, their activities can be much more complex and diversified than those of their prokaryotic counterparts. In prokaryotes, all internal cellular events take place within a single compartment, the cytoplasm. Eukaryotes contain many subcellular compartments, called organelles. Even single-celled eukaryotes can display remarkable complexity of function; some have features as specialized and diverse as sensory bristles, mouth parts, muscle-like contractile bundles, or stinging darts.

Structure of a typical animal cell

On a very fundamental level, eukaryotes and prokaryotes are similar. They share many aspects of their basic chemistry, physiology and metabolism. Both cell types are constructed of and use similar kinds of molecules and macromolecules to accomplish their cellular work. In both, for example, membranes are constructed mainly of fatty substances called lipids, and molecules that perform the cell's biological and mechanical work are called proteins.
Eukaryotes and prokaryotes both use the same chemical relay system to make protein. A permanent record of the code for all of the proteins the cell will require is stored in the form of DNA. Because DNA is the master copy of the cell's (or organism's) genetic make-up, the information it contains is absolutely crucial to the maintenance and perpetuation of the cell. As if to safeguard this archive, the cell does not use the DNA directly in protein synthesis but instead copies the information onto a temporary template of RNA, a chemical relative of DNA. Both the DNA and the RNA constitute a "recipe" for the cell's proteins. The recipe specifies the order in which amino acids, the chemical subunits of proteins, should be strung together to make the functional protein. Protein synthesis both in eukaryotes and prokaryotes takes place on structures called ribosomes, which are composed of RNA and protein. This illustrates one way in which prokaryotes and eukaryotes are similar and highlights the idea that differences between these organisms are often architectural. In other words, both cell types use the same bricks and mortar, but the structures they build with these materials vary dramatically.
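The DNA-to-RNA-to-protein relay described above can be sketched in a few lines of code. This is a toy illustration, not a biological simulation: the coding strand is copied into mRNA (T becomes U), and the mRNA is then read in three-letter codons until a stop codon, the way a ribosome strings amino acids together. Only a handful of codons from the standard genetic code are included here.

```python
# A minimal sketch of the DNA -> mRNA -> protein relay.
# Only a tiny subset of the 64 codons of the standard genetic code is mapped.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """Copy the coding strand into mRNA (thymine is replaced by uracil)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list[str]:
    """Read codons in order until a stop codon, as a ribosome would."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

mrna = transcribe("ATGTTTGGCTAA")
print(translate(mrna))  # ['Met', 'Phe', 'Gly']
```

The point of the sketch matches the paragraph above: prokaryotes and eukaryotes both run this same relay; where they differ is in the cellular architecture around it.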

The prokaryotic cell can be compared to a studio apartment: a one-room living space that has a kitchen area abutting the living room, which converts into a bedroom at night. All necessary items fit into their own locations in one room. There is an everyday, washable rug. Room temperature is comfortable, not too hot, not too cold. Conditions are adequate for everything that must occur in the apartment, but not optimal for any specific activity. In a similar way, all of the prokaryote's functions fit into a single compartment. The DNA is attached to the cell's membrane. Ribosomes float freely in the single compartment. Cellular respiration, the process by which nutrients are metabolized to release energy, is carried out at the cell membrane; there is no dedicated compartment for respiration. A eukaryotic cell can be compared to a mansion, where specific rooms are designed for particular activities. The mansion is more diverse in the activities it supports than the studio apartment. It can accommodate overnight guests comfortably and support social activities for adults in the living room or dining room, for children in the playroom. The baby's room is warm and furnished with bright colors and a soft, thick carpet. The kitchen has a stove, a refrigerator and a tile floor. Items are kept in the room that is most appropriate for them, under conditions ideal for the activities in that specific room. A eukaryotic cell resembles a mansion in that it is subdivided into many compartments. Each compartment is furnished with items and conditions suitable for a specific function, yet the compartments work together to allow the cell to maintain itself, to replicate and to perform more specialized activities.

Taking a closer look, we find three main structural aspects that differentiate prokaryotes from eukaryotes. The definitive difference is the presence of a true (eu) nucleus (karyon) in the eukaryotic cell. The nucleus, a double-membrane casing, sequesters the DNA in its own compartment and keeps it separate from the rest of the cell. In contrast, no such housing is provided for the DNA of a prokaryote. Instead the genetic material is tethered to the cell membrane and is otherwise allowed to float freely in the cell's interior. It is interesting to note that the DNA of eukaryotes is attached to the nuclear membrane, in a manner reminiscent of the attachment of prokaryotic DNA to the cell's outer membrane. 28

The greatest discontinuity in evolution: The gap from prokaryotes to eukaryotes


R. Y. Stanier et al. (1963): "The basic divergence in cellular structure, which separates the bacteria and blue-green algae from all other cellular organisms, represents the greatest single evolutionary discontinuity to be found in the present-day world." 31

E. V. Koonin (2002): The eukaryotic chromatin remodeling machinery, the cell cycle regulation systems, the nuclear envelope, the cytoskeleton, and the programmed cell death (PCD, or apoptosis) apparatus all are such major eukaryotic innovations, which do not appear to have direct prokaryotic predecessors. 25

E. Derelle et al. (2006): The unicellular green marine alga Ostreococcus tauri is the world's smallest free-living eukaryote known to date, and encodes the fewest number of genes. It has been hypothesized, based on its small cellular and genome sizes, that it may reveal the "bare limits" of life as a free-living photosynthetic eukaryote, presumably having disposed of redundancies and presenting a simple organization and very little noncoding sequence. 27 It has a genome size of 12,560,000 base pairs, 8,166 genes, and 7,745 proteins. In comparison, one of the simplest free-living bacteria known today, Pelagibacter ubique, gets by with about 1,300 genes in a genome of 1,308,759 base pairs, coding for 1,354 proteins.
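The gap in the quoted figures can be made concrete with a quick calculation. All numbers below are taken directly from the paragraph above; the script only computes the ratios between the smallest known free-living eukaryote and one of the simplest free-living bacteria.

```python
# Genome figures quoted above: Ostreococcus tauri (eukaryote)
# vs. Pelagibacter ubique (bacterium).
ostreococcus = {"base_pairs": 12_560_000, "genes": 8_166, "proteins": 7_745}
pelagibacter = {"base_pairs": 1_308_759, "genes": 1_300, "proteins": 1_354}

for key in ostreococcus:
    ratio = ostreococcus[key] / pelagibacter[key]
    print(f"{key}: {ratio:.1f}x larger in the eukaryote")
```

Even at the "bare limits" of eukaryotic life, the genome is roughly an order of magnitude larger, with about six times as many genes.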

T. Cavalier-Smith (2010):  This radical transformation of cell structure (eukaryogenesis) is the most complex and extensive case of quantum evolution in the history of life. Beforehand earth was a sexless, purely bacterial and viral world. Afterwards sexy, endoskeletal eukaryotes evolved morphological complexity: diatoms, butterflies, corals, whales, kelps, and trees 32

E. Szathmáry (2015): The divide between prokaryotes and eukaryotes is the biggest known evolutionary discontinuity. There is no space here to enter the whole maze of the recent debate about the origin of the eukaryotic cells; suffice it to say that the picture seems more obscure than 20 y ago. How did eukaryotic life evolve? This is one of the most controversial and puzzling questions in evolutionary history. Life began as single-celled, independent organisms that evolved into cells containing membrane-bound, specialized structures known as organelles. What’s clear is that this new type of cell, the eukaryote, is more complex than its predecessors. What’s unclear is how these changes took place. 24

A.Kauko (2018): The origin of eukaryotes is one of the central transitions in the history of life; without eukaryotes there would be no complex multicellular life.36

F.Rana (2019): The origin of eukaryotes is one of the hardest and most intriguing problems in the study of the evolution of life, and arguably, in the whole of biology. On average, the volume of eukaryotic cells is about 15,000 times larger than that of prokaryotic cells.30
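Since volume scales with the cube of linear size, the quoted 15,000-fold volume difference corresponds to only about a 25-fold difference in radius or diameter. This is elementary geometry, not a claim from the source, and can be checked in a couple of lines:

```python
# Eukaryote volume / prokaryote volume, as quoted above.
volume_ratio = 15_000

# Linear dimension scales as the cube root of volume.
linear_ratio = volume_ratio ** (1 / 3)
print(f"linear scale factor: {linear_ratio:.1f}")  # ~24.7
```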

Josip Skejo (2021): Eukaryotic cells are vastly more complex than prokaryotic cells, as evidenced by their endomembrane system 26

A. Spang (2022): Archaea and Bacteria are often referred to as primary domains of life while eukaryotes form a secondary domain of life. The prevalence of horizontal gene transfer (HGT) via both mobile genetic elements (MGEs) and viruses but also directly between distinct organisms has to some extent questioned the concept of a Tree of Life (TOL), which may be more correctly represented as a network including both vertical and horizontal branches.

Arizona State University (2022): How the crucial transition from prokaryote to eukaryote came to be remains a central mystery that biologists are still trying to untangle. 40

Origin of eukaryotes

M. A. O’Malley (2015):  There are very roughly two main hypotheses for the evolution of eukaryotes: one sees the process as mutation-driven, with lateral acquisitions of genes and organisms also involved but in a causally secondary way; the other sees eukaryogenesis as driven causally by the acquisition of the mitochondrion. The acquisition of the mitochondrion is often portrayed as a one-off event that instigated a rapid transformation with major evolutionary outcomes 38

Eugene V. Koonin (2015): The origin of eukaryotes is one of the hardest and most intriguing problems in the study of the evolution of life, and arguably, in the whole of biology. Compared to archaea and bacteria (collectively, prokaryotes), eukaryotic cells display a qualitatively higher level of complexity of intracellular organization. Unlike the great majority of prokaryotes, eukaryotic cells possess an extended system of intracellular membranes that includes the eponymous eukaryotic organelle, the nucleus, and fully compartmentalizes the intracellular space. In eukaryotic cells, proteins, nucleic acids and small molecules are distributed by specific trafficking mechanisms rather than by free diffusion as is largely the case in bacteria and archaea. Thus, eukaryotic cells function on different physical principles compared to prokaryotic cells, which is directly due to their (comparatively) enormous size. The gulf between the cellular organizations of eukaryotes and prokaryotes is all the more striking because no intermediates have been found. The actin and tubulin cytoskeletons, the nuclear pore, the spliceosome, the proteasome, and the ubiquitin signalling system are only a few of the striking examples of the organizational complexity that seems to be a ‘birthright’ of eukaryotic cells. The formidable problem that these fundamental complex features present to evolutionary biologists makes Darwin’s famous account of the evolution of the eye look like a simple, straightforward case. Indeed, so intimidating is the challenge of eukaryogenesis that the infamous notion of ‘irreducible complexity’ has sneaked into serious scientific debate: 

C. G. Kurland: Genomics and the Irreducible Nature of Eukaryote Cells (2006): Data from many sources give no direct evidence that eukaryotes evolved by genome fusion between archaea and bacteria. Because their cells appear simpler, prokaryotes have traditionally been considered ancestors of eukaryotes. Here, we review recent data from proteomics and genome sequences suggesting that eukaryotes are a unique primordial lineage. Mitochondria, mitosomes, and hydrogenosomes are a related family of organelles that distinguish eukaryotes from all prokaryotes. Recent analyses also suggest that early eukaryotes had many introns, and RNAs and proteins found in modern spliceosomes. Nuclei, nucleoli, Golgi apparatus, centrioles, and endoplasmic reticulum are examples of cellular signature structures (CSSs) that distinguish eukaryote cells from archaea and bacteria. Comparative genomics, aided by proteomics of CSSs such as the mitochondria, nucleoli, and spliceosomes, reveals hundreds of proteins with no orthologs  (Orthologs are genes in different species that evolved from a common ancestral gene by speciation) evident in the genomes of prokaryotes; these are the eukaryotic signature proteins (ESPs). The many ESPs within the subcellular structures of eukaryote cells provide landmarks to track the trajectory of eukaryote genomes from their origins. In contrast, hypotheses that attribute eukaryote origins to genome fusion between archaea and bacteria are surprisingly uninformative about the emergence of the cellular and genomic signatures of eukaryotes (CSSs and ESPs). The failure of genome fusion to directly explain any characteristic feature of the eukaryote cell is a critical starting point for studying eukaryote origins. 34



Last edited by Otangelo on Sat Apr 08, 2023 8:21 am; edited 9 times in total

https://reasonandscience.catsboard.com

albeit followed by a swift refutation by Eugene V. Koonin (2007): The statement that "[e]ukaryote proteins that are rooted in the bacterial or in archaeal clusters are few and far between" is inaccurate. The genomes of both yeast and humans harbor many hundreds of proteins that have readily identifiable homologs among α-proteobacteria but not among archaebacteria, and vice versa. They opine that "[i]t is an attractively simple idea that a primitive eukaryote took up the endosymbiont/mitochondrion by phagocytosis," yet all testable predictions of that idea have failed. By contrast, examples of prokaryotes that live within other prokaryotes show that prokaryotes can indeed host endosymbionts in the absence of phagocytosis, as predicted by competing alternative theories. They misattribute the notion that a eukaryotic "raptor" phagocytosed the mitochondrion to Stanier and van Niel's classical paper, which does not mention mitochondrial origin, and to de Duve's 1982 exposé, which argues for the endosymbiotic origin of microbodies while mentioning "alleged symbiotic adoption" of mitochondria in passing, but without mentioning phagocytosis. Their references are misattributed as examples of "fusion" hypotheses; indeed, they indiscriminately label views on eukaryote origins that differ from their own as fusion hypotheses. Finally, and most disturbing, if contemporary eukaryotic cells are truly of "irreducible nature," as Kurland et al.'s title declares, then no stepwise evolutionary process could have possibly brought about their origin, and processes other than evolution must be invoked. Is there a hidden message in their paper?

which got another response by C. G. Kurland: Our view is that cellular and molecular biology, especially genomics, reveals signs of an ancient complexity of the eukaryotic cell. This new information was not available to older hypotheses for eukaryote origins; they were answering questions that were incompletely formulated. Our primary conclusions regarding the ancestral complexity of the eukaryote cell are illustrated by the microsporidian and the subcellular location of its eukaryote signature proteins (ESPs). Even though microsporidian genomes are among the most heavily reduced in eukaryotes, they still have many ESPs. An anaerobic endoparasitic lifestyle (parasites that live in the tissues and organs of their hosts) has reduced their mitochondria to mitosomes and allowed the characteristic proteins of phagocytosis to be lost. Nevertheless, it is striking that characteristic ESPs are found throughout the cell; nothing in this picture suggests they are chimeric descendants of archaeal and bacterial ancestors. 35

1. Kevin Padian: Darwin's enduring legacy 06 February 2008
2. Laurie S. Kaguni: DNA Replication Across Taxa (Volume 39) (The Enzymes, Volume 39) 2016
3. Samta Jain: Biosynthesis of archaeal membrane ether lipids 2014 Nov 26
4. Sultan F Alnomasy: Insights into Glucose Metabolism in Archaea and Bacteria: Comparison Study of Embden-Meyerhof-Parnas (EMP) and Entner-Doudoroff (ED) Pathways August 2017
5. Wolfgang Nitschke: Beating the acetyl coenzyme A-pathway to the origin of life 2013 Jul 19
6. Eugene V Koonin: The origin and early evolution of eukaryotes in the light of phylogenomics 05 May 2010
7. Eugene V Koonin: Evolution of microbes and viruses: a paradigm shift in evolutionary biology? 2012 Sep 13
8. Brett Deml: Prokaryotic vs. Eukaryotic Transcription 2006
9. Ingo Ebersberger et al.: The evolution of the ribosome biogenesis pathway from a yeast perspective 2014 Feb
10. Alan C. Leonard: DNA Replication Origins 2013 Oct; 5
11. Eugene V. Koonin: Global Organization and Proposed Megataxonomy of the Virus World 4 March 2020
12. Viruses and the tree of life: 19 March 2009
13. Eugene V. Koonin: Multiple origins of viral capsid proteins from cellular ancestors March 6, 2017
14. Franklin M. Harold:  In Search of Cell History: The Evolution of Life's Building Blocks  page 96 October 29, 2014
15. Keith A. Webster: Evolution of the coordinate regulation of glycolytic enzyme genes by hypoxia 01 SEPTEMBER 2003
16. Mark A. Ragan: The network of life: genome beginnings and evolution 2009 Aug 12
17. Douglas L. Theobald: A formal test of the theory of universal common ancestry 2010
18. Youtube: Dr. Craig Venter Denies Common Descent in front of Richard Dawkins! 2011 
19. Evolution News: Venter vs. Dawkins on the Tree of Life — and Another Dawkins Whopper March 9, 2011
20. W. Ford Doolittle Pattern pluralism and the Tree of Life hypothesis February 13, 2007
21. D M Raup: Multiple origins of life. May 1, 1983
22. Christopher P. Kempes: The Multiple Paths to Multiple Life 12 July 2021
23. Anja Spang: Evolving Perspective on the Origin and Diversification of Cellular Life and the Virosphere  26 February 2022
24. Eörs Szathmáry: Toward major evolutionary transitions theory 2.0 April 2, 2015
25. Eugene V Koonin: Origin and evolution of eukaryotic apoptosis: the bacterial connection 28 March 2002
26. Josip Skejo: Evidence for a Syncytial Origin of Eukaryotes from Ancestral State Reconstruction July 2021
27. Evelyne Derelle et al.: Genome analysis of the smallest free-living eukaryote Ostreococcus tauri unveils many unique features 2006 Jul 25
28. K. S. KABNICK: Giardia: A Missing Link between Prokaryotes and Eukaryotes  January 1991
29. B.Alberts: Molecular Biology of the Cell, 7th edition 2022
30. Fazale Rana: Why Mitochondria Make My List of Best Biological Designs May 1, 2019
31. Eugene V Koonin: Eukaryogenesis, LECA Mar 14, 2019
32. Thomas Cavalier-Smith: Origin of the cell nucleus, mitosis and sex: roles of intracellular coevolution 2010
33. Eugene V. Koonin: Origin of eukaryotes from within archaea, archaeal eukaryote and bursts of gene gain: eukaryogenesis just made easier?  9 June 2015
34. C. G. Kurland: Genomics and the Irreducible Nature of Eukaryote Cells 19 MAY 2006
35. Eugene V. Koonin: The Evolution of Eukaryotes 27 APRIL 2007
36. Anni Kauko: Eukaryote specific folds: Part of the whole  18 April 2018
37. George E. Mikhailovsky: LUCA to LECA, the Lucacene: A model for the gigayear delay from the first prokaryote to eukaryogenesis  1 April 2021
38. Maureen A. O’Malley: Endosymbiosis and its implications for evolutionary theory April 16, 2015
39. Dr. Daniel Moran, Ph.D.: Molecular Phylogeny Proves Evolution is False. September 9, 2013
40. Arizona State University New research on the emergence of the first complex cells challenges orthodoxy AUGUST 5, 2022
41. O.Zhaxybayeva: Cladogenesis, coalescence and the evolution of the three domains of life 2004 Apr


Koonin continues: Molecular phylogenetics and phylogenomics revealed fundamental aspects of the origin of eukaryotes. The ‘standard model’ of molecular evolution, derived primarily from the classic phylogenetic analysis of 16S RNA by Woese and co-workers and supported by subsequent phylogenetic analyses of universal genes, identifies eukaryotes as the sister group of archaea, to the exclusion of bacteria. Within the eukaryotic part of the tree, early phylogenetic studies have placed into the root position several groups of unicellular organisms, primarily parasites, that unlike the majority of eukaryotes, lack mitochondria. These organisms have been construed as ‘archezoa’, i.e. the primary amitochondrial eukaryotes that were thought to have hosted the proto-mitochondrial endosymbiont. However, advances in comparative genomics jointly with discoveries of cell biology have put the archezoan scenario of eukaryogenesis into serious doubt. First, it has been shown that all the purported archezoa possess organelles, such as hydrogenosomes and mitosomes, that appeared to be derivatives of the mitochondria. These mitochondria-like organelles typically lack genomes but contain proteins encoded by genes of apparent bacterial origin that encode homologous mitochondrial proteins in other eukaryotes. Combined, the structural and phylogenetic observations leave no reasonable doubt that hydrogenosomes and mitosomes indeed evolved from the mitochondria. Accordingly, no primary amitochondrial eukaryotes are currently known, suggesting that the primary a-proteobacterial endosymbiosis antedated the LECA. Compatible with this conclusion, subsequent, refined phylogenetic studies have placed the former ‘archezoa’ within different groups of eukaryotes indicating that their initial position at the root was an artefact caused by their fast evolution, most probably causally linked to the parasitic lifestyle. 
These parallel developments left the archezoan scenario without concrete support but have not altogether eliminated its attractiveness. An adjustment to the archezoan scenario simply posited that the archezoa was an extinct group that had been driven out of existence by the more efficient mitochondrial eukaryotes. A concept predicated on an extinct group of organisms that is unlikely to have left behind any fossils and is refractory to evolutionary reconstruction due to the presence of mitochondria (or vestiges thereof ) in all eukaryotes is quite difficult to refute but can hardly get much traction without any concrete evidence of the existence of archezoa. The radical alternative to the elusive archezoa is offered by symbiogenetic scenarios of eukaryogenesis according to which archezoa, i.e. primary amitochondrial eukaryotes, have never existed, and the eukaryotic cell is the product of a symbiosis between two prokaryotes. Comparative genomic analysis clearly demonstrates that eukaryotes possess two distinct sets of genes, one of which shows phylogenetic affinity with homologues from archaea, whereas the other one includes genes affiliated with bacterial homologues (apart from these two classes, there are many eukaryotic genes of uncertain provenance). The eukaryotic genes of apparent archaeal descent encode, primarily, proteins involved in information processing (translation, transcription, replication, repair), whereas the genes of inferred bacterial origin encode mostly proteins with ‘operational’ functions such as metabolic enzymes, components of membranes and other cellular structures and others. Notably, altogether, the number of eukaryotic protein-coding genes of bacterial origin exceeds the number of ‘archaeal’ proteins about threefold. 
Thus, although many highly conserved, universal genes of eukaryotes indeed appear to be of archaeal origin, the archaeo-eukaryotic affinity certainly does not tell the entire story of eukaryogenesis, not even most of that story if judged by the proportions of genes of apparent archaeal and bacterial descent.

Although several symbiogenetic scenarios that differ in terms of the proposed partners and even the number of symbiotic events involved have been proposed, the simplest, parsimonious one that accounts for both the ancestral presence of mitochondria in eukaryotes and the hybrid composition of the eukaryotic gene complement involves engulfment of an α-proteobacterium by an archaeon. Under this scenario, a chain of events has been proposed that leads from the endosymbiosis to the emergence of eukaryotic innovations such as the endomembrane system, including the nucleus and the cytoskeleton. Subsequently, argument has been developed that the energy demand of a eukaryotic cell that is orders of magnitude higher than that of a typical prokaryotic cell cannot be met by means other than utilization of multiple ‘power stations’ such as the mitochondria.

A major problem faced by this scenario (and symbiogenetic scenarios in general) is the mechanistic difficulty of the engulfment of one prokaryotic cell by another. Although bacterial endosymbionts of certain proteobacteria have been described, such a relationship appears to be a rarity. By contrast, in many unicellular eukaryotes, such as amoeba, engulfment of bacterial cells is routine due to the phagotrophic lifestyle of these organisms. The apparent absence of phagocytosis in archaea and bacteria prompted the reasoning that the host of the proto-mitochondrial endosymbiont was a primitive phagotrophic eukaryote, which implies the presence of an advanced endomembrane system and cytoskeleton. Thus, argument from cell biology seemed to justify rescuing the archezoan scenario, the lack of positive evidence notwithstanding.

The diversity of the outcomes of phylogenetic analysis, with the origin of eukaryotes scattered around the archaeal diversity, has led to considerable frustration and suggested that a ‘phylogenomic impasse’ has been reached, owing to the inadequacy of the available phylogenetic methods for disambiguating deep relationships. 33





What is embodied life? 

In my previous book, On the Origin of Life and Virus World by means of an Intelligent Designer, I described cells as chemical factories, not merely by analogy to human-made factories, but in a literal sense. To be considered alive, cells have to be able to reproduce and self-replicate, and to perform metabolic reactions in which chemicals are transformed and processed through complex enzyme-mediated transformations. They must take up materials, recycle and expel waste products, grow and develop, and evolve, that is, adapt to the environment and speciate. They are complex, requiring millions of components: monomers, polymers, proteins, cell membranes, and so on. Cells must maintain a homeostatic milieu, an inner balance of stable internal conditions. They need energy to perform their cellular functions, and they must be able to respond to external stimuli. Living cells depend on complex information systems for the storage, transmission, transcription, translation, decoding, and expression of the information used to direct the assembly and operation of the cell factory. A cell must be able to replicate and transmit its genetic and epigenetic material to the next generation, the daughter cells. And cells would not be able to survive without the ability to detect and repair errors.

Y.Sebag: Just briefly, to get a feel for what cells need to do, let us consider the basic autonomous cell whose task is to reproduce and synthesize the parts it needs from raw materials.

1. Information System - Building something which can reproduce and synthesize its own parts from raw materials requires a coordinated series of steps. Chemicals cannot do this. On their own, they just combine chaotically or crystallize into regular patterns such as in snowflakes. Hence, there must be information (ex. RNA or the like) storing the information to orchestrate the assembly.

2. Energy System - information by itself is useless. Implementing the instructions requires energy. A system that cannot generate or source energy just drifts chaotically or crystallizes into simple forms, forced to follow the path of least resistance. Hence, a system of producing or sourcing energy is necessary along with subsystems of distribution and management of that energy so that it goes to the proper place.

3. Copy System - in order to reproduce itself, the device must be able to implement the instructions of the information system using the energy system. This includes the ability to rebuild all critical infrastructure such as the information and energy systems and even the copy system itself.

4. Growth System - Without a growth system, the device will reduce itself every time it reproduces and vanish to zero-size after a few generations. This growth system necessitates subsystems of ingestion of materials from the outside world, processing of those materials, and assembling those materials into the necessary parts. This alone is a formidable chemical factory.

5. Transportation System - the materials must be moved to the proper places. Hence, a transportation system is needed for transporting raw materials and products from one place to another within the cell. Likewise, a system for managing the incoming of raw materials and outgoing of waste materials of all these chemical reactions.

6. Timing System - the growth system must also be coordinated with the reproduction system. Otherwise, if reproduction occurs faster than growth, it will reduce size faster than it grows and vanishes after a few generations. Hence, a timing or feedback mechanism is needed.

7. Communication System - signalling is needed to coordinate all the tasks so that they all work together. The reproduction system won't work without coordination with the growth and power systems. Likewise, the power system by itself is useless without the growth and reproduction systems. Only when all the systems and "circuitry" are in place and the power is turned on is there hope for the various interdependent tasks to start working together. Otherwise, it is like turning on a computer which has no interconnections between the power supply, CPU, memory, hard drive, video, operating system, etc - nothing to write home about.29
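Sebag's interdependence argument can be caricatured in a few lines of code. This is a toy model, not a biological simulation: the subsystem names follow the seven-item list above, and the `viable` function (a hypothetical helper introduced here) simply encodes the claim that the cell functions only when every subsystem is present at once.

```python
# Toy model of the interdependence argument: a minimal self-reproducing
# cell needs all seven subsystems simultaneously; remove any one and
# the whole ceases to function.
REQUIRED = {
    "information", "energy", "copy", "growth",
    "transport", "timing", "communication",
}

def viable(present: set) -> bool:
    """The cell functions only if every required subsystem is present."""
    return REQUIRED <= present  # subset test: REQUIRED is contained in present

print(viable(REQUIRED))               # all seven in place -> True
print(viable(REQUIRED - {"energy"}))  # information without energy -> False
```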


Cells are full of chemical factories and machines in a literal sense

Cells can be thought of as literal chemical factories and machines because they are constantly performing biochemical reactions and processes to maintain their functions and sustain life. They take in raw materials from their environment and convert them into various products that the cell needs to survive, such as proteins, lipids, and energy.

Each cell can be seen as a complex network of interlocking assembly lines, with large protein machines and complexes working together in a highly coordinated manner. For example, the nucleolus is a large factory where non-coding RNAs are transcribed, processed, and assembled with proteins to form ribonucleoprotein complexes. The endoplasmic reticulum serves as a factory for the production of almost all of the cell's lipids, and in response to DNA damage, repair factories are formed where damaged DNA is brought together and repaired.

Protein assemblies in cells contain highly coordinated moving parts, with intermolecular collisions restricted to a small set of possibilities, similar to machines invented by humans. These assemblies contain ordered conformational changes in one or more proteins driven by nucleoside triphosphate hydrolysis or other sources of energy, allowing them to function in a polarized fashion along a filament or nucleic acid strand, increase the fidelity of biological reactions, or catalyze the formation of protein complexes.

The complexity of cells can be difficult to grasp, but imagining the size of a cell magnified ten thousand million times gives a sense of the scale of the processes and structures at work. At that size, a cell would have a radius of 200 miles, which is about ten times the size of New York City. Even with that much space, the required number of buildings to host the factories and machines that cells need would greatly exceed the number of buildings in the city.
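The "ten thousand million times" figure above can be sanity-checked with simple arithmetic. The cell radius used below, about 32 micrometers, is an assumption chosen to reproduce the quoted 200-mile result (typical animal cells are roughly 10 to 30 micrometers across), so treat this as an illustration of the scale-up, not a measurement.

```python
# "Ten thousand million times" = 1e10 magnification.
MAGNIFICATION = 10_000_000_000

# Assumed cell radius of ~32 micrometers (hypothetical value chosen
# to match the quoted 200-mile figure).
cell_radius_m = 32e-6

magnified_m = cell_radius_m * MAGNIFICATION  # 3.2e5 meters
miles = magnified_m / 1609.344               # meters per mile
print(f"magnified radius: {miles:.0f} miles")  # ~199 miles
```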

B.Alberts (2022): The surface of our planet is populated by living things—organisms—curious, intricately organized chemical factories that take in matter from their surroundings and use these raw materials to generate copies of themselves. Although all cells function as biochemical factories of a broadly similar type, many of the details of their small-molecule transactions differ. All cells operate as biochemical factories, driven by the free energy released in a complicated network of chemical reactions. Each cell can be viewed as a tiny chemical factory, performing many millions of reactions every second.  We can view RNA polymerase II in its elongation mode as an RNA factory that not only moves along the DNA synthesizing an RNA molecule but also processes the RNA that it produces. The nucleolus can be thought of as a large factory at which different noncoding RNAs are transcribed, processed, and assembled with proteins to form a large variety of ribonucleoprotein complexes. mRNA production is made more efficient in the nucleus by an aggregation of the many components needed for transcription and pre-mRNA processing, thereby producing a specialized biochemical factory. The extensive ER network serves as a factory for the production of almost all of the cell’s lipids.  In response to DNA damage, they rapidly converge on the sites of DNA damage, become activated, and form “repair factories” where many lesions are apparently brought together and repaired. The formation of these factories probably results from many weak interactions between different repair proteins and between repair proteins and damaged DNA. 26

B.Alberts (1998): We can walk and we can talk because the chemistry that makes life possible is much more elaborate and sophisticated than anything we students had ever considered. Proteins make up most of the dry mass of a cell. But instead of a cell dominated by randomly colliding individual protein molecules, we now know that nearly every major process in a cell is carried out by assemblies of 10 or more protein molecules. And, as it carries out its biological functions, each of these protein assemblies interacts with several other large complexes of proteins. Indeed, the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. Consider, as an example, the cell cycle–dependent degradation of specific proteins that helps to drive a cell through mitosis. First, a large complex of about 10 proteins, the anaphase-promoting complex (APC), selects out a specific protein for polyubiquitination; this protein is then targeted to the proteasome's 19S cap complex formed from about 20 different subunits; and the cap complex then transfers the targeted protein into the barrel of the large 20S proteasome itself, where it is finally converted to small peptides. Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like the machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts. Within each protein assembly, intermolecular collisions are not only restricted to a small set of possibilities, but reaction C depends on reaction B, which in turn depends on reaction A—just as it would in a machine of our common experience. Underlying this highly organized activity are ordered conformational changes in one or more proteins driven by nucleoside triphosphate hydrolysis (or by other sources of energy, such as an ion gradient). 
Because the conformational changes driven in this way dissipate free energy, they generally proceed only in one direction. An earlier brief review emphasized how the directionality imparted by nucleoside triphosphate hydrolyses allows allosteric proteins to function in three different ways: as motor proteins that move in a polarized fashion along a filament or a nucleic acid strand; as proofreading devices or “clocks” that increase the fidelity of biological reactions by screening out poorly matched partners; and as assembly factors that catalyze the formation of protein complexes and are then recycled. 27

Magnified ten thousand million times, a cell would have a radius of 200 miles, about 10 times the size of New York City

Calling a cell a factory is an understatement. Even magnified to a radius of 200 miles, the cell would contain just the number of buildings required to host the factories that make the machines it needs.
New York City has about 900,000 buildings, of which about 40,000 are in Manhattan; of those, some 7,000 are high-rise buildings of at least 115 feet (35 m), and at least 95 are taller than 650 feet (198 m).


A cell is an entire industrial park, in which the factories producing the machines used in the park alone occupy an area at least ten times the size of New York City, and each building is itself a factory comparable in size to a skyscraper like the Twin Towers of the World Trade Center. Each tower hosts a factory that makes factories that make machines. A mammalian cell may harbor as many as 10 million ribosomes. The nucleolus is the factory that makes ribosomes, the factory that makes proteins, the molecular machines of the cell. The nucleolus can be thought of as a large factory at which different noncoding RNAs are transcribed, processed, and assembled with proteins to form a large variety of ribonucleoprotein complexes.

L. Lindahl (2022): Ribosome assembly requires synthesis and modification of its components, which occurs simultaneously with their assembly into ribosomal particles. The formation occurs by a stepwise ordered addition of ribosome components. The process is assisted by many assembly factors that facilitate and monitor the individual steps, for example by modifying ribosomal components, releasing assembly factors from an assembly intermediate, or forcing specific structural configurations. The quality of the ribosome population is controlled by a complement of nucleases that degrade assembly intermediates with an inappropriate structure and/or which constitute kinetic traps.30

A mitochondrion, the powerhouse of the cell, can host up to 5,000 ATP synthase energy turbines, and each human heart muscle cell contains up to 8,000 mitochondria. That means each human heart cell holds up to 40 million ATP synthase turbines producing ATP, the energy currency of the cell.
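The arithmetic behind the 40 million figure is straightforward; the counts below are the upper-bound values quoted in the text, not measurements:

```python
# Upper-bound figures quoted in the text (illustrative, not measured values).
atp_synthases_per_mitochondrion = 5_000
mitochondria_per_heart_cell = 8_000

turbines_per_heart_cell = (atp_synthases_per_mitochondrion
                           * mitochondria_per_heart_cell)
print(f"{turbines_per_heart_cell:,}")  # 40,000,000 turbines per heart cell
```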

M. Denton (2020): The Miracle of the Cell: Pg. 11
Where the cosmos feels infinitely large and the atomic realm infinitely small, the cell feels infinitely complex. They appear in so many ways supremely fit to fulfill their role as the basic unit of biological life.

Pg. 329.
We would see [in cells] that nearly every feature of our own advanced machines had its analog in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late-twentieth-century technology.
  “What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth. However, it would be a factory that would have one capacity not equaled in any of our own most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours. To witness such an act at a magnification of one thousand million times would be an awe-inspiring spectacle.”31

M. Denton (1985), Evolution: A Theory in Crisis:
To grasp the reality of life as it has been revealed by molecular biology, we must magnify a cell a thousand million times until it is twenty kilometres in diameter and resembles a giant airship large enough to cover a great city like London or New York. What we would then see would be an object of unparalleled complexity and adaptive design. On the surface of the cell we would see millions of openings, like the port holes of a vast space ship, opening and closing to allow a continual stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless highly organized corridors and conduits branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plants and processing units. The nucleus itself would be a vast spherical chamber more than a kilometre in diameter, resembling a geodesic dome inside of which we would see, all neatly stacked together in ordered arrays, the miles of coiled chains of the DNA molecules.

A huge range of products and raw materials would shuttle along all the manifold conduits in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell. We would wonder at the level of control implicit in the movement of so many objects down so many seemingly endless conduits, all in perfect unison. We would see all around us, in every direction we looked, all sorts of robot-like machines. We would notice that the simplest of the functional components of the cell, the protein molecules, were astonishingly complex pieces of molecular machinery, each one consisting of about three thousand atoms arranged in highly organized 3-D spatial conformation... Yet the life of the cell depends on the integrated activities of thousands, certainly tens, and probably hundreds of thousands of different protein molecules.

We would see that nearly every feature of our own advanced machines had its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, error fail-safe and proof-reading devices utilized for quality control, assembly processes involving the principle of prefabrication and modular construction. In fact, so deep would be the feeling of deja-vu, so persuasive the analogy, that much of the terminology we would use to describe this fascinating molecular reality would be borrowed from the world of late twentieth-century technology.

What we would be witnessing would be an object resembling an immense automated factory, a factory larger than a city and carrying out almost as many unique functions as all the manufacturing activities of man on earth.32


Robert M. Hazen, Science Matters (2009):
Pg.239 Cells act as chemical factories, taking in materials from the environment, processing them, and producing “finished goods” to be used for the cell’s own maintenance and for that of the larger organism of which they may be part. In a complex cell, materials are taken in through specialized receptors (“loading docks”), processed by chemical reactions governed by a central information system (“the front office”), carried around to various locations (“assembly lines”) as the work progresses, and finally sent back via those same receptors into the larger organism. The cell is a highly organized, busy place, whose many different parts must work together to keep the whole functioning. While proteins supervise the cell’s chemical factories, carbohydrates provide each factory’s fuel supply.
Pg. 242 Nucleic acids. These molecules (DNA and RNA) carry the blueprint that runs the cell’s chemical factories, and also are the vehicle for inheritance
Pg. 243 Carbohydrates. While proteins supervise the cell’s chemical factories, carbohydrates provide each factory’s fuel supply. The basic building blocks of carbohydrates are sugars—small ring-
Pg. 245 Like any factory, each cell has several essential systems. It must have a front office, a place to store information and issue instructions to the factory floor to guide the work in progress. It must have bricks and mortar—a building with walls and partitions where the actual work goes on. Its production system must include the various machines that produce finished goods as well as the transportation network that moves raw materials and finished products from place to place. And finally, there must be an energy plant to power the machinery.
Pg. 246 Cellular factories consist of walls, partitions, and loading docks.
Pg. 249 Every living thing is composed of one or more cells, each of which has a complex anatomy. A “generic” cell contains many structures and organelles—tiny chemical factories.
Pg. 263 The sequence of the bases along the double helix of DNA contains the genetic code—all the information a cell needs to reproduce itself and run its chemical factories, all the characteristics and quirks that make you unique. 
Pg. 309 Shortly thereafter, the glucose is processed in cellular chemical factories to form part of the cellulose fibers that support each grass blade. The carbon atom has become an integral part of the structure of grass.25

Ben L. Feringa (2020): The miniaturization of complex physical and chemical systems is a key aspect of contemporary materials science. The bottom-up formation of dynamic structures with unusual properties has now been extended from the microscale to the nanoscale. Such extended dynamic structures are complemented by an increasing number of molecular species capable of transforming a physical or chemical stimulus into directional motion. These so-called artificial molecular machines (AMMs) are often regarded as molecular renderings of the macroscopic machines we experience in our daily lives — rotors, gears and cranks, for example. However, the inspiration for many AMMs is not from macroscopic man-made machines but, rather, from proteins or multi-protein complexes in biology that are capable of transforming energy into continuous, complex, structural motion. The process of vision, muscle contraction and bacterial flagellar movement are amazing examples of biological responsive systems. Biological molecular machines (BMMs) such as ATP synthase, ribosomes or myosin are structurally far more complex than any AMM made so far, and are an essential part of living systems. Embedded or immobilized within skeleton structures such as bilayer lipid membranes or larger protein complexes, BMMs are part of a cellular confinement in which their work is continuously synchronized with other machines of identical or different nature. Their functions are driven by chemical fuels such as ATP or electrochemical gradients and controlled by chemical or physical stimuli. Their main tasks involve intracellular, transmembrane and intercellular transport of reagents, as well as transformation of small, molecular building blocks into larger functional structures. A cell might thus be viewed as a complex molecular factory in which many different components are assembled, transformed, transported and disassembled. 
The dynamics of these processes at the molecular level are amplified by self-organization, cooperativity and synchronization, resulting in the living, moving organisms observed at the macroscopic scale. A modular building concept, periodical alignment and synchronization of individual dynamic components on a temporal and spatial domain are essential aspects of the performance of the whole system. Such organizational principles can also be found in macroscopic factories regardless of the difference in size, and they are considered fundamental principles in the design of cooperative dynamic systems of any size and composition. Nevertheless, biological systems strongly differ from man-made factories in certain aspects. BMMs and their complex assemblies are very versatile and selective in continuously producing a variety of complex molecules currently unobtainable by any man-made system. 28 

Von Neumann's universal constructor: We cannot replicate the cell's self-reproduction technology

Imagine a hypothetical human-made, truly autonomous self-replicating factory analogous to a living cell. It would have to be capable of constructing a copy of itself without external help: detecting the raw materials it needs in its environment and preparing them so that import gates and mechanisms could bring them inside. The daughter factory would have to inherit the entire information stored in the mother factory. It would rely on conventional large-scale technology and automation.

M. Sipper (1998): We would need to be able to understand the fundamental information-processing principles and algorithms involved in self-replication, even independent of their physical realization.33

Replicators have been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. Von Neumann himself used the term universal constructor to describe such a self-replicating machine. For a factory or machine to make a duplicate copy, it must employ a description of itself. This description, being a part of the original factory, must itself be prescribed by something that is not the factory. That is, it must come from the outside. Why? To describe something, one needs to be a conscious agent capable of doing so. If the factory itself were not the conscious agent, able to observe and describe itself, something else must have been. I, as a conscious human being, can observe and describe myself. A non-conscious "something" has never been seen to have these necessary cognitive and intelligent capabilities. That is why the origin of biological information is an unsolvable problem for naturalists, and why the origin of the information to make the first living self-replicating cell cannot be explained unless there was a creator. Another salient point: parts, subunits, or agglomerations of building blocks do not comprehend how they could join to become part of a functional interlocked complex system. So to construct a self-replicating system composed of many interlocking parts, foresight is required; otherwise, the parts would either remain non-assembled, disintegrate, or, driven by random external forces, interact and assemble into a practically infinite number of nonfunctional chaotic aggregation states.

R. A. Freitas (2004): Von Neumann thus hit upon a deceptively simple architecture for machine replication. The machine would have four parts:   

1. a constructor “A” that can build a machine “X” when fed explicit blueprints of that machine; 
2. a blueprint copier “B”; 
3. a controller “C” that controls the actions of the constructor and the copier, actuating them alternately; and finally 
4. a set of blueprints φ(A + B + C) explicitly describing how to build a constructor, a controller, and a copier. 

The entire replicator may therefore be described as (A + B + C) + φ(A + B + C).

Observers have noted that von Neumann’s early schema was later confirmed by subsequent research on the molecular biology of cellular reproduction, with von Neumann’s component “A” represented by the ribosomes and supporting cellular mechanisms, component “B” represented by DNA polymerase enzymes, component “C” represented by repressor and derepressor molecules and associated expression-control machinery in the cell, and finally component “φ(A + B + C)” represented by the genetic material DNA that carries the organism’s genome. (The correspondence is not complete: cells include additional complexities.) More importantly, the dual use of information — both interpreted and uninterpreted, as in von Neumann’s machine schema — was also found to be true for the information contained in DNA.34  
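Von Neumann's four-part schema can be sketched in a few lines of code. The sketch below is purely illustrative (the component names and data structures are my own invention, not from any of the cited works); it shows the dual use of the blueprint φ noted above: first interpreted by the constructor to build the offspring's parts, then copied unmodified so the offspring can replicate in its turn.

```python
# Illustrative sketch of von Neumann's replicator schema; all names invented.
def constructor(blueprint):    # component A: builds machinery from a description
    return {part: "built" for part in blueprint["parts"]}

def copier(blueprint):         # component B: duplicates the blueprint unmodified
    return {"parts": list(blueprint["parts"])}

def controller(blueprint):     # component C: actuates A, then B
    machinery = constructor(blueprint)   # blueprint interpreted ("translation")
    inherited = copier(blueprint)        # blueprint copied ("transcription")
    return {"machinery": machinery, "blueprint": inherited}

# phi(A + B + C): a description of the machine's own three components
phi = {"parts": ["constructor", "copier", "controller"]}

child = controller(phi)                      # parent builds a child from phi
grandchild = controller(child["blueprint"])  # child replicates in its turn

assert child["blueprint"] == phi             # description inherited intact
assert grandchild["machinery"] == child["machinery"]
```

Note that the description φ is supplied to the machine from outside; nothing in the sketch generates its own blueprint, which is exactly the point at issue in the surrounding discussion.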

M. Sipper (1998): A noteworthy distinction apparent in von Neumann’s model of self-replication is the double-faceted use of the information stored in the artificial genome: It first serves as instructions to be interpreted so as to construct a new universal constructor, after which this same genome is copied unmodified, to be attached to the new offspring constructor—so that it may replicate in its turn. This aspect is quite interesting in that it bears strong resemblance to the genetic mechanisms of transcription (copying) and translation (interpretation) employed by biological life—which was discovered during the decade following von Neumann’s work. Von Neumann’s model employs a complex transition rule, with the total number of cells composing the universal constructor estimated at between 50,000 and 200,000 (the literature seems to disagree on the exact number). In the years that followed its introduction a number of researchers had worked toward simplifying this system. In the late 1960s Codd reduced the number of states required for a self-replicating universal constructor-computer from 29 to 8. His self-replicating structure comprised about 100,000,000 cells. A few years later Devore simplified Codd’s system, devising a self-replicating automaton comprising about 100,000 cells. 

Despite the complexity of von Neumann’s self-replicating universal constructor, a number of researchers have considered its implementation (or simulation) over the years. Signorini concentrated on the 29-state transition rule, discussing its implementation on a SIMD (single-instruction multiple-data) computer. Von Neumann’s constructor is divided into many functional blocks known as organs. In addition to implementing the transition rule, Signorini also presented the implementation of three such organs: a pulser, a decoder, and a periodic pulser. To date, Pesavento’s more recent work comes closest to a full simulation of von Neumann’s model. A computer simulation of the universal constructor—running on a standard workstation—even this comes short of realizing the full model: Self-replication is not demonstrated because the tape required to describe the constructor (i.e., the genome) is too large to simulate.33

R. A. Freitas (2004): Penrose, quoting Kemeny, complained that the body of the von Neumann kinematic machine “would be a box containing a minimum of 32,000 constituent parts (likely to include rolls of tape, pencils, erasers, vacuum tubes, dials, photoelectric cells, motors, batteries, and other devices) and the ‘tail’ would comprise 150,000 [bits] of information.” Macroscale kinematic replicators will require a great deal of effort to design and to build, which may explain why so few working devices have been constructed to date,* despite popular interest.34  

Comment: A von Neumann self-replicating machine has never been constructed because it is too complicated. Humans, with all their intelligence, have failed. Yet if abiogenesis is true, the emergence of self-replicating cells carrying a minimum of one million bits of information happened from randomly distributed, non-replicating components by entirely non-intelligent, unguided means.

A Self-Replicating Box

G. SEWELL (2021): To understand why human-engineered self-replicating machines are so far beyond current human technology, let’s imagine trying to design something as “simple” as a self-replicating cardboard box. Let’s place an empty cardboard box (A) on the floor, and to the right of it let’s construct a box (B) with a box-building factory inside it. I’m not sure exactly what the new box would need to build an empty box, but I assume it would at least have to have some metal parts to cut and fold the cardboard and a motor with a battery to power these parts. In reality, to be really self-replicating like living things, it would have to go get its own cardboard, so maybe it would need wheels and an axe to cut down trees and a small sawmill to make cardboard out of wood. But let’s be generous and assume humans are still around to supply the cardboard. Well, of course box B is not a self-replicating machine, because it only produces an empty box A.

So, to the right of this box, let’s build another box C which contains a fully automated factory that can produce box B’s. This is a much more complicated box, because this one must manufacture the metal parts for the machinery in box B and its motor and battery and assemble the parts into the factory inside B. In reality it needs to go mine some ore and smelt it to produce these metal parts, but again let’s be very generous and provide it all the metals and other raw materials it needs.

But box C would still not be a self-replicating machine, because it only produces the much simpler box B. So back to work, now we need to build a box D to its right with a fully automated factory capable of building box C’s with their box B factories. Well, you get the idea, and one begins to wonder if it is even theoretically possible to build a truly self-replicating machine. When we add technology to such a machine to bring it closer to the goal of reproduction, we only move the goalposts, because now we have a more complicated machine to reproduce. Yet we see such machines all around us in the living world.

If we keep adding boxes to the right, each with a fully automated factory that can produce the box to its left, it seems to me that the boxes would grow exponentially in complexity. But maybe I am wrong. Maybe they could be designed to converge eventually to a self-replicating box Z, although I can’t imagine how.35
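Sewell's regress can be caricatured with a toy model. The numbers below are arbitrary assumptions of mine; the point is only that if each box must contain machinery for every part of the box it builds, part counts compound with each level rather than converging:

```python
# Toy model of the box regress; base_parts and overhead are arbitrary.
def parts_needed(level, base_parts=10, overhead=3):
    """Parts in box number `level`. Box 0 is the empty box A; each later box
    needs `overhead` parts of machinery for every part of the box it builds."""
    if level == 0:
        return base_parts
    return overhead * parts_needed(level - 1)

for level in range(6):
    print(level, parts_needed(level))   # 10, 30, 90, 270, 810, 2430
```

Under these assumptions the part count grows geometrically with each added box, which is the intuition behind the quoted doubt that the series could ever converge to a self-replicating box Z.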


The Last Universal Common Ancestor (LUCA): What was its nature?

Before we can start investigating the course of evolution, we need to know what the starting point was. Much has been speculated about the first life form. What did it look like? Was there indeed a Last Universal Common Ancestor (LUCA), or did life start polyphyletically? I dedicated an entire chapter of my previous book, On the Origin of Life and Virus World by means of an Intelligent Designer, to attempting to get closer to answering what could serve as a model organism. It is a surprisingly difficult question to answer.  

The last universal common ancestor represents the primordial cellular organism from which diversified life was derived. It has been considered as the branching point on which Bacteria, Archaea and Eukaryotes have diverged.10

Carl R. Woese (2002): The central question posed by the universal tree is the nature of the entity (or state) represented by its root, the fount of all extant life. Herein lies the door to the murky realm of cellular evolution. Experience teaches that the complex tends to arise from the simple, and biologists have assumed it so in the case of modern cells. But this assumption is usually accompanied by another not-so-self-evident one: namely that the ‘‘organism’’ represented by the root of the universal tree was equivalent metabolically and in terms of its information processing to a modern cell, in effect was a modern cell. Such an assumption pushes the real evolution of modern cells back into an earlier era, which makes the problem not directly addressable through genomics. That is not a scientifically acceptable assumption. Unless or until facts dictate otherwise, the possibility must be entertained that some part of cellular evolution could have occurred during the period encompassed by the universal phylogenetic tree. There is evidence, good evidence, to suggest that the basic organization of the cell had not yet completed its evolution at the stage represented by the root of the universal tree. The best of this evidence comes from the three main cellular information processing systems. Translation was highly developed by that stage: rRNAs, tRNAs, and the (large) elongation factors were by then all basically in near-modern form; hence, their universal distributions. Almost all of the tRNA charging systems were in modern form as well. But, whereas the majority of ribosomal proteins are universal in distribution, a minority of them is not. A relatively small cadre is specific to the bacteria, a somewhat larger set common and confined to the archaea and eukaryotes, and a few others are uniquely eukaryotic. 
Almost all of the universal translational proteins (as well as those in transcription) show what is called the canonical pattern, i.e., the bacterial and archaeal versions of the protein are remarkably different from one another, so much so that their difference is distinguished as one of ‘‘genre’’. Except for the aminoacyl-tRNA synthetases the corresponding eukaryotic versions are virtually all of the archaeal genre. Why canonical pattern exists is a major unanswered question. In the overall it would seem that translation, although highly developed at the root of the universal tree, subsequently underwent idiosyncratic modifications in each of the three major cell types. Transcription seems to have been rather less developed at the root of the universal tree. The two largest (the catalytic) subunits of the DNA-dependent RNA polymerase are universal in distribution.

The cell is the essence of biology. At least that is how 20th-century molecular biology saw it, and the great goal was to understand how cells were organized and worked. This goal, it was assumed, could be accomplished by cataloging (and characterizing) all of the parts of the mechanism, with the tacit assumption that given such a parts list the overall organization of the cell would become apparent. Today, such lists exist for several organisms. Yet an understanding of the whole remains as elusive a goal as ever (34). The fault here lies with the reductionist perspective of molecular biology. The problem of cellular design cannot be fit into this rigid, procrustean framework. It should be obvious from the foregoing discussion that biological cell design is not a static, temporal, or local problem.

The Dilemma of Cellular Evolution. 

Evolving the cell requires evolutionary invention of unprecedented novelty and variety, the likes of which cannot be generated by any familiar evolutionary dynamic. The task can be accomplished only by a collective evolution in which many diverse cell designs evolve simultaneously and share their novelties with one another; which means that 

(i) HGT (and a genetic lingua franca) is a necessary condition for the evolution of cell designs, and 
(ii) a cell design cannot evolve in isolation; others will necessarily accompany it. 

Comment: That sounds suspiciously like a special creation. Once Woese admits that many diverse cells evolved simultaneously, he departs from the concept of universal common ancestry and resorts to polyphyly, that is, the proposition that at the beginning there was a population of diverse cell designs, each different from the others, which began to interact through horizontal gene transfer. 

Woese continues: There is an inherent contradiction in this situation. Although HGT is essential for sharing novelty among the various evolving cell designs, it is at the same time a homogenizing force, working to reduce diversity. Thus, what needs explaining is not why the major cell designs are so similar, but why they are so different. This apparent contradiction can be resolved by assuming that the highly diverse cell designs that exist today are the result of a common evolution in which each of them began under (significantly) different starting conditions. [Initial conditions do not necessarily damp out for complex dynamic processes; indeed, they can lead to vastly different outcomes.  1

E. V. Koonin (2020): The last universal cellular ancestor (LUCA) is the most recent population of organisms from which all cellular life on Earth descends. 

Comment: Koonin follows the same line of argumentation. He hypothesizes that LUCA was a population of organisms. From what did it descend? A population has to originate from self-replication, which produces offspring.

Berkeley University's website on evolution claims: The ability to copy the molecules that encode genetic information is a key step in the origin of life — without it, life could not exist. This ability probably first evolved in the form of an RNA self-replicator — an RNA molecule that could copy itself. Self-replication opened the door for natural selection. Once a self-replicating molecule formed, some variants of these early replicators would have done a better job of copying themselves than others, producing more “offspring.” 2

Comment: On careful examination, such assertions cannot be taken seriously. This is pseudo-scientific storytelling. The evidence does not justify saying that it probably happened. A self-replicating molecule has never been observed. And even if one existed, it would be helpless to create a living cell. If molecule A self-reproduces n times, we get AAAAAAA.... That is ridiculously trivial and has nothing to do with what we see in a cell. A cell is a cybernetic, ultra-complex system in which, thanks to countless concurrent software-driven chemical and physical processes using languages and codes, material is stored, managed, moved, assembled, converted, and positioned such that the cell survives and self-reproduces. To believe, as proponents of naturalistic mechanisms do, that AAAAAAA... leads to a cell is like thinking that by simply duplicating bricks, BBBBBBB..., we get a complete, functioning, self-replicating chemical factory.

Martina Preiner (2020): Many found the metaphor appealing: a world with a jack-of-all-trades RNA molecule, catalyzing the formation of indispensable cellular scaffolds, from which somehow then cells emerged. Others were quick to notice several difficulties with that scenario. These included the lack of templates enabling the polymerization of RNA in the prebiotic complex mixture and RNA’s extreme lability at moderate to high temperatures and susceptibility to base-catalyzed hydrolysis. 3

N. Sankaran (2017): Today, thirty years after the RNA World was first proposed, no one has seen an actual living system that is completely based in RNA. Nevertheless, the hypothesis lives on in the origins of life research community, albeit in a hotly debated, highly contentious atmosphere. Although there are strong opponents, many researchers agree that although far from complete, it remains one of the best theories we have to understand “the backstory to contemporary biology.” Gilbert himself expressed some disappointment that “a self-replicating RNA has not yet been synthesized or discovered” in the years since he predicted his hypothesis, but he remains optimistic that it will emerge eventually.4

Koonin continues:  The reconstruction of the genome and phenotype of the LUCA is a major challenge in evolutionary biology. Given that all life forms are associated with viruses and/or other mobile genetic elements, there is no doubt that the LUCA was a host to viruses.

E. V. Koonin (2017): The entire history of life is the story of virus–host coevolution. Therefore the origins and evolution of viruses are an essential component of this process. A signature feature of the virus state is the capsid, the proteinaceous shell that encases the viral genome. Although homologous capsid proteins are encoded by highly diverse viruses, there are at least 20 unrelated varieties of these proteins.  A comprehensive sequence and structure analysis of major virion proteins indicates that they evolved on about 20 independent occasions. 5

Viruses and the tree of life (2009): Viruses are polyphyletic: In a phylogenetic tree, the characteristics of members of taxa are inherited from previous ancestors. Viruses cannot be included in the tree of life because they do not share characteristics with cells, and no single gene is shared by all viruses or viral lineages. While cellular life has a single, common origin, viruses are polyphyletic – they have many evolutionary origins. Viruses don’t have a structure derived from a common ancestor.  Cells obtain membranes from other cells during cell division. According to this concept of ‘membrane heredity’, today’s cells have inherited membranes from the first cells.  Viruses have no such inherited structure.  They play an important role by regulating population and biodiversity. 6 

Comment: Since viruses are polyphyletic and, according to Woese, many diverse cell designs evolved simultaneously, the picture becomes clear: life arose multiple times independently, and so did viruses. The hypothesis of universal common ancestry is not supported by the evidence; separate origins of different life forms, and of viruses, are.

There is no scientific consensus about LUCA's nature

D. C. Gagler et al. (2021): Life emerges from the interplay of hundreds of chemical compounds interconverted in complex reaction networks. Some of these compounds and reactions are found across all characterized organisms, informing concepts of universal biochemistry and allowing rooting of phylogenetic relationships in the properties of a last universal common ancestor (LUCA). Thus, universality, as we have come to understand it in biochemistry, is a direct result of the observation that all known examples of life share common details in their component compounds and reactions. 13

Eugene V. Koonin (2020): Considerable efforts have been undertaken over the years to deduce the genetic composition and biological features of the LUCA from comparative genome analyses combined with biological reasoning. These inferences are challenged by the complex evolutionary histories of most genes (with partial exception for the core components of the translation and transcription systems) that involved extensive horizontal transfer and non-orthologous gene displacement. Nevertheless, on the strength of combined evidence, it appears likely that the LUCA was a prokaryote-like organism (that is, like bacteria or archaea) of considerable genomic and organizational complexity. 8

J. D. Sutherland (2017): The latest list of genes thought to be present in LUCA is a long one. The presence of membranes, proteins, RNA and DNA, the ability to perform replication, transcription, and translation, as well as harboring an extensive metabolism driven by energy harvested from ion gradients using ATP synthase, reveal that there must have been a vast amount of evolutionary innovation between the origin of life and the appearance of LUCA. Many of the inferred proteins in LUCA use FeS clusters and other transition-metal-ion-based co-factors.9

Life started complex

Life had to start complex because the earliest known cells, including a supposed last universal common ancestor (LUCA), were already functionally and genetically complex. The simplest cells available for study have a teleonomic apparatus so powerful that no vestiges of truly primitive structures are discernible. The LUCA was sophisticated, with a complex structure recognizable as a cell, and had representatives in practically all the essential functional niches currently present in extant organisms. Even the simplest known cellular life forms possess several hundred genes that encode the components of a fully-fledged membrane, the replication, transcription, and translation machinery, a complex cell-division apparatus, and at least some central metabolic pathways. Therefore, life did not start as a primitive or simple organism, but rather as a complex entity capable of metabolism, genetic replication, and maintaining a boundary that separates the cell from its environment.

J.Monod (1972): The simplest cells available to us for study have nothing "primitive" about them. They have a teleonomic apparatus so powerful that no vestiges of truly primitive structures are discernible. 15 Elsewhere, Monod stated: ‘We have no idea what the structure of a primitive cell might have been. The simplest living system known to us, the bacterial cell… in its overall chemical plan is the same as that of all other living beings. It employs the same genetic code and the same mechanism of translation as do, for example, human cells. Thus the simplest cells available to us for study have nothing “primitive” about them… no vestiges of truly primitive structures are discernible.’ Thus the cells themselves exhibit a similar kind of ‘stasis’  in connection with the fossil record.

J. A. G. Ranea (2006): We know that the LUCA, or the primitive community that constituted this entity, was functionally and genetically complex. Life achieved its modern cellular status long before the separation of the three kingdoms. We can affirm that the LUCA held representatives in practically all the essential functional niches currently present in extant organisms, with a metabolic complexity similar to translation in terms of domain variety. 18

D. Yates (2011): New evidence suggests that LUCA was a sophisticated organism after all, with a complex structure recognizable as a cell, researchers report. Their study appears in the journal Biology Direct. The study lends support to a hypothesis that LUCA may have been more complex even than the simplest organisms alive today, said James Whitfield, a professor of entomology at Illinois and a co-author on the study. 16

G. Caetano-Anollés (2011): Life was born complex and the LUCA displayed that heritage. Recent comparative genomic studies support the latter model and propose that the urancestor was similar to modern organisms in terms of gene content. 20

E. V. Koonin (2012): All known cells are complex and elaborately organized. The simplest known cellular life forms, the bacterial (and the only known archaeal) parasites and symbionts, clearly evolved by degradation of more complex organisms; however, even these possess several hundred genes that encode the components of a fully fledged membrane; the replication, transcription, and translation machineries; a complex cell-division apparatus; and at least some central metabolic pathways. As we have already discussed, the simplest free-living cells are considerably more complex than this, with at least 1,300 genes 36

J. C. Xavier (2014): The cell is the most complex structure in the micrometer size range known to humans. At present, the minimal cell can be defined only on a semiabstract level as a living cell with a minimal and sufficient number of components and having three main features:

1. Some form of metabolism to provide molecular building blocks and energy necessary for synthesizing the cellular components,
2. Genetic replication from a template or an equivalent information processing and transfer machinery, and
3. A boundary (membrane) that separates the cell from its environment.
4. The necessity of coordination between boundary fission and the full segregation of the previously generated twin genetic templates could be added to this definition.
5. The essential feature of a minimal cell is the ability to evolve, which is a universal characteristic among all known living cells 19

F. El Baidouri (2021): Along with two robust prokaryotic phylogenetic trees we are able to infer that the last universal common ancestor of all living organisms was likely to have been a complex cell with at least 22 reconstructed phenotypic traits probably as intricate as those of many modern bacteria and archaea. Our results depict LUCA as likely to be a far more complex cell than has previously been proposed, challenging the evolutionary model of increased complexity through time in prokaryotes.17



Last edited by Otangelo on Thu Apr 06, 2023 5:45 am; edited 7 times in total

Defining the LUCA: What might be a Cell’s minimal requirement of parts? 

I have gone in depth into this question in my previous book, On the Origin of Life and Virus World by means of an Intelligent Designer. I wrote: Science remains largely in the dark when it comes to pinpointing what exactly the first life form looked like. Speculation abounds. Whichever science paper on the topic one reads, the confusion is apparent. In a 2015 science paper, The universal tree of life: an update, Patrick Forterre confessed:

There is no protein or groups of proteins that can give the real species tree, i.e., allow us to recapitulate safely the exact path of life evolution.

As such, whatever architecture one comes up with remains in the realm of speculation. Is it therefore futile to trace a borderline and list a number of features that most likely were present? No. Even if we can come up only with a hypothetical organism, it will nonetheless give us insight into the complexity involved and bring us closer to deciding what mechanisms were most likely involved and whether intelligence was required to set up the first life forms.

Andrew J. Crapitto (2022): The availability of genomic and proteomic data from across the tree of life has made it possible to infer features of the genome and proteome of the last universal common ancestor (LUCA). Several studies have done so, all using a unique set of methods and bioinformatics databases. No individual study shares a high or even moderate degree of similarity with any other individual study. Studies of the genome or proteome of the LUCA do not uniformly agree with one another. The set of consensus LUCA protein family predictions between all of these studies portrays a LUCA genome that, at minimum, encoded functions related to protein synthesis, amino acid metabolism, nucleotide metabolism, and the use of common, nucleotide-derived organic cofactors.

The translation process is well known to be ancient, and many of the proteins involved in the translation machinery appear to predate the LUCA. A corollary to the influential RNA world hypothesis is that the translation system evolved within the context of an RNA-based genetic system. Most universal Clusters of Orthologous Groups of proteins (COGs) (orthologous genes are homologous genes that diverged after a speciation event, with the gene and its main function conserved) encode proteins that physically associate with the ribosome, and those that do not are often involved with the translation process in some other way. Similarly, nearly all universal, vertically inherited functional RNAs (save the SRP RNA) are involved in the translation system. Translation-related genes or proteins are prevalent in the predictions of seven of the eight previously published LUCA genome or proteome studies analyzed here. We identified 366 eggNOG clusters that were predicted by four or more studies to have been present in the genome of the LUCA (Appendix S2). 7
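The consensus procedure the study describes, keeping only the protein families predicted by at least four of the eight studies, can be sketched in a few lines. Note that the family names and per-study sets below are invented placeholders for illustration, not real eggNOG clusters:

```python
# Sketch of a consensus filter over several studies' predictions:
# keep only the families that at least k studies predict.
# The family names and study contents are invented placeholders.
from collections import Counter

studies = [
    {"ribosomal_protein", "aaRS", "ATP_synthase", "topoisomerase"},
    {"ribosomal_protein", "aaRS", "ATP_synthase"},
    {"ribosomal_protein", "aaRS", "membrane_transporter"},
    {"ribosomal_protein", "aaRS", "ATP_synthase", "protease"},
    {"ribosomal_protein", "topoisomerase"},
]

def consensus(studies, k):
    """Return the families predicted by at least k of the studies."""
    counts = Counter(family for study in studies for family in study)
    return {family for family, n in counts.items() if n >= k}

print(consensus(studies, 4))  # {'ribosomal_protein', 'aaRS'} (set order may vary)
```

Applied to the eight published studies with k = 4, a filter of this kind is what yields the 366 consensus clusters reported above.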

William Martin and colleagues from the University of Düsseldorf's Institute of Molecular Evolution also give us an interesting number: The metabolism of cells contains evidence reflecting the process by which they arose. Here, we have identified the ancient core of autotrophic metabolism encompassing 404 reactions that comprise the reaction network from H2, CO2, and ammonia (NH3) to amino acids, nucleic acid monomers, and the 19 cofactors required for their synthesis. Water is the most common reactant in the autotrophic core, indicating that the core arose in an aqueous environment. Seventy-seven core reactions involve the hydrolysis of high-energy phosphate bonds, furthermore suggesting the presence of a non-enzymatic and highly exergonic chemical reaction capable of continuously synthesizing activated phosphate bonds. CO2 is the most common carbon-containing compound in the core. An abundance of NADH and NADPH-dependent redox reactions in the autotrophic core, the central role of CO2, and the circumstance that the core's main products are far more reduced than CO2 indicate that the core arose in a highly reducing environment. The chemical reactions of the autotrophic core suggest that it arose from H2, inorganic carbon, and NH3 in an aqueous environment marked by highly reducing and continuously far from equilibrium conditions. Supplementary Table 1 in the paper lists the core's metabolic reactions. 11, 12

John I. Glass (2006): Mycoplasma genitalium has the smallest genome of any organism that can be grown in pure culture. It has a minimal metabolism. Consequently, its genome is expected to be a close approximation to the minimal set of genes needed to sustain bacterial life. 14

Metabolic pathways and substrate transport mechanisms encoded by M. genitalium. Metabolic products are colored red, and mycoplasma proteins are black. White letters on black boxes mark nonessential functions or proteins based on our current gene disruption study. Question marks denote enzymes or transporters not identified that would be necessary to complete pathways, and those missing enzyme and transporter names are colored green. Transporters are colored according to their substrates: yellow, cations; green, anions and amino acids; orange, carbohydrates; purple, multidrug and metabolic end product efflux. The arrows indicate the predicted direction of substrate transport. The ABC type transporters are drawn as follows: rectangle, substrate-binding protein; diamonds, membrane-spanning permeases; circles, ATP-binding subunits.

J. A. G. Ranea (2006): In our view, the LUCA was faced with two important challenges associated with the source of amino acids and purine/pyrimidine bases or nucleosides. Most of these compounds need complex pathways to be synthesized and our analyses suggest that these are not present in the LUCA. Based on that, we are more in favor of amino acids and nitrogenous bases being present in a possible primitive soup rather than being synthesized by the LUCA.18

From a LUCA to the last bacterial common ancestor (LBCA)

Even though the existence of LUCA is supported by the universal presence of conserved biomolecules and a vast amount of genetic data, its characteristics and identity remain unknown. LBCA, on the other hand, stands for "Last Bacterial Common Ancestor," which refers to the hypothetical ancestor of all modern bacteria. Although the characteristics of LBCA are still uncertain, recent studies suggest that it might have been a monoderm bacterium with a complete 17-gene dcw cluster, which is two genes more than in the model E. coli cluster.

The 17-gene dcw (division and cell wall) cluster is a group of bacterial genes that are involved in the regulation of cell division and the synthesis of the cell wall during the cell cycle. These genes encode proteins that are responsible for the assembly and contraction of the bacterial cell wall and septum, which eventually leads to the separation of the daughter cells. The dcw cluster includes genes that are involved in peptidoglycan synthesis, cell wall assembly, and septation, among others. These genes are found in many bacterial species and are thought to be essential for bacterial growth and survival. Understanding the composition and regulation of the dcw cluster can provide insights into bacterial cell division and the evolution of bacterial morphology.

Phylogenomic inference also reveals that the Clostridia, a class of Firmicutes, are the least diverged of the modern genomes, suggesting that the first lineage to diverge from the predicted LBCA was similar to the modern Clostridia.

In 2004, Rosario Gil proposed a minimal gene set composed of 206 genes that would sustain the main vital functions of a hypothetical simplest bacterial cell. These functions include DNA replication, transcription, translation, protein processing, folding, secretion and degradation, cell division, energetic metabolism, pentose pathway, nucleotide biosynthesis, and lipid biosynthesis. The minimal cell does not include biosynthetic pathways for amino acids or most cofactor precursors, as they can be obtained from the environment.

While some amino acids can be obtained from the environment, not all of them are readily available or in sufficient quantities to support the growth of a minimal cell. In addition, the amino acids that are available in the environment may not be in the correct proportions or forms required by the cell. Therefore, some minimal cells may require biosynthetic pathways for certain amino acids to ensure their survival and growth.

R. R. Léonard (2022): The nature of the LBCA is unknown, especially the architecture of its cell wall. The lack of reliably affiliated bacterial fossils outside Cyanobacteria makes it elusive to decide the very nature of the LBCA. Nevertheless, phylogenomic inference leads to informative results, and our analysis of the cell-wall characteristics of extant bacteria, combined with ancestral state reconstruction and distribution of key genes, opens interesting possibilities: the LBCA might have been a monoderm bacterium featuring a complete 17-gene dcw cluster, two genes more than in the model E. coli cluster. This result was also supported by a recent study, in which the authors found 146 protein families that formed a predicted core for the metabolic network of the LBCA. From these families, phylogenetic trees were produced and the divergence of the modern genomes from the root to the tips was analysed. It appears that the Clostridia (a class of Firmicutes) are the least diverged of the modern genomes and thus the first lineage to diverge from the predicted LBCA were similar to the modern Clostridia. Based on these results, the authors suggested that the LBCA could have been a monoderm bacteria. (Having a single membrane, especially a thick layer of peptidoglycan) 22

P. C. Morales et al. (2019) reconstructed the phylogenetic tree of Clostridium species. They set Clostridium difficile at the root of the tree. 23 The genome of C. difficile strain 630 consists of a circular chromosome of 4,290,252 bp 24

Taking Rosario Gil's model organism as the basis for our forthcoming investigation

Rosario Gil (2004): Based on the conjoint analysis of several computational and experimental strategies designed to define the minimal set of protein-coding genes that are necessary to maintain a functional bacterial cell, we propose a minimal gene set composed of 206 genes. Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell with the following features.

(i) A virtually complete DNA replication machinery, composed of one nucleoid DNA binding protein, SSB, DNA helicase, primase, gyrase, polymerase III, and ligase. No initiation and recruiting proteins seem to be essential, and the DNA gyrase is the only topoisomerase included, which should perform both replication and chromosome segregation functions.

(ii) A very rudimentary system for DNA repair, including only one endonuclease, one exonuclease, and a uracil-DNA glycosylase.

(iii) A virtually complete transcriptional machinery, including the three subunits of the RNA polymerase, a σ factor, an RNA helicase, and four transcriptional factors (with elongation, antitermination, and transcription-translation coupling functions). Regulation of transcription does not appear to be essential in bacteria with reduced genomes, and therefore the minimal gene set does not contain any transcriptional regulators.

(iv) A nearly complete translational system. It contains the 20 aminoacyl-tRNA synthetases, a methionyl-tRNA formyltransferase, five enzymes involved in tRNA maturation and modification, 50 ribosomal proteins (31 proteins for the large ribosomal subunit and 19 proteins for the small one), six proteins necessary for ribosome function and maturation (four of which are GTP binding proteins whose specific function is not well known), 12 translation factors, and 2 RNases involved in RNA degradation.

(v) Protein-processing, -folding, secretion, and degradation functions are performed by at least three proteins for posttranslational modification, two molecular chaperone systems (GroEL/S and DnaK/DnaJ/GrpE), six components of the translocase machinery (including the signal recognition particle, its receptor, the three essential components of the translocase channel, and a signal peptidase), one endopeptidase, and two proteases.

(vi) Cell division can be driven by FtsZ only, considering that, in a protected environment, the cell wall might not be necessary for cellular structure.

(vii) A basic substrate transport machinery cannot be clearly defined, based on our current knowledge. Although it appears that several cation and ABC transporters are always present in all analyzed bacteria, we have included in the minimal set only a PTS for glucose transport and a phosphate transporter. Further analysis should be performed to define a more complete set of transporters.

(viii) The energetic metabolism is based on ATP synthesis by glycolytic substrate-level phosphorylation.

(ix) The nonoxidative branch of the pentose pathway contains three enzymes (ribulose-phosphate epimerase, ribose-phosphate isomerase, and transketolase), allowing the synthesis of pentoses (PRPP) from trioses or hexoses.

(x) No biosynthetic pathways for amino acids, since we suppose that they can be provided by the environment.

(xi) Lipid biosynthesis is reduced to the biosynthesis of phosphatidylethanolamine from the glycolytic intermediate dihydroxyacetone phosphate and activated fatty acids provided by the environment.

(xii) Nucleotide biosynthesis proceeds through the salvage pathways, from PRPP and the free bases adenine, guanine, and uracil, which are obtained from the environment.

(xiii) Most cofactor precursors (i.e., vitamins) are provided by the environment. Our proposed minimal cell performs only the steps for the syntheses of the strictly necessary coenzymes tetrahydrofolate, NAD+, flavin adenine dinucleotide, thiamine diphosphate, pyridoxal phosphate, and CoA. 21
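To get a feel for how much of this 206-gene minimal set is devoted to translation alone, the component counts quoted in point (iv) can be tallied. The numbers below are taken directly from the passage above; the tally itself is just bookkeeping:

```python
# Tally of the translational system components listed in Gil's point (iv).
# Counts are taken from the quoted passage.
translation_system = {
    "aminoacyl-tRNA synthetases": 20,
    "methionyl-tRNA formyltransferase": 1,
    "tRNA maturation/modification enzymes": 5,
    "ribosomal proteins (31 large + 19 small subunit)": 50,
    "ribosome function/maturation proteins": 6,
    "translation factors": 12,
    "RNases for RNA degradation": 2,
}

total = sum(translation_system.values())
print(total)                                    # 96
print(f"{total / 206:.0%} of the minimal set")  # 47% of the minimal set
```

Nearly half of even this hypothetical minimal genome is taken up by the translation machinery alone.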

Comment: That would require LUCA to have complex import and transport mechanisms for nucleotides and amino acids, including membrane import channel proteins able to distinguish and select the building blocks that are used in life from those that are not. As I have outlined in my book, On the Origin of Life and Virus World by means of an Intelligent Designer, origin of life researchers have consistently failed to demonstrate a plausible abiotic route to synthesize the basic building blocks of life non-enzymatically. But even if such a route existed, it would still not explain how LUCA made the transition from external incorporation to possessing the complex metabolic and catabolic pathways needed to synthesize nucleotides and amino acids, which constitutes a huge, often overlooked gap. Mycoplasma genitalium is held to be the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that only lives and survives within the body or cells of another organism (humans). As such, it imports many nutrients from the host. The host provides most of the nutrients such bacteria require, so the bacteria do not need the genes for producing these compounds themselves and do not require the same complex biosynthesis pathways as a free-living bacterium. Amino acids were not readily available on the early earth: in the Miller-Urey experiment, eight of the 20 amino acids were never produced, neither in 1953 nor in the subsequent experiments.

LUCAs information system

Currently, there is no known form of life that exists without DNA and RNA. DNA is a fundamental component of all known life on Earth and serves as the genetic blueprint that encodes the information necessary for the development, function, and reproduction of living organisms. Some claim that alternative forms of genetic material or information storage may exist in environments beyond our current understanding. This, however, is an argument from ignorance: a fallacy that occurs when someone asserts a claim based on the absence of evidence to the contrary. Claims should be based on positive evidence rather than on its absence. It is therefore warranted to start with the presumption that DNA was present when life started, and with it the biosynthesis pathways necessary to synthesize deoxynucleotides, the monomer building blocks of DNA.

The Central Dogma

The term Central Dogma was coined by Francis Crick, who, together with James Watson, and building on the work of Rosalind Franklin and Maurice Wilkins, discovered the double-helix structure of DNA. DNA is "the blueprint of life": it contains part of the data needed to make every single protein that life cannot go on without (epigenetic data, based on epigenetic languages, is also involved). No DNA, no proteins, no life. RNA has a limited coding capacity because it is unstable.

James Watson, left, with Francis Crick and their model of part of a DNA molecule (Science Photo Library)

YourGenome.org: The ‘Central Dogma’ is the process by which the instructions in DNA are converted into a functional product. It was first proposed in 1958 by Francis Crick, discoverer of the structure of DNA. The central dogma suggests that DNA contains the information needed to make all of our proteins, and that RNA is a messenger that carries this information to the ribosomes. The ribosomes serve as factories in the cell where the information is ‘translated’ from a code into a functional product. The process by which the DNA instructions are converted into the functional product is called gene expression. Gene expression has two key stages – transcription and translation. In transcription, the information in the DNA of every cell is converted into small, portable RNA messages. During translation, these messages travel from where the DNA is in the cell nucleus to the ribosomes where they are ‘read’ to make specific proteins.36
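The two stages of gene expression described above can be illustrated with a toy sketch. Only a handful of codons from the standard genetic code are included, and the sequences are invented for illustration:

```python
# Toy illustration of the Central Dogma: DNA -> RNA (transcription)
# -> protein (translation). Only a few codons of the standard genetic
# code are included; the sequences are invented for illustration.

# A small subset of the standard codon table (RNA codon -> amino acid)
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_template: str) -> str:
    """Transcription: build the mRNA complementary to the template strand."""
    pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(pairing[base] for base in dna_template)

def translate(mrna: str) -> list:
    """Translation: read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# Template strand chosen so the mRNA reads AUG UUU GGC GAA UAA
template = "TACAAACCGCTTATT"
mrna = transcribe(template)
print(mrna)              # AUGUUUGGCGAAUAA
print(translate(mrna))   # ['Met', 'Phe', 'Gly', 'Glu']
```

Transcription pairs each template base with its RNA complement; translation then reads the message three bases at a time until a stop codon is reached.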

The biosynthesis of nucleotides

The de novo biosynthesis of nucleotides is essential in cells because nucleotides serve as the building blocks of nucleic acids, which are critical for many fundamental cellular processes. Here are some key reasons why de novo nucleotide biosynthesis is essential in cells:

DNA and RNA synthesis: Nucleotides are the monomeric units that make up DNA and RNA, the two types of nucleic acids that carry genetic information in cells. De novo nucleotide biosynthesis provides the necessary raw materials for the synthesis of DNA and RNA, which are essential for cellular replication, growth, and inheritance of genetic information.

Energy storage and transfer: Nucleotides, particularly ATP (adenosine triphosphate), serve as a universal currency for energy transfer and storage in cells. ATP is used as an energy source to power numerous cellular processes, such as biosynthesis, transport of molecules across cell membranes, and cellular signaling. De novo nucleotide biosynthesis provides the precursors for the synthesis of ATP and other nucleotide-based energy molecules, which are critical for cellular energy metabolism.

Coenzymes and signaling molecules: Nucleotides also serve as important coenzymes and signaling molecules in cellular metabolism and signaling pathways. For example, NAD+ (nicotinamide adenine dinucleotide) and FAD (flavin adenine dinucleotide) are nucleotide-based coenzymes that play crucial roles in cellular redox reactions and energy metabolism. Additionally, cyclic AMP (cAMP) and GTP (guanosine triphosphate) are nucleotide-based signaling molecules that regulate various cellular processes, including cell growth, differentiation, and response to external stimuli.

Regulation of cellular processes: Nucleotides play regulatory roles in various cellular processes, such as gene expression, cell cycle progression, and immune response. For example, nucleotide-dependent enzymes, such as protein kinases and GTPases, control the activity of other proteins by phosphorylation or other post-translational modifications. Nucleotides also participate in feedback inhibition of de novo nucleotide biosynthesis, helping to regulate the cellular pool of nucleotides and maintain proper cellular nucleotide balance.

The stepwise synthesis process of nucleotides involves several key reactions and steps. Here is a general overview of the synthesis process:

1. Sugar moiety synthesis: The first step is the synthesis of the sugar moiety, which typically involves the formation of ribose or deoxyribose, the two common sugar molecules found in nucleotides. This can be achieved through various chemical reactions, such as the formose reaction or the Wohl degradation, which generate the desired sugar molecule.

2. Base synthesis: The second step is the synthesis of the nucleotide base. Bases such as adenine, guanine, cytosine, thymine, and uracil are commonly found in nucleotides. These bases can be synthesized through a variety of chemical reactions, such as the Pictet-Spengler reaction, the Fischer indole synthesis, or the Vorbrüggen glycosylation, which yield the desired base molecule.

3. Phosphate group addition: The third step is the addition of the phosphate group to the sugar molecule. This is typically achieved through phosphorylation reactions using phosphate donors, such as phosphoric acid, phosphorus oxychloride, or phosphorimidazolide. The phosphate group can be added to different positions on the sugar molecule, resulting in nucleotides with different properties and functions.

4. Nucleotide condensation: The next step is the condensation of the sugar moiety with the base and the phosphate group to form the nucleotide. This is typically achieved through chemical reactions, such as nucleophilic substitution or esterification, which result in the formation of the phosphodiester bond between the sugar and phosphate groups, with the base attached to the sugar molecule.

5. Protecting group manipulation: Throughout the synthesis process, protecting groups may be used to temporarily protect certain functional groups or prevent unwanted reactions. These protecting groups can be selectively removed or modified at specific steps using chemical reactions, allowing for the desired modifications and functionalizations of the nucleotide molecule.

6. Purification and characterization: Once the nucleotide is synthesized, it needs to be purified to remove any impurities or side products. This can be achieved through various methods, such as chromatography or crystallization. The purified nucleotide can then be characterized using techniques such as nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry, or X-ray crystallography to confirm its structure and purity.

7. Further modifications: Finally, the synthesized nucleotide can be further modified or functionalized to obtain specific derivatives or analogs with desired properties or functions. This can involve additional chemical reactions, such as acylation, alkylation, or oxidation, to introduce specific functional groups or modifications to the nucleotide molecule.
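The ordering constraints among the seven stages above can be sketched as a small dependency check. The stage labels and the dependency edges below are our own illustrative assumptions (a sketch, not a published protocol), using Python's standard-library graphlib (3.9+):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map for the synthesis stages described above:
# each stage lists the stages that must be completed before it can start.
stages = {
    "sugar synthesis": set(),
    "base synthesis": set(),
    "phosphate addition": {"sugar synthesis"},
    "condensation": {"base synthesis", "phosphate addition"},
    "deprotection": {"condensation"},
    "purification": {"deprotection"},
    "modification": {"purification"},
}

# static_order() yields every prerequisite before the stage that needs it.
order = list(TopologicalSorter(stages).static_order())
```

The point of the sketch is simply that the stages are not interchangeable: condensation cannot happen before both the sugar and the base exist, and purification presupposes a finished nucleotide.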

We will give a closer look at what it takes to synthesize RNA and DNA. We will start with the nucleobases. 

Synthesis of the RNA and DNA nucleobases

The biosynthesis of nucleobases, which are the building blocks of nucleotides, involves complex metabolic pathways that are essential for the synthesis of RNA and DNA, the two types of nucleic acids that carry genetic information in cells.

De novo nucleobase biosynthesis: Cells can synthesize nucleobases de novo, which means starting from simple precursors and synthesizing the nucleobases from scratch. De novo nucleobase biosynthesis pathways differ for RNA and DNA, although there are some similarities. The de novo biosynthesis of nucleobases generally involves a series of enzymatic reactions that convert simple precursors into complex nucleobases through multiple intermediate steps.

Purine nucleobase synthesis: Purine nucleobases, adenine (A) and guanine (G), are synthesized de novo from simpler precursors such as amino acids, bicarbonate, and phosphoribosyl pyrophosphate (PRPP). The biosynthesis of purine nucleobases involves several enzymatic steps, including ring construction, functional group modifications, and ring closure reactions, catalyzed by various enzymes such as amidotransferases, synthetases, and dehydrogenases.

Pyrimidine nucleobase synthesis: Pyrimidine nucleobases, cytosine (C), uracil (U), and thymine (T), are synthesized de novo from simpler precursors such as aspartate, bicarbonate, and PRPP. The biosynthesis of pyrimidine nucleobases also involves several enzymatic steps, including ring construction, functional group modifications, and ring closure reactions, catalyzed by various enzymes such as carbamoyl phosphate synthetase II (CPSII), dihydroorotase (DHOase), and orotate phosphoribosyltransferase (OPRT).

Salvage pathways: In addition to de novo biosynthesis, cells can also salvage nucleobases from the degradation of nucleic acids or from external sources, such as dietary intake. Salvage pathways involve the uptake of pre-formed nucleobases from the extracellular environment or the recycling of nucleobases from intracellular nucleotide degradation, providing an alternative source of nucleobases for nucleotide synthesis. They are not considered essential for life, as some organisms survive without functional salvage pathways, but they play important roles in cellular nucleotide metabolism and are advantageous for conserving resources and maintaining nucleotide pools under certain conditions.

Overall, the biosynthesis of nucleobases for RNA and DNA involves complex metabolic pathways that are essential for the synthesis of nucleotides, which are critical for the replication, transcription, and translation of genetic information in cells. De novo nucleobase biosynthesis, along with salvage pathways, ensures the availability of nucleobases for nucleotide synthesis, and proper regulation of these pathways is crucial for maintaining cellular nucleotide balance and function.

Here is a simplified overview of the minimum number of enzymes typically involved in the de novo biosynthesis of the four nucleobases used in genes (adenine, cytosine, guanine, and uracil) in most organisms:

Adenine (A) biosynthesis: The shortest pathway involves 5 enzymes: glutamine phosphoribosylpyrophosphate amidotransferase (GPAT), phosphoribosylaminoimidazole carboxamide formyltransferase (AICAR Tfase), phosphoribosylaminoimidazole succinocarboxamide synthetase (SAICAR synthetase), adenylosuccinate synthetase (ADSS), and adenylosuccinate lyase (ADSL).

Cytosine (C) biosynthesis: The shortest pathway involves 3 enzymes: carbamoyl phosphate synthetase II (CPSII), aspartate transcarbamylase (ATCase), and dihydroorotase (DHOase).

Guanine (G) biosynthesis: Starting from IMP, the shortest pathway involves 2 enzymes: inosine monophosphate (IMP) dehydrogenase (IMPDH), which oxidizes IMP to xanthosine monophosphate (XMP), and GMP synthase (GMPS, also known as XMP aminase), which aminates XMP to GMP.

Uracil (U) biosynthesis: The shortest pathway involves 3 enzymes: carbamoyl phosphate synthetase II (CPSII), dihydroorotase (DHOase), and uracil phosphoribosyltransferase (UPRT).

These are simplified pathways, and the actual biosynthesis of nucleobases in living organisms can be more complex, involving regulation, feedback mechanisms, and additional enzymes or intermediates. The specific enzymes and pathways for nucleobase biosynthesis can also vary depending on the organism, as different organisms may have different metabolic pathways for nucleotide biosynthesis.

Regulation, feedback mechanisms, and additional enzymes or intermediates play important roles in nucleotide synthesis, as they help to maintain proper control and balance in the production of nucleotides in living organisms. While they may not be absolutely essential for nucleotide synthesis to occur, they are crucial for ensuring that nucleotide production is regulated and optimized for the needs of the cell or organism. Here's a brief overview:

Regulation: Nucleotide synthesis is typically regulated at multiple levels to maintain proper control over the production of nucleotides. Enzymes involved in nucleotide synthesis are often regulated through feedback inhibition, where the end products of nucleotide metabolism (i.e., nucleotides or their derivatives) act as feedback inhibitors, binding to specific enzymes in the synthesis pathway and inhibiting their activity. This helps to prevent overproduction of nucleotides and maintain a balanced pool of nucleotides in the cell.

Feedback mechanisms: Feedback mechanisms involve the sensing of intracellular nucleotide levels and subsequent regulation of nucleotide synthesis. For example, if the cell has sufficient nucleotide levels, feedback mechanisms may downregulate the activity of enzymes involved in nucleotide synthesis to prevent overproduction. Conversely, if nucleotide levels are low, feedback mechanisms may upregulate the activity of enzymes involved in nucleotide synthesis to meet the cellular demand.

Additional enzymes or intermediates: Nucleotide synthesis pathways often require multiple enzymes and intermediates to catalyze the various chemical reactions involved. These enzymes and intermediates may be essential for the proper progression of the synthesis pathway and the efficient production of nucleotides. For example, enzymes such as kinases, phosphatases, and ligases may be required for the addition or removal of phosphate groups during nucleotide synthesis, while intermediates such as PRPP (5-phosphoribosyl-1-pyrophosphate) may serve as critical precursors for nucleotide biosynthesis.
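The feedback-inhibition behavior described above can be illustrated with a toy simulation. The rate law, constants, and step size here are invented for illustration only (they model no measured system); the point is the qualitative behavior of product-inhibited synthesis:

```python
def synthesis_rate(pool, vmax=10.0, ki=50.0):
    """Toy feedback-inhibited rate: output falls as the product pool rises.

    vmax and ki are arbitrary illustrative constants, not measured values.
    """
    return vmax * ki / (ki + pool)

def simulate(steps=1000, consumption=0.05, pool=0.0):
    # Each step: some nucleotide is synthesized (less as the pool grows),
    # and a fraction of the existing pool is consumed by the cell.
    for _ in range(steps):
        pool += synthesis_rate(pool) - consumption * pool
    return pool
```

Running the simulation, the pool settles at a steady state where synthesis balances consumption; raising the consumption rate lowers the steady-state pool rather than emptying it, which is the hallmark of feedback control.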

Here are some examples of enzymes that are involved in the regulation, feedback mechanisms, and additional intermediates of nucleotide synthesis, and are essential for the survival of the cell:

Ribonucleotide reductase: Ribonucleotide reductase is a key enzyme involved in the synthesis of deoxyribonucleotides, which are the building blocks of DNA. It catalyzes the conversion of ribonucleotides to deoxyribonucleotides, a crucial step in DNA synthesis. Ribonucleotide reductase is tightly regulated through allosteric feedback inhibition by the end products of the deoxyribonucleotide pathway, such as dATP, dGTP, dCTP, and dTTP, which bind to specific regulatory sites on the enzyme and inhibit its activity. This feedback inhibition helps to prevent overproduction of deoxyribonucleotides and maintains a balanced pool of nucleotides for DNA synthesis.

Purine and pyrimidine biosynthetic enzymes: Enzymes involved in the de novo biosynthesis of purine and pyrimidine nucleotides, such as phosphoribosyl pyrophosphate (PRPP) synthetase, adenylosuccinate synthase, and dihydroorotate dehydrogenase, are essential for nucleotide synthesis. These enzymes are regulated through feedback inhibition by the end products of the respective pathways, such as AMP, GMP, CMP, and UMP, which act as feedback inhibitors and help to maintain proper control over purine and pyrimidine nucleotide production.

Salvage pathway enzymes: Cells also have salvage pathways for recycling and salvaging nucleotides from cellular waste or exogenous sources. Enzymes involved in salvage pathways, such as hypoxanthine-guanine phosphoribosyltransferase (HGPRT) and thymidine kinase, are essential for salvaging and recycling nucleotides, as they help to replenish the cellular nucleotide pool and prevent nucleotide depletion. These salvage pathway enzymes are also regulated through feedback inhibition by the end products of nucleotide metabolism, which helps to regulate their activity and maintain nucleotide homeostasis.

Phosphatases and kinases: Enzymes such as nucleoside diphosphate kinases (NDPK), nucleotide monophosphate kinases (NMPK), and nucleotide diphosphatases (NDPases) are involved in the interconversion of nucleotide monophosphates, diphosphates, and triphosphates, and are essential for maintaining the proper balance of nucleotide pools in the cell. These enzymes are also regulated through feedback mechanisms and are important for regulating the cellular levels of nucleotide phosphates.

Enzymes involved in protecting group manipulations: In laboratory (chemical) nucleotide synthesis, protecting groups are often used to temporarily protect specific functional groups or prevent unwanted reactions. Esterases or other deprotecting agents can be used to selectively remove protecting groups at specific steps in the synthesis process, allowing for the desired modifications and functionalizations of the nucleotide molecule. Note that this applies to in vitro synthesis rather than to cellular metabolic pathways.

These are just a few examples of enzymes that are involved in the regulation, feedback mechanisms, and additional intermediates of nucleotide synthesis, and are essential for the survival of the cell. The specific enzymes and mechanisms involved may vary depending on the organism and the type of nucleotide being synthesized, but overall, these regulatory processes and enzymes are critical for maintaining proper control, balance, and efficiency in nucleotide synthesis, which is essential for cellular function and survival.

The biosynthesis of nucleobases is a complex process involving multiple distinct biosynthetic pathways. Two principal pathways account for the de novo synthesis of the five nucleobases that make up DNA and RNA. Adenine and guanine are derived from the purine biosynthetic pathway, which involves 10 enzymatic steps. This pathway starts with simple precursors such as glycine, glutamine, aspartate, and CO2, and proceeds through the common intermediate IMP, from which AMP and GMP are formed.

Uracil, thymine, and cytosine, on the other hand, are derived from the pyrimidine biosynthetic pathway, which involves six enzymatic steps. This pathway starts with simple precursors such as aspartate and carbamoyl phosphate and yields UMP, from which CMP and TMP are subsequently derived.

It's worth noting that some organisms have salvage pathways that can recycle pre-existing nucleobases to avoid the de novo synthesis of nucleobases altogether. However, the de novo synthesis of nucleobases remains a crucial process in many organisms.

The precursors for nucleotides are largely derived from amino acids, specifically glycine and aspartate, which serve as the scaffolds for the ring systems present in nucleotides. In addition, aspartate and glutamine serve as sources of NH2 groups in nucleotide formation. In de novo pathways, pyrimidine bases are assembled first from simpler compounds and then attached to ribose.

What does de novo mean?

In biochemistry, a de novo pathway is a metabolic pathway that synthesizes complex molecules from simple precursors. In other words, it is a process of creating new molecules from scratch rather than from pre-existing molecules.
De novo pathways are important for the synthesis of essential biomolecules such as nucleotides, amino acids, and fatty acids. For example, the de novo synthesis of purines and pyrimidines, the building blocks of DNA and RNA, are crucial for cell growth and replication. The term "de novo" comes from the Latin phrase "from the beginning," which reflects the fact that these pathways start with simple precursors and build up to more complex molecules through a series of biochemical reactions.

Purine bases, on the other hand, are synthesized piece by piece directly onto a ribose-based structure. These pathways consist of a small number of elementary reactions that are repeated with variations to generate different nucleotides. The simpler compounds used in the de novo pathways for nucleotide biosynthesis include carbon dioxide, amino acids (such as glycine, aspartate, and glutamine), tetrahydrofolate derivatives, ATP, and various cofactors such as NAD, NADP, and pyridoxal phosphate.  The derivatives of tetrahydrofolate (THF) that are involved as cofactors in various reactions include 

N10-formyl-THF,
N5,N10-methylene-THF,
N5-formimino-THF,
and N5-methyl-THF.

These THF derivatives play crucial roles in providing one-carbon units for the synthesis of nucleotide bases. 

One-carbon units

One-carbon units are necessary for the construction of nucleotides because they serve as building blocks for the synthesis of the nitrogen-containing bases that make up the nucleotides. These bases, the purines and pyrimidines, are synthesized through a series of enzymatic reactions that involve the transfer of one-carbon units such as formyl, methyl, methylene, and formimino groups.

Formyl is a functional group consisting of a carbon atom double-bonded to an oxygen atom and single-bonded to a hydrogen atom; its formula is -CHO. Methyl is a one-carbon unit (-CH3) used in nucleotide biosynthesis and other metabolic processes. Methylene is a functional group consisting of a carbon atom with two hydrogen atoms attached to it (-CH2-), which is present in many important compounds and serves as a building block in the synthesis of many organic compounds. Formimino is a functional group consisting of a carbon atom double-bonded to a nitrogen atom (-CH=NH); it is an important intermediate in various biochemical reactions, including the catabolism of the amino acid histidine, which generates N5-formimino-THF.
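As a compact reference, the four one-carbon groups named above can be tabulated with their condensed formulas (a plain lookup table; the formulas match the definitions in the text):

```python
# One-carbon units transferred during nucleotide base construction,
# each paired with the condensed formula of the group.
ONE_CARBON_UNITS = {
    "formyl":    "-CHO",    # carbon double-bonded to O, single-bonded to H
    "methyl":    "-CH3",
    "methylene": "-CH2-",
    "formimino": "-CH=NH",  # carbon double-bonded to nitrogen
}

def carbon_count(formula):
    # Every entry is, by definition, a one-carbon unit.
    return formula.count("C")
```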

These one-carbon units are derived from various sources, including amino acids (such as serine and glycine) and folate derivatives, and are incorporated into the nitrogen-containing rings of the nucleotide bases. For example, in the de novo synthesis of purine nucleotides, the C2 and C8 atoms of the purine ring are donated by N10-formyl-THF, while the C4, C5, and N7 atoms are derived from glycine.


In the de novo synthesis of thymidine nucleotides, the carbon atom of the methyl group of thymine is derived from N5,N10-methylene-THF. The biosynthesis of nucleotides is therefore closely linked to the metabolism of folate, and deficiencies in folate intake or metabolism can lead to impaired nucleotide synthesis and various pathologies.

These compounds are assembled and converted into the nucleotide bases through a series of enzymatic reactions. For example, in the de novo pathway for pyrimidine biosynthesis, carbamoyl phosphate and aspartate are condensed to form the pyrimidine ring, which is then further modified to yield uridine monophosphate (UMP). In the de novo pathway for purine biosynthesis, the purine ring is assembled stepwise onto the ribose scaffold through a series of enzyme-catalyzed reactions that utilize a variety of simpler compounds as substrates.

L. Stryer (2002): Purines and pyrimidines are derived largely from amino acids.  The amino acids glycine and aspartate are the scaffolds on which the ring systems present in nucleotides are assembled. Furthermore, aspartate and the side chain of glutamine serve as sources of NH2 groups in the formation of nucleotides. In de novo (from scratch) pathways, the nucleotide bases are assembled from simpler compounds. The framework for a pyrimidine base is assembled first and then attached to ribose. In contrast, the framework for a purine base is synthesized piece by piece directly onto a ribose-based structure. These pathways each comprise a small number of elementary reactions that are repeated with variations to generate different nucleotides.53

The biosynthesis of glycine, one of the two amino acids required to assemble the ring systems of nucleotides, can occur through the serine hydroxymethyltransferase (SHMT) pathway or the glycine cleavage system. The biosynthesis of aspartate, the other amino acid required to assemble the ring systems of nucleotides, can occur through the transamination of oxaloacetate.

The biosynthesis of amino acids requires a series of enzymatic reactions that convert simple molecules such as glucose or other central metabolites into the final amino acid product. These pathways are highly regulated and often require energy input from ATP or other high-energy molecules.

The serine hydroxymethyltransferase (SHMT) pathway is a biosynthetic pathway that involves the interconversion of serine and glycine, two amino acids that are important building blocks for proteins and nucleotides. In this pathway, serine is converted into glycine through the action of the enzyme serine hydroxymethyltransferase (SHMT). This enzyme transfers a one-carbon (hydroxymethyl) unit from serine to tetrahydrofolate (THF), a cofactor derived from folate, producing glycine and 5,10-methylene-THF. The SHMT pathway is important for the biosynthesis of nucleotides, which are the building blocks of DNA and RNA. In this context, the glycine produced by the SHMT pathway can be used to synthesize purines, one of the two types of nucleotide bases. Additionally, the 5,10-methylene-THF produced by the pathway can be used to produce thymidylate, a precursor for the other type of nucleotide base, pyrimidines.

The starting molecules or substrates involved in the biosynthesis pathway of the serine hydroxymethyltransferase (SHMT) pathway are serine and tetrahydrofolate (THF).

Serine is an amino acid that is used in the biosynthesis of proteins. It has a hydroxyl group (-OH) attached to its side chain and is one of the 20 common amino acids found in proteins. In addition to its role in protein synthesis, serine is also involved in the biosynthesis of other molecules such as purines, pyrimidines, and phospholipids. The biosynthesis of serine itself involves three enzymatic steps, catalyzed by 3-phosphoglycerate dehydrogenase, phosphoserine aminotransferase, and phosphoserine phosphatase.

The biosynthesis pathways for nucleotides, including the synthesis of serine, require enzymes, and these enzymes are in turn encoded by genes that are themselves made of DNA. So, in a sense, DNA is required to make the enzymes that are necessary for its own biosynthesis. This is one example of how the various components of a living system are interdependent and interconnected.

This interdependence between biosynthetic pathways means that the cell must maintain a delicate balance of metabolic processes to function properly. The cell achieves this balance through a complex network of biochemical reactions and regulatory mechanisms. These reactions are finely tuned to ensure that the concentrations of various molecules are maintained within a narrow range, and that they are produced and consumed at the appropriate rates. Regulatory mechanisms, such as feedback inhibition and gene regulation, help to maintain this balance by controlling the expression and activity of the enzymes involved in these pathways. Additionally, the cell has mechanisms for recycling and salvaging molecules, which helps to minimize waste and ensure that essential molecules are available for biosynthesis. Overall, the cell achieves this dynamic balance through the integration of complex biochemical and regulatory mechanisms, and maintaining that balance is essential for its survival.
If the balance is disrupted or unregulated, it can lead to cell death or disease. Therefore, the cell has various mechanisms in place to regulate and control the balance of its biochemical reactions. These mechanisms can involve feedback loops, enzyme regulation, and cellular signaling pathways, among others.
Some claim that the first life forms had simpler mechanisms of regulation and that more complex regulatory systems evolved over time, but there is no concrete supportive evidence for these claims.

There are several enzymes involved in the biosynthesis of tetrahydrofolate (THF), a coenzyme that plays a critical role in nucleotide synthesis and other metabolic pathways. The pathway can vary depending on the organism, but in general it involves at least six enzymes: GTP cyclohydrolase I (GCH1), dihydroneopterin aldolase (DHNA), 6-hydroxymethyl-7,8-dihydropterin pyrophosphokinase (HPPK), dihydropteroate synthase (DHPS), dihydrofolate synthetase (DHFS), and dihydrofolate reductase (DHFR). These enzymes catalyze a series of reactions that convert GTP to THF, using various cofactors and substrates along the way. Tetrahydrofolate (THF) is an essential co-factor in many biological processes, including DNA synthesis, amino acid metabolism, and nucleotide biosynthesis. Cells cannot survive without it because THF is required for the synthesis of purines, pyrimidines, and certain amino acids that are essential for cell growth and division.

As mentioned above, aspartate depends on the transamination of oxaloacetate. Transamination is a metabolic process in which an amino group (-NH2) is transferred from an amino acid to a keto acid, resulting in the formation of a new amino acid and a new keto acid. The transfer of the amino group is catalyzed by enzymes known as transaminases or aminotransferases.

In the case of the transamination of oxaloacetate, the amino group is transferred from an amino acid (usually glutamate) to oxaloacetate, resulting in the formation of aspartate and alpha-ketoglutarate. This reaction is catalyzed by the enzyme aspartate aminotransferase. This transamination reaction is an important step in several metabolic pathways, including the biosynthesis and degradation of amino acids. For example, aspartate is a precursor for the synthesis of several other amino acids, including methionine and threonine, and alpha-ketoglutarate can enter the citric acid cycle and be used as a source of energy for the cell.
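The transamination reaction just described can be checked for atom balance. The molecular formulas are standard (glutamate C5H9NO4, oxaloacetate C4H4O5, aspartate C4H7NO4, alpha-ketoglutarate C5H6O5); the balance-checking code itself is our own sketch:

```python
import re
from collections import Counter

def atoms(formula):
    """Count atoms in a condensed molecular formula such as 'C5H9NO4'."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n) if n else 1
    return counts

def total(formulas):
    """Sum the atom counts over one side of a reaction."""
    result = Counter()
    for f in formulas:
        result.update(atoms(f))
    return result

# glutamate + oxaloacetate -> aspartate + alpha-ketoglutarate
reactants = total(["C5H9NO4", "C4H4O5"])
products = total(["C4H7NO4", "C5H6O5"])
```

The two Counters come out equal, showing that the transfer of the amino group merely redistributes atoms between the amino acid and the keto acid; nothing is created or destroyed.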

Oxaloacetate


Oxaloacetate is a four-carbon dicarboxylic acid that is an important intermediate in many metabolic pathways. It is synthesized from pyruvate or other intermediates through a series of enzymatic reactions in the mitochondrial matrix of eukaryotic cells or the cytoplasm of prokaryotic cells. One pathway for the synthesis of oxaloacetate involves the carboxylation of pyruvate, which is catalyzed by the enzyme pyruvate carboxylase. This reaction requires ATP and bicarbonate as cofactors, and results in the formation of oxaloacetate.

The complex metabolic pathways involved in the biosynthesis of the precursors to start the synthesis of nucleotides from simpler compounds demonstrate the intricate interdependence and regulation of various biochemical processes within the cell. Providing the precursors for the biosynthesis of amino acids, co-factors, and nucleotides requires a series of enzymatic reactions that are highly regulated and often require energy input from ATP or other high-energy molecules. Moreover, the biosynthesis of one molecule often depends on the availability of another molecule, resulting in a delicate balance of metabolic processes that must be maintained for the cell to function properly. This indicates that the setup is extremely unlikely to be achievable in a step-wise fashion, and an "all or nothing" approach is required, which only an intelligent designer is capable of instantiating.  

The gap between the prebiotic, non-enzymatic synthesis of organic compounds and the complex metabolic pathways found in living cells is significant and multifaceted.

Prebiotic chemistry is concerned with the chemical processes that took place on Earth before the emergence of life. It is hypothesized that the basic building blocks of life, such as amino acids, nucleotides, and sugars, were formed through a series of chemical reactions that occurred spontaneously in the early Earth's environment. These reactions would have been driven by energy sources such as lightning, volcanic activity, and UV radiation.

However, the formation of these simple organic molecules would not immediately lead to the formation of complex metabolic pathways. The existence of amino acids and sugars alone does not produce pathways, because a metabolic pathway requires a precise interconnection of multiple enzymes, each of which performs a specific function in the pathway. Enzymes are complex protein molecules that catalyze specific chemical reactions within a cell. For a metabolic pathway to function properly, the enzymes involved must be present in the correct sequence, with each enzyme catalyzing the correct reaction to produce the desired end product. This interconnection of enzymes is critical to the function of the pathway and requires a high degree of specificity and precision.

Furthermore, the formation of an enzyme is itself a complex process that requires a specific sequence of amino acids to fold into the correct three-dimensional structure, which is essential for its function. The probability of a random sequence of amino acids folding into a functional enzyme is extremely low, making the spontaneous formation of a functional enzyme highly unlikely.

Moreover, metabolic pathways require energy to function, which must come from an external source. In modern cells, energy is provided by the breakdown of nutrients through metabolic pathways, but in the absence of such pathways, the origin of life would have required an external energy source. Geothermal energy, lightning, and radiation, among others, have been hypothesized as such sources. The problem, however, is that these sources are very unspecific in their delivery of energy.
In contrast, ATP (adenosine triphosphate) is a highly specific energy carrier that can be precisely funneled to the site of an enzyme where it is needed for a specific chemical reaction to occur.

ATP is a small molecule that is synthesized by cells through metabolic pathways, and it is used to power many cellular processes, including muscle contraction, nerve impulses, and the synthesis of molecules. ATP stores energy in its high-energy phosphate bonds, which can be released through hydrolysis to drive endergonic reactions. The specificity of ATP lies in its ability to interact with enzymes in a highly specific manner. Enzymes can bind ATP at specific sites, called active sites, which are precisely shaped to fit the ATP molecule. Once ATP is bound to an enzyme, the high-energy phosphate bond can be cleaved, releasing energy that can be used to power specific chemical reactions.
The precise delivery of ATP to the site of an enzyme is critical for its function in metabolic pathways. This is because the energy required for a specific reaction may be different from that required for another reaction in the same pathway. Therefore, the ability to funnel ATP precisely to the site where it is needed ensures that the energy is used efficiently and only where it is required.

The hypothesis of the origin of life by unguided means faces significant challenges in explaining how metabolic pathways, which rely on the highly specific energy carrier ATP, arose in the absence of modern cellular machinery. One proposed solution to this challenge is the concept of proto-metabolic pathways, which are thought to have arisen through a series of chemical reactions that were catalyzed by minerals or simple organic molecules on the early Earth. Over time, these pathways would have become more complex and interconnected, eventually leading to the emergence of metabolic pathways as we know them today.

One of the major challenges in bridging the gap between prebiotic chemistry and living organisms is the complexity of metabolic pathways found in living cells. These pathways involve a series of enzyme-catalyzed reactions that convert simple organic molecules into more complex molecules and generate the energy required for cellular functions. The origin of these pathways is claimed to have occurred over billions of years, through a process of trial and error.

The contrast can be put this way. On one side, you have an intelligent-agency-based system of irreducible complexity: tightly integrated, information-rich functional systems, with energy ready at hand and directed to where it is needed, that routinely generate the sort of phenomenon being observed. On the other side, imagine a golfer who has played a golf ball through a 12-hole course. Can you imagine that the ball could also play itself around the course in his absence? Of course, we cannot rule out that natural forces, like wind, tornadoes, rain, or storms, could produce the same result, given enough time. The chances against it, however, are so immense that the suggestion implies the non-living world had an innate desire to get through the 12-hole course.

The analogy of the golf ball playing itself around a course can also be applied to metabolic pathways. Metabolic pathways are complex sequences of chemical reactions that occur within cells and are responsible for the production of energy and the synthesis of various cellular components. These pathways are highly integrated, with each step depending on the previous one, and require energy to function.

For metabolic pathways to work, all of their parts must be present and functioning together. This is because each step in the pathway is catalyzed by a specific enzyme, a protein that facilitates the reaction. Enzymes are highly specific in their function: each enzyme is designed to work on a specific substrate, or molecule, and produce a specific product. For example, in the process of cellular respiration, glucose is broken down into smaller molecules through a series of reactions that occur in different parts of the cell. The breakdown of glucose occurs in several stages, each catalyzed by a specific enzyme. If any one of these enzymes is missing or not functioning properly, the entire pathway is disrupted and the cell cannot produce energy efficiently. Moreover, metabolic pathways are regulated by feedback mechanisms that ensure that the rate of the pathway matches the needs of the cell. If any part of the pathway is disrupted, intermediate molecules can build up to levels that are toxic to the cell. This highlights the importance of all the components being present and functioning together for the pathway to work correctly. Disruption or absence of any one component can break down the entire pathway, underscoring the requirement for a highly specific and integrated system.
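This all-or-nothing dependence can be sketched in a few lines of Python. The example uses the first three steps of glycolysis purely as an illustration; the helper function and data structure are hypothetical, not taken from any biochemistry library:

```python
# Toy model of a linear metabolic pathway: each step needs its specific
# enzyme, and the product of one step is the substrate of the next.

def run_pathway(substrate, steps, available_enzymes):
    """Run a substrate through ordered (enzyme, product) steps.
    Returns the final product, or None if any required enzyme is absent."""
    current = substrate
    for enzyme, product in steps:
        if enzyme not in available_enzymes:
            return None  # a single missing enzyme halts the whole pathway
        current = product
    return current

# First three steps of glycolysis, for illustration only.
GLYCOLYSIS_SKETCH = [
    ("hexokinase", "glucose-6-phosphate"),
    ("phosphoglucose isomerase", "fructose-6-phosphate"),
    ("phosphofructokinase", "fructose-1,6-bisphosphate"),
]

ALL_ENZYMES = {enzyme for enzyme, _ in GLYCOLYSIS_SKETCH}

full = run_pathway("glucose", GLYCOLYSIS_SKETCH, ALL_ENZYMES)
broken = run_pathway("glucose", GLYCOLYSIS_SKETCH,
                     ALL_ENZYMES - {"phosphoglucose isomerase"})
```

Removing even a single middle enzyme yields no usable end product at all, which is exactly the point being made: partial pathways produce nothing the cell can use.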

The origin of ATP remains a significant challenge for the proto-metabolic pathway hypothesis, as the molecule is not readily available on the prebiotic Earth. One proposed solution to this challenge is that ATP would have been produced through abiotic reactions, such as the phosphorylation of ADP (adenosine diphosphate) in the presence of mineral catalysts. Other proposed mechanisms include the production of ATP through the metabolism of simpler molecules, such as acetyl-CoA. Acetyl-CoA, however, is not naturally found in the environment; it is synthesized within living organisms through various metabolic pathways. Another proposed solution is that ATP would have been produced through the use of alternative energy carriers, such as pyrophosphate, a less efficient but more readily available molecule that can drive chemical reactions. While these proposed solutions remain under investigation and debate, it is clear that the origin of metabolic pathways and the production of highly specific energy carriers such as ATP remain significant and, in my view, insurmountable challenges for proposals of the origin of life by unguided means. Continued research in this field will likely shed further light on the impossibility of the claim that life could have arisen on Earth by stochastic, non-designed events.

G. Zubay (2000): The most striking difference in the pathways to the purines and pyrimidines is the timing of ribose involvement. In de novo purine synthesis the purine ring is built on the ribose in a stepwise fashion. In pyrimidine synthesis, the nitrogen base is synthesized prior to the attachment of the ribose. In both instances, the ribose-5-phosphate is first activated by the addition of a pyrophosphate group to the C'-1 of the sugar to form phosphoribosyl pyrophosphate (PRPP). This activation facilitates the formation of the linkage between the C'-1 carbon of the ribose and the nitrogen of the purine and pyrimidine bases.54

D. Penny (1999): An interesting picture of the LUCA is emerging. It was a fully DNA and protein-based organism with extensive processing of RNA transcripts. 37

A. Hiyoshi (2011): All the self-reproducing cellular organisms so far examined have DNA as the genome.

E. V. Koonin (2012): All the difficulties and uncertainties of evolutionary reconstructions notwithstanding, parsimony analysis combined with less formal efforts on the reconstruction of the deep past of particular functional systems leaves no serious doubts that LUCA already possessed at least several hundred genes. In addition to the aforementioned “golden 100” genes involved in expression, this diverse gene complement consists of numerous metabolic enzymes, including pathways of the central energy metabolism and the biosynthesis of nucleotides, amino acids, and some coenzymes, as well as some crucial membrane proteins, such as the subunits of the signal recognition particle (SRP) and the H+- ATPase. 36




Re: Refuting Darwin, confirming design (Tue Nov 08, 2022 1:46 pm)
Phosphoribosyl-pyrophosphate synthetase (Prs)

Phosphoribosyl-pyrophosphate synthetase (Prs) is an enzyme that plays a crucial role in nucleotide biosynthesis, as it catalyzes the conversion of ribose-5-phosphate (R5P) and ATP (adenosine triphosphate) to phosphoribosyl pyrophosphate (PRPP), which is an essential precursor for the de novo synthesis of purine and pyrimidine nucleotides.

The overall structure of Prs typically consists of multiple domains that are responsible for ATP and R5P binding, as well as the active site for catalysis. Prs is typically a homodimer, meaning it is composed of two identical subunits that come together to form the functional enzyme. The subunits may contain different domains responsible for catalysis and binding. The minimal bacterial isoform of Prs, also known as PRS1, is the smallest version of Prs and is found in some bacteria, including Escherichia coli (E. coli). PRS1 from E. coli is composed of 265 amino acids and has a molecular weight of approximately 29.7 kDa. It contains three domains: an N-terminal domain responsible for ATP binding, a central domain responsible for R5P binding, and a C-terminal domain that contains the active site for catalysis. Its main function is to catalyze the conversion of ribose-5-phosphate (R5P) and ATP (adenosine triphosphate) to phosphoribosyl pyrophosphate (PRPP). This reaction involves transferring the pyrophosphate group from ATP to the C1 position of R5P, resulting in the formation of PRPP.

PRPP is also involved in other important cellular processes, such as the biosynthesis of NAD (nicotinamide adenine dinucleotide), histidine, and tryptophan, as well as the formation of certain coenzymes and cofactors.

In addition to nucleotide biosynthesis, PRPP serves as a key regulator of various metabolic pathways in cells, as it acts as an allosteric activator or inhibitor of several enzymes involved in purine and pyrimidine metabolism. This makes Prs and the synthesis of PRPP crucial for maintaining cellular nucleotide pools and regulating nucleotide metabolism, which are essential for cell growth, proliferation, and survival.

An intelligent designer required to set up the Metabolic Networks used in life 

Observation: The existence of metabolic pathways is crucial for molecular and cellular function. Although bacterial genomes differ vastly in their sizes and gene repertoires, no matter how small, they must contain all the information that allows the cell to perform many essential (housekeeping) functions, giving the cell the ability to maintain metabolic homeostasis, reproduce, and evolve, the three main properties of living cells (Gil et al., 2004). In fact, metabolism is one of the most conserved cellular processes. By integrating data from comparative genomics and large-scale deletion studies, the paper "Structural analyses of a hypothetical minimal metabolism" proposes a minimal gene set comprising 206 protein-coding genes for a hypothetical minimal cell. The paper lists 50 enzymes/proteins required to create the metabolic network implemented by this hypothetical minimal genome. All 50 enzymes/proteins, and the metabolic network they form, must be fully implemented to permit the cell to keep its basic functions.
  
Hypothesis (Prediction): The origin of biological irreducible metabolic pathways that also require regulation and which are structured like a cascade, similar to electronic circuit boards,  are best explained by the creative action of an intelligent agent.

Experiment: Experimental investigations of metabolic networks indicate that they are full of nodes with enzymes/proteins, detectors, on/off switches, dimmer switches, relay switches, feedback loops, etc., which require for their synthesis information-rich, language-based codes stored in DNA. Hierarchical structures have proved best suited for capturing most of the features of metabolic networks (Ravasz et al., 2002). It has been found that metabolites can only be synthesized if carbon, nitrogen, phosphorus, and sulfur, and the basic building blocks generated from them in central metabolism, are available.


This implies that regulatory networks gear metabolic activities to the availability of these basic resources. One metabolic circuit thus depends on products coming from other, central metabolic pathways, one depending on the other, as in a cascade. Noteworthy, too, is that feedback loops have been found to be required to regulate metabolic flux and the activities of many or all of the enzymes in a pathway. In many cases, metabolic pathways are highly branched, in which case it is often necessary to alter fluxes through part of the network while leaving them unaltered or decreasing them in other parts of the network (Curien et al., 2009). These pathways are interconnected in a functional way, resulting in a living cell. Biological metabolic networks are exquisitely integrated, so significant alterations inevitably damage or destroy their function. Changes in flux often require changes in the activities of multiple enzymes in a metabolic sequence, and synthesis of one metabolite typically requires the operation of many pathways.
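A feedback loop of the kind described can be illustrated with a minimal discrete-time simulation in Python. All rate constants are illustrative assumptions; the model simply shows that end-product inhibition of the first enzyme holds the product near a set point, while the same pathway without feedback overshoots:

```python
def simulate_with_feedback(k_max=10.0, k_i=5.0, degradation=0.5, steps=200):
    """End product inhibits its own production: the synthesis rate falls
    as the product accumulates (simple hyperbolic inhibition)."""
    product = 0.0
    for _ in range(steps):
        rate = k_max * k_i / (k_i + product)  # inhibited production rate
        product += rate - degradation * product
    return product

def simulate_no_feedback(k=10.0, degradation=0.5, steps=200):
    """Same pathway with a constant, unregulated production rate."""
    product = 0.0
    for _ in range(steps):
        product += k - degradation * product
    return product

regulated = simulate_with_feedback()    # settles near ~7.8 units
unregulated = simulate_no_feedback()    # settles at 20 units
```

The regulated pathway converges to a much lower steady state than the unregulated one, which is the sense in which feedback "gears" flux to demand rather than letting intermediates accumulate.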

Conclusion: Regardless of its initial complexity, self-maintaining chemical-based metabolic life could not have emerged in the absence of a genetic replicating mechanism ensuring the maintenance, stability, and diversification of its components. In the absence of any hereditary mechanisms, autotrophic reaction chains would have come and gone without leaving any direct descendants able to resurrect the process. Life as we know it consists of both chemistry and information.   If metabolic life ever did exist on the early Earth, to convert it to life as we know it would have required the emergence of some type of information system under conditions that are favorable for the survival and maintenance of genetic informational molecules. ( Ribas de Pouplana, Ph.D.)
 
Biological systems are functionally organized, integrated into an interdependent network, and complex, like human-made machines and factories. The wiring or circuit board of an electrical device corresponds to the metabolic pathways of a biological cell. For the assembly of a biological system of multiple parts, not only must the origin of the genome information to produce all proteins/enzymes with their respective subunits and assembly cofactors be explained, but also parts availability (the right materials must be transported to the building site; often these materials are unusable in their raw form, and other complex machines come into play to transform them into a usable form, all of which requires specific information), synchronization (these parts must be ready on hand at the building site), manufacturing and assembly coordination (which requires information on how to assemble every single part correctly, at the right place, at the right moment, and in the right position), and interface compatibility (the parts must fit together correctly, like lock and key). Unless the origin of all these steps is properly explained, functional complexity as it exists in biological systems has not been addressed adequately. How could the whole process have started from zero without planning intelligence? Why would natural, unguided mechanisms produce a series of enzymes that only generate useless intermediates until all of the enzymes needed for the end product exist, are in place, and do their job?

S.Lovtrup (1987):  "...the reasons for rejecting Darwin's proposal were many, but first of all that many innovations cannot possibly come into existence through accumulation of many small steps, and even if they can, natural selection cannot accomplish it, because incipient and intermediate stages are not advantageous." 42

On the one side, you have an intelligent agency-based system of irreducible complexity: tightly integrated, information-rich functional systems with energy directed on hand to the task, which routinely generate the sort of phenomenon being observed. On the other side, imagine a golfer who has played a golf ball through a 12-hole course. Can you imagine that the ball could also play itself around the course in his absence? Of course, we cannot rule out that natural forces like wind, tornadoes, rain, or storms could produce the same result, given enough time. The chances against it, however, are so immense that the suggestion implies the non-living world had an innate desire to get through the 12-hole course.

D. Armenta-Medina (2014): Nucleotide metabolism is central in all living systems, due to its role in transferring genetic information and energy. Indeed, it has been described as one of the ancient metabolisms in evolution.  In addition, many of the intermediates associated with this metabolic module have been intimately associated with prebiotic chemistry and the origin of life. In this regard, we adopted a multigenomic strategy for the reconstruction and analysis of the metabolism of nucleotides, evaluating the contribution of the origin and diversification of de novo and salvage pathways for nucleotides in the evolution of organisms. In addition, these analyses allow the identification of a metabolic link between the LCA and the first steps in the structure of biological networks. Our strategy reveals some general rules concerning the adaptation of the first predominant chemical reactions to enzymatic steps in the LCA and allows us to infer environmental issues in the early stages of the emergence of life.39 

Purines and Pyrimidines

Biochem (2022): Nucleotides serve numerous functions in different reaction pathways. For example, nucleotides are the activated precursors required for DNA and RNA synthesis. Nucleotides form the structural moieties of many coenzymes (examples include reduced nicotinamide adenine dinucleotide [NADH], flavin adenine dinucleotide [FAD], and coenzyme A). Nucleotides are critical elements in energy metabolism (adenosine triphosphate [ATP], guanosine triphosphate [GTP]). Nucleotide derivatives are frequently activated intermediates in many biosynthetic pathways. In addition, nucleotides act as second messengers in intracellular signaling (e.g., cyclic adenosine monophosphate [cAMP], cyclic guanosine monophosphate [cGMP]). Finally, nucleotides and nucleosides act as metabolic allosteric regulators. Think about all of the enzymes that have been studied that are regulated by levels of ATP, ADP, and AMP. Because of the minimal dietary uptake of these important molecules, de novo synthesis of purines and pyrimidines is required.

A nucleoside is a purine or pyrimidine base linked to a ribose sugar and a nucleotide is a phosphate ester bonded to a nucleoside. 

De novo purine biosynthesis


D. Voet et al. (2016): Widely divergent organisms such as E. coli, yeast, pigeons, and humans have virtually identical pathways for the biosynthesis of purine nucleotides. 49

Biochem (2022):  Purines and pyrimidines are required for synthesizing nucleotides and nucleic acids. These molecules can be synthesized either from scratch, de novo, or salvaged from existing bases. The de novo pathway of purine synthesis is complex, consisting of 11 steps and requiring six molecules of adenosine triphosphate (ATP) for every purine synthesized. The precursors that donate components to produce purine nucleotides include glycine, ribose 5-phosphate, glutamine, aspartate, carbon dioxide, and N10-formyltetrahydrofolate (N10-formyl-FH4) (Figure below).


Origin of the atoms of the purine base. 
FH4, tetrahydrofolate; RP, ribose 5′-phosphate.

Purines are synthesized as ribonucleotides, with the initial purine synthesized being inosine monophosphate (IMP). Adenosine monophosphate (AMP) and guanosine monophosphate (GMP) are each derived from IMP in two-step reaction pathways. The de novo pathway requires at least six high-energy bonds per purine produced.46
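The atom-by-atom origin of the purine ring described above can be captured in a small lookup table. The assignment below is the standard textbook one; the Python structure itself is merely an illustration:

```python
# Source of each atom in the purine ring (standard textbook assignment).
PURINE_ATOM_SOURCES = {
    "N1": "aspartate",
    "C2": "N10-formyl-FH4",
    "N3": "glutamine (amide N)",
    "C4": "glycine",
    "C5": "glycine",
    "C6": "CO2",
    "N7": "glycine",
    "C8": "N10-formyl-FH4",
    "N9": "glutamine (amide N)",
}

# Distinct precursors that donate ring atoms. Ribose 5-phosphate supplies
# the sugar, not a ring atom, so it does not appear here.
ring_donors = sorted(set(PURINE_ATOM_SOURCES.values()))
```

Glycine alone contributes three contiguous atoms (C4, C5, N7), while five distinct donors supply the nine ring atoms, which is why so many separate supply lines must converge on this one synthesis.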

Comment: Every company that manufactures things requires, in many cases, a purchasing department exclusively involved in acquiring and importing the goods and basic materials used in the factory. That is already a complex process, requiring many different steps in which communication plays a decisive role. Not just any raw material can be used: it must be the right material, in the right quantity, form, purity, concentration, size, etc. Once the raw materials are inside the company's factory, the processing procedures can begin. Often these raw materials require specific processing before they can be used in the assembly of the end product. In our case, six(!) different precursor molecules have to be recruited to begin nucleotide base synthesis. How did the LUCA get its know-how of the right precursors to make purines?

Graham Cairns-Smith (2003): We return to questions of fine-tuning, accuracy, and specificity. Any competent organic synthesis hinges on such things. In the laboratory, the right materials must be taken from the right bottles and mixed and treated in an appropriate sequence of operations. In the living cell, there must be teams of enzymes with specificity built into them. A protein enzyme is a particularly well-tuned device. It is made to fit beautifully the transition state of the reaction it has to catalyze. Something (or someone?) must have performed the fine-tuning necessary to allow such sophisticated molecules as nucleotides to be cleanly and consistently made in the first place.47

Yitzhak Tor (2013):  How did nature “decide” upon these specific heterocycles? Evidence suggests that many types of heterocycles could have been present on early Earth. It is therefore likely that the contemporary composition of nucleobases is a result of multiple selection pressures that operated during early chemical and biological evolution. The persistence of the fittest heterocycles in the prebiotic environment towards, for example, hydrolytic and photochemical assaults, may have given some nucleobases a selective advantage for incorporation into the first informational polymers. The prebiotic formation of polymeric nucleic acids employing the native bases remains, however, a challenging problem to reconcile. Two such selection pressures may have been related to genetic fidelity and duplex stability. Considering these possible selection criteria, the native bases along with other related heterocycles seem to exhibit a certain level of fitness. We end by discussing the strength of the N-glycosidic bond as a potential fitness parameter in the early DNA world, which may have played a part in the refinement of the alphabetic bases. Even minute structural changes can have substantial consequences, impacting the intermolecular, intramolecular and macromolecular “chemical physiology” of nucleic acids 48

The enzymes of de novo purine synthesis

Donald Voet et al. (2016): Many of the intermediates in the de novo purine biosynthesis pathway degrade rapidly in water. Their instability in water suggests that the product of one enzyme must be channeled directly to the next enzyme along the pathway. Recent evidence shows that the enzymes do indeed form complexes when purine synthesis is required.

Comment: This is remarkable and shows how foreplanning is required to get the end product without it being destroyed along the synthesis pathway. There is no natural urge or need for these intermediates to be preserved.
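The advantage of channeling an unstable intermediate can be quantified with a simple first-order decay sketch. The half-life and transfer times below are illustrative assumptions, not measured values:

```python
import math

def surviving_fraction(half_life_s, transfer_time_s):
    """Fraction of an intermediate still intact after the given transfer
    time, assuming first-order (exponential) decay in water."""
    k = math.log(2) / half_life_s          # decay rate constant
    return math.exp(-k * transfer_time_s)

# Hypothetical intermediate with a 1-second half-life in water:
free_diffusion = surviving_fraction(half_life_s=1.0, transfer_time_s=5.0)
channeled = surviving_fraction(half_life_s=1.0, transfer_time_s=0.001)
# Diffusing freely for 5 s leaves only ~3% intact, while direct
# channeling (~1 ms hand-off) delivers essentially all of it.
```

Under these assumed numbers, the enzyme complexes that channel intermediates make the difference between near-total loss and near-total delivery of the product of each step.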

The purine ring system is assembled on ribose-phosphate

De novo purine biosynthesis, like pyrimidine biosynthesis, requires phosphoribosyl pyrophosphate (PRPP), but for purines, PRPP provides the foundation on which the bases are constructed step by step.

Bjarne Hove-Jensen (2016): Phosphoribosyl-pyrophosphate synthetase (Prs) catalyzes the synthesis of phosphoribosyl pyrophosphate (PRPP), an intermediate in nucleotide metabolism and the biosynthesis of the amino acids histidine and tryptophan. PRPP is required for the synthesis of purine and pyrimidine nucleotides, the pyridine nucleotide cofactor NAD(P), and the amino acids histidine and tryptophan. In nucleotide synthesis, PRPP is used both for de novo synthesis and for the salvage pathway, by which bases are metabolized to nucleotides.  Prs is thus a central enzyme in the metabolism of nitrogen-containing compounds.51

Donald Voet et al. (2016): IMP is synthesized in a pathway composed of 11 reactions.

The shortest purine biosynthetic pathway, also known as the de novo purine biosynthesis pathway, involves the synthesis of inosine monophosphate (IMP), which is a precursor for both adenine and guanine, the two purine bases found in DNA and RNA. The de novo purine biosynthesis pathway typically involves a series of enzymatic reactions that convert simple precursors into IMP.

In general, the de novo purine biosynthesis pathway consists of 11 enzymatic reactions, catalyzed by the following enzymes in sequential order:

1. Ribose-phosphate diphosphokinase (PRPP synthetase, Prs): Catalyzes the synthesis of PRPP from ribose 5-phosphate and ATP.
2. Amidophosphoribosyltransferase (PurF): Catalyzes the transfer of the amide group of glutamine to PRPP, forming 5-phosphoribosylamine (PRA).
3. Glycinamide ribotide (GAR) synthetase (PurD): Catalyzes the ATP-dependent condensation of PRA with glycine to form GAR.
4. GAR transformylase (PurN): Catalyzes the transfer of a formyl group from N10-formyltetrahydrofolate to GAR, forming formylglycinamide ribotide (FGAR).
5. Formylglycinamidine ribotide (FGAM) synthetase (PurL): Catalyzes the glutamine- and ATP-dependent amidation of FGAR to FGAM.
6. Aminoimidazole ribotide (AIR) synthetase (PurM): Catalyzes the ATP-dependent ring closure of FGAM to AIR.
7. AIR carboxylase (PurK/PurE): Catalyzes the carboxylation of AIR to carboxyaminoimidazole ribotide (CAIR).
8. SAICAR synthetase (PurC): Catalyzes the condensation of CAIR with aspartate to form 5-aminoimidazole-4-(N-succinylocarboxamide) ribotide (SAICAR).
9. Adenylosuccinate lyase (PurB): Catalyzes the elimination of fumarate from SAICAR to yield 5-aminoimidazole-4-carboxamide ribotide (AICAR).
10. AICAR transformylase (PurH): Catalyzes the transfer of a formyl group from N10-formyltetrahydrofolate to AICAR, forming 5-formaminoimidazole-4-carboxamide ribotide (FAICAR).
11. IMP cyclohydrolase (PurH): Catalyzes the cyclization of FAICAR to inosine monophosphate (IMP).
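The eleven steps above can be encoded as data and the cascade structure checked mechanically: every step's substrate must be exactly the previous step's product. Gene names follow the standard E. coli convention; this is a bookkeeping sketch, not a kinetic model:

```python
# (enzyme, substrate, product) for each step of de novo IMP synthesis.
IMP_PATHWAY = [
    ("PRPP synthetase (Prs)",                 "ribose-5-phosphate", "PRPP"),
    ("amidophosphoribosyltransferase (PurF)", "PRPP",               "PRA"),
    ("GAR synthetase (PurD)",                 "PRA",                "GAR"),
    ("GAR transformylase (PurN)",             "GAR",                "FGAR"),
    ("FGAM synthetase (PurL)",                "FGAR",               "FGAM"),
    ("AIR synthetase (PurM)",                 "FGAM",               "AIR"),
    ("AIR carboxylase (PurK/PurE)",           "AIR",                "CAIR"),
    ("SAICAR synthetase (PurC)",              "CAIR",               "SAICAR"),
    ("adenylosuccinate lyase (PurB)",         "SAICAR",             "AICAR"),
    ("AICAR transformylase (PurH)",           "AICAR",              "FAICAR"),
    ("IMP cyclohydrolase (PurH)",             "FAICAR",             "IMP"),
]

def is_connected(pathway):
    """True if each step consumes the product of the step before it."""
    return all(pathway[i][2] == pathway[i + 1][1]
               for i in range(len(pathway) - 1))
```

The check passes only because every intermediate has exactly one producer and one consumer in sequence, which is the cascade structure the text emphasizes: remove any row and the chain from ribose-5-phosphate to IMP is severed.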

In addition to these enzymatic reactions, the de novo purine biosynthesis pathway is also regulated at various steps to maintain cellular homeostasis and prevent excessive purine synthesis. Regulation can occur at the transcriptional, translational, and post-translational levels, involving feedback inhibition, allosteric regulation, and enzyme degradation, among other mechanisms.

Regulation of the de novo purine biosynthesis pathway

The regulation of the de novo purine biosynthesis pathway is essential to maintain cellular homeostasis and prevent excessive purine synthesis. Purines are vital components of DNA, RNA, ATP, GTP, and other important molecules involved in cellular metabolism, energy production, and signaling. However, excessive purine synthesis can lead to an imbalance in cellular nucleotide pools, disrupt cellular metabolism, and result in various pathological conditions. Purine homeostasis ensures that cells have adequate levels of purine nucleotides for their normal functions while avoiding excessive accumulation or wasteful overproduction of these molecules. Cells need to carefully regulate purine nucleotide synthesis, salvage, and degradation pathways to maintain optimal intracellular levels of purine nucleotides, as imbalances can lead to cellular dysfunction and disease.

In bacteria, the regulation of purine nucleotide biosynthesis, including the PurR-mediated regulation of the purine operon, is an important mechanism to maintain purine homeostasis. This allows bacteria to modulate the expression of purine biosynthesis genes in response to changing cellular purine nucleotide levels, ensuring that they can efficiently utilize resources and adapt to different environments.

In higher organisms, including humans, purine homeostasis is also critical for normal cellular functions. Disruptions in purine metabolism or regulation can lead to various diseases, including metabolic disorders, immune system dysfunction, and cancer. For example, deficiencies in enzymes involved in purine metabolism can result in severe immunodeficiency disorders such as severe combined immunodeficiency (SCID) or Lesch-Nyhan syndrome, which are life-threatening conditions.

Here are some key points highlighting the importance of regulation in maintaining cellular homeostasis and preventing excessive purine synthesis:

Preventing Energy Waste: The de novo purine biosynthesis pathway requires multiple ATP and GTP molecules as substrates and energy sources. Uncontrolled and excessive purine synthesis could lead to the depletion of cellular ATP and GTP pools, resulting in energy waste and compromising cellular functions.

Maintaining Nucleotide Balance: Purine nucleotides are essential for DNA and RNA synthesis, and their balance is crucial for maintaining proper nucleotide pools. Unregulated purine synthesis can result in an excessive accumulation of purine nucleotides, leading to imbalances in nucleotide pools and disrupting cellular metabolism, DNA replication, and RNA transcription.

Preventing Toxic Intermediates: The de novo purine biosynthesis pathway involves multiple enzymatic steps and intermediate metabolites. Accumulation of toxic intermediates, such as adenosine monophosphate (AMP), can have detrimental effects on cellular health and function. Regulation of the pathway prevents the excessive buildup of toxic intermediates and protects cells from potential damage.

Preventing Cell Proliferation Disorders: Purine nucleotides are essential for cell proliferation, and uncontrolled purine synthesis can lead to uncontrolled cell growth and proliferation, which is associated with cancer and other cell proliferation disorders. Proper regulation of the de novo purine biosynthesis pathway helps prevent uncontrolled cell proliferation and maintain normal cellular growth and division.

Responding to Metabolic Demands: Cells need to adjust their purine nucleotide synthesis based on their metabolic demands, growth rate, and environmental conditions. Regulation of the pathway allows cells to modulate the expression of key enzymes involved in purine biosynthesis in response to changing cellular and environmental conditions, ensuring that purine synthesis is tailored to meet the metabolic demands of the cell.

It's important to note that the specific enzymes and regulatory mechanisms involved in the de novo purine biosynthesis pathway may vary slightly among different organisms, as there can be some variation in the pathway across different species. However, the overall general outline of the pathway and the number of enzymes involved are consistent with the typical de novo purine biosynthesis pathway.


De novo purine biosynthesis pathway regulation can occur at the transcriptional, translational, and post-translational levels, involving feedback inhibition, allosteric regulation, and enzyme degradation, among other mechanisms.

At the transcriptional level

At the transcriptional level, the simplest form of regulation of the de novo purine biosynthesis pathway involves the control of gene expression through the binding of specific regulatory proteins to the promoter regions of the genes encoding the enzymes involved in the pathway. One well-studied example of transcriptional regulation of purine synthesis in bacteria is the purine repressor (PurR) system found in Escherichia coli (E. coli) and related species. The PurR protein acts as a transcriptional regulator that can bind to the promoter region of genes involved in purine synthesis, controlling their transcription.

When intracellular purine levels are high, purine bases such as hypoxanthine and guanine bind to PurR as corepressors. This induces a conformational change that enables PurR to bind the purine operator sites located in the promoter regions of target genes, preventing RNA polymerase from binding and initiating transcription. The result is repression of purine synthesis gene expression, reducing the production of purine nucleotides when they are not needed. When intracellular purine levels fall, the corepressors dissociate from PurR, which can then no longer bind the operator sites. As a result, RNA polymerase can bind to the promoter regions and initiate transcription of the genes involved in purine synthesis, leading to increased production of purine nucleotides. The PurR system in bacteria is an example of negative transcriptional regulation, in which the binding of a repressor protein blocks transcription of target genes. This is a simple but effective mechanism by which bacteria control the production of purine nucleotides according to the availability of intracellular purines. It's important to note that while the PurR system is one example of transcriptional regulation of purine synthesis in bacteria, other bacteria may employ different mechanisms or additional regulatory proteins depending on their specific metabolic pathways and environmental conditions. Regulation of purine synthesis can also occur at other levels, such as post-transcriptional or post-translational regulation, particularly in more complex life forms.
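The repression logic reduces to a two-state switch, sketched below. The threshold value is an illustrative assumption; in E. coli the actual corepressors are the purine bases hypoxanthine and guanine, and binding is graded rather than a hard cutoff:

```python
COREPRESSOR_THRESHOLD = 1.0  # illustrative units, not a measured value

def pur_operon_transcribed(purine_level):
    """PurR occupies the operator (blocking transcription) only when
    purine corepressor levels are high enough to activate it."""
    purr_bound_to_operator = purine_level >= COREPRESSOR_THRESHOLD
    return not purr_bound_to_operator

high = pur_operon_transcribed(5.0)   # purines abundant: operon repressed
low = pur_operon_transcribed(0.1)    # purines scarce: operon transcribed
```

The switch inverts supply into demand: abundant purines shut the biosynthesis genes off, and scarcity turns them back on.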

The purine operon regulatory system

The purine operon regulatory system is a mechanism found in bacteria that controls the expression of genes involved in the biosynthesis of purine nucleotides. The regulatory system is typically composed of two main components: the PurR protein, which acts as a transcriptional repressor, and the purine-responsive element (PRE), which is the DNA sequence that interacts with PurR.

In the presence of sufficient intracellular purine nucleotides, PurR protein binds to the PRE in the promoter region of the purine operon genes, thereby preventing RNA polymerase from initiating transcription. This results in the downregulation or repression of the purine biosynthesis genes, leading to a decrease in the production of purine nucleotides. The mechanism by which PurR protein binds to the PRE in the promoter region of the purine operon genes and prevents RNA polymerase from initiating transcription is as follows:

PurR protein is typically present in an inactive form when intracellular purine nucleotide levels are sufficient. In this state, PurR protein is bound to purine nucleotides, which induces a conformational change that allows PurR to bind to the PRE. The PRE is a specific DNA sequence located in the promoter region of the purine operon genes. When bound to the PRE, PurR protein acts as a transcriptional repressor by physically blocking the binding of RNA polymerase to the promoter. This prevents RNA polymerase from initiating transcription of the downstream genes involved in purine biosynthesis. The binding of PurR protein to the PRE is mediated by the DNA-binding domain (DBD) of PurR, which contains a winged helix-turn-helix (HTH) motif that recognizes and binds to the specific DNA sequence in the PRE. The binding of PurR protein to the PRE is stabilized by the formation of a PurR-PRE complex, which involves multiple protein-DNA interactions. The specific interactions between PurR and the PRE prevent RNA polymerase from accessing the promoter region, leading to the repression of purine biosynthesis genes.
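Before the binding chemistry is examined in more detail, the overall on/off logic just described can be sketched as a toy function. The numeric threshold is an illustrative placeholder, not a measured constant:

```python
def transcription_active(purine_level, threshold=1.0):
    """Toy model of PurR-mediated negative regulation.

    When purines are abundant they bind PurR, which then occupies the
    operator and blocks RNA polymerase, so transcription of the purine
    synthesis genes is OFF. When purines are scarce, PurR releases the
    operator and transcription is ON. The threshold is an illustrative
    placeholder, not a measured value.
    """
    purr_occupies_operator = purine_level >= threshold
    return not purr_occupies_operator
```

With these placeholder units, scarce purines (level 0.2) leave the genes expressed, while abundant purines (level 5.0) leave them repressed.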

The binding of PurR protein to the PRE is stabilized by multiple protein-DNA interactions, which involve specific molecular contacts between PurR and the DNA in the PRE. These interactions typically occur between amino acid residues in the DNA-binding domain (DBD) of PurR and the nucleotide bases in the PRE. The precise details of these interactions may vary depending on the bacterial species and the specific sequence of the PRE, but the general principles are as follows:

Hydrogen bonding: The amino acid residues in the DBD of PurR form hydrogen bonds with the nucleotide bases in the PRE. For example, amino acid residues like arginine (Arg) and lysine (Lys) can form hydrogen bonds with the purine or pyrimidine bases in the PRE. These hydrogen bonds help to stabilize the PurR-PRE complex by creating specific molecular contacts between PurR and the DNA.

Van der Waals interactions: Van der Waals interactions, which are weak attractive forces between atoms, also contribute to the stability of the PurR-PRE complex. Amino acid residues in the DBD of PurR and the nucleotide bases in the PRE come into close proximity, allowing for van der Waals interactions between their atoms. These interactions help to hold the PurR protein in place on the DNA, enhancing the stability of the complex.

Electrostatic interactions: Electrostatic interactions, which are attractive forces between charged atoms or molecules, also play a role in stabilizing the PurR-PRE complex. Amino acid residues in the DBD of PurR may carry positive or negative charges, while the phosphate backbone of the DNA in the PRE is negatively charged. This results in electrostatic interactions between PurR and the DNA, contributing to the overall stability of the complex.

Shape complementarity: The DBD of PurR and the PRE in the DNA also exhibit shape complementarity, where the shape of the protein fits precisely into the major and minor grooves of the DNA. This shape complementarity allows for optimal molecular contacts between PurR and the DNA, enhancing the stability of the PurR-PRE complex.
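The four interaction types listed above are often pictured as roughly additive contributions to the stability of the complex. The sketch below is a toy additive model; the per-contact energy values are invented for illustration and are not measurements:

```python
def binding_energy(n_hbonds, n_vdw, n_electrostatic, shape_fit):
    """Toy additive estimate of PurR-PRE binding energy (kcal/mol).

    All per-contact values are illustrative placeholders: a hydrogen
    bond or a salt bridge contributes a few kcal/mol, a van der Waals
    contact a fraction of that, and good shape complementarity is
    modeled as a single bulk bonus term. More negative = more stable.
    """
    E_HBOND, E_VDW, E_ELEC, E_SHAPE = -1.5, -0.3, -2.0, -3.0
    return (n_hbonds * E_HBOND + n_vdw * E_VDW
            + n_electrostatic * E_ELEC + (E_SHAPE if shape_fit else 0.0))
```

The point of the sketch is only that many simultaneous weak contacts add up: a complex with four hydrogen bonds, ten van der Waals contacts, two electrostatic contacts, and good shape fit is far more stable than one held by a single contact.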

When intracellular levels of purine nucleotides are low, purine biosynthesis needs to be upregulated to meet cellular demands. In this case, purine nucleotides dissociate from the PurR protein, and the resulting conformational change reduces PurR's affinity for the PRE. As a result, PurR is released from the PRE, allowing RNA polymerase to bind to the promoter and initiate transcription of the purine operon genes, leading to an increase in purine nucleotide biosynthesis. The purine operon regulatory system thus provides a feedback mechanism that helps maintain appropriate levels of purine nucleotides in the cell, ensuring that the cell has enough purines for vital cellular processes while preventing excessive accumulation of purines, which can be toxic. It allows bacteria to tightly regulate the expression of purine biosynthesis genes in response to intracellular purine levels, helping to maintain cellular homeostasis.
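The feedback behavior described above can be illustrated with a minimal simulation. Every rate constant below is invented for illustration; the only point is that synthesis repressed in proportion to the purine pool settles at a steady state where production balances demand:

```python
def simulate_purine_feedback(steps=200, demand=0.5, dt=0.1):
    """Toy dynamics of the purine operon feedback loop.

    The fraction of PurR in its active (purine-bound) form is modeled
    as simple saturation, purine / (purine + K); synthesis is scaled
    down by that fraction, while a constant demand consumes purines.
    All constants are illustrative. Returns the purine-pool trajectory.
    """
    purine = 0.0
    k_max, K = 2.0, 1.0  # maximal synthesis rate, repression constant
    trajectory = []
    for _ in range(steps):
        repression = purine / (purine + K)       # fraction of PurR active
        synthesis = k_max * (1.0 - repression)   # repressed synthesis
        purine = max(0.0, purine + dt * (synthesis - demand))
        trajectory.append(purine)
    return trajectory
```

Starting from an empty pool, the level rises and then flattens out near the point where repressed synthesis matches demand (purine level approaching 3 with these numbers), without any external controller: the feedback loop itself supplies the homeostasis.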

The promoter regions are regions of DNA that are located upstream of the coding regions of genes and contain specific DNA sequences that are recognized by regulatory proteins, also known as transcription factors.

The PurR protein

PurR is a transcriptional repressor protein found in bacteria that regulates the expression of genes involved in the biosynthesis of purine nucleotides. It is part of the purine operon regulatory system, which controls the production of enzymes required for the synthesis of purine nucleotides. The smallest version of PurR protein is typically referred to as the "core" PurR protein, which consists of the DNA-binding domain (DBD) and the helical dimerization domain (HDD). The DBD is responsible for binding to specific DNA sequences in the purine operon promoter region, while the HDD facilitates dimerization of PurR protein. The size of the smallest version of PurR protein varies among different bacterial species, but it typically contains around 90-100 amino acids. For example, in Escherichia coli (E. coli), the core PurR protein is 89 amino acids in length.

Post-transcriptional regulation of purine biosynthesis

Post-transcriptional regulation of purine biosynthesis refers to the regulatory mechanisms that occur after transcription, the process of synthesizing RNA from DNA, in the pathway responsible for producing purine nucleotides. These mechanisms play a crucial role in fine-tuning the expression of genes involved in purine biosynthesis, allowing cells to efficiently modulate purine nucleotide production in response to changing cellular conditions.

There are several post-transcriptional regulatory mechanisms involved in purine biosynthesis, including:

RNA degradation: The stability of mRNA molecules, which carry the genetic information from DNA to synthesize proteins, can be regulated by various factors, including RNA-binding proteins and small regulatory RNAs. These factors can bind to specific regions of mRNA molecules involved in purine biosynthesis and either promote their degradation or protect them from degradation, thus controlling their abundance in the cell.

Alternative splicing: In some cases, the same mRNA molecule can give rise to multiple protein isoforms through a process called alternative splicing. Alternative splicing involves the selective inclusion or exclusion of specific exons, which are the coding regions of genes, in the final mRNA molecule. This can result in the production of different protein isoforms with distinct functions or regulatory properties. Alternative splicing can occur in genes involved in purine biosynthesis, leading to the production of different protein isoforms that may have differential activity or stability.

RNA editing: RNA molecules can also undergo post-transcriptional modifications through a process called RNA editing. RNA editing involves the alteration of specific nucleotide residues in the mRNA molecule, resulting in changes in the encoded protein's amino acid sequence. RNA editing can affect genes involved in purine biosynthesis, leading to changes in the function or activity of the encoded proteins.

Riboswitches: Riboswitches are regulatory elements found in the untranslated regions (UTRs) of mRNA molecules that can undergo conformational changes in response to binding of specific metabolites or ligands. These conformational changes can affect mRNA stability, translation efficiency, or splicing, thus regulating gene expression. Riboswitches have been identified in some genes involved in purine biosynthesis, and they play a role in regulating their expression in response to cellular purine nucleotide levels.

These post-transcriptional regulatory mechanisms work in concert with transcriptional regulation, including the PurR-mediated regulation of the purine operon, to tightly control purine biosynthesis and maintain purine homeostasis in cells. They allow cells to fine-tune the expression of genes involved in purine biosynthesis in response to changing cellular conditions, ensuring efficient production of purine nucleotides for cellular processes while avoiding excessive accumulation or wasteful utilization of resources.

The post-transcriptional regulation of purine biosynthesis, along with transcriptional regulation, is coordinated through information exchange within the cell. Different regulatory elements, such as RNA-binding proteins, small regulatory RNAs, riboswitches, and other factors, interact with specific regions of mRNA molecules involved in purine biosynthesis, and these interactions convey regulatory information that determines the fate of the mRNA molecules. For example, RNA-binding proteins and small regulatory RNAs can bind to specific regions of mRNA molecules and influence their stability, translation efficiency, or splicing, depending on the cellular conditions. This information exchange allows the cell to modulate the abundance of mRNA molecules and, consequently, the levels of the encoded proteins involved in purine biosynthesis.

Similarly, riboswitches, which are regulatory elements located in the UTRs of mRNA molecules, can undergo conformational changes in response to binding of specific metabolites or ligands. These conformational changes convey information about cellular purine nucleotide levels and can affect mRNA stability, translation efficiency, or splicing, ultimately regulating gene expression. In coordination with transcriptional regulation, these post-transcriptional mechanisms allow cells to fine-tune the expression of genes involved in purine biosynthesis in response to changing cellular conditions. This information exchange ensures that the production of purine nucleotides is tightly controlled and optimized for cellular needs, helping to maintain purine homeostasis in the cell.

The "code" involved in the information exchange in post-transcriptional regulation of purine biosynthesis is mediated by specific sequences and structures in the mRNA molecules and regulatory factors, such as RNA-binding proteins, small regulatory RNAs, and riboswitches, which determine the outcome of regulation and convey information about the cellular conditions that influence purine homeostasis. Here's an overview of how these actors interact in the post-transcriptional regulation of purine biosynthesis:

RNA-binding proteins: RNA-binding proteins are proteins that specifically recognize and bind to specific RNA sequences or structures in mRNA molecules. In the context of purine biosynthesis regulation, RNA-binding proteins may bind to specific mRNA molecules involved in purine biosynthesis and affect their stability, translation efficiency, or splicing. For example, RNA-binding proteins may bind to the 5' or 3' untranslated regions (UTRs) of purine biosynthesis mRNA molecules, which can affect their stability and translation efficiency. The binding of RNA-binding proteins can be influenced by the cellular levels of purine nucleotides, which serves as a form of communication between the purine nucleotide levels and gene expression.

Small regulatory RNAs: Small regulatory RNAs are short RNA molecules that can specifically base pair with complementary regions in mRNA molecules, leading to gene regulation. In the context of purine biosynthesis regulation, small regulatory RNAs may base pair with specific mRNA molecules involved in purine biosynthesis and affect their translational efficiency or stability. The small regulatory RNAs can be produced in response to changes in cellular purine nucleotide levels or other signaling cues, and their base pairing with target mRNA molecules conveys information about the cellular conditions and regulates gene expression accordingly.

Riboswitches: Riboswitches are specific RNA sequences and structures that can change conformation in response to binding of specific metabolites or ligands. In the context of purine biosynthesis regulation, riboswitches may be present in the 5' UTR of mRNA molecules involved in purine biosynthesis and can change conformation upon binding of purine nucleotides. This conformational change can affect mRNA stability, translation efficiency, or splicing, and serves as a form of communication between the cellular purine nucleotide levels and gene expression.
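A purine-sensing riboswitch of this kind can be caricatured as a two-state, single-site binding equilibrium. In the sketch below, the dissociation constant is an arbitrary placeholder, and ligand binding is assumed to switch the mRNA into a non-expressing conformation (an "OFF" switch; "ON" riboswitches also exist):

```python
def bound_fraction(ligand, kd=1.0):
    """Fraction of riboswitch-bearing mRNAs in the ligand-bound
    conformation, from one-site binding: L / (L + Kd).
    Kd is an illustrative placeholder, not a measured constant."""
    return ligand / (ligand + kd)

def expression_level(ligand, kd=1.0):
    """Relative expression for an OFF switch: only mRNAs in the
    unbound conformation are expressed."""
    return 1.0 - bound_fraction(ligand, kd)
```

Expression falls smoothly as the metabolite accumulates: full expression with no ligand, half-maximal when the ligand concentration equals Kd, and nearly silent at high ligand.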

Interdependence of the complex regulatory network

The various actors involved in the post-transcriptional regulation of purine biosynthesis, including RNA-binding proteins, small regulatory RNAs, and riboswitches, are interdependent and form a complex regulatory network that is irreducible, meaning that the removal of any one of these actors would disrupt the regulatory system. Here's an outline of how these actors are interdependent and irreducible in the context of purine biosynthesis regulation:

RNA-binding proteins: RNA-binding proteins specifically bind to mRNA molecules involved in purine biosynthesis and can affect their stability, translation efficiency, or splicing. The binding of RNA-binding proteins is often influenced by the cellular levels of purine nucleotides or other signaling cues. Removal of RNA-binding proteins would result in loss of their regulatory function and disruption of the post-transcriptional regulation of purine biosynthesis.

Small regulatory RNAs: Small regulatory RNAs can specifically base pair with complementary regions in mRNA molecules and affect their translational efficiency or stability. These small regulatory RNAs are often produced in response to changes in cellular purine nucleotide levels or other signaling cues. Removal of small regulatory RNAs would result in loss of their base pairing and regulatory function, disrupting the post-transcriptional regulation of purine biosynthesis.

Riboswitches: Riboswitches are specific RNA sequences and structures that can change conformation in response to binding of specific metabolites or ligands, such as purine nucleotides. This conformational change can affect mRNA stability, translation efficiency, or splicing. Removal of riboswitches would result in loss of their conformational switching ability and regulatory function, disrupting the post-transcriptional regulation of purine biosynthesis.

Overall, the various actors involved in the post-transcriptional regulation of purine biosynthesis, including RNA-binding proteins, small regulatory RNAs, and riboswitches, are interdependent and form a complex regulatory network. Each plays a crucial role in the regulation of purine biosynthesis, and removing any one would disrupt the regulatory system, making it irreducible. This highlights the importance of the interplay between these actors in coordinating regulation at the post-transcriptional level. The individual players, such as RNA-binding proteins, small regulatory RNAs, and riboswitches, typically do not function effectively on their own; their regulatory functions depend on their interactions with other molecules and components within the cellular environment.

For example, RNA-binding proteins require specific binding sites on mRNA molecules and other factors for their regulatory function. Small regulatory RNAs typically require complementary base pairing with target mRNA molecules to exert their regulatory effects. Riboswitches require binding of specific metabolites or ligands to undergo conformational changes and regulate mRNA stability, translation, or splicing. These interactions and dependencies allow these regulatory molecules to function effectively in coordinating the post-transcriptional regulation of purine biosynthesis. Without the appropriate interactions and dependencies, these individual players may not be able to effectively regulate purine biosynthesis or perform their regulatory functions. Therefore, the interdependence of these regulatory molecules is essential for the proper functioning of the post-transcriptional regulation of purine biosynthesis in the cell. The emergence of an integrated system for post-transcriptional regulation of purine biosynthesis through unguided means, such as evolution, could indeed pose challenges in terms of intermediate steps that may not confer a functional advantage. It is a complex process that likely requires multiple components that would have to evolve in a coordinated manner to confer a selective advantage.
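The irreducibility claim above can be made concrete as a toy knockout experiment. Note what the code does and does not show: it encodes, as an explicit assumption, the premise that all three regulatory layers are required, and then confirms that under this premise removing any one layer leaves the system unregulated. It illustrates the argument rather than proving it:

```python
# Encodes the text's premise (an assumption, not a measurement) that
# all three post-transcriptional layers are strictly required.
REQUIRED_LAYERS = {"rna_binding_protein", "small_rna", "riboswitch"}

def regulatory_state(components, purine_level):
    """Toy model: with all layers present, synthesis responds to the
    purine level; with any layer knocked out, it does not."""
    if not REQUIRED_LAYERS.issubset(components):
        return "unregulated"
    return "repressed" if purine_level > 1.0 else "active"
```

With the full set of components the output tracks the purine level; knocking out any single layer yields "unregulated" for every input, which is the toy analogue of the disruption described above.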


Interdependence Points to Design

In the world of cells, a network complex,
Regulating purines, with interwoven threads.
From RNA-binding proteins to small RNAs,
And riboswitches, dance in the elegant spread.

Each actor plays a role, unique and fine,
Dependent on others, they intertwine.
RNA-binding proteins, with binding sites specific,
Stabilizing mRNAs, making them prolific.

Small RNAs, with base pairing prowess,
Efficiently tweaking mRNAs, with finesse.
Responding to cues, like nucleotide levels,
Or other signals, that the cell unravels.

Riboswitches, like molecular switches,
Changing conformation, as the cell wishes.
Metabolite binding, causing a change,
In mRNA regulation, they rearrange.

Irreducible, this network of dance,
The removal of one would disrupt the chance,
Of proper regulation, in purine biosynthesis,
A system so intricate, it's hard to dismiss.

Interdependence points to design,
In this regulatory network, so refined.
Each actor is essential, in its own way,
Working together, day by day.

Evolution's path, complex and long,
Coordinating components, can't be wrong.
Design in every step, we see,
In the interdependence, of this regulatory decree.

So marvel at the complexity, of this dance,
In the world of cells, where interdependence enhances,
The regulation of purine biosynthesis,
A testament to design, a masterpiece of life's kiss.

Purine biosynthesis regulation at the translational level

Purine biosynthesis regulation at the translational level involves mechanisms that control the translation of mRNA molecules encoding enzymes involved in purine biosynthesis. These mechanisms can impact the production of these enzymes and thereby regulate the overall rate of purine biosynthesis in a cell. One common mechanism of translational regulation in purine biosynthesis involves the binding of small regulatory RNAs or RNA-binding proteins to the mRNA molecules encoding the enzymes involved in purine biosynthesis. These regulatory RNAs or proteins can interact with specific regions of the mRNA molecules, such as the 5' untranslated region (UTR) or the coding sequence, and modulate translation initiation or elongation, leading to changes in protein production. For example, some small regulatory RNAs called riboswitches can directly bind to mRNA molecules and undergo conformational changes in response to changing intracellular purine levels. These conformational changes can either promote or inhibit translation initiation, depending on the specific riboswitch and the intracellular purine levels. This allows the cell to tightly regulate the production of purine biosynthesis enzymes based on the cellular purine levels. RNA-binding proteins can also play a role in translational regulation of purine biosynthesis. They can bind to specific regions of the mRNA molecules and either enhance or inhibit translation initiation or elongation, depending on the binding protein and its regulatory role.

Purine biosynthesis regulation at the post-translational level

Purine biosynthesis regulation at the post-translational level involves mechanisms that control the activity or stability of enzymes involved in purine biosynthesis after they have been translated and synthesized into functional proteins. These mechanisms can impact the function or abundance of these enzymes, leading to changes in purine biosynthesis rates. One common mechanism of post-translational regulation in purine biosynthesis involves protein modification, such as phosphorylation, acetylation, or ubiquitination. These modifications can occur on specific amino acid residues of the enzymes involved in purine biosynthesis and can alter their activity, stability, or protein-protein interactions. For example, phosphorylation is a common post-translational modification that can regulate the activity of enzymes involved in purine biosynthesis. Phosphorylation can either activate or inhibit the activity of these enzymes, depending on the specific enzyme and the site of phosphorylation. Protein kinases are enzymes that add phosphate groups to specific amino acids, and protein phosphatases are enzymes that remove phosphate groups, thus controlling the phosphorylation status of proteins involved in purine biosynthesis.
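For the simplest possible kinase/phosphatase cycle, with first-order (non-saturated) kinetics, the steady-state phosphorylated fraction has a closed form. The rate constants below are illustrative, not measured values:

```python
def phospho_fraction(k_kinase, k_phosphatase):
    """Steady state of dP/dt = k_kinase*(1 - P) - k_phosphatase*P = 0,
    i.e. P = k_kinase / (k_kinase + k_phosphatase), for a first-order,
    non-saturated kinase/phosphatase cycle (a toy simplification)."""
    return k_kinase / (k_kinase + k_phosphatase)

def enzyme_activity(k_kinase, k_phosphatase, activated_by_phosphorylation):
    """Toy activity of a target enzyme whose activity tracks its
    phosphorylation state; as noted above, either polarity occurs."""
    p = phospho_fraction(k_kinase, k_phosphatase)
    return p if activated_by_phosphorylation else 1.0 - p
```

Shifting the kinase/phosphatase balance tunes the target smoothly: equal rates give half-maximal phosphorylation, and a ninefold kinase excess gives 90%.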

Another example is protein degradation, which can regulate the stability of enzymes involved in purine biosynthesis. Ubiquitination is a common post-translational modification that targets proteins for degradation by the proteasome, the cell's proteolytic machinery. Ubiquitin ligases are enzymes that add ubiquitin moieties to proteins, marking them for degradation, while deubiquitinases remove those moieties. Ubiquitination can affect the stability and turnover rate of enzymes involved in purine biosynthesis, thereby regulating their abundance and activity.

In addition, post-translational regulation of purine biosynthesis can involve protein-protein interactions or conformational changes that modulate the activity or localization of the enzymes. For example, the formation of protein complexes, or the binding of regulatory proteins to enzymes involved in purine biosynthesis, can influence their activity or localization and thereby regulate purine biosynthesis. Overall, post-translational regulation of purine biosynthesis involves a complex interplay of protein modifications, protein-protein interactions, and conformational changes that modulate the activity, stability, and localization of these enzymes, leading to fine-tuning of purine homeostasis in the cell.
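The effect of ubiquitin-targeted degradation on enzyme abundance follows from elementary steady-state bookkeeping. The rates below are illustrative numbers only:

```python
def steady_state_abundance(k_synthesis, k_degradation):
    """For d[E]/dt = k_synthesis - k_degradation*[E], the steady-state
    abundance is k_synthesis / k_degradation: raising the degradation
    rate (e.g. by ubiquitination) lowers the enzyme level."""
    return k_synthesis / k_degradation
```

With a synthesis rate of 10 units and a basal degradation rate of 1, the enzyme sits at 10 units; ubiquitination that doubles the degradation rate halves the steady-state level to 5.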

Protein Kinases

Protein kinases are enzymes that catalyze the transfer of phosphate groups from ATP (adenosine triphosphate) or other phosphate donors to specific amino acid residues on target proteins, including enzymes involved in purine biosynthesis. Phosphorylation of these enzymes can regulate their activity, stability, localization, and protein-protein interactions, thereby influencing purine biosynthesis. In purine biosynthesis, protein kinases can phosphorylate enzymes at specific amino acid residues to either activate or inhibit their activity. For example, in the de novo purine biosynthesis pathway, the enzyme phosphoribosyl pyrophosphate (PRPP) synthetase, which catalyzes the first committed step in purine biosynthesis, can be phosphorylated by protein kinases such as AMP-activated protein kinase (AMPK) or protein kinase C (PKC) at specific serine or threonine residues. Phosphorylation of PRPP synthetase can regulate its enzymatic activity, influencing the rate of PRPP production, which in turn affects the rate of purine nucleotide synthesis. Similarly, other enzymes involved in purine biosynthesis, such as adenylosuccinate synthetase, adenylosuccinate lyase, and IMP dehydrogenase, can also be phosphorylated by protein kinases, which can modulate their activity, stability, or interactions with other proteins. The specific effects of phosphorylation on these enzymes can vary depending on the enzyme and the site of phosphorylation, and can either stimulate or inhibit their enzymatic activity. Protein kinases involved in phosphorylation of enzymes in purine biosynthesis are regulated themselves through various mechanisms, including changes in cellular energy status, cellular stress, or signaling pathways. For example, AMPK, which phosphorylates PRPP synthetase, is activated by an increase in cellular AMP-to-ATP ratio, indicating low cellular energy status. 
Other protein kinases involved in purine biosynthesis regulation may be activated by specific signaling pathways or cellular cues that are relevant to the metabolic state or physiological conditions of the cell. Overall, phosphorylation by protein kinases is an important post-translational mechanism that can regulate the activity of enzymes involved in purine biosynthesis, contributing to the fine-tuning of purine homeostasis in the cell.
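AMPK's sensitivity to cellular energy status can be sketched as a sigmoidal (Hill-type) response to the AMP/ATP ratio. The half-activation constant and Hill coefficient below are invented placeholders, not measured parameters:

```python
def ampk_activity(amp, atp, ratio_half=0.05, hill=2.0):
    """Toy Hill-function model of AMPK activation by the AMP/ATP
    ratio: a high ratio (low energy charge) switches AMPK on.
    ratio_half and hill are illustrative placeholders."""
    ratio = amp / atp
    return ratio**hill / (ratio**hill + ratio_half**hill)
```

An energy-replete cell (AMP/ATP near 0.001) leaves AMPK essentially off in this sketch, while energy stress (ratio near 0.5) switches it nearly fully on, and activity rises monotonically in between.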

How important is fine-tuning of cellular homeostasis?

Homeostasis, or the ability of a cell or organism to maintain a stable internal environment despite external fluctuations, is crucial for the proper functioning of biological systems. Fine-tuning of homeostasis, including purine biosynthesis homeostasis, is essential for maintaining cellular health and function.

Purine nucleotides are essential building blocks for DNA, RNA, and ATP, which are critical for various cellular processes such as DNA replication, RNA transcription, protein synthesis, and energy metabolism. Proper regulation of purine biosynthesis is necessary to ensure an adequate supply of purine nucleotides for cellular processes while avoiding an excess that could lead to toxicity or imbalance in nucleotide pools.  Fine-tuning of purine biosynthesis ensures that the cell can respond to changing metabolic demands, energy status, and other physiological cues to maintain optimal purine nucleotide levels for cellular function. Imbalances in purine homeostasis can have detrimental effects on cellular function and contribute to various diseases. The fine-tuning of purine homeostasis through various regulatory mechanisms, including post-translational regulation, is critical for maintaining cellular health and function.

The enzymes for Adenine synthesis

1. Adenylosuccinate synthase 
2. Adenylosuccinase (adenylosuccinate lyase)

The enzymes for Guanine synthesis

1. IMP dehydrogenase
2. GMP synthase 






Last edited by Otangelo on Sat Apr 08, 2023 7:47 am; edited 19 times in total


Message 10: Re: Refuting Darwin, confirming design (Thu Apr 06, 2023 6:19 pm)


The metabolic pathway for the de novo biosynthesis of IMP. 
Here the purine residue is built up on a ribose ring in 11 enzyme-catalyzed reactions. The X-ray structures for all the enzymes are shown to the outside of the corresponding reaction arrow. The peptide chains of monomeric enzymes are colored in rainbow order from N-terminus (blue) to C-terminus (red). The oligomeric enzymes, all of which consist of identical polypeptide chains, are viewed along a rotation axis with their various chains differently colored. Bound ligands are shown in space-filling form with C green, N blue, O red, and P orange. [PDB IDs: enzyme 1, 1DKU; enzyme 2, 1AO0; enzyme 3, 1GSO; enzyme 4, 1CDE; enzyme 5, 1VK3; enzyme 6, 1CLI; enzyme 7, 1D7A (PurE) and 1B6S (PurK); enzyme 8, 1A48; enzyme 9, 1C3U; enzymes 10 and 11, 1G8M.] 

1. Activation of ribose-5-phosphate. 
The starting material for purine biosynthesis is α-D-ribose-5-phosphate, a product of the pentose phosphate pathway. In the first step of purine biosynthesis, ribose phosphate pyrophosphokinase activates the ribose by reacting it with ATP to form 5-phosphoribosyl-α-pyrophosphate (PRPP). This compound is also a precursor in the biosynthesis of pyrimidine nucleotides and the amino acids histidine and tryptophan. As is expected for an enzyme at such an important biosynthetic crossroads, the activity of ribose phosphate pyrophosphokinase is precisely regulated.

2. Acquisition of purine atom N9. 
In the first reaction unique to purine biosynthesis, amidophosphoribosyl transferase catalyzes the displacement of PRPP’s pyrophosphate group by glutamine’s amide nitrogen. The reaction occurs with the inversion of the α configuration at C1 of PRPP, thereby forming β-5-phosphoribosylamine and establishing the anomeric form of the future nucleotide. The reaction, which is driven to completion by the subsequent hydrolysis of the released PPi, is the pathway’s flux-controlling step.

3. Acquisition of purine atoms C4, C5, and N7.
Glycine’s carboxyl group forms an amide with the amino group of phosphoribosylamine, yielding glycinamide ribotide (GAR). This reaction is reversible, despite its concomitant hydrolysis of ATP to ADP + Pi. It is the only step of the purine biosynthetic pathway in which more than one purine ring atom is acquired.

4. Acquisition of purine atom C8. 
GAR’s free α-amino group is formylated to yield formylglycinamide ribotide (FGAR). The formyl donor in the reaction is N10-formyltetrahydrofolate (N10-formyl-THF), a coenzyme that transfers C1 units (THF cofactors). The X-ray structure of the enzyme catalyzing the reaction, GAR transformylase, in complex with GAR and the THF analog 5-deazatetrahydrofolate (5dTHF) was determined by Robert Almassy. Note the proximity of the GAR amino group to N10 of 5dTHF. This supports enzymatic studies suggesting that the GAR transformylase reaction proceeds via the nucleophilic attack of the GAR amine group on the formyl carbon of N10-formyl-THF to yield a tetrahedral intermediate.

5. Acquisition of purine atom N3.
The amide amino group of a second glutamine is transferred to the growing purine ring to form formylglycinamidine ribotide (FGAM). This reaction is driven by the coupled hydrolysis of ATP to ADP + Pi.

6. Formation of the purine imidazole ring.
The purine imidazole ring is closed in an ATP-requiring intramolecular condensation that yields 5-aminoimidazole ribotide (AIR). The aromatization of the imidazole ring is facilitated by the tautomeric shift of the reactant from its imine to its enamine form.

7. Acquisition of C6.
Purine C6 is introduced as HCO3− (CO2) in a reaction catalyzed by AIR carboxylase that yields carboxyaminoimidazole ribotide (CAIR). In yeast, plants, and most prokaryotes (including E. coli), AIR carboxylase consists of two proteins called PurE and PurK. Although PurE alone can catalyze the carboxylation reaction, its KM for HCO3− is ∼110 mM, so the reaction would require an unphysiologically high HCO3− concentration (∼100 mM) to proceed. PurK decreases the HCO3− concentration required for the PurE reaction by >1000-fold, but at the expense of ATP hydrolysis.

8. Acquisition of N1. 
Purine atom N1 is contributed by aspartate in an amide-forming condensation reaction yielding 5-aminoimidazole-4-(N-succinylocarboxamide) ribotide (SACAIR). This reaction, which is driven by the hydrolysis of ATP, chemically resembles Reaction 3.

9. Elimination of fumarate. 
SACAIR is cleaved with the release of fumarate, yielding 5-aminoimidazole-4-carboxamide ribotide (AICAR). Reactions 8 and 9 chemically resemble the reactions in the urea cycle in which citrulline is aminated to form arginine. In both pathways, aspartate’s amino group is transferred to an acceptor through an ATP-driven coupling reaction followed by the elimination of the aspartate carbon skeleton as fumarate.

10. Acquisition of C2. 
The final purine ring atom is acquired through formylation by N10-formyl-THF, yielding 5-formaminoimidazole-4-carboxamide ribotide (FAICAR). This reaction and Reaction 4 of purine biosynthesis are inhibited indirectly by sulfonamides, structural analogs of the p-aminobenzoic acid constituent of THF.

11. Cyclization to form IMP. 
The final reaction in the purine biosynthetic pathway, ring closure to form IMP, occurs through the elimination of water. In contrast to Reaction 6, the cyclization that forms the imidazole ring, this reaction does not require ATP hydrolysis.
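The eleven reactions above can be condensed into a small table in code, which makes bookkeeping questions easy to check, for example which steps the text describes as ATP-consuming (reactions 1, 3, 5, 6, 7, and 8):

```python
# (step, product abbreviation, consumes ATP according to the text above)
IMP_PATHWAY = [
    (1,  "PRPP",   True),   # ribose-5-phosphate activation
    (2,  "PRA",    False),  # driven by hydrolysis of the released PPi
    (3,  "GAR",    True),   # glycine addition
    (4,  "FGAR",   False),  # formyl donor is N10-formyl-THF
    (5,  "FGAM",   True),   # second glutamine amide transfer
    (6,  "AIR",    True),   # imidazole ring closure
    (7,  "CAIR",   True),   # ATP consumed by the PurK step
    (8,  "SACAIR", True),   # aspartate condensation
    (9,  "AICAR",  False),  # fumarate elimination
    (10, "FAICAR", False),  # formylation by N10-formyl-THF
    (11, "IMP",    False),  # ring closure by elimination of water
]

def atp_consuming_steps(pathway=IMP_PATHWAY):
    """Step numbers whose reactions consume ATP, per the annotations."""
    return [step for step, _, uses_atp in pathway if uses_atp]
```

The table is only a summary of the narrative above, but it makes the energetic cost of building a single IMP molecule easy to tally.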

In animals, Reactions 10 and 11 are catalyzed by a bifunctional enzyme, as are Reactions 7 and 8. Reactions 3, 4, and 6 also take place on a single protein. The intermediate products of these multifunctional enzymes are not readily released to the medium but are channeled to the succeeding enzymatic activities of the pathway. As in the reactions catalyzed by the pyruvate dehydrogenase complex, fatty acid synthase, bacterial glutamate synthase, and tryptophan synthase, channeling in the nucleotide synthetic pathways increases the overall rate of these multistep processes and protects intermediates from degradation by other cellular enzymes.49
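The quantitative benefit of channeling can be illustrated with a toy yield calculation: if some fraction of each freely released intermediate were degraded before reaching the next enzyme, the losses would compound multiplicatively across the pathway's steps. The 30% leak rate below is purely illustrative:

```python
def pathway_yield(n_steps, leak_per_step):
    """Toy model: fraction of starting material reaching the final
    product when a fixed fraction of each free intermediate is lost
    (e.g. degraded by other cellular enzymes) at every hand-off.
    Yield = (1 - leak_per_step) ** n_steps."""
    return (1.0 - leak_per_step) ** n_steps
```

For an 11-step pathway, perfect channeling (zero leak) delivers 100% of the flux, whereas losing 30% of each free intermediate would pass less than 2% of the starting material through to the end product.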

Comment: How could these multifunctional enzymes, which must avoid leaking their intermediate products to the surroundings, have emerged in a gradualistic manner, if leaking would lead to degradation and, eventually, the death of the "protocell"? Let's not forget: these metabolic pathways are life-essential and had to emerge somehow before life could start. What I see here is evidence of exquisite design, planned with foresight and intention.

https://reasonandscience.catsboard.com

11. Re: Refuting Darwin, confirming design (Thu Apr 06, 2023 6:21 pm)

Otangelo


Admin

1. Glycinamide ribotide (GAR) transformylase (GART)

2. Formylglycinamide ribotide (FGAR) amidotransferase (GART)

3. Formylglycinamidine ribotide (FGAM) synthetase (GART)

4. 5-aminoimidazole ribotide (AIR) carboxylase (PurK)

5. 5-aminoimidazole-4-(N-succinylocarboxamide) ribotide (SACAIR) synthetase (PurE)

6. Carboxyaminoimidazole ribotide (CAIR) mutase (PurK)

7. 5-aminoimidazole-4-carboxamide ribotide (AICAR) transformylase (PurN)

8. 5-formaminoimidazole-4-carboxamide ribotide (FAICAR) cyclase (PurM)

9. IMP cyclohydrolase (PurH)



1. Ribose-phosphate diphosphokinase

Ribose-phosphate diphosphokinase catalyzes the conversion of ribose-5-phosphate (R5P) and ATP (adenosine triphosphate) into phosphoribosyl pyrophosphate (PRPP) and AMP (adenosine monophosphate), transferring the β,γ-pyrophosphoryl group of ATP to C1 of the ribose. The enzyme is typically an oligomer of identical subunits, each with its own active site where the catalytic activity takes place. The minimal bacterial isoform of ribose-phosphate diphosphokinase is a small protein of approximately 150-200 amino acids, although the exact size varies with the bacterial species. Each subunit is a single polypeptide chain folded into a three-dimensional structure, with specific regions or domains responsible for catalytic activity and substrate binding.

The amino acid sequence of ribose-phosphate diphosphokinase can vary among different bacterial species, but it typically contains conserved regions that are important for its function. These regions may include ATP binding sites, R5P binding sites, and catalytic residues that are involved in the enzymatic reaction. Ribose-phosphate diphosphokinase is an important enzyme in nucleotide metabolism and is found in both prokaryotic and eukaryotic organisms. It plays a crucial role in the biosynthesis of nucleotides, which are essential for DNA and RNA synthesis, energy metabolism, and various cellular processes. The specific structure and function of ribose-phosphate diphosphokinase may vary among different organisms, but its overall role in nucleotide metabolism is conserved across species.
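To make the idea of "conserved regions" such as ATP-binding sites concrete, here is a toy scan for a Walker-A-like nucleotide-binding motif (G-x(4)-G-K-[S/T], a pattern common in ATP-binding proteins generally). The sequence below is invented for illustration and is not a real PRPP synthetase sequence:

```python
import re

# Walker-A-like nucleotide-binding motif: G, any 4 residues, G, K, then S or T.
WALKER_A = re.compile(r"G.{4}GK[ST]")

# Invented toy sequence (hypothetical, not a real enzyme) containing one motif:
sequence = "MKTAYIAGESGSGKSTLVERLLAK"

match = WALKER_A.search(sequence)
print(match.group() if match else "no motif found")  # GESGSGKS
```

Real conserved-region detection uses alignment profiles across many homologs, but a regular-expression scan shows the basic logic of spotting a shared functional pattern in varying sequences.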

Ribose-phosphate diphosphokinase (RPK), also known as PRPP synthetase, plays a critical role in nucleotide biosynthesis, which is essential for many cellular processes including DNA and RNA synthesis. If a cell lacks RPK or has impaired RPK activity, the resulting deficiency of PRPP impairs nucleotide biosynthesis and the other metabolic pathways that depend on PRPP as a precursor. This can disrupt cellular processes that require nucleotides, such as DNA and RNA synthesis, and can ultimately lead to cell death or severe cellular dysfunction.

Additionally, RPK has been found to be important for the regulation of cellular metabolism, cell proliferation, and response to stress and other environmental cues. Dysfunction or absence of RPK can have far-reaching effects on cellular metabolism and physiology, beyond nucleotide biosynthesis.

The activity of ribose-phosphate diphosphokinase, like other enzymes, depends on several factors, including:

Co-factors or co-enzymes: Ribose-phosphate diphosphokinase may require specific co-factors or co-enzymes for its activity. These are small molecules that are necessary for the enzyme to function properly. For example, ribose-phosphate diphosphokinase may require magnesium ions (Mg2+) as a co-factor for its enzymatic activity.

Protein-protein interactions: Ribose-phosphate diphosphokinase may interact with other proteins or enzymes in the cellular pathway or metabolic network in which it operates. These interactions can modulate its activity or regulation.

Post-translational modifications: Ribose-phosphate diphosphokinase or its isoforms may undergo post-translational modifications, such as phosphorylation, acetylation, or methylation, which can affect its activity, stability, or localization.

Genetic regulation: The expression and activity of ribose-phosphate diphosphokinase can be regulated at the genetic level. Transcription factors, regulatory proteins, or other cellular processes can modulate the enzyme's expression or activity.

Ribose-phosphate diphosphokinase requires inorganic cofactors for its activity:

Magnesium ions (Mg2+): Magnesium ions are essential for the catalytic activity of ribose-phosphate diphosphokinase. They stabilize the enzyme's active site and facilitate the transfer of the pyrophosphoryl group from ATP to R5P during the enzymatic reaction.

Inorganic phosphate (Pi): Many ribose-phosphate diphosphokinases also require inorganic phosphate as an allosteric activator. Note that the pyrophosphoryl group incorporated into PRPP comes from ATP itself (its β,γ-diphosphate, transferred with release of AMP), not from free pyrophosphate.

Both magnesium ions and inorganic phosphate are required for the proper functioning of most ribose-phosphate diphosphokinases, and their presence is critical for the enzyme's catalytic activity. These cofactors stabilize the enzyme's structure, facilitate substrate binding, and promote the chemical reactions involved in the conversion of R5P and ATP to PRPP. It's important to note that the availability of cofactors in the cellular environment is regulated by cellular homeostasis and metabolic pathways. Cells tightly regulate cofactor concentrations to maintain optimal enzyme activity and cellular function, and the specific mechanisms by which ribose-phosphate diphosphokinase acquires these cofactors may vary with the organism, cellular context, and environmental conditions.

Activation of ribose-5-phosphate


The starting material for purine biosynthesis is ribose 5-phosphate, a product of the pentose phosphate pathway. That means the synthesis of ribonucleotides depends on the pentose phosphate pathway.

In the first step of purine biosynthesis, ribose-phosphate diphosphokinase (PRPP synthetase) activates the ribose by reacting it with ATP to form 5-phosphoribosyl-1-pyrophosphate (PRPP). This compound is also a precursor in the biosynthesis of pyrimidine nucleotides and the amino acids histidine and tryptophan. As is expected for an enzyme at such an important biosynthetic crossroads, the activity of ribose-phosphate pyrophosphokinase is precisely regulated.
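A quick bookkeeping check of this activation step, using the standard stoichiometry R5P + ATP → PRPP + AMP (ATP's β,γ-pyrophosphoryl group moves to C1 of the ribose):

```python
# Phosphate-group bookkeeping for: R5P + ATP -> PRPP + AMP
phosphate_groups = {"R5P": 1, "ATP": 3, "PRPP": 3, "AMP": 1}

reactant_p = phosphate_groups["R5P"] + phosphate_groups["ATP"]
product_p = phosphate_groups["PRPP"] + phosphate_groups["AMP"]

assert reactant_p == product_p  # phosphate groups are conserved across the reaction
print("phosphate groups on each side:", reactant_p)  # 4
```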

Ribose-phosphate diphosphokinase

The two major purine nucleoside diphosphates, ADP and GDP, are negative effectors of ribose-5-phosphate pyrophosphokinase.

That raises the question of which emerged first: ADP and GDP, the products of the pathway in which ribose-phosphate diphosphokinase participates, or the enzyme itself.
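The negative-effector behavior can be sketched with a simple competitive-style inhibition term. This is a deliberate simplification of the enzyme's real allosteric regulation, and the constants below are assumed for illustration, not measured values:

```python
def inhibited_fraction(s, km, inhibitor, ki):
    """Fractional rate with a competitive-style inhibitor:
    v/Vmax = [S] / (KM * (1 + [I]/Ki) + [S])."""
    return s / (km * (1 + inhibitor / ki) + s)

KM_R5P = 0.1   # mM, assumed illustrative KM for ribose-5-phosphate
KI_ADP = 0.05  # mM, assumed illustrative inhibition constant for ADP

for adp in (0.0, 0.05, 0.5):  # rising ADP progressively throttles the enzyme
    print(adp, round(inhibited_fraction(0.1, KM_R5P, adp, KI_ADP), 3))
```

As the nucleotide pool (the pathway's output) rises, the fractional rate falls, which is the essence of end-product feedback.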

Regulating the availability of cofactors

The cell regulates the availability of cofactors, including magnesium ions and inorganic pyrophosphate, through various mechanisms to maintain optimal enzyme activity and cellular function. Here are some examples of how cells regulate cofactor levels:

Cellular transporters: Cells can have specific transporters that actively import or export cofactors, including magnesium ions, to regulate their intracellular concentrations. These transporters can be regulated by various factors, such as cellular signaling, energy status, and cofactor availability, to maintain appropriate levels of cofactors in the cell.

Chelation and sequestration: Cells can use chelating molecules or proteins to tightly bind and sequester cofactors, such as magnesium ions, in specific cellular compartments or organelles. This can help regulate the availability and distribution of cofactors within the cell, ensuring that they are available for the appropriate enzymes or metabolic pathways.

Enzymatic synthesis and degradation: Cells can synthesize and degrade cofactors as needed to regulate their intracellular concentrations. For example, inorganic pyrophosphate (PPi), which is a byproduct of ribose-phosphate diphosphokinase activity, can be further metabolized or regenerated by other enzymes or pathways in the cell.

Feedback regulation: The activity of enzymes involved in cofactor metabolism or utilization can be regulated by feedback mechanisms. For example, high intracellular concentrations of certain cofactors, such as magnesium ions or ATP, can allosterically inhibit or activate enzymes involved in cofactor biosynthesis or utilization to maintain appropriate levels of cofactors in the cell.

Gene expression regulation: Cells can regulate the expression of genes encoding enzymes involved in cofactor metabolism or utilization to control the levels of cofactors. This can be achieved through transcriptional regulation, where specific transcription factors or regulatory proteins control the expression of these genes in response to cellular signals or environmental cues.

Overall, the regulation of cofactor levels in the cell is a tightly controlled process that involves various mechanisms, including cellular transporters, chelation and sequestration, enzymatic synthesis and degradation, feedback regulation, and gene expression regulation. These mechanisms work together to maintain optimal cofactor concentrations for the proper functioning of enzymes and metabolic pathways in the cell.
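The transporter-based regulation in the list above can be caricatured as a set-point controller. This is purely a toy model; the gain and set point are arbitrary:

```python
def regulate_cofactor(conc, set_point, gain, steps):
    """Toy homeostasis loop: each step, transporters import or export cofactor
    in proportion to the deviation from the set point."""
    trajectory = []
    for _ in range(steps):
        conc += gain * (set_point - conc)  # proportional correction
        trajectory.append(round(conc, 4))
    return trajectory

# Starting from a deficit, the concentration converges toward the set point:
print(regulate_cofactor(conc=0.2, set_point=1.0, gain=0.5, steps=5))
# [0.6, 0.8, 0.9, 0.95, 0.975]
```

Real cells combine several such mechanisms (transport, sequestration, synthesis, feedback), but each one contributes the same basic error-correcting behavior.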

Mechanism description

The mechanism of RPK involves several steps, including substrate binding, phosphoryl transfer, and product release. Here is a general overview of the RPK mechanism:

Substrate binding: RPK binds both R5P and ATP as substrates. R5P binds first to the active site of RPK, followed by ATP binding to a separate site on the enzyme. The binding of ATP induces a conformational change in RPK that positions the two substrates for phosphoryl transfer.

Phosphoryl transfer: RPK catalyzes the transfer of the β,γ-pyrophosphoryl (diphosphoryl) group from ATP to R5P. The pyrophosphoryl group is transferred to the C1 position of R5P, forming PRPP and releasing AMP as a byproduct. The reaction involves a nucleophilic attack by the C1 hydroxyl group of R5P on the β-phosphorus of ATP, resulting in the formation of a phosphoester bond between R5P and the transferred pyrophosphoryl group.

Product release: After pyrophosphoryl transfer, PRPP is released from the active site of RPK, and the enzyme is ready for another catalytic cycle. The released AMP can be further metabolized or recycled by other cellular processes.

The mechanism of RPK is complex and involves multiple steps, including substrate binding, phosphoryl transfer, and product release. The enzyme's active site and conformational changes play a crucial role in facilitating the catalytic reaction and ensuring efficient PRPP synthesis, which is essential for nucleotide biosynthesis and other cellular processes.
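The bind → transfer → release cycle described above can be expressed as a small state machine. This is an illustrative sketch of the ordering constraints only, not a kinetic model (the by-product of pyrophosphoryl transfer is AMP):

```python
class RPKCycle:
    """Toy state machine for one catalytic cycle of ribose-phosphate
    diphosphokinase: substrate binding -> pyrophosphoryl transfer -> release."""
    def __init__(self):
        self.state = "empty"

    def bind_substrates(self):
        assert self.state == "empty", "active site must be free before binding"
        self.state = "R5P.ATP bound"

    def transfer(self):
        assert self.state == "R5P.ATP bound", "both substrates must be bound first"
        self.state = "PRPP.AMP formed"

    def release(self):
        assert self.state == "PRPP.AMP formed", "transfer must precede release"
        self.state = "empty"  # active site resets for the next cycle
        return ("PRPP", "AMP")

enzyme = RPKCycle()
enzyme.bind_substrates()
enzyme.transfer()
print(enzyme.release())  # ('PRPP', 'AMP')
```

Calling the methods out of order trips an assertion, mirroring the text's point that each step presupposes the one before it.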

Enzymatic Symphony: The Marvel of Design

Behold the marvel of design,
Ribose-phosphate diphosphokinase, divine,
An enzyme that orchestrates with grace,
The synthesis of nucleotides, in every place.

With active sites, it's built to bind,
R5P and ATP, in perfect kind,
Converting them with skill and might,
To PRPP and ADP, in the enzyme's light.

A homodimer it forms, a symphony in pairs,
With identical subunits, it truly declares,
The beauty of its structure, precise and fine,
A testament to intelligence, so divine.

Conserved regions, so carefully designed,
ATP binding sites, with precision aligned,
And R5P binding sites, they play their part,
In the enzymatic dance, a work of art.

Amino acids, the building blocks of life,
Compose this enzyme, without any strife,
Their sequence varies, but still we find,
The hand of design, so perfectly aligned.

Magnesium ions, a crucial co-factor,
Stabilize the enzyme's active site, with their benefactor,
Inorganic pyrophosphate, a high-energy bond,
Provides the driving force, so the reaction can respond.

Genetic regulation, a masterful control,
To fine-tune the enzyme's activity, a precise role,
Protein-protein interactions, a delicate dance,
To modulate its function, with elegance.

Post-translational modifications, adding flair,
Phosphorylation, acetylation, with utmost care,
Fine-tuning the enzyme's stability and location,
A master plan, of intricate coordination.

The availability of cofactors, it's tightly controlled,
Through cellular transporters, so wise and bold,
The cell maintains optimal levels, just right,
To ensure the enzyme's activity, shines bright.

Ribose-phosphate diphosphokinase, a wondrous sight,
A marvel of design, with intelligence so bright,
In nucleotide biosynthesis, it plays a key role,
A testament to an intelligent design, an awe-inspiring goal.


2. Amidophosphoribosyl transferase (GPAT)

Amidophosphoribosyl transferase (GPAT), also known as glutamine phosphoribosyl pyrophosphate amidotransferase (PurF in E. coli), is an enzyme that plays a key role in the biosynthesis of purine nucleotides, which are essential components of DNA, RNA, and ATP. GPAT catalyzes the first committed step of the pathway: the transfer of glutamine's amide nitrogen to PRPP, displacing pyrophosphate and forming 5-phospho-β-D-ribosylamine (PRA). (The subsequent ATP-dependent addition of glycine to PRA, forming glycinamide ribonucleotide (GAR), is carried out by the next enzyme in the pathway, GAR synthetase.)

GPAT is an oligomeric enzyme built from identical subunits. Each subunit contains two principal domains: an N-terminal glutaminase domain, which hydrolyzes glutamine to liberate ammonia, and a C-terminal phosphoribosyltransferase domain, which binds PRPP and couples the nitrogen to C1 of the ribose.

The ammonia generated at the glutaminase site is conducted through an internal channel to the phosphoribosyltransferase site, so the reactive intermediate never escapes into solution. The phosphoribosyltransferase domain contains the active site where the displacement of pyrophosphate takes place, and it is highly conserved among GPAT enzymes. The enzyme's activity is controlled allosterically: purine nucleotides, the end products of the pathway, bind to regulatory sites and inhibit the enzyme.

The overall structure of GPAT can vary depending on the specific organism and the form of the enzyme (e.g., monomeric, dimeric, or multimeric). GPAT enzymes have been identified in various organisms, including bacteria, fungi, plants, and animals, and they exhibit structural diversity and functional specialization.

Acquisition of purine atom N9
 
In the first reaction unique to purine biosynthesis, amidophosphoribosyl transferase (ATase) catalyzes the displacement of PRPP's pyrophosphate group by glutamine's amide nitrogen. The reaction occurs with inversion of the configuration at C1 of PRPP, thereby forming β-5-phosphoribosylamine and establishing the anomeric form of the future nucleotide. The reaction, which is driven to completion by the subsequent hydrolysis of the released PPi, is the pathway's flux-controlling step.
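The "driven to completion by PPi hydrolysis" point can be made quantitative with a back-of-the-envelope equilibrium calculation. The ΔG°′ value for the amination step below is an assumption chosen only to illustrate thermodynamic coupling; the PPi hydrolysis figure is a commonly cited approximate value:

```python
import math

R = 8.314  # J/(mol*K)
T = 310.0  # K (~37 C)

def k_eq(delta_g_kj_per_mol):
    """Equilibrium constant from a standard free-energy change: K = exp(-dG/RT)."""
    return math.exp(-delta_g_kj_per_mol * 1000.0 / (R * T))

DG_AMINATION = 5.0        # kJ/mol, assumed: PRPP + Gln -> PRA + Glu + PPi
DG_PPI_HYDROLYSIS = -19.2 # kJ/mol, approximate value for PPi -> 2 Pi

print(k_eq(DG_AMINATION))                      # < 1: unfavorable on its own
print(k_eq(DG_AMINATION + DG_PPI_HYDROLYSIS))  # >> 1: pulled forward by PPi hydrolysis
```

Because free energies add when reactions are coupled, hydrolyzing the released PPi converts a mildly unfavorable step into a strongly forward-driven one.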

Amidophosphoribosyl transferase 

Its Function

GPAT plays a crucial role in the de novo biosynthesis of purine nucleotides, which are essential for DNA and RNA synthesis, energy metabolism (ATP), and other important cellular processes. GPAT catalyzes the transfer of glutamine's amide nitrogen to PRPP, forming 5-phosphoribosylamine (PRA), the committed intermediate of the pathway. PRA is then converted to glycinamide ribonucleotide (GAR), an important intermediate in the purine biosynthetic pathway that serves as a precursor for the synthesis of purine nucleotides such as AMP (adenosine monophosphate) and GMP (guanosine monophosphate).

GPAT activity is tightly regulated to maintain the balance of purine nucleotide production in the cell. The enzyme can be regulated by allosteric effectors, such as purine nucleotides (e.g., AMP and GMP), which bind to regulatory sites and modulate the enzyme's activity. Additionally, GPAT can be subject to post-translational modifications, such as phosphorylation, which can further regulate its activity.

GPAT is a critical enzyme involved in the biosynthesis of purine nucleotides. It is built from identical subunits, each with distinct glutaminase and phosphoribosyltransferase domains, and its activity is tightly regulated to ensure proper cellular production of purine nucleotides, which are essential for various cellular processes.

Mechanism description

The process of GPAT enzyme activity can be likened to a machine-like process with a clear goal-oriented logic, from substrate binding to product release, and resetting of the active site for subsequent catalysis.

Substrate binding: GPAT first binds its substrates, PRPP and glutamine, at its active sites. An active site is a specific region of the enzyme that allows for substrate recognition and catalysis. The binding of the substrates is highly specific and precise, ensuring that only the correct substrates are bound and processed by the enzyme.

Catalysis: Once the substrates are bound, GPAT catalyzes the displacement of PRPP's pyrophosphate group by the amide nitrogen derived from glutamine. This transfer forms a new carbon-nitrogen bond at C1 of the ribose, producing 5-phosphoribosylamine, an essential step in the biosynthesis of purine nucleotides. During this process, GPAT facilitates the chemical reaction required for the nitrogen transfer, ensuring that the reaction occurs efficiently and effectively.

Product release: After the transfer reaction is complete, GPAT releases the newly formed product, 5-phosphoribosylamine, from its active site. This allows the product to be further utilized in the downstream steps of purine nucleotide biosynthesis, which is important for cellular processes such as DNA and RNA synthesis.

Resetting the active site: GPAT may undergo conformational changes to reset its active site for another round of catalysis. This may involve the release of any remaining pyrophosphate (PPi) or other cofactors, and the enzyme may return to its original conformation to await the binding of new substrates. This resetting process ensures that GPAT is ready to bind and process new substrates for subsequent rounds of catalysis, maintaining its efficiency and effectiveness in synthesizing purine nucleotides.

The GPAT enzyme operates with clear goal-oriented logic, akin to a machine-like process, where it binds substrates at its active site, catalyzes the transfer of glutamine's amide nitrogen to PRPP, releases the product, and resets its active site for subsequent rounds of catalysis. This efficient and precise process initiates the de novo synthesis of purine nucleotides, a critical cellular function.

The GPAT enzyme, like many other enzymes, follows a specific sequence of events from substrate binding to product release, and resetting of the active site for subsequent catalysis. Each step in this process is highly orchestrated and relies on precise molecular interactions to occur in a sequential and coordinated manner. If any of the intermediate stages in the GPAT enzyme process, such as substrate binding, catalysis, product release, or resetting of the active site, were not pre-programmed to occur in a clear and logical sequence, it could disrupt the proper functioning of the enzyme. Enzymes are finely-tuned biological machines that require specific molecular interactions and conformational changes to perform their functions effectively.

For example, if the substrate binding step is disrupted, the enzyme may not be able to properly recognize and bind its substrates, leading to a loss of catalytic activity. If the catalysis step is compromised, the enzyme may not be able to facilitate the nitrogen-transfer chemistry, leading to a failure in product formation. Similarly, if the product release or active site resetting steps are impaired, it could result in a buildup of intermediate products or a failure to prepare the enzyme for subsequent rounds of catalysis.

Any disruptions or deviations from the normal sequence of events in the GPAT enzyme process could potentially result in a breakdown of the enzyme's function, leading to a loss or reduction in its catalytic activity, and ultimately affecting the biosynthesis of purine nucleotides, which are important for cellular processes. Therefore, a clear and sequential functioning of the enzyme is crucial for its proper activity and overall biological function.

In enzyme-catalyzed reactions, each step in the process, including substrate binding, catalysis, product release, and resetting of the active site, is interconnected and serves a specific purpose in the overall enzymatic pathway. These steps are coordinated and integrated to ensure efficient and effective enzymatic activity.

Substrate binding is necessary to ensure that only the correct substrates are recognized and processed by the enzyme, and it is a crucial step for the subsequent catalytic reaction. Catalysis is the central step where the enzyme facilitates the chemical reaction required for the conversion of substrates into products. Product release allows the newly formed product to be released from the active site and utilized in downstream metabolic pathways. Resetting the active site prepares the enzyme for subsequent rounds of catalysis and maintains its efficiency.

All these steps work together in a coherent and sequential manner to achieve the desired enzymatic function. If any of these steps were missing or disrupted, it could compromise the overall effectiveness and efficiency of the enzyme, and the process may not proceed as intended.

Enzymes have to perform their functions through a tightly regulated and integrated series of steps. Each step contributes to the overall process and is advantageous when integrated into the whole process. The coordinated interplay of these steps allows enzymes to carry out their specific functions with high specificity, efficiency, and accuracy, enabling the intricate biochemical pathways that occur in living organisms.

Goal-orientedness is a hallmark of intelligent setup and design. It refers to the intentional and systematic alignment of actions, processes, and resources toward achieving a specific objective or purpose. Whether it is designing a physical product, developing a software application, or organizing a complex system, goal-orientedness ensures that efforts are directed towards a well-defined end goal, which increases the chances of success.

One of the key aspects of goal-orientedness is the clarity of the objective. A well-defined and specific goal provides a clear sense of direction and purpose, enabling efforts and resources to be focused effectively. A goal acts as a guiding star that helps in making informed decisions and prioritizing tasks. Without a clear goal, efforts may be scattered, resources may be misallocated, and progress may be hindered.

Another important aspect of goal-orientedness is the ability to adapt and adjust as circumstances change. Intelligent setup and design require flexibility to respond to changing requirements, constraints, or opportunities. This means constantly reviewing and aligning actions with the changing context to ensure that the goal remains relevant and achievable. This adaptability allows for optimization and improvement, and it ensures that the design remains effective and efficient in achieving the intended purpose.

Amidophosphoribosyl transferase (GPAT), it is an enzyme that is designed to be tightly regulated in cells to maintain cellular purine levels and balance. GPAT is subject to feedback inhibition, where the end product of the purine biosynthesis pathway, inosine monophosphate (IMP), can bind to and inhibit GPAT, regulating its activity. This feedback inhibition mechanism helps to prevent the overproduction of purine nucleotides, ensuring that cellular purine levels are maintained within appropriate ranges. Additionally, the expression and activity of GPAT can also be influenced by various cellular factors, including changes in substrate availability, cellular energy status, and other environmental conditions. For example, GPAT activity has been shown to be regulated by the availability of substrates, such as phosphoribosyl pyrophosphate (PRPP) and glutamine, which are required for the biosynthesis of purine nucleotides. Changes in cellular energy status, such as alterations in ATP levels, can also impact the activity of GPAT.

Overall, the activity of Amidophosphoribosyl transferase (GPAT) is regulated through complex mechanisms to maintain cellular purine levels and adapt to changing cellular conditions. Further research is needed to fully understand the intricacies of GPAT regulation and its adaptability in different cellular contexts.

The regulation of enzyme activity, including that of Amidophosphoribosyl transferase (GPAT), can be likened to a tightly regulated process in a factory where production is carefully controlled. Enzymes are biological catalysts that facilitate specific chemical reactions in cells, and their activity needs to be precisely regulated to maintain cellular homeostasis and ensure proper cellular function. In a factory setting, production processes are typically designed and controlled to achieve specific goals, such as optimizing efficiency, maintaining quality standards, and minimizing waste. Similarly, in cells, the activity of enzymes, including GPAT, is regulated through various mechanisms to achieve specific cellular goals, such as maintaining proper purine levels, preventing overproduction, and responding to changes in cellular conditions. The regulation of GPAT activity involves complex feedback mechanisms, where the end product of the purine biosynthesis pathway, IMP, can inhibit GPAT to prevent excessive purine production. Additionally, other cellular factors, such as substrate availability and cellular energy status, can also impact GPAT activity. These regulatory mechanisms ensure that GPAT and other enzymes function optimally within the cellular context and respond to changing conditions as needed.

Goal-orientedness also promotes accountability and measurement. When a specific goal is set, it becomes easier to measure progress and success. It allows for tracking and evaluating performance against the desired outcomes. This measurement provides valuable feedback and insights that can be used to refine and improve the setup or design. It also helps in identifying any deviations or inefficiencies, enabling timely corrective actions.

The end-products of nucleotide biosynthesis, such as purine and pyrimidine nucleotides, can feedback-inhibit the activity of GPAT, thereby regulating its activity and controlling the production of nucleotides. This feedback inhibition helps to prevent the overproduction of nucleotides and maintain the appropriate balance of nucleotide pools in the cell. The feedback mechanisms that regulate GPAT and other enzymes are an example of how biological systems must have been conceptualized and designed from the get-go with these complex and sophisticated regulatory mechanisms, to ensure proper functioning and to adapt to changing cellular conditions. GPAT must have been present at the emergence of life on Earth, as these enzymes are essential for the chemical reactions that sustain life. Their origin can therefore not be explained by invoking evolutionary mechanisms. This is clear evidence that implies a designed manufacturing process.
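The feedback-inhibition loop described here can be sketched numerically: production falls as the end product accumulates, so the pool settles at a steady state rather than growing without bound. All rate constants below are invented for illustration:

```python
def simulate_purine_pool(k_prod, k_use, k_i, t_end, dt=0.01):
    """Toy feedback model: a GPAT-like production rate k_prod / (1 + pool/k_i)
    is throttled by the accumulating end product; k_use is consumption."""
    pool = 0.0
    for _ in range(int(t_end / dt)):
        production = k_prod / (1.0 + pool / k_i)  # inhibition rises with the pool
        pool += dt * (production - k_use * pool)  # simple Euler integration
    return pool

# With these invented constants the steady state is pool = 1.0
# (solve k_prod/(1 + p) = k_use * p for p):
print(round(simulate_purine_pool(k_prod=1.0, k_use=0.5, k_i=1.0, t_end=50.0), 3))
```

Without the inhibition term, the same parameters would drive the pool to a higher level set only by consumption; the feedback clamps it near the balance point, which is the "overproduction prevention" described above.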

Molecular Symphony: GPAT's Intelligent Design

Amidophosphoribosyl transferase, GPAT,
A molecular machine, precise and exact,
In purine nucleotide biosynthesis it plays a part,
With a goal-oriented logic, a work of intelligent art.

With ATP-binding domain, it starts the show,
Hydrolyzing ATP, providing energy to go,
Catalytic domain, the active site,
Where substrates bind, with precision so tight.

Substrate binding, a specific affair,
Ensuring only the right ones are processed with care,
Catalysis ensues, a chemical dance,
Transferring PRPP, in a well-choreographed trance.

Product released, with PRPP in tow,
Ready for downstream pathways to bestow,
Active site reset, for another round,
Efficient and effective, with precision profound.

A molecular machine, orchestrated and fine,
Evidence of intelligent design, so divine,
In the biosynthesis of nucleotides, so crucial,
GPAT's precision and complexity, truly awe-inspiring and beautiful.

With each step orchestrated and exact,
A testament to design, with undeniable impact,
GPAT's molecular dance, a symphony of life,
A marvel of creation, amidst the cellular strife.




https://reasonandscience.catsboard.com

12. Re: Refuting Darwin, confirming design (Sat Apr 08, 2023 10:15 am)

Otangelo


Admin

3. Glycinamide ribotide (GAR) transformylase (GART)

Glycinamide ribotide (GAR) transformylase, also known as GART, catalyzes the transfer of a formyl group from N10-formyltetrahydrofolate to GAR, forming formylglycinamide ribonucleotide (FGAR) as an intermediate in the pathway. The overall structure of GART typically consists of a single polypeptide chain folded into a globular shape, composed of multiple alpha helices and beta sheets. GART is classified as a formyltransferase; the folate-dependent enzyme does not consume ATP (E. coli also possesses a second isoform, PurT, which instead uses free formate and ATP). The minimal bacterial isoform of GART, commonly found in bacteria such as Escherichia coli, is known as PurN. PurN is a monomeric enzyme of roughly 23 kDa (kilodaltons), typically consisting of around 200-220 amino acid residues. It plays an important role in bacterial purine nucleotide biosynthesis and in the growth of bacteria.

The mechanism of GART

The mechanism of GART involves several steps:

Substrate binding: GAR and N10-formylTHF bind to the active site of GART, which is typically located in a pocket or cleft within the protein structure. This binding brings the substrates into close proximity for the formylation reaction to occur.

Formyl transfer: The formyl group from N10-formylTHF is transferred to the amino group of GAR, resulting in the formation of FGAR. This transfer is facilitated by the catalytic residues within the active site of GART, which may include amino acid residues with specific functional groups that participate in the transfer reaction.

Product release: FGAR is released from the active site of GART, making it available for further downstream reactions in the purine biosynthesis pathway.

Cofactor regeneration: N10-formylTHF, which acts as a cofactor in the formylation reaction, may be regenerated through other enzymatic reactions in the folate metabolic pathway, allowing it to be reused in subsequent rounds of GART catalysis.

The exact details of GART's mechanism may vary depending on the specific organism or isoform, and may involve additional cofactors or regulatory factors. Overall, GART plays a critical role in the biosynthesis of purine nucleotides, providing the formyl group necessary for the construction of purine bases, which are essential building blocks of DNA and RNA in living organisms.

The regeneration of N10-formyltetrahydrofolate (N10-formylTHF) in the folate metabolic pathway involves several enzymatic reactions. Here is an outline of the process:

Formyl transfer from N10-formylTHF: N10-formylTHF serves as the formyl donor in formylation reactions catalyzed by enzymes like glycinamide ribotide transformylase (GART). During this reaction, N10-formylTHF donates its formyl group to the amino group of glycinamide ribotide (GAR), yielding formylglycinamide ribotide (FGAR) and leaving the carrier behind as tetrahydrofolate (THF).

Direct re-formylation of THF: THF can be re-charged with a one-carbon unit in a single step by formate-tetrahydrofolate ligase (also called 10-formyltetrahydrofolate synthetase, FTHFS), which couples ATP hydrolysis to the condensation of free formate with THF, regenerating N10-formylTHF.

One-carbon units from serine: Alternatively, serine hydroxymethyltransferase (SHMT) transfers a one-carbon unit from serine to THF, forming N5,N10-methylenetetrahydrofolate (N5,N10-methyleneTHF). Methylenetetrahydrofolate dehydrogenase (MTHFD) then oxidizes N5,N10-methyleneTHF to N5,N10-methenylTHF, and a cyclohydrolase activity converts this to N10-formylTHF.

Dihydrofolate reduction: Whenever the folate pool is oxidized to dihydrofolate (DHF), for instance during thymidylate synthesis, dihydrofolate reductase (DHFR) uses NADPH as a cofactor to transfer electrons and reduce DHF back to THF, keeping the carrier available for the reactions above.

Overall, the regeneration of N10-formylTHF in the folate metabolic pathway involves reloading the THF carrier with a one-carbon unit at the formyl oxidation level, either directly from formate (FTHFS) or via serine-derived methylene and methenyl intermediates (SHMT, MTHFD). This allows N10-formylTHF to be recycled and reused as a cofactor in multiple rounds of formylation reactions, including the formylation of GAR by GART in the biosynthesis of purine nucleotides.
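One common textbook regeneration route, the FTHFS reaction (THF + formate + ATP -> N10-formyl-THF + ADP + Pi), can be sketched as simple stoichiometric bookkeeping. The species names are informal labels, and the `run_reaction` helper is an illustration, not a real simulation of kinetics:

```python
# Minimal sketch of the formate-tetrahydrofolate ligase (FTHFS) step:
#   THF + formate + ATP -> N10-formyl-THF + ADP + Pi
from collections import Counter

FTHFS = (Counter({"THF": 1, "formate": 1, "ATP": 1}),          # reactants
         Counter({"N10-formyl-THF": 1, "ADP": 1, "Pi": 1}))    # products

def run_reaction(pool: Counter, reaction) -> Counter:
    """Apply a reaction once, if every reactant is present in the pool."""
    reactants, products = reaction
    if any(pool[species] < n for species, n in reactants.items()):
        raise ValueError("missing reactant(s)")
    return pool - reactants + products

pool = Counter({"THF": 1, "formate": 1, "ATP": 1})
pool = run_reaction(pool, FTHFS)
print(sorted(pool.elements()))  # ['ADP', 'N10-formyl-THF', 'Pi']
```

The Counter arithmetic makes the mass balance explicit: the carbon arrives as free formate, and the ATP cost of activating it shows up as ADP and Pi in the product pool.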

Acquisition of purine atoms C4, C5, and N7

Glycine’s carboxyl group forms an amide with the amino group of phosphoribosylamine, yielding glycinamide ribotide (GAR). This reaction is reversible, despite its concomitant hydrolysis of ATP to ADP + Pi. It is the only step of the purine biosynthetic pathway in which more than one purine ring atom is acquired.

This step is carried out by glycinamide ribonucleotide synthetase (GAR synthetase) via its ATP-dependent condensation of the glycine carboxyl group with the amine of 5-phosphoribosyl-β-amine. The reaction proceeds in two stages. First, the glycine carboxyl group is activated via ATP-dependent phosphorylation. Next, an amide bond is formed between the activated carboxyl group of glycine and the β-amine. Glycine contributes C-4, C-5, and N-7 of the purine. 15
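The two-stage mechanism can be sketched as a toy model that tracks which species are present at each stage. The intermediate name "glycyl-phosphate" and the abbreviation PRA (for 5-phosphoribosyl-β-amine) are informal labels chosen for this sketch:

```python
# Toy two-stage sketch of the GAR synthetase reaction described above:
#   stage 1: glycine + ATP -> glycyl-phosphate + ADP   (carboxyl activation)
#   stage 2: glycyl-phosphate + PRA -> GAR + Pi        (amide bond formation)

def gar_synthetase(pool: set) -> set:
    # stage 1: activate the glycine carboxyl group by phosphorylation
    if not {"glycine", "ATP"} <= pool:
        raise ValueError("stage 1 needs glycine and ATP")
    pool = (pool - {"glycine", "ATP"}) | {"glycyl-phosphate", "ADP"}
    # stage 2: the beta-amine attacks the activated carboxyl, forming the amide
    if "PRA" not in pool:
        raise ValueError("stage 2 needs 5-phosphoribosyl-beta-amine (PRA)")
    return (pool - {"glycyl-phosphate", "PRA"}) | {"GAR", "Pi"}

print(sorted(gar_synthetase({"glycine", "ATP", "PRA"})))  # ['ADP', 'GAR', 'Pi']
```

Splitting the function into two guarded stages mirrors the chemistry: the phosphorylated intermediate must be formed before the amine has anything activated to attack.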




Last edited by Otangelo on Sun May 21, 2023 2:14 pm; edited 1 time in total

https://reasonandscience.catsboard.com
