ElShamah - Reason & Science: Defending ID and the Christian Worldview

Welcome to my library—a curated collection of research and original arguments exploring why I believe Christianity, creationism, and Intelligent Design offer the most compelling explanations for our origins. Otangelo Grasso



Uncertainty quantification of the universe and life emerging through unguided, natural, random events


Otangelo


Admin


https://reasonandscience.catsboard.com/t2508-abiogenesis-uncertainty-quantification-of-a-primordial-ancestor-with-a-minimal-proteome-emerging-through-unguided-natural-random-events

The maximal number of possible simultaneous interactions in the entire history of the universe, starting 13.7 billion years ago, can be calculated by multiplying three factors together: the number of atoms in the universe (10^80), times the number of seconds that have passed since the big bang (10^16), times the fastest rate at which an atom can change its state per second (10^43). This calculation fixes the total number of events that could have occurred in the observable universe since its origin at 10^139, which provides a measure of the probabilistic resources of the entire observable universe.

The number of atoms in the entire universe = 1 x 10^80
The estimated age of the universe: 13.7 billion years. In seconds, that is on the order of 1 x 10^16
The fastest rate at which an atom can change its state = 1 x 10^43 times per second
Therefore, the maximum number of possible events in a universe that is 13.7 billion years old (10^16 seconds), where every atom (10^80) is changing its state at the maximum rate of 10^43 times per second, is 10^139.

(Written out, 10^139 is a 1 followed by 139 zeros.)
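As a sketch, the multiplication behind the 10^139 figure, using only the three estimates from the text:

```python
# Rough sketch: probabilistic resources of the observable universe,
# multiplying the three estimates quoted above (all approximations).
atoms = 10**80                 # atoms in the observable universe
seconds = 10**16               # seconds since the big bang (as quoted)
state_changes = 10**43         # max state transitions per atom per second
max_events = atoms * seconds * state_changes
print(max_events == 10**139)   # True
```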

If the odds of an event occurring are lower than the threshold set by the entire probabilistic resources of the universe, then we can confidently say that the event will not occur by chance.

140 features of the cosmos as a whole (including the laws of physics) must fall within certain narrow ranges to allow for the possibility of physical life's existence.
402 quantifiable characteristics of a planetary system and its galaxy must fall within narrow ranges to allow for the possibility of advanced life's existence.
There is less than 1 chance in 10^390 that even one planet with the conditions necessary for life would occur anywhere in the universe without invoking divine miracles.

The odds of getting life from non-life by natural means:
Probability of the occurrence of a functional proteome by unguided means (in the case of Pelagibacter, the smallest known free-living bacterium and life-form, 1,350 proteins averaging 300 amino acids in length): 1 in 10^722,000
Probability of connecting all 1,350 proteins in the right, functional order: about 1 in 4^3,600
Probability of having both a minimal proteome and an interactome: about 1 in 10^725,600

1. The more statistically improbable something is, the less sense it makes to believe that it just happened by blind chance.
2. Statistically, it is practically impossible that the laws and constants of physics, the initial conditions, and the fundamental forces are finely adjusted within an unimaginably narrow range permitting the initial expansion of the universe, and that a primordial genome, proteome, metabolome, and interactome of the first living cell arose by random, unguided events.
3. Furthermore, we see purposeful design in the setup of the universe, in biochemistry, and in biology.
4. Without a mind that knows which sequences in a vast space of possibilities are functional, a random search is probabilistically doomed to fail to find those that confer function.

Claim: If a small chance exists, it will happen given enough time and/or enough tries.
Reply: In regard to the origin of the universe: The Big Bang was the most precisely planned event in all of history. According to Roger Penrose, the odds of having a low-entropy state at the beginning of our universe were 1 in 10^10^123. There would be roughly 10^115 protons in the universe if we filled each atom with protons (one atom can hold about 25 trillion protons) and filled the entire volume of the observable universe with such atoms, without leaving any space. The odds of hitting the jackpot, that is, of finding one proton marked in red by chance while searching through one hundred universes the size of ours filled with protons, are the same as the odds of finding one universe like ours with the same low-entropy state at the beginning. If we had to find our universe amongst an ensemble of almost infinite parallel universes, it would take 17,000 billion years to find the one that is ours (shuffling in the shortest time interval in which any physical effect can occur). Many statisticians consider any occurrence with a chance of happening of less than one in 10^50 (Borel's law) to have such a slim probability that it is, in general, statistically considered to be zero.
In regard to the origin of life: The probability of the occurrence of a functional proteome by unguided means (in the case of Pelagibacter, the smallest known free-living bacterium and life-form, 1,350 proteins averaging 300 amino acids in length) is 1 in 10^722,000. (It would take 1,190,000 billion years to find the right sequence.) That calculation, however, just serves for illustration and to make a point; it would never come to a shuffling in the first place. The first reason is that, in order to have the twenty amino acids used in life, they would not only have to be available prebiotically (they were not, and if so, only in racemic mixtures) and be sorted out from amongst the hundreds prebiotically available (without any meaning for life, since there was no prebiotic natural selection or chemical evolution), but at least twenty other hurdles would have to be overcome. Amongst the most annihilating, citing Steve Benner, Paradoxes of life, 2012: Systems, given energy and left to themselves, DEVOLVE to give uselessly complex mixtures, "asphalts". The literature reports (to our knowledge) exactly ZERO CONFIRMED OBSERVATIONS where "replication involving replicable imperfections" (RIRI) evolution emerged spontaneously from a devolving chemical system. It is IMPOSSIBLE for any non-living chemical system to escape devolution to enter into the Darwinian world of the "living". Such statements of impossibility apply even to macromolecules not assumed to be necessary for RIRI evolution.

The Criterion: The "Cosmic Limit" Law of Chance
To arrive at a statistical "proof," we need a reasonable criterion to judge it by.
As a starting point, consider that many statisticians hold that any occurrence with a chance of happening of less than one in 10^50 has such a slim probability that it is, in general, statistically considered to be zero. (10^50 is the number 1 with 50 zeros after it, spoken "10 to the 50th power".) This appraisal seems fairly reasonable when you consider that 10^50 is about the number of atoms that make up the planet earth. So, overcoming odds of one in 10^50 is like marking one specific atom of the earth, mixing it in completely, and then having someone make one blind, random selection that turns out to be that specifically marked atom. Most mathematicians and scientists have accepted this statistical standard for many purposes.
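The claim that 10^50 is roughly the number of atoms making up the earth can be sanity-checked with round figures; the 30 g/mol average molar mass here is an assumption for illustration only:

```python
# Order-of-magnitude check: atoms in the earth (~10^50).
earth_mass_g = 5.97e27        # mass of the earth in grams
avg_molar_mass = 30.0         # g/mol, crude average composition (assumption)
avogadro = 6.022e23           # atoms per mole
atoms = earth_mass_g / avg_molar_mass * avogadro
print(f"{atoms:.1e}")         # on the order of 10^50
```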

Julie Hannah: THE ARISING OF OUR UNIVERSE: DESIGN OR CHANCE? OCTOBER 3, 2020
Monkeys typing for an infinite length of time are supposed to eventually type out any given text, but if there are 50 keys, the probability of producing just one given five-letter word is 1 in 312,500,000
This is a tremendously low probability, and it decreases exponentially when letters are added. A computer program that simulated random typing once produced nineteen consecutive letters and characters that appear in a line of a Shakespearean play, but this result took 42,162,500,000 billion years to achieve!
According to scientists Kittel and Kroemer, the probability of randomly typing out Hamlet is, therefore, zero in any operational sense (Thermal Physics, 53).
https://crossexamined.org/the-arising-of-our-universe-design-or-chance/
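The five-letter-word figure quoted above is just 50 keys raised to the fifth power:

```python
# Equally likely keystrokes on a 50-key typewriter, five in a row.
keys = 50
word_length = 5
print(keys ** word_length)    # 312500000
```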

Claim: The origin of life is overwhelmingly improbable, but as long as there is at least some chance of a minimal proteome arising by natural means to kick-start life, we shouldn't reject the possibility that it did.
Reply: Chance is a possible explanation for a minimal proteome emerging by stochastic, unguided means, and as a consequence for the origin of life, but it doesn't follow that it is the best explanation. Here is why.
What are the odds that a functional protein or a cell would arise given the chance hypothesis (i.e., given the truth of the chance hypothesis)?

Mycoplasma is often referenced as the threshold between the living and the non-living, held to be the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that only lives and survives within the body or cells of another organism (humans). As such, it IMPORTS many nutrients from the host organism. The host provides most of the nutrients such bacteria require, hence they do not need the genes for producing such compounds themselves, and Mycoplasma does not require biosynthesis pathways of the same complexity as a free-living bacterium, which must manufacture all its nutrients.

Pelagibacter ubique bacteria are known to be the smallest and simplest self-replicating, free-living cells. Pelagibacter genomes (~1,300 genes and 1.3 million base pairs) devolved from a slightly larger common ancestor (~2,000 genes). Pelagibacter is an alphaproteobacterium. On the evolutionary timescale, its common ancestor supposedly emerged about 1.3 billion years ago. The oldest bacteria known, however, are Cyanobacteria, living in the rocks of Greenland about 3.7 billion years ago. A genome size of approximately 3.2 million base pairs (Raphidiopsis brookii D9) is the smallest described for free-living cyanobacteria. This is a paradox: the oldest known life-forms have a considerably bigger genome than Pelagibacter, which makes their origin far more unlikely from a naturalistic standpoint. The unlikeliness of getting just ONE protein-domain-sized fold of 250 amino acids is 1 in 10^77. That means that to find just one functional protein fold about 250 amino acids long, nature would have to search through about as many non-functional sequences as there are atoms in our known universe (about 10^80 atoms). We will soon see the likelihood of finding an entire functional genome of Pelagibacter, with 1.3 million nucleotides, which, based on the data above, was however not the earliest bacterium...

Pelagibacter has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1.3 million base pairs, coding for about 1,300 proteins. The chance of getting its entire proteome by chance would be 1 in 10^722,000. The discrepancy between the functional space and the sequence space is staggering.

(To calculate the odds, see this website: https://web.archive.org/web/20170423032439/http://creationsafaris.com/epoi_c06.htm#ec06f12x )

The chance hypothesis can be rejected as the best explanation of the origin of life not only because of the improbability of finding the functional amino acid sequences giving rise to a functional proteome, but also because there are not sufficient probabilistic resources to do the shuffling during the entire history of the universe. There would not be enough time available, even doing the maximum possible number of shufflings in parallel: trillions upon trillions of simultaneous attempts over the entire time span of the universe's history would not be enough. Here is why:

Steve Meyer: Signature in the Cell, chapter 10: There have been roughly 10^16 seconds since the big bang (60 seconds x 60 minutes x 24 hours x 365.24238 days x 13,799,000,000 years, about 4.35 x 10^17 seconds; Meyer rounds the order of magnitude down to 10^16). Due to the properties of gravity, matter, and electromagnetic radiation, physicists have determined that there is a limit to the number of physical transitions that can occur from one state to another within a given unit of time. According to physicists, a physical transition from one state to another cannot take place faster than light can traverse the smallest physically significant unit of distance. That unit of distance is the so-called Planck length of 10^-33 centimeters. Therefore, the time it takes light to traverse this smallest distance determines the shortest time in which any physical effect can occur. This unit of time is the Planck time of 10^-43 seconds. Based on that, we can calculate the largest number of opportunities that any physical event could have had to occur in the observable universe since the big bang. Physically speaking, an event occurs when an elementary particle does something or interacts with other elementary particles. But since elementary particles can interact with each other only so many times per second (at most 10^43 times), since there are a limited number (10^80) of elementary particles, and since there has been a limited amount of time since the big bang (10^16 seconds), there are a limited number of opportunities for any given event to occur in the entire history of the universe.

This number can be calculated by multiplying the three relevant factors together: the number of elementary particles (10^80) times the number of seconds since the big bang (10^16) times the number of possible interactions per second (10^43). This calculation fixes the total number of events that could have occurred in the observable universe since the origin of the universe at 10^139.  This provides a measure of the probabilistic resources of the entire observable universe.
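The seconds figure quoted above can be rechecked directly. Note that the straight product comes to about 4.4 x 10^17 seconds, so the 10^16 used throughout is a rounded-down order of magnitude; this sketch uses only the factors from the quotation:

```python
import math

# Seconds elapsed since the big bang, from the factors quoted above.
seconds = 60 * 60 * 24 * 365.24238 * 13_799_000_000
print(f"{seconds:.3e}")          # ~4.355e+17
print(int(math.log10(seconds)))  # 17
```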

Emile Borel gave an estimate of the probabilistic resources of the universe at 10^50.
Physicist Bret Van de Sande calculated the probabilistic resources of the universe at a more restrictive 2.6 × 10^92
Scientist Seth Lloyd calculated that the maximum number of bit operations the universe could have performed in its history is 10^120, meaning that a specific bit operation with an improbability significantly greater than 1 chance in 10^120 will likely never occur by chance.

The probability of producing a single 150-amino-acid functional protein by chance stands at about 1 in 10^164. Thus, for each functional sequence of 150 amino acids, there are at least 10^164 other possible nonfunctional sequences of the same length. Therefore, to have a  chance of producing a single functional protein of this length by chance, a random process would have to shuffle up to 10^164 sequences. That number vastly exceeds the most optimistic estimate of the probabilistic resources of the entire universe—that is, the number of events that could have occurred since the beginning of its existence.

Comparing 10^164 to the maximum number of opportunities for that event to occur in the history of the universe (10^139), we see that 10^164 exceeds 10^139 by twenty-five orders of magnitude, more than a trillion trillion times.
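The shortfall can be checked with exact integer arithmetic (Python handles these large integers natively):

```python
# Shortfall between trials needed for one protein and events available.
needed = 10**164      # sequences to search for one 150-aa functional protein
available = 10**139   # total events in the history of the universe
print(needed // available)   # 10^25: more than a trillion trillion (10^24)
```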

If every event in the universe over its entire history were devoted to producing combinations of amino acids of the correct length in a prebiotic soup (an extravagantly generous and even absurd assumption), the number of combinations thus produced would still represent a tiny fraction (less than 1 out of a trillion trillion) of the total number of events needed to have a chance of generating just ONE functional protein, any functional protein of the modest length of 150 amino acids, by chance alone (consider that the average length of a protein is about 400 amino acids).

Even taking the probabilistic resources of the whole universe into account, it is extremely unlikely that even a single protein of that length would have arisen by chance on the early earth.

We have crudely estimated a total of 100 protons, neutrons, and electrons on average per atom.
The number of protons, neutrons, and electrons in our solar system is  around 1.8 × 10^57
The number of protons, neutrons and electrons in our galaxy is around 1.8 × 10^68
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2796651/

The odds of getting the genome of the smallest known free-living cell (Pelagibacter ubique) are 1 in 10^722,000. There are 10^80 atoms in the universe. Life by chance is a ridiculously implausible assertion.

Imagine covering the whole of the USA with small coins, edge to edge. Now imagine piling other coins on each of these millions of coins. Now imagine continuing to pile coins on each coin until reaching the moon, about 400,000 km away. Suppose you were told that within this vast mountain of coins there was one coin different from all the others: the statistical chance of blindly finding that one coin is about 1 in 10^55. In other words, the evidence that our universe is designed is overwhelming!


Abiogenesis? Impossible !!
https://www.youtube.com/watch?v=ycJblRcgqXk

The cell is the irreducible, minimal unit of life
https://sci-hub.ren/https://link.springer.com/chapter/10.1007/978-3-319-56372-5_8

A. Graham Cairns-Smith: Chemistry and the Missing Era of Evolution:
We can see that at the time of the common ancestor, this system must already have been fixed in its essentials, probably through a critical interdependence of subsystems. (Roughly speaking, in a domain in which everything has come to depend on everything else, nothing can be easily changed, and our central biochemistry is very much like that.)
https://sci-hub.ren/https://www.ncbi.nlm.nih.gov/pubmed/18260066

Wilhelm Huck, Chemist, professor at Radboud University Nijmegen
A working cell is more than the sum of its parts. "A functioning cell must be entirely correct at once, in all its complexity."
https://sixdaysblog.com/2013/07/06/protocells-may-have-formed-in-a-salty-soup/

Bit by Bit: The Darwinian Basis of Life
Gerald F. Joyce  Published: May 8, 2012
https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001323
Suppose a polymer (like RNA) is assembled from four kinds of subunits (a quaternary heteropolymer) into chains 40 subunits long. Then there would be 4^40, about 10^24, possible sequences. To represent all of these sequences at least once, and thus to establish a certainty that this simple ribozyme could have materialized, requires 27 kg of RNA chains, which classifies spontaneous emergence as a highly implausible event.
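Joyce's ~27 kg figure can be roughly reproduced: one copy of each of the 4^40 possible 40-mers, at an assumed average of 330 g/mol per nucleotide (the molar mass is an illustrative assumption, not a figure from the paper):

```python
# Mass of one copy of every possible 40-nucleotide RNA sequence.
sequences = 4 ** 40               # ~1.2e24 distinct 40-mers
nt_molar_mass = 330.0             # g/mol per nucleotide (rough assumption)
chain_molar_mass = 40 * nt_molar_mass
avogadro = 6.022e23
mass_kg = sequences / avogadro * chain_molar_mass / 1000
print(round(mass_kg))             # 26, close to the paper's ~27 kg
```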

For an enzyme to be functional, it must fold in a precise three-dimensional pattern. For a small chain of 150 amino acids making up an enzyme, testing 10^12 different possible configurations per second within the cell would take 10^26 years (a 1 followed by 26 zeros) to find the right one. This example comprises a very, very, very small degree of the chemical complexity of a human cell.

My comment: The paper claims that quantum effects performed a sequence search to find functional enzymes !! Is that believable?  
https://sci-hub.ren/https://link.springer.com/chapter/10.1007/978-3-319-56372-5_8

Self-replication had to emerge and be implemented first, which raises the unbridgeable problem that DNA replication is irreducibly complex. Evolution is not a capable driving force for building the DNA replication machinery, because evolution depends on cell replication, the very mechanism we are trying to explain. It takes proteins to make DNA replication happen, but it takes the DNA replication process to make proteins. That's a catch-22 situation.

Chance of intelligence to set up life: 
100%. We KNOW by repeated experience that intelligence produces all the following things:
factory portals (membrane proteins), factory compartments (organelles), a library index (chromosomes and the gene regulatory network), molecular computers and hardware (DNA), software, a language using signs and codes like the alphabet, and an instructional blueprint (the genetic code and over a dozen epigenetic codes), information retrieval (RNA polymerase), transmission (messenger RNA), translation (the ribosome), signaling (hormones), complex machines (proteins), taxis (dynein, kinesin, transport vesicles), molecular highways (tubulins), tagging programs (each protein has a tag, an amino acid sequence informing molecular transport machines where to transport it), factory assembly lines (fatty acid synthase), error check and repair systems (exonucleolytic proofreading), recycling methods (endocytic recycling), waste grinders and management (proteasome garbage grinders), power generating plants (mitochondria), power turbines (ATP synthase), electric circuits (the metabolic network), computers (neurons), and computer networks (the brain), all with specific purposes.

The chance of unguided, random natural events producing just a minimal functional proteome, not considering all the other essentials needed to get a first living, self-replicating cell, is:

Let's suppose we already have fully operational raw materials and the genetic language upon which to store genetic information. Only now can we ask: where did the information come from to make the first living organism? Various attempts have been made to lower the minimal information content needed to produce a fully working operational cell. Often, Mycoplasma is mentioned as a reference for the threshold between the living and the non-living. Mycoplasma genitalium is held to be the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that only lives and survives within the body or cells of another organism (humans). As such, it IMPORTS many nutrients from the host organism. The host provides most of the nutrients such bacteria require, hence the bacteria do not need the genes for producing such compounds themselves, and do not require biosynthesis pathways of the same complexity as a free-living bacterium, which must manufacture all its nutrients.

Mycoplasmas are not primitive but are instead descendants of soil-dwelling bacteria, quite possibly Bacillus, which evolved into parasites. In becoming obligate parasites, these organisms were able to discard almost all biosynthetic capacity by a strategy of gaining biochemical intermediates from the host, or from the growth medium in the case of laboratory culture.

The simplest free-living bacterium is Pelagibacter ubique, known to be one of the smallest and simplest self-replicating, free-living cells. It has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1,308,759 base pairs, coding for 1,354 proteins. That is the information content of a book with 400 pages, each page with 3,000 characters. They survive without any dependence on other life forms. Incidentally, these are also the most "successful" organisms on Earth, making up about 25% of all microbial cells. If a chain could link up, what is the probability that the code letters might by chance be in some order which would be a usable gene, usable somewhere, anywhere, in some potentially living thing? If we take a model size of 1,200,000 base pairs, the chance of getting the sequence randomly would be 1 in 4^1,200,000, or about 1 in 10^722,000. This probability is hard to imagine, but an illustration may help.
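The base change from 4^1,200,000 to a power of ten is a one-line computation (log10 4 is about 0.602), confirming the ~10^722,000 figure:

```python
import math

# Express 4^1,200,000 as a power of ten.
base_pairs = 1_200_000
exponent = base_pairs * math.log10(4)
print(round(exponent))     # 722472, i.e. roughly 10^722,000
```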

Life on earth: a cosmic origin?
http://bip.cnrs-mrs.fr/bip10/hoyle.htm
Hoyle and Wickramasinghe examine the probability that an enzyme consisting of 300 residues could be formed by random shuffling of residues, and calculate a value of 1 in 10^250, which becomes 1 in 10^500,000 if one takes account of the need for 2,000 different enzymes in a bacterial cell. Comparing this calculation with the total of 10^79 atoms in the observable universe, they conclude that life must be a cosmological phenomenon.

Imagine covering the whole of the USA with small coins, edge to edge. Now imagine piling other coins on each of these millions of coins. Now imagine continuing to pile coins on each coin until reaching the moon, about 400,000 km away. Suppose you were told that within this vast mountain of coins there was one coin different from all the others: the statistical chance of finding that one coin is about 1 in 10^55.

Furthermore, what good would functional proteins be if they were not transported to the right site in the cell, inserted in the right place, and interconnected to start the fabrication of the chemical compounds used in the cell? It is clear that life had to start based on fully operating cell factories, able to self-replicate, adapt, produce energy, and regulate their sophisticated molecular machinery.


https://reasonandscience.catsboard.com/t2245-abiogenesis-the-factory-maker-argument

https://www.youtube.com/watch?v=f8oGda1JxKw


It is an interesting question to elucidate what a theoretical minimal cell would be, since based on that information we can figure out what it would take for first life to begin on the early earth. That gives us a probability to assess when someone proposes natural, unguided mechanisms based on chemical reactions and atmospheric and geological circumstances. The fact that we don't know the composition of the atmosphere back then does no harm; that information is not necessary for our inquiry.

Rather than one of the smallest free-living cells, we can take the one claimed by Science magazine to be a minimal bacterial genome. That cell, however, would not have the metabolic pathways to synthesize the 20 amino acids used in life. According to a peer-reviewed scientific paper published in Science magazine in 2016, Design and synthesis of a minimal bacterial genome, their best approximation to a minimal cell has a 531,000-base-pair genome that encodes 473 gene products. It is substantially smaller than M. genitalium (580 kbp), which has the smallest genome of any naturally occurring cell that has been grown in pure culture, a genome containing the core set of genes required for cellular life. That means all its genes are essential and irreducible. It encodes 438 proteins.

Regardless of whether the actual minimum is 100,000 or 500,000 nucleotides, this is all beyond a nucleic acid technology that struggles with falling apart at 200 nucleotides. The current understanding of information can give many explanations of the difficulties of creating it; it cannot explain where it comes from.

Protein-length distributions for the three domains of life
The average protein length in these 110 clusters of orthologous genes (COGs) is 359 amino acids for prokaryotes and 459 for eukaryotes.
https://pdfs.semanticscholar.org/5650/aaa06de4de11c36a940cf29c07f5f731f63c.pdf

Proteins are the result of the instructions stored in DNA, which specifies the complex sequences necessary to produce functional 3D protein folds. Both improbability and specification are required in order to justify an inference of design.
1. According to the latest estimate of a minimal protein set for the first living organism, the requirement would be about 438 proteins; this would be the absolute minimum to keep the basic functions of a cell alive.
2. According to the protein-length distributions for the three domains of life, prokaryotic and eukaryotic cells average about 400 amino acids per protein.
3. Each of the 400 positions in an amino acid polypeptide chain could be occupied by any one of the 20 amino acids used in cells, so if we suppose that proteins emerged randomly on the prebiotic earth, the odds of getting one that would fold into a functional 3D protein would be 1 in 20^400, or about 1 in 10^520, a truly enormous, super-astronomical number.
4. Since the minimal set requires 438 proteins, the shuffle would have to be repeated 438 times to get all the proteins required for life. The combined probability is therefore (1 in 10^520)^438, which comes to about 1 in 10^228,000.
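As a sketch, under the assumptions of the steps above (438 proteins, 400 residues each, 20 equiprobable amino acids per position), the exponents can be computed as:

```python
import math

# Exponents for one protein and for the full minimal set (assumptions above).
residues_per_protein = 400
amino_acids = 20
proteins = 438                                  # minimal set from step 1
per_protein_exp = residues_per_protein * math.log10(amino_acids)
total_exp = proteins * per_protein_exp
print(round(per_protein_exp))   # 520: one protein is ~1 in 10^520
print(round(total_exp))         # 227940: full set is roughly 1 in 10^228,000
```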

Exemplification
Several famous physicists, including Stephen Hawking, worked out that the relationship between the fundamentals of our universe is so finely tuned that if we slightly changed even the 55th decimal point, our universe could not exist. Put another way, several leading scientists have calculated that the statistical probability of this fine-tuning arising by chance is on the order of 1 in 10^55.

This probability is hard to imagine, but an illustration may help. Imagine covering the whole of the USA with small coins, edge to edge. Now imagine piling other coins on each of these millions of coins. Now imagine continuing to pile coins on each coin until reaching the moon, about 400,000 km away. Suppose you were told that within this vast mountain of coins there was one coin different from all the others: the statistical chance of finding that one coin is about 1 in 10^55. In other words, the evidence that our universe is designed is overwhelming!

How to make the calculations:
https://web.archive.org/web/20170423032439/http://creationsafaris.com/epoi_c06.htm#ec06f12x

Granted, the calculation does not take into consideration, nor give information on, the probabilistic resources available. But the sheer gigantic number of possibilities throws any reasonable possibility out of the window.

If we sum up the total number of amino acids for a minimal cell, there would have to be about 1,300 proteins x an average of 400 amino acids = 520,000 amino acids, which would have to be bonded in the right sequence, choosing for each position amongst 20 different amino acids, and selecting only the left-handed ones while sorting out the right-handed ones. That means each position would have to be selected correctly out of 40 variants: 1 right selection out of 40^520,000 possibilities, or roughly 10^833,000! Obviously, this is a gigantic number, far above any realistic probability of occurring by unguided events. Even a trillion universes, each hosting a trillion planets, each shuffling a trillion times in a trillionth of a second, continuously for a trillion years, would not be enough. Such astronomically, unimaginably gigantic odds are in the realm of the utmost extremely impossible.
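The same logarithm trick gives the exponent for 40^520,000 (log10 40 is about 1.602):

```python
import math

# Express 40^520,000 as a power of ten.
positions = 1300 * 400            # 520,000 amino-acid positions
variants_per_position = 40        # 20 amino acids x 2 chiralities
exponent = positions * math.log10(variants_per_position)
print(round(exponent))            # 833071, roughly 10^833,000
```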

https://reasonandscience.catsboard.com/t2508-abiogenesis-calculations-of-a-primordial-ancestor-with-a-minimal-proteome-emerging-through-unguided-natural-random-events

Helicase
Helicases are astonishing motor proteins, whose rotational speed reaches up to 10,000 rotations per minute, and they are essential for life.

How Many Genes Can Make a Cell: The Minimal-Gene-Set Concept
https://www.ncbi.nlm.nih.gov/books/NBK2227/

We propose a minimal gene set composed of 206 genes. Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell with the following features.

(i) A virtually complete DNA replication machinery, composed of one nucleoid DNA binding protein, SSB, DNA helicase, primase, gyrase, polymerase III, and ligase. No initiation and recruiting proteins seem to be essential, and the DNA gyrase is the only topoisomerase included, which should perform both replication and chromosome segregation functions.

Helicases are a class of enzymes vital to all living organisms. Their main function is to unpackage an organism's genes, and they are essential for DNA replication, and thus for evolution to be able to occur at all. They require about 1,000 left-handed amino acids in the right specified sequence. Each of the 1,000 amino acids must be the right one amongst the 20 to choose from. How did they emerge by natural processes? The chance of getting them by random chemical reactions is 1 in 20^1000; for comparison, there are only 10^80 atoms in the universe.
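The helicase figure can be expressed as a power of ten in the same way:

```python
import math

# Express 20^1000 as a power of ten.
residues = 1000
amino_acids = 20
exponent = residues * math.log10(amino_acids)
print(round(exponent))    # 1301: 20^1000 is about 10^1301
```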

The odds are in reality far worse. There exist hundreds of different amino acids, which supposedly were extant on the early earth. Amongst these, an unknown selection process would have had to pick out the 20 amino acids used in life, and then select only the left-handed ones from a mix of left- and right-handed forms.

But the problems do not stop there. 

Cairns-Smith, the Genetic Takeover, page 59:
For one overall reaction, making one peptide bond, about 90 distinct operations are required. If you were to consider in more detail a process such as the purification of an intermediate, you would find many subsidiary operations — washings, pH changes and so on.

1. The synthesis of proteins and nucleic acids from small-molecule precursors, and the formation of amide bonds without the assistance of enzymes, represent one of the most difficult challenges to the model of pre-vital (chemical) evolution, and for theories of the origin of life.
2. The best one can hope for from such a scenario is a racemic polymer of proteinous and non-proteinous amino acids with no relevance to living systems.
3. Polymerization is a reaction in which water is a product. Thus it will only be favoured in the absence of water. The presence of precursors in an ocean of water favours depolymerization of any molecules that might be formed.
4. Even if there were billions of simultaneous trials as the billions of building-block molecules interacted in the oceans, or on the thousands of kilometers of shorelines that could provide catalytic surfaces or templates, and even if, as is claimed, there was no oxygen on the prebiotic earth, then there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if a sequence producing a functional folding protein did arise, by itself, not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destination, where they are utilized. Obviously, none of this was extant on the early earth.
5. To form a chain, it is necessary to react bifunctional monomers, that is, molecules with two functional groups so they combine with two others. If a unifunctional monomer (with only one functional group) reacts with the end of the chain, the chain can grow no further at this end. If only a small fraction of unifunctional molecules were present, long polymers could not form. But all ‘prebiotic simulation’ experiments produce at least three times more unifunctional molecules than bifunctional molecules.



Like the Holy Grail, a universal DNA ‘minimal genome’ has remained elusive despite efforts to define it. 10 Gene essentiality has to be defined within the specific context of the bacterium, growth conditions, and possible environmental fluctuations. Gene persistence can be used as an alternative because it provides a more general framework for defining the requirements for long-term survival via identification of universal functions. These functions are contained in the paleome, which provides the core of the cell chassis. The paleome is composed of about 500 persistent genes. 

We can take an even smaller organism, which is regarded as one of the smallest possible, and the situation does not change significantly:
The simplest known free-living organism, Mycoplasma genitalium, has the smallest genome of any free-living organism: 580,000 base pairs. This is an astonishingly large number for such a ‘simple’ organism. It has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are 1 in 10^451. If we calculate the entire proteome, 470 x 347 = 163,090 amino acids, the odds become 1 in 20^163,090, or about 10^212,000, even if we disregard that nature had to select only left-handed and bifunctional amino acids.
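
These exponents can be checked with a few lines of Python (a sketch using the figures quoted above):

```python
import math

log20 = math.log10(20)   # ~1.301; converts powers of 20 into powers of 10

# One average M. genitalium protein: 347 amino acids
print(round(347 * log20))          # 451 -> odds about 1 in 10^451

# Whole proteome: 470 proteins x 347 amino acids each
positions = 470 * 347
print(positions)                   # 163090
print(round(positions * log20))    # 212185 -> 20^163,090 is about 10^212,000
```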

Common objections to the argument from Improbability

Objection: Arguments from probability are drivel. So far, the likelihood that life would form the way it did is 1.
Response: The classic argument given in response is that one shouldn't be surprised to observe that life exists, since if it didn't, we wouldn't be here to observe it. Therefore, the fact that we exist means that the existence of life should only be expected, and is not at all surprising. This response obviously begs the question.

This argument is like a situation where a man stands before a firing squad of 10,000 riflemen who take aim and fire, but all miss him. By the above logic, this man should not be at all surprised to still be alive, because if they hadn't missed, he wouldn't be alive. The nonsense of this line of reasoning is obvious. Surprise at the extreme odds of life emerging randomly, given the hypothesis of a mindless origin, is to be expected in the extreme.

A protein requires a threshold of minimal size to fold and become functional within the milieu where it will operate. That threshold is on average 400 amino acids. Until that minimal size is reached, the polypeptide chain bears no function, so each protein can be considered irreducibly complex. Practically everyone has identically the same kind of haemoglobin molecules in his or her blood, identical down to the last amino acid and the last atom. Anyone having a different haemoglobin would be seriously ill or dead, because only the very slightest changes can be tolerated by the organism.

A. I. Oparin
Even the simplest of these substances [proteins] represent extremely complex compounds, containing many thousands of atoms of carbon, hydrogen, oxygen, and nitrogen arranged in absolutely definite patterns, which are specific for each separate substance. To the student of protein structure the spontaneous formation of such an atomic arrangement in the protein molecule would seem as improbable as would the accidental origin of the text of Virgil’s “Aeneid” from scattered letter type.1

In order to start a probability calculation, it would have to be pre-established somehow that the twenty amino acids used in life had been pre-selected out of the over 500 different kinds of amino acids known in nature. They would have to be collected in one place, where they would be readily available. Secondly, amino acids are chiral, that is, they come in left- and right-handed forms. Life requires that all amino acids be left-handed (homochirality). So another selection mechanism would have to exist, sorting out the correct ones so that only left-handed amino acids remain (cells use complex biosynthesis pathways and enzymes to produce only left-handed amino acids). So suppose that somehow, out of the prebiotic pond, the 20 amino acids, all left-handed, were sorted out.

The probability of generating one amino acid chain of 400 amino acids in successive random trials is then (1/20)^400, or about 1 in 10^520.

=============================================================================================================================================

Objection: The physical laws, the laws of biochemistry, those aren't chance. The interaction of proteins, molecules, and atoms, their interaction is dictated by the laws of the universe.
Response: While it is true that the chemical bonds gluing one amino acid to the next are subject to chemical properties, there are neither bonds nor bonding affinities, differing in strength or otherwise, that can explain the origin of the specificity of the sequence in which the 20 types of amino acids have to be put together for a protein to bear function. What dictates the sequence of amino acids in proteins in modern cells is the DNA code.

DNA contains true codified instructional information, a blueprint. Being instructional information means that the codified nucleotide sequence forming the instructions is free and unconstrained; any of the four bases can be placed at any position in the sequence. Their order is not determined by chemical bonding. There are hydrogen bonds between the base pairs, and each base is bonded to the sugar-phosphate backbone, but there are no bonds along the longitudinal axis of DNA. The bases occur in the complementary pairs A-T and G-C, but along one strand the bases can occur in any order, like the letters of a language used to compose words and sentences. Since nucleotides can be arranged freely into any informational sequence, physical necessity could not be the driving mechanism.

=============================================================================================================================================

Objection: Logical Fallacy. Argument from Improbability. Just because a thing is highly unlikely to occur doesn't mean that its occurrence is impossible. No matter how low the odds, if there is a chance that it will happen, it will happen given enough tries, especially considering a large enough universe with countless potentially "living" planets. Nobody claims that winning the lottery doesn't happen just because the chance of a specific individual winning it is low.
Response: So you're saying there's a chance... Murphy's Law states that given enough time and opportunity, anything THAT CAN HAPPEN will happen. The phrase "can happen" is very important. A natural, non-thinking, random cosmos, void of purpose and reason, is unable to produce what we see without information. Life without information is not only improbable but impossible, from what we know about the nature of complexity, design, and purpose. Given a planet full of simpler non-self-replicating organic molecules, an abundant but not excessive input of energy from the local star, and a few thousand million years, one could imagine that the number of "tries" to assemble that one self-replicating organic molecule would easily exceed 5x10^30. But you can't just vaguely appeal to vast and unending amounts of time (and other probabilistic resources) and assume that self-assembly, spontaneously, by orderly aggregation in a sequentially correct manner without external direction, can produce anything "no matter how complex," given enough trials. Rather, it has to be demonstrated that sufficient probabilistic resources, or random, non-guided mechanisms, indeed exist to produce the feature. The fact is, we know from repeated experience and demonstration that intelligence can and does envision, project, and elaborate complex blueprints, and upon its instructions produces the objects in question. It has NEVER been demonstrated that unguided events without specific purposes can do the same. Such examples also fail to take into consideration that such shuffling requires energy applied in a specific form, and that the environmental conditions must permit the basic molecules not to be destroyed by UV radiation. The naturalistic proposal amounts more to assaulting the intelligence of those who oppose it with a range of assertions for which proponents of naturalism really have no answer as to how these mechanisms actually work.
To argue that forever is long enough for the complexity of life to reveal itself is an untenable argument. The numbers are off any scale we can relate to as a possible explanation of what we see of life. Notwithstanding, some go so far as to say it is all accounted for already, as if they knew something nobody else does. The fact is, science has been struggling for decades to unravel the mystery of how life could have emerged, with no solution in sight.

Eugene Koonin, advisory editorial board of Trends in Genetics, writes in his book The Logic of Chance: The Nature and Origin of Biological Evolution, page 351:
The origin of life is the most difficult problem that faces evolutionary biology and, arguably, biology in general. Indeed, the problem is so hard and the current state of the art seems so frustrating that some researchers prefer to dismiss the entire issue as being outside the scientific domain altogether, on the grounds that unique events are not conducive to scientific study.


=============================================================================================================================================

Objection: The entire premise is incorrect to start off with, because in modern abiogenesis theories the first "living things" would be much simpler, not even a protobacteria, or a preprotobacteria (what Oparin called a protobiont and Woese calls a progenote), but one or more simple molecules probably not more than 30-40 subunits long.
Response: In his book The Fifth Miracle: The Search for the Origin and Meaning of Life, Paul Davies lists the following characteristics of life:
Reproduction. Metabolism. Homeostasis. Nutrition. Complexity. Organization. Growth and development. Information content. Hardware/software entanglement. Permanence and change.

The paper "The proteomic complexity and rise of the primordial ancestor of diversified life" describes the last universal common ancestor as the primordial cellular organism from which diversified life was derived. This urancestor accumulated genetic information before the rise of organismal lineages and is considered to be a complex 'cenancestor' with almost all essential biological processes.
http://europepmc.org/articles/pmc3123224

=============================================================================================================================================

Objection: These simple molecules then slowly evolved into more cooperative self-replicating system, then finally into simple organisms
Response: Koonin refutes that claim in his book, The Logic of Chance, page 266:
Evolution by natural selection and drift can begin only after replication with sufficient fidelity is established. Even at that stage, the evolution of translation remains highly problematic. The emergence of the first replicator system, which represented the “Darwinian breakthrough,” was inevitably preceded by a succession of complex, difficult steps for which biological evolutionary mechanisms were not accessible. The synthesis of nucleotides and (at least) moderate-sized polynucleotides could not have evolved biologically and must have emerged abiogenically—that is, effectively by chance abetted by chemical selection, such as the preferential survival of stable RNA species. Translation is thought to have evolved later via an ad hoc selective process. (Did you read this?! An ad hoc process?!)

=============================================================================================================================================

Objection: Another view is the first self-replicators were groups of catalysts, either protein enzymes or RNA ribozymes, that regenerated themselves as a catalytic cycle. It's not unlikely that a small catalytic complex could be formed. Each step is associated with a small increase in organisation and complexity, and the chemicals slowly climb towards organism-hood, rather than making one big leap
Response: Eugene V. Koonin, The Logic of Chance: The Nature and Origin of Biological Evolution:
Hence, the dramatic paradox of the origin of life is that, to attain the minimum complexity required for a biological system to start on the Darwin-Eigen spiral, a system of a far greater complexity appears to be required. How such a system could evolve is a puzzle that defeats conventional evolutionary thinking, all of which is about biological systems moving along the spiral; the solution is bound to be unusual.

=============================================================================================================================================

Objection: As to the claim that the sequences of proteins cannot be changed, again this is nonsense. There are in most proteins regions where almost any amino acid can be substituted, and other regions where conservative substitutions (where charged amino acids can be swapped with other charged amino acids, neutral for other neutral amino acids and hydrophobic amino acids for other hydrophobic amino acids) can be made.
Response: A protein requires a threshold of minimal size to fold and become functional within the milieu where it will operate. That threshold is on average 400 amino acids. Until that minimal size is reached, the polypeptide chain bears no function, so each protein can be considered irreducibly complex. Moreover, practically everyone has identically the same kind of haemoglobin molecules in his or her blood, identical down to the last amino acid and the last atom. Anyone having a different haemoglobin would be seriously ill or dead, because only the very slightest changes can be tolerated by the organism.


=============================================================================================================================================

Objection: We were examining sequential trials as if there was only one protein/DNA/proto-replicator being assembled per trial. In fact there would be billions of simultaneous trials as the billions of building block molecules interacted in the oceans, or on the thousands of kilometers of shorelines that could provide catalytic surfaces or templates.
Response: The maximal number of possible SIMULTANEOUS interactions in the entire history of the universe, starting 13.7 billion years ago, can be calculated by multiplying the three relevant factors together: the number of atoms in the universe (10^80), times the number of seconds that have passed since the big bang (10^16), times the fastest rate at which one atom can change its state per second (10^43). This fixes the total number of events that could have occurred in the observable universe since its origin at 10^139, providing a measure of the probabilistic resources of the entire observable universe.

The number of atoms in the entire universe = 1 x 10^80
The estimated age of the universe: 13.7 billion years; in seconds, that would be = 1 x 10^16
The fastest rate at which an atom can change its state = 1 x 10^43 per second
Therefore, the maximum number of possible events in a universe that is 13.7 billion years old (10^16 seconds), where every atom (10^80) changes its state at the maximum rate of 10^43 times per second, is 10^139.

(That is, a 1 followed by 139 zeros.)
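
The bound is just the product of the three factors; as a quick check:

```python
# Upper bound on the number of state changes ("events") in the observable
# universe, using the three figures quoted above.
atoms = 10**80       # atoms in the observable universe
seconds = 10**16     # seconds since the big bang (figure used in the text)
rate = 10**43        # fastest state changes per atom per second

max_events = atoms * seconds * rate
print(len(str(max_events)) - 1)   # 139 -> max_events equals 10^139
```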

If the odds of an event occurring fall below the threshold set by the entire probabilistic resources of the universe, then we can confidently say that the event will not occur by chance.

Furthermore, it is claimed that there was no oxygen on the prebiotic earth. If so, there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if a sequence producing a functional folding protein did arise, by itself, not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destination, where they are utilized. Obviously, none of this was extant on the early earth.

Let's suppose all the atoms in the universe (roughly 10^80) were small molecules. Getting the right specified sequence for even a short stretch of about 60 amino acids, a small fraction of one of the 1,300 average-sized proteins, carries odds of about 1 in 20^61, roughly 1 in 10^80. That is like painting just one of all these molecules in the universe red, then throwing a dart randomly and, by luck, hitting the red molecule on the first attempt.



Last edited by Otangelo on Fri Oct 06, 2023 6:40 am; edited 136 times in total

=============================================================================================================================================

Objection: What is entirely missing from such arguments as above is the demonstration that a specific modern "alive" configuration is the only possible one. Calculating probabilities starting with the end result commits the classic fallacy of painting the target after the arrow has been shot, pretending that a goal has been specified in advance; any event becomes arbitrarily improbable if specified with sufficient accuracy.
Response:  We're not talking about just any random outcome. We're discussing the emergence of specific, highly functional molecular machines - enzymes that perform precise tasks essential for life. When we calculate these probabilities, we're not just listing off a series of mundane daily events and calling them improbable. We're looking at the odds of forming molecules with very specific functions. Each enzyme has a particular sequence of amino acids that folds into a unique three-dimensional structure, creating an active site that catalyzes a specific chemical reaction. That's not just any outcome - it's a highly specified one. The argument from improbability isn't a fallacy when applied correctly. We're not saying "What are the odds of any proteins forming?" We're asking, "What are the odds of forming proteins that can actually do the complex jobs required for life?" That's a crucial distinction.

We're talking about the origin of life: the very first emergence of living systems from non-living matter. This isn't about evolution or natural selection, which require replication and inheritance to function. We're looking at how those first critical molecules necessary for life could have formed. The probabilities we're dealing with are staggering. The odds of even one such molecule forming by chance are astronomically low. We're not talking about just any random arrangement of amino acids, but a specific sequence that folds into a precise three-dimensional structure capable of performing a particular function. Now multiply that improbability by the number of different proteins required for the most basic form of life. The figures quickly become so large they're practically meaningless, and they are actually conservative when you consider the full complexity involved.

Proponents of naturalistic origins often point to chemical affinities or self-organization, but these don't address the information content required for life. They might explain how certain molecules stick together, but not how they arrange themselves into the highly specific, functional sequences necessary for life. Yes, the early Earth had a lot of time and materials to work with. But time and chance alone don't solve the problem; they actually make it worse. With more time, you get more breakdown of complex molecules, not more building up. The step-wise accumulation idea doesn't hold water either: each step needs to be functional and provide an advantage to be preserved, but what's the function of half an enzyme or a partial DNA strand? Alternative hypotheses like the RNA World or metabolism-first scenarios just push the problem back a step. They still require highly specific, functional molecules to emerge from random processes.

Our incomplete knowledge actually strengthens the design argument. The more we learn about the complexity of life at its most fundamental level, the more implausible a chance origin becomes. This isn't about giving up on science or invoking miracles. It's about following the evidence where it leads. And the evidence of extreme, specified complexity in even the simplest living systems points strongly towards intelligent design, not blind, undirected processes.

We have taken as our premise what science has demonstrated to be the minimal requirement to start life, and therefore the objection fails. The above calculation does not take into consideration that about 500 naturally occurring amino acids are known. So let us suppose that they were extant on the prebiotic earth, and that somehow a selective process chose, selected, and concentrated just the 20 used in life. These 20 could still come in two versions, left-handed and right-handed; life uses only left-handed amino acids, so they would also have to be sorted out. Now let us suppose that by freak accident, after crazy shuffling trillions of trillions of times, suddenly the right protein set were there: the right proteins, the right assortment. It is claimed that there was no oxygen on the prebiotic earth. If so, there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if a sequence producing a functional folding protein did arise, by itself, not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destination, where they are utilized. Obviously, none of this was extant on the early earth.
 
Peptide bonding of amino acids to form proteins and its origins
https://reasonandscience.catsboard.com/t2130-peptide-bonding-of-amino-acids-to-form-proteins-and-its-origins

Most of the cell’s important functions are carried out by compounds called proteins, which are chains of amino acids linked together. There are 20 amino acids which can be arranged in any combination, and the average protein consists of over 400 amino acids linked together. A protein’s characteristics and function are determined by the number and particular arrangement of its amino acids; a protein can be likened to a sentence which derives its meaning from the particular arrangement of its letters. 4 According to evolutionary theories, amino acids were synthesized spontaneously and then linked together to form the first protein from a generic amino acid “soup.” In experiments attempting to synthesize amino acids, the products have been a mixture of right-handed and left-handed amino acids. (Amino acids, as well as other organic compounds, can exist in two forms which have the same chemical composition but are three-dimensional mirror images of each other, termed right- and left-handed amino acids.) Every protein in a living cell is composed entirely of left-handed amino acids, even though the right-handed isomer can react in the same way. Thus, if both right- and left-handed amino acids were synthesized in this primitive organic soup, we are faced with the question of how life came to use only left-handed amino acids for proteins.

We can represent this dilemma by picturing a huge container filled with millions of white (left-handed amino acids) and black (right-handed amino acids) jelly beans. What would be the probability of a blind-folded person randomly picking out 410 white jelly beans (representing an average-sized protein) and no black jelly beans? The odds that the first 410 jelly beans would be all one color are 1 in 2^410, or about 10^123. To put the odds in perspective, there are only about 10^18 seconds in 4.5 billion years, the approximate claimed age of the earth, and it has been estimated that there are only 10^80 particles in the universe.
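
The jelly-bean exponent can be verified quickly (each of the 410 draws is treated as a 50/50 left/right choice):

```python
import math

# Odds that 410 random draws are all the same handedness: 1 in 2^410.
# Convert to a power of ten via the base-10 logarithm.
exponent = 410 * math.log10(2)
print(round(exponent))   # 123 -> 2^410 is about 10^123
```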

Yet the odds against choosing all left-handed amino acids, without even considering their particular order or specific arrangement, are far greater than these numbers!

So let us suppose that there were about 500 different amino acids on the prebiotic earth, and that somehow a selective process chose, selected, and concentrated just the 20 used in life, and sorted out all right-handed amino acids, so that only left-handed ones remained.

About 500 naturally occurring amino acids are known (though only 20 appear in the genetic code) and can be classified in many ways
https://en.wikipedia.org/wiki/Amino_acid

We assumed a theoretical average size of 400 amino acids per protein. Since each of the 400 positions in the chain could be occupied by any one of the 20 amino acids, the total number of possible arrangements is 20^400, a truly enormous, super-astronomical number. The odds for one specified protein of 400 amino acids to emerge randomly are therefore 1 in 20^400, which is about 1 in 10^520.
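
Converting 20^400 into a power of ten takes one line:

```python
import math

# Sequence space for a 400-amino-acid chain over a 20-letter alphabet:
# 20^400 arrangements.  Express the size as a base-10 exponent.
arrangements_exponent = 400 * math.log10(20)
print(round(arrangements_exponent))   # 520 -> 20^400 is about 10^520
# So a single specified sequence has probability (1/20)^400, about 1 in 10^520.
```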

[Images: probability calculation formulas]

Considering the first protein as already obtained, we need 559 more. Since the second could be any of the 559 still needed, its probability is 559/10^520; the third could be any of the 558 still needed, so its probability would be 558/10^520, and so on. Calculating all of these, and allowing for one substitution per chain, we arrive at a probability far beyond 1 in 10^100,000.
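
A straightforward version of this sequential calculation, without the substitution allowance, can be sketched as follows (using the 1-in-10^520 per-protein odds from above; multiplying over all 560 proteins gives a probability even smaller than the 1 in 10^100,000 figure):

```python
import math

n_proteins = 560     # assumed minimal proteome size (figure used above)
log_p_one = -520     # log10 of the 1-in-10^520 odds for one usable 400-aa protein

# When k protein types are still missing, any of the k would do,
# so that step has probability k / 10^520.  Multiply over all 560 steps
# by summing base-10 logarithms.
log_p_total = sum(math.log10(k) + log_p_one for k in range(1, n_proteins + 1))
print(round(-log_p_total))   # about 289902 -> roughly 1 in 10^290,000
```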

To link two amino acids together requires the removal of a water molecule and the supply of some 150 times more energy than the heat in the Earth's oceans could supply. In the absence of a joining enzyme as used by biology, or without an excessively large flux of ultraviolet light at the ocean surface, no new arrangements could be achieved. But even if the chemical barriers to these linkages were artificially and miraculously removed, the truly vast improbability of 1 in 10^40,000 poses a serious dilemma for the whole of evolutionary science. 6

For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small: somewhere between 1 in 10^140 and 1 in 10^164 for a 150-amino-acid-long sequence, if we factor in the probabilities of forming peptide bonds and of incorporating only left-handed amino acids. 5

Protein families are grouped into Clusters of Orthologous Groups, or COGs, which typically serve the same function. This 561-COG set gives a low-end estimate for the LUCA proteome.
Prebiotic Evolution and Astrobiology, 2009, Landes Bioscience, Austin, Texas

A minimal estimate for the gene content of the last universal common ancestor—exobiology from a terrestrial perspective
A truly minimal estimate of the gene content of the last universal common ancestor, obtained by three different tree construction methods and the inclusion or not of eukaryotes (in total, there are 669 ortholog families distributed in 561 functional annotation descriptions, including 52 which remain uncharacterized)
http://sci-hub.ren/https://www.ncbi.nlm.nih.gov/pubmed/16431085/

The proteomic complexity and rise of the primordial ancestor of diversified life
A more recent study of 184 genomes identified 669 orthologous protein families, which cover 561 detailed functional classes that are involved in almost all essential biological processes of extant life, including translation, transcription and its regulation, DNA replication, recombination, and repair, transport and membrane-associated functions, electron transfer, and metabolism
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3123224/

Protein-length distributions for the three domains of life
The average protein length of these 110 clusters of orthologous genes COGs is 359 amino acids for the prokaryotes and 459 for eukaryotes.
https://pdfs.semanticscholar.org/5650/aaa06de4de11c36a940cf29c07f5f731f63c.pdf

So let us make the theoretical assumption that the threshold for a first living cell is 560 proteins with an average length of 400 amino acids.

The length of the average protein in the smallest known living thing is at least 400 amino acid links, containing more than 7,000 atoms.
https://web.archive.org/web/20170423032439/http://creationsafaris.com/epoi_c06.htm

Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations
http://www.talkorigins.org/faqs/abioprob/abioprob.html
Claim: Firstly, the formation of biological polymers from monomers is a function of the laws of chemistry and biochemistry, and these are decidedly not random.
Response: The amino acid sequence of a polypeptide, together with the laws of chemistry and physics, causes the polypeptide to fold into a more compact structure. But the sequence that permits a protein to fold into a stable conformation is not determined by the laws of chemistry; it is dictated by the complex, specified code in DNA, which in a functional cell specifies the amino acids to be synthesized at the ribosome. The instruction for the 3D structure of a protein is embedded in the sequence of amino acids and the electrochemical attractive forces among them, and these are determined by the genetic code. Most proteins include all of the usual twenty kinds of amino acids, and each protein has a specific, exact sequence of these units.

Wiki: The genetic code is the set of rules used by living cells to translate information encoded within genetic material (DNA or mRNA sequences) into proteins. Translation is accomplished by the ribosome, which links amino acids in an order specified by messenger RNA (mRNA)

To arrive at a statistical "proof," we need a reasonable criterion to judge it by:
As just a starting point, consider that many statisticians hold that any occurrence with a chance of happening that is less than one chance out of 10^50 is an occurrence with so slim a probability that it is, in general, statistically considered to be zero. (10^50 is the number 1 with 50 zeros after it, and it is spoken: "10 to the 50th power".) This appraisal seems fairly reasonable when you consider that 10^50 is about the number of atoms which make up the planet Earth. So overcoming one chance out of 10^50 is like marking one specific atom of the Earth, mixing it in completely, and then having someone make one blind, random selection, which turns out to be that specific marked atom. Most mathematicians and scientists have accepted this statistical standard for many purposes.
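The claim that 10^50 is roughly the number of atoms making up the Earth can be sanity-checked with a back-of-the-envelope estimate. The mass and mean atomic mass figures below are standard reference values, not from the text above; the mean atomic mass is a rough bulk average over oxygen, silicon, magnesium, and iron.

```python
import math

# Assumed reference values (not from the source text):
EARTH_MASS_G = 5.97e27        # Earth's mass in grams (~5.97e24 kg)
MEAN_ATOMIC_MASS = 30.0       # rough bulk-Earth average, g/mol
AVOGADRO = 6.022e23           # atoms per mole

# Atoms = (mass / mean atomic mass) * Avogadro's number
atoms_in_earth = EARTH_MASS_G / MEAN_ATOMIC_MASS * AVOGADRO
print(f"~10^{math.log10(atoms_in_earth):.0f} atoms")  # on the order of 10^50
```

The exact mean atomic mass hardly matters: varying it between 20 and 60 g/mol moves the result by less than half an order of magnitude, so the 10^50 figure holds up.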

Les probabilités dénombrables et leurs applications arithmétiques
Par M. Émile Borel (Paris), 8 novembre 1908
http://sci-hub.ren/https://link.springer.com/article/10.1007/BF03019651

Strong law of large numbers
https://www.encyclopediaofmath.org/index.php/Strong_law_of_large_numbers

==========================================================================================================================================

Proteins are complex semantophoretic macromolecules, that is, molecules whose sequences carry genetic information.

How Did Protein Synthesis Evolve?
The molecular processes underlying protein synthesis in present-day cells seem inextricably complex. Although we understand most of them, they do not make conceptual sense in the way that DNA transcription, DNA repair, and DNA replication do. It is especially difficult to imagine how protein synthesis evolved because it is now performed by a complex interlocking system of protein and RNA molecules; obviously the proteins could not have existed until an early version of the translation apparatus was already in place. As attractive as the RNA world idea is for envisioning early life, it does not explain how the modern-day system of protein synthesis arose.
Molecular biology of the cell, 6th ed. pg. 365

The corresponding DNA sequences dictate the amino acid sequences.  Specific functionality of a given protein is defined by a unique spatial positioning of its amino acid side chains and prosthetic groups, suggesting that such a specific spatial arrangement of functional groups in biologically active proteins is defined by their unique 3D structures predetermined by the unique amino acid sequences encoded in unique genes.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7014577/

Estimating the prevalence of protein sequences adopting functional enzyme folds.
Combined with the estimated prevalence of plausible hydropathic patterns (for any fold) and of relevant folds for particular functions, this implies the overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10(77), adding to the body of evidence that functional folds require highly extraordinary sequences.
https://www.ncbi.nlm.nih.gov/pubmed/15321723?fbclid=IwAR2WqQIOoD3Opw1tmhd6Z5K76yAcJ-w_DbwlWnPml5jVxM34YxC9l7N3PHw


Calculations of life beginning through unguided, natural, random events

https://reasonandscience.catsboard.com/t2508-calculations-of-life-beginning-through-unguided-natural-random-events

Proteins are the result of the DNA blueprint, which specifies the complex sequences necessary to produce functional 3D protein folds. Both improbability and specification are required in order to justify an inference of design.
1. According to the latest estimation of a minimal protein set for the first living organism, the requirement would be about 560 proteins, this would be the absolute minimum to keep the basic functions of a cell alive. 
2. According to the Protein-length distributions for the three domains of life, there is an average between prokaryotic and eukaryotic cells of about 400 amino acids per protein. 8
3. Each of the 400 positions in the polypeptide chain could be occupied by any one of the 20 amino acids used in cells. So if we suppose that proteins emerged randomly on the prebiotic earth, the odds of getting one that would fold into a functional 3D protein are 1 in 20^400, or about 1 in 10^520. A truly enormous, super-astronomical number.
4. Since we need 560 proteins in total to make a first living cell, we would have to repeat the shuffle 560 times to get all the proteins required for life. The combined probability is therefore (1 in 10^520)^560, far beyond 1 in 10^100,000. (A proteome set with 239 proteins yields odds of approximately 1 in 10^119,614.) 7
Granted, the calculation neither takes into consideration nor gives information on the probabilistic resources available. But the sheer gigantic number of possibilities throws any reasonable possibility out of the window.
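The arithmetic in steps 3 and 4 above can be checked in log space, where multiplying probabilities becomes adding exponents. This is a minimal sketch; the 400-residue average length and the 560-protein count are the assumptions stated in steps 1 and 2.

```python
import math

AA_ALPHABET = 20   # amino acids used in life
PROTEIN_LEN = 400  # assumed average protein length (step 2)
N_PROTEINS = 560   # assumed minimal proteome (step 1)

# Odds for one protein: 20^400, expressed as a power of ten.
one_protein_exp = PROTEIN_LEN * math.log10(AA_ALPHABET)
print(f"one protein: 1 in 10^{one_protein_exp:.0f}")  # ~1 in 10^520

# Repeating the draw for all 560 proteins multiplies the odds,
# i.e., adds the exponents in log space.
proteome_exp = N_PROTEINS * one_protein_exp
print(f"full proteome: 1 in 10^{proteome_exp:.0f}")  # roughly 10^291,000, far beyond 10^100,000
```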

Helicases are astonishing motor proteins, essential for life, whose rotational speed reaches up to 10,000 rotations per minute. They require 1,000 left-handed amino acids in the right specified sequence; each of the 1,000 positions must hold the right amino acid among the 20 to choose from. How did they emerge by natural processes? The chance of getting them by random chemical reactions is 1 in 20^1000, while there are only about 10^80 atoms in the universe.
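As a quick check, the helicase figure of 20^1000 can be expressed as a power of ten:

```python
import math

# 1,000 specified positions, each filled from 20 possible amino acids.
helicase_exp = 1000 * math.log10(20)
print(f"1 in 10^{helicase_exp:.0f}")  # ~1 in 10^1301, versus ~10^80 atoms in the universe
```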

The above calculation does not take into consideration that there are about 500 naturally occurring amino acids known. So let us suppose that they were extant on the prebiotic earth, and that somehow a selective process chose, selected, and concentrated just the 20 used in life. These 20 could still come in two versions, left-handed and right-handed. Life uses only left-handed amino acids, so these would also have to be sorted out. Now let us suppose that by freak accident, after crazy shuffling trillions of trillions of times, suddenly the right protein set were there: the right proteins, the right assortment. It is claimed that there was no oxygen on the prebiotic earth. If that was so, then there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if a sequence producing a functionally folding protein arose, by itself, if not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destination, where they are utilized. Obviously, none of this was extant on the early earth.

As of 2014, Koonin serves on the advisory editorial board of Trends in Genetics and is co-Editor-in-Chief of the open access journal Biology Direct. He served on the editorial board of Bioinformatics from 1999-2001. Koonin is also an advisory board member in bioinformatics at Faculty of 1000.

Let us see what he writes in regard to the origin of life:

The Logic of Chance: The Nature and Origin of Biological Evolution, Eugene V. Koonin, page 351:
The origin of life is the most difficult problem that faces evolutionary biology and, arguably, biology in general. Indeed, the problem is so hard and the current state of the art seems so frustrating that some researchers prefer to dismiss the entire issue as being outside the scientific domain altogether, on the grounds that unique events are not conducive to scientific study.

A succession of exceedingly unlikely steps is essential for the origin of life, from the synthesis and accumulation of nucleotides to the origin of translation; through the multiplication of probabilities, these make the final outcome seem almost like a miracle. The difficulties remain formidable. For all the effort, we do not currently have coherent and plausible models for the path from simple organic molecules to the first life forms. Most damningly, the powerful mechanisms of biological evolution were not available for all the stages preceding the emergence of replicator systems. Given all these major difficulties, it appears prudent to seriously consider radical alternatives for the origin of life

The Logic of Chance: The Nature and Origin of Biological Evolution, Eugene V. Koonin page 435:
The requirements for the emergence of a primitive, coupled replication-translation system, which is considered a candidate for the breakthrough stage in this paper, are much greater. At a minimum, spontaneous formation of the following is required:
• Two rRNAs, with a total size of at least 1,000 nucleotides.
• Approximately 10 primitive adaptors of about 30 nucleotides
each, for a total of approximately 300 nucleotides.
• At least one RNA encoding a replicase, about 500 nucleotides (low bound) required. Under the notation used here, n = 1,800, resulting in E < 10^-1018.
In other words, even in this toy model that assumes a deliberately inflated rate of RNA production, the probability that a coupled translation-replication system emerges by chance in a single O-region is P < 10^-1018. Obviously, this version of the breakthrough stage can be considered only in the context of a universe with an infinite (or, at the very least, extremely vast) number of O-regions (observable regions).
The model considered here is not supposed to be realistic, by any account. It only illustrates the difference in the demands on chance for the origin of different versions of the breakthrough system and, hence, the connections between this version and different cosmological models of the universe.
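For comparison with Koonin's toy-model figure (P < 10^-1018 in the book), a naive specific-sequence bound for the same 1,800 nucleotides can be sketched as follows. This simple bound is my own illustration, not Koonin's calculation: his model deliberately inflates RNA production rates and therefore arrives at a less extreme exponent.

```python
import math

N_NUCLEOTIDES = 1800  # Koonin's minimal coupled replication-translation system
ALPHABET = 4          # RNA bases

# Naive bound: probability that one specific 1,800-nt sequence arises by chance.
naive_exp = N_NUCLEOTIDES * math.log10(ALPHABET)
print(f"specific sequence: 1 in 10^{naive_exp:.0f}")  # ~1 in 10^1084
# Koonin's toy model, crediting a deliberately inflated RNA production rate,
# still yields P < 10^-1018: the same order of "impossible in one O-region".
```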

All things considered, my assessment of the current state of the art in the study of the origins of replication and translation is rather somber. Notwithstanding relevant theoretical models and suggestive experimental results, we currently do not have a credible solution to these problems and do not even see with any clarity a path to such a solution. Any even remotely realistic origin of life scenario must incorporate well-defined pre-cellular, abiogenic compartmentalization; inorganic catalysts to catalyze “pre-biochemical” reactions prior to the emergence of bona fide enzymes; thermal and/or electrochemical potential gradients required for the generation of energy in accessible forms; a solution to the extremely difficult problem of the origin of genetic information (see the discussion earlier in this chapter). In general, the early concepts underestimated the dimensions of the origin of life problem and failed to investigate special abiogenic conditions that must have been a prerequisite for the jump-start of biological evolution. Subsequently, several groups of researchers attempted to get away from the concept of the homogeneous primary soup, replacing it with some form of inorganic compartments, and sought to address all the origin of life problems in conjunction by combination of modeling, experiment, and observation in nature. The common idea of these hypotheses is the existence of a single framework that could simultaneously provide compartmentalization, energy gradients, and catalysts.

A 'review' of The Logic of Chance by Gert Korthof
https://web.archive.org/web/20180820034559/http://wasdarwinwrong.com/korthof98.htm

The probability of the spontaneous origin of this is P < 10^-1018. The spontaneous origin of 1,800 nucleotides is the Koonin threshold for the origin of life and evolution. No Origin of Life (OOL) researcher has put it more clearly and dramatically than Koonin. Please note that 1,800 nucleotides is a minimum. Every OOL researcher who skips over the Koonin threshold makes a serious scientific oversight.

At most ribozymes could spontaneously originate, but not a coupled replication-translation system (the DNA-protein world). So, if ribozymes are the beginnings of the RNA-world, Koonin claims that the RNA-world would come to a halt before a replication-translation system emerged. In our universe, certainly on our earth, the RNA-world would be a dead end.


The estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. Multiplying these together, we find that the maximum number of elementary particle events in 30 billion years could only be 10^143.

The simplest known free-living organism, Mycoplasma genitalium, has the smallest genome of any free-living organism: 580,000 base pairs. This is an astonishingly large number for such a 'simple' organism. It has 470 genes that code for 470 proteins that average 347 amino acids in length. The odds against just one specified protein of that length are 1 in 10^451.
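Both figures above can be reproduced directly; this sketch uses the particle count, event rate, and time span just quoted.

```python
import math

# Probabilistic resources: particles x max events/second x seconds, in exponents.
PARTICLES_EXP = 80  # ~10^80 elementary particles
RATE_EXP = 45       # ~10^45 fastest events per second
SECONDS_EXP = 18    # ~10^18 seconds in 30 billion years

max_events_exp = PARTICLES_EXP + RATE_EXP + SECONDS_EXP
print(f"max events: 10^{max_events_exp}")  # 10^143

# One specified 347-residue protein from a 20-letter alphabet: 20^347.
protein_exp = 347 * math.log10(20)
print(f"one protein: 1 in 10^{protein_exp:.0f}")  # ~1 in 10^451
```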

According to Borel's law, any occurrence with a chance of happening that is less than one chance out of 10^50 is an occurrence with so slim a probability that it is, in general, statistically considered to be zero. (10^50 is the number 1 with 50 zeros after it, and it is spoken: "10 to the 50th power".)

Estimating the probability of just a single 150-amino-acid functional protein coming into existence by random chance at 1 in 10^164.
To show how utterly improbable this is, consider the following illustration: if every elementary particle in the universe (10^80) interacted with the others 10^43 times per second, and each interaction produced a 150-amino-acid combination, every second since the big bang (10^17 seconds), the total maximum number of combinations that could have been generated by now would be 10^140.

Someone did an estimate that helps put these numbers in perspective: to demonstrate how far 10^140 combinations fall short of all 10^164 possible combinations, consider that if 10^164 were represented by the distance from Earth to the star Sirius A (8.6 light-years away), and the 10^140 maximum combinations generated since the big bang represented the distance we have traveled toward Sirius A so far, then we would have traveled only about the width of a single hydrogen atom.

Achieving a 50/50 probability of generating a single functional protein by sampling half of all combinations would occur at the halfway point (4.3 light-years away). But with the maximum 10^140 combinations that could possibly have been generated by now, we have still traveled less than the width of a single hydrogen atom. And all this is for the probability of a SINGLE functional protein coming into existence by random chance; the simplest living cell needs somewhere around 250 unique proteins at minimum.
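The Sirius illustration can be checked numerically. With the figures above, the fraction of sequence space sampled is 10^140 / 10^164 = 10^-24; that fraction of 8.6 light-years works out to tens of nanometers, within a few hundred hydrogen-atom widths, so the illustration's scale of smallness is about right.

```python
LIGHT_YEAR_M = 9.461e15  # meters per light-year (standard reference value)
SIRIUS_LY = 8.6          # distance to Sirius A in light-years

# Fraction of sequence space sampled so far: 10^140 of 10^164 combinations.
fraction_explored = 10**140 / 10**164  # = 10^-24
distance_m = SIRIUS_LY * LIGHT_YEAR_M * fraction_explored
print(f"{distance_m:.1e} m")  # ~8e-08 m: tens of nanometers of an 8.6-light-year journey
```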


Objection: every sequence is just as improbable as another.
Answer: It's true that any particular equal-length sequence is just as improbable as any other. But if the goal is a particular string, say one starting at 1, then 2, 3, 4, 5, 6 ... 500, then intuitively you know the sequence has a specific order. The relevant point to be outlined here is: the sequence 1, 2, 3, 4 ... 500 exhibits a specification, a particular pattern. What must be explained is the origin not of any kind of sequence, but of a particular, specific sequence.
Suppose you see a blueprint to make a 100-horsepower car engine for a BMW X5. Not any blueprint will produce this particular engine with the right size, fit, and power. Only a blueprint with the precise, specific, complex arrangement of instructions, understood through the pre-established agreement between the engineer and the manufacturer, will permit the design to be encoded, transmitted, decoded, and transformed into an equivalent artifact with the specific, recognizable function that meets the pre-established goal. The information for that particular engine can be encoded in bits; let's suppose it is the size of a CD, 600 MB. What has to be calculated are the odds of getting that specific sequence of instructions which gives rise to that particular engine. Not any sequence will do.
We know by experience that intelligence is able to produce factories, engines, machines, codes, computers, software, hardware, etc. The odds there are 1, since it happens; it is a proven fact. Now take a random character generator: the odds of producing one specific string of 470 characters, the equivalent of a medium-sized protein of that length, are 1 in 10^451. So there would have to be on that order of trials and errors to get the right sequence.

Answering the Lottery fallacy fallacy
The fine-tuning argument is not driven by the improbability of just any universe coming into existence (like the probability of someone winning the lottery, which might be high). Instead, the argument is driven by the specified probability of a life-permitting universe coming into existence. A useful illustration is that of a gigantic swimming pool filled with hundreds of billions of white marbles (representing life-prohibiting or failed universes) but containing only one black marble (representing a successful, life-permitting universe). While it is true that the probability of pulling out any particular marble is the same as that of pulling out any other particular marble, and that, provided we are going to pull out a marble, the probability of pulling out some marble is 1 (i.e., it is certain to happen), it is also true that the probability of pulling out the black marble is mind-bogglingly lower than the probability of pulling out a white one.
http://rightreason.org/2010/the-lottery-fallacy-fallacy/

Enzymes: The Cell's Miniature Factories 1
The importance of the three-dimensional structure of proteins can best be illustrated by the function of enzymes. Virtually all of the complex chemical reactions in living cells involve special proteins called enzymes. Enzymes act to speed up (catalyze) chemical reactions in biological systems. Enzymes are employed in the production of DNA, RNA, proteins, and nearly every chemical reaction in the cell. Digestion, thought, sight, and the function of nerve and muscles all require the use of enzymes. In fact, these activities would be impossible without them.
Enzymatic reactions occur like "lock and key" mechanisms. An enzyme (the lock) has a highly specific three-dimensional shape which will only allow chemicals with the correct three-dimensional fit (the key) to bind and result in a chemical reaction.

Since all spark-and-soup experiments produce a 50/50 mix of right- and left-handed amino acids, chemists have tried to decipher how only left-handed amino acids became integrated into the proteins of living systems. For decades chemists have attempted to separate out a pure mixture of left-handed amino acids from a racemic mix by chance chemistry alone. Chance or undirected chemistry has, however, consistently proven to be an inadequate mechanism for separating the right- and left-handed amino acid forms.31 So how did it happen? Mathematically, random chance would never select such an unlikely pure mixture out of a racemic primordial soup.
The solution is simple, yet it has profound implications. To separate the two amino acid forms requires the introduction of biochemical expertise or know-how, which is the very antithesis of chance! However, biochemical expertise or know-how comes only from a mind. Without such know-how or intelligent guidance, the right and left-handed building blocks of life will never separate. Consequently, enzymes, with their lock and key mechanisms, and ultimately, life, are impossible!32
However, the existence of a mind or a Creator involved in the creation of life is anathema to the atheist's scenario. But the volume of biochemical knowledge supports this fact: To produce pure mixtures of left-handed amino acids and right-handed nucleotides, requires intelligent guidance. And since no human chemists were around before the origin of life on earth, the source of this intelligent guidance must have been extraterrestrial!

Objection:
How many observable universes do we have? One. Does that universe host stars and life? Yes. One shouldn't be surprised to find these precisely sequenced proteins, because if they did not exist, we wouldn't exist. Therefore, the fact that they exist is only to be expected from the mere fact of our own existence, and is not at all surprising. That means that the current probability that our universe hosts stars and life is 100%.
Answer: This argument is like a situation where a man stands before a firing squad of one thousand riflemen who take aim and fire, but all of them miss. According to the above logic, this man should not be at all surprised to still be alive because, if they hadn't missed, he wouldn't be alive. The nonsense of this line of reasoning is obvious. Surprise at the highly unlikely instructional complex coded information stored in the genome, and at irreducibly complex cell structures, given the hypothesis of a mindless origin, is only to be expected, in the extreme.

Biological molecular machines and factories (cells) are full of information-rich, language-based codes and code/blueprint-based structures. Instructional/specified complex information is required to get the right amino acid sequence, which is essential for functionality in a vast sequence space (amongst trillions of possible sequences, rare are the ones that provide function), and every protein is irreducibly complex in the sense that a minimal number of amino acids is required for each protein to function.

Joseph Mastropaolo, Ph.D. According to the most generous mathematical criteria, abiogenesis and monogenesis are impossible to unimaginable extremes.

Abiogenic Origin of Life: A Theory in Crisis, 2005 Arthur V. Chadwick, Ph.D. Professor of Geology and Biology
To give you an idea of how incomprehensible, I use the following illustration. An ameba starts out at one side of the universe and begins walking towards the other side, say, 100 trillion light years away. He travels at the rate of one meter per billion years. He carries one atom with him. When he reaches the other side, he puts the atom down and starts back. In 10^186 years, the ameba will have transported the entire mass of the universe from one side to the other and back a trillion trillion trillion trillion trillion trillion times. That is my definition of impossible. And what resulted from success, if it did occur would not be a living cell or even a promising combination. Spontaneous origin of life on a prebiological earth is IMPOSSIBLE!

The Criterion: The "Cosmic Limit" Law of Chance 3

A calculation of the probability of spontaneous biogenesis by information theory
Hubert P. Yockey
The Darwin-Oparin-Haldane "warm little pond" scenario for biogenesis is examined by using information theory to calculate the probability that an informational biomolecule of reasonable biochemical specificity, long enough to provide a genome for the "protobiont", could have appeared in 10^9 years in the primitive soup. Certain old untenable ideas have served only to confuse the solution of the problem. Negentropy is not a concept because entropy cannot be negative. The role that negentropy has played in previous discussions is replaced by "complexity" as defined in information theory. A satisfactory scenario for spontaneous biogenesis requires the generation of "complexity", not "order". Previous calculations based on simple combinatorial analysis overestimate the number of sequences by a factor of 10^5. The number of cytochrome c sequences is about 3.8 × 10^61. The probability of selecting one such sequence at random is about 2.1 × 10^-65. The primitive milieu will contain a racemic mixture of the biological amino acids and also many analogues and non-biological amino acids. Taking into account only the effect of the racemic mixture, the longest genome which could be expected with 95% confidence in 10^9 years corresponds to only 49 amino acid residues. This is much too short to code a living system, so evolution to higher forms could not get started. Geological evidence for the "warm little pond" is missing. It is concluded that belief in currently accepted scenarios of spontaneous biogenesis is based on faith, contrary to conventional wisdom.
http://www.sciencedirect.com/science/article/pii/0022519377900443

The origin of the first cell, cannot be explained by natural selection
The cell is irreducibly complex and hosts a huge amount of codified, complex, specified information. The probability of useful DNA, RNA, or proteins occurring by chance is extremely small. Calculations vary somewhat, but all are extremely small (highly improbable). If one assumes a hypothetical prebiotic soup to start with, there are at least three combinatorial hurdles (requirements) to overcome. Each of these requirements decreases the chance of forming a workable protein. First, all amino acids must form a chemical bond (peptide bond) when joining with other amino acids in the protein chain. Assuming, for example, a short protein molecule of 150 amino acids, the probability of building a 150-amino-acid chain in which all linkages are peptide linkages would be roughly 1 chance in 10^45. The second requirement is that functioning proteins tolerate only left-handed amino acids, yet in abiotic amino acid production the right-handed and left-handed isomers are produced in nearly the same frequency. The probability of building a 150-amino-acid chain at random in which all bonds are peptide bonds and all amino acids are L-form is roughly 1 chance in 10^90. The third requirement for functioning proteins is that the amino acids must link up like letters in a meaningful sentence, i.e., in a functionally specified sequential arrangement. The chance of this happening at random for a 150-amino-acid chain is approximately 1 chance in 10^195. It would appear impossible for chance to build even one functional protein, considering how small the likelihood is. By way of comparison, to get a feeling for just how low this probability is, consider that there are only about 10^65 atoms in our galaxy.
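The three hurdles just listed can be reproduced under the usual simplifying assumptions: roughly a 1-in-2 chance per linkage of forming a peptide bond, a 1-in-2 chance per residue of the L-form, and 20 equiprobable amino acids per position.

```python
import math

N = 150          # chain length assumed in the passage
links = N - 1    # 149 bonds join 150 residues

# Hurdle 1: every linkage a peptide bond (~1/2 chance per linkage).
peptide_exp = links * math.log10(2)
print(f"all peptide bonds: 1 in 10^{peptide_exp:.0f}")  # ~10^45

# Hurdle 2: additionally, every residue left-handed (~1/2 chance each).
chirality_exp = N * math.log10(2)
print(f"+ all L-form: 1 in 10^{peptide_exp + chirality_exp:.0f}")  # ~10^90

# Hurdle 3: a fully specified sequence over the 20 amino acids.
sequence_exp = N * math.log10(20)
print(f"specified sequence: 1 in 10^{sequence_exp:.0f}")  # ~10^195
```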
http://www.ncbi.nlm.nih.gov/pubmed/10818565

Paul Davies:  the fifth miracle, page 54:
Chance and the origin of life
Ask the simple question: Given the conditions that prevailed on the Earth four billion years ago, how likely was it that life arose?
The following answer won’t do: “Life was inevitable, because we are here.” Obviously life did originate—our existence proves that much. But did it have to originate? In other words, was the emergence of life from a chemical broth or whatever inevitable, given millions of years? Nobody knows the answer to this question. The origin of life may have been a sheer fluke, a chemical accident of stupendous improbability, an event so unlikely that it would never happen twice in the entire universe. Or it may have been as unremarkable and predetermined as the formation of salt crystals. How can we know which explanation is the right one? Let’s take a look at the chemical-fluke theory. Terrestrial life is based on some very complicated molecules with carefully crafted structures. Even in simple organisms, DNA contains millions of atoms. The precise sequence of atoms is crucial. You can’t have an arbitrary sequence, because DNA is an instruction manual for making the organism. 

Change a few atoms and you threaten the structure of the organism. Change too many and you won't have an organism at all. The situation may be compared to the word sequence of a novel. Change a few words here and there at random, and the text will probably be marred. Scramble all the words and there is a very high probability that it won't be a novel any more. There will be other novels with similar words in different combinations, but the set of word sequences that make up novels is an infinitesimal fraction of all possible word sequences. The odds are fantastic against shuffling amino acids at random into the right sequence to form a protein molecule by accident. That was a single protein. Life as we know it requires hundreds of thousands of specialist proteins, not to mention the nucleic acids. The odds against producing just the proteins by pure chance are something like 10^40,000 to 1. This is one followed by forty thousand zeros, which would take up an entire chapter of this book if I wanted to write it out in full. Dealing a perfect suit at cards a thousand times in a row is easy by comparison. In a famous remark, the British astronomer Fred Hoyle likened the odds against the spontaneous assembly of life to those for a whirlwind sweeping through a junkyard and producing a fully functioning Boeing 747.

With a half-trillion stars wheeling through the spiral patterns of the Milky Way Galaxy, it seems illogical to assume that among them only one world supports intelligent life.’ The use of the word ‘illogical’ was unfortunate, because the logic is perfectly clear. There are indeed a lot of stars – at least ten billion billion in the observable universe. But this number, gigantic though it may appear to us, is nevertheless trivially small compared to the gigantic odds against the random assembly of even a single protein molecule. The universe may be big, but if life formed solely by random agitation in a molecular junkyard, there is scant chance it will have happened twice.

Given such an extraordinary elucidation, it would seem an easy step to infer DESIGN! Why Davies does not take that step, but keeps an agnostic standpoint, is a mystery to me.

Objection: Chance can create life, despite the small probability. There are 31 million seconds in a single year; multiply that by ten billion years and you get an astronomical number of chances. And just because something is highly unlikely doesn't mean it's impossible.
Answer: Talking about life assembling itself is like talking about cars forming themselves, or even basic computer programs writing themselves. These things are not just improbable, they are impossible without intelligence.

Paul Davies once said :
How did stupid atoms spontaneously write their own software? … Nobody knows … there is no known law of physics able to create information from nothing.

Literature from those who argue in favor of creation abounds with examples of the tremendous odds against chance producing a meaningful code. For instance, the estimated number of elementary particles in the universe is 10^80. The most rapid events occur at an amazing 10^45 per second. Thirty billion years contains only 10^18 seconds. By multiplying those factors, we find that the maximum number of elementary-particle events in 30 billion years could only be 10^143. Yet the simplest known free-living organism, Mycoplasma genitalium, has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are 1:10^451.
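The arithmetic above is easy to reproduce. The sketch below works with base-10 exponents so the huge numbers never have to be represented directly; all figures are taken from the text as given, not independently verified here.

```python
from math import log10

# Figures as quoted in the text above (assumptions from the cited sources).
log_particles = 80    # ~10^80 elementary particles in the universe
log_rate = 45         # ~10^45 events per particle per second (fastest rate)
log_seconds = 18      # ~10^18 seconds in thirty billion years

# Maximum number of elementary-particle events: multiplying the three
# factors corresponds to adding their base-10 exponents.
log_max_events = log_particles + log_rate + log_seconds
print(log_max_events)  # 143 -> at most ~10^143 events

# One specified 347-amino-acid protein, 20 options per position: 20^347.
log_protein_odds = 347 * log10(20)
print(round(log_protein_odds))  # 451 -> 1 chance in ~10^451
```

The gap between the two exponents (451 versus 143) is the point of the argument: the quoted odds against a single protein exceed the quoted event count by roughly 308 orders of magnitude.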

The protein that enables a firefly to glow, and also to reproduce (its illuminated abdomen serves as a visible mating call), is made up of a chain of 1,000 amino acids. The full range of possible proteins that can be coded with such a chain is 17 times the number of atoms in the visible universe. This number also represents the odds against the RANDOM coding of such a protein. Yet DNA effortlessly assembles that protein, in exactly the correct and absolutely necessary sequence and number of amino acids, for the humble firefly. What are we to say of the 25,000 individual, highly specialized, absolutely necessary, and exactly correctly coded proteins in the human body?

Dembski: We also know from broad and repeated experience that intelligent agents can and do produce information-rich systems: we have positive, experience-based knowledge of a cause that is sufficient to generate new instructing complex information, namely, intelligence. The design inference does not constitute an argument from ignorance. Instead, it constitutes an "inference to the best explanation" based upon our best available knowledge. It asserts the superior explanatory power of a proposed cause based upon its proven, known causal adequacy, and upon a lack of demonstrated efficacy among the competing proposed causes. The problem is that nature has too many options and without design could not sort them all out. Natural mechanisms are too unspecific to determine any particular outcome. Mutation and natural selection, or luck/chance/probability, could theoretically form a new complex morphological feature like a leg or a limb with the right size and form, and find the right body location to grow it, but they could also produce all kinds of other new body forms, and grow and attach them anywhere on the body, most of which would confer no biological advantage or would most probably be deleterious to the organism. Natural mechanisms have no constraints; they could produce any kind of novelty. It is, however, precisely that kind of freedom that makes it extremely unlikely that mere natural developments would provide new specific evolutionary arrangements that are advantageous to the organism. Nature would have to arrange an almost infinite number of trials and errors before hitting on a new positive arrangement. Since that would be a highly unlikely event, design is a better explanation.

Even the simplest of these substances [proteins] represent extremely complex compounds, containing many thousands of atoms of carbon, hydrogen, oxygen, and nitrogen arranged in absolutely definite patterns, which are specific for each separate substance. To the student of protein structure the spontaneous formation of such an atomic arrangement in the protein molecule would seem as improbable as would the accidental origin of the text of Virgil’s “Aeneid” from scattered letter type.1
– A. I. Oparin

Mondore, The Code Word
What is the probability of complex biochemicals like proteins and DNA arising by chance alone?
The chance that amino acids would line up randomly to create the first hemoglobin protein is 1 in 10^850. The chance that the DNA code to produce that hemoglobin protein would have randomly reached the required specificity is 1 in 10^78,000.

Harold Urey, a founder of origin-of-life research, describes evolution as a faith which seems to defy logic:
“All of us who study the origin of life find that the more we look into it, the more we feel that it is too complex to have evolved anywhere. We believe as an article of faith that life evolved from dead matter on this planet. It is just that its complexity is so great, it is hard for us to imagine that it did.”

― Michael Denton, Evolution: A Theory In Crisis
“The complexity of the simplest known type of cell is so great that it is impossible to accept that such an object could have been thrown together suddenly by some kind of freakish, vastly improbable, event. Such an occurrence would be indistinguishable from a miracle.”

Hoyle and Wickramasinghe, p. 24.

“The trouble is that there are about two thousand enzymes, and the chance of obtaining them all in a random trial is only one part in (10^20)^2,000 = 10^40,000, an outrageously small probability that could not be faced even if the whole universe consisted of organic soup. If one is not prejudiced either by social beliefs or by a scientific training into the conviction that life originated on the Earth [by chance or natural processes], this simple calculation wipes the idea entirely out of court.”
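Hoyle's exponent arithmetic can be checked in two lines. The per-enzyme odds of 1 in 10^20 are his own estimate, assumed here as given:

```python
# Hoyle's estimate: each enzyme has ~1 chance in 10^20 of arising by
# random shuffling, and a cell needs about 2,000 of them.
log_per_enzyme = 20
n_enzymes = 2_000

# Independent probabilities multiply, so their base-10 exponents add:
# (10^-20)^2000 = 10^-(20 * 2000).
log_combined = log_per_enzyme * n_enzymes
print(log_combined)  # 40000 -> 1 chance in 10^40,000
```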

Ibid., p. 130. http://hyperphysics.phy-astr.gsu.edu/nave-html/faithpathh/hoyle.html
Any theory with a probability of being correct that is larger than one part in 10^40,000 must be judged superior to random shuffling [of evolution]. The theory that life was assembled by an intelligence has, we believe, a probability vastly higher than one part in 10^40,000 of being the correct explanation of the many curious facts discussed in preceding chapters. Indeed, such a theory is so obvious that one wonders why it is not widely accepted as being self-evident. The reasons are psychological rather than scientific.

Hoyle and Wickramasinghe, p. 3.

Biochemical systems are exceedingly complex, so much so that the chance of their being formed through random shufflings of simple organic molecules is exceedingly minute, to a point indeed where it is insensibly different from zero.

In the 1970s Hoyle calculated the mathematical probability of the coincidental formation of only the 2,000 types of proteins found in a single amoeba. (There are some 200,000 different types of proteins in a human cell.) The figure that he calculated was 1 over 10^40,000—an incredible number obtained by putting 40,000 zeros after the 1 (Eastman&Missler, 1996, p. 61).

The Universe: Past and Present Reflections Sir Fred Hoyle
When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem. Would you not say to yourself, 'Some super-calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule'? Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.
http://calteches.library.caltech.edu/527/2/Hoyle.pdf

― Stephen C. Meyer, Darwinism, Design and Public Education
“The information contained in an English sentence or computer software does not derive from the chemistry of the ink or the physics of magnetism, but from a source extrinsic to physics and chemistry altogether. Indeed, in both cases, the message transcends the properties of the medium. The information in DNA also transcends the properties of its material medium.”

― Jonathan Wells, The Politically Incorrect Guide to Darwinism And Intelligent Design

“The secret of DNA's success is that it carries information like that of a computer program, but far more advanced. Since experience shows that intelligence is the only presently acting cause of information, we can infer that intelligence is the best explanation for the information in DNA.”

Kuhn, J. A. 2012. Dissecting Darwinism. Baylor University Medical Center Proceedings. 25 (1): 41-47.
Based on an awareness of the inexplicable coded information in DNA, the inconceivable self-formation of DNA, and the inability to account for the billions of specifically organized nucleotides in every single cell, it is reasonable to conclude that there are severe weaknesses in the theory of gradual improvement through natural selection (Darwinism) to explain the chemical origin of life. Furthermore, Darwinian evolution and natural selection could not have been causes of the origin of life, because they require replication to operate, and there was no replication prior to the origin of life.


According to Dembski and Borel  (Dembski, 1998, pp. 5, 62, 209, 210).
specified events of small probability do not occur. Dembski estimated 10^80 elementary particles in the universe and asked how many times per second an event could occur. He used the Planck value of 10^45. He then calculated the number of seconds from the beginning of the universe to the present and for good measure multiplied by ten million, for 10^25 seconds in all. He thereby obtained 10^80 x 10^45 x 10^25 = 10^150, or more exactly 0.5 x 10^150, for his Law of Small Probability to eliminate chance.
Currently, there does not seem to be a scientific criterion more generous to evolution than Dembski’s one chance in 0.5 x 10^150. Anything as rare as that probability had absolutely no possibility of happening by chance at any time by any conceivable specifying agent by any conceivable process throughout all of cosmic history. To test against that criterion, we take one chance in 2.3 x 10^75 for one protein (Yockey, 1992, pp. 255, 257) and multiply by the 60,000 proteins required for the abiogenesis of a minimal cell (Denton, 1986, p. 263; Morowitz, 1966, pp. 446-459) and obtain one chance in more than 10^4,478,296 (Mastropaolo, 1999, p. iii). That exceeds Dembski’s most generous criterion for impossible by more than 10^4,478,146. Or if 0.5 x 10^150 to 1 is the most generous probability science can provide to demarcate possibility from miracle, then with more than four million orders of magnitude to spare, abiogenesis must be considered miraculous. To put abiogenesis in biology textbooks as evolutionists have done throughout the United States is to teach evolution religion as science, and that violates the requirement of the U.S. Constitution prohibiting the establishment of a state religion (Constitution of the United States of America, 1787, Amendment I, see note).
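The comparison against Dembski's bound can be reproduced with logarithms. The per-protein odds (1 in 2.3 x 10^75, Yockey) and the 60,000-protein count are taken from the sources quoted above; note that the sources evidently round differently, so the exponent computed here (about 4.52 million) differs somewhat from Mastropaolo's 4,478,296, though the conclusion being argued is unaffected.

```python
from math import log10

log_bound = 80 + 45 + 25              # Dembski's resource count: 10^150
log_one_protein = log10(2.3) + 75     # 1 chance in 2.3 x 10^75 (Yockey)
n_proteins = 60_000                   # minimal cell (Denton; Morowitz)

# Compounding 60,000 independent events: exponents add, so the total
# exponent is 60,000 times the per-protein exponent.
log_total = n_proteins * log_one_protein
print(f"combined odds: 1 in ~10^{log_total:,.0f}")           # ~4.52 million
print(f"orders of magnitude past the bound: ~{log_total - log_bound:,.0f}")
```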



Last edited by Otangelo on Thu Sep 19, 2024 11:05 am; edited 14 times in total



Richard Dawkins (Dawkins, 1996, pp. 144, 146):
Suppose we want to suggest, for instance, that life began when both DNA and its protein-based replication machinery spontaneously chanced to come into existence. We can allow ourselves the luxury of such an extravagant theory, provided that the odds against this coincidence occurring on a planet do not exceed 100 billion billion to one.

Paul Davies, the fifth miracle, page 54:
Life as we know it requires hundreds of thousands of specialist proteins, not to mention the nucleic acids. The odds against producing just the proteins by pure chance are something like 10^40,000 to 1.
There are indeed a lot of stars—at least ten billion billion in the observable universe. But this number, gigantic as it may appear to us, is nevertheless trivially small compared with the gigantic odds against the random assembly of even a single protein molecule. Though the universe is big, if life formed solely by random agitation in a molecular junkyard, there is scant chance it has happened twice.

Regarding the probability of spontaneous generation, Harvard University biochemist and Nobel Laureate, George Wald stated in 1954:
"One has only to contemplate the magnitude of this task to concede that the spontaneous generation of a living organism is impossible. Yet here we are, as a result, I believe, of spontaneous generation."

The late Nobel prize winning scientist George Wald once wrote,
 “However improbable we regard this event [evolution], or any of the steps which it involves, given enough time it will almost certainly happen at least once… Time is in fact the hero of the plot… Given so much time, the ‘impossible’ becomes possible, the possible probable, the probable virtually certain. One has only to wait; time itself performs the miracles.”
http://xwalk.ca/origin.html#fn15

If you took thousands of letters of the alphabet, threw them high up into the air, watched them all fall to the ground, and they somehow formed themselves into Tolstoy’s War and Peace, that would be as likely to occur as life being formed by the chance mixing of molecules.

Of all the ways that molecules can fall together by chance, we are told, an extraordinarily small proportion constitute the kind of self-replication machinery required for a process of natural selection to get going and lead to life as we know it. So the chance of life arising by chance, even given the basic organic chemical ingredients and a hospitable environment, is said to be incredibly low. Now let’s suppose that the process by which these complex molecules arose was not just a matter of chance, but rather was (non-intentionally) biased towards certain molecular configurations. Are self-replicating, life-producing molecules more likely to appear on this assumption? I am unable to see any reason to think so. We can think up any number of ways that the process could be biased. We can speculate about a range of possible laws and physical conditions such that simple atoms and molecules tend to cluster in certain ways rather than others. Some of these may favour life’s emergence; others will disfavour it. As in the cosmological case, what makes certain molecular configurations stand out from the multitude of possibilities seems to be that they are capable of developing into something which strikes us as rather marvellous, namely a world of living creatures. But there is no conceivable reason that blind forces of nature or physical attributes should be biased toward the marvelous. Where does this leave us? If life’s existence is no more to be expected on the assumptions of either intentional or non-intentional biasing than it is on chance, then we have no reason to doubt the Chance hypothesis.
http://web.mit.edu/rog/www/papers/does_origins.pdf

more reading:
Confusing Probability: The “Every-Sequence-Is-Equally-Improbable” Argument
https://uncommondescent.com/intelligent-design/confusing-probability-the-every-sequence-is-equally-improbable-argument/

Abiogenic Origin of Life: A Theory in Crisis
http://origins.swau.edu/papers/life/chadwick/default.html

https://blueprintsforliving.com/origin-life-research/


Infinite monkey theorem
https://en.wikipedia.org/wiki/Infinite_monkey_theorem
For physically meaningful numbers of monkeys typing for physically meaningful lengths of time the results are reversed. If there were as many monkeys as there are atoms in the observable universe typing extremely fast for trillions of times the life of the universe, the probability of the monkeys replicating even a single page of Shakespeare is unfathomably minute.

Ignoring punctuation, spacing, and capitalization, a monkey typing letters uniformly at random has a chance of one in 26 of correctly typing the first letter of Hamlet. It has a chance of one in 676 (26 × 26) of typing the first two letters. Because the probability shrinks exponentially, at 20 letters it already has only a chance of one in 26^20 = 19,928,148,895,209,409,152,340,197,376 (almost 2 × 10^28). In the case of the entire text of Hamlet, the probabilities are so vanishingly small as to be inconceivable. The text of Hamlet contains approximately 130,000 letters. Thus there is a probability of one in 3.4 × 10^183,946 to get the text right at the first trial. The average number of letters that needs to be typed until the text appears is also 3.4 × 10^183,946, or including punctuation, 4.4 × 10^360,783.
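The Hamlet figure is easy to verify with logarithms, since 26^130,000 is far too large to compute directly:

```python
from math import log10

alphabet = 26             # letters only; punctuation and case ignored
hamlet_letters = 130_000  # approximate letter count of Hamlet

# P(success in one trial) = 26^-130,000; take base-10 logs instead of
# computing the astronomically large power itself.
log_odds = hamlet_letters * log10(alphabet)
exponent = int(log_odds)                # integer part of the exponent
mantissa = 10 ** (log_odds - exponent)  # leading digits

print(f"1 chance in {mantissa:.1f} x 10^{exponent}")
# -> 1 chance in 3.4 x 10^183946, matching the figure above
```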

Even if every proton in the observable universe were a monkey with a typewriter, typing from the Big Bang until the end of the universe (when protons might no longer exist), they would still need a far greater amount of time – more than three hundred and sixty thousand orders of magnitude longer – to have even a 1 in 10^500 chance of success. To put it another way, for a one in a trillion chance of success, there would need to be 10^360,641 universes made of atomic monkeys. As Kittel and Kroemer put it in their textbook on thermodynamics, the field whose statistical foundations motivated the first known expositions of typing monkeys, "The probability of Hamlet is therefore zero in any operational sense of an event...", and the statement that the monkeys must eventually succeed "gives a misleading conclusion about very, very large numbers."

In fact, there is less than a one in a trillion chance that such a universe made of monkeys could type any particular document a mere 79 characters long.
And the scientists who devised these mathematical analogies reject the "given enough time" objection: once the improbability of an event exceeds a particular threshold, the chance of success is zero in any operational sense of the word.

Why a Living Cell Cannot Arise by Chance
12 Reasons Why Evolution Cannot 
Explain the Origin of Life on Earth pg.39:

So how can we know that it is impossible for a living cell to arise by chance? The answer lies in understanding that a single cell is vastly more complicated than anything human minds have ever engineered. Let us consider the components of a simple cell using the well-studied organism Escherichia coli, which is a single-celled organism found in the human gastrointestinal tract. In 1996 a two-volume, 2,800-page set of articles that summarized some of our knowledge of the biochemistry and biology of this organism was published. Using this data, George Javor, professor of biochemistry at Loma Linda University, calculated the following statistics:

A single living E. coli contains around 2.4 million protein molecules made up of approximately 4,000 different types of proteins. Along with these proteins, the cell contains around 255,000 nucleic acid molecules made up of 660 different types of nucleic acids. Included with these nucleic acids are around 1.4 million polysaccharide (long chains of sugar-type molecules) molecules made up of three different types of polysaccharides. Associated with these polysaccharides are around 22 million lipid molecules made up of 50 to 100 different types of lipids. These lipids also cooperate with many millions of metabolic intermediate molecules made up of about 800 different types of compounds that have to be at just the right concentration, otherwise the cell will die. Along with the metabolic intermediates, there are many millions of mineral molecules made up of 10 to 30 different types of minerals.

We know that intelligence is able to create information-rich codes, like books, computer programs, and complex machines and factories. We observe in the natural world organisms made by the same principles, namely codified specified information and irreducible, interdependent molecular machines and cell factories, while the only available natural mechanisms, namely chance or random chemical reactions, have no such intelligence-like capabilities. It is therefore safe to conclude that the origin of life is best explained by an intelligent creator, and not well explained by natural mechanisms. This is not an inference based on what we do not know, commonly called an "argument from ignorance," as proponents of naturalism frequently like to argue; it is a conclusion based on what science has discovered in the last few decades about how cells work and how they are built up. The only rational explanation for the origin of cells, and of life, is creation by an intelligent designer.

Having calculated the staggering improbability of life’s emergence by chance, Manfred Eigen (1992) concludes:  

The genes found today cannot have arisen randomly, as it were by the throw of a dice. There must exist a process of optimization that works toward functional efficiency. Even if there are several routes to optimal efficiency, mere trial and error can not be one of them. (p. 11)

It is from this conclusion that Eigen motivates his search for a physical principle that does not leave the emergence of life up to blind chance, hence making it reproducible in principle: The physical principle that we are looking for should be in a position to explain the complexity typical of the phenomena of life at the level of molecular structures and syntheses. It should show how such complex molecular arrangements are able to form reproducibly in Nature. (p. 11)

In considering how the first self-replicating machinery arose, Dawkins asks: 

“What is the largest single event of sheer naked coincidence, sheer unadulterated miraculous luck, that we are allowed to get away with in our theories, and still say that we have a satisfactory explanation of life?” (p. 141)

And he answers that there are strict limits on the “ration of luck” that we are allowed to postulate in our theories. According to Dawkins, an examination of the immense complexity of the most basic mechanisms required for DNA replication is sufficient to see that any theory which makes its existence a highly improbable fluke is unbelievable, quite apart from what alternative explanations are on the table.


Chance
If someone were to ask why the pebbles fell in this specific pattern, we should be satisfied with the response “They just fell that way by chance. A car wheel kicked up the stones and they happened to land this way.” Of course it is highly improbable that a random scattering of pebbles should result in precisely this arrangement on the path, but this is no reason to doubt that their arrangement was due to anything but chance. Random improbabilities are commonplace—people win lotteries, hands of cards are dealt, and so on—without giving us any reason to suspect that there is a deeper reason for their occurrence than blind chance 1

Physical necessity
The correct explanation presumably has something to do with the lawful correlations between physical properties such as volume, mass, and inertia (my rough guess is that as the waves and tides wash in and out, the smaller stones being lighter are more easily swept back to the shoreline). But even if we know next to nothing about the physics involved, the simple regularity of the pattern suggests that these pebbles didn’t get that way by accident.

Intelligent design
There can be little doubt about the general form of the correct explanation here, namely that the positioning of the pebbles was influenced on purpose by an agent. Perhaps a full explanation would require more details as to who arranged the pebbles and how 1

http://web.mit.edu/rog/www/papers/does_origins.pdf
1. http://web.mit.edu/rog/www/papers/does_origins.pdf



It is an interesting question to elucidate what a theoretical minimal cell would be, since based on that information we can figure out what it would take for first life to begin on the early earth. That gives us a probability figure against which to assess any proposal of natural, unguided mechanisms based on chemical reactions and atmospheric and geological circumstances. The fact that we don't know the composition of the atmosphere back then does no harm; that knowledge is not necessary for our inquiry.

The proteomic complexity and rise of the primordial ancestor of diversified life
A more recent study of 184 genomes identified 669 orthologous protein families, which cover 561 detailed functional classes that are involved in almost all essential biological processes of extant life, including translation, transcription and its regulation, DNA replication, recombination, and repair, transport and membrane-associated functions, electron transfer, and metabolism
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3123224/

Protein-length distributions for the three domains of life
The average protein length of these 110 clusters of orthologous genes (COGs) is 359 amino acids for prokaryotes and 459 for eukaryotes.
https://pdfs.semanticscholar.org/5650/aaa06de4de11c36a940cf29c07f5f731f63c.pdf

Proteins are the result of the DNA blueprint, which specifies the complex sequence necessary to produce functional 3D folds of proteins. Both improbability and specification are required in order to justify an inference of design.
1. According to the latest estimation of a minimal protein set for the first living organism, the requirement would be about 560 proteins; this would be the absolute minimum to keep the basic functions of a cell alive.  
2. According to the Protein-length distributions for the three domains of life, there is an average between prokaryotic and eukaryotic cells of about 400 amino acids per protein. 8
3. Each of the 400 positions in the amino acid polypeptide chains could be occupied by any one of the 20 amino acids used in cells, so if we suppose that proteins emerged randomly on prebiotic earth, then the total possible arrangements or odds to get one which would fold into a functional 3D protein would be 1 to 20^400 or 1 to 10^520. A truly enormous, super astronomical number. 
4. Since we need 560 proteins in total to make a first living cell, we would have to repeat the shuffle 560 times to get all the proteins required for life. The combined probability is therefore (1/10^520)^560, far beyond 1 in 10^200,000. (A proteome set with 239 proteins yields odds of approximately 1 in 10^119,614.) 7
Granted, the calculation does not take into consideration, nor give information on, the probabilistic resources available. But the sheer gigantic number of possibilities throws any reasonable chance out of the window.

If we sum up the total number of amino acids for a minimal cell, there would have to be 560 proteins x 400 amino acids = 224,000 amino acids, which would have to be bonded in the right sequence, choosing for each position amongst 20 different amino acids, and selecting only the left-handed ones while sorting out the right-handed ones. That means each position would have to be selected correctly from 40 variants: 1 right selection out of 40^224,000 possibilities! Obviously, that is a gigantic number far above any realistic probability of occurring by unguided events. Even a trillion universes, each hosting a trillion planets, each shuffling a trillion times in a trillionth of a second, continuously for a trillion years, would not be enough. Such unimaginably gigantic odds are in the realm of the utterly impossible.
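The exponents in this paragraph can be checked with logarithms. The sketch below takes the text's own assumptions as given (560 proteins, 400 residues each, and 40 variants per position counting both chiralities):

```python
from math import log10

proteins = 560    # minimal protein set, as estimated in the text
length = 400      # average amino acids per protein
variants = 40     # 20 amino acid types x 2 chiralities (L- and D-forms)

positions = proteins * length  # total positions to specify

# One protein, 20-way choice per position: 20^400.
log_single = length * log10(20)
# Whole proteome, 40-way choice per position: 40^224,000.
log_all = positions * log10(variants)

print(positions)          # 224000
print(round(log_single))  # 520 -> matches the 10^520 figure above
print(round(log_all))     # 358861 -> 1 chance in ~10^358,861
```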

Helicases are astonishing motor proteins whose rotational speed reaches up to 10,000 rotations per minute, and they are essential for life. 

How Many Genes Can Make a Cell: The Minimal-Gene-Set Concept
https://www.ncbi.nlm.nih.gov/books/NBK2227/

We propose a minimal gene set composed of 206 genes. Such a gene set will be able to sustain the main vital functions of a hypothetical simplest bacterial cell with the following features.

(i) A virtually complete DNA replication machinery, composed of one nucleoid DNA binding protein, SSB, DNA helicase, primase, gyrase, polymerase III, and ligase. No initiation and recruiting proteins seem to be essential, and the DNA gyrase is the only topoisomerase included, which should perform both replication and chromosome segregation functions.

Helicases are a class of enzymes vital to all living organisms. Their main function is to unpackage an organism's genes. They require 1,000 left-handed amino acids in the right specified sequence, each of which must be the correct one chosen from among 20. How did they emerge by natural processes? The chance of getting them by random chemical reactions is 1 to 20^1000, while there are only 10^80 atoms in the universe.  
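The same log-arithmetic applies to the helicase example, taking the 1,000-residue figure stated above as given:

```python
from math import log10

helicase_length = 1_000   # amino acids, as stated in the text
log_odds = helicase_length * log10(20)  # 20^1000 in base-10 exponent form
print(round(log_odds))    # 1301 -> 1 chance in ~10^1301

# For scale: the exponent exceeds the ~10^80 atoms in the observable
# universe by over 1,200 orders of magnitude.
print(round(log_odds) - 80)  # 1221
```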


Like the Holy Grail, a universal DNA ‘minimal genome’ has remained elusive despite efforts to define it. 10 Gene essentiality has to be defined within the specific context of the bacterium, growth conditions, and possible environmental fluctuations. Gene persistence can be used as an alternative because it provides a more general framework for defining the requirements for long-term survival via identification of universal functions. These functions are contained in the paleome, which provides the core of the cell chassis. The paleome is composed of about 500 persistent genes. 

We can take an even smaller organism, which is regarded as one of the smallest possible, and the situation does not change significantly:
The simplest known free-living organism, Mycoplasma genitalium, has the smallest genome of any free-living organism: 580,000 base pairs. That is an astonishingly large number for such a 'simple' organism. It has 470 genes that code for 470 proteins averaging 347 amino acids in length. The odds against just one specified protein of that length are 1 in 10^451. If we calculate the entire proteome, then 470 x 347 = 163,090 amino acids, giving odds of 1 in 20^163,090, even if we disregard that nature had to select only left-handed amino acids, and bifunctional ones.
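As a rough check of these figures, here is the same arithmetic in Python (a sketch in log space, ignoring chirality and peptide-bond probabilities):

```python
import math

genes = 470        # protein-coding genes in Mycoplasma genitalium
avg_len = 347      # average protein length in amino acids
total_aa = genes * avg_len
assert total_aa == 163_090

# 20 choices at each position; work with log10 to avoid overflow
one_protein = avg_len * math.log10(20)     # log10 of 20^347
proteome = total_aa * math.log10(20)       # log10 of 20^163,090
print(f"one protein: 1 in 10^{one_protein:.0f}")      # about 1 in 10^451
print(f"whole proteome: 1 in 10^{proteome:,.0f}")     # about 1 in 10^212,000
```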

Argument: The physical laws, the laws of biochemistry, those aren't chance. The interaction of proteins, molecules, and atoms, their interaction is dictated by the laws of the universe. 
Response: While it is true that the chemical bonds that glue one amino acid to the next are subject to chemical properties, there are neither bonds nor bonding affinities, differing in strength or otherwise, that can explain the origin of the specific sequence in which the 20 types of amino acids must be put together, in the right order, for a protein to bear function. What dictates the sequence of amino acids in proteins in modern cells is the DNA code.

DNA contains true codified instructional information, or a blueprint.  Being instructional information means that the codified nucleotide sequence that forms the instructions is free and unconstrained; any of the four bases can be placed in any of the positions in the sequence of bases. Their sequence is not determined by chemical bonding. There are hydrogen bonds between the base pairs and each base is bonded to the sugar-phosphate backbone, but there are no bonds along the longitudinal axis of DNA. The bases occur in the complementary base pairs A-T and G-C, but along the sequence on one side the bases can occur in any order, like the letters of a language used to compose words and sentences. Since nucleotides can be arranged freely into any informational sequence, physical necessity could not be a driving mechanism.


Objection: Logical Fallacy. Argument from Improbability. Just because a thing is highly unlikely to occur doesn't mean that its occurrence is impossible. No matter how low the odds, if there is a chance that it will happen, it will happen given enough tries, especially considering a large enough universe with countless potentially "living" planets. Nobody claims that winning the lottery doesn't happen just because the chance of a specific individual winning it is low.
Response: So you're saying there's a chance... Murphy's Law states that, given enough time and opportunity, anything THAT CAN HAPPEN will happen. That phrase "can happen" is very important. The natural, non-thinking, random cosmos, void of purpose and reason, is unable to produce what we see without information. Life without information is not only improbable but rather impossible from what we know about the nature of complexity, design, and purpose. Given a planet full of simpler non-self-replicating organic molecules, an abundant but not excessive input of energy from the local star, and a few thousand million years, one could imagine that the number of "tries" to assemble that one self-replicating organic molecule would easily exceed 5x10^30. But you can't just vaguely appeal to vast and unending amounts of time (and other probabilistic resources) and assume that spontaneous self-assembly, by orderly aggregation in a sequentially correct manner without external direction, given enough trials, can produce anything "no matter how complex." Rather, it has to be demonstrated that sufficient probabilistic resources or random, non-guided mechanisms indeed exist to produce the feature. The fact is, we know upon repeated experience and demonstration that intelligence can and does envision, project, and elaborate complex blueprints, and upon its instructions produces the objects in question. It has NEVER been demonstrated that unguided events without specific purposes can do the same. Such examples also fail to take into consideration that such shuffling requires energy applied in a specific form, and that the environmental conditions must permit the basic molecules not to be destroyed by UV radiation. The naturalistic proposal amounts more to assaulting the intelligence of those who oppose it with a range of assertions, when proponents of naturalism really have no answer as to how these mechanisms actually work.
To argue that forever is long enough for the complexity of life to reveal itself is an untenable argument. The numbers are off any scale we can relate to as a possible explanation for what we see of life. Notwithstanding, there are those who go so far as to say it is all accounted for already, as if they know something nobody else does. The fact is, science has been struggling for decades to unravel the mystery of how life could have emerged, and has no solution in sight.

Eugene Koonin, member of the advisory editorial board of Trends in Genetics, writes in his book The Logic of Chance: The Nature and Origin of Biological Evolution, page 351:
"The origin of life is the most difficult problem that faces evolutionary biology and, arguably, biology in general. Indeed, the problem is so hard and the current state of the art seems so frustrating that some researchers prefer to dismiss the entire issue as being outside the scientific domain altogether, on the grounds that unique events are not conducive to scientific study."


Objection: What is entirely missing from such arguments as above is the demonstration that a specific modern "alive" configuration is the only possible one. Calculating probabilities starting with the end result commits the classic fallacy of painting the target after the arrow has been shot, pretending that a goal has been specified in advance; any event becomes arbitrarily improbable if specified with sufficient accuracy.
Response: We have taken as our premise what science has demonstrated to be the minimal requirement to start life. And therefore, the objection fails. 

The above calculation does not take into consideration that there are about 500 naturally occurring amino acids known. So let us suppose that they were extant on the prebiotic earth, and somehow a selective process chose, selected, and concentrated just the 20 used in life. These 20 could still come in two versions, left-handed and right-handed. Life uses only left-handed amino acids, so the right-handed ones would also have to be sorted out. Now let us suppose that, by freak accident and crazy shuffling trillions of trillions of times, suddenly the right protein set were there: the right proteins, the right assortment. It is claimed that there was no oxygen on the prebiotic earth. If that was so, then there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if there were a sequence producing a functional folding protein, by itself, if not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destinations, where they are utilized. Obviously, none of this was extant on the early earth.
 
Peptide bonding of amino acids to form proteins and its origins
https://reasonandscience.catsboard.com/t2130-peptide-bonding-of-amino-acids-to-form-proteins-and-its-origins

Most of the cell's important functions are carried out by compounds called proteins, which are chains of amino acids linked together. There are 20 amino acids, which can be arranged in any combination, and the average protein consists of over 400 amino acids linked together. The protein's characteristics and function are determined by the number and particular arrangement of its amino acids. A protein can be represented by a sentence which derives its meaning from the particular arrangement of letters, or amino acids. 4  According to evolutionary theories, amino acids were synthesized spontaneously and then linked together to form the first protein from a generic amino acid "soup." In experiments attempting to synthesize amino acids, the products have been a mixture of right-handed and left-handed amino acids. (Amino acids, as well as other organic compounds, can exist in two forms which have the same chemical composition but are three-dimensional mirror images of each other; thus termed right- and left-handed amino acids.) Every protein in a living cell is composed entirely of left-handed amino acids, even though the right-handed isomer can react in the same way. Thus, if both right- and left-handed amino acids were synthesized in this primitive organic soup, we are faced with the question of how life came to use only the left-handed amino acids for proteins.

We can represent this dilemma by picturing a huge container filled with millions of white (left-handed) and black (right-handed) jelly beans. What would be the probability of a blindfolded person randomly picking out 410 white jelly beans (representing the average-sized protein) and no black jelly beans? The odds that the first 410 jelly beans would all be one color are one in 2^410, or about one in 10^123. To put the odds in perspective, there are only about 10^18 seconds in 4.5 billion years, the approximate claimed age of the earth, and it has been estimated that there are only 10^80 particles in the universe.
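The jelly-bean odds reduce to a coin-flip calculation, which can be verified in a couple of lines of Python:

```python
import math

picks = 410   # jelly beans drawn, one per amino acid of an average protein
# each pick is 50/50 white (left-handed) or black (right-handed);
# the odds that all 410 come out the same handedness are ~1 in 2^410
exponent = picks * math.log10(2)
print(f"1 chance in 10^{exponent:.0f}")   # about 1 in 10^123
```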

Yet the improbability of choosing all left-handed amino acids, without even considering their particular order or specific arrangement, already dwarfs those numbers!

So let us suppose that there were about 500 different amino acids on the prebiotic earth, and somehow a selective process chose, selected, and concentrated just the 20 used in life, and sorted out all the right-handed amino acids, so that only left-handed ones remained.

About 500 naturally occurring amino acids are known (though only 20 appear in the genetic code) and can be classified in many ways
https://en.wikipedia.org/wiki/Amino_acid

We assumed a theoretical average size of 400 amino acids per protein. Since each of the 400 positions in the chain could be occupied by any one of the 20 amino acids, the total number of possible arrangements is 20^400, a truly enormous, super-astronomical number. The odds of one specific protein of 400 amino acids emerging randomly are therefore about 1 in 10^520.
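The conversion from 20^400 to a power of ten is a one-line logarithm:

```python
import math

# 20 choices at each of 400 positions: 20^400 possible sequences
exponent = 400 * math.log10(20)
print(f"20^400 = 10^{exponent:.1f}")   # about 10^520
```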


Considering the first one as already obtained, we need 559 more. The probability of hitting any one of them next is therefore 559/10^520. The third could be any of the 558 still needed, so its probability would be 558/10^520. Calculating all of these, and allowing for one substitution per chain, we arrive at a probability far beyond 1 in 10^100,000.
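The cumulative figure can be sketched as follows, assuming odds of 1 in 10^520 per 400-amino-acid protein (as above) and allowing the 560 proteins to arrive in any order, which is what the 559/10^520, 558/10^520, ... factors amount to. The factorial is kept in log space via math.lgamma:

```python
import math

n = 560             # distinct proteins required
per_protein = 520   # log10 of the odds against one protein (20^400 ~ 10^520)

# P(all 560 distinct proteins, in any order) = 560! / (10^520)^560
# In log10: n * per_protein - log10(n!), with log10(n!) = lgamma(n+1)/ln(10)
log10_n_factorial = math.lgamma(n + 1) / math.log(10)
exponent = n * per_protein - log10_n_factorial
print(f"1 chance in 10^{exponent:,.0f}")   # far beyond 1 in 10^100,000
```

Under these assumptions the combined odds come out near 1 in 10^290,000, comfortably beyond the 1 in 10^100,000 bound stated above.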

To link two amino acids together requires the removal of a water molecule and the supply of some 150 times more energy than heat in the Earth's oceans could supply. In the absence of a joining enzyme used by biology, or without an excessively large flux of ultraviolet light at the ocean surface, no new arrangements could be achieved. But even if the chemical barriers for the linkages were artificially and miraculously removed, the really vast improbability of 1 in 10^40,000 poses a serious dilemma for the whole of evolutionary science. 6

For a protein made from scratch in a prebiotic soup, the odds of finding such globally optimal solutions are infinitesimally small- somewhere between 1 in 10^140 and 1 in 10^164 for a 150 amino acid long sequence if we factor in the probabilities of forming peptide bonds and of incorporating only left handed amino acids. 5

Protein families are grouped into Clusters of Orthologous Groups (COGs), which typically serve the same function. This 561-COG set gives a low-end estimate for the LUCA proteome.
Prebiotic Evolution and Astrobiology 2009 Landes Bioscience Austin Texas

A minimal estimate for the gene content of the last universal common ancestor—exobiology from a terrestrial perspective
A truly minimal estimate of the gene content of the last universal common ancestor, obtained by three different tree construction methods and the inclusion or not
of eukaryotes (in total, there are 669 ortholog families distributed in 561 functional annotation descriptions, including 52 which remain uncharacterized)
http://sci-hub.tw/https://www.ncbi.nlm.nih.gov/pubmed/16431085/

The proteomic complexity and rise of the primordial ancestor of diversified life
A more recent study of 184 genomes identified 669 orthologous protein families, which cover 561 detailed functional classes that are involved in almost all essential biological processes of extant life, including translation, transcription and its regulation, DNA replication, recombination, and repair, transport and membrane-associated functions, electron transfer, and metabolism
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3123224/

Protein-length distributions for the three domains of life
The average protein length of these 110 clusters of orthologous genes COGs is 359 amino acids for the prokaryotes and 459 for eukaryotes.
https://pdfs.semanticscholar.org/5650/aaa06de4de11c36a940cf29c07f5f731f63c.pdf

So let us make the theoretical assumption that the threshold for a first living cell is 560 proteins with an average length of 400 amino acids.

The length of the average protein in the smallest known living thing is at least 400 amino acid links, containing more than 7,000 atoms.
https://web.archive.org/web/20170423032439/http://creationsafaris.com/epoi_c06.htm

Lies, Damned Lies, Statistics, and Probability of Abiogenesis Calculations
http://www.talkorigins.org/faqs/abioprob/abioprob.html
Claim: Firstly, the formation of biological polymers from monomers is a function of the laws of chemistry and biochemistry, and these are decidedly not random.
Response: The amino acid sequence of a polypeptide, together with the laws of chemistry and physics, causes the polypeptide to fold into a more compact structure, but the sequence that permits a protein to fold into a stable fold is not determined by the laws of chemistry; it is determined by the complex, specified code in DNA, which in a functional cell dictates the amino acids that will be synthesized at the ribosome. The instructions for the 3-D structure of a protein are embedded in the sequence of amino acids and the electrochemical attractive forces among them, and these are determined by the genetic code. Most proteins include all of the usual twenty kinds of amino acids, and each protein has a specific, exact sequence of these units.

Wiki: The genetic code is the set of rules used by living cells to translate information encoded within genetic material (DNA or mRNA sequences) into proteins. Translation is accomplished by the ribosome, which links amino acids in an order specified by messenger RNA (mRNA)

Objection: The entire premise is incorrect to start off with, because in modern abiogenesis theories the first "living things" would be much simpler, not even a protobacteria, or a preprotobacteria (what Oparin called a protobiont and Woese calls a progenote), but one or more simple molecules probably not more than 30-40 subunits long.
Response: In his book The Fifth Miracle: The Search for the Origin and Meaning of Life, Paul Davies lists the following characteristics of life:
Reproduction. Metabolism. Homeostasis. Nutrition. Complexity. Organization. Growth and development. Information content. Hardware/software entanglement. Permanence and change.

The paper: The proteomic complexity and rise of the primordial ancestor of diversified life describes :
The last universal common ancestor as the primordial cellular organism from which diversified life was derived. This urancestor accumulated genetic information before the rise of organismal lineages and is considered to be a complex 'cenancestor' with almost all essential biological processes.
http://europepmc.org/articles/pmc3123224

Objection: These simple molecules then slowly evolved into more cooperative self-replicating systems, then finally into simple organisms
Response: Koonin refutes that claim in his Book:  the logic of chance, page 266
Evolution by natural selection and drift can begin only after replication with sufficient fidelity is established. Even at that stage, the evolution of translation remains highly problematic. The emergence of the first replicator system, which represented the “Darwinian breakthrough,” was inevitably preceded by a succession of complex, difficult steps for which biological evolutionary mechanisms were not accessible. The synthesis of nucleotides and (at least) moderate-sized polynucleotides could not have evolved biologically and must have emerged abiogenically—that is, effectively by chance abetted by chemical selection, such as the preferential survival of stable RNA species. Translation is thought to have evolved later via an ad hoc selective process.   ( Did you read this ???!! An ad-hoc process ?? ) 


Objection: Another view is the first self-replicators were groups of catalysts, either protein enzymes or RNA ribozymes, that regenerated themselves as a catalytic cycle. It's not unlikely that a small catalytic complex could be formed. Each step is associated with a small increase in organisation and complexity, and the chemicals slowly climb towards organism-hood, rather than making one big leap
Response:   The Logic of Chance: The Nature and Origin of Biological Evolution By Eugene V. Koonin
Hence, the dramatic paradox of the origin of life is that, to attain the minimum complexity required for a biological system to start on the Darwin-Eigen spiral, a system of a far greater complexity appears to be required. How such a system could evolve is a  puzzle that defeats conventional evolutionary thinking, all of which is about biological systems moving along the spiral; the solution is bound to be unusual. 

Objection: As to the claim that the sequences of proteins cannot be changed, again this is nonsense. There are in most proteins regions where almost any amino acid can be substituted, and other regions where conservative substitutions (where charged amino acids can be swapped with other charged amino acids, neutral for other neutral amino acids, and hydrophobic amino acids for other hydrophobic amino acids) can be made.
Response: A protein requires a threshold of minimal size to fold and become functional within the milieu where it will operate. That threshold is, on average, 400 amino acids. That means that until the minimal size is reached, the polypeptide chain bears no function. So each protein can be considered irreducibly complex. Practically everyone has identically the same kind of haemoglobin molecules in his or her blood, identical down to the last amino acid and the last atom. Anyone having a different haemoglobin would be seriously ill or dead, because only the very slightest changes can be tolerated by the organism.

A. I. Oparin
Even the simplest of these substances [proteins] represent extremely complex compounds, containing many thousands of atoms of carbon, hydrogen, oxygen, and nitrogen arranged in absolutely definite patterns, which are specific for each separate substance. To the student of protein structure the spontaneous formation of such an atomic arrangement in the protein molecule would seem as improbable as would the accidental origin of the text of Virgil's "Aeneid" from scattered letter type.1

In order to start a probability calculation, it would have to be pre-established somehow that the twenty amino acids used in life had been pre-selected out of the over 500 different kinds of amino acids known in nature. They would have to be collected in one place, where they would be readily available. Secondly, amino acids are chiral, that is, they come in left- and right-handed forms. Life requires that all amino acids be left-handed. So there would have to exist another selection mechanism, sorting out the correct ones so that only left-handed amino acids remained (cells use complex biosynthesis pathways and enzymes to produce only left-handed amino acids). So let us suppose that somehow, out of the prebiotic pond, only the 20 left-handed, homochiral amino acids were sorted out.

The probability of generating one amino acid chain of 400 amino acids in successive random trials is (1/20)^400.

Claim: We were examining sequential trials as if there was only one protein/DNA/proto-replicator being assembled per trial. In fact there would be billions of simultaneous trials as the billions of building block molecules interacted in the oceans, or on the thousands of kilometers of shorelines that could provide catalytic surfaces or templates.
Response: It is claimed that there was no oxygen on the prebiotic earth. If that was so, then there would be no protection from UV light, which would destroy and disintegrate prebiotic organic compounds. Secondly, even if there were a sequence producing a functional folding protein, by itself, if not inserted in a functional way into a cell, it would have absolutely no function. It would just lie around and soon disintegrate. Furthermore, in modern cells proteins are tagged and transported on molecular highways to their precise destinations, where they are utilized. Obviously, none of this was extant on the early earth.

Abiogenesis and Probability
https://www.patheos.com/blogs/tippling/2017/08/28/abiogenesis-and-probability/

Problems with the creationists’ “it’s so improbable” calculations
1) They calculate the probability of the formation of a “modern” protein, or even a complete bacterium with all “modern” proteins, by random events. This is not the abiogenesis theory at all.
2) They assume that there is a fixed number of proteins, with fixed sequences for each protein, that are required for life.
3) They calculate the probability of sequential trials, rather than simultaneous trials.
4) They misunderstand what is meant by a probability calculation.
5) They seriously underestimate the number of functional enzymes/ribozymes present in a group of random sequences.

I will try and walk people through these various errors, and show why it is not possible to do a “probability of abiogenesis” calculation in any meaningful way.

A primordial protoplasmic globule
So the calculation goes that the probability of forming a given 300 amino acid long protein (say an enzyme like carboxypeptidase) randomly is (1/20)^300, or 1 chance in 2.04 x 10^390, which is astoundingly, mind-beggaringly improbable. This is then cranked up by adding on the probabilities of generating 400 or so similar enzymes until a figure is reached that is so huge that merely contemplating it causes your brain to dribble out your ears. This gives the impression that the formation of even the smallest organism seems totally impossible. However, this is completely incorrect.

Firstly, the formation of biological polymers from monomers is a function of the laws of chemistry and biochemistry, and these are decidedly not random.



Last edited by Otangelo on Wed Jun 23, 2021 6:55 am; edited 2 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

How much of protein sequence space has been explored by life on Earth?
A typical estimate of the size of sequence space is 20^100 (approx. 10^130) for a protein of 100 amino acids in which any of the normally occurring 20 amino acids can be found. This number is indeed gigantic
https://royalsocietypublishing.org/doi/10.1098/rsif.2008.0085

Bit by Bit: The Darwinian Basis of Life Gerald F. Joyce  Published: May 8, 2012
https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001323
Suppose a polymer (like RNA) 40 subunits long, with four possible monomers at each position (a quaternary heteropolymer). Then there would be 4^40, about 10^24, possible sequences. To represent all of these compositions at least once, and thus to establish a certainty that this simple ribozyme could have materialized, requires 27 kg of RNA chains, which classifies spontaneous emergence as a highly implausible event.
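Joyce's figure can be roughly reproduced in Python. The ~330 g/mol average mass per nucleotide residue is my assumption for the sketch; the rest follows from 4^40 possible sequences and Avogadro's number:

```python
import math

length = 40                       # RNA subunits per chain
n_sequences = 4 ** length         # possible 40-mer sequences from 4 bases
assert round(math.log10(n_sequences)) == 24   # ~10^24 sequences

# mass needed to hold one copy of each sequence
# (per-nucleotide residue mass is an assumed average of ~330 g/mol)
mass_per_mol = length * 330       # g/mol for one 40-mer
avogadro = 6.022e23               # chains per mole
mass_kg = n_sequences * mass_per_mol / avogadro / 1000
print(f"{mass_kg:.1f} kg of RNA")   # roughly 26-27 kg, matching Joyce's figure
```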

OPEN QUESTIONS IN ORIGIN OF LIFE: EXPERIMENTAL STUDIES ON THE ORIGIN OF NUCLEIC ACIDS AND PROTEINS WITH SPECIFIC AND FUNCTIONAL SEQUENCES BY A CHEMICAL SYNTHETIC BIOLOGY APPROACH February 2014
There is a conceptual problem, namely the emergence of specific sequences among a vast array of possible ones, the huge “sequence space”, leading to the question “why these macromolecules, and not the others?” One of the main open questions in the field of the origin of life is the biogenesis of proteins and nucleic acids as ordered sequences of monomeric residues, possibly in many identical copies. The first important consideration is that functional proteins and nucleic acids are chemically speaking copolymers, i.e., polymer formed by several different monomeric units, ordered in a very specific way.

Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult to characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues. The copolymeric nature of proteins and nucleic acid challenges our understanding of origin of life also from a theoretical viewpoint. The number of all possible combinations of the building blocks (20 amino acids, 4 nucleotides) forming copolymers of even moderate length is ‘astronomically’ high, and the total number of possible combinations it is often referred as the “sequence space”. Simple numerical considerations suggest that the exhaustive exploration of the sequence spaces, both for proteins and nucleic acid, was physically not possible in the early Universe, both for lack of time and limited chemical material. There are no methods described in the literature to efficiently generate long polypeptides, and we also lack a theory for explaining the origin of some macromolecular sequences instead of others.

The theoretical starting point is the fact that the number of natural proteins on Earth, although apparently large, is only a tiny fraction of all the possible ones. Indeed, there are thought to be roughly 10^13 proteins of all sizes in extant organisms. This number, however, is negligible when compared to the number of all theoretically possible different proteins. The discrepancy between the actual collection of proteins and all possible ones stands clear if one considers that the number of all possible 50-residues peptides that can be synthesized with the standard 20 amino acids is 20^50, namely 10^65. Moreover, the number of theoretically possible proteins increases with length, so that the related sequence space is beyond contemplation; in fact, if we take into account the living organisms, where the average length of proteins is much greater, the number of possible different proteins becomes even bigger. The difference between the number of possible proteins (i.e. the sequence space) and the number of those actually present in living organisms is comparable, in a figurative way, to the difference that exists between a drop of water and an entire Ocean. This means that there is an astronomically large number of proteins that have never been subjected to the long pathway of natural evolution on Earth: the “Never Born Proteins” (NBPs). Furthermore, the question whether a functionality is a common feature in the sequence space, or a rare result of natural selection, is of the utmost importance to elucidate the role of proteins in the origin of life and to fully exploit its biological potential and find new scaffolds for biological activities.
https://www.sciencedirect.com/science/article/pii/S2001037014600076


The Universe: Past and Present Reflections Fred Hoyle
https://calteches.library.caltech.edu/527/2/Hoyle.pdf
The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. The case of the enzymes is well known. Enzymes act as catalysts in speeding up chemical reactions that would otherwise go far too slowly, as in the breakdown, for example, of starch into sugar. If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem. It's easy to frame a deceitful answer to it. Start with much simpler, much smaller enzymes, which are sufficiently elementary to be discoverable by chance; then let evolution in some chemical environment cause the simple enzymes to change gradually into the complex ones we have today. The deceit here comes from omitting to explain what is in the environment that causes such an evolution. The improbability of finding the appropriate orderings of amino acids is simply being concealed in the behavior of the environment if one uses that style of argument.

I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe.  So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature. The thought occurred to me one day that:

The human chemical industry doesn't chance on its products by throwing chemicals at random into a stewpot. To suggest to the research department at DuPont that it should proceed in such a fashion would be thought ridiculous.

Wasn't it even more ridiculous to suppose that the vastly more complicated systems of biology had been obtained by throwing chemicals at random into a wildly chaotic astronomical stewpot? By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes. And given a knowledge of the appropriate ordering of amino acids, it would need only a slightly superhuman chemist to construct the enzymes with 100 percent accuracy. It would need a somewhat more superhuman scientist, again given the appropriate instructions, to assemble it himself, but not a level of scale outside our comprehension. Rather than accept the fantastically small probability of life having arisen through the blind forces of nature, it seemed better to suppose that the origin of life was a deliberate intellectual act. By "better" I mean less likely to be wrong. Suppose a spaceship approaches the earth, but not close enough for the spaceship's imaginary inhabitants to distinguish individual terrestrial animals. They do see growing crops, roads, bridges, however, and a debate ensues. Are these chance formations or are they the products of an intelligence? Taking the view, palatable to most ordinary folk but exceedingly unpalatable to scientists, that there is an enormous intelligence abroad in the universe, it becomes necessary to write blind forces out of astronomy.

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language supercalculating intellects use: Some supercalculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.

A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question.

Steve Meyer, Signature in the Cell:
The conditional probability that just one of these information-rich molecules arose by chance—in effect, the chance that chance is true—is much less than one-half. It is less than one in a trillion trillion. Thus, I concluded that it is more reasonable to reject the chance hypothesis than to accept it.



Last edited by Otangelo on Wed Jun 23, 2021 7:40 am; edited 1 time in total

https://reasonandscience.catsboard.com

Otangelo


Admin

How much is the universe finely tuned to host life?

https://reasonandscience.catsboard.com/t2508-abiogenesis-uncertainty-quantification-of-a-primordial-ancestor-with-a-minimal-proteome-emerging-through-unguided-natural-random-events#8681

1. The more statistically improbable something is, the less it makes sense to believe that it just happened by blind chance.
2. In order to have a universe able to host various forms of life on earth, 1,464 (140 + 402 + 922) different features and fine-tuned parameters must be just right.
3. Statistically, it is practically impossible, that the universe was finely tuned to permit life by chance.  
4. Therefore, an intelligent Designer is by far the best explanation of origins.  

RTB Design Compendium (2009) 5

Fine-Tuning for Life in the Universe
140 features of the cosmos as a whole (including the laws of physics) must fall within certain narrow ranges to allow for the possibility of physical life's existence. 1

Fine-Tuning for Intelligent Physical Life
402 quantifiable characteristics of a planetary system and its galaxy must fall within narrow ranges to allow for the possibility of advanced life's existence. This list includes comments on how a slight increase or decrease in the value of each characteristic would impact that possibility. 2

Probability Estimates for Features Required by Various Life Forms
922 characteristics of a galaxy and of a planetary system on which physical life depends, with conservative estimates of the probability that any galaxy or planetary system would manifest such characteristics. This list is divided into three parts, based on differing requirements for various life forms and their duration. 3 and 4

Let's make an illustration here:
Mega Millions is up to $970 million—there’s one way to up the odds of winning, according to a Harvard statistics professor  1
Players pick six numbers from two separate pools: five different numbers from 1 to 70 (the white balls) and one number from 1 to 25 (the gold Mega Ball). You win the jackpot by matching all six winning numbers in a drawing. The odds of winning the jackpot are 1 in 302,575,350, or approximately 1 in 3 x 10^8.

To facilitate our calculation, and make an illustration, let's suppose the odds of a single jackpot were 1 in 100,000,000, that is, 1 in 10^8.
If that person would win twice in a row, the odds would be 1 in 10^16
3 times in a row, 1 in 10^24
4 times in a row, 1 in 10^32
5 times in a row, 1 in 10^40
6 times in a row, 1 in 10^48
7 times in a row, 1 in 10^56
8 times in a row, 1 in 10^64
9 times in a row, 1 in 10^72
10 times in a row, 1 in 10^80
17 times in a row, 1 in 10^136
(For independent drawings, the probabilities multiply, so the exponents add: n consecutive jackpots have odds of 1 in 10^(8n).)

Winning seven consecutive jackpots (about 1 in 10^56) is roughly the improbability of ONE fine-tuning parameter of the universe: the odds of getting the right expansion rate of the big bang are about 1 in 10^55. And winning seventeen in a row (1 in 10^136) already approaches the probabilistic resources of the entire observable universe (10^139 possible events).
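Both the exact jackpot odds and the consecutive-win arithmetic can be checked in a few lines. This is only a sketch: it assumes independent drawings, and for the powers it uses the simplified 1-in-10^8 odds.

```python
import math

# Exact Mega Millions jackpot odds: choose 5 white balls out of 70,
# plus 1 gold Mega Ball out of 25.
jackpot_odds = math.comb(70, 5) * 25
print(jackpot_odds)  # 302575350

# With the simplified odds of 1 in 10^8 per jackpot, n independent
# wins in a row have odds of 1 in 10^(8 * n): the exponents add.
for n in (2, 7, 17):
    print(f"{n} wins in a row: 1 in 10^{8 * n}")
```

The key design point is that consecutive independent wins multiply probabilities, which is why the exponent grows linearly in the number of wins, not by doubling.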

Initial entropy: 1 part in 10^(10^123)
Gravitational constant: 1 part in 10^34
Electromagnetic force versus force of gravity: 1 part in 10^37
Cosmological constant: 1 part in 10^123
The mass density of the universe: 1 part in 10^59
The chance of getting a universe with stars: 1 in 10^229
The chance of getting the force of gravity just right for life to exist: 1 in 10^21
The chance of getting the strong nuclear force just right for life to exist: 1 in 10^21


Here I am mentioning just 8 fine-tuned parameters. But literally thousands must be finely tuned. By cosmic evolution, or by design?

1. https://d4bge0zxg5qba.cloudfront.net/files/compendium/compendium_part1.pdf
2. https://d4bge0zxg5qba.cloudfront.net/files/compendium/compendium_part2.pdf
3. https://d4bge0zxg5qba.cloudfront.net/files/compendium/compendium_Part3_ver2.pdf
4. https://d4bge0zxg5qba.cloudfront.net/files/compendium/compendium_Part4_ver2.pdf
5. https://reasons.org/explore/publications/articles/rtb-design-compendium-2009
6. https://www.megamillions.com/How-to-Play.aspx



Last edited by Otangelo on Wed Jun 23, 2021 5:37 am; edited 3 times in total

https://reasonandscience.catsboard.com

Otangelo


Admin

The Criterion: The "Cosmic Limit" Law of Chance

https://reasonandscience.catsboard.com/t2508-abiogenesis-uncertainty-quantification-of-a-primordial-ancestor-with-a-minimal-proteome-emerging-through-unguided-natural-random-events#8688


1. The more statistically improbable something is, the less it makes sense to believe that it just happened by blind chance. There is a threshold (1 in 10^50) beyond which something can be considered statistically impossible.
2. Statistically, it is impossible that the laws of physics, the Big Bang, the initial conditions of the universe, the four fundamental forces, the fine-tuning of the subatomic particles, the Milky Way Galaxy, the Solar System, the sun, the earth, the moon, the electromagnetic spectrum, and biochemistry, that is, the primordial genome, proteome, metabolome, and interactome of the first living cell, arose by chance.
3. Furthermore, we see in biochemistry purposeful design.  
4. Therefore, an intelligent Designer is by far the best explanation of the origin of the physical universe, life, and biodiversity. 

Threshold of what can be considered possible/impossible to be produced by unguided random events:
To arrive at a statistical "proof," we need a reasonable criterion to judge it by:
As a starting point, consider that many statisticians regard any occurrence with a chance of less than one in 10^50 as having, for practical purposes, a probability of zero. (10^50 is a 1 with 50 zeros after it, spoken "ten to the fiftieth power.") This appraisal seems fairly reasonable when you consider that 10^50 is roughly the number of atoms that make up the planet earth. So overcoming odds of one in 10^50 is like marking one specific atom of the earth, mixing it back in completely, and then having someone make one blind, random selection that turns out to be that very atom. Many mathematicians and scientists have accepted this statistical standard for practical purposes.
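The "atoms in the earth" figure can be sanity-checked from the earth's mass. This is a rough sketch; the ~30 g/mol average atomic mass is my own assumption for the earth's bulk composition, not a figure from the text.

```python
# Rough sanity check of "10^50 atoms make up the planet earth".
earth_mass_g = 5.97e27     # earth's mass, 5.97 x 10^24 kg, in grams
avg_molar_mass_g = 30.0    # assumed average g/mol (iron, oxygen, silicon, magnesium mix)
avogadro = 6.022e23        # atoms per mole

atoms_in_earth = earth_mass_g / avg_molar_mass_g * avogadro
print(f"{atoms_in_earth:.1e}")  # ~1.2e+50
```

The result lands within a factor of a few of 10^50, which is close enough for the order-of-magnitude argument being made.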

The odds to have life from non-life by natural means: 
The odds of a functional proteome arising by unguided means, in the case of Pelagibacter, the smallest known bacterium and life-form, with 1,350 proteins averaging 300 amino acids in length: 1 in 10^722000
The odds of connecting all 1,350 proteins in the right, functional order: about 1 in 4^3600
The odds of having both a proteome and an interactome: about 1 in 10^725600

But even before the prebiotic earth could start the shuffling, there would have to be:

A life-permitting universe. That entails:
Finely tuned laws of physics, the fine-tuning of the Big Bang, the cosmological constant, the initial conditions of the universe, the four fundamental forces and coupling constants, the fine-tuning of the subatomic particles and their constants, the fine-structure constant, the Milky Way Galaxy, the Solar System, the sun, the earth, the moon, the electromagnetic spectrum, and biochemistry.

According to RTB Design Compendium (2009):
Hugh Ross: Part 1. Fine-Tuning for Life in the Universe  2008
In order to have a life-permitting universe, 140 features of the cosmos as a whole (including the laws of physics) must fall within certain narrow ranges to allow for the possibility of physical life's existence.

To give just a few examples, the odds for: 
The expansion rate of the universe: 1 part in 10^55
Initial entropy: 1 part in 10^(10^123)
Cosmological constant: 1 part in 10^123
Gravitational constant: 1 part in 10^34
Electromagnetic force versus the force of gravity: 1 part in 10^37
The mass density of the universe: 1 part in 10^59
The chance of getting a universe with stars: 1 in 10^229
The chance of getting the force of gravity just right for life to exist: 1 in 10^21
The chance of getting the strong nuclear force just right for life to exist: 1 in 10^21
The electromagnetic force constant exhibits moderate fine-tuning of 1 part in 25
The strong nuclear force constant is fine-tuned to 1 part in 200
The ratio of the weak nuclear force constant to the strong nuclear force constant had to have been set with a precision of 1 part in 10,000

Hugh Ross:  Part 2. Fine-Tuning for Intelligent Physical Life  2008
In order to have intelligent physical life,  402 quantifiable characteristics of a planetary system and its galaxy must fall within narrow ranges to allow for the possibility of advanced life’s existence. This list includes comments on how a slight increase or decrease in the value of each characteristic would impact that possibility.  

Hugh Ross: Part 3. Probability Estimates for the Features Required by Various Life Forms  2008
And the probability estimates for the features required by various life forms: 922 characteristics of a galaxy and of a planetary system on which physical life depends, with conservative estimates of the probability that any galaxy or planetary system would manifest such characteristics.

The earth, in order to be life-permitting, requires:
Steady plate tectonics, the right amount of water in the crust, a large moon with the right planetary rotation period, a proper concentration of sulfur, the right planetary mass, being near the inner edge of the circumstellar habitable zone, having a low-eccentricity orbit outside spin-orbit and giant-planet resonances, having a few large Jupiter-mass planetary neighbors in large circular orbits, protection of the earth from the most dangerous comets, being at the outside of a spiral arm of the galaxy, near the co-rotation circle of the galaxy, within the galactic habitable zone, having the earth's magnetic field, having the right atmospheric pressure, having the earth slightly tilted, and having a stabilized surface temperature.

Probability for the occurrence of all 816 parameters: about 1 in 10^1054
Maximum possible number of life-support bodies in the observable universe: about 10^22
Thus, less than 1 chance in 10^1032 exists that even one such life-support body would occur anywhere in the universe without invoking divine miracles.
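Combining many independent fine-tuning probabilities is just exponent addition, since probabilities of independent events multiply. As a sketch, here are five of the parameters listed above, treated (as the compendium does) as independent; the selection of five is mine, for illustration only.

```python
# Exponents of five fine-tuning parameters from the list above:
# expansion rate (10^55), cosmological constant (10^123),
# gravitational constant (10^34), electromagnetism vs. gravity (10^37),
# mass density of the universe (10^59).
exponents = [55, 123, 34, 37, 59]

# Independent probabilities multiply, so their exponents add.
combined = sum(exponents)
print(f"combined odds: 1 in 10^{combined}")  # 1 in 10^308
```

This is how a list of hundreds of individually modest improbabilities compounds into a single astronomically small joint probability.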

The total number of events that could have occurred in the observable universe since the origin of the universe is 10^139. This provides a measure of the probabilistic resources of the entire observable universe.
Estimated number of grains of sand on earth: 7.5 x 10^18
Estimated number of stars in the universe: 10^24
Estimated number of atoms in the known, observable universe: 10^82

Mega Millions is up to $970 million—there’s one way to up the odds of winning, according to a Harvard statistics professor
Players pick six numbers from two separate pools: five different numbers from 1 to 70 (the white balls) and one number from 1 to 25 (the gold Mega Ball). You win the jackpot by matching all six winning numbers in a drawing. The odds of winning the jackpot are 1 in 302,575,350, or approximately 1 in 3 x 10^8.
Odds of winning 17 jackpots in a row (taking one jackpot as roughly 1 in 10^8): about 1 in 10^136
The odds of having both a proteome and an interactome: about 1 in 10^725600. Reaching odds of that order would take more than 90,000 consecutive jackpot wins.

https://www.youtube.com/watch?v=7YOkOmTxOWs

https://reasonandscience.catsboard.com

Otangelo


Admin

Can random unguided events by chance explain the emergence of a finely tuned universe, and life?


https://reasonandscience.catsboard.com/t3154-can-random-unguided-events-by-chance-explain-the-emergence-of-a-finely-tuned-universe-and-life

Claim: the origin of life is overwhelmingly improbable, but as long as there is at least some chance a minimal proteome to kick-start life arising by natural means, then we shouldn’t reject the possibility that it did.
Reply: Chance is a possible explanation for the various finely tuned parameters and constants required for a life-permitting universe, and for a minimal genome, proteome, metabolome, and interactome emerging by stochastic, unguided means, and consequently for the origin of life. But it doesn't follow that it is plausible, or the best explanation. Here is why.

What is the maximal number of possible simultaneous interactions in the entire history of the universe, starting 13.7 billion years ago?
There have been roughly 10^16 seconds since the big bang. There is a limit to the number of physical transitions that can occur from one state to another within a given unit of time. A physical transition from one state to another cannot take place faster than light can traverse the smallest physically significant unit of distance, the so-called Planck length. Therefore, the time it takes light to traverse this smallest distance determines the shortest time in which any physical effect can occur: the Planck time of 10^-43 seconds. Based on that, we can calculate the largest number of opportunities for any physical event to occur in the observable universe since the big bang. Since all 10^80 elementary particles can interact with each other at most 10^43 times per second, since there are a limited number of elementary particles, and since there has been a limited amount of time, 10^16 seconds, since the big bang, there are a limited number of opportunities for any given event to occur in the entire history of the universe.

The maximal number of possible simultaneous interactions in the entire history of the universe, starting 13.7 billion years ago, can be calculated by multiplying the three relevant factors together: the number of atoms (10^80) in the universe, times the number of seconds that have passed since the big bang (10^16), times the number of possible simultaneous interactions of all atoms per second (10^43). This calculation fixes the total number of events that could have occurred in the observable universe since its origin at 10^139. This provides a measure of the probabilistic resources of the entire observable universe.
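The arithmetic behind the 10^139 figure can be checked in a few lines, a sketch using the text's own estimates for each factor:

```python
import math

# Probabilistic resources of the observable universe.
atoms = 10 ** 80      # elementary particles in the observable universe
seconds = 10 ** 16    # seconds since the big bang (rounded estimate)
rate = 10 ** 43       # fastest state changes per second (1 / Planck time)

max_events = atoms * seconds * rate
print(f"10^{round(math.log10(max_events))}")  # 10^139
```

Multiplying powers of ten simply adds the exponents: 80 + 16 + 43 = 139.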
 
There are about 10^80 atoms in the observable universe. Thus, if the odds of having a specific outcome had been 1 in 10^80, we could have said that’s like finding a specific atom marked red among all the particles in the universe.
The odds of having an interactome (finding a functional set of about 1,300 proteins in the bacterium P. ubique, the smallest known life-form, and interconnecting them so that they become functional) are 1 in 10^725600. That probability is 725,520 orders of magnitude (powers of ten) smaller than the probability of finding the marked particle in the whole universe. Put another way, the probability of finding a functional interactome of P. ubique by chance alone is 10^725520 times smaller than the odds of finding a single specified particle among all the particles in the universe.

And the problem is even worse than this. More typical proteins have hundreds of amino acids; I used an average of 300 amino acids to calculate the odds. But, for example, the typical RNA polymerase, the large molecular machine the cell uses to copy genetic information during transcription, has over 3,000 functionally specified amino acids. The ribosome had to be fully operational when life began. The bacterial ribosome consists of three rRNA molecules and approximately 55 protein components that must be put together in an intricate and tightly regulated way. Formation of the ribosome involves a complex series of processes: synthesis, processing, and modification of both rRNA and ribosomal proteins, and assembly of the components. The quality of the particle must also be checked and the amount of active ribosomes monitored. All of these events must be tightly regulated and coordinated to avoid energy losses and imbalances in cell physiology. The small subunit contains the 16S rRNA (1,542 nucleotides) and 21 ribosomal proteins (r-proteins), while the large subunit is composed of two rRNAs, the 23S (2,904 nucleotides) and 5S (120 nucleotides) rRNAs, and 33 proteins.

The probability of producing the ribosome alone by chance would be trillions of orders of magnitude beyond the probabilistic resources of the entire universe, the 10^139 possible events since the big bang, 13.7 billion years ago.


We can calculate the odds that a minimal functional proteome would arise by unguided random natural events, not considering all other essential things to get a first living self-replicating cell.

Mycoplasma is often cited as marking the threshold between the living and the non-living, held to be the smallest possible living self-replicating cell. It is, however, a pathogen, an endosymbiont that lives and survives only within the body or cells of another organism (humans). As such, it IMPORTS many nutrients from the host organism. The host provides most of the nutrients such bacteria require, so they do not need the genes for producing those compounds themselves. Mycoplasma therefore does not require the same complexity of biosynthetic pathways as a free-living bacterium, which must manufacture all its nutrients itself.

Pelagibacter ubique bacteria are the smallest and simplest self-replicating, free-living cells known. Pelagibacter genomes (~1,300 genes and 1.3 million base pairs) devolved from a slightly larger common ancestor (~2,000 genes). Pelagibacter is an alphaproteobacterium; on the evolutionary timescale, its common ancestor supposedly emerged about 1.3 billion years ago. The oldest bacteria known, however, are cyanobacteria, living in rocks in Greenland about 3.7 billion years ago. With a genome size of approximately 3.2 million base pairs (Raphidiopsis brookii D9), theirs are the smallest genomes described for free-living cyanobacteria. This is a paradox: the oldest known life-forms have a considerably bigger genome than Pelagibacter, which makes their origin far more unlikely from a naturalistic standpoint. The improbability of finding just ONE protein-domain-sized fold of 250 amino acids is 1 in 10^77. That means that to find just one functional protein fold about 250 amino acids long, nature would have to search among roughly as many non-functional folds as there are atoms in our known universe (about 10^80 atoms). We will soon see the odds of finding an entire functional genome of Pelagibacter, with its 1.3 million nucleotides, which, based on the data above, was not even the earliest bacterium.

Pelagibacter has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1.3 million base pairs, coding for 1,300 proteins. The chance of getting its entire proteome by unguided means would be 1 in 10^722,000. The discrepancy between the functional space and the sequence space is staggering.
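To see the kind of numbers involved, consider the sequence space of a single protein chain. This is only an arithmetic sketch of the scale, not the text's derivation of the 10^722,000 figure for the whole proteome:

```python
import math

# Sequence space of one 300-amino-acid chain built from the
# 20 proteinogenic amino acids: 20 choices at each of 300 positions.
chain_length = 300
sequence_space = 20 ** chain_length
exponent = chain_length * math.log10(20)
print(f"20^{chain_length} is about 10^{exponent:.0f}")  # ~10^390
```

A single average-length chain already has a sequence space of roughly 10^390, and a proteome multiplies hundreds of such spaces together.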

To calculate the odds, see: https://web.archive.org/web/20170423032439/http://creationsafaris.com/epoi_c06.htm#ec06f12x

Steve Meyer: Signature in the Cell, chapter 10: Taking this into account only causes the improbability of generating the necessary proteins by chance—or the genetic information to produce them—to balloon beyond comprehension. In 1983 distinguished British cosmologist Sir Fred Hoyle calculated the odds of producing the proteins necessary to service a simple one-celled organism by chance at 1 in 10^40,000 . At that time scientists could have questioned his figure. Scientists knew how long proteins were and roughly how many protein types there were in simple cells. But since the amount of functionally specified information in each protein had not yet been measured, probability calculations like Hoyle’s required some guesswork. Axe’s experimental findings suggest that Hoyle’s guesses were pretty good. If we assume that a minimally complex cell needs at least 250 proteins of, on average, 150 amino acids and that the probability of producing just one such protein is 1 in 10^164 as calculated above, then the probability of producing all the necessary proteins needed to service a minimally complex cell is 1 in 10^164 multiplied by itself 250 times, or 1 in 10^41,000.
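The multiplication Meyer describes is again exponent arithmetic: raising a probability to a power multiplies its exponent. A minimal sketch of his step, assuming (as the quote does) that the 250 proteins are independent:

```python
# Meyer's calculation: odds of 1 in 10^164 per protein,
# 250 proteins, so (10^-164)^250 = 10^-(164 * 250).
per_protein_exponent = 164
n_proteins = 250
total_exponent = per_protein_exponent * n_proteins
print(f"1 in 10^{total_exponent}")  # 1 in 10^41000
```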

https://reasonandscience.catsboard.com

Otangelo


Admin

The probability of a certain event occurring depends on how many possible outcomes the event has. If an event has only one possible outcome, the probability for this outcome is always 1 (or 100 percent). If there is more than one possible outcome, however, this changes. A simple example is the coin toss. If you toss a coin, there are two possible outcomes (heads or tails). As long as the coin was not manipulated, the theoretical probabilities of both outcomes are the same–they are equally probable. The sum of all possible outcomes is always 1 (or 100 percent) because it is certain that one of the possible outcomes will happen.
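The coin-toss example can be written out directly; the sketch below uses exact fractions so that the probabilities sum to exactly 1:

```python
from fractions import Fraction

# A fair coin: two equally probable outcomes, and the probabilities
# of all possible outcomes sum to 1 (certainty).
outcomes = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
total = sum(outcomes.values())
print(total)  # 1
```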

The maximal number of possible simultaneous interactions in the entire history of the universe, starting 13.7 billion years ago, can be calculated by multiplying the three relevant factors together: the number of atoms (10^80) in the universe, times the number of seconds that have passed since the big bang (10^16), times the number of possible simultaneous interactions of all atoms per second (10^43). This calculation fixes the total number of events that could have occurred in the observable universe since its origin at 10^139. This provides a measure of the probabilistic resources of the entire observable universe.

If the odds of an event occurring are smaller than what the entire probabilistic resources of the universe could overcome, then we can confidently say that the event will not occur by chance.

140 features of the cosmos as a whole (including the laws of physics) must fall within certain narrow ranges to allow for the possibility of physical life’s existence.
402 quantifiable characteristics of a planetary system and its galaxy must fall within narrow ranges to allow for the possibility of advanced life’s existence.
Less than 1 chance in 10^390 exists that even one planet containing the necessary kinds of life would occur anywhere in the universe without invoking divine miracles.

The odds to have life from non-life by natural means:
Probability of the occurrence of a functional proteome by unguided means, which in the case of Pelagibacter, the smallest known bacterium and life-form, comprises 1,350 proteins averaging 300 amino acids in length: 1 in 10^722000
Probability of connecting all 1,350 proteins in the right, functional order: about 1 in 4^3600
Probability of having both a minimal proteome and an interactome: about 1 in 10^725600

https://reasonandscience.catsboard.com

Otangelo


Admin

The odds to have life from non-life by natural means:

The Big Bang
1. Gravitational constant G: 1/10^60
2. Omega Ω, the density of dark matter: 1/10^62 or less
3. Hubble constant H0: 1 part in 10^60
4. Lambda, the cosmological constant: 1 part in 10^122
5. Primordial Fluctuations Q:  1/100,000
6. Matter-antimatter symmetry: 1 in 10,000,000,000
7. The low-entropy state of the universe: 1 in 10^(10^123)

Fine-tuning of the  fundamental forces of the universe to permit life
1. Gravity: 1 part in 10^21
2. Strong force: 1 part in 10^12
3. Weak force: 1 chance out of 1000
4. Electromagnetic force: 1 chance out of 1000

Masses of atoms: the fine-tuning of the masses of the up and down quarks: 1 part in 10^21

Carbon synthesis: 1 part in 100

Life supporting planet, like the earth
One life-support planet:  Less than 1 chance in 10^390

A minimal life form, like Pelagibacter ubique
Probability of the occurrence of a functional proteome, with 1,350 proteins averaging 300 amino acids in length, by unguided means: 1 in 10^722000
Probability of connecting all 1,350 proteins in the right, functional order: about 1 in 4^3600
Probability of having both a minimal proteome and an interactome: about 1 in 10^725600

https://reasonandscience.catsboard.com

Otangelo


Admin

The maximal number of possible simultaneous interactions in the entire history of the universe, starting 13,7 billion years ago, can be calculated by multiplying the three relevant factors together:

The number of atoms in the entire universe = 1 x 10^80
The estimated age of the universe: 13.7 billion years. In seconds, that is roughly 1 x 10^16
The fastest rate at which an atom can change its state = 1 x 10^43 times per second
Therefore, the maximum number of possible events in a universe that is 13.7 billion years old (10^16 seconds), where every atom (10^80) is changing its state at the maximum rate of 10^43 times per second, is 10^139.

10^139 written out, a 1 followed by 139 zeros:
10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

This provides a measure of the probabilistic resources of the entire observable universe. If the odds of an event occurring are smaller than what these probabilistic resources could overcome, then we can confidently say that the event will not occur by chance.

https://reasonandscience.catsboard.com

Otangelo


Admin

I have never seen random, irregular, undetermined, undirected, stochastic, accidental events, driven by no mechanism or agent but by pure luck with chance/probability in their favor, hitting the jackpot: creating hardware like a hard disk, an information storage device, and in parallel creating the accompanying software, composed of a language with an alphabet, statistics, grammar, syntax, and apobetics, a collection of punctuation marks and regulatory sites, sharing 10 of the 13 characteristics of human language; and furthermore a translation mechanism, a cipher or code, and information transmission systems that encode, transmit, and decode through transcription and translation.

Then, furthermore, using that computer-like information system to create a digital blueprint: complex specified instructional assembly information, instituted through the function-bearing sequence of the words of that language, stored in that information storage device, and transformed into an identical representation in analog 3D form, the physical 'reality' of that description. That information directs the making and operation of thousands of machines, each composed of several irreducibly complex parts, many interlinked in just the right way to create robot-like production lines, resulting in a self-replicating factory of the highest technological sophistication: robust, error-tolerant, and able to adapt to the most variegated external conditions.

All this is driven by energy turbines that permit the operation of the factory. Making a factory depends on hardware/software, building blocks, and energy, none of which bears any function on its own. All three need to work in a joint venture, and if one component is missing, nothing functions.

Concluding that unguided stochastic events are not a plausible explanation for the origin of everything described above, which is analogously what living cells are, is not an argument from incredulity or ignorance. When we have two competing alternatives, intelligence and non-intelligence, and we have repeated experience that intelligence can instantiate all of this, while we have never observed non-intelligence being capable of doing it, we are rationally warranted to conclude that intelligent design is the more case-adequate explanation.

https://reasonandscience.catsboard.com

Otangelo


Admin

Claim: The staggering improbability of you ever being born is mind-boggling, yet, here you are.
Reply: While it is true that the odds of any specific individual being born are incredibly low, this objection fails to address the fundamental issues raised by the fine-tuning argument and the astoundingly low probabilities associated with the conditions necessary for life and the universe. The objection conflates two distinct levels of improbability: the improbability of an individual's existence and the improbability of the universe being finely tuned to support life. These are separate issues that cannot be equated or used to dismiss one another.

The improbability of an individual's existence arises from the vast number of potential genetic combinations and the specific circumstances that led to their conception and birth. While this improbability is indeed mind-boggling, it operates within the framework of an already existing universe with specific laws and conditions that permit life. The fine-tuning argument, on the other hand, addresses the improbability of the universe itself being finely tuned to allow for the existence of life. The odds presented, such as 1 in 10^(10^123) or 1 in 10^(10^243), relate to the precise combination of fundamental constants, physical laws, and initial conditions that make our universe hospitable to life.

These two levels of improbability are fundamentally different in scale and significance. While the improbability of an individual's existence is indeed remarkable, it pales in comparison to the staggering improbability of the universe being finely tuned to support life at all. Furthermore, the objection fails to address the implications of these astoundingly low probabilities for the fine-tuning of the universe. The existence of any individual, while improbable, does not negate the need to explore and understand the underlying principles and mechanisms that have given rise to a life-permitting universe.

Claim: all stated odds are meaningless when in an infinite setting.
Reply: The claim that all stated odds are meaningless in an infinite setting is an attempt to dismiss the significance of the incredibly low probabilities associated with the fine-tuning of the universe and the conditions necessary for life. However, this argument fails to adequately address the fundamental issues raised by these astoundingly low probabilities. While it is true that in an infinite setting, even the most improbable events could theoretically occur, this does not negate the importance or relevance of the stated odds. The odds presented, such as 1 in 10^(10^123) for key cosmic parameters or 1 in 10^(10^243) as an overall upper bound, are so infinitesimally small that they challenge our understanding of what can be reasonably attributed to chance, even in an infinite setting.

The claim that these odds are meaningless in an infinite setting assumes that the universe or the opportunities for fine-tuning are indeed infinite. However, this assumption itself is highly debatable and lacks empirical evidence. Even if the universe were infinitely large or infinitely old, it does not necessarily follow that the opportunities for fine-tuning are infinite or that the conditions necessary for life can be met an infinite number of times.

Furthermore, the fine-tuning argument is not solely concerned with the mere possibility of life arising, but rather with the specific conditions and parameters required for the universe and life as we know it to exist. 
While the concept of infinity can sometimes lead to counterintuitive conclusions, it does not render probabilities or odds meaningless. Even if we entertain the idea of a multiverse generating an infinite number of universes, it does not truly solve the problem of the astoundingly low odds for the fine-tuning required for our specific universe and the conditions necessary for life. If we assume that a multiverse generator exists and is capable of spawning an infinite number of universes with different parameters and conditions, the fact remains that the precise combination of finely-tuned parameters that enable our observable reality would still be an incredibly rare and improbable event. The odds, such as 1 in 10^(10^123) or 1 in 10^(10^243), are so infinitesimally small that even in an infinite setting, the existence of our universe would be an utterly extreme rarity. This line of reasoning merely pushes the problem back one step. Even if a multiverse generator could produce an infinite number of universes, the existence of such a generator itself would require an explanation. A multiverse generator would need to be an incredibly complex and finely-tuned system, capable of generating an infinite number of universes with different parameters and conditions. The question then arises: What is the origin of this multiverse generator, and what are the odds of its existence? A multiverse generator would also require a beginning, implying the need for a cause or an underlying principle that brought it into existence. This cause or principle would itself need to be explained, leading to an infinite regress of explanations or a fundamental first cause. Rather than truly solving the problem of the astoundingly low odds for the fine-tuning required for our universe, the multiverse hypothesis merely shifts the issue to a different level. 
It does not address the fundamental question of why our specific universe, with its precise combination of finely-tuned parameters, exists in the first place. Additionally, the multiverse hypothesis raises its own set of philosophical and scientific questions, such as the nature of the multiverse, the mechanisms by which universes are generated, and the possibility of observing or interacting with other universes. Rather than dismissing the stated odds as meaningless, it is more productive to critically examine the assumptions, models, and evidence that underlie these calculations. If the odds truly are as astoundingly low as presented, it is incumbent upon us to explore the implications and seek deeper explanations for the apparent fine-tuning of the universe and the conditions necessary for life.


David J. Hand (2014): Math Explains Likely Long Shots, Miracles and Winning the Lottery. Why you should not be surprised when long shots, miracles and other extraordinary events occur—even when the same six winning lottery numbers come up in two successive drawings

Claim: A set of mathematical laws that I call the Improbability Principle tells us that we should not be surprised by coincidences. In fact, we should expect coincidences to happen. One of the key strands of the principle is the law of truly large numbers. This law says that given enough opportunities, we should expect a specified event to happen, no matter how unlikely it may be at each opportunity. Sometimes, though, when there are really many opportunities, it can look as if there are only relatively few. This misperception leads us to grossly underestimate the probability of an event: we think something is incredibly unlikely, when it's actually very likely, perhaps almost certain. How can a huge number of opportunities occur without people realizing they are there? The law of combinations, a related strand of the Improbability Principle, points the way. It says: the number of combinations of interacting elements increases exponentially with the number of elements. The “birthday problem” is a well-known example.

Reply: The claim made about the Improbability Principle and the law of truly large numbers is an attempt to downplay the significance of the incredibly low probabilities associated with the fine-tuning of various parameters required for life and our universe. However, this argument fails to adequately address the staggering odds presented in the detailed list of finely-tuned parameters. While it is true that with a large enough number of opportunities, even highly improbable events can occur, the odds listed here are so infinitesimally small that they defy reasonable explanations based solely on the law of truly large numbers or the law of combinations.
For example, the overall odds of fine-tuning for the parameters related to particle physics, fundamental constants, and initial conditions of the universe range from 1 in 10^111 to 1 in 10^911. These are astonishingly low probabilities, and it is difficult to conceive of a scenario where there are enough opportunities to make such events likely, let alone almost certain. Furthermore, the odds for fine-tuning the key cosmic parameters influencing structure formation and universal dynamics are as low as 1 in 10^(10^123) when including the low-entropy state, and 1 in 10^258 when excluding it. These numbers are so vast that they challenge our comprehension and stretch the boundaries of what can be reasonably attributed to mere coincidence or a large number of opportunities. Even when considering individual categories, such as the odds of fine-tuning inflationary parameters (1 in 10^745), density parameters (1 in 10^300), or dark energy parameters (1 in 10^580), the probabilities are still astonishingly low. The cumulative effect of these extremely low probabilities across multiple domains and scales makes the argument based on the Improbability Principle and the law of truly large numbers untenable. The total number of distinct parameters that require precise fine-tuning for life and the universe  to exist is an impressive 507, spanning various domains and scales. Even the most optimistic lower bound for the overall odds is 1 in 10^(10^238), while the upper bound (lowest chance) is an almost inconceivable 1 in 10^(10^243).

Claim: The birthday problem demonstrates that seemingly improbable events can become likely with a large number of opportunities and combinations.
Refutation: While the birthday problem illustrates how the probability of a shared birthday increases with more people in a room, the probabilities involved are still within a comprehensible range. The odds of two people sharing a birthday in a room of 23 people are approximately 1 in 2, which is not an astronomically low probability. However, the fine-tuning odds presented, such as 1 in 10^(10^123) for key cosmic parameters or 1 in 10^(10^243) as an overall upper bound, are so infinitesimally small that they defy reasonable explanations based on the law of truly large numbers or the law of combinations.
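The roughly 1-in-2 figure for 23 people is easy to verify. A minimal Python sketch of the standard birthday computation (365 equally likely birthdays, leap years ignored):

```python
from math import prod

def p_shared_birthday(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays (leap years ignored)."""
    p_all_distinct = prod((days - i) / days for i in range(n))
    return 1.0 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # 0.507, just over 1 in 2
```

This confirms the point being made: the birthday probabilities, while counterintuitive, stay in an entirely comprehensible range.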

Claim: The repeated lottery numbers in Bulgaria and Israel are examples of the Improbability Principle in action, where highly improbable events become likely given a large number of opportunities.
Refutation: While the repetition of lottery numbers may seem surprising, the odds of such an event occurring are still substantially higher than the fine-tuning odds presented. For a six-out-of-49 lottery, the odds of any particular set of six numbers coming up are 1 in 13,983,816, which is relatively high compared to the fine-tuning odds. Additionally, the text acknowledges that after 43 years of weekly draws, it becomes more likely than not for a repeat set of numbers to occur. However, the fine-tuning odds presented are so incredibly low that even considering a vast number of opportunities and combinations, it strains credulity to attribute such events solely to chance.
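Both lottery figures can be checked the same way. C(49, 6) = 13,983,816, and the chance that some winning combination repeats across many draws follows a birthday-style approximation. Note that the 43-year, more-likely-than-not figure only comes out if one assumes roughly two draws per week, as many national lotteries run; with strictly weekly draws the repeat probability after 43 years is only about 16%. The two-draws-per-week reading is my assumption, not stated in the source:

```python
from math import comb, exp

N = comb(49, 6)  # number of possible six-of-49 combinations
print(N)         # 13983816

def p_some_repeat(draws: int, n: int = N) -> float:
    """Probability that at least one winning combination repeats
    across `draws` independent draws (birthday-problem approximation)."""
    return 1.0 - exp(-draws * (draws - 1) / (2.0 * n))

print(round(p_some_repeat(43 * 52), 2))      # weekly draws for 43 years: 0.16
print(round(p_some_repeat(43 * 52 * 2), 2))  # twice weekly: 0.51, more likely than not
```

Either way, these odds sit dozens of orders of magnitude above the fine-tuning figures under discussion.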

Claim: The law of combinations amplifies the probability of seemingly improbable events when considering interactions between many people or objects.
Refutation: While the law of combinations can indeed increase the probability of certain events when considering interactions between many elements, the fine-tuning odds presented go far beyond what can be reasonably explained by this principle. Even with the example of 30 students and over a billion possible groups, the probabilities involved are still vastly higher than the fine-tuning odds discussed. The claim that even events with very small probabilities become almost certain with a large number of opportunities fails to hold true for the astoundingly low probabilities associated with the fine-tuning of the universe and the conditions necessary for life.

Claim: You don't understand probability. You assume the common use of probability: of all the available options, how often, and therefore how likely, an event is to occur. However, this is an outdated definition of probability (just as Newton's definition of gravity is outdated). When you factor in chaos theory, probability is actually a measure of uncertainty, which itself is a measure of our inability to factor in all the variables. Take the simple flip of a coin. 50% chance of a head, right? Wrong. If we could measure all the variables, such as the exact force and angle of the strike, localised air pressure and density, molecular distribution within the coin, etc. (and granted, a number of the variables cannot be measured until the event occurs, if at all), the outcome is 100%. We are only 50% confident, based on the information we have, about what the outcome will be. Therefore, the probabilistic argument for God is, functionally, nonsensical, as we don't have (and may never be able to obtain) a full account of the variables that contribute to the formation of life.
Refutation: While it's true that our understanding of probability has evolved, especially with the advent of chaos theory, this doesn't invalidate its use in assessing the likelihood of complex events. Even in a deterministic universe, probability remains a useful tool for describing outcomes when we lack complete information. In many scientific fields, including biology and physics, probabilistic models are still widely used and accepted. They provide valuable insights and predictive power, even if they don't capture every variable. The utility of probability in science is well-established and goes beyond simple coin flips. The numbers we're dealing with in the origin of life scenario are so astronomically small that they go beyond issues of measurement precision or hidden variables. Even if we accept that some variables are unknown, the probability of life arising by chance remains vanishingly small. The presence of complex specified information in living systems is not just about randomness, but about the specific arrangement that allows for life to function. While it's true that we may not have a full account of all variables, the known variables already pose significant challenges to purely naturalistic explanations. The best explanation for this level of complexity and apparent design is an intelligent cause. Even if we accept that our probability calculations are imperfect, the time required for random processes to produce life, given the probabilities involved, far exceeds the age of the universe.



Last edited by Otangelo on Sat Sep 21, 2024 10:06 am; edited 1 time in total

https://reasonandscience.catsboard.com


Aron Ra Argument from Improbability Fallacy
https://www.youtube.com/watch?v=40gXiLJYVSU

Let's give a critical analysis of Aron Ra's new video: 

Claim: But with religion, we just have religions continuously splintering into ever more varied subsets due to all these disagreements on who God is or how many gods there are, and all the many different conflicting stories behind all of them. Yet every faith combined can't even show that there's a "there" there, that there is any truth to their assertions, or that there is even a supernatural at all.
Response:  Many atheists believe in concepts like multiverses, abiogenesis, and macroevolution despite lack of direct observation, yet reject belief in God for the same reason. This apparent double standard raises questions about the nature of proof and faith in both worldviews. Neither theism nor materialism can be conclusively proven. Science cannot definitively demonstrate the origin of reality. We can only examine the available evidence and develop philosophical explanations based on it. The scientific method cannot prove God's existence or that the material universe is all there is. Historical events cannot be repeated for testing. All worldviews require some degree of faith. The question is which view requires the smallest "leap." Some argue that the evidence for a creator is compelling, making theism the more reasonable choice.

Without God:
- Anything could be possible or arbitrary
- Nothing would be truly impossible
- Laws of physics could appear or disappear at any moment
- No ultimate foundation for facts or possibilities

With God:
- A singular, ultimate source for all facts and possibilities
- A foundation for consistent natural laws

Gods hiddenness: 
God's partial hiddenness allows for significant human freedom. If God were fully visible, even those opposed to Him would struggle to live autonomously. By remaining somewhat hidden, God enables people to freely choose whether to seek Him or remain independent.

There is enough light for those who desire to find Him, and enough darkness for those who prefer to live autonomously from Him.

God values free will, even if it means some choose atheism. For those genuinely seeking truth, an open mind and willingness to follow evidence wherever it leads is crucial, even if the conclusion challenges preconceptions.

Claim: In the mindset of the faithful, they believe that they can believe whatever they want to believe as long as it hasn't been absolutely and conclusively proven false. Although, as young Earth creationists and flat earthers and such constantly demonstrate, they will still believe whatever they want to believe even if it has been proven false conclusively. 
Response: Brandolini's Law, also known as the "bullshit asymmetry principle," states that the amount of energy needed to refute misinformation is an order of magnitude larger than that needed to produce it. This principle highlights the challenges in concisely addressing multiple claims or arguments, especially when those claims touch on complex scientific and philosophical issues. Let me do it here.

The landscape of human belief is vast and varied, encompassing a wide range of perspectives from deeply held religious convictions to strictly evidence-based scientific worldviews. This spectrum of beliefs often leads to misunderstandings and oversimplifications when discussing complex topics such as the origins of the universe, life, and human existence.

The initial statement presents a generalized and potentially biased view of religious believers, suggesting that they adhere to beliefs regardless of evidence. This characterization fails to capture the nuanced and diverse nature of religious thought and the relationship between faith and reason that many believers navigate. Religious belief is not monolithic. Many theists engage critically with their beliefs, integrating scientific understanding with their spiritual worldviews. We often seek to reconcile discoveries and evidence with our religious frameworks, rather than simply dismissing contradictory information.

The Complexity of Young Earth Creationism (YEC): The statement equates Young Earth Creationism (YEC) with flat Earth beliefs, which is an oversimplification that doesn't accurately represent the YEC position. While both views contradict mainstream scientific consensus, they differ significantly in their historical context, supporting arguments, and level of engagement with scientific data. Young Earth Creationism is a theological interpretation holding that the Earth is approximately 6,000 to 10,000 years old, based on a literal reading of Biblical genealogies and creation accounts. We YECs often engage with scientific data, albeit interpreting it through a different framework than mainstream geology and biology. We propose alternative explanations for phenomena such as the fossil record, radiometric dating results, and geological formations.

For instance, we point to evidence that we believe supports a global flood narrative, interpreting geological layers and fossil distributions as results of this catastrophic event rather than gradual processes over millions of years. We also point to examples of soft tissue preservation in dinosaur fossils as evidence against the conventional timeline of Earth's history. See here

Furthermore, Noah's Ark has been located on Mount Ararat with "high certainty". An ongoing scientific investigation, including radiocarbon dating of wood recovered earlier, is expected to provide results in about five years. See here

The Role of Arguments from Ignorance:  An argument from ignorance is a logical fallacy that assumes something is true because it hasn't been proven false, or vice versa. What we provide, IMHO, are reasonable inferences based on available evidence and supported claims based on positive evidence. See here

Claim:  That's where all religions are: in the category of unsupported assertions.
Response: Miracles of Naturalism & the New Atheists: The Cosmic Comedy of Random Chance

The Spontaneous Universe: Out of utter nothingness, the universe decides to burst into existence, entirely on its own, without a hint of cause, reason, or a magic wand. Pure cosmic spontaneity!
Multiverse Lottery Winner: Our universe, just one lucky draw in an eternal cosmic lottery with an infinite number of tickets. The odds? Don't bother calculating, it's the ultimate jackpot!
Precision Without a Precisionist: The universe, fine-tuning its own parameters with the meticulousness of a Swiss watchmaker. Who needs a fine-tuner when you have cosmic coincidence?
The Lucky Cell: One cell, billions of years ago, hit the biological jackpot, assembling itself from a cosmic soup with odds that make lottery winning seem like a sure bet.
Random Code Generator: The genetic code, a marvel of complexity, just happened to assemble itself. Who knew that random chance was such an adept coder?
Self-Writing Genetic Saga: The first living cell's genetic information just magically appeared, like a story writing itself without an author. A natural masterpiece!
Molecular Morse Code: The translation from genetic information to proteins, a complex process that just decided to evolve by itself. No translator needed, it's all naturally coded!
DIY Molecular Factories: Molecules and cells, in their spare time, figured out how to assemble themselves into complex structures and factories. The ultimate in self-service!
Epigenetic Evolution's Encore: Over 20 epigenetic codes, popping up naturally, orchestrating the symphony of life's complexity and diversity. No conductor, just the music of the genes!
Consciousness from Stardust: From mere atoms to conscious minds, a leap that defies the gap between the physical and the metaphysical. Who knew matter was so introspective?
Moral Matter: Atoms and brains, dabbling in moral philosophy, developing a sense of right and wrong. Apparently, particles can ponder ethics too!

Claim: You know, there are so many evidences that make it impossible for the no-God scenario to be true. First, the word "evidence" is already plural. You don't have to pluralize it by saying "evidences". We're talking about facts in evidence, or we would if you had any facts and evidence, but you don't.
Response: This is just drivel, and unsupported claims.

Claim: Second, shifting the burden of proof is logically fallacious. Quit trying to disprove a negative, the no-God scenario. Instead, the burden of proof is on the one making the positive claim. That means you have to support your assertion of there being a god. It's not whether the no-God scenario is impossible; it's whether your God is possible.
Response: We totally understand why atheists avoid the burden of proof. They always scream about the "burden of proof" because we both KNOW that if it were placed on their narrow shoulders, their position would IMMEDIATELY COLLAPSE. Their worldview is irrational. We don't mind demonstrating why intelligence is an infinitely more adequate and potent cause in comparison to - wait - what exactly?! There is NO ALTERNATIVE to an eternal, necessary, powerful Creator. Maybe rubbing that in will help wake up a dormant brain and start it thinking.

Claim: And third, the problem you have with that is twofold. To begin with, in order to say whether something is possible, there must be a precedent or parallel or verified phenomenon indicating that such possibility exists. You don't have that for gods or magic. And then the other problem is not only is your god not even a possibility to consider, but your God is literally impossible. God is defined by his miraculous nature, and while miracles have essentially the same definition as magic, being the evocation of supernatural forces or entities to control or forecast natural phenomena in ways that are inexplicable by science because they defy the laws of physics, thus miracles are physically impossible by definition, and God is therefore impossible by extension.

Response: There is no logical reason to believe that God's existence is impossible. What can be said with certainty is that there was never a state of absolute nothingness (in a philosophical sense), since otherwise there would still be absolutely nothing. An eternal universe is not plausible on scientific and philosophical grounds: the Big Bang theory points to a beginning of the universe, the second law of thermodynamics refutes an eternal universe or cosmos (multiverses, bubble universes, etc.), and an infinite period in the past cannot be traversed. More, see here

Claim: I mean, the simplest protein molecule has 400 linked amino acids in a very specific order and sequence. It's impossible mathematically for that to even form on its own. The simplest protein molecule is glycine. It consists of two carbons, five hydrogens, one nitrogen, and two oxygens. Not really all that amazing, actually. And for comparison, here are the chemical formulas for the most common compounds found in a piece of granite rock, just to give you an idea. The mathematical probability of the simplest protein molecule forming on its own is a higher number than the estimated number of atoms in the known universe. According to NASA, of the trillions of estimated galaxies that exist in the universe, NASA has calculated what they believe the estimated number of atoms in the entire known universe are, and they estimate that to be somewhere in the neighborhood of 10 to the 78th power. The mathematical probability of the simplest protein molecule forming is a higher probability than the number of atoms in the entire known universe. Actually, you just recited the same number twice, and you still got it wrong. It's not 2 * 10 to whatever power; it's just 10 to that power. And the estimate for the number of atoms in the universe is 10 to the 78th power up to 10 to the 82nd power, which are both significantly higher than your number. Your number doesn't matter because it's arbitrarily contrived nonsense.
Response:

- Harold Morowitz estimated the probability of a simple bacterial cell forming by chance at 1 in 10^100,000,000,000.
- Fred Hoyle likened the probability of life emerging by chance to a tornado sweeping through a junkyard and assembling a Boeing 747.
- The odds of even a single protein forming by chance are astronomical, on the order of 1 in 10^164 for a relatively short protein.

The Astronomical Improbability of Random Protein Assembly: Implications for the Origin of Life

1. Average protein size:
 Total amino acids / Total proteins = 374,575 / 924 ≈ 405 amino acids per protein on average

2. Critical regions per protein (assuming an average-sized protein):
 - Active site: 4 amino acids
 - Binding site: 8 amino acids (middle of 5-10 range)
 - Structural core: 38% of 405 ≈ 154 amino acids
 Total critical amino acids per protein: 4 + 8 + 154 = 166

3. Probability calculation:
 - There are 20 possible amino acids at each position
 - The probability of getting the right amino acid by chance at each position is 1/20
 - For one protein's critical regions: (1/20)^166
 - For all 924 proteins: [(1/20)^166]^924

4. Calculating the odds:
 Odds = 1 / [(1/20)^166]^924
 = 1 / (1/20)^(166*924)
 = 20^(166*924)
 ≈ 10^(166*924*log10(20))
 ≈ 10^199,557

This is an astronomically large number, although smaller than the previous calculation due to the smaller number of proteins and slightly larger average protein size. To put it in perspective:
- The number of atoms in the observable universe is estimated to be around 10^80
- The number of possible chess games is estimated to be around 10^120

The odds we calculated (1 in 10^199,557) are still enormously large and effectively impossible in any realistic scenario. This recalculation continues to demonstrate the extreme improbability of a functional proteome of this size arising by pure chance, illustrating why the spontaneous formation of a functional proteome is considered statistically impossible.
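As a cross-check, the exponent follows directly from the stated inputs (166 critical positions per protein, 924 proteins, 20 amino-acid choices per position). A short Python sketch:

```python
from math import log10

critical_positions = 166   # active site + binding site + structural core, per protein
proteins = 924             # unique proteins in the minimal set

# (1/20)^(166*924) expressed as odds of 1 in 10^x
x = critical_positions * proteins * log10(20)
print(f"odds = 1 in 10^{x:,.0f}")  # 1 in 10^199,557
```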

Calculating the Minimal Cell Population Size to Avoid Genetic Meltdown

Several elements influence the minimal population size, including mutation rate, effective population size (Ne), genetic drift, selection coefficient (s), and horizontal gene transfer (HGT).

Mutation Rate Calculation: For a minimal cell with a genome size of about 1 million base pairs and a mutation rate of 1 × 10^-9 per nucleotide per generation, the mutation rate per genome is: Mutation Rate per Genome = Genome Size × Mutation Rate per Nucleotide = 1 × 10^6 × 1 × 10^-9 = 0.001 mutations per generation.
Minimal Effective Population Size Estimation: To prevent deleterious mutations from becoming fixed by genetic drift, we must satisfy Ne × s > 1. Assuming a selection coefficient s = 0.001, this gives Ne > 1/s = 1/0.001 = 1,000.
HGT and Environmental Considerations: A larger population is necessary to increase gene exchange events, enhance the genetic material pool, and ensure sufficient cell interactions for effective HGT. We must also account for environmental variability and demographic stochasticity, which can cause sudden population decreases.
Recommended Minimal Population Size: Considering all factors, a minimal cell population size of at least 10,000 individuals is recommended. This larger number buffers against genetic drift, enhances HGT efficiency, and provides resilience to stochastic events.
Probability of Deleterious Mutation Fixation: With an effective population size (Ne) of 10,000, the probability of fixation for neutral mutations is approximately 0.00005, decreasing further with stronger selection.
Impact of Population Size: In small populations (Ne < 1,000), genetic drift can overpower selection, leading to the fixation of harmful mutations. In larger populations (Ne ≥ 10,000), selection effectively removes deleterious alleles, maintaining population fitness.
Role of HGT: Horizontal gene transfer increases genetic diversity and provides repair mechanisms to combat mutational load. It introduces beneficial alleles and new functions, allowing for the replacement of damaged genes.
Implications for Population Growth: A larger population size reduces extinction risk, enhances adaptability to environmental changes, and promotes sustainable growth. With effective HGT and sufficient population size, minimal cells can thrive and expand.
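The population-genetic figures above follow directly from the stated inputs. A minimal Python sketch (the selection coefficient s = 0.001 is the value implied by the Ne > 1,000 estimate; the per-nucleotide mutation rate of 10^-9 is the value used in the text):

```python
genome_size = 1_000_000   # base pairs, minimal cell
mu_per_nt = 1e-9          # mutation rate per nucleotide per generation
s = 0.001                 # assumed selection coefficient against deleterious alleles

mu_per_genome = genome_size * mu_per_nt   # mutations per genome per generation

ne_min = 1 / s            # drift barrier: Ne * s > 1 requires Ne > 1/s
ne = 10_000               # recommended effective population size

p_fix_neutral = 1 / (2 * ne)   # fixation probability of a new neutral mutation

print(mu_per_genome, ne_min, p_fix_neutral)  # 0.001 1000.0 5e-05
```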

Conclusion: To overcome the challenges of synthesizing all necessary molecules for a minimally viable cell and ensure population growth without extinction, it's crucial to maintain a minimal population size of at least 10,000 cells. This population size balances the effects of genetic drift and selection, facilitates effective horizontal gene transfer, and provides resilience against environmental and demographic challenges. By achieving this population size, minimal cells can avoid the genetic meltdown ratchet, maintain genetic health, and have a higher likelihood of survival and proliferation.

Calculating the Improbability

To grasp the enormity of these numbers, let's consider the following:

1. Number of Different Proteins: 924 unique proteins required for a minimal cell.
2. Total Protein Molecules: The cell requires approximately 200,000 protein molecules, considering multiple copies of each protein.
3. Average Protein Length: Assuming an average protein length of 400 amino acids, which is a reasonable estimate for bacterial proteins.
4. Total Amino Acids Needed: This results in approximately 80 million amino acids (200,000 proteins × 400 amino acids/protein).
5. Amino Acid Variability: Each position in a protein chain can be one of 20 standard amino acids.
6. Total Possible Combinations: The number of possible sequences for a single protein of 400 amino acids is 20^400, and for all proteins, it's exponentially larger.

Let's calculate the combined probability of randomly assembling 200,000 proteins, each consisting of 400 amino acids, and then extend this calculation to 10,000 cells.

1. Probability of Assembling One Protein: For a single protein of length 400 amino acids, the probability (P_one_protein) of assembling it correctly by random chance is: P_one_protein = (1/20)^400
Explanation: There are 20 standard amino acids. Each position in the protein chain has a 1/20 chance of being the correct amino acid. Since the positions are independent, we multiply the probabilities.

Numeric Calculation: log_10 P_one_protein = 400 × log_10(1/20) = 400 × (-1.3010) = -520.4. So, P_one_protein = 10^-520.4

2. Probability of Assembling 200,000 Proteins for One Cell: For 200,000 proteins, the combined probability (P_one_cell) is: P_one_cell = (P_one_protein)^200,000 = ((1/20)^400)^200,000 = (1/20)^(400 × 200,000)

Exponent Calculation: 400 × 200,000 = 80,000,000
Combined Probability: P_one_cell = (1/20)^80,000,000
Logarithmic Form: log_10 P_one_cell = 80,000,000 × log_10(1/20) = 80,000,000 × (-1.3010) = -104,080,000. So, P_one_cell = 10^-104,080,000

3. Probability for 10,000 Cells For 10,000 cells, each requiring the assembly of 200,000 proteins, the combined probability (P_10,000_cells) is: P_10,000_cells = (P_one_cell)^10,000 = ((1/20)^80,000,000)^10,000 = (1/20)^(80,000,000 × 10,000)

Exponent Calculation: 80,000,000 × 10,000 = 800,000,000,000
Combined Probability: P_10,000_cells = (1/20)^800,000,000,000
Logarithmic Form: log_10 P_10,000_cells = 800,000,000,000 × log_10(1/20) = 800,000,000,000 × (-1.3010) = -1,040,800,000,000. So, P_10,000_cells = 10^-1,040,800,000,000
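The chained exponents above can be checked in log space with a short Python sketch (logarithms are used because the raw probabilities underflow any floating-point type; small differences from the hand calculation come from its rounding of log10(20) to 1.3010):

```python
from math import log10

aa_choices = 20
protein_length = 400
proteins_per_cell = 200_000
cells = 10_000

# Work entirely in log10 space; the raw probabilities are far too small to represent.
log_p_protein = protein_length * log10(1 / aa_choices)   # ≈ -520.4
log_p_cell = proteins_per_cell * log_p_protein           # ≈ -1.0408e8
log_p_population = cells * log_p_cell                    # ≈ -1.0408e12

print(f"one protein:  10^{log_p_protein:.1f}")
print(f"one cell:     10^{log_p_cell:.0f}")
print(f"10,000 cells: 10^{log_p_population:.0f}")
```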

The calculated probabilities are astronomically small:
Probability for One Cell: 10^-104,080,000
For 10,000 Cells: 10^-1,040,800,000,000

To put these numbers into perspective:
- The total number of atoms in the observable universe is estimated to be around 10^80.
- The probability for one cell is 10^-104,080,000, a number with over 104 million zeros after the decimal point before the first nonzero digit.
- For 10,000 cells, the probability is even more negligible, with over 1 trillion zeros after the decimal point.

Claim: The argument from improbability fallacy depends on ignoring natural processes that will ultimately drive a particular product or conclusion, you know, guiding or controlling the outcome. Instead, believers want to pretend that everything just fell together somehow accidentally. Creationists employ this fallacy along with a whole lot of other fallacies because they're trying to deny the very existence of natural processes and their cumulative effects.
Response: Molecules, when left alone, tend to break down and become less complex instead of evolving into components of a living system

Ilya Prigogine (1972): The probability that at ordinary temperatures a macroscopic number of molecules is assembled to give rise to the highly ordered structures and to the coordinated functions characterizing living organisms is vanishingly small. The idea of spontaneous genesis of life in its present form is therefore highly improbable, even on the scale of the billions of years during which prebiotic evolution occurred. 1
Prigogine, I. Thermodynamics of evolution https://pubs.aip.org/physicstoday/article-abstract/25/11/23/428444/Thermodynamics-of-evolutionThe-functional-order

Steven A. Benner (2014): The Asphalt Paradox:  An enormous amount of empirical data have established, as a rule, that organic systems, given energy and left to themselves, devolve to give uselessly complex mixtures, “asphalts”. The literature reports (to our knowledge) exactly zero confirmed observations where “replication involving replicable imperfections” (RIRI) evolution emerged spontaneously from a devolving chemical system. Further, chemical theories, including the second law of thermodynamics, bonding theory that describes the “space” accessible to sets of atoms, and structure theory requiring that replication systems occupy only tiny fractions of that space, suggest that it is impossible for any non-living chemical system to escape devolution to enter into the Darwinian world of the “living”. Such statements of impossibility apply even to macromolecules not assumed to be necessary for RIRI evolution. Lipids that provide tidy compartments under the close supervision of a graduate student (supporting a protocell first model for origins) are quite non-robust with respect to small environmental perturbations, such as a change in the salt concentration, the introduction of organic solvents, or a change in temperature.
https://sci-hub.ren/10.1007/s11084-014-9379-0

David Deamer (2017):
It is clear that non-activated nucleotide monomers can be linked into polymers under certain laboratory conditions designed to simulate hydrothermal fields. However, both monomers and polymers can undergo a variety of decomposition reactions that must be taken into account because biologically relevant molecules would undergo similar decomposition processes in the prebiotic environment.
Decomposition of Monomers, Polymers and Molecular Systems: An Unresolved Problem, 2017 Jan 17. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5370405/

Luisi (2014):  Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult to characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues.
https://www.sciencedirect.com/science/article/pii/S2001037014600076

Rob Stadler (2020): Even in a very short DNA of just two nucleotides, there are dozens of incorrect possible arrangements of the components, and only one correct arrangement. The probability of consistent arrangement decreases exponentially as the DNA lengthens. If natural processes could polymerize these monomers, the result would be chaotic “asphalt,” not highly organized, perfectly consistent biopolymers. Think about it — if monomers spontaneously polymerized within cells, the cell would die because all monomers would be combined into useless random arrangements.
https://evolutionnews.org/2021/12/long-story-short-a-strikingly-unnatural-property-of-biopolymers/ More, here

Claim: And that's just the simplest protein molecule forming. According to this study, a 75-atom macrocycle within saltwater solution forms a peptide nucleobase with a hydrophobic core and a hydrophilic surface, just like a protein, which also folds according to non-covalent interactions of the different elements involved, and some of them are interchangeable. It's just straight-up chemistry. 
Response: Before we even start talking about protein formation, a huge number of problems are unsolved:

Amino acid synthesis requires solutions to four key biochemical problems

1. Nitrogen fixation
Nitrogen is an essential component of amino acids. Earth has an abundant supply of nitrogen, but it is primarily in the form of atmospheric nitrogen gas (N2), a remarkably inert molecule. Thus, a fundamental problem for biological systems is to obtain nitrogen in a more usable form. This problem is solved by certain microorganisms capable of reducing the inert N≡N triple bond of nitrogen gas to two molecules of ammonia in one of the most remarkable reactions in biochemistry. Nitrogen in the form of ammonia is the source of nitrogen for all the amino acids. The carbon backbones come from the glycolytic pathway, the pentose phosphate pathway, or the citric acid cycle. On the early Earth, however, no such biosynthetic route to fixed nitrogen was available.

2. Selection of the 20 canonical bioactive amino acids
Why are 20 amino acids used to make proteins (in some rare cases, 22)? Why not more, or fewer? And why these particular ones among the hundreds available? The first papers addressing this, published in 2006, offered rather tentative and vague explanations, but the findings reported in 2017 are nothing short of astounding. In January 2017, the paper "Frozen, but no accident – why the 20 standard amino acids were selected" reported:

" Amino acids were selected to enable the formation of soluble structures with close-packed cores, allowing the presence of ordered binding pockets. Factors to take into account when assessing why a particular amino acid might be used include its component atoms, functional groups, biosynthetic cost, use in a protein core or on the surface, solubility and stability. Applying these criteria to the 20 standard amino acids, and considering some other simple alternatives that are not used, we find that there are excellent reasons for the selection of every amino acid. Rather than being a frozen accident, the set of amino acids selected appears to be near ideal. Why the particular 20 amino acids were selected to be encoded by the Genetic Code remains a puzzle." 

3. Homochirality
In amino acid production, we encounter an important problem in biosynthesis: stereochemical control. Because all amino acids except glycine are chiral, biosynthetic pathways must generate the correct isomer with high fidelity. In each of the 19 pathways for the generation of chiral amino acids, the stereochemistry at the α-carbon atom is established by a transamination reaction catalyzed by transaminase enzymes that depend on the cofactor pyridoxal phosphate (PLP). These enzymes, however, were not extant on a prebiotic Earth, which creates an intractable origin-of-life problem. One of the greatest challenges of modern science is to understand the origin of the homochirality of life: why are most essential biological building blocks present in only one handedness, such as L-amino acids and D-sugars?

4. Amino acid synthesis regulation
Biosynthetic pathways are often highly regulated such that building blocks are synthesized only when supplies are low. Very often, a high concentration of the final product of a pathway inhibits the activity of allosteric enzymes (enzymes regulated by effector molecules binding at sites distinct from the active site) that function early in the pathway to control the committed step. These enzymes are similar in functional properties to aspartate transcarbamoylase and its regulators. Feedback and allosteric mechanisms ensure that all 20 amino acids are maintained in sufficient amounts for protein synthesis and other processes. More, here

Proteins are remarkable molecular machines that catalyze essential chemical reactions with astonishing efficiency and specificity, accelerating processes by factors of millions or even billions. The enzyme catalase, for instance, can decompose millions of hydrogen peroxide molecules per second, showcasing the incredible speed at which these biological catalysts operate. The journey from the prebiotic Earth to the sophisticated enzymatic systems we observe today presents a classic chicken-and-egg dilemma: how did complex proteins arise when their synthesis often requires catalysts, which are themselves proteins? This paradox is further complicated by the harsh conditions of early Earth, devoid of the biochemical machinery we take for granted in modern cells. Researchers exploring this puzzle must consider various factors, including early energy sources for molecular synthesis, the role of mineral surfaces in facilitating reactions, and the transition from simple abiotic catalysts to complex biocatalysts. The formation of stable, functional peptides in a primordial environment is another crucial aspect of this investigation.

Each enzyme consists of a specific sequence of amino acids folded into a unique three-dimensional configuration, creating an active site capable of binding particular substrates with remarkable precision. This structure-function relationship is at the heart of enzymatic activity. The emergence of enzymatic networks adds another level of sophistication to this story. These interconnected reactions form the basis of cellular metabolism, allowing organisms to respond to environmental changes and maintain homeostasis. 

Open questions related to the origin of enzymatic proteins and catalysts on prebiotic Earth, see here

Claim: But creationists want to forget about natural processes and imagine that we're just saying that everything just comes together by chance, as opposed to chemical and physical properties, which completely nullify their argument from improbability.
Response: Laurent Boiteau Prebiotic Chemistry: From Simple Amphiphiles to Protocell Models, page 3:
Spontaneous self-assembly occurs when certain compounds associate through noncovalent hydrogen bonds, electrostatic forces, and nonpolar interactions that stabilize orderly arrangements of small and large molecules. The argument that chemical reactions in a primordial soup would not act upon pure chance, and that chemistry is not a matter of "random chance and coincidence," finds its refutation in the fact that the information stored in DNA is not constrained by chemistry. Yockey shows that the rules of any communication system are not derivable from the laws of physics. He continues: "there is nothing in the physicochemical world that remotely resembles reactions being determined by a sequence and codes between sequences." In other words, nothing in nonliving physics or chemistry obeys symbolic instructions.

Ulrich E. Stegmann:  The arbitrariness of the genetic code March 2004 5
Some of the processes expected to involve semantic information are certainly not chemically arbitrary and, therefore, chemical arbitrariness is not a necessary condition for a semantic relation. More, here

Claim: Okay, and as a matter of fact, when you look at the several million molecules that are operating in perfect precision, which they are not because mutations still occur, which are how bacteria diversified into the innumerable species that exist today, the mathematical probability of even those million evolving, if you will, through positive mutations is a higher number than the previous number that I told you: 10 to the 58th power. But you don't know what that number is, nor does anyone else, because again, it's nonsense. The argument from improbability fallacy works like this: List all of the events that happened to you yesterday. The more items that are on your list, the more extremely improbable it will be, especially when you figure in the exact times in sequence. How improbable is it that all these things happened in that precise order at exactly those times? Thus, if you are so inclined, you can make the most mundane of daily occurrences seem statistically impossible. That's what creationists are doing here.
Response:  We're not talking about just any random outcome. We're discussing the emergence of specific, highly functional molecular machines - enzymes that perform precise tasks essential for life. When we calculate these probabilities, we're not just listing off a series of mundane daily events and calling them improbable. We're looking at the odds of forming molecules with very specific functions. Each enzyme has a particular sequence of amino acids that folds into a unique three-dimensional structure, creating an active site that catalyzes a specific chemical reaction. That's not just any outcome - it's a highly specified one. The argument from improbability isn't a fallacy when applied correctly. We're not saying "What are the odds of any proteins forming?" We're asking, "What are the odds of forming proteins that can actually do the complex jobs required for life?" That's a crucial distinction.

We're talking about the origin of life - the very first emergence of living systems from non-living matter. This isn't about evolution or natural selection, which require replication and inheritance to function. We're looking at how those first critical molecules necessary for life could have formed.

The probabilities we're dealing with are staggering. The odds of even one such molecule forming by chance are astronomically low. We're not talking about just any random arrangement of amino acids, but a specific sequence that folds into a precise three-dimensional structure capable of performing a particular function. Now multiply that improbability by the number of different proteins required for the most basic form of life. The figures quickly become so large they're practically meaningless. That 10^58 number? That's actually conservative when you consider the full complexity involved.

Proponents of naturalistic origins often point to chemical affinities or self-organization, but these don't address the information content required for life. They might explain how certain molecules stick together, but not how they arrange themselves into the highly specific, functional sequences necessary for life. Yes, the early Earth had a lot of time and materials to work with. But time and chance alone don't solve the problem - they actually make it worse. With more time, you get more breakdown of complex molecules, not more building up.

The step-wise accumulation idea doesn't hold water either. Each step needs to be functional and provide an advantage to be preserved. But what's the function of half an enzyme or a partial DNA strand? Alternative hypotheses like RNA World or metabolism-first scenarios just push the problem back a step. They still require highly specific, functional molecules to emerge from random processes.
Our incomplete knowledge actually strengthens the design argument. The more we learn about the complexity of life at its most fundamental level, the more implausible a chance origin becomes. This isn't about giving up on science or invoking miracles. It's about following the evidence where it leads. And the evidence of extreme, specified complexity in even the simplest living systems points strongly towards intelligent design, not blind, undirected processes.

Claim: And one way to understand physics is as reality being interpreted and expressed mathematically. This is Jeremy England. He's a professor of physics at MIT, but he's also an Orthodox Jewish rabbi, so he believes in the Bible just like Bob Doko does, at least the Hebrew Bible. Rabbi England obviously doesn't believe in the Christian revision of those stories, and Jewish scientists rarely interpret their scriptures literally like undereducated American Christians so often do. What makes this particular rabbinical physicist so interesting is how Professor England cites the second law of thermodynamics, and I mean the exact example that Doko uses himself to argue the opposite point, not as an atheist and not as a science-denying creationist either. Rabbi England concludes that if the God he believes in created life, then maybe he didn't do it by magical miracles or miraculous magic. He didn't have to chant some incantation in Aramaic to speak everything out of nothing. Maybe instead, if God only figuratively said, "Let the Earth bring forth the living thing," then God must have provided some way in which the Earth could do that, and Professor England wants to find what that way is. He then famously proposed the idea that life exists specifically because of the law of increasing entropy driving matter to acquire ever more lifelike physical properties to more efficiently dissipate heat. He has since devised and published a number of mathematical formulas to explain how the origin of life follows from the fundamental laws of nature, where he also notes that the properties of self-replicators must be constrained by thermodynamic laws. I'll put a link to that paper below, and we'll talk more about the evolution or the origin of life later in this series, where we'll get specific on several successive stages of that process.
Response: Life in any form presents one of the deepest enigmas and puzzles in science. No matter the specific biochemical pathways, machinery, or enzymes involved, life does something truly extraordinary—it continuously recruits Gibbs free energy from its environment to reduce its own entropy. This is a process that, under normal physical laws, should not happen spontaneously, and certainly not with the precision and complexity life exhibits.  Imagine a rock that somehow continuously pulls itself uphill, or a rusty nail that spontaneously galvanizes itself to prevent further corrosion. In the same way, unintelligent, simple chemicals cannot self-organize into complex, information-rich systems capable of building solar panels (like photosystems I and II), hydroelectric dams (ATP synthase), propulsion systems (motor proteins), or sophisticated self-repair mechanisms (p53 tumor suppressor proteins). Even more astonishing, these systems know when to self-destruct (caspases) if they become too damaged—a level of foresight and regulation that defies purely natural processes. Abiogenesis—the idea that life arose spontaneously from non-living matter—is not just a challenging puzzle that needs more research or time to solve. It's a fundamental problem that materialism struggles to address. The notion that blind, unguided forces could organize such sophisticated biological machinery without any direction stretches the boundaries of what we understand to be possible within the natural laws that govern the universe.

Claim:  We have no idea of the variables involved in regards of prebiotic protein synthesis.
Response:  While it's true that there are many unknowns in prebiotic chemistry, the assertion that "we have no idea of the variables involved" in prebiotic protein synthesis is an overstatement that doesn't align with current scientific knowledge. In fact, we have substantial information about several critical aspects of this process, particularly regarding sequence space and functional constraints.

1. Sequence Space Exploration
Extensive research has been conducted on protein sequence space, providing valuable insights into the constraints on functional proteins.

Key points:
- The total possible sequence space for a 100-amino acid protein is 20^100, an astronomically large number.
- Studies estimate that only about 1 in 10^12 random sequences would fold into stable structures.
- Experiments with random sequence libraries have shown that functional proteins are even rarer, estimated at about 1 in 10^24 to 1 in 10^77, depending on the specific function.

These figures demonstrate that we do have quantitative data on the rarity of functional sequences, contradicting the claim that we have "no idea" of the variables involved.
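The scale of the sequence-space figures quoted above can be made concrete with a few lines of arithmetic. This is a sketch only: the rarity estimates (1 in 10^12 for stable folds, 1 in 10^24 to 1 in 10^77 for function) are taken from the text, not computed; the script merely combines the exponents in log space.

```python
import math

# Total sequence space for a 100-residue protein built from 20 amino acids.
n_residues = 100
log10_space = n_residues * math.log10(20)        # log10 of 20^100, ≈ 130.1

# Rarity estimates quoted in the text (assumptions, not derivations):
log10_fold_rarity = -12        # ~1 in 10^12 sequences fold stably
log10_func_rarity = -77        # most stringent quoted functional estimate

print(f"20^{n_residues} ≈ 10^{log10_space:.1f} possible sequences")
print(f"≈ 10^{log10_space + log10_fold_rarity:.1f} stably folding sequences")
print(f"≈ 10^{log10_space + log10_func_rarity:.1f} functional sequences "
      f"(at the 1-in-10^77 estimate)")
```

Even under the most stringent quoted estimate the absolute count of functional sequences (≈ 10^53) is large; the argument in the text turns on the fraction of the space they occupy, not on their absolute number.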

2. Functional Constraints
Research on extant proteins and de novo protein design has revealed significant information about the sequence requirements for protein function.

Key points:
- Many protein functions require specific catalytic residues in precise spatial arrangements.
- Secondary structure elements (alpha-helices, beta-sheets) have specific sequence preferences that are well-understood.
- Protein-protein interaction interfaces often require complementary charge distributions and hydrophobic patches.

This knowledge allows us to estimate the probability of functional sequences emerging randomly, which is crucial for understanding prebiotic protein synthesis.

3. Experimental Prebiotic Chemistry
Numerous experiments have explored the conditions and mechanisms of prebiotic amino acid and peptide formation.

Key points:
- We know the types and relative abundances of amino acids that form under various simulated prebiotic conditions.
- Experiments have revealed mechanisms for peptide bond formation in the absence of biological catalysts.
- Studies have identified potential prebiotic activating agents and concentration mechanisms.

While gaps in our knowledge remain, these experiments provide concrete data on many variables involved in prebiotic protein synthesis.

4. Thermodynamic and Kinetic Parameters
Extensive research has quantified the thermodynamic and kinetic parameters of peptide bond formation and hydrolysis.

Key points:
- The Gibbs free energy change for peptide bond formation in water is known (approximately +3.5 kcal/mol at 25°C and pH 7).
- Rate constants for uncatalyzed peptide bond formation and hydrolysis have been measured under various conditions.
- The effects of temperature, pH, and salt concentrations on these reactions are well-studied.

This quantitative data directly contradicts the claim that we have "no idea" of the variables involved.
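The thermodynamic figure quoted above can be turned into an equilibrium constant via the standard relation K = exp(-ΔG°/RT). This is a hedged back-of-the-envelope sketch: the +3.5 kcal/mol value comes from the text, and the calculation assumes standard conditions at 25 °C.

```python
import math

# Standard relation: delta_G = -R * T * ln(K)  =>  K = exp(-delta_G / (R * T))
R = 1.987e-3          # gas constant in kcal/(mol·K)
T = 298.15            # 25 °C in kelvin
delta_G = 3.5         # kcal/mol for peptide bond formation in water (from text)

K_eq = math.exp(-delta_G / (R * T))
print(f"K_eq ≈ {K_eq:.4f}")
```

A K_eq of roughly 0.003 means that, at equilibrium in water, the dipeptide is outnumbered by its hydrolyzed monomers by a factor of several hundred, which is why uncatalyzed polymerization in bulk water is considered thermodynamically uphill.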

5. Phylogenetic Studies
Comparative genomics and phylogenetic analyses have provided insights into the characteristics of early proteins.

Key points:
- Studies have identified potential "protein fossils" that may resemble early functional peptides.
- Reconstruction of ancestral protein sequences offers clues about the minimal functional requirements of early proteins.
- Analysis of protein domains across all life forms suggests that certain protein folds may have been present in the last universal common ancestor.

While not directly observing prebiotic conditions, these studies offer valuable constraints on early protein evolution.

In conclusion, while many aspects of prebiotic protein synthesis remain uncertain, the claim that we have "no idea of the variables involved" is not supported by the current state of scientific knowledge. We have significant quantitative data on sequence space, functional constraints, reaction parameters, and plausible prebiotic conditions. This information, while incomplete, provides a foundation for understanding the challenges and possibilities of prebiotic protein synthesis. The known rarity of functional sequences and the measured difficulties of spontaneous peptide formation in water do present significant challenges to naturalistic origin-of-life scenarios. However, these challenges are based on concrete scientific data, not a complete lack of knowledge about the relevant variables.



Last edited by Otangelo on Sat Sep 21, 2024 2:29 pm; edited 1 time in total

https://reasonandscience.catsboard.com

The Argument from Improbability DEBUNKED
https://www.youtube.com/watch?v=y-vUSQMqyOo

1. The argument from improbability:

"Premise one: life was either created by random chance or by God
Premise 2: the probability of life emerging by random chance is extremely unlikely
Conclusion: therefore life must have been created by God"

Steven Woodford attempts to refute the argument from improbability in several ways:

1. Black and White Fallacy:
Woodford points out that this argument commits a black and white fallacy by presenting only two options (random chance or God) without justifying why there are only these two possibilities. He states: "The most obvious flaw of the argument from improbability is that it commits a black and white fallacy. It does this because it forces us to consider just two explanations for a phenomenon without justifying why there are only two." He illustrates this with an analogy about the pyramids, showing how limiting options can lead to false conclusions: "It's as erroneous as asserting that the pyramids were either built by random chance or by aliens and then stating that since it's infinitesimally unlikely that they are the product of a tornado sweeping through the desert, they must have been built by aliens."

Response: We are faced with a stark dichotomy that cannot be easily dismissed. The emergence of the first self-replicating system capable of evolving through natural selection is the critical juncture we must consider. At this point, we are dealing with prebiotic chemistry, where the explanatory power of Darwinian evolution is not applicable. This leaves us with two fundamental options: either life arose through random chemical processes, or it was the result of intelligent design. Woodford's analogy to the pyramids is a false equivalence. We know humans built the pyramids, and we have evidence of their methods. In contrast, we have no observational evidence of life spontaneously arising from non-life, despite decades of research and experiments. At the point of life's origin, evolutionary processes were not yet in play. This eliminates a major category of explanation that can be invoked to account for later biological complexity. While natural laws guide chemical interactions, the specific arrangements that led to life would still be considered chance events within those parameters if not directed by an intelligence. Any process that is not guided by intelligence would, by definition, fall under the broader category of "chance" in this context. Logically, something either happens by design (implying intent and foresight) or it doesn't. If it's not designed, it occurs by default through unguided processes, which we label as chance. While there may be various subcategories or specific mechanisms within "chance," they all fall under the umbrella of unguided processes as opposed to intelligent design.

2. Special Pleading: 
Steve argues that proponents of this argument often commit the fallacy of special pleading by not applying the same logic to God: "The argument is predicated on improbable entities being the product of either random chance or intelligent design, but proponents make an unjustified special exception to this rule for the intelligent creator itself."

Response:   There are instances where special pleading can be justified, particularly when dealing with unique or fundamentally different entities. The concept of an Infinite Creator would indeed fall into this category. The incommunicable attributes (omniscience, omnipresence, omnisapience, aseity, immutability, omnitemporality) do set such a being apart from finite entities in a significant way. This fundamental difference  justifies treating this being as a special case. The distinction between an uncreated Creator and created beings is a crucial one in this argument. If we accept the premise of an Infinite Creator, it follows that this being would be fundamentally different from everything else in existence. The attribute of aseity - self-existence or being uncaused - is particularly relevant here. An uncaused cause would, by definition, be exempt from the need for an external explanation for its existence.
Accepting an Infinite Creator as a special case is not necessarily inconsistent if it's part of a coherent worldview that acknowledges fundamental differences between the infinite and the finite. Both views (universe with no maker vs. universe with Infinite Creator) require some form of special consideration. Given these points, it's fair to say that Woodford's argument about special pleading doesn't address the nuances of theological reasoning about an Infinite Creator. The concept of justifiable special pleading in this context is a valid consideration that adds complexity to the discussion.

3. Personal Incredulity:
Woodford points out that many proponents of this argument are committing the fallacy of personal incredulity by rejecting evolution due to their own inability to understand or accept it: "Evolution by natural selection has indisputably proven that extremely complex organisms can and have emerged through natural, unconscious processes."

Response: Evolution doesn't address the origin of life itself. The argument from incredulity regarding abiogenesis is distinct from arguments about subsequent evolutionary processes.  The characterization of naturalistic explanations as "just so" stories highlights a valid concern about the difficulty of empirically verifying hypotheses about past events, especially regarding life's origins. The apparent improbability of complex biological systems and a finely-tuned universe arising without intentional design isn't merely personal incredulity, but an inference based on observations of complexity and specificity. The vast amount of information in DNA and the apparent purposefulness in biological systems present significant challenges to purely naturalistic explanations. This goes beyond personal disbelief to questioning the sufficiency of known natural processes. The idea of the universe spontaneously emerging from non-existence or setting its own parameters does present conceptual difficulties that aren't easily dismissed. In light of these points, it's fair to say that dismissing these arguments as mere personal incredulity oversimplifies the issue. The broader questions about the origin of life and the universe involve complex philosophical and scientific considerations that go beyond simple disbelief. The inference to design, based on observations of complexity and apparent purpose, represents a reasoned position in an ongoing dialogue about ultimate origins.

5. Arbitrary Significance:
Finally, he argues that the perceived improbability is often due to assigning arbitrary significance to specific outcomes:

"My point is that when it comes to the argument from improbability, the man behind the curtain is the arbitrary significance that its proponents assign."

He uses the example of dice rolls to illustrate how any specific sequence of outcomes is equally improbable, yet we don't assign special meaning to most of them.

Response: While it's true that any specific sequence of dice rolls is equally improbable, this comparison overlooks the qualitative differences between random outcomes and complex, functional systems. The emergence of life or the fine-tuning of universal constants are not equivalent to dice rolls – they exhibit specific patterns and functionality that distinguish them from purely random outcomes. In the case of life or cosmic fine-tuning, we're not just looking at any arbitrary outcome, but at systems that exhibit high levels of complexity, order, and functionality. These characteristics are fundamentally different from random sequences and warrant more consideration.

While any sequence of dice rolls may be equally improbable, not all outcomes in nature are equally likely or meaningful. For example, the precise conditions required for life to exist are far more specific and constrained than those that would prevent life from forming. The argument fails to account for the cumulative effect of multiple improbable events occurring together. While individual improbable events might be dismissed as coincidence, the combination of many such events becomes increasingly difficult to attribute to chance alone.

The dice roll analogy doesn't account for the fact that in the case of our universe and life, we are the observers who resulted from these improbable conditions. This introduces a selection bias that needs to be considered when evaluating probabilities. Recognizing patterns or significance in certain outcomes can lead to valuable scientific insights and theories. Dismissing all perceived improbabilities as arbitrary could hinder scientific progress and our understanding of the universe. While it's important to be cautious about assigning undue significance to improbable events, it's equally important to recognize that some outcomes in nature do exhibit special characteristics that warrant investigation and explanation.
The key is to distinguish between truly arbitrary assignments of significance and those based on observable patterns, functionality, and scientific reasoning.

6. Fred Hoyle's argument against abiogenesis: 
"The chance that higher life-forms might have emerged in this way is comparable to the chance that a tornado sweeping through a junkyard might assemble a Boeing 747 from the materials therein"
Carl Sagan's counter-argument to the idea of God always existing: "If we say that God always existed, why not save a step and conclude that the universe always existed"
Richard Dawkins' "Ultimate Boeing 747" argument: "If proponents of the argument from improbability insist that us existing without a god is as implausible as a tornado sweeping through a junkyard and assembling a Boeing 747 from the materials therein, then we must insist that the chance of god existing without an even greater God is the ultimate Boeing 747"

Response:  1. Addressing Hoyle's tornado analogy: While Hoyle's analogy is imperfect, it effectively illustrates the core challenge facing abiogenesis theories. The complexity and specificity of even the simplest living systems are indeed astronomical. Consider that:

a) Even "simple" life requires numerous interdependent systems (DNA, RNA, proteins, cell membranes) functioning in concert.
b) The probabilistic resources of the entire universe over billions of years are insufficient to explain the chance assembly of functional proteins, let alone entire living systems.
c) Appeals to gradual processes or natural selection fail to address the initial hurdle of achieving a self-replicating system capable of evolution.
d) Laboratory experiments demonstrating the formation of basic organic compounds fall far short of explaining the origin of life's information-rich systems.

2. Addressing Sagan's argument about God's existence: Sagan's argument misses crucial distinctions between the universe and the concept of God:

a) The universe, being physical and temporal, requires an explanation for its beginning. God, as conceived in classical theism, is non-physical and eternal, thus not subject to the same need for a causal explanation.
b) The fine-tuning of universal constants and the apparent design in biological systems point to an intelligent cause outside the physical universe.
c) The principle of sufficient reason suggests that the contingent universe requires a necessary being as its ultimate explanation, which aligns with the concept of God.

3. Countering Dawkins' "Ultimate Boeing 747" argument: Dawkins' argument fundamentally misunderstands the nature of God as conceived in classical theism:

a) God is not a complex physical being, but a simple, immaterial mind. The complexity we observe in the universe stems from God's thoughts and intentions, not from physical complexity in God's nature.
b) The argument conflates physical complexity with the richness of information. An intelligent agent can conceive and create complex systems without being physically complex itself.
c) The regress Dawkins posits doesn't apply to a necessarily existent being, which by definition doesn't require an external cause.
d) Empirically, we observe that complex specified information invariably stems from intelligent minds, not from undirected physical processes.

These arguments against intelligent design and theism, while rhetorically effective, fail to grapple with the real evidence for design in nature and the logical necessity of a transcendent, intelligent cause for the universe. The specified complexity we observe in living systems and the fine-tuning of the cosmos are best explained by an intelligent designer. While science continues to investigate natural processes, it's becoming increasingly clear that purely materialistic explanations fall short in accounting for the origin and diversity of life, as well as the existence and nature of the universe itself.

https://reasonandscience.catsboard.com

The Probabilistic Improbability of the Spontaneous Origin of Life: A Case for Intelligent Design

https://reasonandscience.catsboard.com/t2508-uncertainty-quantification-of-the-universe-and-life-emerging-through-unguided-natural-random-events#12780

The maximum number of possible simultaneous interactions in the entire history of the universe, starting from 13.7 billion years ago, can be calculated by multiplying three relevant factors together: the number of atoms in the universe (10^80), times the number of seconds that have passed since the big bang (10^16), times the fastest rate at which an atom can change its state per second (10^43). This calculation fixes the total number of events that could have occurred in the observable universe since its origin at 10^139. This provides a measure of the probabilistic resources of the entire observable universe.

The number of atoms in the entire universe = 1 x 10^80
Estimated age of the universe: 13.7 billion years.  In seconds, this would be = 1 x 10^16
The fastest rate an atom can change its state = 1 x 10^43
Therefore, the maximum number of possible events in a universe that is 13.7 billion years old (10^16 seconds), where each atom (10^80) is changing its state at the maximum rate of 10^43 times per second, is 10^139.

Written out in full, 10^139 is a 1 followed by 139 zeros.
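The arithmetic above reduces to adding exponents, since multiplying powers of ten adds their exponents. A minimal Python sketch using the three estimates as given in the text:

```python
# Order-of-magnitude estimates used in the text:
ATOMS_EXP = 80         # ~10^80 atoms in the observable universe
SECONDS_EXP = 16       # ~10^16 seconds since the big bang (as stated above)
TRANSITIONS_EXP = 43   # ~10^43 state changes per atom per second

# 10^80 * 10^16 * 10^43 = 10^(80+16+43)
total_events_exp = ATOMS_EXP + SECONDS_EXP + TRANSITIONS_EXP
print(f"Maximum number of events: 10^{total_events_exp}")  # prints 10^139
```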

The Astronomical Improbability of Random Protein Assembly: Implications for the Origin of Life

1. Average protein size:
  Total amino acids / Total proteins = 374,575 / 924 ≈ 405 amino acids per protein on average

2. Critical regions by protein (assuming a medium-sized protein):
  - Active site: 4 amino acids
  - Binding site: 8 amino acids (middle of the 5-10 range)
  - Structural core: 38% of 405 ≈ 154 amino acids
  Total critical amino acids per protein: 4 + 8 + 154 = 166
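The tallies above can be checked directly; a sketch using the figures given in the text (924 proteins, 374,575 total amino acids, and the stated critical-region sizes):

```python
total_aa = 374_575
total_proteins = 924
avg_len = round(total_aa / total_proteins)   # average protein length, ~405

active_site = 4                              # amino acids in the active site
binding_site = 8                             # middle of the 5-10 range
structural_core = round(0.38 * avg_len)      # 38% of 405, ~154

critical_per_protein = active_site + binding_site + structural_core
print(avg_len, critical_per_protein)         # prints 405 166
```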

3. Probability calculation:
  There are 20 possible amino acids in each position
  The probability of getting the right amino acid by chance at each position is 1/20
  For critical regions of a protein: (1/20)^166
  - For all 924 proteins: [(1/20)^166]^924

4. Calculating the probabilities:
  Odds = 1 / [(1/20)^166]^924
  = 1 / (1/20)^(166*924)
  = 20^(166*924)
  = 20^153,384
  ≈ 10^(153,384*log10(20))
  ≈ 10^199,557

This is an astronomically large number, although smaller than the previous calculation due to the smaller number of proteins and slightly larger average protein size. To put it in perspective:
the number of atoms in the observable universe is estimated at about 10^80
the number of possible chess games is estimated at about 10^120

The odds we calculate (about 1 in 10^199,557) are still immensely large, making the outcome effectively impossible in any realistic scenario. This calculation continues to demonstrate the extreme improbability of a functional proteome of this size arising by pure chance, illustrating why the spontaneous formation of a functional proteome is considered statistically impossible.
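The exponent in step 4 can be verified numerically. A minimal sketch using the figures derived above (166 critical residues per protein, 924 proteins):

```python
import math

critical_residues = 166
proteins = 924

# Odds of 20^(166*924) expressed as a power of ten:
# 20^(166*924) = 10^(166*924*log10(20))
odds_exp = critical_residues * proteins * math.log10(20)
print(f"Odds ≈ 1 in 10^{odds_exp:,.0f}")  # prints Odds ≈ 1 in 10^199,557
```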


Given that early life forms likely inhabited unstable environments with fluctuating resources, genetic drift would have played a strong role. Genetic drift becomes significant when population sizes fall below Ne ~10,000, meaning that random fluctuations can cause the fixation of harmful mutations more often. To avoid this, early life forms would probably have needed an effective population size of at least 10,000 to protect against genetic drift. This population size would ensure that natural selection could remove harmful mutations faster than they accumulate, keeping the genetic load manageable.

For early life forms, a minimum effective population size (Ne) of at least 10,000 individuals would probably be necessary to avoid the harmful effects of Muller's Ratchet. This would provide enough genetic diversity and selective power to eliminate deleterious mutations, even in the absence of recombination and with a higher mutation rate. Below this threshold, the accumulation of harmful mutations would become more likely, increasing the risk of genetic deterioration and eventual extinction.
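A common population-genetics rule of thumb is that selection dominates drift roughly when Ne × s is much greater than 1, where s is the selection coefficient against a harmful mutation. The sketch below is illustrative only, and the value of s is a hypothetical assumption, not a figure from the text:

```python
def selection_beats_drift(ne, s, threshold=1.0):
    """Rough heuristic: selection can purge a deleterious mutation
    when Ne * s exceeds ~1; below that, drift dominates."""
    return ne * s > threshold

# Hypothetical selection coefficient for a mildly deleterious mutation:
s = 0.0005
print(selection_beats_drift(10_000, s))  # True  (Ne*s = 5)
print(selection_beats_drift(1_000, s))   # False (Ne*s = 0.5)
```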

Calculating Improbability

To understand the enormity of these numbers, let us consider the following:

1. Number of Different Proteins: 924 unique proteins needed for a minimal cell.
2. Total Protein Molecules: The cell requires approximately 200,000 protein molecules, considering multiple copies of each protein.
3. Average Protein Length: Assuming an average protein length of 400 amino acids, which is a reasonable estimate for bacterial proteins.
4. Total Amino Acids Required: This results in approximately 80 million amino acids (200,000 proteins × 400 amino acids/protein).
5. Amino Acid Variability: Each position in a protein chain can be one of the 20 standard amino acids.
6. Total Possible Combinations: The number of possible sequences for a single 400-amino acid protein is 20^400, and for all proteins, it is exponentially greater.
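Python's arbitrary-precision integers can evaluate 20^400 exactly, which makes the scale of item 6 concrete:

```python
# Sequence space for a single 400-amino-acid protein: 20 choices per position.
combos = 20 ** 400
print(len(str(combos)))  # number of decimal digits: 521
```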

Let's calculate the combined probability of randomly assembling 200,000 proteins, each consisting of 400 amino acids, and then extend that calculation to 10,000 cells.

1. Probability of Assembling a Protein: For a single protein of 400 amino acid length, the probability (P_one_protein) of assembling it correctly by chance is: P_one_protein = (1/20)^400
Explanation: There are 20 standard amino acids. Each position in the protein chain has a 1/20 chance of being the correct amino acid. Because the positions are independent, we multiply the probabilities.

Numerical Calculation: log_10 P_one_protein = 400 × log_10(1/20) = 400 × (-1.3010) = -520.4. So, P_one_protein = 10^-520.4

2. Probability of Assembling 200,000 Proteins for a Cell: For 200,000 proteins, the combined probability (P_one_cell) is: P_one_cell = (P_one_protein)^200,000 = ((1/20)^400)^200,000 = (1/20)^(400 × 200,000)

Exponent Calculation: 400 × 200,000 = 80,000,000
Combined Probability: P_one_cell = (1/20)^80,000,000
Logarithmic Form: log_10 P_one_cell = 80,000,000 × log_10(1/20) = 80,000,000 × (-1.3010) = -104,080,000. So, P_one_cell = 10^-104,080,000

3. Probability for 10,000 Cells: For 10,000 cells, each requiring the assembly of 200,000 proteins, the combined probability (P_10,000_cells) is: P_10,000_cells = (P_one_cell)^10,000 = ((1/20)^80,000,000)^10,000 = (1/20)^(80,000,000 × 10,000)

Exponent Calculation: 80,000,000 × 10,000 = 800,000,000,000
Combined Probability: P_10,000_cells = (1/20)^800,000,000,000
Logarithmic Form: log_10 P_10,000_cells = 800,000,000,000 × log_10(1/20) = 800,000,000,000 × (-1.3010) = -1,040,800,000,000. So the probability for 10,000 cells = 10^-1,040,800,000,000
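The three steps above are just repeated multiplication of the same logarithm. A sketch in Python, using the rounded constant log10(1/20) ≈ -1.3010 exactly as the text does:

```python
LOG10_INV20 = -1.3010                       # log10(1/20), rounded as in the text

log_p_protein = 400 * LOG10_INV20           # step 1: one 400-residue protein, -520.4
log_p_cell = 200_000 * log_p_protein        # step 2: 200,000 proteins, -104,080,000
log_p_10k = 10_000 * log_p_cell             # step 3: 10,000 cells, -1,040,800,000,000

print(log_p_protein, log_p_cell, log_p_10k)
```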

The calculated probabilities are astronomically small:
Probability for one cell: 10^-104,080,000
Probability for 10,000 cells: 10^-1,040,800,000,000

To put these numbers into perspective:
The total number of atoms in the observable universe is estimated at about 10^80.
The probability for one cell, 10^-104,080,000, is a decimal with more than 104 million zeros after the decimal point before the first significant digit.
For 10,000 cells, the probability is smaller still, with more than a trillion zeros after the decimal point.

Understanding When Odds Become Virtually Zero

When analyzing probabilities in the context of the origin of life, it is crucial to establish thresholds beyond which events can be considered virtually impossible. This analysis helps us to appreciate the magnitude of the improbabilities involved in the spontaneous formation of life. The maximum number of possible simultaneous interactions in the entire history of the universe, starting 13.7 billion years ago, can be calculated by multiplying three key factors:

1. The number of atoms in the universe (~10^80).
2. The number of seconds that have passed since the Big Bang (~10^16 seconds).
3. The fastest rate at which an atom can change its state per second (~10^43 state changes per second).

By multiplying these factors together, we find that the total number of events that could have occurred in the observable universe since its origin is approximately 10^139. This number represents the upper limit of probabilistic resources available in our universe. If the probability of an event is less than 1/10^139, it can be considered effectively impossible within the context of our universe: such an event is so unlikely that it would not be expected to occur even once in the entire history of the cosmos. Extreme probability analysis provides a framework for assessing the plausibility of chance-based explanations for complex biological and cosmological phenomena. When probabilities fall far below established limits, such as Borel's Law or the universal probability bound, we can reasonably conclude that such events are virtually impossible without invoking some form of intelligent causality that fundamentally alters the probability scenario.
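The threshold logic described here reduces to a comparison of exponents. A minimal sketch, using the 10^139 bound derived above and the per-cell exponent from the earlier calculation:

```python
UNIVERSAL_BOUND_EXP = 139   # ~10^139 events available in the universe's history

def effectively_impossible(prob_exp):
    """prob_exp: the event's probability is 10^(-prob_exp).
    Returns True when that probability falls below 1/10^139."""
    return prob_exp > UNIVERSAL_BOUND_EXP

print(effectively_impossible(104_080_000))  # one-cell exponent -> True
print(effectively_impossible(120))          # ~possible chess games -> False
```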


