ElShamah - Reason & Science: Defending ID and the Christian Worldview

Otangelo Grasso: This is my library, where I collect information and present arguments developed by myself that lead, in my view, to the Christian faith, creationism, and Intelligent Design as the best explanation for the origin of the physical world.



The algorithmic origins of life


#1 - The algorithmic origins of life - Wed Nov 04, 2020 6:23 pm

Otangelo (Admin)

The algorithmic origins of life

https://reasonandscience.catsboard.com/t3061-the-algorithmic-origins-of-life

1. Creating a recipe to make a cake is always a mental process. Creating a blueprint to make a machine is always a mental process.
2. To suggest that a physical process can create instructional assembly information, a recipe or a blueprint, is like suggesting that throwing ink on paper will create a blueprint. It is never going to happen!
3. Physics and chemistry alone do not possess the tools to create a concept, or functional complex machines made of interlocked parts for specific purposes.
4. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
5. DNA stores codified information to make proteins, and cells, which are chemical factories in a literal sense.


https://biosemiosis.net/?fbclid=IwAR1xTZ-JWsSeSaTHqN5MmsaXpPN6dlFQz4liWCcOYzt6ugib-TVWv9y8YIE
Information is not a tangible entity; it has no energy and no mass, it is not physical, it is conceptual.

- Life is a software/information-driven process.
- Information is not physical; it is conceptual.
- The only known source of semiotic information is prior intelligence.
- Life is therefore the direct product of a deliberate creative intellectual process.

Semiotic functional information is not a tangible entity, and as such, it is beyond the reach of, and cannot be created by any undirected physical process.
This is not an argument about probability. Conceptual semiotic information is simply beyond the sphere of influence of any undirected physical process. To suggest that a physical process can create semiotic code is like suggesting that a rainbow can write poetry... it is never going to happen!  Physics and chemistry alone do not possess the tools to create a concept. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
Life is no accident; the vast quantity of semiotic information in life provides powerful positive evidence that we have been designed.
One scientist working at the cutting edge of our understanding of the programming information in biology described what he saw as an “alien technology written by an engineer a million times smarter than us”.

If you convert the idea to a sentence to communicate (as I do here) or to remember it, that sentence may be physical, yet is dependent upon the non-physical idea, which is in no way dependent upon it.

Howard Hunt Pattee: Evolving Self-reference: Matter, Symbols, and Semantic Closure 28 August 2012
Von Neumann noted that in normal usages matter and symbol are categorically distinct, i.e., neurons generate pulses, but the pulses are not in the same category as neurons; computers generate bits, but bits are not in the same category as computers, measuring devices produce numbers, but numbers are not in the same category as devices, etc. He pointed out that normally the hardware machine designed to output symbols cannot construct another machine, and that a machine designed to construct hardware cannot output a symbol. Von Neumann also observed that there is a “completely decisive property of complexity,” a threshold below which organizations degenerate and above which open-ended complication or emergent evolution is possible. Using a loose analogy with universal computation, he proposed that to reach this threshold requires a universal construction machine that can output any particular material machine according to a symbolic description of the machine. Self-replication would then be logically possible if the universal constructor is provided with its own description as well as means of copying and transmitting this description to the newly constructed machine.
https://link.springer.com/chapter/10.1007/978-94-007-5161-3_14

Information is not physical
https://arxiv.org/pdf/1402.2414.pdf
Information is a disembodied abstract entity independent of its physical carrier. “Information is always tied to a physical representation. It is represented by engraving on a stone tablet, a spin, a charge, a hole in a punched card, a mark on paper, or some other equivalent. This ties the handling of information to all the possibilities and restrictions of our real physical world, its laws of physics and its storehouse”. However, the legitimate questions concern the physical properties of information carriers like “stone tablet, a spin, a charge, a hole in a punched card, a mark on paper”, but not the information itself. Information is neither classical nor quantum; it is independent of the properties of the physical systems used for its processing.

An algorithm is a finite sequence of well-defined, computer-implementable instructions resulting in precise intended functions. A prescriptive algorithm in a biological context can be described as performing control operations using rules, axioms and coherent instructions. These instructions are performed using a linear, digital, cybernetic string of symbols representing syntactic, semantic and pragmatic prescriptive information.


Cells host algorithmic programs for cell division and cell death; enzymes pre-programmed to perform DNA splicing; programs for dynamic changes of gene expression in response to the changing environment; pre-programmed adaptive responses to genomic stress; pre-programmed genes regulating fetal development; temporal programs for genome replication; pre-programmed animal genes dictating behaviors, including reflexes and fixed action patterns; pre-programmed biological timetables for aging; and so on.

A programming algorithm is like a recipe that describes the exact steps needed to solve a problem or reach a goal. We've all seen food recipes - they list the ingredients needed and a set of steps for how to make a meal. Well, an algorithm is just like that.  A programming algorithm describes how to do something, and it will be done exactly that way every time.
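The recipe comparison can be made concrete with a minimal sketch. The function name and steps below are purely hypothetical, chosen only to show the defining property the paragraph describes: the same input always produces the same steps, in the same fixed order.

```python
# A toy "recipe" algorithm: given the same inputs, it always yields the
# same ordered steps, executed exactly the same way every time.
def make_tea(cups):
    steps = []
    steps.append(f"boil {cups * 250} ml of water")  # step 1: fixed rule
    steps.append(f"add {cups} tea bag(s)")          # step 2: fixed rule
    steps.append("steep for 3 minutes")             # step 3: fixed rule
    return steps

print(make_tea(2))
```

Running it twice with the same argument produces an identical list of steps, which is exactly what distinguishes an algorithm from an ad hoc process.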

Okay, you probably wish you could see an example of how that works in the cell, right? Let's make an analogy. Suppose you have a recipe to make spaghetti with a special tomato sauce, written in a Word document saved on your computer. You have a Japanese friend, and you communicate with him only through the Google Translate program. Now he wants to try out that recipe and asks you to send him a copy. So you write an email, attach the Word document, and send it to him. When he receives it, he uses Google Translate and gets the recipe in Japanese, written in kanji, the logographic Japanese characters which he understands. With the information at hand, he can make the spaghetti with that fine special tomato sauce exactly as described in the recipe. In order for that communication to happen, you use at your end 26 letters of the alphabet to write the recipe, and your friend has 2,136 kanji characters that permit him to understand the recipe in Japanese. Google Translate does the translation work.

While the recipe is written in a Word document saved on your computer, in the cell the recipe (instructions, or master plan) for the construction of proteins, the life-essential molecular machines and veritable workhorses, is written in genes through DNA. While you use the 26 letters of the alphabet to write your recipe, the cell uses DNA: deoxyribonucleotides, four monomer "letters". Kanji has 2,136 characters, the alphabet has 26, and computer codes, being binary, use 0 and 1. The language of DNA is digital, but not binary. Where binary encoding has only 0 and 1 to work with (two symbols, hence "bi"nary), DNA uses four different organic bases, which are adenine (A), guanine (G), cytosine (C) and thymine (T).

The way DNA stores the genetic information is through codons, equivalent to words, each consisting of an array of three DNA nucleotides. These triplets form "words". While you used sentences to write the spaghetti recipe, the equivalent sentences in the cell are called genes, written through codon "words". With four possible nucleobases, three nucleotides can give 4^3 = 64 different possible "words" (tri-nucleotide sequences). In the standard genetic code, three of these 64 codons (UAA, UAG and UGA) are stop codons.
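The 4^3 = 64 count is easy to verify with a short sketch that simply enumerates every possible triplet over the four-letter DNA alphabet:

```python
from itertools import product

# Enumerate every possible codon ("word") over the four DNA bases.
bases = "ACGT"
codons = ["".join(c) for c in product(bases, repeat=3)]
print(len(codons))  # 4**3 = 64 possible triplet codons

# In the standard genetic code (written in RNA bases), three are stops,
# leaving 61 codons to specify the 20 amino acids.
stop_codons = {"UAA", "UAG", "UGA"}
print(64 - len(stop_codons))  # 61
```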

There has to be a mechanism to extract the information in the genome, and send it to the ribosome,  the factory that makes proteins, which is at another place in the cell, free floating in the cytoplasm. The message contained in the genome is transcribed by a very complex molecular machine, called RNA polymerase. It makes a transcript, a copy of the message in the genome, and that transcript is sent to the Ribosome. That transcript is called messenger RNA or typically mRNA. 
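The copying step RNA polymerase performs can be illustrated with a heavily simplified sketch: each template base is paired with its RNA complement (A-U, T-A, G-C, C-G). This ignores everything the real machine does (promoters, proofreading, splicing); the function name is hypothetical.

```python
# Minimal sketch of transcription: the template DNA strand is copied
# into mRNA by base pairing (A->U, T->A, G->C, C->G). Greatly simplified.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    return "".join(PAIRING[base] for base in template_strand)

print(transcribe("TACGGT"))  # -> "AUGCCA"
```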

In communications and information processing, a code is a system of rules to convert information, such as a letter or a word, into another form (another word, letter, etc.). In translation, 64 genetic codons are assigned to 20 amino acids. The genetic code refers to this assignment of codons to amino acids, and it is thus the cornerstone template underlying the translation process. Assignment means designating, ascribing, corresponding, correlating.
  
The ribosome does basically what Google Translate does. But while Google Translate just gives the recipe in another language, and our Japanese friend still has to make the spaghetti, the ribosome actually makes the end product, proteins, in one step.
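As a toy illustration of that translation step, a small lookup table (a tiny excerpt of the real 64-entry genetic code, with hypothetical function names) can model how codons read three bases at a time are mapped to amino acids until a stop codon is reached:

```python
# Toy translation: read mRNA three bases at a time and map each codon
# to an amino acid. Only a handful of the 64 real assignments are shown.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i+3], "?")
        if aa == "STOP":          # stop codon: release the chain
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']
```

Note that the table itself is the crucial element: nothing in the code of the function forces "AUG" to mean methionine; the mapping is imposed on the symbols, which is precisely the point the analogy is making.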

Imagine the brainpower involved in the entire process, from inventing the recipe to make spaghetti until it is on the table of your Japanese friend. What is involved?
 
1. Your imagination of the recipe
2. Inventing an alphabet, a language
3. Inventing the medium to write down the message
4. Inventing the medium to store the message
5. Storing the message in the medium
6. Inventing the medium to extract the message
7. Inventing the medium to send the message
8. Inventing the second language (Japanese)
9. Inventing the translation code/cipher from your language to Japanese
10. Making the machine that performs the translation
11. Programming the machine to know both languages, to make the translation
12. Making the translation
13. Making the spaghetti on the other end using the recipe in Japanese
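The core of steps 9-12, an agreed-upon cipher between two symbol systems, can be sketched in a few lines. The five-entry table below is entirely hypothetical; the point it illustrates is that the mapping is arbitrary: no law of physics forces "s" to correspond to "α".

```python
# Toy model of the recipe analogy: a message in one symbol system is
# mapped into another by an agreed-upon (and arbitrary) code table.
CIPHER = {"s": "α", "a": "β", "u": "γ", "c": "δ", "e": "ε"}

def translate_message(message, cipher):
    # Symbols without an assignment pass through unchanged.
    return "".join(cipher.get(ch, ch) for ch in message)

print(translate_message("sauce", CIPHER))  # -> "αβγδε"
```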

1. Cells host Genetic information
2. This information prescribes functional outcomes due to the right particular specified complex sequence of triplet codons and ultimately the translated sequencing of amino acid building blocks into protein strings.  The sequencing of nucleotides in DNA also prescribes highly specific regulatory micro RNAs and other epigenetic factors.
3. Algorithms prescribing functional instructions, digital programming, and coding systems using symbols are abstract representations, non-physical, and always originate from thought: from conscious or intelligent activity.
4. Therefore, genetic and epigenetic information comes from an intelligent mind. Since there was no human mind present to create life, it must have been a supernatural agency.

1. Algorithms prescribing functional instructions, digital programming, and coding systems using symbols are abstract representations, non-physical, and always originate from thought: from conscious or intelligent activity.
2. Genetic and epigenetic information is characterized by containing prescriptive codified information, which results in functional outcomes due to the right particular specified complex sequence of triplet codons and ultimately the translated sequencing of amino acid building blocks into protein strings. The sequencing of nucleotides in DNA also prescribes highly specific regulatory micro RNAs and other epigenetic factors.
3. Therefore, genetic and epigenetic information comes from an intelligent mind. Since there was no human mind present to create life, it must have been a supernatural agency.

Three subsets of sequence complexity and their relevance to biopolymeric information
https://link.springer.com/article/10.1186/1742-4682-2-29
An algorithm is a finite sequence of well-defined, computer-implementable instructions. Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction.  A linear, digital, cybernetic string of symbols representing syntactic, semantic and pragmatic prescription; each successive sign in the string is a representation of a decision-node configurable switch-setting – a specific selection for function. Selection, specification, or signification of certain "choices" in FSC sequences results only from nonrandom selection.

Nucleotides are grouped into triplet Hamming block codes, each of which represents a certain amino acid. No direct physicochemical causative link exists between codon and its symbolized amino acid in the physical translative machinery. Physics and chemistry do not explain why the "correct" amino acid lies at the opposite end of tRNA from the appropriate anticodon. Physics and chemistry do not explain how the appropriate aminoacyl tRNA synthetase joins a specific amino acid only to a tRNA with the correct anticodon on its opposite end. Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel.   Prescriptive sequences are called "instructions" and "programs." They are not merely complex sequences. They are algorithmically complex sequences. They are cybernetic.

Leroy Hood  The digital code of DNA   23 January 2003
The discovery of the double helix in 1953 immediately raised questions about how biological information is encoded in DNA. A remarkable feature of the structure is that DNA can accommodate almost any sequence of base pairs — any combination of the bases adenine (A), cytosine (C), guanine (G) and thymine (T) — and, hence any digital message or information. 
https://www.nature.com/articles/nature01410

Translation occurs after the messenger RNA (mRNA) has carried the transcribed ‘message’ from the DNA to protein-making factories in the cell, called ribosomes.
The message carried by the mRNA is read by a carrier molecule called transfer RNA (tRNA).
https://www.yourgenome.org/facts/what-is-gene-expression


The capabilities of chaos and complexity.
http://europepmc.org/article/PMC/2662469
Do symbol systems exist outside of human minds?
Molecular biology’s two-dimensional complexity (secondary biopolymeric structure) and three-dimensional complexity (tertiary biopolymeric structure) are both ultimately determined by linear sequence complexity (primary structure; functional sequence complexity, FSC). The codon table is arbitrary and formal, not physical. The linking of each tRNA with the correct amino acid depends entirely upon a completely independent family of tRNA aminoacyl synthetase proteins. Each of these synthetases must be specifically prescribed by separate linear digital programming, but using the same MSS. These symbol and coding systems not only predate human existence, they produced humans along with their anthropocentric minds.


[Image: Ijms-10-00247f3]
The image above shows the prescriptive coding of a section of DNA. Each letter represents a choice from an alphabet of four options. The particular sequencing of letter choices prescribes the sequence of triplet codons and ultimately the translated sequencing of amino acid building blocks into protein strings.  The sequencing of nucleotides in DNA also prescribes highly specific regulatory micro RNAs and other epigenetic factors. Thus linear digital instructions program cooperative and holistic metabolic proficiency.

Chaos is neither organized nor a true system, let alone “self-organized.” A bona fide system requires organization. Chaos by definition lacks organization. What formal functions does a hurricane, for example, perform? It doesn’t DO anything constructive or formally functional, because it contains no formal organizational components. It has no programming talents or creative instincts. A hurricane is not a participant in Decision Theory. A hurricane does not set logic gates according to arbitrary rules of inference. A hurricane has no specifically designed, dynamically-decoupled configurable switches. No means exists to instantiate formal choices or function into physicality. A highly self-ordered hurricane does nothing but destroy organization. The same applies to any unguided, random natural event.

The capabilities of stand-alone chaos, complexity, self-ordered states, natural attractors, fractals, drunken walks, complex adaptive systems, and other subjects of nonlinear dynamic models are often inflated. A scientific mechanism must be provided for how purely physicodynamic phenomena can program decision nodes, optimize algorithms, set configurable switches so as to achieve integrated circuits, achieve computational halting, and organize otherwise unrelated chemical reactions into a protometabolism. We know only of conscious or intelligent agents able to provide such things.


Information and the Nature of Reality From Physics to Metaphysics page 149
The concept of information has been a victim of a philosophical impasse that has a long and contentious history: the problem of specifying the ontological status of the representations or contents of our thoughts. How can the
content (aka meaning, reference, significant aboutness) of a sign or thought have any causal efficacy in the world if it is by definition not intrinsic to whatever physical object or process represents it?

Consider the classic example of a wax impression left by a signet ring in wax. Except for the mind that interprets it, the wax impression is just wax, the ring is just a metallic form, and their conjunction at a time when the wax was still warm and malleable was just a physical event in which one object alters another when they are brought into contact. Something more makes the wax impression a sign that conveys information. It must be interpreted by someone.

In order to develop a full scientific understanding of information we will be required to give up thinking about it, even metaphorically, as some artifact or commodity. To make sense of the implicit representational function that distinguishes information from other merely physical relationships, we will need to find a precise way to characterize its defining nonintrinsic feature – its referential content – and show how it can be causally efficacious despite its physical absence. The enigmatic status of this relationship was eloquently, if enigmatically, framed by Brentano’s use of the term “inexistence” when describing mental phenomena.

Signature in the Cell, Stephen Meyer page 16
What humans recognize as information certainly originates from thought—from conscious or intelligent activity. A message received via fax by one person first arose as an idea in the mind of another. The software stored and sold on a compact disc resulted from the design of a software engineer. The great works of literature began first as ideas in the minds of writers—Tolstoy, Austen, or Donne. Our experience of the world shows that what we recognize as information invariably reflects the prior activity of conscious and intelligent persons.

We now know that we do not just create information in our own technology; we also find it in our biology—and, indeed, in the cells of every living organism on earth. But how did this information arise? The age-old conflict between the mind-first and matter-first world-views cuts right through the heart of the mystery of life’s origin. Can the origin of life be explained purely by reference to material processes such as undirected chemical reactions or random collisions of molecules?


The algorithmic origins of life
https://royalsocietypublishing.org/doi/full/10.1098/rsif.2012.0869
The key distinction between the origin of life and other ‘emergent’ transitions is the onset of distributed information control, enabling context-dependent causation, where an abstract and non-physical systemic entity (algorithmic information) effectively becomes a causal agent capable of manipulating its material substrate.

Biological information is functional due to the right sequence. A variety of terms have been employed for measuring functional biological information: complex specified information (CSI), functional sequence complexity (FSC), and instructional complex information. I like the term instructional because it defines accurately what is being done, namely instructing the right sequence of amino acids to make proteins, and also the sequence of messenger RNA, which is used for gene regulation and a variety of as yet unexplored functions.

Another term is prescriptive information (PI). It too accurately describes what genes do: they prescribe how proteins have to be assembled. But it also smuggles in a meaning which is highly disputed between proponents of intelligent design and of unguided evolution. Prescribing implies that an intelligent agency preordained the nucleotide sequence in order for it to be functional. The following paper states:



Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3319427/
Biological information frequently manifests its “meaning” through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. This paper looks at this dichotomy as expressed in both the genetic code and in the central dogma of protein synthesis. An example of a genetic algorithm is modeled after the ribosome, and an examination of the protein synthesis process is used to differentiate PI data from PI algorithms.

Both the method used to combine several genes together to produce a molecular machine and the operational logic of the machine are examples of an algorithm. Molecular machines are a product of several polycodon instruction sets (genes) and may be operated upon algorithmically. But what process determines what algorithm to execute?

In addition to algorithm execution, there needs to be an assembly algorithm. Any manufacturing engineer knows that nothing (in production) is built without plans that precisely define orders of operations to properly and economically assemble components to build a machine or product. There must be by necessity, an order of operations to construct biological machines. This is because biological machines are neither chaotic nor random, but are functionally coherent assemblies of proteins/RNA elements. A set of operations that govern the construction of such assemblies may exist as an algorithm which we need to discover. It details real biological processes that are operated upon by a set of rules that define the construction of biological elements both in a temporal and physical assembly sequence manner.

An algorithm is a set of rules or procedures that precisely defines a finite sequence of operations. These instructions prescribe a computation or action that, when executed, will proceed through a finite number of well-defined states leading to specific outcomes. In this context an algorithm can be represented as: Algorithm = logic + control, where the logic component expresses rules, operations, axioms and coherent instructions. These instructions may be used in the computation and control, while the decision-making components determine the way in which deduction is applied to the axioms, according to the rules, as it applies to instructions.
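The "Algorithm = logic + control" decomposition can be made concrete with a small sketch. The rule set below (the Collatz step rules, chosen purely as an illustration) is the logic component; the loop deciding which rule fires, in what order, and when to halt is the control component:

```python
# Sketch of "Algorithm = logic + control".
# Logic: declarative (condition, action) rules, stated independently
# of how or when they are applied.
RULES = [
    (lambda n: n % 2 == 0, lambda n: n // 2),      # even -> halve
    (lambda n: n % 2 == 1, lambda n: 3 * n + 1),   # odd  -> 3n + 1
]

# Control: the order of rule application and the halting condition.
def run(n, limit=1000):
    steps = 0
    while n != 1 and steps < limit:
        for cond, action in RULES:
            if cond(n):
                n = action(n)
                break
        steps += 1
    return n

print(run(6))  # control drives the logic to the halting state 1
```

Changing the control strategy (e.g. the halting condition) changes the algorithm's behavior even though the logic, the rules themselves, stays fixed.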

A ribosome is a biological machine consisting of nearly 200 proteins (assembly factors) that assist in assembly operations, along with 4 RNA molecules and 78 ribosomal proteins that compose a mature ribosome. This complex of proteins and RNAs collectively produce a new function that is greater than the individual functionality of proteins and RNAs that compose it.

The DNA (source data), RNA (edited mRNA), large and small RNA components of ribosomal RNA, ribosomal protein, tRNA, aminoacyl-tRNA synthetase enzymes, and "manufactured" protein (ribosome output) are part of this one way, irreversible bridge contained in the central dogma of molecular biology.

One of the greatest enigmas of molecular biology is how codonic linear digital programming is not only able to anticipate what the Gibbs free energy folding will be, but it actually prescribes that eventual folding through its sequencing of amino acids. Much the same as a human engineer, the nonphysical, formal PI instantiated into linear digital codon prescription makes use of physical realities like thermodynamics to produce the needed globular molecular machines.

The functional operation of the ribosome consists of logical structures and control that obeys the rules for an algorithm. The simplest element of logical structure in an algorithm is a linear sequence. A linear sequence consists of one instruction or datum, followed immediately by another as is evident in the linear arrangement of codons that make up the genes of the DNA.

The mRNA (which is itself a product of the gene copy and editor subroutine) is a necessary input which is formatted by grammatical rules.

Top-down causation by information control: from a philosophical problem to a scientific research programme
https://sci-hub.st/https://royalsocietypublishing.org/doi/10.1098/rsif.2008.0018



Last edited by Otangelo on Sat Jul 09, 2022 12:44 pm; edited 13 times in total

https://reasonandscience.catsboard.com

#2 - Re: The algorithmic origins of life - Wed Nov 18, 2020 11:28 am


1. Intelligence can and does describe reality, and objects in the real world. That's descriptive information.
2. But intelligence also structures, organizes, controls, and orders reality. That's using prescriptive information.
3. That is a quality of power - exclusive to intelligence.

How the DNA Computer Program Makes You and Me
https://www.quantamagazine.org/how-the-dna-computer-program-makes-you-and-me-20180405/


Dynamic changes of genome, pre-programmed or in response to the changing environment.
In the last decade or so, however, it has been revealed that genetic material is not stable or static but dynamic, changing incessantly and rapidly, the changes being either pre-programmed or in response to the changing environment.
https://agris.fao.org/agris-search/search.do;jsessionid=12397CF37F046B9EB4DEA093BC909F0B?request_locale=fr&recordID=KR19900040981&query=&sourceQuery=&sortField=&sortOrder=&agrovocString=&advQuery=&centerString=&enableField=

These alterations in the genome size occurred right at the first generation of amphidiploids, revealing the rapidity of the event. They suggest that these alterations, observed after allopolyploidization and without additive effect on the genome size, represent a pre-programmed adaptive response to the genomic stress caused by hybridization, which might have the function of stabilizing the genome of the new cell.
https://www.scielo.br/scielo.php?pid=S1413-70542003000100003&script=sci_arttext

Early pre-programming of genes
Special proteins are pre-programming genes which later regulate fetal development. This pre-programming occurs at an earlier stage than previously known.
https://partner.sciencenorway.no/dna-forskningno-norway/early-pre-programming-of-genes/1403186

[Pre-programmed genes]
https://pubmed.ncbi.nlm.nih.gov/28823208/

The evolution of the temporal program of genome replication
In yeast, active origins are distributed throughout the genome at non-transcribed and nucleosome-depleted sequences and comprise a specific DNA motif called the ARS consensus sequence, which is bound by the Origin Recognition Complex throughout the cell cycle 4–6. Despite this partially pre-programmed replication activity, different cells in a population may use different subsets of active origins.
https://www.biorxiv.org/content/10.1101/210252v1.full

Learn about behaviors that are pre-programmed into an animal's genes, including reflexes and fixed action patterns.
https://www.khanacademy.org/science/ap-biology/ecology-ap/responses-to-the-environment/a/innate-behaviors

A number of theories have been generated to account for this spatial heterogeneity, including a zonated response to spatial gradients, or an internal clock where epithelial cells are pre-programmed to express different functional genes.
https://www.epistem.co.uk/spotlight/Lgr5-telocytes-signalling-source

The cells of the human body are governed by a set of pre-programmed processes, known as the cell cycle, which determines how cells progress and divide.
https://www.news-medical.net/life-sciences/The-Role-of-Cell-Division-in-Tumor-Formation.aspx

CRISPR (again, shorthand for CRISPR-Cas9), utilizes the Cas9 enzyme, a naturally produced protein in cell types built for DNA splicing, to “unzip” these chained nucleotides at a specific spot and then replace the nucleotide chain with the one attached. The location is based on pre-programmed information in the enzyme—essentially it floats around inside the nucleus until it finds the correct spot, then gets to work.
https://nanocellect.com/blog/using-crispr-technology-to-engineer-genetically-modified-cell-lines/

What are telomeres?
Are our cells just following a pre-programmed biological timetable regardless of any other factors? Most likely it’s a combination of all of these, plus some other causes we haven’t yet discovered.
https://www.science.org.au/curious/people-medicine/what-are-telomeres



https://www.sciencedirect.com/science/article/abs/pii/S0306987715003175

Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3319427/

"Prescriptive Information (PI)" defines the sources and nature of programming controls, regulation, and algorithmic processing. Such prescriptions are ubiquitously instantiated into all known living cells.

The DNA polynucleotide molecule consists of a linear sequence of nucleotides, each representing a biological placeholder of adenine (A), cytosine (C), thymine (T) and guanine (G). This quaternary system is analogous to the base two binary scheme native to computational systems. As such, the polynucleotide sequence represents the lowest level of coded information expressed as a form of machine code.
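The analogy between the quaternary DNA alphabet and binary machine code can be shown directly: two bits suffice to encode each of the four bases. The particular bit assignment below is an arbitrary illustrative choice, not a biological fact.

```python
# The quaternary DNA "machine code" re-expressed in binary:
# two bits per base (an arbitrary illustrative encoding).
TWO_BIT = {"A": "00", "C": "01", "G": "10", "T": "11"}

def to_bits(dna):
    return "".join(TWO_BIT[b] for b in dna)

print(to_bits("GATTACA"))  # -> "10001111000100"
```

Seven bases become fourteen bits, which is why a quaternary sequence carries exactly twice the information per symbol of a binary one.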

When a functional choice of a nucleotide is  made, the polymerization of each prescriptive nucleotide into a programmed "messenger molecule" instantiates a quaternary programming choice into that syntax.

Information can be transferred from source to destination via an agreed-upon set of rules, and a language acted upon by algorithms. Each letter in the sentence "The glass contains water" is formally selected as a symbol from one of 26 alphabetical characters plus space. Each letter selection generates a simple form of Prescriptive Information (PI) as each letter contributes to forming a finite string of symbols, characterized as words having semantic meaning. PI is inherent in the selection of each letter from among 26 options even prior to the selection of words and word syntax. In both language and molecular biology synonyms occur where different letter selections can spell different words with the same semantic meaning. Sentence construction begins with letter selection.
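The claim that PI is inherent in each letter selection can be quantified with standard information theory: choosing one symbol from 27 equally likely options (26 letters plus space) carries log2(27) bits, before any words or syntax exist. The sketch below uses the paper's own example sentence; the uniform-probability assumption is a simplification.

```python
import math

# Each selection from 27 equally likely symbols (26 letters + space)
# contributes log2(27) bits; information inheres in the choice among
# alternatives, prior to words and word syntax.
bits_per_symbol = math.log2(27)
message = "The glass contains water"

print(round(bits_per_symbol, 2))                  # ~4.75 bits per letter
print(round(bits_per_symbol * len(message), 1))   # total for the sentence
```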

The question becomes, are the words "the," "glass," "contains," and "water" algorithms or data? Each word is composed of a linear sequence of symbols in the form of letters, which collectively transfer a greater meaning than the individual meaning of each character. This transfer is accomplished by assigning semantic meaning to a prescribed sequence of letters, the intent being to map meaning onto an arbitrary sequence of tokens. This mapping is arbitrary, as evidenced by the multitude of languages that exist in our world, each language mapping "meaning" to a multitude of arbitrary sequences of symbols or tokens, be it letters, shapes, or pictures. This semiotic relationship transfers into biocybernetics and biosemiotics when viewed from the biological realm.

Biological information frequently manifests its "meaning" through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI.

Prescriptive Information (PI)
https://www.davidabel.us/papers/Prescriptive%20Information%20PI%20SciTopics.pdf

When processed, Prescriptive Information (PI) is used to produce formal function. Computational cybernetic programs and linguistic instructions are examples of Prescriptive Information.

Intuitive information entails syntax, semantics and pragmatics. Syntax deals with symbol sequence, various symbol associations, and related arbitrary rules of grouping. Semantics deals with the meanings represented within any symbol system. Pragmatics addresses the formal function of messages conveyed using that symbol system.

No random number generator has ever been observed to generate a meaningful message or a computational program. No physical law can determine each selection, either. If selections were dictated by law, all selections would be the same. Empirical evidence of PI arising spontaneously from inanimate nature is sorely lacking.

Particular constraints must be deliberately chosen and others rejected to steer a cause-and-effect chain towards formal pragmatic worth. The false claim is made of stochastic generation of ‘‘candidate solutions.’’ No explanation is provided as to why or how inanimate nature would prefer a solution over a non-solution. Optimization is goal-oriented and formal. Neither chance nor necessity problem-solves. Physicodynamics cannot generate ‘‘chromosomes’’ of abstract representations. The iterations are steered toward formal pragmatic success artificially by agents. The investigator pursues a goal. Evolution has no goal. Physicochemical dynamics unaided by agent-steering has never been observed to generate formal organization. Just as pragmatic control cannot be reduced to spontaneously occurring physicodynamic constraints, arbitrarily-written rules cannot be reduced to the ‘‘necessary’’ laws of physics and chemistry. Whether we are talking about specific prescriptions or the system rules that govern those prescriptions, to talk about prescription is to talk about choice with intent aimed at objective decision-making.

We arbitrarily assign meaning to small syntactical groups of alphabetical characters, the equivalent of words. By arbitrarily, we do not mean randomly. We mean not only (1) uncoerced by determinism, but (2) deliberately chosen according to voluntarily obeyed rules, not forced laws. But how can a physical symbol vehicle, or a group of such physical symbol vehicles represent an idea in a purely materialistic world? Physicalism has never been able to answer this question. The Mind-Body problem prevails. No physical object can take on representational meaning apart from formal arbitrary assignment of abstract meaning by agents. Physicality itself cannot generate a sign/symbol/token semiotic system.


When it comes to biopolymeric syntax, semantics, and pragmatics, we fanatically insist for metaphysical reasons that the system is purely physical. No empirical, rational, or prediction-fulfillment support exists for this dogma. What determined the monomeric syntax, the sequencing, of its positive-strand template? Not chance, and not necessity. We cannot conclude that mathematics is physical just because it is instantiated into computer hardware or human brains. The same is true of genetic instruction and the PI management of life at the cellular level. Both mathematics and life are fundamentally formal. Even most epigenetic factors can be shown to be formally produced and integrated into a conceptual, cooperative, computational scheme of holistic metabolism. Life cannot exist without sophisticated, formal, genetic PI.

Semiosis, cybernetics, and formal organization all require deliberate programming decisions, not just self-ordering physicodynamic redundancy.

Three nucleotides are used to prescribe each amino acid. No physicochemical explanation exists for such sophisticated triplet codon sequencing and encryption. 
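As an illustration of this triplet encoding (my own sketch; only a handful of the 64 codons of the standard genetic code are included, so it is a toy subset rather than the full table):

```python
# A small subset of the standard genetic code: three bases per codon,
# with synonymous codons mapping to the same amino acid, plus stop codons.
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp",                              # uniquely encoded
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",  # synonyms
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",             # stop signals
}

def translate(dna):
    """Read a DNA string three bases at a time, stopping at a stop codon."""
    peptide = []
    for i in range(0, len(dna) - 2, 3):
        residue = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = not in this subset
        if residue == "STOP":
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("ATGGGTGGGTGA"))  # Met-Gly-Gly
```

Note how GGT and GGG both yield Gly: the synonym phenomenon mentioned earlier in the thread is visible even in this toy table.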

Prescriptive information requires anticipation and “choice with intent”

The biosemiosis of prescriptive information
Prescriptive information either instructs or directly produces nontrivial function at its destination.

https://reasonandscience.catsboard.com

Re: The algorithmic origins of life - Wed Dec 08, 2021 7:43 am

Otangelo (Admin)

Claus Emmeche: FROM LANGUAGE TO NATURE - the semiotic metaphor in biology (1991)

The teleonomic character of living systems continues to challenge the conception of life prevailing among biologists. No matter how forcefully vitalistic or finalistic explanations have been defeated through developments in experimental biology such attitudes apparently never totally disappear, even among professional biologists. Rather they reappear in new guises for every new generation.

My comment: Teleonomy is the quality of apparent purposefulness and of goal-directedness of structures and functions in living organisms brought about by natural processes like natural selection. The term derives from the Greek "τελεονομία", compound of two Greek words, τέλος, from τελε-, ("end", "goal", "purpose") and νόμος nomos ("law"). Teleonomy is sometimes contrasted with teleology, where the latter is understood as a purposeful goal-directedness brought about through human or divine intention. Teleonomy is thought to derive from evolutionary history, adaptation for reproductive success, and/or the operation of a program. Teleonomy is related to programmatic or computational aspects of purpose.
https://en.wikipedia.org/wiki/Teleonomy

It is not surprising that teleonomy is disputed by teleology. Purposeful, goal-directed outcomes are better explained by someone with intent and goals who brought forward certain things for specific reasons.

In the history of science controversies of this kind going on for centuries have rarely, if ever, been resolved through the unambiguous victory of one of the sides. In the first decades following the neo-Darwinistic synthesis of the 1940s, however, most biologists considered the matter settled once and for all. The purposeful character of living organisms was seen as an inevitable consequence of evolution to be causally explained by the mechanism of natural selection gradually favouring the spread of adaptive mutations within populations.

This provisional cease-fire, however, did not survive the 1970s. Severe criticism from areas ranging from paleontology to embryology and molecular biology succeeded in provoking a renewed theoretical debate on the role of natural selection in evolution, and thus the gradual and adaptive character of this process. At the deepest level, as we see it, this renewed criticism concerns the question of biological form. Is the development of form simply to be explained through the gradual improvement of function? Do organisms and parts of organisms develop their characteristic forms, just because such forms were the most functional (the most successful)?

The neo-Darwinian belief in functionality (i.e., success in reproduction) as the key to the creation of form is in fact a modern version of this substance-preference dating back to antiquity. It requires the conception of form as something to be assembled through a series of evolutionary steps, each of which would in itself be capable of passing the test for functionality. Evolutionary change of form can be seen then as divided into 'atoms of change' much in the same way as a substance may be divided into molecules. Form, in other words, is seen as a phenomenon of more or less, not as a question of the generation of qualitatively different patterns.
Opponents to this belief would claim that forms are not reducible to such a series of single steps. For instance a die of four cannot be obtained simply by adding an extra dot to a die of three. Rather quite a new pattern has to be constructed. And there would be no guarantee that the intermediate steps between two different patterns or forms would be functional at all - if anything, the opposite would seem plausible. Form, according to this view, must be considered an autonomous factor in evolution. Historical (i.e., phylogenetic), architectonical and embryological constraints would be reflected in the actual forms of living systems on this planet, and the rules governing such constraints would tell a lot more about evolution than natural selection, which can only modify the given patterns on which it works.

It is difficult to escape the feeling that much of the energy now invested in defending the functionalist image of evolution is in the end invested in order to defend a causal mechanism for evolution. Since the time of William Paley, the argument from form has always been associated with religious conceptions, whether the claim for Godly 'design' or only for the existence of vital forces or final forces. To give up natural selection as the prime mover in the world of living creatures seems identical to giving up the firm hold of science over this strange teleonomic aspect of life.

It is with this background that one must understand the temptation among biologists in recent years to consider life from the point of view of communication or information theory rather than from the point of view of classical physics and chemistry. After all, the only place to go for models of purposeful behaviour would be in the cultural sphere of the human being. And whatever the reasons for purposeful behaviour of living systems are, a more appropriate description of such behaviour might help formulating scientific explanation. The choice need not be between natural selection or vital force. Maybe a third route might be found.

This idea, of course, would probably not have been so attractive had it not been for the introduction into biology during the 1950s and 1960s of a whole set of terms borrowed from information theory. A meaningful description of the genetic processes going on at the molecular level of the cell seemed to require terms such as 'genetic code', 'messenger RNA', 'feedback', 'information', etc.

Unfortunately, in spite of their widespread use, these concepts are far from unambiguous. Thus in information theory (e.g., Shannon 1949) information is understood as an objective quantifiable entity. The information content of a message is equal to the improbability of that message. 
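The Shannon measure referred to here can be stated in one line: the information content of a message of probability p is -log2(p) bits, so the rarer the message, the more bits it carries. A short illustrative sketch (my own, with made-up probabilities):

```python
import math

def self_information(p):
    """Bits of information conveyed by an event of probability p (Shannon)."""
    return -math.log2(p)

print(self_information(0.5))   # 1.0 bit  (a fair coin flip)
print(self_information(0.25))  # 2.0 bits (one of four equally likely outcomes)
print(self_information(1/26))  # ~4.70 bits (one letter of 26, assumed uniform)
```

As the surrounding text argues, this captures only the statistical side of information: the number comes out the same whether the improbable sequence is a meaningful sentence or gibberish.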

While this definition makes theory easy, it also removes the concept of information from any use in real life situations. In human communication statistical analysis of the probabilities to be ascribed to any definite statement is not only not feasible, it is impossible for theoretical reasons. Nobody would deny that totally unforeseen events are an essential part of life. The eventual appearance of such events obviously makes it impossible to ascribe distinct probabilities to any event. We strongly feel, nevertheless, that we often get informed through conversation. Mere improbability does not cover the real meaning of information.

In fact, most statements in human communication are only understandable at a semantic level of analysis. Evidently, the 'information' of the mathematical theory of information is a much less comprehensive category than the information exchanged between people talking.

When this concept of information is introduced into biology these sophisticated problems are imported as well. However, the tradition of biology is very unprepared to cope with such problems. Actually, in the daily praxis of the laboratory, information is simply identified with a substance, a piece of DNA, a gene. And by confusing information with substance in this way, the new terminology of molecular genetics automatically reinforces the functionalist theory of neo-Darwinism.

We doubt that a scientific understanding of the teleonomic character of living systems will ever be possible based on this restricted concept of information. Rather we propose that biological information must be understood as embracing the semantic openness characteristic of information exchange in human communication. The cost of this, of course, is that we shall have to abandon the belief in information as an objective entity to be measured in units of bits (or genes).

In consequence, any theory which tries to describe the dynamics of living systems from the perspective of communication or exchange of signs, i.e. semiotics (from Greek: semeion = sign) would have to rely on a concept of information as a subjective category. Following Gregory Bateson we take information to mean: a difference that makes a difference to somebody. According to this definition, information is inseparable from a subject to whom this information makes sense. Our thesis is that living systems are real interpretants of information: They respond to selected differences in their surroundings. Only on this premiss, we claim, may analogies from the sphere of human communication serve as explanatory tools in the understanding of the purposeful behavior of living systems.

This of course begs the question about the significance of metaphor in science. Therefore, we first consider the nature of metaphor in science in order to distinguish between the metaphorical transfer of signification at various levels in scientific theories. Second, we focus on the metaphor of nature as language in different versions, to give an impression of the very general character and the cognitive appeal of this metaphor, and to criticize some of the models. In fact, nature perceived as language, or a language-like system, constitutes a complex web of cognate ideas of various heuristic value. In arguing for a semiotic perspective on living nature, we have to consider the existence of at least two different semiotic traditions - the linguistic structuralism of Ferdinand de Saussure, and the theory of signs of Charles S. Peirce, both of which may inspire the new view of nature. We shall argue, however, third, that the Saussurean theory as applied to living systems, raises some decisive problems in relation to an account of the different coding processes of biological information in the evolution and development of living beings in ecosystems. To spell out these problems, we shall criticize Forti's analogy between language and living species. Fourth, in the last section we show that the basic concepts of the semiotics of C. S. Peirce fit very well into the requirements of a `subjective' category of biological information (here subjective should be taken in the epistemological sense, and not as equivalent to non-rational or non-scientific). Biological structures in general and the gene in particular may be understood as signs forming a network of triadic semiotic relations through space and time.


Biosemiotics points to design

One must understand the temptation among biologists in recent years to consider life from the point of view of communication or information theory rather than from the point of view of classical physics and chemistry.

In all life forms, each cell contains a description of itself stored by genes and epigenetic storage mechanisms. This description, in the form of codified information, must furthermore stay inactive and protected from its intra- and extracellular milieu until required, and must be able to be faithfully replicated with a minimal number of errors; otherwise the description will change under the influence of entropy and damage, deteriorate, and ultimately die. The function of this description is to assure the identity of the system through time. It is the memory of the living system. This description exists in the form of information stored in DNA and epigenetic storage mechanisms.

That raises the question: Why would chemicals on the prebiotic earth generate a system that describes itself in digital form and has the know-how to transform its information content into an identical representation in analog 3D form, the physical 'reality' of the actual living system, based on codified information transfer, transcription, and translation - a system able to decipher the DNA code as well as to follow its instructions in a given way? Once the 'analog phase', the message of the memory, is expressed, life can flourish and perpetuate. This state of affairs sets life apart from non-life. It is a flow from sign to form.

But there actually have to be two systems of prescribed information: one that stores all the information to make the system, and a second that dictates the behavior of the system for action, responding to extra- and intracellular cues, environmental conditions, nutrition demands, and energy supply. We can call this the distinction between 'replication' information and 'interaction' information. Both have to be set up right from the beginning.

The probability of God's existence can be quantified. The more information is required to make a first living cell, the more improbable it is that such a message came about by unguided means. Information simply is a quantitative measure of improbability. The higher the probability of a given event, the less information it conveys. Genetic molecules can only carry information based on unpredictable and aperiodic specified sequences that characterize instructional blueprints. Biological information must be understood as embracing the semiotic characteristics of information transmission and exchange analogous to human communication and language, based on syntax, pragmatics, and semantics. Statistical analysis, however, does not fully elucidate the problem. Mere improbability does not cover the real meaning or essence of information, measured in units of bits (or genes).

The price to be paid for this quantification is a loss of semantic content. In information theory, the value of information only reflects the statistical structure. But most statements in human communication are only understandable at a semantic level of analysis.

That becomes clear when one takes the semantic level of information into consideration, which introduces the concept of subjectiveness as a distinct category. Semiotics means signs, and those have meaning based on a convention agreed upon by someone entailed with intelligence. Only conscious agents can make sense of and interpret the meaning of a sign. And the information stored in cells is pregnant with meaning.

During the translation of genetic information into the amino acid sequence that forms functional polypeptide chains, which fold into proteins, through the ribosome machine, living systems are preprogrammed to give interpretations to incoming bits of information. A codon is assigned to an amino acid by means of the genetic code. This state of affairs in biochemical systems is not a metaphorical description: translation is what is literally observed. The nucleic message is translated into the 'peptidic language'. That is analogous to a word in English being translated to a word in Chinese, where the meaning is equivalent. If an intelligent designer is excluded as the explanation of its origin, then natural, non-intelligent mechanisms become an external substitute cast in the image of agency. I see this as a fallacy of misplaced reduction: attributing what we only observe intelligence to be able to generate to mere mechanistic natural causes.

Analogy means 'similarity', or 'accordance'. An analogy is a relation between two descriptions of objects which allows inferences to be made about one on the basis of the other.  It is often of great value to construct and use analogies to clarify a state of affairs. It is valid to project the understanding of a concept, a causal order that we are familiar with, into the natural phenomena under study. An explanation can be understood as the description of natural phenomena, giving reference to an analogous system, like ordinary human language, mathematics, general physics etc. - that is already known, and so its causes. There is a signification-transfer involved between the different areas that is helpful to understand previously not known causal relationships.

Abduction - the process of inference to the best explanation captures this aspect of signification-transfer by analogy very well. Without our capacity to relate two bodies of knowledge abductively, realizing them as both falling under the same rules, we would have no science at all.  The argument by analogy is the fundamental technique in the process of understanding the world.

The Natural Theology of William Paley (1802) illustrates the concept of the book of nature as the sign of divinity in the Creation. The book of nature was often conceived as the visible sign of an otherwise invisible and transcendent God. It was readable by anyone, although the meaning of the book might be accessible only for the specially chosen.  The informational complexity of living systems points to the consciousness of the immanent larger mind of nature.

When "the hand of the creator" was replaced in the explanatory scheme by »natural selection«, it permitted incorporating most of the natural theology literature on living organisms almost unchanged into evolutionary biology. 
Natural selection became a modernized hand of God.

Biological systems start from the (digital) axioms and definitions and develop an analogic three-dimensional geometry: an instance of the morphology of life. Genotype is like a set of axioms, for instance Euclid's, and a phenotype is like a three-volume treatise on Euclidean geometry. 

The real-life of organisms in their ecological niche constitutes the more explorative phase of preprogrammed evolution, where selective and stochastic processes are in action. The termination of the life cycle occurs through sexual reproduction which corresponds to a 'back-translation' of the environmental experiences of the (analogic) population, to the digital level of epigenetic coding and the DNA inside the cohort of zygotes starting the next generation.

Deciphering of the DNA code has revealed our possession of a language much older than hieroglyphics, a language as old as life itself, a language that is the most living language at all - even if its letters are invisible and its words are buried in the cells of our bodies. When the structure of DNA was elucidated and the genetic code was broken, the concept of the organism as determined by a genetic program seemed to be an established biochemical fact, notwithstanding the enormous gap in knowledge about the epigenetic relationship between genotype and phenotype.

To state that an event is improbable, one first has to know that it might occur at all. Therefore the totally unforeseen - and thus the real new - cannot be accounted for through the statistical theory of probability. Semiotic functional information is not a tangible entity, and as such, it is beyond the reach of, and cannot be created by any undirected physical process. This is not an argument about probability. Conceptual semiotic information is simply beyond the sphere of influence of any undirected physical process. To suggest that a physical process can create semiotic code is like suggesting that a rainbow can write poetry... it is never going to happen!  Physics and chemistry alone do not possess the tools to create a concept. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind. Life is no accident, the vast quantity of semiotic information in life provides powerful positive evidence that we have been designed.

To quote one scientist working at the cutting edge of our understanding of the programming information in biology: he described what he saw as an “alien technology written by an engineer a million times smarter than us”.

https://reasonandscience.catsboard.com

Re: The algorithmic origins of life - Sun Dec 26, 2021 8:12 pm

Otangelo (Admin)


1. The genetic code, and genetic information, are analogous to human language. Codons are words, the sequence of codons ( genes) are sentences. Both contain semantic meaning.
2. Codon words are assigned to, and code for, amino acids; genes, which are strings containing codified, complex specified information, instruct the assembly of proteins.
3. Semantic meaning is non-material. Therefore, the origin of the genetic code, and genetic information, are non-material.
4. Instructional assembly information to make devices for specific purposes comes always from a mind. Therefore, genetic information comes from a mind.

https://reasonandscience.catsboard.com

Re: The algorithmic origins of life - Mon Apr 18, 2022 10:20 am

Otangelo (Admin)

Timothy R. Stout Information-Driven Machines and Predefined Specifications: Implications for the Appearance of Organic Cellular Life April 8, 2019

One purpose of scientific investigation is to determine the scope and limits of physical processes. This is particularly true for abiogenesis because this field of study is dedicated to reconstructing a possible explanation for the origin of life on the basis of processes we see today at work in physics, chemistry, geology, and biology. A proper understanding of the scope and limitations of available processes is essential for legitimate attempts at historical reconstruction. A living cell may be viewed as an information-driven machine. A body of information is stored in a genome within the cell. Cellular “hardware” then reads, decodes, and uses the information. The information drives the operation in a manner analogous to how software in a computer drives computer hardware. In both cases, proper information needs to be available for use by functioning hardware which in turn is controlled by it. The gradual step-by-step developmental processes characteristic of evolution are not compatible with the first appearance of a computer. There is a minimum amount of functioning information required for computer operation. There is a minimum amount of functioning hardware required for computer operation. The information and hardware must interact with each other in a very intricate, intertwined manner. The minimum amounts required for each are staggeringly complex. In industry, a computer needs to be designed before it is fabricated. The probability is virtually zero for an unguided, random combination of logic gates to form a functioning computer, complete with internal memory, memory address logic, data registers, a central processing unit, data input and output components, control signal inputs and outputs, and connections between internal components. Beyond this, there are no known means for random combinations of logic to generate a body of information tailored to work with a specific form of computer hardware. There are no known means for such information to be stored for use by the computer and to be accessible by it. Computers are the product of deliberate intelligent action, not random processes. Since computers and living cells are both information-driven machines, this suggests the possibility that the difficulties facing initial computer fabrication could also apply to initial cell fabrication. If this suggestion proves valid, it poses serious issues concerning the adequacy of natural processes to account for the information-driven physical life we see around us. There is another aspect of this problem that has particular significance. In industry, both computers and processor-driven applications ranging from microwave ovens to self-driving automobiles start with a predefined system specification.

Typically, this will define an overall task for the machine to accomplish. Some tasks may be done in hardware or software. Typically, the software is cheaper and more readily adapts to a wide range of possible variations in operation. However, hardware is faster and requires minimal input to trigger its operation. The specification determines whether a particular task is to be done in hardware or software. It also determines how the software and the hardware interact with each other to accomplish a given task. A major objective of the system specification is to define a software specification describing what the software needs to do and a hardware specification defining what the hardware needs to do. In industry, separate hardware and software design engineering teams then design a product meeting their specified goals. In an ideal world, the system specification will be so complete and accurate and the proficiency of the software and hardware engineers in implementing their specifications will likewise be so complete and accurate that the system will work the first time the power is turned on and the two are brought together. In real life, this is not typical. 

Concluding Analyses 
If a living cell is more complex than a physical computer, and if debugging a computer design is typically an extremely difficult task, this suggests that a living cell must have its origin in a being so intelligent that it can anticipate all of the behaviors of the various arrangements of building-block amino acids and nucleotides. The first cell must appear in working form without needing debugging. This is particularly the case since special test equipment for identifying design problems would not be available in a prebiotic scenario. Although mutation and natural selection can have use in adapting an already living cell to changing environmental conditions, they appear inadequate to meet the requirements of initial cellular appearance. Slight modification of an existing, already working design is trivial compared to the difficulties of implementing an initial design. During my experience as an industrial design engineer, I was active on many design projects that were canceled for various reasons. I have worked on designs that were ready for a prototype to be built, but funds were not provided to make it. There is a difference between having a paper design, no matter how good it might be, and actually having resources to build the product. It is insufficient for an intelligent being to design a living cell capable of survival in the environment in which it will appear. Since the design specification appears outside of natural law, its physical implementation must also take place outside of natural law. Natural processes have no ability to implement non-material plans. The actual appearance on Earth of a living cell required an intelligent being to work outside of natural law in order to arrange molecules and atoms into dynamic relationships with each other in accordance with a predefined specification, one which was developed through intelligence and apart from natural processes.
There is a word we use to call an extremely intelligent being who can move molecules and atoms into predetermined, dynamic relationships at will—God. This paper has plausibly demonstrated how unsuppressed, unbiased scientific observation leads to a Being with the characteristics of God as the source of the physical life we see around us.

https://osf.io/qz7bn/

Biosemiotic information: Where does it come from?

https://reasonandscience.catsboard.com/t3061-the-algorithmic-origins-of-life#9374

So far, I have dealt mostly with the physical aspect of life and the origin of its basic building blocks. In this chapter, we will take a closer look at a fundamental and essential aspect of life: the information stored in biomolecules. Life is more than physics and chemistry. In a conversation with J. England, Paul Davies succinctly described life as chemistry + information 1. Witzany (2015) gave a similar description: "Life is physics and chemistry and communication." 2 It is even more than just information: life employs advanced languages, analogous to human languages.

Paul Davies (2013): Chemistry is about substances and how they react, whereas biology appeals to concepts such as information and organization. Informational narratives permeate biology. DNA is described as a genetic "database", containing "instructions" on how to build an organism. The genetic "code" has to be "transcribed" and "translated" before it can act. And so on. If we cast the problem of life's origin in computer jargon, attempts at chemical synthesis focus exclusively on the hardware – the chemical substrate of life – but ignore the software – the informational aspect. To explain how life began we need to understand how its unique management of information came about. In the 1940s, the mathematician John von Neumann compared life to a mechanical constructor, and set out the logical structure required for a self-reproducing automaton to replicate both its hardware and software. But Von Neumann's analysis remained a theoretical curiosity. Now a new perspective has emerged from the work of engineers, mathematicians and computer scientists, studying the way in which information flows through complex systems 31

Sungchul Ji (2006): Biological systems and processes cannot be accounted for solely on the basis of the laws of physics and chemistry. They require, in addition, the principles of semiotics, the science of symbols and signs, including linguistics. It was von Neumann who first recognized the interrelationship required for self-replication: symbol-matter complementarity. Linguistics provides a fundamental principle to account for the structure and function of the cell. Cell language has counterparts to 10 of the 13 design features of human language characterized by Hockett and Lyon. 22

V A Ratner (1993): The genetic language is a collection of rules and regularities of genetic information coding for genetic texts. It is defined by alphabet, grammar, collection of punctuation marks and regulatory sites, semantics. 16

Cells are information-driven factories
Specified complex information observed in biomolecules dictates and directs the making of irreducibly complex molecular machines, robotic molecular production lines, and chemical cell factories. In other words: cells have a codified description of themselves in digital form stored in genes, and have the machinery to transform that blueprint, through information transfer from genotype to phenotype, into an identical representation in analog 3D form, the physical 'reality' of that description. No law of physics or chemistry is known to specify that A should represent, or be assigned to mean, B. The cause of a machine's or factory's functionality has only ever been found in the mind of an engineer, and nowhere else. 

Paul Davies (1999): How did stupid atoms spontaneously write their own software … ? Nobody knows … … there is no known law of physics able to create information from nothing. 7

Timothy R. Stout (2019): A living cell may be viewed as an information-driven machine. 9

David L Abel (2005): An algorithm is a finite sequence of well-defined, computer-implementable instructions. Genetic algorithms instruct sophisticated biological organization. A linear, digital, cybernetic string of symbols representing syntactic, semantic, and pragmatic prescription.  Genes are not analogous to messages; genes are messages. Genes are literal programs. They are sent from a source by a transmitter through a channel.   Prescriptive sequences are called "instructions" and "programs." They are algorithmically complex sequences. They are cybernetic. 11

G. F. Joyce (1993): A blueprint cannot produce a car all by itself without a factory and workers to assemble the parts according to the instructions contained in the blueprint; in the same way, the blueprint contained in RNA cannot produce proteins by itself without the cooperation of other cellular components which follow the instructions contained in the RNA. 8

Claus Emmeche (1991): Biological systems start from the (digital) axioms and definitions and develop an analogic three-dimensional geometry: an instance of the morphology of life. 10

Is the claim that DNA stores information just a metaphor? 
There has been a long-standing dispute: Is DNA a code? Does DNA store information in a literal sense, or is it just a metaphor? Many have objected, claiming that DNA and its information content can be described as storing information, or as using a code, only in a metaphorical sense, not literally. Some have also claimed that DNA is just chemistry. That has raised a lot of confusion.

Sergi Cortiñas Rovira (2008): The most popular metaphor is the one of information (DNA = information). It is an old association of ideas that dates back to the origins of genetics, when research was carried out into the molecule (initially thought to be proteins) that should have contained the information to duplicate cells and organisms. In this type of popularisation model, DNA was identified with many everyday-use objects able to store information: a computer file of living beings, a database for each species, or a library with all the information about an individual. To Dawkins, the human DNA is a “user guide to build a living being” or “the architect’s designs to build a building”. 19

Massimo Pigliucci (2010): Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of ‘‘blueprints’’ for the construction of organisms. Modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case. In this article we illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public. We will see that analogies between living organisms and machines or programs (what we call ‘‘machine-information metaphors’’) are in fact highly misleading in several respects.

This is the claim. How does Pigliucci justify his accusation? He continues:

‘‘direct encoding systems’’, such as human-designed software, suffer from ‘‘brittleness’’, that is they break down if one or a few components stop working, as a result of the direct mapping of instructions to outcomes. If we think of living organisms as based on genetic encoding systems—like blueprints—we should also expect brittleness at the phenotypic level which, despite the claims of creationists and ID supporters that we have encountered above, is simply not observed.  Indeed, the fact that biological organisms cannot possibly develop through a type of direct encoding of information is demonstrated by calculations showing that the gap between direct genetic information (about 30,000 protein-coding genes in the human genome) and the information required to specify the spatial position and type of each cell in the body is of several orders of magnitude. Where does the difference come from? An answer that is being explored successfully is the idea that the information that makes development possible is localized and sensitive (as well as reactive) to the conditions of the immediate surroundings. In other words, there is no blueprint for the organism, but rather each cell deploys genetic information and adjusts its status to signals coming from the surrounding cellular environment, as well as from the environment external to the organism itself. 20

The answer to this claim is a resounding no. What defines organismal architecture and body plans (phenotypic complexity, anatomical novelty, as well as the capacity for adaptation) is preprogrammed, prescribed, instructional complex information encoded through (at least) 33 variations of genetic codes and 45 epigenetic codes, together with complex communication networks using signaling that acts at a structural level in an integrated, interlocked fashion. These systems are pre-programmed to respond to nutritional demands and environmental cues, and to control reproduction, homeostasis, metabolism, defense systems, and cell death. So the correct answer is that the phenomena described by Pigliucci, and the fact that genes alone do not explain phenotype, are not explained by denying that genes literally store information, but by recognizing that even more prescriptive, instructional information is in operation, also at the epigenetic level. Pigliucci's claims are misleading in the direction exactly opposite to the truth.    

Pigliucci argues that phenotypes are fault-tolerant (to use software-engineering terminology) because they are not brittle: giving up talk of blueprints and computer programs immediately purchases an understanding of why living organisms are not, in fact, irreducibly complex.

Agreed, the metaphor of a blueprint or computer program might be faulty, or not fully up to the task of describing what goes on in biological information systems. But that is not because these metaphors fail to describe the state of affairs literally; it is because they do not fully convey the sophistication of the superb information-engineering feat at work in living systems, next to which what we as intelligent human agents have come up with is pale, rudimentary, and primitive.

Richard Dawkins (2008): After the seventh minute of his speech, Dawkins admits: Can you think of any other class of molecule that has that property, of folding itself up into a uniquely characteristic enzyme, of which there is an enormous repertoire, capable of catalyzing an enormous repertoire of chemical reactions, and this is in itself absolutely determined by a digital code? 13

Hubert Yockey (2005): Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies. 15

DNA is a semantophoretic molecule (a biological macromolecule that stores genetic information). RNA and DNA are analogous to a computer hard disk. DNA monomers are joined into long strings (like the wagons of a train) made up of the four nucleobases: adenine, guanine, cytosine, and thymine (uracil replaces thymine in RNA). The aperiodic sequence of nucleotides carries the instructional information that directs the assembly and polymerization of amino acids in the ribosome, forming the polymer strands that make up proteins, the molecular workers of the cell.

No one who understands the subject argues that the information stored in DNA is called information merely as a “metaphor”, meaning that it only ‘looks like’ coded information and information processing but is not really so. This is blatantly false. The sequence of nucleotides stored in DNA, the lined-up trinucleotide codon "words", is exactly parallel to the way that alphabetic letters are arranged and work in this sentence. The words that I write here have symbolic meanings that you can look up in a dictionary, and I have strung them together in a narrative sequence to tell you a story about biological information. Each codon of the genetic code has a symbolic meaning that a cell (and you) can look up in the ‘dictionary’ of the genetic code table, and codons are strung together in sequences that have meaning for the workings of the cell. The cell exercises true information storage, retrieval, and processing, resulting in the functional proteins required to make a living organism, and no person educated in biology would deny it.

DNA and RNA are the hardware, and the specified complex sequence of nucleotides is the software. That information is conveyed using a genetic code, a set of rules in which meaning is assigned to trinucleotide codon words. The information in DNA is first transcribed to messenger RNA (mRNA), which acts like a postal worker, carrying a message from A to B, and then translated in the ribosome. A set of three nucleotides (a trinucleotide) forms a codon. Life uses 64 codon "words" that are assigned, or mapped, to 20 (in certain cases 22) amino acids. Origin-of-life researchers are confronted with the problem of explaining the origin of the complex, specified (or instructional assembly) information stored in DNA, and on top of that, the origin of the genetic code. These are two often conflated but very distinct problems, and much of the resulting confusion comes from the ambiguity of the term “genetic code”. Here is a quote from Francis Crick, who seems to have coined the term: Unfortunately the phrase “genetic code” is now used in two quite distinct ways. Laymen often use it to mean the entire genetic message in an organism. Molecular biologists usually mean the little dictionary that shows how to relate the four-letter language of the nucleic acids to the twenty-letter language of the proteins, just as the Morse code relates the language of dots and dashes to the twenty-six letters of the alphabet… The proper technical term for such a translation is, strictly speaking, not a code but a cipher. In the same way, the Morse code should really be called the Morse cipher. I did not know this at the time, which was fortunate because “genetic code” sounds a lot more intriguing than “genetic cipher”.

The specification from triplet codon to amino acid is called a cipher. It is like a translation from one language to another. Take, for example, Google Translate: we type the English word "language", and the program translates it to the equivalent German word, "Sprache". As in all translations, there must be someone, or something, that is bilingual; in this case, something that turns the coded instructions written in nucleic-acid language into a result written in amino-acid language. In cells, an adaptor molecule, tRNA, performs this task. One end of the tRNA mirrors the codons on the messenger RNA, and the other end is attached to the amino acid that is coded for. The correct amino acid is attached to the correct tRNA by an enzyme called aminoacyl-tRNA synthetase. This raises an even tougher problem concerning the coding assignments, i.e., which triplets code for which amino acids. How did these designations come about? Because nucleic-acid bases and amino acids do not recognize each other directly, but have to deal via the tRNA chemical intermediary, there is no obvious reason why particular triplets should go with particular amino acids. Other translations are conceivable. Coded instructions are a good idea, but the actual code seems to be pretty arbitrary. Perhaps it is simply a frozen accident, a random choice that just locked itself in, with no deeper significance? That is what Crick proposed. How could that not be called an ad-hoc assertion in the face of no other reasonable or likely explanation? Unless, of course, we permit the divine into the picture.

One problem deals with sequence specificity, the other with mapping, or assigning, the meaning of one biomolecule to another: the codon TTA (Thymine - Thymine - Adenine) is assigned to the amino acid leucine (Leu). That means that when an mRNA strand with the corresponding codon sequence enters the ribosome translation machine, specialized molecules (tRNAs, aminoacyl-tRNA synthetases, etc.) are recruited, and leucine is picked and added to the growing, elongating polymer strand being formed in the ribosome, which will, in the end, fold into a very specific, functional 3D configuration and be part of a protein bearing a precise function in the cell. As the instructions of a floor plan or a blueprint direct the making of a machine, so does the information (conveyed in the sequence of trinucleotide codons) direct the making of molecular machines. There is a precise 1:1 analogy here. But it goes further than that. Individual machines often operate in a joint venture with other machines, composing production lines, being part of a team that constructs products which are still just intermediate products, only later assembled together with other intermediate products to form a functional device of highly integrated complexity. Metadata is necessary: diverse levels of information that operate together. DNA contains coding and non-coding genes. Non-coding genes are employed in the gene regulatory network. They dictate the timeframe in which genes are expressed and orchestrate the spatiotemporal pattern by which individual cells, or in multicellular organisms the embryo, develop. This is the second level of DNA information.  
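The arbitrariness of the assignment can be sketched in a few lines of Python. The standard assignments below are real (a handful of the 64), while the "alternative" table is a hypothetical reassignment, included only to illustrate that a different cipher is chemically conceivable:

```python
# Minimal sketch: the codon -> amino acid assignment treated as a lookup table.
# STANDARD_CODE holds a handful of real assignments (of 64 total);
# ALTERNATIVE_CODE is a purely hypothetical reassignment of TTA.
STANDARD_CODE = {
    "TTA": "Leu", "TTG": "Leu",
    "GAA": "Glu", "GAG": "Glu",
    "ATG": "Met",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}
ALTERNATIVE_CODE = dict(STANDARD_CODE, TTA="Ser")  # hypothetical cipher

def translate(dna, table):
    """Read a DNA coding strand three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = table[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

gene = "ATGTTAGAATAA"
print(translate(gene, STANDARD_CODE))     # Met-Leu-Glu
print(translate(gene, ALTERNATIVE_CODE))  # Met-Ser-Glu
```

The same physical string yields a different protein under a different table; nothing in the chemistry of the string itself fixes which table is used.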

Paul Davies (2013): The significant property of biological information is not its complexity, great though that may be, but the way it is organised hierarchically. In all physical systems there is a flow of information from the bottom upwards, in the sense that the components of a system serve to determine how the system as a whole behaves.  31

So why has that been such a conundrum? Because many want to avoid the design inference at all costs. Hume wrote in the form of a dialogue between three characters: Philo, Cleanthes, and Demea. The design argument is spoken by Cleanthes in Part II of the Dialogues: The curious adapting of means to ends, throughout all nature, resembles exactly, though it much exceeds, the productions of human contrivance; of human design, thought, wisdom, and intelligence. Since therefore the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble; and that the Author of Nature is somewhat similar to the mind of man, though possessed of much larger faculties, proportioned to the grandeur of the work executed. By this argument a posteriori, and by this argument alone, do we prove at once the existence of a Deity, and his similarity to human mind and intelligence. 21

Does DNA store prescriptive, or descriptive information? 
One common misconception is that natural principles are just discovered and described by us; in other words, that life is supposedly just chemistry, and we merely describe the state of affairs going on there. Consider two cans of Coca-Cola: one regular, the other diet. Both bear information that we can describe; the information transmitted to us is that one can contains Coca-Cola and the other Diet Coke. But that does not occur naturally. A chemist invented the formula for making Coke and Diet Coke, and that depends not on descriptive but on PREscriptive information. The same occurs in nature. We discover that DNA contains a genetic code. But the rules upon which the genetic code operates are prescriptive. The rules are arbitrary. The genetic code is constrained to behave in a certain way. What genes store is information organized similarly to a library (the genome), which stores many books (genes), each containing either the instructions, the know-how, to make proteins, or, in the non-coding section, regulatory elements (promoters, enhancers, silencers, insulators, microRNAs (miRNAs), etc.) that work like a program, directing and controlling the operation of the cell, for example determining when a gene has to be expressed (when the information in a gene has to be transcribed and translated). This is information that prescribes how to assemble and operate the cell factory, so it is prescriptive information.

How exactly is information related to biology?
It is related in several ways; I will address two of them. DNA contains information in the sense that its nucleotide sequences, or arrangements of characters, instruct how to produce a specific amino acid chain that will fold into functional form. DNA base sequences convey instructions. They perform functions and produce specific effects. Thus, they possess not only statistical information but instructional assembly information.

Instructional assembly information
Paul Davies, Origin of Life (2003), page 18: Biological complexity is instructed complexity or, to use modern parlance, it is information-based complexity. Inside each and every one of us lies a message. It is inscribed in an ancient code, its beginnings lost in the mists of time. Decrypted, the message contains instructions on how to make a human being. The message isn't written in ink or type, but in atoms, strung together in an elaborately arranged sequence to form DNA, short for deoxyribonucleic acid. It is the most extraordinary molecule on Earth. Although DNA is a material structure, it is pregnant with meaning. The arrangement of the atoms along the helical strands of your DNA determines how you look and even, to a certain extent, how you feel and behave. DNA is nothing less than a blueprint, or more accurately an algorithm or instruction manual, for building a living, breathing, thinking human being. We share this magic molecule with almost all other life forms on Earth. From fungi to flies, from bacteria to bears, organisms are sculpted according to their respective DNA instructions. Each individual's DNA differs from others in their species (with the exception of identical twins), and differs even more from that of other species. But the essential structure – the chemical make-up, the double helix architecture – is universal. 18

Tan, Change; Stadler, Rob (2020): In DNA and RNA, no chemical or physical forces impose a preferred sequence or pattern upon the chain of nucleotides. In other words, each base can be followed or preceded by any other base without bias, just as the bits and bytes of information on a computer are free to represent any sequence without bias. This characteristic of DNA and RNA is critical—in fact, essential—for DNA and RNA to serve as unconstrained information carriers. However, this property also obscures any natural explanation for the information content of life—the molecules themselves provide no explanation for the highly specific sequence of nucleotides required to code for specific biologic functions. Only two materialistic explanations have been proposed for the information content of life: fortuitous random arrangements that happen to be functional or the combination of replication, random mutations, and natural selection to improve existing functionality over time. 27

George M. Church (2012): DNA is among the densest and most stable information media known. The development of new technologies in both DNA synthesis and sequencing makes DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing. 12
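Church's result can be illustrated with a toy encoder. The two-bits-per-base mapping below is an assumption chosen for brevity; Church's actual scheme encoded one bit per base (A/C for 0, G/T for 1) to avoid problematic sequences:

```python
# Illustrative only: a 2-bits-per-base mapping (A=00, C=01, G=10, T=11).
# Not Church's actual scheme, which used one bit per base.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(text):
    """Turn ASCII text into a DNA base string (4 bases per byte)."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna):
    """Recover the original text from the base string."""
    bits = "".join(TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii")

dna = encode("STOP")
print(dna)          # CCATCCCACATTCCAA
print(decode(dna))  # STOP
```

Four bytes become sixteen bases and back again without loss, which is the sense in which DNA can serve as a general-purpose digital medium.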

Peter R. Wills (2016): The biological significance of DNA lies in the role it plays as a carrier of information, especially across generations of reproducing organisms, and within cells as a coded repository of system specification and stability. 17

David L Abel (2005): Genes are not analogous to messages; genes are messages. 11

Leroy Hood: (2003): The value of having an entire genome sequence is that one can initiate the study of a biological system with a precisely definable digital core of information for that organism — a fully delineated genetic source code. Genes that encode the protein and RNA molecular machines of life, and the regulatory networks that specify how these genes are expressed in time, space and amplitude. 14

Information related to the genetic code:
Information is divided into five levels. These can be illustrated with a STOP sign.
The first level, statistics, tells us the STOP sign is one word and has four letters. It is related to the improbability of a sequence of symbols (or the uncertainty of obtaining it).
The second level, syntax, requires the information to fall within the rules of grammar such as correct spelling, word, and sentence usage. The word STOP is spelled correctly.
The third level, semantics, provides meaning and implications. The STOP sign means that when we walk or drive and approach the sign we are to stop moving, look for traffic and proceed when it is safe.
The fourth level, pragmatics, is the application of the coded message. It is not enough to simply recognize the word STOP and understand what it means; we must actually stop when we approach the sign.
The fifth level, apobetics, is the overall purpose of the message. The STOP signs are placed by our local government to provide safety and traffic control.

The code in DNA completely conforms to all five of these levels of information.
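The first of these levels, statistics, can be made concrete with a small calculation. The uniform 26-letter alphabet below is an illustrative assumption, not a claim about English letter frequencies:

```python
import math

# The statistical level measures only improbability, not meaning.
# Illustrative assumption: 26 equally probable letters.
def shannon_bits(word, alphabet_size=26):
    # Each letter then carries log2(alphabet_size) bits of statistical
    # information, regardless of whether the word means anything.
    return len(word) * math.log2(alphabet_size)

print(round(shannon_bits("STOP"), 2))            # 18.8
print(shannon_bits("STOP") == shannon_bits("QQQQ"))  # True
```

The gibberish string "QQQQ" scores exactly the same as "STOP", which is why the statistical level alone cannot capture syntax, semantics, pragmatics, or apobetics.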

Perry Marshall, Evolution 2.0: The alphabet (symbols), syntax (grammar), and semantics (meaning) of any communication system must be determined in advance before any communication can take place. Otherwise, you could never be certain that what the transmitter is saying is the same as what the receiver is hearing. It’s like when you visit a Russian website and your browser doesn’t have the language plug-in for Russian. The text just appears as a bunch of squares. You would never have any idea if the Russian words were spelled right. When a message’s meaning is not yet decided, it requires intentional action by conscious agents to reach a consensus. The simple process of creating a new word in English, like a blog, requires speakers who agree on the meaning of the other words in their sentences. Then they have to mutually agree to define the new word in a specific way. Once a word is agreed upon, it is added to the dictionary. The dictionary is a decode table for the English language. Even if noise might occasionally give you a real word by accident, it could never also tell you what that word means. Every word has to be defined by mutual agreement and used in the correct context in order to have meaning. 25

Okay, you probably wish you could see an example of how that works in the cell, right? Let's make an analogy. Suppose you have a recipe for spaghetti with a special tomato sauce, written in a Word document saved on your computer. You have a Japanese friend and communicate with him only through Google Translate. Now he wants to try out that recipe and asks you to send him a copy. So you write an email, attach the Word document, and send it to him. When he receives it, he will use Google Translate and get the recipe in Japanese, written in kanji, the logographic Japanese characters he understands. With the information at hand, he can make the spaghetti with that fine special tomato sauce exactly as described in the recipe. For that communication to happen, you use at your end 26 letters from the alphabet to write the recipe, and your friend has 2,136 kanji characters that permit him to understand the recipe in Japanese. Google Translate does the translation work.

While your recipe is written in a Word document saved on your computer, in the cell the recipe (the instructions, or master plan) for the construction of proteins, the life-essential molecular machines and veritable workhorses, is written in genes through DNA. While you use the 26 letters of the alphabet to write your recipe, the cell uses DNA, deoxyribonucleotides, and four monomer "letters". Kanji has 2,136 characters, the alphabet has 26, and computer codes, being binary, use 0 and 1. The language of DNA is digital, but not binary. Where binary encoding has 0 and 1 to work with (two symbols, hence 'binary'), DNA uses four different organic bases: adenine (A), guanine (G), cytosine (C), and thymine (T). The way DNA stores genetic information relies on codons, equivalent to words, each consisting of an array of three DNA nucleotides. These triplets form "words". While you used sentences to write the spaghetti recipe, the equivalent sentences in the cell are called genes, written through codon "words". With four possible nucleobases, three nucleotides can give 4^3 = 64 different possible "words" (trinucleotide sequences). In the standard genetic code, three of these 64 codons (UAA, UAG, and UGA) are stop codons. There has to be a mechanism to extract the information in the genome and send it to the ribosome, the factory that makes proteins, which is at another place in the cell, free-floating in the cytoplasm. The message contained in the genome is transcribed by a very complex molecular machine called RNA polymerase. It makes a transcript, a copy of the message in the genome, and that transcript is sent to the ribosome. That transcript is called messenger RNA, or typically mRNA. In communications and information processing, a code is a system of rules to convert information, such as assigning the meaning of a letter or word to another form (another word, letter, etc.). In translation, 64 genetic codons are assigned to 20 amino acids. 
The genetic code refers to the assignment of the codons to the amino acids, thus being the cornerstone template underlying the translation process. Assignment means designating, ascribing, corresponding, correlating. The ribosome does basically what Google Translate does. But while Google Translate just gives the recipe in another language, and our Japanese friend still has to make the spaghetti, the ribosome actually produces the end product, proteins, in one step. Imagine the brainpower involved in the entire process, from inventing the recipe to the spaghetti arriving on the table of your Japanese friend. What is involved?

1. Your imagination of the recipe
2. Inventing an alphabet, a language
3. Inventing the medium to write down the message
4. Inventing the medium to store the message
5. Storing the message in the medium
6. Inventing the medium ( the machine) to extract the message
7. Inventing the medium to send the message
8. Inventing the second language (Japanese)
9. Inventing the translation code/cipher from your language to Japanese
10. Making the machine that performs the translation
11. Programming the machine to know both languages, to make the translation
12. Performing the translation
13. Making the spaghetti on the other end using the recipe in Japanese  
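The cellular side of the steps above (transcribing the stored message, then translating it at the ribosome) can be sketched in Python. This is a toy model: the codon table holds only four of the 64 real codons.

```python
# Toy model of the two-step flow: RNA polymerase transcribes the DNA message
# into mRNA (T -> U), then the ribosome translates mRNA codons into a chain
# of amino acids. Only four of the 64 codons are included.
CODON_TABLE = {"AUG": "Met", "UUA": "Leu", "GAA": "Glu", "UAA": "STOP"}

def transcribe(dna_coding_strand):
    # Transcription: the mRNA copy uses uracil (U) where DNA has thymine (T).
    return dna_coding_strand.replace("T", "U")

def ribosome(mrna):
    # Translation: read codon "words" until a stop codon is reached.
    chain = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        chain.append(amino_acid)
    return "-".join(chain)

message = transcribe("ATGTTAGAATAA")  # the "email" sent to the ribosome
print(message)            # AUGUUAGAAUAA
print(ribosome(message))  # Met-Leu-Glu
```

Note that, unlike this sketch, the real system must also build and maintain the machines (polymerase, tRNAs, synthetases, ribosome) that execute each step.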

1. Creating a recipe to make a cake is always a mental process. Creating a blueprint to make a machine is always a mental process.
2. To suggest that a physical process can create instructional assembly information, a recipe or a blueprint, is like suggesting that throwing ink on paper will create a blueprint. It is never going to happen!  
3. Physics and chemistry alone do not possess the tools to create a concept, or functional complex machines made of interlocked parts for specific purposes.
4. The only cause capable of creating conceptual semiotic information is a conscious intelligent mind.
5. DNA stores codified information to make proteins, and cells, which are chemical factories in a literal sense.

The language of Cells
Cells store a genetic language. Marshall Nirenberg, American biochemist and geneticist, received the Nobel Prize in 1968 for "breaking the genetic code" and describing how it operates in protein synthesis. 23 He wrote in 1967: The genetic language now is known, and it seems clear that most, if not all, forms of life on this planet use the same language, with minor variations. 24

Sedeer el-Showk (2014): The genetic code combines redundancy and utility in a simple, elegant language. Four letters make up the genetic alphabet: A, T, G, and C. In one sense, a gene is nothing more than a sequence of those letters, like TTGAAGCATA…, which has a certain biological meaning or function. The beauty of the system emerges from the fact that there are 64 possible words but they only need 21 different meanings — 20 amino acids plus a stop sign. That creates the first layer of redundancy, since codons can be synonyms. Just like 'cup' and 'glass' mean (essentially) the same thing, two different codons can refer to the same amino acid; for example, GAG and GAA both mean 'glutamic acid'. Synonymous codons offer some protection against mutation. If the last letter of a GAA happened to mutate into a G in a gene, the codon would still encode glutamic acid at that point, since GAA and GAG are synonyms.
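The redundancy described above can be sketched in a few lines of Python. The excerpt below uses standard RNA codons (U in place of T) and is deliberately incomplete; the table entries are standard biology, while the function names are merely illustrative:

```python
# A few entries from the standard genetic code, illustrating redundancy:
# several codons ("words") map to the same amino acid ("meaning").
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",                  # synonyms for glutamic acid
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UGG": "Trp",                                # tryptophan has a single codon
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP", # stop signals
}

def translate(codon):
    """Return the 'meaning' (amino acid or stop signal) of a codon."""
    return CODON_TABLE[codon]

# A third-position change GAA -> GAG is a silent mutation:
# the encoded amino acid is unchanged.
assert translate("GAA") == translate("GAG") == "Glu"
```

This mirrors the point in the quote: 64 possible codons, 21 meanings, so some codons must be synonyms, and some single-letter mutations leave the protein untouched.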

Information is not physical
Robert Alicki (2014): Information is a disembodied abstract entity independent of its physical carrier. Information is always tied to a physical representation: it is represented by engraving on a stone tablet, a spin, a charge, a hole in a punched card, a mark on paper, or some other equivalent. Information is neither classical nor quantum; it is independent of the properties of the physical systems used for its processing. 3

Paul C. W. Davies (2013): The key distinction between the origin of life and other ‘emergent’ transitions is the onset of distributed information control, enabling context-dependent causation, where an abstract and non-physical systemic entity (algorithmic information) effectively becomes a causal agent capable of manipulating its material substrate. Biological information is functional due to the right sequence. A variety of terms have been employed for measuring functional biological information: complex and specified information (CSI), Functional Sequence Complexity (FSC), and Instructional Complex Information. I like the term instructional because it defines accurately what is being done, namely instructing the right sequence of amino acids to make proteins, and also the sequence of messenger RNA, which is used for gene regulation and a variety of as yet unexplored functions. Another term is prescriptive information (PI). It also describes accurately what genes do: they prescribe how proteins have to be assembled. But it smuggles in a meaning which is highly disputed between proponents of intelligent design and of unguided evolution: prescribing implies that an intelligent agency preordained the nucleotide sequence in order for it to be functional. 4

David L Abel (2012): Biological information frequently manifests its “meaning” through instruction or actual production of formal bio-function. Such information is called Prescriptive Information (PI). PI programs organize and execute a prescribed set of choices. Closer examination of this term in cellular systems has led to a dichotomy in its definition suggesting both prescribed data and prescribed algorithms are constituents of PI. In addition to algorithm execution, there needs to be an assembly algorithm. Any manufacturing engineer knows that nothing (in production) is built without plans that precisely define orders of operations to properly and economically assemble components to build a machine or product. There must be, by necessity, an order of operations to construct biological machines. This is because biological machines are neither chaotic nor random, but are functionally coherent assemblies of proteins/RNA elements. An algorithm is a set of rules or procedures that precisely defines a finite sequence of operations. These instructions prescribe a computation or action that, when executed, will proceed through a finite number of well-defined states leading to specific outcomes. One of the greatest enigmas of molecular biology is how codonic linear digital programming is not only able to anticipate what the Gibbs free energy folding will be, but actually prescribes that eventual folding through its sequencing of amino acids. Much the same as a human engineer, the nonphysical, formal PI instantiated into linear-digital codon prescription makes use of physical realities like thermodynamics to produce the needed globular molecular machines. 5

An algorithm is a finite sequence of well-defined, computer-implementable instructions resulting in precise intended functions. A prescriptive algorithm in a biological context can be described as performing control operations using rules, axioms, and coherent instructions. These instructions are performed using a linear, digital, cybernetic string of symbols representing syntactic, semantic, and pragmatic prescriptive information. Cells host algorithmic programs for cell division and cell death, enzymes pre-programmed to perform DNA splicing, and programs for dynamic changes of gene expression in response to a changing environment. Cells use pre-programmed adaptive responses to genomic stress, pre-programmed genes regulating fetal development, temporal programs for genome replication, pre-programmed animal genes dictating behaviors including reflexes and fixed action patterns, pre-programmed biological timetables for aging, etc. A programming algorithm is like a recipe that describes the exact steps needed to solve a problem or reach a goal. We've all seen food recipes: they list the ingredients needed and a set of steps for how to make a meal. An algorithm is just like that. A programming algorithm describes how to do something, and it will be done exactly that way every time.
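As a minimal illustration of "a finite sequence of well-defined instructions that proceeds through well-defined states to a specific outcome", here is Euclid's algorithm in Python. It is chosen purely as a textbook example, not as anything biological:

```python
def gcd(a, b):
    """Euclid's algorithm: a finite sequence of precisely defined
    operations that always reaches a specific outcome."""
    while b != 0:
        a, b = b, a % b  # each iteration is one well-defined state transition
    return a

# The same inputs always pass through the same states to the same result.
assert gcd(48, 18) == 6
```

The point the paragraph makes is exactly this determinism: given the recipe and the ingredients, the outcome is fixed in advance by the instructions.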

Albert Voie (2006): Life expresses both function and sign systems. Due to the abstract character of function and sign systems, life is not a subsystem of natural laws. This suggests that our reason is limited in respect to solving the problem of the origin of life and that we are left accepting life as an axiom. Memory-stored controls transform symbols into physical states. Von Neumann made no suggestion as to how these symbolic and material functions in life could have originated. He felt, "That they should occur in the world at all is a miracle of the first magnitude."

[Figure: Von Neumann's model of a self-replicating automaton]

No natural law restricts the possibility-space of a written or spoken text. Languages are indeed abstract and non-physical, and it is easy to see that they are subsystems of the mind, belonging to another category of phenomena than subsystems of the laws of nature, such as molecules. Another similar set of subsystems is functional objects. In the engineering sense, a function is a goal-oriented result coming from an intelligent entity. The origin of a machine cannot be explained solely as a result of physical or chemical events. Machines can go wrong and break down, something that does not happen to laws of physics and chemistry. In fact, a machine can be smashed and the laws of physics and chemistry will go on operating unfailingly in the parts remaining after the machine ceases to exist. Engineering principles create the structure of the machine, which harnesses energy based on the laws of physics for the purposes the machine is designed to serve. Physics cannot reveal the practical principles of design or coordination which are the structure of the machine. The cause of a machine’s functionality is found in the mind of the engineer and nowhere else.

In life, there is interdependency between biological sign systems, data, and the construction, machine assembly, and operation, that is directed by it. The abstract sign-based genetic language stores the abstract information necessary to build functional biomolecules. 

Von Neumann believed that life was ultimately based on logic. Von Neumann’s abstract machine consisted of two central elements: a Universal Computer and a Universal Constructor. The Universal Constructor builds another Universal Constructor based on the directions contained in the Universal Computer. When finished, the Universal Constructor copies the Universal Computer and hands the copy to its descendant. As a model of a self-replicating system, it has its counterpart in life where the Universal Computer is represented by the instructions contained in the genes, while the Universal Constructor is represented by the cell and its machinery. 6

On the one side there is the computer storing the data; on the other, the construction machines. The construction machines build/replicate another identical construction machine, based on the data stored in the computer. Once finished, the construction machines copy the computer and the data and hand them down to the descendant. As a model of a self-replicating system, it has its counterpart in life, where the computer is represented by the instructions contained in the genes, while the construction machines are represented by the cell and its machinery that transcribes, translates, and replicates the information stored in genes. RNA polymerase transcribes, and the ribosome translates, the information stored in DNA, producing a faithful reproduction of the cell and all the machinery inside it. Once done, the genome is replicated and handed over to the descendant cell: the mother cell has produced a daughter cell. The entire process of self-replication is data-driven and based on a sequence of events that can only be instantiated by understanding and knowing the right sequence of events. There is an interdependence of data and function: the function is performed by machines that are constructed based on the data instructions. No cause other than a mind has ever been shown capable of instantiating such a sequence of events and state of affairs.
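The computer/constructor scheme described above can be caricatured in a short Python sketch. The function names and the "instructions" on the tape are illustrative inventions, not taken from Von Neumann's actual automaton; the sketch only shows the logical shape of the cycle (build from the data, then copy the data to the offspring):

```python
def construct(tape):
    """Interpret the instruction tape and 'build' the described machine.
    The offspring starts with no tape of its own."""
    return {"machine": [step for step in tape], "tape": None}

def replicate(parent_tape):
    offspring = construct(parent_tape)     # 1. build from the instructions
    offspring["tape"] = list(parent_tape)  # 2. copy the instructions verbatim
    return offspring

# Hypothetical instruction tape, standing in for the genome:
tape = ["build arm", "build controller", "attach copier"]
child = replicate(tape)
assert child["tape"] == tape           # the descendant carries the same data
grandchild = replicate(child["tape"])  # ...and can replicate in turn
assert grandchild["tape"] == tape
```

The two-step structure is the essential point: the data directs the construction, and the data itself is copied separately, which is the analogy drawn with transcription/translation on one hand and genome replication on the other.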

Timothy R. Stout (2019): A body of information is stored in a genome within the cell. Cellular “hardware” then reads, decodes, and uses the information. The information drives the operation in a manner analogous to how software in a computer drives computer hardware. In both cases, proper information needs to be available for use by functioning hardware which in turn is controlled by it. The gradual step-by-step developmental processes characteristic of evolution are not compatible with the first appearance of a computer. There is a minimum amount of functioning information required for computer operation. The information and hardware must interact with each other in a very intricate, intertwined manner. The minimum amounts required for each are staggeringly complex. In industry, a computer needs to be designed before it is fabricated. The probability is virtually zero for an unguided, random combination of logic gates to form a functioning computer, complete with internal memory, memory address logic, data registers, a central processing unit, data input and output components, control signal inputs and outputs, and connections between internal components. Beyond this, there are no known means for random combinations of logic to generate a body of information tailored to work with a specific form of computer hardware. There are no known means for such information to be stored for use by the computer and to be accessible by it. Computers are the product of deliberate intelligent action, not random processes. Since computers and living cells are both information-driven machines, this suggests the possibility that the difficulties facing initial computer fabrication could also apply to initial cell fabrication. If this suggestion proves valid, it poses serious issues concerning the adequacy of natural processes to account for the information-driven physical life we see around us.
There is another aspect of this problem that has particular significance. In industry, both computers and processor-driven applications ranging from microwave ovens to self-driving automobiles start with a predefined system specification.

Typically, this will define an overall task for the machine to accomplish. Some tasks may be done in hardware or software. Typically, the software is cheaper and more readily adapts to a wide range of possible variations in operation. However, hardware is faster and requires minimal input to trigger its operation. The specification determines whether a particular task is to be done in hardware or software. It also determines how the software and the hardware interact with each other to accomplish a given task. A major objective of the system specification is to define a software specification describing what the software needs to do and a hardware specification defining what the hardware needs to do. In industry, separate hardware and software design engineering teams then design a product meeting their specified goals. In an ideal world, the system specification will be so complete and accurate and the proficiency of the software and hardware engineers in implementing their specifications will likewise be so complete and accurate that the system will work the first time the power is turned on and the two are brought together. In real life, this is not typical.

If a living cell is more complex than a physical computer, and if debugging a computer design is typically an extremely difficult task, this suggests that a living cell must have its origin in a being so intelligent that it can anticipate all of the behaviors of the various arrangements of building-block amino acids and nucleotides. The first cell must appear in working form without needing debugging. This is particularly the case since special test equipment for identifying design problems would not be available in a prebiotic scenario. Although mutation and natural selection can be of use in adapting an already living cell to changing environmental conditions, they appear inadequate to meet the requirements of initial cellular appearance. Slight modification of an existing, already working design is trivial compared to the difficulties of implementing an initial design. During my experience as an industrial design engineer, I was active on many design projects that were canceled for various reasons. I have worked on designs that were ready for a prototype to be built, but funds were not provided to make it. There is a difference between having a paper design, no matter how good it might be, and actually having resources to build the product. It is insufficient for an intelligent being to design a living cell capable of survival in the environment in which it will appear. Since the design specification appears outside of natural law, its physical implementation must also take place outside of natural law. Natural processes have no ability to implement non-material plans. The actual appearance on Earth of a living cell required an intelligent being to work outside of natural law in order to arrange molecules and atoms into dynamic relationships with each other in accordance with a predefined specification, one which was developed through intelligence and apart from natural processes.
There is a word we use to call an extremely intelligent being who can move molecules and atoms into predetermined, dynamic relationships at will—God. This paper has plausibly demonstrated how unsuppressed, unbiased scientific observation leads to a Being with the characteristics of God as the source of the physical life we see around us. 9

Instructional assembly information has always a mental origin
David L Abel: (2009): Even if multiple physical cosmoses existed, it is a logically sound deduction that linear-digital genetic instructions using a representational material symbol system (MSS) cannot be programmed by the chance and/or fixed laws of physicodynamics. This fact is not only true of the physical universe but would be just as true in any imagined physical multiverse. Physicality cannot generate non-physical Prescriptive Information (PI). Physicodynamics cannot practice formalisms (The Cybernetic Cut). Constraints cannot exercise formal control unless those constraints are themselves chosen to achieve formal function. 28

Edward J. Steele (2018): The transformation of an ensemble of appropriately chosen biological monomers (e.g. amino acids, nucleotides) into a primitive living cell capable of further evolution appears to require overcoming an information hurdle of super-astronomical proportions, an event that could not have happened within the time frame of the Earth except, we believe, as a miracle. All laboratory experiments attempting to simulate such an event have so far led to dismal failure. It would thus seem reasonable to go to the biggest available “venue” in relation to space and time. A cosmological origin of life thus appears plausible and overwhelmingly likely to us.

Katarzyna Adamala (2014):  There is a conceptual problem, namely the emergence of specific sequences among a vast array of possible ones, the huge “sequence space”, leading to the question “why these macromolecules, and not the others?” One of the main open questions in the field of the origin of life is the biogenesis of proteins and nucleic acids as ordered sequences of monomeric residues, possibly in many identical copies. The first important consideration is that functional proteins and nucleic acids are chemically speaking copolymers, i.e., polymers formed by several different monomeric units, ordered in a very specific way.

Attempts to obtain copolymers, for instance by a random polymerization of monomer mixtures, yield a difficult-to-characterize mixture of all different products. To the best of our knowledge, there is no clear approach to the question of the prebiotic synthesis of macromolecules with an ordered sequence of residues. The copolymeric nature of proteins and nucleic acids challenges our understanding of the origin of life also from a theoretical viewpoint. The number of all possible combinations of the building blocks (20 amino acids, 4 nucleotides) forming copolymers of even moderate length is ‘astronomically’ high, and the total number of possible combinations is often referred to as the “sequence space”. Simple numerical considerations suggest that the exhaustive exploration of the sequence spaces, both for proteins and nucleic acids, was physically not possible in the early Universe, both for lack of time and for limited chemical material. There are no methods described in the literature to efficiently generate long polypeptides, and we also lack a theory for explaining the origin of some macromolecular sequences instead of others.

The theoretical starting point is the fact that the number of natural proteins on Earth, although apparently large, is only a tiny fraction of all the possible ones. Indeed, there are thought to be roughly 10^13 proteins of all sizes in extant organisms. This number, however, is negligible when compared to the number of all theoretically possible different proteins. The discrepancy between the actual collection of proteins and all possible ones becomes clear if one considers that the number of all possible 50-residue peptides that can be synthesized with the standard 20 amino acids is 20^50, namely 10^65. Moreover, the number of theoretically possible proteins increases with length, so that the related sequence space is beyond contemplation; in fact, if we take into account the living organisms, where the average length of proteins is much greater, the number of possible different proteins becomes even bigger. The difference between the number of possible proteins (i.e. the sequence space) and the number of those actually present in living organisms is comparable, in a figurative way, to the difference that exists between a drop of water and an entire ocean. This means that there is an astronomically large number of proteins that have never been subjected to the long pathway of natural evolution on Earth: the “Never Born Proteins” (NBPs). 30
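The figures quoted above (20 amino acids, 50-residue peptides, roughly 10^13 extant proteins) can be checked directly. This is only arithmetic on the numbers given in the passage, not an independent estimate:

```python
import math

# Figures as quoted in the passage above:
sequence_space = 20 ** 50    # all possible 50-residue peptides
extant_proteins = 10 ** 13   # rough count cited for extant organisms

# 20^50 is indeed about 10^65, since 50 * log10(20) ~= 65.05
assert 65.0 < math.log10(sequence_space) < 65.1

# On these figures, the realized fraction of the sequence space
# is on the order of 10^-52 -- the "drop of water vs. ocean" comparison.
fraction = extant_proteins / sequence_space
```

Note that the passage's comparison is between a count of distinct molecules and a count of possible sequences, so the fraction is illustrative of scale only.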

Sir Fred Hoyle (1981): The big problem in biology, as I see it, is to understand the origin of the information carried by the explicit structures of biomolecules. The issue isn't so much the rather crude fact that a protein consists of a chain of amino acids linked together in a certain way, but that the explicit ordering of the amino acids endows the chain with remarkable properties, which other orderings wouldn't give. If amino acids were linked at random, there would be a vast number of arrangements that would be useless in serving the purposes of a living cell. When you consider that a typical enzyme has a chain of perhaps 200 links and that there are 20 possibilities for each link, it's easy to see that the number of useless arrangements is enormous, more than the number of atoms in all the galaxies visible in the largest telescopes. This is for one enzyme, and there are upwards of 2000 of them, mainly serving very different purposes. So how did the situation get to where we find it to be? This is, as I see it, the biological problem - the information problem.

It's easy to frame a deceitful answer to it. Start with much simpler, much smaller enzymes, which are sufficiently elementary to be discoverable by chance; then let evolution in some chemical environment cause the simple enzymes to change gradually into the complex ones we have today. The deceit here comes from omitting to explain what is in the environment that causes such an evolution. The improbability of finding the appropriate orderings of amino acids is simply being concealed in the behavior of the environment if one uses that style of argument. I was constantly plagued by the thought that the number of ways in which even a single enzyme could be wrongly constructed was greater than the number of all the atoms in the universe.  So try as I would, I couldn't convince myself that even the whole universe would be sufficient to find life by random processes - by what are called the blind forces of nature. The thought occurred to me one day that:

The human chemical industry doesn't chance on its products by throwing chemicals at random into a stewpot. To suggest to the research department at DuPont that it should proceed in such a fashion would be thought ridiculous.
Wasn't it even more ridiculous to suppose that the vastly more complicated systems of biology had been obtained by throwing chemicals at random into a wildly chaotic astronomical stewpot? By far the simplest way to arrive at the correct sequences of amino acids in the enzymes would be by thought, not by random processes. And given a knowledge of the appropriate ordering of amino acids, it would need only a slightly superhuman chemist to construct the enzymes with 100 percent accuracy. It would need a somewhat more superhuman scientist, again given the appropriate instructions, to assemble it himself, but not a level of scale outside our comprehension. Rather than accept the fantastically small probability of life having arisen through the blind forces of nature, it seemed better to suppose that the origin of life was a deliberate intellectual act. By "better" I mean less likely to be wrong. Suppose a spaceship approaches the earth, but not close enough for the spaceship's imaginary inhabitants to distinguish individual terrestrial animals. They do see growing crops, roads, bridges, however, and a debate ensues. Are these chance formations or are they the products of intelligence? Taking the view, palatable to most ordinary folk but exceedingly unpalatable to scientists, that there is an enormous intelligence abroad in the universe, it becomes necessary to write blind forces out of astronomy.

Now imagine yourself as a superintellect working through possibilities in polymer chemistry. Would you not be astonished that polymers based on the carbon atom turned out in your calculations to have the remarkable properties of the enzymes and other biomolecules? Would you not be bowled over in surprise to find that a living cell was a feasible construct? Would you not say to yourself, in whatever language super calculating intellects use: Some super calculating intellect must have designed the properties of the carbon atom, otherwise the chance of my finding such an atom through the blind forces of nature would be utterly minuscule. Of course, you would, and if you were a sensible superintellect you would conclude that the carbon atom is a fix.



A common-sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology and that there are no blind forces worth speaking about in nature. The numbers one calculates from the facts seem to me so overwhelming as to put this conclusion almost beyond question. 31a

Robert T. Pennock (2001): Once considered a crude method of problem-solving, trial-and-error has so risen in the estimation of scientists that it is now regarded as the ultimate source of wisdom and creativity in nature. The probabilistic algorithms of computer science all depend on trial-and-error. So too, the Darwinian mechanism of mutation and natural selection is a trial-and-error combination in which mutation supplies the error and selection the trial. An error is committed after which a trial is made. But at no point is complex specified information generated. All historical, observational, testable and repeatable examples PROVE information and operational functionality come from intelligent sources. "The inadequacy of proposed materialistic causes forms only a part of the basis of the argument for intelligent design. We know from broad and repeated experience that intelligent agents can and do produce information rich systems: we have positive experience based on knowledge of a cause that is sufficient to generate new specified information, namely, intelligence. We are not ignorant of how information arises. According to information theorist Henry Quastler...'the creation of new information is habitually associated with conscious activity' "....I described indirect evidence which is a recognized form of proof for a causal agent...if you have no theory which explains the formation of complex specified information or functional operational activity without an intelligent origin then you cannot dismiss a known cause for such phenomena. Seen or unseen such phenomena require a sufficient cause. 33

Paul Davies (2013): If life is more than just complex chemistry, its unique informational management properties may be the crucial indicator of this distinction, which raises the all-important question of how the informational properties characteristic of living systems arose in the first place. This key question of origin may be satisfactorily answered only by first having a clear notion of what is meant by “biological information”. While standard information-theoretic measures, such as Shannon information, have proved useful, biological information has an additional quality which may roughly be called “functionality” – or “contextuality” – that sets it apart from a collection of mere bits as characterized by Shannon Information content. Biological information shares some common ground with the philosophical notion of semantic information (which is more commonly – and rigorously – applied in the arena of “high-level” phenomena such as language, perception, and cognition). We, therefore, identify the transition from non-life to life with a fundamental shift in the causal structure of the system, specifically, a transition to a state in which algorithmic information gains direct, context-dependent, causal efficacy over matter. 4

Perry Marshall: Evolution 2.0 (2015) page 170: Information possesses another very interesting property that distinguishes it from matter and energy. That property is freedom of choice. In communication, your ability to choose whether “1 = on and 0 = off” or “1 = off and 0 = on” is the most elementary example of the human capacity to choose. Mechanical encoders and decoders can’t make choices, but their very existence shows that the choice was made. By definition, none of these decisions can be derived from the laws of physics because they are freely chosen. In the history of the computer industry, somewhere along the way, somebody got to decide that 1 = “on” and 0 = “off.” Then everyone else decided to adopt that standard. Physics and chemistry alone want us to be fat, lazy, and unproductive. Gravity pulls us down. Entropy makes us old and tired. Clocks wind down. Cars rust. Signals get static. LPs scratch. Desks become cluttered. Bedrooms get strewn with dirty clothes. Choice rises up against this. Evolution 2.0, far from mindless, is literally mind over matter. The unfit adapt. Order and structure increase. Cells exert control over their environments. That means materialism cannot explain the origin of information, the nature of information, or the ability to create a code or language from scratch. It can’t explain thought, feeling, mind, will, or communication. 25

Paul Davies The Fifth Miracle: The Search for the Origin and Meaning of Life (1998) page 82:   The theory of self-organization as yet gives no clue how the transition is to be made between spontaneous, or self-induced, organization—which in even the most elaborate nonbiological examples still involves relatively simple structures—and the highly complex, information-based, genetic organization of living things. An explanation of this genetic takeover must account for more than merely the origin of nucleic acids and their potent entanglement with proteins at some later stage. It is not enough to know how these giant molecules arose or started to interact. We also need to know how the system’s software came into existence. Indeed, we need to know how the very concept of software control was discovered. But how did all these optimized and stringent codes of biological communication come about? Here we are up against one of the great enigmas of biology, which thus far has defied all efforts of penetration. Darwinian evolutionary theory offers no help here. Its principle of natural selection or survival of the fittest, as even staunch evolutionists do concede, entails a tautology: it identifies the fittest with the survivors, and so boils down to no more than a "survival of the survivors." Outside biology, the question how codes get to be hardly comes up. It is usually uncalled for; the origins are obvious in the case of man-made codes. Take our telegraph code, for instance, or the codes bankers or military men use. Those are concocted more or less arbitrarily, arising almost fully fledged from the cryptographer's mind. But no one but the most die-hard creationist would believe that the codes of biological communication—the natural codes—have such one-fell-swoop beginnings. 
The question of the origins of natural codes has haunted two generations of information theorists interested in human languages and, regarding biomolecular communication, the question has loomed large ever since the genetic code was found. Indeed, one cannot help being drawn to something that speaks so eloquently for the unity of life and the early origin of its master code, like the fact that all living beings use the same four-letter alphabet for their genetic information, the same twenty-letter alphabet for their proteins, and the same code to translate from one language to the other. Each of the 64 possible triplets of the 4 RNA bases here is a "codon" that assigns one of the 20 amino acids in the polypeptide chain. Such a code and the hardware that executes it are surely too complex to have started with one swoop.  34 

Paul Davies (2013): The way life manages information involves a logical structure that differs fundamentally from mere complex chemistry. Therefore chemistry alone will not explain life's origin, any more than a study of silicon, copper, and plastic will explain how a computer can execute a program. 31

Davies does not like the idea of God. Therefore, he attempts to substitute God's efficacious mind with physics: "Our work suggests that the answer will come from taking information seriously as a physical agency, with its own dynamics and causal relationships existing alongside those of the matter that embodies it – and that life's origin can ultimately be explained by importing the language and concepts of biology into physics and chemistry, rather than the other way round."

But he confesses in The Fifth Miracle, on page 258:  “We are still left with the mystery of where biological information comes from.… If the normal laws of physics can’t inject information, and if we are ruling out miracles, then how can life be predetermined and inevitable rather than a freak accident? How is it possible to generate random complexity and specificity together in a lawlike manner? We always come back to that basic paradox” [url=https://www.amazon.com/FIFTH-MIRACLE-Search-Origin-Meaning/dp/068486309X]34[/url] and in his book The Origin of Life, on page 17:  "A law of nature could not alone explain how life began, because no conceivable law would compel a legion of atoms to follow precisely a prescribed sequence of assemblage." 35

So he is well aware that physical laws are an inadequate explanation. Why, then, does Davies not go with God? In a conversation on the Unbelievable? program, he confessed: "I don't like the idea of miracles. I don't like the idea of a god who sort of meddles in the affairs of the world, and also I don't like the idea of a God who's sitting around for all eternity and then made the big bang go bang at some moment. However, we live in a universe that is remarkable in many ways. The laws of physics themselves: where did they come from? Why do they have the form that they do? Most of my colleagues just accept them as a brute fact, but it seems to me that there should be a deeper level of explanation. I'm not sure that God is an appropriate term for that, but if part of what is involved in the laws of physics is something like a life principle, then what we're talking about is something that explains the order of the universe and our place within it. What is undeniable is that we can't be scientists without supposing that there is a rational order in nature that is at least in part intelligible to us. A God, or some agency, or something supernatural, something that transcends, that then zeroes in and does something, gives me an experience, or a miracle or something extraordinary happening around me: that's not a type of god that I feel very comfortable with. It's like what one English bishop once described as the laser-beam god. So what would convince me? It would have to be some direct experience, and at this stage, I haven't had that." 36
 
So it is very clear that Davies does not reject the God inference on rational grounds, but because of his personal (perhaps emotional?) preferences.

Davies believes (2019): "The laws of nature as we know them today are insufficient to explain what life is and how it came about. We need to find new laws, he says, or at least new principles, which describe how information courses around living creatures. Those rules may not only nail down what life is but actively favor its emergence. “I really think we need new physics to understand how information couples to matter and makes a difference in the world,” he says." 37

I would call that "physical laws of the gaps": we don't know how information came to direct the making of living cells, but, the thinking goes, eventually, one day, we will find out, and it will be a new principle of physics that we have not yet discovered.

The "Cosmic Limit", or shuffling possibilities of our universe
We need to consider the number of opportunities that the event in question could have had to occur; that is, the upper bound of probabilistic resources theoretically available to produce the event by unguided occurrences.

The number of atoms in the entire universe: about 1 x 10^80
The estimated age of the universe is 13.7 billion years; in seconds, the figure used here is 1 x 10^16 (the precise value is closer to 4 x 10^17)
The fastest rate at which an atom can change state (roughly the inverse of the Planck time): 1 x 10^43 times per second
Therefore, the maximum number of possible events in a universe 13.7 billion years old (10^16 seconds), where every atom (10^80) changes its state at the maximum rate of 10^43 times per second during the entire period, is 10^80 x 10^16 x 10^43 = 10^139.

This calculation provides a measure of the probabilistic resources of our universe: a maximum of 10^139 events (the number of possible shuffling events in its entire history).
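The arithmetic above can be checked directly. A minimal Python sketch, using the figures quoted in the text (the three inputs are the text's assumptions, not independently derived values):

```python
# Upper bound on "shuffling" events in the universe's history,
# using the figures quoted in the text above.
atoms = 10**80            # atoms in the observable universe
seconds = 10**16          # age of the universe in seconds (as rounded above)
rate_per_second = 10**43  # fastest state changes per atom per second

max_events = atoms * seconds * rate_per_second
# Python integers are arbitrary-precision, so the product is exact;
# the number of digits minus one gives the exponent of 10.
print(len(str(max_events)) - 1)  # 139
```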

If the first proteins on the early earth were to originate without intelligent input, the only alternative is random events. How can we calculate the odds? What is the likelihood that a minimal proteome of the smallest free-living cell could emerge by chance? Let us suppose that the 20 amino acids used in life were separated, purified, and concentrated, and were the only ones available to interact with each other, excluding all others. What would be the improbability of getting a functional sequence? For a chain of two amino acids bearing a function, each of the 2 positions has 20 possible alternatives, and just one combination provides the functional outcome. So the number of possibilities is 20^2 = 20 x 20 = 400; one in 400 options will be functional. If the chain has 3 amino acids, there are 20^3 = 20 x 20 x 20 = 8,000 possibilities; one in 8,000 will be functional. And so on. As we can see, the improbability of getting a functional sequence very quickly becomes very large.
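The growth of this sequence space can be sketched in a few lines of Python (the function name is mine, for illustration only):

```python
import math

# Number of possible peptide chains of length n over the 20-amino-acid
# alphabet: 20 choices per position, so 20**n chains in total.
def sequence_space(n: int) -> int:
    return 20 ** n

print(sequence_space(2))   # 400
print(sequence_space(3))   # 8000
# For a 100-residue protein the space is astronomically larger:
print(round(math.log10(sequence_space(100))))  # 130, i.e. about 10^130
```

The last figure matches Dryden's estimate of roughly 10^130 sequences for a 100-amino-acid protein, quoted below.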

David T.F Dryden (2008): A typical estimate of the size of sequence space is 20^100 (approx. 10^130) for a protein of 100 amino acids in which any of the normally occurring 20 amino acids can be found. This number is indeed gigantic. 39

That means that just finding one protein of 100 amino acids by random shuffling could require exhausting the probabilistic resources of our universe; it is far more probable that such an extraordinarily improbable event would never have occurred. (For a more extended explanation, see S. Meyer, Signature in the Cell, page 194.)

The simplest free-living bacterium is Pelagibacter ubique, known to be one of the smallest and simplest self-replicating, free-living cells. It has complete biosynthetic pathways for all 20 amino acids. These organisms get by with about 1,300 genes and 1,308,759 base pairs, coding for 1,354 proteins. They survive without any dependence on other life forms. Incidentally, these are also among the most “successful” organisms on Earth, making up about 25% of all microbial cells. If a chain could link up, what is the probability that the code letters might by chance be in some order which would be a usable gene, usable somewhere, anywhere, in some potentially living thing? If we take a model size of 1,200,000 base pairs, the chance of getting the sequence randomly would be 1 in 4^1,200,000, or about 1 in 10^722,000. 38
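The number 4^1,200,000 is far too large to write out, but its base-10 exponent follows directly from logarithms. A quick check, assuming the model genome size given above:

```python
import math

# A genome of 1,200,000 base pairs with 4 possible bases per position:
# sequence space = 4 ** 1_200_000. Convert the exponent to base 10.
base_pairs = 1_200_000
exponent_of_ten = base_pairs * math.log10(4)
print(round(exponent_of_ten))  # 722472, i.e. roughly 10^722,000 as quoted
```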

The odds against:
- obtaining a functional proteome by unguided means, in the case of Pelagibacter ubique, the smallest known bacterium and life-form, with 1,350 proteins averaging 300 amino acids in length: about 1 in 10^722,000
- connecting all 1,350 proteins (each about 300 amino acids long) in the right, functional order: about 1 in 4^3,600
- both together, a minimal proteome and interactome: about 1 in 10^725,600

It could require exhausting the shuffling resources of up to 5,220 universes like ours, over a period of 13.7 billion years each, to find one genome and interactome with the right sequence for Pelagibacter ubique. We have elucidated the probabilistic resources needed to encounter one functional sequence of a minimal interactome of P. ubique. Another question is the likelihood of such an event actually occurring.
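The 5,220-universes figure follows from dividing the exponents. A sketch using the numbers above:

```python
# Each universe supplies at most about 10^139 trial events (see the
# probabilistic-resources calculation above), while the target space
# is about 10^725,600 possible sequences.
target_exponent = 725_600
events_per_universe_exponent = 139

universes_needed = target_exponent / events_per_universe_exponent
print(round(universes_needed))  # 5220
```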

Intelligent, or natural selection?
One could say: but there is still a chance that natural events would encounter a functional sequence. In fact, in the best case, given the probabilistic resources, even extrapolating beyond those of our universe, it could still be possible that the very first shuffling event would give a positive result. No matter how large the odds, or how unlikely an event, it would still be theoretically possible.

The probability of picking 7 from a hat of 10 numbers is 1/10. The probability of picking 7 from a hat of 1,000 numbers is 1/1,000. The probability of picking 7 from a hat of 1,000,000 numbers is 1/1,000,000. The probability of picking 7 from 10^756,000 numbers (a 1 followed by 756,000 zeroes) is 1/10^756,000. The larger the number, the smaller the chance of picking the right one. But what if the number is infinitely large, or limitless? The probability of picking 7 out of infinitely many numbers is indistinguishable from 0.
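The hat example can be expressed directly with exact rational arithmetic (the helper name is mine, for illustration):

```python
from fractions import Fraction

# Probability of drawing one specific number from a hat of n numbers,
# and how it shrinks toward 0 as n grows without bound.
def p_hit(n: int) -> Fraction:
    return Fraction(1, n)

for n in (10, 1_000, 1_000_000):
    print(n, p_hit(n))
# For very large hats the probability is already practically zero:
print(float(p_hit(10**18)))  # 1e-18
```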

So what is the criterion by which chance becomes a less plausible explanation than design? We need a threshold at which we can say: in the face of the unlikelihood of event X, intelligent selection is the more plausible inference. If finding a functional interactome would require exhausting the probabilistic resources of 5,220 universes, and we have just ours, then the odds would be 1/5,220.

Suppose someone played the lottery 200 times and won every time. What would be more likely: that the player was cheating, or merely lucky?

S. Meyer, Signature in the Cell (2009):  Imagine that after starting my demonstration with Scrabble letters, I left the classroom for a few minutes and instructed my students to continue picking letters at random and writing the results on the board in my absence. Now imagine that upon my return they showed me a detailed message on the blackboard such as Einstein’s famous dictum: “God does not play dice with the universe.” Would it be more reasonable for me to suppose that they had cheated (perhaps, as a gag) or that they had gotten lucky? Clearly, I should suspect (strongly) that they had cheated. I should reject the chance hypothesis. Why? I should reject chance as the best explanation not only because of the improbability of the sequence my students had generated but also because of what I knew about the probabilistic resources available to them. If I had made a prediction before I had left the room, I would have predicted that they could not have generated a sequence of that length by chance alone in the time available. But even after seeing the sequence on the board, I still should have rejected the chance hypothesis. Indeed, I should know that my students did not have anything like the probabilistic resources to have a realistic chance of generating a sequence of that improbability by chance alone. In one hour my students could not have generated anything but a minuscule fraction of the total possible number of 40-character sequences corresponding to the length of the message they had written on the board. The odds that they could have produced that sequence—or any meaningful sequence at all of that length—in the time available by choosing letters at random was exceedingly low. They simply did not have time to sample anything close to the number of 40-character sequences that they would have needed to have a 50 percent chance of generating a meaningful sequence of that length.
Thus, it would be much more likely than not that they would not produce a meaningful sequence of that length by chance in the time available and, therefore, it was also vastly more likely than not that something other than chance had been in play. 40
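As a rough illustration of the scale Meyer is describing (the 27-symbol alphabet, 26 letters plus a space, is my assumption; his Scrabble demonstration does not fix the alphabet exactly), the space of 40-character sequences can be sized as follows:

```python
import math

# 40-character strings over a 27-symbol alphabet (26 letters + space).
# This alphabet size is an assumption for illustration only.
alphabet = 27
length = 40
space_size = alphabet ** length
print(round(math.log10(space_size)))  # 57, i.e. roughly 10^57 sequences
# A class sampling even millions of sequences per hour touches only a
# vanishing fraction of ~10^57 possibilities.
```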

Hubert P. Yockey (1977):  The Darwin-Oparin-Haldane “warm little pond” scenario for biogenesis is examined by using information theory to calculate the probability that an informational biomolecule of reasonable biochemical specificity, long enough to provide a genome for the “protobiont”, could have appeared in the primitive soup. Certain old untenable ideas have served only to confuse the solution to the problem. Negentropy is not a concept because entropy cannot be negative. The role that negentropy has played in previous discussions is replaced by “complexity” as defined in information theory. A satisfactory scenario for spontaneous biogenesis requires the generation of “complexity” not “order”. Previous calculations based on simple combinatorial analysis overestimate the number of sequences by a factor of 10^5. The number of cytochrome c sequences is about 3·8 × 10^61. The probability of selecting one such sequence at random is about 2·1 × 10^-65. The primitive milieu will contain a racemic mixture of the biological amino acids and also many analogs and non-biological amino acids. Taking into account only the effect of the racemic mixture the longest genome which could be expected with 95 % confidence in 10^9 years corresponds to only 49 amino acid residues. This is much too short to code a living system so evolution to higher forms could not get started. Geological evidence for the “warm little pond” is missing. It is concluded that belief in currently accepted scenarios of spontaneous biogenesis is based on faith, contrary to conventional wisdom. 32

1. The more statistically improbable something is, the less sense it makes to believe that it happened by blind chance.
2. Statistically, it is extremely unlikely that a primordial genome, proteome, metabolome, and interactome of the first living cell arose by random, unguided events.
3. Furthermore, we see purposeful design in the setup of biochemistry and biology.
4. Without a mind knowing which sequences are functional in a vast space of possible sequences, random luck has no realistic prospect of finding those that confer function.

1. There is a vast "structure space", or "chemical space". A ‘virtual chemistry space’ exists that contains perhaps 10^100 possible molecules. There would have been almost no limit to the possible molecular compositions, the "combination space" of elementary particles bumping into and eventually joining each other to form all sorts of molecules. There was no goal-oriented mechanism selecting the "bricks" used in life and producing them uniformly, in the millions.
2. Even if that hurdle had been overcome and, let us say, a specified set of 20 selected amino acids, left-handed and purified, able to polymerize on their own, had been available, together with a natural mechanism to perform the shuffling process, the "sequence space" would still have contained 10^756,000 possible sequences, among which the functional one would have had to be selected. The shuffling resources of 5,220 universes like ours would eventually have to be exhausted to generate a functional interactome.

Chance that intelligence could set up life:
100%. We know by repeated experience that humans know how to design drugs and create them for specific purposes. We also know that blueprints containing complex instructional assembly information, dictating the fabrication of complex machines, robotic production lines, computers, transistors, turbines, energy plants, and interlinked factories that produce goods for specific purposes, are always the result of intelligent setup.

Florian Lauck (2013): A common process in drug design is to systematically search portions of chemical space for new possible and probable lead or drug molecules. In practice, this is realized with different techniques for constructing a new or altering a known bioactive molecule. Some sort of template or target usually guides this process.  It can be integrated into other techniques as a source of compounds or can stand for itself implementing unique search strategies for new drug candidates. 

Chemical space encompasses the lot of all possible molecules. In the context of biological systems, it is usually used to describe ‘‘all the small carbon-based molecules that could in principle be created’’. Despite the limitations introduced with the terms small and carbon-based, this space is vast. It encompasses all molecules that occur naturally in biological systems, as well as artificially created substances that have an effect on an organism, such as medicinal drugs. An efficient way of modeling chemical space is via a combinatorial approach. We define combinatorial chemical space as a tuple of atomic or molecular building blocks and connection rules. The nature of the building blocks determines the kind of connection between them; the rules determine how building blocks relate and in which case connections are allowed. The evaluation of a combinatorial space, through enumeration or search, then yields actual molecular graphs that can be translated into molecules, the so-called products. In this way, huge chemical libraries can be represented only by the building blocks and the connection rules. There are essentially two options for evaluation: systematic enumeration and searching with on-the-fly molecule construction. We will discuss later that the former is only feasible for small chemical spaces or when heavy constraints are imposed, because it quickly leads to a combinatorial explosion. The latter is much more efficient and can be realized with short computation times. Several methods have been published, implementing an efficient, and in some cases exhaustive, search in combinatorial spaces. In nature, several examples of combinatorial chemical spaces exist. The two probably most important and famous ones describe DNA with four nucleotides as building blocks and the creation of phosphoester bonds as connection rule and proteins with 20 amino acids as building blocks and amide bond formation as connection rule. 
As building blocks can be combined in arbitrary order, the resulting space is huge. 41
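Lauck's definition of a combinatorial chemical space (building blocks plus connection rules, evaluated by enumeration or search) can be modeled in miniature; the reduced four-amino-acid alphabet below is my simplification for illustration:

```python
from itertools import product

# Toy combinatorial space: tripeptides over a reduced 4-amino-acid
# alphabet, with amide-bond formation as the (implicit) connection rule.
blocks = ["Gly", "Ala", "Ser", "Leu"]        # illustrative subset of the 20
library = ["-".join(p) for p in product(blocks, repeat=3)]

print(len(library))   # 4**3 = 64 enumerated products
print(library[0])     # Gly-Gly-Gly
# With all 20 blocks and realistic chain lengths the count grows as
# 20**n, the combinatorial explosion that makes exhaustive enumeration
# infeasible and motivates guided search strategies in drug design.
```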

W. Patrick Walters (1998): There are perhaps millions of chemical ‘libraries’ that a trained chemist could reasonably hope to synthesize. Each library can, in principle, contain a huge number of compounds – easily billions.  A ‘virtual chemistry space’ exists that contains perhaps 10^100 possible molecules 42

1. Chemical space encompasses all possible molecules. Even with the limitations introduced by the terms "small" and "carbon-based," this space is vast: unless the search of combinatorial space is confined to small chemical spaces, or heavy constraints are imposed, it quickly leads to a combinatorial explosion. There are perhaps millions of chemical ‘libraries’ that a trained chemist could reasonably hope to synthesize, and each library can, in principle, contain a huge number of compounds, easily billions. A ‘virtual chemistry space’ exists that contains perhaps 10^100 possible molecules. In nature, several combinatorial chemical spaces exist; the two probably most important and famous ones describe DNA, with four nucleotides as building blocks and phosphodiester bond formation as the connection rule, and proteins, with 20 amino acids as building blocks and amide bond formation as the connection rule. As building blocks can be combined in arbitrary order, the resulting space is huge.
2. In drug design, researchers employ systematic search, utilizing different techniques, using targets ( or goals) as a guide, integrating techniques, implementing search strategies, modeling, evaluating, enumerating, and implementing efficient methods of search.
3. Hume: "Since therefore the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble; and that the Author of Nature is somewhat similar to the mind of man; though possessed of much larger faculties, proportioned to the grandeur of the work which he has executed. By this argument a posteriori, and by this argument alone, do we prove at once the existence of a Deity, and his similarity to human mind and intelligence." If humans must employ their minds and various techniques, resorting to their intelligence, to sort out which molecules in a vast "chemical space" bear a desired function, the same must apply to the selection of the quartet of macromolecules used in life, and of the genome, proteome, and interactome necessary to create self-replicating, living cells.

A statistical impossibility is a probability so low as not to be worth mentioning. A cutoff of 1 in 10^50 is sometimes quoted, although any such cutoff is inherently arbitrary. Though not truly impossible, the probability is low enough not to merit consideration in a rational, reasonable argument. 43




1. Paul Davies & Jeremy England:  The Origins of Life: Do we need a new theory for how life began? at 15:30 Life = Chemistry + information  Jun 25, 2021
2. Guenther Witzany: Life is physics and chemistry and communication 2014 Dec 31
3. Robert Alicki: Information is not physical 11 Feb 2014
4. Paul C. W. Davies: The algorithmic origins of life 06 February 2013
5. David L Abel: Dichotomy in the definition of prescriptive information suggests both prescribed data and prescribed algorithms: biosemiotics applications in genomic systems 2012 Mar 14
6. Albert Voie: Biological function and the genetic code are interdependent 2006
7. Paul Davies: Life force 18 September 1999
8. G. F. Joyce, L. E. Orgel: Prospects for Understanding the Origin of the RNA World 1993
9. Timothy R. Stout: Information-Driven Machines and Predefined Specifications: Implications for the Appearance of Organic Cellular Life April 8, 2019
10. Claus Emmeche: FROM LANGUAGE TO NATURE - the semiotic metaphor in biology 1991
11. David L Abel: Three subsets of sequence complexity and their relevance to biopolymeric information 11 August 2005
12. George M Church: Next-generation digital information storage in DNA 2012 Aug 16
13. Richard Dawkins on the origins of life (1 of 5) Sep 29, 2008
14. Leroy Hood: The digital code of DNA 2003 Jan 23
15. Hubert P. Yockey: Information Theory, Evolution, and the Origin of Life 2005
16. V A Ratner: The genetic language: grammar, semantics, evolution 1993 May;29
17. Peter R. Wills: DNA as information 13 March 2016
18. Paul Davies: The Origin of Life January 31, 2003
19. Sergi Cortiñas Rovira: Metaphors of DNA: a review of the popularisation processes  21 March 2008
20. Massimo Pigliucci:  Why Machine-Information Metaphors are Bad for Science and Science Education 2010
21. David Hume: https://philosophy.lander.edu/intro/introbook2.1/x4211.html
22. Sungchul Ji: The Linguistics of DNA: Words, Sentences, Grammar, Phonetics, and Semantics  06 February 2006
23. Marshall Warren Nirenberg Nobelprize
24. Marshall W. Nirenberg: Will Society Be Prepared? 11 August 1967
25. Evolution 2.0: Breaking the Deadlock Between Darwin and Design September 1, 2015
26. Sedeer el-Showk: The Language of DNA July 28, 2014
27. Change Laura Tan, Rob Stadler: The Stairway To Life: An Origin-Of-Life Reality Check  March 13, 2020 
28. David L Abel: The Universal Plausibility Metric (UPM) & Principle (UPP) 2009; 6: 27
29. Edward J. Steele: Cause of Cambrian Explosion -Terrestrial or Cosmic? 2018
30. Katarzyna Adamala: Open questions in origin of life: experimental studies on the origin of nucleic acids and proteins with specific and functional sequences by a chemical synthetic biology approach February 2014
31. Paul Davies: The secret of life won't be cooked up in a chemistry lab Sun 13 Ja
31a. Sir Fred Hoyle: The Universe: Past and Present Reflections November 1981
32. Hubert P.Yockey: A calculation of the probability of spontaneous biogenesis by information theory 7 August 1977
33. Robert T. Pennock: Intelligent Design Creationism and Its Critics: Philosophical, Theological, and Scientific Perspectives 2001
34. Paul Davies: The Fifth Miracle: The Search for the Origin and Meaning of Life  March 16, 2000
35. Paul Davies: The Origin of Life  January 31, 2003
36. Paul Davies & Jeremy England:  The Origins of Life: Do we need a new theory for how life began? Jun 25, 2021
37. Ian Sample:  'I predict a great revolution': inside the struggle to define life Sat 26 Jan 2019
38. Evolution: Possible, or impossible? Probability and the First Proteins
39. David T.F Dryden: How much of protein sequence space has been explored by life on Earth? 15 April 2008
40. Steve Meyer, Signature in the Cell 2009
41. Florian Lauck: Coping with Combinatorial Space in Molecular Design October 2013
42. W.Patrick Walters: Virtual screening—an overview 1 April 1998
43. Émile Borel: Les probabilités dénombrables et leurs applications arithmétiques 8 novembre 1908
