ElShamah - Reason & Science: Defending ID and the Christian Worldview

Welcome to my library—a curated collection of research and original arguments exploring why I believe Christianity, creationism, and Intelligent Design offer the most compelling explanations for our origins. Otangelo Grasso



The hardware and software of the cell, evidence of design


Otangelo (Admin)

https://reasonandscience.catsboard.com/t2221-the-hardware-and-software-of-the-cell-evidence-of-design

Paul Davies, The Fifth Miracle, page 62:
Due to the organizational structure of systems capable of processing algorithmic (instructional) information, it is not at all clear that a monomolecular system – where a single polymer plays the role of catalyst and informational carrier – is even logically consistent with the organization of information flow in living systems, because there is no possibility of separating information storage from information processing (that being such a distinctive feature of modern life). As such, digital–first systems (as currently posed) represent a rather trivial form of information processing that fails to capture the logical structure of life as we know it. 1

We need to explain the origin of both the hardware and software aspects of life, or the job is only half finished. Explaining the chemical substrate of life and claiming it as a solution to life’s origin is like pointing to silicon and copper as an explanation for the goings-on inside a computer. It is this transition where one should expect to see a chemical system literally take-on “a life of its own”, characterized by informational dynamics which become decoupled from the dictates of local chemistry alone (while of course remaining fully consistent with those dictates). Thus the famed chicken-or-egg problem (a solely hardware issue) is not the true sticking point. Rather, the puzzle lies with something fundamentally different, a problem of causal organization having to do with the separation of informational and mechanical aspects into parallel causal narratives. The real challenge of life’s origin is thus to explain how instructional information control systems emerge naturally and spontaneously from mere molecular dynamics.

Software and hardware are irreducibly complex and interdependent. There is no reason for information-processing machinery to exist without the software, and vice versa.
Systems of interconnected software and hardware are irreducibly complex. 2

Is the cell really a machine?
Following the Second World War, the pioneering ideas of cybernetics, information theory, and computer science captured the imagination of biologists, providing a new vision of the MCC that was translated into a highly successful experimental research program, which came to be known as 'molecular biology' (Keller, 1995; Morange, 1998; Kay, 2000). At its core was the idea of the computer, which, by introducing the conceptual distinction between 'software' and 'hardware', directed the attention of researchers to the nature and coding of the genetic instructions (the software) and to the mechanisms by which these are implemented by the cell's macromolecular components (the hardware).
https://sci-hub.ren/10.1016/j.jtbi.2019.06.002


All cellular functions are irreducibly complex 3

http://reasonandscience.heavenforum.org/t2179-the-cell-is-a-interdependent-irreducible-complex-system

Chemist Wilhelm Huck, professor at Radboud University Nijmegen: 5
A working cell is more than the sum of its parts. "A functioning cell must be entirely correct at once, in all its complexity."

Paul Davies, The Fifth Miracle, page 53:
Pluck the DNA from a living cell and it would be stranded, unable to carry out its familiar role. Only within the context of a highly specific molecular milieu will a given molecule play its role in life. To function properly, DNA must be part of a large team, with each molecule executing its assigned task alongside the others in a cooperative manner. Acknowledging the interdependability of the component molecules within a living organism immediately presents us with a stark philosophical puzzle. If everything needs everything else, how did the community of molecules ever arise in the first place? Since most large molecules needed for life are produced only by living organisms, and are not found outside the cell, how did they come to exist originally, without the help of a meddling scientist? Could we seriously expect a Miller-Urey type of soup to make them all at once, given the hit-and-miss nature of its chemistry?

"Being part of a large team", "cooperative manner", "interdependability", and "everything needs everything else" are just other words for irreducibility and interdependence.

For a nonliving system, questions about irreducible complexity are even more challenging for a totally natural non-design scenario, because natural selection — the main mechanism of Darwinian evolution — cannot exist until a system can reproduce. For the origin of life we can think about the minimal complexity that would be required for reproduction and other basic life functions. Most scientists think this would require hundreds of biomolecular parts, not just the five parts in a simple mousetrap or in my imaginary LMNOP system. And current science has no plausible theories to explain how the minimal complexity required for life (and for the beginning of biological natural selection) could have been produced by natural processes.

If we attempt to build model systems for studying chemical evolution prior to the time when information-bearing templates were part of the developing prebiotic system, we immediately encounter three problems: (1) very few experimental systems have been developed for such studies; (2) a complex chemical system has no obvious repository for the storage of useful chemical information; (3) the rules governing the evolution of purely chemical systems are not known. 4 

Perry Marshall, Evolution 2.0, page 153:
Wanna Build a Cell? A DVD Player Might Be Easier
Imagine that you're building the world's first DVD player. What must you have before you can turn it on and watch a movie for the first time?

A DVD. How do you get a DVD? You need a DVD recorder first. How do you make a DVD recorder? First you have to define the language. When Russell Kirsch (who we met in chapter  ) created the world's first digital image, he had to define a language for images first. Likewise you have to define the language that gets written on the DVD, then build hardware that speaks that language. Language must be defined first. Our DVD recorder/player problem is an encoding-decoding problem, just like the information in DNA. You'll recall that communication, by definition, requires four things to exist:

1. A code
2. An encoder that obeys the rules of a code
3. A message that obeys the rules of the code
4. A decoder that obeys the rules of the code

These four things—language, transmitter of language, message, and receiver of language—all have to be precisely defined in advance before any form of communication can be possible at all.
[Figure: the communication chain: code, encoder, message, decoder]
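Marshall's four requirements can be made concrete in a minimal Python sketch (the code table, message, and all names here are hypothetical, chosen only to illustrate the logic): encoder and decoder must share the same prearranged table, agreed on before any message is sent, or nothing is communicated.

# Minimal sketch of the four requirements: a code, an encoder, a message,
# and a decoder. The table is arbitrary and must be agreed on by both
# sides BEFORE communication; it is not derivable from physics.
CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}        # 1. the code
DECODE = {bits: symbol for symbol, bits in CODE.items()}

def encode(message):                                        # 2. the encoder
    return "".join(CODE[symbol] for symbol in message)

def decode(bits):                                           # 4. the decoder
    return "".join(DECODE[bits[i:i+2]] for i in range(0, len(bits), 2))

message = "GATTACA"                                         # 3. the message
assert decode(encode(message)) == message                   # round trip works

# A receiver using a DIFFERENT table cannot recover the message:
WRONG = {"00": "T", "01": "G", "10": "C", "11": "A"}
bits = encode(message)
print("".join(WRONG[bits[i:i+2]] for i in range(0, len(bits), 2)))
# prints "CTAATGT": without prearranged agreement there is no communication.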


A camera sends a signal to a DVD recorder, which records a DVD. The DVD player reads the DVD and converts it to a TV signal. This is conceptually identical to DNA translation. The only difference is that we don’t know how the original signal—the pattern in the first DNA strand—was encoded. The first DNA strand had to contain a plan to build something, and that plan had to get there somehow. An original encoder that translates the idea of an organism into instructions to build the organism (analogous to the camera) is directly implied.


The rules of any communication system are always defined in advance by a process of deliberate choices. There must be prearranged agreement between sender and receiver, otherwise communication is impossible. By definition, a communication system cannot evolve from something simpler because evolution itself requires communication to exist first. You can’t make copies of a message without the message, and you can’t create a message without first having a language. And before that, you need intent. A code is an abstract, immaterial, nonphysical set of rules. There is no physical law that says ink on a piece of paper formed in the shape T-R-E-E should correspond to that large leafy organism in your front yard. You cannot derive the local rules of a code from the laws of physics, because hard physical laws necessarily exclude choice. On the other hand, the coder decides whether “1” means “on” or “off.” She decides whether “0” means “off” or “on.” Codes, by definition, are freely chosen. The rules of the code come before all else. These rules of any language are chosen with a goal in mind: communication, which is always driven by intent. That being said, conscious beings can evolve a simple code into a more complex code—if they can communicate in the first place. But even simple grunts and hand motions between two humans who share no language still require communication to occur. Pointing to a table and making a sound that means “table” still requires someone to recognize what your pointing finger means. 


In order to explain the origin of life, one must explain both the origin of the physical parts of the cell (DNA, RNA, organelles, proteins, enzymes, etc.) and the origin of the information and the various code systems of the cell. The following excerpts elucidate why the origin of both the software and the hardware is best explained through the action of an intentional creator.

Replication, upon which mutation and natural selection act, could not begin before life started and cells were able to self-replicate. According to geneticist Michael Denton, the break between the nonliving and the living world 'represents the most dramatic and fundamental of all the discontinuities of nature.' Before that remarkable event, a fully operational cell had to be in place: various organelles, enzymes, proteins, DNA, RNA, tRNA, mRNA, and an extraordinarily complex machinery, that is: a complete DNA replication machinery; topoisomerases for replication and chromosome-segregation functions; a DNA repair system; RNA polymerase and transcription factors; a fully operational ribosome for translation, including the 20 aminoacyl-tRNA synthetases, tRNAs, and the complex machinery to synthesize them; proteins for post-translational modifications and chaperones for the correct folding of a series of essential proteins; FtsZ microfilaments for cell division and formation of cell shape; a transport system for proteins; a complex metabolic system consisting of several different enzymes for energy generation; lipid biosynthesis to make the cell membrane; and machinery for nucleotide synthesis.

This constitutes a minimal set of basic parts, superficially described. If one, just ONE, of these parts is missing, the cell will not operate. That constitutes an interdependent, interlocked, and irreducibly complex biological system of extraordinary complexity, which had to arise ALL AT ONCE. No step-by-step buildup over a long period of time is possible.

http://reasonandscience.heavenforum.org/t2110-what-might-be-a-protocells-minimal-requirement-of-parts#3797

That constitutes a formidable catch-22 problem.

Biochemist David Goodsell describes the problem: "The key molecular process that makes modern life possible is protein synthesis, since proteins are used in nearly every aspect of living. The synthesis of proteins requires a tightly integrated sequence of reactions, most of which are themselves performed by proteins." And he continues: this "is one of the unanswered riddles of biochemistry: which came first, proteins or protein synthesis? If proteins are needed to make proteins, how did the whole thing get started?"45 The end result of protein synthesis is required before it can begin. To make it clearer what we are talking about:

That chain is constituted by INITIATION OF TRANSCRIPTION, CAPPING, ELONGATION, SPLICING, CLEAVAGE, POLYADENYLATION AND TERMINATION, EXPORT FROM THE NUCLEUS TO THE CYTOSOL, INITIATION OF PROTEIN SYNTHESIS (TRANSLATION), COMPLETION OF PROTEIN SYNTHESIS, AND PROTEIN FOLDING. In order for evolution to work, this robot-like machinery and assembly line must be in place, fully operational.
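As a toy illustration of why this chain is all-or-nothing, here is a Python sketch (the stage names come from the list above; the model is deliberately schematic, not biochemistry): removing any single stage yields not a degraded protein but no functional protein at all.

# Toy model of the gene-expression chain: every stage must be present
# and run in order, or no functional product is made.
STAGES = ["transcription initiation", "capping", "elongation", "splicing",
          "cleavage/polyadenylation", "termination", "nuclear export",
          "translation initiation", "translation completion", "folding"]

def express(gene, available_stages):
    product = gene
    for stage in STAGES:
        if stage not in available_stages:
            return None                 # one missing stage -> no product
        product = stage + "(" + product + ")"
    return product

print(express("geneX", STAGES) is not None)        # True: full chain works
print(express("geneX", STAGES[:-1]) is not None)   # False: drop folding alone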

Jacques Monod noted: "The code is meaningless unless translated. The modern cell's translating machinery consists of at least fifty macromolecular components which are themselves coded in DNA: the code cannot be translated otherwise than by products of translation." (Scientists now know that translation actually requires more than a hundred proteins.)

http://reasonandscience.heavenforum.org/t2059-catch22-chicken-and-egg-problems?highlight=catch22

Furthermore, to build up this system, the following conditions on a primordial Earth would have to be met (a toy numerical sketch follows the list):

(Agents Under Fire: Materialism and the Rationality of Science, pgs. 104-105 (Rowman & Littlefield, 2004). HT: ENV.)

Availability. Among the parts available for recruitment to form the system, there would need to be ones capable of performing the highly specialized tasks of individual parts, even though all of these items serve some other function or no function.
Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
Coordination. The parts must be coordinated in just the right way: even if all of the parts of a system are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if sub systems or parts are put together in the right order, they also need to interface correctly.
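The five conditions are conjunctive: each must hold for every part, at the same time and place. A toy Python sketch (all probabilities invented purely for illustration, not measurements of anything) shows how such a conjunction collapses as the number of parts grows:

# Hypothetical per-part probabilities, only to illustrate the conjunctive
# structure of the five conditions above.
P_CONDITION = {
    "availability": 0.1, "synchronization": 0.1, "localization": 0.1,
    "coordination": 0.05, "interface compatibility": 0.05,
}

def p_system(n_parts):
    p_part = 1.0
    for p in P_CONDITION.values():
        p_part *= p                 # all five must hold for one part...
    return p_part ** n_parts        # ...and for every part at once

for n in (1, 5, 50):
    print(n, "parts ->", format(p_system(n), ".1e"))
# 1 part -> 2.5e-06, 5 parts -> 9.8e-29, 50 parts -> 7.9e-281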

http://reasonandscience.heavenforum.org/t1468-irreducible-complexity#2133

Fred Hoyle's example is not far-fetched but rather an excellent illustration. If it is ridiculous to think that a perfectly operational 747 jumbo jet could come into existence via a lucky accident, then it is likewise just as illogical to think that an organism as sophisticated as a first living cell could assemble by chance. It gets even more absurd to think that this first life would not only form by chance but also have the capability to reproduce. Life cannot come from non-life even if given infinite time and opportunities. If life could spontaneously arise from non-life, then it should still be doing so today. Hence the law of biogenesis: the principle stating that life arises from pre-existing life, not from non-living matter. Life is clearly best explained through the willful action of an extraordinarily intelligent and powerful designer.

http://reasonandscience.heavenforum.org/t1279p30-abiogenesis-is-impossible#4171

That is the hardware part, which you can compare to a computer with its hard disk, case, CPU, etc.


The software

BIOLOGICAL CELL AND ITS SYSTEM SOFTWARE 3
Abstract: Looking for a role that DNA has in the cell, we are elaborating that DNA can be viewed as a Cell Operating System. In the paper we consider several aspects of that approach and we propose some possible explanations for some of the not yet understood phenomena in molecular genetics in terms of systems software.

We recognize the operon as a monitor, a piece of software that contains the data, but also it controls the access over its data. It is an object in a sense that it contains data structures (genes) and control structures (operators) for accessing the data. This suggests that the prokaryote systems software is object oriented. In the cell hierarchical control system, the operon is the smallest object. The exons can be viewed as subprograms, programs, or data of modular software. In some cases they are pieces of reusable software, since combination of exons can produce different programs, as in the case with program preparation for antibodies.
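The paper's operon-as-object idea can be paraphrased as a toy Python class (a hypothetical sketch using the well-known lac operon as the example; the paper itself gives no code): the genes are the private data, and the operator is the control structure that grants or denies access to them.

# Toy sketch of the operon as an 'object': genes = data structures,
# operator = control structure governing access to them.
class Operon:
    def __init__(self, genes, repressed=True):
        self._genes = genes          # the data (structural genes)
        self._repressed = repressed  # state of the operator region

    def induce(self, inducer_present):
        # the operator: grants or denies access to the data
        self._repressed = not inducer_present

    def transcribe(self):
        return list(self._genes) if not self._repressed else []

lac = Operon(["lacZ", "lacY", "lacA"])
print(lac.transcribe())   # [] -- repressed: the data is inaccessible
lac.induce(True)          # inducer present: the operator releases the data
print(lac.transcribe())   # ['lacZ', 'lacY', 'lacA']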

Secondly, you need coded, specified, complex information. That's the software, and it constitutes the second major hurdle that buries any naturalistic just-so story. Consider the following paper:

Origin and evolution of the genetic code: the universal enigma
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/

In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made.

Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

http://reasonandscience.heavenforum.org/t2001-origin-and-evolution-of-the-genetic-code-the-universal-enigma

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/

DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.

if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
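The method behind such comparisons can be sketched in a few lines of Python. The published studies scored each code by how well it conserves an amino-acid property under single-base errors and compared the standard code against random reassignments of amino acids to the natural synonymous-codon blocks. The toy version below substitutes Kyte-Doolittle hydropathy for the polar-requirement scale those studies used, with a crude mean-squared-change cost, so its printed count is illustrative only, not the published one-in-a-million figure.

import random

BASES = "TCAG"
# Standard genetic code (NCBI table 1), codons in TCAG x TCAG x TCAG order.
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
STANDARD = dict(zip(CODONS, AAS))

# Kyte-Doolittle hydropathy as the property to be conserved under error.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def cost(code):
    """Mean squared property change over all single-base substitutions."""
    total, n = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                a1, a2 = code[codon], code[mutant]
                if a1 == "*" or a2 == "*":
                    continue            # ignore changes to/from stop codons
                total += (KD[a1] - KD[a2]) ** 2
                n += 1
    return total / n

def shuffled_code():
    """Random alternative code: permute amino acids among codon blocks."""
    aas = sorted(set(AAS) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (a if a == "*" else perm[a]) for c, a in STANDARD.items()}

random.seed(1)
std = cost(STANDARD)
better = sum(cost(shuffled_code()) < std for _ in range(1000))
print("standard code cost:", round(std, 2))
print("random codes beating it:", better, "out of 1000")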

Fazale Rana wrote in his book Cell's design:   In 1968, Nobel laureate Francis Crick argued that the genetic code could not undergo significant evolution. His rationale is easy to understand. Any change in codon assignments would lead to changes in amino acids in every polypeptide made by the cell. This wholesale change in polypeptide sequences would result in a large number of defective proteins. Nearly any conceivable change to the genetic code would be lethal to the cell. 

Even if the genetic code could change over time to yield a set of rules that  allowed for the best possible error-minimization capacity, is there enough time for this process to occur? Biophysicist Hubert Yockey addressed this question. He determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the universal genetic code found in nature. The maximum time available for it to originate was estimated at 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code. 
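The arithmetic behind Rana's summary of Yockey is easy to check:

# Yockey's figures as quoted above: divide the search space by the time window.
codes_to_search = 1.40e70      # alternative genetic codes to evaluate
seconds_available = 6.3e15     # estimated maximum time for the code to arise
print(format(codes_to_search / seconds_available, ".1e"), "codes per second")
# prints 2.2e+54, i.e. on the order of the "roughly 10^55" cited above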

http://reasonandscience.heavenforum.org/t1404-the-genetic-code-is-nearly-optimal-for-allowing-additional-information-within-protein-coding-sequences

The cell converts the information carried in an mRNA molecule into a protein molecule. This feat of translation was a focus of attention of biologists in the late 1950s, when it was posed as the “coding problem”: how is the information in a linear sequence of nucleotides in RNA translated into the linear sequence of a chemically quite different set of units—the amino acids in proteins?

The first scientist after Watson and Crick to propose a solution to the coding problem, that is, the relationship between DNA structure and protein synthesis, was the Russian physicist George Gamow. Gamow published in the October 1953 issue of Nature a solution called the "diamond code", an overlapping triplet code based on a combinatorial scheme in which 4 nucleotides arranged 3-at-a-time would specify 20 amino acids. Somewhat like a language, this highly restrictive code was primarily hypothetical, based on then-current knowledge of the behavior of nucleic acids and proteins.

The concept of coding applied to genetic specificity was somewhat misleading, as translation between the four nucleic acid bases and the 20 amino acids would obey the rules of a cipher instead of a code. As Crick acknowledged years later, in linguistic analysis, ciphers generally operate on units of regular length (as in the triplet DNA scheme), whereas codes operate on units of variable length (e.g., words, phrases). But the code metaphor worked well, even though it was literally inaccurate, and in Crick’s words, “‘Genetic code’ sounds a lot more intriguing than ‘genetic cipher’.”



Question: how did the translation of the triplet anticodon to amino acids, and its assignment, arise? There is no physical affinity between the anticodon and the amino acids. What must be explained is, first, the arrangement of the codon "words" in the standard codon table, which is highly non-random, redundant, and optimal, and which serves to translate the information into the amino acid sequence to make proteins; and second, the origin of the assignment of the 64 triplet codons to the 20 amino acids, that is, the origin of the translation itself. The origin of an alphabet through the triplet codons is one thing, but on top of that, it has to be translated into another "alphabet" constituted by the 20 amino acids. That is like explaining the origin of the capability to translate English into Chinese: we have to constitute the English and Chinese languages and symbols first, in order to know their equivalence. That is a mental process.
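That the assignments are symbolic rather than chemical is easiest to see when the decoding step is written the way a programmer would write it: a lookup in a prearranged table of fixed-length units, Crick's cipher. The Python sketch below uses a handful of real standard-table entries and a hypothetical coding sequence; nothing in the chemistry of the lookup itself forces GCT to mean alanine.

# Triplet decoding as table lookup: fixed-length units checked against a
# prearranged table. Only 5 of the 64 standard entries are shown here.
CODON_TABLE = {"ATG": "Met", "GCT": "Ala", "TGG": "Trp",
               "AAA": "Lys", "TAA": "STOP"}

def translate(dna):
    peptide = []
    for i in range(0, len(dna) - 2, 3):    # read in fixed 3-base units
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("ATGGCTTGGAAATAA"))   # ['Met', 'Ala', 'Trp', 'Lys']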

http://reasonandscience.heavenforum.org/t2057-origin-of-translation-of-the-4-nucleic-acid-bases-and-the-20-amino-acids-and-the-universal-assignment-of-codons-to-amino-acids


1) http://arxiv.org/pdf/1207.4803v2.pdf
2) http://www.evolutionnews.org/2012/07/software_machin062211.html
3) http://ciit.finki.ukim.mk/data/papers/2CiiT/2CiiT-19.pdf
4) Origins of Life on the Earth and in the Cosmos page 185





Otangelo (Admin)

Software: Such Is Life

"What i life?" That's a question Erwin Schrödinger tried to answer in an influential lecture at Trinity College, Dublin, in 1943. Last week the same question was addressed to pioneering synthetic geneticist Craig Venter (pictured) on the same stage, with the benefit of six decades of progress in genetics since Watson and Crick unveiled the structure of DNA in 1951. Watson was present at the lecture, according to Claire O'Connell, who reported on the event for New Scientist.

So, what is life from the perspective of a genetic engineer whose team programmed DNA in a computer in the first attempt to build a synthetic organism? Venter told the packed audience in Dublin that life is DNA-software-driven machinery that operates protein robots. Here's the key passage in the article:

"All living cells that we know of on this planet are 'DNA software'-driven biological machines comprised of hundreds of thousands of protein robots, coded for by the DNA, that carry out precise functions," said Venter. "We are now using computer software to design new DNA software."


That's a remarkable statement. It has intelligent design written all through it.
O'Connell describes how Schrödinger realized 69 years ago that a living cell had to carry information. Without knowing the structure of DNA, he envisioned it as an "aperiodic crystal" that could store instructions. She quotes Luke O'Neill, a professor of biochemistry at Trinity and master of ceremonies for the July 12 lecture:


"The gene had to be stable, so it had to be a crystal, and it had to have information so it was aperiodic," he explained.
"Equally important, Schrödinger also discussed the possibility of a genetic code, stating the concept in clear physical terms."



The rest, as they say, is history: DNA did turn out to be aperiodic, stable, and the bearer of a genetic code.
It's serendipitous that the history of molecular genetics parallels the history of software engineering. Just when Schrödinger was pondering how cells might store information in a genetic code, software engineers were figuring out how to program the new computers being invented, first the clunky vacuum-tube monstrosities, followed by devices with increasing power and decreasing size as transistors (1947) and integrated circuits (1958) became available.
Software engineers faced the challenges of informational systems: How can instructions be stored and executed to command robotic devices like input-output machines and printers? How can software respond and adapt robustly to changing environments? How can hardware and software be integrated into systems and networks? Simultaneously but independently, geneticists were learning how the newly discovered DNA code stored instructions and executed them, solving the very same challenges. The timing of these discoveries was as uncanny as the similarities between them.
In Signature in the Cell (2009), Stephen Meyer used software as a simile for genetic information, quoting Microsoft founder Bill Gates: "DNA is like a computer program but far, far more advanced than any software ever created" (p. 12).
Venter's lecture essentially brings the parallel tracks together. DNA is not just like software, he said: that's what it is. To prove the point, he added, "We are now using computer software to design new DNA software."
O'Connell continues,


The digital and biological worlds are becoming interchangeable, he added, describing how scientists now simply send each other the information to make DIY biological material rather than sending the material itself.


Today's answer to "What is life?", therefore, is: it's software. That's a very ID-friendly idea, for numerous reasons:

Our uniform experience with software is that it is intelligently designed.
Software runs on machines, and machines are intelligently designed.
Software operates other machines (e.g., robots) that are also intelligently designed.
Systems of interconnected software and hardware are irreducibly complex.
Functional systems imply purposefully planned architecture of the whole.
Software is comprised of information, which is immaterial.
Information is independent of the storage medium bearing it (e.g., electrons, magnets, silicon chips, molecules of DNA).
Meaningful information is aperiodic; so is DNA.
As a form of information, DNA software is complex and specified.
Epigenetics regulates genetics just as computer software can regulate other software.
Software can improve over time, but only by intelligent design, not by random mutation.
Software can contain bugs and still be intelligently designed.

O'Connell told a humorous story that illustrates that last point. When Venter's team programmed their synthetic organism by running their computer-generated "DNA software" through a bacterium's "hardware," it was buggy. They had inserted some text as a watermark, including a quote by late physicist Richard Feynman -- but got it wrong. They had to go back later on and fix it.
No one sensible would claim that a mistake in the software by Venter's team counts as evidence against its being the product of intelligent design, nor should anyone look to dysteleology in life as a disproof of design. Intelligent design theory makes no claims regarding the quality of the design reflected in any phenomenon. It only points to the presence of design.
One might, of course, raise legitimate questions about the wisdom of tinkering with living software. ID theory leaves such questions in the capable hands of ethicists, philosophers, theologians, policy wonks and voters.
Nevertheless, viewing life as software represents a fundamental paradigm shift with profound implications. It's no longer a Paley-like design argument from analogy, as Stephen Meyer explained in Signature in the Cell (p. 386): it's an inference to the best explanation based on stronger premises.

The argument does not depend upon the similarity of DNA to a computer program or human language, but upon the presence of an identical feature in both DNA and intelligently designed codes, languages, and artifacts. Because we know intelligent agents can (and do) produce complex and functionally specified sequences of symbols and arrangements of matter, intelligent agency qualifies as an adequate causal explanation for the origin of this effect. Since, in addition, materialistic theories have proven universally inadequate for explaining the origin of such information, intelligent design now stands as the only entity with the causal power known to produce this feature of living systems. Therefore, the presence of this feature in living systems points to intelligent design as the best explanation of it, whether such systems resemble human artifacts in other ways or not.

We might even say now, using Venter's description, that ID is not just an inference but a logical deduction. Life and software don't merely contain an identical feature; they constitute an identity. They are one and the same.

http://www.evolutionnews.org/2012/07/software_machin062211.html





Otangelo (Admin)

Darwin’s Linux: Did Evolution Produce a Computer?

How is a cell like a computer?  Some Yale scientists asked that question, and embarked on a project to compare the genome of a lowly bacterium to a computer's operating system.1  Their work was published in PNAS.2  As with most analogies, some things were found to be similar, and some different – but in the end, these two entities might be more similar overall in important respects.
    The interdisciplinary team, composed of members of the Computer Science department and the Molecular Biophysics and Biochemistry department, calls itself the Program in Computational Biology and Bioinformatics.  Recognizing that “The genome has often been called the operating system (OS) for a living organism,” they decided to explore the analogy.  For subjects, they took the E. coli bacterium, one of the best-studied prokaryotic cells, and Linux, a popular Unix-based operating system.  The abstract reveals the basic findings, but there’s more under the hood:

To apply our firsthand knowledge of the architecture of software systems to understand cellular design principles, we present a comparison between the transcriptional regulatory network of a well-studied bacterium (Escherichia coli) and the call graph of a canonical OS (Linux) in terms of topology and evolution.  We show that both networks have a fundamentally hierarchical layout, but there is a key difference: The transcriptional regulatory network possesses a few global regulators at the top and many targets at the bottom; conversely, the call graph has many regulators controlling a small set of generic functions.  This top-heavy organization leads to highly overlapping functional modules in the call graph, in contrast to the relatively independent modules in the regulatory network.  We further develop a way to measure evolutionary rates comparably between the two networks and explain this difference in terms of network evolution.  The process of biological evolution via random mutation and subsequent selection tightly constrains the evolution of regulatory network hubs.  The call graph, however, exhibits rapid evolution of its highly connected generic components, made possible by designers’ continual fine-tuning.  These findings stem from the design principles of the two systems: robustness for biological systems and cost effectiveness (reuse) for software systems.

We see they have already concocted a curious mixture of designer language and evolution language.  The design language continues in the heart of the paper.  Design principles, optimization, constraints, frameworks, interconnections, information processing – these engineering phrases are ubiquitous.  Consider this paragraph that starts with “master control plan.”  They applied it not to Linux but to the cell, which is found to have many similarities to the master control plan of the computer operating system:

The master control plan of a cell is its transcriptional regulatory network.  The transcriptional regulatory network coordinates gene expression in response to environmental and intracellular signals, resulting in the execution of cellular processes such as cell divisions and metabolism.  Understanding how cellular control processes are orchestrated by transcription factors (TFs) is a fundamental objective of systems biology, and therefore a great deal of effort has been focused on understanding the structure and evolution of transcriptional regulatory networks.  Analogous to the transcriptional regulatory network in a cell, a computer OS consists of thousands of functions organized into a so-called call graph, which is a directed network whose nodes are functions with directed edges leading from a function to each other function it calls.  Whereas the genome-wide transcriptional regulatory network and the call graph are static representations of all possible regulatory relationships and calls, both transcription regulation and function activation are dynamic.  Different sets of transcription factors and target genes forming so-called functional modules are activated at different times and in response to different environmental conditions.  In the same way, complex OSs are organized into modules consisting of functions that are executed for various tasks.
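The structural contrast the authors describe, a few top regulators fanning out to many targets versus many callers fanning in to a few generic routines, can be sketched with two toy directed networks in Python (all node names invented for illustration; they have nothing to do with the paper's data):

# Two toy directed graphs contrasting the paper's finding.
regulatory = {                       # edge: regulator -> target gene
    "masterTF": ["tfA", "tfB"],
    "tfA": ["gene1", "gene2", "gene3", "gene4"],
    "tfB": ["gene5", "gene6", "gene7", "gene8"],
}
call_graph = {                       # edge: caller -> callee function
    "main": ["parse", "run"],
    "parse": ["alloc", "log"], "run": ["alloc", "log"],
    "save": ["alloc", "log"], "load": ["alloc", "log"],
}

def in_degrees(graph):
    deg = {}
    for src, dsts in graph.items():
        deg.setdefault(src, 0)
        for d in dsts:
            deg[d] = deg.get(d, 0) + 1
    return deg

for name, graph in [("regulatory net", regulatory), ("call graph", call_graph)]:
    hubs = [n for n, k in in_degrees(graph).items() if k >= 2]
    print(name, "-> nodes regulated/called by 2+ others:", hubs or "none")
# regulatory net: none (pure fan-out from few regulators at the top)
# call graph: ['alloc', 'log'] (heavy fan-in onto a few generic routines)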

And yet, on the other hand, the team felt that both the cell and Linux vary under processes of evolution:

Like biological systems, software systems such as a computer operating system (OS) are adaptive systems undergoing evolution.  Whereas the evolution of biological systems is subject to natural selection, the evolution of software systems is under the constraints of hardware architecture and customer requirements.  Since the pioneering work of Lehman, the evolutionary pressure on software has been studied among engineers.  Interestingly enough, biological and software systems both execute information processing tasks.  Whereas biological information processing is mediated by complex interactions between genes, proteins, and various small molecules, software systems exhibit a comparable level of complexity in the interconnections between functions.  Understanding the structure and evolution of their underlying networks sheds light on the design principles of both natural and man-made information processing systems.

These paragraphs provide a flavor of the basic assumptions of the paper: that cells and OSs are analogous in their design principles and in their evolution.  So what did they find?  Their most eye-catching chart shows that Linux is top-heavy with master regulators and middle management functions, whereas a cell’s transcription network is bottom-heavy with workhorse proteins and few top management functions.  The illustration has been reproduced in an article on PhysOrg with the interesting headline, “Scientists Explain Why Computers Crash But We Don’t.”

A table in the Discussion section of the paper summarizes the main similarities and differences they found.  Here are some noteworthy examples:

Cells are constrained by the environment; Linux by the hardware and customer needs.
Cells evolve by natural selection; Linux evolves by designers’ fine-tuning.
Cells have a pyramid-shaped hierarchy; Linux is top-heavy.
Cells don’t reuse genes much, but Linux reuses function calls often.
Cells don’t allow much overlap between modules, but Linux does.
Cells have many specialized workhorses; Linux concentrates on generic functions.
Cell evolutionary rates are mostly conservative; in Linux, they are conservative to adaptive.
Cell design principles are bottom up; in Linux, they are top down.
Cells are optimized for robustness; Linux is optimized for cost effectiveness.

The differences seem to be winning.  Cells and Operating Systems have different constraints; therefore, they have different design principles and optimization.  But not so fast; the team only studied a very lowly bacterium.  What would happen if they expanded their study upward into the complex world of eukaryotes?  Here’s how the paper ended:

Reuse is extremely common in designing man-made systems.  For biological systems, to what extent they reuse their repertoires and by what means they sustain robustness at the same time are questions of much interest.  It was recently proposed that the repertoire of enzymes could be viewed as the toolbox of an organism.  As the genome of an organism grows larger, it can reuse its tools more often and thus require fewer and fewer new tools for novel metabolic tasks.  In other words, the number of enzymes grows slower than the number of transcription factors when the size of the genome increases.  Previous studies have made the related finding that as one moves towards more complex organisms, the transcriptional regulatory network has an increasingly top-heavy structure with a relatively narrow base.  Thus, it may be that further analysis will demonstrate the increasing resemblance of more complex eukaryotic regulatory networks to the structure of the Linux call graph.

1.  An operating system is the foundational software on a computer that runs applications.  A useful analogy is the management company for a convention center.  It doesn’t run conventions itself, but it knows the hardware (exhibit halls, restrooms, lights, water, power, catering) and has the personnel to operate the facilities so that a visiting company (application) can run their convention at the center.
2.  Yan, Fang, Bhardwaj, Alexander, and Gerstein, “Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks,” Proceedings of the National Academy of Sciences published online before print May 3, 2010, doi: 10.1073/pnas.0914771107.

This is a really interesting paper, because it illustrates the intellectual schizophrenia of the modern Darwinist in the information age.  It might be analogous to a post-Stalin-era communist ideologue trying to recast Marxist-Leninist theory for the late 1980s, when the failures of collectivism have long been painfully apparent to everyone except the party faithful.  With a half-hearted smile, he says, “So we see, that capitalism does appear to work in certain environments under different constraints; in fact, it may well turn out to be the final stage of the proletarian revolution.”  Well, for crying out loud, then, why not save a step, and skip over the gulags to the promised land of freedom!
    You notice that the old Darwin Party natural-selection ideology was everywhere assumed, not demonstrated.  The analogy of natural selection to “customer requirements and designers’ fine-tuning” is strained to put it charitably; to put it realistically, it is hilariously funny.  The authors nowhere demonstrated that robustness is a less worthy design goal than cost-effectiveness.  For a cell cast into a dynamic world, needing to survive, what design goal could be more important than robustness?  Linux lives at predictable temperatures in nice, comfortable office spaces.  Its designers have to design for paying customers.  As a result, “the operating system is more vulnerable to breakdowns because even simple updates to a generic routine can be very disruptive,” PhysOrg admitted.  Bacteria have to live out in nature.  A cost-effective E. coli is a dead E. coli.  The designer did a pretty good job to make those critters survive all kinds of catastrophes on this planet.  The PhysOrg article simply swept this difference into the evolutionary storytelling motor mouth, mumbling of the bacterial design, that “over billions of years of evolution, such an organization has proven robust.”  That would be like our communist spin doctor alleging that the success of capitalism proves the truth of Marxist doctrine.
    A simple bacterial genome shows incredibly successful design for robustness when compared to a computer operating system, albeit at the cost of low reuse of modules.  But then the authors admitted the possibility that eukaryotes might well have achieved both robustness and modular reusability.  That would make the comparison to artificial operating systems too close to call.  If we know that Linux did not evolve by mutations and natural selection, then it is a pretty good bet that giraffes and bats and whales and humans did not, either.  That should be enough to get Phillip Johnson’s stirring speech, “Mr. Darwin, Tear down this wall!” to stimulate a groundswell of discontent with the outmoded regime.  May it lead to a sudden and surprising demise of its icons, and a new birth of academic freedom. 

http://creationsafaris.com/crev201005.htm#20100504a


Otangelo (Admin)

Self-organization, Natural Selection, and Evolution: Cellular Hardware and Genetic Software
https://academic.oup.com/bioscience/article/60/11/879/328810/Self-organization-Natural-Selection-and-Evolution

The computer science concepts of hardware and software provide a useful metaphor for understanding the nature of biological systems. All the chemical compounds within a cell, and the stable organizational relationships among them, form the hardware of the biological system; this includes the cell membranes, organelles, and the DNA molecules. These structures and processes encompass information stored in genes, as well as information inherent in their organization. The software of the cell corresponds to programs implemented on the hardware for adaptive responses to the environment and for information storage and transmission. Such programs are contained not only in DNA but also in stable self-organization patterns, which are inherited across generations as ongoing processes.





Otangelo (Admin)

Gregory Chaitin, in his life-as-software talk, as cited by VJT:

. . . the point is that now there is a well-known analogy between the software in the natural world and the software that we create in technology. But what I’m saying is, it’s not just an analogy. You can actually take advantage of that, to develop a mathematical theory of biology, at some fundamental level…

Here’s basically the idea. We all know about computer programming languages, and they’re relatively recent, right? Fifty or sixty years, maybe, I don’t know. So … this is artificial digital software – artificial because it’s man-made: we came up with it. Now there is natural digital software, meanwhile, … by which I mean DNA, and this is much, much older – three or four billion years. And the interesting thing about this software is that it’s been there all along, in every cell, in every living being on this planet, except that we didn’t realize that … there was software there until we invented software on our own, and after that, we could see that we were surrounded by software…

So this is the main idea, I think: I’m sort of postulating that DNA is a universal programming language. I see no reason to suppose that it’s less powerful than that. So it’s sort of a shocking thing that we have this very very old software around…

So here’s the way I’m looking at biology now, in this viewpoint. Life is evolving software . . .

https://uncommondescent.com/the-design-of-life/logic-first-principles-analogy-induction-and-the-power-of-the-principle-of-identity-with-application-to-the-genetic-code/


Otangelo (Admin)

2011: On a fundamental structure of gene networks in living cells
Using a theoretical approach to identify patterns in gene expression in a variety of species, organs, and cell types, we found that biological systems similarly are comprised of a relatively unchanging hardware-like gene pattern. Orthogonal patterns of software-like transcripts vary greatly, even among tumors of the same type from different individuals. Two distinguishable classes could be identified within the hardware-like component: those transcripts that are highly expressed and stable and an adaptable subset with lower expression that respond to external stimuli. It is possible to extend the computer analogy and relate software to those easily modified and personalized transcription patterns that describe the biological change.
https://www.pnas.org/doi/pdf/10.1073/pnas.1200790109

