ElShamah - Reason & Science: Defending ID and the Christian Worldview

Otangelo Grasso: This is my library, where I collect information and present arguments developed by myself that lead, in my view, to the Christian faith, creationism, and Intelligent Design as the best explanation for the origin of the physical world.



My articles



Otangelo (Admin)

Irreducible complexity is not an argument from ignorance

What I most routinely see is opponents of my posts arguing that intelligent design is an argument from ignorance. I disagree.

There are many parts (subunits of enzymes and proteins, cofactors, etc.) that have no apparent multiple functions and so could not have been co-opted or made available through horizontal gene transfer or any similar route. One example is the last 8 enzymes in the biosynthesis pathway of chlorophyll: http://reasonandscience.heavenforum.org/t1546-chlorophyll-biosynthesis-pathway. For what reason would these enzymes emerge naturally if, on their own, and not duly embedded in the whole biosynthesis process, they have no use at all? And even granting chlorophyll: why would nature invent such extremely complex pathways to produce such a complex molecule if, even ready to go, it is not embedded in the whole process of photosynthesis? And how could these enzymes be embedded in the system if the light-harvesting complex were not present? Please explain. Furthermore, I wonder how nature came up with the information to make the individual parts and subunits, and to provide the right assembly instructions, both for the individual parts and for the whole. As is known, body plans and 3D cell shape depend not only on genetic information, but also on epigenetic information and other, still unknown, factors.

Darwin's Doubt, p. 268:


What natural selection lacks, intelligent design—purposive, goal-directed selection—provides. Rational agents can arrange both matter and symbols with distant goals in mind. In using language, the human mind routinely "finds" or generates highly improbable linguistic sequences to convey an intended or preconceived idea. In the process of thought, functional objectives precede and constrain the selection of words, sounds, and symbols to generate functional (and meaningful) sequences from a vast ensemble of meaningless alternative possible combinations of sound or symbol. Similarly, the construction of complex technological objects and products, such as bridges, circuit boards, engines, and software, results from the application of goal-directed constraints. Indeed, in all functionally integrated complex systems where the cause is known by experience or observation, designing engineers or other intelligent agents applied constraints on the possible arrangements of matter to limit possibilities in order to produce improbable forms, sequences, or structures. Rational agents have repeatedly demonstrated the capacity to constrain possible outcomes to actualize improbable but initially unrealized future functions. Repeated experience affirms that intelligent agents (minds) uniquely possess such causal powers.  Analysis of the problem of the origin of biological information, therefore, exposes a deficiency in the causal powers of natural selection and other undirected evolutionary mechanisms that corresponds precisely to powers that agents are uniquely known to possess. Intelligent agents have foresight. Such agents can determine or select functional goals before they are physically instantiated. They can devise or select material means to accomplish those ends from among an array of possibilities. They can then actualize those goals in accord with a preconceived design plan or set of functional requirements. Rational agents can constrain combinatorial space with distant information-rich outcomes in mind. The causal powers that natural selection lacks—by definition—are associated with the attributes of consciousness and rationality—with purposive intelligence. Thus, by invoking intelligent design to overcome a vast combinatorial search problem and to explain the origin of new specified information, contemporary advocates of intelligent design are not positing an arbitrary explanatory element unmotivated by a consideration of the evidence.

Irreducible complexity is not based on a negative, namely that there is no evidence for a naturalistic pathway. Rather, it makes a positive claim, which can be falsified by: (a) gene knockout, (b) reverse engineering, (c) examining homologous systems, and (d) sequencing the genome of the biochemical structure (Dennis Jones). Gene knockout has been done several times, providing evidence that the organism was unable to replace a given gene or protein by natural means. 1 Demonstrated evidence that evolution is not capable of replacing a given part is empirical evidence that falsifies the claim of the ToE. It is therefore not justified to call the inference an argument from ignorance. Quite the contrary is the case. For example: if I ask you, "Can you change a US$100 bill?" and you answer, "Sorry, I have no smaller bills", and you then open your wallet and it is confirmed that there is no change in it, you have proven that you indeed have no smaller bills. You have proven a negative, which is not an argument from ignorance, since you checked and obtained empirical proof.

If proponents of intelligent design were arguing merely from what has not yet been explained, they would be guilty of arguing from ignorance. But the argument takes the following form:

Premise One: Despite a thorough search, no material causes have been discovered that demonstrate the power to produce large amounts of specified information or irreducible, interdependent biological systems.
Premise Two: Intelligent causes have demonstrated the power to produce large amounts of specified information and irreducible, interdependent systems of all sorts.
Conclusion: Intelligent design constitutes the best, most causally adequate explanation for the information and irreducible complexity in the cell, and for the interdependence of proteins, organelles, and body parts, and even of animals and plants, such as moths and flowers.

http://reasonandscience.heavenforum.org/t2099-irreducible-complexity-is-not-a-argument-from-ignorance






#27 · Re: My articles · Tue Jul 21, 2015 11:22 am


The monkey trial of 1925, and what Judge Jones could have learned before giving his verdict at the Dover trial.

Those were still the good old times, when judges had a healthy sense of discernment and were able to distinguish pseudoscience from real science, and the laws were reasonable. Judge Jones should have taken a lesson from Judge John Raulston.

Why do proponents of natural, unguided origins refer so frequently to the Dover trial, as if it were a good reference for discerning truth from lies in regard to origins?
It happens frequently: when I provide concise, clear, precise and, to my mind, undeniable evidence for creation, and logical inferences based on scientific evidence for why design is the best explanation, red herrings are common, and pointing to the Dover trial and Judge Jones's verdict seems to provide a solid reason to reject my arguments. But does it really?

http://reasonandscience.heavenforum.org/t1795-the-dover-case-a-good-argument-against-id?highlight=dover

Everybody who has ever had experience in a courtroom (in ANY country) knows that courtrooms are literally full of lies, nonsense, injustice, and obfuscation.

Only a fool would today claim that "truth" is best resolved, or in fact resolved at all, in a modern courtroom. It matters not whether the case is big or small, rich or poor, intelligent or moronic.

Courtrooms are of course run by lawyers, and, without prejudice, lawyers are human beings, most of whom are motivated mainly by money, secondly by political passion, and perhaps, as a limping third, by justice and truth. Or at least community justice, or perhaps pragmatism.

Yet when a court rules in favour of the cause of Evolution, we are suddenly treated to the most amazing fairy-story of all:

Courts are now the "ultimate" arbiters of truth, perhaps even the best discoverers and establishers of scientific truth.

Suddenly, the lawyers have become our heroes, accurately dissecting the bitter pill of Intelligent Design to discover the horror of Creationism, masquerading as 'science' and daring to "infect our children".

Please.

If a person born anytime during the post-war baby boom knows anything, he knows this is pure horse manure.

I'm telling you what everybody already knows:

After the Kennedy and Martin Luther King assassinations,
the Vietnam war, and Nixon, 
the Bush elections and Arnold Schwarzenegger, 
the O.J. Simpson trials and Hurricane Katrina, 
the Gulf Oil spills and Enron,

nobody does, nor should they, trust the government, courts, politicians or lawyers.

It's not about age groups; it's about history.

But lest there be any doubt, ask ANY real scientist whether he thinks the best way to arrive at scientific truth is to have courts of law decide which scientific theories should be accepted.

http://www.discovery.org/a/443

Michael Behe:

In the context of my book it is easy to realize that I meant there has been little work on the details of the evolution of irreducibly complex biochemical systems by Darwinian means. I had clearly noted that of course a large amount of work in many books and journals was done under the general topic of "molecular evolution," but that, overwhelmingly, it was either limited to comparing sequences (which, again, does not concern the mechanism of evolution) or did not propose sufficiently detailed routes to justify a Darwinian conclusion. Comparing sequences is interesting but cannot explain how molecular machines arose. Mechanisms such as gene duplication, domain shuffling, and concerted evolution of multigene families are thought to be involved in evolution at the molecular level, but they have not been justified in Darwinian terms. Processes like gene duplication, although very significant, are not by themselves sufficient to explain how any complex biochemical system may have arisen by Darwinian means.

Behe's claim is confirmed by this peer-reviewed paper:

http://onlinelibrary.wiley.com/doi/10.1002/cplx.20365/abstract

"Although the process of gene duplication and subsequent random mutation has certainly contributed to the size and diversity of the genome, it is alone insufficient in explaining the origination of the highly complex information pertinent to the essential functioning of living organisms."

http://www.ideacenter.org/contentmgr/showdetails.php/id/1181

Evolvability. Evolutionary biology’s preferred research strategy consists in taking distinct biological systems and finding similarities that might be the result of a common evolutionary ancestor. Intelligent design, by contrast, focuses on a different strategy, namely, taking individual biological systems and perturbing them (both intelligently and randomly) to see how much the systems can evolve. Within this latter research strategy, limitations on evolvability by material mechanisms constitute indirect confirmation of design.





#28 · Re: My articles · Wed Jul 22, 2015 5:36 pm


What Might Be a Protocell's Minimal "Genome"?

[Image: M_pneu10]

As new knowledge of functional complexity is revealed, we realize that our knowledge of that complexity has been increasing exponentially, with no end in sight. As one layer is peeled back, a new level of functional complexity is exposed. Rather than getting simpler, the more we know, the more we know we don't know!

"Most of the (bio)chemical processes found within all the living organisms are well understood at the molecular level, whereas the origin of life remains one of the most vexing issues in chemistry, biology, and philosophy".

It is important to realize that "we don't yet know, but the answers will be coming" isn't a scientific statement, but rather expresses faith in naturalism-of-the-gaps, which is no more scientific than the god-of-the-gaps explanation that most scientists would dismiss out-of-hand.

"the information content of a biological whole exceeds that of the sum of its parts". "A whole exists when it acts like a whole, when it produces combined effects that the parts cannot produce alone" . "Understanding the origin of life requires knowledge not only of the origin of biological molecules such as amino acids, nucleotides and their polymers, but also the manner in which those molecules are integrated into the organized systems that characterize cellular life".

The argument for abiogenesis "simply says it happened. As such, it is nothing more than blind belief. Science must provide rational theoretical mechanism, empirical support, prediction fulfillment, or some combination of these three."

"Laws of chemistry and physics, which follow exact statistical, thermodynamic, and spatial laws, are totally inadequate for generating complex functional information or those systems that process that information using prescriptive algorithmic information."

Communication of information requires that both sender and receiver know the arbitrary protocol, which is determined by rules, not law. A functioning protocell would have needed formal organization, not redundant order. Organization requires control, which requires formalism as a reality. Each protein is currently the result of the execution of a real computer program running on the genetic operating system. How did inanimate nature write those programs and operating systems?

"The alphabet involved with the origin of life, by the necessary conditions of information theory, had to be at least as symbolically complex as the current codon alphabet. If intermediate alphabets existed (as some have speculated), each predecessor also would be required to be at least as complex as its successor, or Shannon's channel capacity would be exceeded for information transfer between the probability spaces of alphabets with differing Shannon capacity. Therefore, life's original alphabet must have used a coding system at least as symbolically complex as the current codon alphabet. There has been no feasible natural explanation proposed to produce such an alphabet, since chance or physicality cannot produce functional information or a coding system, let alone a system as complex as that in life." Coded information has never been observed to originate from physicality.
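To make the capacity comparison concrete, here is a minimal sketch, assuming only the standard Shannon measure of log2(N) bits per symbol for a uniform alphabet of size N; the alphabet sizes are the familiar ones from molecular biology:

```python
import math

def bits_per_symbol(alphabet_size: int) -> float:
    """Shannon information capacity of a uniform alphabet, in bits per symbol."""
    return math.log2(alphabet_size)

# The current genetic code: 4 nucleotides, read as 64 triplet codons,
# mapped onto 20 amino acids plus stop signals.
for name, size in [("nucleotides", 4), ("codons", 64), ("amino acids", 20)]:
    print(f"{name:12s} ({size:2d} symbols): {bits_per_symbol(size):.2f} bits/symbol")

# The quoted argument: a predecessor alphabet with fewer bits per symbol than
# its successor could not carry the successor's messages without loss, because
# the simpler alphabet's channel capacity would be exceeded.
```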

"The most significant challenges can be summarized by two points: (a) each cell makes hundreds of thousands of different RNAs, and a large percentage of these are cleaved into shorter functional RNAs, demonstrating that each region of the genome is likely to be multifunctional; and (b) the identification of the functional regions of a genome is difficult, not only because there are many of them, but because functional RNAs can be created by taking sequences that are not near each other in the genome and joining them together in an RNA molecule, and the sequences need not even be joined in their genomic order. The central mystery is what controls the temporal and coordinated expression of these RNAs."

Controlled chemical metabolic networks are needed that can selectively admit "fuel" (redox, heat, photons, etc.) into the cell and process the "fuel" to harness the energy for growth, reproduction, manufacturing of needed components that cannot migrate in, and other useful work. Both sender and receiver of each control signal are needed, along with knowledge of the protocol rules for correct communication. The manufacturing control for needed cellular components would probably require enzymatic functionality for polymerization, along with producing homochiral components. In addition, control is required for cell division. Without control, organization (as opposed to self-ordering) is impossible, and functionality would disintegrate, with "tar" a likely result.

The proto-genome would also need to be able to effect highly accurate duplication of the entire proto-cell, with only an occasional "error" that could produce a very similar proto-cell, still possessing all three of the requirements in section 3. The proto-genome, along with all the proto-cell components, would need to have a feasible path to eventually produce cells with the functional complexity of today's life. It does little good to speculate about a "simple" initial system unless there are feasible scenarios that can traverse from the proposed initial system to life as we know it, including coded information and other features highlighted previously. For example, one could envision that dipping a finger into a bottle of ink and flicking the ink toward a white sheet would eventually produce a pattern that looks like an English letter. That would not explain the formal rules and meaningful syntax of the letters you are currently observing in this book, however.

According to Meyer the “simplest extant cell, Mycoplasma genitalium — a tiny bacterium that inhabits the human urinary tract — requires ‘only’ 482 proteins to perform its necessary functions….” If, for the sake of argument, we assume the existence of the 20 biologically occurring amino acids, which form the building blocks for proteins, the amino acids have to congregate in a definite specified sequence in order to make something that “works.” First of all they have to form a “peptide” bond and this seems to only happen about half the time in experiments. Thus, the probability of building a chain of 150 amino acids containing only peptide links is about one chance in 10 to the 45th power.

In addition, another requirement for living things is that the amino acids must be the "left-handed" version. But in "abiotic amino-acid production" the right- and left-handed versions are created in equal proportion. Thus, the probability of having only left-handed amino acids and only peptide bonds between them in a chain of 150 would be about one chance in 10 to the 90th. Moreover, in order to create a functioning protein the "amino acids, like letters in a meaningful sentence, must link up in functionally specified sequential arrangements," and the probability of this turns out to be about one in 10 to the 74th. Thus, the probability of one functional protein of 150 amino acids forming by random chance is one in 10 to the 164th. If we assume some minimally complex cell requires 250 different proteins, then the probability of this arrangement happening purely by chance is one in 10 to the 164th multiplied by itself 250 times, or one in 10 to the 41,000th power.
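A quick way to verify that these quoted exponents combine as stated is to add them in log10 units; a minimal sketch (the three exponents themselves are simply taken from the text above):

```python
# Checking the arithmetic of the quoted estimates, in log10 units
# (the three exponents are taken directly from the text above).
peptide_bonds  = 45   # all-peptide links in a 150-residue chain: ~1 in 10^45
homochirality  = 45   # all left-handed residues on top of that: combined ~10^90
functional_seq = 74   # a functionally specified sequence: ~1 in 10^74

one_protein = peptide_bonds + homochirality + functional_seq
print(f"one functional 150-aa protein: 1 in 10^{one_protein}")        # 10^164

proteins_needed = 250  # the assumed minimally complex cell
print(f"{proteins_needed} proteins by chance: 1 in 10^{one_protein * proteins_needed}")  # 10^41000
```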

There are about 10 to the 80th elementary particles in our observable universe. Assuming a Big Bang about 13 billion years ago, there have been about 10 to the 16th seconds of time. Finally, if we take the time required for light to travel one Planck length, we will have found "the shortest time in which any physical effect can occur." This turns out to be 10 to the minus 43rd seconds. Turning it around, we can say that the largest possible number of interactions per second is 10 to the 43rd. Thus, the "probabilistic resources" of the universe are found by multiplying the total number of seconds by the total number of interactions per second by the total number of particles theoretically interacting. The math turns out to be 10 to the 139th.
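The same log10 bookkeeping covers this "probabilistic resources" estimate (again, the three exponents come straight from the paragraph above):

```python
# Upper bound on the number of events the observable universe could host,
# using the three exponents quoted above.
particles          = 80   # ~10^80 elementary particles
seconds_elapsed    = 16   # ~10^16 seconds since the Big Bang
interactions_per_s = 43   # inverse Planck time: ~10^43 interactions per second

print(f"maximum possible events: 10^{particles + seconds_elapsed + interactions_per_s}")
# -> 10^139, far short of the 10^164 estimated above for even one protein
```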

http://reasonandscience.heavenforum.org/t2110-what-might-be-a-protocells-minimal-genome





#29 · Re: My articles · Thu Jul 23, 2015 10:18 pm


Vault particles, made by a 3D polyribosome nanoprinter

This is the most incredible particle I have encountered so far, not only because of its astonishing symmetrical barrel-shaped structure, but because of the way it is made:

NCBI reports: Three-dimensional printers fabricate a gamut of products such as medical devices, toys, and even specialty chocolates. Now a new study suggests that eukaryotic cells evolved a 3D nanoprinter millions of years ago, in the form of polyribosomes.

In a time in which efficient 3D manufacturing is predicted to have a revolutionary effect on mankind, nature unveils that it has already been using this technique for millions of years. Vaults are very large ribonucleoprotein particles found widely in eukaryotes. Our discovery of the unique assembly mechanism of the vault particle reveals an unforeseen function of the polyribosome as a very sophisticated cellular 3D nanoprinter.

“I had never in my life seen anything like these rolls,” Mrazek says. “Normally, when you get misassembled proteins you see ugly tangles, but these were so symmetric.” Mrazek concluded that the rolls must reflect structural intermediates during the vault assembly process. Ribosomes moving along an mRNA normally synthesize individual protein strands, which then come off the ribosome and fold into their 3-D shapes. On the basis of the proposed helical geometry of the polyribosome, Mrazek hypothesized that as an individual MVP molecule gets translated by a ribosome in the cluster, it could form a dimer with the MVP synthesized by the ribosome next to it. As they’re formed, adjacent dimers could then arrange side by side to form the vault particle. The vaults take shape bit by bit as the dimers are completed and come off the polyribosome, much like a 3-D printer might build up the layers of a plastic object. The sixth mutation in the MVP somehow disrupts the pinching off of the vault particle after the 39th dimer assembles, resulting in the long, rolled-up vaults.

Ribosomes are molecular machines that function in polyribosome complexes to translate genetic information, guide the synthesis of polypeptides, and modulate the folding of nascent proteins. Here, we report a surprising function for polyribosomes as a result of a systematic examination of the assembly of a large ribonucleoprotein complex, the vault particle. Structural and functional evidence points to a model of vault assembly whereby the polyribosome acts like a 3D nanoprinter to direct the ordered translation and assembly of the multi-subunit vault homopolymer, a process which we refer to as polyribosome templating. Structure-based mutagenesis and cell-free in vitro expression studies further demonstrated the critical importance of the polyribosome in vault assembly. Polyribosome templating prevents chaos by ensuring efficiency and order in the production of large homopolymeric protein structures in the crowded cellular environment and might explain the origin of many polyribosome-associated molecular assemblies inside the cell.

A 2014 paper found “a surprising function for polyribosomes as a result of a systematic examination of the assembly of a large ribonucleoprotein complex, the vault particle”. Beyond merely orienting ribosomes in crowded conditions to avoid aggregation of freshly produced peptides, polyribosomes act “like a 3D nanoprinter” spinning out the many copies needed for this homopolymer, which they called ‘polyribosome templating’. That is, the interactions are not being minimised for fear of dangerous aggregation, but controlled to orchestrate favourable interactions to produce vaults.
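The assembly logic described above can be made concrete with a toy simulation. Only the dimer count (39) and the pinch-off failure of the mutant come from the quoted study; the function name and its parameters are invented for illustration:

```python
def assemble_vault(pinch_off_works: bool = True, max_dimers: int = 120) -> str:
    """Toy model of polyribosome templating: adjacent nascent MVP chains pair
    into dimers, the dimers stack side by side, and a normal vault pinches off
    after the 39th dimer. The mutant described above fails to pinch off and
    keeps stacking into long, symmetric rolls."""
    dimers = 0
    while dimers < max_dimers:
        dimers += 1                 # two neighbouring ribosomes finish one dimer
        if dimers == 39 and pinch_off_works:
            return f"closed vault particle ({dimers} dimers)"
    return f"rolled-up tube ({dimers}+ dimers, never pinched off)"

print(assemble_vault(True))    # wild-type MVP
print(assemble_vault(False))   # mutant MVP: the long rolls seen in the study
```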

So how are vaults, and the way they are made, best explained: design, or natural mechanisms? We have only recently developed 3D printing technology; it is among the most advanced engineering humankind has achieved so far. Yet this procedure has been applied by nanomachines for as long as life has existed. That is, in my view, an amazing example which points to an intelligent designer as the source of the polyribosomes which make the vaults, and of the vault particles themselves.

http://reasonandscience.heavenforum.org/t2112-vault-proteins-made-by-a-3d-polyribosome-nano-printer#3748





#30 · Re: My articles · Fri Jul 24, 2015 8:51 am


Topoisomerase II enzymes, amazing evidence of design

Complete and equal transmission of DNA to daughter cells is crucial during mitosis. During cell division, each daughter cell inherits one copy of every chromosome. The metaphase-to-anaphase transition is the critical point in the cell cycle where the cell commits to separation of sister chromatids. Once spindle attachment is complete, cohesion must be eliminated to enable the physical separation of sister chromatids. This requires cleavage of the protein complex cohesin by separase and, in some instances, completion of chromosome decatenation. Catenation is the process by which two circular DNA strands are linked together like chain links. This occurs after DNA replication, where the two strands are catenated and can still replicate but cannot separate into the two daughter cells.

Topoisomerase II is a ubiquitous enzyme that is essential for the survival of all eukaryotic organisms and plays critical roles in virtually every aspect of DNA metabolism. It performs the amazing feat of breaking a DNA double helix, passing another helix through the gap, and resealing the double helix behind it. These enzymes are essential for the separation of entangled daughter strands during replication. This function is believed to be performed by topoisomerase II in eukaryotes and by topoisomerase IV in prokaryotes; failure to separate these strands leads to cell death. As genetic material DNA is wonderful, but as a macromolecule it is unruly, voluminous and fragile. Without the action of DNA replicases, topoisomerases, helicases, translocases and recombinases, the genome would collapse into a topologically entangled random coil that would be useless to the cell. The topoisomerase is thought to be a highly dynamic structure, with several gates for entry of DNA into the two DNA-sized holes. Loss of topoisomerase activity in metaphase leads to delayed exit and extensive anaphase chromosome bridging, often resulting in cytokinesis failure, although maintenance of limited catenation until anaphase may be important for sister chromatid structural organization. 9 Accurate transmission of chromosomes requires that the sister DNA molecules created during DNA replication are disentangled and then pulled to opposite poles of the cell before division. Defects in chromosome segregation produce cells that are aneuploid (containing an abnormal number of chromosomes), a situation that can have dire consequences.

Like many other enzymes, topoisomerase II is essential for cell function and had to be present in the first living cell, exercising its function right from the beginning, when life began.

Within each chromosome, two dimensions of organization are at play: condensation along the axes ensures the entire chromatid, end-to-end, is kept together 8 , while the tight association of sister chromatids until anaphase, termed sister chromatid cohesion (SCC), ensures that each daughter cell receives only one copy . Two mechanisms are known to play a role in SCC: DNA catenation, which physically interlocks (catenates) DNA across the sister chromatids ; and protein linkages through the cohesin complex, which physically tether the sister chromatids to one another.

Topoisomerase II forms a covalent linkage to both strands of the DNA helix at the same time, making a transient double-strand break in the helix. These enzymes are activated by sites on chromosomes where two double helices cross over each other such as those generated by supercoiling in front of a replication fork 

Once a topoisomerase II molecule binds to such a crossing site, the protein uses ATP hydrolysis to perform the following set of reactions efficiently:

(1) it breaks one double helix reversibly to create a DNA “gate”;
(2) it causes the second, nearby double helix to pass through this opening; and
(3) it then reseals the break and dissociates from the DNA. At crossover points generated by supercoiling, passage of the double helix through the gate occurs in the direction that will reduce supercoiling. In this way, type II topoisomerases can relieve the overwinding tension generated in front of a replication fork. Their reaction mechanism also allows type II DNA topoisomerases to efficiently separate two interlocked DNA circles. Topoisomerase II also prevents the severe DNA tangling problems that would otherwise arise during DNA replication. The enormous usefulness of topoisomerase II for untangling chromosomes can readily be appreciated by anyone who has struggled to remove a tangle from a fishing line without the aid of scissors.
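The three-step cycle can be sketched as a toy model. That each strand-passage event changes the DNA linking number by exactly two is the textbook property of type II topoisomerases; the starting value and the loop are otherwise illustrative:

```python
def topo_ii_cycle(linking_number_excess: int) -> int:
    """One ATP-driven catalytic cycle, following the three steps above:
    (1) break one duplex to open a DNA "gate", (2) pass the second duplex
    through the opening, (3) reseal and dissociate. Each strand passage
    changes the linking number by exactly 2."""
    if linking_number_excess > 0:
        return linking_number_excess - 2   # relieve overwinding
    return linking_number_excess + 2       # relieve underwinding

excess = 6   # illustrative (even) overwinding ahead of a replication fork
cycles = 0
while excess != 0:
    excess = topo_ii_cycle(excess)
    cycles += 1
print(f"relaxed in {cycles} cycles, one round of ATP hydrolysis each")  # 3 cycles
```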


These molecular machines are far beyond what unguided processes involving chance and necessity can produce. Indeed, machinery of the complexity and sophistication of the topoisomerase enzymes is, based on our experience, usually attributed to intelligent agents.

Type IIA topoisomerases consist of several key motifs:

an N-terminal GHKL ATPase domain

a Toprim domain

a central DNA-binding core

a C-terminal domain

Each of these key motifs is essential for the proper function of the enzyme. No part can be reduced, and neither is it possible for any of the subparts to emerge by natural means. Not only did the enzyme have to emerge prior to the first cell being formed, and so could not be the result of evolution, but the subparts by themselves, and even the fully formed enzyme by itself, would have no use unless the DNA double helix molecules already existed as well, along with the whole process of cell division, mitosis, and catenation, which happens through DNA replication. The enzyme is, however, essential for life: if Topo II is removed, life cannot exist. So we have here one of innumerable seemingly tiny and apparently unimportant parts which, on closer inspection, turn out to be life-essential. This poses another big question mark for naturalistic explanations, and provides, on the other hand, once more a powerful argument for design.

http://reasonandscience.heavenforum.org/t2111-topoisomerase-ii-enzymes-amazing-evidence-of-design#3754





#31 · Re: My articles · Mon Jul 27, 2015 4:29 pm


The Design of the Simplest Self-Replicator

The following article shows the minimal structure of a cell that is required for self-replication to occur. It takes a lot of faith to believe that a cell could arise through random natural chemical reactions.


Goals, Assumptions and Requirements


Goals:


Develop insights into internal design of the cell
Evaluate complexity in creating an artificial cell


Assumptions:


There is an intake of materials from the outside of cell
There is an output of waste materials from inside to outside of the cell
We assume throughout that we design for building an artificial cell – that need not have a biological basis (not built with carbon-based chemistry) but is rather a ‘clunky’ one (made from metal, plastic, semiconductors, etc.)


Requirements


The cell has an Enclosure to separate it and protect it from its environment
The cell is capable of creating an identical copy of itself


What would have to happen during the replication of our artificial cell?


Input materials processed through material extraction into good materials for fabrication of parts or for energy generation
Energy is generated and made available throughout the cell
Fabrication function starts to fabricate parts, components and assemblies for:
Cloning (creating copies) of all cell internal elements
Creating scaffolding elements for the growing cell interior
Creating new elements that are added to the growing enclosure
When the cloning of all original cell internal parts is completed, cell division starts:
The original cell content is now at (for example) “north pole” of the cell enclosure
The cloned cell content (the “nascent daughter cell”) is now at the “south pole” of the cell enclosure
The SSR enclosure and its content now divides at the “equatorial” plane and the separate “mother” (at North) and daughter (at South) cell emerge.
It would not be possible to do this using a mechanical copy process, similar to the one used to duplicate house keys. Using internal design information would be required, in combination with computer-controlled robots.
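As a sketch, the replication sequence above reads like a control loop. Everything here (the names and the parts list) is invented for illustration; only the ordering of the steps comes from the list above:

```python
# Toy walk-through of the replication sequence: intake -> energy -> fabrication
# -> cloning -> polar segregation -> division at the "equator".
inventory = ["enclosure", "gateways", "fabricator", "energy unit", "catalogs"]

def replicate(parts: list[str]):
    clones = [f"copy of {p}" for p in parts]        # fabrication/cloning phase
    mother = {"pole": "north", "parts": parts}      # original cell content
    daughter = {"pole": "south", "parts": clones}   # nascent daughter cell
    return mother, daughter                         # division completes

mother, daughter = replicate(inventory)
print(daughter["parts"])
```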


Control of Input Flow, Functions of Material Identification and Material Extraction 


Opens/closes the enclosure input gateways
Acts based on the nature of input material/part and commands from other functions
Identifies nature of input materials and parts
Tags input materials and parts, manufactured materials and fabricated parts with type Id (bar code like)
Uses specific processes to extract manufacturing materials from raw materials
Uses specific machinery and parts
Our cell needs to have gates which permit material to get in or out. Further, it needs to be able to recognize and identify what the incoming materials are made of. So the cell must be able to act like a robot: programmed to recognize the materials automatically, by itself, and to permit or refuse their entrance.


Function of Energy Generation and internal transport highways and transport vessels


The cell needs to be able to generate energy from raw or processed materials. It needs to be able to transport and distribute materials, and to manage energy requirements (electricity). It has to use special machinery: generators, transformers, converters.
An open question is what material basis would provide the energy: fuel, oil, coal, chemical, atomic, etc.
Transport materials and parts/components
Uses containers, conduits, wires, carriers
Transports also energy and information


Supply chain capability, recycling, and output flow control


Ensures steady supply of materials, energy and parts
Coordinating and scheduling capability
Re-introduce useful materials and parts in the fabrication cycle
Selects materials and parts as refuse; cleans spaces
Sends refuse materials and parts outside the cell
Controls output gateways of the enclosure


Storage of a catalog of all materials, construction plans, and registration of construction status


Catalogs of all materials and all parts
For each element: its composition in sub-elements and materials
Catalog of construction plan and design of all parts, components, assemblies including the cell as a whole
Catalog of all processes
Catalog of all procedures
Uses replicas of construction plans to mark construction progress
Status updated by functions involved in fabrication and construction


Ability of manipulation, fabrication of all parts, and fabrication quality control


Ability to “grab”, “handle”, “manipulate” materials, parts, components
Implemented with robot arm – like machinery
Must be able to fabricate any and all cell parts and components
In particular able to fabricate all cell machinery
Follows the construction plans
Commands the fabrication function to manufacture next elements in the plan


Function of Communication and Notification, Growth Function through scaffolding and Enclosure Growth 


Facilitates communication between the “control” centers and “execution” centers
Notifications from “executor” to “controller”
Controls construction and growth of cell scaffolding
Mostly on the “daughter” cell side
Controls the construction and growth of the enclosure
Addition of enclosure gateways; flexible geometry


Cloning function, ability of cell division and replication


Choreographs the cloning phase
Coordinates fabrication of the clone and growth of scaffolding and enclosure
Copies info catalogs and software into the cloned parts
Choreograph the cell  division phase
“start the engines” of the “daughter” cell just before division completes
Highest level function:
Implements the designer commandments
- Grow and
- Multiply


What have we learned about the artificial cell?


The cell must be designed for growth and division: the enclosure must support changing surface, volume and shape
The cell must contain detailed, structured, cohesive descriptive information that must be accurately and integrally passed to the next cell generation.


Required information:


-all used materials: identification, description, characteristics
-manufacturing materials: extraction procedures and processes
-bill of materials for all fabricated parts, components and assemblies
-procedures and processes for energy generation, storage (if needed) transportation and management
-construction plans for all fabricated parts, components and assemblies including the cell  itself.
-all fabrication processes and procedures
-all assemblage procedures
-all recycling procedures and processes


The cell


must contain advanced materials and parts identification capabilities as well as material extraction capabilities
must contain sophisticated, fully automated and computer-controlled capabilities for energy generation, transportation, management and distribution
must contain very sophisticated fabrication and assemblage capabilities that must be information-driven for full automation and computer control.
must possess advanced computing (information processing) capabilities as well as good information communication capabilities.
must control its many parts and layered functions through very advanced software running on cell-computer(-like) machinery.
Above all: the cell must be based on a very sophisticated design that harmoniously, precisely and completely provides full automation and self-sufficiency for all machinery and processes that happen inside a cell during its growth, division and replication.
The design of a cell can be successful only if it is harmoniously integrated and precisely coordinated with the design and characteristics of its environment.


An artificial cell would most probably have to contain:


a material mining sub-unit
a metallurgic subunit
a chemical plant
a power plant
an electricity distribution network
a network of avenues, alleys and conduits for robotized transportation
a semiconductor manufacturing plant
a computer manufacturing plant
an extended communication network connecting by wire or rather wirelessly all plants and robots
a software manufacturing plant and software distribution and installation agents.
a materials and parts recycling and refuse management plant
an army of intelligent robots for transportation and manipulation
a highly sophisticated distributed, multi-layered software system that controls in a cohesive manner all plants, robots and communications.


Evaluating the Complexity of an artificial cell


The Cell: autonomous, computerized and automated


There is no comparable human engineering artifact in terms of:


Autonomy (materials, energy, fabrication closure, information closure, ‘intelligence’)
full manufacturing automation
spectrum of processes and fabrication types




There has been no successful attempt so far at building a real autonomous artificial cell from scratch.


Attempts so far:


software simulations
cellular automata
self-replicating software entities
RepRap – self replicating 3D printers
self-assembling Lego robots
Micro Electro Mechanical Systems  (MEMS)
Craig Venter’s synthetic bacterial cell


Comparing a genuine artificial Cell with:


An advanced car manufacturing/assembly line:


many/most parts are fabricated elsewhere
not fully automated; many manual operations performed by humans
no material identification, material extraction capabilities
not so many process technologies involved
mostly an assembly operation


The Large Hadron Collider (LHC) in Switzerland


no fabrication
not comparable in terms of automation, process diversity


The Martian Rover


some good amount of autonomy
no fabrication


Our artificial cell, and Origin of Life (OOL) research


Any credible OOL explanation should provide answers to the following questions:
How did the self-describing information (of so many varieties) residing in the cell originate?
How did the energy generation and transport functions originate?
How did the material identification and material extraction functions originate?
How did the fabrication function originate?
How did the transport and manipulation functions originate?
How did the coordinated control of the various functions originate?
How did the whole sophisticated design of the cell originate?
Is it reasonable to believe that the cell resulted from random, natural processes when 21st-century scientists are only beginning to understand only SOME OF THE INTERNALS of a cell?
Is it reasonable to believe that the cell resulted from random, natural processes when 21st-century scientists and engineers are still not able to design and create an artificial cell?


The Metaphysics of It All


A reasonable scientific hypothesis is that the Master Designer wisely designed all life types for this successful cohabitation of Homo sapiens with all other types of life.
More so, it is hypothesized (again scientifically) that the Earth, the Solar System, the Milky Way galaxy and the whole Universe were designed and finely adjusted by the Master Designer so that Homo sapiens and all other life forms would have a comfortable and enjoyable place to live.
More so, besides having a comfortable place to live, Homo sapiens has plenty of cell types to study and to marvel at the fabulous skills of the Master Designer revealed so blatantly in His cell designs.
More so, besides having amazing engineering feats to discover and admire, Homo sapiens has a rightful Master Designer to praise and worship all his life.


http://reasonandscience.heavenforum.org/t2125-the-design-of-the-simplest-self-replicator





#32 · Re: My articles · Fri Jul 31, 2015 8:53 pm


Cell membranes: origin through natural mechanisms, or design?

http://reasonandscience.heavenforum.org/t2128-membrane-structure#3798

According to this website : The Interdependency of Lipid Membranes and Membrane Proteins
The cell membrane contains various types of proteins, including ion channel proteins, proton pumps, G proteins, and enzymes. These membrane proteins function cooperatively to allow ions to penetrate the lipid bilayer. The interdependency of lipid membranes and membrane proteins suggests that lipid bilayers and membrane proteins co-evolved together with membrane bioenergetics.

The nonsense of this assertion is evident. How could the membrane proteins co-evolve, if they had to be manufactured by the machinery protected by the cell membrane?


The ER and Golgi apparatus together constitute the endomembrane compartment in the cytoplasm of eukaryotic cells. The endomembrane compartment is a major site of lipid synthesis, and the ER is where not only lipids are synthesized, but membrane-bound proteins and secretory proteins are also made.

So in order to make cell membranes, the endoplasmic reticulum is required, but also the Golgi apparatus, the peroxisome, and the mitochondria. Yet these only function if protected and encapsulated by the cell membrane. What came first, the cell membrane or the endoplasmic reticulum? This is one of many catch-22 situations in the cell, which indicate that the cell could not emerge in a stepwise, gradual manner, as proponents of natural mechanisms want to make us believe.

Not only is the cell membrane intricate and complex (and certainly not random), but it has tuning parameters such as the degree to which the phospholipid tails are saturated. It is another example of a sophisticated biological design about which evolutionists can only speculate. Random mutations must have luckily assembled molecular mechanisms which sense environmental challenges and respond to them by altering the phospholipid population in the membrane in just the right way. Such designs are tremendously helpful so of course they would have been preserved by natural selection. It is yet another example of how silly evolutionary theory is in light of scientific facts.





#33 · Re: My articles · Sun Aug 02, 2015 8:20 pm


The amazing design of the DNA packaging motor

According to the United Nations, 2015 is the International Year of Light as well as the International Year of Soils. But, for the marine microbial ecologist Forest Rohwer, 2015 is also the Year of the Phage.  Phages, more formally known as bacteriophages, are viruses that infect bacteria. They are easily as ubiquitous, universal, and essential to life on Earth as light and soil, and yet they are largely unknown. 

“The thing that even most biologists don’t get—let alone most of the rest of the world—is that phages are the most diverse things on the planet, and there are more of them than anything else, and we really don’t have a clue” Phages possess a wide array of forms and functions. They are all incredibly small; at just a few nanometres across, they lie on the border of measurability between quantum and classical physics, all but impossible to see without a scanning electron microscope.

Viruses look incredibly well designed. Some bacteriophages look like lunar landing capsules, legs and all.

Viruses are tiny particles that can’t reproduce on their own, but hijack the machinery of truly living cells. But they still have genetic material, long strands of DNA (or sometimes RNA) enclosed in a protein sheath. They are biologically inert until they enter into host cells. Then they start to propagate using host cellular resources. The infected cell produces multiple copies of the virus, then often bursts to release the new viruses so the cycle can repeat. One of the most common types is the bacteriophage (or simply ‘phage’) which infects bacteria. It consists of an infectious tailpiece made of protein, and a head capsule (capsid) made of protein and containing DNA packaged at such high pressure that when released, the pressure forces the DNA into the infected host cell.

How does the virus manage to assemble this long information molecule at high pressure inside such a small package, especially when the negatively charged phosphate groups repel each other? It has a special packaging motor, more powerful than any molecular motor yet discovered, even those in muscles. The genome is about 1,000 times longer than the diameter of the virus. It is the equivalent of reeling in and packing 100 metres of fishing line into a coffee cup, and yet the virus is able to package its DNA in under five minutes.
Force 
A surprising finding is that the phage packaging motor generates enormous force in order to package DNA. Forces as high as ~60 pN were measured in phages ϕ29, λ, and T4, making the packaging motor one of the strongest force-generating biological motors reported to date. The force is 20-25 times that of myosin, 10 times that of kinesin, or more than 2 times that of RNA polymerase. Such high forces seem to be essential to pack the viral DNA against the enormous electrostatic repulsive forces (and bending and entropic energies) involved in confining a highly negatively charged DNA polymer within the limited volume of the capsid.

Velocity
The phage packaging motors show high rates of packaging as well as high processivity. The T4 motor can achieve rates as high as ∼2000 bp/sec, the highest recorded to date. 

Power
Phage packaging motors generate enormous power, with the T4 motor being the fastest and the most powerful. Even against a high external load force of 40 pN, the T4 motor can translocate DNA at a remarkable speed of ~380 bp/sec. This is equivalent to a power of 15,200 pN·bp/s, or 5.2 × 10^-18 W. Scaling the nanoscale T4 packaging motor up to a macromotor, the motor power density is approximately twice that of a typical automobile engine.
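That power figure can be sanity-checked directly from the quoted force and speed, using P = F·v; the only number below not taken from the paragraph above is the standard B-DNA rise of 0.34 nm per base pair:

```python
# Sanity check of the quoted T4 motor power: P = F * v.
force_N   = 40e-12      # 40 pN external load (quoted above)
speed_bps = 380         # ~380 bp/sec under that load (quoted above)
rise_m    = 0.34e-9     # standard B-DNA rise per base pair, in metres

power_W = force_N * speed_bps * rise_m
print(f"{power_W:.1e} W")   # ~5.2e-18 W, matching the figure in the text
```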

The sequence of steps in head morphogenesis is as follows:

(i) assembly of the packaging motor on a nascent (unexpanded) empty prohead (Figure A)
(ii) expansion of the capsid after about 10%–25% of the genome is packaged (Figure B)
(iii) packaging until the head is full
(iv) cutting of DNA and dissociation of the motor (Figure C)
(v) assembly of neck proteins to seal the packaged heads (Figure D)
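A toy simulation makes the quoted packaging rate tangible against these stages. The ~171 kbp T4 genome size is an outside fact, and the one-second time step is arbitrary; the stage logic follows the list above:

```python
# Toy packaging run: the motor translocates DNA, the capsid expands once
# ~10-25% is packaged, and packaging stops when the head is full (cutting
# and neck assembly are not modelled).
genome_bp = 171_000        # T4 genome is roughly 171 kbp
rate_bps  = 2_000          # peak packaging rate quoted earlier
packaged, expanded, seconds = 0, False, 0

while packaged < genome_bp:
    packaged += rate_bps                        # one second of translocation
    seconds += 1
    if not expanded and packaged >= 0.10 * genome_bp:
        expanded = True                         # prohead expansion, step (ii)

print(f"head full after ~{seconds} s (under five minutes); expanded={expanded}")
```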

Question: How could natural forces and chemical reactions have come up with such an elaborate mechanism?

In an especially interesting scientific paper from last year, scientists report that the 30° tilt of the subunits matches perfectly with the 30° transitions that the dsDNA helix exhibits during revolution (360° ÷ 12 = 30°).

Question: How did this precise and finely tuned arrangement emerge? Trial and error?

In each step of revolution that moves the dsDNA to the next subunit, the dsDNA physically moves to a second point on the channel wall, keeping a 30° angle between the two segments of the DNA strand. This structural arrangement enables the dsDNA to touch each of the 12 connector subunits in 12 discrete steps of 30° transitions for each helical pitch. Nature has created and evolved a clever machine that advances dsDNA in a single direction while avoiding the difficulties associated with rotation, such as DNA supercoiling, as seen in many other processes.

Question: How did this precise and finely tuned arrangement emerge? Trial and error? Since when can "clever" be ascribed to something that is not intelligent? Should the author of the article not rather honor the inventor of this amazing nanomachinery, namely the creator?

The dramatic divergence of bacteriophage genomes is an obstacle that frequently prevents the detection of homology between proteins and, thus, the determination of phylogenetic links between phages.

Phylogenetic reconstruction using the complete genome sequence not only failed to recover the correct evolutionary history because of these convergent changes, but the true history was rejected as being a significantly inferior fit to the data. 

Convergence, of course, is a common feature of design. It is also precisely the opposite of "divergence", which is supposed to be a hallmark of evolution.

Even viruses, which are not even alive by the definition of being able to reproduce independently, show incredible design. They are too well designed to be accidents.

Proponents of naturalism have to believe in miracles – that super-efficient, compact, powerful motors like this just appeared, arose or emerged (favorite Darwinian miracle-words) from nowhere.

The large packaging subunit gp17 but not the small subunit gp16 exhibited an ATPase activity. 2 Although gp16 lacked ATPase activity, it enhanced the gp17-associated ATPase activity by >50-fold. The gp16 enhancement was specific and was due to an increased catalytic rate for ATP hydrolysis. A phosphorylated gp17 was demonstrated under conditions of low catalytic rates but not under high catalytic rates in the presence of gp16. The data are consistent with the hypothesis that a weak ATPase is transformed into a translocating ATPase of high catalytic capacity after assembly of the packaging machine. The nonstructural terminase complex, constituted by one small subunit and one large subunit, is a key component of the DNA-packaging machine 

So both subunits are required for the proper functioning of the molecular motor, and these subunits have no use unless duly embedded in this nanomotor. An irreducibly complex system must have at least two subunits that could not have emerged through evolutionary steps, and this seems to be the case in this amazing molecular machine as well. Further evidence is the fact that no protein homology exists between different phages, which is another indication that they were designed and created separately.

http://reasonandscience.heavenforum.org/t2134-the-amazing-design-of-bacteriophage-viruses-and-its-dna-packaging-motor


#34 · Nanomachines in the powerhouse of the cell · Thu Aug 06, 2015 2:42 pm


Nanomachines in the powerhouse of the cell

Visualize an old locomotive train roaring down the tracks. One of the characteristic images that surely comes to mind is the oscillating motion of the coupling rods on the wheels. The long rods that connected the wheels provided a way to convert heat energy from the steam into mechanical energy.

A special type of „transmission element“, which is not known from any other protein, appears to be responsible for the energy transduction within the complex by mechanical nanoscale coupling. Transferred to the technical world, this could be described as a power transmission by a coupling rod, which connects for instance the wheels of a steam train. This new nano-mechanical principle will now be analysed by additional functional studies and a refined structural analysis.

Its structure is strongly suggestive of a mechanism that involves conformational coupling via connecting elements acting like a coupling rod in a steam engine, driving symmetry-related domains instead of wheels. To highlight a remarkable analogy between the independent creations of man and nature, the background shows a drawing by James Watt of his steam engine developed between 1787-1800

Isn’t this wonderful information?  Now we see that the respiratory transport chain in mitochondria includes coupling rods that act like little locomotives.  Those rods must be moving incredibly fast.  They are pumping protons like gangbusters, 24x7, all the years of your life.  This mechanical wonder is only one amazing device in the first stage of a respiratory chain that includes some 40 enzymes.  The machinery dazzles and boggles the mind as it continues on its way to the climax of ATP synthase, one of the most elegant and perfect molecular machines.

After ten years of research work, the x-ray crystallographic analysis of the huge and most complicated protein complex of the mitochondrial respiratory chain was successful. The complex contains more than 40 different proteins, marks the entry to cellular respiration and is thus also called mitochondrial complex I

Over and over again, we find researchers ignoring Darwinism as they uncover the workings of molecular machines in the cell.  Darwin himself could never have imagined that life at its foundations would be this complex, this mechanical.  It has all the appearance of Paley’s pocket watch – only more elegant, more efficient, and more beautiful at an unimaginably small scale.  And this is just one of thousands of such machines.  Remember the other locomotives, the machines that transport cargo down your molecular railroad?
(Notice how scientists in a recent paper in PNAS employed "engineering models to understand the control principles" of a biological phenomenon.) Fire the storytellers! Train engineers! (Catch the pun?) When science discovers powered locomotives at work in the simplest organisms, it no longer needs storytellers with loco motives.

http://reasonandscience.heavenforum.org/t2140-nadh-dehydrogenase-complex-i-in-mitochondria





Otangelo (Admin)

New structural studies of the ATP synthase power engines in our cells give amazing new insight into their architecture. Observe how the following scientific paper describes them:

http://reasonandscience.heavenforum.org/t1439-atp-synthase#3848

Rotary ATPases are molecular rotary motors involved in biological energy conversion. They either synthesize or hydrolyze the universal biological energy carrier, adenosine triphosphate. Recent work has elucidated the general architecture and subunit compositions of all three subtypes of rotary ATPases. Molecular machines like the rotary ATPases seem to have much in common with man-made machines. However, the analogies hold only to a certain point and are in large part not fully understood. What is evident is that these biological motors are unsurpassed in efficiency, fine-tuning to their environment, and sustainability. Understanding their detailed function at the molecular level is not only important to satisfy our curiosity, but will certainly have implications for understanding human physiology, including mitochondrial disorders, bioenergetics and the processes of aging, as well as impacting nano-engineering and many other fields afar.

They also seem to be irreducibly complex, and could not be the result of natural selection, since they had to be operational before the cell could replicate. Furthermore, if they were not embedded in a proton gradient, they could not operate either.

All of these parts have to be in place in order for the ATPase to function:

1. nucleotide-binding stator subunits (“cylinders”)
2. central stalk (“crankshaft”)
3. A/V rotor subunit (“adapter”)
4. rotor ring (“turbine”)
5. ion channel forming subunit (no structure available)
6. peripheral stalk (“pushrod”)
7. A/V peripheral stalk connector subunits a and b (“rockers”)
8. small central stalk subunit (“ratchet” in prokaryotes)
9. eukaryotic additional central stalk subunit (“lock”) (2WPD)
10. IF1 (“brake”) (1OHH)
11. eukaryotic V-type additional peripheral stalk subunit (“brake”)

These are amazing nano motors in our bodies, carefully crafted by the creator of all life to make our existence possible.

ATP synthase is an irreducibly complex motor: a proton-driven motor divided into rotor and stator portions as described and illustrated earlier in this paper. Protons can flow freely through the CF0 complex without the CF1 complex, so that if it evolved first, a pH gradient could not have been established within the thylakoids. The δ and critical χ protein subunits of the CF1 complex are synthesized in the cytosol and imported into the chloroplast in everything from Chlorella to Eugenia in the plant kingdom.49 All of the parts must be shipped to the right location, and all must be the right size and shape, down to the very tiniest detail. Using a factory assembly line as an analogy, after all the otherwise useless and meaningless parts have been manufactured in different locations and shipped in to a central location, they are then assembled, and, if all goes as intended, they fit together perfectly to produce something useful. But the whole process has been carefully designed to function in that way. The whole complex must be manufactured and assembled in just one certain way, or nothing works at all. Since nothing works until everything works, there is no series of intermediates that natural selection could have followed gently up the back slope of Mount Impossible. The little proton-driven motor known as ATP synthase consists of eight different subunits, totalling more than 20 polypeptide chains, and is an order of magnitude smaller than the bacterial flagellar motor,50 which is equally impossible for evolutionists to explain.

http://www.tandfonline.com/doi/pdf/10.4161/bioa.23301
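For reference, the proton-motive force that such a gradient supplies is given by the standard textbook (chemiosmotic) relation, where $\Delta\psi$ is the electrical potential across the membrane and $\Delta\mathrm{pH}$ the pH difference:

\Delta p \;=\; \Delta\psi \;-\; \frac{2.303\,RT}{F}\,\Delta\mathrm{pH}, \qquad \frac{2.303\,RT}{F} \approx 59\ \mathrm{mV}\ \text{at}\ 25\,^{\circ}\mathrm{C}

Without the CF1 head to harness it, protons leaking through CF0 would simply dissipate this force.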





Otangelo


Admin

The transport of proteins into mitochondria requires an interdependent, interlocked, irreducibly complex system and an advanced transport and communication system. This is pretty strong evidence that a planning intelligence is required to set it up.

The Transport of Proteins into Mitochondria is an interdependent complex system

http://reasonandscience.heavenforum.org/t2157-the-transport-of-proteins-into-mitochondria-is-a-interdependent-complex-system#3874

A critically important macromolecule, arguably “second in importance only to DNA”, is ATP. As far as is known, all organisms from the simplest bacteria to humans use ATP as their primary energy currency. ATP contains the purine base adenine and the sugar ribose, which together form the nucleoside adenosine. Adenine is one of the most important organic molecules for life as we know it today. "Adenine synthesis is perhaps the best example of an irreducibly complex system that can be found in life ..." The process doesn't work unless all 11 enzymes are present.

http://reasonandscience.heavenforum.org/t2137-atp-the-energy-currency-for-the-cell#3825

Adenine would never accumulate in any kind of "prebiotic soup".

http://reasonandscience.heavenforum.org/t2028-origin-of-the-dna-double-helix#3435

In eukaryotes the mitochondria produce most of the cell’s ATP (anaerobic glycolysis also produces some). The systems most frequently mentioned as irreducibly complex, such as the flagellum, have about 40 essential proteins.

ATP is only one of hundreds of thousands of essential molecules in eukaryotic cells. That makes the cell a huge, enormous, unimaginably irreducible, interlocked, interdependent nanofactory of incredible complexity. A few essential proteins and molecules are mentioned here:

http://reasonandscience.heavenforum.org/t2085-essential-parts-proteins-enzymes-organelles-and-functions-in-the-cell?highlight=essential

This list is expected to grow upon further investigation. We know of only four basic methods of producing ATP: in bacterial cell walls, in the cytoplasm by glycolysis, in chloroplasts, and in mitochondria. No transitional forms exist to bridge these four methods by evolution. According to the concept of irreducible complexity, these ATP-producing machines must have been manufactured as functioning units; they could not have evolved by Darwinian mechanisms. Anything less than an entire ATP molecule will not function, and a manufacturing plant which is less than complete cannot produce functioning ATP. Some believe that the field of biochemistry, which has achieved this understanding, has already falsified the Darwinian worldview (Behe, 1996).

It certainly looks like the numerous enzymes and carrier proteins needed for cellular respiration demonstrate irreducible complexity. Not only does there have to be enough of each of the enzymes and carrier proteins present, but they must also work in the right order and be effective enough as well. A chain is only as strong as its weakest link, and a machine is only as efficient as its slowest part. Given what we know about how life actually works and how easily it dies when it doesn't have enough energy, it is evident that for cellular respiration to have developed naturally within living organisms that could reproduce, several simultaneous innovations would have been required.

Some scientists have argued that the positions of intelligent design and irreducible complexity are arguments from ignorance which lack enough imagination. I would submit that the concerns put forth above are based, not on ignorance, but on what we actually do know about how life actually works and how easily it dies. Just as a car can die from not having enough gas for energy, or oil for seizing parts, or antifreeze for engine overheating, so too, all physicians know that there are many different pathways to death. If you really want to begin to understand how life came into existence, you first have to understand how easily it can become non-existent.


The Transport of Proteins into Mitochondria

Mitochondria and chloroplasts are double-membrane-enclosed organelles. They specialize in ATP synthesis, using energy derived from electron transport and oxidative phosphorylation in mitochondria and from photosynthesis in chloroplasts. Although both organelles contain their own DNA, ribosomes, and other components required for protein synthesis, most of their proteins are encoded in the cell nucleus and imported from the cytosol.

If the endosymbiosis theory were true, why would the proteins not continue to be encoded and produced entirely inside the mitochondrion?

Most organelle proteins are synthesized in the cytoplasm from nuclear encoded mRNAs. These proteins must be imported into mitochondria. Special sequences, called signal sequences, target the protein to its proper organelle. Organelles contain protein translocator complexes that are required for this transport.

Key players in this process are proteins, a signal sequence, chaperonins, ATP, protein translocator complexes, and signal peptidase.

In order for proteins required inside mitochondria to be able to arrive at their destination, the following is required:

All the machinery to synthesize mRNAs
the cytoplasm and its container (the cell membrane)
the ribosome to make proteins
proteins
signal sequences
chaperonins
ATP
protein translocator complexes required for the transport
signal peptidase
and the organelle (the mitochondrion) into which the protein is transported

If any of these is missing, nothing works. That is an irreducible, interlocked, and interdependent system, which indicates that all the organelles and machinery had to emerge simultaneously. A separate, independent, stepwise origin is not possible.
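To make the "nothing works if any part is missing" claim concrete, here is a minimal illustrative sketch in Python. The component names are just labels taken from the list above; this is a toy dependency check, not a biological simulation:

# Toy illustration (hypothetical labels, not a biological simulation):
# mitochondrial protein import "succeeds" only if every component is present.

REQUIRED = {
    "mRNA synthesis machinery", "cytoplasm and cell membrane", "ribosome",
    "protein", "signal sequence", "chaperonins", "ATP",
    "protein translocator complex", "signal peptidase", "mitochondrion",
}

def import_succeeds(available: set) -> bool:
    """True only when no required component is missing."""
    return REQUIRED.issubset(available)

print(import_succeeds(REQUIRED))                         # True: all parts present
print(import_succeeds(REQUIRED - {"signal peptidase"}))  # False: one missing part stops everything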

Proteins destined for transport into mitochondria contain a signal sequence. This sequence acts as a targeting mechanism to ensure the protein is delivered to the proper organelle.

Most signal-relay stations we know about were intelligently designed. Signal without recognition is meaningless.  Communication implies a signalling convention (a “coming together” or agreement in advance) that a given signal means or represents something: e.g., that S-O-S means “Send Help!”   The transmitter and receiver can be made of non-sentient materials, but the functional purpose of the system always comes from a mind.  The mind uses the material substances to perform an algorithm that is not itself a product of the materials or the blind forces acting on them.  Signal sequences may be composed of mindless matter, but they are marks of a mind behind the intelligent design.

In the mitochondrial electron-transport chain, six different cytochrome hemes, eight iron–sulfur clusters, three copper atoms, a flavin mononucleotide (another electron-transfer cofactor), and ubiquinone work in a defined sequence to carry electrons from NADH to O2. In total, this pathway involves more than 60 different polypeptides arranged in three large membrane protein complexes, each of which binds several of the above electron-carrying cofactors.

http://reasonandscience.heavenforum.org/t2131-the-mitochondrion#3818

And above all ATP synthase, a nanomotor par excellence:

http://reasonandscience.heavenforum.org/t1439-atp-synthase

and NADH dehydrogenase (Complex I). Visualize an old locomotive train roaring down the tracks. One of the characteristic images that surely comes to mind is the oscillating motion of the coupling rods on the wheels. The long rods that connected the wheels provided a way to convert heat energy from the steam into mechanical energy. It now appears that we have trillions of mechanical devices similar to those coupling rods. They serve to transmit the energy in the food we eat into mechanical energy, driving a proton pump inside the mitochondrion. It’s all part of an amazing series of electromechanical machines in the powerhouses of the cell. A special type of “transmission element”, which is not known from any other protein, appears to be responsible for the energy transduction within the complex by mechanical nanoscale coupling. Transferred to the technical world, this could be described as power transmission by a coupling rod, which connects, for instance, the wheels of a steam train.

http://reasonandscience.heavenforum.org/t2140-nadh-dehydrogenase-complex-i-in-mitochondria

Did life really come about solely by random chemicals coming together to form cells, then simple organisms, and then complex ones like us? In other words, without a mind at work to make it happen? Do you think that the over twenty different enzymes and carrier proteins, each consisting of over 300 amino acids, just happened to come together in a specific pathway, called cellular respiration, to provide our cells with the energy they need to live? No, when it comes to the origin of life it seems to me that Science still has a lot of explaining to do. Meanwhile, as we wait for evolutionary biologists to admit the deficiencies within their theory, our children and the whole world continue to be misled!

http://reasonandscience.heavenforum.org/t2150-the-electron-transport-chain






Otangelo


Admin

Rubisco's amazing evidence of design  1

http://reasonandscience.heavenforum.org/t1554-the-rubisco-enzymes-amazing-evidence-of-design

Rubisco is the most important enzyme on the planet


Virtually all the organic carbon in the biosphere derives ultimately from the carbon dioxide that this enzyme fixes from the atmosphere. Without it, advanced life would not be possible, and we would not be able to debate our origins. All inquiry into whether we are ultimately the result of a powerful creator or just random natural chemical reactions and emergent properties of lifeless matter, and whether biodiversity is due to evolution or an intelligent designer, is second to the inquiry of how Rubisco came to be. Through my research I gained remarkable insight into Rubisco's complex structure, functioning, and synthesis process: how many cell parts, enzymes, proteins, and pathways are involved and required to assemble it; how the unfinished subunits require co- and post-translational modifications and specific proteins that act like assembly robots in the manufacturing process; the sophisticated pathways and mechanisms of protein import and targeting in chloroplasts through large multiprotein translocon complexes in the stroma; and advanced protein communication and information systems. All this is of bewildering complexity, where dozens of individual, interconnected, and finely tuned parts are required: a web of interlocked, extremely complex, advanced molecular machines where, if one is missing, nothing works, and whose structures, mechanisms, and functions defied the intelligence of the best scientists for decades. Could all this be due to natural processes?

RuBisCO is a multi-subunit plant protein essential to photosynthesis.  It catalyzes the primary chemical reaction by which inorganic carbon enters the biosphere. In the C3 pathway, RuBisCO is responsible for initiating the first step in carbon dioxide fixation, a process by which atmospheric carbon dioxide is converted by plants to energy-rich molecules such as glucose. This  step of the Calvin Cycle plays a crucial role in providing energy for the cell.

http://reasonandscience.heavenforum.org/t2164-the-calvin-benson-cycle

Rubisco is also the most abundant enzyme on earth. It is present in every plant and photosynthetic organism, from the smallest cyanobacteria and plankton to palm trees and giant sequoias. Rubisco is a complex composed of eight large subunits and eight small subunits.

Newly synthesized RuBisCO does not have a fully functional active site. It needs to be activated by a CO2 molecule that carbamylates its catalytic Lys to bind Mg2+, which completes the activation process. The carboxylation involves at least four, perhaps five discrete steps and at least three transition states.

The origin of these highly specific, regulated, and coordinated steps, which are essential for the activation of Rubisco, is best explained by a planning mind which set it all up. Natural mechanisms are extremely unlikely to be capable of producing these sophisticated metabolic multistep pathways and assembly lines to make Rubisco in the first place. No wonder no mainstream scientific papers are able to provide compelling evolutionary scenarios. As long as the enzyme is not fully functional, nothing works, and ultimately, advanced life on earth would not be possible. How did the correct insertion of the correct metal cation Mg2+, surrounded by three H2O/OH molecules, emerge? Trial and error? The genome needs the right information to get the right materials, the right shape and quantity of each subunit, co-factors, and metal clusters; to position them at the right active site; and to mount these parts in the right order. That seems to me to be explained in a compelling manner only by the wise planning of a super-intelligent engineer, who knew how to invent and build this highly sophisticated and complex machine and make it fully functional right from scratch. A stepwise, unguided emergence seems extremely unlikely. This mechanism seems to be the result of an intelligence, which set it all up through power, will, and information.

The eight large subunits of Rubisco are coded by the chloroplast DNA, and the eight small subunits by nuclear DNA. The small subunit of Rubisco and all the other Calvin cycle enzymes are encoded by nuclear genes and must be transported to the chloroplast after their synthesis in the cytosol.

http://reasonandscience.heavenforum.org/t2165-pathways-and-mechanisms-of-protein-import-and-targeting-in-chloroplasts

The precursor forms of these stromal proteins contain an N-terminal stromal-import sequence. This transit peptide allows transfer of the small subunits synthesized in the cytosol through the chloroplast envelope translocon complexes into the plastid. These are highly complex molecular gates in the chloroplast inner and outer membranes, which filter which molecules go in. After the unfolded precursor enters the stromal space, it binds transiently to a stromal Hsc70 chaperone, and the N-terminal sequence is cleaved.

Folding of the small and large Rubisco subunit proteins is mediated by the amazing GroEL–GroES chaperonin system. Protein folding mediated by chaperonins is the process by which newly synthesized polypeptide chains acquire the three-dimensional structures necessary for biological function. For many years, protein folding was believed to occur spontaneously. But it has become apparent that large proteins frequently fail to reach the native state, forming nonfunctional aggregates instead. They need the aid of these sophisticated barrel-shaped proteins.

http://reasonandscience.heavenforum.org/t1437-chaperones?highlight=chaperones

That raises interesting questions: How should and could unguided natural mechanisms foresee the necessity of chaperones in order to reach a specific goal, that is, the right, precise three-dimensional folding resulting in functional proteins to make living organisms? Non-living matter has no natural "drive" or purpose or goal to become living. The making of proteins to create life, however, is a multistep process of many parallel, complex metabolic pathways and production-line-like processes to make proteins and other life-essential products like nucleotides, amino acids, lipids, carbohydrates, etc. The right folding of proteins is just one of several essential processes needed to get a functional protein. But a functional protein on its own has no function unless correctly embedded, through the right assembly sequence and order, at the right functional place.

Eight S subunits combine with the eight L subunits to yield the active Rubisco enzyme. At least three chloroplast outer-membrane proteins, including a receptor that binds the stromal-import sequence and a translocation channel protein, and five inner-membrane proteins are known to be essential for directing proteins to the stroma. Import into the stroma depends on ATP hydrolysis catalyzed by a stromal Hsc70 chaperone. Chloroplasts cannot generate an electrochemical gradient (proton-motive force) across their inner membrane. Thus protein import into the chloroplast stroma is powered solely by ATP hydrolysis. Within the stroma, the S-subunits undergo further posttranslational modification (transit peptide cleavage, Met1 αN-methylation) prior to assembly into final L8S8 Rubisco complexes. How did natural evolutionary processes find out how to do it? Trial and error?

In order to make and assemble Rubisco, at least 25 parts, most of them essential and irreducible, are directly involved in Rubisco function, activation, and synthesis:

http://reasonandscience.heavenforum.org/t1554-the-rubisco-enzymes-amazing-evidence-of-design#3899

Could these parts, proteins, enzymes, etc. have evolved separately and gradually? What about the RbcX assembly chaperone, specifically used as an assembly tool for Rubisco? What about the barrel-shaped GroEL/GroES chaperonins, which perform their function with extremely impressive simplicity and elegance, namely helping over 100 different proteins to get into their correct shape and form, essential for function (in our case, helping the Rubisco RbcL subunits to get their proper shape)?

These chaperone systems are themselves made of proteins which also require the assistance of chaperones to correctly fold and to maintain integrity once folded. Chaperones for chaperones in fact. The very simplest of cells that we know of have these systems in place. 

Or how do proponents of evolution explain how natural selection would have favoured the emergence of Hsp70 chaperones, central components of the cellular network, proteins which assist a large variety of protein folding processes in the cell by transient association of their substrate-binding domain with short hydrophobic peptide segments within their substrate proteins? In our case, their function is to prevent a still-useless Rubisco small subunit from folding outside the chloroplast. They are made, used during the synthesis process, and once Rubisco assembly has finished, these enzymes are discarded. This is very much a factory-like production and assembly-line process, using fully automated and programmed nanorobot-like molecular machines, namely enzymes. Most parts, if missing, render 1. the assembly of Rubisco impossible, and 2. Rubisco useless. Many parts, if missing, render it not fully functional and defective. Apart from the enzymes that have uses in other biological systems, there would be no reason to make them unless all the other parts were there too, along with the assembly instructions for Rubisco. As an analogy: if you had to implement a car factory, why would you build the assembly line for a piston if you did not have the precise instructions to make 1. the car as a whole, and 2. the precise shape of and materials required for the piston in particular, and how to mount it in the motor? That is precisely what happens in the cell. Evolution has no consciousness, no foresight, and no intelligence. But precisely that is required for PLANNING and the making of blueprints. I cannot create a machine without the precise drawings and project information in advance, which are required to make 1. the assembly tools, 2. the subparts, and 3. the whole machine.

How do proponents of evolution explain how natural selection would have favoured a protein complex the function of which was to prevent a still-useless Rubisco small subunit from folding outside the chloroplast? Before it evolved a way to get the protein inside, there would be no benefit from keeping it unfolded outside. How could blind chance ‘know’ it needed to cause large subunit polypeptides to fold ‘correctly’ and to keep them from clumping? It could not ‘anticipate’ the ‘correct’ conformation before the protein became useful. And evolution would need to be clever indeed to chemically modify something not yet useful so that it could be folded ‘correctly’ when even the ‘correctly’ folded polypeptide would not yet become useful.

Only a designer would know why it would be necessary to produce a specialized protease, target it to the chloroplast, and program it to clip off the targeting sequence of the small subunit at just the right place. And what about the assembly of a collection of meaningless rubisco parts in just one certain way? In order to design a sophisticated set of tools to make something else useful in the future that had, as yet, no function, evolution (as ‘designer’) would have had to have detailed knowledge of the future usefulness of the protein it was so cleverly engineering. If evolution managed to generate any one of these chaperone protein complexes (and it would not), it would still be useless for generating rubisco unless all the other chaperones were also present. Without any one of them, the sixteen-unit complex could not be generated.

That totally destroys the theory of evolution. As argued above, unguided natural mechanisms have no foresight, and non-living matter has no drive, purpose, or goal to become living; a functional protein on its own has no function unless correctly embedded at the right functional place. That is precisely the problem for evolution: there is no foresight. So why would evolution produce an assembly chaperone enzyme to make Rubisco? You don't build a robot for an assembly line if the end product is not known.






Otangelo


Admin

The amazing fatty acid synthase nano factories, and origin of life scenarios

http://reasonandscience.heavenforum.org/t2168-the-amazing-fatty-acid-synthase-nano-factories-and-origin-of-life-scenarios

The four basic categories of molecules for building life are carbohydrates, lipids, proteins, and nucleic acids. Here we will take a closer look at fatty acids, constituents of lipids, and their biosynthesis.

Lipids (‘fats’) are essential for the formation of a cell membrane that contains the cell contents, as well as for other cell functions. The cell membrane, comprised of several different complex lipids, is an essential part of a free-living cell that can reproduce itself.

Lipids have much higher energy density than sugars or amino acids, so their formation in any chemical soup is a problem for origin-of-life scenarios (high-energy compounds are thermodynamically much less likely to form than lower-energy compounds). Fatty acids are hydrocarbon chains of various lengths. The ability to synthesize a variety of lipids is essential to all organisms. Fatty acid synthesis requires the oxidation of the cofactor NADPH.
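For reference, the standard textbook stoichiometry for building one palmitate (C16 fatty acid) from acetyl-CoA makes the NADPH (and ATP) requirement explicit:

8\,\text{acetyl-CoA} + 7\,\mathrm{ATP} + 14\,\mathrm{NADPH} + 14\,\mathrm{H}^{+} \longrightarrow \text{palmitate} + 8\,\mathrm{CoA} + 7\,\mathrm{ADP} + 7\,\mathrm{P_i} + 14\,\mathrm{NADP}^{+} + 6\,\mathrm{H_2O}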

The major source of NADPH in animals and other non-photosynthetic organisms is the pentose phosphate pathway. Due to the complexity of the metabolic pathways, it has been argued that metabolism‐like chemical reaction sequences are unlikely to be catalysed by simple environmental catalysts.


This constitutes a serious problem for naturalistic explanations of the origin of life. The pentose phosphate pathway requires seven enzymes and is interdependent with glycolysis, since the starting molecule for the pentose phosphate pathway is glucose-6-phosphate, the second intermediate metabolite in glycolysis.

Eukaryotic cells face a dilemma in providing suitable amounts of substrate for fatty acid synthesis. Sufficient quantities of acetyl-CoA, malonyl-CoA, and NADPH must be generated in the cytosol for fatty acid synthesis. Malonyl-CoA is made by carboxylation of acetyl-CoA, so the problem reduces to generating sufficient acetyl-CoA and NADPH. There are three principal sources of acetyl-CoA. The acetyl-CoA derived from amino acid degradation is normally insufficient for fatty acid biosynthesis, and the acetyl-CoA produced by pyruvate dehydrogenase and by fatty acid oxidation cannot cross the mitochondrial membrane to participate directly in fatty acid synthesis. Instead, acetyl-CoA is linked with oxaloacetate to form citrate, which is transported from the mitochondrial matrix to the cytosol by citrate carriers (CiC), nuclear-encoded proteins located in the mitochondrial inner membrane, members of the mitochondrial carrier family. Biosynthesis of oxaloacetate requires malate dehydrogenase enzymes or, in plants, pyruvate carboxylase enzymes.

So all these listed functional units and substrates are required in the synthesis process. They are essential, constituting an interdependent, interlocked system of the cell.

As Bruce Alberts said in 1998, the biology of the future was going to be the study of molecular machines: “the entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines.”  One of those machines is like a mini-factory in itself. It’s called fatty acid synthase.

The first step of fatty acid biosynthesis requires the participation of malonyl-CoA, a three-carbon intermediate. The formation of malonyl-CoA from acetyl-CoA is an irreversible process, catalyzed by acetyl-CoA carboxylase, a multifunctional protein with three subunits, which is carefully regulated.

In the second step, fatty acid synthase (FAS) proteins come into action. These are the little heroes of this article. FAS's most striking feature is its “high degree of architectural complexity”: some 48 active sites, complete with moving parts.

Which organism has one of the most elaborate fatty-acid machines of all?  The surprising answer: fungi. 
Perhaps the most striking feature of fungal FAS is its high degree of architectural complexity, in which 48 functional centers exist in a single ... particle.  Detailed structural information is essential for delineating how this complex particle coordinates the reactions involved in many steps of synthesis of fatty acids.... The six alpha subunits form a central wheel in the assembly, and the beta subunits form domes on the top and bottom of the wheel, creating six reaction chambers within which each Acyl Carrier Protein (ACP) can reach the six active sites through surprisingly modest movements.

The crystal structure of yeast FAS reveals that this large, macromolecular assembly functions as a six-chambered reactor for fatty acid synthesis.  Each of the six chambers functions independently and has in its chamber wall all of the catalytic units required for fatty acid priming, elongation, and termination, while one substrate-shuttling component, ACP, is located inside each chamber and functions like a swinging arm.  Surprisingly, however, the step at which the reactor is activated must occur before the complete assembly of the particle since the PPT domain that attaches the pantetheine arm to ACP lies outside the assembly, inaccessible to ACP that lies inside.  Remarkably, the architectural complexity of the FAS particle results in the simplicity of the reaction mechanisms for fatty acid synthesis in fungi.

Imagining this level of precision and master-controlled processing on a scale this small cannot help but induce a profound sense of wonder and awe. Here, all this time, this machine has been helping to keep living things functioning, and we didn’t even know the details till now.

The fatty acids are useless without the amino acids, and vice versa. Even if some kind of metabolic cycle were to be envisioned under semi-realistic conditions, how did this elaborate machine, composed of amino acids with precise charge distributions, arise? It’s not just the machine, it’s the blueprints and construction process that must be explained. What blind process led to the precise placement of active sites that process their inputs in a programmed sequence? What put them into a structure with shared walls where six reaction chambers can work independently? All this complexity, involving thousands of precision amino acids in FAS, has to be coded in DNA, then built by the formidably complex translation process, then assembled together in the right order, or FAS won’t work. But the storage, retrieval, translation, and construction systems all need the fatty acids, too, or they won’t work.

We are witnessing an interdependent system of mind-boggling complexity that defies any explanation besides intelligent design.  Yes, Bruce Alberts, “as it turns out, we can walk and we can talk because the chemistry that makes life possible is much more elaborate and sophisticated than anything we students had ever considered.”  We have tended to “vastly underestimate the sophistication of many of these remarkable devices.”

The closer they look, the more wondrous the cell gets.  Who would have thought that the requirement to make these fatty acids would require machinery with moving parts and reaction chambers?  Who would have imagined their surfaces would be covered with complex proteins that regulate the production inside?  Who would have realized that fat was so important, the cell had complex assembly plants to build it?  Fat is almost a mild cussword in our vocabulary, but it is another class of molecular building blocks we couldn’t live without.  Fats, sugars, proteins and nucleic acids all work together in life, from humans to lowly fungi.  Each class of molecules has immense variety, each is essential, and each is manufactured to spec by precision machinery.  What a wonderful post-Darwinian world.

How do origin-of-life researchers envision the rise of these hypercomplex nanofactories and assembly lines to make fatty acids? The scientific paper The Lipid World says:

Self-assembly of amphiphilic molecules into complex supramolecular structures is spontaneous. The plausibility that such structures were present in the prebiotic environment is supported by the occurrence of amphiphilic molecules in carbonaceous meteorites and the demonstration that they can assemble into membrane vesicles. 

This paper shows the helplessness of proponents of a natural prebiotic origin of lipids. There is a huge gap between the above explanation and the rise of hypercomplex multienzymatic proteins, which produce fatty acids through advanced, regulated, precise, coordinated, multistep, factory-assembly-line-like robotic procedures.

I conclude that the making of essential fatty acids, ingredients of cell membranes, requires interdependent, irreducibly complex procedures: several different metabolic pathways to make the substrates and produce the energy used in the process, several enzymes, and the whole machinery to make the assembly proteins and enzymes. Since this constitutes a complex, interlocked process, it could not have arisen in a step-by-step evolutionary manner. Fatty acids, constituents of the cell membranes, had to exist right from the start for life to arise. This fact makes the design inference the most rational one.


The following parts are involved directly or indirectly in fatty acid synthesis, and must exist in order for fatty acids to be synthesized:

the cytosol
NADPH

enzymes of the pentose phosphate pathway:

Glucose-6-phosphate dehydrogenase
6-phosphogluconolactonase
Phosphogluconate dehydrogenase
Ribose-5-phosphate isomerase
Phosphopentose epimerase
Transketolase
Transaldolase

of the glycolysis pathway, at least: hexokinase enzymes

oxaloacetate
phosphopantetheinyl transferases
citrate
mitochondria
the citrate carrier (CiC)
the nucleus
malate dehydrogenase enzymes or pyruvate carboxylase enzymes
acetyl-CoA carboxylase enzymes
Acyl Carrier Proteins
FAS (fatty acid synthase) proteins
the citric acid cycle
ATP





Otangelo


Admin

Major metabolic pathways and their inadequacy for origin of life proposals

According to geneticist Michael Denton, the break between the nonliving and the living world ‘represents the most dramatic and fundamental of all the discontinuities of nature.’
And John Lennox writes in his book Has Science Buried God?:

It is hard for us to get any kind of picture of the seething, dizzyingly complex activity that occurs inside a living cell, which contains within its lipid membrane maybe 100 million proteins of 20,000 different types and yet the whole cell is so tiny that a couple of hundred could be placed on the dot in this letter ‘i’.

The meaning of the genetic code is also virtually identical in all cells. The size, structure and component design of the protein synthetic machinery is practically the same in all cells. In terms of their basic biochemical design, therefore, no living system can be thought of as being primitive or ancestral with respect to any other system, nor is there the slightest empirical hint of an evolutionary sequence among all the incredibly diverse cells on earth.’

This view is supported by Nobel Prize-winner Jacques Monod, whom Denton cites. ‘We have no idea what the structure of a primitive cell might have been. The simplest living system known to us, the bacterial cell… in its overall chemical plan is the same as that of all other living beings. It employs the same genetic code and the same mechanism of translation as do, for example, human cells. Thus the simplest cells available to us for study have nothing “primitive” about them… no vestiges of truly primitive structures are discernible.’ Thus the cells themselves exhibit a similar kind of ‘stasis’ to that referred to in the previous chapter in connection with the fossil record.

It is interesting to try to figure out what the supposed last universal common ancestor (LUCA) was, in order to understand what kind of biochemical mechanisms, metabolism, enzymes, co-factors, proteins, and genome information would have to be explained, and their origin.

From a biochemist’s perspective, life at the cellular level can be defined as a network of integrated and carefully regulated metabolic pathways, each contributing to the sum of activities that a cell must carry out. Cellular metabolism is a complex process involving about a thousand chemical reactions catalyzed by globular proteins, enzymes.

In the scientific paper The Enzymatic and Metabolic Capabilities of Early Life, the authors state that several independent studies have used comparative bioinformatics methods to identify taxonomically broad features of genomic sequence data, protein structure data, and metabolic pathway data in order to predict physiological features that were present in early, ancestral life forms: “We survey modern metabolic pathways to identify those that maintain the highest frequency of metaconsensus enzymes. Using the full set of modern reactions catalyzed by these metaconsensus enzyme functions, we reconstruct a representative metabolic network that may reflect the core metabolism of early life forms.”

Their research revealed the mind-blowing complexity of LUCA and its metabolic pathways:

http://reasonandscience.heavenforum.org/t2174-the-enzymatic-and-metabolic-capabilities-of-early-life

According to another research paper, Evolution of the First Metabolic Cycles, there are two alternatives concerning the origin of life: the origin may be either heterotrophic or autotrophic. The paper Analysis of the Intermediary Metabolism of a Reductive Chemoautotroph gives an idea of the complexity involved:

http://reasonandscience.heavenforum.org/t2147-the-naturalistic-approach-of-origin-of-life-scenarios

No wonder the authors of the paper How Life Began: The Emergence of Sparse Metabolic Networks openly admit that "The process by which the network of extant metabolism emerged is one of the major puzzles in the origin of life field." Another paper admits that "An open question for scientists is when and how cellular metabolism, the network of chemical reactions necessary to produce nucleic acids, amino acids and lipids, the building blocks of life, appeared on the scene." The pathways for synthesis of most of the twenty amino acids used in proteins and the four nucleotides used in RNA are identical or nearly identical in Archaea, Bacteria, and eukaryotes, suggesting that these pathways were inherited from the LUCA's metabolic network. Thus, it appears that the LUCA had the ability to synthesize the critical building blocks of life and did not rely on exogenous sources of these compounds. This supposition is supported by bioinformatic reconstructions of the genome of the LUCA. Biosynthetic pathways in extant organisms clearly resemble those in the LUCA. In the scientific paper In the Ancient Ocean, Did Metabolism Precede the Origin of Life?

http://reasonandscience.heavenforum.org/t2004-major-metabolic-pathways-and-their-inadequacy-for-origin-of-life-proposals

the author writes:

The observed chemical reactions occurred in the absence of enzymes but were made possible by the chemical molecules found in the Archean sea. Finding a series of reactions that resembles the "core of cellular metabolism" suggests that metabolism predates the origin of life. This implies that, at least initially, metabolism may not have been shaped by evolution but by molecules like RNA formed through the chemical conditions that prevailed in the earliest oceans.

Whether and how the first enzymes adopted the metal-catalyzed reactions described by the scientists remain to be established.

The huge gap is easily observable between these just-so, almost helpless attempts to explain the origin of essential metabolic pathways and the complexity of those pathways observed even in the simplest cells.

This made the leading origin-of-life researcher Leslie Orgel say the following:

The Implausibility of Metabolic Cycles on the Prebiotic Earth
Leslie E Orgel†

http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0060018

Almost all proposals of hypothetical metabolic cycles have recognized that each of the steps involved must occur rapidly enough for the cycle to be useful in the time available for its operation. It is always assumed that this condition is met, but in no case have persuasive supporting arguments been presented. Why should one believe that an ensemble of minerals that are capable of catalyzing each of the many steps of the reverse citric acid cycle was present anywhere on the primitive Earth, or that the cycle mysteriously organized itself topographically on a metal sulfide surface? The lack of a supporting background in chemistry is even more evident in proposals that metabolic cycles can evolve to “life-like” complexity. The most serious challenge to proponents of metabolic cycle theories—the problems presented by the lack of specificity of most nonenzymatic catalysts—has, in general, not been appreciated. If it has, it has been ignored. Theories of the origin of life based on metabolic cycles cannot be justified by the inadequacy of competing theories: they must stand on their own.





Otangelo


Admin

All cellular functions are  irreducibly complex 

http://reasonandscience.heavenforum.org/t2179-all-cellular-functions-are-irreducibly-complex

Prokaryotes are thought to differ from eukaryotes in that they lack membrane-bounded organelles. However, it has been demonstrated that there are bacteria which have membrane-bound organelles named acidocalcisomes, and that V-H+PPase proton pumps are present in their surrounding membranes. Acidocalcisomes have been found in organisms as diverse as bacteria and humans. Volutin granules, which are the equivalent of acidocalcisomes, also occur in Archaea and are therefore present in the three superkingdoms of life (Archaea, Bacteria, and Eukarya). These volutin granule organelles occur in organisms spanning an enormous range of phylogenetic complexity, from Bacteria and Archaea to unicellular eukaryotes to algae to plants to insects to humans. According to neo-Darwinian thinking, the universal distribution of the V-H+PPase domain suggests the domain and the enzyme were already present in the Last Universal Common Ancestor (LUCA).

http://reasonandscience.heavenforum.org/t2176-lucathe-last-universal-common-ancestor#3992

If the proton pumps of volutin granules were present in LUCA, they had to emerge prior to self-replication, which imposes serious constraints on proposing evolution as the driving factor. But if evolution was not the mechanism, what else was? There is not much left: chance, random chemical reactions, or physical necessity.

But let us for an instant accept the "fact of evolution" and suppose it to be the driving force that made V-H+PPase proton pumps. In some period prior to the transition from non-life to life, natural selection or another evolutionary mechanism would have had to start polymerisation of the right amino acid sequence to produce V-H+PPase proton pumps by addition of one amino acid monomer to the next. First, the whole extraordinary production line of staggering complexity, starting with DNA, would have to be in place, that is:

The cell sends activator proteins to the site of the gene that needs to be switched on, which then jump-start the RNA polymerase machine by removing a plug which blocks the DNA's entrance to the machine. The DNA strands shift position so that the DNA lines up with the entrance to the RNA polymerase. Once these two movements have occurred and the DNA strands are in position, the RNA polymerase machine gets to work melting them out, so that the information they contain can be processed to produce mRNA. 2 The process then proceeds through the following stages:

INITIATION OF TRANSCRIPTION through RNA polymerase enzyme complexes;
CAPPING of the mRNA through post-transcriptional modifications by several different enzymes;
ELONGATION, the main transcription process from DNA to mRNA;
SPLICING and CLEAVAGE;
POLYADENYLATION, where a long string of repeated adenosine nucleotides is added;
TERMINATION, through over a dozen different enzymes;
EXPORT FROM THE NUCLEUS TO THE CYTOSOL (the mRNA must be actively transported through the Nuclear Pore Complex channel in a controlled process that is selective and energy-dependent);
INITIATION OF PROTEIN SYNTHESIS (TRANSLATION) in the ribosome, an enormously complex process;
COMPLETION OF PROTEIN SYNTHESIS AND PROTEIN FOLDING through chaperone enzymes.

From there the proteins are transported by specialized proteins to their end destination. Most of these processes require ATP, the energy fuel inside the cell.
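As a rough, illustrative Python sketch of only the informational core of this pipeline (transcription and translation; capping, splicing, export, folding, and transport have no analogue in these few lines), using a toy four-codon subset of the standard genetic code:

# Minimal sketch of the informational core of gene expression:
# DNA coding strand -> mRNA -> peptide. Real cells additionally require
# capping, splicing, polyadenylation, export, initiation factors,
# chaperone-assisted folding, and targeted transport.
CODON_TABLE = {  # tiny subset of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """Transcription: the mRNA mirrors the coding strand, with U replacing T."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    """Translation: read codons in frame until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

gene = "ATGTTTGGCAAATAA"            # Met-Phe-Gly-Lys, then stop
print(translate(transcribe(gene)))  # ['Met', 'Phe', 'Gly', 'Lys']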

http://reasonandscience.heavenforum.org/t2067-there-is-no-selective-advantage-until-you-get-the-final-function?highlight=function

The genetic code to make the right ~600-amino-acid sequence would have to be made by mutation and natural selection. But mutation of what, if there was no functional protein yet? The problem at this stage is that when there is no selective advantage until you get the final function, the final function doesn't evolve. In other words, a chain of around 600 amino acids is required to make a functional V-H+PPase proton pump, but there is no function until polymerisation of all 600 monomers is completed and the right sequence achieved.

The problem for those who accept the truth of evolution is that they must assume that no biological structure with a beneficial function, however complex, is very far removed from the next closest functional system or subsystem within "sequence space" that might be beneficial if it were ever found by random mutations of any kind. In our case the situation is even more drastic, since a de novo genetic sequence, and subsequently a de novo amino acid chain, is required for the formation of a new protein. A further constraint is the fact that 100% of the amino acids used and needed for life are left-handed, while DNA and RNA require D-sugars. To this day, science has not sorted out how nature is able to select the right chiral handedness. The problem is that the prebiotic soup is believed to be a warm soup consisting of racemic mixtures of amino acid enantiomers (and sugars). How did this homogeneous phase separate into chirally pure components? How did an asymmetry (assumed to be small to start with) arise in the population of both enantiomers? How did the preference of one chiral form over the other propagate so that all living systems are made of 100 percent optically pure components?

What is sequence space?
Imagine 20 amino acids mixed up in a pool, randomly combined, one adjacent to the other. The pool of all the random amino acid sequences is the sequence space. This space can be two-dimensional, three-dimensional, or multidimensional. In evolutionary biology, sequence space is a way of representing all possible sequences (for a protein, gene, or genome). Most sequences in sequence space have no function, leaving relatively small regions that are populated by naturally occurring genes. Each protein sequence is adjacent to all other sequences that can be reached through a single mutation. Evolution can be visualised as the process of sampling nearby sequences in sequence space and moving to any with improved fitness over the current one.
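To make "adjacent through a single mutation" concrete, here is a small illustrative Python sketch: a sequence of length L over the 20 amino acids has 19 × L single-substitution neighbours, while the space as a whole holds 20^L sequences.

# Each sequence is "adjacent" to every sequence reachable by one substitution.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 proteinogenic amino acids

def single_mutation_neighbors(seq: str):
    """Yield all sequences that differ from seq at exactly one position."""
    for i, original in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield seq[:i] + aa + seq[i + 1:]

peptide = "MKT"
neighbors = list(single_mutation_neighbors(peptide))
print(len(neighbors))      # 3 positions x 19 alternatives = 57 neighbours
print(20 ** len(peptide))  # total sequence space for length 3: 8000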

Functional sequences in sequence space
Despite the diversity of protein superfamilies, sequence space is extremely sparsely populated by functional proteins. That is, among all the possible amino acid sequences, only a few permit the making of functional proteins. Most random protein sequences have no fold or function. To exemplify: the target phrase METHINKS IT IS LIKE A WEASEL is 28 characters long; drawn from an alphabet of 26 letters plus the space, there are 27^28, roughly 10^40, possible sequences of that length. But only one is correct.
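The arithmetic behind that figure can be checked directly; a short Python sketch:

# Size of the search space for the target phrase:
# 28 characters, alphabet of 27 symbols (A-Z plus space).
target = "METHINKS IT IS LIKE A WEASEL"
alphabet_size = 27
space = alphabet_size ** len(target)
print(len(target))     # 28
print(f"{space:.2e}")  # ~1.20e+40 possible sequences; only one is the target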

Enzyme superfamilies, therefore, exist as tiny clusters of active proteins in a vast empty space of non-functional sequences. The density of functional proteins in sequence space, and the proximity of different functions to one another, is a key determinant in understanding evolvability.
Protein sequence space has been compared to the Library of Babel, a theoretical library containing all possible books that are 410 pages long. In the Library of Babel, finding any book that made sense was impossible due to the sheer number and lack of order.


How would a bacterium evolve a function like a single-protein enzyme, like a V-H+PPase proton pump? The requirement is about 600 specified residues at minimum. A useful V-H+PPase cannot be made with significantly lower minimum size and specificity requirements. These minimum requirements create a kind of threshold beyond which the V-H+PPase function simply cannot be built up gradually, where very small changes of one or two residues at a time result in a useful change in the degree of the proton pump function. Therefore, such functions cannot have evolved in a gradual, step-by-step manner. There simply is no template or gradual pathway from just any starting point to the minimum threshold requirement. Only after this threshold has been reached can evolution take over and make further refinements, but not until then. There are, in fact, examples of computer evolution that attempt to address this problem.

All Functions are "Irreducibly Complex" 

The fact is that all cellular functions are irreducibly complex in that all of them require a minimum number of parts in a particular order or orientation.  I go beyond what Behe proposes and suggest that even single-protein enzymes are irreducibly complex.  A minimum number of parts, in the form of amino acid residues, is required for them to have their particular functions.  The proton pump function cannot be realized in even the smallest degree with a string of only 5 or 10 or even 500 residues of any arrangement.  Also, not only is a minimum number of parts required for the proton pump function to be realized, but the parts themselves, once they are available in the proper number, must be assembled in the proper order and three-dimensional orientation.  Brought together randomly, the residues, if left to themselves, do not know how to self-assemble into much of anything as far as a functional system goes, even one that comes close to the level of complexity of a relatively simple function like a proton pump.  And yet their specified assembly and ultimate order are vital to function.
Of course, some relatively simple systems, though truly irreducibly complex, have evolved.  This is because the sequence space at such relatively low levels of functional complexity is fairly dense.  It is fairly easy to come across new beneficial sequences if the density of potentially beneficial sequences in sequence space is relatively high.  This density does in fact get higher and higher at lower and lower levels of functional complexity, in an exponential manner.

It is much like moving between 3-letter words in the English language.  Since the ratio of meaningful vs. meaningless 3-letter words in English is somewhere around 1:18, one can randomly find a new meaningful and even beneficial 3-letter word via single random letter changes/mutations in relatively short order.  This is not true for those ideas/functions/meanings that require more and more letters.  For example, the ratio of meaningful vs. meaningless 7-letter words and combinations of smaller words equaling 7 letters is far, far lower, at about 1 in 250,000.  It is therefore quite a bit harder to evolve between 7-letter words, one mutation at a time, than it was to evolve between 3-letter words, owing to the exponential decline in the ratio of meaningful vs. meaningless sequences.
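A quick back-of-the-envelope check of these ratios in Python (the "meaningful word" counts below are the ones implied by the quoted ratios, not counts from an actual dictionary):

# Back-of-the-envelope check of the ratios quoted above.
strings_3 = 26 ** 3          # 17,576 possible 3-letter strings
strings_7 = 26 ** 7          # 8,031,810,176 possible 7-letter strings

print(strings_3 / 18)        # ~976 meaningful 3-letter words implied by 1:18
print(strings_7 / 250_000)   # ~32,000 meaningful 7-letter combinations implied by 1:250,000
print(strings_7 / strings_3) # the space itself grows ~457,000-fold from length 3 to 7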

The same thing is true for the evolution of codes, information systems, and systems of function in living things as for non-living things (e.g., computer systems). The parts of these codes and systems of function, if brought together randomly, simply do not have enough meaningful information to do much of anything. So how are they brought together in living things to form such high-level functional order?

Prior to the origin of the first living cell, all proteins had to be synthesized de novo, that is, from zero. In order to do that, however, all the machinery to make proteins had to be in place. To propose that ribozymes would have done the job, without a template and without coded information, is far-fetched. Besides this, the machinery that makes proteins is itself made of proteins. That's a catch-22 situation. The cell would furthermore have to a) know how to select the right left-handed amino acids in a mixed pool of amino acids, b) select, among innumerable amino acids, just the 20 required for life, and then select each one correctly and bond one to the next in the right sequence. A protein chain cannot evolve from zero without the machinery in place and the right information. Period. Take the proton pump as an example. First, it emerged prior to replication, which rules out evolution as a possible mechanism. Secondly, there is no function until the protein chain is fully formed, with at least 600 amino acid residues each linked correctly to the next, all L-amino acids selected, and the protein folded correctly. Trial and error will simply NEVER provide that result. That is impossible. And this is true for one protein, not to speak of the thousands in the whole immensely complex cell. Take lottery balls in 20 different colors, among them black. Number the balls of each color from 1 to 600, and make each one in two versions, one marked "left" and one marked "right": 24,000 balls in total. Now play the lottery and see how many draws it takes to produce a chain of balls numbered 1 to 600 in sequence, all black and all marked "left". Or put it another way: let us consider a simple protein containing 600 amino acids. There are 20 different kinds of L-amino acids in proteins, and each can be used repeatedly in chains of 600. Therefore, they could be arranged in 20^600 different ways. Would you bet a dime on such odds?
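The closing arithmetic can be verified directly in Python; counting both chiral forms of each amino acid, as in the lottery-ball analogy, only enlarges the space further:

# The combinatorial space for a 600-residue chain of 20 amino acid types.
from math import log10

arrangements = 20 ** 600
print(len(str(arrangements)))  # 781 digits: 20^600 is about 10^780
print(600 * log10(20))         # ~780.6, the base-10 exponent

# Doubling the alphabet to 40 "building blocks" (20 amino acids in two
# chiral forms, as in the lottery-ball analogy) gives 40^600 ~ 10^961.
print(600 * log10(40))         # ~961.2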






Otangelo


Admin

Cyanobacteria, amazing evidence of design

http://reasonandscience.heavenforum.org/t1551-cyanobacteria-amazing-evidence-of-design

The main sources of food and oxygen are cyanobacteria and chloroplasts, which perform photosynthesis. Cyanobacteria are essential for the nitrogen cycle, transforming nitrogen in the atmosphere into a form organisms can use to make the basic building blocks of life. The end product of photosynthesis is glucose, needed as a food source by almost all life forms. For a proponent of the view that life emerged gradually over millions of years, and biodiversity likewise, with cyanobacteria and chloroplasts arriving hundreds of millions of years after life started, that is a huge problem. Without oxygen in the atmosphere, UV radiation would kill the organisms; nor could they emerge without an adequate food source. Looking at everything from that perspective, it makes a lot of sense to believe God created everything in six days, and created the atmosphere with oxygen, the nitrogen cycle fully set up, and organisms like cyanobacteria, essential in the food chain and the nitrogen cycle. That would solve the problem of nutrition, the problem of UV radiation, and the problem of the nitrogen source required for life.

The existence in the same cyanobacterial organism of two conflicting metabolic systems, oxygen-evolving photosynthesis and oxygen-sensitive nitrogen fixation, is a puzzling paradox. Explanations are pure guesswork.

Researchers have long been puzzled as to how the cyanobacteria could make all that oxygen without poisoning themselves. To avoid having their DNA wrecked by the hydroxyl radicals that naturally occur in the production of oxygen, the cyanobacteria would have had to evolve protective enzymes. But how could natural selection have led the cyanobacteria to evolve these enzymes if the need for them did not even exist yet? The explanations are fanciful at best.

Nick Lane describes the dilemma in his book Oxygen: The Molecule That Made the World:
Before cells could commit to oxygenic photosynthesis, they must have learnt to deal with its toxic waste, or they would surely have been killed, as modern anaerobes are today. But how could they adapt to oxygen if they were not yet producing it? An oxygen holocaust, followed by the emergence of a new world order, is the obvious answer; but we have seen that there is no geological evidence to favour such a catastrophic history. In terms of the traditional account of life on our planet, the difficulty and investment required to split water and produce oxygen is a Darwinian paradox.

If there was a reduced atmosphere without oxygen some time back in the past (which is, by the way, quite controversial), then there was no ozone layer; and if there was no ozone layer, ultraviolet radiation would penetrate the atmosphere and destroy amino acids as soon as they were formed. If the cyanobacteria nevertheless overcame that problem (the bacteria on the early earth are supposed to have lived in the water, but that raises other insurmountable problems) and evolved photosynthesis, they would have had to evolve, at the same time, protective enzymes that prevented oxygen from damaging their DNA through hydroxyl radicals. So what evolutionary advantage would there have been for them to do this?

Cyanobacteria are the prerequisite for complex life forms. They are said to have existed for 3.5 billion years already, without changing morphologically. They perform oxygenic photosynthesis, in which the energy of light is used to split water molecules into oxygen, protons, and electrons. It occurs in two stages. In the first stage, the light-dependent reactions (light reactions) capture the energy of light and use it to make the energy-storage molecules ATP and NADPH. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
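As a sanity check on the chemistry just described, the net reaction of oxygenic photosynthesis is 6 CO2 + 6 H2O -> C6H12O6 + 6 O2; a short Python tally of the atoms on each side (illustrative only) confirms that it balances:

    # atom counts for: 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
    reactants = {"C": 6 * 1, "O": 6 * 2 + 6 * 1, "H": 6 * 2}   # 6 CO2 + 6 H2O
    products  = {"C": 6,     "O": 6 + 6 * 2,     "H": 12}      # C6H12O6 + 6 O2
    assert reactants == products    # the equation is mass-balanced
    print(reactants)                # {'C': 6, 'O': 18, 'H': 12}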

They have ATP synthase nano-motors. How could ATP synthase "evolve" from something that needs ATP, manufactured by ATP synthase, in order to function? An absurd chicken-and-egg paradox!

ATP synthase is a molecular machine found in every living organism. It serves as a miniature power generator, producing an energy-carrying molecule, adenosine triphosphate, or ATP. The ATP synthase machine has many parts we recognize from human-designed technology, including a rotor, a stator, a camshaft or driveshaft, and other basic components of a rotary engine. This machine is just the final step in a long and complex metabolic pathway involving numerous enzymes and other molecules, all so the cell can produce ATP to power biochemical reactions and provide energy for other molecular machines in the cell. Each of the human body's 14 trillion cells performs this reaction about a million times per minute. Over half the body's weight in ATP is made and consumed every day!

A rotary molecular motor that can work at near 100% efficiency.
http://www.pnas.org/content/early/2011/10/12/1106787108.full.pdf

We found that the maximum work performed by F1-ATPase per 120° step is nearly equal to the thermodynamical maximum work that can be extracted from a single ATP hydrolysis under a broad range of conditions. Our results suggested a 100% free-energy transduction efficiency and a tight mechanochemical coupling of F1-ATPase.

http://reasonandscience.heavenforum.org/t1439-atp-synthase#2204
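To put rough numbers on the quoted efficiency claim, here is a back-of-the-envelope check in Python; the ~40 pN·nm torque and the ~20 kT free energy per ATP are typical figures from the single-molecule literature, used here purely for illustration:

    import math

    k_T = 4.1e-21            # thermal energy at ~25 C, in joules (about 4.1 pN*nm)
    torque = 40e-12 * 1e-9   # ~40 pN*nm of torque, expressed in newton-metres
    angle = 2 * math.pi / 3  # one 120-degree step, in radians

    work_per_step = torque * angle   # mechanical work output per step
    delta_g_atp = 20 * k_T           # ~20 kT released per ATP hydrolysis in vivo

    print(f"work per 120-degree step: {work_per_step / k_T:.1f} kT")  # ~20 kT
    print(f"free energy of one ATP:   {delta_g_atp / k_T:.1f} kT")    # ~20 kT
    # the near-equality of these two figures is what a ~100% efficiency claim means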

Consider also that ATP synthase is made by processes that all need ATP, such as the unwinding of the DNA helix with helicase to allow transcription and then translation of the coded information into the proteins that make up ATP synthase. And the manufacture of the 100 enzymes/machines needed to achieve this needs ATP! And making the membranes in which ATP synthase sits needs ATP, but without the membranes it would not work. This is a really vicious circle for evolutionists to explain.



They have aerobic respiration and anaerobic fermentation, which uniquely occur together in these prokaryotic cells. They do photosynthesis through the complex photosystems I and II and other electron transport complexes. They have a carbon concentration mechanism, which increases the concentration of carbon dioxide available to the initial carboxylase of the Calvin cycle, the enzyme RuBisCO, and transcriptional regulation, the change in gene expression levels by altering transcription rates. They are capable of performing water-oxidizing photosynthesis by coupling the activity of photosystems II and I in a chain of events known as the Z-scheme. They metabolize carbohydrates through the pentose phosphate pathway. They reduce carbon dioxide to form carbohydrates through the Calvin cycle. Furthermore, they are able to reduce elemental sulfur by anaerobic respiration in the dark.

No nitrogen: no proteins, no enzymes, no life. We need nitrogen in our bodies to form amino acids and nucleic acids. Cyanobacteria make the greatest contribution to nitrogen fixation. So in the beginning, not only was the lack of oxygen a gigantic problem; the lack of nitrogen was no less so. In order for the anaerobic organisms, whatever they might have been, to generate oxygen in quantity, they simply HAD to have nitrogen in their tissues (as enzymes etc.). With nitrogen as unreactive as it is, how did they fix it? N2 gas is a very stable compound due to the strength of the triple bond between the nitrogen atoms, and it requires a large amount of energy to break this bond; it is one of the hardest chemical bonds of all to break. The whole process requires eight electrons and at least sixteen ATP molecules. The enzyme responsible, nitrogenase, works in a more exact and efficient way than the clumsy chemical processes of human invention. Several atoms of iron and molybdenum are held in an organic lattice to form the active chemical site. With assistance from an energy source (ATP) and a powerful and specific complementary reducing agent (ferredoxin), nitrogen molecules are bound and cleaved with surgical precision. In this way, a 'molecular sledgehammer' is applied to the N≡N bond, and a single nitrogen molecule yields two molecules of ammonia. The ammonia then ascends the 'food chain' and is used as amino groups in protein synthesis for plants and animals. This is a very tiny mechanism, but multiplied on a large scale it is of critical importance in allowing plant growth and food production on our planet to continue.
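The stoichiometry just mentioned can be written out and checked. The standard overall reaction catalyzed by nitrogenase is N2 + 8 H+ + 8 e- + 16 ATP -> 2 NH3 + H2 + 16 ADP + 16 Pi, which works out to 8 ATP per ammonia, as this minimal Python tally shows:

    # overall nitrogenase reaction:
    #   N2 + 8 H+ + 8 e- + 16 ATP  ->  2 NH3 + H2 + 16 ADP + 16 Pi
    electrons, atp, nh3 = 8, 16, 2

    print(atp / nh3)        # 8.0 ATP hydrolyzed per NH3 formed
    print(electrons / nh3)  # 4.0 electrons per NH3
    # of the 8 electrons, 6 reduce N2 to two NH3 and 2 go to the obligatory H2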

They are able to capture the energy of light with 95% efficiency. Recently it has been discovered that they accomplish this through sophisticated quantum mechanics, an esoteric aspect of nature that even most scientists don't understand. They use light-harvesting antennas for this!

They possess an autoregulatory transcriptional feedback mechanism called the circadian clock, and coordinate activities such as sleep/wake behavior, body temperature, hormone secretion, and metabolism into daily cycles. This is an intrinsic time-keeping mechanism that controls the daily rhythms of numerous physiological processes. It controls the expression of numerous genes, including those that code for the oscillator proteins of the clock itself. Cyanobacteria have 1,054 protein families!

In a BBC report, they said: Oxygenic photosynthesis is a very complicated metabolism and it makes sense that the evolution of such a metabolism would take perhaps two billion years.

Feel free to explain how cyanobacteria got these amazing capabilities, among others, on such a relatively short evolutionary timescale.




The awe-inspiring spliceosome, one of the most complex macromolecular machines known, and pre-mRNA processing in eukaryotic cells

http://reasonandscience.heavenforum.org/t2180-the-spliceosome-the-splicing-code-and-pre-mrna-processing-in-eukaryotic-cells#4014

On the way to making proteins in eukaryotic cells, there is a whole chain of subsequent events that must all be fully operational, with the machinery in place, in order to get the functional product, namely proteins. At the beginning of the process, DNA is transcribed by the RNA polymerase molecular machine to yield messenger RNA (mRNA), which afterwards must go through post-transcriptional modifications: capping, elongation, splicing, cleavage, polyadenylation, and termination, before the transcript can be exported from the nucleus to the cytosol, protein synthesis initiated (translation), and protein synthesis completed and the protein folded.

Bacterial mRNAs are synthesized by the RNA polymerase starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule. The mature transcript for many genes is encoded in a discontinuous manner in a series of discrete exons, which are separated from each other along the DNA strand by non-coding introns. mRNAs, rRNAs, and tRNAs can all contain introns that must be removed from precursor RNAs to produce functional molecules. The formidable task of identifying and splicing together exons among all the intronic RNA is performed by a large ribonucleoprotein machine, the spliceosome, which is composed of several individual small nuclear ribonucleoproteins, five snRNPs (pronounced 'snurps'; U1, U2, U4, U5, and U6), each containing an RNA molecule called an snRNA that is usually 100-300 nucleotides long, plus additional protein factors that recognize specific sequences in the mRNA or promote conformational rearrangements in the spliceosome required for the splicing reaction to progress, and many more proteins that come and go during the splicing reaction. It has been described as one of "the most complex macromolecular machines known," "composed of as many as 300 distinct proteins and five RNAs."

The snRNAs perform many of the spliceosome’s mRNA recognition events. Splice site consensus sequences are recognized by non-snRNP factors; the branch-point sequence is recognized by the branch-point-binding protein (BBP), and the polypyrimidine tract and 3′ splice site are bound by two specific protein components of a splicing complex referred to as U2AF (U2 auxiliary factor), U2AF65 and U2AF35, respectively.

This is one more great example of an amazingly complex molecular machine that will operate and exercise its precisely orchestrated function properly ONLY with ALL components fully developed and formed and able to interact in a highly complex, ordered, precise manner. Both the software and the hardware must be in place, fully developed, or the mechanism will not work. No intermediate stage will do the job. Neither would the snRNPs (U1, U2, U4, U5, and U6) have any function if not fully developed. And even if they were there, without the branch-point-binding protein (BBP) in place nothing would be accomplished either, since the correct splice site could not be recognized. Did the introns and exons not have to emerge simultaneously with the spliceosome? No wonder the paper "Origin and evolution of spliceosomal introns" admits: "Evolution of exon-intron structure of eukaryotic genes has been a matter of long-standing, intensive debate" 1, and concludes: "The elucidation of the general scenario of evolution of eukaryote gene architecture by no account implies that the main problems in the study of intron evolution and function have been solved. Quite the contrary, fundamental questions remain wide open." If the first evolutionary step had been the rise of self-splicing Group II introns, the question follows: why would evolution not have stopped there, since that method works just fine?


There is no credible road map for how introns and exons, and the splicing function, could have emerged gradually. What good would the spliceosome be if the essential sequence elements for recognizing where to splice were not in place? What would happen if the pre-mRNA with exons and introns were in place, but no spliceosome were ready to do the post-transcriptional modification, and no splicing code to direct where to splice? In the article 'JUNK' DNA HIDES ASSEMBLY INSTRUCTIONS, the author, Wang, observes that splicing "is a tightly regulated process, and a great number of diseases are caused by the 'misregulation' of splicing in which the gene was not cut and pasted correctly." Mis-splicing in the cell can have dire consequences, as the desired product is not produced, and the wrong products can often be toxic for the cell. For this reason, it has been proposed that ATPases are important for 'proofreading' mechanisms that promote fidelity in splice site selection. In his textbook Essentials of Molecular Biology, George Malacinski points out why proper polypeptide production is critical:

"A cell cannot, of course, afford to miss any of the splice junctions by even a single nucleotide, because this could result in an interruption of the correct reading frame, leading to a truncated protein."

Following the binding of these initial components, the remainder of the splicing apparatus assembles around them, in some cases displacing some of the previously bound components.

Question: How could the information to assemble the splicing apparatus correctly have emerged gradually? In order to do so, did the assembly parts not have to be there, at the assembly site, fully developed and ready for recruitment? Did the availability of these parts not have to be synchronized so that, at some point, either individually or in combination, they were all available at the same time? Did the assembly not have to be coordinated in the right way right from the start? Did the parts not have to be mutually compatible, that is, 'well-matched' and capable of properly 'interacting'? Even if subsystems or parts are put together in the right order, they also need to interface correctly.


Is it feasible that this complex machine was the result of a progressive evolutionary development, in which simple molecules are the start of the biosynthesis chain and are then progressively developed in sequential steps, if the end goal is not known to the process and mechanism promoting the development? How could each intermediate in the pathway be an end point of the pathway if that end point had no function? Did not each intermediate have to be usable in the past as an end product? And how could they be usable if the amino acid sequence chain had only a fraction of the fully developed sequence? How could successive steps be added to improve the efficiency of a product for which there was no use at that stage? Despite the fact that proponents of naturalism embrace this kind of scenario, it seems obvious that it is extremely unlikely to be possible that way.

Martin and Koonin admit in their paper "Hypothesis: Introns and the origin of nucleus-cytosol compartmentalization": "The transition to spliceosome-dependent splicing will also impose an unforgiving demand for inventions in addition to the spliceosome." And furthermore: "More recent are the insights that there is virtually no evolutionary grade detectable in the origin of the spliceosome, which apparently was present in its (almost) fully fledged state in the common ancestor of eukaryotic lineages studied so far." That is a surprising admission.

This means that  the spliceosome  appeared fully formed almost abruptly, and that the intron invasion took place over a short time and has not changed for supposedly hundreds of millions of years.

In another interesting paper, Breaking the second genetic code, the authors write 2: The genetic instructions of complex organisms exhibit a counter-intuitive feature not shared by simpler genomes: nucleotide sequences coding for a protein (exons) are interrupted by other nucleotide regions that seem to hold no information (introns). This bizarre organization of genetic messages forces cells to remove introns from the precursor mRNA (pre-mRNA) and then splice together the exons to generate translatable instructions. An advantage of this mechanism is that it allows different cells to choose alternative means of pre-mRNA splicing and thus generates diverse messages from a single gene. The variant mRNAs can then encode different proteins with distinct functions. One difficulty with understanding alternative pre-mRNA splicing is that the selection of particular exons in mature mRNAs is determined not only by intron sequences adjacent to the exon boundaries, but also by a multitude of other sequence elements present in both exons and introns. These auxiliary sequences are recognized by regulatory factors that assist or prevent the function of the spliceosome, the molecular machinery in charge of intron removal.
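The alternative-splicing idea described in this passage can be sketched in a few lines of Python; the exon sequences and the choice of isoforms are hypothetical, chosen only to show how one gene can yield several messages:

    # toy pre-mRNA with four exons (hypothetical sequences); in this sketch the
    # introns between them have already been removed by the spliceosome
    exons = ["ATGGCT", "GAATCT", "CGTAAA", "GGTTGA"]

    def splice(kept):
        # join the chosen exons, in order, into one mature mRNA
        return "".join(exons[i] for i in sorted(kept))

    isoform_a = splice([0, 1, 2, 3])   # all exons kept
    isoform_b = splice([0, 2, 3])      # exon 2 skipped: a different message
    print(isoform_a)                   # ATGGCTGAATCTCGTAAAGGTTGA
    print(isoform_b)                   # ATGGCTCGTAAAGGTTGA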

Moreover, coupling between RNA processing and gene transcription influences alternative splicing, and recent data implicate the packing of DNA with histone proteins and histone covalent modifications — the epigenetic code — in the regulation of splicing. The interplay between the histone and the splicing codes will therefore need to be accurately formulated in future approaches. 

Question: How could natural mechanisms have provided the tuning, synchronization, and coordination between the histone and the splicing codes? First, these two codes and the carrier proteins and molecules (the hardware and the software) would have to emerge by themselves, and in a second step orchestrate their coordination. Why is it reasonable to believe that unguided, random chemical reactions would be capable of producing such immensely complex organismal functions?

Fazale Rana puts it nicely :  Astounding is the fact that other codes, such as the histone binding code, transcription factor binding code, the splicing code, and the RNA secondary structure code, overlap the genetic code. Each of these codes plays a special role in gene expression, but they also must work together in a coherent integrated fashion.

1) http://www.biologydirect.com/content/7/1/11
2) http://nar.oxfordjournals.org/content/early/2013/11/07/nar.gkt1053.full.pdf

Dale Dickinson: Loved this post when I read it. I have a PhD in genetics and molecular biology and did two postdocs on cell signaling. The more I got into complex molecular and signaling cascades, the more I came to believe that evolution was an absolute farce. Almost any change in DNA sequence (assuming a locus that affects functional RNA production in some fashion) has a deleterious impact. While positive impacts do come along, they are very, very rare and, given simple probability theory, could not account for irreducibly complex structures such as the spliceosome or signaling pathways with multiple levels of control and cross-talk.




The astonishing  language written on microtubules, amazing evidence of  design

http://reasonandscience.heavenforum.org/t2096-the-astonishing-language-written-on-microtubules-amazing-evidence-of-design

The following information is truly mind-boggling. Take your time to read it all through, and check the links. The creator of life has left a wealth of evidence for his existence in creation. Every living cell is a treasure trove of evidence for intelligent design. It is widely known that DNA is an advanced information storage device, encoding complex specified information to make proteins and directing many highly complex processes in the cell. What is less known is that there are several other code systems as well, namely the histone binding code, the transcription factor binding code, the splicing code, and the RNA secondary structure code. And there is another astonishing code system, called the tubulin code, which is being unraveled in recent scientific research. It is known so far that, among other things, it directs and signals kinesin and myosin motor proteins precisely where and when to disengage from nanomolecular superhighways and deliver their cargo.

http://reasonandscience.heavenforum.org/t1448-kinesin-motor-proteins-amazing-cargo-carriers-in-the-cell?highlight=kinesin

Recent research holds that this code, in an amazing manner, even stores our memories in the brain and makes them available over the long term.

http://reasonandscience.heavenforum.org/t2182-heres-an-incredible-idea-for-how-memory-works#4032

For cells to function properly, they must organize themselves and interact mechanically with each other and with their environment. They have to be correctly shaped, physically robust, and properly structured internally. Many have to change their shape and move from place to place. All cells have to be able to rearrange their internal components as they grow, divide, and adapt to changing circumstances. These spatial and mechanical functions depend on a remarkable system of filaments called the cytoskeleton. The cytoskeleton's varied functions depend on the behavior of three families of protein filaments: actin filaments, microtubules, and intermediate filaments. Microtubules are very important in a number of cellular processes. They are involved in maintaining the structure of the cell and provide a platform for intracellular macromolecular assemblies through dynein and kinesin motors. They are also involved in chromosome separation (mitosis and meiosis), and are the major constituents of mitotic spindles, which are used to pull apart eukaryotic chromosomes. Mitotic cell division is the most fundamental task of all living cells. Cells have intricate and tightly regulated machinery to ensure that mitosis occurs with appropriate frequency and high fidelity. If someone wants to explain the origin of eukaryotic cells, the origin of mitosis, its mechanism, and the cell organelles and proteins involved must be elucidated. The centrosome plays a crucial role: it functions as the major microtubule-organizing center and plays a vital role in guiding chromosome segregation during mitosis. In the centrosome, two centrioles reside at right angles to each other, connected at one end by fibers.
These architecturally perfect structures are essential in many animal cells and plants (though not in flowering plants or fungi, or in prokaryotes). They help organize the centrosomes, whose spindles of microtubules reach out during cell division to the lined-up chromosomes and pull them into the daughter cells.

http://reasonandscience.heavenforum.org/t2090-centriole-centrosome-the-centriole-spindle-the-most-complex-machine-known-in-nature?highlight=spindle

α- and β-tubulin heterodimers are the structural subunits of microtubules. The structure is divided into the amino-terminal domain containing the nucleotide-binding region, an intermediate domain containing the Taxol-binding site, and the carboxy-terminal domain, which probably constitutes the binding surface for motor proteins. Unless all three functional domains were fully functional right from the beginning, tubulins would have no useful function. There would be no reason for the motor-protein binding surface to be there without motor proteins existing. Dynamic instability, the stochastic switching between growth and shrinkage, is essential for microtubule function.

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4033

Microtubule dynamics inside the cell are governed by a variety of proteins that bind tubulin dimers or microtubules. Proteins that bind to microtubules are collectively called microtubule-associated proteins, or MAPs. The MAP family includes large proteins like MAP-1A, MAP-1B, MAP-1C, MAP-2, and MAP-4, and smaller components like tau and MAP-2C.

This is highly relevant. Microtubules depend on microtubule-associated proteins for proper function. Interdependence is a hallmark of intelligent design, and strong evidence that both microtubules and MAPs had to emerge together, at the same time, since one depends on the other for proper function. But more than that: microtubules are essential to form the cytoskeleton, which is essential for cell shape and structure. In a few words: no MAPs, no proper function of microtubules; no microtubules, no proper function of the cytoskeleton; no cytoskeleton, no properly functioning cell. The evidence is very strong that all these elements had to arise together at once. Kinesin and dynein belong to the MAP proteins. Kinesin-13 proteins contribute microtubule-depolymerizing activity at the centrosome and centromere during mitosis. These activities have been shown to be essential for spindle morphogenesis and chromosome segregation. A step-wise evolutionary emergence of eukaryotic cells is not feasible, since several parts of the cell can only work when interacting together in an interlocked, fully developed system.

When incorporated into microtubules, tubulin accumulates a number of post-translational modifications, many of which are unique to these proteins. These modifications include detyrosination, acetylation, polyglutamylation, polyglycylation, phosphorylation, ubiquitination, sumoylation, and palmitoylation. The α- and β-tubulin heterodimer undergoes multiple post-translational modifications (PTMs). The modified tubulin subunits are non-uniformly distributed along microtubules. Analogous to the model of the 'histone code' on chromatin, diverse PTMs are proposed to form a biochemical 'tubulin code' that can be 'read' by factors that interact with microtubules.

This is a relevant and amazing fact, and it raises the question of how the "tubulin code", besides the several other codes in the cell, emerged. In my view, this shows once more that intelligence was required to create these amazing biomolecular structures; the formation of coded information has only ever been observed to be produced by intelligent minds. What good would the tubulin code be if no specific goal was foreseen? It acts as an emitter of information, and if there is no destination and receiver of the information, there is no reason for the code to arise in the first place. So both sender and receiver must exist first as hardware: the microtubules with the post-translationally modified tubulin units in a specified coded conformation, and the receiver, which can be MAPs in general, or kinesin or myosin motor proteins directed to the right destination to fulfill specific tasks, or other proteins directed to specific jobs.

Taken together, multiple and complex tubulin PTMs provide a myriad of combinatorial possibilities to specifically 'tag' microtubule subpopulations in cells, thus destining them for precise functions. How this tubulin or microtubule code allows cells to divide, migrate, communicate, and differentiate in an ordered manner is an exciting question that needs to be answered in the near future. Initial insights have already revealed the potential roles of tubulin PTMs in a number of human pathologies, like cancer, neurodegeneration, and ciliopathies. This raises the question: if PTMs are not precise and fully functioning, they cause diseases. What if the MAPs are not fully specified and evolved? There is a threshold, a dividing line, between an amino acid sequence that is non-functional and one that has enough residues to fold properly and become functional. How proteins arose in the first place is a mystery for proponents of natural mechanisms. Not only does it have to be elucidated how this tubulin or microtubule code allows cells to do all these tasks, but also what best explains its rise and encoding. Most of these enzymes are specific to tubulin and microtubule post-translational modifications. They are only of use if microtubules exist. Microtubules, however, require these enzymes to modify their structures. It can therefore be concluded that they are interdependent and could not arise independently by natural evolutionary mechanisms.

An emerging hypothesis is that tubulin modifications specify a code that dictates biological outcomes through changes in higher-order microtubule structure and/or by recruiting and interacting with effector proteins. This hypothesis is analogous to the histone code hypothesis: that modifications on core histones, acting in a combinatorial or sequential fashion, specify multiple functions of chromatin such as changes in higher-order chromatin structure or selective activation of transcription. The apparent parallels between these two types of structural frameworks, chromatin in the nucleus and microtubules in the cytoplasm, are intriguing.

Isn't that striking evidence of a common designer who invented both codes?

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4035

Microtubules are typically nucleated and organized by dedicated organelles called microtubule-organizing centres (MTOCs). Contained within the MTOC is another type of tubulin, γ-tubulin, which is distinct from the α- and β-subunits of the microtubules themselves. The γ-tubulin combines with several other associated proteins to form a lock-washer-like structure known as the γ-tubulin ring complex (γ-TuRC). This complex acts as a template for α/β-tubulin dimers to begin polymerization; it acts as a cap of the (−) end while microtubule growth continues away from the MTOC in the (+) direction. The γ-tubulin small complex (γTuSC) is the conserved, essential core of the microtubule-nucleating machinery, and it is found in nearly all eukaryotes.

This γ-tubulin ring complex is a striking example of purposeful design, which is required to nucleate the microtubules into the right shape. There would be no function for the γ-tubulin ring complex to emerge without microtubules, since it would have no function on its own. Furthermore, it is made of several subunits which are indispensable for proper use, for example the attachment factors, accessory proteins, and γ-tubulins, which constitute an irreducible γ-tubulin ring complex, made of several interlocked parts, which could not emerge by natural selection. The complex has purposeful function only when microtubules have to be assembled. So the γ-tubulin ring complex and microtubules are interdependent.

See its striking structure here:

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4040

Here’s an Incredible Idea For How Memory Works

Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation?

How the brain could store information long-term has been something of a mystery. But now researchers have developed a very interesting idea of how the brain's neurons could store information using, believe it or not, a binary encoding scheme based on phosphorylation:

Memory is attributed to strengthened synaptic connections among particular brain neurons, yet synaptic membrane components are transient, whereas memories can endure. This suggests synaptic information is encoded and 'hard-wired' elsewhere, e.g. at molecular levels within the post-synaptic neuron. In long-term potentiation (LTP), a cellular and molecular model for memory, post-synaptic calcium ion (Ca2+) flux activates the hexagonal Ca2+-calmodulin dependent kinase II (CaMKII), a dodecameric holoenzyme containing 2 hexagonal sets of 6 kinase domains.
This enzyme has an astonishing and remarkable configuration and functionality:

Each kinase domain can either phosphorylate substrate proteins, or not (i.e. encoding one bit). Thus each set of extended CaMKII kinases can potentially encode synaptic Ca2+ information via phosphorylation as ordered arrays of binary ‘bits’. Candidate sites for CaMKII phosphorylation-encoded molecular memory include microtubules (MTs), cylindrical organelles whose surfaces represent a regular lattice with a pattern of hexagonal polymers of the protein tubulin. Using molecular mechanics modeling and electrostatic profiling, we find that spatial dimensions and geometry of the extended CaMKII kinase domains precisely match those of MT hexagonal lattices. This suggests sets of six CaMKII kinase domains phosphorylate hexagonal MT lattice neighborhoods collectively, e.g. conveying synaptic information as ordered arrays of six “bits”, and thus “bytes”, with 64 to 5,281 possible bit states per CaMKII-MT byte. Signaling and encoding in MTs and other cytoskeletal structures offer rapid, robust solid-state information processing which may reflect a general code for MT-based memory and information processing within neurons and other eukaryotic cells.

Size and geometry of the activated hexagonal CaMKII holoenzyme and the two types of hexagonal lattices (A and B) in MTs are identical. 6 extended kinases can interface collectively with 6 tubulins.

Is the precise interface matching a striking coincidence, or purposeful design? Either an intelligent, goal-oriented creator made the correct size, so that CaMKII would fit and match the hexagonal lattices, or it is the result of unguided, random evolutionary processes. Which explanation makes more sense?

The electrostatic pattern formed by a neighborhood of tubulin dimers on a microtubule (MT) surface shows highly negatively charged regions surrounded by a less pronounced positive background, depending on the MT lattice type. These electrostatic fingerprints are complementary to those formed by the 6 CaMKII holoenzyme kinase domains, making the two natural substrates for interaction. Alignment of the CaMKII holoenzyme with tubulin dimers in the A-lattice MT arrangement yields converging electric field lines, indicating a mutually attractive interaction.

So, in addition to the precise interface matching, the significant association of the CaMKII holoenzyme with the MT through electrostatic forces adds cumulative evidence of design.

There are 2^6 = 64 possible encoding states for a single CaMKII-MT interaction, storing 6 bits of information. This case, however, only accounts for either α- or β-tubulin phosphorylation, not both. In the second scenario each tubulin dimer is considered to have three possible states: no phosphorylation (0), β-tubulin phosphorylation (1), or α-tubulin phosphorylation (2) (see Figure 5B). These are ternary states, or 'trits' (rather than bits). Six possible sites on the A-lattice yield 3^6 = 729 possible states. The third scenario considers the 9-tubulin B-lattice neighborhood with ternary states. As in the previous scenarios, the central dimer is not considered available for phosphorylation. In this case, 6 tubulin dimers out of 8 may be phosphorylated in three possible ways. The total number of possible states for the B-lattice neighborhood is thus 3^8 − 2^8 − 8(2^7) = 5,281 unique states.
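The counting in this passage can be verified directly; the three assertions below, in Python, follow the three scenarios as described:

    # scenario 1: 6 sites, each phosphorylated or not (binary)
    assert 2 ** 6 == 64

    # scenario 2: 6 sites, each in one of three states (none / beta-P / alpha-P)
    assert 3 ** 6 == 729

    # scenario 3: 8 available sites with ternary states, minus the configurations
    # in which 7 or all 8 sites are phosphorylated (at most 6 may be)
    assert 3 ** 8 - 2 ** 8 - 8 * 2 ** 7 == 5281

    print("all three counts check out")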

So, thirdly, we have here an advanced mechanism for encoding information, which, on top of the precise interface matching and the electrostatic force interactions, adds further cumulative evidence of design.

http://reasonandscience.heavenforum.org/t2181-cell-communication-and-signalling-evidence-of-design#4019

Motor proteins dynein and kinesin move along microtubules (using ATP as fuel) to transport and deliver components and precursors to specific synaptic locations. While microtubules are assumed to function as passive guides, like railroad tracks, for motor proteins, the guidance mechanism seems to work through CaMKII kinase enzymes, which "write" on microtubules through phosphorylation, directly encoding the regulation of motor protein transport along microtubules and signaling motor proteins precisely where and when to disengage from microtubules and deliver their cargo. There needs to be programming all the way along: programming to make the specific enzymes, and programming for how they have to operate. That constitutes, in my view, another amazing argument for intelligent design.
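A sketch of the reading side of that idea, in Python; the site numbers and the single "release" mark are entirely hypothetical, meant only to illustrate a motor reading marks written on its track:

    # a kinesin-like motor walks toward the microtubule plus-end and reads
    # hypothetical phosphorylation marks written by CaMKII; a "release" mark
    # tells it where to drop its cargo
    marks = {12: "release"}            # hypothetical mark at tubulin site 12

    def transport(start, plus_end):
        for site in range(start, plus_end):
            if marks.get(site) == "release":
                return f"cargo delivered at site {site}"
        return "reached the plus-end without a release signal"

    print(transport(0, 25))            # cargo delivered at site 12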




Transition from unicellular to multicellular organisms, a major change that finds no good explanation through evolution

Proponents of evolution claim, like a mantra, that microevolution leads to macroevolution, that no barrier hinders the transition from one to the other, and that this, not least, explains our biodiversity today.

The emergence of multicellularity was supposedly a major evolutionary leap. Indeed, most biologists consider it one of the most significant transitions in the evolutionary history of Earth's inhabitants. "How a single cell made the leap to a complex organism is, however, one of life's great mysteries."

Macroevolutionary scenarios and changes include major transitions: from LUCA, the last universal common ancestor, to the congregation of molecules to yield the first prokaryotic cells; the associations of prokaryotic cells to create eukaryotic cells with organelles such as chloroplasts and mitochondria; and the establishment of cooperative societies composed of discrete multicellular individuals. In other words, the current hierarchical organization of life reflects a series of transitions in the units of evolution, such as from genes to chromosomes, from prokaryotic to eukaryotic cells, from unicellular to multicellular individuals, and from multicellular organisms to societies. Each of these steps requires overcoming huge hurdles and increases in complexity, which can only be appreciated by those who have spent time educating themselves and have gained insight into the extraordinarily complex and manifold mechanisms involved. The emergence of multicellularity was ostensibly a major evolutionary leap.

The switch from single-celled organisms to ones made up of many cells is supposed to have evolved independently more than two dozen times. Evolution requires more than a mere augmentation of an existing system for co-ordinated multicellularity to evolve; it requires the ex nihilo creation of an entirely new system of organisation to co-ordinate cells appropriately to form a multicellular individual.

There is a  level of structure found only in multi-cellular organisms: intercellular co-ordination. The organism has strategies for arranging and differentiating its cells for survival and reproduction. With this comes a communication network between the cells that regulates the positioning and abundance of each cell type for the benefit of the whole organism. A fundamental part of this organisation is cellular differentiation, which is ubiquitous in multicellular organisms. This level cannot be explained by the sum of the parts, cells, and requires co-ordination from an organisational level above what exists in individual cells. There is  a 4-level hierarchy of control in multicellular organisms that constitutes a gene regulatory network. This gene regulatory network is essential for the development of the single cell zygote into a full-fledged multicellular individual.
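The hierarchy described above can be pictured as a directed graph; the sketch below, in Python, uses entirely hypothetical gene names to show how switching on one master regulator cascades down through the levels:

    # a toy hierarchical gene regulatory cascade (all names hypothetical):
    # a master regulator activates intermediate regulators, which activate
    # tissue-specific effector genes
    network = {
        "master_TF":   ["regulator_A", "regulator_B"],   # level 1 -> level 2
        "regulator_A": ["effector_1", "effector_2"],     # level 2 -> level 3
        "regulator_B": ["effector_3"],
    }

    def expressed(start):
        # walk the cascade downward from one activated gene
        on, frontier = set(), [start]
        while frontier:
            gene = frontier.pop()
            if gene not in on:
                on.add(gene)
                frontier.extend(network.get(gene, []))
        return sorted(on)

    print(expressed("master_TF"))   # the whole downstream program switches on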

If the evolutionary transition from unicellular to multicellular life is exceedingly complex, the chance that it happened once is exceedingly small. That it happened multiple times separately is even more remote. Convergent evolution of similar traits is evidence against, not for, evolution. In order to infer that a proposition is true, these nuances are important to observe. The key is in the details. As Behe states: in order to say that some function is understood, every relevant step in the process must be elucidated. The relevant steps in biological processes occur ultimately at the molecular level, so a satisfactory explanation of a biological phenomenon, such as the de novo making of the cell communication and cell junction proteins essential for multicellular life, must include a molecular explanation.

The cells not only had to hold together; important mechanisms to stick the cells together had to emerge. That is, the ability of individual cells to associate in precise patterns to form tissues, organs, and organ systems requires that individual cells be able to recognize, adhere to, and communicate with each other.

Of all the social interactions between cells in a multicellular organism, the most fundamental are those that hold the cells together. The apparatus of cell junctions and the extracellular matrix is critical for every aspect of the organization, function, and dynamics of multicellular structures. Animal cells use specialized adhesion receptors to attach to one another. Many of these adhesion proteins are transmembrane proteins, which means the extracellular portion of these proteins can interact with the extracellular portion of similar proteins on the surface of a neighboring cell. Although diagrams of adhesive structures may suggest that they are static once assembled, they are anything but. Cells can dynamically assemble and disassemble adhesions in response to a variety of events. This seems to be an essential requirement for function right from the beginning of multicellularity. Many adhesion proteins are continuously recycled: protein at the cell surface is internalized by endocytosis, and new protein is deposited at the surface via exocytosis. The molecular machines that exercise these functions therefore had to emerge together with the adhesion proteins. Furthermore, cell adhesion is coordinated with other major processes, including

1.cell signaling, 
2.cell movement, 
3.cell proliferation, and 
4.cell survival. 

We now know that cell-cell adhesion receptors fall into a relatively small number of classes. They include 

1.immunoglobulin superfamily (IgSF) proteins, 
2.cadherins, 
3.selectins, and, in a few cases, 
4.integrins

In order to explain multicellularity, its origin must be explained.

http://reasonandscience.heavenforum.org/t2187-cell-junctions-and-the-extracellular-matrix

Thus, the apparatus of cell junctions and the extracellular matrix is critical for every aspect of the organization, function, and dynamics of multicellular structures. The rise of adhesive junctions, tight junctions, and gap junctions, and how they emerged, is therefore a key factor in explaining multicellular life. The cells of multicellular organisms detect and respond to countless internal and extracellular signals that control their growth, division, and differentiation during development, as well as their behavior in adult tissues. At the heart of all these communication systems are regulatory proteins that produce chemical signals, which are sent from one place to another in the body or within a cell, usually being processed along the way and integrated with other signals to provide clear and effective communication. These communication channels had to arise together with the junction mechanisms in order to establish successful multicellular organisms. One feature without the other would not have provided success and a survival advantage.

The ability of cells to receive and act on signals from beyond the plasma membrane is fundamental to life.  This conversion of information into a chemical change, signal transduction, is a universal property of living cells. Signal transductions are remarkably specific and exquisitely sensitive. Specificity is achieved by precise molecular complementarity between the signal and receptor molecules. 

Question: signal transduction had to be present in the first living cells. How could the specificity of the signal molecule, and its precise fit on the complementary receptor, have evolved? Or the amplification? Or the desensitization/adaptation, where receptor activation triggers a feedback circuit that shuts off the receptor or removes it from the cell surface once the signal got through?

Three factors account for the extraordinary sensitivity of signal transducers: the high affinity of receptors for signal molecules, cooperativity (often but not always) in the ligand-receptor interaction, and amplification of the signal by enzyme cascades. The trigger for each system is different, but the general features of signal transduction are common to all: a signal interacts with a receptor; the activated receptor interacts with cellular machinery, producing a second signal or a change in the activity of a cellular protein; the metabolic activity of the target cell undergoes a change; and finally, the transduction event ends. This seems to be an irreducible system, requiring a high content of pre-programming and advanced coding.
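The amplification factor mentioned here is easy to illustrate; the gains and stage count in this Python sketch are invented round numbers, not measured values:

    # signal amplification by an enzyme cascade: each activated enzyme
    # activates many copies of the enzyme at the next stage
    def cascade(signal_molecules, gain_per_stage, stages):
        active = signal_molecules
        for _ in range(stages):
            active *= gain_per_stage   # catalytic turnover at each stage
        return active

    # one hormone molecule, three stages, ~100-fold turnover per stage
    print(cascade(1, 100, 3))          # -> 1,000,000 effector molecules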

http://reasonandscience.heavenforum.org/t2181-cell-communication-and-signalling-evidence-of-design



Question: how did the high affinity, cooperativity, and amplification emerge? Is a pre-established convention, and thus a mental process, not necessary to yield the function? Are trial and error, or evolution, not completely incapable mechanisms for producing this functional information system?

This is an important, essential, and fundamental macroevolutionary change, and the explanation of macroevolution must account for these changes and provide feasible and likely paths through mutation and natural selection. Besides this, a shift on several levels of biological organization had to occur while providing a considerable survival advantage, considering that, for example, one of the first cooperative steps required for the evolution of multicellularity in the volvocine algae was the development of the extracellular cell matrix from cell wall components, which can be metabolically costly to produce. But much more is required.

Ann Gauger: New genes and proteins must be invented. The cytoskeleton, Hox genes, desmosomes, cell adhesion molecules, growth factors, microtubules, microfilaments, neurotransmitters, whatever it takes to get cells to stick together, form different shapes, specialize, and communicate must all come from somewhere. Regulatory proteins and RNAs must be made to control the expression in time and space of these new proteins so that they all work together with existing pathways. In fact, in order for development to proceed in any organism, a whole cascade of coordinated genetic and biochemical events is necessary so that cells divide, change shape, migrate, and finally differentiate into many cell types, all in the right sequence at the right time and place. These cascades and the resulting cell divisions, shape changes, etc., are mutually interdependent. Interrupting one disrupts the others.

And last but not least:

Like engineers carefully blowing up a bridge, cells have intricate, programmed suicide mechanisms. Without apoptosis, all multicellular life would be impossible. Good luck to proponents of evolution in explaining how it emerged.


http://reasonandscience.heavenforum.org/t2193-apoptosis-cell-s-essential-mechanism-of-programmed-suicide-points-to-design




The incredible spliceosome, the most complex macromolecular machine known, and pre-mRNA processing in eukaryotic cells

http://reasonandscience.heavenforum.org/t2180-the-spliceosome-the-splicing-code-and-pre-mrna-processing-in-eukaryotic-cells

On the way to making proteins in eukaryotic cells, there is a whole chain of subsequent events that must all be fully operational at the same time, with the machinery ready in place, in order to obtain the functional product, namely proteins. At the beginning of the process, DNA is transcribed by the RNA polymerase molecular machine to yield messenger RNA (mRNA), which then has to go through post-transcriptional modifications: capping of the mRNA, the stage called elongation, splicing, cleavage, polyadenylation, and termination, before it can be exported from the nucleus to the cytosol, protein synthesis initiated (translation), and protein synthesis completed and the protein folded.

Bacterial mRNAs are synthesized by RNA polymerase, transcription starting and stopping at specific spots on the genome. The situation in eukaryotes is substantially different. In particular, transcription is only the first of several steps needed to produce a mature mRNA molecule. The mature RNA for many genes is encoded in a discontinuous manner in a series of discrete exons, which are separated from each other along the strand by non-coding introns. mRNAs, rRNAs, and tRNAs can contain introns that must be removed from precursor RNAs to produce functional molecules. The formidable task of identifying and joining exons among all the intronic RNA is performed by a large ribonucleoprotein machine called the spliceosome, which is composed of several individual small nuclear ribonucleoproteins, five snRNPs (pronounced 'snurps'; U1, U2, U4, U5, and U6), each containing an RNA molecule called an snRNA that is usually 100-300 nucleotides long, plus additional protein factors that recognize specific sequences in the mRNA or promote conformational rearrangements in the spliceosome required for the splicing reaction to progress, and many more proteins that come and go during the splicing reaction. It has been described as one of "the most complex macromolecular machines known," "composed of as many as 300 distinct proteins and five RNAs."

The snRNAs perform many of the spliceosome's mRNA recognition events. Splice-site consensus sequences are recognized by non-snRNP factors; the branch-point sequence is recognized by the branch-point-binding protein (BBP), and the polypyrimidine tract and 3' splice site are bound by two specific protein components of a splicing complex referred to as U2AF (U2 auxiliary factor), U2AF65 and U2AF35, respectively.

This is one more great example of a surprisingly complex molecular machine that will operate and exercise its precisely orchestrated function correctly only with all components fully developed and formed and able to interact in a highly complex, ordered, precise manner. Both the software and the hardware must be in place, fully developed, or the mechanism will not work. No intermediate stage would do the job. Nor would the snRNPs (U1, U2, U4, U5, and U6) have any function if not fully developed. And even if they were there, without the branch-point-binding protein (BBP) in place nothing would be accomplished either, since the correct splice site could not be recognized. And did the introns and exons not have to emerge simultaneously with the spliceosome? No wonder the scientific paper "Origin and evolution of spliceosomal introns" admits that the evolution of the exon-intron structure of eukaryotic genes has been a matter of long-standing, intensive debate, and concludes that the elucidation of the general picture of the evolution of eukaryotic gene architecture by no means implies that the main problems in the study of intron evolution and function have been solved; quite the contrary, fundamental questions remain wide open. If the first evolutionary step had been the rise of self-splicing Group II introns, then the question would follow: why did evolution not stop there, since that method works very well?

There is no credible road map for how introns and exons, and the splicing function, could have arisen gradually. What use would the spliceosome have if the essential elements for recognizing the sequence and the segment to be cut were not in place? What would happen if the pre-mRNA with exons and introns were in place, but the spliceosome were not ready to perform the post-transcriptional modification, nor the splicing code that directs where the splice must be made? In the article 'JUNK' DNA HIDES ASSEMBLY INSTRUCTIONS, the author, Wang, observes that splicing "is a tightly regulated process, and a great number of diseases are caused by the misregulation of splicing in which the gene was not cut and pasted correctly." Incorrect splicing in the cell can have dire consequences, as the desired product is not produced, and the wrong products can often be toxic for the cell. For this reason, it has been proposed that ATPases are important for 'proofreading' mechanisms that promote fidelity in splice-site selection. In his textbook Essentials of Molecular Biology, George Malacinski points out why proper polypeptide production is critical:

"Uma célula não pode, evidentemente, se dar ao luxo de perder qualquer uma das junções de processamento por até mesmo um único nucleótido, porque isto poderia resultar numa interrupção da fase de leitura correta, levando a uma proteína truncada."

Following the binding of these initial components, the remainder of the splicing apparatus assembles around them, in some cases even displacing some of the previously bound components.

Question: How did the information to assemble the splicing apparatus correctly arise gradually? In order to do so, did the parts to assemble this formidable nanomolecular machine not have to be there, at the assembly site, fully developed and specified and ready for recruitment? Did the availability of these components not have to be synchronized, so that at some point, either individually or in combination, they were all available at the same time? Did the assembly not have to be coordinated in the right mode and manner from the start? Did the parts not have to be mutually compatible and able to 'interact' correctly? Even if subsystems or parts are put together in the right order, they also need to interface correctly from the first moment.

Is it conceivable that this complex machine was the result of a progressive evolutionary development, in which simple molecules are the start of the biosynthesis chain and are then progressively developed in sequential steps, if the end goal is not known to the process and mechanism promoting the development? How could each intermediate product along the way be an end point of the pathway, if that intermediate end point had no function? Did not each intermediate point of development have to be usable as an end product with greater survival fitness? And how could it be usable, if the amino acid sequence chain had only a fraction of the fully developed sequence? Molecular knowledge is the minimum amount of useful information a gene needs in order to have any function. If a gene does not contain molecular knowledge, then it has no function; it confers no selective advantage. Thus, before a region of DNA contains the necessary molecular knowledge, natural selection plays no role in guiding its evolution. Molecular knowledge can therefore be related to a probability of evolution.

http://reasonandscience.heavenforum.org/t2062-proteins-how-they-provide-striking-evidence-of-design#4055

How could successive steps be added to improve the efficiency of a product for which there was no use at that stage? Despite the fact that proponents of naturalism embrace this kind of scenario, it seems obvious that it is extremely unlikely to be possible that way.

Martin and Koonin admit in their paper "Hypothesis: Introns and the origin of nucleus-cytosol compartmentalization": the transition to spliceosome-dependent splicing will also impose an unforgiving demand for inventions in addition to the spliceosome. And furthermore: more recent is the recognition that there is virtually no evolutionary grade detectable in the origin of the spliceosome, which apparently was present in its fully fledged state in the common ancestor of the eukaryotic lineages studied so far. That is a surprising admission.

This means that the spliceosome appeared fully formed, almost abruptly, that the intron invasion took place within a short period of time, and that the spliceosome has not changed in supposedly hundreds of millions of years.

In another interesting article, "Breaking the second genetic code," the authors write: the genetic instructions of complex organisms exhibit a counterintuitive feature not shared by simpler genomes: nucleotide sequences coding for a protein (exons) are interrupted by other nucleotide regions that seem to carry no information (introns). This bizarre organization of genetic messages forces cells to remove the introns from the precursor mRNA (pre-mRNA) and then splice the exons together to generate translatable instructions. One advantage of this mechanism is that it allows different cells to choose alternative ways of splicing the pre-mRNA, thus generating diverse messages from a single gene. The variants can then encode different proteins with distinct functions. One difficulty in understanding alternative splicing of pre-mRNA is that the particular exons included in mature mRNAs are determined not only by the intron sequences adjacent to the exon boundaries, but also by a series of other sequence elements present in both exons and introns. These auxiliary sequences are recognized by regulatory factors that assist or hinder the function of the spliceosome, the molecular machinery responsible for intron removal.
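As a side note, the combinatorial gain from alternative splicing described above is easy to quantify. The following Python sketch (exon names are hypothetical) enumerates the messages obtainable from one gene when each internal "cassette" exon can be independently included or skipped:

from itertools import product

# Toy model: the first and last exons are always kept; each cassette exon in
# between may be included or skipped. Exon names are hypothetical.
def isoforms(first, cassettes, last):
    for choices in product([True, False], repeat=len(cassettes)):
        kept = [exon for exon, keep in zip(cassettes, choices) if keep]
        yield [first] + kept + [last]

for iso in isoforms("E1", ["E2", "E3"], "E4"):
    print("-".join(iso))
# 2 cassette exons give 2**2 = 4 distinct mRNAs from a single gene;
# n cassette exons give up to 2**n, before regulatory constraints prune the set.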

Moreover, the coupling between RNA processing and gene transcription influences alternative splicing, and recent data implicate the packaging of DNA with histone proteins and the covalent modifications of histones, the epigenetic code, in the regulation of splicing. The interplay between the histone code and the splicing code will therefore have to be precisely worked out in future approaches.

Question: How would natural mechanisms provide the fine-tuning, synchronization, and coordination between the histone code and the splicing code? First, these two codes and their carrier proteins and molecules (the hardware and the software) would have to emerge on their own, and in a second step orchestrate their coordination. Why is it reasonable to believe that unguided, random chemical reactions would be able to come up with such immensely complex organismal functions?

Fazale Rana: Astounding is the fact that other codes, such as the histone-binding code, the transcription-factor-binding code, the splicing code, the RNA secondary structure code, the glycan code, and the tubulin code, overlap with the genetic code. Each of these codes plays a special role in gene expression, but they must also work together in a coherent, integrated fashion.

The coded language written on microtubules, the skeleton of the cell, and how it strikingly underscores the intelligent origin of life

http://reasonandscience.heavenforum.org/t2096-the-astonishing-language-written-on-microtubules-amazing-evidence-of-design

The creator of life left a wealth of evidence of his existence in creation: vast imprints that evidence intelligent design in every living cell. It is widely known that DNA is a storage device of complex, specified information, encoding the information to produce proteins and directing many highly complex processes in the cell. What is less well known is that there are several other code systems as well, namely the histone-binding code, the transcription-factor-binding code, the splicing code, the RNA secondary structure code, and the ultracomplex, still undeciphered glycan code. And there is yet another astonishing code system, the so-called tubulin code, which recent scientific research is gradually unravelling. It is known so far that, among other things, it directs and signals to kinesin and myosin motor proteins precisely where and when to disengage from nanomolecular highways and deliver their cargo.

http://reasonandscience.heavenforum.org/t1448-kinesin-motor-proteins-amazing-cargo-carriers-in-the-cell?highlight=kinesin

Recent research is discovering that, even more incredibly, this code stores our memories in the brain and makes them available over the long term.

http://reasonandscience.heavenforum.org/t2182-heres-an-incredible-idea-for-how-memory-works#4032

For cells to function properly, they must organize themselves and interact mechanically with one another and with their environment. They have to be correctly shaped, physically robust, and properly structured internally. Many have to change their shape and move from one place to another. All cells have to be able to rearrange their internal components as they grow, divide, and adapt to new circumstances. These spatial and mechanical functions depend on a remarkable system of filaments called the cytoskeleton. The varied functions of the cytoskeleton depend on the behavior of three families of protein filaments: actin filaments, microtubules, and intermediate filaments. Microtubules are very important for a number of cellular processes. They are involved in maintaining the structure of the cell and provide a platform for intracellular macromolecular assemblies through the molecular motors dynein and kinesin, which march along them like people walking. They are also involved in chromosome separation (mitosis and meiosis) and are the main constituents of mitotic spindles, which are used to pull apart eukaryotic chromosomes. Mitotic cell division is the most fundamental task of all living eukaryotic cells. Cells have intricate and tightly regulated machinery to ensure that mitosis occurs with the proper frequency and high fidelity. Anyone wishing to explain the origin of eukaryotic cells must explain the emergence of mitosis, its mechanism, and the cellular organelles and proteins involved. The centrosome plays a central role: it functions as the main microtubule-organizing center and plays a vital role in guiding chromosome segregation during mitosis. In the centrosome, two centrioles reside at right angles to each other, linked at one end by fibers.

These structures are perfect, essential architectures in many animal and plant cells (though not in flowering plants or fungi, nor in prokaryotes). They help organize the centrosomes, whose microtubule spindles during cell division reach the aligned chromosomes and carry them into the daughter cells.

http://reasonandscience.heavenforum.org/t2090-centriole-centrosome-the-centriole-spindle-the-most-complex-machine-known-in-nature?highlight=spindle

The α- and β-tubulin heterodimers are the structural subunits of the microtubule. The structure is divided into the amino-terminal domain containing the nucleotide-binding region, an intermediate domain containing the taxol-binding site, and the carboxy-terminal domain, which probably constitutes the binding surface for motor proteins. Unless all three functional domains were fully functional and developed from the beginning, the tubulins would have had no useful function. There would be no reason for the taxol-binding site to exist without motor proteins already in existence. Dynamic instability, the stochastic switching between growth and shrinkage, is essential for microtubule function.

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4033

The dynamics of microtubules inside cells are regulated by a variety of proteins that bind tubulin dimers or the microtubules themselves. Proteins that bind to microtubules are collectively called microtubule-associated proteins, or MAPs. The MAP family includes large proteins such as MAP-1A, MAP-1B, MAP-1C, MAP-2, and MAP-4, and smaller components such as tau and MAP-2C.

This is highly relevant. Microtubules depend on microtubule-associated proteins for proper function. Interdependence is a hallmark of intelligent design, and strong evidence that both microtubules and MAP proteins had to emerge together, at the same time, since each depends on the other for proper function. But there is more. Microtubules are essential for forming the cytoskeleton, which is essential for the shape and structure of the cell. In short: without MAP proteins, there would be no proper microtubule function. Without microtubules, no properly functioning cytoskeleton could exist. Without a cytoskeleton, no properly functioning cell would exist. The evidence is very strong that all these elements had to arise together, at once. Kinesin and dynein belong to the MAP family. Kinesin proteins contribute to microtubule depolymerization activity at the centromere and the centrosome during mitosis. These activities have been shown to be essential for spindle morphogenesis and chromosome segregation. The gradual evolutionary emergence of eukaryotic cells is not feasible for yet another reason, described here.

When incorporated into microtubules, tubulin accumulates a number of post-translational modifications, many of which are unique to these proteins. These modifications include detyrosination, acetylation, polyglutamylation, polyglycylation, phosphorylation, ubiquitination, sumoylation, and palmitoylation. The α-/β-tubulin heterodimer undergoes multiple post-translational modifications (PTMs). The tubulin subunits are modified in a non-uniform manner, distributed along the microtubules. Analogous to the "histone code" model in chromatin, the various PTMs are proposed to form a biochemical "tubulin code" that can be "read" by factors that interact with microtubules.

This is a relevant and amazing fact, and it raises the question of how the "tubulin code," alongside several other codes in the cell, emerged. In my view, once again, this shows that intelligence was necessary to create these astonishing biomolecular structures; the formation of coded information has only ever been demonstrated to be produced by intelligent minds. Moreover: what use would the tubulin code have if no specific goal had been conceived in advance? That is, if it acts as a sender but there is no destination for the information, there is no reason for the code to exist in the first place. Thus both the sender and the receiver must first exist as hardware: the sender being the microtubule with its post-translationally modified tubulin units in a specified, encoded conformation, and the receiver being kinesin or myosin motor proteins, which are directed to the correct destination to perform specific jobs, or other proteins directed to specific tasks.

Taken together, the multiple, complex PTMs (post-translational modifications) of tubulins provide a myriad of combinatorial possibilities for specifically "labeling" subpopulations of microtubules in cells, destining them for precise functions. How this tubulin or microtubule code allows cells to divide, migrate, communicate, and differentiate in an orderly manner is an interesting question that researchers hope to answer in the near future. Initial insights have already revealed potential roles of tubulin PTMs in a number of human pathologies, such as cancer, neurodegeneration, and ciliopathies.

Not only does it remain to be elucidated how the tubulin or microtubule code allows cells to do all these jobs, but also what best explains its emergence and encoding. Most of these enzymes are specific to tubulin and the post-translational modification of microtubules. These enzymes are of use only if microtubules exist. Microtubules, however, require these enzymes to modify their structures. One may therefore conclude that they are interdependent and could not have arisen independently through natural evolutionary mechanisms.

One emerging hypothesis is that tubulin modifications specify a code that determines biological outcomes through changes in the higher-order structure of microtubules and/or by recruiting and interacting with effector proteins. This hypothesis is analogous to the histone code hypothesis: that modifications of nuclear histones, acting combinatorially or sequentially, specify multiple chromatin functions such as changes in higher-order chromatin structure or selective transcriptional activation. The apparent parallels between these two types of structural frameworks, chromatin in the nucleus and microtubules in the cytoplasm, are intriguing.

Is this not impressive evidence of a common designer who invented both codes?

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4035

Microtubules are typically nucleated (assembled piece by piece from the microtubule "building blocks" called tubulins) and organized by a dedicated organelle called the microtubule-organizing center (MTOC). Contained within the MTOC is another type of tubulin, called γ-tubulin, which is distinct from the α- and β-subunits of the microtubules themselves. The γ-tubulin combines with several other associated proteins to form a lock-washer-like structure known as the γ-tubulin ring complex (γ-TuRC). This complex acts as a template for the placement and assembly of α/β-tubulin dimers to begin polymerization; it acts as a cap on the (−) end while microtubule growth continues away from the MTOC in the (+) direction. The essential core, called the γ-tubulin small complex (γTuSC), is the conserved central part of the microtubule nucleation machine and is found in nearly all eukaryotes.

This γ-tubulin ring complex is a remarkable example of intentional design, necessary to nucleate microtubules in the right form. There would be no reason for the γ-tubulin ring complex to arise without microtubules, since it would have no function on its own. Moreover, it is made of several subunits that are indispensable for its use, namely the attachment factors, accessory proteins, and γ-tubulins, which constitute an irreducible γ-tubulin ring complex made of several interlocked parts, one that could not emerge through natural selection. The complex has purposeful function only when microtubules need to be assembled. Thus the γ-tubulin ring complex and microtubules are interdependent.

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4035

See its impressive structure here:

http://reasonandscience.heavenforum.org/t2096-the-cytoskeleton-microtubules-and-post-translational-modification#4040

And this is probably the most impressive part:

Memory is attributed to strengthened synaptic connections between particular brain neurons, yet synaptic membrane components are transient, whereas memories can last a long time. This suggests that synaptic information is encoded and "stored" elsewhere, for example at molecular levels within the post-synaptic neuron. In long-term potentiation (LTP), a cellular and molecular model for memory, post-synaptic calcium ion (Ca2+) flux activates the hexagonal Ca2+-calmodulin-dependent kinase II (CaMKII), a dodecameric holoenzyme containing two hexagonal sets of six kinase domains.

http://reasonandscience.heavenforum.org/t2181-cell-communication-and-signalling-evidence-of-design#4019

Here is an incredible idea for how memory works

Cytoskeletal signaling: is memory encoded in microtubule lattices by CaMKII phosphorylation?

Each kinase domain can phosphorylate substrate proteins, or not (i.e., encoding one bit). Thus each set of extended CaMKII kinases can potentially encode synaptic Ca2+ information via phosphorylation as ordered arrays of binary "bits." Candidate sites for CaMKII-encoded molecular memory via phosphorylation include microtubules (MTs), cylindrical organelles whose surfaces form a regular lattice with a hexagonal pattern of polymerized tubulin proteins. Using molecular mechanics modeling and electrostatic profiling, the authors found that the spatial dimensions and geometry of the extended CaMKII kinase domains precisely match those of the MT (microtubule) hexagonal lattices. This suggests that sets of six CaMKII kinase domains could collectively phosphorylate hexagonal MT lattice neighborhoods, e.g., conveying synaptic information as ordered arrays of six "bits," and thus "bytes," with 64 to 5,281 possible bit states per CaMKII-MT byte. Signaling and encoding in MTs and other cytoskeletal structures offer fast, robust, solid-state information processing, which may reflect a general code for MT-based memory and information processing within neurons and other eukaryotic cells.

The size and geometry of the activated hexagonal CaMKII holoenzyme and of the two types of hexagonal lattices (A and B) in microtubules are identical: six extended kinases can collectively interact with six tubulins.

Is this precisely matching interface an impressive coincidence, or intentional design? Either an intelligent creator with goals in mind made the sizes match, so that the CaMKII holoenzyme would fit the hexagonal lattices formed by the tubulins of the microtubules, or it is the result of random, unguided, evolutionary processes. Which explanation makes more sense?

The electrostatic pattern formed by the abutting and joining of the tubulin "building blocks" on the microtubule surface shows highly negatively charged regions surrounded by a less pronounced positive background, depending on the type of microtubule lattice. These electrostatic fingerprints are complementary to those formed by the six kinase domains of the CaMKII holoenzyme, making the two natural substrates for interaction. The alignment of the CaMKII holoenzyme with the tubulin dimers, converging with the electric field lines, indicates a mutually attractive interaction.

Thus, in addition to the precisely matching interface, the significant correspondence of the CaMKII holoenzyme with microtubules through electrostatic forces provides cumulative evidence of design.

There are 2^6 = 64 possible encoding states for a single CaMKII-MT interaction, i.e., 6 bits of information. In this case, however, each site represents either α- or β-tubulin phosphorylation, but not both. In the second scenario, each tubulin dimer is considered to have three possible states: no phosphorylation (0), β-tubulin phosphorylation (1), or α-tubulin phosphorylation (2). These are ternary states, or "trits" (rather than bits). Six possible sites on the A-lattice yield 3^6 = 729 possible states. The third scenario also considers ternary states. As in the previous scenarios, the central dimer is not considered available for phosphorylation. In this case, 6 of the 8 tubulin dimers can be phosphorylated in three possible ways. The total number of possible states for the B-lattice neighborhood is thus 3^8 - 2^8 - 8(2^7) = 5,281 unique states.
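As a quick arithmetic check, the three state counts quoted above can be reproduced in a few lines of Python:

# Reproduce the CaMKII-MT state counts quoted above.
binary_six_sites = 2 ** 6                         # scenario 1: 6 binary sites
ternary_a_lattice = 3 ** 6                        # scenario 2: 6 ternary sites (trits)
ternary_b_lattice = 3 ** 8 - 2 ** 8 - 8 * 2 ** 7  # scenario 3: B-lattice neighborhood
print(binary_six_sites)    # 64
print(ternary_a_lattice)   # 729
print(ternary_b_lattice)   # 5281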

Thus, thirdly, we have here an advanced information-encoding mechanism, which, added to the precise interface and the electrostatic force interactions, provides yet more cumulative evidence of design.

Dynein and kinesin motor proteins move (in opposite directions) along microtubules (using ATP as fuel), transporting and delivering components and precursors to specific synaptic sites. While microtubules are usually assumed to function as passive guides, like train tracks, for motor proteins, the guidance mechanism appears to work through CaMKII kinase enzymes that "write" on the microtubules via phosphorylation, encoding how to regulate motor protein transport along microtubules directly and signaling to motor proteins precisely where and when to disengage and deliver their cargo. That constitutes, in my view, another striking argument for intelligent design.


CONTROL OF TRANSCRIPTION BY SEQUENCE-SPECIFIC DNA-BINDING PROTEINS

Coded information can always be traced back to an intelligence, which has to set up the convention assigning meaning to the code, and the information carrier, which can be a book, the hardware of a computer, or the smoke signals of an Indian tribe communicating with another. All communication systems have an encoder that produces a message, which is processed by a decoder. In the cell there are several code systems. DNA is the most well known; it stores coded information through the four nucleic acid bases. But there are several others, less well known. Recently there was some hype about a second DNA code; in fact, it is essential for the expression of genes. The cell uses several formal communication systems according to Shannon's model, because they encode and decode messages using a system of symbols. As Hubert Yockey wrote:

“Information, transcription, translation, code, redundancy, synonymous, messenger, editing, and proofreading are all appropriate terms in biology. They take their meaning from information theory (Shannon, 1948) and are not synonyms, metaphors, or analogies.” (Hubert P. Yockey,  Information Theory, Evolution, and the Origin of Life,  Cambridge University Press, 2005).

An organism’s DNA encodes all of the RNA and protein molecules required to construct its cells. Yet a complete description of the DNA sequence of an organism—be it the few million nucleotides of a bacterium or the few billion nucleotides of a human—no more enables us to reconstruct the organism than a list of English words enables us to reconstruct a play by Shakespeare. In both cases, the problem is to know how the elements in the DNA sequence or the words on the list are used. Under what conditions is each gene product made, and, once made, what does it do? The different cell types in a multicellular organism differ dramatically in both structure and function. If we compare a mammalian neuron with a liver cell, for example, the differences are so extreme that it is difficult to imagine that the two cells contain the same genome. The genome of an organism contains the instructions to make all the different cell types, and the expression program of either a neuron or a liver cell can be regulated at many of the steps in the pathway from DNA to RNA to protein. The most important, in my opinion, is control of transcription by sequence-specific DNA-binding proteins, called transcription factors or regulators. These proteins recognize specific sequences of DNA (typically 5–10 nucleotide pairs in length) that are often called cis-regulatory sequences. Transcription regulators bind to these sequences, which are dispersed throughout genomes, and this binding puts into motion a series of reactions that ultimately specify which genes are to be transcribed and at what rate. Approximately 10% of the protein-coding genes of most organisms are devoted to transcription regulators. Transcription regulators must recognize short, specific cis-regulatory sequences within this structure. The outside of the double helix is studded with DNA sequence information that transcription regulators recognize: the edge of each base pair presents a distinctive pattern of hydrogen-bond donors, hydrogen-bond acceptors, and hydrophobic patches in both the major and minor grooves. The 20 or so contacts that are typically formed at the protein–DNA interface add together to ensure that the interaction is both highly specific and very strong.
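To make the idea of sequence recognition concrete, here is a minimal Python sketch of locating a short cis-regulatory sequence on both strands of a stretch of DNA. The sequence and the 6-bp motif are hypothetical, and real transcription factors recognize partly degenerate motifs through chemical contacts in the grooves, so exact string matching is only a first approximation:

# Minimal sketch: find a hypothetical recognition sequence on either strand.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]

def find_sites(dna, motif):
    # Yield (position, strand) for every exact occurrence of the motif.
    for strand, m in (("+", motif), ("-", reverse_complement(motif))):
        start = dna.find(m)
        while start != -1:
            yield start, strand
            start = dna.find(m, start + 1)

dna = "TTGACGGATCCATGACGTCAGGTACCGTCATG"   # hypothetical sequence
for pos, strand in sorted(find_sites(dna, "TGACGT")):
    print("site at position", pos, "on strand", strand)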


These instructions are written in a language that is often called the 'gene regulatory code'. The preference for a given nucleotide at a specific position is mainly determined by physical interactions between the amino acid side chains of the TF (transcription factor) and the accessible edges of the base pairs that are contacted. It is possible that some complex code, comprising rules from each of the different layers, contributes to TF–DNA binding; however, determining the precise rules of TF binding to the genome will require further scientific research. So genomes contain both a genetic code specifying amino acids and a parallel regulatory code specifying recognition sequences for transcription factors (TFs). We find that ~15% of human codons are dual-use codons ('duons') that simultaneously specify both amino acids and TF recognition sites. The genetic and regulatory codes have been assumed to operate independently of one another, and to be segregated physically into the coding and non-coding genomic compartments, yet the potential for some coding exons to accommodate transcriptional enhancers or splicing signals has long been recognized.
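The notion of a dual-use codon ('duon') can likewise be illustrated with a toy example: one and the same DNA stretch is read as triplets by the translation machinery and as a recognition site by a transcription factor. The sequence, the motif, and the reduced codon table below are all hypothetical, for illustration only:

# One DNA stretch, two readings: codons for the protein, plus an overlapping
# (hypothetical) TF recognition site. Reduced codon table for this example only.
CODON_TABLE = {"ATG": "Met", "ACG": "Thr", "TCA": "Ser", "GGT": "Gly", "TAA": "STOP"}

def translate(cds):
    return [CODON_TABLE.get(cds[i:i + 3], "???") for i in range(0, len(cds), 3)]

cds = "ATGACGTCAGGTTAA"   # hypothetical coding sequence
motif = "TGACGT"          # hypothetical TF recognition sequence
pos = cds.find(motif)
print("protein reading:", "-".join(translate(cds)))          # Met-Thr-Ser-Gly-STOP
print("TF-site reading: motif at nucleotides", pos, "to", pos + len(motif) - 1)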

In order for communication to happen, two things are required: 1. the sequence of DNA bases located in the regulatory region of the gene, and 2. transcription factors that read the code. If either is missing, communication fails, the gene that has to be expressed cannot be located, and the whole procedure of gene expression fails. This is an irreducibly complex system. Nor could the gene regulatory code arise in a stepwise manner, since the code only carries the right significance when fully developed. That is an example par excellence of intelligent design. The fact that these transcription-factor-binding sequences overlap protein-coding sequences suggests that both sequences were designed together, in order to optimize the efficiency of the DNA code. As we learn more and more about DNA structure and function, it becomes apparent that the code was not just cobbled together by the trial-and-error method of natural selection, but was specifically designed to provide optimal efficiency and function.


Stephen Meyer puts it this way in his excellent book Darwin's Doubt, p. 270:

INTEGRATED CIRCUITRY: DEVELOPMENTAL GENE REGULATORY NETWORKS 

Keep in mind, too, that animal forms have more than just genetic information. They also need tightly integrated networks of genes, proteins, and other molecules to regulate their development—in other words, they require developmental gene regulatory networks, the dGRNs. Developing animals face two main challenges. First, they must produce different types of proteins and cells and, second, they must get those proteins and cells to the right place at the right time. Davidson has shown that embryos accomplish this task by relying on networks of regulatory DNA-binding proteins (called transcription factors) and their physical targets. These physical targets are typically sections of DNA (genes) that produce other proteins or RNA molecules, which in turn regulate the expression of still other genes.

These interdependent networks of genes and gene products present a striking appearance of design. Davidson's graphical depictions of these dGRNs look for all the world like wiring diagrams in an electrical engineering blueprint or a schematic of an integrated circuit, an uncanny resemblance Davidson himself has often noted. "What emerges, from the analysis of animal dGRNs," he muses, "is almost astounding: a network of logic interactions programmed into the DNA sequence that amounts  essentially to a hardwired biological computational device." These molecules collectively form a tightly integrated network of signaling molecules that function as an integrated circuit. Integrated circuits in electronics are systems of individually functional components such as transistors, resistors, and capacitors that are connected together to perform an overarching function. Likewise, the functional components of dGRNs—the DNA-binding proteins, their DNA target sequences, and the other molecules that the binding proteins and target molecules produce and regulate—also form an integrated circuit, one that contributes to accomplishing the overall function of producing an adult animal form. 

Davidson himself has made clear that the tight functional constraints under which these systems of molecules (the dGRNs) operate preclude their gradual alteration by the mutation and selection mechanism. For this reason, neo-Darwinism has failed to explain the origin of these systems of molecules and their functional integration. Like advocates of evolutionary developmental biology, Davidson himself favors a model of evolutionary change that envisions mutations generating large-scale developmental effects, thus perhaps bypassing nonfunctional intermediate circuits or systems. Nevertheless, neither proponents of "evo-devo," nor proponents of other recently proposed materialistic theories of evolution, have identified a mutational mechanism capable of generating a dGRN or anything even remotely resembling a complex integrated circuit. Yet, in our experience, complex integrated circuits—and the functional integration of parts in complex systems generally—are known to be produced by intelligent agents—specifically, by engineers. Moreover, intelligence is the only known cause of such effects. Since developing animals employ a form of integrated circuitry, and certainly one manifesting a tightly and functionally integrated system of parts and subsystems, and since intelligence is the only known cause of these features, the necessary presence of these features in developing Cambrian animals would seem to indicate that intelligent agency played a role in their origin.




Carbon-14-dated dinosaur bones are less than 40,000 years old

http://reasonandscience.heavenforum.org/t1767-carbon-14-dated-dinosaur-bones-are-less-than-40000-years-old

Researchers have found a reason for the puzzling survival of soft tissue and collagen in dinosaur bones - the bones are younger than anyone ever guessed. Carbon-14 (C-14) dating of multiple samples of  bone from 8 dinosaurs found in Texas, Alaska, Colorado, and Montana revealed that they are only 22,000 to 39,000 years old.

Members of the Paleochronology group presented their findings at the 2012 Western Pacific Geophysics Meeting in Singapore, August 13-17, a conference of the American Geophysical Union (AGU) and the Asia Oceania Geosciences Society (AOGS).

Since dinosaurs are thought to be over 65 million years old, the news is stunning - and more than some can tolerate. After the AOGS-AGU conference in Singapore, the abstract was removed from the conference website by two chairmen because they could not accept the findings. Unwilling to challenge the data openly, they erased the report from public view without a word to the authors. When the authors inquired, they received this letter:

They did not look at the data and they never spoke with the researchers. They did not like the test results, so they censored them.

Carbon-14 is considered to be a highly reliable dating technique. Its accuracy has been verified by using C-14 to date artifacts whose age is known historically. The fluctuation of the amount of C-14 in the atmosphere over time adds a small uncertainty, but contamination by "modern carbon" such as decayed organic matter from soils poses a greater possibility for error.

Dr. Thomas Seiler, a physicist from Germany, gave the presentation in Singapore. He said that his team and the laboratories they employed took special care to avoid contamination. That included protecting the samples, avoiding cracked areas in the bones, and meticulous pre-cleaning of the samples with chemicals to remove possible contaminants. Knowing that small concentrations of collagen can attract contamination, they compared precision Accelerator Mass Spectrometry (AMS) tests of collagen and bioapatite (hard carbonate bone mineral) with conventional counting methods of large bone fragments from the same dinosaurs. "Comparing such different molecules as minerals and organics from the same bone region, we obtained concordant C-14 results which were well below the upper limits of C-14 dating. These, together with many other remarkable concordances between samples from different fossils, geographic regions and stratigraphic positions make random contamination as origin of the C-14 unlikely".

The theoretical limit for C-14 dating is 100,000 years using AMS, but for practical purposes it is 45,000 to 55,000 years. The half-life of C-14 is 5730 years. If dinosaur bones are 65 million years old, there should not be one atom of C-14 left in them.
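The quantitative point in this paragraph follows from the standard exponential decay law, N(t) = N0 x (1/2)^(t / half-life). Here is a short Python sketch using only the half-life value stated above:

import math

HALF_LIFE = 5730.0  # C-14 half-life in years, as stated above

def fraction_remaining(age_years):
    # Fraction of the original C-14 left after age_years.
    return 0.5 ** (age_years / HALF_LIFE)

def age_from_fraction(fraction):
    # Conventional C-14 age implied by a measured remaining fraction.
    return -HALF_LIFE * math.log2(fraction)

print(fraction_remaining(65_000_000))  # underflows to 0.0 (~1e-3415): no C-14 should remain
print(age_from_fraction(0.05))         # 5% remaining implies roughly 24,800 years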

Many dinosaur bones are not petrified. Dr. Mary Schweitzer, associate professor of marine, earth, and atmospheric sciences at North Carolina State University, surprised scientists in 2005 when she reported finding soft tissue in dinosaur bones. She started a firestorm of controversy in 2007 and 2008 when she reported that she had sequenced proteins in the dinosaur bone. Critics charged that the findings were mistaken or that what she called soft tissue was really biofilm produced by bacteria that had entered from outside the bone. Schweitzer answered the challenge by testing with antibodies. Her report in 2009 confirmed the presence of collagen and other proteins that bacteria do not make. In 2011, a Swedish team found soft tissue and biomolecules in the bones of another creature from the time of the dinosaurs, a Mosasaur, which was a giant lizard that swam in shallow ocean waters. Schweitzer herself wonders why these materials are preserved when all the models say they should be degraded. That is, if they really are over 65 million years old, as the conventional wisdom says.

Dinosaur bones with Carbon-14 dates in the range of 22,000 to 39,000 years before present, combined with the discovery of soft tissue in dinosaur bones, indicate that something is indeed wrong with the conventional wisdom about dinosaurs.

However, it has been hard to reach the public with the information. Despite being simple test results without any interpretation, they were blocked from presentation in conference proceedings by the 2009 North American Paleontological Convention, the American Geophysical Union in 2011 and 2012, the Geological Society of America in 2011 and 2012, and by the editors of various scientific journals. Fortunately, there is the internet.

Dino Soft Tissue Confirms Creationist Prediction

Gleanings from the original paper show discoverers excited but surprised. Creationists are excited and gratified.
Yesterday’s announcement of dinosaur soft tissue in Nature Communications by scientists from Imperial College London sets a new high hurdle for critics. It’s not really news, since soft tissue in dinosaur bones has been reported for over a decade now (see Bob Enyart’s list of journal papers).  This new paper, however, is noteworthy in many respects that will challenge naysayers:

The team used ordinary, common bones from museum specimens. “Models proposed to account for such preservation indicate that it should be the exception rather than the rule,” they noted. “…Here, we examined eight dinosaur bones from the Cretaceous period, none of which are exceptionally preserved.”

The outside of the bones gave no hint of what was inside. “Incredibly, none of the samples showed external indicators of exceptional preservation and this strongly suggests that the preservation of soft tissues and even proteins is a more common phenomenon than previously accepted.”

The bones they sampled came from both major classes of dinosaurs. “Specimens representing both major dinosaurian clades (Ornithischia [bird-hipped] and Saurischia [lizard-hipped]) and different osteological elements were chosen.”

The bones came from different parts of the anatomy. “…an ungual claw of an indeterminate theropod dinosaur…”, “… a hadrosaurid tibia,” “an astragalus of a hadrosaurid” and others.

The team took great pains to prevent contamination. “The sections were obtained from the interior of each sample, ruling out modern surface contamination.” Again, “this method rules out the possibility of modern contamination, as the surface exposed is inaccessible to any contaminant.”

The team used multiple methods for observation: SEM, TEM, energy-dispersive X-ray spectroscopy, and focused ion beam (FIB) mass spectrometry.  “These results show that to determine the presence of soft tissue in fossils a new synergistic approach needs to be applied where micro/nano-analytical methods are utilized to their full potential.”

They used controls by running the same tests with rabbit bone, another bone lacking the fibers, emu blood and a copper grid. “Sections were obtained from an agglomeration of erythrocyte-like structures and cement surrounding these from specimen NHMUK R12562, fixed emu blood, three fossils showing calcified fibres (NHMUK R4493, NHMUK R4249, NHMUK R4864), rabbit bone and a fossil not presenting sign of calcified fibres (NHMUK R12562). As a control, a mass spectrum from the copper grid holding the samples was also obtained.”

Six of the eight samples contained soft tissue. “…in this study, putative soft tissue (either erythrocyte-like structures, collagen-like, fibrous structures or amorphous carbon-rich structures (Supplementary Fig. 7)) was observed in six of our eight dinosaur specimens (Supplementary Table 1).”

Two distinct kinds of soft tissue were reported: collagen and red blood cells. “In one sample, we observe structures consistent with endogenous collagen fibre remains .… Furthermore, we observe structures consistent with putative erythrocyte remains that exhibit mass spectra similar to emu whole blood.”

The collagen structure had not degraded; it still displayed the quaternary structure characteristic of collagen’s triple-helix configuration. “One sample (NHMUK R4493) also showed, for the first time in a dinosaur bone, a clear ~67 nm banding, that is typical of the banding observed in collagen (Fig. 3e), for the length of the preserved fibre.” The fibers are obvious from the electron micrographs shown in the paper and the popular news.

Amino acids were detected that are characteristic of collagen: “The positive mass spectrum obtained from NHMUK R4493 showed peaks corresponding to fragments of the amino acids glycine, alanine, proline and others.… Detection of fragments of the amino acids normally found in collagen supports the results obtained from TEM analysis where the ~67 nm banding is consistent with potential preservation of the original quaternary structure of the protein.”

Blood cells were found. Though shrunken in size, this confirms Schweitzer’s original claim of finding blood cells. “The spectra obtained from four different regions of the dinosaur bone containing erythrocyte-like structures are surprisingly similar to the spectra obtained from emu blood.” Why would the dinosaur cells be smaller? “Within the dinosaur samples on average, the erythrocyte-like structures are ~2 μm in length. This is somewhat smaller than erythrocytes of birds, which range from 9 to 15 μm in length; emu blood cells in our sample were 9±2 μm (n=17). The structures consistent with putative erythrocytes in the fossil could well have been deformed and it is quite probable that these structures have undergone some shrinkage during fossilization.”

Another standout feature of this paper is the undercurrent of emotion. Scientific papers tend to be stodgy and understated in tone. These scientists used “exciting” twice, and a scattering of other “surprise” words:

Therefore, the observation of a ~67-nm banding in the fibrous structures of fossilized samples here is very exciting, as it is consistent with a preservation of the ultrastructure of putative collagen fibres over a time period of 75 million years. Before this finding, the oldest undegraded collagen recorded (based on mass spectrometry sequencing and peptide fingerprinting) was about 4 million years old.

The common preservation of soft tissues could pave the way for cellular investigations of extinct animals, shedding light on aspects of physiology and behaviour that have been previously inaccessible to palaeontologists and inaugurating a new and exciting way to do paleontology.


Confirmation of intelligent design predictions

http://reasonandscience.heavenforum.org/t1659-confirmation-of-intelligent-design-predictions

To use design as a basis for scientific predictions is compatible with the scientific process because it does exactly what science is supposed to do: it puts our theories and hypotheses out in the open to be discussed, to be supported by accumulating evidence, or to be refuted by the evidence. Some may object to this, but if we are seeking truth, why should we not do it? Intelligent design theory seeks evidence of design in nature. Intelligent design starts with observation of the natural world and tries to find out how the origin of a given phenomenon can best be explained. Since there are basically two possible mechanisms, design and natural, unguided, random events, both should be considered and evaluated against each other.

The beginning of the universe requires a cause. The fine-tuning of the universe requires a tuner. Coded information that is complex and instructional/specified, found in epigenetic systems and genes, and irreducible, interdependent molecular machines and biosynthetic and metabolic pathways in biological systems, point to an intelligent agent as the best explanation of their setup and origins.
 

Observation: Intelligent agents frequently act with an end goal in mind, constructing functional, irreducibly complex multipart machines, and making exquisitely integrated circuits that require a blueprint to build the object. Furthermore, computers integrate software and hardware and store high levels of instructional, complex coded information. In our experience, systems that either (a) require or (b) store large amounts of specified, instructional, complex information, such as codes and languages, and which are constructed with an interdependence of hardware and software, invariably originate from an intelligent source. No exception.

Hypothesis (Prediction): Natural structures will be found that contain many parts arranged in intricate patterns, metabolic pathways similar to electronic circuits, and irreducible structures that perform specific functions -- indicating high levels of information, irreducible complexity, and interdependence, like hardware/software.

Experiment: Experimental investigations of DNA, epigenetic codes, and metabolic circuits indicate that biological molecular machines and factories (cells) are full of information-rich, language-based codes and code/blueprint-based structures. Biologists have performed mutational sensitivity tests on proteins and determined that their amino acid sequences, in order to provide function, require highly instructional, complex coded information stored in the genome. Additionally, it has been found that cells require and use various epigenetic codes, namely splicing codes, metabolic codes, signal transduction codes, signal integration codes, histone codes, tubulin codes, sugar codes, and the glycomic code. Furthermore, all kinds of irreducibly complex molecular machines and biosynthetic and metabolic pathways have been found, which could not keep their basic functions without a minimal number of parts and complex, intertwined, interdependent structures. That indicates these biological machines and pathways had to emerge fully operational, all at once; a stepwise evolutionary origin is not possible. Furthermore, knockout experiments on all components of the flagellum have shown that the flagellum is irreducibly complex.

Conclusion: Unless someone can falsify the prediction and point out a non-intelligent source for the information found in the cell, the high levels of instructional, complex coded information, the irreducibly complex and interdependent molecular systems, and the complex metabolic circuits and biosynthetic pathways are best explained by the action of an intelligent agent.




Proponents of evolution frequently argue that intelligent design is not science, since it doesn't make predictions. Following is a list of predictions made by intelligent design, and their confirmation:

High prescriptive information content will be found throughout the genome – (already proven)

Laws of chemistry and physics, which follow exact statistical, thermodynamic, and spatial laws, are totally inadequate for generating complex functional information or the systems that process that information using prescriptive algorithmic information. Organization requires control, which requires formalism as a reality. Each protein is currently the result of the execution of a real computer program running on the genetic operating system.

http://reasonandscience.heavenforum.org/t2110-what-might-be-a-protocells-minimal-requirement-of-parts
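As an illustration of the "program execution" analogy above (not of any specific proposed mechanism), here is a minimal transcription-and-translation sketch in Python; the gene and the reduced codon table are hypothetical and cover only the codons used:

# DNA -> mRNA -> protein, in miniature. Hypothetical gene; reduced codon table.
CODON_TABLE = {"AUG": "M", "GCU": "A", "AAA": "K", "UGA": "*"}

def transcribe(dna):
    # Coding strand to mRNA: T becomes U.
    return dna.replace("T", "U")

def translate(mrna):
    # Read triplets until a stop codon; return a one-letter amino acid string.
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

gene = "ATGGCTAAATGA"                 # hypothetical gene: Met-Ala-Lys-STOP
print(translate(transcribe(gene)))   # MAK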

Machine-like irreducibly complex structures will be found – (already proven; K. Miller's poor rebuttal is no refutation at all)


High-information-content, machine-like, irreducibly complex and interdependent structures, of which photosynthesis, the eye, the human body, nitrogenase, the ribosome, the cell, RuBisCO, photosystem II, the oxygen-evolving complex, etc. are prime examples, are commonly found in nature.
Since evolution is unable to provide an adaptive advantage at each evolutionary step, and is unable to select for it: 1) Darwinism's prediction is falsified; 2) design's prediction is confirmed.

http://reasonandscience.heavenforum.org/t2166-a-list-of-irreducible-complex-systems


Forms will be found in the fossil record that appear suddenly and without any precursors – (already known)


"A record of pre-Cambrian animal life, it appears, simply does not exist. Why this lamentable blank? Various theories have been proposed; none is too satisfactory. It has been suggested, for example, that all the Pre-Cambrian sediments were deposited on continental areas, and the absence of fossils in them is due to the fact that all the older animals were seadwellers. But that all these older sediments were continental is a theory which opposes, without proof, everything we know of deposition in later times. Again, it is suggested that the Pre-Cambrian seas were poor in calcium carbonate, necessary for the production of preservable skeletons; but this is not supported by geochemical evidence. Yet again, it is argued that even though conditions were amenable to the formation of fossilizable skeletal parts, the various phyla only began to use these possibilities at the dawn of the Cambrian. But it is, a priori, hard to believe that the varied types present in the early Cambrian would all have, so to speak, decided to put on armour simultaneously. And, once again, it has been argued that the whole evolution of multicellular animals took place with great rapidity in late Pre-Cambrian times, so that a relatively short gap in rock deposition would account for the absence of any record of their rise. Perhaps; but the known evolutionary rate in most groups from the Cambrian on is a relatively leisurely one, and it is hard to convince oneself that a sudden major burst of evolutionary advance would be so promptly followed by a marked 'slowdown'. All in all, there is no satisfactory answer to the Pre-Cambrian riddle."

Romer Alfred S. [late Professor of Zoology, Museum of Comparative Zoology, Harvard University], "The Procession of Life," The World Publishing Co: Cleveland OH, 1968, pp.19-20.

http://reasonandscience.heavenforum.org/t1701-does-fossil-record-support-the-evolution-model-of-gradual-and-small-changes

Genes and functional parts will be re-used in different unrelated organisms – (already proven)

The insect eye and the vertebrate eye are two examples of structures said to be analogous (analogous structures are similar or resembling in certain respects, e.g. in function or in appearance, but not in evolutionary or developmental origin; the wings of a butterfly and the wings of a hummingbird, for example, are analogous). However, both can be shown to be based on the expression of the Pax-6 gene, and it is probable that the vertebrate and insect (and cephalopod) eyes are the modified descendants of a basic metazoan photoreceptive cell that was regulated by Pax-6.

Research at the molecular level has failed to demonstrate the expected correspondence between gene product changes and the organismal changes predicted by evolution.
Evolution by DNA mutations 'is largely uncoupled from morphological evolution'

Some regulatory genes that have similar DNA sequences are found to regulate similar structures in different phyla where those structures are thought to have "evolved" independently. These homologous genes that regulate analogous structures might encourage the Darwinist to reconsider whether those structures might actually be homologous due to common ancestry. However, in consideration of the evidence that different phyla do not have common ancestors, these "homologies of process" are better explained as evidence of intelligent design, where the designer reused the same control mechanism for the development of similar structures in unrelated organisms.


http://reasonandscience.heavenforum.org/t2191-the-developmental-genetic-toolkit-and-the-molecular-homologyanalogy-paradox


The genetic code will NOT contain much discarded genetic baggage code or functionless “junk DNA” – (being proven over & over today)


When all sorts of peripheral genetic elements were discovered, evolutionary geneticists referred to them as “junk DNA” on the assumption that they were nothing but useless remnants left over from evolutionary predecessors. Come to find out, these regulatory elements are the key to cellular health and development, as well as the primary link to disease when not operating properly.

The massive store of apparently unused DNA components in every cell, which Richard Dawkins, incredibly, once dismissed as “99% junk”, now appears to hold multiple layers of subtle logic which are only beginning to be unravelled, with serious and long-lasting implications.



An article in the 7 September 2012 issue of Science was titled "ENCODE project writes eulogy for junk DNA". "This week, 30 research papers... sound the death knell for the idea that our DNA is mostly littered with useless bases. A decadelong project, the Encyclopedia of DNA Elements (ENCODE), has found that 80% of the human genome serves some purpose". "The ENCODE effort has revealed that a gene's regulation is far more complex than previously thought, being influenced by multiple stretches of regulatory DNA located both near and far from the gene itself and by strands of RNA not translated into proteins, so-called noncoding RNA."--

Pennisi, Elizabeth. 7 September 2012. Science, Vol. 337, pp. 1159-1161.


http://reasonandscience.heavenforum.org/t1812-junk-dna-has-function?highlight=junk+dna

Few intermediate forms will be found giving a clear and gradual pathway from one family to another – there are none so far. Most of the claimed ancestors will be shown to have serious problems – already historically proven

Michael Denton stated:

“It is still, as it was in Darwin's day, overwhelmingly true that the first representatives of all the major classes of organisms known to biology are already highly characteristic of their class when they make their initial appearance in the fossil record. This phenomenon is particularly obvious in the case of the invertebrate fossil record. At its first appearance in the ancient paleozoic seas, invertebrate life was already divided into practically all the major groups with which we are familiar today.”

Anthropologist Edmund Ronald Leach stated:

“Missing links in the sequence of fossil evidence were a worry to Darwin. He felt sure they would eventually turn up, but they are still missing and seem likely to remain so.”

One of the most famous proponents of evolution was the late Harvard paleontologist Stephen Jay Gould. But Gould admitted,

"The extreme rarity of transitional forms in the fossil record persists as the trade secret of paleontology. We fancy ourselves as the only true students of life’s history, yet to preserve our favored account of evolution by natural selection, we view our data as so bad that we never see the very process we profess to study.

http://reasonandscience.heavenforum.org/t1693-transitional-fossils

Mechanisms for error detection and correction will be abundant within the genome of all organisms – (already proven)

At least four excision repair pathways exist to repair single-stranded DNA damage (a toy sketch of the mismatch-repair principle follows the list):

Nucleotide excision repair (NER)
Base excision repair (BER)
DNA mismatch repair (MMR)
Repair through alkyltransferase-like proteins (ATLs)
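Here is the toy mismatch-repair sketch promised above. Real MMR must first identify which strand is newly synthesized (via nicks or methylation marks); this simplification, with hypothetical sequences, assumes the template strand is already known:

# Toy mismatch repair: correct every base on the new strand that fails to
# complement the template. Sequences are hypothetical, for illustration only.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def repair(template, new_strand):
    repaired = []
    for t_base, n_base in zip(template, new_strand):
        expected = PAIR[t_base]
        repaired.append(expected if n_base != expected else n_base)
    return "".join(repaired)

template = "ATGCGTAC"
new_strand = "TACGCGTC"              # two mismatches against the template
print(repair(template, new_strand))  # TACGCATG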

Natural selection cannot act without accurate replication, yet the protein machinery for the level of accuracy required is itself built by the very genetic code it is designed to protect. That's a catch-22 situation. It would have been challenging enough to explain accurate transcription and translation alone by natural means, but under UV radiation an unprotected genome would quickly have been destroyed through the accumulation of errors. So accurate replication and proofreading are required for the origin of life. How on earth could proofreading enzymes emerge, especially with this degree of fidelity, when they depend on the very information that they are designed to protect? Think about it... This is one more prima facie example of a chicken-and-egg situation. What is the alternative explanation to design? Proofreading DNA by chance? And a complex suite of translation machinery without a designer?


http://reasonandscience.heavenforum.org/t2043-dna-repair?highlight=repair

Mechanisms for *non-random* adaptations, coherent with environmental pressures, will be found (already found)


The genome has traditionally been treated as a Read-Only Memory (ROM) subject to change by copying errors and accidents. In this review, I propose that we need to change that perspective and understand the genome as an intricately formatted Read-Write (RW) data storage system constantly subject to cellular modifications and inscriptions. Cells operate under changing conditions and are continually modifying themselves by genome inscriptions. These inscriptions occur over three distinct time-scales (cell reproduction, multicellular development and evolutionary change) and involve a variety of different processes at each time scale (forming nucleoprotein complexes, epigenetic formatting and changes in DNA sequence structure). Research dating back to the 1930s has shown that genetic change is the result of cell-mediated processes, not simply accidents or damage to the DNA. This cell-active view of genome change applies to all scales of DNA sequence variation, from point mutations to large-scale genome rearrangements and whole genome duplications (WGDs). This conceptual change to active cell inscriptions controlling RW genome functions has profound implications for all areas of the life sciences.

http://reasonandscience.heavenforum.org/t1476-how-life-changes-itself-the-read-write-rw-genome?highlight=genome

So called vestigial organs will be found to have specific purpose and usefulness – (already proven)

Darwin argued in The Origin of Species that the widespread occurrence of vestigial organs -- organs that may have once had a function but are now useless -- is evidence against creation. "On the view of each organism with all its separate parts having been specially created, how utterly inexplicable is it that organs bearing the plain stamp of inutility... should so frequently occur." But such organs, he argued, are readily explained by his theory: "On the view of descent with modification, we may conclude that the existence of organs in a rudimentary, imperfect, and useless condition, or quite aborted, far from presenting a strange difficulty, as they assuredly do on the old doctrine of creation, might even have been anticipated in accordance with the views here explained."



Many of the organs that are claimed to be useless actually do have a use. Granted, many of these uses were not identified for a long time, which led to the misconception that the organs were functionless. This leaves me skeptical of other and future claims of useless organs, whether in humans or other animals. As Menton points out, “The problem with declaring any organ to be without function is discriminating between truly functionless organs and those that have functions that are simply unknown. Indeed, over the years nearly all organs once thought to be useless have been found to be functional. When we have no evidence for function of an organ, we need to bear in mind that absence of evidence is not evidence of absence” (Menton, 231).

http://reasonandscience.heavenforum.org/t1811-vestigial-organs?highlight=vestigial

Few mutations will end up being beneficial in the long run – (already proven)

Proponents of evolution maintain there must have been “beneficial” mutations on occasion to allow an uphill drift of genetic information. Although there is a small handful of mutations that make it easier for an organism to survive in an extreme environment, and that are therefore by definition “equivocally” beneficial, none are “unequivocally” beneficial or “uphill” in the sense of adding new genetic information to the gene pool.

http://reasonandscience.heavenforum.org/t1388-mutations-are-rarely-beneficial

Genetic entropy will be found to cancel out most, if not all, beneficial mutations


Ratio of beneficial vs. detrimental mutations:
There are numerous published estimates, ranging from 1/1,000 to 1/1,000,000. A 1998 paper published in Genetica suggests a beneficial mutation rate (vs. the total mutation rate) of approximately 1 in 1,000,000 (Gerrish and Lenski, 1998). Given that a significant portion, if not most, of the human genome is functional to one degree or another, mutations that are not beneficial would, to a similar degree, be functionally detrimental. In short, the ratio of beneficial to detrimental mutations is very small - most likely well below 1/1,000.



http://reasonandscience.heavenforum.org/t2208-mutation-rates




The hardware and software of the cell, evidence of design

http://reasonandscience.heavenforum.org/t2221-the-hardware-and-software-of-the-cell-evidence-of-design

In order to explain the origin of life, the origin of the physical parts of the cell (DNA, RNA, organelles, proteins, enzymes, etc.) must be explained, along with the origin of the information and the various code systems of the cell. The following excerpt will elucidate why the origin of both the software and the hardware is best explained through the action of an intentional creator.

Replication, upon which mutations and natural selection act, could not begin before life started and cells began to self-replicate. According to geneticist Michael Denton, the break between the nonliving and the living world ‘represents the most dramatic and fundamental of all the discontinuities of nature.’ Before that remarkable event, a fully operating cell had to be in place: various organelles, enzymes, proteins, DNA, RNA, tRNA, mRNA, and an extraordinarily complex machinery, that is: a complete DNA replication machinery; topoisomerases for replication and chromosome segregation; a DNA repair system; RNA polymerase and transcription factors; a fully operational ribosome for translation, including 20 aminoacyl-tRNA synthetases, tRNAs, and the complex machinery to synthesize them; proteins for post-translational modifications and chaperones for the correct folding of a series of essential proteins; FtsZ microfilaments for cell division and the formation of cell shape; a transport system for proteins; a complex metabolic system consisting of several different enzymes for energy generation; lipid biosynthesis to make the cell membrane; and machinery for nucleotide synthesis.

This constitutes a minimal set of basic parts, superficially described. If one, just ONE, of these parts is missing, the cell will not operate. That constitutes an interdependent, interlocked, and irreducibly complex biological system of extraordinary complexity, which had to arise ALL AT ONCE. No step-by-step buildup over a long period of time is possible.

http://reasonandscience.heavenforum.org/t2110-what-might-be-a-protocells-minimal-requirement-of-parts#3797

That constitutes a formidable catch-22 problem.

Biochemist David Goodsell describes the problem: "The key molecular process that makes modern life possible is protein synthesis, since proteins are used in nearly every aspect of living. The synthesis of proteins requires a tightly integrated sequence of reactions, most of which are themselves performed by proteins." He continues: this "is one of the unanswered riddles of biochemistry: which came first, proteins or protein synthesis? If proteins are needed to make proteins, how did the whole thing get started?"45 The end result of protein synthesis is required before it can begin. To make clearer what we are talking about:

That chain is constituted by: initiation of transcription, capping, elongation, splicing, cleavage, polyadenylation and termination, export from the nucleus to the cytosol, initiation of protein synthesis (translation), completion of protein synthesis, and protein folding. In order for evolution to work, this robot-like machinery and assembly line must be in place, fully operational.
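
Here is a toy sketch of the dependency chain just listed, under the stated all-or-nothing assumption: gene expression modeled as a strict pipeline in which a product appears only if every stage is present. The stage names follow the list above; everything else (function name, payload strings) is illustrative only.

STAGES = [
    "transcription initiation", "capping", "elongation", "splicing",
    "cleavage/polyadenylation/termination", "nuclear export",
    "translation initiation", "translation completion", "protein folding",
]

def express(gene, available):
    # Run the pipeline; a single missing stage yields no product at all.
    product = gene
    for stage in STAGES:
        if stage not in available:
            return None  # the chain halts with no partial product
        product = product + " -> " + stage
    return product

print(express("geneX", set(STAGES)) is not None)      # True: all stages present
print(express("geneX", set(STAGES) - {"splicing"}))   # None: one stage missing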

Jacques Monod noted: "The code is meaningless unless translated. The modern cell's translating machinery consists of at least fifty macromolecular components which are themselves coded in DNA: the code cannot be translated otherwise than by products of translation." (Scientists now know that translation actually requires more than a  hundred proteins.)

http://reasonandscience.heavenforum.org/t2059-catch22-chicken-and-egg-problems?highlight=catch22

Furthermore, to build up this system, the following conditions on the primordial Earth would have to be met:

(Agents Under Fire: Materialism and the Rationality of Science, pp. 104-105, Rowman & Littlefield, 2004. HT: ENV.)

Availability. Among the parts available for recruitment to form the system, there would need to be ones capable of performing the highly specialized tasks of individual parts, even though all of these items serve some other function or no function.
Synchronization. The availability of these parts would have to be synchronized so that at some point, either individually or in combination, they are all available at the same time.
Localization. The selected parts must all be made available at the same ‘construction site,’ perhaps not simultaneously but certainly at the time they are needed.
Coordination. The parts must be coordinated in just the right way: even if all of the parts of a system are available at the right time, it is clear that the majority of ways of assembling them will be non-functional or irrelevant.
Interface compatibility. The parts must be mutually compatible, that is, ‘well-matched’ and capable of properly ‘interacting’: even if sub systems or parts are put together in the right order, they also need to interface correctly.

http://reasonandscience.heavenforum.org/t1468-irreducible-complexity#2133

Fred Hoyle's example is not far-fetched but rather an excellent illustration. If it is ridiculous to think that a perfectly operational 747 jumbo jet could come into existence via a lucky accident, then it is likewise illogical to think that an organism as sophisticated as the first living cell could assemble by chance. It is even more absurd to think that a cell formed by chance would also have the capability to reproduce. Life cannot come from non-life even if given infinite time and opportunities. If life could spontaneously arise from non-life, then it should still be doing so today. Hence the law of biogenesis: the principle stating that life arises from pre-existing life, not from non-living matter. Life is clearly best explained through the willful action of an extraordinarily intelligent and powerful designer.

http://reasonandscience.heavenforum.org/t1279p30-abiogenesis-is-impossible#4171

That is the hardware part, which you can compare to a computer with its hard disk, case, CPU, etc.

Secondly, you need coded, specified, complex information. That's the software, and it constitutes the second major hurdle that buries any naturalistic just-so story. Consider the following paper:

Origin and evolution of the genetic code: the universal enigma
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3293468/

In our opinion, despite extensive and, in many cases, elaborate attempts to model code optimization, ingenious theorizing along the lines of the coevolution theory, and considerable experimentation, very little definitive progress has been made.

Summarizing the state of the art in the study of the code evolution, we cannot escape considerable skepticism. It seems that the two-pronged fundamental question: “why is the genetic code the way it is and how did it come to be?”, that was asked over 50 years ago, at the dawn of molecular biology, might remain pertinent even in another 50 years. Our consolation is that we cannot think of a more fundamental problem in biology.

http://reasonandscience.heavenforum.org/t2001-origin-and-evolution-of-the-genetic-code-the-universal-enigma

The genetic code is nearly optimal for allowing additional information within protein-coding sequences
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1832087/

DNA sequences that code for proteins need to convey, in addition to the protein-coding information, several different signals at the same time. These “parallel codes” include binding sequences for regulatory and structural proteins, signals for splicing, and RNA secondary structure. Here, we show that the universal genetic code can efficiently carry arbitrary parallel codes much better than the vast majority of other possible genetic codes. This property is related to the identity of the stop codons. We find that the ability to support parallel codes is strongly tied to another useful property of the genetic code—minimization of the effects of frame-shift translation errors. Whereas many of the known regulatory codes reside in nontranslated regions of the genome, the present findings suggest that protein-coding regions can readily carry abundant additional information.

if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
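
The test described in these excerpts can be sketched computationally. Below is a minimal Monte Carlo sketch, assuming (not reproducing) the published method: amino acids are randomly reassigned to the natural code's synonymous codon blocks, and each code is scored by the average chemical change caused by all single-nucleotide misreadings. The Kyte-Doolittle hydropathy scale stands in here for the "polar requirement" measure used in the literature, so the exact proportion of random codes that beats the natural one will differ from the quoted one-in-a-million figure.

import itertools, random

BASES = "UCAG"
# Standard genetic code, codons ordered UUU, UUC, ..., GGG ('*' = stop).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(itertools.product(BASES, repeat=3), AA)}

# Kyte-Doolittle hydropathy values for the 20 amino acids.
HYDRO = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
         'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
         'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
         'Y': -1.3, 'V': 4.2}

def cost(code):
    # Mean squared hydropathy change over all single-base substitutions.
    total, n = 0.0, 0
    for codon, aa in code.items():
        if aa == '*':
            continue
        for pos in range(3):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mut = code[codon[:pos] + b + codon[pos + 1:]]
                if mut != '*':
                    total += (HYDRO[aa] - HYDRO[mut]) ** 2
                    n += 1
    return total / n

def random_code():
    # Shuffle which amino acid each synonymous block of the code encodes.
    aas = sorted(set(AA) - {'*'})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: (perm[a] if a != '*' else '*') for c, a in CODE.items()}

natural = cost(CODE)
better = sum(cost(random_code()) < natural for _ in range(2000))
print(f"natural code cost: {natural:.2f}")
print(f"random codes beating it: {better} out of 2000")

How close the count comes to the quoted one-in-a-million obviously depends on the error measure and the weightings chosen; the sketch only shows the shape of the comparison.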

Fazale Rana wrote in his book The Cell's Design: In 1968, Nobel laureate Francis Crick argued that the genetic code could not undergo significant evolution. His rationale is easy to understand. Any change in codon assignments would lead to changes in amino acids in every polypeptide made by the cell. This wholesale change in polypeptide sequences would result in a large number of defective proteins. Nearly any conceivable change to the genetic code would be lethal to the cell.

Even if the genetic code could change over time to yield a set of rules that  allowed for the best possible error-minimization capacity, is there enough time for this process to occur? Biophysicist Hubert Yockey addressed this question. He determined that natural selection would have to explore 1.40 x 10^70 different genetic codes to discover the universal genetic code found in nature. The maximum time available for it to originate was estimated at 6.3 x 10^15 seconds. Natural selection would have to evaluate roughly 10^55 codes per second to find the one that's universal. Put simply, natural selection lacks the time necessary to find the universal genetic code.
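
A quick back-of-the-envelope check of the arithmetic as quoted:

# Yockey's figures as quoted above
codes_to_search = 1.40e70   # genetic codes to explore
time_available  = 6.3e15    # seconds available
print(f"{codes_to_search / time_available:.2e} codes per second")
# -> 2.22e+54 evaluations per second, matching the quoted
#    "roughly 10^55" to within an order of magnitude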

http://reasonandscience.heavenforum.org/t1404-the-genetic-code-is-nearly-optimal-for-allowing-additional-information-within-protein-coding-sequences

The cell converts the information carried in an mRNA molecule into a protein molecule. This feat of translation was a focus of attention of biologists in the late 1950s, when it was posed as the “coding problem”: how is the information in a linear sequence of nucleotides in RNA translated into the linear sequence of a chemically quite different set of units—the amino acids in proteins?

The first scientist after Watson and Crick to propose a solution to the coding problem, that is, the relationship between DNA structure and protein synthesis, was the Russian physicist George Gamow. Gamow published in the October 1953 issue of Nature a solution called the “diamond code”, an overlapping triplet code based on a combinatorial scheme in which 4 nucleotides arranged 3-at-a-time would specify 20 amino acids. Somewhat like a language, this highly restrictive code was primarily hypothetical, based on then-current knowledge of the behavior of nucleic acids and proteins.
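
The combinatorial scheme mentioned here is easy to reproduce. As the diamond code is commonly recounted, its symmetries make the amino acid depend only on the unordered combination of three bases, and the count of such combinations, C(4+3-1, 3) = 20, exactly matches the number of amino acids, Gamow's "magic twenty". A two-line check:

from itertools import combinations_with_replacement, product

print(len(list(product("ACGU", repeat=3))))                 # 64 ordered triplets
print(len(list(combinations_with_replacement("ACGU", 3))))  # 20 unordered combinations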

The concept of coding applied to genetic specificity was somewhat misleading, as translation between the four nucleic acid bases and the 20 amino acids would obey the rules of a cipher instead of a code. As Crick acknowledged years later, in linguistic analysis, ciphers generally operate on units of regular length (as in the triplet DNA scheme), whereas codes operate on units of variable length (e.g., words, phrases). But the code metaphor worked well, even though it was literally inaccurate, and in Crick’s words, “‘Genetic code’ sounds a lot more intriguing than ‘genetic cipher’.”

Question: how did the translation of the triplet anticodon to amino acids, and its assignment, arise? There is no physical affinity between the anticodon and the amino acid. What must be explained is the arrangement of the codon "words" in the standard codon table, which is highly non-random, redundant, and optimal, and serves to translate the information into the amino acid sequence to make proteins; and the origin of the assignment of the 64 triplet codons to the 20 amino acids, that is, the origin of their translation. The origin of an alphabet through the triplet codons is one thing, but on top of that, it has to be translated into another "alphabet" constituted by the 20 amino acids. That is like explaining the origin of the capability to translate English into Chinese. We have to establish both the English and the Chinese languages and symbols first, in order to know their equivalence. That is a mental process.
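
The arbitrariness of the assignment is easy to see when translation is written down as what it operationally is, a lookup table: nothing about the key predicts the value. The dictionary below is a small excerpt of the standard codon table (the full table has 64 entries), and the sample mRNA is chosen to use only the listed codons.

# Excerpt of the standard genetic code (illustrative subset only)
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UAA": "STOP",
}

def translate(mrna):
    # Read the message three bases at a time and look each triplet up.
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i+3]]
        if aa == "STOP":
            break
        peptide.append(aa)
    return peptide

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']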

http://reasonandscience.heavenforum.org/t2057-origin-of-translation-of-the-4-nucleic-acid-bases-and-the-20-amino-acids-and-the-universal-assignment-of-codons-to-amino-acids

The facts presented here are a death blow for naturalism. Game over. Truly.



