ElShamah - Reason & Science: Defending ID and the Christian Worldview

Perguntas .... (Questions ....)


[Page 3 of 19]

51 - Re: Perguntas .... - Wed Jun 23, 2021 8:10 am
Otangelo (Admin)

Factory and machine planning and design, and what it tells us about cell factories and molecular machines

https://reasonandscience.catsboard.com/t2245-factory-and-machine-planning-and-design-and-what-it-tells-us-about-cell-factories-and-molecular-machines

Some steps to consider in regard to factory planning, design, and operation

Each of the following steps requires INTELLIGENCE:

Choosing manufacturing and factory location
Selecting the morphology of factory types
Factory planning
Factory design
Information management within factory planning and design
Factory layout planning
Equipment supply
Process planning
Production planning and control
Establishing various internal and external communication networks
Establishing quantity and variant flexibility
Planning either a rigid or flexible volume concept, depending on what is required
Establishing networking and cooperation
Establishing modular organization
Size and internal factory space organization, compartmentalization, and layout
Planning of a recycling economy
Waste management
Controlled factory implosion programming

All these procedures and operational steps are required and implemented in human factories, and likewise in biological cells, which operate like factories. It takes a lot of faith to believe that human factories require intelligence, but that cells, far more complex and elaborate, require no intelligence to make them, nor intelligent programming to work in a self-sustaining and self-replicating manner, and to self-destruct when required.

Molecular machines: 

The most complex molecular machines are proteins found within cells. 1 These include motor proteins, such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which produces the axonemal beating of motile cilia and flagella. These proteins and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed.

Probably the most significant biological machine known is the ribosome. Other important examples include ciliary mobility. A high-level-abstraction summary is that, "[i]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines." Flexible linker domains allow the connecting protein domains to recruit their binding partners and induce long-range allostery via protein domain dynamics. 

Engineering design process

The engineering design process is a methodical series of steps that engineers use in creating functional products and processes. 2

Every step of the engineering design process requires INTELLIGENCE.

https://reasonandscience.catsboard.com

52 - Re: Perguntas .... - Wed Jun 23, 2021 8:42 am
Otangelo (Admin)

The factory maker argument

https://www.youtube.com/watch?v=yUyzXe1mzcM&t=101s

1. Blueprints, instructional information, and master plans, and the making of complex machines and factories based upon them, are always traced back to an intelligent source that made them for purposeful, specific goals.

2. Biological cells are a factory park of unparalleled, gigantic complexity and purposeful adaptive design: interlinked high-tech factories, fully automated and self-replicating, directed by genes and epigenetic languages and signalling networks.

3. The blueprint and instructional information stored in DNA and epigenetics directs the making of biological cells and organisms. The origin of both is, therefore, best explained by an intelligent designer who created life for his own purposes.

Herschel 1830 (1987), p. 148:
“If the analogy of two phenomena be very close and striking, while, at the same time, the cause of one is very obvious, it becomes scarcely possible to refuse to admit the action of an analogous cause in the other, though not so obvious in itself.”

https://reasonandscience.catsboard.com

54 - Re: Perguntas .... - Wed Jun 23, 2021 8:43 am
Otangelo (Admin)

The factory maker argument

Someone wrote that the following argument signals the death knell of atheism.
From Wikipedia:
“A Death Knell was the ringing of a bell immediately after a death to announce it. Historically it was the second of three bells rung around death; the first being the "Passing Bell" to warn of impending death, and the last was the "Lych Bell", or "Corpse Bell", which survives today as the Funeral toll.”

The factory maker argument

1. Blueprints, instructional information, and master plans, and the making of complex machines and factories based upon them, are always traced back to an intelligent source that made them for purposeful, specific goals.

2. Biological cells are a factory park of unparalleled, gigantic complexity and purposeful adaptive design: interlinked high-tech factories, fully automated and self-replicating, directed by genes and epigenetic languages and signalling networks.

3. The blueprint and instructional information stored in DNA and epigenetics directs the making of biological cells and organisms. The origin of both is, therefore, best explained by an intelligent designer who created life for his own purposes.

Herschel 1830 (1987), p. 148:
“If the analogy of two phenomena be very close and striking, while, at the same time, the cause of one is very obvious, it becomes scarcely possible to refuse to admit the action of an analogous cause in the other, though not so obvious in itself.”

Proteins are the result of the DNA blueprint, which specifies the complex sequence necessary to produce functional 3D folds of proteins. Both improbability and specification are required to justify an inference of design.
1. According to the latest estimate of a minimal protein set for the first living organism, the requirement would be about 560 proteins; this would be the absolute minimum to keep the basic functions of a cell alive.
2. According to the protein-length distributions for the three domains of life, the average between prokaryotic and eukaryotic cells is about 400 amino acids per protein. 8
3. Each of the 400 positions in the amino acid polypeptide chain could be occupied by any one of the 20 amino acids used in cells, so if we suppose that proteins emerged randomly on the prebiotic earth, then the odds of getting one that would fold into a functional 3D protein would be 1 in 20^400, or about 1 in 10^520. A truly enormous, super-astronomical number.
4. Since we need 560 proteins in total to make a first living cell, we would have to repeat the shuffle 560 times to get all the proteins required for life. The probability would therefore be (1/10^520)^560, far beyond 1 in 10^200,000. (A proteome set with 239 proteins yields odds of approximately 1 in 10^119,614.) 7
Granted, the calculation does not take into consideration, nor give information on, the probabilistic resources available. But the sheer gigantic number of possibilities throws any reasonable chance out of the window.

If we sum up the total number of amino acids for a minimal cell, there would have to be 560 proteins x 400 amino acids = 224,000 amino acids, which would have to be bonded in the right sequence, choosing for each position amongst 20 different amino acids, and selecting only the left-handed ones while sorting out the right-handed ones. That means each position would have to be selected correctly from 40 variants: 1 right selection out of 40^224,000 possibilities, or roughly 10^359,000. Obviously, a gigantic number far above any realistic probability of occurring by unguided events. Even a trillion universes, each hosting a trillion planets, each shuffling a trillion times in a trillionth of a second, continuously for a trillion years, would not be enough. Such astronomically, unimaginably gigantic odds are in the realm of the utterly impossible.
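
To make the arithmetic explicit, here is a minimal Python sketch assuming the post's own figures (560 proteins, 400 amino acids each, 20 amino acids per position, 40 variants when handedness is counted). It works with logarithms instead of forming the huge numbers directly.

import math

N_PROTEINS = 560          # assumed minimal proteome size (figure from the post)
LENGTH = 400              # assumed average protein length in amino acids
ALPHABET = 20             # amino acids used in cells
WITH_CHIRALITY = 40       # 20 amino acids x 2 handedness variants

# Odds of hitting one specific 400-residue sequence: 1 in 20^400
log10_one_protein = LENGTH * math.log10(ALPHABET)
print(f"one protein: 1 in 10^{log10_one_protein:.0f}")       # ~10^520

# Repeating the draw 560 times multiplies the exponents: (1/10^520)^560
log10_proteome = N_PROTEINS * log10_one_protein
print(f"560 proteins: 1 in 10^{log10_proteome:.0f}")         # ~10^291431

# Treating all 224,000 positions as one draw from 40 variants each
log10_chiral = N_PROTEINS * LENGTH * math.log10(WITH_CHIRALITY)
print(f"with chirality: 1 in 10^{log10_chiral:.0f}")         # ~10^358861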

https://www.youtube.com/watch?v=yUyzXe1mzcM&t=101s

https://reasonandscience.catsboard.com

55 - Re: Perguntas .... - Wed Jun 23, 2021 8:44 am
Otangelo (Admin)

Blueprints and factories are always and only purposefully built.
Cells are factories made based on the blueprint stored in DNA.
Therefore, cells built based on the blueprint stored in DNA were made with a purpose.
Only conscious agents act with purpose and foresight.
Therefore, biological cells and life were made by a conscious agent with purpose.
That agent was, with the highest probability, God.

The Golgi apparatus, another membranous structure embedded in the cytoplasm, is also involved in the processing of macromolecules made within the cell. Its special properties are for modifying cell products so that they can be exported from the cell. In our chemical factory analogy, they are the packaging and exporting department.

Membranes represent the walls of the cellular factory. In our chemical factory analogy, membranes control what comes into the factory and what leaves. They divide the factory into departments and control the transfer of material between departments. They also enable the machines of the factory (enzymes) to be organised into highly efficient production lines.

Enzymes are indeed rather like the workers in a large complex industrial process. Each is designed to carry out a specific task in a specific area of the factory.

An amazing feature of these cells is that, in the presence of a plentiful supply of food and under optimal conditions, they may grow and divide every twenty minutes or so. In principle, a single cell could give rise to 4 x 10^9 cells within about 11 hours! You will learn in later chapters that these seemingly simple cells are composed of many very complex chemicals. The rapid multiplication of cells therefore represents an enormous capability to synthesise chemicals. Prokaryotic cells are amazing chemical factories.
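
As a quick check of the growth arithmetic, a short Python sketch assuming one division every 20 minutes and a single starting cell:

DOUBLING_TIME_MIN = 20     # assumed division time, minutes
HOURS = 11

doublings = HOURS * 60 // DOUBLING_TIME_MIN     # 33 doublings in 11 hours
cells = 2 ** doublings
print(doublings, f"{cells:.1e}")                # 33 doublings -> ~8.6e9 cells
# The 4 x 10^9 quoted above corresponds to 32 doublings (about 10.7 hours).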

https://reasonandscience.catsboard.com

56 - Re: Perguntas .... - Wed Jun 23, 2021 9:18 am
Otangelo (Admin)

Question: what is the better explanation for the origin of the following things?

https://reasonandscience.catsboard.com/t2245p25-abiogenesis-the-factory-maker-argument#7904

- factory portals with fully automated security checkpoints and control
- factory compartments
- a library index and fully automated information classification, storage, and retrieval program
- computer hardware
- software, a language using signs and codes like the alphabet, an instructional blueprint,
- information retrieval systems
- information transmission systems
- translation systems
- complex robotlike machines
- taxis adapted for cargo transport and delivery, with GPS systems
- highways
- tagging programs informing taxis where to transport goods
- factory assembly lines
- error check and repair systems
- recycling machines
- waste grinders and management
- power generating plants
- power turbines
- electric circuits

Chance, or intelligent design ?

The cell is a factory - adios materialism.

1. Computer hard drives with a high capacity for digital data storage; software programs based on languages using statistics, syntax, semantics, pragmatics, and apobetics; the elaboration of complex instructional blueprints through those software programs; and data transmission systems (encoding, sending, decoding), all operated through computers and interlinked computer networks, which prescribe, drive, direct, operate, and control interlinked compartmentalized factory parks making products for specific purposes - full of autonomous, robotlike high-tech production lines, high-efficiency power plants, and complex high-tech robots with autoregulation and feedback loops, producing products with minimal error rates that are transported by GPS-driven transport carriers to their destination, all powered by energy from high-speed turbines and power plants - are always set up by intelligent agents designing those things for purposeful goals.

2. Science has unraveled that cells, strikingly, contain and operate through all those things. Cells are cybernetic, ingeniously crafted cities full of factories. Cells contain information, which is stored in genes (books) and libraries (chromosomes). Cells have superb, fully automated information classification, storage, and retrieval programs (gene regulatory networks) which orchestrate strikingly precise and regulated gene expression. Cells also contain hardware - a masterful information-storage molecule (DNA) - and software, more efficient than millions of alternatives (the genetic code) - ingenious information encoding, transmission, and decoding machinery (RNA polymerase, mRNA, the ribosome) - and highly robust signaling networks (hormones and signaling pathways) - awe-inspiring error-check and repair systems for data (for example the mind-boggling endonuclease III, which error-checks and repairs DNA through electric scanning). These information systems prescribe, drive, direct, operate, and control interlinked, compartmentalized, self-replicating cell factory parks that perpetuate and sustain life. Large high-tech multimolecular robotlike machines (proteins) and factory assembly lines of striking complexity (fatty acid synthase, non-ribosomal peptide synthetase) are interconnected into large functional metabolic networks. In order to be employed at the right place, once synthesized, each protein is tagged with an amino acid sequence, and clever molecular taxis (the motor proteins dynein and kinesin, and transport vesicles) load and transport them to the right destination on awe-inspiring molecular highways (tubulins, actin filaments). All this, of course, requires energy. Responsible for energy generation are high-efficiency power turbines (ATP synthase), superb power-generating plants (mitochondria), and electric circuits (highly intricate metabolic networks). When something goes haywire, fantastic repair mechanisms are in place. There are protein-folding error-check and repair machines (chaperones), and if molecules become non-functional, advanced recycling methods take care of them (endocytic recycling), along with waste grinders and management (proteasome garbage grinders).

3. Chemist Wilhelm Huck, professor at Radboud University, Netherlands: "A working cell is more than the sum of its parts. A functioning cell must be entirely correct at once, in all its complexity." Cells containing all those things are irreducibly complex. Without energy, information, or the basic building blocks fully synthesized, there would be no life. All this is best explained as the product of a super-intellect, an agency equipped with unfathomable intelligence - through the direct intervention, creative force, and activity of an intelligent cognitive agency, a powerful creator.

https://reasonandscience.catsboard.com

57 - Re: Perguntas .... - Wed Jun 23, 2021 9:21 am
Otangelo (Admin)

God's existence is based on gaps in knowledge, you say?

Assignment must be done by intelligence
Architecture requires an architect.
Communication systems require network engineers.
Computers require computer makers
Creating a language requires intelligence
Creating Instructional information requires intelligent specialists 
Coordination requires a coordinator
Controlling requires intelligence
Controlled factory implosion programming requires an Explosive Safety Specialist
Electrical networks require electrical engineers
Engineering requires an engineer
Factories require factory makers
Fine-tuning requires a fine-tuner
Logistics requires a logistic specialist
Laws require a lawmaker
Life can only be created by preexisting life
Modular organization requires a modular project manager
Nanoscale technology requires nanoprocess development engineers
Orchestration requires a director
Organization requires an organizer
Product planning and control require a production control coordinator
Product Quantity and Variant Flexibility control require product management engineers
Programming languages are always set up by programmers
Regulation requires a regulator 
Recruiting requires intelligence 

Rules require a rule maker
Setting up recycling systems requires a recycling technician
Setting up power plants requires power plant systems engineers
Setting up strategies requires a strategist
Setting up switch mechanisms based on logic gates with on and off states requires an intelligent setup
Setting up transport highways requires transportation development engineers
Translation programs are always set up by translation programmers
The Universe requires a Universe creator
The making of chemistry requires a chemist
Waste disposal and management require a waste logistics manager

The above actions are all observed to occur in the natural world, and their origin is therefore best explained by a super-powerful and intelligent creator.


activating,
binding,
breaking,
coordinating,
conferring positional information,
directing,
forcing transmission,
generating,
guiding,
helping to organize,
inducing, 
informing, 
mediating, 
modulating, 
organizing, 
orienting, 
providing positioning rules, 
provoking changes, 
promoting, 
regulating, 
signaling, 
stretching, 
specifying, 

are all actions performed in cells during the formation and morphogenesis of single eukaryotic cells, their structure and shape, and of tissues.

https://reasonandscience.catsboard.com

58 - Re: Perguntas .... - Wed Jun 23, 2021 9:22 am
Otangelo (Admin)

Complex artifacts made by man for specific purposes almost always require a manufacturing and assembly process that is more complex than the device to be made itself. I don't know of ANY factory that makes products that are equally complex, or more complex, than the factory and the efforts to produce it. If we quantify the information, energy, and physical parts (machines, etc.) and compare them to the product made, the former is always more complex than the latter. But remarkably, in life, in a VERY dramatic way, the opposite is the case. One single fertilized human egg stores the information to make an organism which, when grown up, is made of 37 trillion cells!! Think about that!! Science is not even close to unraveling how this is possible. And while human factories require a lot of human intervention, cell factories operate 100% autonomously. Self-replication is the epitome of manufacturing sophistication. The machine at the core of the process in biology is the ribosome. It requires several hundred assembly machines, which make the machines which make the subunits of the ribosome. Once each subunit is made, it goes through a very delicate, precise, and orchestrated test-drive process. Long-range communication between the assembly machines even monitors whether the newly synthesized ribosome subunit was produced properly, and only if the test drive is successful is the subunit incorporated in the maturation of the ribosome. If not, proteasome grinders are waiting to recycle the misfolded product, which would otherwise accumulate and poison the cell. Once the assembly factors have done their job, they are re-used in the next round to make the next ribosome. All this had to exist before life started, so evolution was not the hero on the block. So one has either to believe that all this enormously complex machine-building emerged spontaneously for no reason at all, or that there was a super-intellect that conceptualized life and instantiated it through a far superior intellectual capacity than we humans have. Either chance or design. What is the superior, more rational explanation?

We, as intelligent beings, are only able to make devices using an assembly process that is more complex than the device produced, while nature can make an equally complex, self-replicating product, like a single cell. But even a pluripotent cell like a fertilized human egg can give rise to a MUCH more complex organism, like a grown human with 37 trillion cells. So we go from complex => simple, while nature is able to go from simple (one cell) to trillions - the second, of course, being a far more advanced production process. Human factories make less complex products. God's factories make more complex products. While our factories require constant mental input (factory workers), God's factories (cells) are fully automated and work robot-like, in an independent manner, for millennia by self-replication.

https://reasonandscience.catsboard.com

59 - Re: Perguntas .... - Wed Jun 23, 2021 9:23 am
Otangelo (Admin)

When you see a blueprint that contains the instructional assembly information to make a factory and all the machines in it, and subsequently you see the factory made from that blueprint, and you have no information whatsoever about who made either the blueprint or the factory, and you have two competing hypotheses - one, that intelligent beings made them; the other, that they are the product of random chance - what is the better explanation?

https://reasonandscience.catsboard.com

60 - Re: Perguntas .... - Wed Jun 23, 2021 9:27 am
Otangelo (Admin)

Flagellum:
https://www.youtube.com/watch?v=Ey7Emmddf7Y
Spliceosome:
https://www.youtube.com/watch?v=FVuAwBGw_pQ

https://reasonandscience.catsboard.com

61 - Re: Perguntas .... - Wed Jun 23, 2021 9:28 am
Otangelo (Admin)

Bacteriophage:
https://www.youtube.com/watch?v=Dfl4F1R0Hv0
https://www.youtube.com/watch?v=qyaM577oaG4
https://www.youtube.com/watch?v=4PnPNkkfCt4
Vesicular transport:
https://www.youtube.com/watch…
https://www.youtube.com/watch?v=eRslV6lrVxY
https://www.youtube.com/watch?v=q-Er5sEaj2U
https://www.youtube.com/watch?v=u2lieHDDYPY

https://reasonandscience.catsboard.com

62 - Re: Perguntas .... - Wed Jun 23, 2021 9:28 am
Otangelo (Admin)

Kinesin-'molecular truck':
https://www.youtube.com/watch?v=y-uuk4Pr2i8
http://bioslawek.files.wordpress.com/2014/02/t1.jpg…
Mechanical stress activated channels (mechanoreceptors) in the auditory cells (the hairy cells):
https://www.youtube.com/watch?v=1VmwHiRTdVc
http://www.cochlea.eu/…/ouverture-des-canaux-de-transductio…
http://bioslawek.files.wordpress.com/…/cellule-ciliee-d-une…
Ribosomes:
https://www.youtube.com/watch?v=Jml8CFBWcDs
https://www.youtube.com/watch?v=q_n0Ij3K_Ho
https://www.youtube.com/watch?v=ID7tDAr39Ow
https://www.youtube.com/watch?v=D5vH4Q_tAkY

1. https://reasonandscience.catsboard.com/t2130-peptide-bonding-of-amino-acids-to-form-proteins-and-its-origins#6664

https://reasonandscience.catsboard.com

63 - Re: Perguntas .... - Fri Jun 25, 2021 7:31 am
Otangelo (Admin)

http://www.reasons.org/articles/fundamental-forces-show-greater-fine-tuning

A team of Austrian, German, and Hungarian astrophysicists has recently added evidence to the case for divine design, sweeping aside a recent challenge to the design argument I present in The Creator and the Cosmos.1 Their research focused on two of the four fundamental forces of physics: 1) electromagnetism, which governs the degree to which atomic nuclei hold on to their electrons, and 2) the strong nuclear force, which governs the degree to which protons and neutrons stick together in the nuclei of atoms.

The team's strategy was to construct mathematical models of red giant stars, altering (slightly) the values for the strong nuclear force and electromagnetic force constants. They discovered that even tiny increases or decreases cause problems. The adjusted red giants would produce too little carbon, too little oxygen, or too little of both oxygen and carbon for any kind of physical life to be possible anywhere in the universe. Specifically, they determined that if the value of the coupling constant for electromagnetism were four percent smaller or larger than what we observe, life would be impossible. In the case of the coupling constant for the strong nuclear force, if it were 0.5 percent smaller or larger, life would be impossible.

The team's achievement helps relieve a criticism of the design argument set forth in my book2 and used by others. Two years ago, (atheistic) physicist Victor Stenger commented in Skeptic magazine that not much fine-tuning at all was necessary to make long-lived stars.3 He implied that my fine-tuning claims were invalid and, thus, left me with no case for a cosmic Designer.

The new achievement discussed here establishes that rather than my design conclusion being too optimistic, it is too conservative. I might add, too, that the case for divine design never rested on just one or two features of the cosmos.

This research demonstrates how cosmic creation can be subjected to ongoing testing. If the atheists are right and Christians are wrong, the more we learn about the universe, the weaker the cosmic design evidence should become. However, if we are right and the atheists are wrong, learning about the universe should reveal more and stronger cosmic design evidence. The latter describes the trend we observe and document.4, 5.


http://www.leaderu.com/science/ross-justright.html

What about all those positively charged protons in the center of an atom?

http://www.conversantlife.com/science/what-kind-of-proofs-are-there-that-god-exists

... it is as if someone carefully prepared our world exactly with us in mind. Certain laws of nature rest within very narrowly defined parameters that allow humans to exist here.

Scientists conservatively estimate there are at least 18 physical laws that work in perfect harmony in order for the universe and Planet Earth to be suitable for complex life. For example, there are the laws of gravity, conservation of energy, thermodynamics, strong nuclear forces, electromagnetic forces, and so on. If any of these laws varied ever so slightly, life would not be possible in our universe.

Consider the strong nuclear force. This is the force that holds the nuclei—centers—of atoms together. The protons and neutrons of the atom within the nuclei exchange subparticles. The protons are then bound together by the strong force, even though their positive charges would normally repel each other. And the atom stays intact.

To see one of the results of the strong force, take the sun’s production of nuclear energy, for example. Our sun “burns” and produces energy to sustain Planet Earth by fusing hydrogen atoms together into helium. And when hydrogen fuses, 0.7 percent of its mass is converted into energy. But what if the percentage of matter converted to energy was slightly smaller? If the conversion was just 0.6 percent instead of 0.7 percent, the proton could not bond to the neutron and the universe would consist only of hydrogen. There would be no Planet Earth for us to inhabit, nor would there be a sun to warm it.

And what if the matter converted to energy was just a bit larger, say 0.8 percent? The fusion would happen so quickly that no hydrogen could survive. And this would also mean that life as we know it could not exist. Our universe is so fine-tuned that the tiny hydrogen atoms, when fusing, must give up exactly between 0.6 percent and 0.8 percent of their mass in the form of energy!
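
As a rough check of the quoted ~0.7 percent figure, here is a small Python sketch using standard atomic masses (hydrogen-1 about 1.007825 u, helium-4 about 4.002602 u); it ignores the details of the proton-proton chain and simply compares the mass going in with the mass coming out.

M_H1 = 1.007825    # atomic mass of hydrogen-1, unified atomic mass units (u)
M_HE4 = 4.002602   # atomic mass of helium-4, in u

mass_in = 4 * M_H1              # four hydrogen atoms fuse into one helium-4
mass_lost = mass_in - M_HE4     # the mass difference is released as energy
print(f"{mass_lost / mass_in:.2%}")   # ~0.71%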

There are dozens of such examples that demonstrate that our universe is finely tuned to an incredible degree. It is unthinkable that it originated “by chance.” It is as if some Intelligent Agent prepared Planet Earth with a welcome sign that said, “I made this specifically for you.”

THE TELEOLOGICAL ARGUMENT AND THE ANTHROPIC PRINCIPLE
More specifically, the values of the various forces of nature appear to be fine-tuned for the existence of intelligent life. The world is conditioned principally by the values of the fundamental constants α (the fine structure constant, or electromagnetic interaction), m_n/m_e (the proton-to-electron mass ratio), α_G (gravitation), α_W (the weak force), and α_S (the strong force). When one mentally assigns different values to these constants or forces, one discovers that in fact the number of observable universes, that is to say, universes capable of supporting intelligent life, is very small. Just a slight variation in any one of these values would render life impossible.

For example, if α_S were increased as much as 1%, nuclear resonance levels would be so altered that almost all carbon would be burned into oxygen; an increase of 2% would preclude formation of protons out of quarks, preventing the existence of atoms. Furthermore, weakening α_S by as much as 5% would unbind the deuteron, which is essential to stellar nucleosynthesis, leading to a universe composed only of hydrogen. It has been estimated that α_S must be within 0.8 and 1.2 times its actual strength or all elements of atomic weight greater than four would not have formed. Or again, if α_W had been appreciably stronger, then the Big Bang's nuclear burning would have proceeded past helium to iron, making fusion-powered stars impossible. But if it had been much weaker, then we should have had a universe entirely of helium. Or again, if α_G had been a little greater, all stars would have been red dwarfs, which are too cold to support life-bearing planets. If it had been a little smaller, the universe would have been composed exclusively of blue giants, which burn too briefly for life to develop. According to Davies, changes in either α_G or electromagnetism by only one part in 10^40 would have spelled disaster for stars like the sun. Moreover, the fact that life can develop on a planet orbiting a star at the right distance depends on the close proximity of the spectral temperature of starlight to the molecular binding energy. Were it greatly to exceed this value, living organisms would be sterilized or destroyed; but were it far below this value, then the photochemical reactions necessary to life would proceed too slowly for life to exist. Or again, atmospheric composition, upon which life depends, is constrained by planetary mass. But planetary mass is the inevitable consequence of electromagnetic and gravitational interactions. And there simply is no physical theory which can explain the numerical values of α and m_n/m_e that determine electromagnetic interaction.

 http://www.leaderu.com/offices/billcraig/docs/teleo.html


1)      Gravity is essential in the formation of stars and planets. As I discussed in a previous blog, life needs something like stars as a long-lived stable energy source. Also, as cosmologist Luke Barnes has pointed out: “if gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thank gravity, is 10^30 times greater than the average density of the universe.”

2)      The strong nuclear force is necessary to hold together the protons and neutrons in the nucleus. Without this fundamental force, no atoms would exist beyond hydrogen and thus there would be no meaningful chemistry and thus no possibility for intelligent life. The positively charged protons in the nucleus repel each other but thankfully the strong nuclear force is sufficiently stronger than electromagnetic repulsion. If the strong force acted at long ranges like gravity or electromagnetism, then no atoms would exist because it would dominate over the other forces. Barnes notes that “any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size and incapable of complexity.[1]”

3)      The electromagnetic force accounts for chemical bonding and for why electrons orbit the nucleus of atoms. Without chemistry, there is no plausible way to store and replicate information such as would be necessary for life. Light supplied by stars is also of critical importance to life in overcoming the tendency towards disorder, as dictated by the Second Law of Thermodynamics. Barnes points out that without electromagnetism, “all matter would be like dark matter, which can only form large, diffuse, roughly spherical haloes.[2]” Suppose like charges attracted and opposites repelled (in contrast with the behavior in our universe), there would be no atoms.

4)      The weak nuclear force plays a key role during core-collapse supernova[3] in the expulsion of key heavier elements, making them available for life rather than just entombed forever in dying stars. Also, the weak force enables the key proton-proton reaction which powers stars in our universe. There is a clever paper by Harnik[4] that attempts to find a life-permitting universe without the weak force but only at the expense of a “judicious parameter adjustment.” See this discussion of the additional finely-tuned constants that were necessary to compensate for the lack of a weak force.[5] Also, some physicists think that the weak force is necessary for there to be matter in our universe.[6]
https://crossexamined.org/many-changes-laws-physics-life-prohibiting/

http://worldview3.50webs.com/mathprfcosmos.html

The "Big Bang"
It has been known for about 70 years that the galaxies of the universe are moving apart and away from each other, in similar fashion to raisins moving apart and away from each other in an expanding lump of dough. In 1929, astronomer Edwin Hubble's measurements on more than 40 galaxies established that the galaxies of the universe are indeed receding away from each other at several hundred miles per second, as an explosion would propel exploded pieces from each other. That "explosion" event is now popularly called the "Big Bang" --- and this event is evidenced by left-over heat (or "background radiation") throughout the universe, which (along with much other evidence) leaves little doubt that this hot explosive event occurred.
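
For readers who want to see where "several hundred miles per second" comes from, here is an illustrative Hubble-law sketch (v = H0 x d) in Python; the value H0 ~ 70 km/s per megaparsec and the example distances are assumptions for illustration, not figures from the article.

H0 = 70.0                      # assumed Hubble constant, km/s per megaparsec
MILES_PER_KM = 0.621371

for d_mpc in (5, 10, 20):      # example galaxy distances in megaparsecs
    v_kms = H0 * d_mpc
    print(f"{d_mpc} Mpc -> {v_kms:.0f} km/s = {v_kms * MILES_PER_KM:.0f} mi/s")
# 10 Mpc -> 700 km/s, i.e. about 435 miles per second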

In addition, recent research, such as data from the "BOOMERANG experiment" (short for "Balloon Observations of Millimetric Extragalactic Radiation and Geophysics"), has determined that the geometry and "shape" of the universe is "flat" (as opposed to having "curved" space). A "flat" universe means that Euclidean geometry applies throughout space, and every "straight" line (as people normally think of straightness) in the universe does not curve with the fabric of space - even over a very long distance. It doesn't matter that gravity causes light to curve - the geometry of space is still flat. This means that there is an approximate "center" to the material universe (since there is a finite number of galaxies).

One of the implications of the universe being "flat" is that it will expand forever --and in fact, there is experimental confirmation that the universe is actually accelerating in its expansion rate. All of this means that the expansion will never reverse and bring the universe back together into a "Big Crunch," because there's not enough gravity in the mass of the universe to stop the expansion. -- Therefore, we know that the universe has not been on an endless cycle of bang, crunch, bang, crunch, etc. And because of this, we know that the universe is not an eternal entity.

In the following video, Dr. William Lane Craig discusses this current cosmological evidence:
The Evidence of Cosmology

Astrophysicists (such as Stephen Hawking) determined that the evident starting point just before the Big Bang involved something called a "singularity," which is: all the cosmos's potential mass (matter), energy, and dimensions --and time-- reduced down to an infinitely small point of zero volume. ---Thus, matter, 3-dimensional space, and time virtually did not exist before the Big Bang.

The expanding universe is an important discovery, because if we "reverse the film" of that expansion, then we arrive back at a starting-point for its beginning ...and if there is a beginning, there must logically be a "beginner" to initiate the Big Bang. The beginner of the Bang precedes and is outside of (transcends) all matter, dimensions and time. In light of this, the thoughts of many people go to the first verse of the Bible, which states, "In the beginning God created the heavens and the earth" (Gen. 1:1).

This powerful evidence contradicts worldviews and religions that posit an eternally existing universe (such as older materialism), ... or views which posit the idea of cosmic "reincarnation" with an oscillating universe that eternally expands and contracts (such as Hinduism, Buddhism, & New Age philosophies); ---but instead, --the Big Bang would support the biblical view of a transcendent God; that "the universe was formed at God's command, so that what is seen was not made out of what was visible" - Hebrews 11:3. In addition, ---unlike any other supposedly "holy writings"--- the Bible alone says that there was a "beginning of time" (2Tim. 1:9 & Titus 1:2), ---and God was causing effects before that beginning (John 17:5 & Colos. 1:16-17).

The Balance of the Bang: In order for life to be possible in the universe, the explosive power of the Big Bang needed to be extremely closely matched to the amount of mass and balanced with the force of gravity, so that the expansion-speed is very precise. This very exact expansion-speed of the universe, is called the "Cosmological Constant." If the force of the bang was slightly too weak, the expanding matter would have collapsed back in on itself before any planets suitable for life (or stars) had a chance to form, ---but if the bang was slightly too strong, the resultant matter would have been only hydrogen gas that was so diffuse and expanding so fast, that no stars or planets could have formed at all.

Science writer Gregg Easterbrook explains the required explosive power-balance of the Big Bang, saying that, "Researchers have calculated that, if the ratio of matter and energy to the volume of space ...had not been within about one-quadrillionth of one percent of ideal at the moment of the Big Bang, the incipient universe would have collapsed back on itself or suffered runaway relativity effects" (My emphasis.) (ref. G.Easterbrook, "Science Sees the Light", The New Republic, Oct.12, 1998, p.26).

In terms of the expansion rate of the universe as a result of the Big Bang: "What's even more amazing is how delicately balanced that expansion rate must be for life to exist. It cannot differ by more than one part in 10^55 from the actual rate." (My emphasis.) (Ref: H.Ross, 1995, as cited above, p.116). (Note: 10^55 is the number 1 with 55 zeros after it ---and 10^55 is about the number of atoms that make up planet earth).

THE PROBABILITY: The chances we can conservatively assign to this: It was about one chance out of 10^21 that the force of the Big Bang could have randomly been properly balanced with the mass & gravity of the universe, in order for stars and planets to form, so that life could exist here in our cosmos.

Design and Cosmology

Next --- Several of the following items deal with strengths of the four (known) basic forces of physics in the material universe, which hold everything together. Those four basic forces are: the force of gravity, the strong nuclear force, the weak nuclear force, and the electromagnetic force. The strengths of these four forces are extremely finely tuned and balanced with each other and with the amount of matter in the universe, which makes life possible in the present cosmos. ---What is the chance that such fine-tuning happened by chance? --- (Note: If a scientist can improve the accuracy in the numbers used for probabilities here, such information would be appreciated.)

1. https://en.wikipedia.org/wiki/Electromagnetism
2. THE PRIVILEGED PLANET Guillermo Gonzalez and Jay W. Richards, page 202

https://reasonandscience.catsboard.com

64 - Re: Perguntas .... - Sat Jun 26, 2021 11:24 am
Otangelo (Admin)

One of the standard answers in the repertoire of atheists who try to refute theist arguments is the claim that they are based on lack of knowledge, ignorance, and gaps in our understanding of reality. Nothing could be less true. We truly have an overwhelming amount of positive scientific evidence, more than ever before. So the ignorance is, in reality, all on the side of atheists, who are too lazy and unwilling to give honest consideration to the evidence provided. There have never been fewer reasons to stick to atheism than today.

The inference of an intelligent designer as the best explanation of origins is not based on lack of knowledge, gaps, and ignorance, but on overwhelming, positive scientific evidence.

The universe had a beginning, therefore a cause.
Physical laws had to emerge together with space and matter.
The physical universe is finely tuned to host life and therefore requires a fine-tuner.
Mental reality exists. Matter cannot produce a mind.
Biological systems appear designed. Therefore, most probably, they are designed.
DNA stores the blueprint of life. Blueprints can always be traced back to intelligence.
Complex instructional information always has a mental origin.
Biological systems work in an interdependent manner. Interdependence is a hallmark of design.
Biological cells require a minimal number of parts, which have no use by themselves, and a minimal proteome, genome, and metabolome for life to start. They are therefore irreducible, which means they had to emerge all at once, not gradually.
Life could not emerge in a gradual manner, but only all at once.
Metabolic pathways are like electronic circuits.
Consciousness, free will, moral values, and logic cannot emerge from matter.
Logic and language cannot emerge from non-life.
Factories have never been observed to self-assemble.
Biological cells are complex factories full of machines and production lines.
Abiogenesis is impossible. Even the simplest cells are too complex to emerge from unguided, random events.
The only mechanism left to explain the origin of life, once design is excluded, is spontaneous self-assembly by orderly aggregation in a sequentially correct manner without external direction. Does that make sense?
Self-replication is the epitome of manufacturing advance and achievement.
The assignment of codons to amino acids is arbitrary. An assignment is a mental activity.
Science based on methodological naturalism is unable to explain the origin of the genetic code, the genetic cipher, and the genetic and epigenetic information.
Life requires hardware and software to instruct how to build biological organisms.
Biological cells are made of diverse, interlinked, complex molecular machines and use a complex web of interconnected, cooperative cell motors and miniaturized factories (ribosomes, the endoplasmic reticulum, transcription factories, membrane factories, literal production lines).
Cells use the same information to make various products (the spliceosome).
Cells use complex error-check and repair mechanisms that had to be fully set up and operational when life began.
DNA is the most compact information storage system known.
The gene regulatory network instructs when to express the right genes.
Cells use advanced communication and signaling networks.
Cell signaling depends on the right homeostatic environment inside the cell. Calcium concentration inside the cell has to be 20,000 times lower than in the extracellular milieu; otherwise, no signaling, no life.
The cell membrane and membrane proteins are interdependent.
Bacteria and eukaryotic cells have different membranes and DNA replication machinery, which falsifies the claim of common ancestry.
The Cambrian explosion falsifies gradual evolution.
Epigenetic information falsifies the claim of natural selection as the exclusive mechanism of biological variation, speciation, complexity, form, and biodiversity.

https://reasonandscience.catsboard.com

65 - Re: Perguntas .... - Sun Jun 27, 2021 8:13 am
Otangelo (Admin)

Stephen Hawking, physicist, and inspiration to millions, dies aged 76

http://www.independent.co.uk/news/science/stephen-hawking-dead-latest-physics-cambridge-brief-history-time-black-holes-age-76-a8254836.html

The physicist and author of A Brief History of Time has died at his home in Cambridge. His children said: ‘We will miss him for ever’

Stephen Hawking, the brightest star in the firmament of science, whose insights shaped modern cosmology and inspired global audiences in the millions, has died aged 76.

His family released a statement in the early hours of Wednesday morning confirming his death at his home in Cambridge.

The iconic physicist is known as one of the greatest scientific minds in the history of the world, and worked to peer into the most mysterious parts of the universe. But Professor Hawking was known also for the accessible way in which he communicated those discoveries, with his work including A Brief History Of Time making its way into pop culture.

Hawking: 'I'm an atheist, science is more convincing than God'
SEP 25, 2014 IN SCIENCE

The world's preeminent theoretical physicist did explicitly acknowledge for the first time in 2014 that he was an atheist, explaining that "science offers a more convincing explanation" of the origins of the universe than 'God.'

In an article published in the leading Spanish daily El Mundo, Hawking clarified an infamous passage in his international bestselling book A Brief History of Time, in which he wrote:

"If we discover a complete [unifying] theory, it would be the ultimate triumph of human reason—for then we should know the mind of God."

“What I meant by ‘we would know the mind of God’ is, we would know everything that God would know, if there were a God, which there isn’t,” Hawking, 72, told El Mundo reporter Pablo Jáuregui. “I’m an atheist.”

Stephen Hawking from "God did not create Universe"
http://www.bbc.co.uk/news/uk-11161493

"Because there is a law such as gravity, the universe can and will create itself from nothing. SPONTANEOUS CREATION is the reason there is something rather than nothing, why the universe exists, why we exist. It is not necessary to invoke God to light the blue touch paper and set the universe going."

Laws of Physics, where did they come from?
http://reasonandscience.heavenforum.org/t1336-laws-of-physics-where-did-they-come-from

The physical universe and the laws of physics are interdependent and irreducible. There would not be one without the other. Origins only make sense in the face of Intelligent Design.

http://www.digitaljournal.com/science/hawking-i-m-an-atheist-science-is-more-convincing-then-god/article/405397#ixzz59iwRJOHm

Stephen Hawking: 'No hay ningún dios. Soy ateo' ('There is no god. I am an atheist')

http://www.elmundo.es/ciencia/2014/09/21/541dbc12ca474104078b4577.html

" la idea de de Dios «no es necesaria» para explicar su origen. "

I frequently used a quote from Hawking regarding the beginning of the universe:

The universe most probably had a beginning
https://reasonandscience.catsboard.com/t1297-the-universe-most-probably-had-a-beginning

Stephen Hawking also leaves no doubt in The Beginning of Time:

Strictly speaking, according to Einstein's Theory of Relativity, a singularity does not contain anything that is actually infinite, only things that MOVE MATHEMATICALLY TOWARDS infinity. A black hole is formed when large stars collapse and their mass has been compressed down to a very small size, and the powerful gravitational field so formed prevents anything, even light, from escaping from it. A black hole, therefore, forms a singularity at its center from the concentrated mass of the collapsed star itself and from the accumulated mass that is sucked into it. A singularity's mass is, therefore, finite; the 'infinity' refers only to the maths.

It is beyond my understanding why one of the most celebrated, world-famous, and influential scientists of our time was unable to make the very basic leap of understanding and logic that everything that begins to exist has a cause, well propagated by W. L. Craig through the Kalam Cosmological Argument.

The Kalaam Cosmological Argument
https://reasonandscience.catsboard.com/t1333-the-kalaam-cosmological-argument

The Kalam Cosmological Argument
(1) Everything that has a beginning of its existence has a cause of its existence.
(2) The universe has a beginning of its existence.
Therefore:
(3) The universe has a cause of its existence.
(4) If the universe has a cause of its existence then that cause is God.
Therefore:
(5) God exists.

Hawking has today taken the course that all humans do: the passage from this reality to the next. There is no report that he changed his mind before he died. Sad, because a human being is gone who did not honor and recognize his creator.

As such, he will not have to explain anything to God - the God he denied - because God knows everything.

Hebrews 9:27
27 Just as people are destined to die once, and after that to face judgment, 28 so Christ was sacrificed once to take away the sins of many; and he will appear a second time, not to bear sin, but to bring salvation to those who are waiting for him.

He will just recognize God's existence, and then take the course of all unbelievers.

Let's pray that other famous atheists like Dawkins, Krauss, Harris, and many others find their way out of the deception of the atheistic worldview before they make the passage, which Hawking sadly did today.

https://reasonandscience.catsboard.com

66 - Re: Perguntas .... - Sun Jun 27, 2021 8:33 am
Otangelo (Admin)

According to calculations, the current comoving distance (proper distance, which takes into account that the universe has expanded since the light was emitted) to the particles from which the cosmic microwave background radiation (CMBR) was emitted - which represents the radius of the visible universe - is about 14.0 billion parsecs (about 45.7 billion light-years), while the comoving distance to the edge of the observable universe is about 14.3 billion parsecs (about 46.6 billion light-years).
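
A quick unit check of those figures in Python, assuming 1 parsec is about 3.2616 light-years:

LY_PER_PARSEC = 3.2616          # light-years per parsec

for gpc in (14.0, 14.3):        # the two comoving distances, in gigaparsecs
    print(f"{gpc} Gpc = {gpc * LY_PER_PARSEC:.1f} billion light-years")
# 14.0 Gpc -> 45.7, 14.3 Gpc -> 46.6 billion light-years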

https://reasonandscience.catsboard.com

67 - Re: Perguntas .... - Tue Jun 29, 2021 2:08 pm
Otangelo (Admin)

Fine-tuning of atoms

https://reasonandscience.catsboard.com/t2795-fine-tuning-of-atoms#6498

It appears that the standard model of physics contains around 26 of these constants, and further it appears that changing most of them by even the smallest amount can result in changes to chemistry, nuclear physics, and space itself that would cause life to be impossible. 2 There appears to be, to use Davies’ oft-used phrase, “some fine tuning” going on. So if the constants are changing, or have changed - why did they change to a value that today makes life possible?

The way that the laws of nature, expressed in mathematics, describe the relationships between space, time and matter has a great formal coherence. 1 There are fundamental constants in physics that are apparently arbitrary—numbers that seem to exist entirely in their own right, without reference to the rest of the universe. No obvious reason seems to exist for them to be as they are; they are simply the way the world is.

The fine structure constant is one of the fundamental constants in nature, just like the speed of light or Planck's constant. It is there, and that's all we know for sure. We don't really have a compelling theory on its origin, nor a mechanism that explains its value 3


The fine structure constant is one such parameter: This number, which has become ubiquitous in physics, remains mysterious. One of the pioneers of quantum theory, Wolfgang Pauli said of it, “When I die, my first question to the devil will be: What is the meaning of the fine structure constant?”

Michael Murphy writes that “All ‘everyday’ phenomena are gravitational and electromagnetic. Thus G and α are the most important constants for ‘everyday’ physics”.

The laws of physics seem peculiarly well suited to life. The laws of physics - or more precisely, the constants of nature, like the charge on an electron, G, or α - are consistent with the existence of life.
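
To show how this dimensionless number arises from the other constants, here is a minimal Python sketch computing α = e²/(4πε₀ħc) from CODATA values; the point is only that the famous ~1/137 falls out of the other constants, not anything about its origin.

import math

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c = 299792458.0            # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)    # ~0.0072973525..., i.e. ~1/137.036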


1. https://www.economist.com/the-world-if/2017/07/13/reflections-on-the-fine-structure-constant
2. http://www.bretthall.org/fine-structure.html
3. https://physics.stackexchange.com/questions/377440/where-does-the-fine-structure-constant-come-from

https://reasonandscience.catsboard.com

68 - Re: Perguntas .... - Thu Jul 01, 2021 6:11 pm
Otangelo (Admin)

“This is the critical point in understanding this paper. The assumption is that during the creation of the heavenly bodies15 on Day 4 the universe underwent a very rapid expansion. For example, the Bible tells us:


‘He wraps himself in light as with a garment; he stretches out the heavens like a tent’ (Psalms 104:2).

‘He sits enthroned above the circle of the earth, and its people are like grasshoppers. He stretches out the heavens like a canopy, and spreads them out like a tent to live in’ (Isaiah 40:22).

‘This is what God the Lord says—he who created the heavens and stretched them out, who spread out the earth and all that comes out of it, who gives breath to its people, and life to those who walk on it’ (Isaiah 42:5).

‘This is what the Lord says—your Redeemer, who formed you in the womb: I am the Lord, who has made all things, who alone stretched out the heavens’ (Isaiah 44:24).

The very fabric of space was stretched, and during that time of stretching, stars and galaxies were created.”
Hartnett’s cosmology explains anomalies such as type Ia supernovae, which have higher redshifts than they should, without resorting to “dark energy” {7}, and also explains the anomalous rotation curves of spiral galaxies without invoking unproven “dark matter” or “dark energy”, because it takes into account the effects of the expansion of space on the galaxies. {8}

Hartnett, who formerly was an atheist, comments on this new cosmology and its relation to the Bible: “But don’t be mistaken; though Carmeli is some sort of rebel in that he has challenged the established thinking, in his mind his new theory does not present as anything more than a new type of big bang model. However, we can apply the same theory to extract a new model that is consistent with what we would expect, starting with the Genesis history. The starting conditions cannot be determined from observations, and even if we could see back in time to the beginning, the same data could support a range of different historical interpretations - there is no unique history presented by the evidence. To get the correct starting conditions you would need the testimony of an eye-witness to those events, which is what we have in God’s account of what He says He did, in Genesis 1.” {9}
So again we see it comes back to worldview, as cosmologist George Ellis admits: “People need to be aware that there is a range of models that could explain the observations. For instance, I can construct you a spherically symmetrical universe with Earth at its center, and you cannot disprove it based on observations. You can only exclude it on philosophical grounds. In my view there is absolutely nothing wrong in that. What I want to bring into the open is the fact that we are using philosophical criteria in choosing our models. A lot of cosmology tries to hide that.” {10}
 
Dr. Jason Lisle-The Anisotropic Synchrony Convention Model:
Astrophysicist Dr. Jason Lisle has come up with yet another way to potentially explain the distant starlight problem.  He acknowledges the value of the previous models, but also suggests that the time for starlight to get to Earth depends on the convention one uses to measure time.  His model is called the Anisotropic Synchrony Convention.  A Synchrony convention is a procedure used for synchronizing clocks that are separated by a distance.  This theory is based on the fact that the speed of light in one direction, that is the one-way speed of light, actually cannot be objectively measured. What is measured in experiments is the round-trip speed of light, using mirrors to reflect the light back.  So it is possible that the one-way speed of light could actually be instantaneous, even though the round-trip two way speed of light is constant.
Lisle explains why we can’t measure the one-way speed of light  in this excerpt from his article {11}

In order to avoid assuming the time for one-way speed of light, we need to be able to measure the one-way trip. But it is impossible because moving a clock to the mirror may change the time on the clock!





In other words, we are free to choose what the speed of light will be in one direction, though the “round-trip” time averaged speed is always constant.
The reason that the one-way speed of light cannot be objectively measured is that you need a way to synchronize two clocks separated by a distance. But in order to synchronize two clocks separated by some distance, you have to already know the one-way speed of light. So it cannot be done without circular reasoning.


We need to have a way of synchronizing clocks to know the one-way speed of light. But we need to know the one-way speed of light in order to synchronize clocks. Einstein was well aware of this dilemma. He said, “It would thus appear as though we were moving here in a logical circle.”2

Einstein’s resolution to this dilemma was to suggest that the one-way speed of light is not actually a property of nature but is instead a convention—something that we may choose!” {11}
So we can actually choose a convention, similar to choosing local time over Universal Time on Earth. Anisotropic refers to light having different speeds in different directions, as opposed to the convention Einstein used, isotropic: the same speed of light in all directions.
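
To make the freedom of choice concrete, here is a small numerical sketch, my own illustration rather than anything taken from Lisle's papers, using Reichenbach's ε-parameter for clock synchronization: take the outbound one-way speed to be c/(2ε) and the return one-way speed c/(2(1-ε)). Einstein's convention is ε = 1/2; pushing ε toward 1 makes one leg arbitrarily fast, which is the direction of the ASC idea. The round-trip time, the only thing experiments actually measure, is the same in every case.

[code]
# Round-trip light travel time under different synchrony conventions
# (Reichenbach's epsilon parameter; eps = 0.5 is Einstein's choice).
# Outbound one-way speed = c/(2*eps), return one-way speed = c/(2*(1-eps)).
c = 299792458.0      # two-way speed of light, m/s
L = 1.0e9            # one-way distance in metres (arbitrary illustrative value)

def round_trip_time(eps):
    v_out  = c / (2 * eps)
    v_back = c / (2 * (1 - eps))
    return L / v_out + L / v_back

for eps in (0.5, 0.7, 0.9, 0.999):
    print(eps, round_trip_time(eps))   # always 2*L/c, about 6.67 seconds
[/code]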


Genesis may imply the Anisotropic Synchrony Convention (ASC), since starlight was made available immediately.  So in this convention the one way speed of light from the distant galaxies to Earth was instantaneous.
It may seem unlikely that light would not have the same speed in all directions.  But even though we may assume for everyday use that light speed is constant in all directions as measured by our clocks, in a relativistic universe, as we approach the speed of light,  time and space no longer have absolute values independent of the observer.


In his more technical article, Lisle shows that using the Einsteinian convention, with light speed in all directions the same, leads to some interesting results when we have one observer in motion relative to the other {12}. In fact, they will get different answers as to whether some events happened at the same time, or in what order they happened. With ASC we find that two observers see the same events as simultaneous, regardless of their velocity.


He also makes the case that since we can choose a convention, it makes sense to see which one fits the Bible. As we said above, light traveling very fast from the stars to Earth would fit the ASC. Also, people in most of history would not know anything about the speed of light or lookback time, and with ASC it is not required to know the distance to an object, so ASC best preserves the clarity of Scripture. Things in space would be seen as they happen. Astronomers seem to use ASC when they name a supernova after the year they saw it, rather than the year they believe the light left the source. ASC is just one more possible model that depends on one’s starting assumptions rather than the observations.


Finally, I come to a model that takes a different approach to the data that supports relativity, and proposes something radical to many physicists: a faster speed of light in the earlier days of the universe. I will look mainly at physicist Barry Setterfield’s theory, although recently other scientists have proposed a change in light speed to help solve some of the big bang theory’s problems.
 
Barry Setterfield’s CDK Model: An Overlooked Cosmology, the Heat Problem in Accelerated Radioactive decay, and the Zero Point Energy
The Institute for Creation Research RATE team, in their work spanning about 8 years, found evidence for acceleration of radiometric decay. For example, they found large amounts of helium in rocks, equivalent to millions of years of decay end products, that in the supposed millions of years should have diffused out of the rocks. They also found polonium halos, which were again evidence of quick formation as well as accelerated decay. But there is a problem with the idea of accelerated decay under the theories proposed: the problem of excessive heat, which is especially acute if you have a large amount of decay too far after the beginning of creation, and a large amount of acceleration of decay happening in just the one year of the flood. They did propose some mechanisms that would absorb this heat {13}, but I think there is a creationist cosmology, proposed by physicist Barry Setterfield, that solves this radioactive heat problem. And this same model also deals with the light travel time from distant stars and galaxies.
 
 
As we have seen above, models for light travel time based on relativistic time dilation have been proposed that would work in theory, but have been hard to prove with hard experimental data.  Another problem for the conventional big bang model has been trying to explain the formation of galaxies and stars with gravity alone. They need to invoke things like shock waves from already existing exploding stars to push the matter together so gravity can supposedly take over. But this doesn’t explain the formation of the first stars and galaxies.  Dark matter is proposed to explain motions of galaxies and the holding together of star clusters, and dark energy is also invoked to explain the supposed acceleration of the universe’s expansion, even though there is no hard evidence for either dark matter or dark energy’s existence.

But there is an overlooked cosmology, proposed by physicist Barry Setterfield, that could possibly neatly solve all of these problems. This is the ZPE or Zero Point Energy cosmology, also called CDK or the slowing of light speed. The ZPE, according to the theory, is an energy that pervades the vacuum of space at a temperature of absolute zero, even after all gases, liquids, and solids are removed. The ZPE is not an idea that Setterfield came up with; rather, Max Planck first conceived of it in a paper he wrote in 1911 (Planck's second paper), in which he specified it as the cause and the measure of quantum uncertainty (Planck's constant). Planck had written his first paper in 1901, which was the basis for much of today's QED (quantum) physics, but he was actually unsatisfied with that first paper because it did not give a real physical mechanism for the jitter motion of particles and the associated uncertainty. In 1925, Einstein, Nernst, and others examined this proposal and approved of it. The ZPE's existence was verified by Mulliken in 1925, from the measured shift of the spectral lines of boron monoxide, and confirmed in other ways since by evidence such as the Casimir effect, the inability to freeze liquid helium without pressure, the "noise" in electronic circuits, and the existence of Van der Waals forces. We don't feel the ZPE energy for the same reason we don't feel air pressure from the atmosphere: it is the same inside and out. For a more complete technical explanation of all this, please read Journal of Theoretics {14}, especially the first 10 pages. This article discusses two approaches to modern physics: QED (Quantum Electrodynamics) and SED (Stochastic Electrodynamics). Also, for an excellent, easy-to-read summary of Setterfield's work by another scientist, please look at The Setterfield Cosmology {15}
 
Included in the above article {15} was the fact that de Broglie, who had written one of the early papers which helped introduce quantum physics, suggested in 1962 that physicists had missed something and should take a second look at the ZPE, which provides an actual physical explanation for quantum phenomena, instead of these just being an inherent property of matter. This re-examination is now ongoing in the area of SED physics.
 
The ZPE according to the theory originated when God stretched out the universe, creating a kind of potential energy similar to stretching out a rubber band.  When you release a stretched rubber band, the potential energy is converted to kinetic energy.  For the ZPE, this manifests itself in virtual particles (Planck particle pairs), the density of which increases as the “rubber band” continues to snap back,  making space “thicker” , in effect.  Photons of light traveling through space are momentarily absorbed and then re-emitted from these particles. The more there are, the more the light is slowed down.  Thus light starts at the beginning of the universe at a much higher speed, and as the kinetic ZPE increases, there are more virtual particles to go through, like a runner having to jump more and more hurdles, and thus the light slows down.
 
There is much data to support the light speed slowdown. Measurements of light speed over the last few centuries have shown a consistent decrease, even when error bars are taken into account, using 16 different methods of measurement. Some creationist reviews were skeptical, but a comprehensive defense of the statistical trends was published since then by statistician Alan Montgomery and physicist Lambert Dolphin, which has not been answered to date. See: Defense of statistical trends in light speed {16}. Also see Statistical test of Setterfield hypothesis {17}.


[Figure: Foucault speed-of-light measurement]

 
Other quantities have been measured as changing in sync with light speed, such as Planck’s constant (increasing), electron rest mass (increasing), and a slowing of atomic time relative to orbital time. Eleven “constants” in all have been measured as changing. This is contrary to what is said by some creationists, who don’t seem to consider that these quantities have actually been measured as changing.
 
As mentioned in the article in ref. {15}, Setterfield is not the only scientist that has put forth the idea of faster light speed in the past.    Rinus Kiel comments:
“Is Setterfield alone in his views? Certainly not! Several investigators have spoken about this theme! Because there is a painful problem in Big Bang, that must be solved, namely that galaxies, even as far as 13 billion light years away, do not show any trace of cosmological evolution, but are fully ‘evolved’ and adult. Which means that the supposed cosmological evolution should have taken place in 0.7 billion years, being only 5% of the age of the universe. Some examples:
Victor S. Troitskii of Radiophysical Research Institute in Gorki (Russia) concluded in 1987 as a result of his investigations in the redshift anomalies:

  • The cosmos is static.

  • The speed of light was very high in the beginning, practically indefinitely high, and it has decreased during the life time of the universe.

  • Other ‘constants’ change proportional/reversely proportional to the speed of light.


J.W. Moffat in 1993: There must have been a high speed of light in the beginning of the cosmos.
Andy Albrecht and João Magueijo (1999): Many cosmological puzzles are solved easily if only the speed of light in the beginning was very high.
John D. Barrow (1999): In a BBC TV-interview he said: “Call it heresy, but all the big cosmological problems will simply melt away, if you break one rule, the rule that says the speed of light never varies”.
Moffat as well as Albrecht, Magueijo and Barrow assume that this high initial light speed has decreased very quickly to the current value; but they have not taken seriously the relation to the other ‘constants’ as Troitskii did.
 
Most significantly, the ZPE model explains a problem with the values of increasing red shifts measured in distant galaxies, called quantization, which has not been solved with experimental data under the current Big Bang model. Tifft and others found that the red shifts go in discrete jumps in value, much as if the velocities of cars measured on the highway only came in multiples of 5, with nothing in between. So of course this is a problem if the red shifts are simply Doppler shifts.
The ZPE model sheds some light on this problem. What we could be seeing is an atomic phenomenon rather than an expansion effect. The ZPE maintains the stability of atomic orbits. An electron orbiting a nucleus radiates energy, and so should spiral into the nucleus unless some energy source counteracts this collapse. In QED physics quantum laws are invoked to solve this difficulty, but without a physical explanation. The ZPE provides one, in that electrons absorb energy from the ZPE, which balances the energy they radiate and keeps the orbits stable. Now, if the ZPE is higher, then the orbits of the electrons will be at a higher energy level, and light emitted from the atoms of the light source will be of higher energy, and thus bluer. So as we look back in time, the ZPE was less, light was faster, and light was also emitted at a lower energy level, and therefore redder. The quantization occurs because the electron energy levels are discrete and can only take on certain values before they are bumped up by more energy to the next orbital energy level.
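
As a reminder of why discreteness matters for this argument, the sketch below uses nothing but standard textbook hydrogen physics, the Rydberg formula, to show that light from an atom comes out only at certain wavelengths because the electron can only occupy certain energy levels. Whether those levels shift in steps as the ZPE builds up is the model's own additional claim, not something this little calculation establishes.

[code]
# Discrete electron energy levels give discrete emission wavelengths.
# Rydberg formula for hydrogen: 1/lambda = R * (1/n1^2 - 1/n2^2).
R = 1.0967758e7          # Rydberg constant for hydrogen, 1/m

def wavelength_nm(n1, n2):
    inv_lam = R * (1.0 / n1**2 - 1.0 / n2**2)
    return 1e9 / inv_lam

# Balmer series (transitions down to n = 2): only these lines appear,
# with nothing emitted in between.
for n2 in (3, 4, 5, 6):
    print(f"{n2} -> 2: {wavelength_nm(2, n2):.1f} nm")
[/code]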
 
 
Again I will refer you to the above link, which explains this better than I just did: The Setterfield Cosmology {15}. I agree with Kiel in this paper that Setterfield’s cosmology deserves some attention. (Incidentally, Setterfield’s geological model has Noah’s Flood farther down in the geological column, with the fossils forming in post-Flood catastrophic conditions. However, he still has a young earth perspective, and I personally think that the differences Setterfield might have in his geological model of Noah’s Flood from other creationist models are a separate issue and have little to no bearing on the general validity of his cosmology. From what I see, I think there are some good evidences that at least a majority of the fossils are from Noah’s Flood, but that I will cover in a separate series of articles. I think most of his differences in the placement of the Flood/Post-Flood boundary in the geological column, and how much of it was from the Flood, can be answered by the Catastrophic Plate Tectonics model and other models. And it is healthy science to have several competing models out there. Remember, we have only one eyewitness account, and that is in the Bible.)
 
In addition, the ZPE model reproduces the same experimental results that confirm General and Special Relativity. See: A New Look at Relativity and the Zero Point Energy {18}. For example, in the article Setterfield describes how the ZPE, which consists of electromagnetic waves, causes the jitter motion of sub-atomic particles, which in turn send out a secondary electromagnetic field. This has been verified experimentally. This secondary radiation increases the strength of the ZPE locally:
“The jittering of sub-atomic particles by the ZPE results in these charged particles sending out a secondary electromagnetic field.  This is predicted by classical physics and verified experimentally.  This secondary radiation boosts the strength of the ZPE locally. Thus, an adjacent charged particle will experience two driving forces:  First are the driving forces of the ZPE causing it to oscillate, and second are the forces due to the secondary fields produced by the ZPE-driven oscillations of the first particle. Similarly, the ZPE-driven oscillations of the second particle will cause their own secondary fields to act back on the first particle.
The net effect is an attractive force between the particles. The sign of the charge does not matter; it only affects the phase of the interactions. This net attractive force between the particles has been shown by Haisch, Rueda and Puthoff to be identical to gravity. Thus, where there are many particles, there are many secondary fields which manifest as gravity, and which, at the same time, boost the ZPE strength locally, so that it becomes significantly greater. This local increase in ZPE strength around a large collection of particles results in the slowing of atomic clocks as does any increase in ZPE strength. Further details are in “General Relativity and the Zero Point Energy:”   (This paper also shows that the perihelion advance of Mercury is predicted by SED physics. Earlier, this had been one of the strongholds of Einstein’s theory).”
“ In other words, the presence of the ZPE and its effects fulfills the requirements as the actual physical mechanism which replaces the purely mathematical modeling of relativity.” {18}
 
Similarly, the ZPE also explains the effects of Special Relativity, the slowing of clocks at high speeds:
 
“Einstein’s theory of Special Relativity has to do with the effects of velocities on moving objects. These effects include increases in atomic masses as velocities become high, as well as the resulting slowing of atomic clocks. We have observed that the acceleration of an electron through a linear accelerator results in an increase in mass of the electron. This has been hailed as proof that relativity is correct. However, the SED approach predicts exactly the same effect as a result of the existence of the ZPE.
Using the SED approach, it has been shown that the masses of sub-atomic particles all come from the “jiggling” of these particles by the impacting waves of the ZPE. This “jiggling” imparts a kinetic energy to these mass-less particles and this energy appears atomically as mass. An increased “jiggling” occurs when a particle is in motion, because more ZPE waves are impacting the particle than when it is at rest. An increase in particle mass is then the result.  The higher the velocity, the more “jiggling” occurs and the greater the resulting mass. This has been mathematically quantified.
As the mass increases, it can be shown that the rate of ticking of atomic clocks slows down since kinetic energy is conserved.  Atomic clocks are based on the rates of atomic processes.  Atomic processes are governed by the atomic masses which, if they increase, require either more energy for the same amount of speed, or less speed for the same amount of energy.  Since one of the basic laws of nature is that energy is conserved, then the atomic particles must move more slowly as they gain mass.  This, in turn, would mean that atomic time varies with changes in mass, and that any increase in mass would result in a slowing of the atomic clock.  This has been experimentally demonstrated by accelerating a short half-life radioactive particle.  As the mass has increased with the speed, the rate of decay has slowed down.  This experiment has been used to show Einstein’s theory of Special Relativity is right, but the same result is predicted from SED physics with acceleration through the ZPE.” {18}
 
I recommend reading Setterfield’s articles: Reviewing the Zero Point Energy and, for a more comprehensive paper, ZPE and Atomic Constants. Also, a paper that talks about plasma physics in conjunction with the ZPE, and how this approach could solve some of the problems with galaxy formation, is Plasma Universe. It’s hard to argue with the plasma theories for the origin of the galaxies, since in both lab work and computer simulations the plasma forms itself into the same filamentous structures we see in the shapes of galaxies. Here also is a mainstream physics paper that talks about the great magnitude of the ZPE energy: the Zero Point Energy article from the Calphysics Institute. Notice in this last article how they consider the ZPE approach, but are reluctant, as Setterfield has described, to go down the SED path, choosing String Theory and M-theory instead. As de Broglie described, the mainstream physics world may need to take a second look at Planck’s second paper.

But what about this heat problem with radioactive decay? How does the ZPE approach solve this sticky problem? The speed of light, c, is proportional to the rate of radioactive decay for all types of radiometric decay. Both are due to atomic processes affected by the ZPE (see Radiometric dating and the ZPE). The RATE team found much evidence for accelerated decay, including helium retention in zircons, and uranium and orphan polonium haloes. But what to do with all the heat generated by accelerated decay? Setterfield’s model has some answers: one is that the radiation density would not change. See the previous link Radiometric dating and the ZPE and also Radiant Energy Emission.
[Figure: thorium decay chain, from lead-212 to lead-208]
 
 
The properties of space are affected by the ZPE as we noted above.  I am going to quote Setterfield’s simplified explanation: “1. Space transmits electromagnetic waves, such as light. This means space itself must have both electric and magnetic properties. The electric property of space is referred to as ‘permittivity’ and the magnetic property is referred to as ‘permeability.’ These properties are governed by the number of virtual particles popping in and out of existence in a given volume. When there are fewer virtual particles per given volume, both the permittivity and the permeability of space are lower, which means that there is less resistance to the electric and magnetic elements of the photon (‘packet’ of light). Without this resistance, light travels more quickly.


2. In combination with the first point, when the speed of light was faster, a photon of light would travel farther in one second than it would travel now. That means that the same amount of light, or any radiation, would take up a greater volume at any one time. And THAT means that in any given, or defined, volume, the actual density of radiation from any given reaction would be less before than now.


3. Although faster radioactive decay rates mean that more radioactive atoms are decaying in a given time, the heat problem is offset by two factors: First that the amount of heat radiation in a given volume is lower, as explained in the previous two points. Secondly, as explained earlier in this paper, as we go back in time we are also going back to before so much energy was available to the atom. Before each quantum jump, the atom had lower energy than after. So the net effect here is that the earlier in time, the lower the energy of the atom, even though the light speed and therefore the actual rate of decay were faster. This lower energy in the atom thus somewhat reduced the amount of heat released by any given decay process.


Thus, the expected ‘frying’ effect of a higher radio decay rate which would be part of a time of higher light speed was counteracted by several factors:
First, the initial depth in the earth of radioactive materials.
Second, the increased volume taken up by any given photon.


Third, the lower energy in the atom in the past” (Taken from A Simplified Explanation of the Setterfield Hypothesis)
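
The standard relation sitting behind points 1 and 2 above is c = 1/sqrt(permittivity * permeability). The toy calculation below, my own sketch, simply shows that if both quantities were smaller in the past by a common factor, as the model asserts, light speed comes out larger by that same factor. The claim that they scale with ZPE strength is Setterfield's assumption, not standard physics.

[code]
# c = 1 / sqrt(permittivity * permeability).  If both were smaller in the
# past by a common factor k (the model's assumption), c would be larger
# by the factor 1/k.
import math

eps0 = 8.8541878128e-12      # present vacuum permittivity, F/m
mu0  = 1.25663706212e-6      # present vacuum permeability, H/m

def light_speed(k):
    return 1.0 / math.sqrt((k * eps0) * (k * mu0))

print(light_speed(1.0))      # ~2.998e8 m/s, today's value
print(light_speed(1e-6))     # a million times faster under this toy scaling
[/code]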

He also discusses how, if the earth started out in a cold state with an ocean covering the surface, then even with a faster rate of radioactive decay, and with the radioactive elements deep in the earth (being brought up to the earth’s crust later in earth’s history), it can be calculated that the temperature of the core today, assuming the biblical age for the earth of about 7,000 years, would be about 5,800 degrees, with 1,900 degrees now at the top of the lower mantle. This is close to today’s temperature estimates for those regions.


Here is the more technical explanation of Radiation Density and heat production taken from the above link Radiometric dating and the ZPE: “Radiation energy densities also influence the heat from radioactive decay processes. With lower ZPE, and higher decay rates, came a greater number of gamma and X-rays. But this was moderated by the lower vacuum permittivity and permeability so radiation intensities were the same as today. Now radioactive decay processes produce heat. This applies to gamma and X-radiation, which are electromagnetic in character, as well as alpha and beta radiation, which are sub-atomic particles. Gamma and X-radiation can often accompany alpha and beta emission also. There is a reason why heat is produced by all these forms of radioactive decay.


The high energy forms of radiation ionize and/or excite the atoms of the substances through which they pass. The ionization process strips an electron(s) off those atoms with which the alpha, beta or gamma radiation interacts, while the excitation process shifts an electron(s) to a higher orbit in the host atom. In the case of excited atoms, the electrons return to lower orbits, emitting low energy photons which appear as heat, until the atom is again in its ground state. In the case of ionization, where the electrons are stripped off, these electrons cause the secondary excitation of atoms with which they interact. The process continues until all the kinetic energy originally imparted to these electrons is used up by the excitation process. As the excited atoms return to their ground state, they emit low energy photons that again result in heat. Gamma radiation produces ionization and the excitation of atoms over a relatively large distance, while alpha and beta particles only produce significant results over a short distance.


It can therefore be seen that the majority of heat from radioactive decay is generated by the lower energy photons. Since they are electromagnetic in character, they are subject to the moderating effects of the permittivity and permeability of space as shown by (76) and (77). Thus, even though a given radioactive source emitted more electromagnetic waves per unit time of both high and low energy, their effects would have been the same as the fewer number of waves emitted by that same source today. Thus heat production was similarly moderated.”
 
Also, here is another quote taken from a presentation given in Germany by Rinus Kiel, who has done some research on the Setterfield Cosmology already referred to above: “Radioactivity and other ‘constants’
“Several other nature ‘constants’ vary in rhythm with the strength of the ZPF.
Radioactive decay rate
The radioactive decay rate also depends on the strength of the ZPF, and thus follows the light speed curve, meaning:





    • The radioactive decay rate was very high in the beginning
    • The largest part of the decay has taken place during the creation days and shortly thereafter.
    • But the radiation energy was very low in the beginning and increases inversely proportional to the decay rate, which means that the harmful consequences of this radiation were not different from today’s. It also means that heat development was limited.
    • Setterfield assumes that the heavy, radioactive elements were mainly to be found at the inner structures of planets, because of the way they were formed. Through violent movement of mantle material during the flood they have come much closer to the surface.
    • The results of radio dating methods should be corrected in accordance to the decrease curve: 13.7 billion years then shrink easily to about 8,000 years.




The RATE project
A few words about the results of the RATE project (radio dating research program of ICR):



    • A high radioactive decay rate in the past is plausible, even if you have serious detailed criticism on the research process.
    • Radiation energy and heat development are big problems.

      • Both have been shifted to the first creation days, because there was no life that could be damaged by it: secondly the flood year was marked as a period with high radioactive decay without harmful consequences because of the thick water layer. What about the period between creation and flood? This is very unlikely.
      • Both problems have been easily solved in Setterfield’s model, but Setterfield is unfortunately still out of sight in creationist circles.”





 
 
Conclusion: It seems like there is solid scientific data to support the ZPE cosmology, and it solves some sticky problems like the heat problem in accelerated decay, as well as the undeniable problem of the quantized red shifts, the data from which can be used to draw the light speed decay curve.  This curve also fits the curve for the decline in the measured light speed.   Setterfield’s theory along with plasma theory does away with the need for dark matter and dark energy.
 
The RATE project mentioned above produced some great results supporting accelerated radiometric decay, but is dogged by that heat problem. The cosmologies based on time dilation to explain the light speed problem work in theory, but they don’t have as much hard data supporting them as Setterfield’s theory does. This theory, along with plasma theory, also provides an explanation for the formation of galaxies, which has been a great difficulty for conventional big bang theories. It is an outgrowth of the theories of well-known scientists such as Planck, de Broglie and others. The fact that Setterfield’s cosmology solves all of these simultaneously signals to me that it, or some explanation like it, can be of invaluable use to the rest of the creationist world in providing a coherent theory for a young earth and universe that has experimental backing.
 
It’s time all of the creationist organizations started working together on this!
 
For discussions and Setterfield’s answers to challenges to his model, see his website www.setterfield.org. under: http://www.setterfield.org/GSRdiscussion.html and http://www.setterfield.org/GSRcritics.html
 
For a comprehensive reference on Barry Setterfield’s work, see his book Cosmology and the Zero Point Energy, Natural Philosophy Alliance Monograph Series, No. 1, 2013.
 
 
{1} Steinhardt, Paul J. , The Inflation Debate-Is the theory at the heart of modern cosmology deeply flawed? Scientific American, April 2011, pp. 37-43.  Also see articles:
Light-travel time: a problem for the big bang
New study confirms BICEP2 detection of cosmic inflation wrong
{2} Humphreys, D. Russell, Starlight and Time- Solving the Puzzle of Distant Starlight in a Young Universe, Master Books, Colorado Springs, CO, 1994.
{3} Hawking, Stephen W., and Ellis, G.F.R., The Large Scale Structure of Space-Time, Cambridge University Press, Cambridge, 1973, p.134.
{4}  D. Russell Humphreys, New Time Dilation helps Creation Cosmology, Journal of Creation 22 (3) 121-127, December 2008.
{5} Hartnett, John, Starlight, Time, and the New Physics, How we can see starlight in our young universe, Creation Book Publishers, Atlanta, Georgia, 2007.
{6} Hartnett, John, A 5D Spherically Symmetric Expanding Universe is Young, Journal of Creation, 21 (1) 69-74, April 2007.
{7} See ref. {5}, Starlight, Time, and the New Physics, pp. 64-67. Shows that the Carmelian model agrees with observed matter densities whereas the conventional model doesn’t. Also see links: What are type 1a Supernovae telling us?
Inflation-all in the “dark”
{8} See ref. {5}, pp. 42-48. Also see article ref. {6}, A 5D Spherically Symmetric Expanding Universe is Young.
{9}Starlight, Time, and the New Physics, page 64.
{10}  Gibbs, W.W. , Profile: George F. R. Ellis; thinking globally, acting universally, Scientific American, 273 (4): 28-29, 1995.
{11} Lisle, Dr. Jason, Distant Starlight-The Anisotropic Synchrony Convention, Answers Magazine, Jan-Mar 2011, pp. 68-71.
{12} Lisle, Dr. Jason, Anisotropic Synchrony Convention-A Solution to the Distant Starlight Problem. Answers Research Journal, 3, 2010, 191-207.

{13} Radioisotopes and the Age of the Earth, L. Vardiman, A. Snelling, and E. Chaffin, editors, Institute for Creation Research,  El Cajon, CA, and Creation Research Society, St. Joseph, Missouri, 2000, pp.369-375.
{14} Setterfield, Barry, Journal of Theoretics-Exploring the Vacuum, 2002.
{15} Kiel, Rinus, The Cosmology of Barry Setterfield, 2008.
{16} Dolphin, Lambert and Montgomery, Alan, Is the Velocity of Light Constant in Time?, 1993.
{17} Montgomery, Alan, A Determination and Analysis of Appropriate Values of the Speed of Light to Test the Setterfield Hypothesis, 1995.
{18} Setterfield, Barry, A New Look at Relativity and the Zero Point Energy, Jan. 18, 2010.

https://reasonandscience.catsboard.com

69Perguntas .... - Page 3 Empty Re: Perguntas .... Mon Jul 05, 2021 3:39 pm

Otangelo


Admin

”The big bang today relies on a growing number of hypothetical entities, things that we have never observed—inflation, dark matter and dark energy are the most prominent examples. Without them, there would be a fatal contradiction between the observations made by astronomers and the predictions of the big bang theory.’
‘But the big bang theory can’t survive without these fudge factors. Without the hypothetical inflation field, the big bang does not predict the smooth, isotropic cosmic background radiation that is observed, because there would be no way for parts of the universe that are now more than a few degrees away in the sky to come to the same temperature and thus emit the same amount of microwave radiation. … Inflation requires a density 20 times larger than that implied by big bang nucleosynthesis, the theory’s explanation of the origin of the light elements.’ [This refers to the horizon problem, and supports what we say in Light-travel time: a problem for the big bang.]
‘In no other field of physics would this continual recourse to new hypothetical objects be accepted as a way of bridging the gap between theory and observation. It would, at the least, raise serious questions about the validity of the underlying theory [emphasis in original].’
‘What is more, the big bang theory can boast of no quantitative predictions that have subsequently been validated by observation. The successes claimed by the theory’s supporters consist of its ability to retrospectively fit observations with a steadily increasing array of adjustable parameters, just as the old Earth-centred cosmology of Ptolemy needed layer upon layer of epicycles.’" The above quote is from 33 secular scientists and can be found in the New Scientist publication, in case anyone was wondering.

https://www.proquest.com/docview/1241012350
http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S2304-85572012000100007

https://reasonandscience.catsboard.com

70Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 8:45 am

Otangelo


Admin

The organism follows the rules of the Genetic Code. GGG = Glycine, CGG = Arginine, AGC = Serine, etc. Note that GGG is not literally Glycine, it is symbolic instructions
to make Glycine.

Just like computer codes, the genetic code is arbitrary. There is no law of physics that says “1” has to mean “on” and “0” has to mean “off.” There’s no law of physics that says 01000001 has to code for the letter “A.” Similarly, there is no law of physics that says three Guanine molecules in a row have to code for Glycine. In both cases, the communication system operates from a freely chosen, fixed set of rules.
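
To make the parallel concrete, here is a minimal sketch of the codon-to-amino-acid assignment treated as a plain lookup table. Only a handful of the 64 entries are listed; the assignments shown are the standard ones, and the short mRNA string is just an arbitrary example.

[code]
# Minimal sketch: the genetic code as a lookup table (a few entries only).
CODON_TABLE = {
    "GGG": "Gly",  # glycine
    "CGG": "Arg",  # arginine
    "AGC": "Ser",  # serine
    "AUG": "Met",  # methionine (also the start signal)
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",  # termination codons
}

def translate(mrna):
    # Read the message three letters at a time and look up each codon.
    return [CODON_TABLE.get(mrna[i:i+3], "?") for i in range(0, len(mrna) - 2, 3)]

print(translate("AUGGGGCGGAGCUAA"))  # ['Met', 'Gly', 'Arg', 'Ser', 'STOP']
[/code]

The table itself is a set of assignments carried out by the cell's translation machinery rather than something read off from the chemistry of the bases alone, which is the sense in which the text above calls the code arbitrary.
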
In all communication systems it is possible to label the encoder, the message and the decoder and determine the rules of the code.
The rules of communication systems are defined in advance by conscious minds. There are no known exceptions to this. Therefore we have 100% inference that the Genetic Code was designed by a conscious mind.



Code, by definition, implies intelligence, and the genetic code is real code, mathematically identical to that of language, computer codes, etc., all of which can only arise by intelligent convention of symbologies. The genome contains meta-information, and there is now evidence of meta-programming as well. Meta-information is information about information, and we now know the genome contains such structures. But meta-information cannot arise without knowledge of the original information. Meta-programming is even more solid evidence of intelligence at work.

Rutgers University professor Sungchul Ji’s excellent paper “The Linguistics of DNA: Words, Sentences, Grammar, Phonetics, and Semantics” starts off

“Biologic systems and processes cannot be fully accounted for in terms of the principles and laws of physics and chemistry alone, but they require in addition the principles of semiotics—the science of symbols and signs, including linguistics.” Ji identifies 13 characteristics of human language. DNA shares 10 of them. Cells edit DNA. They also communicate with each other and literally speak a language he called “cellese,” described as “a self-organizing system of molecules, some of which encode, act as signs for, or trigger, gene-directed cell processes.” This comparison between cell language and human language is not a loosey-goosey analogy; it’s formal and literal. Human language and cell language both employ multilayered symbols. Dr. Ji explains this similarity in his paper: “Bacterial chemical conversations also include assignment of contextual meaning to words and sentences (semantic) and conduction of dialogue (pragmatic)—the fundamental aspects of linguistic communication.” This is true of genetic material. Signals between cells do this as well.

Nucleotides in DNA contain four different nitrogenous bases: Thymine, Cytosine, Adenine, or Guanine. The order of nucleotides along DNA polymers encodes the genetic information carried by DNA. DNA polymers can be tens of millions of nucleotides long. At these lengths, the four-letter nucleotide alphabet can encode nearly unlimited information. DNA is an organized language of coded information. The symbols, when read, give a message or instructions. DNA is not just a pattern that develops like a snowflake. It is information coded as a symbolic representation of an actual 3D implementation. It is a language that requires decision making and thought between a transmitter and receiver. The symbols contain an alphabet, grammar, meaning, intent and even error correction. You can even store it like computer data. The information is expressed through matter and energy. DNA is designed language. Natural patterns cannot achieve cell-to-mammal morphology. Intelligence is required. All information is carried by a code of symbols that are material in nature.
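
As a rough back-of-the-envelope illustration of the storage claim, my own arithmetic rather than a figure from the sources quoted here: each position in a four-letter alphabet carries log2(4) = 2 bits, so even one long DNA polymer has an enormous raw capacity.

[code]
# Rough storage-capacity arithmetic for a four-letter alphabet.
import math

bits_per_base = math.log2(4)          # 2 bits per nucleotide (A, C, G, T)
length = 10_000_000                   # a 10-million-base polymer, for illustration

total_bits = bits_per_base * length   # 20,000,000 bits
print(total_bits / 8 / 1e6, "MB")     # 2.5 MB of raw capacity
print(f"distinct possible sequences: 4^{length}")
[/code]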

To organize the first living cell implies the capacity to specify objects among alternatives. In short, the capacity to construct a cellular object made of x, y and z, requires the capacity to specify x, y, and z among other objects. Given that no physical object inherently specifies any other object, the act of specification is accomplished by the use of a representational medium (i.e. memory).

https://reasonandscience.catsboard.com

71Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 8:47 am

Otangelo


Admin

Just show ONE example of instructional information that cannot be traced back to intelligence, and you win. Just one.

Molecular Mechanisms of Autonomy in Biological Systems Relativity of Code, Energy and Mass, page 31
Information storage in molecules (embedding code in molecules) is the third dimension of matter. Biological systems are defined as precisely programmed systems via the information storage and operation by their molecules. Nucleic acids are well-known molecules that function as highly conserved coded chemicals in nature. The order and sequences of these bases determine the information available for building and maintaining an organism

https://reasonandscience.catsboard.com

72Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 9:14 am

Otangelo


Admin

In their textbook on the origin of life, Thaxton et al. addressed the implications of the genetic code.
We know that in numerous cases certain effects always have intelligent causes, such as dictionaries, sculptures, machines and paintings. We reason by analogy that similar effects have intelligent causes. For example, after looking up to see “BUYFORD” spelled out in smoke across the sky we infer the presence of a skywriter even if we heard or saw no airplane. We would similarly conclude the presence of intelligent activity were we to come upon an elephant-shaped topiary in a cedar forest.
In like manner an intelligible communication via radio signal from some distant galaxy would be widely hailed as evidence of an intelligent source. Why then doesn’t the message sequence on the DNA molecule also constitute prima facie evidence for an intelligent source? After all, DNA information is not just analogous to a message sequence such as Morse code, it is such a message sequence....
We believe that if this question is considered, it will be seen that most often it is answered in the negative simply because it is thought to be inappropriate to bring a Creator into science (1984, pp. 211-212, emp. in orig.).



An explanation of the Genetic Code

http://www.bioinformatics.nl/webportal/background/geneticcodeinfo.html

Three DNA base pairs (a codon) code for one amino acid.

Practically, codons are "decoded" by transfer RNAs (tRNA) which interact with a ribosome-bound messenger RNA (mRNA) containing the coding sequence. There are 64 different tRNAs, each of which has an anticodon loop (used to recognise codons in the mRNA). 61 of these have a bound amino acyl residue; the appropriate "charged" tRNA binds to the respective next codon in the mRNA and the ribosome catalyses the transfer of the amino acid from the tRNA to the growing (nascent) protein/polypeptide chain. The remaining 3 codons are used for "punctuation"; that is, they signal the termination (the end) of the growing polypeptide chain (stopcodons). The genetic code is visualised in this scheme.
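
The 61-plus-3 split described above is just the arithmetic of a four-letter alphabet read three letters at a time; a tiny sketch:

[code]
# 4 bases taken 3 at a time give 4**3 = 64 codons; three of them are the
# "punctuation" (stop) codons, leaving 61 that specify amino acids.
from itertools import product

bases  = "UCAG"
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
stops  = {"UAA", "UAG", "UGA"}

print(len(codons))                                      # 64
print(len([cdn for cdn in codons if cdn not in stops])) # 61
[/code]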

The genetic code is universal

Lastly, the Genetic Code in the table above has also been called "The Universal Genetic Code". It is known as "universal", because it is used by all known organisms as a code for DNA, mRNA, and tRNA. The universality of the genetic code is used in animals (including humans), plants, fungi, archaea, bacteria, and viruses. However, all rules have their exceptions, and such is the case with the Genetic Code; small variations in the code exist in mitochondria and certain microbes. Nonetheless, it should be emphasised that these variances represent only a small fraction of known cases, and that the Genetic Code applies quite broadly, certainly to all known nuclear genes.

http://link.springer.com/article/10.1007/s00018-013-1394-1

DNA, in addition to the digital information of the linear genetic code (the semantics), encodes equally important continuous, or analog, information that specifies the structural dynamics and configuration (the syntax) of the polymer.

https://reasonandscience.catsboard.com

73Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 9:26 am

Otangelo


Admin

Feature The digital code of DNA 1

http://www.nature.com/nature/journal/v421/n6921/full/nature01410.html

The discovery of the structure of DNA transformed biology profoundly, catalysing the sequencing of the human genome and engendering a new view of biology as an information science. Two features of DNA structure account for much of its remarkable impact on science: its digital nature and its complementarity, whereby one strand of the helix binds perfectly with its partner. DNA has two types of digital information — the genes that encode proteins, which are the molecular machines of life, and the gene regulatory networks that specify the behaviour of the genes.

The discovery of the double helix in 1953 immediately raised questions about how biological information is encoded in DNA. A remarkable feature of the structure is that DNA can accommodate almost any sequence of base pairs — any combination of the bases adenine (A), cytosine (C), guanine (G) and thymine (T) — and, hence, any digital message or information. During the following decade it was discovered that each gene encodes a complementary RNA transcript, called messenger RNA (mRNA), made up of A, C, G and uracil (U), instead of T. The four bases of the DNA and RNA alphabets are related to the 20 amino acids of the protein alphabet by a triplet code — each three letters (or ‘codons’) in a gene encodes one amino acid. For example, AGT encodes the amino acid serine. The dictionary of DNA letters that make up the amino acids is called the genetic code. There are 64 different triplets or codons, 61 of which encode an amino acid (different triplets can encode the same amino acid), and three of which are used for ‘punctuation’ in that they signal the termination of the growing protein chain. The molecular complementarity of the double helix — whereby each base on one strand of DNA pairs with its complementary base on the partner strand (A with T, and C with G) — has profound implications for biology. As implied by James Watson and Francis Crick in their landmark paper, base pairing suggests a template copying mechanism that accounts for the fidelity in copying of genetic material during DNA replication. It also underpins the synthesis of mRNA from the DNA template, as well as processes of repairing damaged DNA.
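
A minimal sketch of the complementarity rule just described: because each base determines its partner (A with T, C with G), either strand fully specifies the other, which is what makes template copying possible. The input string is an arbitrary example.

[code]
# Watson-Crick complementarity as a simple mapping: one strand fully
# determines its partner strand.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(strand):
    # The partner strand runs antiparallel, hence the reversal.
    return "".join(PAIR[base] for base in reversed(strand))

print(complementary_strand("AGTCCGA"))   # TCGGACT
[/code]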

The digital nature of biological information

The value of having an entire genome sequence is that one can initiate the study of a biological system with a precisely definable digital core of information for that organism — a fully delineated genetic source code. The challenge, then, is in deciphering what information is encoded within the digital code. The genome encodes two main types of digital information — the genes that encode the protein and RNA molecular machines of life, and the regulatory networks that specify how these genes are expressed in time, space and amplitude. It is the regulatory networks and not the genes themselves that play the critical role in making organisms different from one another. Development is the elaboration of an organism from a single cell (the fertilized egg) to an adult (for humans this is 10^14 cells of thousands of different types). Physiology is the triggering of
specific functional programmes (for example, the immune response) by environmental cues. Regulatory networks are crucial in each of these aspects of biology. Regulatory networks are composed of two main types of components: transcription factors and the DNA sites to which they bind in the control regions of genes, such as promoters, enhancers and silencers. The control regions of individual genes serve as information processors to integrate the information inherent in the concentrations of different transcription factors into signals that mediate gene expression. The collection of the transcription factors and their cognate DNA-binding sites in the control regions of genes that carry out a particular developmental or physiological function constitute these regulatory networks (Fig. 2).

http://reasonandscience.heavenforum.org/t2213-control-of-transcription-by-sequencespecific-dna-binding-proteins

[Figure: gene regulatory networks (the Fig. 2 referred to above)]
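
The description of control regions as information processors can be made concrete with a deliberately oversimplified toy: treat each transcription-factor binding site as a boolean input and the control region as the logic that combines them. The activator A and repressor R below are hypothetical placeholders, not real factors, and real promoters integrate far more inputs than this.

[code]
# Toy sketch of a control region acting as an "information processor":
# this hypothetical gene is on only when activator A is bound and
# repressor R is not.  Illustrative logic only, not a model of any
# real promoter.
def gene_expressed(tf_bound):
    return tf_bound.get("A", False) and not tf_bound.get("R", False)

print(gene_expressed({"A": True,  "R": False}))  # True  -> transcribed
print(gene_expressed({"A": True,  "R": True}))   # False -> silenced
print(gene_expressed({"A": False, "R": False}))  # False -> no activation
[/code]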

Because most ‘higher’ organisms or eukaryotes (organisms that contain their DNA in a cellular compartment called the nucleus), such as yeast, flies and humans, have predominantly the same families of genes, it is the reorganization of DNA-binding sites in the control regions of genes that mediates the changes in the developmental programmes that distinguish one species from another. Thus, the regulatory networks are uniquely specified by their DNA-binding sites and, accordingly, are basically digital in nature. One thing that is striking about digital regulatory networks is that they can change significantly in short periods of evolutionary time. This is reflected, for example, in the huge diversity of the body plans, controlled by gene regulatory networks, that emerged over perhaps 10–30 million years during the Cambrian explosion of metazoan organisms (about 550 million years ago). Likewise, remarkable changes occurred to the regulatory networks driving the development of the human brain during its divergence from its common ancestor with chimpanzees about 6 million years ago. Biology has evolved ([b]why not: God created?[/b]) several different types of informational hierarchies.
First, a regulatory hierarchy is a gene network that defines the relationships of a set of transcription factors, their DNA-binding sites and the downstream peripheral genes that collectively control a particular aspect of development. A model of development in the sea urchin represents a striking example (Fig. 2). Second, a hierarchy defines an ordered set of relationships. For example, a single gene may be duplicated to generate a multi-gene family, and a multi-gene family may be duplicated to create a supergene family. Third, molecular machines may be assembled into structural hierarchies by an ordered assembly process. How was this ordered assembly process done? One example of this is the basic transcription apparatus that involves the step-by-step recruitment of factors and enzymes that will ultimately drive the specific expression of a given gene. Did this recruitment not have to be programmed? A second example is provided by the ribosome, the complex that translates RNA into protein, which is assembled from more than 50 different proteins and a few RNA molecules. Finally, an informational hierarchy depicts the flow of information from a gene to environment: gene > RNA > protein > protein interactions > protein complexes > networks of protein complexes in a cell > tissues or organs > individual organisms > populations > ecosystems. At each successively higher level in the informational hierarchy, information can be added or altered for any given element (for example, by alternative RNA splicing or protein modification).



1) http://www.nature.com/nature/journal/v421/n6921/full/nature01410.html

https://reasonandscience.catsboard.com

74Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 9:40 am

Otangelo


Admin

What lies at the heart of every living thing is not a fire, warm breath, not a ‘spark of life’. It is information, words, instructions…Think of a billion discrete digital characters…If you want to understand life think about technology – Richard Dawkins (Dawkins 1996, 112)




Human DNA contains more organized information than the Encyclopedia Britannica. If the full text of the encyclopedia were to arrive in computer code from outer space, most people would regard this as proof of the existence of extraterrestrial intelligence. But when seen in nature, it is explained as the workings of random forces – George Sim Johnson (Sims Johnson 1999)


For instance, the precision of this genetic language is such that the average mistake that is not caught turns out to be one error per 10 billion letters. If a mistake occurs in one of the most significant parts of the code, which is in the genes, it can cause a disease such as sickle-cell anemia. Yet even the best and most intelligent typist in the world couldn't come close to making only one mistake per 10 billion letters—far from it.
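
Taking the quoted fidelity at face value, here is a quick sanity check, my own arithmetic, of what one uncaught error per 10 billion letters means for a genome-sized copying job, using the commonly cited figure of roughly 3.2 billion letters for the haploid human genome.

[code]
# Expected uncaught errors when copying a human-genome-sized text at the
# quoted fidelity of one uncaught error per 10 billion letters.
error_rate  = 1e-10        # uncaught errors per letter copied
genome_size = 3.2e9        # approximate haploid human genome, in letters

print(error_rate * genome_size)   # ~0.32 expected errors per complete copy
[/code]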

http://crev.info/2005/03/how_to_get_something_from_nothing_genetic_code_syntax_explained/

Today’s code, written in DNA, is composed of triplet nucleotide “words” called codons that match the amino acid “words” in the language of proteins.

http://www.lifesorigin.com/information-storage-and-transfer-3.pdf

This chapter will explore how DNA stores information, and how this information is used to build proteins. It will also explore how mutations change this information. The language that life uses to store and transmit information is similar to human languages, but the rules of grammar and the vocabulary are much simpler. Only 20 words are used by life, so the vocabulary is very limited. Punctuation is limited to capitalization and periods. Every sentence must start with the same word.



http://www.creationscience.com/onlinebook/ReferencesandNotes31.html

https://reasonandscience.catsboard.com

75Perguntas .... - Page 3 Empty Re: Perguntas .... Wed Jul 07, 2021 9:45 am

Otangelo


Admin

http://www.nobelprize.org/nobel_prizes/medicine/laureates/1962/crick-lecture.html

Part of the work covered by the Nobel citation, that on the structure and replication of DNA, has been described by Wilkins in his Nobel Lecture this year. The ideas put forward by Watson and myself on the replication of DNA have also been mentioned by Kornberg in his Nobel Lecture in 1959, covering his brilliant researches on the enzymatic synthesis of DNA in the test tube. I shall discuss here the present state of a related problem in information transfer in living material - that of the genetic code - which has long interested me, and on which my colleagues and I, among many others, have recently been doing some experimental work.

It now seems certain that the amino acid sequence of any protein is determined by the sequence of bases in some region of a particular nucleic acid molecule. Twenty different kinds of amino acid are commonly found in protein, and four main kinds of base occur in nucleic acid. The genetic code describes the way in which a sequence of twenty or more things is determined by a sequence of four things of a different type.

It is hardly necessary to stress the biological importance of the problem. It seems likely that most if not all the genetic information in any organism is carried by nucleic acid - usually by DNA, although certain small viruses use RNA as their genetic material. It is probable that much of this information is used to determine the amino acid sequence of the proteins of that organism. (Whether the genetic information has any other major function we do not yet know.) This idea is expressed by the classic slogan of Beadle: "one gene - one enzyme", or in the more sophisticated but cumbersome terminology of today: "one cistron - one polypeptide chain".

It is one of the more striking generalizations of biochemistry - which surprisingly is hardly ever mentioned in the biochemical textbooks - that the twenty amino acids and the four bases, are, with minor reservations, the same throughout Nature. As far as I am aware the presently accepted set of twenty amino acids was first drawn up by Watson and myself in the summer of 1953 in response to a letter of Gamow's.

In this lecture I shall not deal with the intimate technical details of the problem, if only for the reason that I have recently written such a review1 which will appear shortly. Nor shall I deal with the biochemical details of messenger RNA and protein synthesis, as Watson has already spoken about these. Rather I shall ask certain general questions about the genetic code and ask how far we can now answer them.

Let us assume that the genetic code is a simple one and ask how many bases code for one amino acid? This can hardly be done by a pair of bases, as from four different things we can only form 4 x 4 = 16 different pairs, whereas we need at least twenty and probably one or two more to act as spaces or for other purposes. However, triplets of bases would give us 64 possibilities. It is convenient to have a word for a set of bases which codes one amino acid and I shall use the word "codon" for this.

This brings us to our first question. Do codons overlap? In other words, as we read along the genetic message do we find a base which is a member of two or more codons? It now seems fairly certain that codons do not overlap. If they did, the change of a single base, due to mutation, should alter two or more (adjacent) amino acids, whereas the typical change is to a single amino acid, both in the case of the "spontaneous" mutations, such as occur in the abnormal human haemoglobin or in chemically induced mutations, such as those produced by the action of nitrous acid and other chemicals on tobacco mosaic virus2. In all probability, therefore, codons do not overlap.

This leads us to the next problem. How is the base sequence divided into codons? There is nothing in the backbone of the nucleic acid, which is perfectly regular, to show us how to group the bases into codons. If, for example, all the codons are triplets, then in addition to the correct reading of the message, there are two incorrect readings which we shall obtain if we do not start the grouping into sets of three at the right place. My colleagues and I3 have recently obtained experimental evidence that each section of the genetic message is indeed read from a fixed point, probably from one end. This fits in very well with the experimental evidence, most clearly shown in the work of Dintzis4, that the amino acids are assembled into the polypeptide chain in a linear order, starting at the amino end of the chain.
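
As a small illustration of the "fixed starting point" idea (again an added sketch using an arbitrary sequence), the same run of bases grouped into triplets from each of the three possible starting positions gives three entirely different sets of codons, only one of which is the intended message:

message = "AUGGCUUACGGAUAA"   # an arbitrary example, not a real gene

for start in range(3):
    frame = [message[i:i + 3] for i in range(start, len(message) - 2, 3)]
    print("frame", start, frame)
# Only one of the three groupings corresponds to the correct reading.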

This leads us to the next general question: the size of the codon. How many bases are there in any one codon? The same experiments to which I have just referred3 strongly suggest that all (or almost all) codons consist of a triplet of bases, though a small multiple of three, such as six or nine, is not completely ruled out by our data. We were led to this conclusion by the study of mutations in the A and B cistrons of the rII locus of bacteriophage T4. These mutations are believed to be due to the addition or subtraction of one or more bases from the genetic message. They are typically produced by acridines, and cannot be reversed by mutagens which merely change one base into another. Moreover these mutations almost always render the gene completely inactive, rather than partly so.

By testing such mutants in pairs we can assign them all without exception to one of two classes which we call + and –. For simplicity one can think of the + class as having one extra base at some point or other in the genetic message and the – class as having one too few. The crucial experiment is to put together, by genetic recombination, three mutants of the same type into one gene. That is, either (+ with + with +) or (– with – with –). Whereas a single + or a pair of them (+ with +) makes the gene completely inactive, a set of three, suitably chosen, has some activity. Detailed examination of these results shows that they are exactly what we should expect if the message were read in triplets starting from one end.

We are sometimes asked what the result would be if we put four +'s in one gene. To answer this my colleagues have recently put together not merely four but six +'s. Such a combination is active as expected on our theory, although sets of four or five of them are not. We have also gone a long way to explaining the production of "minutes" as they are called. That is, combinations in which the gene is working at very low efficiency. Our detailed results fit the hypothesis that in some cases when the mechanism comes to a triplet which does not stand for an amino acid (called a "nonsense" triplet) it very occasionally makes a slip and reads, say, only two bases instead of the usual three. These results also enable us to tie down the direction of reading of the genetic message, which in this case is from left to right, as the rII region is conventionally drawn. We plan to write up a detailed technical account of all this work shortly. A final proof of our ideas can only be obtained by detailed studies on the alterations produced in the amino acid sequence of a protein by mutations of the type discussed here.
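
The arithmetic behind these acridine (+ and –) experiments can be sketched in a few lines of Python (an added illustration with a made-up sequence): one or two inserted bases throw every downstream triplet out of register, while three insertions, or any multiple of three, restore the original grouping beyond the last insertion.

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def insert(seq, pos, base="G"):
    # Mimics an acridine-induced addition of one base at position pos.
    return seq[:pos] + base + seq[pos:]

wild_type  = "AAUGCAGAUUCGCCA"            # arbitrary example message
one_plus   = insert(wild_type, 3)
three_plus = insert(insert(insert(wild_type, 3), 6), 9)

print(codons(wild_type))    # the intended grouping
print(codons(one_plus))     # every triplet after the insertion is shifted
print(codons(three_plus))   # garbled only between the insertions; the
                            # original register is restored downstream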

One further conclusion of a general nature is suggested by our results. It would appear that the number of nonsense triplets is rather low, since we only occasionally come across them. However this conclusion is less secure than our other deductions about the general nature of the genetic code.

It has not yet been shown directly that the genetic message is co-linear with its product. That is, that one end of the gene codes for the amino end of the polypeptide chain and the other for the carboxyl end, and that as one proceeds along the gene one comes in turn to the codons in between in the linear order in which the amino acids are found in the polypeptide chain. This seems highly likely, especially as it has been shown that in several systems mutations affecting the same amino acid are extremely near together on the genetic map. The experimental proof of the co-linearity of a gene and the polypeptide chain it produces may be confidently expected within the next year or so.

There is one further general question about the genetic code which we can ask at this point. Is the code universal, that is, the same in all organisms? Preliminary evidence suggests that it may well be. For example something very like rabbit haemoglobin can be synthesized using a cell-free system, part of which comes from rabbit reticulocytes and part from Escherichia coli5. This would not be very probable if the code were very different in these two organisms. However as we shall see it is now possible to test the universality of the code by more direct experiments.

In a cell in which DNA is the genetic material it is not believed that DNA itself controls protein synthesis directly. As Watson has described, it is believed that the base sequence of the DNA - probably of only one of its chains - is copied onto RNA, and that this special RNA then acts as the genetic messenger and directs the actual process of joining up the amino acids into polypeptide chains. The breakthrough in the coding problem has come from the discovery, made by Nirenberg and Matthaei6, that one can use synthetic RNA for this purpose. In particular they found that polyuridylic acid - an RNA in which every base is uracil - will promote the synthesis of polyphenylalanine when added to a cell-free system which was already known to synthesize polypeptide chains. Thus one codon for phenylalanine appears to be the sequence UUU (where U stands for uracil: in the same way we shall use A, G, and C for adenine, guanine, and cytosine respectively). This discovery has opened the way to a rapid although somewhat confused attack on the genetic code.
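
A toy Python rendering of this observation (added here for illustration; the table contains only the single assignment under discussion) shows why a message of pure uracil yields polyphenylalanine:

CODON_TABLE = {"UUU": "Phe"}           # the one assignment discussed so far

poly_u = "U" * 12                      # a short stretch of polyuridylic acid
codons = [poly_u[i:i + 3] for i in range(0, len(poly_u), 3)]
peptide = [CODON_TABLE[c] for c in codons]

print(codons)   # ['UUU', 'UUU', 'UUU', 'UUU']
print(peptide)  # ['Phe', 'Phe', 'Phe', 'Phe'] -- polyphenylalanine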

It would not be appropriate to review this work in detail here. I have discussed critically the earlier work in the review mentioned previously1 but such is the pace of work in this field that more recent experiments have already made it out of date to some extent. However, some general conclusions can safely be drawn.

The technique mainly used so far, both by Nirenberg and his colleagues6 and by Ochoa and his group7, has been to synthesize enzymatically "random" polymers of two or three of the four bases. For example, a polynucleotide, which I shall call poly (U,C), having about equal amounts of uracil and cytosine in (presumably) random order will increase the incorporation of the amino acids phenylalanine, serine, leucine, and proline, and possibly threonine. By using polymers of different composition and assuming a triplet code one can deduce limited information about the composition of certain triplets.
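
The statistical reasoning behind these copolymer experiments can be made explicit with a short added sketch (assuming, purely for illustration, a triplet code and equal amounts of U and C incorporated at random): the expected frequency of each triplet composition follows the binomial expansion, and comparing these expected frequencies with the relative incorporation of different amino acids is what allows the composition of certain triplets to be inferred.

from itertools import product
from collections import Counter

p_u, p_c = 0.5, 0.5        # assumed equal proportions of U and C

freq = Counter()
for triplet in product("UC", repeat=3):
    prob = (p_u ** triplet.count("U")) * (p_c ** triplet.count("C"))
    freq["".join(sorted(triplet))] += prob   # group by composition only

for composition, p in sorted(freq.items()):
    print(composition, p)
# CCC 0.125, CCU 0.375, CUU 0.375, UUU 0.125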

From such work it appears that, with minor reservations, each polynucleotide incorporates a characteristic set of amino acids. Moreover the four bases appear quite distinct in their effects. A comparison between the triplets tentatively deduced by these methods with the changes in amino acid sequence produced by mutation shows a fair measure of agreement. Moreover the incorporation requires the same components needed for protein synthesis, and is inhibited by the same inhibitors. Thus the system is most unlikely to be a complete artefact and is very probably closely related to genuine protein synthesis.

As to the actual triplets so far proposed it was first thought that possibly every triplet had to include uracil, but this was neither plausible on theoretical grounds nor supported by the actual experimental evidence. The first direct evidence that this was not so was obtained by my colleagues Bretscher and Grunberg-Manago8, who showed that a poly (C,A) would stimulate the incorporation of several amino acids. Recently other workers9, 10 have reported further evidence of this sort for other polynucleotides not containing uracil. It now seems very likely that many of the 64 triplets, possibly most of them, may code one amino acid or another, and that in general several distinct triplets may code one amino acid. In particular a very elegant experiment11 suggests that both (UUC) and (UUG) code leucine (the brackets imply that the order within the triplets is not yet known). This general idea is supported by several indirect lines of evidence which cannot be detailed here. Unfortunately it makes the unambiguous determination of triplets by these methods much more difficult than would be the case if there were only one triplet for each amino acid. Moreover, it is not possible by using polynucleotides of "random" sequence to determine the order of bases in a triplet. A start has been made to construct polynucleotides whose exact sequence is known at one end, but the results obtained so far are suggestive rather than conclusive12. It seems likely however from this and other unpublished evidence that the amino end of the polypeptide chain corresponds to the "right-hand" end of the polynucleotide chain - that is, the one with the 2', 3' hydroxyls on the sugar.

It seems virtually certain that a single chain of RNA can act as messenger RNA, since poly U is a single chain without secondary structure. If poly A is added to poly U, to form a double or triple helix, the combination is inactive. Moreover there is preliminary evidence9 which suggests that secondary structure within a polynucleotide inhibits the power to stimulate protein synthesis.

It has yet to be shown by direct biochemical methods, as opposed to the indirect genetic evidence mentioned earlier, that the code is indeed a triplet code.

Attempts have been made from a study of the changes produced by mutation to obtain the relative order of the bases within various triplets, but my own view is that these are premature until there is more extensive and more reliable data on the composition of the triplets.

Evidence presented by several groups8, 9, 11 suggests that poly U stimulates both the incorporation of phenylalanine and also a lesser amount of leucine. The meaning of this observation is unclear, but it raises the unfortunate possibility of ambiguous triplets; that is, triplets which may code more than one amino acid. However one would certainly expect such triplets to be in a minority.

It would seem likely, then, that most of the sixty-four possible triplets will be grouped into twenty groups. The balance of evidence both from the cell-free system and from the study of mutation, suggests that this does not occur at random, and that triplets coding the same amino acid may well be rather similar. This raises the main theoretical problem now outstanding. Can this grouping be deduced from theoretical postulates? Unfortunately, it is not difficult to see how it might have arisen at an extremely early stage in evolution by random mutations, so that the particular code we have may perhaps be the result of a series of historical accidents. This point is of more than abstract interest. If the code does indeed have some logical foundation then it is legitimate to consider all the evidence, both good and bad, in any attempt to deduce it. The same is not true if the codons have no simple logical connection. In that case, it makes little sense to guess a codon. The important thing is to provide enough evidence to prove each codon independently. It is not yet clear what evidence can safely be accepted as establishing a codon. What is clear is that most of the experimental evidence so far presented falls short of proof in almost all cases.
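
With hindsight one can check this expectation against the standard genetic code as it was later established (the following added sketch uses that modern table, which was of course not available in 1962): the 64 triplets do fall into twenty groups plus a small number of nonsense (stop) triplets, and several amino acids are each coded by as many as six triplets.

from itertools import product
from collections import Counter

BASES = "UCAG"
# Amino acids (one-letter codes, '*' = stop) for the 64 codons generated
# in the order UUU, UUC, UUA, UUG, UCU, ... -- the modern standard table.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codon_to_aa = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}
groups = Counter(codon_to_aa.values())

print(groups["*"])       # 3 nonsense (stop) triplets
print(len(groups) - 1)   # 20 amino acids
print(sorted(groups.items(), key=lambda kv: -kv[1])[:3])
# leucine (L), serine (S) and arginine (R) are each coded by six triplets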

In spite of the uncertainty of much of the experimental data there are certain codes which have been suggested in the past which we can now reject with some degree of confidence.

Comma-less triplet codes
All such codes are unlikely, not only because of the genetic evidence but also because of the detailed results from the cell-free system.

Two-letter or three-letter codes
For example a code in which A is equivalent to C, and G to U. As already stated, the results from the cell-free system rule out all such codes.

The combination triplet code
In this code all permutations of a given combination code the same amino acid. The experimental results can only be made to fit such a code by very special pleading.

Complementary codes
There are several classes of these. Consider a certain triplet in relation to the triplet which is complementary to it on the other chain of the double helix. The second triplet may be considered either as being read in the same direction as the first, or in the opposite direction. Thus if the first triplet is UCC, we consider it in relation to either AGG or (reading in the opposite direction) GGA.

It has been suggested that if a triplet stands for an amino acid its complement necessarily stands for the same amino acid, or, alternatively in another class of codes, that its complement will stand for no amino acid, i.e. be nonsense.

It has recently been shown by Ochoa's group that poly A stimulates the incorporation of lysine10. Thus presumably AAA codes lysine. However since UUU codes phenylalanine these facts rule out all the above codes. It is also found that poly (U,G) incorporates quite different amino acids from poly (A,C). Similarly poly (U,C) differs from poly (A,G)9, 10. Thus there is little chance that any of this class of theories will prove correct. Moreover they are all, in my opinion, unlikely for general theoretical reasons.
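
The contradiction can be spelled out in a few added lines (an illustration, not part of the lecture), using the complement operation on triplets exactly as defined above:

COMPLEMENT = {"U": "A", "A": "U", "G": "C", "C": "G"}

def complement(triplet, reverse=False):
    # Base-by-base complement; optionally read in the opposite direction.
    comp = "".join(COMPLEMENT[b] for b in triplet)
    return comp[::-1] if reverse else comp

print(complement("UUU"))                 # AAA
print(complement("UCC"))                 # AGG (read in the same direction)
print(complement("UCC", reverse=True))   # GGA (read in the opposite direction)
# Since UUU codes phenylalanine while its complement AAA codes lysine, a
# triplet and its complement need not code the same amino acid, nor is the
# complement necessarily nonsense.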

A start has already been made, using the same polynucleotides in cell-free systems from different species, to see if the code is the same in all organisms. Eventually it should be relatively easy to discover in this way if the code is universal, and, if not, how it differs from organism to organism. The preliminary results presented so far disclose no clear difference between E. coli and mammals, which is encouraging10, 13.

At the present time, therefore, the genetic code appears to have the following general properties:

(1) Most if not all codons consist of three (adjacent) bases.
(2) Adjacent codons do not overlap.
(3) The message is read in the correct groups of three by starting at some fixed point.
(4) The code sequence in the gene is co-linear with the amino acid sequence, the polypeptide chain being synthesized sequentially from the amino end.
(5) In general more than one triplet codes each amino acid.
(6) It is not certain that some triplets may not code more than one amino acid, i.e. they may be ambiguous.
(7) Triplets which code for the same amino acid are probably rather similar.
(8) It is not known whether there is any general rule which groups such codons together, or whether the grouping is mainly the result of historical accident.
(9) The number of triplets which do not code an amino acid is probably small.
(10) Certain codes proposed earlier, such as comma-less codes, two- or three-letter codes, the combination code, and various transposable codes are all unlikely to be correct.
(11) The code in different organisms is probably similar. It may be the same in all organisms but this is not yet known.

Finally one should add that in spite of the great complexity of protein synthesis and in spite of the considerable technical difficulties in synthesizing polynucleotides with defined sequences it is not unreasonable to hope that all these points will be clarified in the near future, and that the genetic code will be completely established on a sound experimental basis within a few years.

The references have been kept to a minimum. A more complete set will be found in the first reference.
