ElShamah - Reason & Science: Defending ID and the Christian Worldview

Otangelo Grasso: This is my library, where I collect information and present arguments developed by myself that lead, in my view, to the Christian faith, creationism, and Intelligent Design as the best explanation for the origin of the physical world.


Fine-tuning arguments in short sentences


Sat Aug 07, 2021 6:59 pm





What follows is a list of fine-tuning arguments in rather short sentences, without references: just the arguments themselves.

Fine-tuning of the Big bang
Fine-tuning of the initial conditions of the universe
Fine-tuning of the fundamental forces of the universe
Fine-tuning of subatomic particles, and atoms

Among these are hundreds, even thousands, of parameters that had to be finely adjusted to permit life. If just ONE of them were not right, there would be no life.

If the Pauli Exclusion Principle were not in place, you would not be reading these lines. How, given such overwhelming evidence of design, can atheists claim that there is no evidence of God's existence? That is just silly and foolish. There are NO rational alternative explanations. The multiverse escape is baseless speculation that merely pushes the problem further back without solving it.

Our deepest understanding of the laws of nature is summarized in a set of equations. Using these equations, we can make very precise calculations of the most elementary physical phenomena, calculations that are confirmed by experimental evidence. But to make these predictions, we have to plug in some numbers that cannot themselves be calculated but are derived from measurements of some of the most basic features of the physical universe. These numbers specify such crucial quantities as the masses of fundamental particles and the strengths of their mutual interactions. After extensive experiments under all manner of conditions, physicists have found that these numbers appear not to change at different times and places, so they are called the fundamental constants of nature.

The standard model of particle physics and the standard model of cosmology (together, the standard models) contain 31 fundamental constants: 25 from particle physics and 6 from cosmology. About ten to twelve of these constants exhibit significant fine-tuning.

There is no explanation for the particular values that physical constants appear to have throughout our universe, such as Planck's constant or the gravitational constant. The conservation laws for charge, momentum, angular momentum, and energy can be related to symmetries of mathematical identities.

A universe governed by Maxwell's Laws "all the way down", with no quantum regime at small scales, would not produce stable atoms - electrons would radiate away their kinetic energy and spiral rapidly into the nucleus - and hence no chemistry would be possible. We don't need to know what the parameters are to know that life in such a universe would be impossible.
The physical forces that govern the universe remain constant - they do not change across the universe.

If we changed one fundamental constant or one force law, the whole edifice of the universe would tumble. There HAVE to be four forces in nature, or there would be no life.

The existence of life in the universe depends on the fundamental laws of nature having the precise mathematical structures that they do. For example, both Newton's universal law of gravitation and Coulomb's law of electrostatic attraction describe forces that diminish with the square of the distance. Nevertheless, without violating any logical principle or more fundamental law of physics, these forces could have diminished with the cube (or higher exponent) of the distance. That would have made the forces they describe too weak to allow for the possibility of life in the universe. Conversely, these forces might just as well have diminished in a strictly linear way. That would have made them too strong to allow for life in the universe.
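The contrast among these hypothetical force laws can be sketched numerically. This is a toy calculation in arbitrary units, not a physical model; the function name and the constant k are illustrative:

```python
# Toy comparison of how a central force F = k / r^n weakens with distance
# under a linear (n=1), inverse-square (n=2), and inverse-cube (n=3) law.
# Units and the constant k are arbitrary; only the falloff rate matters.

def force(r: float, n: int, k: float = 1.0) -> float:
    """Force magnitude at distance r for a 1/r^n law."""
    return k / r**n

for n, label in [(1, "linear 1/r"), (2, "inverse-square 1/r^2"), (3, "inverse-cube 1/r^3")]:
    falloff = force(10.0, n) / force(1.0, n)
    print(f"{label}: at 10x the distance, the force drops to {falloff:g} of its value")
```

At ten times the distance, an inverse-cube force retains only a thousandth of its strength while a linear law retains a tenth; the inverse-square law sits between the two extremes the passage describes.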

Life depends upon the existence of various different kinds of forces—which are described with different kinds of laws— acting in concert.
1. a long-range attractive force (such as gravity) that can cause galaxies, stars, and planetary systems to congeal from chemical elements in order to provide stable platforms for life;
2. a force such as the electromagnetic force to make possible chemical reactions and energy transmission through a vacuum;
3. a force such as the strong nuclear force operating at short distances to bind the nuclei of atoms together and overcome repulsive electrostatic forces;
4. the quantization of energy to make possible the formation of stable atoms and thus life;
5. the operation of a principle in the physical world such as the Pauli exclusion principle that (a) enables complex material structures to form and yet (b) limits the atomic weight of elements (by limiting the number of neutrons in the lowest nuclear shell).

Thus, the forces at work in the universe itself (and the mathematical laws of physics describing them) display a fine-tuning that requires explanation. Yet, clearly, no physical explanation of this structure is possible, because it is precisely physics (and its most fundamental laws) that manifests this structure and requires explanation. Indeed, physics does not explain itself.

Paraphrasing Hoyle: Why does it appear that a super-intellect had been “monkeying” with the laws of physics? Why are the Standard Model parameters intriguingly finely tuned to be life-friendly? Why does the universe look as if it has been designed by an intelligent creator expressly for the purpose of spawning sentient beings? Why is the universe “just right” for life, in many intriguing ways? How can we account for this appearance of judicious design?

Why does a hidden subtext lie beneath the surface complexity of nature, written in a subtle mathematical code - the cosmic code that contains the rules on which the universe runs? Why does an abstract order lie beneath the surface of natural phenomena, an order that cannot be seen or heard or felt, but only deduced? Why are the diverse physical systems making up the cosmos linked, deep down, by a network of coded mathematical relationships?

Why is the physical universe neither arbitrary nor absurd? Why is it not just a meaningless jumble of objects and phenomena haphazardly juxtaposed, but rather a coherent scheme of things? Why is there order in nature? This is a profound enigma: Where do the laws of nature come from? Why do they have the form that they do? And why are we capable of comprehending them?

So far as we can see today, the laws of physics cannot have existed from everlasting to everlasting. They must have come into being at the big bang. The laws must have come into being. As the great cosmic drama unfolds before us, it begins to look as though there is a “script” – a scheme of things. We are then bound to ask, who or what wrote the script? If these laws are not the product of divine providence, how can they be explained?

If the universe is absurd, the product of unguided events, why does it so convincingly mimic one that seems to have meaning and purpose? Did the script somehow, miraculously, write itself? Why do the laws of nature possess a mathematical basis? Why should the laws that govern the heavens and the Earth not be the mathematical manifestations of God's ingenious handiwork? Why is a transcendent, immutable, eternal creator with the power to dictate the flow of events not the most case-adequate explanation?

The universe displays an abstract order; its conditions are regulated; it looks like a put-up job, a fix. If there is a mathematical subtext, does it not point to a creator? The laws are real things - abstract relationships between physical entities, relationships that really exist. Why is nature shadowed by this mathematical reality? Why should we attribute the cosmic "coincidences" to chance? There is no logical reason why nature should have a mathematical subtext in the first place.

In order to “explain” something, in the everyday sense, you have to start somewhere. How can we terminate the chain of explanation, if not with an eternal creator? To avoid an infinite regress – a bottomless tower of turtles according to the famous metaphor – you have at some point to accept something as “given”, something which other people can acknowledge as true without further justification. If a cosmic selector is denied, then the equations must be accepted as “given,” and used as the unexplained foundation upon which an account of all physical existence is erected.

Everything we discover about the world ultimately boils down to bits of information. The physical universe is fundamentally based on instructional information, and matter is a derived phenomenon. What, exactly, determines that-which-exists and separates it from that-which-might-have-existed-but-doesn't? From the bottomless pit of possible entities, something plucks out a subset and bestows upon its members the privilege of existing. What "breathes fire into the equations" and makes a life-permitting universe?

Not only do we need to identify a “fire-breathing actualizer” to promote the merely-possible to the actually-existing, we need to think about the origin of the rule itself – the rule that decides what gets fire breathed into it and what does not.  Where did that rule come from? And why does that rule apply rather than some other rule? In short, how did the right stuff get selected? Are we not back with some version of a Designer/Creator/Selector entity, a necessary being who chooses “the Prescription” and “breathes fire” into it?

Certain stringent conditions must be satisfied in the underlying laws of physics that regulate the universe. That raises the question: Why does our bio-friendly universe look like a fix – or “a put-up job”?   Stephen Hawking: “What is it that breathes fire into the equations and makes a universe for them to describe?” Who, or what does the choosing? Who, or what promotes the “merely possible” to the “actually existing”?

What are the chances that a randomly chosen theory of everything would describe a life-permitting universe? Negligible. If the universe is inherently mathematical, composed of a mathematical structure, does it not require a Cosmic Selector? What is it, then, that determines what exists? The physical world contains certain objects - stars, planets, atoms, living organisms, for example. Why do those things exist rather than others?
Why isn’t the universe filled with, say, pulsating green jelly, or interwoven chains, or disembodied thoughts … The possibilities are limited only by our imagination.

Why not take the mind of a creator seriously as a fundamental and deeply significant feature of the physical universe? A preexisting God who is somehow self-explanatory? Galileo, Newton and their contemporaries regarded the laws as thoughts in the mind of God, and their elegant mathematical form as a manifestation of God's rational plan for the universe. Newton, Galileo, and other early scientists treated their investigations as a religious quest. They thought that by exposing the patterns woven into the processes of nature they truly were glimpsing the mind of God.

"The great book of nature," Galileo wrote, "can be read only by those who know the language in which it was written. And this language is mathematics." James Jeans: "The universe appears to have been designed by a pure mathematician."

Fine-tuning of the Big bang:
The first thing that had to be finely tuned in the universe was the Big bang. Fast-forward a nanosecond or two and in the beginning, you had this cosmic soup of elementary stuff - electrons and quarks and neutrinos and photons and gravitons and muons and gluons and Higgs bosons - a real vegetable soup. There had to have been a mechanism to produce this myriad of fundamentals instead of just one thing. There could have been a cosmos where the sum total of mass was pure neutrinos and all of the energy was purely kinetic. The evolution of the Universe is characterized by a delicate balance of its inventory, a balance between attraction and repulsion, between expansion and contraction.
1. Gravitational constant: 1/10^60
2. Omega, the density of dark matter: 1/10^62 or less
3. Hubble constant: 1 part in 10^60
4. Lambda, the cosmological constant: 1 part in 10^122
5. Primordial fluctuations: 1/100,000
6. Matter-antimatter asymmetry: 1 in 10,000,000,000
7. The low-entropy state of the universe: 1 in 10^10^123
8. The universe requires 3 dimensions of space, plus time, to be life-permitting.

Gravity is the least important force at small scales but the most important at large scales. It is only because the minuscule gravitational forces of individual particles add up in large bodies that gravity can overwhelm the other forces. Gravity, like the other forces, must also be fine-tuned for life. Gravity would alter the cosmos as a whole. For example, the expansion of the universe must be carefully balanced with the deceleration caused by gravity. Too much expansion energy and the atoms would fly apart before stars and galaxies could form; too little, and the universe would collapse before stars and galaxies could form. The density fluctuations of the universe when the cosmic microwave background was formed also must be a certain magnitude for gravity to coalesce them into galaxies later and for us to be able to detect them. Our ability to measure the cosmic microwave background radiation is bound to the habitability of the universe; had these fluctuations been significantly smaller, we wouldn’t be here.

Omega, density of dark matter
The cosmic density, fine-tuned to flatness today to better than one part in a thousand, must have been fine-tuned initially to tens of orders of magnitude. Omega measures the density of material in the universe, including galaxies, diffuse gas, and dark matter. The number reveals the relative importance of gravity in an expanding universe. If gravity were too strong, the universe would have collapsed long before life could have evolved. Had it been too weak, no galaxies or stars could have formed.
The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time. In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value.
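The rapid growth of any departure from critical density can be put in back-of-envelope form. The numbers below are illustrative assumptions, not values from the text: a scale-factor growth of 10^30 since the earliest era, a present-day bound of 1% on the deviation, and the rough radiation-era growth law |Ω − 1| ∝ a²:

```python
# Back-of-envelope sketch of the flatness problem. Assuming the deviation
# from critical density grows as the square of the scale factor a (the
# rough radiation-era behavior), a 1% bound today implies an
# extraordinarily tighter bound at early times.
from decimal import Decimal

expansion_factor = Decimal("1e30")   # assumed growth of a since the earliest era
omega_dev_today = Decimal("0.01")    # assumed bound on |Omega - 1| today

initial_dev = omega_dev_today / expansion_factor**2
print(f"required initial |Omega - 1| <= {initial_dev:.1e}")
```

Under these assumptions the required initial deviation comes out near 10^-62, the same order as the "one part in 10^62" figure quoted above.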

Hubble constant H0
The Hubble constant is the present rate of expansion of the universe, which astronomers determine by measuring the distances and redshifts of galaxies. Our existence tells us that the Universe must have expanded, and be expanding, neither too fast nor too slow, but at just the "right" rate to allow elements to be cooked in stars. This may not seem a particularly impressive insight. After all, perhaps there is a large range of expansion rates that qualify as "right" for stars like the Sun to exist. But when we convert the discussion into the proper description of the Universe, Einstein's mathematical description of space and time, and work backwards to see how critical the expansion rate must have been at the time of the Big Bang, we find that the Universe is balanced far more crucially than the metaphorical knife edge. If we push back to the earliest time at which our theories of physics can be thought to have any validity, the implication is that the relevant number, the so-called "density parameter," was set, in the beginning, with an accuracy of 1 part in 10^60. Changing that parameter, either way, by a fraction given by a decimal point followed by 60 zeroes and a 1 would have made the Universe unsuitable for life as we know it. If the rate of expansion one second after the big bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it ever reached its present size. If the Universe had just a slightly higher matter density, it would be closed and have recollapsed already; if it had just a slightly lower density (and negative curvature), it would have expanded much faster and become much larger. The Big Bang, on its own, offers no explanation as to why the initial expansion rate at the moment of the Universe's birth balances the total energy density so perfectly, leaving no room for spatial curvature at all and a perfectly flat Universe.
Our Universe appears perfectly spatially flat, with the initial total energy density and the initial expansion rate balancing one another to at least some 20+ significant digits.

Lambda, the cosmological constant
If the state of the hot dense matter immediately after the Big Bang had been ever so slightly different, then the Universe would either have rapidly recollapsed, or would have expanded far too quickly into a chilling, eternal void. Either way, there would have been no 'structure' in the Universe in the form of stars and galaxies. The smallness of the cosmological constant is widely regarded as the single greatest problem confronting current physics and cosmology.
There are now two cosmological constant problems. The old cosmological constant problem is to understand in a natural way why the vacuum energy density ρV is not very much larger. We can reliably calculate some contributions to ρV, like the energy density in fluctuations of the gravitational field at graviton energies nearly up to the Planck scale, which is larger than is observationally allowed by some 120 orders of magnitude. Such terms in ρV can be cancelled by other contributions that we can't calculate, but the cancellation then has to be accurate to 120 decimal places. How far could you rotate the dark-energy knob before the "Oops!" moment? If rotating it by a full turn would vary the density across the full range, then the actual knob setting for our Universe is about 10^-123 of a turn away from the halfway point. That means that if you want to tune the knob to allow galaxies to form, you have to get the angle by which you rotate it right to 123 decimal places!
That means that the probability that our universe contains galaxies is akin to 1 possibility in 10^120, a 1 followed by 120 zeros. Unlikely doesn't even begin to describe these odds. There are "only" 10^81 atoms in the observable universe, after all. The low-entropy starting point is the ultimate reason that the universe has an arrow of time, without which the second law would not make sense. However, there is no universally accepted explanation of how the universe got into such a special state. Some unknown agent initially started the inflaton high up on its potential, and the rest is history. We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely. One possibility is that an unknown agent intervened in the evolution and, for reasons of its own, restarted the universe in the state of low entropy characterizing inflation.
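What a cancellation "accurate to 120 decimal places" means can be made concrete with arbitrary-precision arithmetic. This is a sketch in normalized units, not a physical calculation; the two terms simply stand in for the calculable and uncalculable contributions to the vacuum energy:

```python
# Two contributions of order 1 must cancel so precisely that only a
# residue of order 1e-120 survives. Floating-point doubles (~16 digits)
# cannot even represent such a cancellation, so we use Decimal.
from decimal import Decimal, getcontext

getcontext().prec = 150  # enough significant digits to resolve 1e-120

calculable = Decimal(1)                       # stand-in for the computed vacuum terms
cancelling = Decimal(1) - Decimal("1e-120")   # must match the above to 120 places
residue = calculable - cancelling
print(residue)  # 1E-120
```

If the cancelling term were off in its 120th decimal place, the residue would be many orders of magnitude too large, which is the tuning the text describes.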

The Amplitude of Primordial Fluctuations Q
Q represents the amplitude of the complex irregularities or ripples in the expanding universe that seed the growth of such structures as planets and galaxies. It is a ratio equal to 1/100,000. If the ratio were smaller, the universe would be a lifeless cloud of cold gas. If it were larger, "great gobs of matter would have condensed into huge black holes," says Rees. Such a universe would be so violent that no stars or solar systems could survive.
Why Q is about 10^-5 is still a mystery. But its value is crucial: were it much smaller, or much bigger, the 'texture' of the universe would be quite different, and less conducive to the emergence of life forms. If Q were smaller than 10^-5 but the other cosmic numbers were unchanged, aggregations in the dark matter would take longer to develop and would be smaller and looser. The resultant galaxies would be anaemic structures, in which star formation would be slow and inefficient, and 'processed' material would be blown out of the galaxy rather than being recycled into new stars that could form planetary systems. If Q were smaller than 10^-6, gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial 'mix' of atoms, dark matter and radiation were the same as in our own. On the other hand, a universe where Q were substantially larger than 10^-5, where the initial 'ripples' were replaced by large-amplitude waves, would be a turbulent and violent place.

Matter/Antimatter Asymmetry
Baryogenesis: due to the matter/antimatter asymmetry (10^9 + 1 protons for every 10^9 antiprotons), only one proton per 10^9 photons remained after annihilation. The theoretical prediction of antimatter made by Paul Dirac in 1931 is one of the most impressive discoveries (Dirac 1934). Antimatter is made of antiparticles that have the same (e.g. mass) or opposite (e.g. electric charge) characteristics as particles but that annihilate with them, leaving mostly photons at the end. A symmetry between matter and antimatter led Dirac to suggest that 'maybe there exists a completely new Universe made of antimatter'. Now we know that antimatter exists but that there are very few antiparticles in the Universe. Antiprotons (an antiproton is a proton but with a negative electric charge) are too rare to make any macroscopic objects. In this context, the challenge is to explain why antimatter is so rare (almost absent) in the observable Universe. Baryogenesis (i.e. the generation of protons and neutrons AND the elimination of their corresponding antiparticles), implying the emergence of the hydrogen nuclei, is central to cosmology. Unfortunately, the problem is essentially unsolved; only general conditions for baryogenesis were posed by A. Sakharov long ago (Sakharov 1979). Baryogenesis requires at least a departure from thermal equilibrium and the breaking of some fundamental symmetries, leading to the strong observed matter-antimatter asymmetry at the level of 1 proton per 1 billion photons. Mechanisms for the generation of the matter-antimatter asymmetry depend strongly on the reheating temperature at the end of inflation, the maximal temperature reached in the early Universe. Forthcoming results from the Large Hadron Collider (LHC) at CERN in Geneva, the BABAR collaboration, astrophysical observations, and the Planck satellite mission will significantly constrain baryogenesis and thereby provide valuable information about the very early hot Universe.
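The bookkeeping behind "1 proton per 10^9 photons" can be sketched as follows. This is a deliberate oversimplification: real proton-antiproton annihilation proceeds through pions and yields more than two photons per pair, so only the surviving-proton count, not the exact photon count, should be taken literally:

```python
# For every 10^9 antiprotons there were 10^9 + 1 protons. Annihilation
# pairs them off; the lone unpaired proton survives.
N = 10**9
protons, antiprotons = N + 1, N

pairs = min(protons, antiprotons)   # every antiproton finds a partner
survivors = protons - pairs         # matter left over after annihilation
photons = 2 * pairs                 # crude floor: at least 2 photons per pair

print(f"surviving protons: {survivors}, photons: ~{photons:.0e} or more")
```

One proton survives per roughly a billion photons produced, the asymmetry the passage quotes.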

The low-entropy state of the universe 
This figure gives us an estimate of the total phase-space volume V available to the Creator, since this entropy should represent the logarithm of the volume of the (easily) largest compartment. Since 10^123 is the logarithm of the volume, the volume must be the exponential of 10^123, i.e. 10^10^123. The Second Law of thermodynamics is one of the most fundamental principles of physics. The term "entropy" refers to an appropriate measure of disorder or lack of "specialness" of the state of the universe. The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low-entropy state. The initial state of the universe must be the most special of all, so any proposal for the actual nature of this initial state must account for its extreme specialness.
The low-entropy condition of the early universe is extreme in both respects: the universe is a very big system, and it was once in a very low entropy state. The odds of that happening by chance are staggeringly small. Roger Penrose, a mathematical physicist at Oxford University, estimates the probability to be roughly 1/10^10^123. That number is so small that if it were written out in ordinary decimal form, the decimal would be followed by more zeros than there are particles in the universe! It is even smaller than the ratio of the volume of a proton (a subatomic particle) to the entire volume of the visible universe. Imagine filling the whole universe with lottery tickets the size of protons, then choosing one ticket at random. Your chance of winning that lottery is much higher than the probability of the universe beginning in a state with such low entropy! Huw Price, a philosopher of science at Cambridge, has called the low-entropy condition of the early universe “the most underrated discovery in the history of physics.”
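The claim that Penrose's number cannot even be written out is easy to check with integer arithmetic: the decimal form of 10^(10^123) is a 1 followed by 10^123 zeros, while the count of atoms in the observable universe is only about 10^80:

```python
# The decimal expansion of 10**(10**123) is a 1 followed by 10**123 zeros.
# Comparing that digit count with ~10**80 atoms in the observable universe
# shows there are not enough particles to write the zeros on.
zeros_needed = 10**123      # exact integer; Python handles big ints natively
atoms_available = 10**80    # rough count quoted for the observable universe

shortfall = zeros_needed // atoms_available
print(f"zeros outnumber atoms by a factor of 10^{len(str(shortfall)) - 1}")
```

Even writing one zero on every atom in the observable universe would leave a factor of 10^43 zeros unwritten.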

The universe would require 3 dimensions of space, and time, to be life-permitting.
If whatever exists were not such that it is accurately described on macroscopic scales by a model with three space dimensions, then life would not exist. If "whatever works" were four-dimensional, then life would not exist, whether the number of dimensions is simply a human invention or an objective fact about the universe. We physicists need to confront the crisis facing us. A scientific theory [the multiverse/Anthropic Principle/string theory paradigm] that makes no predictions and therefore is not subject to experiment can never fail, but such a theory can never succeed either, as long as science stands for knowledge gained from rational argument borne out by evidence.
The number of spatial dimensions of our universe seems to be a fortuitous contingent fact. It is easy to construct geometries for spaces with more or fewer than three dimensions (or space-times with more or fewer than three spatial dimensions). It turns out that mathematicians have shown that spaces with more than three dimensions have some significant problems. For example, given our laws of physics, there are no stable orbits in spaces with more than three dimensions. It is hard to imagine how solar systems stable enough for life to slowly evolve could form without stable orbits.
Additionally, consider the effect of long-range forces (like gravity and electromagnetism). These forces work according to the inverse square law (i.e. the effect of the force decreases with the square of the distance). So move ten times farther away from a gravitational field or a light source and the effect of the gravity or light is 100 times less. To see intuitively why this is, imagine a light bulb as sending out millions of thin straight wires in all directions. The farther we get from the light, the more spread out these wires are; the closer we are, the closer together the wires are. The more concentrated the wires, the stronger the force.
But what would happen if we added one more spatial dimension to our universe? In this case, long-range forces would work according to an inverse cubed law. This is, of course, because there would be one more spatial dimension for the lines of force to be spread out within. So forces would decrease rapidly as you moved away from the source and increase rapidly as you moved closer. This would cause significant problems both at the atomic and at the cosmological scales. Rees explains the problem this way: An orbiting planet that was slowed down—even slightly—would then plunge ever faster into the sun, rather than merely shift into a slightly smaller orbit, because an inverse-cubed force strengthens so steeply towards the center; conversely an orbiting planet that was slightly speeded up would quickly spiral outwards into darkness.
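Rees's contrast between a forgiving inverse-square orbit and an unforgiving inverse-cube one can be reproduced in a small numerical experiment. The integrator, step size, and the 5% slow-down applied to the planet are illustrative choices, not values from the text:

```python
import math

def simulate(n, v_factor=0.95, steps=60000, dt=5e-4):
    """Symplectic-Euler integration of a planet under an attractive central
    force F = 1/r^n, launched from r = 1 with the circular-orbit speed
    scaled by v_factor (a slight slow-down). Returns (r_min, r_max)."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, v_factor          # circular speed at r = 1 is 1 for any n here
    r_min = r_max = 1.0
    for _ in range(steps):
        r = math.hypot(x, y)
        if r < 0.05:                # the planet has plunged into the star
            r_min = min(r_min, r)
            break
        a = -1.0 / r**n             # attractive radial acceleration
        vx += a * (x / r) * dt
        vy += a * (y / r) * dt
        x += vx * dt
        y += vy * dt
        r_min, r_max = min(r_min, r), max(r_max, r)
    return r_min, r_max

r_min2, r_max2 = simulate(n=2)      # inverse-square: bounded, slightly elliptical
r_min3, r_max3 = simulate(n=3)      # inverse-cube: spirals into the center
print(f"1/r^2: r stays within [{r_min2:.2f}, {r_max2:.2f}]")
print(f"1/r^3: r collapses below {r_min3:.2f}")
```

Under 1/r^2 the slowed planet merely settles into a slightly smaller ellipse; under 1/r^3 the same perturbation sends it spiraling into the center, just as the passage describes.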

Last edited by Otangelo on Sat Aug 07, 2021 8:11 pm; edited 4 times in total




Fine-tuning of the initial conditions of the universe
The fundamental constants of the universe contribute to the existence of the basic molecules of life
All the laws of Nature have particular constants associated with them, the gravitational constant, the speed of light, the electric charge, the mass of the electron, Planck's constant from quantum mechanics. Some are derived from physical laws (the speed of light, for example, comes from Maxwell's equations). However, for most, their values are arbitrary. The laws would still operate if the constants had different values, although the resulting interactions would be radically different.
The gravitational constant determines the strength of gravity. If it were lower, stars would have insufficient pressure to overcome the Coulomb barrier and start thermonuclear fusion (i.e. stars would not shine). If higher, stars would burn too fast and use up their fuel before life had a chance to evolve. The strong force coupling constant holds particles together in the nucleus of an atom. If it were weaker, multi-proton nuclei would not hold together, and hydrogen would be the only element in the Universe. If stronger, all elements lighter than iron would be rare; radioactive decay, which heats the core of the Earth, would also be reduced. The electromagnetic coupling constant determines the strength of the electromagnetic force that couples electrons to the nucleus. If it were weaker, no electrons would be held in orbit. If stronger, electrons would not bond with other atoms. Either way, no molecules. All of these constants are critical to the formation of the basic building blocks of life, and the range of possible values for them is very narrow, only about 1 to 5% for the combination of constants.

Fine-Tuning in Quantum Mechanics 
The rules of quantum mechanics are very different from the rules of classical mechanics. In classical physics, the rules are deterministic and each object has both a place and a definite velocity. But in quantum mechanics, it's a bit more complicated. Particles such as electrons have wave-like properties, so we describe them with a wave function whose peaks and troughs tell us where the electron probably is, and where it is probably going. It is very good news for our universe that classical physics does not hold at the level of atoms, because if it did, atoms would be unstable. But classical mechanics does hold for larger objects like people, plants, and planets. So where is the boundary line between the quantum and the classical? The answer lies in Planck's constant: below the scale it sets, the rules of quantum mechanics hold, and above it, the rules of classical mechanics hold. What would happen if we changed Planck's constant? If we brought the constant to 0, then classical mechanics would hold not only for medium-sized objects like us but also for atoms. This would be a disaster for our universe because atoms would become unstable as electrons lose energy and spiral into the nucleus. Such a universe could not have much interesting chemistry. But what if we made Planck's constant considerably larger? In this imaginary universe, medium-sized material objects would behave in quantum-like ways. While there are tricky philosophical questions about how to interpret quantum mechanics, we can be sure that if ordinary medium-sized objects behaved according to these laws, the world would be a very different place. In such a world bodies would have to be "fuzzy." It would be like Schrodinger's "quite ridiculous" cat: never knowing where things are or where they are going. Even more confusingly, one's own body could be similarly fuzzy.
While it is unclear exactly what such a world would look like, we know that it would not obey the laws of classical mechanics and that objects would have to behave in both wave-like and particle-like ways. Whether it would be possible to hunt down a boar that moved according to a wave function is far from clear (at best!), not to mention that one could not have both a definite place and a determinate velocity at the same time. Imagine kicking a ball in a world with a very large Planck’s constant: both the world around us and the ball would be radically unpredictable. This lack of predictability would be a significant problem for the existence of life. Thus, it seems as though Planck’s constant has to be relatively close to its current value for atoms to be stable and life to be possible.
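To make the boundary concrete, here is a small sketch (my own illustration using the standard de Broglie relation, lambda = h / (m*v), not a quote from the sources above). It compares the quantum wavelength of an electron in an atom with that of an everyday object, showing why Planck's constant at its actual value confines quantum behaviour to the atomic scale:

```python
# The de Broglie wavelength lambda = h / (m*v) marks roughly where
# quantum behaviour matters: when lambda is comparable to the object's
# size or surroundings, classical mechanics fails.
h = 6.626e-34  # Planck's constant, J*s

def de_broglie(m, v):
    """Wavelength in metres for mass m (kg) moving at speed v (m/s)."""
    return h / (m * v)

# Electron in a hydrogen atom: wavelength comparable to atomic size,
# so quantum rules dominate.
lam_e = de_broglie(9.109e-31, 2.2e6)

# Thrown ball (0.15 kg at 10 m/s): wavelength absurdly small,
# so classical rules hold.
lam_ball = de_broglie(0.15, 10.0)

print(f"electron: {lam_e:.2e} m")    # ~3e-10 m, about an atomic diameter
print(f"ball:     {lam_ball:.2e} m")  # ~4e-34 m, utterly negligible
```

If Planck's constant were many orders of magnitude larger, the ball's wavelength would become macroscopic, which is the "fuzzy bodies" scenario the paragraph describes.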

Initial Conditions and “Brute Facts”  
Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the beginning of the universe. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low entropy (that is, highly ordered) initial state of the universe. This refers to the initial distribution of mass-energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of the initial low-entropy state of our universe occurring by chance alone are on the order of 1 in 10^(10^123). This ratio is vastly beyond our powers of comprehension. Since we know a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, then they probably won’t be persuaded by additional examples of fine-tuning. In addition to initial conditions, there are a number of other, well-known features of the universe that also exhibit a high degree of fine-tuning. Among them are the following:

Ratio of masses for protons and electrons: if it were slightly different, building blocks for life such as DNA could not be formed.
Velocity of light: if it were larger, stars would be too luminous; if it were smaller, stars would not be luminous enough.
Mass excess of neutron over proton: if it were greater, there would be too few heavy elements for life; if it were smaller, stars would quickly collapse as neutron stars or black holes.

Energy-Density is Finely-Tuned
The density of the universe one nanosecond (a billionth of a second) after the beginning had to have the precise value of 10^24 kilograms per cubic meter. If the density were larger or smaller by only 1 kilogram per cubic meter, galaxies would never have developed. This corresponds to a fine-tuning of 1 part in 10^24. The amount of matter (or more precisely, energy density) in our universe at the Big Bang turns out to be finely tuned to about 1 part in 10^55. In other words, to get a life-permitting universe the amount of mass would have to be set to a precision of 55 decimal places. This fine-tuning arises because of the sensitivity to the initial conditions of the universe. If the initial energy density had been slightly larger, gravity would have quickly slowed the expansion and then caused the universe to collapse too quickly for life to form. Conversely, if the density were a tad smaller, the universe would have expanded too quickly for galaxies, stars, or planets to form. Life could not originate without a long-lived, stable energy source such as a star. Thus, life would not be possible unless the density were just right – if we added or subtracted even just the mass of our body to that of the universe, the result would have been catastrophic!

Cosmic inflation
is a potential dynamical solution to this problem, based on a rapid early expansion of the universe. There are six aspects of inflation that would have to be properly set up, some of which turn out to require fine-tuning. One significant aspect is that inflation must last for the proper amount of time – inflation is posited to have been an extremely brief but hyper-fast expansion of the early universe. If inflation had lasted a fraction of a nanosecond longer, the entire universe would have been merely a thin hydrogen soup, unsuitable for life. In a best-case scenario, about 1 in 1000 inflationary universes would avoid lasting too long. The biggest issue, though, is that for inflation to start, it needs a very special/rare state of extremely smooth energy density. Even if inflation solves this fine-tuning problem, one should not expect new physics discoveries to do away with other cases of fine-tuning: “Inflation represents a very special case… This is not true of the vast majority of fine-tuning cases. There is no known physical scale waiting in the life-permitting range of the quark masses, fundamental force strengths, or the dimensionality of spacetime. There can be no inflation-like dynamical solution to these fine-tuning problems because dynamical processes are blind to the requirements of intelligent life. What if, unbeknownst to us, there was such a fundamental parameter? It would need to fall into the life-permitting range. As such, we would be solving a fine-tuning problem by creating at least one more. And we would also need to posit a physical process able to dynamically drive the value of the quantity in our universe toward the new physical parameter.”

Half of the gravitational potential energy that arose from the inflationary period was converted into kinetic energy from which arose an almost identical number of particles and antiparticles, but with a very small excess of matter particles (about one part in several billion). All the antiparticles annihilated with their respective particles leaving a relatively small number of particles in a bath of radiation. The bulk of this ‘baryonic matter’ was in the form of quarks that, at about one second after the origin, were grouped into threes to form protons and neutrons. (Two up quarks and one down quark form a proton, and one up quark and two down quarks a neutron. The up quark has +2/3 charge and the down quark −1/3 charge, so the proton has a charge of +1 and the neutron 0 charge.)

The Flatness Problem
It is well known that our expanding Universe can be well described by a Friedmann–Lemaître–Robertson–Walker metric, which is the general metric for an isotropic, homogeneous expanding Universe. Our Universe today is very flat, meaning that k = 0; hence it follows from the Friedmann equation that Ω today is very close to 1. Using some algebraic manipulation, one can compute what Ω was in the early Universe. It turns out that to have Ω = 1 today, Ω_early must also lie extremely close to 1. This fine-tuning of Ω_early is called the Flatness Problem. This problem is what inflation theory solves.
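As an order-of-magnitude sketch (my own, not from the text above, using the standard textbook scalings where |Ω − 1| grows roughly like t during radiation domination and like t^(2/3) during matter domination), one can run today's flatness bound backwards to one second after the beginning:

```python
# Back-propagate today's generous bound |Omega - 1| < 0.01 to t = 1 s.
# The era boundaries below are approximate round numbers.
t_now   = 4.3e17   # age of the universe, s (~13.8 billion years)
t_eq    = 1.6e12   # matter-radiation equality, s (~50,000 years)
t_early = 1.0      # one second after the beginning, s
omega_dev_now = 0.01  # bound on |Omega - 1| today

# Shrink the deviation back through the matter era (t^(2/3) scaling),
# then through the radiation era (linear-in-t scaling).
dev_eq    = omega_dev_now * (t_eq / t_now) ** (2.0 / 3.0)
dev_early = dev_eq * (t_early / t_eq)

print(f"|Omega - 1| at t = 1 s must be < {dev_early:.1e}")  # ~1e-18
```

This is why Ω_early must match 1 to roughly eighteen decimal places at one second; pushed back to earlier epochs the required precision only sharpens.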

Last edited by Otangelo on Sat Aug 07, 2021 7:14 pm; edited 1 time in total




Fine-tuning of the fundamental forces of the universe
The so-called Standard Model of particle physics, the most fundamental theory that has been tested and that we know to be true (within the energies tested so far), contains 18 free parameters. These parameters cannot be calculated or predicted theoretically. One can look at them as 18 adjustment knobs we can twiddle to best adapt the theory to all known data. The problem is that this is just too many. The majority of the eighteen are the masses of the elementary particles. From a theoretical point of view, then, the particle masses are a total mystery - they might as well have been random numbers drawn from a hat.

What happens if we vary the strengths of the four fundamental forces: gravity, electromagnetism, the strong force, and the weak force?  
If gravity did not exist, masses would not clump together to form stars or planets.
If the strong force did not exist, protons and neutrons could not bind together, and hence no atoms with an atomic number greater than hydrogen would exist.
If the electromagnetic force did not exist, there would be no chemistry.
A universe without the weak interactions would be a universe where neutrons would be stable and all protons and neutrons would be forged into helium, leaving behind the merest trace of hydrogen.

Gravity is weaker than electric forces by a huge amount (about 10^36 times weaker). Suppose gravity were only 10^30 rather than 10^36 times feebler than electric forces. Atoms and molecules would behave as in our actual universe, but objects would not need to be so large before gravity became competitive with the other forces. Planets and stars would be scaled down by a factor of about a billion, and we would need to be scaled down as well, because gravity would crush anything as large as ourselves. Rees argues that even small bugs would have to have very thick legs. So far this seems to be an interesting universe with chemistry and organic life (albeit smaller organic life), but there are some significant problems. One of the biggest is the lifetime of the typical star: instead of being around for 10 billion years, the average star would live for a much shorter time. The weaker gravity is (provided it isn’t actually zero), the grander and more complex its consequences can be. But if we strengthen gravity much more, the consequences are even more dire. This grants gravity a rather firm upper bound relative to the electromagnetic force. Furthermore, if gravity were less than 0, it would be a repelling force, with obviously dire consequences. Collins calculates the quantity of fine-tuning here (assuming an upper bound given by the strength of the strong force) as 1 part in 10^36. Even if we are skeptical of these exact calculations, we can safely conclude that the life-permitting range is extremely small.
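The 10^36 figure can be checked directly from the measured constants. A minimal sketch (my own, using standard CODATA values) of the ratio of electric to gravitational attraction between two protons:

```python
# Both forces fall off as 1/r^2, so the distance cancels and the ratio
# depends only on the fundamental constants.
G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_e = 8.988e9     # Coulomb constant, N m^2 C^-2
e   = 1.602e-19   # elementary charge, C
m_p = 1.673e-27   # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)
print(f"F_electric / F_gravity for two protons ~ {ratio:.1e}")  # ~1.2e36
```

Because the distance dependence cancels, this enormous ratio holds at every separation, which is what the paragraph's comparisons rely on.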

A universal attractive force such as gravity is required for life. If no gravity existed, there would be no stars, since the force of gravity is what holds the matter in stars together against the outward pressure caused by the high internal temperatures inside them. This means there would be no long-term energy sources to sustain life. There would also be no planets, since there would be nothing to bring material particles together, and any beings of significant size could not move around without floating off the planet with no way of returning.
If gravity were repulsive rather than attractive, then matter wouldn’t clump into complex structures. Remember: your density, thanks to gravity, is 10^30 times greater than the average density of the universe.

Things get worse when we fiddle with forces. Make the strength of gravity stronger or weaker by a factor of 100 or so, and you get universes where stars refuse to shine, or they burn so fast they exhaust their nuclear fuel in a moment.

The strong nuclear force
If the strong force were a long rather than short-range force, then there would be no atoms. Any structures that formed would be uniform, spherical, undifferentiated lumps, of arbitrary size, and incapable of complexity. 
It is the force that binds nucleons (i.e. protons and neutrons) together in the nucleus of an atom. Without it, the nucleons would not stay together. It is actually a result of a deeper force, the “gluonic force,” between the quark constituents of the neutrons and protons. It must be considerably stronger than the electromagnetic force; otherwise, the nucleus would come apart.  To have atoms with an atomic number greater than that of hydrogen, there must be a force that plays the same role as the strong nuclear force – that is, one that is much stronger than the electromagnetic force but only acts over a very short range. It should be clear that life could not be formed from mere hydrogen. One cannot obtain enough self-reproducing, stable complexity. Furthermore, in a universe in which no other atoms but hydrogen could exist, stars could not be powered by nuclear fusion, but only by gravitational collapse, thereby drastically decreasing the time for, and hence the probability of, the evolution of embodied life.

The strong force 
is the force that holds the nucleus of an atom together, and the electromagnetic force is the force that keeps electrons in orbit around a nucleus. The effect on the stability of elements of decreasing the strong force is straightforward, since the stability of elements depends on the strong force being strong enough to overcome the electromagnetic repulsion between protons in a nucleus. Since protons all have a positive electric charge and like charges repel each other, weakening the strong force would make it substantially harder for the nucleus of an atom to hold together. But how much can we imagine weakening the strong force before a nucleus with more than one proton becomes unstable? Doing the required calculations tells us that a 50 percent decrease in the strength of the strong force would undercut the stability of all elements essential for carbon-based life, with a slightly larger decrease eliminating all elements except hydrogen. Because it is the relative strengths of the forces that matter, one can reach the same instability with about a fourteen-fold increase in the electromagnetic force. This entails (keeping everything else the same) that in terms of atomic stability, there is a firm upper bound to the strength of the electromagnetic force (and, of course, it cannot fall below 0). The ratio between these forces is also significant when it comes to the production of heavy elements in stars.

A change of as little as 0.5 percent in the strength of the strong nuclear force, or 4 percent in the electric force, would destroy either nearly all carbon or all oxygen in every star, and hence the possibility of life as we know it. What would happen if we imagined eliminating a force from our universe (or setting its value to 0)? Interestingly, we can quickly infer that without gravity, electromagnetism, or the strong force, there is little chance for a life-permitting universe.

Messing with the strong or weak forces delivers elements that fall apart in the blink of an eye, or are too robust to transmute through radioactive decay into other elements.

The strong nuclear force (the force that binds together the particles in the nucleus of an atom) has a value of 0.007. If that value had been 0.006 or less, the Universe would have contained nothing but hydrogen. If it had been 0.008 or higher, the hydrogen would have fused to make heavier elements. In either case, any kind of chemical complexity would have been physically impossible. And without chemical complexity, there can be no life.

The Nuclear Weak Coupling Force - Tuned to Give an Ideal Balance Between Hydrogen (as Fuel for Sun) and Heavier Elements as Building Blocks for Life
The weak force governs certain interactions at the subatomic or nuclear level. If the weak force coupling constant were slightly larger, neutrons would decay more rapidly, reducing the production of deuterons, and thus of helium and elements with heavier nuclei. On the other hand, if the weak force coupling constant were slightly weaker, the Big Bang would have burned almost all of the hydrogen into helium, with the ultimate outcome being a universe with little or no hydrogen and many heavier elements instead. This would leave no long-lived stars and no hydrogen-containing compounds, especially water. In 1991, Breuer noted that the appropriate mix of hydrogen and helium to provide hydrogen-containing compounds, long-term stars, and heavier elements is approximately 75 percent hydrogen and 25 percent helium, which is just what we find in our universe.
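As a rough back-of-envelope sketch (my own; the freeze-out temperature and timing below are approximate textbook values, not figures from Breuer), one can estimate the roughly 25 percent helium fraction from the neutron-proton mass difference and the timing of the weak interactions:

```python
# Estimate the primordial helium mass fraction. The weak interactions
# set the neutron/proton ratio at "freeze-out"; nearly all neutrons
# that survive until nucleosynthesis end up bound in helium-4.
import math

delta_m  = 1.293   # neutron-proton mass difference, MeV
T_freeze = 0.8     # weak freeze-out temperature, MeV (approximate)
tau_n    = 880.0   # free neutron lifetime, s
t_bbn    = 200.0   # approximate time until nucleosynthesis, s

n_over_p = math.exp(-delta_m / T_freeze)   # ~0.2 at freeze-out
n_over_p *= math.exp(-t_bbn / tau_n)       # some neutrons decay first

# Each helium-4 nucleus locks up two neutrons and two protons.
Y_helium = 2 * n_over_p / (1 + n_over_p)
print(f"helium mass fraction ~ {Y_helium:.2f}")  # ~0.27, vs. observed ~0.25
```

A stronger weak coupling would push T_freeze down and the helium fraction toward zero; a weaker one would push the ratio toward an almost all-helium universe, which is the sensitivity the paragraph describes.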

If, in electromagnetism, like charges attracted and opposites repelled, then there would be no atoms. As above, we would just have undifferentiated lumps of matter.
Without electromagnetism, there would be no atoms, since there would be nothing to hold the electrons in orbit. There would be no means of transmission of energy from stars for the existence of life on planets. It is doubtful whether enough stable complexity could arise in such a universe for even the simplest forms of life to exist.
The electromagnetic force allows matter to cool into galaxies, stars, and planets. Without such interactions, all matter would be like dark matter, which can only form into large, diffuse, roughly spherical haloes of matter whose only internal structure consists of smaller, diffuse, roughly spherical subhaloes.

Bohr’s rule of quantization 
requires that electrons occupy only fixed orbitals (energy levels) in atoms.  If we view the atom from the perspective of classical Newtonian mechanics, an electron should be able to go in any orbit around the nucleus. The reason is the same as why planets in the solar system can be any distance from the Sun – for example, the Earth could have been 150 million miles from the Sun instead of its present 93 million miles. Now the laws of electromagnetism – that is, Maxwell’s equations – require that any charged particle that is accelerating emit radiation. Consequently, because electrons orbiting the nucleus are accelerating – since their direction of motion is changing – they would emit radiation. This emission would in turn cause the electrons to lose energy, causing their orbits to decay so rapidly that atoms could not exist for more than a few moments. Thus, without the existence of this rule of quantization – or something relevantly similar – atoms could not exist, and hence there would be no life. 
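The collapse described above would be extremely fast. A short sketch (my own, using the standard classical-radiation estimate t ≈ a0^3 / (4 r_e^2 c), not a formula quoted in the text) of how long a classically radiating electron would survive in a hydrogen atom:

```python
# Classical estimate of the time for an electron, radiating according
# to Maxwell's equations, to spiral from the Bohr radius into the nucleus.
a0  = 5.29e-11   # Bohr radius, m
r_e = 2.818e-15  # classical electron radius, m
c   = 2.998e8    # speed of light, m/s

t_collapse = a0**3 / (4 * r_e**2 * c)
print(f"classical hydrogen atom collapses in ~{t_collapse:.1e} s")  # ~1.6e-11 s
```

Without quantization, every atom in the universe would be gone in tens of picoseconds, which is why the paragraph says atoms could not exist for more than a few moments.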

The Pauli Exclusion Principle 
If electrons were bosons rather than fermions, they would not obey the Pauli exclusion principle, and there would be no chemistry. The principle implies that no more than two electrons can occupy the same orbital in an atom, since a single orbital consists of two possible quantum states, corresponding to the spin pointing in one direction and the spin pointing in the opposite direction. This allows for complex chemistry, since without this principle all electrons would occupy the lowest atomic orbital. Thus, without this principle, no complex life would be possible.

The Higgs Mass
The Higgs boson, only recently discovered, plays an important role in the Standard Model: it is the only scalar field (spin = 0), and the coupling to this boson is responsible for part of the mass of the elementary particles. To keep the Higgs mass within the observed range, the large loop corrections must be cancelled by introducing a new term that offsets them to a high degree of precision. This works in practice, since the new term is not itself observable and the current value of the Higgs mass agrees well with predictions and measurements, but these extra terms need to be fine-tuned; this is dubbed the Hierarchy Problem. A popular solution to this problem is supersymmetry: the statement that each particle in the Standard Model has a superpartner called a sparticle. These new sparticles can introduce new loops that effectively counter the original large contributions. Yet, as these loops depend on the masses of the sparticles, any sparticle with a mass higher than the Higgs mass would reintroduce the problem. The problem is hence shifted from arbitrarily precise loop cancellations to the masses of these sparticles. However, there is currently no experimental evidence for supersymmetry.

When we invent a game or a program, we can implement any number of rules governing how the program executes its operations. These rules are grounded in our will to achieve a certain way for the game or program to be played or run.

If we compare the universe to a computer, there is a specifically selected program and specific rules that our physical universe follows and obeys. These rules are not grounded in anything else; they are arbitrary. It follows that these rules have to be explained from the outside.

That includes the speed of light, Planck's constant, electric charge, thermodynamics, and atomic theory. These rules can be described through mathematics, and the universe cannot operate without them in place. There is no deeper reason why these rules exist rather than not.
The universe operates with clockwork precision, is orderly, and is stable for unimaginable periods of time. This is the normal state of affairs, but there is no reason why this should be the norm and not the exception.
The fact that the universe had a beginning, and that its expansion rate was finely adjusted on a razor's edge, not too fast, not too slow, to permit the formation of matter, is by no means to be taken for granted as natural or self-evident. It's not. It's extraordinary to the extreme.
In order to have solid matter, the electrons that surround the atomic nucleus need to have precisely the right mass. They constantly jiggle, and if their mass were not right, that jiggling would be too strong, there would never be any solids, and we would not be here.
The masses of the subatomic particles to have stable atoms must be just right, and so the fundamental forces, and dozens of other parameters and criteria. They have to mesh like a clock - or even a watch, governed by the laws, principles, and relationships of quantum mechanics, all of which had to come from somewhere or from someone or from something.
If we had a life-permitting universe, with all laws and fine-tuned parameters in place, but without Bohr's rule of quantization and the Pauli Exclusion Principle in operation, there would be no stable atoms and no life.
Pauli's Exclusion Principle dictates that no more than two electrons can occupy exactly the same 'orbit', and then only if they have differing quantum values, like spin-up and spin-down. This prevents all electrons from crowding together like people packed into a subway train at rush hour; without it, all electrons would occupy the lowest atomic orbital. Thus, without this principle, no complex life would be possible.
Bohr’s rule of quantization requires that electrons occupy only fixed orbitals (energy levels) in atoms: an electron can be in this level or that level, but not at in-between levels. If we view the atom from the perspective of classical Newtonian mechanics, an electron should be able to go in any orbit around the nucleus. The rule prevents the negatively charged electrons from spiraling down and impacting the positively charged nucleus, which they would otherwise do. Design and fine-tuning by any other name still appear to be design and fine-tuning. Thus, without the existence of this rule of quantization, atoms could not exist, and hence there would be no life.

Last edited by Otangelo on Tue Dec 14, 2021 7:44 am; edited 5 times in total




Fine-tuning of subatomic particles, and atoms


The proton is required to be 1836 times heavier than the electron; otherwise, the universe would be devoid of life. The electric charge on the proton is exactly equal and opposite to that of the electron, to as many decimal places as you care to measure. This is more than slightly anomalous, in that the proton and the electron share nothing else in common. The proton is not a fundamental particle (it is composed in turn of a trio of quarks), but the electron is a fundamental particle, and the proton's mass is 1836 times greater than the electron's. So how come their electric charges are equal and opposite? Why is it so? There is no reason why this state of affairs could not be different! Indeed, there is an infinite number of possible different arrangements which would prohibit setting up stable atoms. Why is the electric charge on the electron exactly equal and opposite to that of the positron, the positron being the electron's antimatter opposite? That equal but opposite charge is again verified to as many decimal places as one can calculate. So that means the electric charge on the proton and the electric charge on the positron are exactly the same, yet apart from that, the two entities are as alike as chalk and cheese. Why is there this electric charge equality between different kinds of particles?

The electron, the proton, and the quark are all entities within the realm of particle hence quantum physics. All three carry an electrical charge. All three have mass. After those observations, things get interesting, or messy, depending on your point of view. The electric charge of the proton is exactly equal and the opposite of the electric charge on the electron, despite the proton being nearly 2000 times more massive. There’s no set-in-concrete theoretical reason why this should be so. It cannot be determined from first principles, only experimentally measured.

The proton's mass is 1836 times that of the electron. If this ratio were off even slightly, molecules would not form properly. It is also interesting to note that although protons and electrons are very different in size and mass, their charges are exactly the same in opposite degree. If this were not the case, again, the molecules necessary to support complex life could not form. The same is true of the electromagnetic coupling constant between protons and electrons - it is very precisely balanced to support complex life.

The quark masses 
vary over a huge range, from roughly 10 electron masses for the up and down quarks to 344,000 electron masses for the top quark. Physicists puzzled for some time about why the top quark is so heavy, but recently we have come to understand that it’s not the top quark that is abnormal: it’s the up and down quarks that are absurdly light. The fact that they are roughly twenty thousand times lighter than particles like the Z-boson and the W-boson is what needs an explanation. The Standard Model has not provided one. Thus, we can ask what the world would be like if the up and down quarks were much heavier than they are. Once again—disaster! Protons and neutrons are made of up and down quarks. According to the quark theory of protons and neutrons, the nuclear force (the force between nucleons) can be traced to quarks hopping back and forth between these nucleons. If the quarks were much heavier, it would be much more difficult to exchange them, and the nuclear force would practically disappear. With no sticky flypaper force holding the nucleus together, there could be no chemistry. Luck is with us again.
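As a quick numerical check (my own sketch; the quark masses below are recent PDG central values, which give a top/electron ratio near 338,000, in the same ballpark as the 344,000 quoted above from an older top-mass measurement):

```python
# Express quark masses in units of the electron mass to show the huge
# spread the paragraph describes.
m_e = 0.511  # electron mass, MeV/c^2
masses_mev = {"up": 2.2, "down": 4.7, "top": 172_700.0}

ratios = {name: m / m_e for name, m in masses_mev.items()}
for name, r in ratios.items():
    print(f"{name:>4}-quark ~ {r:,.0f} electron masses")
```

The up and down quarks indeed come out at roughly 4 and 9 electron masses, which is the "absurdly light" end of the range.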

The charge on the electron is a fundamental constant of nature
Like mass, charge is a physical property. If we place an object in a gravitational field, it experiences a force due to its mass. In the same way, a charged particle experiences a force in an electromagnetic field due to the charge it carries. It seems that the quarks teamed up exactly so that three of them cancel out (attract) exactly the electron's electric charge. Now, here is the thing: more elaborate composite nucleon models can have constituent partons with other rational charge multiples that do not cancel out the charge of the electron, and then there would be no stable atoms, only ions, no molecules, and no life in the universe! Composite particles could result in fractional net charges; for example, two quarks and one anti-quark could sum to a fractional charge (1/3). So is this state of affairs coincidence, or the result of the thoughts of our wise creator?
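The charge bookkeeping can be made explicit. A tiny sketch (my own) using exact fractions, so no rounding hides the cancellation:

```python
# Quark charges are exact thirds of the elementary charge; summed in
# the right combinations they yield the integer charges we observe.
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)

proton   = up + up + down    # uud -> +1
neutron  = up + down + down  # udd -> 0
electron = Fraction(-1)

print(proton, neutron, proton + electron)  # 1 0 0 (neutral hydrogen)
```

The exactness is the point: any model in which the parton charges did not sum to precisely ±1 or 0 would leave every "atom" a charged ion, as the paragraph argues.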

The partial (fractional) electrical charges on the up-quarks and the down-quarks had to arrange themselves just-so such that a proton is a unity of positive electric charge and a neutron is a unity of electric charge neutrality. Then, the positive electric charge on the proton has to balance just so (to an infinite number of decimal places, at least as close to infinity as one can actually measure and calculate) the negative electric charge of the electron. How can the electric charge of the electron be EXACTLY equal and opposite to that of the proton when they otherwise share nothing in common?

If hydrogen atoms couldn't link up with oxygen atoms there could be no water and no water implies no life could arise. The same applies to dozens of other essential molecules that life requires.

On the other hand, why isn't there a universal solvent or acid that disassembles molecules? Everything can be stored in at least one kind of container. That too seems to be essential for life as is the requirement that some things need to be in solution some of the time. An atom is literally 99.99% empty space. And the part which supposedly is matter, might be just pure energy. And as such, the whole universe is simply held together by Gods power and his word: information.

Where do quarks get their charge? 
Science has no answer to why nature uses certain fundamental rules and physical laws and not some other rules that we can conceive. That is probably a question best left to priests or philosophers. For example, electric charge is an intrinsic property of charged particles, and it comes in two flavors: positive and negative. Even scientists don’t know where it comes from; it’s something that’s observed and measured. It gets worse: nobody knows how, let alone why, opposite charges attract and similar charges repel according to Coulomb’s law. Those are the basic rules of Nature that we discovered. We don’t know why these are the rules. And unless we find a more fundamental rule from which these rules are deduced, we will never know the answer to that why question (and even then, we’d just replace one why with another).

What would happen to our universe if we changed the masses of the up quark, down quark, and electron? 
What would happen to our universe if we change the strengths of the fundamental forces? And what would happen to our universe if we eliminated one of the four fundamental forces (gravity, electromagnetism, strong force, weak force)? In each of these cases, even relatively minor changes would make the existence of intelligent organic life (along with almost everything else in our universe!) impossible. 

The neutron-to-proton mass ratio 
is 1.00137841870, which looks . . . uninspiring. Physically, this means that the proton has very nearly the same mass as the neutron, which is about 0.1 percent heavier. At first, it may seem as though nothing of significance hangs on this small difference. But that is wrong. All of life depends on it. The fact that the neutron’s mass is coincidentally just a little bit more than the combined mass of the proton, electron, and neutrino is what enables neutrons to decay. If the neutron were lighter, yet only by a fraction of 1 percent, it would have less mass than the proton, and the tables would be turned: isolated protons, rather than neutrons, would be unstable. Isolated neutrons decay within about fifteen minutes, because particles tend toward the lowest possible energy consistent with the conservation laws. Given this tendency of fundamental particles, the slightly higher mass of the neutron means that the proton is the lightest particle made from three quarks. The proton is stable because there is no lighter baryon to decay into. The ability of the proton to survive for an extended period without decaying is essential for the existence of life. In the early universe, there was a hot, dense soup of particles and radiation, and as the universe began to cool, heavier particles decayed into protons and neutrons. The Universe managed to lock some neutrons away inside nuclei in the first few minutes, before they decayed. Isolated protons were still important chemically because they could interact with electrons. In fact, a single proton is taken as an element even in the absence of any electrons—it is called hydrogen. But now imagine an alternative physics where the neutron was less massive: then free protons would decay into neutrons and positrons, with disastrous consequences for life, because without protons there could be no atoms and no chemistry. Neutrons are electrically neutral and so will not interact with electrons.
That means a universe dominated by neutrons would have almost no interesting chemistry. A decrease in the neutron’s mass by 0.8 MeV would entail an initially all-neutron universe. It is rather easy to arrange a universe to have no chemistry at all. If we examine a range of different possible masses for the up and down quarks (and so for the proton and neutron), we can conclude that almost all changes lead to universes with no chemistry. Thus, there are firm and fairly narrow limits on the relative masses of the up and down quarks if our universe is going to have any interesting chemistry.
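The decay energetics behind these paragraphs can be checked directly from the measured masses. A minimal sketch (my own, masses in MeV/c^2, neutrino mass neglected as it is far below the other scales):

```python
# Energy bookkeeping for neutron beta decay: n -> p + e + antineutrino.
# Decay is allowed only if the neutron outweighs its decay products.
m_n = 939.565  # neutron mass
m_p = 938.272  # proton mass
m_e = 0.511    # electron mass

q_value = m_n - (m_p + m_e)
print(f"energy released in neutron decay: {q_value:.3f} MeV")  # ~0.782 MeV

# The margin is tiny: shave ~0.8 MeV (under 0.1%) off the neutron mass
# and the inequality flips, making the free proton the unstable particle.
assert m_n > m_p + m_e
```

The ~0.78 MeV margin is the "fraction of 1 percent" the text refers to, and it matches the 0.8 MeV decrease said to yield an initially all-neutron universe.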

A small decrease in neutron mass of around 0.5 to 0.7 MeV would result in an almost-all-helium universe. This would have serious life-inhibiting consequences, since helium stars have a lifetime of at most 300 million years.

Quantum jiggling of electrons: 
In solids, atoms are held together by chemical bonds in a fixed lattice. We can break this lattice by shaking it vigorously. From quantum mechanics, we know that the particles in the universe are constantly jiggling and moving; in fact, it is this constant jiggling that makes it impossible to bring anything to absolute zero, which would require all atomic motion to stop. If we imagine increasing the mass of the electron, this quantum jiggling becomes a significant problem for the stability of solid structures. If the electron mass came within a factor of a hundred of the proton mass, the quantum jiggling of electrons would destroy the lattice. In short: no solids. So there is a firm range for the mass of the electron: it can run from zero up to about one-hundredth of the mass of the proton before life as we know it becomes impossible. We see from these very brief considerations that the masses of the fundamental particles cannot be changed very much relative to one another without losing chemistry (and so organic life).

For instance, to make life possible, the masses of the fundamental particles must meet an exacting combination of constraints. The fine-tuning of the masses of quarks is considerable: 1 part in 10^21. In addition, the difference in masses between the quarks cannot exceed one megaelectronvolt, the equivalent of one-thousandth of 1 percent of the mass of the largest known quark, without producing either a neutron-only or a proton-only universe, both exceedingly boring and incompatible with life and even with simple chemistry. Equally problematic, increasing the mass of the electron by a factor of 2.5 would result in all the protons in all the atoms capturing their orbiting electrons and turning into neutrons. In that case, neither atoms, nor chemistry, nor life could exist. What's more, the mass of the electron has to be less than the difference between the masses of the neutron and the proton, and that difference represents a fine-tuning of roughly 1 part in 1,000. In addition, if the mass of the neutrino were increased by a factor of 10, stars and galaxies would never have formed. The mass of a neutrino is about one-millionth that of an electron, so the allowable change is minuscule compared to its possible range. The combination of all these precisely fine-tuned conditions, including the fine-tuning of the laws and constants of physics, the initial arrangement of matter and energy, and various other contingent features of the universe, presents a remarkably restrictive set of criteria. These requirements for the existence of life, defying our ability to describe their extreme improbability, have seemed to many physicists to require some explanation.
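The "factor of 2.5" claim for the electron follows directly from the neutron-proton mass gap. The sketch below is my own illustration with standard mass values, not part of the original text:

```python
# Approximate masses in MeV/c^2 (Particle Data Group values).
M_PROTON = 938.272
M_NEUTRON = 939.565
M_ELECTRON = 0.511

# Electron capture (p + e- -> n + neutrino) inside atoms becomes
# energetically favourable once the electron mass exceeds the gap.
gap = M_NEUTRON - M_PROTON
print(f"neutron-proton mass gap: {gap:.3f} MeV")  # ~1.293 MeV

max_factor = gap / M_ELECTRON
print(f"electron could be at most ~{max_factor:.1f}x heavier")  # ~2.5x

# Our universe sits inside the life-permitting window: m_e < (m_n - m_p).
assert M_ELECTRON < gap
```

Dividing the 1.293 MeV gap by the 0.511 MeV electron mass gives almost exactly the factor of 2.5 quoted in the text.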

Precise values of masses of quarks:  
Strikingly, the masses of "up quarks" and "down quarks," the constituent parts of protons and neutrons, must have precise values to allow for the production of the elements, including carbon, essential for a life-friendly universe. The masses of these quarks must simultaneously satisfy nine different conditions for the right nuclear reactions to have occurred in the early universe. The "right" reactions are those that produce the right elements (such as carbon and oxygen) in the abundances necessary for life. The fine-tuning of the masses of these two naturally occurring quarks, relative to the range of possible values for the mass of any fundamental particle, is exquisite. Physicists conceive of that range as extending from zero up to the so-called Planck mass, an important unit of measure in quantum physics. Yet the mass of the "up quark" must fall between zero and just one billion-trillionth of the Planck mass, corresponding to a fine-tuning of roughly 1 part in 10^21. The mass of the "down quark" must be similarly finely tuned.
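The "one billion-trillionth of the Planck mass" figure can be checked with published values. This arithmetic sketch is mine, not from the text; it assumes the approximate Particle Data Group up-quark mass (~2.2 MeV) and a Planck mass of ~1.22e19 GeV:

```python
# Approximate values: up quark ~2.2 MeV, Planck mass ~1.22e19 GeV.
M_UP_QUARK = 2.2           # MeV
M_PLANCK = 1.22e19 * 1e3   # MeV (converted from GeV)

fraction = M_UP_QUARK / M_PLANCK
print(f"up-quark mass as a fraction of the Planck mass: {fraction:.1e}")
# ~1.8e-22: the up quark sits within roughly one part in 10^21 to 10^22
# of the very bottom of its conceivable mass range.
```

The result, a few parts in 10^22, is consistent with the rough "1 part in 10^21" figure quoted above.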

Imagine a universe slightly different from our own. Let's play with just one number and see what happens: the mass of the down quark. Currently, it is slightly heavier than the up quark. A proton is made of two light-ish up quarks plus one heavy-ish down quark; a neutron is made of two heavy-ish down quarks plus one light-ish up quark. Hence a neutron is a little heavier than a proton. That heaviness has consequences. The extra mass corresponds to extra energy, making the neutron unstable. A free neutron, such as one produced in a nuclear reactor, breaks down around 15 minutes after being created: it decays into a proton and spits out an electron and an antineutrino. Protons, on the other hand, appear to have an effectively infinite lifespan. This explains why the early universe was rich in protons. A single proton plus an electron is what we know as hydrogen, the simplest atom. It dominated the early cosmos, and even today hydrogen represents 90% of all the atoms in the universe. The smaller number of surviving neutrons combined with protons, shedding their energy to form stable chemical elements.

Now let's start to play. If we ratchet up the mass of the down quark, eventually something drastic takes place. Instead of the proton being the lightest member of the family, a particle made of three up quarks usurps its position. It's known as the Δ++. It has only been seen in the rubble of particle colliders and exists only fleetingly before decaying. But in a heavy-down-quark universe, it is the Δ++ that is stable while the proton decays! In this alternative cosmos, the Big Bang generates a sea of Δ++ particles rather than a sea of protons. This might not seem like much of an issue, except that this usurper carries an electric charge twice that of the proton, since each up quark carries a positive charge of two-thirds.

As a result, the Δ++ holds on to two electrons, and so the simplest element behaves not like reactive hydrogen but like inert helium.

This situation is devastating for the possibility of complex life: in a heavy-down-quark universe, the simplest atoms will not join to form molecules. Such a universe is destined to be inert and sterile over its entire history. And how much would we need to increase the down quark's mass to bring about such a catastrophe? Make it more than 70 times heavier and there would be no life. While this may not seem very finely tuned, physics suggests that the down quark could have been many trillions of times heavier. So we are actually left with the question: why is the down quark so light?
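Measured against the full range of masses a particle could conceivably take, the 70-times ceiling is a sliver. The sketch below is my own illustration, assuming the approximate down-quark mass (~4.7 MeV) and treating the Planck mass as the conceivable upper bound, as the text's framing suggests:

```python
# Assumed values: down quark ~4.7 MeV (PDG), Planck mass ~1.22e22 MeV
# taken here as the conceivable upper end of the mass range.
M_DOWN_QUARK = 4.7   # MeV
M_PLANCK = 1.22e22   # MeV

# Life-permitting ceiling quoted in the text: ~70x the actual mass.
ceiling = 70 * M_DOWN_QUARK

# Fraction of the conceivable mass range that permits chemistry.
viable_fraction = ceiling / M_PLANCK
print(f"life-permitting fraction of the range: {viable_fraction:.1e}")
# ~2.7e-20: "70 times heavier" is generous in absolute terms but tiny
# against the trillions-of-times-heavier masses physics would allow.
```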

Complex atomic nuclei are not likely to result from random collisions of particles, even in the early hot universe. Free protons (each a bundle of three quarks) and electrons, given their opposite electric charges, could simply have joined up. Had the equal numbers of electrons and protons formed after the Big Bang combined in this way, the cosmos would be a soup of neutrons and perhaps neutrinos, and that would pretty much be that. How is it that electrons, protons, and neutrons can arrange themselves just so as to eventually produce macroscopic matter, including us? How do we get from particle physics to chemistry?

Why do fundamental particles possess the specific values of mass that they have? Presently, physicists have no explanation for this and similar questions. If the masses of particles or the values of fundamental constants were much different from what physicists have measured, carbon-based intelligent beings might not be here to measure them, because fundamental particles might not assemble into stable atoms, atoms might not form rocky planets, and dying stars might not produce the chemical elements we find in our bodies.

Fine-tuning of carbon nucleosynthesis, the basis of all life on Earth
If the strength of the strong nuclear force, gs, were changed by 1%, the rate of the triple-alpha reaction would be affected so markedly that the production of biophilic abundances of either carbon or oxygen would be prevented. Carbon is a fascinating nucleus: it is the basis of life, and it is the product of an extraordinary energy coincidence at the level of nuclei. It is striking that the process of complexification of matter has to overcome these different obstacles, and the result is very inefficient: the present mass fraction of atoms heavier than helium is only about 2% of the baryonic matter, which itself makes up only about 5% of the Universe. This shows how extraordinarily rare ordinary matter is.
Carbon is unique in its ability to combine with other atoms, forming a vast and unparalleled number of compounds in combination with hydrogen, oxygen, and nitrogen. This universe of organic chemistry— with its huge diversity of chemical and physical properties—is precisely what is needed for the assembling of complex chemical systems. Furthermore, the general ‘metastability’ of carbon bonds and the consequent relative ease with which they can be assembled and rearranged by living systems contributes greatly to the fitness of carbon chemistry for biochemical life. No other atom is nearly as fit as carbon for the formation of complex biochemistry. Today, one century later, no one doubts these claims. Indeed the peerless fitness of the carbon atom to build chemical complexity and to partake in biochemistry has been affirmed by a host of researchers.  
Hoyle: "If you wanted to produce carbon and oxygen in roughly equal quantities by stellar nucleosynthesis, these are the two levels you would have to fix, and your fixing would have to be just about where these levels are found to be ... A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature." If those constants had been very slightly different, the universe would not have been conducive to the development of matter, astronomical structures, or elemental diversity, and thus to the emergence of complex chemical systems. In lattice calculations done at the Jülich Supercomputing Centre, physicists found that even a slight variation in the light quark masses would change the energy of the Hoyle state, and this in turn would affect the production of carbon and oxygen in such a way that life as we know it would not exist.
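The energy coincidence behind Hoyle's remark can be reproduced from published atomic masses. The following is my own arithmetic sketch, not part of the original text; it assumes standard AME/CODATA mass values and the measured Hoyle-state energy of 7.654 MeV:

```python
# Atomic masses in unified mass units (u) and conversion to MeV.
U_TO_MEV = 931.494
M_HE4_ATOMIC = 4.002602   # helium-4 atom (includes 2 electrons)
M_C12_ATOMIC = 12.0       # carbon-12 atom, exact by definition
M_ELECTRON = 0.511        # MeV

# Nuclear masses: strip off the bound electrons.
m_alpha = M_HE4_ATOMIC * U_TO_MEV - 2 * M_ELECTRON
m_c12 = M_C12_ATOMIC * U_TO_MEV - 6 * M_ELECTRON

# Energy of three free alpha particles above the carbon-12 ground state.
three_alpha_threshold = 3 * m_alpha - m_c12
print(f"3-alpha threshold above C-12: {three_alpha_threshold:.2f} MeV")  # ~7.27

# Measured Hoyle-state excitation energy in carbon-12.
HOYLE_STATE = 7.654
margin = HOYLE_STATE - three_alpha_threshold
print(f"resonance margin: {margin:.2f} MeV")  # ~0.4 MeV
```

The Hoyle state sits only ~0.4 MeV above the three-alpha threshold, out of nuclear energies of ~11,000 MeV: this narrow resonance is what makes the triple-alpha reaction fast enough to build carbon in stars.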

Last edited by Otangelo on Sat Aug 07, 2021 7:27 pm; edited 1 time in total




Here are just a few of the cosmic coincidences that have been noted in the scientific literature:

