ElShamah - Reason & Science: Defending ID and the Christian Worldview

Otangelo Grasso: This is my library, where I collect information and present arguments developed by myself that lead, in my view, to the Christian faith, creationism, and Intelligent Design as the best explanation for the origin of the physical world.


Fine-tuning of the Big Bang


Posted Fri Aug 22, 2014 2:05 pm





The first thing that had to be finely tuned in the universe was the Big Bang itself. Fast-forward a nanosecond or two and you had a cosmic soup of elementary stuff - electrons and quarks and neutrinos and photons and gravitons and muons and gluons and Higgs bosons (plus corresponding antiparticles such as the positron) - a real vegetable soup. There had to have been a mechanism to produce this myriad of fundamental particles instead of just one kind. There could have been a cosmos in which the sum total of mass was pure neutrinos and all of the energy was purely kinetic.

The paper Do We Live in the Best of All Possible Worlds? The Fine-Tuning of the Constants of Nature states: "The evolution of the Universe is characterized by a delicate balance of its inventory, a balance between attraction and repulsion, between expansion and contraction." It mentions four parameters that must be finely tuned. A fifth, the amplitude of primordial fluctuations, is discussed in Martin Rees's book Just Six Numbers. To these we can add matter-antimatter asymmetry, and the initial entropy: physicist Roger Penrose estimated that the odds of the initial low-entropy state of our universe occurring by chance alone are on the order of 1 in 10^10^123.

1. Gravitational constant G: 1 part in 10^60
2. Omega Ω, the density of (dark and baryonic) matter: 1 part in 10^62 or less
3. Hubble constant H0: 1 part in 10^60
4. Lambda Λ, the cosmological constant: 1 part in 10^122
5. Primordial fluctuations Q: 1 part in 100,000
6. Matter-antimatter asymmetry: 1 part in 10,000,000,000
7. The low-entropy state of the universe: 1 in 10^10^123
8. The universe requires three dimensions of space, plus time, to be life-permitting.

Hawking: If the overall density of the universe were changed by even 0.0000000000001 percent, no stars or galaxies could be formed. If the rate of expansion one second after the Big Bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it reached its present size

Stephen C. Meyer: The return of the God hypothesis, page 162
Though many leading physicists cite the expansion rate of the universe as a good example of fine-tuning, some have questioned whether it should be considered an independent fine-tuning parameter, since the rate of expansion is a consequence of other physical factors. Nevertheless, these physical factors are themselves independent of each other and probably finely tuned.  For example, the expansion rate in the earliest stages of the history of the universe would have depended upon the density of mass and energy at those early times. 

Paul Davies: The Guardian (UK), 26 Jun 2007, page 23
“Scientists are slowly waking up to an inconvenient truth - the universe looks suspiciously like a fix. The issue concerns the very laws of nature themselves. For 40 years, physicists and cosmologists have been quietly collecting examples of all too convenient "coincidences" and special features in the underlying laws of the universe that seem to be necessary in order for life, and hence conscious beings, to exist. Change any one of them and the consequences would be lethal. Fred Hoyle, the distinguished cosmologist, once said it was as if "a super-intellect has monkeyed with physics".

Naumann, Thomas: Do We Live in the Best of All Possible Worlds? The Fine-Tuning of the Constants of Nature Sep 2017
The Cosmic Inventory
The evolution of the Universe is characterized by a delicate balance of its inventory, a balance between attraction and repulsion, between expansion and contraction. Since gravity is purely attractive, its action sums up throughout the whole Universe. The strength of this attraction is defined by the gravitational constant GN and by an environmental parameter, the density ΩM of (dark and baryonic) matter. The strength of the repulsion is defined by two parameters: the initial impetus of the Big Bang parameterized by the Hubble constant H0 (the expansion rate of the universe) and the cosmological constant Λ.

Fine-tuning of the attractive forces:
1. Gravitational constant G: It must be specified to a precision of 1/10^60
2. Omega Ω, density of dark matter: The early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less.
Fine-tuning of the repulsive forces:
3. Hubble constant H0: the so-called "density parameter" was set, in the beginning, with an accuracy of 1 part in 10^60.
4. Lambda Λ, the cosmological constant: In terms of Planck units, and as a natural dimensionless value, Λ is fine-tuned to 1 part in 10^122.
5. Primordial fluctuations Q: The amplitude of primordial fluctuations is one of Martin Rees’ Just Six Numbers. It is a ratio equal to 1/100,000.

1. Gravitational constant G
Dr. Walter L. Bradley: Is There Scientific Evidence for the Existence of God? How the Recent Discoveries Support a Designed Universe 20 August 2010
The "Big Bang" follows the physics of an explosion, though on an inconceivably large scale. The critical boundary condition for the Big Bang is its initial velocity. If this velocity is too fast, the matter in the universe expands too quickly and never coalesces into planets, stars, and galaxies. If the initial velocity is too slow, the universe expands only for a short time and then quickly collapses under the influence of gravity. Well-accepted cosmological models tell us that the initial velocity must be specified to a precision of 1/10^60. This requirement seems to overwhelm chance and has been the impetus for creative alternatives, most recently the new inflationary model of the Big Bang. Even this newer model requires a high level of fine-tuning for it to have occurred at all and to have yielded irregularities that are neither too small nor too large for the formation of galaxies. Astrophysicists originally estimated that two components of an expansion-driving cosmological constant must cancel each other with an accuracy of better than 1 part in 10^50. In the January 1999 issue of Scientific American, the required accuracy was sharpened to the phenomenal exactitude of 1 part in 10^123. Furthermore, the ratio of the gravitational energy to the kinetic energy must be equal to 1.00000 with a variation of less than 1 part in 100,000. While such estimates are being actively researched at the moment and may change over time, all possible models of the Big Bang will contain boundary conditions of a remarkably specific nature that cannot simply be described away as "fortuitous".

Guillermo Gonzalez, Jay W. Richards: The Privileged Planet: How Our Place in the Cosmos Is Designed for Discovery 2004 page 216
Gravity is the least important force at small scales but the most important at large scales. It is only because the minuscule gravitational forces of individual particles add up in large bodies that gravity can overwhelm the other forces. Gravity, like the other forces, must also be fine-tuned for life. Gravity would alter the cosmos as a whole. For example, the expansion of the universe must be carefully balanced with the deceleration caused by gravity. Too much expansion energy and the atoms would fly apart before stars and galaxies could form; too little, and the universe would collapse before stars and galaxies could form. The density fluctuations of the universe when the cosmic microwave background was formed also must be a certain magnitude for gravity to coalesce them into galaxies later and for us to be able to detect them. Our ability to measure the cosmic microwave background radiation is bound to the habitability of the universe; had these fluctuations been significantly smaller, we wouldn’t be here.

Laurence Eaves The apparent fine-tuning of the cosmological, gravitational and fine structure constants
A value of alpha^-1 close to 137 appears to be essential for the astrophysics, chemistry and biochemistry of our universe. With such a value of alpha, the exponential forms indicate that Λ and G have the extremely small values that we observe in our universe, so that Λ is small enough to permit the formation of large-scale structure, yet large enough to detect and measure with present-day astronomical technology, and G is small enough to provide stellar lifetimes that are sufficiently long for complex life to evolve on an orbiting planet, yet large enough to ensure the formation of stars and galaxies. Thus, if Beck’s result is physically valid and if the hypothetical relation represents an as yet ill-defined physical law relating alpha to G, then the apparent fine-tuning of the three is reduced to just one, rather than three coincidences, of seemingly extreme improbability.

2. Omega Ω, density of dark matter
In A Brief History of Time (page 126), Stephen Hawking wrote:
If the overall density of the universe were changed by even 0.0000000000001 percent, no stars or galaxies could be formed. If the rate of expansion one second after the Big Bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it reached its present size.

Eric Metaxas: Is atheism dead? page 58
Hawking certainly was not one who liked the idea of a universe fine-tuned to such a heart-stopping level, but he nonetheless sometimes expressed what the immutable facts revealed. In that same book, he said, “It would be very difficult to explain why the universe would have begun in just this way, except as the act of a God who intended to create beings like us.” Again, considering the source, this is an astounding admission, especially because in the decades after he tried to wriggle away from this conclusion any way he could, often manufacturing solutions to the universe’s beginning that seemed intentionally difficult to comprehend by anyone clinging to the rules of common sense. Nonetheless, in his famous book, he was indisputably frank about what he saw and what the obvious conclusions seemed to be. The astrophysicist Fred Hoyle was also candid about the universe’s fine-tuning, despite being a long-time and dedicated atheist. In fact, as we have said, he led the charge in wrinkling his nose at the repulsive idea of a universe with a beginning, and inadvertently coined the term “Big Bang.” But in 1959, a decade after this, he was giving a lecture on how stars in their interiors created every naturally occurring element in the universe and was explaining that they do this with the simplest element: hydrogen.

Naumann, Thomas: Do We Live in the Best of All Possible Worlds? Sep 2017
As determined from data of the PLANCK satellite combined with other cosmological observations, the total density of the Universe is Ωtot = ρ/ρcrit = 1.0002 ± 0.0026 and thus extremely close to its critical density ρcrit = 3H0²/8πGN, with a contribution from the curvature of the Universe of Ωk = 0.000 ± 0.005. However, the balance between the ingredients of the Universe is time-dependent. Any deviation from zero curvature after inflation is amplified by many orders of magnitude. Hence, a cosmic density fine-tuned to flatness today to less than a per mille must have been initially fine-tuned to tens of orders of magnitude.

Brad Lemley: Why is There Life? November 01, 2000
Omega Ω measures the density of material in the universe, including galaxies, diffuse gas, and dark matter. The number reveals the relative importance of gravity in an expanding universe. If gravity were too strong, the universe would have collapsed long before life could have evolved. Had it been too weak, no galaxies or stars could have formed.

Wikipedia:  Flatness problem
The flatness problem (also known as the oldness problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. Such problems arise from the observation that some of the initial conditions of the universe appear to be fine-tuned to very 'special' values, and that small deviations from these values would have extreme effects on the appearance of the universe at the current time.

In the case of the flatness problem, the parameter which appears fine-tuned is the density of matter and energy in the universe. This value affects the curvature of space-time, with a very specific critical value being required for a flat universe. The current density of the universe is observed to be very close to this critical value. Since any departure of the total density from the critical value would increase rapidly over cosmic time, the early universe must have had a density even closer to the critical density, departing from it by one part in 10^62 or less. This leads cosmologists to question how the initial density came to be so closely fine-tuned to this 'special' value.

The problem was first mentioned by Robert Dicke in 1969. The most commonly accepted solution among cosmologists is cosmic inflation, the idea that the universe went through a brief period of extremely rapid expansion in the first fraction of a second after the Big Bang; along with the monopole problem and the horizon problem, the flatness problem is one of the three primary motivations for inflationary theory.

Martin Rees Just Six Numbers : The Deep Forces that Shape the Universe 1999 page 71
We know the expansion speed now, but will gravity bring it to a halt? The answer depends on how much stuff is exerting a gravitational pull. The universe will recollapse - gravity eventually defeating the expansion unless some other force intervenes - if the density exceeds a definite critical value. We can readily calculate what this critical density is. It amounts to about five atoms in each cubic metre. That doesn't seem much; indeed, it is far closer to a perfect vacuum than experimenters on Earth could ever achieve. But the universe actually seems to be emptier still. But galaxies are, of course, especially high concentrations of stars. If all the stars from all the galaxies were dispersed through intergalactic space, then each star would be several hundred times further from its nearest neighbour than it actually is within a typical galaxy - in our scale model, each orange would then be millions of kilometres from its nearest neighbours. If all the stars were dismantled and their atoms spread uniformly through our universe, we'd end up with just one atom in every ten cubic metres. There is about as much again (but seemingly no more) in the form of diffuse gas between the galaxies. That's a total of 0.2 atoms per cubic metre, twenty-five times less than the critical density of five atoms per cubic metre that would be needed for gravity to bring cosmic expansion to a halt.
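Rees's figure of about five atoms per cubic metre can be checked from the definition of the critical density given earlier, ρcrit = 3H0²/8πG. A minimal sketch in Python, assuming a Hubble constant of about 70 km/s/Mpc (the precise value is still debated):

```python
import math

# Physical constants (SI units)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27       # mass of a hydrogen atom, kg
MPC = 3.0857e22        # one megaparsec in metres

# Assumed Hubble constant: ~70 km/s/Mpc, converted to s^-1
H0 = 70e3 / MPC

# Critical density: rho_crit = 3 H0^2 / (8 pi G)
rho_crit = 3 * H0**2 / (8 * math.pi * G)

# Express as hydrogen atoms per cubic metre
atoms_per_m3 = rho_crit / M_H

print(f"rho_crit ~ {rho_crit:.2e} kg/m^3")
print(f"         ~ {atoms_per_m3:.1f} hydrogen atoms per cubic metre")
```

This gives roughly 9 × 10^-27 kg/m³, i.e. about five to six hydrogen atoms per cubic metre, matching the figure Rees quotes.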

3. Hubble constant H0
The Hubble constant is the present rate of expansion of the universe, which astronomers determine by measuring the distances and redshifts of galaxies. 

Gribbin: Cosmic Coincidences 1989 
So our existence tells us that the Universe must have expanded, and be expanding, neither too fast nor too slow, but at just the "right" rate to allow elements to be cooked in stars. This may not seem a particularly impressive insight. After all, perhaps there is a large range of expansion rates that qualify as "right" for stars like the Sun to exist. But when we convert the discussion into the proper description of the Universe, Einstein's mathematical description of space and time, and work backwards to see how critical the expansion rate must have been at the time of the Big Bang, we find that the Universe is balanced far more crucially than the metaphorical knife edge. If we push back to the earliest time at which our theories of physics can be thought to have any validity, the implication is that the relevant number, the so-called "density parameter," was set, in the beginning, with an accuracy of 1 part in 10^60 . Changing that parameter, either way, by a fraction given by a decimal point followed by 60 zeroes and a 1, would have made the Universe unsuitable for life as we know it. The implications of this finest of finely tuned cosmic coincidences form the heart of this book.

Hawking A Brief History of Time 1996
If the rate of expansion one second after the big bang had been smaller by even one part in a hundred thousand million million, the universe would have recollapsed before it ever reached its present size.

Ethan Siegel The Universe Really Is Fine-Tuned, And Our Existence Is The Proof  Dec 19, 2019
On the one hand, we have the expansion rate that the Universe had initially, close to the Big Bang. On the other hand, we have the sum total of all the forms of matter and energy that existed at that early time as well, including:
radiation, neutrinos, normal matter, dark matter, antimatter, and dark energy. Einstein's General Theory of Relativity gives us an intricate relationship between the expansion rate and the sum total of all the different forms of energy in it. If we know what your Universe is made of and how quickly it starts expanding initially, we can predict how it will evolve with time, including what its fate will be. A Universe with too much matter-and-energy for its expansion rate will recollapse in short order; a Universe with too little will expand into oblivion before it's possible to even form atoms. Yet not only has our Universe neither recollapsed nor failed to yield atoms, but even today, those two sides of the equation appear to be perfectly in balance. If we extrapolate this back to a very early time — say, one nanosecond after the hot Big Bang — we find that not only do these two sides have to balance, but they have to balance to an extraordinary precision. The Universe's initial expansion rate and the sum total of all the different forms of matter and energy in the Universe not only need to balance, but they need to balance to more than 20 significant digits. It's like guessing the same 1-to-1,000,000 number as me three times in a row, and then predicting the outcome of 16 consecutive coin-flips immediately afterwards.

[Figure] If the Universe had just a slightly higher matter density (red), it would be closed and have recollapsed already; if it had just a slightly lower density (and negative curvature), it would have expanded much faster and become much larger. The Big Bang, on its own, offers no explanation as to why the initial expansion rate at the moment of the Universe's birth balances the total energy density so perfectly, leaving no room for spatial curvature at all and yielding a perfectly flat Universe. Our Universe appears perfectly spatially flat, with the initial total energy density and the initial expansion rate balancing one another to at least some 20+ significant digits.

The odds of this occurring naturally, if we consider all the random possibilities we could have imagined, are astronomically small. It's possible, of course, that the Universe really was born this way: with a perfect balance between all the stuff in it and the initial expansion rate. It's possible that we see the Universe the way we see it today because this balance has always existed. But if that's the case, we'd hate to simply take that assumption at face value. In science, when faced with a coincidence that we cannot easily explain, the idea that we can blame it on the initial conditions of our physical system is akin to giving up on science. It's far better, from a scientific point of view, to attempt to come up with a reason for why this coincidence might occur. One option — the worst option, if you ask me — is to claim that there are a near-infinite number of possible outcomes, and a near-infinite number of possible Universes that contain those outcomes. Only in those Universes where our existence is possible can we exist, and therefore it's not surprising that we exist in a Universe that has the properties that we observe.

If you read that and your reaction was, "what kind of circular reasoning is that," congratulations. You're someone who won't be suckered in by arguments based on the anthropic principle. It might be true that the Universe could have been any way at all and that we live in one where things are the way they are (and not some other way), but that doesn't give us anything scientific to work with. Instead, it's arguable that resorting to anthropic reasoning means we've already given up on a scientific solution to the puzzle. The fact that our Universe has such a perfect balance between the expansion rate and the energy density — today, yesterday, and billions of years ago — is a clue that our Universe really is finely tuned. With robust predictions about the spectrum, entropy, temperature, and other properties concerning the density fluctuations that arise in inflationary scenarios, and the verification found in the Cosmic Microwave Background and the Universe's large-scale structure, we even have a viable solution. Further tests will determine whether our best conclusion at present truly provides the ultimate answer, but we cannot just wave the problem away. The Universe really is finely tuned, and our existence is all the proof we need.

4. Lambda Λ, the cosmological constant
The letter Λ (lambda) represents the cosmological constant, which is currently associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity.

Lambda, commonly known as the cosmological constant, describes the ratio of the density of dark energy to the critical energy density of the universe, given certain reasonable assumptions such as that dark energy density is a constant. In terms of Planck units, and as a natural dimensionless value, Λ is on the order of 10^-122. This is so small that it has no significant effect on cosmic structures smaller than a billion light-years across. A slightly larger value of the cosmological constant would have caused space to expand rapidly enough that stars and other astronomical structures could not have formed.

Lambda is the newest addition to the list, discovered in 1998. It describes the strength of a previously unsuspected force, a kind of cosmic antigravity, that controls the expansion of the universe. Fortunately, it is very small, with no discernible effect on cosmic structures smaller than a billion light-years across. If the force were stronger, it would have stopped stars and galaxies - and life - from forming.

Another mystery is the smallness of the cosmological constant. First of all, the Planck scale is the only natural energy scale of gravitation: mPl = (ħc/GN)^1/2 = 1.2 × 10^19 GeV/c². Compared to this natural scale, the cosmological constant or dark energy density Λ is tiny: Λ ~ (10 meV)^4 ~ (10^-30 mPl)^4 = 10^-120 mPl^4. The cosmological constant is also much smaller than expected from the vacuum expectation value of the Higgs field, which, like the inflaton field or dark energy, is an omnipresent scalar field: ⟨Φ^4⟩ ~ mH^4 ~ (100 GeV)^4 ~ 10^52 Λ. As observed by Weinberg in 1987, there is an “anthropic” upper bound on the cosmological constant Λ. He argued “that in universes that do not recollapse, the only such bound on Λ is that it should not be so large as to prevent the formation of gravitationally bound states.”
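The two numbers in the paragraph above - the Planck mass of 1.2 × 10^19 GeV/c² and the ratio Λ ~ 10^-120 mPl^4 - can be reproduced from the constants involved. A sketch, taking the dark energy scale to be 10 meV (= 10^-11 GeV) as in the text:

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34       # reduced Planck constant, J s
C = 2.99792458e8             # speed of light, m/s
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
J_PER_GEV = 1.602176634e-10  # joules per GeV

# Planck mass: m_Pl = sqrt(hbar * c / G), then convert kg -> GeV/c^2
m_pl_kg = math.sqrt(HBAR * C / G)
m_pl_gev = m_pl_kg * C**2 / J_PER_GEV      # ~1.22e19 GeV/c^2

# Dark energy scale ~10 meV = 1e-11 GeV, so Lambda ~ (1e-11 GeV)^4
# expressed in units of m_Pl^4:
ratio = (1e-11 / m_pl_gev) ** 4

print(f"m_Pl   ~ {m_pl_gev:.2e} GeV/c^2")
print(f"Lambda ~ 10^{math.log10(ratio):.0f} m_Pl^4")   # ~10^-120
```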

If the state of the hot dense matter immediately after the Big Bang had been ever so slightly different, then the Universe would either have rapidly recollapsed, or would have expanded far too quickly into a chilling, eternal void. Either way, there would have been no ‘structure’ in the Universe in the form of stars and galaxies.

Neil A. Manson GOD AND DESIGN The teleological argument and modern science , page 180
The smallness of the cosmological constant is widely regarded as the single greatest problem confronting current physics and cosmology. The cosmological constant is a term in Einstein’s equation that, when positive, acts as a repulsive force, causing space to expand and, when negative, acts as an attractive force, causing space to contract. Apart from some sort of extraordinarily precise fine-tuning or new physical principle, today’s theories of fundamental physics and cosmology lead one to expect that the vacuum (that is, the state of space-time free of ordinary matter fields) has an extraordinarily large energy density. This energy density, in turn, acts as an effective cosmological constant, thus leading one to expect an extraordinarily large effective cosmological constant, one so large that it would, if positive, cause space to expand at such an enormous rate that almost every object in the Universe would fly apart, and would, if negative, cause the Universe to collapse almost instantaneously back in on itself. This would clearly make the evolution of intelligent life impossible. What makes it so difficult to avoid postulating some sort of highly precise fine-tuning of the cosmological constant is that almost every type of field in current physics (the electromagnetic field, the Higgs fields associated with the weak force, the inflaton field hypothesized by inflationary cosmology, the dilaton field hypothesized by superstring theory, and the fields associated with elementary particles such as electrons) contributes to the vacuum energy. Although no one knows how to calculate the energy density of the vacuum, when physicists make estimates of the contribution to the vacuum energy from these fields, they get values of the energy density anywhere from 10^53 to 10^120 times higher than its maximum life-permitting value. (Here, the maximum is expressed in terms of the energy density of empty space.)

Steven Weinberg Department of Physics, University of Texas 
There are now two cosmological constant problems. The old cosmological constant problem is to understand in a natural way why the vacuum energy density ρV is not very much larger. We can reliably calculate some contributions to ρV , like the energy density in fluctuations in the gravitational field at graviton energies nearly up to the Planck scale, which is larger than is observationally allowed by some 120 orders of magnitude. Such terms in ρV can be cancelled by other contributions that we can’t calculate, but the cancellation then has to be accurate to 120 decimal places.

When one calculates, based on known principles of quantum mechanics, the "vacuum energy density" of the universe, focusing on the electromagnetic force, one obtains the incredible result that empty space "weighs" 10^93 grams per cubic centimetre (cc). The actual average mass density of the universe, 10^-28 g per cc, differs by about 120 orders of magnitude from theory. Physicists, who have fretted over the cosmological constant paradox for years, have noted that calculations such as the above involve only the electromagnetic force, and so perhaps when the contributions of the other known forces are included, all terms will cancel out to exactly zero, as a consequence of some unknown fundamental principle of physics. But these hopes were shattered with the 1998 discovery that the expansion of the universe is accelerating, which implied that the cosmological constant must be slightly positive. This meant that physicists were left to explain the startling fact that the positive and negative contributions to the cosmological constant cancel to 120-digit accuracy, yet fail to cancel beginning at the 121st digit.
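The roughly-120-orders-of-magnitude mismatch quoted above reduces to a one-line calculation: comparing the naive quantum-mechanical estimate of about 10^93 g/cc with the observed average of about 10^-28 g/cc. A sketch (published estimates of the discrepancy vary with the assumed energy cutoff, so the exact exponent is illustrative):

```python
import math

# Naive QED estimate of the vacuum energy density (grams per cubic cm)
theoretical = 1e93
# Observed average mass density of the universe (grams per cubic cm)
observed = 1e-28

# Discrepancy in orders of magnitude: log10 of the ratio
orders = math.log10(theoretical / observed)
print(f"discrepancy: ~10^{orders:.0f}")   # ~10^121
```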

Curiously, this observation is in accord with a prediction made by Nobel laureate and physicist Steven Weinberg in 1987, who argued from basic principles that the cosmological constant must be zero to within one part in roughly 10^123 (and yet be nonzero), or else the universe either would have dispersed too fast for stars and galaxies to have formed, or else would have recollapsed upon itself long ago. In short, numerous features of our universe seem fantastically fine-tuned for the existence of intelligent life. While some physicists still hold out for a "natural" explanation, many others are now coming to grips with the notion that our universe is profoundly unnatural, with no good explanation.

L. Susskind et al. Disturbing Implications of a Cosmological Constant 14 November 2002
“How far could you rotate the dark-energy knob before the “Oops!” moment? If rotating it by a full turn would vary the density across the full range, then the actual knob setting for our Universe is about 10^-123 of a turn away from the halfway point. That means that if you want to tune the knob to allow galaxies to form, you have to get the angle by which you rotate it right to 123 decimal places! That means that the probability that our universe contains galaxies is akin to 1 possibility in roughly 10^123. Unlikely doesn’t even begin to describe these odds. There are “only” 10^81 atoms in the observable universe, after all.
The low entropy starting point is the ultimate reason that the universe has an arrow of time, without which the second law would not make sense. However, there is no universally accepted explanation of how the universe got into such a special state. Far from providing a solution to the problem, we will be led to a disturbing crisis. Present cosmological evidence points to an inflationary beginning and an accelerated de Sitter end. Most cosmologists accept these assumptions, but there are still major unresolved debates concerning them. For example, there is no consensus about initial conditions. Neither string theory nor quantum gravity provides a consistent starting point for a discussion of the initial singularity or why the entropy of the initial state is so low. High-scale inflation postulates an initial de Sitter starting point with a Hubble constant roughly 10^-5 times the Planck mass. This implies an initial holographic entropy of about 10^10, which is extremely small by comparison with today’s visible entropy. Some unknown agent initially started the inflaton high up on its potential, and the rest is history. We are forced to conclude that in a recurrent world like de Sitter space our universe would be extraordinarily unlikely. A possibility is that an unknown agent intervened in the evolution and, for reasons of its own, restarted the universe in the state of low entropy characterizing inflation.

5. The Amplitude of Primordial Fluctuations Q
Q, the amplitude of primordial fluctuations, is one of Martin Rees’ Just Six Numbers. In our universe, its value is Q ≈ 2 × 10^-5, meaning that in the early universe the density at any point was typically within 1 part in 100,000 of the mean density. What if Q were different? “If Q were smaller than 10^-6, gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial ‘mix’ of atoms, dark energy and radiation were the same as our own. On the other hand, a universe where Q were substantially larger than 10^-5 - where the initial “ripples” were replaced by large-amplitude waves - would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn’t fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe . . . Stars would be packed too close together and buffeted too frequently to retain stable planetary systems.” (Rees, 1999, pg. 115)

Q represents the amplitude of the complex irregularities or ripples in the expanding universe that seed the growth of such structures as planets and galaxies. It is a ratio equal to 1/100,000. If the ratio were smaller, the universe would be a lifeless cloud of cold gas. If it were larger, "great gobs of matter would have condensed into huge black holes," says Rees. Such a universe would be so violent that no stars or solar systems could survive.

Martin Rees Just Six Numbers : The Deep Forces that Shape the Universe 1999 page 103
Why Q is about 10^-5 is still a mystery. But its value is crucial: were it much smaller, or much bigger, the 'texture' of the universe would be quite different, and less conducive to the emergence of life forms. If Q were smaller than 10^-5 but the other cosmic numbers were unchanged, aggregations in the dark matter would take longer to develop and would be smaller and looser. The resultant galaxies would be anaemic structures, in which star formation would be slow and inefficient, and 'processed' material would be blown out of the galaxy rather than being recycled into new stars that could form planetary systems. If Q were smaller than 10^-6, gas would never condense into gravitationally bound structures at all, and such a universe would remain forever dark and featureless, even if its initial 'mix' of atoms, dark matter and radiation were the same as in our own. On the other hand, a universe where Q were substantially larger than 10^-5 - where the initial 'ripples' were replaced by large-amplitude waves - would be a turbulent and violent place. Regions far bigger than galaxies would condense early in its history. They wouldn't fragment into stars but would instead collapse into vast black holes, each much heavier than an entire cluster of galaxies in our universe. Any surviving gas would get so hot that it would emit intense X-rays and gamma rays. Galaxies (even if they managed to form) would be much more tightly bound than the actual galaxies in our universe. Stars would be packed too close together and buffeted too frequently to retain stable planetary systems. (For similar reasons, solar systems are not able to exist very close to the centre of our own galaxy, where the stars are in a close-packed swarm compared with our less-central locality.) The fact that Q is 1/100,000 incidentally also makes our universe much easier for cosmologists to understand than would be the case if Q were larger.
A small Q guarantees that the structures are all small compared with the horizon, and so our field of view is large enough to encompass many independent patches each big enough to be a fair sample. If Q were much bigger, superclusters would themselves be clustered into structures that stretched up to the scale of the horizon (rather than, as in our universe, being restricted to about one per cent of that scale). It would then make no sense to talk about the average 'smoothed-out' properties of our observable universe, and we wouldn't even be able to define numbers such as Ω. The smallness of Q, without which cosmologists would have made no progress, seemed until recently a gratifying contingency. Only now are we coming to realize that this isn't just a convenience for cosmologists, but that life couldn't have evolved if our universe didn't have this simplifying feature.

6. Matter/Antimatter Asymmetry
Elisabeth Vangioni Cosmic origin of the chemical elements rarety in nuclear astrophysics 23 November 2017
Baryogenesis. Due to the matter/antimatter asymmetry (1 + 10^9 protons compared to 10^9 antiprotons), only one proton for every 10^9 photons remained after annihilation. The theoretical prediction of antimatter made by Paul Dirac in 1931 is one of the most impressive discoveries (Dirac 1934). Antimatter is made of antiparticles that have the same (e.g. mass) or opposite (e.g. electric charge) characteristics as particles but that annihilate with them, leaving at the end mostly photons. A symmetry between matter and antimatter led him to suggest that ‘maybe there exists a completely new Universe made of antimatter’. Now we know that antimatter exists but that there are very few antiparticles in the Universe. So, antiprotons (an antiproton is a proton but with a negative electric charge) are too rare to make any macroscopic objects. In this context, the challenge is to explain why antimatter is so rare (almost absent) in the observable Universe. Baryogenesis (i.e. the generation of protons and neutrons AND the elimination of their corresponding antiparticles), implying the emergence of the hydrogen nuclei, is central to cosmology. Unfortunately, the problem is essentially unsolved, and only general conditions of baryogenesis were well posed by A. Sakharov a long time ago (Sakharov 1979). Baryogenesis requires at least a departure from thermal equilibrium and the breaking of some fundamental symmetries, leading to the strong observed matter–antimatter asymmetry at the level of 1 proton per 1 billion photons. Mechanisms for the generation of the matter–antimatter asymmetry strongly depend on the reheating temperature at the end of inflation, the maximal temperature reached in the early Universe. Forthcoming results from the Large Hadron Collider (LHC) at CERN in Geneva, the BABAR collaboration, astrophysical observations and the Planck satellite mission will significantly constrain baryogenesis and thereby provide valuable information about the very early hot Universe.
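The bookkeeping behind "one proton per billion photons" can be sketched in a few lines. This is only an illustrative toy count (assuming, for simplicity, two photons per annihilation), not a cosmological calculation:

```python
# Toy bookkeeping for the baryon asymmetry described above:
# for every 10^9 antiprotons there were 10^9 + 1 protons.
antiprotons = 10**9
protons = antiprotons + 1

# Each proton-antiproton annihilation yields photons; we assume
# ~2 photons per annihilation for this rough count.
photons = 2 * antiprotons
survivors = protons - antiprotons

print(survivors)             # 1 proton left over
print(survivors / photons)   # ~5e-10: roughly one baryon per billion photons
```

The surviving one-in-a-billion excess is the ordinary matter that everything else in the document's argument is built from.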


7. The low-entropy state of the universe 
Roger Penrose The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics page 179 1994
This figure will give us an estimate of the total phase-space volume V available to the Creator, since this entropy should represent the logarithm of the volume of the (easily) largest compartment. Since 10^123 is the logarithm of the volume, the volume must be the exponential of 10^123, i.e. V = 10^10^123.

The British mathematician Roger Penrose conducted a study of the probability of a universe capable of sustaining life occurring by chance and found the odds to be 1 in 10^10^123 (10 to the power of 10 to the power of 123). That is a mind-boggling number. By a common rule of thumb in probability theory, odds of 1 in 10^50 already count as effectively zero probability. Dr. Penrose’s calculated odds fall incomprehensibly far below even that threshold.

The Second Law of thermodynamics is one of the most fundamental principles of physics. The term “entropy” refers to an appropriate measure of disorder or lack of “specialness” of the state of the universe.

Feynman: “[I]t is necessary to add to the physical laws the hypothesis that in the past the universe was more ordered” (1994)

Luke A. Barnes The Fine-Tuning of the Universe for Intelligent Life  June 11, 2012
The problem of the apparently low entropy of the universe is one of the oldest problems of cosmology. The fact that the entropy of the universe is not at its theoretical maximum, coupled with the fact that entropy cannot decrease, means that the universe must have started in a very special, low entropy state. The initial state of the universe must be the most special of all, so any proposal for the actual nature of this initial state must account for its extreme specialness.

The Entropy of the Early Universe
The low-entropy condition of the early universe is extreme in both respects: the universe is a very big system, and it was once in a very low entropy state. The odds of that happening by chance are staggeringly small. Roger Penrose, a mathematical physicist at Oxford University, estimates the probability to be roughly 1/10^10^123. That number is so small that if it were written out in ordinary decimal form, the decimal would be followed by more zeros than there are particles in the universe! It is even smaller than the ratio of the volume of a proton (a subatomic particle) to the entire volume of the visible universe. Imagine filling the whole universe with lottery tickets the size of protons, then choosing one ticket at random. Your chance of winning that lottery is much higher than the probability of the universe beginning in a state with such low entropy! Huw Price, a philosopher of science at Cambridge, has called the low-entropy condition of the early universe “the most underrated discovery in the history of physics.”

There are roughly 8.5×10^28 copper atoms in one cubic meter of solid copper.

If we take the distance from Earth to the edge of the observable universe to be about 14.26 gigaparsecs (46.5 billion light-years, or 4.40×10^26 m) in any direction, and assume that space is roughly flat (in the sense of being a Euclidean space), this size corresponds to a volume of about 3.566×10^80 m³.

That means there would be roughly 10^109 copper atoms if we filled the entire volume of the observable universe with atoms without leaving any space. Getting the right initial low-entropy state of our universe is incomparably harder still than picking one red atom at random from among all of them. By chance, or design?
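As a back-of-envelope check, the arithmetic can be reproduced in a few lines. The copper number density used here (~8.5×10^28 atoms per m³, the textbook figure) and the comoving radius are rough assumed values, not precise cosmology:

```python
import math

# Rough assumptions: comoving radius of the observable universe and the
# number density of atoms in solid copper.
radius_m = 4.40e26       # m, ~46.5 billion light-years
atoms_per_m3 = 8.5e28    # copper: ~8.5e28 atoms per cubic metre

volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
print(f"volume ~ {volume_m3:.3e} m^3")   # ~3.57e80 m^3, as quoted

total_atoms = atoms_per_m3 * volume_m3
print(f"atoms ~ 10^{math.log10(total_atoms):.0f}")   # ~10^109

# Penrose's odds are 1 in 10^(10^123); picking one marked atom out of
# all of these (1 in ~10^109) is incomparably better odds.
```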

Besides physical constants, there are initial or boundary conditions, which describe the conditions present at the beginning of the universe. Initial conditions are independent of the physical constants. One way of summarizing the initial conditions is to speak of the extremely low entropy (that is, highly ordered) initial state of the universe. This refers to the initial distribution of mass-energy. In The Road to Reality, physicist Roger Penrose estimates that the odds of the initial low-entropy state of our universe occurring by chance alone are on the order of 1 in 10^10^123. This ratio is vastly beyond our powers of comprehension. Since we know a life-bearing universe is intrinsically interesting, this ratio should be more than enough to raise the question: Why does such a universe exist? If someone is unmoved by this ratio, then they probably won’t be persuaded by additional examples of fine-tuning. In addition to initial conditions, there are a number of other, well-known features of the universe that are apparently just brute facts. And these too exhibit a high degree of fine-tuning. Among the fine-tuned (apparently) “brute facts” of nature are the following:

The ratio of masses for protons and electrons: if it were slightly different, building blocks for life such as DNA could not be formed.
The velocity of light: if it were larger, stars would be too luminous; if it were smaller, stars would not be luminous enough.
The mass excess of the neutron over the proton: if it were greater, there would be too few heavy elements for life; if it were smaller, stars would quickly collapse into neutron stars or black holes.

David H. Bailey: Is the universe fine-tuned for intelligent life? April 1st, 2017
The overall entropy (disorder) of the universe is, in the words of Lewis and Barnes, “freakishly lower than life requires.” After all, life requires, at most, a galaxy of highly ordered matter to create chemistry and life on a single planet. Physicist Roger Penrose has calculated (see The Emperor's New Mind, pg. 341-344) the odds that the entire universe is as orderly as our galactic neighborhood to be one in 10^10^123, a number whose decimal representation has vastly more zeroes than the number of fundamental particles in the observable universe. Extrapolating back to the big bang only deepens this puzzle.

J. Warner Wallace: Initial Conditions in a Very Low Entropy State JULY 21, 2014
Entropy represents the amount of disorder in a system. Thus, a high-entropy state is highly disordered – think of a messy teenager’s room. Our universe began in an incredibly low-entropy state. A more precise definition of entropy is that it represents the number of microscopic states that are macroscopically indistinguishable. An egg has higher entropy once broken because breaking it “opens up” many more ways to arrange the molecules. There are more ways of arranging molecules that would still be deemed an omelet than there are ways to arrange the particles of an unbroken egg, where certain molecules are confined to subsets of the space in the egg – such as a membrane or the yolk. Entropy is thus closely associated with probability. If one is randomly arranging molecules, it’s much more likely to pick a high-entropy state than a low-entropy state. Randomly arranged molecules in an egg would much more likely look like an omelet than an unbroken egg.
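The counting behind this analogy can be made concrete with a toy model: a hypothetical "gas" of 100 molecules, each free to sit in the left or right half of a box, where we count the microstates belonging to each macrostate:

```python
from math import comb, log

N = 100  # toy gas: each of 100 molecules sits in the left or right half of a box

# Number of equally likely microstates for each macrostate:
W_confined = comb(N, N)        # all 100 in the left half: exactly 1 arrangement
W_spread   = comb(N, N // 2)   # 50/50 split: the most disordered macrostate

# Boltzmann entropy S = ln W (in units of Boltzmann's constant)
print(log(W_confined))   # 0.0 -> the lowest possible entropy
print(log(W_spread))     # ~66.8 -> enormously many more arrangements

# Chance of all 100 molecules spontaneously crowding into one half:
print(2.0 ** -N)         # ~8e-31, already negligible for just 100 molecules
```

The same counting, scaled up from 100 molecules to the ~10^80 particles of the observable universe, is what drives Penrose's number to its extreme size.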

It turns out that nearly all arrangements of particles in the early universe would have resulted in a lifeless universe of black holes. Tiny inconsistencies in the particle arrangements would be acted on by gravity and grow in size. A positive feedback results, since the clumps of particles exert an even greater gravitational force on nearby particles. Penrose’s analysis shows that in the incredibly dense early universe, most arrangements of particles would have resulted in basically nothing but black holes. Life certainly can’t exist in such a universe because there would be no way to have self-replicating information systems. Possibly the brightest objects in the universe are quasars, which release radiation as bright as some galaxies due to matter falling into a supermassive black hole. The rotation rates near black holes and the extremely high-energy photons would be non-life-permitting.

Roger Penrose was the first scientist to quantify the fine-tuning necessary to have a low-entropy universe that avoids such catastrophes. “In order to produce a universe resembling the one in which we live, the Creator would have to aim for an absurdly tiny volume of the phase space of possible universes, about 1/10^10^123.” This number is incomprehensibly small – it represents 1 chance in 10 to the power of (10 to the power of 123). Writing this number out in ordinary notation would require more zeroes than the number of subatomic particles in the observable universe: 10^123 zeroes. Under the assumption of atheism, the particles in our universe would have been arranged randomly, or at least without respect to future implications for intelligent life. Nearly all such arrangements would not have been life-permitting, so this fine-tuning evidence favors theism over atheism. We have a large but finite number of possible original states and rely on well-established statistical mechanics to assess the relevant probability.

The incredibly low-entropy state of the initial conditions shows fine-tuning was required to avoid excessive black holes! This fact about the initial conditions also calls into question Smolin’s proposed scenario that universes with differing physical constants might be birthed out of black holes. Smolin suggests an almost Darwinian scheme in which universes that produce more black holes leave more baby universes than those which don’t. But if our universe requires statistically miraculous initial conditions to be life-permitting by avoiding excessive black holes, universes evolving to maximize black hole production would be unlikely to lead to life (even if the evolution of universes were possible)! Furthermore, the skeptic who thinks that black holes suggest a purposeless universe should consider that black holes can, in moderation and kept at a distance, be helpful for life. While a universe comprised mostly of black holes would be life-prohibiting, having a large black hole at the center of a galaxy is actually quite helpful for life. A Scientific American article documents the benefits of black holes for life – it summarizes: “the matter-eating beast at the center of the Milky Way may actually account for Earth’s existence and habitability.”

8. The universe would require 3 dimensions of space, and time, to be life-permitting.
Luke A. Barnes The Fine-Tuning of the Universe for Intelligent Life  June 11, 2012
If whatever exists were not such that it is accurately described on macroscopic scales by a model with three space dimensions, then life would not exist.  If “whatever works” was four dimensional, then life would not exist, whether the number of dimensions is simply a human invention or an objective fact about the universe.

Anthropic constraints on the dimensionality of spacetime (from Tegmark, 1997). 
UNPREDICTABLE: the behavior of your surroundings cannot be predicted using only local, finite-accuracy data, making storing and processing information impossible. 
UNSTABLE: no stable atoms or planetary orbits. 
TOO SIMPLE: no gravitational force in empty space and severe topological problems for life. 
TACHYONS ONLY: energy is a vector, and rest mass is no barrier to particle decay. For example, an electron could decay into a neutron, an antiproton, and a neutrino. Life is perhaps possible in very cold environments. 

Lee Smolin wrote in his 2006 book The Trouble with Physics:
We physicists need to confront the crisis facing us. A scientific theory [the multiverse/ Anthropic Principle/ string theory paradigm] that makes no predictions and therefore is not subject to experiment can never fail, but such a theory can never succeed either, as long as science stands for knowledge gained from rational argument borne out by evidence.

Fine-tuning of the Big Bang  Anthro11
Anthropic limits on some cosmological variables: the cosmological constant Λ (expressed as an energy density ρΛ in Planck units), the amplitude of primordial fluctuations Q, and the matter-to-photon ratio ξ. The white region shows where life can form. The coloured regions show where various life-permitting criteria are not fulfilled.


John M. Kinson The Big Bang: Does it point to God ? July 9, 2021

Ethan Siegel Ask Ethan: What Was The Entropy Of The Universe At The Big Bang? Apr 15, 2017



Three Cosmological Parameters



2. The Lambda-CDM model (Thu Jul 01, 2021 4:42 pm)



The Fine-tuning of the Big bang is best explained by design.
1. If the initial expansion rate of the universe had been different, the universe would have either quickly collapsed back on itself or expanded too rapidly for stars to form. In either case, life would be impossible. The universe is characterized by a delicate balance of its inventory, a balance between attraction and repulsion, between expansion and contraction.
2. Several parameters had to be just right, namely: the gravitational constant, the density of dark matter, the Hubble constant, the cosmological constant, the primordial fluctuations, the matter-antimatter asymmetry, and the low-entropy state of the universe. Leaving aside the low-entropy state (which alone had to be finely tuned to the unimaginable order of 1 in 10^10^123), joining all the other fine-tuned parameters together, the expansion rate had to be adjusted on the order of one part in 10^400. To illustrate: one part in 10^60 alone can be compared to firing a bullet at a one-inch target on the other side of the observable universe, twenty billion light-years away, and hitting the target.
3. Fine-tuning is well explained by intelligent design, whereas positing a randomly originated multiverse is an unwarranted ad hoc explanation. Therefore, the action of a powerful intelligent designer is the best explanation.

The Big Bang was the most precisely planned event in all of history
1. The odds of a low-entropy state at the beginning of our universe were 1 in 10^10^123. To put that in perspective: there are roughly 8.5×10^28 atoms in one cubic meter of solid matter. Given that the distance from Earth to the edge of the observable universe is about 46.5 billion light-years in any direction, this size corresponds to a volume of about 3.566×10^80 m³. That means there would be roughly 10^109 atoms if we filled the entire volume of the observable universe with atoms without leaving any space. An atom is itself about 99.9999999999% empty space; if we were to fill each atom's volume with protons, roughly 10^15 protons would fit, giving roughly 10^125 protons in the entire universe. Yet even the odds of finding one red proton among a hundred universes the size of ours, each packed solid with protons (about 1 in 10^127), are still incomparably better than the odds of getting a universe that begins in a low-entropy state like ours. If we had to find our universe among an ensemble of almost infinite parallel universes, the search would take unimaginably longer than the universe's entire history.
2. Furthermore, at least seven other parameters had to be finely adjusted, namely: the gravitational constant, the density of dark matter, the Hubble constant, the primordial fluctuations, the matter-antimatter asymmetry, and three dimensions of space, plus time.
3. These physical factors are independent of each other, and each must be fine-tuned to the extreme. Taken together, the odds of obtaining the right expansion rate are on the order of 1 in 10^400, an improbability far beyond any physical illustration: it dwarfs even the odds of picking one red atom from among all the atoms of entire universes the size of ours.
4. These odds are unimaginably improbable on naturalism and very likely on theism.

Problems with the cosmic inflation hypothesis at the beginning of the universe
1. The Big Bang was the first and most precisely fine-tuned event in all of the history of the universe. It had to be adjusted to permit the right expansion rate, a balance between attraction and repulsion, between contraction and expansion, or it would have expanded too fast, producing unlimited expansion and a void, lifeless universe, or it would have recollapsed back to a singularity, a Big Crunch. But many different parameters also had to be set just right in the first instants, right after the first nanosecond or two, in order to form stable atoms; otherwise the universe would likewise be void of stars, planets, chemicals, and life.
2.  The Lambda-CDM model, composed of six parameters, is a parameterization of the Big Bang. The standard model of particle physics contains 26 fundamental constants. A variety of physical phenomena, atomic, gravitational, and cosmological, must combine in the right way in order to produce a life-permitting universe.
3. Inflation is supposed to provide a dynamical explanation for the seemingly very fine-tuned initial conditions of the standard model of cosmology. It faces, however, its own problems. There would have to be an inflaton field with negative pressure, dominating the total energy density of the universe and dictating its dynamics, for inflation to start. It would have to last for the right period of time. And once inflation takes over, there must be some special reason for it to stop; otherwise, the universe would maintain its exponential expansion and no complex structure would form. It would also have to be ensured that the post-inflation field did not possess a large negative potential energy, which would cause the universe to recollapse altogether. Inflation would also have to guarantee a homogeneous, but not perfectly homogeneous, universe: inhomogeneities had to be there for gravitational instability to form cosmic structures like stars, galaxies, and planets. Inflation would require an astonishing sequence of correlations and coincidences to suddenly and coherently convert all its matter into a scalar field with just enough kinetic energy to roll to the top of its potential and remain perfectly balanced there for long enough to cause a substantial era of “deflation”. It would be far more likely for the inflaton field to dump its energy into radiation than for it to be converted into baryons and ordinary matter. The odds of having a successful, finely adjusted inflaton field are at most one in a thousand at their peak, and drop rapidly. There is no physical model of inflation, and the necessary coupling between the inflaton and ordinary matter/radiation is just an unsupported hypothesis.
4. Designed setup is the best explanation for the life-permitting conditions at the beginning of the universe.

Jason Waller Cosmological Fine-Tuning Arguments 2020, page 108
The number of spatial dimensions of our universe seems to be a fortuitous contingent fact. It is easy to construct geometries for spaces with more or less than three dimensions (or space-times with more or less than three spatial dimensions). It turns out that mathematicians have shown that spaces with more than three dimensions have some significant problems. For example, given our laws of physics there are no stable orbits in spaces with more than three dimensions. It is hard to imagine how solar systems stable enough for life to slowly evolve could form without stable orbits. Additionally, consider the effect of long-range forces (like gravity and electromagnetism). These forces work according to the inverse square law (i.e. the effect of the force decreases by the square of the distance). So move ten times farther away from a gravitational field or a light source and the effect of the gravity or light is 100 times less. To intuitively see why this is, imagine a light bulb as sending out millions of thin straight wires in all directions. The farther we get away from this light, the more spread out these wires are. The closer we are to the light, the closer together the wires are. The more concentrated the wires, the stronger the force. But what would happen if we added one more spatial dimension to our universe? In this case, long-range forces would work according to an inverse cubed law. This is, of course, because there would be one more spatial dimension for the lines of force to be spread out within. So forces would decrease rapidly as you moved away from the source and increase rapidly as you moved closer. This would cause significant problems both at the atomic and at the cosmological scales. Rees explains the problem this way: 

An orbiting planet that was slowed down—even slightly—would then plunge ever faster into the sun, rather than merely shift into a slightly smaller orbit, because an inverse-cubed force strengthens so steeply towards the center; conversely an orbiting planet that was slightly speeded up would quickly spiral outwards into darkness.
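Rees's orbital argument can be checked numerically. The sketch below is a toy model under assumed units (star mass and initial orbital radius set to 1, the force taken as a pure inverse power of distance); it slows a circularly orbiting planet by 1% and tracks how close it then falls to the star:

```python
import math

def simulate(power, v0=0.99, dt=1e-3, t_max=50.0):
    """Planet orbiting a unit-mass star with attractive force ~ 1/r**power.

    Starts at radius 1 with 99% of the circular-orbit speed (the small
    'slowing down' perturbation Rees describes) and returns the minimum
    radius reached during the run.
    """
    x, y = 1.0, 0.0        # start on the x-axis
    vx, vy = 0.0, v0       # tangential velocity, 1% below circular speed
    r_min = 1.0
    for _ in range(int(t_max / dt)):
        r = math.hypot(x, y)
        r_min = min(r_min, r)
        if r < 1e-3:       # the planet has plunged into the star
            break
        a = -1.0 / r**power            # inward acceleration magnitude
        vx += a * (x / r) * dt
        vy += a * (y / r) * dt
        x += vx * dt                   # semi-implicit (symplectic) Euler step
        y += vy * dt
    return r_min

# Inverse-square law (3 spatial dimensions): the nudged orbit becomes a
# nearby ellipse and never strays far from radius 1.
print(simulate(power=2))   # stays close to 1 (minimum radius ~0.96)

# Inverse-cube law (4 spatial dimensions): the same 1% nudge makes the
# planet spiral into the star.
print(simulate(power=3))   # plunges toward 0
```

Under an inverse-square force the nudged planet merely settles into a slightly smaller ellipse; under an inverse-cube force the same nudge sends it plunging inward, which is exactly the instability Rees describes.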

What powered the Big bang?
Today, as it has always been, the sum of all the energy in the universe is nearly zero, if gravitation is counted as negative energy. This has been true ever since the Big Bang. When matter was concentrated at a single point, there was an immense negative energy of gravity, which after the Big Bang was transformed into the positive energy of matter and radiation. And because gravity's range is infinite, that negative gravitational energy still exists today.
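The near-cancellation claimed above can be checked at the order-of-magnitude level. The mass and radius figures below are rough assumed values, and the Newtonian self-energy formula is only an estimate:

```python
# Order-of-magnitude check of the zero-energy-universe idea.
# M and R are rough assumed figures for the observable universe, and the
# Newtonian self-energy ~ -G*M^2/R is only an estimate.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 1.5e53       # kg: rough total mass (ordinary + dark matter)
R = 4.40e26      # m: comoving radius of the observable universe

E_mass = M * c**2           # positive rest-mass energy, ~1.3e70 J
E_grav = -G * M**2 / R      # negative gravitational self-energy

print(E_mass)
print(abs(E_grav) / E_mass)  # order unity: the two terms nearly cancel
```

The ratio coming out near 1 (rather than, say, 10^10 or 10^-10) is the rough sense in which the positive and negative energy inventories balance.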

CERN: The Big bang
The universe began, scientists believe, with every speck of its energy jammed into a very tiny point. This extremely dense point exploded with unimaginable force, creating matter and propelling it outward to make the billions of galaxies of our vast universe. Astrophysicists dubbed this titanic explosion the Big Bang.
The Big Bang was like no explosion you might witness on earth today. For instance, a hydrogen bomb explosion, whose center registers approximately 100 million degrees Celsius, moves through the air at about 300 meters per second. In contrast, cosmologists believe the Big Bang flung energy in all directions at the speed of light (300,000,000 meters per second, a million times faster than the H-bomb) and estimate that the temperature of the entire universe was 1000 trillion degrees Celsius at just a tiny fraction of a second after the explosion. Even the cores of the hottest stars in today's universe are much cooler than that.
There's another important quality of the Big Bang that makes it unique. While an explosion of a man-made bomb expands through air, the Big Bang did not expand through anything. That's because there was no space to expand through at the beginning of time. Rather, physicists believe the Big Bang created and stretched space itself, expanding the universe.

Geoff Brumfiel Outrageous fortune 04 January 2006
A growing number of cosmologists and string theorists suspect the form of our Universe is little more than a coincidence. If the number controlling the growth of the Universe since the Big Bang is just slightly too high, the Universe expands so rapidly that protons and neutrons never come close enough to bond into atoms. If it is just ever-so-slightly too small, it never expands enough, and everything remains too hot for even a single nucleus to form. Similar problems afflict the observed masses of elementary particles and the strengths of fundamental forces. In other words, if you believe the equations of the world's leading cosmologists, the probability that the Universe would turn out this way by chance is infinitesimal — one in a very large number. “It's like you're throwing darts, and the bullseye is just one part in 10^120 of the dartboard,” says Leonard Susskind, a string theorist based at Stanford University in California. “It's just stupid.”
In much the same way as Kepler worried about planetary orbits, cosmologists now puzzle over numbers such as the cosmological constant, which describes how quickly the Universe expands. The observed value is so much smaller than existing theories suggest, and yet so precisely constrained by observations, that theorists are left trying to figure out a deeper meaning for why the cosmological constant has the value it does. To explain the perfectly adjusted cosmological constant, one would need at least 10^60 universes.

Jason Waller Cosmological Fine-Tuning Arguments 2020, page 107
Fine-Tuning in Cosmology 
In the study of the most general features of our universe, there are a number of interesting “coincidences” that seem both arbitrary and would render our universe life-prohibiting if they were just slightly different. We will examine only four of these: the quantity of matter/energy in the universe, the density of the matter/energy in the early universe, the rate of expansion (cosmological constant), and the number of spatial dimensions. There are, of course, many other possible examples here, but four is sufficient for our purposes because they are all rather easy to understand. Let’s begin with the quantity of matter in the universe. The basic issue here is straightforward. If there were too much matter in the universe, then the universe would have collapsed very soon after the Big Bang. The more we increase the matter, the sooner the universe collapses into a big crunch. But if we decrease the quantity of matter by much, then the expansion would occur so quickly that no galaxies or stars could ever form. So how fine-tuned is the quantity of matter in our universe? Barnes explains that if we look at the matter/energy in our universe just

one nanosecond after the Big Bang, it was immense, around 10^24 kg per cubic meter. This is a big number, but if the density had been just a single kilogram per cubic meter higher, the universe would have collapsed by now; and with a single kilogram per cubic meter less, the universe would have expanded too rapidly to form stars and galaxies.
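The balance being described is the density tracking the critical density of the Friedmann equation, ρ_c = 3H²/(8πG). As a sketch, today's critical density can be computed from an assumed present-day Hubble constant of about 67.7 km/s/Mpc:

```python
import math

# Critical density today from the Friedmann equation: rho_c = 3 H0^2 / (8 pi G).
# The Hubble constant value is an assumption (~67.7 km/s/Mpc).
G = 6.674e-11       # m^3 kg^-1 s^-2
Mpc = 3.0857e22     # metres in one megaparsec
H0 = 67.7e3 / Mpc   # Hubble constant in s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(rho_c)        # ~8.6e-27 kg/m^3

# Expressed in proton masses: only about five protons per cubic metre.
m_p = 1.673e-27     # kg
print(rho_c / m_p)  # ~5
```

A departure of one kilogram per cubic metre from 10^24 kg/m³ at one nanosecond corresponds to a fractional tuning of one part in 10^24, which is the point of Barnes's illustration.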

In addition to the quantity of matter in the universe, the density of the matter is important too. The issue here is how “smooth” the distribution of matter is. If the universe is perfectly smooth, then there is a perfect distribution so that all points in space have exactly the same quantity of matter. But it turns out that a “life-permitting universe cannot be too smooth . . . there must be some lumpiness to the distribution of matter and radiation” . The lumps and ripples must be just right because these tiny lumps and bumps are the seeds of the structure of the Universe, growing into galaxies. . . . Without these seeds, the story doesn’t begin. If the universe began in perfect uniformity, then gravity would have nothing to grip. The universe would expand and cool, but the matter would remain forever smooth. . . . no galaxies, no stars, no planets, and hence no life.

Interestingly, the density of the early universe was nearly, but not perfectly, smooth. The density of the universe departed from the average density by at most one part in 100,000. This number (1 in 100,000) is known as Q. We have no idea why it has this value, but this near-smoothness is essential for life. Q is a very important number, because if it were much less or much greater, the universe would be entirely devoid of life. If Q were smaller than one hundred-thousandth—say, one millionth—this would severely inhibit the formation of galaxies and stars. Conversely, if Q were bigger—one part in ten thousand or more—galaxies would be denser, leading to lots of planet-disrupting stellar collisions. Make Q too big and you’d form giant black holes rather than clusters of stars. So there are firm upper and lower bounds on what Q can be in a universe that is life-friendly. Thus, both the quantity of matter in our universe and the density distribution of this matter had to lie within very narrow ranges for life to be possible.

Paul Davies, The Goldilocks Enigma: Why Is the Universe Just Right for Life?, page 68
A related question was why the big bang was just that big, rather than bigger or smaller: what, precisely, determined its oomph? Then there was the puzzle of why the large-scale geometry of the universe is flat and the related mystery of why the total mass-energy of the universe is indistinguishable from zero. But the biggest puzzle of all concerned the extraordinary uniformity of the universe on a grand scale, as manifested in the smoothness of the CMB radiation. As I have pointed out, on a scale of billions of light-years the universe looks pretty much the same everywhere. And similar remarks apply to the expansion: the rate is identical in all directions and, as best we can tell, in all cosmic regions. All these features were completely baffling in the 1970s, yet they are all crucial for creating a universe fit for life. For example, a bigger bang would have dispersed the cosmological gases too swiftly for them to accumulate into galaxies. Conversely, had the bang been not so big, then the universe would have collapsed back on itself before life could get going. Our universe has picked a happy compromise: it expands slowly enough to permit galaxies, stars, and planets to form, but not so slowly as to risk rapid collapse.


Dark energy
Dark energy is a repulsive force that is the dominant component (69.4 percent) of the universe. The remaining portion of the universe consists of ordinary matter and dark matter. Dark energy, in contrast to both forms of matter, is relatively uniform in time and space and is gravitationally repulsive, not attractive, within the volume it occupies. The nature of dark energy is still not well understood. A kind of cosmic repulsive force was first hypothesized by Albert Einstein in 1917 and was represented by a term, the “cosmological constant,” that Einstein reluctantly introduced into his theory of general relativity in order to counteract the attractive force of gravity and account for a universe that was assumed to be static (neither expanding nor contracting). After the discovery in the 1920s by American astronomer Edwin Hubble that the universe is not static but is in fact expanding, Einstein referred to the addition of this constant as his “greatest blunder.” In the decades that followed, however, the measured amount of matter in the mass-energy budget of the universe was improbably low, and thus some unknown “missing component,” much like the cosmological constant, was required to make up the deficit. Direct evidence for the existence of this component, which was dubbed dark energy, was first presented in 1998.

Dark energy is detected by its effect on the rate at which the universe expands and its effect on the rate at which large-scale structures such as galaxies and clusters of galaxies form through gravitational instabilities. The measurement of the expansion rate requires the use of telescopes to measure the distance (or light travel time) of objects seen at different size scales (or redshifts) in the history of the universe. These efforts are generally limited by the difficulty in accurately measuring astronomical distances. Since dark energy works against gravity, more dark energy accelerates the universe’s expansion and retards the formation of large-scale structure. One technique for measuring the expansion rate is to observe the apparent brightness of objects of known luminosity like Type Ia supernovas. Dark energy was discovered in 1998 with this method by two international teams that included American astronomers Adam Riess (the author of this article) and Saul Perlmutter and Australian astronomer Brian Schmidt. The two teams used eight telescopes including those of the Keck Observatory and the MMT Observatory. Type Ia supernovas that exploded when the universe was only two-thirds of its present size were fainter and thus farther away than they would be in a universe without dark energy. This implied the expansion rate of the universe is faster now than it was in the past, a result of the current dominance of dark energy. (Dark energy was negligible in the early universe.) Studying the effect of dark energy on large-scale structure involves measuring subtle distortions in the shapes of galaxies arising from the bending of space by intervening matter, a phenomenon known as “weak lensing.” At some point in the last few billion years, dark energy became dominant in the universe and thus prevented more galaxies and clusters of galaxies from forming. This change in the structure of the universe is revealed by weak lensing. 
Another measure comes from counting the number of clusters of galaxies in the universe to measure the volume of space and the rate at which that volume is increasing. The goals of most observational studies of dark energy are to measure its equation of state (the ratio of its pressure to its energy density), variations in its properties, and the degree to which dark energy provides a complete description of gravitational physics. In cosmological theory, dark energy is a general class of components in the stress-energy tensor of the field equations in Einstein’s theory of general relativity. In this theory, there is a direct correspondence between the matter-energy of the universe (expressed in the tensor) and the shape of space-time. Both the matter (or energy) density (a positive quantity) and the internal pressure contribute to a component’s gravitational field. While familiar components of the stress-energy tensor such as matter and radiation provide attractive gravity by bending space-time, dark energy causes repulsive gravity through negative internal pressure. If the ratio of the pressure to the energy density is less than −1/3, a possibility for a component with negative pressure, that component will be gravitationally self-repulsive. If such a component dominates the universe, it will accelerate the universe’s expansion.
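The criterion stated above (a component is gravitationally self-repulsive when its pressure-to-energy-density ratio is below −1/3) follows from the Friedmann acceleration equation, in which a component contributes to deceleration in proportion to ρ + 3p, i.e. to 1 + 3w with w = p/(ρc²). A minimal sketch of this rule; the function name is my own illustration, not from the article:

```python
# Sketch of the w < -1/3 acceleration criterion quoted in the text.
# A component with equation-of-state parameter w = p / (rho * c^2)
# contributes rho + 3p, proportional to (1 + 3w), to the deceleration;
# it is gravitationally repulsive when 1 + 3w < 0.
def is_repulsive(w: float) -> bool:
    """True if a component with equation-of-state parameter w accelerates expansion."""
    return 1 + 3 * w < 0

print(is_repulsive(0.0))      # ordinary matter (w = 0): attractive
print(is_repulsive(1 / 3))    # radiation (w = 1/3): attractive
print(is_repulsive(-1.0))     # cosmological constant (w = -1): repulsive
```

A cosmological constant has exactly w = −1, comfortably inside the repulsive regime; quintessence models allow w to vary with time.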

[Figure: matter-energy content of the universe]

The simplest and oldest explanation for dark energy is that it is an energy density inherent to empty space, or a “vacuum energy.” Mathematically, vacuum energy is equivalent to Einstein’s cosmological constant. Despite the rejection of the cosmological constant by Einstein and others, the modern understanding of the vacuum, based on quantum field theory, is that vacuum energy arises naturally from the totality of quantum fluctuations (i.e., virtual particle-antiparticle pairs that come into existence and then annihilate each other shortly thereafter) in empty space. However, the observed cosmological vacuum energy density is ~10^−10 ergs per cubic centimetre, while the value predicted from quantum field theory is ~10^110 ergs per cubic centimetre. This discrepancy of 10^120 was known even before the discovery of the far weaker dark energy. While a fundamental solution to this problem has not yet been found, probabilistic solutions have been posited, motivated by string theory and the possible existence of a large number of disconnected universes. In this paradigm the unexpectedly low value of the constant is understood as a result of an even greater number of opportunities (i.e., universes) for the occurrence of different values of the constant and the random selection of a value small enough to allow for the formation of galaxies (and thus stars and life).
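The 120-order-of-magnitude mismatch quoted above is just exponent arithmetic on the two densities given in the text; a minimal sketch:

```python
# Sketch of the cosmological-constant discrepancy, using only the two
# densities quoted in the text: observed ~1e-10 erg/cm^3 versus the naive
# quantum-field-theory estimate ~1e110 erg/cm^3.
observed_log10 = -10    # log10 of observed vacuum energy density (erg/cm^3)
predicted_log10 = 110   # log10 of the QFT estimate (erg/cm^3)

discrepancy_orders = predicted_log10 - observed_log10
print(f"Prediction exceeds observation by a factor of about 10^{discrepancy_orders}")
```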

Another popular theory for dark energy is that it is a transient vacuum energy resulting from the potential energy of a dynamical field. Known as “quintessence,” this form of dark energy would vary in space and time, thus providing a possible way to distinguish it from a cosmological constant. It is also similar in mechanism (though vastly different in scale) to the scalar field energy invoked in the inflationary theory of the big bang.

Another possible explanation for dark energy is topological defects in the fabric of the universe. In the case of intrinsic defects in space-time (e.g., cosmic strings or walls), the production of new defects as the universe expands is mathematically similar to a cosmological constant, although the value of the equation of state for the defects depends on whether the defects are strings (one-dimensional) or walls (two-dimensional).

There have also been attempts to modify gravity to explain both cosmological and local observations without the need for dark energy. These attempts invoke departures from general relativity on scales of the entire observable universe.

A major challenge to understanding accelerated expansion with or without dark energy is to explain the relatively recent occurrence (in the past few billion years) of near-equality between the density of dark energy and dark matter even though they must have evolved differently. (For cosmic structures to have formed in the early universe, dark energy must have been an insignificant component.) This problem is known as the “coincidence problem” or the “fine-tuning problem.” Understanding the nature of dark energy and its many related problems is one of the most formidable challenges in modern physics.

Joseph M. Fedrow, Anti-Anthropic Solutions to the Cosmic Coincidence Problem, July 30, 2018
The first fine-tuning problem above is called the cosmological constant problem and the second fine-tuning problem is called the cosmic coincidence problem. The essence of the cosmic coincidence problem is that while radiation and matter densities drop very rapidly and at different rates as the Universe expands, a dark energy density described by a cosmological constant stays constant throughout the entire history of the Universe. Thus there is only one unique time in the long history of the Universe where the DE density and matter density are roughly equal. The cosmic coincidence is that this occurred very recently at around a redshift of z ≈ 0.39. If this current epoch of cosmic acceleration had started even slightly earlier, the DE dominance would have stopped structure formation, and galaxies, stars, and life on this planet would not exist. If this epoch had been even slightly later, we would not have discovered the current accelerated expansion.
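The coincidence arises because matter density dilutes as (1+z)^3 while a cosmological constant stays fixed, so there is a single redshift of matter/dark-energy equality, given by (1+z_eq)^3 = Ω_Λ/Ω_m. A sketch using assumed Planck-like present-day values (illustrative round numbers, not Fedrow's; his z ≈ 0.39 reflects his own parameter and definition choices):

```python
# Sketch: redshift at which matter and dark-energy densities were equal,
# assuming rho_m scales as (1+z)^3 while rho_Lambda is constant.
# The density parameters below are illustrative Planck-like values,
# not taken from the paper.
omega_m = 0.31       # present-day matter density parameter (assumed)
omega_lambda = 0.69  # present-day dark-energy density parameter (assumed)

z_eq = (omega_lambda / omega_m) ** (1 / 3) - 1
print(f"matter / dark-energy equality at z ≈ {z_eq:.2f}")
```

With these inputs the equality redshift comes out near z ≈ 0.3; the onset of accelerated expansion (where ρ_m = 2ρ_Λ in the acceleration equation) occurs somewhat earlier, which is one reason quoted transition redshifts differ between sources.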

Graphical Parameter Comparisons
Standard ΛCDM requires only six independent parameters to completely specify the cosmological model. The specific set of six parameters used to define the cosmological model is somewhat open to choice. Within the context of fitting a ΛCDM model to a CMB power spectrum, the six selected key parameters are primarily chosen to avoid degeneracies and thus speed convergence of the model fit to the data. Other interesting parameters providing additional physical insight may be derived from the model once the defining six parameters have been set.

1. Scalar power law index ns
Primordial fluctuations are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales and, upon leaving the horizon, to "freeze in". At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation.

2. Age of the Universe t0
3. Optical depth to reionization τ
4. Hubble constant H0
5. Physical baryon density Ωbh2
6. Physical CDM density Ωch2
7. Matter density Ωm
8. Fluctuation amplitude σ8
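As noted above, some of the listed quantities (e.g., t0, Ωm, σ8 in the Planck analysis) are derived rather than fitted. A sketch of one such derivation, Ωm from the physical densities and H0, using assumed Planck-2018-like values (my illustrative numbers, not from this document):

```python
# Sketch: deriving the total matter density parameter Omega_m from the
# physical baryon and cold-dark-matter densities and the Hubble constant,
# via Omega_m = (Omega_b h^2 + Omega_c h^2) / h^2 with h = H0 / 100.
# Values below are illustrative Planck-2018-like numbers (assumed).
H0 = 67.4               # Hubble constant, km/s/Mpc (assumed)
omega_b_h2 = 0.0224     # physical baryon density (assumed)
omega_c_h2 = 0.1200     # physical cold-dark-matter density (assumed)

h = H0 / 100.0
omega_m = (omega_b_h2 + omega_c_h2) / h**2
print(f"Omega_m ≈ {omega_m:.3f}")
```

With these inputs, Ωm comes out close to 0.31, matching the commonly quoted matter fraction of the universe.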

Problems with the cosmic inflation hypothesis at the beginning of the universe
1. The Big Bang was the first and most precisely fine-tuned event in all the history of the universe. It had to be adjusted to permit the right expansion rate, a balance between attraction and repulsion, between contraction and expansion; otherwise the universe would either have expanded too fast, producing unlimited expansion and a void, lifeless universe, or it would have recollapsed back to a singularity in a Big Crunch. Many different parameters also had to be set just right in the first instants, right after the first nanosecond or two, in order to form stable atoms; otherwise the universe would likewise be void of stars, planets, chemistry, and life.
2. The Lambda-CDM model, with its six parameters, is a parameterization of Big Bang cosmology. The standard model of particle physics contains 26 fundamental constants. A variety of physical phenomena (atomic, gravitational, and cosmological) must combine in the right way in order to produce a life-permitting universe.
3. Inflation is supposed to provide a dynamical explanation for the seemingly very fine-tuned initial conditions of the standard model of cosmology. It faces, however, its own problems. There would have to be an inflaton field with negative pressure, dominating the total energy density of the universe and dictating its dynamics, in order to start inflation. It would have to last for the right period of time. And once inflation takes over, there must be some special reason for it to stop; otherwise, the universe would maintain its exponential expansion and no complex structure would form. It would also have to be ensured that the post-inflation field did not possess a large, negative potential energy, which would cause the universe to recollapse altogether. Inflation would also have to guarantee a homogeneous, but not perfectly homogeneous, universe: inhomogeneities had to be there for gravitational instability to form cosmic structures like stars, galaxies, and planets. Run in reverse, inflation would require an astonishing sequence of correlations and coincidences: the universe would have to suddenly and coherently convert all its matter into a scalar field with just enough kinetic energy to roll to the top of its potential and remain perfectly balanced there for long enough to cause a substantial era of “deflation”. It would be far more likely for the inflaton field to drop its energy and dump it into radiation than for that energy to be converted into baryons and ordinary matter. The odds of having a successful, finely adjusted inflaton potential are at most one in a thousand at their peak and drop rapidly. There is no confirmed physical model of inflation, and the necessary coupling between the inflaton and ordinary matter/radiation is just an unsupported hypothesis.
4. A designed setup is the best explanation for the life-permitting conditions at the beginning of the universe.

Claim: We don't know what a non-created universe would look like since we have nothing to compare it to.
Reply: We can say with high confidence that, on the basis of mathematical calculation, without fine-tuning there would be no life-permitting universe. The likelihood of having the right expansion rate at the Big Bang is on the order of 1 in 10^123 (cosmological constant). This probability is hard to imagine, but an illustration may help. Imagine covering the whole of the USA with small coins, edge to edge. Now imagine piling other coins on each of these millions of coins. Now imagine continuing to pile coins on each stack until reaching the moon, about 400,000 km away. Suppose you were told that within this vast mountain of coins a single coin was different from all the others, and you had to pick it out blindfolded on the first attempt. The statistical chance of finding that one coin is about 1 in 10^55, and even that illustration falls far short of 1 in 10^123. In other words, the evidence that our universe is designed is overwhelming!

Question: Why was the universe in an extraordinarily low-entropy state right after the big bang?
Answer: Your specific question is about why a uniform gas is a low-entropy state for the universe. The reason is that entropy is generated by allowing the gas to self-gravitate and compress, releasing heat to the environment in the process. The end result is a black hole, in which the gas is compressed maximally; black holes are the maximum-entropy gravitational states. But the uniform gas comes from a nearly uniform inflaton field oscillating over all space after inflation. This inflaton produces a uniform density of matter, which then becomes uniform baryons and hydrogen. Ultimately, it is the uniformity of the energy density in the inflaton field which is responsible for the low entropy of the initial conditions, and this is linked to the dynamics of inflation.


Brian Greene, The Fabric of the Cosmos: Space, Time, and the Texture of Reality, 2004
Entropy and Gravity
Because theory and observation show that within a few minutes after the big bang, primordial gas was uniformly spread throughout the young universe, you might think, given our earlier discussion of the Coke and its carbon dioxide molecules, that the primordial gas was in a high-entropy, disordered state. But this turns out not to be true. Our earlier discussion of entropy completely ignored gravity, a sensible thing to do because gravity hardly plays a role in the behavior of the minimal amount of gas emerging from a bottle of Coke. And with that assumption, we found that uniformly dispersed gas has high entropy. But when gravity matters, the story is very different. Gravity is a universally attractive force; hence, if you have a large enough mass of gas, every region of gas will pull on every other, and this will cause the gas to fragment into clumps, somewhat as surface tension causes water on a sheet of wax paper to fragment into droplets. When gravity matters, as it did in the high-density early universe, clumpiness—not uniformity—is the norm; it is the state toward which a gas tends to evolve, as illustrated in the figure below.

[Figure: For huge volumes of gas, when gravity matters, atoms and molecules evolve from a smooth, evenly spread configuration into one involving larger and denser clumps.]

Even though the clumps appear to be more ordered than the initially  diffuse gas —much as a playroom with toys that are neatly grouped in trunks and bins is more ordered than one in which the toys are uniformly  strewn around the floor—in calculating entropy you need to tally up the  contributions from all sources. For the playroom, the entropy decrease in  going from wildly strewn toys to their all being "clumped" in trunks and  bins is more than compensated for by the entropy increase from the fat burned and heat generated by the parents who spent hours cleaning and  arranging everything. Similarly, for the initially diffuse gas cloud, you find  that the entropy decrease through the formation of orderly clumps is  more than compensated by the heat generated as the gas compresses, and, ultimately, by the enormous amount of heat and light released when nuclear processes begin to take place. This is an important point that is sometimes overlooked. The overwhelming drive toward disorder does not mean that orderly structures like stars and planets, or orderly life forms like plants and animals, can't form. They can. And they obviously do. What the second law of thermodynamics entails is that in the formation of order there is generally a more-than compensating generation of disorder. The entropy balance sheet is still in the black even though certain constituents have become more ordered. And of the fundamental forces of nature, gravity is the one that exploits this feature of the entropy tally to the hilt. Because gravity operates across vast distances and is universally attractive, it instigates the formation of the ordered clumps —stars—that give off the light we see in a clear night sky, all in keeping with the net balance of entropy increase. The more squeezed, dense, and massive the clumps of gas are, the larger the overall entropy. Black holes, the most extreme form of gravitational clumping and squeezing in the universe, take this to the limit. 
The gravitational pull of a black hole is so strong that nothing, not even light, is able to escape, which explains why black holes are black. Thus, unlike ordinary stars, black holes stubbornly hold on to all the entropy they produce: none of it can escape the black hole's powerful gravitational grip. In fact, nothing in the universe contains more disorder—more entropy—than a black hole. This makes good intuitive sense: high entropy means that many rearrangements of the constituents of an object go unnoticed. Since we can't see inside a black hole, it is impossible for us to detect any rearrangement of its constituents—whatever those constituents may be—and hence black holes have maximum entropy. When gravity flexes its muscles to the limit, it becomes the most efficient generator of entropy in the known universe. We have now come to the place where the buck finally stops. The ultimate source of order, of low entropy, must be the big bang itself. In its earliest moments, rather than being filled with gargantuan containers of entropy such as black holes, as we would expect from probabilistic considerations, for some reason the nascent universe was filled with a hot, uniform, gaseous mixture of hydrogen and helium. Although this configuration has high entropy when densities are so low that we can ignore gravity, the situation is otherwise when gravity can't be ignored; then, such a uniform gas has extremely low entropy. In comparison with black holes, the diffuse, nearly uniform gas was in an extraordinarily low-entropy state. Ever since, in accordance with the second law of thermodynamics, the overall entropy of the universe has been gradually getting higher and higher; the overall, net amount of disorder has been gradually increasing.
At least one such planet had a nearby star that provided a relatively low-entropy source of energy that allowed low entropy life forms to evolve, and among such life forms there eventually was a chicken that laid an egg that found its way to your kitchen counter, and much to your chagrin that egg continued on the relentless trajectory to a higher entropic state by rolling off the counter and splattering on the floor. The egg splatters rather than unsplatters because it is carrying forward the drive toward higher entropy that was initiated by the extraordinarily low entropy state with which the universe began. Incredible order at the beginning is what started it all off, and we have been living through the gradual unfolding toward higher disorder ever since. This is the stunning connection we've been leading up to: the big bang gave rise to an extraordinarily ordered nascent cosmos. The same idea applies to all other examples. The reason why tossing the newly unbound pages of War and Peace into the air results in a state of higher entropy is that they began in such a highly ordered, low entropy form. Their initial ordered form made them ripe for entropy increase. By contrast, if the pages initially were totally out of numerical order, tossing them in the air would hardly make a difference, as far as entropy goes. So the question, once again, is: how did they become so ordered? Well, Tolstoy wrote them to be presented in that order and the printer and binder followed his instructions. And the highly ordered bodies and minds of Tolstoy and the book producers, which allowed them, in turn, to create a volume of such high order, can be explained by following the same chain of reasoning we just followed for an egg, once again leading us back to the big bang. How about the partially melted ice cubes you saw at 10:30 p.m.? Now that we are trusting memories and records, you remember that just before 10 p.m. the bartender put fully formed ice cubes in your glass.
He got the ice cubes from a freezer, which was designed by a clever engineer and fabricated by talented machinists, all of whom are capable of creating something of such high order because they themselves are highly ordered life forms. And again, we can sequentially trace their order back to the highly ordered origin of the universe. The revelation we've come to is that we can trust our memories of a past with lower, not higher, entropy only if the big bang —the process, event, or happening that brought the universe into existence —started off the universe in an extraordinarily special, highly ordered state of low entropy. Without that critical input, our earlier realization that entropy should increase toward both the future and the past from any given moment would lead us to conclude that all the order we see arose from a chance fluctuation from an ordinary disordered state of high entropy, a conclusion, as we've seen, that undermines the very reasoning on which it's based. But by including the unlikely, low-entropy starting point of the universe in our analysis, we now see that the correct conclusion is that entropy increases toward the future, since probabilistic reasoning operates fully and without constraint in that direction; but entropy does not increase toward the past, since that use of probability would run afoul of our new proviso that the universe began in a state of low, not high, entropy. Thus, conditions at the birth of the universe are critical to directing time's arrow. The future is indeed the direction of increasing entropy. The arrow of time —the fact that things start like this and end like that but never start like that and end like this — began its flight in the highly ordered, low-entropy state of the universe at its inception.
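Greene's claim above that black holes are the universe's most efficient entropy reservoirs can be quantified with the Bekenstein-Hawking formula, S/k_B = 4πG M²/(ħc). A sketch for a solar-mass black hole, using standard SI constants (my illustration, not from the book):

```python
import math

# Sketch: Bekenstein-Hawking entropy of a solar-mass black hole,
# S / k_B = 4 * pi * G * M^2 / (hbar * c). Constants are standard SI values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

entropy_over_kB = 4 * math.pi * G * M_sun**2 / (hbar * c)
print(f"S / k_B ≈ 10^{math.log10(entropy_over_kB):.0f}")  # about 10^77
```

For comparison, the thermal entropy of the Sun itself is of order 10^58 k_B, so collapsing a star into a black hole raises its entropy by roughly twenty orders of magnitude, which is the point Greene is making.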

The Remaining Puzzle
That the early universe set the direction of time's arrow is a wonderful and satisfying conclusion, but we are not done. A huge puzzle remains. How is it that the universe began in such a highly ordered configuration, setting things up so that for billions of years to follow everything could slowly evolve through steadily less ordered configurations toward higher and higher entropy? Don't lose sight of how remarkable this is. As we emphasized, from the standpoint of probability it is much more likely that the partially melted ice cubes you saw at 10:30 p.m. got there because a statistical fluke acted itself out in a glass of liquid water, than that they originated in the even less likely state of fully formed ice cubes. And what's true for ice cubes is true a gazillion times over for the whole universe. Probabilistically speaking, it is mind-bogglingly more likely that everything we now see in the universe arose from a rare but every-so-often expectable statistical aberration away from total disorder, rather than having slowly evolved from the even more unlikely, the incredibly more ordered, the astoundingly low-entropy starting point required by the big bang. Yet, when we went with the odds and imagined that everything popped into existence by a statistical fluke, we found ourselves in a quagmire: that route called into question the laws of physics themselves. And so we are inclined to buck the bookies and go with a low-entropy big bang as the explanation for the arrow of time. The puzzle then is to explain how the universe began in such an unlikely, highly ordered configuration. That is the question to which the arrow of time points. It all comes down to cosmology.

Stephen C. Meyer, The Return of the God Hypothesis, page 159
Initial-Entropy Fine-Tuning Physicists refer to the initial distribution of mass-energy as “entropy” (or “initial entropy”) fine-tuning. Entropy measures the amount of disorder in a material system (of, e.g., molecules, atoms, or subatomic particles). Decreases in entropy correspond to increases in order. Increases in entropy correspond to increased disorder.  A universe in which ordered structures such as galaxies and solar systems can arise would require a low-entropy (highly specific) configuration of mass and energy at the beginning. In a universe with larger initial entropy, black holes would come to dominate. Making an assessment of entropy requires determining the number of configurations of matter and energy that generate, or are consistent with, a particular state of affairs. If there are many configurations consistent with a given state of affairs, then physicists say that state has high entropy and is highly disordered. If there are only a few configurations consistent with a given state of affairs, then physicists say that state has low entropy and is highly ordered. For example, there are far fewer ways to arrange the books, papers, pencils, clothes, and furniture in a room that will result in it looking neat than there are ways of arranging those items that will result in it looking messy. We could say then that a tidy room represents a low-entropy, highly ordered state, whereas a messy room obviously represents a disordered, high-entropy state. Or consider another illustration of these concepts. Liquid water exemplifies a high-entropy state. That’s because, at temperatures between 32 and 212 degrees Fahrenheit, many different arrangements of water molecules are possible, all consistent with H2O in a liquid state.  In other words, there are lots of different ways (i.e., configurations of molecules) to have water as a liquid. 
Conversely, water in a solid-state—namely, ice—exemplifies a low-entropy state, because ice has a rigidly ordered lattice structure. That structure restricts the number of ways of arranging water molecules. Consequently, there are relatively few ways (i.e., configurations) to have water in a solid-state. In the universe, a black hole represents a highly disordered (high-entropy) state, like an extremely messy room. That’s because the intense gravitational forces at work in a black hole ensure that matter and energy may adopt many different chaotic configurations.  Yet regardless of which of those configurations result from the intense gravitational forces, the large-scale structure of the black hole will remain roughly the same. Conversely, a galaxy represents a low entropy state, like a tidy room, because there are relatively few ways to configure the elements out of which galaxies are made that will result in the orderly structures they exhibit. The universe as a whole also represents a lower entropy system because the galaxies are uniformly distributed throughout space. On the other hand, if the universe were characterized by large irregularly distributed clumps of matter (e.g., in the form of lots of black holes), it would exhibit high entropy. So how unlikely is it that our universe would have the low-entropy, highly ordered arrangement of matter that it has today? Stephen Hawking’s colleague Roger Penrose knew that if he could answer that question, he would have a measure of the fine-tuning of the initial arrangement of matter and energy at the beginning of the universe. Penrose determined that getting a universe such as ours with highly ordered configurations of matter required an exquisite degree of initial fine-tuning—an incredibly improbable low-entropy set of initial conditions.  
His analysis began by assuming that neither our universe nor any other would likely exhibit more disorder (or entropy) than a black hole, the structure with the highest known entropy. He then calculated the entropy of a black hole using an equation based upon general relativity and quantum mechanics. The entropy value he calculated established a reasonable upper bound, or maximum possible entropy value, for the distribution of the mass-energy in our visible universe. Penrose then asked: Given the wide range of possible values for the entropy of the early universe, how likely is it that the universe would have the precise entropy that it does today? To answer that question, he needed to know the entropy of the present universe. Penrose made a quantitative estimate of that value. He then assumed that the early universe would have had an entropy value no larger than the value of the present universe, since entropy (disorder) typically increases as energy moves through a system, which would have occurred as the universe expanded. (Think of a tornado moving through a junkyard or a toddler through a room.) Then he compared the number of configurations of mass-energy consistent with an early black-hole universe to the number consistent with more orderly universes like ours. Mathematically, he was comparing the number of configurations associated with the maximum possible entropy state (a black hole) with the number associated with a low-entropy state (our observable universe). By comparing that maximum expected value of the entropy of the universe with the observed entropy, Penrose determined that the observed entropy was extremely improbable in relation to all the possible entropy values it could have had. In particular, he showed that there were 10 to the 10^101 configurations of mass-energy—a vast number—that correspond to a highly ordered universe like ours.
But he had also shown that there were vastly more configurations—10^10^123—that would generate black-hole-dominated universes. And since 10^10^101 is a minuscule fraction of 10^10^123, he concluded that the conditions that could generate a life-friendly universe are extremely rare in comparison to the total number of possible configurations that could have existed at the beginning of the universe. Indeed, dividing 10^10^123 by 10^10^101 just yields 10^10^123 all over again. Since the smaller exponential number represents such an incredibly small percentage of the larger exponential number, the smaller number can be ignored: the massively larger exponential number effectively swallows up the smaller one. In any case, the number that Penrose calculated—1 in 10^10^123—provides a quantitative measure of the unimaginably precise fine-tuning of the initial conditions of the universe. In other words, his calculated entropy implied that out of the many possible ways the available mass and energy of the universe could have been configured at the beginning, only a few configurations would result in a universe like ours. Thus, as Paul Davies observes, “The present arrangement of matter indicates a very special choice of initial conditions.” That’s putting it mildly. The expression 10^10^123 represents what mathematicians call a hyper-exponential number: 10 raised to the power 10^123, where the exponent 10^123 is itself a 1 followed by 123 zeros. To put that number in perspective, it might help to note that physicists have estimated that the whole universe contains “only” 10^80 elementary particles (a huge number—1 followed by 80 zeroes). But that number nevertheless represents a minuscule fraction of 10^10^123.
In fact, if we tried to write out this number with a 1 followed by all the zeros that would be needed to represent it accurately without the use of exponents, there would be more zeros in the resulting number than there are elementary particles in the entire universe. Penrose’s calculation thus suggests an incredibly improbable arrangement of mass-energy—a degree of initial fine-tuning that really is not adequately reflected by the word “exquisite.” I’m not aware of a word in English that does justice to the kind of precision we are discussing. 
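Penrose's ratio can be made concrete with a few lines of arithmetic on the exponents themselves. A minimal sketch (Python's Decimal is used only to hold the huge exponents exactly; the two configuration counts are the ones quoted above):

```python
from decimal import Decimal

# Work in log10 space: a count of 10**(10**123) is represented by its
# exponent 10**123, which Decimal can hold exactly.
log_total   = Decimal(10) ** 123   # log10 of the black-hole-dominated configuration count
log_ordered = Decimal(10) ** 101   # log10 of the ordered, life-friendly configuration count

log_ratio = log_total - log_ordered   # log10 of (total / ordered)

# How much did subtracting the smaller exponent change the huge exponent?
# Only one part in 10^22 -- which is why the division "returns" 10^10^123.
relative_change = (log_total - log_ratio) / log_total
print(relative_change)  # 1E-22
```

The point of the sketch: subtracting the exponent of 10^10^101 from the exponent of 10^10^123 changes the latter by only one part in 10^22, which is why dividing the two hyper-exponential numbers effectively returns the larger one unchanged.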

Question: What is the evidence that the universe is expanding? 
Answer: (1) Einstein’s description of gravitation as a changeable spacetime fabric whose curvature is determined by the matter and energy in it, and 
(2) the observed relationship between the measured distance of galaxies beyond our own and their redshift. When taken together, these two pieces of evidence — both verified to incredible degrees of accuracy through multiple lines of observation — lead us to conclude that we live in a Universe whose spacetime fabric is expanding over time.
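The redshift-distance relationship in point (2) is summarized at low redshift by Hubble's law, v = H0 × d. A minimal sketch, assuming an illustrative H0 of 70 km/s/Mpc (measured values cluster around 67-74):

```python
# Hubble's law at low redshift: recession velocity v = H0 * d.
H0 = 70.0  # km/s per Mpc -- illustrative value, not a fitted result

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

print(recession_velocity(100))  # 7000.0 km/s for a galaxy 100 Mpc away
```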

Graphical Parameter Comparisons

Planck 2015 results. XIII. Cosmological parameters

Last edited by Otangelo on Fri Nov 26, 2021 5:34 pm; edited 11 times in total


Re: Fine-tuning of the Big Bang - Fri Jul 09, 2021 8:55 am



Elisabeth Vangioni Cosmic origin of the chemical elements rarety in nuclear astrophysics 23 November 2017
The Universe was born in a ‘big bang’ at a moment when the density and the temperature were (almost) infinite, in the most symmetric state. At that time, all fundamental physical forces—gravitational, electromagnetic, strong and weak—were united. Due to the rapid expansion of the Universe (the first pillar), the temperature dropped quickly, and the fundamental forces separated. At about 10^-36 second after the big bang, the Universe went through a very short and rapid expansion called inflation; during the interval from 10^-35 to 10^-30 second, the dimension of the Universe was multiplied by an enormous factor of at least 10^26, perhaps even 10^100, probably due to the separation of the strong interaction. Cosmic inflation is thought to be responsible for the remarkable degree of homogeneity seen in the present Universe at large scales, and at the same time for structure formation in the Universe. On the one hand, inflation stretched space and eliminated its defects; on the other hand, quantum fluctuations of the inflaton (the field that drives inflation) led to the seeds of galaxies. A few minutes later, Big Bang nucleosynthesis (BBN) began: protons and neutrons combined into atomic nuclei, producing hydrogen, helium and a trace of lithium. BBN lasted until the temperature and density of baryons (baryons correspond to normal matter, protons and neutrons) became too low for further nucleosynthesis. The elements necessary for life, such as carbon and oxygen, had not been formed at this moment. After inflation and primordial nucleosynthesis, not much change occurred for the next hundred thousand years or so. The Universe continued to expand, gradually cooling off until its temperature fell to a few thousand kelvin. At that point, its density was only 10^-21 g/cm3, on average. At that time the electrons were captured by hydrogen and helium nuclei, forming a gas of electrically neutral atoms. Prior to this moment, the Universe was a plasma.
A plasma is a physical state of matter where protons, photons, and electrons are not bound into atoms but are strongly interacting. So, about 380,000 years after the big bang, the Universe cooled below 3000–4000 K. The photons decoupled from matter and streamed freely. This radiation, the CMB, was first detected by A. Penzias and R. Wilson in 1964; they won the Nobel Prize in 1978. In 1989, the COBE mission observed the CMB in great detail, and another Nobel Prize was awarded in 2006 to J. Mather and G. Smoot (Smoot 2000) for the discovery of the anisotropy of the CMB radiation. More recently, observational CMB results from the two satellites, the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck, confirmed inflationary cosmology and determined the cosmological parameters with an unprecedented precision (Ade et al. 2016; Hinshaw et al. 2013). The small temperature/density variations detected on the sky map are the ‘seeds’ that will grow into the galaxies and galaxy clusters seen in the present Universe. These successful experiments lead to the conclusion that the Universe contains only about 4.9% baryons (atoms), 26.6% dark matter and 68.4% dark energy. Dark matter and dark energy have opposing gravitational effects: dark matter has an attractive gravitational effect, whereas dark energy has a repulsive (anti-gravitational) one. The Universe then entered the cosmic dark ages because there were no stars and no light from stars. Only hydrogen and helium clouds were present at that time. A few hundred million years after the big bang, matter collapsed into minihalos, which became the birth sites for the first stars since they provided gravitational wells that retained gas to form stars. The light from the first stars ended the dark ages. These first stars forged the first complex nuclei, such as carbon and oxygen. Thus, they play a crucial role in the global evolution of the Universe.
To summarize, at the beginning, all space and time, all energy and matter emerged from symmetric conditions that we can guess today only in abstract equations. The history of the Universe from this earliest instant has been a saga of ever-growing asymmetry and increasing complexity. As space expanded and the Universe cooled, particles began aggregating and structures started forming from this ultrahot plasma. Eventually, clusters and galaxies, stars, planets, and even life itself emerged. In at least one corner of the cosmos where conditions were ideal, intelligent beings evolved to the point where they could begin to comprehend these fantastic origins. Above all, from the big bang on, the Universe has been continuously evolving.

The Lambda-CDM model
The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parameterization of the Big Bang cosmological model.

J. Bel Determination of the abundance of cosmic matter via the cell count moments of the galaxy distribution 15 November 2013
Current estimations suggest that we live in a homogeneous and isotropic universe where baryonic matter is a minority (1/6) of all matter, matter is a minority (1/4) of all forms of energy, geometry is spatially flat, and cosmic expansion is presently accelerating. However evocative these cosmological results may be, incorporating them into a physical theory of the universe is a very perplexing problem. Virtually all the attempts to explain the nature of dark matter, which is the indispensable ingredient in models of cosmic structure formation, as well as of dark energy, which is the physical mechanism that drives cosmic acceleration, invoke exotic physics beyond current theories. The Wilkinson Microwave Anisotropy Probe and the Planck mission (Planck Collaboration), for example, fix the parameters of a “power-law ΛCDM” model with impressive accuracy. This is a cosmological model characterized by a flat geometry, by a positive dark energy (DE) component ΩX with w = −1, and by primordial perturbations that are scalar, Gaussian, and adiabatic.

ΛCDM Model of Cosmology
We illustrate a brief and simplified picture of theorized stages in the evolution of the universe, to provide a context for discussing ΛCDM parameters.

[Figure: graphical timeline of the theorized evolution of the universe]

[Figure: overview of ΛCDM parameters]

1. In this picture, the infant universe is an extremely hot, dense, nearly homogeneous mixture of photons and matter, tightly coupled together as a plasma. An approximate graphical timeline of its theoretical evolution is shown in the figure above, with numbers keyed to the explanatory text below. The initial conditions of this early plasma are currently thought to be established during a period of rapid expansion known as inflation. Density fluctuations in the primordial plasma are seeded by quantum fluctuations in the field driving inflation. The amplitude of the primordial gravitational potential fluctuations is nearly the same on all spatial scales (see e.g. reviews by Tsujikawa 2003 and Baumann 2009). The small perturbations propagate through the plasma collisionally as a sound wave, producing under- and overdensities in the plasma with simultaneous changes in density of matter and radiation. CDM doesn't share in these pressure-induced oscillations, but does act gravitationally, either enhancing or negating the acoustic pattern for the photons and baryons (Hu & White 2004).
2. Eventually physical conditions in the expanding, cooling plasma reach the point where electrons and baryons are able to stably recombine, forming atoms, mostly in the form of neutral hydrogen. The photons decouple from the baryons as the plasma becomes neutral, and perturbations no longer propagate as acoustic waves: the existing density pattern becomes "frozen". This snapshot of the density fluctuations is preserved in the CMB anisotropies and the imprint of baryon acoustic oscillations (BAO) observable today in large scale structure (Eisenstein & Hu 1998).
3. Recombination produces a largely neutral universe which is unobservable throughout most of the electromagnetic spectrum, an era sometimes referred to as the "Dark Ages". During this era, CDM begins gravitational collapse in overdense regions. Baryonic matter gravitationally collapses into these CDM halos, and "Cosmic Dawn" begins with the formation of the first radiation sources such as stars. Radiation from these objects reionizes the intergalactic medium.
4. Structure continues to grow and merge under the influence of gravity, forming a vast cosmic web of dark matter density. The abundance of luminous galaxies traces the statistics of the underlying matter density. Clusters of galaxies are the largest bound objects. Despite this reorganization, galaxies retain the BAO correlation length that was established in the era of the CMB.
5. As the universe continues to expand over time, the negative pressure associated with the cosmological constant (the form of dark energy in ΛCDM) increasingly dominates over opposing gravitational forces, and the expansion of the universe accelerates.
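Stage 5 can be sketched with the expansion rate of a flat ΛCDM universe, H(a) = H0·√(Ωm a⁻³ + ΩΛ): the matter term dilutes as the universe expands while the cosmological-constant term does not, so Λ eventually dominates. The density values below are illustrative round numbers, not fitted results:

```python
import math

# Flat-LCDM expansion rate: H(a)/H0 = sqrt(Om * a**-3 + OL), a = scale factor.
# Illustrative round-number densities (matter ~31%, dark energy ~69%):
Om, OL = 0.31, 0.69

def E(a):
    """Dimensionless expansion rate H(a)/H0 at scale factor a."""
    return math.sqrt(Om * a**-3 + OL)

# Early on (small a) the diluting matter term dominates; today (a = 1) Lambda does:
print(Om * 0.1**-3 > OL)  # True -- matter term wins at a = 0.1
print(Om * 1.0**-3 < OL)  # True -- Lambda term wins at a = 1
```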

The ΛCDM (Lambda cold dark matter) or Lambda-CDM model is a parameterization of the Big Bang cosmological model in which the universe contains three major components: first, a cosmological constant denoted by Lambda (Greek Λ) and associated with dark energy; second, the postulated cold dark matter (abbreviated CDM); and third, ordinary matter. It is frequently referred to as the standard model of Big Bang cosmology because it is the simplest model that provides a reasonably good account of the following properties of the cosmos:
The existence and structure of the cosmic microwave background
The large-scale structure in the distribution of galaxies
The observed abundances of hydrogen (including deuterium), helium, and lithium
The accelerating expansion of the universe observed in the light from distant galaxies and supernovae

Graphical Parameter Comparisons
Standard ΛCDM requires only 6 independent parameters to completely specify the cosmological model. The specific set of six parameters used to define the cosmological model is somewhat open to choice. Within the context of fitting a ΛCDM model to a CMB power spectrum, the six selected key parameters are primarily chosen to avoid degeneracies and thus speed convergence of the model fit to the data. Other interesting parameters providing additional physical insight may be derived from the model once the defining six parameters have been set.

1. Scalar power law index ns
2. Age of the Universe t0
3. Optical depth to reionization τ
4. Hubble constant H0
5. Physical baryon density Ωbh2
6. Physical CDM density Ωch2
7. Matter density Ωm
8. Fluctuation amplitude σ8
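A hedged sketch of how the defining-versus-derived split might look in code. As the text notes, the exact choice of six is open to convention; the split used here is one common CMB-fitting convention, and the values not found in the Planck excerpt quoted later in this section (H0 and the amplitude As) are commonly quoted Planck 2018 numbers included for illustration:

```python
# Six parameters commonly chosen to define "base LCDM" (one convention of several).
defining = {
    "ns":    0.965,    # scalar spectral index
    "tau":   0.054,    # optical depth to reionization
    "H0":    67.4,     # km/s/Mpc (CMB fits often use the acoustic scale instead)
    "ombh2": 0.0224,   # physical baryon density, Omega_b h^2
    "omch2": 0.120,    # physical CDM density, Omega_c h^2
    "As":    2.1e-9,   # primordial amplitude (sigma8 serves as its proxy above)
}
derived = ["t0", "Omega_m", "sigma8"]  # obtainable once the six are fixed

print(len(defining))  # 6
```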


1. Scalar Power-Law Index, ns
ΛCDM characterizes the power spectrum of the primordial scalar perturbations as a power law, P(k) ∝ k^(ns−1).

Scalar perturbations, or primordial fluctuations, are density variations in the early universe which are considered the seeds of all structure in the universe. Currently, the most widely accepted explanation for their origin is in the context of cosmic inflation. According to the inflationary paradigm, the exponential growth of the scale factor during inflation caused quantum fluctuations of the inflaton field to be stretched to macroscopic scales and, upon leaving the horizon, to "freeze in". At the later stages of radiation- and matter-domination, these fluctuations re-entered the horizon, and thus set the initial conditions for structure formation. As with scalar fluctuations, tensor fluctuations are expected to follow a power law and are parameterized by the tensor index (the tensor version of the scalar index).

An epoch of accelerated expansion in the early universe, inflation, dynamically resolves cosmological puzzles such as homogeneity, isotropy, and flatness of the universe, and generates superhorizon fluctuations without appealing to fine-tuned initial setups. During the accelerated expansion phase, generation and amplification of quantum fluctuations in scalar fields are unavoidable. These fluctuations become classical after crossing the event horizon. Later during the deceleration phase they re-enter the horizon, and seed the matter and the radiation fluctuations observed in the universe. The majority of inflation models predict Gaussian, adiabatic, nearly scale-invariant primordial fluctuations. These properties are generic predictions of inflationary models. The cosmic microwave background (CMB) radiation anisotropy is a promising tool for testing these properties, as the linearity of the CMB anisotropy preserves basic properties of the primordial fluctuations. In companion papers, Spergel et al. find that adiabatic scale-invariant primordial fluctuations fit the WMAP CMB data as well as a host of other astronomical data sets including the galaxy and the Lyman-α power spectra; Komatsu et al. find that the WMAP CMB data is consistent with Gaussian primordial fluctuations. These results indicate that predictions of the most basic inflationary models are in good agreement with the data. While the inflation paradigm has been very successful, radically different inflationary models yield similar predictions for the properties of fluctuations: Gaussianity, adiabaticity, and near-scale-invariance. To break the degeneracy among the models, we need to measure the primordial fluctuations precisely. Even a slight deviation from Gaussian, adiabatic, near-scale-invariant fluctuations can place strong constraints on the models. The CMB anisotropy arising from primordial gravitational waves can also be a powerful method for model testing.
In this paper, we confront predictions of various inflationary models with the CMB data from the WMAP, CBI, and ACBAR experiments, as well as the 2dFGRS and Lyman-α power spectra.

Scale invariance is indicated when the scalar index ns is identically one, e.g., the power in the primordial gravitational potential fluctuations is the same over all physical scales. In ΛCDM, field fluctuations are the root cause of the observed CMB anisotropies a, and thus ns is a critical parameter for characterizing the strength of the CMB anisotropies on all angular scales. The power-law index has been observed to be approximately scale invariant for some time, but the associated uncertainties have been sufficiently large to accommodate a variety of model scenarios. In general, inflation predicts values slightly less than 1: this is a consequence of a slowly varying inflationary potential and horizon size over the period of inflation. Early CMB analyses of temperature-only data contended with a parameter degeneracy existing between ns and the optical depth to reionization, τ. This degeneracy is broken with the addition of polarization measurements such as those obtained by WMAP and Planck. Recent determinations from these missions, shown in the figure below, obtain values for ns near 0.96-0.97, with uncertainties which indicate a significantly detected difference from scale invariance (indicated by the dashed vertical fiducial line at ns = 1). This supports inflationary scenarios. The 2013 BOSS Lyα forest analysis result supports the CMB analyses, but this method is not yet at the stage where it has the statistical power to exclude scale invariance.
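The power law itself is simple enough to sketch directly. Assuming the parameterization P(k) ∝ (k/k_pivot)^(ns−1), with an illustrative pivot scale and the ns ≈ 0.965 value discussed above:

```python
def primordial_power(k, ns=0.965, amplitude=1.0, k_pivot=0.05):
    """Primordial power law P(k) = A * (k/k_pivot)**(ns - 1); ns = 1 is scale invariant.

    k and k_pivot in 1/Mpc; the amplitude and pivot here are illustrative placeholders.
    """
    return amplitude * (k / k_pivot) ** (ns - 1.0)

# ns = 1 gives identical power on every scale:
print(primordial_power(0.005, ns=1.0) == primordial_power(0.5, ns=1.0))  # True
# The measured ns ~ 0.965 < 1 tilts power slightly toward large scales (small k):
print(primordial_power(0.005) > primordial_power(0.5))  # True
```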

[Figure: history of ns determinations]
In ΛCDM, the temperature and matter anisotropies we observe today in the cosmic microwave background and large-scale structure are thought to be seeded by primordial scalar perturbations in the field driving inflation. How closely these perturbations follow scale invariance (dashed vertical line at ns=1) has been a subject of interest for some time. Recent determinations by Planck and WMAP have firmly established ns<1, supportive of inflation scenarios, which generally predict values slightly less than 1. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at ns = 0.9655.

2. Age of the Universe t0
In physical cosmology, the age of the universe is the time elapsed since the Big Bang. Today, astronomers have derived two different measurements of the age of the universe: a measurement based on direct observations of an early state of the universe, which indicates an age of 13.772±0.040 billion years within the Lambda-CDM concordance model as of 2018, and measurements based on local observations of the present-day expansion rate, which, because distance-ladder methods yield somewhat higher values of H0, imply a somewhat younger universe.

N. Aghanim et al. Planck 2018 results. VI. Cosmological parameters September 15, 2020
We find good consistency with the standard spatially-flat 6-parameter ΛCDM cosmology having a power-law spectrum of adiabatic scalar perturbations b (denoted “base ΛCDM” in this paper), from polarization, temperature, and lensing, separately and in combination. A combined analysis gives dark matter density Ωch2 = 0.120 ± 0.001, baryon density Ωbh2 = 0.0224 ± 0.0001, scalar spectral index ns = 0.965 ± 0.004, and optical depth τ = 0.054 ± 0.007.

3. Optical depth to reionization τ
As discussed in the ΛCDM Theory section, the "Dark Ages" following recombination are brought to an end by an epoch of reionization. Although there is at present no firm scenario that describes the detailed history and mechanisms by which the intergalactic medium (IGM) evolves from a largely neutral state (post-recombination) to one of being largely ionized, the existence of reionization introduces key observational consequences. The history of the reionization process is important for its relevance to how and when the first stars and galaxies formed, since these objects are presumed to be sources for most of the ionizing photons. Reionization affects our ability to measure the CMB radiation propagating through the IGM: Thomson scattering of CMB photons by the free electrons produced by reionization serves as an opacity source that suppresses the amplitude of the observed primordial anisotropies. Additionally, electron scattering associated with reionization induces large-scale polarization of the CMB radiation above that produced earlier during recombination.
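The suppression effect mentioned above is often summarized by saying that small-scale temperature power is damped by a factor of e^(−2τ). A quick illustration using the Planck 2018 value of τ:

```python
import math

tau = 0.054  # Planck 2018 optical depth to reionization
# Anisotropy amplitudes are damped by e^(-tau) as photons scatter off free
# electrons, so power (amplitude squared) on sub-horizon scales is damped
# by e^(-2 * tau).
suppression = math.exp(-2 * tau)
print(round(suppression, 3))  # 0.898 -- roughly a 10% suppression of power
```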

[Figure: history of τ determinations]

The optical depth to reionization, τ, serves as a means of quantifying the era when emitting sources first began forming and reionizing the neutral gas that existed after recombination. The optical depth can be related to an approximate redshift, or range of redshifts, during which reionization occurred: larger values imply an earlier onset of reionization. Lower limits based on lack of continuous Lyα absorption (by a neutral IGM) in the spectra of the most distant quasars and galaxies at redshifts z ≈ 6 suggest τ > 0.03 (Pryke et al. 2002, Fan et al. 2005). The detection of polarized CMB signal by WMAP in 2003 (Spergel et al. 2003) allowed the first determination of τ. Since that time, determinations of τ from CMB polarization data have improved in precision and accuracy, and polarized observations of the CMB are currently the best determinants of τ. Ongoing efforts seek to place even stricter limits on τ in order to constrain early galaxy formation models and inflation scenarios. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at τ = 0.0633.

4. Hubble constant H0
In cosmology, it is the constant of proportionality in the relation between the velocities of remote galaxies and their distances. It expresses the rate at which the universe is expanding. It is denoted by the symbol H0, where the subscript denotes that the value is measured at the present time, and named in honour of Edwin Hubble, the American astronomer who attempted in 1929 to measure its value.

The current value of the cosmological expansion rate, the Hubble constant H0, is an important cosmological datum. Although one of the most measured cosmological parameters, it was more than seven decades after Hubble’s first measurement before a consensus value for H0 started to emerge. In 2001 Freedman et al. (2001) provided H0 = 72±8 km s−1 Mpc−1 (1σ error including systematics) as a reasonable summary of the Hubble Space Telescope Key Project H0 value.

[Figure: history of H0 determinations]
The Hubble Constant H0 characterizes the present-day expansion rate of the universe. Its value may be determined using a variety of methods. The figure includes results from distance ladder determinations (labeled HST Key Project, Cepheids+SNIa, Distance Ladder, TRGB Dist Ladder), indirect CMB measurements (WMAP9++, Planck_PR3++, both of which combine CMB with other data), BAO in combination with baryon abundance (BAO+D/H), the thermal SZ effect (CHANDRA+tSZ, XMM+Planck tsZ), strong gravitational lensing (Gravlens Time Delay) and gravitational waves (LIGO/Virgo grav waves). More detailed descriptions of these methods are given in the text. Uncertainties in the gravitational wave method are expected to decrease as the method matures. A tightly constrained value for H0 with agreement across multiple methods has yet to be achieved. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at 67.88 km/sec/Mpc.
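A rough feel for what a given H0 implies: the "Hubble time" 1/H0 sets the characteristic expansion timescale, and for ΛCDM-like parameters it lands near the actual age of the universe. A sketch using the weighted-average value from the caption (unit conversions are the standard ones):

```python
# "Hubble time" 1/H0, converting km/s/Mpc into billions of years.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

def hubble_time_gyr(H0_km_s_mpc):
    """1/H0 expressed in billions of years."""
    return (KM_PER_MPC / H0_km_s_mpc) / SECONDS_PER_GYR

print(round(hubble_time_gyr(67.88), 1))  # 14.4 -- close to the ~13.8 Gyr LCDM age
```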

5. Physical baryon density Ωbh2
Values for the physical baryon density shown in the figure are dominated by determinations based on two separate methods: 
1) CMB determinations, in which the derived baryon density depends primarily on the amplitudes of the first and second peaks of the temperature power spectrum (e.g., Page et al. 2003), and 
2) primordial deuterium abundance (D/H) measurements coupled with Big Bang Nucleosynthesis (BBN) theory.

We quote four separate determinations of baryon density which use D/H+BBN to analyze independent observations of Lyα clouds in QSO or damped Lyα systems (Pettini & Bowen 2001, Kirkman et al. 2003, Pettini & Cooke 2012, Cooke et al. 2016). Error estimates depend both on observational and modeling uncertainties. Kirkman et al. (2003) note that some early determinations may have underestimated the errors; see also e.g. Dvorkin et al. (2016) for further discussion. With continued improvements in data quality and reduction of observational systematics, Cooke et al. (2016) raise the near-term prospect of D/H+BBN determinations of Ωbh2 with uncertainties rivaling those derived using CMB data. However, both Cooke et al. (2016) and Dvorkin et al. (2016) note the increasing importance of accurate nuclear cross section data in the BBN computations. We illustrate this in the figure by quoting two estimates for Ωbh2 from Cooke et al. (2016), which are derived from the same D/H data but use two different estimates of a key nuclear cross section in the BBN computation.
We also include a determination using "present day" measurements. Steigman, Zeller & Zentner (2002) employed SNIa data and the assumption of ΛCDM to obtain Ωm, and then computed the baryon density by combining Ωm with an estimate of the baryon fraction fb derived from X-ray observations of clusters of galaxies (e.g. Ωb = fb Ωm).
There is general good agreement between the recent CMB and D/H+BBN determinations shown in the figure. As noted above, reduction of baryon density uncertainties determined via D/H+BBN will require an increased accuracy in modeling inputs.

[Figure: history of Ωbh2 determinations]
Most of the visible mass in the universe is composed of baryons (protons and neutrons). Values for the physical baryon density in the universe have generally been determined using either observations of the CMB or deuterium abundance in distant gas clouds with near-primordial composition. Big Bang Nucleosynthesis (BBN) models the production of light elements such as deuterium in the early universe, and given an observed deuterium abundance (D/H), BBN can predict baryon content. We include four such determinations in the plot; these serve as an important independent check on baryonic densities derived from ΛCDM model fits to CMB power spectra. Recent determinations of baryonic content based on CMB and deuterium abundance analyses are in reasonable agreement: see text for discussion of the 2016 D/H+BBN values. Only about 4-5% of the present-day universe is comprised of baryons. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at 100Ωbh2 = 2.239.
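The ~4-5% figure follows directly from the plotted quantity. A quick check, dividing the weighted-average Ωbh2 from this caption by h² (with h taken from the H0 figure's weighted average of 67.88 km/s/Mpc):

```python
ombh2 = 0.02239  # weighted-average 100*Omega_b*h^2 = 2.239 from the caption above
h = 0.6788       # H0 / 100, from the H0 figure's weighted average

Omega_b = ombh2 / h**2   # baryon fraction of the critical density
print(round(Omega_b, 3))  # 0.049 -- i.e. baryons are ~5% of the energy budget
```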

6. Physical CDM Density, Ωch2
By definition, ΛCDM cosmology assumes the presence of non-baryonic cold dark matter. Estimations shown in the figure are all limited to application of ΛCDM to CMB measurements. Non-CMB derived values of baryonic and total matter densities support the dark matter content found by CMB analyses (Ωc ≈ Ωm - Ωb), but don't have the tight uncertainties that current determinations using WMAP and Planck data (combined with external datasets) provide. The slight (1σ-2σ) tension between the WMAP and Planck results is consistent with the corresponding tension in Ωm and reasonably good agreement in the baryonic content values.

[Figure: history of Ωch2 determinations]
In ΛCDM, most of the Large Scale Structure in the universe is associated with cold dark matter. Dark matter is not easily detected, but its presence is inferred from gravitational effects such as lensing and galactic rotation curves. All four entries in the above plot are determined from analysis of CMB data: the current most precisely determined values are represented by the Planck 2018 and WMAP 2013 points, which also include data from other experiments. The values indicate very roughly 25% of the present-day universe is comprised of CDM; the percentage contribution from ordinary matter is much lower (see the Ωbh2 figure). Slight tension exists between WMAP and Planck CMB determinations for cold dark matter content. Unfortunately, alternate methods of corroboration at this precision level are not currently available. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at Ωch2 = 0.1186.

7. Matter density Ωm
The Ωm parameter specifies the mean present-day fractional energy density of all forms of matter, including baryonic and dark matter. As can be seen in the figure below, a wide variety of techniques have been employed to determine Ωm. The diversity of techniques has not yet converged upon a tightly constrained concordance value, but recent estimates indicate only ∼30% of the total energy density in the universe is contributed by matter.
[Figure: history of Ωm determinations]
Aside from CMB-related determinations, methods to determine total matter density include cosmological interpretation of galaxy cluster abundances, redshift-space distortion measurements, BAO and SNIa luminosity distances. A number of difficulties arise from parameter degeneracies involved in the galaxy clustering techniques. These may be avoided with SNIa analyses, but in this case control of luminosity distance determination systematics is a key factor and affects the quoted uncertainties. Ωm is one of the less well-determined parameters presented here: mild tension exists between the two CMB results quoting the smallest uncertainties (WMAP 2013 and Planck 2018), and constraints from non-CMB determinations are not yet sufficiently tight to highlight any conflicts with CMB results. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at Ωm = 0.3049.
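As a consistency check, Ωm should be approximately (Ωbh² + Ωch²)/h², using the weighted averages quoted in the other figure captions; the small offset from the 0.3049 quoted here reflects rounding and the neglected neutrino contribution:

```python
# Weighted-average values quoted in the Omega_b*h^2, Omega_c*h^2 and H0 captions:
ombh2, omch2, h = 0.02239, 0.1186, 0.6788

Omega_m = (ombh2 + omch2) / h**2  # baryons + cold dark matter
print(round(Omega_m, 2))  # 0.31, close to the quoted 0.3049
```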

8. Fluctuation amplitude σ8
While Ωm specifies the mean density of matter in the present day universe, σ8 specifies how that matter is distributed (clumped) at a fiducial scale length. More specifically, the amplitude of the matter power spectrum can be parameterized in terms of the amplitude of the linear matter fluctuations, σ8, defined as the rms of the z = 0 density perturbations on scales of 8 h^-1 Mpc.
[Figure: history of σ8 determinations]
The amplitude of the primordial matter power spectrum is parameterized by σ8. This provides an important constraint on clustering of matter (CDM dominated) and the formation of structure. As with Ωm, methods using observations of galaxy clusters (abundances, velocity fields, weak lensing) can suffer from degeneracies with other parameters. Alternate methods make use of the SZ effect. Recent determinations of σ8 using these lower-redshift data show a slight preference for lower values than those from CMB determinations. It is too early to definitively invoke interesting cosmological physics as the cause, since underlying systematics may still be present. The gray vertical line, representing the weighted average of WMAP and Planck data points, is positioned at σ8 = 0.8163.

a: Anisotropy is the property of a material which allows it to change or assume different properties in different directions as opposed to isotropy. It can be defined as a difference, when measured along different axes, in a material's physical or mechanical properties (absorbance, refractive index, conductivity, tensile strength, etc.) https://en.wikipedia.org/wiki/Anisotropy
b: Adiabatic/isocurvature fluctuations: Adiabatic fluctuations are density variations in which all forms of matter and energy have equal fractional over- and under-densities in number density.


Last edited by Otangelo on Fri Jul 23, 2021 1:30 pm; edited 2 times in total


4. Re: Fine-tuning of the Big Bang - Sun Jul 18, 2021 6:44 am



Eric Metaxas: Is God dead? page 67:
The American Nobel Prize–winning physicist Steven Weinberg is an atheist with a particular disdain for religion. Before he was able to accept the Big Bang, he admitted that the Steady State theory was “philosophically the most attractive theory because it least resembles the account given in Genesis.” But Weinberg nonetheless admits that life “as we know it would be impossible if any one of several physical quantities had slightly different values,” as we have been saying. But nothing we have mentioned can compare to the number Weinberg has determined to be the most fine-tuned constant yet discovered. In talking about the energy density of the universe—what is often referred to as the “cosmological constant”—he gives us the most extravagantly grotesque number of them all, in the most confounding example of fine-tuning. Weinberg says that if the value of this cosmological constant were different by just one part in 10 to the 120th power, life could not exist. But what are we to make of a one followed by 120 zeroes? One part in a million would get the point across, but a million is only a one followed by six zeroes. A trillion is a one followed by twelve zeroes. A quadrillion is a one followed by fifteen zeroes, and a quintillion is a one followed by eighteen zeroes. From there we go to sextillion (one followed by twenty-one zeroes) and septillion (one followed by twenty-four zeroes) and octillion (one followed by twenty-seven zeroes). Who has use for such numbers? Since we had some ink to spare, we can spell it out. We are talking about a deviation of one part in 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. Can there be any real point to such a surfeit of zeroes? The number of atoms in the universe is ten to the eightieth power. Our universe is ninety-three billion light-years in diameter. One light-year is about six trillion miles.
Our minds cannot begin to fathom these things. But these are simply what science—advanced as it now is—requires that we accept.6 So at what point do we see that our existence—logically speaking— cannot be anything close to a random occurrence? We have not mentioned that each of these outlandish numbers must be multiplied by all the others to get the full picture. We get a number so impossibly large that there can be no way to imagine that our universe and our planet—and we ourselves—are simply here. But science says that changing any one of the parameters mentioned even slightly would destroy the ability for life to exist. So simply to shrug and say this all simply happened “by chance” seems beyond the pale. Saying that would be infinitely dumber than what Jim Carrey’s character in Dumb and Dumber says, after being told by the pretty girl that the chances of her dating him are one in a million: “So you’re telling me there’s a chance!?” No. Not really. No

Imanol Albarran What if gravity becomes really repulsive in the future? 24 March 2018
The Universe is evolving. Hubble's discovery was based on observing that the spectrum of far-away galaxies was red-shifted, which implied that those galaxies were moving away from us. He even measured the galaxies' radial outward velocities and realized that they followed a rule: (1) the velocities were proportional to the distances at which the galaxies were located from us, and (2) the proportionality factor was a constant, the Hubble constant. About 70 years later, two independent teams realized, by measuring more distant objects (SNeIa), that the Hubble constant was not quite constant, as was already expected. The issue was that the deviation from constancy was not in the anticipated direction. It was no longer enough to invoke only matter to explain those observations. A new dark component had to be invoked, interacting as far as we know only gravitationally, and named dark energy. This component recently started fuelling a second inflationary era of the visible Universe. Of course, all these observations, and subsequent ones, are telling us how gravity behaves at cosmological scales through the kinematic expansion of our Universe.
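Hubble's rule is simply that recession velocity is proportional to distance, v = H0 × d. A minimal sketch (assuming the round illustrative figure H0 ≈ 70 km/s per megaparsec):

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
H0 = 70.0  # Hubble constant in km/s per megaparsec (approximate round value)

def recession_velocity(distance_mpc):
    # Recession velocity in km/s for a galaxy at the given distance in Mpc
    return H0 * distance_mpc

print(recession_velocity(100.0))  # a galaxy 100 Mpc away recedes at 7000.0 km/s
```

The surprise described above was that, for very distant supernovae, this simple linear rule with a constant H0 no longer fits: the deviations indicate accelerated expansion.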

This kinematic description is linked to the dynamical expansion through the gravitational laws of Einstein's theory. To a very good approximation, we may assume that our Universe is homogeneous and isotropic on large scales and that it is filled with matter (standard and dark) and dark energy.

Summarising, what we have shown is that gravity might after all behave the other way around in the future: rather than the apple falling from the tree, the apple may fly from the Earth's surface up to the branches, if dark energy is repulsive enough, as current observations may already indicate.



5. Re: Fine-tuning of the Big Bang - Sat Apr 13, 2024 12:41 pm



7. The low-entropy state of the universe

The distance to the edge of the observable universe is about 46 billion light-years in any direction. To find the total volume of the observable universe, we use the formula for the volume of a sphere, V = (4/3) × π × r^3, where r is the radius. Substituting r = 46 billion light-years:

V = (4/3) × π × (46 × 10^9 light-years)^3
  ≈ 4.08 × 10^32 cubic light-years

To convert the volume from cubic light-years to cubic meters, we use the conversion factor 1 light-year ≈ 9.461 × 10^15 meters, so 1 cubic light-year ≈ 8.47 × 10^47 cubic meters. Thus the total volume of the observable universe is V ≈ 4.08 × 10^32 × 8.47 × 10^47 ≈ 3.5 × 10^80 cubic meters.

The volume of an atom is approximately 10^-30 cubic meters. To find the number of atoms that could fit in the observable universe without any empty space, we divide the total volume by the volume of a single atom: 3.5 × 10^80 / 10^-30 ≈ 3.5 × 10^110 atoms. Since an atom is 99.9% empty space, we can repeat the exercise with protons: the volume of a proton is approximately 10^-45 cubic meters, so packing the same volume with protons would take 3.5 × 10^80 / 10^-45 ≈ 3.5 × 10^125 protons. The key points are the immense volume of the observable universe and the astronomical number of atoms or protons required to fill that volume entirely without any empty space.
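The sphere-volume arithmetic can be checked in a few lines, using the same rounded inputs (r = 46 billion light-years, atom volume ~10^-30 m^3, proton volume ~10^-45 m^3):

```python
import math

LY_IN_M = 9.461e15          # one light-year in meters
r_ly = 46e9                 # radius of the observable universe in light-years

v_ly3 = (4.0 / 3.0) * math.pi * r_ly**3   # volume in cubic light-years
v_m3 = v_ly3 * LY_IN_M**3                 # volume in cubic meters

atom_volume = 1e-30         # rough volume of an atom, m^3
proton_volume = 1e-45       # rough volume of a proton, m^3

print(f"{v_ly3:.2e} cubic light-years")       # ~4.08e+32
print(f"{v_m3:.2e} cubic meters")             # ~3.45e+80
print(f"{v_m3 / atom_volume:.2e} atoms")      # ~3.45e+110
print(f"{v_m3 / proton_volume:.2e} protons")  # ~3.45e+125
```

The exact counts depend on the rough volumes assumed for the atom and proton; the orders of magnitude (~10^110 atoms, ~10^125 protons) are the point.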

According to Penrose's estimation, the odds of obtaining the extremely low entropy state that our universe had at the Big Bang by chance alone are on the order of 1 in 10^(10^123), a one followed by 10^123 zeros. This is an incomprehensibly small probability. To put it in perspective, even the exponent alone, 10^123, far exceeds the total number of fundamental particles in the observable universe (estimated to be around 10^90). It is essentially saying that if you had as many universes as there are fundamental particles in our observable universe, and you randomly selected one, the odds of it having the same incredibly low entropy state as ours would still be vanishingly small.
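Numbers like 10^(10^123) cannot be represented directly, so any sanity check has to work in log space. A small sketch comparing the digit count of Penrose's number with a rough particle count (the 10^90 figure is the estimate used above):

```python
# Penrose's number, 10**(10**123), overflows any direct representation,
# so we work with its base-10 logarithm instead.
log10_penrose = 10.0**123       # roughly the digit count of Penrose's number

particles_in_universe = 1e90    # rough particle count used in the text above

# Writing one digit of the number on every fundamental particle in the
# universe would still leave us short by over 30 orders of magnitude.
print(log10_penrose / particles_in_universe)   # roughly 1e33
```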

Based on Roger Penrose's calculation, we can estimate the number of universes required to have a reasonable chance of finding one with a specific, extremely unlikely configuration (a "red proton", say). To keep the numbers even remotely writable, use only the exponent of Penrose's figure, odds of 1 in 10^123. To have a reasonable probability of observing an event with odds of 1 in N, you need a sample size on the order of N or larger; demanding a sample 10^20 times larger than N gives at least 10^143 universes. A universe packed solid with protons holds roughly 10^125 of them, so 10^143 such universes would together contain roughly 10^268 protons. This is an astronomically large total, far exceeding anything conceivable or observable, given that our own universe contains only about 10^80 atoms. And recall that 10^123 is merely the exponent: Penrose's actual figure, 1 in 10^(10^123), is incomparably more extreme. In summary, if Penrose's calculation is accurate, finding a universe with a specific, incredibly improbable configuration by chance would require an inconceivably vast ensemble of parallel universes, many orders of magnitude beyond anything our current understanding of the universe can accommodate.

The low-entropy state of the universe represents an unfathomable degree of fine-tuning. The staggering odds of 1 in 10^(10^123) for the universe to arrive by chance at such a life-nurturing state are a number so vast that ordinary analogies fail. Finding a single marked grain of sand somewhere in a billion trillion universes, each containing over a trillion trillion grains, corresponds to odds of only about 1 in 10^45. The number 10^(10^123) has more digits than there are fundamental particles in the observable cosmos; it could not even be written out in full, rendering the probability a statistical improbability so extreme that it borders on the inconceivable.



6. Re: Fine-tuning of the Big Bang - Thu Jun 13, 2024 7:30 am



Tom Siegfried: The second law of thermodynamics underlies nearly everything. But is it inviolable?

Two centuries on, scientists are still searching for a proof of its universal validity

In real life, laws are broken all the time. Besides your everyday criminals, there are scammers and fraudsters, politicians and mobsters, corporations and nations that regard laws as suggestions rather than restrictions.

It’s not that way in physics.

For centuries, physicists have been identifying laws of nature that are invariably unbreakable. Those laws govern matter, motion, electricity and gravity, and nearly every other known physical process. Nature’s laws are at the root of everything from the weather to nuclear weaponry.

Most of those laws are pretty well understood, at least by the experts who study and use them. But one remains mysterious.

It is widely cited as inviolable and acclaimed as applicable to everything. It guides the functioning of machines, life and the universe as a whole. Yet scientists cannot settle on one clear way of expressing it, and its underlying foundation has evaded explanation — attempts to prove it rigorously have failed. It’s known as the second law of thermodynamics. Or quite commonly, just the Second Law.

In common (oversimplified) terms, the Second Law asserts that heat flows from hot to cold. Or that doing work always produces waste heat. Or that order succumbs to disorder. Its technical definition has been more difficult to phrase, despite many attempts. As 20th century physicist Percy Bridgman once wrote, “There have been nearly as many formulations of the second law as there have been discussions of it.”

This month, the Second Law celebrates its 200th birthday. It emerged from the efforts of French engineer Sadi Carnot to figure out the physics of steam engines. It became the bedrock of understanding the role of heat in all natural processes. But not right away. Two decades passed before physicists began to realize the significance of Carnot’s discovery.


By the early 20th century, though, the Second Law was recognized in the eyes of some as the premier law of physical science. It holds “the supreme position among the laws of Nature,” British astrophysicist Arthur Stanley Eddington declared in the 1920s. “If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”

In the two centuries since its birth, the Second Law has proved equally valuable for technological progress and fundamental science. It underlies everyday processes from cooling coffee to air conditioning and heating. It explains the physics of energy production in power plants and energy consumption in cars. It’s essential to understanding chemical reactions. It forecasts the “heat death of the universe” and plays a key role in answering why time flows in one direction.

But in the beginning, it was all about how to build a better steam engine.

The birth of the second law of thermodynamics

Nicolas Léonard Sadi Carnot was born in 1796, son of a well-known French engineer and government official named Lazare Carnot. It was a turbulent time in France, and Sadi’s father soon found himself on the wrong side of prevailing politics. Lazare went into exile in Switzerland (and later Germany), while Sadi’s mother took her baby to a small town in northern France. Eventually power in France shifted and Lazare returned, aided by a previous relationship with Napoleon Bonaparte. (At one time, Napoleon’s wife even babysat little Sadi.)

A biography written by Sadi’s younger brother, Hippolyte, described him as of delicate constitution, compensated for by vigorous exercise. He was energetic but something of a loner, reserved almost to the point of rudeness. But from a young age, he also exhibited intense intellectual curiosity, ultimately maturing into undeniable genius.

By age 16, Sadi was ready to begin higher education in Paris at the famed École Polytechnique (having already been well-trained by his father in math, physics and languages). Subsequent education included mechanics, chemistry and military engineering. During this time, he began writing scientific papers (that have not survived).

After service as a military engineer, Carnot moved back to Paris in 1819 to focus on science. He attended further college courses, including one dealing with steam engines, amplifying his longtime interest in industrial and engineering processes. Soon he began composing a treatise on the physics of heat engines in which he, for the first time, deduced the underlying scientific principles governing the production of useful energy from heat. Published on June 12, 1824, Carnot’s Reflections on the Motive Power of Heat marked the world’s first glimpse of the second law of thermodynamics.

Through his studies of heat engines, French engineer Sadi Carnot introduced the second law of thermodynamics in 1824. It would take another two decades for physicists to recognize his work’s significance. (Image: courtesy of Science History Institute)

“He was able to successfully show that there was a theoretical maximum efficiency for a heat engine, that depended only on the temperatures of its hot and cold reservoirs of heat,” computer scientist Stephen Wolfram wrote in a recent survey of the Second Law’s history. “In the setup Carnot constructed he basically ended up introducing the Second Law.”

Carnot had studied the steam engine’s use in 18th century England and its role in powering the Industrial Revolution. Steam engines had become the dominant machines in society, with enormous importance for industry and commerce. “They seem destined to produce a great revolution in the civilized world,” Carnot observed. “Already the steam-engine works our mines, impels our ships, excavates our ports and our rivers, forges iron, fashions wood, grinds grain, spins and weaves our cloths….”

Despite the societal importance of steam engines, Carnot noted, not much was known of the physical principles governing their conversion of heat into work. “Their theory is very little understood,” he wrote, “and the attempts to improve them are still directed almost by chance.” Improving steam engines, he decided, required a more general understanding of heat, apart from any particular properties of steam itself. So he investigated how all heat engines worked regardless of what substance was used to carry the heat.

In those days, heat was commonly believed to be a fluid substance, called caloric, that flowed between bodies. Carnot adopted that view and traced the flow of caloric in an idealized engine consisting of a cylinder and piston, a boiler and a condenser. An appropriate fluid (say water) could be converted to steam in the boiler, the steam could expand in the cylinder to drive the piston (doing work), and the steam could be restored to liquid water in the condenser.

Carnot’s key insight was that heat produced motion for doing work by dropping from a high temperature to a lower temperature (in the case of steam engines, from the boiler to the condenser). “The production of motive power is then due in steam-engines not to an actual consumption of caloric, but to its transportation from a warm body to a cold body,” he wrote.

A simple engine

In the type of system Sadi Carnot studied, water is converted into steam in a boiler. That steam drives a piston, which can in turn drive a wheel, to get work out. The steam cools into liquid water in the condenser. Carnot’s key insight was that heat is not consumed in the process but produces work by moving from a hot region (the boiler) to a cold region (the condenser).

T. Tibbitts

His evaluation of this process, now known as the Carnot cycle, held the key to calculating the maximum efficiency possible for any engine — that is, how much work could be produced from the heat. And it turned out that you can never transform all the heat into work, a major consequence of the Second Law.
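Carnot's result can be stated compactly: the maximum efficiency of any heat engine running between reservoirs at absolute temperatures Th and Tc is η = 1 − Tc/Th, which is always less than 1. A minimal sketch (the example temperatures are illustrative, not Carnot's own figures):

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    # Maximum fraction of input heat convertible to work between two
    # reservoirs, given their absolute temperatures in kelvin.
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("need t_hot > t_cold > 0 (kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# An illustrative boiler at 450 K exhausting to a 300 K condenser:
# at most one third of the heat can become work, whatever the design.
print(carnot_efficiency(450.0, 300.0))  # ~0.333
```

Since Tc can never reach absolute zero, the efficiency can never reach 1: some heat is always discarded to the cold reservoir, which is the Second Law consequence described above.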

Carnot’s belief in caloric, of course, was erroneous. Heat is a manifestation of the motion of molecules. Nevertheless his findings remained correct — the Second Law applies no matter what substance an engine uses or what the actual underlying nature of heat is. Maybe that’s what Einstein had in mind when he called thermodynamics the scientific achievement most likely to stand firm as further advances rewrote humankind’s knowledge of the cosmos.

Within the realm of applicability of its basic concepts, Einstein wrote, thermodynamics “is the only physical theory of universal content concerning which I am convinced … will never be overthrown.”

The Second Law predicts the heat death of the universe

Although Carnot’s book received at least one positive review (in the French periodical Revue Encyclopédique), it went largely unnoticed by the scientific world. Carnot published no more and died of cholera in 1832. Two years later, though, French engineer Émile Clapeyron wrote a paper summarizing Carnot’s work, making it accessible to a broader audience. A decade later, British physicist William Thomson — later to become Lord Kelvin — encountered Clapeyron’s paper; Kelvin soon established that the core of Carnot’s conclusions survived unscathed even when the caloric theory was replaced by the new realization that heat was actually the motion of molecules.

Around the same time, German physicist Rudolf Clausius formulated an early explicit statement of the Second Law: An isolated machine, without external input, cannot convey heat from one body to another at a higher temperature. Independently, Kelvin soon issued a similar conclusion: No part of matter could do work by cooling itself below the temperature of the coldest surrounding objects. Each statement could be deduced from the other, so Kelvin’s and Clausius’ views were equivalent expressions of the Second Law.

In the decades after Sadi Carnot’s death, physicists Lord Kelvin (left) and Rudolf Clausius (right) came up with their own but equivalent ways of expressing the Second Law. (Images, from left: T. & R. Annan & Sons/Scottish National Portrait Gallery; Armin Kübelbeck/Wikimedia Commons)

It became known as the Second Law because during this time, other work had established the law of conservation of energy, designated the first law of thermodynamics. Conservation of energy merely meant that the amount of energy involved in physical processes remained constant (in other words, energy could be neither created nor destroyed). But the Second Law was more complicated. Total energy stays the same but it cannot all be converted to work — some of that energy is dissipated as waste heat, useless for doing any more work.

“There is an absolute waste of mechanical energy available to man when heat is allowed to pass from one body to another at a lower temperature,” Kelvin wrote.

Kelvin recognized that this dissipation of energy into waste heat suggested a bleak future for the universe. Citing Kelvin, German physicist Hermann von Helmholtz later observed that eventually all the useful energy would become useless. Everything in the cosmos would then converge on the same temperature. With no temperature differences, no further work could be performed and all natural processes would cease. “In short,” von Helmholtz declared, “the universe from that time forward would be condemned to a state of eternal rest.”

Fortunately, this “heat death of the universe” would not arrive until eons into the future.

In the meantime, Clausius introduced the concept of entropy to quantify the transformation of useful energy into useless waste heat — providing yet another way of expressing the Second Law. If the First Law can be stated as “the energy of the universe is constant,” he wrote in 1865, then the Second Law could be stated as “the entropy of the universe tends to a maximum.”

Entropy, roughly, means disorder. Left to itself, an orderly system will degenerate into a disorderly mess. More technically, temperature differences in a system will tend to equalize until the system reaches equilibrium, at a constant temperature.

From another perspective, entropy refers to how probable the state of a system is. A low-entropy, ordered system is in a state of low probability, because there are many more ways to be disordered than ordered. Messier states, with higher entropy, are much more probable. So entropy is always likely to increase — or at least stay the same in systems where molecular motion has already reached equilibrium.
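The claim that "there are many more ways to be disordered than ordered" is just combinatorics. A small sketch with 100 two-state particles (coins standing in for molecules):

```python
from math import comb

N = 100  # number of two-state particles (coins standing in for molecules)

# The perfectly ordered macrostate ("all heads") has exactly one microstate;
# the maximally mixed 50/50 macrostate has comb(100, 50) of them.
ordered = comb(N, 0)
mixed = comb(N, N // 2)

print(ordered)        # 1
print(mixed)          # about 1.01e29 microstates
print(mixed / 2**N)   # ~0.08: this single macrostate holds ~8% of all microstates
```

A system wandering randomly among microstates is therefore overwhelmingly likely to be found in (or move toward) the high-entropy, mixed macrostates, which is the statistical content of the Second Law.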

Bringing probability into the picture suggested that it would be impossible to prove the Second Law from analyzing the motions of individual molecules. It was necessary instead to study statistical measures that described large numbers of molecules in motion. Work along these lines by physicists James Clerk Maxwell, Ludwig Boltzmann and J. Willard Gibbs generated the science of statistical mechanics, the math describing large-scale properties of matter based on the statistical interactions of its molecules.

Maxwell concluded that the Second Law itself must possess merely statistical validity, meaning it was true only because it described the most probable of processes. In other words, it was not impossible (though unlikely) that cold could flow to hot. He illustrated his point by inventing a hypothetical little guy (he called it a “being”; Kelvin called it a demon) that could operate a tiny door between two chambers of gas. By allowing only slow molecules to pass one way and fast molecules the other, the demon could make one chamber hotter and the other colder, violating the Second Law.

But in the 1960s, IBM physicist Rolf Landauer showed that erasing information inevitably produces waste heat. Later his IBM colleague Charles Bennett pointed out that a Maxwell demon would need to record molecular velocities in order to know when to open and shut the door. Without an infinite memory, the demon would eventually have to erase those records, preserving the Second Law.
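Landauer's bound says that erasing one bit of information at temperature T dissipates at least k_B · T · ln 2 of heat. A minimal sketch (room temperature taken as 300 K for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_limit_joules(temperature_k, bits=1):
    # Minimum heat dissipated when erasing the given number of bits at
    # absolute temperature T, per Landauer's bound: bits * k_B * T * ln 2.
    return bits * K_B * temperature_k * math.log(2)

# Erasing a single bit at room temperature costs at least ~2.9e-21 joules,
# which is why the demon's bookkeeping ultimately rescues the Second Law.
print(landauer_limit_joules(300.0))
```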

Demon at the door

In a famous thought experiment, physicist James Clerk Maxwell proposed that a hypothetical being — now called Maxwell’s demon — could separate fast-moving “hot” molecules from slow-moving “cold” molecules, creating a temperature difference in apparent violation of the second law of thermodynamics. Later work suggested that this demon cannot really break the Second Law.

The thought experiment begins with two compartments of gas at the same temperature. Both have a mix of fast-moving (red) and slow-moving (blue) molecules.

T. Tibbitts

A hypothetical being selectively opens a door between the compartments to let the fast molecules pass one way and the slow molecules pass the other way.


That effort would make the gas in one chamber hotter, a temperature difference that could now be used to do work.


Another enigmatic issue emerging from studies of the Second Law involved its connection to the direction of time.

Laws governing molecular motion do not distinguish between future and past — a video of molecular collisions running backward shows them observing the same laws as a video moving forward. Yet in real life, unlike science fiction stories, time always flows forward.

It seems logical to suggest that time’s arrow was aimed by the Second Law’s requirement of increasing entropy. But the Second Law cannot explain why entropy in the universe has not already reached a maximum. Many scientists today believe time’s arrow cannot be explained by the Second Law alone (SN: 4/1/14), but also must have something to do with the origin of the universe and its expansion following the Big Bang. For some reason, entropy must have been low at the beginning, but why remains a mystery.

The Second Law hasn’t been rigorously proved

In his history of the Second Law, Wolfram recounts the many past efforts to provide the Second Law with a firm mathematical foundation. None have succeeded. “By the end of the 1800s … the Second Law began to often be treated as an almost-mathematically-proven necessary law of physics,” Wolfram wrote. But there were always weak links in the mathematical chain of reasoning. Despite the common belief that “somehow it must all have been worked out,” he commented, his survey showed that “no, it hasn’t all been worked out.”

Some recent efforts to verify the Second Law invoke Landauer’s emphasis on erasing information, which links the Second Law to information theory. In a recent paper, Shintaro Minagawa of Nagoya University in Japan and colleagues assert that merging the Second Law with information theory can secure the law’s foundation.

“The second law of information thermodynamics,” they write, “can now be considered a universally valid law of physics.”

In another information-related approach, Wolfram concludes that the Second Law’s confirmation can be found in principles governing computation. The Second Law’s basis, he says, is rooted in the fact that simple computational rules can produce elaborately complicated results, a principle he calls computational irreducibility.


While many researchers have sought proofs of the Second Law, others have repeatedly challenged it with attempts to contradict its universal validity (SN: 3/8/16; SN: 7/17/17). But a 2020 review in the journal Entropy concludes that no such challenges to the Second Law have yet succeeded. “In fact, all resolved challengers’ paradoxes and misleading violations of the Second Law to date have been resolved in favor of the Second Law and never against,” wrote thermodynamicist Milivoje M. Kostic of Northern Illinois University in DeKalb. “We are still to witness a single, still open Second Law violation, to be confirmed.”

Yet whether the Second Law is in fact universally true remains unsettled. Perhaps resolving that question will require a better definition of the law itself. Variations of Clausius’ statement that entropy tends to a maximum are often given as the Second Law’s definition. But the physicist Richard Feynman found that unsatisfactory. He preferred “a process whose only net result is to take heat from a reservoir and convert it to work is impossible.”

When the Second Law was born, Carnot simply described it without defining it. Perhaps he knew it was too soon. He did, after all, realize that the future would bring new insights into the nature of heat. In unpublished work preserved in personal papers, he deduced the equivalence between heat and mechanical motion — the essence of what would become the first law of thermodynamics. And he foresaw that the caloric theory would probably turn out to be wrong. He cited experimental facts “tending to destroy” caloric theory. “Heat is simply motive power, or rather motion which has changed form,” he wrote. “It is a movement among the particles of bodies.”

Carnot planned to do experiments testing these ideas, but death intervened, one of nature’s two (along with taxes) inviolable certainties. Maybe the Second Law is a third.

But whether the Second Law is inviolable or not, it will forever be true that human laws will be a lot easier to break.

Commentary: In conclusion, the second law of thermodynamics stands as a remarkable scientific principle, celebrating its bicentennial with a legacy that has profoundly shaped our understanding of the physical world. From its humble beginnings in Sadi Carnot's quest to improve steam engines to its far-reaching implications for the fate of the universe, the Second Law has proven to be a cornerstone of physics, chemistry, and engineering.

Yet, for all its apparent universality and importance, the Second Law remains enigmatic. Its various formulations – from the flow of heat, to the inevitability of waste, to the increase of entropy – all point to a deeper truth about nature that continues to elude rigorous proof. The ongoing efforts to establish its mathematical foundations and the periodic challenges to its validity underscore both its complexity and its centrality to scientific thought.

As we reflect on two centuries of the Second Law, we are reminded that even the most fundamental principles of science are not immune to scrutiny and refinement. The interplay between the Second Law and emerging fields like information theory and computational science hints at new frontiers of understanding. Perhaps, like Carnot himself who glimpsed beyond the caloric theory of his time, we stand on the cusp of deeper insights into the nature of energy, order, and time.

What remains clear is that the Second Law, whether ultimately inviolable or not, will continue to guide our technological progress, inform our view of the cosmos, and inspire the curiosity of scientists for generations to come. In a world where human laws are frequently broken, the enduring legacy of the second law of thermodynamics serves as a humbling reminder of nature's unyielding principles.


