
Distinguishing Design from Fine-Tuning

LegionOnomaMoi

Veteran Member
Premium Member
What Fine-Tuning is (and isn't)

Like so many terms within the sciences, "fine-tuning" is widely misunderstood, used inconsistently even by specialists, and misleadingly suggestive. Fine-tuning, or the fine-tuning problem, intuitively suggests (if true/real) the existence of a "fine-tuner" or designer. It's true that many specialists have used the term this way, but most aren't physicists (most are actually philosophers, theologians, or scientists in unrelated fields). Physicists, on the other hand, tend to use the term differently. To understand the various senses the term conveys, it is useful to find some common ground to them all. At heart, fine-tuning is similar to the Weak Anthropic Principle (WAP). The WAP is the simple and obvious truth that, because we are here, the structures, laws, constants, forces, dynamics, etc., of the universe must be such as to allow for the fact that we are here. Fine-tuning is a bit more involved, but only because it turns out there are a number of properties of the universe that, were any to be slightly different (sometimes by a percent or two, sometimes by as little as a billionth of a billionth of a percent), life or the universe wouldn't exist. However, life and the universe do exist.

Many, if not most, physicists are extremely dissatisfied with these fine-tunings. The most general problem is that not only are the finely-tuned parameters related to one another, they are often dependent upon the standard model or grand unified theories (GUTs). Most physicists, probably even those who believe in a creator deity, do not like to rely on divine creation to explain the cosmos. However, if the nature of the universe isn't designed, we can't simply state that the values required for fundamental constants or other "fine-tuned" parameters are due to chance. Why? Because these values (and even the parameters) differ in different models of the cosmos. We would like to be able to rule out candidate theories, whether string theory and its various successors or other alternatives to and extensions of the standard model, and one great way to do this would be to find models that adhere to known empirical findings and accepted theories while also solving the "fine-tuning problem." Put differently, if e.g., some model based on supersymmetry requires numerous parameters to all be extremely precise for no reason, but Susskind's cosmic landscape or some other multiverse theory makes finding such parameter values expected, because there are literally infinitely many universes with different values for the parameters and we happened to find ourselves in one that allows us to be here (compared to all the ones in which life doesn't exist because it can't), then the general preference is for the model in which the "fine-tuning" we find is expected rather than the one in which the values must be extremely precise for no reason.

You'll note that I've referred to values being extremely precise, but an obvious objection is that there is nothing problematic about precision per se. For example, the speed of light is very precise, and if it weren't we wouldn't have special relativity. Precision itself clearly isn't the problem. The problem is probabilistic (of sorts). If there is no reason for all the fundamental forces, the initial conditions of the universe, and the other parameters that are basic, foundational components of the nature of reality to be precisely set to values that are "biophilic" or friendly to life, why should countless (actually, uncountably many) small deviations from any of these yield a non-hospitable universe? Let's make this more precise with two examples by Tegmark:

“Suppose you check into a hotel, are assigned room 1967 and, surprised, note that this is the year you were born. After a moment of reflection, you conclude that this is not all that surprising after all, given that the hotel has many rooms and that you would not be having these thoughts in the first place if you’d been assigned another one. You then realize that even if you knew nothing about hotels, you could have inferred the existence of other hotel rooms, because if there were only one room number in the entire universe, you would be left with an unexplained coincidence.
As a more pertinent example, consider M, the mass of the Sun. M affects the luminosity of the Sun, and using basic physics, one can compute that life as we know it on Earth is only possible if M is in the narrow range 1.6 x 10^30 kg - 2.4 x 10^30 kg; otherwise Earth's climate would be colder than on Mars or hotter than on Venus. The measured value is ~ 2.0 x 10^30 kg. This apparent coincidence of the habitable and observed M-values may appear disturbing given that calculations show that stars in the much broader mass range M ~ 10^29 kg - 10^32 kg can exist. However, just as in the hotel example, we can explain this apparent coincidence if there is an ensemble and a selection effect: if there are in fact many solar systems with a range of sizes of the central star and the planetary orbits, then we obviously expect to find ourselves living in one of the habitable ones."
Tegmark, M (2004). Parallel universes. In J. D. Barrow, P. C. W. Davies and C. L. Harper (Eds.) Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity. Cambridge University Press.

So we can imagine (indeed, we must if we are to advance beyond the standard model, universally acknowledged to be deficient) a variety of possible universes. The odds against the one in which we actually reside are worse than those of getting hit by lightning twice a few seconds after winning several lotteries. However, if, like Tegmark's hotel rooms or the actual stars, our "universe" is really just one universe in a multiverse, then the fact that we find ourselves to have won this incredibly improbable lottery is simply that, out of all the universes in the multiverse, we exist in one in which we can.

This explanation, though, is problematic in two ways. First, it doesn't adequately deal with the probability space of "possible universes". That is, if we assume that our universe is it, we can't use Tegmark's argument, because there are no other "hotel room universes". This is it. Second, it sneakily introduces non-conditional probability after introducing the wrong way to understand the conditional probability. Going back to the hotel example, we can calculate the probability that we find ourselves in a particular room just by taking the number of possible hotel rooms and dividing 1 by this number. We can't do this exactly for our Sun, but because we know a lot about stellar astrophysics we can calculate a decent approximation of the probability. In both cases, though, we're not interested in the probability of getting some hotel room or some star or other, but rather in this: given that we find ourselves in a hotel room, or orbiting a star, what is the probability that the room number equals our birth year, or that the star's M value is the Sun's?
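To make the two questions concrete, here is a minimal numerical sketch (Python) using the mass ranges from the Tegmark quote above. The uniform distribution over stellar masses is purely my own illustrative assumption (real stellar mass distributions are heavily skewed toward low-mass stars), so treat the first number as a toy figure, not an actual astrophysical estimate:

# Habitable and physically possible ranges for the Sun's mass M, in kg,
# taken from the Tegmark quote above. The uniform prior is an assumption.
habitable_low, habitable_high = 1.6e30, 2.4e30
possible_low, possible_high = 1e29, 1e32

# Naive, unconditional question: what fraction of possible stars have a
# biophilic mass?
p_biophilic = (habitable_high - habitable_low) / (possible_high - possible_low)
print(round(p_biophilic, 4))   # ~0.008, i.e. roughly 1 chance in 125

# The selection-effect question is different: GIVEN that we exist as observers
# on a habitable planet, the probability that our star's mass lies in the
# habitable range is 1 by construction, and explains nothing by itself.
p_given_we_exist = 1.0
print(p_given_we_exist)

The ensemble-plus-selection move only helps with the first number: it looks troubling if there is exactly one star (or one universe) to draw, and unremarkable if there are many.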

I’ve come to agree that an example by Leslie is far better than either of Tegmark’s and better than I myself have thought for some time (much of my opposition, I realized, was due to its misuse by those who argued that fine-tuning “proves” a designer). I use a condensed form given by Rees:


"There are various ways of reacting to the apparent fine tuning of our six numbers. One hard-headed response is that we couldn't exist if these numbers weren't adjusted in the appropriate 'special' way: we manifestly are here, so there's nothing to be surprised about. Many scientists take this line, but it certainly leaves me unsatisfied. I'm impressed by a metaphor given by the Canadian philosopher John Leslie. Suppose you are facing a firing squad. Fifty marksmen take aim, but they all miss. If they hadn't all missed, you wouldn't have survived to ponder the matter. But you wouldn't just leave it at that - you'd still be baffled, and would seek some further reason for your good fortune.”
M. J. Rees (1999). Just Six Numbers: The Deep Forces that Shape the Universe. Weidenfeld and Nicholson.

It is important to note quickly that Rees' "six numbers" are far fewer than the typical count of fine-tuned parameters and constitute Rees' "solution", but that this number requires a lot of other assumptions and ignores additional problems (as Rees admits here and in e.g., Rees, M. J. (2007). Cosmology and the multiverse. In B. Carr (Ed.) Universe or Multiverse? (pp. 57-75). Cambridge University Press). However, the point is that it gives us the right way to deal with the conditional probability of finding ourselves in a universe just right for life. We don't ask "given that we won the lottery, what is the probability we did?" or "given that a firing squad of 50 all fired but missed me, what are the chances I survived death by firing squad?" Likewise, we don't ask "given we exist, what is the probability we find ourselves in a universe that allows our existence?" Rather, we want to know why, if the universe isn't designed, so many possible slight differences to its basic make-up would make it unfit for life (or for galaxies, or atoms, or its survival beyond a few seconds).
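Leslie's firing-squad metaphor can be put in made-up numbers to show why the trivial conditional question is the wrong one; the 1-in-100 miss rate per marksman below is an assumption of mine purely for illustration:

# Assume, purely for illustration, that each of the 50 marksmen independently
# misses with probability 0.01.
p_miss = 0.01
n_marksmen = 50

p_all_miss = p_miss ** n_marksmen
print(p_all_miss)   # ~1e-100: the unconditional probability of surviving

# The trivial conditional question -- "given that they all missed, what is
# the probability I survived?" -- has the answer 1 and explains nothing.
p_survive_given_all_missed = 1.0

# The interesting question is why an event of probability ~1e-100 occurred at
# all: whether some further fact (blank rounds, a bribed squad, or, in the
# cosmological analogy, an ensemble of universes plus a selection effect)
# makes the observation unsurprising.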

This is a very nuanced issue. On the one hand, we might approach it not in terms of probable universes but in terms of why we couldn't have existed in a universe in which gravitation was much stronger, or the cosmological constant different by .00000000000000000000000000001. After all, we are assuming that there is no fine-tuner (no designer), and thus we should expect as possible not only very different values for the "fine-tuned" parameters but also universes whose initial conditions are nothing like the ones which shaped our universe, i.e. universes that would re-collapse almost immediately or fail to form galaxies. This expectation fails. Why? Why do we not only find ourselves in a universe fundamentally suited for life, but find that life (and in many cases atoms, galaxies, or the survival of the universe beyond a very small interval of time) requires fine-tuning when it presumably wasn't fine-tuned?

Intelligent Design, Irreducible Complexity, and the Nature of Actual "Design" Arguments

Or, we could approach it using probabilities for particular universes. This is a basic starting point for multiple multiverse cosmologies. Or we could use other approaches. However, it is quite important to distinguish all of these from the kinds of arguments some people claim fine-tuning suggests, or even count as part of a more general "tuning" of the sort found in creationist/intelligent design arguments. In reality, there is a vast difference.

Those like Behe, Meyer, Dembski, Collins, Schroeder, Polkinghorne, etc., who argue that we couldn't have evolved without god or that the nature of life in all its complexities reveals a designer, are dealing (at least partly) with a common fallacy we as humans are predisposed to: seeing purpose, cause, or design whenever we see organization of the type we normally associate with planning or an organizer. In other words, claims that we can "find god" by realizing the beautiful intricacies of DNA, the incredible complexity of cells, and so on are non-god-in-the-gaps fallacies: they "see" design because of our inclination to equate particular phenomena or structures with design. (The difference between "cells are really complex, therefore god" and a god-in-the-gaps fallacy is a matter of what claims are made on the same evidence: saying that such complexity requires god is a "gaps" argument, while saying such complexity requires a designer is like "seeing" a face in the moon or Jesus in some rock formation. In order to process abstract, conceptual information to understand language or recognize faces we MUST ignore details and generalize away from specifics, but this has the downside of "seeing" instantiations of abstractions when they aren't there.)

In other words, the claim that living systems are "fine-tuned" in that they consist of millions of intricate structures that seem impossibly complicated to have resulted by chance is a very different claim than fine-tuning. First, the structure of DNA or the evolution of a flagellum are specific instances of structures among a much, much larger set that some claim are just "too" complicated to be relegated to chance the way such claims (either implicitly or explicitly) assume we can relegate e.g., stalagmites or diamonds. Second, we are talking about instances of elements (systems, phenomena, etc.) in the universe, not the make-up of the universe itself. To see this, think about one way of defeating an argument for a designer based upon apparent irreducible complexity, or upon the chances that evolutionary processes would result in the human brain or whatever. We might say that the physics of the universe makes these possible, and reduce all such issues to the problems in the probability arguments made by creationist/I.D. proponents here. Fine-tuning, though, is what makes not only the would-be evidence for "design" possible, but even the survival of the universe for long enough for evolution to start, or long enough to allow for atoms, or for atoms to form no matter how long the universe "survived".

Conclusion:

Essentially, the difference is that between trying to find the correct way to approach the probability of particular, seeming "designs" in the universe given that the universe makes them at least possible, and having a universe in which these are possible when, under the assumption there is no designer, we shouldn't find that there are so many things that must be "just so" in order for anything that seems to be designed as well as all that which doesn't to even exist.

Also, fine-tuning is most frequently used to argue for theories like multiverse cosmologies or particular unified theories rather than for a fine-tuner, while arguing for a designer from the would-be designs we find in the universe makes no predictions, explains nothing, and advances no models or theories.
 

LegionOnomaMoi

Veteran Member
Premium Member
I think the universe fine-tuned itself for life through eons of trial and error.
The problem is that this is to use fine-tuning in a sense in conflict with the most fundamental (and important) definition. One of the most important sets of finely-tuned "values" constitutes the initial conditions of the universe. By definition, initial conditions only occur once. It's true that the common (or maybe a common) solution here is to posit many initial conditions, but then we are no longer talking about our universe, as our universe was specified by one and only one initial set of conditions. However, the "trial and error" proposal isn't that far from a common solution to the fine-tuning problem; namely, that many universes in a chaotic multiverse are initialized all the time and for all time, and that ours just happens to be the universe "trial" that is free from the "errors" which would prevent our existence.
 

Parsimony

Well-Known Member
I think the universe fine-tuned itself for life through eons of trial and error.
That would require the Universe to have had different laws in the past, which we see no evidence of currently. Not only that, but it would have had to have some kind of reinforcing process that would have caused non-life friendly laws to change until they became life-friendly. What process would this be?
 

Nakosis

Non-Binary Physicalist
Premium Member
To distinguish fine-tuning from design arguments.

What about this idea that what we observe in the present can affect the past?
http://physicsworld.com/cws/article/news/2012/aug/03/can-the-future-affect-the-past

We make a choice of how to measure some phenomenon, and this in turn creates a past event. If time isn't so linear, could our scientific observations create a fine-tuned beginning for the universe?

As we observe more of the universe, it creates the past that leads to the present we observe. Basically, that fine-tuned past doesn't exist until we make the observation, which requires the past to be what it was in order for us to make that observation in the present.

I don't know if this has been proven on the quantum level or if it is still theory.

The past and future are only possibilities until we make an observation. So the myriad other universes which couldn't sustain life never existed because they would require a conscious observation of their present existence. Conscious observation would be a fundamental requirement of any universe's existence.
 

Kilgore Trout

Misanthropic Humanist
Essentially, the difference is that between trying to find the correct way to approach the probability of particular, seeming "designs" in the universe given that the universe makes them at least possible, and having a universe in which these are possible when, under the assumption there is no designer, we shouldn't find that there are so many things that must be "just so" in order for anything that seems to be designed as well as all that which doesn't to even exist.

I'm wondering if you could try rewriting this in parsable English?
 

outhouse

Atheistically
To distinguish fine-tuning from design arguments.

They cannot be fully separated.


Fine tuning implies the mechanic despite the naturalistic arguments used by some.

NOTHING refutes life evolved in a very rare environment within the universe.

Why promote the term fine tuning, if NOTHING was tuned for life??????????????????????
 

outhouse

Atheistically
Fine tuning fails often when large volcanic global events wipe out many species

Same for the next large asteroid impact.


Puddle thinking refutes it perfectly well.
 

LegionOnomaMoi

Veteran Member
Premium Member
Fine-Tuning Explained: Take II

I’m going to do something I don’t often do. I am going to completely contradict myself. I argued above that fine-tuning should not be taken, in general, to imply a “fine-tuner.” Now I am going to try to explain things a bit more clearly by saying that fine-tuning is a clear, explicit reference to a fine-tuner. Actually, it refers to “fine-tuners”. No, I didn’t just go from agnosticism to deism to polytheism since I started writing this; we are the “fine-tuners” that fine-tuning refers to (or rather, theoretical physicists and cosmologists are).
I mentioned above that the finely tuned parameters depend upon one's model of the cosmos and one's approach to its evolution. I don't think I sufficiently emphasized how key this is, or why instances of the term "fine-tuning" in physics literature are typically references to a "fine-tuning problem." I'm going to start with a hideously technical example of what I mean, because I want at least one example that isn't a popular science type of explanation, but rather something akin to what one might find as a practicing physicist in relevant fields.

Fine-Tuning Example I: A model with parameters you can see

If you want to skip it, you won't miss that much, but if you do read it you shouldn't feel like you have to understand much of it other than my explanatory paragraph which follows it. I promise the relevant bit of the example is readily explainable, and that I will provide another, far clearer example immediately following this one:
“An example of the KKLT-based modular inflation is provided by the racetrack inflation model of [citation not reproduced]. It uses a slightly more complicated superpotential

[equation 1.34: the racetrack superpotential; shown as an image in the original]

The potential of this theory has a saddle point as a function of the real and the complex part of the volume modulus: it has a local minimum in the direction Re ρ, which is simultaneously a very flat maximum with respect to Im ρ. Inflation occurs during a slow rolling of the field Im ρ away from this maximum (i.e. from the saddle point). The existence of this regime requires a significant fine-tuning of parameters of the superpotential. However, in the context of the string landscape scenario describing from 10^100 to 10^1000 different vacua (see below), this may not be such a big issue. A nice feature of this model is that it does not require adding any new branes to the original KKLT scenario, i.e. it is rather economical" (emphasis added)
Linde, A. (2010). Inflationary Cosmology. In M. Lemoine, J. Martin, & P. Peter (Eds.). Inflationary Cosmology (Lecture Notes in Physics Vol. 738). Springer.

Ok, even the bolded portion is probably kind of difficult to understand, but the key point is simple enough: equation 1.34 is a model of "inflationary" dynamics (not exactly, but you can feel free to research superpotentials on your own), and in order to make it consistent with observed "reality" the parameters (esp. ρ) require extreme precision or the model fails. However, this turns out to be a pseudo-problem, because the model isn't meant to be isolated from a more general theory of the universe and its evolution. In other words, although inflation poses various problems for string theory (some of which have been dealt with, at least mathematically), in this case a type of string inflation not only works out mathematically but makes a would-be issue of fine-tuning evaporate (or at least according to the author and his co-developers of KKLT; the "L" is for "Linde", the author of the chapter).
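To give a feel for what "fine-tuning of parameters" means in an inflationary context, here is a toy sketch in Python. It is emphatically not the racetrack potential itself; the toy potential, the numbers, and the 0.1 threshold are all illustrative assumptions of mine. It only shows that slow roll near a saddle point requires the curvature there to be small compared with the height of the potential (in Planck units), which generic parameter choices do not give you:

# Toy potential near a saddle point, V(y) = V0 - 0.5*m2*y**2, in reduced
# Planck units (Mpl = 1). Slow roll needs |eta| = |Mpl^2 * V''/V| << 1 at the
# saddle. V0 and the m2 values below are arbitrary illustrative numbers.
Mpl = 1.0
V0 = 1e-16

def eta(m2):
    # V''(0) = -m2, so eta = Mpl**2 * (-m2) / V0
    return -(Mpl ** 2) * m2 / V0

for m2 in (1e-15, 1e-16, 1e-18):
    ok = abs(eta(m2)) < 0.1
    print(m2, eta(m2), "slow roll possible" if ok else "no slow roll")

# Only when m2 is tuned well below V0/Mpl^2 does |eta| fall under ~0.1;
# a "generic" m2 of order V0 gives |eta| ~ 1 and no inflation.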


Fine-Tuning Example II: A simple real example and an explanatory analogy:

I promised a better, simpler example. It is given after another real example to make clear what the real example implies. If you suffered through the previous example, feel free to skip down to the portion I’ve marked with “!***!”. If you skipped the previous example, this one is easier and brief, so I would recommend reading all of it:

“Even though the standard model has already been tested in various ways at the fraction of a percent level (and passed with flying colors) it does suffer from a serious problem. When people calculate the quantum corrections (that is the corrections that take into account quantum mechanics) to the standard model they find that the corrections tend to drive the Higgs condensate up to the highest scale in the theory, which gives W and Z [weak boson] masses about a million trillion times bigger than the value found in experiments. Technically this problem can be overcome by adjusting one of the basic numbers in the theory to an accuracy of 32 digits. This is called “fine-tuning” and usually causes a deep feeling of revulsion in a physicist. !***! It would be like owning a radio whose tuning knob had to be adjusted very precisely in order to get your favorite radio station. If the knob on your radio had to be adjusted to a precise angle down to a million-trillionth of a degree you would probably just throw the radio away since you would be extremely unlikely to ever be able to tune your radio precisely enough to listen to your favorite station. Using the same kind of logic most physicists have thrown away the standard model in the sense that they expect that the Large Hadron Collider will prove it wrong, even though it is perfectly adequate to describe any other particle physics experiment that has ever been done.”
Bars, I., Terning, J., & Nekoogar, F. (2010). Extra Dimensions in Space and Time (Multiversal Journeys). Springer.

This time the model is the standard model (SM, or the standard model of particle physics). The problem is that theory and experiment disagree, which happens all of the time ("all models are wrong, but some are useful"). Whenever we have a model in the sciences, whether a model to predict future temperatures in climate science or a neuronal model that assumes neurons transmit information via the rate of spikes, and that model doesn't agree with observation, we try to adjust it by e.g., fiddling around with the equations that make up the model. When we do this with the SM, though, we find that in order to make the model right we need to assume that, for no reason at all, the entire universe depends upon a degree of precision we usually can't measure and almost never care about: accuracy to 32 digits. In various mathematical, physics, and engineering textbooks (among others) one will find e.g., π calculated to a few decimals or some other numerical approximation along with a statement like "it is inconceivable that anyone would require any further accuracy for any practical considerations" (from Thomson, Bruckner, & Bruckner's Elementary Real Analysis).

Consider what this means: essentially, to fix the model requires a fundamental parameter to be exact to 32 digits, which is to say that if this "basic number" in the model were off by even a billionth of a billionth of a billionth, for no reason at all, the whole standard model would fall apart. So which is more likely: that this "basic number" in the standard model really does require unheard-of precision for no reason, or that there's a bigger problem with the model that doesn't involve tuning?
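Here is a back-of-the-envelope sketch (Python) of what that kind of cancellation looks like numerically. The numbers are deliberately round, made-up stand-ins (a Planck-scale correction of ~10^38 GeV^2 against an observed ~10^4 GeV^2), not the actual standard-model calculation; and since ordinary floating point cannot even represent such a cancellation, the sketch uses arbitrary-precision Decimal:

from decimal import Decimal, getcontext
getcontext().prec = 60   # enough digits to see the cancellation

# Made-up, round-number stand-ins for the real calculation:
correction = Decimal(10) ** 38    # ~ (1e19 GeV)^2, the huge quantum correction
observed = Decimal(10) ** 4       # ~ (1e2 GeV)^2, the measured value
bare = observed - correction      # what the input parameter must be set to

# The bare parameter has to cancel the correction to ~34 digits in this toy.
# Nudge it by one part in 10^30 and the prediction is destroyed:
nudged = bare * (1 + Decimal(10) ** -30)
print((nudged + correction) / observed)   # ~ -1e4 instead of 1

# This is the radio-knob analogy from the quote: unless the knob (the bare
# parameter) is set to dozens of digits, the station (the observed W and Z
# masses) is lost.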

Conclusion: A Simple Definition of Fine-Tuning

Finally, time for a real “textbook” fine-tuning definition:
“Focusing on more quantitative aspects, the SM poses several questions, which remain unsolved in GUTs, related to the idea of naturalness or fine-tuning. Physical systems often contain small parameters, which are considered natural if the system develops a new symmetry when the parameter is set to zero…The SM however contains several parameters experimentally constrained to be extremely small, and for which no known symmetry exists within the SM, so they must be unnaturally fine-tuned, in the above sense.”
Ibáñez, L. E., & Uranga, A. M. (2012). String Theory and Particle Physics: An Introduction to String Phenomenology. Cambridge University Press.
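As a rough numerical illustration of that naturalness criterion (my own sketch, using standard one-loop scaling estimates with illustrative prefactors and a Planck-scale cutoff, not a real SM computation): a fermion mass is "natural" because setting it to zero restores chiral symmetry, so its corrections are proportional to the mass itself, while a scalar mass-squared has no such protecting symmetry and its corrections grow with the cutoff squared.

import math

cutoff = 1e19                      # GeV, illustrative Planck-scale cutoff
loop = 1 / (16 * math.pi ** 2)     # generic one-loop suppression factor

# Fermion (natural): correction proportional to the mass, only
# logarithmically sensitive to the cutoff, because m_f -> 0 restores symmetry.
m_f = 0.0005                       # GeV, an electron-like mass
delta_m_f = loop * m_f * math.log(cutoff / m_f)
print(delta_m_f / m_f)             # ~0.3: correction comparable to the mass

# Scalar (unnatural): no new symmetry appears as m_s -> 0, so the correction
# to the mass-squared is of order the cutoff squared.
m_s = 125.0                        # GeV, a Higgs-like mass
delta_m_s_sq = loop * cutoff ** 2
print(delta_m_s_sq / m_s ** 2)     # ~4e31: a ~32-digit cancellation is needed

Notice that the scalar case lands in the same ballpark as the "32 digits" in the earlier quote; that is no accident, since both describe the same hierarchy problem.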

Despite having said so explicitly at the start, I'm not really suggesting that my initial post was wrong in some sense. There, I was focusing more on what fine-tuning isn't, and describing it by comparison with the (almost always sensational) claims that fine-tuning is evidence for a designer/divine creation. Here, I wanted to provide something more down-to-earth regarding what it actually is. Overly simplistic? Sure. But even overly simplistic here can be a bit technical, and accuracy requires writing a book or assuming at least a graduate level understanding of particle physics, QFT, and the SM.
 

idea

Question Everything
Either you go with multiverse - in which everything exists, including God... or you go with fine-tuning, in which case the universe was designed by God. Either way, God wins.
 

LegionOnomaMoi

Veteran Member
Premium Member
What about this idea that what we observe in the present can affect the past?
In the actual paper your link relies on, this is limited to a well-known "kind" of "influence". It is very much related to Wheeler's delayed-choice experiment. Simplistically, with sufficiently controlled experiments in quantum mechanics, we can "choose" the nature of the measurement outcome after we make the measurement. The article's approach is a bit different. It relies on EPR-Bell nonlocality, exploiting the kind of causal violations Bell formalized and which were first demonstrated by Aspect et al., along with a time-symmetric understanding of "weak measurements" or weak values of measurements, to yield (seemingly) retrocausal effects. The authors present resolutions in terms of a dichotomy (either adopt their solution or change one of a few other conditions, at least one of which is not a condition we wish to relax), but this isn't really necessary, as made explicit in e.g., Dressel, J. (2015). Weak values as interference phenomena. Physical Review A, 91(3), 032116. Another solution involves the interpretation of the measurement process.

Regardless, however, retrocausality in quantum physics (including QFT) cannot happen at the macroscopic level; it isn't a matter of the future or present affecting the past but an example of past measurements on entangled systems affecting future measurement outcomes of those systems, and while fascinating it is less relevant to us than an honest-to-goodness cat actually being both alive and dead at the same time.

We make a choice of how to measure some phenomenon, and this in turn creates a past event.
Actually, assuming the conditions of measurement the paper does, we derive a paradox in which present/future measurements logically seem to be influenced by past measurement. Put overly-simply, it's past measurements influencing future measurement outcomes, not future measurements affecting past measurements.

If time isn't so linear
It isn't. In fact, just what "time" is happens to be a major matter of debate. But the direction of entropy isn't. The evolution of the universe can theoretically be understood as being at multiple stages in time depending upon one's frame of reference, and retrocausality could be an intrinsic property of the cosmos that actually matters on the quantum scale, but the universe would still unfold linearly. Also, entropy is actually extremely relevant to fine-tuning:

“When the second law of thermodynamics is applied to the whole Universe, then the immediate implication for the time’s arrow is that the Universe must have started in an extremely low entropy state that has been growing ever since its birth to its present value. Low entropy states imply high energies, which are a small subset of the general phase space for the possible initial conditions, a universe could be born with. They are the exception rather than the generic rule. On these basis, Sir Roger Penrose argued that starting the Universe at high energies seems to make the choice of our Initial Conditions very special indeed. Argument about the improbability of high energy states relies on equilibrium statistics, with a possible loophole of eliminating the possibility of some dynamic selection being at work. Statistical mechanics estimates, often used in literature, can quantify the Penrose statement as follows: the probability to have an initial patch inflate at some high energy scale [symbol shown as an image in the original] goes as [equation shown as an image in the original] where the entropy [expression shown as an image in the original]. This expression indicates that a GUT scale inflating patch like ours, is the most special and unlikely event to have started the universe, as likely as 1 part in [number shown as an image in the original]. Yet without this low entropy we can’t explain the observed arrow of time." (emphasis added)
Mersini-Houghton, L. (2012). The Arrow of Time in a Universe with a Positive Cosmological Constant Λ. In F. Adams, T. Buchert, & L. Mersini-Houghton (Eds.). Cosmic Update: Dark Puzzles. Arrow of Time. Future History (Multiversal Journeys) (pp. 51-69). Springer.

Regardless of the nature of time, the "observed arrow of time" is linear, especially when, as in this case, we refer to a system's entropy.

Could our scientific observations create a fine-tuned beginning for the universe?

Apart from in a model? No. I think most physicists would prefer even a divine creator to supposing that measurements we make now can change the origins of the universe billions and billions of years ago.

The past and future are only possibilities until we make an observation.
Let's grant that for the sake of argument. It's possibilities that create the fine-tuning problem. There are many possible initial conditions for the universe, but the one observed is extraordinarily unlikely.

So the myriad other universes which couldn't sustain life never existed because they would require a conscious observation of their present existence.
That's the sort of "weak multiverse" argument that some cosmologists/physicists employ: the multiverse is more like possible world semantics in that alternative universes aren't realized because they don't "work" (I'm simplifying here).

Conscious observation would be a fundamental requirement of any universe's existence.
That's explicitly or implicitly a part of most interpretations of quantum mechanics and quantum theory more generally, is the basis for the many-minds relative state interpretation (akin to the many-worlds interpretation), and underlies a lot of the transition in modern physics to referring to information as the fundamental "constituent" of reality.
 

LegionOnomaMoi

Veteran Member
Premium Member
Either you go with multiverse - in which everything exists, including God... or you go with fine-tuning, in which case the universe was designed by God.
1) Assuming that there is no multiverse and also assuming that a universe like ours which allows for life is EXTREMELY unlikely, the inference "that the universe would allow for life is astronomically improbable, therefore god" is invalid. There is no reason to suppose that a divine creator is not ten times more astronomically improbable, nor any reason to treat as impossible what, by definition, must be possible (something can't be astronomically improbable without there being at least some infinitesimal probability that it will occur).
2) No multiverse theory holds that "everything exists". Most don't even allow universes all that different from ours, and in some they aren't "real" universes.
3) Physics beyond the standard model isn't limited to multiverse cosmologies. Alternative solutions to fine-tuning problems exist.
4) Fine-tuning refers to the region of allowed values of parameters in our models, not to the universe itself.
 

Runewolf1973

Materialism/Animism
If that's so, it is far more tuned to non-life, considering the overwhelming majority of the universe isn't living.


Technically, none of the universe is "living", but there are certain forms of matter which interact in a more complex, peculiar manner. So, given enough time and a bit of luck I would say, conditions eventually arose which were favorable for the formation of those more complex interactions we call life. Everything interacts, therefore it is not surprising that given billions of years of change, things just might begin interacting...differently.
 

LegionOnomaMoi

Veteran Member
Premium Member
I'm wondering if you could try rewriting this in parsable English?
Here's the original:
"Essentially, the difference is that between trying to find the correct way to approach the probability of particular, seeming “designs” in the universe given that the universe makes them at least possible, and having a universe in which these are possible when, under the assumption there is no designer, we shouldn’t find that there are so many things that must be “just so” in order to for anything that seems to be designed as well as all that which doesn’t to even exist."

Here's the attempted rewrite:

Simple version: Arguments which claim that life and our evolution clearly suggest a designer (because of how intricate and so forth they are) turn on how improbable certain things in the universe are compared to others. Fine-tuning concerns the probability of the nature of the universe itself.

More nuanced version:

On the one hand, we have design arguments that point to things like how complex life is, how implausible it is that random evolutionary processes would yield such so-called "irreducibly complex" structures, how obviously "designed" things like DNA or nervous systems must be, etc. Nowhere is it asserted that any of these supposed "designs" are violations of physics. Indeed, those like Dembski, Behe, etc., use physics to show the supposed improbability that X would have evolved by chance or that Y set of processes would yield Z. Moreover, all such arguments require a universe in which other processes must exist without intelligent design, or there would be no reason for asserting that e.g., DNA is somehow "designed" but soap bubbles aren't. Thus all such claims of evidence for design which point to specific instances require a universe in which those instances are supposedly highly improbable, and then the problem is reduced to showing flaws in the application of probability theory and other problematic assumptions.

For example, Dembski's arguments for his "specified complexity" involve pointing out how "specified" evolutionary algorithms must be in order to generate the solutions they do. In particular, he uses Dawkins' infamous "METHINKS IT IS LIKE A WEASEL" example of how such order can be generated by a fairly simple algorithm fairly quickly. He then points out that whenever we do this (and in particular when we do so to describe real evolutionary processes, which are vastly more complicated), the algorithms required must be carefully selected in order to generate the desired results. The problem with this argument is that it mistakes what fitness functions and the like are. They aren't attempts to show how e.g., some specific set of mutations or genetic sequences WOULD have occurred, but how they COULD. That's because we BEGIN our models (most of the time) with what happened and, given that, ask how it might have happened. Thus it is false to say "[e]very sequence is therefore potentially a target for the [evolutionary] algorithm, and the only thing distinguishing targets is the choice of fitness function" (from Dembski's paper "The Concept of Information in Physics and Biology" in the volume From Complexity to Life: On the Emergence of Life and Meaning by Gregersen (Ed.), Oxford University Press).
This is like claiming climate scientists are cheating because they use actual temperatures to understand the climate and predict future temperatures.
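For anyone who hasn't seen it, here is a minimal sketch of the kind of "weasel" program Dawkins described (the population size, mutation rate, and alphabet are arbitrary choices of mine, not Dawkins' exact parameters). It illustrates the point above: cumulative selection finds the target quickly, but the fitness function is written with the known outcome in hand, i.e. it shows how such order COULD arise, not a prediction of what WOULD arise:

import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # Number of positions already matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, changing each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # Cumulative selection: breed 100 offspring, keep the fittest (or keep
    # the parent itself if every offspring is worse).
    parent = max([parent] + [mutate(parent) for _ in range(100)], key=fitness)

print(parent, generations)   # typically converges in on the order of 100 generations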

Fine-tuning concerns the universe itself, and in particular the most fundamental aspects of it as well as how it "began". Instead of would-be evidence of design in evolution or other particular instances which differ from what we would expect given just physics, fine-tuning concerns how physics itself is "tuned" to allow for life in ways that are problematic given the assumption that there exists no "tuner". That is, when there is no theoretical reason to suppose that so many different fundamental parameters couldn't slightly deviate or even massively vary, we would like to be able to assume that they could. When we allow for such variations, we find that the initial conditions of the universe and the fundamentals of physics can't change and still allow for life. A comparable "design" argument would be like finding out that, had life arisen anywhere else, any way else, or at any other time, intelligent life couldn't arise, and that the time, place, and manner in which it arose basically guaranteed that humans would.
 

LegionOnomaMoi

Veteran Member
Premium Member
So given enough time and a bit of luck I would say, conditions eventually arose
The conditions in question are things like the initial entropy of the universe, the values for the fundamental forces, the dimensionality of the universe, the cosmological constant, and similar conditions that all were set when the universe began or almost immediately after. Nothing much changed over the next several billion years that mattered, or at least mattered comparatively.
 

LegionOnomaMoi

Veteran Member
Premium Member
Fine tuning fails often when large volcanic global events wipe out many species
Fine-tuning can't possibly be relevant here. What fundamental parameters of which models of the cosmos (from the standard model to superstrings and the holographic multiverse) involve anything remotely to do with volcanoes, such that a volcanic eruption could bear on whether they are finely tuned? This is so completely irrelevant to what fine-tuning involves that it is no wonder you think it so linked to arguments for design: you are fundamentally mistaken about what it is (and, apparently, uninterested in educating yourself).
 