Please follow the lines and images below on an interesting question.
That is, why inferring design on functionally specific, complex organisation and associated information, e.g.:
. . . makes good sense.
The article features “Philosopher Nick Bostrom, director of the Future of Humanity Institute at Oxford University.”
I think Bostrom’s argument raises a point worth pondering, one oddly parallel to the Boltzmann-brain argument (a brain popping up by fluctuation from an underlying sea of quantum chaos), as he discusses “richly detailed software simulation[s] of people, including their historical predecessors, by a very technologically advanced civilization”:
>>Bostrom is not saying that humanity is living in such a simulation. Rather, his “Simulation Argument” seeks to show that one of three possible scenarios must be true (assuming there are other intelligent civilizations):
- All civilizations become extinct before becoming technologically mature;
- All technologically mature civilizations lose interest in creating simulations;
- Humanity is literally living in a computer simulation.
His point is that all cosmic civilizations either disappear (e.g., destroy themselves) before becoming technologically capable, or all decide not to generate whole-world simulations (e.g., decide such creations are not ethical, or get bored with them). The operative word is “all” — because if even one civilization anywhere in the cosmos could generate such simulations, then simulated worlds would multiply rapidly and almost certainly humanity would be in one.
As technology visionary Ray Kurzweil put it, “maybe our whole universe is a science experiment of some junior high school student in another universe.”>>
In short, once the conditions are set up for a large distribution of possibilities to appear, you face a significant challenge to explain why you are not in the bulk of the possibilities in a dynamic-stochastic system.
Let me put up an outline, general model:
Such a system puts out an output across time that will vary based on mechanical and stochastic factors, exploring a space of possibilities. In particular, any evolutionary materialist model of reality, including a multiverse, will be a grand dynamic-stochastic system.
Now, too, as Wiki summarises, there is the Boltzmann Brain paradox:
>>A Boltzmann brain is a hypothesized self aware entity which arises due to random fluctuations out of a state of chaos. The idea is named for the physicist Ludwig Boltzmann (1844–1906), who advanced an idea that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can brains exist to be aware of the Universe. The term for this idea was then coined in 2004 by Andreas Albrecht and Lorenzo Sorbo.
The Boltzmann brains concept is often stated as a physical paradox. (It has also been called the “Boltzmann babies paradox”.) The paradox states that if one considers the probability of our current situation as self-aware entities embedded in an organized environment, versus the probability of stand-alone self-aware entities existing in a featureless thermodynamic “soup”, then the latter should be vastly more probable than the former.>>
In short, systems with strong stochastic tendencies tend to have distributions in their outcomes, which are dominated by the generic and typically uninteresting bulk of a population. Indeed this is the root of statistical mechanics, the basis for a dynamical understanding of thermodynamics i/l/o the behaviour of large collections of small particles.
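As a quick illustrative sketch (not from the original article), the dominance of the generic near-50-50 bulk for a 500-coin string can be checked with exact binomial counts; the 225–275 heads window below is an arbitrary illustrative choice for “near 50-50”:

```python
from math import comb

N = 500                      # coins in the string
total = 2 ** N               # size of the configuration space, ~3.27e150

# Exact count of configurations whose heads-count lies within
# +/- 25 of the 250-head mean, i.e. the "generic" near-50-50 bulk.
bulk = sum(comb(N, k) for k in range(225, 276))

print(f"total configurations: {total:.3e}")
print(f"fraction in the 225..275-heads bulk: {bulk / total:.4f}")
```

Roughly 97% or so of all configurations fall in that narrow, disorderly band, while any particular meaningful, organised string is a single point in a space of about 3.27 * 10^150.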
For instance, one of my favourites (explored in Mandl) is an idealised two-state-element paramagnetic array, with atoms having N-pole up or down, a close atomic-scale physical analogue of the classic array-of-coins exercise. We can start with a string of 500 or 1,000 coins, which will of course follow a binomial distribution [3.27 * 10^150 or 1.07 * 10^301 possibilities respectively, utterly dominated by near 50-50 outcomes in no particular orderly or organised pattern], then look at an array where each of the ~10^57 atoms of our sol system has a tray of 500 coins flipped, say, every 10^-13 – 10^-15 s:
The outcome of such an exercise is highly predictable: no cases of FSCO/I (meaningful complex strings) will emerge, as the number of possible observed outcomes is so small relative to the set of possibilities that it rounds down to effectively no search, as the graphic points out.
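The “rounds down to no search” arithmetic can be sketched directly; here is a minimal back-of-envelope check, using the generous assumptions above (10^57 atoms, one 500-coin tray flip per atom every 10^-13 s, sustained for ~10^17 s, roughly the age of the observed cosmos):

```python
# Upper bound on the number of 500-coin outcomes our sol system
# could ever observe, vs. the size of the configuration space.
atoms = 10 ** 57            # atoms in the sol system (order of magnitude)
flips_per_sec = 10 ** 13    # one tray flip per atom every 10^-13 s
seconds = 10 ** 17          # ~ age of the observed cosmos in seconds

observations = atoms * flips_per_sec * seconds   # maximal sample size
space = 2 ** 500                                 # ~3.27e150 configurations

print(f"max observations: ~1e{len(str(observations)) - 1}")
print(f"fraction of space searched: {observations / space:.2e}")
```

The maximal sample, ~10^87 observations, is on the order of 10^-64 of the 2^500 configuration space: a one-straw-size sample of a haystack dwarfing the observed cosmos.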
This is of course an illustration of the core argument to design as credible cause on observing FSCO/I: once functionally specific complex organisation and associated information are present in a situation, they demand an explanation observed to be adequate, one that does not require us to believe in statistical needle-in-vast-haystack search miracles:
The Captain Obvious fact of serious thinkers making similar needle-in-haystack arguments should lead reasonable people to pause before simply brushing aside the inference to design on FSCO/I, including in the world of life and in the complex, fine-tuned physics of our cosmos that sets up a world in which C-chemistry, aqueous-medium, terrestrial-planet life is feasible.
But we’re not finished yet.
What’s wrong with Bostrom’s argument, and where else does it point?
PPolish and Mapou raise a point or two:
In short, a detailed simulation world will be a designed world.
Likewise, high school student projects do not credibly run for 13.7 BY; not even PhD projects do, never mind Kurzweil’s remark.
So, what is wrong with the argument?
First, an implicit assumption.
It assumes that unless civilizations keep killing themselves off too soon, blind chance and mechanical necessity can give rise to life, then to advanced, civilised, high-tech life that builds computers capable of detailed whole-universe simulations.
But ironically, the argument points to the likeliest, only observed cause of FSCO/I, design, and fails to address the significance of FSCO/I as a sign of design, starting with design of computers, e.g.:
So, real or simulation, we are credibly looking at design, and have no good empirical observational grounds to infer that FSCO/I is credibly caused by blind chance and mechanical necessity.
So, the set of alternative possible explanations includes implicitly questionable candidates and implicitly locks out credible but ideologically unacceptable ones, i.e. intelligent design of life and of the cosmos. That is, just maybe the evidence is trying to tell us that if we have good reason to accept that we live in a real physical world as opposed to a “mere” speculation, then intelligent design of life and cosmos has a place at the table as of right, not sufferance.
And, there is such reason.
Not only is the required simulation vastly too fine-grained and fast-moving to be credibly centrally processed, but the logic of complex processing points to a vast network of coupled processors. Which is tantamount to saying we have been simulating on atoms etc. In short, it makes good sense to conclude that our processing elements are real-world dynamic-stochastic entities: atoms, molecules etc. in real space.
This is backed up by a principle that sets aside Plato’s Cave worlds and the like: any scheme that implies grand delusion of our senses and faculties of reasoning i/l/o experience of the world undermines its own credibility in an infinite regress of further what-if delusions.
Reduction to absurdity in short.
So, we are back to ground zero: we have reason to see that we live in a real world in which cell-based life is full of FSCO/I, and in which the fine tuning of the cosmos also points strongly to FSCO/I.
Thence, to the empirically and logically best warranted explanation of FSCO/I.
Thank you Dr Bostrom for affirming the power of the needle in haystack challenge argument.
Where that argument leads, is to inferring design as best current and prospective causal explanation of FSCO/I, in life and in observed cosmos alike.
Any suggestions and comments?