Not sure why galaxies “should not exist”

Astronomers are constantly uncovering the “most distant,” “most massive” or “most energetic” objects in our universe, but today, researchers have announced the discovery of a truly monstrous structure consisting of a ring of galaxies around 5 billion light-years across.

The ring was traced by nine gamma-ray bursts (GRBs), and astronomers believe these GRBs (and therefore the galaxies they inhabit) are somehow associated, as all nine are located at a similar distance from Earth. According to its discoverers, there’s a 1 in 20,000 probability of the GRBs falling into this distribution by chance – in other words, they are very likely associated with the same structure, a structure that, according to cosmological models, should not exist.

The anti-Big Bang theorists always sensed that the stakes were high.

In 1938, renowned physical chemist Walther Nernst informed German physicist Carl F. von Weizsäcker that “infinite duration of time was a basic element of all scientific thought.” A beginning to the universe would betray “the very foundations of science,” and “we could not form a scientific hypothesis which contradicted the very foundations of science.”

But they did, and fruitful research resulted. Mid-2013 experiments confirmed the Standard Model, as it came to be known.

Worse, the Big Bang handed materialists a theoretical problem: fine-tuning, the universe’s apparent preparedness for life. In an eternal universe, wait long enough and anything might happen. Put a date on the universe and you invoke probability.

Irritatingly (for materialists), Earth seems particularly fine-tuned for life. We live in a nice neighborhood on a spiral arm of the galaxy, far from the black holes, supernovas (exploding stars) and magnetars (deadly radiation sources) at the center. But not so far as to lack heavy elements such as iron. And we have nice neighbors. Giant Jupiter stays far away and sucks up the asteroids that would otherwise kill us. By contrast, the giant uninhabitable planets that orbit stars other than our Sun either hog the habitable space or follow deadly, wonky orbits. Earth is a Goldilocks planet, just right for life.

Unable to dispute it, materialist cosmologists allow us to know they don’t like it. Nobelist Steven Weinberg has heard prominent physicist David Gross say, “I hate it,” and responds,

This is understandable. Theories based on anthropic calculation certainly represent a retreat from what we had hoped for: the calculation of all fundamental parameters from first principles. It is too soon to give up on this hope, but without loving it, we may just have to resign ourselves to a retreat … 1

Not necessarily.

How can we get the universe to play out of tune?

One alternative response has been denial. British intellectual Bertrand Russell declared in his 1935 book Religion and Science that humanity is a “curious accident in a backwater.” Conceivably, Russell didn’t know just how favorable our position is. But Stephen Hawking certainly does, and he has said of our dreary little backwater (1989):

We are such insignificant creatures on a minor planet of a very average star in the outer suburbs of one of a hundred billion galaxies. So it is difficult to believe in a God that would care about us or even notice our existence.

Another response has been to invoke extraterrestrials. University of Sussex astronomer John Gribbin argues that the creators of the world were “closer to men than to gods”:

Evolution by natural selection, and all the other processes that produced our planet and the life on it, are sufficient to explain how we got to be the way we are, given the laws of physics that operate in our universe. However, there is still scope for an intelligent designer of universes as a whole.

For now. Science writer Michael Shermer goes Gribbin one better, proposing “Shermer’s last law,” that any sufficiently advanced extraterrestrial intelligence is indistinguishable from God:

What would we call an intelligent being capable of engineering life, planets, stars, and even universes? If we knew the underlying science and technology used to do the engineering, we would call it an extraterrestrial intelligence; if we did not know the underlying science and technology, we would call it God.

In short, no designer can have qualities that transcend a sophisticated space alien. Intelligence maybe. But not wisdom.

A third, far more effective, response has been to develop the “Copernican” Principle (though Copernicus would have rejected it), sometimes called the Principle of Mediocrity: Scientists must assume — as a principle — that our planet is mediocre. At present, there is no way of knowing if that is true. It is a guiding assertion.

Media star astronomer Carl Sagan (1934-1996) dramatized the Principle in Pale Blue Dot:

You might imagine an uncharitable extraterrestrial observer looking down on our species over all that time — with us excitedly chattering, “The Universe is created for us! We’re at the center! Everything pays homage to us!” — and concluding that our pretensions are amusing, our aspirations pathetic, that this must be the planet of the idiots. (p. 12)

People don’t want to be thought idiots. The Principle sold. As a BBC writer riffs, “Far from being unique, many now regard Earth as an ordinary lump of space rock and believe that life ‘out there’ is almost inevitable.”

But mark what follows: In the absence of evidence, the Copernican Principle, itself a mere assertion, enables new Earths to merely be asserted. They do not need to be demonstrated; they can now be conjured. The Principle is thus hauntingly akin to Darwinism, which asserts a history of life consistent with materialism, conjures scenarios, and brooks no opposition from evidence.

Curiously, Darwin is frequently invoked in materialist cosmology. Steady State cosmologist Geoffrey Burbidge, who had taxed his colleagues with joining the “First Church of Christ of the Big Bang,” sought to link the 1957 paper that brought him fame with Darwin’s theory of evolution. Its conclusion intentionally echoed the conclusion of On the Origin of Species.

We will encounter that theme again in this tale, and consider what it means.

References Cited:

(1) Steven Weinberg, “Living in the Multiverse,” in Bruce L. Gordon and William A. Dembski, eds., The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, DE: ISI Books, 2011), p. 554.


Simulation Universe

Please follow the next lines and images concerning an interesting question.

That is, why inferring design on functionally specific, complex organisation and associated information, e.g.:

[image: an Abu 6500 C3 fishing reel mechanism] and equally:

[image: a cell metabolism network diagram]

. . . makes good sense.

Now, overnight, UD’s Newsdesk posted on a Space dot com article, Is Our Universe a Fake?

The article features “Philosopher Nick Bostrom, director of the Future of Humanity Institute at Oxford University.”

I think Bostrom’s argument raises a point worth pondering, one oddly parallel to the Boltzmann brain argument (a brain popping up by fluctuation from an underlying sea of quantum chaos), as he discusses “richly detailed software simulation[s] of people, including their historical predecessors, by a very technologically advanced civilization”:

>>Bostrom is not saying that humanity is living in such a simulation. Rather, his “Simulation Argument” seeks to show that one of three possible scenarios must be true (assuming there are other intelligent civilizations):

  1. All civilizations become extinct before becoming technologically mature;
  2. All technologically mature civilizations lose interest in creating simulations;
  3. Humanity is literally living in a computer simulation.

His point is that all cosmic civilizations either disappear (e.g., destroy themselves) before becoming technologically capable, or all decide not to generate whole-world simulations (e.g., decide such creations are not ethical, or get bored with them). The operative word is “all” — because if even one civilization anywhere in the cosmos could generate such simulations, then simulated worlds would multiply rapidly and almost certainly humanity would be in one.

As technology visionary Ray Kurzweil put it, “maybe our whole universe is a science experiment of some junior high school student in another universe.”>>

In short, once the conditions are set up for a large distribution of possibilities to appear, you have a significant challenge to explain why you are not in the bulk of the possibilities in a dynamic-stochastic system.
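
To feel the force of that point, here is a toy calculation with entirely made-up numbers (mine, not Bostrom’s): if even one civilisation runs many whole-world simulations, simulated observers swamp the unsimulated ones.

```python
# Toy arithmetic for the "bulk of the possibilities" point, using entirely
# made-up numbers (none of these figures come from Bostrom's paper).
real_observers    = 10**10   # hypothetical population of the one unsimulated world
simulations_run   = 10**6    # hypothetical count of whole-world simulations
observers_per_sim = 10**10   # hypothetical population inside each simulation

simulated = simulations_run * observers_per_sim
fraction_simulated = simulated / (simulated + real_observers)
print(f"fraction of all observers who are simulated: {fraction_simulated:.6f}")
# ~0.999999: on these assumptions, a randomly chosen observer is almost
# certainly inside one of the simulations, i.e. in the bulk of the distribution.
```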

Let me put up an outline, general model:

[image: general dynamic-stochastic system model] Such a system puts out an output across time that will vary based on mechanical and stochastic factors, exploring a space of possibilities. And in particular, any evolutionary materialist model of reality will be a grand dynamic-stochastic system, including a multiverse.

Now, too, as Wiki summarises, there is the Boltzmann Brain paradox:

>>A Boltzmann brain is a hypothesized self aware entity which arises due to random fluctuations out of a state of chaos. The idea is named for the physicist Ludwig Boltzmann (1844–1906), who advanced an idea that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can brains exist to be aware of the Universe. The term for this idea was then coined in 2004 by Andreas Albrecht and Lorenzo Sorbo.[1]

The Boltzmann brains concept is often stated as a physical paradox. (It has also been called the “Boltzmann babies paradox”.[2]) The paradox states that if one considers the probability of our current situation as self-aware entities embedded in an organized environment, versus the probability of stand-alone self-aware entities existing in a featureless thermodynamic “soup”, then the latter should be vastly more probable than the former.>>

In short, systems with strong stochastic tendencies tend to have distributions in their outcomes, which are dominated by the generic and typically uninteresting bulk of a population. Indeed this is the root of statistical mechanics, the basis for a dynamical understanding of thermodynamics in light of the behaviour of large collections of small particles.

For instance, one of my favourites (explored in Mandl) is an idealised two-state-element paramagnetic array, with atoms having N-pole up or down – a physical, atomic-scale close analogue of the classic array-of-coins exercise. We can start with 500 or 1,000 coins in a string, which will of course follow a binomial distribution [3.27 * 10^150 or 1.07 * 10^301 possibilities respectively, utterly dominated by near 50-50 outcomes in no particular orderly or organised pattern], then look at an array where each atom of our 10^57-atom solar system has a tray of 500 coins flipped, say, every 10^-13 to 10^-15 s:

[image: solar-system-scale coin-flipping exercise] The outcome of such an exercise is, highly predictably, that no cases of FSCO/I (meaningful complex strings) will emerge, as the number of possible observed outcomes is so small relative to the set of possibilities that it rounds down to all but no search, as the graphic points out.
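
For those who want to check the arithmetic, here is a rough back-of-the-envelope sketch; the flip rate and atom count follow the figures above, while the ~10^17 s of available time (roughly the age of the observed cosmos) is an assumption I am adding.

```python
from math import log10

# Back-of-the-envelope check of the claim above. The 10^57 atoms come from
# the text, a flip every ~10^-14 s is the middle of the quoted 10^-13 to
# 10^-15 s range, and the ~10^17 s of available time is my added assumption.
coins = 500
log10_configs = coins * log10(2)     # log10 of 2^500, about 150.5
log10_samples = 57 + 14 + 17         # atoms x flips per second x seconds, in log10

print(f"possible configurations: about 10^{log10_configs:.1f}")
print(f"maximum samples taken  : about 10^{log10_samples}")
print(f"fraction ever examined : about 10^{log10_samples - log10_configs:.1f}")
# The sampled fraction comes out near 10^-62, which is the sense in which
# the exercise "rounds down to all but no search" of the space.
```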

This is of course an illustration of the core argument to design as credible cause on observing FSCO/I: once functionally specific complex organisation and associated information are present in a situation, it demands an observed-to-be-adequate explanation that does not require us to believe in statistical needle-in-vast-haystack search-challenge miracles:

[image: islands-of-function search challenge] Also:

[image: search-for-a-search / active information diagram]

The Captain Obvious fact of serious thinkers making similar needle-in-haystack arguments should lead reasonable people to pause before simply brushing aside the inference to design on FSCO/I. Including in the world of life and in the complex fine-tuned physics of our cosmos that sets up a world in which C-chemistry, aqueous medium, terrestrial planet life is feasible.

But we’re not finished yet.

What’s wrong with Bostrom’s argument, and where else does it point?

PPolish and Mapou raise a point or two:

>>1

  • Simulated Universes scream Intelligent Design. Heck, Simulated Universes prove Intelligent Design.

    I can see why some Scientists are leaning in this direction. Oops/Poof does not cut it any more. Unscientific, irrational, kind of dumb.

  • ppolish,

    It’s a way for them to admit intelligent design without seeming to do so (for fear of being crucified by their peers). Besides, those who allegedly designed, built and are running the simulation would be, for all intents and purposes, indistinguishable from the Gods.

    Edit: IOW, they’re running away from religion only to fall into it even deeper.>>

In short, a detailed simulation world will be a designed world.

Likewise, high school student projects do not credibly run for 13.7 BY. Not even PhD projects, never mind Kurzweil’s remark.

So, what is wrong with the argument?

First, an implicit assumption.

It assumes that, unless races keep killing themselves off too soon, blind chance and mechanical necessity can give rise to life, then to advanced, civilised, high-tech life that builds computers capable of detailed whole-universe simulations.

But ironically, the argument points to the likeliest, only observed cause of FSCO/I, design, and fails to address the significance of FSCO/I as a sign of design, starting with design of computers, e.g.:

[image: MPU (microprocessor unit) block diagram] Where cell-based life forms show FSCO/I-rich digital information processors in action “everywhere,” e.g. the ribosome and protein synthesis:

Protein Synthesis (HT: Wiki Media)

So, real or simulation, we are credibly looking at design, and have no good empirical observational grounds to infer that FSCO/I is credibly caused by blind chance and mechanical necessity.

So, the set of alternative possible explanations has implicitly questionable candidates and implicitly locks out credible but ideologically unacceptable ones, i.e. intelligent design of life and of the cosmos. That is, just maybe the evidence is trying to tell us that if we have good reason to accept that we live in a real physical world as opposed to a “mere” speculation, then that puts intelligent design of life and cosmos at the table as of right not sufferance.

And, there is such reason.

Not only is the required simulation vastly too fine-grained and fast-moving to be credibly centrally processed, but the logic of complex processing would point to a vast network of coupled processors. Which is tantamount to saying we have been simulating on atoms etc. In short, it makes good sense to conclude that our processing elements are real-world dynamic-stochastic entities: atoms, molecules etc. in real space.

This is backed up by a principle that sets aside Plato’s Cave worlds and the like: any scheme that implies grand delusion of our senses and faculties of reasoning, in light of experience of the world, undermines its own credibility in an infinite regress of further what-if delusions.

Reduction to absurdity in short.

So, we are back to ground zero: we have reason to see that we live in a real world in which cell-based life is full of FSCO/I, and the fine tuning of the cosmos also points strongly to FSCO/I.

Thence, to the empirically and logically best warranted explanation of FSCO/I.

Design.

Thank you Dr Bostrom for affirming the power of the needle in haystack challenge argument.

Where that argument leads, is to inferring design as best current and prospective causal explanation of FSCO/I, in life and in observed cosmos alike.

Any suggestions and comments?

#cosmology, #math, #metaphysics, #philosophy, #physics, #science, #science-news, #universe

Harsh Thoughts: Cynicism Linked to Stroke Risk

Middle-age and older people who are highly stressed, have depression or who are perhaps even just cynical may be at increased risk of stroke, according to new research.

In the study, more than 6,700 healthy adults ages 45 to 84 completed questionnaires about their stress levels, depressive symptoms, feelings of anger, and hostility, which is a measure of holding cynical views about other people. The researchers then followed the participants for eight to 11 years, and looked at the relationship between these psychological factors and people’s risk of having a stroke.

“There’s such a focus on traditional risk factors — cholesterol levels, blood pressure, smoking and so forth. And those are all very important, but studies like this one show that psychological characteristics are equally important,” said study researcher Susan Everson-Rose, an associate professor of medicine at the University of Minnesota in Minneapolis.

Maths and Extra Terrestrial Civilization

I’ve found an interesting article from Robert Walker which can be found here:

Modern maths has a “Heath Robinson” type approach – at least philosophically –  with its many sizes of infinity and logical paradoxes. Would this be the same for ETs? Also, what if they experience time and space differently from us? Perhaps they can only reason using flashes of insight?

Or, perhaps topology is easy, but counting, for them, is an advanced concept few understand? Or perhaps they use quantum logic or some other logic we haven’t thought of yet? Or, might they see everything as fractals?

With no experience of ET mathematicians, we haven’t got much to go on. But, let’s take a look at a few of the ways ET maths could take different approaches from ours, or be hard for us to understand.

INFINITY, SETS AND LOGICAL PARADOXES

This is an area of maths (the use of sets or infinity or both) that for us is full of paradoxes – such as Russell’s paradox, various Cantor paradoxes, the Banach–Tarski paradox etc. It’s led to much debate and puzzlement over the last century or so.

Mathematicians and philosophers have many different ideas about it here on Earth, so it’s easy to imagine that ETs would also.

Some say the paradoxes have been solved.
Yes our maths is elegant in a way, and if you follow the rules carefully you don’t get any contradictions (at least as far as we know). However, if you look at those rules from a philosophically unattached standpoint you may get a different impression.

Modern set theory comes with:

  • The puzzling impossibility of counting many fundamental things in mathematics – as in, ordering them into an unending list. Yet everything “interesting” can be counted: ratios, finite decimals, square roots and, more generally, solutions to polynomial and trig equations – everything like that can be counted easily (see the enumeration sketch just after this list). If you haven’t come across this before, see Impossibility of counting most mathematical objects by Robert Walker (just a short summary I did, linking to the material on the subject).

    Our maths is so “Heath Robinson”, at least from a philosophical point of view – why this need to include so many things you never need in everyday mathematical life? It’s a bit like this potato-peeling machine:

    Ingenious maybe, beautiful even if you like such things – but why go to all that trouble to peel the potatoes?

    We have all this apparatus of higher orders of infinity, just to include a whole bunch of obscure numbers that no working mathematician ever needs. That is to say – they never need any of them as individual numbers, they just need to know, for logical reasons only, that all those uncountably many things exist.

    Why? It seems so clumsy.

    It is even stranger when you find out about the Löwenheim–Skolem paradox – that if somehow, “behind the scenes”, you replace all those uncountable infinities by other (rather intricate) finite and countable things, all the same results still hold true about them.

    That is – so long as the maths is expressible in a straightforward way using a finite number of symbols and proofs are easy to verify – “first order” maths.

    Techy detail for logicians: you can avoid the paradox, technically, with a “second order” formal language with uncountably many distinct symbols. Which doesn’t really solve the philosophical issue of course.

    Any human or ET mathematician will only be able to distinguish a (small) finite number of symbols from each other. It’s a general issue for any higher-order logic – it needs a proof theory before mathematicians can use it in practice – and when you do that, the paradox surfaces again (see Second-order logic – metalogical results).

    An ET could reinterpret our maths in this way and their theorems would match ours in every detail.

    • Would ETs follow the usual approach of human mathematicians – that most numbers and mathematical entities can’t be counted?
    • Or take other views on infinity like some human mathematicians – perhaps very practical “constructive” in their approach to maths for instance, so the question doesn’t arise (more on that later)?
    • Or – reinterpret all our maths in some complex abstract way, as in the Löwenheim and Skolem paradox  – but for them it’s not a paradox, just how they think about maths?
    • Or does the question just not arise for them for some other reason we haven’t thought of yet, or have some other meaning for them?
    • Or, like us, have lots of points of view on the subject? An unending philosophical debate that’s gone on for millions of years?
    • Could they have some other take on the whole question which we haven’t thought of?
  • Continuum hypothesis – why does our maths say that we can never know whether or not there are other orders of infinity between the number of ratios or whole numbers, and the number of infinite decimals like pi?
  • Axiom of choice – given infinitely many pairs of shoes, it is easy to choose one of each – for instance, choose the left-hand shoe each time. But for indistinguishable socks – is it possible to choose one from each pair?

    Howard Rheingold painted Shoes (photo by Hoi Ito)

    When you have a mathematical equivalent of infinitely many pairs of shoes, there is no problem picking out one of each. It’s easy, for instance, just choose the left one out of each pair.

    But it gets far harder to cope with the mathematical equivalents of infinitely many pairs of socks.

    That’s because they are identical to each other (you can swap your left and right socks and not notice that anything has changed). Our maths doesn’t let us pick out one of each – unless we add in an extra axiom, the axiom of choice.

    It seems an obvious axiom, innocuous even – that if you have infinitely many pairs, you can choose a singleton from each one. However, it turns out that if you add it in, this leads – not to inconsistencies quite – but to results so strange that they seem paradoxical to human minds.

    For instance, one of many famous puzzling consequences – it lets you split a sphere into a small number of geometrical “pieces” and combine them together to make two spheres of the same volume as the original, without any gaps.


    Banach–Tarski paradox

    If you accept it, you end up with maths that is more powerful – but it lets you prove unintuitive results such as that it’s possible to dissect a sphere geometrically into a small number of “pieces” (discontinuous but “rigid”) and re-assemble it to make two spheres of the same volume, without gaps.

    As another example – it lets you fill 3D space entirely with radius 1 circles – with none of them intersecting, yet no gaps, a sort of 3D space filling chain mail. Again most would find that paradoxical.

    Why does this axiom keep cropping up in Maths – and should we use it – or is it too powerful since it lets us prove paradoxical seeming results?

    Why does it matter, since in practice nobody ever is able to choose an infinite number of anythings in the real world? Nobody ever has an infinite number of pairs of socks, or of anything. So why do mathematicians need to think so much about their mathematical equivalents?

    Would ETs use the axiom of choice? If so, what do they make of its paradoxical results? Or is it not even an issue for them for some reason?

  • The arbitrary rules we use to keep maths consistent. For instance, in one of the most popular ways of creating a logical foundation for maths, ZF, large sets are called “classes” and a class can’t be a member of a set. There is no good mathematical reason for this. It is just a “kludge” – we have to do it or we end up with an inconsistent theory.

    You do it just because, if you don’t keep to the rules that have been worked out and just “follow your intuitions” about sets, you end up with contradictory results and paradoxes. Genuine unresolvable paradoxes.

    The most famous one is Russell’s paradox (more about this later in this page).
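
Coming back to the first bullet above, on counting: here is a rough sketch (my own illustration, in Python) of why the “interesting” numbers can be counted – the positive rationals, for instance, can be arranged into a single unending list, which is exactly what the real numbers cannot be.

```python
from math import gcd
from itertools import count, islice

# Rough illustration for the first bullet above: the positive rationals can
# be laid out in one unending list (Cantor's diagonal sweep), which is what
# "can be counted" means; the full set of real numbers cannot be listed this way.
def positive_rationals():
    for total in count(2):           # sweep the diagonals p + q = 2, 3, 4, ...
        for p in range(1, total):
            q = total - p
            if gcd(p, q) == 1:       # skip repeats such as 2/4, which equals 1/2
                yield (p, q)         # the pair (p, q) stands for the fraction p/q

print(list(islice(positive_rationals(), 12)))
# [(1, 1), (1, 2), (2, 1), (1, 3), (3, 1), (1, 4), (2, 3), (3, 2), (4, 1),
#  (1, 5), (5, 1), (1, 6)]
```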

The whole thing is really a bit of a kludge viewed somewhat dispassionately with your philosopher’s hat on rather than with your mathematician’s hat on.

It seems to work okay and is beautiful in its way. But is this really the best that we can do? And whether or not it is – is it such an obvious way of proceeding that ETs would have to end up with the same system, with all the same mathematical and philosophical ideas as ourselves?

I think it is possible that some ET mathematicians might have found some other solution or solutions.

Which might be better than ours, or worse, or just different. But it would be really interesting to learn – if

  • ET maths is generally similar to ours in its analysis of infinity, as well as paradoxes like Russell’s paradox
  • Or if there are many wildly different ways of doing it and we’ve only got one of them
  • Or if perhaps we are the odd ones out with a clumsy system because somehow as humans we have missed seeing some really simple ideas that seem obvious to most intelligent ETs.
  • Or even – we can’t rule this possibility out either – that amongst all these ideas, somewhere, we have some unique insight into it ourselves that other ETs have missed.

GÖDEL’S THEOREM

Gödel’s theorem is also quite a strange result – especially if understood in the context of Hilbert’s program to provide a firm foundation for maths, which failed.

Gödel showed that a sufficiently strong system of maths can’t ever prove its own consistency – and that if such a system ever does prove its own consistency, you know that something has gone wrong, because that means it is inconsistent.
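
For reference, the usual modern statement of the second incompleteness theorem – the result the paragraph above is gesturing at – reads roughly as follows, with Con(T) standing for the arithmetized sentence “T is consistent”:

```latex
% Second incompleteness theorem, stated informally in symbols:
T \text{ consistent, effectively axiomatized, containing basic arithmetic}
\;\Longrightarrow\;
T \nvdash \mathrm{Con}(T)
```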

They might well have a different slant on Gödel’s theorem, I think – it might mean something different to them, or they might have other results there we haven’t thought of.

INCONSISTENT MATHS

They might know that some of our axiom systems are actually inconsistent.

They might also use Paraconsistent logic far more extensively than we do and not be bothered about inconsistencies in the way we do.

‘I daresay you haven’t had much practice,’ said the Queen. ‘When I was your age, I always did it for half-an-hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.’

Lewis Carroll – the White Queen in Through the Looking-Glass, and What Alice Found There.

They might work happily with systems of mathematics in which a statement and its negation are both provable. In normal logic, anything follows from a contradiction, so such theories are useless – but in paraconsistent logic the same doesn’t apply, and you can work fine with both a statement and its negation simultaneously.

OTHER ET LIKE MATHS WE HAVE ALREADY – CALCULUS RESULTS PROVED USING INFINITESIMALS

Here an infinitesimal is a quantity that is non zero, and yet smaller than the reciprocal of any normal positive whole number. So smaller than 1/1, 1/2, 1/3, 1/4, … 1/1000, 1/10^10 – smaller than any of those – but non zero. It’s hard to make this idea consistent.

But it is also hard to make the idea of convergent sequences consistent – and the “epsilon-delta” method more usually used in calculus historically took several centuries to develop. The fundamental idea goes back to Bolzano in 1817: the (ε, δ)-definition of limit.

I won’t go into how it works (you can check out the (ε, δ)-definition of limit) – but if you’ve done calculus rigorously, e.g. at university, you’ve probably seen this diagram.
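
For completeness, and without going into how it is used, here is the definition itself in its standard textbook form, with f the function, a the point being approached and L the proposed limit:

```latex
% Standard (epsilon, delta)-definition of the limit:
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x\;
\bigl( 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr)
```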

It took a lot of effort by mathematicians before they had a reasonably rigorous way of doing calculus – and even so, during the rest of the nineteenth century they found many “wild cases”, bizarre things they found really hard to study – which led eventually to Cantor’s ideas and to the paradoxes we’ve already met, in the late C19 and early C20.

Robinson showed that you can get the same calculus results with infinitesimals as with ordinary convergent sequences. His proofs are, generally, simpler and more elegant also (once you have the infinitesimals).

Vopěnka in Prague then developed an “Alternative Set Theory” which starts maths on a different basis as regards ideas of infinity. With his ideas, the idea of an infinitesimal is far easier to make consistent – and it becomes more natural as a way to develop calculus than the idea of a convergent sequence, and gives a way you could develop maths from scratch where you might get the infinitesimal-type theorems proved before the convergent-sequence theorems.

So – that might just be an eccentric approach everywhere in the galaxy.

Or could be that some ETs take that as their basis for maths and see our approach as eccentric. They might prove calculus results with infinitesimals – and treat the “epsilon delta” method as an unusual alternative few use in practice – the reverse of our maths society.

That’s just a hint but enough of a hint to show that there can be other ways of looking at it. If they had gradually developed AST back in their equivalent of our C19 and C20 instead of ZF – then they might find our ZF strange.

AST is unlikely to become the basis of maths now, and it is not Vopěnka’s objective to do that as far as I know. But – if it had come first, before ZF, in an ET civilization’s mathematics, who knows?

OUR MATHS FOUNDATIONS COULD BE JUST HISTORICAL FLUKE LEADING BACK TO ORIGINS OF CALCULUS

Our present-day ideas could date back to some historical incident way back in maths history. E.g. perhaps if we had favoured the Leibniz approach to calculus more, instead of the Newtonian one – both were incomplete and had flaws in them – but Leibniz thought much more in terms of something rather like modern infinitesimals – maybe we’d have ended up with something more like AST when it finally got formalized better.
There are other ideas around that could be used as a basis also – just mentioning AST as one of many alternative foundational maths ideas.

MATHS WITHOUT INFINITY

ETs could also be pure Finitist or Intuitionist in their reasoning. If so they might make no use of different orders of infinity at all. This deals with many but not all of the puzzling features of modern maths.

They still would have some set theory paradoxes such as Russell’s paradox – intuitionistic or finitist maths doesn’t get around that.

See Intuitionism and infinity

DIFFERENT METHODS OF LOGICAL DEDUCTION

They might do mathematical deduction in a different way from us.

Actually human mathematicians have explored many methods of logical deduction, see:

Perhaps ETs have come up with other methods of logical deduction we haven’t thought of yet.

RUSSELL’S PARADOX

This is worth describing in detail because it uses such simple ideas, you’d think that just about all ETs would encounter it in their reasoning.

I like the way this is presented in Wikipedia, so I will just quote from the article on Russell’s paradox:

“Let us call a set “abnormal” if it is a member of itself, and “normal” otherwise. For example, take the set of all squares in the plane. That set is not itself a square, and therefore is not a member of the set of all squares. So it is “normal”. On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is “abnormal”.

Now we consider the set of all normal sets, R. Determining whether R is normal or abnormal is impossible: if R were a normal set, it would be contained in the set of normal sets (itself), and therefore be abnormal; and if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell’s paradox.”
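
To see how quickly the problem bites once self-reference is allowed, here is a loose computational caricature of my own (only an analogy – Python functions are not sets): model a “set” as a membership test, let R be the test for normality, and ask whether R passes its own test.

```python
# A loose computational caricature, not real set theory: model a "set" as a
# membership test (a function returning True or False).
def is_normal(s):
    """A 'set' s is normal exactly when it is NOT a member of itself."""
    return not s(s)

# R plays the role of "the set of all normal sets":
# x is in R precisely when x is normal.
R = is_normal

try:
    R(R)                      # "Is R a member of itself?"
except RecursionError:
    # R(R) is defined as not R(R), so the question never settles.
    print("No stable answer: asking R(R) unwinds into 'not R(R)' forever.")
```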

As soon as you start thinking in terms of abstract concepts, and the idea of a set, or collection of things – then Russell’s paradox is not far away.

Human mathematicians didn’t spot this paradox in our thinking until 1901. Though it’s closely related to the ancient Epimenides paradox.

There’s no resolution to it, except to limit our reasoning to prevent it happening, with no really good mathematical basis for doing that.

Is it possible that some ETs don’t encounter Russell’s paradox – if so – why, and how do they reason? Or do encounter it but for some reason don’t find it paradoxical? Or is it a paradox for all ET mathematicians?

MATHS WITHOUT LINEAR TIME

More radically than that – ETs might not necessarily have a sense of linear time like us. We have a clear sense of past, present and future. And we know exactly where we are in that time stream. But some ETs might live in a world where hardly anything changes from day to day. So there is no need to remember when things happened, but it may be very important to know where they happened.

If so they might have a way of seeing the world that is spatially based – with linearly ordered time an abstract concept they find really hard to grasp. I can imagine this, e.g., if they live in the oceans beneath the surface of an icy moon like Europa, with no idea that the rest of the universe exists, no seasons, nothing except gradients of temperature, chemical gradients etc. They might have long-term memory but no short-term memory – as we understand their world.

After all, in special relativity time does play a rather strange role. It’s not as easy to understand as a single ordered time stream.

Perhaps there are other ways of thinking about the universe that start from a more spatial basis – not that they have no idea of time at all – but – that they don’t order it in a strictly linear way. What other ways of ordering it, they might have, I don’t know.

MATHS THAT IS BASED ON A QUANTUM MECHANICS TYPE WAY OF EXPERIENCING THE WORLD

Or – they do use linear time but are totally unable to experience it directly, so it is a strange, very abstract concept – while at the same time they maybe find some other ideas, e.g. quantum-mechanics-type ideas, easier to understand.

Maybe they think in terms of superpositions of many states at once – and collapsing of uncertainties. Maybe their maths would somehow reflect that – they would know what a linear ordering is – but would not be like us, where nearly all the most interesting mathematical spaces are based on notions of distance and linear orderings along lines – maybe they don’t have geometry either as we have it, but in some other form not based on Euclid’s axioms.

MATHS WITH COUNTING AS AN EXTREMELY ABSTRACT CONCEPT RARELY USED AND HARD TO UNDERSTAND

And indeed (these are not necessarily the same ETs – these may be entities that live as gas clouds, or films like stromatolites, colonies of microbes that merge and separate and form greater or lesser intelligence depending on how many individual microbes are involved – sort of like sponges, which you can strain through a sieve and they come together again as if nothing happened) – they could go as fundamental as different ideas about counting.

For creatures like that, topology could be fundamental to their maths, everything continuous, no discrete shapes. They might think naturally in terms of open and closed sets (regions with or without a boundary) – or some other topological primitives we haven’t thought of yet.

Advanced, complex theorems in topology would be child’s play to them, as easy as 1, 2, 3, while counting would be an incredibly abstract idea they could formulate mathematically but perhaps find hard to grasp.

MATHS WITH EXTREMELY SHORT DEDUCTION SPANS

Perhaps they can’t make long deductions like we do. If they have hardly any time-ordered short-term memory – they remember everything perfectly if they want to, but are not able to order it in time for more than a few seconds – then the very idea of chains of logical deduction may be alien to them, for anything more than a few deduction steps.

Instead they could rely extensively on seeing things at a glance. For instance with small numbers of things, we have the ability to see how many there are at a glance, without need to count them as 1, 2, 3.

See Subitizing

When you are familiar with geometry, you can often see geometrical theorems at a glance.


If you are used to geometrical ideas, you may be able to see at a glance that both squares have the same total area, and that therefore the two white squares at the right add up to the same total area as the single white square to the left, and see also that this relationship between the area of the square on the diagonal and the squares on the two shorter sides holds for any right-angled triangle. This is the Pythagorean theorem.
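
For anyone who prefers the algebra behind that picture, with a and b the two shorter sides and c the diagonal, the rearrangement amounts to:

```latex
% A big square of side (a + b) contains the tilted square on the diagonal c
% plus four right triangles of area ab/2 each:
(a + b)^2 = c^2 + 4 \cdot \tfrac{1}{2} a b
\;\Longrightarrow\;
a^2 + 2ab + b^2 = c^2 + 2ab
\;\Longrightarrow\;
a^2 + b^2 = c^2
```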

Mathematicians often talk about suddenly seeing a proof of a theorem at a glance. Here is Professor Roger Penrose talking about one such moment:

A colleague (Ivor Robinson) had been visiting from the USA and he was engaging me in voluble conversation on a quite different topic as we walked down the street approaching my office in Birkbeck College in London. The conversation stopped momentarily as we crossed a side road, and resumed again at the other side. Evidently, during those few moments, an idea occurred to me, but then the ensuing conversation blotted it from my mind!

Later in the day, after my colleague had left, I returned to my office. I remember having an odd feeling of elation that I could not account for. I began going through in my mind all the various things that had happened to me during the day, in an attempt to find what it was that had caused this elation. After eliminating numerous inadequate possibilities, I finally brought to mind the thought that I had had while crossing the street – a thought which had momentarily elated me by providing the solution to the problem that had been milling around at the back of my head! Apparently, it was the needed criterion that I subsequently called a ‘trapped surface’ and then it did not take me long to form the outline of a proof of the theorem that I had been looking for. Even so, it was some while before the proof was formulated in a completely rigorous way, but the idea that I had had while crossing the street had been the key.

The Emperor’s New Mind

What if the ETs can only do mathematics in that way – as sudden moments of insight?

If they also have topology as fundamental – things like intersection of sets and various distinctions of types of sets and how they can interact – their theorems might not use straight lines and circles.

Instead, maybe their advanced theorems consist of a huge Jackson Pollock type painting of blotches which interact in complex ways – which they can see at a glance but for us is almost impossible to understand.

Perhaps an ET might draw something like this, show it to us and say “This is the maths we use for constructing our spaceships”  – and expect us to understand at a glance – and have no other way of presenting their maths.

Jackson Pollock – biography, paintings, quotes of Jackson Pollock

Interestingly, “Action painting” like this is based on the idea of trying to tap into an archetypal visual language.

Proving theorems for them might consist of spending hours, even days painting intricate patterns of blotches on a large canvas until they can step back and look at what they painted, and say “I see it now!”.

MATHS AS SUDDEN INSIGHT AIDED BY PROOF

Less radical than that, we can imagine that ET mathematicians might have normal proof methods, as we do – but a far higher degree of sudden insight. What if they are all Ramanujans?

After all human mathematicians don’t, in practice, make much use of formal proof. We work on mathematical intuition most of the time, informal deductions. Even the most detailed proofs of a working mathematician wouldn’t count as a completely rigorous proof in first order formal logic. Yet, we have no doubt that these proofs are correct.

So, though their maths may be based on similar deduction methods to us, they might make so many intuitive leaps that it is really hard for a human mathematician to understand what’s going on.

The Indian mathematician Srinivasa Ramanujan came up with pages of mathematical results which he recorded in his notebooks, with no mathematical proof. That’s partly because paper was expensive, so he did his rough working on slate, and then just recorded the answers in his notebooks.

Still, he also had a remarkable level of mathematical intuition, and intuited many results which he could not prove rigorously – most of which were proved later by other mathematicians. His notebooks, which were intended for his personal use, contain a few mistakes, but very few; nearly all his intricate and surprising formulae and results are correct. Many of them were startling new results in mathematics.

A devout Hindu, he attributed his results to inspiration from the goddess Namagiri Thayar, and also saw visions of some of the formulae in his dreams.

“While asleep, I had an unusual experience. There was a red screen formed by flowing blood, as it were. I was observing it. Suddenly a hand began to write on the screen. I became all attention. That hand wrote a number of elliptic integrals. They stuck to my mind. As soon as I woke up, I committed them to writing.”

Perhaps this also might give us an idea of what ET maths might be like if they depend on sudden insight and a high level of mathematical intuition, with only a small amount of deductive proof.


Page from the Ramanujan notebooks describing his “Master Theorem”

PDF photographs of his original notebooks are at the bottom of this page, and photocopy-type scans here.

Their communications could be filled with dense sheets of equations – and if they are all Ramanujans, just a single line on a single page, which they can see to be true instantly, might require hundreds or thousands of lines of our more clumsy, intuitive proof methods.

FRACTAL MATHS

They might also think in terms of fractals – see fractals all around them, and classify fractals, and think of everything else in terms of these as their primitives.

This image was done by Ondřej Karlík.

I don’t know how it would work, we don’t have any maths like this as far as I know, but they might find fractals like this easier to understand than our triangles, squares and circles. And try to approximate a circle as a fractal.

The fractal shown here is an example of a Mandelbulb – a recently discovered type of 3D fractal – based on the Mandelbox, another 3D fractal discovered in 2010 by Tom Lowe.
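
I can at least hint at the recipe with the familiar two-dimensional ancestor of such shapes, the Mandelbrot set: iterate z → z^2 + c and keep the points whose orbits stay bounded. A crude text-mode sketch of my own (not related to the image above):

```python
# Not the 3D Mandelbulb above, but its familiar 2D ancestor: the Mandelbrot
# set, defined by iterating z -> z*z + c and keeping the points c whose
# orbits never escape. A crude text-mode rendering, as a sketch only.
def escapes(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit is guaranteed to diverge
            return True
    return False

for y in range(10, -11, -1):                    # imaginary part from 1.0 down to -1.0
    print("".join(" " if escapes(complex(x / 20, y / 10)) else "#"
                  for x in range(-40, 11)))     # real part from -2.0 to 0.5
```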

ETS WITH DISCRETE GEOMETRY

When you think about geometry, you will probably have in mind continuous geometry with ideas of straight lines and points.

However, a less well-known area of maths is taxicab geometry. For humans, this is mainly of interest in recreational mathematics. You can use squares, or hexagons, or triangles as the building blocks.

But it’s also the geometry used for cellular automata – and for discrete simulations of water flow, and many computer models.


Taxicab geometry – similar to routes traveled by taxis in modern grid network type cities. The three paths shown in red, blue and yellow are all the same length. Green path shows the distance in a continuous geometry.
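
As a minimal illustration of the caption’s point (my own sketch), here are the two distance notions side by side for a pair of grid corners:

```python
from math import hypot

# Minimal version of the caption's point: every monotone grid route between
# two corners has the same taxicab (L1) length, and it is longer than the
# straight-line Euclidean (L2) distance of the green path.
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

a, b = (0, 0), (6, 6)            # two opposite corners of a 6 x 6 block grid
print("taxicab distance  :", taxicab(a, b))               # 12, for red, blue and yellow alike
print("euclidean distance:", round(euclidean(a, b), 2))   # about 8.49
```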

So that’s another possibility. ETs could make far more extensive use of discrete geometries, and might make hardly any use of continuous geometry.

It’s not as if our space is continuous in any essential obvious way. We can’t measure anything to infinite precision. So continuous space is as much of an approximation as a discrete space. But for some reason human mathematicians have settled on a continuous geometry as the “default” way of thinking about space.

Continuous geometry does have the advantage of isotropy – it is hard to make an isotropic discrete geometry (e.g. one with no “preferred direction” for fast travel). But that again might not be impossible (I actually wrote a paper about isotropic discrete geometries; I might have a go at publishing it, but haven’t attempted to yet – anyway, I found that there are techniques you can use to create isotropic discrete geometries – that is, isotropic in the limit as the cells get smaller and smaller. It took a bit of lateral thinking, but once I got the idea it wasn’t that hard – I found two different ways to do it; maybe you can think of others? I think the main reason we don’t study them is just that nobody is much interested in them).

Another way is to use discrete gas cellular automata. There are exact solutions to equations of gas diffusion and incompressible liquid flow on hexagonal lattices. This lets you construct cellular automata, evolving just according to rules about nearest neighbours, that have things like expanding circular waves. Here is an example of a gas automaton:
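
In case the embedded demo doesn’t come through here, a minimal sketch of one such lattice gas follows; I have used the simple HPP-style square-grid variant rather than the hexagonal lattice mentioned above, purely because it is the easiest to write down.

```python
import numpy as np

# A minimal lattice-gas sketch. This is the simple HPP-style model on a
# square grid (my choice of the easiest variant, not the hexagonal type
# mentioned above). Each cell holds up to four particles, one per direction
# N, E, S, W, stored as boolean occupation numbers.
H, W = 64, 64
rng = np.random.default_rng(0)
cells = rng.random((4, H, W)) < 0.2            # roughly 20% occupancy per direction

SHIFTS = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}   # N, E, S, W streaming moves

def step(cells):
    n, e, s, w = cells
    # Collision rule: a cell holding exactly one head-on pair rotates it 90 degrees.
    ns_pair = n & s & ~e & ~w
    ew_pair = e & w & ~n & ~s
    n2 = (n & ~ns_pair) | ew_pair
    s2 = (s & ~ns_pair) | ew_pair
    e2 = (e & ~ew_pair) | ns_pair
    w2 = (w & ~ew_pair) | ns_pair
    collided = [n2, e2, s2, w2]
    # Streaming: every particle moves one site in its own direction (periodic edges).
    return np.stack([np.roll(collided[d], SHIFTS[d], axis=(0, 1)) for d in range(4)])

total_before = int(cells.sum())
for _ in range(100):
    cells = step(cells)
print("particles conserved:", total_before == int(cells.sum()))
```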

 

What if ETs think of it in terms of discrete geometry as their “default” way of thinking about the space they live in – and, unlike us, do all their physics using discrete geometries like this?

They might have continuous geometry as a recreational area of maths similar to taxicab geometry. Again most ETs, possibly, might not even have heard of continuous geometry.

I don’t know how likely or possible this is. Just putting it forward as a possible idea to think over – is it possible that ETs could have discrete rather than continuous geometry?

ETS WITH COMPLEX MATHS, BUT WITHOUT NUMBERS

All modern human societies have numbers in some form. Many different counting systems, and some of them are inefficient for counting large numbers – but they all have numbers.

Many birds and animals can also “count” to some extent.

So we tend to think that counting will be universal amongst ETs. But would it?

What about an intelligent slime mold? Or an intelligent creature that lives in a Europa type ocean, and has almost no short term memory? Would counting come naturally to them also?

They might think in terms of linear orderings instead for instance. And perhaps have fuzzy continuous geometrical primitives, or topologically equivalent sets as their primitives, understand everything in terms of topology instead of discrete sets.

You can go a long way in some areas of maths without ever mentioning numbers or counting things. Surely they’d have some equivalent but it might be as abstract for them as open and closed sets are for most of us. Could be that non mathematician ETs don’t even know about numbers – and in maths, they use them only in particular specialized fields.

ETS WITHOUT MATHS

What if the ETs don’t use maths at all, as a formal discipline at least?

After all, many humans get by fine with very little use of maths. Suppose they do everything by biological engineering and analogue computing; they might have a poetic / artistic approach even to traveling between stars.

It’s only recently that mathematicians have become common and important elements of society – not that long ago there would be only a few mathematicians in an entire country. Perhaps part of the reason we have so many mathematicians nowadays is because of the success of maths in technology.

So, suppose that the ETs don’t need maths to build complex machines, even computers – but somehow – like slime molds perhaps – can do it instinctively.

They might not be as mathematical as humans are, yet accomplish as much or more technologically. Or indeed, what about hive minds? Colonial ETs where no individual is intelligent, just the community as a whole. Would they be able to count?

Also, how limited is our vision of the range of possibilities for ETIs?

We have so many examples on Earth – slime moulds, ants, bees, dolphins, birds etc. – to use as analogies for ETs, but they all

  • use the same DNA
  • same biochemistry, same building blocks
  • all evolved under 1 Earth gravity, one atmosphere pressure, limited temperature range
  • on the surface of a planet of a G type yellow dwarf star with a large Moon etc etc

Of course, we can only reason by analogy from what we know.

But some ET life might be radically different in some way we haven’t yet imagined in their fundamental biology or life processes, not closely resembling any of the creatures we know about on the Earth. So what might that do to their maths?

WHAT WOULD COMPUTERS BE LIKE FOR ETS WHO RARELY USE NUMBERS?

This is a somewhat forgotten episode of computing.

If you used the word “computer” in 1950, this is what they would think you were talking about. It’s not a programmed, Babbage-type mechanical computer – rather, it is an analogue machine that doesn’t use numbers internally at all. Skip to 1:26 to see the computer in action. Just a minute or two of it.

 

At 1:45 “If you look inside a computer, you find an impressive assembly of basic mechanisms. Some of them are duplicated many times in one computer”

Wikipedia article about it: rangekeeper.

If they have no idea of numbers – or numbers are very abstract concepts for them – then they could still have analogue computers like this, as the computers are based on direct analogue connections between things and don’t need to use numbers as such.

They could go on and develop analogue electronic computers also – instead of the numbers based digital computers we have. They’d have many challenges to meet – but then the early digital computers did also.

Hard to say if a technological society much like us that developed analogue computers instead of our digital computers would be further ahead than us or behind us by now.

Surely at any rate they’d be able to develop an analogue electronic computer based technology one way or another.

Here are a few things we are exploring as humans – which might also point the way to alternative histories for other ETs.

The last one points to a rather radical way ETs could be different. They might be slime moulds, able to just extrude parts of themselves to use as computing devices in machines.

This suggests the possibility of ETs that either have little by way of mathematics – or who have maths but not based on numbers, who might well have advanced technology including spaceships.

Or, they might not be technologically advanced. If not mathematically inclined, still they might be great philosophers, or artists, or poets or musicians, and might have long-lived non-technological civilizations. It could be by inclination, or it could be for the simple reason that, for instance, they don’t have hands – maybe like parrots, clumsy and not very strong – or like octopuses, they live in the sea, not an easy place to develop technology (without fire) – or like dolphins, they have no hands or any easy way to build anything.

EXPECT SOME GROUNDS FOR COMMUNICATION

If they do have maths, I think it is possible that ET maths could be so different from ours that it is hard to communicate to start with. But I would be astonished if we didn’t eventually find close parallels here and there. Which might be counting. Or it might be topology. Or might be Gödel’s theorem. Or might be quantum mechanics. Or might be Russell’s paradox, or an alternative set theory that is used by only a dozen or so people in our society – or paraconsistent logic – or fractals. Eventually I expect we’d find some common area of maths.

Then, once we’ve done that – especially since we do live in the same universe – we would finally find a way to map almost everything into terms we can understand to some extent.

BUT MIGHT NOT FIND THEIR MATHS EASY TO UNDERSTAND

But I am not certain that we’d find the maths immediately easy to understand. We might or might not. Without any previous experience of ET maths, I think it is hard to know for sure.

It is possible there are some ways of thinking that would involve many kinds of “aha”s of insight before humans can get what they are about. After all, if you look at the history of human maths, many ideas that are commonplace to us now were not even thought of for centuries or millennia.

E.g. the concept of zero or of a negative number, or of a ratio, or of a uniform way to solve any quadratic equation – these are all things we teach nowadays – some at primary school and some at secondary school – but a few centuries ago these were advanced areas of maths that only a few humans in the whole world understood – and go back further and there were times before any of those concepts were understood.

A few millennia back, nobody in the world understood the mathematical idea of zero, their idea of ratios was very different from ours, and they had no idea of solving the quadratic in its general case – they could probably solve a few special cases of the Pythagoras theorem by trial and error – and they had bizarre ways of working with fractional amounts, e.g. the unit fractions of the Sumerians, with everything expressed as sums of reciprocals of whole numbers. That seems very clumsy to us, though it did have some nice points about it – but the main thing is, that was a whole society of humans, as intelligent as ourselves, who didn’t think of any of the modern ideas of maths.
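
For concreteness, the “general case” that eluded them, and that we now teach at secondary school, is just the quadratic formula:

```latex
% The general solution of a quadratic equation:
a x^2 + b x + c = 0 ,\; a \neq 0
\quad\Longrightarrow\quad
x = \frac{-b \pm \sqrt{\,b^2 - 4ac\,}}{2a}
```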

So – I think – there could well be similar concepts that ET mathematicians have that we haven’t thought of yet.

And at the end of that – as in some ET stories – perhaps we’d no longer be thinking quite as humans do today. For good or for bad.

THEIR MATHS LIKELY TO BE MILLIONS OF YEARS FURTHER DEVELOPED THAN OURS

A mathematical ET might not be technological – they can be mathematical without technology, e.g. if they don’t have hands or for whatever reason can’t manipulate their environment much.

If we meet an ET with maths, the chance that they developed it only in the last few thousand years of the billions of years since conditions became suitable for evolution in our galaxy must be tiny – so small as to be almost impossible.

So, if we do encounter ET mathematicians, there is an excellent chance that they are using maths concepts that they have developed, not for our few millennia – but for millions of years, possibly even billions of years.

What will our maths be like a billion years from now? What concepts would every young school child understand then? Perhaps some of them will be things that our brightest minds haven’t thought of yet.

#extra-terrestrial, #math, #networks, #research, #science, #study

Mutating Ebola Viruses Not As Scary As Evolving Ones


Scanning electron micrograph of Ebola virus budding from the surface of a Vero cell (an African green monkey kidney epithelial cell line). Credit: NIAID

By Rob Brooks
My social media accounts today are cluttered with stories about “mutating” Ebola viruses. The usually excellent ScienceAlert, for example, rather breathlessly informs us “The Ebola virus is mutating faster in humans than in animal hosts.”

But what does that even mean? Should we be terrified of mutant viruses?

The story is based on a paper just published online at the journal Science under the title Genomic surveillance elucidates Ebola virus origin and transmission during the 2014 outbreak. It’s a timely piece of genetic detective work sequencing Ebola virus genomes from 78 patients in Sierra Leone. Viruses accumulate genetic changes through mutation and selection within a host, so the team sequenced multiple viruses from several of the patients making up 99 genomes in total.

They found that mutations – minute changes in the virus genetic code – have accumulated rapidly, both during infections of individuals and during the outbreak of the current epidemic. The accumulation of genetic changes can tell us about the dynamics of an outbreak because when one patient infects another, the virus in the second patient is the descendant of the virus in the first, and contains all mutations that had accumulated in the first, plus any new mutations that occur in the second patient.
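
As a toy illustration of that descent logic – with made-up numbers, not the paper’s estimates – consider a short simulated transmission chain:

```python
import random

# Toy illustration of the descent logic just described, with made-up numbers
# (the mutation rate here is invented, not the paper's estimate).
random.seed(1)
MAX_NEW_MUTATIONS = 3            # hypothetical cap on new mutations per transmission

def transmit(source_genome, patient_id):
    new = {f"mut_{patient_id}_{i}"
           for i in range(random.randint(1, MAX_NEW_MUTATIONS))}
    return source_genome | new   # everything inherited from the source, plus fresh changes

chain = [set()]                  # patient 0: the virus at the zoonotic jump
for patient in range(1, 6):      # a short five-step transmission chain
    chain.append(transmit(chain[-1], patient))

for i, genome in enumerate(chain):
    print(f"patient {i}: {len(genome)} accumulated mutations")
# Each patient's mutation set contains the previous patient's set as a subset,
# which is what lets sequencing reconstruct who infected whom.
```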

The Science paper concludes, from studying these changes, that the current West African outbreak comes from a single zoonotic infection (when a virus crosses from an animal to humans, which is how Ebola outbreaks start). The virus, living in animals like fruit bats, last shared an ancestor with the Middle African strains (which have repeatedly infected people in places like the Democratic Republic of Congo) in approximately 2004.

The virus behind this outbreak made the jump to humans late last year, in Guinea. The key event in the spread to Sierra Leone was the funeral of a faith healer who claimed to be able to cure Ebola patients. When she contracted Ebola and died in late May, a large number of people attended her funeral.

Twelve of the first Ebola cases in Sierra Leone all attended that funeral, and appear to have contracted the virus there. These include two distinct forms of the virus which diverged, in Guinea, in late April.

This paper represents a superb piece of investigative genomics. Because Ebola viruses replicate (make copies of themselves) so rapidly and the epidemic has spread so quickly, there exist many small changes in the virus genome with which to track what has happened.

Are the mutations dangerous?

And those changes have accumulated far faster since the virus made the jump into humans, triggering the current epidemic. Which is the finding behind the headlines that the Ebola virus is mutating rapidly.

Nobody can tell whether mutations happen more rapidly in human infections than in the reservoir host animals where the virus usually lives. What has happened is that mutations have accumulated twice as fast over the course of this outbreak as they typically do during the long stretches the virus spends in other animals.

So when Reuters’ Julie Steenhuysen writes “more than 300 genetic changes in the virus as it has leapt from person to person”, she’s not talking about some mysterious, sinister process that literally happens in the air between one host and the next. Ebola isn’t even an airborne disease; it is transmitted in bodily fluids.

These mutations are simple mistakes in the genetic code, made when the virus is replicating within a host. With millions of replication events during thousands of infections, a huge number of mistakes happen. Steenhuysen quotes study lead author Pardis Sabeti (Harvard University and the Broad Institute) who points out just how mundane this process really is:

We found the virus is doing what viruses do. It’s mutating.

The majority of mutations either render the virus useless at doing its job – its "job" being to make more copies of itself and occasionally to infect another host – or have no effect.

So the mutations that do get passed on are usually the very few that succeed at improving the rate of virus replication, or the rate of infection. Exactly how many of the mutations alter the effectiveness of the virus at replicating and being transmitted, and how they do so, remains to be established. And the study’s authors certainly expect this to be an important follow-up:

Since many of the mutations alter protein sequences and other biologically meaningful targets, they should be monitored for impact on diagnostics, vaccines, and therapies critical to outbreak response.

When mutations arise – and they always arise – and affect the way in which an organism (yes, I know some people don't like calling viruses "living organisms") makes copies of itself, we have the two main ingredients for natural selection. There can be no clearer or more frightening illustration of natural selection and its inevitable result – biological evolution – in the modern world.
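
A toy simulation, purely for illustration, shows those two ingredients at work; the population size, mutation probability and fitness effects below are arbitrary assumptions, not a model of Ebola.

```python
# A toy example of mutation plus differential copying success (all numbers are
# arbitrary illustrative assumptions, not a model of Ebola).
import random

random.seed(0)
population = [1.0] * 1000                     # each entry is a variant's replication rate

for _ in range(300):
    offspring = []
    for rate in population:
        # expected number of copies equals the replication rate
        copies = int(rate) + (1 if random.random() < rate % 1 else 0)
        for _ in range(copies):
            child = rate
            if random.random() < 0.01:        # rare copying mistake ("mutation")
                child *= random.choice([0.95, 1.0, 1.05])
            offspring.append(child)
    # hold the population at a fixed size so only relative success matters
    population = random.sample(offspring, 1000) if len(offspring) > 1000 else offspring

print(f"Mean replication rate after selection: {sum(population) / len(population):.2f}")
```

Run it a few times and the mean replication rate drifts upward: nothing mysterious has happened, just random copying mistakes filtered by differences in copying success.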

For some reason, however, the popular press doesn't like calling evolution – the most important biological process of all – by its proper name. "Mutation" and "development" are neither synonyms nor euphemisms for evolution.

A new host spurs new adaptation

The reason mutations have accumulated so rapidly during this epidemic is that the virus is in a new host – human beings. While other versions of the virus have jumped across to humans before, the Ebola viruses currently ravaging West Africa have spent all of their history in other animals. They are now adapting to human bodies, tissues and immune systems. The mutations that help the virus work most effectively in the human body and transmit most effectively from sufferer to uninfected victim are the ones we are going to be hearing a whole lot more of in the coming months.

So we have little to fear from mutating viruses. It is the rapidly evolving viruses, fixing the mutations that randomly occur like typos in a Tweet, that we should seriously fear.

#ebola, #health, #mutation, #science, #virus

Are allergies trying to protect us from ourselves?

This article is more relevant than ever, so I'm reposting it again. Sorry :)


“I have a love/hate relationship with spring, thanks to the aggravating bouts of hay fever that transform me into a faucet for pretty much the entire season. So I’ll admit I was a little skeptical when my editor at Scientific American asked me last week if I wanted to write about a new paper coming out in Nature suggesting that allergies may actually be a good thing. But always curious, I said sure.


Turns out it's a fascinating—and pretty convincing—read. It's dense, but the lead author, Yale immunobiologist Ruslan Medzhitov, was kind enough to take a good two hours out of his day on Monday to explain some of the gnarlier concepts to me. (Medzhitov is fascinating—you can read more about him in this profile published in Disease Models & Mechanisms.)

Medzhitov's basic argument is that there is a convincing body of research suggesting that allergies have beneficial effects. They break down the toxic components of bee, snake, scorpion and Gila monster venom, for instance, and our allergic reactions to tick saliva prevent the parasites from feeding.

Ultimately, all allergic responses work towards a common goal: avoidance and expulsion, Medzhitov argues. As I explain in my piece,

More generally, hated allergic symptoms keep unhealthy environmental irritants out of the body, Medzhitov posits. “How do you defend against something you inhale that you don’t want? You make mucus. You make a runny nose, you sneeze, you cough, and so forth. Or if it’s on your skin, by inducing itching, you avoid it or you try to remove it by scratching it,” he explains. Likewise, if you’ve ingested something allergenic, your body might react with vomiting. Finally, if a particular place or circumstance ramps up your allergies, you’re likely to avoid it in the future. “The thing about allergies is that as soon as you stop exposure to an allergen, all the symptoms are gone,” he says.

Obviously, Medzhitov’s theory is just a theory, and it involves a lot of speculation (albeit informed speculation by a really smart guy). But some research suggests an association between allergy severity and cancer risk, in that people with more allergy symptoms are less likely to develop certain cancers. (One shouldn’t read too much into this though; some other factor may drive the association. Perhaps people who eat lots of eggs are more likely to have allergies but less likely to have cancer.) But all in all, I think Medzhitov’s idea does make sense and is well-supported, and most of the outside experts I spoke with agreed, though they did raise questions about some of the specifics.

One aspect of the theory that I didn't mention in my piece is that it could explain a medical mystery: penicillin allergies. Medzhitov argues that in addition to protecting against venoms, vector-borne diseases and environmental irritants, allergies also evolved to protect against a class of toxins called haptens: small molecules that bind to extracellular or membrane-bound proteins in the body, rendering them useless and ultimately causing all sorts of problems. As it turns out, in some people, the penicillin molecule undergoes transformation into a hapten. This transformation is very slow and inefficient—very few penicillin molecules turn into haptens, which is a good thing because haptenated penicillin could be dangerous—but nevertheless, some people may develop allergic responses to these few haptenated penicillin molecules, and this can result in an allergic hypersensitivity to the drug, Medzhitov posits.

In the case of something like a penicillin allergy, management is fairly simple (though medically inconvenient): avoid penicillin. The problem today is that there may be millions of allergens in the form of environmental pollutants and irritants, and they may simply be unavoidable. This idea could help explain why allergic diseases have become more common in recent decades: We’re exposed to many more pollutants now than we were 50 years ago, and this chemical flurry could be dialing up our innate defense systems to a constant level of 11. An allergy may be protective, but “if it’s taken to an extreme, it is pathological,” Medzhitov says. I wonder, then, if we may have built ourselves a world that will forever make us sick.”

Citations:

Palm, N., Rosenstein, R., & Medzhitov, R. (2012). Allergic host defences. Nature, 484(7395), 465-472. DOI: 10.1038/nature11047

Medzhitov, R. (2011). Innovating immunology: an interview with Ruslan Medzhitov. Disease Models & Mechanisms, 4(4), 430-432. DOI: 10.1242/dmm.008151

Akahoshi, M., Song, C. H., Piliponsky, A. M., Metz, M., Guzzetta, A., Abrink, M., Schlenner, S. M., Feyerabend, T. B., Rodewald, H. R., Pejler, G., Tsai, M., & Galli, S. J. (2011). Mast cell chymase reduces the toxicity of Gila monster venom, scorpion venom, and vasoactive intestinal polypeptide in mice. The Journal of Clinical Investigation, 121(10), 4180-4191. PMID: 21926462

Wada, T., Ishiwata, K., Koseki, H., Ishikura, T., Ugajin, T., Ohnuma, N., Obata, K., Ishikawa, R., Yoshikawa, S., Mukai, K., Kawano, Y., Minegishi, Y., Yokozeki, H., Watanabe, N., & Karasuyama, H. (2010). Selective ablation of basophils in mice reveals their nonredundant role in acquired immunity against ticks. The Journal of Clinical Investigation, 120(8), 2867-2875. PMID: 20664169

Sherman, P., Holland, E., & Sherman, J. (2008). Allergies: their role in cancer prevention. The Quarterly Review of Biology, 83(4), 339-362. DOI: 10.1086/592850

#allergy, #conditions-and-diseases, #health, #research, #science, #theory

Demolishing Darwin’s Tree: Eric Bapteste and the Network of Life

Eric Bapteste and ten other researchers across Europe and the United States are ready to provide a more "expansive" view of evolution that replaces Darwin's tree with a "network" of life. Why is this necessary? Because "genetic data are not always tree-like."

We’ve heard Bapteste criticize the tree of life before (see here and here). His new paper in Trends in Genetics, “Networks: Expanding Evolutionary Thinking” (see the summary at PhysOrg), seeks to “expand” evolutionary thinking by incorporating it within a larger “network” model. But if they replace the tree with a complex set of interconnections, what happens to the notion of universal common descent?

Down with Trees

The pro-network gang finds trees inadequate on several grounds. For one, a tree diagram is too simplistic:

However, many patterns in these data cannot be represented accurately by a tree. The evolution of genes in viruses and prokaryotes, of genomes in all organisms, and the inevitable noise that creeps into phylogenetic estimations, will all create patterns far more complicated than those portrayed by a simple tree diagram. Genetic restructuring and non-vertical transmission are largely overlooked by a methodological preference for phylogenetic trees and a deep-rooted expectation of tree-like evolution.


Interesting: they still want “evolutionary thinking,” but what kind without trees? Another problem is that much of the genetic data is not tree-like:

Evolutionary networks today are most often used for population genetics, investigating hybridization in plants, or the lateral transmission of genes, especially in viruses and prokaryotes. However, the more we learn about genomes the less tree-like we find their evolutionary history to be, both in terms of the genetic components of species and occasionally of the species themselves.

Interesting: if “evolutionary history” is not tree-like, does universal common ancestry still hold? They explain that many patterns are mosaic-like rather than tree-like due to a number of non-vertical processes. What predominates are “reticulate” (net-like) relationships. Another problem is that tree diagrams are often inaccurate:

Tree-based genomic analysis is proving to be an accuracy challenge for the evolutionary biology community, and although genome-scale data carry the promise of fascinating insights into treelike processes, non-treelike processes are commonly observed.

Further, tree diagrams are often contradictory:

There are long-standing controversies regarding the evolutionary history of many taxonomic groups, and it has been expected by the community that genome-scale data will end these debates. However, to date none of the controversies has been adequately resolved as an unambiguous tree-like genealogical history using genome data. This is because quantity of data has never been a satisfactory substitute for quality of analysis. Many of the underlying data patterns are not tree-like at all, and the use of a tree model for interpretation will oversimplify a complex reticulate evolutionary process.

Interesting: how does a “reticulate evolutionary process” square with universal common descent? They give examples: the yeast phylogenetic data can only be force-fit into a tree, but then, “a species tree becomes only a mathematical average estimate of evolutionary history, and even if it is supported it suppresses conflicting phylogenetic signals.” It’s misleading, in other words.

Another example is the tree of placental mammals: “a problem that has been difficult to resolve as a bifurcating process because different genetic datasets support different trees.” Wriggling out of the tree-thinking straitjacket can resolve these controversies: “the network provides biological explanations that go beyond what can be accommodated by a simple tree model.”
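
A small sketch makes the structural point (the taxa and gene signals below are hypothetical, not taken from the paper): in a rooted tree every taxon has exactly one parent, while a network admits extra "reticulation" edges carrying a conflicting gene signal.

```python
# Hypothetical taxa A, B, C (not data from the paper). Gene 1 groups A with B;
# gene 2 groups A with C. A strict bifurcating tree must discard one signal;
# a network keeps both by adding a reticulation edge.

tree_edges = [("root", "AB"), ("root", "C"), ("AB", "A"), ("AB", "B")]   # gene 1's tree
network_edges = tree_edges + [("C", "A")]                                # plus gene 2's conflicting signal

def is_strict_tree(edges):
    """In a rooted tree, every node except the root has exactly one parent."""
    children = [child for _, child in edges]
    return len(children) == len(set(children))

print(is_strict_tree(tree_edges))     # True  -> a single bifurcating genealogy
print(is_strict_tree(network_edges))  # False -> A now has two parents: a reticulation
```

Once a node is allowed more than one parent, the diagram is no longer a bifurcating genealogy, which is why the authors treat networks as more than a decorated tree.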

Up with Networks

The team believes that network theory has matured to the point where it can be a valuable tool for biologists. It also promises job opportunities: “The further improvement of networks for evolutionary biology offers many outstanding opportunities for mathematicians, statisticians, and computer scientists.”

A network can be both a more parsimonious description of the amount of discordance between genes, and a starting point for generating hypotheses to explain that discordance.

Trees, Networks, and Scientific Explanation

The authors recognize that network-thinking is not a panacea. Biologists will still need to “interpret” the findings correctly:

However, biologists must also keep in mind that networks are not yet free of interpretive challenges. One must knowledgeably select from the various types of network methods available to interpret properly such features as internal nodes and the meaning of taxon groupings, which differ in important ways among methods. Furthermore, community standards do not yet exist for network assessment and interpretation. As with tree methods, the responsibility remains with the researcher to understand network methodology, apply it correctly, and make valid inferences.

Philosophers could have fun with this paragraph. It has the potential for investigator bias at each stage. It sounds like Finagle’s First Law: “To study a subject best, understand it thoroughly before you start” — i.e., know what the valid inferences are before you infer anything; know the right methods before you choose which method is right; and if all else fails, trust the consensus (community standards). But that’s predictable; they are, after all, still Darwinian evolutionists. What matters is the extreme paradigm shift this represents.

Historic Juncture

Calling it “historic,” the authors recognize the extent of the shift they are proposing:

These challenges do not detract from the fact that networks represent an historic juncture in the development of evolutionary biology: it is a shift away from strict tree-thinking to a more expansive view of what is possible in the development of genes, genomes, and organisms through time.

They use “development… through time” as a synonym for evolution. But what kind of evolution? If it is not tree-like, what is it? In a network diagram, common descent gets scrambled if one accepts “random lateral gene transfer” and “hybridization” as key processes, as these authors do. In fact, they say nothing about natural selection. The new picture is of interconnected nodes, with no clear progression from simple to complex. After all, a gene has to already exist to be laterally transferred. Two species must already exist in order to hybridize. There’s nothing here about a beginning and a progression. It’s all about relationships between nodes that could have (avoiding tree-thinking) been in existence all along. The sample network diagram in the PhysOrg article shows lines going up, down, and sideways between nodes. It claims that “Moving from tree-like depictions of evolution to network diagrams is an effective way to amend the Tree of Life without dismissing it,” but the move turns the tree upside down and inside out. The focus is on nodes and relationships — not progression.

Even the strictest creationists allow for “change over time” in terms of new interconnections and horizontal modifications among existing kinds of organisms. There’s nothing really Darwinian about Bapteste’s proposal. It could even be considered ID-friendly: pre-existing intelligently designed organisms change their relationships through time, occasionally sharing genetic information. By “expanding” the tree of life, this team is demolishing it. Bifurcating trees within network diagrams vanish as artifacts, like imagined faces in a bumpy ceiling when one backs away and sees the whole.

Trees as Dogma

Back to the complaint of Bapteste et al. that tree-thinking is an “expectation” and a “preference” – i.e., a set of assumptions chosen before the data has a chance to speak. Their opening paragraph shows that Darwinian evolutionists produce trees because they are trained to produce them:

Ever since Darwin, a phylogenetic tree has been the principal tool for the presentation and study of evolutionary relationship among species. A familiar sight to biologists, the bifurcating tree has been used to provide evidence about the evolutionary history of individual genes as well as about the origin and diversification of many lineages of eukaryotic organisms. Community standards for the selection and assessment of phylogenetic trees are well developed and widely accepted. The tree diagram itself is ingrained in our research culture, our training, and our textbooks. It currently dominates the recognition and interpretation of patterns in genetic data.

What they are saying is that this dominant way of looking at the data is both ingrained as a method, and also used to provide evidence for evolution! That’s circular. They are trained to think in terms of evolutionary trees, and then use evolutionary trees as evidence for evolutionary trees.

Conclusions

One can only welcome this paper’s bold proposal to overturn entrenched dogma and offer a more “expansive” view of “development…through time.” For one thing, if trees are artifacts emerging from expectations, they should be exposed as such. For another, the “network” diagram seems conducive to ID research inasmuch as it calls into question universal common ancestry via natural selection (i.e., neo-Darwinism), and seeks to portray the evidence honestly.

Their paper is the product of a meeting in Leiden last October called “The Future of Phylogenetic Networks.” It’s too soon to tell if Darwin security forces will let this band of independent thinkers gather a following. If nothing else, it shows (notwithstanding the insistences of the National Center for Science Education) that insiders know about the fundamental controversies in evolutionary theory, and are calling for some of the same reforms that advocates of intelligent design do.

See more at: http://www.evolutionnews.org/2013/09/demolishing_dar076431.html

#darwin, #evolution, #id, #research, #science