Is E.T. calling us? Stay tuned!

New Scientist magazine reports on a paper by Hippke, Domainko and Learned, suggesting that fast radio bursts, which were first discovered in 2001, may be artificial signals produced by alien – or human – technology. Ten fast bursts of radio waves have been detected within the past 15 years, and the delay between the arrival of the first and last waves is always very close to a multiple of 187.5. The authors claim there is a 5 in 10,000 probability that the line-up is coincidence, and they argue that no known natural process can explain this curious fact. They conclude that if the signals are not due to “a [natural] galactic source producing quantized chirped signals” (which would be “most surprising”) then “an artificial source (human or non-human) must be considered, particularly since most bursts have been observed in only one location (Parkes radio telescope).” The authors consider the possibility that fast radio bursts are “Earthly noise” – a strong possibility, since they “show arrival times with a strong correlation to Earth’s integer second,” which “hints at some man-made device, such as mobile phone base stations.” The article in New Scientist points out that if the signals are produced by aliens, “the aliens would have to be from what SETI scientists call a Kardashev Type II civilisation” – one which “has a star’s worth of output at its disposal,” and is capable of capturing all its sun’s radiation, throwing material into a black hole and sucking up the radiation, or alternatively, traveling to many planets and stripping them of resources.
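To make the "multiple of 187.5" claim concrete, here is a minimal sketch (not the authors' analysis) of the kind of check involved: given a set of burst dispersion delays, see how far each falls from the nearest multiple of 187.5. The delay values below are hypothetical placeholders, not the published measurements.

```python
# Minimal sketch of the "multiples of 187.5" check described above.
# The delays listed here are hypothetical placeholders, not real FRB data.
STEP = 187.5

delays = [375.2, 562.4, 937.6, 1124.9]  # hypothetical dispersion delays

for d in delays:
    nearest = round(d / STEP) * STEP
    print(f"{d:8.1f} -> nearest multiple {nearest:8.1f}, offset {d - nearest:+6.2f}")
```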

What do readers think about these curious radio signals? Are they human, alien or natural? Whatever your conclusion may be, this is a clear-cut case of Intelligent Design reasoning at work in the scientific realm.


#extra-terrestrial, #science-fictions, #science-news

DNA PRINTING IS HERE; WHAT NEW LIFEFORM WOULD YOU CREATE?

Cambrian Genomics has figured out how to print DNA in a process that greatly reduces the cost. They make the first hardware and systems for laser printing DNA. As company CEO Austen Heinz puts it, “We print life. Life is very simple, it’s just code. Four letters — we print that.” He invented a 3D laser printer that prints custom DNA sequences. The idea behind the company is that everything that’s alive is simply code. If you remember back to your biology class, the primary nucleobases — adenine (A), cytosine (C), thymine (T), and guanine (G) — form base pairs in a specific order to create strands of DNA.
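As a toy illustration of the “life is just code” idea, here is a minimal sketch (not Cambrian Genomics' software) that treats a DNA strand as a string over A, C, G, T and derives its complementary strand from the base-pairing rules (A with T, C with G).

```python
# Toy sketch: a DNA strand as a string over {A, C, G, T}, with its complementary
# strand derived from the Watson-Crick pairing rules (A pairs with T, C with G).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(strand.upper()))

print(reverse_complement("ATGGCC"))  # -> GGCCAT
```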

Cambrian Genomics uses a proprietary process to assemble A, C, T, and G into custom DNA for customers. The process is truly revolutionary. You can either alter existing DNA to create characteristics like a plant that glows in the dark, or create brand-new DNA. The process lets you play God, creating things that don’t currently exist in nature.

It’s currently much easier to alter existing DNA than to create entirely new DNA for a new lifeform, but the possibility exists. As you can imagine, significant government clearances are needed for these processes, and Cambrian Genomics leaves that part for the customers to deal with.

However, think about the possibilities. Heinz proposes: “Plants can be made to take out much more carbon out of the atmosphere. We can make humans that are born without disease that can live much longer. We can make humans that can interface directly with computers by growing interfaces into the brain.”

3D DNA printing is not without obvious controversy, though. There is a large movement dedicated to banning all GMOs (Genetically Modified Organisms). There is also significant concern about the effects of releasing GMOs into the environment, sometimes called the Jurassic Park Effect. Government safeguards are already in place to help prevent this. All GMO products must first go through a rigorous approval process before a project can be started. Then, government testing occurs after the product is created to ensure there are no ill effects from creating such a product.

Heinz explains that the current regulatory environment in America is fairly open for plant life but locked down for animal and human life. In Europe, by contrast, regulators are locked down on plant life but much more open on human life. The UK has approved the first three-parent child, which is in a sense a GMO. Heinz raised the possible paradox that GMO people could become anti-GMO activists in the future while ironically being GMOs themselves.

#science-controversy, #science-fictions, #science-news

Planetary Science Journal Icarus, the “Wow!” Signal of Intelligent Design

Here’s a new paper that can be added to the growing stack of intelligent-design articles in peer-reviewed journals. Even though the authors do not use the phrase “intelligent design,” their reasoning centers on the detection of an intelligent signal embedded in the genetic code — a mathematical and semantic message that cannot be accounted for by a natural cause, “be it Darwinian, Lamarckian,” chemical affinities or energetics, or any other.

Dr. Vladimir I. shCherbak, a mathematician at al-Farabi Kazakh National University in Kazakhstan, and Maxim A. Makukov, an astrobiologist at Kazakhstan’s Fesenkov Astrophysical Institute, gave their paper a catchy title: “The ‘Wow! signal’ of the terrestrial genetic code.” Their paper has been accepted for publication in the prestigious planetary science journal Icarus, where it’s already available online.

Their title comes from a curious SETI signal back in 1977 that looked so artificial at first, a researcher wrote “Wow!” next to it. With no follow-up examples, that signal has remained interesting but inconclusive. shCherbak and Makukov looked into “biological SETI” — the “biological channel” of communication (e.g., DNA) and concluded “Wow!” — the genetic code has features that defy natural explanation. The abstract states:

It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, but stronger in noise immunity is the genetic code. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like “signal” in the genetic code is then a testable consequence of such scenario. (Emphasis added.)

Since intelligent design theory doesn’t consider the question of the identity of the designer, design by space aliens is one possible intelligent cause; however, the phrase used here, “seeded intentionally,” would appear to refer to a broader class of intelligence(s).

Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10^-13). The patterns display readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to natural origin. Plausible ways of embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.

From there, the authors explore a number of fascinating patterns they find in the genetic code itself (not necessarily in animal genomes) — i.e., the relationship between the base pairs of DNA and the 20 amino acids. They are driven to the conclusion of design not only by what they observe, but also “by the fact that how the code came to be apparently non-random and nearly optimized remains disputable and highly speculative.” This reasoning is similar to Stephen Meyer’s in Signature in the Cell in which all the possible natural causes for a phenomenon were considered before inferring design.

The signal of intelligent origin, they reasoned, was strong because both arithmetic and ideographic signals are apparent, both using the same symbolic language. They predicted that a signal, if it exists, should be robust to modification. They did their best to avoid arbitrariness, considering what natural causes could be available to explain their findings. They identified two dimensionless integers — redundancy of codons and number of nucleons in the amino acid set — as “ostensive numerals” forming the basis of the signal, showing in detail how the patterns in those numerals satisfy the conditions for intelligent signals.

Considerations of brevity prohibit giving a complete analysis of their arguments, but let an example suffice. Of the 20 amino acids, only proline holds its side chain with two bonds, and has one less hydrogen in its block. The effect of this is to “standardize” the code to a 73 + 1 block nucleon number. Yet the distinction between block and chain is “purely formal,” they argue, since there is no stage in amino acid synthesis where the block and side chain are detached. Here’s their comment:

Therefore, there is no any [sic] natural reason why nucleon transfer in proline; it can be stimulated only in the mind of a recipient to achieve the array of amino acids with uniform structure. Such nucleon transfer thus appears artificial. However, exactly, this seems to be its destination: it protects the patterns from any natural explanation. Minimizing the chances for appealing to natural origin is a distinct concern of messaging of such kind, and this problem seems to be solved perfectly for the signal in the genetic code. Applied systematically without exceptions, the artificial transfer in proline enables holistic and precise order in the code. Thus, it acts as an “activation key”. While nature deals with the actual proline which does not produce the signal in the code, an intelligent recipient easily finds the key and reads messages in arithmetical language….

In addition, they find a decimal system including zero (via stop codons), and many other fascinating signs of intelligent origin. They examine possible criticisms, such as the claim that the patterns could be due to unknown natural causes:

But this criterion is equivalent to asking if it is possible at all to embed informational patterns into the code so that they could be unequivocally interpreted as an intelligent signature. The answer seems to be yes, and one way to do so is to make patterns virtual, not actual. Exactly that is observed in the genetic code. Strict balances and decimal syntax appear only with the application of the “activation key”.

In effect, the proline nucleon transfer is like a decoder ring that makes the signal apparent and all the blocks balance out. Some other signs of artificiality are the fact that nucleon sums are multiples of 037; the stop codons act as zero in a decimal system; and all the three-digit decimals (111, 222, 333, 444, 555, 666, 777, 888, and 999) appear at least once in the code, “which also looks like an intentional feature.”
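For readers who want to see what kind of arithmetic is being claimed, here is an illustrative sketch only; the nucleon sums below are made-up placeholders, not shCherbak and Makukov's data. It simply checks divisibility by 37 and notes that the three-digit repdigits are exactly the multiples of 111 = 3 × 37, which is why the two observations are related.

```python
# Illustrative only: the divisibility check behind the "multiples of 037" claim.
# These sums are hypothetical placeholders, not the paper's nucleon counts.
hypothetical_nucleon_sums = [703, 1110, 999, 592, 600]

for s in hypothetical_nucleon_sums:
    print(f"{s:4d}: multiple of 037? {s % 37 == 0}")

# The three-digit repdigits 111, 222, ..., 999 are exactly the multiples of 111 = 3 * 37.
print([111 * k for k in range(1, 10)])
```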

Could these patterns be due to selection or any other natural process? Could they be mere “epiphenomena” of chemical pressures for mass equalities, or something else?

But it is hardly imaginable how a natural process can drive mass distribution in abstract representations of the code where codons are decomposed into bases or contracted by redundancy…. no natural process can drive mass distribution to produce the balance … amino acids and syntactic signs that make up this balance are entirely abstract since they are produced by translation of a string read across codons.

Even more convincing, no natural cause can produce semantics — particularly the kind involving “interpretive or linguistic semantics peculiar to intelligence,” they write. “Exactly the latter kind of semantics is revealed in the signal of the genetic code.” Here’s a summary of the patterns they conclude show design:

In total, not only the signal itself reveals intelligent-like features — strict nucleon equalities, their distinctive decimal notation, logical transformations accompanying the equalities, the symbol of zero and semantic symmetries, but the very method of its extraction involves abstract operations — consideration of idealized (free and unmodified) molecules, distinction between their blocks and chains, the activation key, contraction and decomposition of codons. We find that taken together all these aspects point at artificial nature of the patterns.

Lest anyone perceive a creationist message, they write: “Whatever the actual reason behind the decimal system in the code, it appears that it was invented outside the solar system already several billions years [sic] ago.” In other words, their favored position is panspermia. (Keep in mind, though, that there are multiple versions of panspermia.)

If their thesis of “biological SETI” sounds a little like ideas floated by Paul Davies, the authors thank Davies in their Acknowledgements, along with Manfred Eigen in Germany.

How will evolutionists respond to this paper? It’s hard to see how they could dismiss it. Maybe they will try to mock it as old Arabian numerology, or religiously inspired (since Kazakhstan, which funded the study, is 70% Muslim). Those would be unfair criticisms. The authors have Russian names, certified doctorates, and wrote in collaboration with leading lights in the West. Or perhaps critics could argue that the authors hail from a foreign country whose name has too many adjacent consonants in it to take them seriously.

No, it appears the only way out for Darwinists would be the “Dawkins Dodge.” You may remember that one from the documentary Expelled, where Dawkins admits the possibility of panspermia for Earth, so long as the designers themselves evolved by a Darwinian process.

What’s most notable about this paper is the similarity in design reasoning between the authors and the more familiar advocates of intelligent design theory. No appeals to religion or religious texts; no identifying the designer; just logical reasoning from effect to sufficient cause. The authors even applied the “design filter” by considering chance and natural law, including natural selection, before inferring design.

If Darwinists want to go on equating intelligent design with creationism, they will now have to take on the very secular journal Icarus.

#academic-freedom, #intelligent-design, #science-news

Simulation Universe

Please follow the next lines and images, which take up an interesting question.

That is, why inferring design on functionally specific, complex organisation and associated information, e.g.:

[image: abu_6500c3mag]

and equally:

[image: cell_metabolism]

. . . makes good sense.

Now, overnight, UD’s Newsdesk posted on a Space.com article, Is Our Universe a Fake?

The article features “Philosopher Nick Bostrom, director of the Future of Humanity Institute at Oxford University.”

I think Bostrom’s argument raises a point worth pondering, one oddly parallel to the Boltzmann brain argument (a brain popping up by fluctuation from an underlying sea of quantum chaos), as he discusses “richly detailed software simulation[s] of people, including their historical predecessors, by a very technologically advanced civilization”:

>>Bostrom is not saying that humanity is living in such a simulation. Rather, his “Simulation Argument” seeks to show that one of three possible scenarios must be true (assuming there are other intelligent civilizations):

  1. All civilizations become extinct before becoming technologically mature;
  2. All technologically mature civilizations lose interest in creating simulations;
  3. Humanity is literally living in a computer simulation.

His point is that all cosmic civilizations either disappear (e.g., destroy themselves) before becoming technologically capable, or all decide not to generate whole-world simulations (e.g., decide such creations are not ethical, or get bored with them). The operative word is “all” — because if even one civilization anywhere in the cosmos could generate such simulations, then simulated worlds would multiply rapidly and almost certainly humanity would be in one.

As technology visionary Ray Kurzweil put it, “maybe our whole universe is a science experiment of some junior high school student in another universe.”>>

In short, once the conditions are set up for a large distribution of possibilities to appear, you face a significant challenge: explaining why you are not in the bulk of the possibilities of a dynamic-stochastic system.
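A back-of-envelope sketch of that arithmetic, with purely made-up numbers (not Bostrom's), shows why simulated observers would swamp the non-simulated ones if even one civilization ran many simulations.

```python
# Toy version of the "almost certainly in a simulation" arithmetic.
# All three numbers are illustrative assumptions, not estimates from the literature.
real_observers    = 1e10   # hypothetical non-simulated observers
simulations       = 1_000  # hypothetical ancestor simulations ever run
observers_per_sim = 1e10   # hypothetical observers inside each simulation

simulated = simulations * observers_per_sim
print(f"P(simulated) = {simulated / (simulated + real_observers):.6f}")  # ~0.999
```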

Let me put up an outline, general model:

[image: gen_sys_proc_model]

Such a system produces an output across time that will vary based on mechanical and stochastic factors, exploring a space of possibilities. And in particular, any evolutionary materialist model of reality will be a grand dynamic-stochastic system, including a multiverse.

Now, too, as Wiki summarises, there is the Boltzmann Brain paradox:

>>A Boltzmann brain is a hypothesized self aware entity which arises due to random fluctuations out of a state of chaos. The idea is named for the physicist Ludwig Boltzmann (1844–1906), who advanced an idea that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can brains exist to be aware of the Universe. The term for this idea was then coined in 2004 by Andreas Albrecht and Lorenzo Sorbo.[1]

The Boltzmann brains concept is often stated as a physical paradox. (It has also been called the “Boltzmann babies paradox”.[2]) The paradox states that if one considers the probability of our current situation as self-aware entities embedded in an organized environment, versus the probability of stand-alone self-aware entities existing in a featureless thermodynamic “soup”, then the latter should be vastly more probable than the former.>>

In short, systems with strong stochastic tendencies tend to have distributions in their outcomes, which are dominated by the generic and typically uninteresting bulk of a population. Indeed this is the root of statistical mechanics, the basis for a dynamical understanding of thermodynamics i/l/o the behaviour of large collections of small particles.

For instance, one of my favourites (explored in Mandl) is an idealised two-state-element paramagnetic array, with atoms having N-pole up/down, a physical atomic-scale close analogue of the classic array-of-coins exercise. We can start with 500 or 1,000 coins in a string, which will of course follow a binomial distribution [3.27 * 10^150 or 1.07 * 10^301 possibilities respectively, utterly dominated by coins in near 50-50 outcomes, in no particular orderly or organised pattern], then look at an array where each atom of our 10^57-atom solar system has a tray of 500 coins flipped, say, every 10^-13 – 10^-15 s:

[image: sol_coin_flipr]

The outcome of such an exercise is, highly predictably, that no cases of FSCO/I (meaningful complex strings) will emerge, as the number of possible observed outcomes is so small relative to the set of possibilities that it rounds down to all but no search, as the graphic points out.
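The needle-in-haystack numbers can be checked directly. The sketch below uses the post's own round figures (10^57 atoms, one flip per ~10^-13 to 10^-15 s) plus an assumed ~10^17 s for the age of the cosmos, and compares the number of possible observations with the size of the 500-coin configuration space.

```python
# Back-of-envelope check of the search-space figures quoted above.
from math import log10

config_space_500 = 2 ** 500          # ~3.27e150 possible 500-coin strings
config_space_1000 = 2 ** 1000        # ~1.07e301 possible 1000-coin strings

atoms       = 1e57                   # atoms in the solar system (round figure from the text)
flips_per_s = 1e14                   # one flip per ~1e-13 to 1e-15 s (middle value)
seconds     = 1e17                   # order of magnitude of the age of the cosmos (assumed)

max_trials = atoms * flips_per_s * seconds        # ~1e88 possible observations
print(f"log10(config space, 500 coins) = {log10(config_space_500):.1f}")   # ~150.5
print(f"log10(max trials)              = {log10(max_trials):.1f}")         # ~88.0
print(f"fraction of space searched     = {max_trials / config_space_500:.2e}")
```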

This is of course an illustration of the core argument to design as credible cause on observing FSCO/I: once functionally specific complex organisation and associated information are present in a situation, they demand an observed-to-be-adequate explanation that does not require us to believe in statistical needle-in-vast-haystack search-challenge miracles:

[image: islands_of_func_chall]

Also:

[image: is_o_func2_activ_info]

The Captain Obvious fact of serious thinkers making similar needle-in-haystack arguments should give reasonable people pause before simply brushing aside the inference to design on FSCO/I, including in the world of life and in the complex, fine-tuned physics of our cosmos that sets up a world in which C-chemistry, aqueous-medium, terrestrial-planet life is feasible.

But we’re not finished yet.

What’s wrong with Bostrom’s argument, and where else does it point?

PPolish and Mapou raise a point or two:

>>1

  • Simulated Universes scream Intelligent Design. Heck, Simulated Universes prove Intelligent Design.

    I can see why some Scientists are leaning in this direction. Oops/Poof does not cut it any more. Unscientific, irrational, kind of dumb.

  • ppolish,

    It’s a way for them to admit intelligent design without seeming to do so (for fear of being crucified by their peers). Besides, those who allegedly designed, built and are running the simulation would be, for all intents and purposes, indistinguishable from the Gods.

    Edit: IOW, they’re running away from religion only to fall into it even deeper.>>

In short, a detailed simulation world will be a designed world.

Likewise, high school student projects do not credibly run for 13.7 billion years. Nor do PhD projects, never mind Kurzweil’s remark.

So, what is wrong with the argument?

First, an implicit assumption.

It assumes that, unless races keep killing themselves off too soon, blind chance and mechanical necessity can give rise to life, and then to advanced, civilised, high-tech life that builds computers capable of detailed whole-universe simulations.

But ironically, the argument points to the likeliest, only observed cause of FSCO/I, design, and fails to address the significance of FSCO/I as a sign of design, starting with design of computers, e.g.:

[image: mpu_model]

Where cell-based life forms show FSCO/I-rich digital information processors in action “everywhere,” e.g. the ribosome and protein synthesis:

Protein Synthesis (HT: Wiki Media)

So, real or simulation, we are credibly looking at design, and have no good empirical observational grounds to infer that FSCO/I is credibly caused by blind chance and mechanical necessity.

So, the set of alternative possible explanations includes implicitly questionable candidates and implicitly locks out credible but ideologically unacceptable ones, i.e. intelligent design of life and of the cosmos. That is, just maybe the evidence is trying to tell us that if we have good reason to accept that we live in a real physical world as opposed to a “mere” speculation, then intelligent design of life and cosmos sits at the table as of right, not sufferance.

And, there is such reason.

Not only is the required simulation vastly too fine-grained and fast-moving to be credibly centrally processed, but the logic of complex processing would point to a vast network of coupled processors. Which is tantamount to saying we have been simulating on atoms etc. In short, it makes good sense to conclude that our processing elements are real-world dynamic-stochastic entities: atoms, molecules, etc., in real space.

This is backed up by a principle that sets aside Plato’s Cave worlds and the like: any scheme that implies grand delusion of our senses and faculties of reasoning i/l/o experience of the world undermines its own credibility in an infinite regress of further what if delusions.

Reduction to absurdity in short.

So, we are back to ground zero, we have reason to see that we live in a real world in which cell based life is full of FSCO/I and the fine tuning of the cosmos also points strongly to FSCO/I.

Thence, to the empirically and logically best warranted explanation of FSCO/I.

Design.

Thank you Dr Bostrom for affirming the power of the needle in haystack challenge argument.

Where that argument leads, is to inferring design as best current and prospective causal explanation of FSCO/I, in life and in observed cosmos alike.

Any suggestions and comments?

#cosmology, #math, #metaphysics, #philosophy, #physics, #science, #science-news, #universe

Spectacular Convergence: A Camera Eye in a Microbe


They thought it was a joke. A century ago, biologists could not believe that a one-celled creature had an eye. But since the warnowiid dinoflagellate was difficult to find and grow in the lab, detailed research was rare, until now. A team from the University of British Columbia gathered specimens off the coasts of BC and Japan for a closer look. They found that the structure, called an ocelloid, has components that mimic the complex eye of higher animals. PhysOrg says:

In fact, the ‘ocelloid’ within the planktonic predator looks so much like a complex eye that it was originally mistaken for the eye of an animal that the plankton had eaten.

“It’s an amazingly complex structure for a single-celled organism to have evolved,” said lead author Greg Gavelis, a zoology PhD student at UBC. “It contains a collection of sub-cellular organelles that look very much like the lens, cornea, iris and retina of multicellular eyes found in humans and other larger animals.” [Emphasis added.]

New Scientist shares the astonishment:

It is perhaps the most extraordinary eye in the living world — so extraordinary that no one believed the biologist who first described it more than a century ago.

Now it appears that the tiny owner of this eye uses it to catch invisible prey by detecting polarised light. This suggestion is also likely to be greeted with disbelief, for the eye belongs to a single-celled organism called Erythropsidinium. It has no nerves, let alone a brain. So how could it “see” its prey?

The “retina” of this eye, a curved array of chromosomes, appears arranged to filter polarized light. The news item from the Canadian Institute for Advanced Research quotes Brian Leander, co-supervisor of the project:

“The internal organization of the retinal body is reminiscent of the polarizing filters on the lenses of cameras and sunglasses,” Leander says. “Hundreds of closely packed membranes lined up in parallel.”

And that’s not all this wonder of the sea has in its toolkit. It also has a piston and a harpoon:

Scientists still don’t know exactly how warnowiids use the eye-like structure, but clues about the way they live have fuelled compelling speculation. Warnowiids hunt other dinoflagellates, many of which are transparent. They have large nematocysts, which Leander describes as “little harpoons,” for catching prey. And some have a piston — a tentacle that can extend and retract very quickly — with an unknown function that might be used for escape or feeding.

Did This Eye Evolve?

Lest anyone think the dinoflagellate’s eye presents an easy evolutionary stepping stone to more complex eyes, the data reveal several problems. The paper in Nature claims that the ocelloids are built from “different endosymbiotically acquired components” such as mitochondria and plastids. “As such, the ocelloid is a chimaeric structure, incorporating organelles with different endosymbiotic histories.” We can treat endosymbiosis as a separate issue. For now, we can ask if this complex structure is explainable by unguided natural selection.

The authors do not present this as a clear evolutionary story. “The ocelloid is among the most complex subcellular structures known, but its function and evolutionary relationship to other organelles remain unclear,” they say. Never in the paper do they explain how organelles with different histories came together into a functioning eye. Most of the paper describes the parts and how they function individually, or where they might have been derived by endosymbiosis. To explain the eye’s origin as a functioning whole, they make up a phrase, “evolutionary plasticity” —

Nevertheless, the genomic and detailed ultrastructural data presented here have resolved the basic components of the ocelloid and their origins, and demonstrate how evolutionary plasticity of mitochondria and plastids can generate an extreme level of subcellular complexity.

Other than that, they have very little to say about evolution, and nothing about natural selection.

In the same issue of Nature, Richards and Gomes review the paper. They list other microbes including algae and fungi that have light-sensitive spots. Some have the rhodopsin proteins used in the rods and cones of multicellular animals. But instead of tracing eye evolution by common ancestry, they attribute all these innovations to convergence:

These examples demonstrate the wealth of subcellular structures and associated light-receptor proteins across diverse microbial groups. Indeed, all of these examples represent distinct evolutionary branches in separate major groups of eukaryotes. Even the plastid-associated eyespots are unlikely to be the product of direct vertical evolution, because the Chlamydomonas plastid is derived from a primary endosymbiosis and assimilation of a cyanobacterium, whereas the Guillardia plastid is derived from a secondary endosymbiosis in which the plastid was acquired ‘second-hand’ by intracellular incorporation of a red alga. Using gene sequences recovered from the warnowiid retinal body, Gavelis et al. investigated the ancestry of this organelle by building phylogenetic trees for the plastid-derived genes. Their analysis demonstrated that this modified plastid is also of secondary endosymbiotic origin from a red alga.

Although derived independently, there are common themes in the evolution of these eye-like structures. Many of them involve the reconfiguration of cellular membrane systems to produce an opaque body proximal to a sensory surface, a surface that in four of the five examples probably involves type 1 rhodopsins. Given the evolutionary derivation of these systems, this represents a complex case of convergent evolution, in which photo-responsive subcellular systems are built up separately from similar components to achieve similar functions. The ocelloid example is striking because it demonstrates a peak in subcellular complexity achieved through repurposing multiple components. Collectively, these findings show that evolution has stumbled on similar solutions to perceiving light time and time again.

But is convergence just a word masquerading as an explanation? We read:

The work sheds new light on how very different organisms can evolve similar traits in response to their environments, a process known as convergent evolution. Eye-like structures have evolved independently many times in different kinds of animals and algae with varying abilities to detect the intensity of light, its direction, or objects.

“When we see such similar structural complexity at fundamentally different levels of organization in lineages that are very distantly related to each other, in this case warnowiids and animals, then you get a much deeper understanding of convergence,” Leander says.

But “convergent evolution” is not a process. It is a post-hoc observation based on evolutionary assumptions. An environment has no power to force an organism to respond to it with a complex function. Light exists, whether or not an organism sees it. Magnetism exists, too; does it contain the power to nudge fish, turtles, and butterflies to employ it for navigation?

#academic-freedom, #debate, #evolution, #eye, #intelligen-design, #science, #science-news

Black holes do (not) exist and the Big Bang Theory is wrong?

  • Scientist claims she has mathematical proof black holes cannot exist
  • She said it is impossible for stars to collapse and form a singularity
  • Professor Laura Mersini-Houghton said she is still in ‘shock’ from the find
  • Previously, scientists thought stars much larger than the sun collapsed under their own gravity and formed black holes when they died
  • During this process they release a type of radiation called Hawking radiation
  • But new research claims the star would lose too much mass and wouldn’t be able to form a black hole
  • If true, the theory that the universe began as a singularity, followed by the Big Bang, could also be wrong

When a huge star many times the mass of the sun comes to the end of its life it collapses in on itself and forms a singularity – creating a black hole where gravity is so strong that not even light itself can escape.

At least, that’s what we thought.

A scientist has sensationally said that it is impossible for black holes to exist – and she even has mathematical proof to back up her claims.

If true, her research could force physicists to scrap their theories of how the universe began.


A scientist from University of North Carolina states she has mathematical proof that black holes (illustrated) can’t exist. She said it is impossible for stars to collapse and form a singularity. Previously, scientists thought stars larger than the sun collapsed under their own gravity and formed black holes as they died

The research was conducted by Professor Laura Mersini-Houghton from the University of North Carolina at Chapel Hill in the College of Arts and Sciences.

She claims that as a star dies, it releases a type of radiation known as Hawking radiation – predicted by Professor Stephen Hawking.

THE BLACK HOLE INFORMATION PARADOX

One of the biggest unanswered questions about black holes is the so-called information paradox.

Under current theories for black holes it is thought that nothing can escape from the event horizon around a black hole – not even light itself.

Inside the black hole is thought to be a singularity where matter is crushed to an infinitesimally small point as predicted by Einstein’s theory of gravity.

However, a fundamental law of quantum theory states that no information from the universe can ever disappear.

This creates a paradox; how can a black hole make matter and information ‘disappear’?

Professor Mersini-Houghton’s new theory manages to explain why this might be so – namely because black holes as we know them cannot exist.

However in this process, Professor Mersini-Houghton believes the star also sheds mass, so much so that it no longer has the density to become a black hole.

Before the black hole can form, she said, the dying star swells and explodes.

The singularity as predicted never forms, and neither does the event horizon – the boundary of the black hole where not even light can escape.

‘I’m still not over the shock,’ said Professor Mersini-Houghton.

‘We’ve been studying this problem for more than 50 years and this solution gives us a lot to think about.’

Experimental evidence may one day provide physical proof as to whether or not black holes exist in the universe.

But for now, Mersini-Houghton says the mathematics are conclusive.

What’s more, the research could apparently even call into question the veracity of the Big Bang theory.

Most physicists think the universe originated from a singularity that began expanding with the Big Bang about 13.8 billion years ago.

If it is impossible for singularities to exist, however, as partially predicted by Professor Mersini-Houghton, then that theory would also be brought into question.



During the collapse process stars release a type of radiation called Hawking radiation (shown). But Professor Mersini-Houghton claims this process means the star loses too much mass and can’t form a black hole. And this also apparently means the Big Bang theory, that the universe began as a singularity, may not be correct

THERE ARE NO BLACK HOLES, ONLY GREY HOLES, CLAIMS HAWKING

Earlier this year Professor Stephen Hawking shocked physicists by saying ‘there are no black holes’.

In a paper published online, Professor Hawking instead argues there are ‘grey holes’.

‘The absence of event horizons means that there are no black holes – in the sense of regimes from which light can’t escape to infinity,’ he says in the paper, called Information Preservation and Weather Forecasting For Black Holes.

He says that the idea of an event horizon, from which light cannot escape, is flawed.

He suggests that instead light rays attempting to rush away from the black hole’s core will be held as though stuck on a treadmill and that they can slowly shrink by spewing out radiation.

One of the reasons black holes are so bizarre is that they pit two fundamental theories of the universe against each other.

Namely, Einstein’s theory of gravity predicts the formation of black holes. But a fundamental law of quantum theory states that no information from the universe can ever disappear.

Efforts to combine these two theories have proved problematic, and the resulting conflict has become known as the black hole information paradox – how can matter permanently disappear in a black hole as predicted?

Professor Mersini-Houghton’s new theory does manage to mathematically combine the two fundamental theories, but with unwanted effects for people expecting black holes to exist.

‘Physicists have been trying to merge these two theories – Einstein’s theory of gravity and quantum mechanics – for decades, but this scenario brings these two theories together, into harmony,’ said Professor Mersini-Houghton.

‘And that’s a big deal.’

Read more: http://www.dailymail.co.uk/sciencetech/article-2769156/Black-holes-NOT-exist-Big-Bang-Theory-wrong-claims-scientist-maths-prove-it.html#ixzz3ELcu47ue

#big-bang-theory, #black-holes, #fantastic-discovery, #florida-state-university, #laura-mersini-houghton, #new-theory, #science-news

How the Brain Responds to Missing Information

“It sometimes happens that when someone asks a question, the addressee does not give an adequate answer, for instance by leaving out part of the required information. The person who posed the question may wonder why the information was omitted, and engage in extensive processing to find out what the partial answer actually means. The present study looks at the neural correlates of the pragmatic processes invoked by partial answers to questions. Two experiments are presented in which participants read mini-dialogues while their Event-Related brain Potentials (ERPs) are being measured. In both experiments, violating the dependency between questions and answers was found to lead to an increase in the amplitude of the P600 component. We interpret these P600-effects as reflecting the increased effort in creating a coherent representation of what is communicated. This effortful processing might include the computation of what the dialogue participant meant to communicate by withholding information. Our study is one of few investigating language processing in conversation, be it that our participants were ‘eavesdroppers’ instead of real interactants. Our results contribute to the as of yet small range of pragmatic phenomena that modulate the processes underlying the P600 component, and suggest that people immediately attempt to regain cohesion if a question-answer dependency is violated in an ongoing conversation.”

 

Introduction

During conversation, speakers and listeners act upon certain basic assumptions which enable them to communicate swiftly, and seemingly effortlessly [1]–[5]. If, for instance, someone asks a question, both speaker and hearer have knowledge of what would constitute a valid answer. To be more specific, a question can be said to impose constraints and create expectations regarding both the information structure (i.e., specifying what is given and what is new, and thus how the information contained in an utterance should be linked to the existing discourse representation) and the content of the answer. Consider for instance someone inquiring about the activities of two protagonists, ‘John’ and ‘Peter’:

1. What did John and Peter do?

On the level of information structure, this question introduces two entities that make them likely topics in the answer, where a topic can be loosely described as the entity about which the sentence imparts information [6]. On the content level, in turn, the question requires the answer to impart on the activities of these specific people (‘John’ and ‘Peter’), and not, for instance, about their respective spouses. Answer (2) satisfies both of these constraints.

2. John cleaned the house and Peter fixed the window.

In contrast, by leaving out information about the second protagonist, answer (3) violates expectations regarding both information structure and content. Utterance (3) is thus pragmatically infelicitous as an answer to question (1).

3. John cleaned the house.

If there is no additional information, and the answer consists of only this sentence, the person who posed the question is faced with the task of determining what the speaker meant to communicate by being incomplete. The speaker might, for instance, be taken to convey that Peter did nothing, that what he did was of no importance, or just that Peter is terribly lazy [1],[7]. The computation of such beliefs, and thus of a coherent mental representation of intended meaning, may require extensive pragmatic processing [Regel, Gunter, & Friederici [8] provide a similar argument on the computation of ironic meaning]. How the human language processor deals with this kind of processing is still poorly understood, and neurocognitive investigations of such phenomena are scarce.

This study presents two Event-Related brain Potential (ERP) experiments that examine the neural correlates of the pragmatic processes invoked by partial answers to questions. ERPs provide a means of disentangling different processes involved in online language comprehension, on the basis of the qualitatively different signatures they leave behind. There are many ERP studies on word- and sentence-level processing [Kutas, van Petten, & Kluender [9] provide an overview], but researchers have only recently started to use ERPs to investigate pragmatic processing [8], [10]–[13]. These latter studies provide evidence that pragmatic processes such as the computation of bridging inferences or of ironic meaning modulate the amplitude of the P600 component, a positive deflection of the ERP signal that usually peaks around 600 ms post stimulus onset.

Brouwer, Fitz, & Hoeks [14] have recently argued, on the basis of a thorough review of the ERP literature, that the P600 component is best defined as a family of late positivities that reflect the processing involved in the word-by-word construction, reorganization, or updating of a mental representation of what is being communicated (MRC) – see also [15], [16]. Different varieties of the P600-effect (in terms of electrophysiological properties like onset, amplitude, duration, and scalp distribution) are assumed to reflect different sub-processes of MRC construction. These sub-processes may include, among other things, the accommodation of new discourse referents, the establishment of relations between entities, thematic role assignment and revision, and for instance, the resolution of conflicts between different information sources (e.g., with respect to world knowledge). For instance, in the computation of bridging inferences, as in a sentence pair like “We went for a picnic. The beer was warm” [17], some of the sub-processes involved will concern the accommodation of the new discourse referent “The beer”. The computation of ironic meaning, on the other hand, may involve more sub-processes aimed at overcoming the conflict between the unfolding discourse and the ‘literal meaning’ of the ironic utterance—cf. “These artists are gifted!” in the context of a bad musical performance, see [8].

The present study investigates whether the processes invoked by partial answers to questions also produce an increase in P600 amplitude, which would provide strong support for the MRC hypothesis discussed above (i.e., P600 amplitude reflects ease of ‘making sense’).

Results

Experiment 1

In the first experiment, participants read short question-answer pairs that appeared word-by-word in the middle of a computer screen, and were occasionally asked to answer a comprehension question (see Procedure section below). During reading, brain activity of the participants was monitored through ERP recording. The question-answer pairs differed in the pragmatic felicity of the answer given the preceding question. We used two types of questions: ‘neutral’ questions like (4), which do not impose any strong constraints on the information structure of the answer, and questions such as (5) that require the answer to contain two topics in a so-called ‘contrastive topic’ information structure—cf. [18]. For the answers we used Dutch sentences containing NP-coordinations with a one-topic information structure, based on materials taken from [19]. In these sentences, the NP following the coordinator is temporarily ambiguous between being the subject of a new clause, or the object of the present clause. In Dutch and also in other languages, the object reading is preferred [20]. If such a one-topic answer follows a contrastive-topic question, as in (5), this constitutes a pragmatic violation: The question requires the answer to impart on the activities of two topics (“the mayor” and “the alderman”); in the answer these entities are mentioned, but only one of them (“the mayor”) turns out to be a topic.

It is important to note that in Dutch (unlike in English), the presence of the adverb at the end of the sentence unambiguously indicates that the ambiguous NP (“the alderman”) cannot be a topic, and that the sentence only has one topic. Thus at the adverb, the reader is confronted with a clear pragmatic violation. It should be noted, however, that whereas in the experiment there is no sentence following the partial answer, the missing information could in principle be given in a next sentence (e.g., question: “What did the mayor and the alderman do?”—answer: “The mayor praised the councilor and the alderman exuberantly. The alderman therefore thanked the mayor”). It would be interesting for a future experiment to manipulate the presence or absence of such an additional sentence.

4. Neutral

Q: Wat gebeurde er?

‘What happened?’

A: De burgemeester prees het raadslid en de wethouder uitbundig.

‘The mayor praised the councilor and the alderman exuberantly.’

5. Violation

Q: Wat deden de burgemeester en de wethouder?

‘What did the mayor and the alderman do?’

A: De burgemeester prees het raadslid en de wethouder uitbundig.

‘The mayor praised the councilor and the alderman exuberantly.’

Data analysis.

Participants were reading attentively, answering on average 85% (SD = 5.6) of the 35 content questions correctly. ERP waveforms were time-locked to the presentation of the critical adverb (“exuberantly”), see Figure 1.


Figure 1. ERP waveforms for the two conditions in Experiment 1:

Neutral (black line) and Violation (red line); topographic maps represent Violation minus Neutral; there is an extended pre-stimulus time-window in which the onset of the coordinator (CRD), determiner (DET), and noun (N) is indicated by arrows.

doi:10.1371/journal.pone.0073594.g001

Three time-windows for statistical analysis were chosen a priori: a window in which early effects might be observed (150–350 ms post-onset), a time-window in which possible N400 effects might be observed (350–550 ms post-onset), and a later time-window for a possible P600 (600–900 ms post-onset). For each of those intervals, average ERPs were computed for participant, condition and electrode separately. Prior to averaging, trials with ocular or amplifier-related artifacts were excluded from the analysis. For analysis purposes, three sets of electrodes were created: the three prefrontal electrodes FP1, FZA, and FP2; the two occipital electrodes O1 and O2; and the main set of the 15 remaining electrodes. For each of those sets, Repeated Measures ANOVAs were conducted with Violation (violation vs. neutral), Laterality and Anteriority as within-participant factors. In the prefrontal analysis, Laterality had 3 levels (i.e., left, midline, and right side of the scalp); in the occipital analysis, Laterality had 2 levels (i.e., left and right); for the main analysis, Laterality had 5 levels (far left, left, middle, right, far right), and Anteriority had 3 levels (anterior, central, and posterior). Where appropriate, the Huynh-Feldt correction was applied; corrected p-values will be reported with the original degrees of freedom. Only effects involving the factor Violation will be discussed.
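To make the analysis pipeline concrete, here is a minimal sketch of the window-averaging step, not the authors' actual code. It assumes an `epochs` array of shape (trial, electrode, sample), a 500 Hz sampling rate, and the critical word at a known sample index; all of those names and values are assumptions for illustration. The per-participant, per-condition means computed this way would then feed the repeated-measures ANOVAs described above.

```python
# Sketch (not the authors' pipeline): mean ERP amplitude per a priori time window.
# Assumes epochs[trial, electrode, sample] at an assumed 500 Hz sampling rate.
import numpy as np

SFREQ = 500                           # assumed sampling rate in Hz
WINDOWS = {"early": (0.150, 0.350),   # seconds relative to critical-word onset
           "N400":  (0.350, 0.550),
           "P600":  (0.600, 0.900)}

def window_means(epochs: np.ndarray, onset_sample: int, window: tuple) -> np.ndarray:
    """Average amplitude in a latency window, over trials and samples, per electrode."""
    start = onset_sample + int(window[0] * SFREQ)
    stop  = onset_sample + int(window[1] * SFREQ)
    return epochs[:, :, start:stop].mean(axis=(0, 2))

# Demo with fake data: 40 trials, 20 electrodes, 1 s before and 1.2 s after onset.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 20, int(2.2 * SFREQ)))
onset = int(1.0 * SFREQ)
print(window_means(epochs, onset, WINDOWS["P600"]).shape)   # (20,) one mean per electrode
```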

Non-standard baseline. The pre-critical word (the ambiguous NP “the alderman”) in the target sentence is introduced in the context question of the violation condition, but not in the neutral condition. This gives rise to 1) a ‘repetition’ N400-effect, where the N400 in the violation condition is attenuated (as compared to the neutral condition) through word repetition; 2) a P600 effect, due to the fact that in the neutral condition “the alderman” is a new discourse entity, whereas in the violation condition it is already given [10][14][16]. As we wanted to avoid including these effects in our baseline, we chose a baseline on the coordinator “en” (“and”) that precedes the ambiguous NP (i.e., “… and the alderman exuberantly.”). Importantly, the presence of the positivity for the neutral condition may still affect the size of subsequent effects (if we assume that ERP waves are additive), as the violation condition starts out more negative than the neutral condition at some of the electrodes. Hence, our ‘early-baseline’ procedure may overestimate the size of negativities following the target word in the violation condition. Conversely, the fact that the violation condition is more negative to begin with may have decreased the amplitude of subsequent positivities associated with the violation condition. Thus, the early-baseline procedure may underestimate the size of any positivity following the target word in the violation condition.
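The non-standard baseline can be sketched the same way: subtract, per trial and electrode, the mean amplitude over a short window starting at the coordinator “en”, rather than over the interval immediately before the critical adverb. The window length and the `epochs` layout below are assumptions, not the paper's exact parameters.

```python
# Sketch of the early-baseline procedure described above (parameters are assumptions).
import numpy as np

def baseline_to_coordinator(epochs: np.ndarray, coord_sample: int,
                            length_samples: int = 50) -> np.ndarray:
    """Subtract the per-trial, per-electrode mean over a window at the coordinator."""
    base = epochs[:, :, coord_sample:coord_sample + length_samples].mean(axis=2,
                                                                         keepdims=True)
    return epochs - base
```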

Early Time Window (150–350 ms post-onset).

In the analysis of the main set of electrodes, there was a marginally significant interaction of Violation×Anteriority (F(2,30) = 3.1; p = .08). Follow-up analyses showed that this trend towards an interaction was most probably caused by a negativity for the violation condition (as compared to the neutral condition) that was largest at the frontal electrodes (violation: 2.1 µV (SE = 0.6); neutral: 4.3 µV (SE = 1.3)), smaller at central sites (violation: 2.7 µV (SE = 0.7); neutral: 3.7 µV (SE = 1.1)) and smallest at posterior electrodes (violation: 1.5 µV (SE = 0.9); neutral: 1.6 µV (SE = 1.0)). At the prefrontal electrodes there was a marginally significant main effect of condition, again with violation being more negative than neutral (violation: 2.9 µV (SE = 0.7); neutral: 5.3 µV (SE = 1.1); F(1,15) = 3.9; p = .066). No effects were found in the analysis of the occipital electrodes.

N400 Time-Window (350–550 ms post-onset).

We did not find significant effects for the main set or for the prefrontal electrodes (all p-values > .27). At the occipital electrodes there was a marginally significant interaction of Violation×Laterality (F(1,15) = 3.5; p = .08), most probably because the positivity elicited in the violation condition was bigger at the left than at the right of the scalp (Left: violation: −0.28 µV (SE = 0.9); neutral: −1.25 µV (SE = 0.7); Right: violation: −0.65 µV (SE = 0.8); neutral: −0.88 µV (SE = 0.7)).

P600 Time-Window (600–900 ms post-onset).

The analysis on the main set of electrodes produced a significant interaction of Violation×Anteriority×Laterality (F(8,120) = 2.5; p<.05). Follow-up analyses per level of Laterality suggested that this interaction was due to a specific pattern of results for electrodes situated at the far left (Violation×Anteriority: F(2,30) = 3.3; p = .059), indicating a positivity for the violation condition that was present at T7 (violation: 3.2 µV (SE = 0.8); neutral: 1.6 µV (SE = 0.7); F(1,15) = 4.5; p = .05) and P7 (violation: 1.1 µV (SE = 1.1); neutral: −1.1 µV (SE = 1.0); F(1,15) = 6.0; p<.05), but not at F7 (violation: 1.9 µV (SE = 0.8); neutral: 1.9 µV (SE = 1.3); F<1). At the other levels of Laterality, the violation condition was always more positive than the neutral condition, but none of these differences were significant (e.g., left: violation: 3.5 µV (SE = 0.7); neutral: 1.7 µV (SE = 1.0); middle: violation: 4.2 µV (SE = 0.7); neutral: 3.1 µV (SE = 1.3); right: violation: 4.3 µV (SE = 0.7); neutral: 2.9 µV (SE = 1.1); far right: violation: 3.1 µV (SE = 0.5); neutral: 1.8 µV (SE = 1.0); all p-values > .10). Analysis of the occipital electrodes showed a significant interaction of Violation×Laterality (F(1,15) = 2.8; p<.01), due to a larger positivity for the violation condition at the left side (O1: violation: 0.9 µV (SE = 1.2); neutral: −0.7 µV (SE = 1.2)) than at the right side (O2: violation: 0.6 µV (SE = 1.1); neutral: 0.3 µV (SE = 1.1)). At prefrontal electrodes, the violation condition (4.6 µV (SE = 0.9)) was numerically more positive than the neutral condition (3.2 µV (SE = 1.3)) but this difference did not reach significance (p > .12).

Discussion.

Leaving a question partially unanswered gave rise to a significant, left-lateralized positive shift (600–900 ms after the onset of the target) which we interpret as a P600. The marginally significant effect at occipital electrodes in the “N400 time-window” suggests that this positivity already started earlier (350–550 ms post-onset), though with a different scalp distribution. These findings are consistent with the MRC hypothesis [14], where difficulties in creating a mental representation of language input are assumed to be reflected in (late) positivities. In addition to these positive effects, we found evidence for an early negativity (150–350 ms post-onset) with a frontal focus.

To start with this early negativity, Lau, Stroud, Plesch, and Phillips [21] reported a very similar finding in sentences containing a word category violation. They interpreted this effect as an Early Left Anterior Negativity or ELAN [22][23]—see [24] for a critical review. ELAN effects are typically observed when the syntactic category of the presented word does not match reader expectation. In the present study, the question in the violation condition sets up the expectation that the two protagonists in the answer act as AGENTS, each involved in a separate event (e.g., an event depicting what “the mayor” did, and another event depicting what “the alderman” did). However, instead of with the expected verb, readers were presented with an adverb. This mismatch in category may have produced the ELAN-effect.

After reading the disambiguating adverb, the reader must deal with the fact that the mental representation of the sentence, based on the assigned information structure and on the assigned thematic roles, is partially incorrect and in need of revision: “the alderman” is (i) not a topic, but should become part of the comment, and (ii) not an AGENT but a PATIENT. However, this ‘local’ revision of the mental representation created thus far will not solve the larger, more ‘global’ problem of the missing information, which may require extensive pragmatic processing. That is, after revising the interpretation to reflect that “the alderman” is a PATIENT and part of a comment, rather than an AGENT and a topic, one is still faced with the problem of what is meant by leaving out information on what “the alderman” did. Hence, to regain a coherent interpretation of the unfolding dialogue, people have to update their mental representation to reflect, for instance, that the speaker has left out the information on purpose, for instance, to communicate that “the alderman” was passive, and did nothing at all.

In the present experiment, it is not possible to separate processes of local revision and global pragmatic processes, although one might be tempted to speculate that the local revision is reflected by the early positivity in the N400 window (the size of this effect was rather small, but possibly underestimated through the early baseline procedure, see Data Analysis section above), and the global, more pragmatic processing by the later positivity. In order to disentangle these processes, we conducted a second experiment, using target sentences which did not contain the ambiguous NP (“the alderman”), thereby eliminating the need for local revision.

 

From PLOS ONE, the whole article:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0073594;jsessionid=9ECB98F0E056D62CC03F6A2DD4FE8307

#brain, #plosone, #research, #science, #science-news