How does the Extended Evolutionary Synthesis differ from design?

Reader asks:

Further to: New call for an Extended Evolutionary Synthesis (The main problem the extended evolutionary synthesis creates for Darwinism is that evolution happens in many different ways, not just their way):

From the paper:

By contrast, the EES regards the genome as a sub-system of the cell designed by evolution to sense and respond to the signals that impinge on it. Organisms are not built from genetic ‘instructions’ alone, but rather self-assemble using a broad variety of inter-dependent resources.

A reader writes to ask,

1. “designed by evolution”?

That means that design is so obvious that you cannot get rid of it. But you cannot represent “evolution” as an agent, because “evolution” is not an agent, a force, or a cause. Evolution is just “nothing”: the way we name the passing of time, not the cause of the change.

2. “Designed by evolution to sense and respond to the signals that impinge on it” That is purely teleological, thank you.

3. “Self-assembly”? Ontogeny is not a process of assembly of parts. Aristotle called this process “epigenesis” 2,500 years ago. Kant explained that the parts and the whole are reciprocally cause and effect of each other.

4. “…using a broad variety of inter-dependent resources” This interdependence sounds a little like “irreducible complexity.” “Resources” has big teleological implications: the cell (or the organism being formed) “uses the resources” in order to… (form is the final cause of the process).

Thanks to Jablonka, Müller et al. for reminding us how evident teleology and design are in biology.

Doubtless, the extended evolutionary synthesizers will be asked by others to explain.

Should be an interesting discussion.


#academic-freedom, #intelligent-design, #science

Simulation Universe

Please follow the next lines and images about an interesting question.

That is, why inferring design on functionally specific, complex organisation and associated information, e.g.:

[Image: abu_6500c3mag]

and equally:

[Image: cell_metabolism]

. . . makes good sense.

Now, overnight, UD’s Newsdesk posted on a Space dot com article, Is Our Universe a Fake?

The article features “Philosopher Nick Bostrom, director of the Future of Humanity Institute at Oxford University.”

I think Bostrom’s argument raises a point worth pondering, one oddly parallel to the Boltzmann brain popping up by fluctuation from an underlying sea of quantum chaos argument, as he discusses “richly detailed software simulation[s] of people, including their historical predecessors, by a very technologically advanced civilization”:

>>Bostrom is not saying that humanity is living in such a simulation. Rather, his “Simulation Argument” seeks to show that one of three possible scenarios must be true (assuming there are other intelligent civilizations):

  1. All civilizations become extinct before becoming technologically mature;
  2. All technologically mature civilizations lose interest in creating simulations;
  3. Humanity is literally living in a computer simulation.

His point is that all cosmic civilizations either disappear (e.g., destroy themselves) before becoming technologically capable, or all decide not to generate whole-world simulations (e.g., decide such creations are not ethical, or get bored with them). The operative word is “all” — because if even one civilization anywhere in the cosmos could generate such simulations, then simulated worlds would multiply rapidly and almost certainly humanity would be in one.

As technology visionary Ray Kurzweil put it, “maybe our whole universe is a science experiment of some junior high school student in another universe.”>>

In short, once the conditions are set up for a large distribution of possibilities to appear, you face a significant challenge to explain why you are not in the bulk of the possibilities of a dynamic-stochastic system.

Let me put up an outline, general model:

[Image: gen_sys_proc_model]

Such a system produces an output across time that varies with mechanical and stochastic factors, exploring a space of possibilities. And in particular, any evolutionary materialist model of reality will be a grand dynamic-stochastic system, including a multiverse.

Now, too, as Wiki summarises, there is the Boltzmann Brain paradox:

>>A Boltzmann brain is a hypothesized self-aware entity which arises due to random fluctuations out of a state of chaos. The idea is named for the physicist Ludwig Boltzmann (1844–1906), who advanced an idea that the Universe is observed to be in a highly improbable non-equilibrium state because only when such states randomly occur can brains exist to be aware of the Universe. The term for this idea was then coined in 2004 by Andreas Albrecht and Lorenzo Sorbo.[1]

The Boltzmann brains concept is often stated as a physical paradox. (It has also been called the “Boltzmann babies paradox”.[2]) The paradox states that if one considers the probability of our current situation as self-aware entities embedded in an organized environment, versus the probability of stand-alone self-aware entities existing in a featureless thermodynamic “soup”, then the latter should be vastly more probable than the former.>>

In short, systems with strong stochastic tendencies tend to have distributions in their outcomes, which are dominated by the generic and typically uninteresting bulk of a population. Indeed this is the root of statistical mechanics, the basis for a dynamical understanding of thermodynamics i/l/o the behaviour of large collections of small particles.

For instance, one of my favourites (explored in Mandl) is an idealised two-state-element paramagnetic array, with atoms having N-pole up or down, a close atomic-scale physical analogue of the classic array-of-coins exercise. We can start with 500 or 1,000 coins in a string, which will of course follow a binomial distribution [3.27 * 10^150 or 1.07 * 10^301 possibilities respectively, utterly dominated by near 50-50 outcomes in no particular orderly or organised pattern], then look at an array where each of the 10^57 atoms of our solar system has a tray of 500 coins flipped, say, every 10^-13 – 10^-15 s:

[Image: sol_coin_flipr]

The outcome of such an exercise is highly predictable: no cases of FSCO/I (meaningful complex strings) will emerge, as the number of outcomes that can actually be observed is so small relative to the set of possibilities that it rounds down to effectively no search, as the graphic points out.
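The arithmetic behind the coin-flip illustration is easy to check with a few lines of standard-library Python. The ±25-heads window around the 250-250 peak is an illustrative choice of mine, not a figure from the original exercise:

```python
# Check: 500 two-state elements give 2^500 ~ 3.27 * 10^150 configurations,
# and the binomial distribution concentrates almost all of them near the
# 50-50 peak. Uses only the standard library.
from math import comb

N = 500
total = 2 ** N                                  # number of possible strings
print(f"2^{N} = {total:.2e}")                   # ~3.27e+150

# Fraction of all configurations within 25 heads of the 250-250 peak:
near_peak = sum(comb(N, k) for k in range(225, 276))
print(f"fraction within 250 +/- 25 heads: {near_peak / total:.4f}")
```

Running this shows that the overwhelming majority of the 2^500 strings sit in the narrow, disordered near-50-50 band, which is the point the graphic makes.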

This is of course an illustration of the core argument to design as credible cause on observing FSCO/I: once functionally specific complex organisation and associated information are present in a situation, they demand an observed-to-be-adequate explanation that does not require us to believe in statistical needle-in-vast-haystack search-challenge miracles:

[Image: islands_of_func_chall]

Also:

[Image: is_o_func2_activ_info]

The Captain Obvious fact of serious thinkers making similar needle-in-haystack arguments should lead reasonable people to pause before simply brushing aside the inference to design on FSCO/I, including in the world of life and in the complex, fine-tuned physics of our cosmos that sets up a world in which C-chemistry, aqueous-medium, terrestrial-planet life is feasible.

But we’re not finished yet.

What’s wrong with Bostrom’s argument, and where else does it point?

PPolish and Mapou raise a point or two:

>>

  • Simulated Universes scream Intelligent Design. Heck, Simulated Universes prove Intelligent Design.

    I can see why some Scientists are leaning in this direction. Oops/Poof does not cut it any more. Unscientific, irrational, kind of dumb.

  • ppolish,

    It’s a way for them to admit intelligent design without seeming to do so (for fear of being crucified by their peers). Besides, those who allegedly designed, built and are running the simulation would be, for all intents and purposes, indistinguishable from the Gods.

    Edit: IOW, they’re running away from religion only to fall into it even deeper.>>

In short, a detailed simulation world will be a designed world.

Likewise, high school student projects do not credibly run for 13.7 BY. Nor do PhD projects, never mind Kurzweil’s remark.

So, what is wrong with the argument?

First, an implicit assumption.

It assumes that, unless races keep killing themselves off too soon, blind chance and mechanical necessity can give rise to life, then to advanced, civilised, high-tech life that builds computers capable of detailed whole-universe simulations.

But ironically, the argument points to the likeliest, only observed cause of FSCO/I, design, and fails to address the significance of FSCO/I as a sign of design, starting with design of computers, e.g.:

[Image: mpu_model]

Where cell-based life forms show FSCO/I-rich digital information processors in action “everywhere,” e.g. the ribosome and protein synthesis:

Protein Synthesis (HT: Wiki Media)

So, real or simulation, we are credibly looking at design, and have no good empirical observational grounds to infer that FSCO/I is credibly caused by blind chance and mechanical necessity.

So, the set of alternative possible explanations includes implicitly questionable candidates and implicitly locks out credible but ideologically unacceptable ones, i.e. intelligent design of life and of the cosmos. That is, just maybe the evidence is trying to tell us that if we have good reason to accept that we live in a real physical world, as opposed to a “mere” speculation, then intelligent design of life and cosmos has a place at the table as of right, not sufferance.

And, there is such reason.

Not only is the required simulation vastly too fine-grained and fast-moving to be credibly centrally processed, but the logic of complex processing points to a vast network of coupled processors. Which is tantamount to saying we have been “simulating” on atoms etc. In short, it makes good sense to conclude that our processing elements are real-world dynamic-stochastic entities: atoms, molecules etc. in real space.

This is backed up by a principle that sets aside Plato’s Cave worlds and the like: any scheme that implies grand delusion of our senses and faculties of reasoning i/l/o experience of the world undermines its own credibility in an infinite regress of further what-if delusions.

Reduction to absurdity in short.

So, we are back to ground zero: we have reason to see that we live in a real world in which cell-based life is full of FSCO/I, and the fine tuning of the cosmos also points strongly to FSCO/I.

Thence, to the empirically and logically best warranted explanation of FSCO/I.

Design.

Thank you Dr Bostrom for affirming the power of the needle in haystack challenge argument.

Where that argument leads, is to inferring design as best current and prospective causal explanation of FSCO/I, in life and in observed cosmos alike.

Any suggestions and comments?

#cosmology, #math, #metaphysics, #philosophy, #physics, #science, #science-news, #universe

Spectacular Convergence: A Camera Eye in a Microbe

[Image: 94600_web]

They thought it was a joke. A century ago, biologists could not believe that a one-celled creature had an eye. But since the warnowiid dinoflagellate was difficult to find and grow in the lab, detailed research was rare, until now. A team from the University of British Columbia gathered specimens off the coast of BC and Japan for a closer look. They found that the structure, called an ocelloid, contains components that mimic the complex eye of higher animals. PhysOrg says:

In fact, the ‘ocelloid’ within the planktonic predator looks so much like a complex eye that it was originally mistaken for the eye of an animal that the plankton had eaten.

“It’s an amazingly complex structure for a single-celled organism to have evolved,” said lead author Greg Gavelis, a zoology PhD student at UBC. “It contains a collection of sub-cellular organelles that look very much like the lens, cornea, iris and retina of multicellular eyes found in humans and other larger animals.” [Emphasis added.]

New Scientist shares the astonishment:

It is perhaps the most extraordinary eye in the living world — so extraordinary that no one believed the biologist who first described it more than a century ago.

Now it appears that the tiny owner of this eye uses it to catch invisible prey by detecting polarised light. This suggestion is also likely to be greeted with disbelief, for the eye belongs to a single-celled organism called Erythropsidinium. It has no nerves, let alone a brain. So how could it “see” its prey?

The “retina” of this eye, a curved array of chromosomes, appears arranged to filter polarized light. The news item from the Canadian Institute for Advanced Research quotes Brian Leander, co-supervisor of the project:

“The internal organization of the retinal body is reminiscent of the polarizing filters on the lenses of cameras and sunglasses,” Leander says. “Hundreds of closely packed membranes lined up in parallel.”

And that’s not all this wonder of the sea has in its toolkit. It also has a piston and a harpoon:

Scientists still don’t know exactly how warnowiids use the eye-like structure, but clues about the way they live have fuelled compelling speculation. Warnowiids hunt other dinoflagellates, many of which are transparent. They have large nematocysts, which Leander describes as “little harpoons,” for catching prey. And some have a piston — a tentacle that can extend and retract very quickly — with an unknown function that might be used for escape or feeding.

Did This Eye Evolve?

Lest anyone think the dinoflagellate’s eye presents an easy evolutionary stepping stone to more complex eyes, the data reveal several problems. The paper in Nature claims that the ocelloids are built from “different endosymbiotically acquired components” such as mitochondria and plastids. “As such, the ocelloid is a chimaeric structure, incorporating organelles with different endosymbiotic histories.” We can treat endosymbiosis as a separate issue. For now, we can ask if this complex structure is explainable by unguided natural selection.

The authors did not present this as a clear evolutionary story. “The ocelloid is among the most complex subcellular structures known, but its function and evolutionary relationship to other organelles remain unclear,” they say. Never in the paper do they explain how organelles with different histories came together into a functioning eye. Most of the paper is descriptive of the parts and how they function individually, or where they might have been derived by endosymbiosis. To explain the eye’s origin as a functioning whole, they make up a phrase, “evolutionary plasticity” —

Nevertheless, the genomic and detailed ultrastructural data presented here have resolved the basic components of the ocelloid and their origins, and demonstrate how evolutionary plasticity of mitochondria and plastids can generate an extreme level of subcellular complexity.

Other than that, they have very little to say about evolution, and nothing about natural selection.

In the same issue of Nature, Richards and Gomes review the paper. They list other microbes including algae and fungi that have light-sensitive spots. Some have the rhodopsin proteins used in the rods and cones of multicellular animals. But instead of tracing eye evolution by common ancestry, they attribute all these innovations to convergence:

These examples demonstrate the wealth of subcellular structures and associated light-receptor proteins across diverse microbial groups. Indeed, all of these examples represent distinct evolutionary branches in separate major groups of eukaryotes. Even the plastid-associated eyespots are unlikely to be the product of direct vertical evolution, because the Chlamydomonas plastid is derived from a primary endosymbiosis and assimilation of a cyanobacterium, whereas the Guillardia plastid is derived from a secondary endosymbiosis in which the plastid was acquired ‘second-hand’ by intracellular incorporation of a red alga. Using gene sequences recovered from the warnowiid retinal body, Gavelis et al. investigated the ancestry of this organelle by building phylogenetic trees for the plastid-derived genes. Their analysis demonstrated that this modified plastid is also of secondary endosymbiotic origin from a red alga.

Although derived independently, there are common themes in the evolution of these eye-like structures. Many of them involve the reconfiguration of cellular membrane systems to produce an opaque body proximal to a sensory surface, a surface that in four of the five examples probably involves type 1 rhodopsins. Given the evolutionary derivation of these systems, this represents a complex case of convergent evolution, in which photo-responsive subcellular systems are built up separately from similar components to achieve similar functions. The ocelloid example is striking because it demonstrates a peak in subcellular complexity achieved through repurposing multiple components. Collectively, these findings show that evolution has stumbled on similar solutions to perceiving light time and time again.

But is convergence just a word masquerading as an explanation? We read:

The work sheds new light on how very different organisms can evolve similar traits in response to their environments, a process known as convergent evolution. Eye-like structures have evolved independently many times in different kinds of animals and algae with varying abilities to detect the intensity of light, its direction, or objects.

“When we see such similar structural complexity at fundamentally different levels of organization in lineages that are very distantly related to each other, in this case warnowiids and animals, then you get a much deeper understanding of convergence,” Leander says.

But “convergent evolution” is not a process. It is a post-hoc observation based on evolutionary assumptions. An environment has no power to force an organism to respond to it with a complex function. Light exists, whether or not an organism sees it. Magnetism exists, too; does it contain the power to nudge fish, turtles, and butterflies to employ it for navigation?

#academic-freedom, #debate, #evolution, #eye, #intelligen-design, #science, #science-news

Darwin’s “Horrid Doubt”: The Mind

Charles_Darwin_by_Barraud_c1881-crop.jpg

Many people in their forties today grew up with science as the business end of naturalist atheism. In their view, a “scientific” explanation is one that describes a universe devoid of meaning, value, or purpose. That is how we know it is a scientific explanation.

Science wasn’t always understood that way, and the new approach has consequences. It means, for example, that multiverse cosmology can consist entirely of evidence-free assumptions. Yet only a few question whether it is science.

Indeed, physicist Carlo Rovelli sounds distinctly old-fashioned when he says, “Science does not advance by guessing.” That depends on what you count as an advance. If science means projects such as ruling out the Big Bang and fine-tuning of the universe — irrespective of evidence, because they smack of theism — then guessing is an accepted and acceptable strategy.

Similarly, origin-of-life studies are “scientific” to the extent that they seek an origin without any intelligent cause. A century and a half of dead ends prompts no rethink; neither would a millennium. Even if probability theorists can show, beyond reasonable doubt, that an intelligent cause is required, their correct explanation would be rejected because it is not “scientific.”

And in studies of human evolution, the starting point is that “humans are evolved primates, an unexceptional twig on the tree of life, though like other twigs, we are accidental outliers.” Again, no one seeks to demonstrate that proposition. And no finding that doesn’t support that interpretation can be considered “science.” Any thesis that does support it, even that humans are chimp-pig hybrids, may be considered science.

So the “scientific” approach to that least material of entities, the human mind, means interpreting it in a naturalist and materialist way.

Darwin had doubts about how the Cambrian period fitted his theory. But his “horrid doubt” concerned the human mind:

But then with me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of the lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?

In future articles, we will look at the “hard problem” of consciousness and the conundrums that free will, altruism, and religion create for naturalism. Plus a side trip into naturalism’s pop culture expressions: “evolutionary” claims about psychology, politics, business, and art. These claims are often taken seriously by opinion leaders. After all, however exotic, they need only be fully naturalist to qualify, at least potentially, as science.

Most partial or whole explanations of the human mind propose one of the following models:

  • The brain randomly generates illusions that self-organize as a “mind.” Behavior is thus better accounted for by the study of neurons (neuroscience) than the study of the illusory “mind.”
  • Our hominoid ancestors passed on hypothetical genes via natural selection acting on random mutation. These claimed (not demonstrated) genes result in our attitudes, values, beliefs, and behavior — mistakenly seen as the outcome of thought processes (evolutionary psychology).
  • Identified genes determine behavior in the present day, the way a light switch controls a circuit. These include the “bad driver” gene, the infidelity gene, and the liberal gene, for starters. Whether or not such claims correspond to how genes work, the pop science media deems them plausible because they are naturalist. They bypass widespread illusions such as moral and intellectual choice.
  • Our primate cousins’ behavior can explain ours, because we are 98 percent chimpanzee. Naturalism means never having to ask commonsense questions like: If chimps’ behavior explains ours, why didn’t they develop as we did? Naturalism simply does not process such questions. It is true without evidence, and cannot be confuted by evidentiary failures.
  • Artificial intelligence enthusiasts hope to create conscious machines with superior intelligence, in short, a material mind. 2020 is the current apocalypse year according to some. We’ll swing by that approach, if only because so many people take it seriously. Again, however preposterous, if it is naturalist, it is science.

Ironically, while Darwin may have doubted the fully naturalized mind and felt horrid about it, most of his latter-day supporters believe and feel good. And, on its own terms, their faith cannot be disconfirmed.

My “Science Fictions” series on cosmology is here, origin of life is here, and human evolution is here.

#darwin, #id, #science

World First As Message Sent From Brain To Brain

A man wears a brain-machine interface.

A technique known as electroencephalography recorded the thoughts.

In a world first, a team of researchers has achieved brain-to-brain transmission of information between humans.

The team managed to send messages from India to France – a distance of 5,000 miles – without performing invasive surgery on the test subjects.

There were four participants in the study, aged between 28 and 50.

One was assigned to a brain-computer interface to transmit the thought, while the three others were assigned to receive the thought.

The first participant, located in India, was shown words translated into binary, and had to envision actions for each piece of information.

For example, they could move their hands for a 1 or their legs for a 0.

A technique known as electroencephalography – which monitors brain signals from the outside – was used to record the thoughts as outgoing messages and send them via the internet.

At the other end, electromagnetic induction was used to stimulate the brain’s visual cortex from the outside and pass the signal on successfully to the three other participants in France.
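The bit-level scheme described above can be sketched in a few lines of Python: the sender's word is rendered as bits, each bit maps to a motor-imagery action (hands = 1, legs = 0), and the receiver regroups the bits into characters. The helper names and the example word are illustrative, not taken from the study:

```python
# Illustrative sketch of the encoding described above (hands = 1, legs = 0).
# Function names are hypothetical; they are not from the study itself.

def word_to_bits(word: str) -> str:
    """Render each character of the word as 8 ASCII bits."""
    return "".join(f"{ord(c):08b}" for c in word)

def bits_to_actions(bits: str) -> list[str]:
    """Map each bit to the motor action the sender imagines."""
    return ["hands" if b == "1" else "legs" for b in bits]

def bits_to_word(bits: str) -> str:
    """Receiver side: regroup the bits into 8-bit characters."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = word_to_bits("hola")        # example word
print(bits)                        # 01101000011011110110110001100001
print(bits_to_actions(bits)[:4])   # ['legs', 'hands', 'hands', 'legs']
print(bits_to_word(bits))          # round-trips back to "hola"
```

This makes clear why the reported transfer was so slow: every character costs eight imagined movements on the sending side and eight induced flashes on the receiving side.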

The report’s co-author, Alvaro Pascual-Leone, said: “We wanted to find out if one could communicate directly between two people by reading out the brain activity from one person and injecting brain activity into the second person, and do so across great physical distances by leveraging existing communication pathways.

“One such pathway is, of course, the internet, so our question became, ‘Could we develop an experiment that would bypass the talking or typing part of the internet and establish direct brain-to-brain communication between subjects located far away from each other in India and France?’”

The research team was made up of researchers from Harvard University, as well as experts from France and Spain.

#brain-to-brain-transmission, #news, #science

Self-Folding Robot Suggests Answer to Common Objection to Intelligent Design

An article that recently appeared on Google News, “Origami robot doesn’t need a human to assemble itself and start working,” has a fascinating video of a self-folding robot that mimics the way proteins or insect wings spontaneously fold into their functional form.

This suggests an answer to a common objection to the theory of intelligent design. The objection, stated in various forms by thinkers dating back to David Hume, goes something like this:

We do observe that intelligent designers build complex technology. But they never build things that grow, reproduce, or evolve — i.e., we humans never produce things like life. Thus it’s inappropriate to analogize between human-designed technology and living organisms because human-designed technology lacks key features of life. This causes the argument for design in nature based upon nature’s similarities to human designs to break down.

This objection has always seemed less than compelling to me. Consider reproduction. True, humans haven’t (yet) produced technology capable of self-replication or reproduction in the biological sense, but why should that count against the argument for design? Surely something that cannot reproduce or self-replicate is less complex than something that can. But if human technology (which cannot reproduce) is less complex than biological systems, yet it is designed, doesn’t that suggest a fortiori that living organisms — which are more complex and can reproduce — were designed? In other words, the flaw in the analogy seems to strengthen the argument for design rather than weaken it.

Moreover, the objection is based upon the presumption that human technology will never reproduce. Who is to say what human technology will be able to do in the future? We’re now starting to build self-folding robots. Why is it so hard to imagine that in the future, human technology might reproduce and grow and self-assemble? (In fact, computer simulations can reproduce all of these capacities already.) This objection seems to retreat into the gaps as human technology becomes more and more advanced. And, incidentally, much of that progress comes as human technology mimics nature.

In short, the objection claims that differences between human technology and natural structures count against intelligent design in nature. But I think the logic of the objection is backwards. Here’s how I would frame it:

  • (a) If intelligent causes make more complex and efficient designs than unintelligent causes,
  • (b) and nature’s designs are more complex and efficient than human technology,
  • (c) and human technology is designed,
  • then (d) nature’s features must also exhibit design.

True, human technology and natural features are not always identical. But those differences tend to point towards design in nature rather than against it.

#intelligent-design, #news, #science

Fire and water – how global warming is making weather more extreme and costing us money

Trees burn as flames move towards the City of Berkeley’s Tuolumne Family Camp near Groveland, California in August 2013. Global warming creates conditions that intensify wildfires and the costs of fighting them. Photograph: Noah Berger/EPA

Connecting the dots between human-caused global warming and specific extreme weather events has been a challenge for climate scientists, but recent research has made significant advances in this area. Links have been found between some very damaging extreme weather events and climate change.

For example, research has shown that a “dipole” has formed in the atmosphere over North America, with a high pressure ridge off the west coast, and a low pressure trough over the central and eastern portion of the continent.

Departure of the November 2013 – January 2014 250 hPa geopotential height from the normal climatology. Source: Wang et al. (2014), Geophysical Research Letters

These sorts of pressure ridges in the atmosphere are linked to “waves” in the jet stream. Research has shown that when these jet stream waves form, they’re accompanied by more intense extreme weather. The high pressure zone off the west coast of North America has been termed the “Ridiculously Resilient Ridge” due to its persistence over the past two years. It’s been the main cause of California’s intense drought by pushing rain storms around the state.

California drought as of 26 August 2014. 58% of the state is in ‘exceptional drought’ conditions. Source: United States Drought Monitor

A paper led by S.-Y. Wang of Utah State University found the high pressure ridge is linked to a precursor of the El Niño Southern Oscillation (ENSO), but also that human-caused global warming has amplified the strength of these ridges. The authors concluded,

It is important to note that the dipole is projected to intensify, which implies that the periodic and inevitable droughts California will experience will exhibit more severity.

Similarly, a recent paper led by Kevin Trenberth and published in Nature Climate Change concluded,

Increased heating from global warming may not cause droughts but it is expected that when droughts occur they are likely to set in quicker and be more intense.

Another study recently published in the Journal of Climate examined data from past climate changes, and found that climate models are underestimating the likelihood of intense droughts in the southwestern USA due to global warming.

In the US Southwest, for instance, state-of-the-art climate model projections suggest the risk of a decade-scale megadrought in the coming century is less than 50%; our analysis suggests that the risk is at least 80%, and may be higher than 90% in certain areas. The likelihood of longer lived events (> 35 years) is between 20% and 50%, and the risk of an unprecedented 50 year megadrought is non-negligible under the most severe warming scenario (5-10%).

There are several ways in which global warming intensifies drought. Hotter temperatures increase evaporation from soil and reservoirs. They cause more precipitation to fall as rain and less as snow, which is problematic for a region like California that relies on the snowpack in the Sierra Nevada mountains as its natural water storage system. Hotter temperatures also cause the snowpack to melt earlier in the year. The problem can be alleviated by building more water storage infrastructure, but that costs money.

On top of all that, there’s the apparent strengthening of high pressure ridges off the coast, pushing rain storms around California. Research suggests that there may be a connection between these ridges and the decline in Arctic sea ice, although this connection is debated among climate experts.

#climate, #disaster, #publication, #science