
Tuesday, July 22, 2008

Video Shows Moon From Other Side

A NASA spacecraft designed to look for comets turned its cameras homeward, capturing a unique view of the moon passing in front of the Earth as seen from 31 million miles away. The spacecraft, Deep Impact, took shots at 15-minute intervals, which were combined to make the sequence shown below.
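The sequence was assembled simply by ordering frames shot at a fixed cadence. As a rough illustration (the start time, frame count, and function name below are invented for this sketch, not taken from Deep Impact's actual pipeline), the capture schedule can be written as:

```python
# A rough illustration of a fixed-cadence capture schedule (the start
# time, frame count, and function name are invented for this sketch,
# not taken from Deep Impact's actual pipeline).
from datetime import datetime, timedelta

def capture_times(start, count, cadence_minutes=15):
    """Return the timestamps of `count` evenly spaced snapshots."""
    return [start + timedelta(minutes=cadence_minutes * i) for i in range(count)]

times = capture_times(datetime(2008, 5, 29, 4, 0), count=4)
print([t.strftime("%H:%M") for t in times])  # ['04:00', '04:15', '04:30', '04:45']
```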




The latest images show the moon and Earth in greater detail than previous ones taken by orbiting spacecraft, showing oceans and continents on our planet and craters on the moon. By studying how Earth looks from so far away, the scientists hope to sharpen their search for alien worlds that may share similar characteristics.

Sara Seager, a planetary theorist at the Massachusetts Institute of Technology and a co-investigator on Deep Impact's extended mission, notes that the data are being gathered purely for planning purposes: the discovery of a good candidate alien planet is still a long way off.

But should that time come, by comparing this detailed image of Earth to a glimmering, flickering point source of light, “we want to be able to infer whether there are oceans and continents on another planet,” she said. — SARAH GRAHAM

Original here

'Tongue Drive System' Controls Wheelchair, Computer

Quadriplegics may gain a new degree of freedom via their tongues, if a new control system becomes widely available.

The new system uses that famously strong, agile and sensitive muscle, the tongue, to provide computer accessibility and wheelchair control to severely disabled people.

Designed by researchers for people with debilitating spinal cord injuries and diseases, the tongue-drive tech takes advantage of the nearly direct connection between the tongue and the brain via cranial nerves, which makes it particularly likely to remain functional, even after severe accidents.

The system has two parts: a small magnet, attached to the tip of the tongue via adhesive, piercing or implantation, and a headset with two three-dimensional magnetic sensors mounted on it. The headset picks up the location of the tongue via the magnet and transmits that information to a smartphone.

Maysam Ghovanloo, the lead Georgia Tech researcher, designed software that converts the position of the tongue into joystick or mouse movements, allowing the severely disabled to control a wheelchair or computer. The setup could provide an unprecedentedly simple and powerful means of locomotion for the disabled.
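The position-to-command step described above can be sketched in a few lines. This is a hedged illustration, not Ghovanloo's actual software: the sensor geometry (tongue displacement estimated as the difference between the two 3-axis readings), the dead-zone threshold, and the command names are all assumptions made for this sketch.

```python
# Hedged sketch (not Ghovanloo's actual software): map a tongue-magnet
# position, inferred from two 3-axis magnetic sensor readings, to a
# discrete joystick-style command. Geometry, threshold, and command
# names are illustrative assumptions.

def tongue_position(left, right):
    """Estimate tongue displacement as the difference between the
    left and right 3-axis sensor readings (x, y, z tuples)."""
    return tuple(l - r for l, r in zip(left, right))

def to_command(pos, dead_zone=0.2):
    """Convert a displacement vector into a discrete command."""
    x, y, _ = pos
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "neutral"            # tongue at rest: no movement
    if abs(x) >= abs(y):
        return "right" if x > 0 else "left"
    return "forward" if y > 0 else "back"

print(to_command(tongue_position((0.9, 0.1, 0.0), (0.1, 0.1, 0.0))))  # right
```

A real system would of course calibrate per user and smooth the sensor stream, but the core mapping from position to command is this simple.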

"This device could revolutionize the field of assistive technologies," Ghovanloo said in a release.


Image and Video: Courtesy of Georgia Tech. Credit: Gary Meek.


Original here


Are We Living in a Giant Void?

As a science fiction fan, one thing annoys me: there seem to be only so many plots a series can use, and episodes start repeating themselves from show to show. One recurring premise is a ship trapped in a void, literally nowhere near another star, surrounded by blackness.

This void, as most such episodes are titled, is similar to the descriptions that filter out of Antarctica: when explorers traverse miles and miles of white, they begin to lose the ability to tell where they are, or whether they are moving at all.


The idea of an astronomical void is not just science fiction fodder, however. According to Timothy Clifton and his colleagues Pedro G. Ferreira and Kate Land at the University of Oxford, it is a possible explanation for why our universe looks as if it is expanding at an accelerating pace.

So far, the general consensus has been that dark energy – though undetected and unexplained – is to blame for this acceleration. And although corroboration has come from several independent sources, such as the cosmic microwave background and large-scale structure, as well as improved measurements of supernovae, the consensus is riddled with uncertainties: the observed value of dark energy is 120 orders of magnitude smaller than what quantum physics predicts.

At this point, Clifton's paper, entitled 'Living in a Void: Testing the Copernican Principle with Distant Supernovae', comes into play as the basis of an alternative theory explaining what we are witnessing outside our proverbial window.

The opening line of their paper states that "a fundamental presupposition of modern cosmology is the Copernican Principle".

The Copernican Principle states that the Earth is not in a central, specially favored position, as Hermann Bondi put it in his 1952 book Cosmology. Clifton and his colleagues want to challenge this principle with an explanation that would also help us understand what we are seeing.

Their theory posits that if Earth and its surrounding neighbors really do sit in an unusual or special region of space – i.e., a void – then our perspective on the universe would be severely skewed. The local geometry of space-time would differ from what we expect, and the curvature of space around us would alter how light from distant supernovae reaches us, so that their dimness, which we currently read as evidence of an ever-accelerating, expanding universe, could instead be an artifact of our location. In fact, if the proposed void were large enough, it could do away entirely with the need for dark energy.

It is no surprise that Clifton's theory is speculative, but the best science often starts out that way. One aspect of this paper that at least one writer – Amanda Gefter, opinion editor at New Scientist – has picked up on is that blindly adhering to a scientific principle, simply because doing otherwise is too hard, is close to a scientific sin. Without turning this into a "rattle the cages" message, rules are there to be broken, and in science that is even more the case than elsewhere.

Posted by Josh Hill.

Original here

Coming to you - the search for ET

The radio telescope at Parkes is at the centre of the SETI operation. Getty Images ©

To some it's the ultimate prize in science - the discovery of life elsewhere in the universe.

Right now there are teams all around the world searching the skies in an effort to prove the existence of intelligent life beyond our planet.

In Australia, the team involved in the search for extraterrestrial intelligence - known as SETI - is about to ratchet up its capacity for analysing the data it collects with improved technology, while at the same time sharing the information with other institutions and the public.

"I think it's important because humankind is fascinated about origins of life," says Frank Stootman, who heads the team of scientists of SETI at the University of Western Sydney.

"There are different paradigms. One of the paradigms coming out of science is that perhaps life evolved in other places and if that's so is there any evidence for that. I think SETI and the Mars probes and looking for microbial life - all of these go to answering these kinds of questions," he says.

One of the benefits of this improved technology will be the ability to analyse data in real time.

"Rather than doing post-analysis which is slow and requires a lot of what we call 'eyeballing the data', this will allow us to do things in real time and that will be an immediate advantage," he says.

"Previously we logged data at the site and had to transfer the data back to the University of Western Sydney and this now will do something quite different.

"The data will come online back to us, but not only to us. The data will be available to other institutions like museums and if they have the right client software which we hope to provide them, they can actually see live what's happening and the client software will have an analysis part to it and so the people watching can actually see some of the analysis going on."
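The pattern Dr Stootman describes – a live feed that delivers each sample to multiple subscribers along with some on-the-fly analysis – can be sketched as a simple publish/subscribe loop. This is a hedged illustration of the idea, not the actual client software UWS hopes to provide; the threshold-based "analysis" step is an invented stand-in.

```python
# Hedged sketch of the real-time idea described above, not the actual
# client software UWS hopes to provide: a feed pushes each incoming
# sample to every subscriber together with a simple on-the-fly analysis.

class LiveFeed:
    def __init__(self, threshold=5.0):
        self.subscribers = []   # e.g. the lab, museums, public clients
        self.threshold = threshold

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, sample):
        # Illustrative analysis step: flag samples above a fixed threshold.
        analysis = {"strong_signal": sample > self.threshold}
        for callback in self.subscribers:
            callback(sample, analysis)

received = []
feed = LiveFeed()
feed.subscribe(lambda sample, analysis: received.append((sample, analysis)))
feed.publish(7.2)
print(received)  # [(7.2, {'strong_signal': True})]
```

The advantage over post-analysis is exactly the one Stootman names: every subscriber sees the sample and the analysis the moment it arrives, rather than after a slow batch transfer.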

Dr Stootman is hoping the high-tech equipment will soon be back at the radio telescope at Parkes in central-western New South Wales, which is at the centre of their operation.

"We pulled the gear out in March and have taken it back to our laboratories to do upgrading. We are probably about halfway through," he says.

"Our hold-up at the moment is getting some of the low-level software to communicate correctly with the machine, but basically we hope we might have something back in Parkes probably by August."

While many people might think that searching the universe for signs of intelligent life might be tantamount to rocket science, Dr Stootman wants to make their job more accessible for the rest of us.

"We're going to try to make it so that people can access it and they can actually understand what's going on," he says.

"I think it's something that interests lots of people, whether there's life out there, particularly whether there's life that's radio-aware. I think that people will enjoy having greater access to it. It's kind of fun and it's important at the same time."

Original here

Man to live on the moon

THIRTY-NINE years to the day after Neil Armstrong radioed "The Eagle has landed!" from the Sea of Tranquility, NASA has turned its eyes toward the moon, gazing both forward and backward in time.

For the next three days, Silicon Valley will be the base for planning humankind's return to the moon, as more than 400 scientists from around the world assemble at NASA's Ames Research Center for a conference on what type of science should be done when astronauts revisit Earth's nearest neighbor.

It could happen in the decade after NASA retires the space shuttle in 2010 and begins flying a new generation of rocket booster. And it won't be a temporary visit, NASA officials and scientists said Sunday.

The United States, they said, should focus on creating a permanent presence on the moon, using it as a training platform for missions to Mars and beyond.

"We're going back, and this time we're going to stay," S. Pete Worden, director of NASA Ames, said in remarks opening the lunar science conference. "This is the first step in settling the solar system."

The conference, hosted by the newly created NASA Lunar Science Institute at Ames, doesn't start officially until today.

Sunday's event at Moffett Field was a celebration for all those who remember exactly what they were doing on the historic day Apollo 11 landed in 1969, and for the generation who hadn't been born yet - those who might take the next steps on the moon.

Even as scientists and NASA officials wrestled with the philosophical and technical questions of a return to the moon, scores of kids got a chance to build and fly their own paper rockets, and to re-enact their own version of an upcoming NASA Ames robotic mission that will crash a rocket into the moon to try to discover if there is water there.

NASA also showcased a new generation of scientists, people in their 20s and even younger who are already working on reaching the moon or Mars.

Mary Beth Wilhelm is just 18 and won't begin studying physics and astronomy at Cornell University until the fall. But on Sunday, the Ames research assistant spoke on a panel of young NASA experts about the return to the moon.

Wilhelm noted that the far side of the moon would be an excellent place to build a radio telescope, and she weighed in on the value of human explorers versus robotic probes.

"A human can do in a minute what it takes a robot a day to do on Mars," she said.

American astronauts last visited the moon in 1972, and there are huge questions - both of technology and politics - about how and when astronauts will return. For now, NASA is focused on building a new generation of powerful rocket boosters that could reach the moon and beyond.

Chris McKay, a planetary scientist at NASA Ames who is working on the Constellation program - the name of the next generation of manned space flight - said the space agency needs to develop a whole new culture, along with the new hardware.

"I would argue that long-term planning has been something that NASA has not been very good at," McKay said in a speech to about 200 scientists and members of the public. "We are going to the moon to stay - and to stay means 50 years."

McKay said the return to the moon should imitate the way scientists have explored Antarctica, using an international base at the South Pole as a permanent outpost for scientific exploration.

Before NASA attempts the even more difficult, expensive and remote journey to Mars, the space agency will first have to learn how to do things like grow plants that can recycle human waste in the low-gravity, high-radiation environment of the moon, he said.

NASA also will need to learn more about how people function and relate in the most remote place where humans have ever lived.

On the moon, NASA can learn "how we get 10 people to live together productively on another world," McKay said.

But while the world's top lunar scientists prepare to wrestle at Ames with questions ranging from what kind of astronomy could be done on the moon to the chemistry of lunar dust, the 39th anniversary of Apollo 11's touchdown was a time to remember its historical footprint.

Andrew Chaikin, whose book "A Man on the Moon" was the basis for an HBO television mini-series, asked people to recall the sense of wonder people felt at the Apollo voyages.

"This was something that was in the culture; science fiction was becoming reality," Chaikin said.

"This was one of the great moments of the 20th century, not just the '60s."

Original here


Prehistoric Explosions Wiped Out Ocean Life-- And Created Petroleum


Submarine Volcano: A massive undersea volcano (much like a larger version of this one captured in 2006, South of Japan) may have been the source of much of the world's petroleum stores. Photo by Submarine Ring of Fire 2006 Exploration, NOAA Vents Program

A new study by the University of Alberta suggests that a massive undersea volcano eruption 93 million years ago was the source of much of the world’s oil.

Researchers Steven Turgeon and Robert Creaser were alerted to the prehistoric blast when they found specific levels of osmium isotopes (indicators of volcanic activity in sea water) in black shale rocks off the coast of South America and in the mountains of central Italy.

According to Turgeon and Creaser, lava fountains from the ancient eruption changed oceanic chemistry, triggering widespread extinction of marine life. This happened in a two-step process: First, as the volcano erupted, nutrients were released into the ocean, encouraging the growth of vegetation and the feeding and reproduction of marine organisms. As this overgrowth of new plant and animal populations died off, the decomposing organic matter released clouds of carbon dioxide into the ocean and atmosphere, leading to an anoxic, or oxygen-depleted, environment.

Normally, decaying materials are completely broken down in the ocean, but due to the lack of oxygen, the prehistoric organic matter settled at the bottom of the sea bed and became trapped there, forming the petroleum-rich shale deposits which are important sources of oil today.

Original here

Physicists shed light on key superconductivity riddle

This scanning tunneling microscope image of a bismuth superconducting compound shows a characteristic checkerboard pattern. The researchers believe this pattern indicates the presence of a charge density wave. Image / Doug Wise, Kamalesh Chatterjee and Michael Boyer, MIT

Led by Eric Hudson, associate professor of physics, the researchers are exploring materials that conduct electricity with no resistance at temperatures around 30 kelvin. Such materials could have limitless applications if they could be made to superconduct at room temperature.

Hudson's team is focusing on the state of matter that exists at temperatures just above the temperature at which materials start to superconduct. This state, known as the pseudogap, is poorly understood, but physicists have long believed that characterizing the pseudogap is important to understanding superconductivity.

In their latest work, published online on July 6 in Nature Physics, they suggest that the pseudogap is not a precursor to superconductivity, as has been theorized, but a competing state.

If that is true, it could completely change the way physicists look at superconductivity, said Hudson.

"Now, if you want to explain high-temperature superconductivity and you believe the pseudogap is a precursor, you need to explain both. If it turns out that it is a competing state, you can instead focus more on superconductivity," he said.

The researchers studied several samples of a bismuth compound that superconducts at high temperatures. Each has a different level of doping (the number of extra oxygen atoms, which changes the material's electrical properties), and doping influences both its superconducting and pseudogap properties.

"We've studied a variety of samples and found trends which point toward one possible identity, which is a charge-density wave," said Hudson.

Others have suggested that the pseudogap might be a charge-density wave, but this is the first systematic study, across a range of samples, of the "checkerboard" pattern that appears when the material is imaged with scanning tunneling microscopy (STM). The doping dependence of the checkerboard pattern offers strong evidence of a charge-density wave, Hudson said.

"If it is true that the pseudogap is a charge-density wave, that would be a major, major outcome because people have been looking for this for the past decade," he said.

Lead author of the paper is graduate student William Wise. Other MIT authors are graduate students Michael Boyer and Kamalesh Chatterjee, postdoctoral associate Yayu Wang, and former postdoctoral associate Takeshi Kondo.

Original here

Stoooopid .... why the Google generation isn’t as smart as it thinks

On Wednesday I received 72 e-mails, not counting junk, and only two text messages. It was a quiet day but, then again, I’m not including the telephone calls. I’m also not including the deafening and pointless announcements on a train journey to Wakefield – use a screen, jerks – the piercingly loud telephone conversations of unsocialised adults and the screaming of untamed brats. And, come to think of it, why not include the junk e-mails? They also interrupt. There were 38. Oh and I’d better throw in the 400-odd news alerts that I receive from all the websites I monitor via my iPhone.

I was – the irony! – trying to read a book called Distracted: The Erosion of Attention and the Coming Dark Age by Maggie Jackson. Crushed in my train, I had become the embodiment of T S Eliot’s great summary of the modern predicament: “Distracted from distraction by distraction”. This is, you might think, a pretty standard, vaguely comic vignette of modern life – man harassed by self-inflicted technology. And so it is. We’re all distracted, we’re all interrupted. How foolish we are! But, listen carefully, it’s killing me and it’s killing you.

David Meyer is professor of psychology at the University of Michigan. In 1995 his son was killed by a distracted driver who ran a red light. Meyer’s speciality was attention: how we focus on one thing rather than another. Attention is the golden key to the mystery of human consciousness; it might one day tell us how we make the world in our heads. Attention comes naturally to us; attending to what matters is how we survive and define ourselves.

The opposite of attention is distraction, an unnatural condition and one that, as Meyer discovered in 1995, kills. Now he is convinced that chronic, long-term distraction is as dangerous as cigarette smoking. In particular, there is the great myth of multitasking. No human being, he says, can effectively write an e-mail and speak on the telephone. Both activities use language and the language channel in the brain can’t cope. Multitaskers fool themselves by rapidly switching attention and, as a result, their output deteriorates.

The same thing happens if you talk on a mobile phone while driving – even legally with a hands-free kit. You listen to language on the phone and lose the ability to take in the language of road signs. Worst of all is if your caller describes something visual, a wallpaper pattern, a view. As you imagine this, your visual channel gets clogged and you start losing your sense of the road ahead. Distraction kills – you or others.

Chronic distraction, from which we all now suffer, kills you more slowly. Meyer says there is evidence that people in chronically distracted jobs are, in early middle age, appearing with the same symptoms of burn-out as air traffic controllers. They might have stress-related diseases, even irreversible brain damage. But the damage is not caused by overwork, it’s caused by multiple distracted work. One American study found that interruptions take up 2.1 hours of the average knowledge worker’s day. This, it was estimated, cost the US economy $588 billion a year. Yet the rabidly multitasking distractee is seen as some kind of social and economic ideal.

Meyer tells me that he sees part of his job as warning as many people as possible of the dangers of the distracted world we are creating. Other voices, particularly in America, have joined the chorus of dismay. Jackson’s book warns of a new Dark Age: “As our attentional skills are squandered, we are plunging into a culture of mistrust, skimming and a dehumanising merger between man and machine.”

Mark Bauerlein, professor of English at Emory University in Atlanta, has just written The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardises Our Future. He portrays a bibliophobic generation of teens, incapable of sustaining concentration long enough to read a book. And learning a poem by heart just strikes them as dumb.

In an influential essay in The Atlantic magazine, Nicholas Carr asks: “Is Google making us stupid?” Carr, a chronic distractee like the rest of us, noticed that he was finding it increasingly difficult to immerse himself in a book or a long article – “The deep reading that used to come naturally has become a struggle.”

Instead he now Googles his way though life, scanning and skimming, not pausing to think, to absorb. He feels himself being hollowed out by “the replacement of complex inner density with a new kind of self – evolving under the pressure of information overload and the technology of the ‘instantly available’”.

“The important thing,” he tells me, “is that we now go outside of ourselves to make all the connections that we used to make inside of ourselves.” The attending self is enfeebled as its functions are transferred to cyberspace.

“The next generation will not grieve because they will not know what they have lost,” says Bill McKibben, the great environmentalist.

McKibben’s hero is Henry Thoreau, who, in the 19th century, cut himself off from the distractions of industrialising America to live in quiet contemplation by Walden Pond in Massachusetts. He was, says McKibben, “incredibly prescient”. McKibben can’t live that life, though. He must organise his global warming campaigns through the internet and suffer and react to the beeping pleading of the incoming e-mail.

“I feel that much of my life is ebbing away in the tide of minute-by-minute distraction . . . I’m not certain what the effect on the world will be. But psychologists do say that intense close engagement with things does provide the most human satisfaction.” The psychologists are right. McKibben describes himself as “loving novelty” and yet “craving depth”, the contemporary predicament in a nutshell.

Ironically, the companies most active in denying us our craving for depth, the great distracters – Microsoft, Google, IBM, Intel – are trying to do something about this. They have formed the Information Overload Research Group, “dedicated to promoting solutions to e-mail overload and interruptions”. None of this will work, of course, because of the overwhelming economic forces involved. People make big money out of distracting us. So what can be done?

The first issue is the determination of the distracters to create young distractees. Television was the first culprit. Tests clearly show that a switched-on television reduces the quality and quantity of interaction between children and their parents. The internet multiplies the effect a thousandfold. Paradoxically, the supreme information provider also has the effect of reducing information intake.

Bauerlein is 49. As a child, he says, he learnt about the Vietnam war from Walter Cronkite, the great television news anchor of the time. Now teenagers just go to their laptops on coming home from school and sink into their online cocoon. But this isn’t the informational paradise dreamt of by Bill Gates and Google: 90% of sites visited by teenagers are social networks. They are immersed not in knowledge but in “gossip and social banter”.

“They don’t,” says Bauerlein, “grow up.” They are “living off the thrill of peer attention. Meanwhile, their intellects refuse the cultural and civic inheritance that has made us what we are now”.

The hyper-connectivity of the young is bewildering. Jackson tells me that one study looked at five years of e-mail activity of a 24-year-old. He was found to have connections with 11.7m people. Most of these connections would be pretty threadbare. But that, in a way, is the point. All internet connections are threadbare. They lack the complexity and depth of real-world interactions. This is concealed by the language.

Join Facebook or MySpace and you suddenly have “friends” all over the place. Of course, you don’t. These are just casual, tenuous electronic pings. Nothing could be further removed from the idea of friendship.

These connections are severed as quickly as they are taken up – with the click of a mouse. Jackson and everyone else I spoke to was alarmed by the potential impact on real-world relationships. Teenagers are being groomed to think others can be picked up on a whim and dropped because of a mood or some slight offence. The fear is that the idea of sticking with another through thick and thin – the very essence of friendship and love – will come to seem absurd, uncool, meaningless.

One irony that lies behind all this is the myth that children are good at this stuff. Adults often joke that their 10-year-old has to fix the computer. But it’s not true. Studies show older people are generally more adept with computers than younger. This is because, like all multitaskers, the kids are deluding themselves into thinking that busy-ness is depth when, in fact, they are skimming the surface of cyberspace as surely as they are skimming the surface of life. It takes an adult imagination to discriminate, to make judgments; and those are the only skills that really matter.

The concern of all these writers and thinkers is that it is precisely these skills that will vanish from the world as we become infantilised cyber-serfs, our entertainments and impulses maintained and controlled by the techno-geek aristocracy. They have all noted – either in themselves or in others – diminishing attention spans, inability to focus, a loss of the meditative mode. “I can’t read War and Peace any more,” confessed one of Carr’s friends. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.”

The computer is training us not to attend, to drown in the sea of information rather than to swim. Jackson thinks this can be fixed. The brain is malleable. Just as it can be trained to be distracted, so it can be trained to pay attention. Education and work can be restructured to teach and propagate the skills of concentration and focus. People can be taught to turn off, to ignore the beep and the ping.

Bauerlein, dismayed by his distracted students, is not optimistic. Multiple distraction might, he admits, be a phase, and in time society will self-correct. But the sheer power of the forces of distraction is such that he thinks this will not happen.

This, for him, puts democracy at risk. It is a form of government that puts “a heavy burden of responsibility on our citizens”. But if they think Paris is in England and they can’t find Iraq on a map because their world is a social network of “friends” – examples of appalling ignorance recently found in American teenagers – how can they be expected to shoulder that burden?

This may all be a moral panic, a severe case of the older generation wagging its finger at the young. It was ever thus. But what is new is the assiduity with which companies and institutions are selling us the tools of distraction. Every new device on the market is, to return to Eliot, “Filled with fancies and empty of meaning / Tumid apathy with no concentration”.

These things do make our lives easier, but only by destroying the very selves that should be protesting at every distraction, demanding peace, quiet and contemplation. The distracters have product to shift, and it’s shifting. On the train to Wakefield, with my new 3G iPhone, distracted from distraction by distraction, I saw the future and, to my horror, it worked.

Original here

How blind salamanders make nonsense of creationists' claims.

Illustration by Mark Alan Stamaty.

It is extremely seldom that one has the opportunity to think a new thought about a familiar subject, let alone an original thought on a contested subject, so when I had a moment of eureka a few nights ago, my very first instinct was to distrust my very first instinct. To phrase it briefly, I was watching the astonishing TV series Planet Earth (which, by the way, contains photography of the natural world of a sort that redefines the art) and had come to the segment that deals with life underground. The subterranean caverns and rivers of our world are one of the last unexplored frontiers, and the sheer extent of the discoveries, in Mexico and Indonesia particularly, is quite enough to stagger the mind. Various creatures were found doing their thing far away from the light, and as they were caught by the camera, I noticed—in particular of the salamanders—that they had typical faces. In other words, they had mouths and muzzles and eyes arranged in the same way as most animals. Except that the eyes were denoted only by little concavities or indentations. Even as I was grasping the implications of this, the fine voice of Sir David Attenborough was telling me how many millions of years it had taken for these denizens of the underworld to lose the eyes they had once possessed.

If you follow the continuing argument between the advocates of Darwin's natural selection theory and the partisans of creationism or "intelligent design," you will instantly see what I am driving at. The creationists (to give them their proper name and to deny them their annoying annexation of the word intelligent) invariably speak of the eye in hushed tones. How, they demand to know, can such a sophisticated organ have gone through clumsy evolutionary stages in order to reach its current magnificence and versatility? The problem was best phrased by Darwin himself, in his essay "Organs of Extreme Perfection and Complication":

To suppose that the eye, with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree.

His defenders, such as Michael Shermer in his excellent book Why Darwin Matters, draw upon post-Darwinian scientific advances. They do not rely on what might be loosely called "blind chance":

Evolution also posits that modern organisms should show a variety of structures from simple to complex, reflecting an evolutionary history rather than an instantaneous creation. The human eye, for example, is the result of a long and complex pathway that goes back hundreds of millions of years. Initially a simple eyespot with a handful of light-sensitive cells that provided information to the organism about an important source of the light …

Hold it right there, says Ann Coulter in her ridiculous book Godless: The Church of Liberalism. "The interesting question is not: How did a primitive eye become a complex eye? The interesting question is: How did the 'light-sensitive cells' come to exist in the first place?"

The salamanders of Planet Earth appear to this layman to furnish a possibly devastating answer to that question. Humans are almost programmed to think in terms of progress and of gradual yet upward curves, even when confronted with evidence that the past includes as many great dyings out of species as it does examples of the burgeoning of them. Thus even Shermer subconsciously talks of a "pathway" that implicitly stretches ahead. But what of the creatures who turned around and headed back in the opposite direction, from complex to primitive in point of eyesight, and ended up losing even the eyes they did have?

Whoever benefits from this inquiry, it cannot possibly be Coulter or her patrons at the creationist Discovery Institute. The most they can do is to intone that "the Lord giveth and the Lord taketh away." Whereas the likelihood that the post-ocular blindness of underground salamanders is another aspect of evolution by natural selection seems, when you think about it at all, so overwhelmingly probable as to constitute a near certainty. I wrote to professor Richard Dawkins to ask if I had stumbled on the outlines of a point, and he replied as follows:

Vestigial eyes, for example, are clear evidence that these cave salamanders must have had ancestors who were different from them—had eyes, in this case. That is evolution. Why on earth would God create a salamander with vestiges of eyes? If he wanted to create blind salamanders, why not just create blind salamanders? Why give them dummy eyes that don't work and that look as though they were inherited from sighted ancestors? Maybe your point is a little different from this, in which case I don't think I have seen it written down before.

I recommend for further reading the chapter on eyes and the many different ways in which they are formed that is contained in Dawkins' Climbing Mount Improbable; also "The Blind Cave Fish's Tale" in his Chaucerian collection The Ancestor's Tale. I am not myself able to add anything about the formation of light cells, eyespots, and lenses, but I do think that there is a dialectical usefulness to considering the conventional arguments in reverse, as it were. For example, to the old theistic question, "Why is there something rather than nothing?" we can now counterpose the findings of professor Lawrence Krauss and others, about the foreseeable heat death of the universe, the Hubble "red shift" that shows the universe's rate of explosive expansion actually increasing, and the not-so-far-off collision of our own galaxy with Andromeda, already loomingly visible in the night sky. So, the question can and must be rephrased: "Why will our brief 'something' so soon be replaced with nothing?" It's only once we shake our own innate belief in linear progression and consider the many recessions we have undergone and will undergo that we can grasp the gross stupidity of those who repose their faith in divine providence and godly design.

Mind trick yields new insights on perception

Cathryn M. Delude, McGovern Institute

Anyone who has seen an optical illusion can recall the quirky moment of realizing that the image being perceived is different from objective reality. Now, a team of scientists from MIT, Harvard and McGill has designed a new illusion involving the sense of touch, which is yielding new insights into perception and into how different senses--such as touch and sight--work together.

Ambiguous visual images are fascinating because it is often difficult to imagine seeing them any other way--until something flips within the brain and the alternative perception is revealed. This phenomenon, known as perceptual rivalry, is of great interest to neuroscience. Because rivalrous illusions produce changes in perception that are independent of changes in the stimulus itself, they may help to understand how the brain gives rise to conscious experience.

"The most familiar illusions involve vision," explains Christopher Moore, a principal investigator at the McGovern Institute for Brain Research at MIT and an assistant professor in MIT's Department of Brain and Cognitive Sciences. "But we're interested in discovering general principles of perception, and we wanted to see whether similar illusions can occur in the tactile domain."

Moore is senior author of a paper on the new illusion published on the Current Biology web site on July 17.

In the visual illusion known as the apparent motion quartet, two dots are presented at diagonally opposite corners of an imaginary square. When the pattern alternates between the two diagonals--top left/bottom right followed by top right/bottom left--people perceive the dots as moving back and forth either horizontally or vertically. After a period of time, typically a minute or two, most observers report that the axis of motion appears to flip from vertical to horizontal or vice versa.

An example of the illusion can be seen at web.mit.edu/~tkonkle/www/AmbiguousQuartet.html.
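The quartet stimulus is simple enough to sketch in code. Below is a minimal Python illustration (not the researchers' code; the function and variable names are invented) that generates the alternating diagonal dot pairs described above:

```python
# Sketch of the "apparent motion quartet" stimulus: two dots sit at
# diagonally opposite corners of an imaginary square, and the active
# diagonal alternates on every frame. Grid size and frame count are
# arbitrary choices for illustration.

def quartet_frames(n_frames, size=1.0):
    """Yield the pair of dot positions shown on each frame."""
    diag_a = [(0.0, 0.0), (size, size)]   # bottom-left / top-right
    diag_b = [(0.0, size), (size, 0.0)]   # top-left / bottom-right
    for i in range(n_frames):
        yield diag_a if i % 2 == 0 else diag_b

frames = list(quartet_frames(4))
# Consecutive frames share no dot position, so the visual system must
# infer motion as either horizontal or vertical -- exactly the
# ambiguity whose spontaneous flipping the article describes.
```

Because each dot could equally well have "moved" to the corner beside it or above it, nothing in the stimulus itself determines the perceived axis of motion; the flip between interpretations happens entirely in the brain.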

To create a tactile version of this illusion, Olivia Carter, a postdoctoral researcher at Harvard University, and Talia Konkle, a graduate student in Moore's MIT lab, used a new piezoelectric stimulator device developed by Qi Wang and Vincent Hayward at McGill University. This device, originally designed as a computer Braille display, uses a centimeter-square array composed of 60 "tactors" to deliver precisely controlled touch stimuli to the fingertips of volunteer subjects.

When volunteer subjects were given the diagonally alternating stimuli, they perceived them as moving smoothly back and forth--and just as with the visual illusion, the direction of apparent motion flipped back and forth from vertical to horizontal, on average about twice per minute, even though there was no change in the stimulus itself.

The authors went on to show that after a period of adaptation to an unambiguous horizontal or vertical stimulation (produced by activating a row of tactors in succession), subjects were more likely to perceive a subsequent ambiguous stimulus as being in the orthogonal direction. Similar after-effects are common in vision and were once thought to reflect fatigue in the brain circuits responsible for a particular perceptual interpretation, but are now thought to reflect a continual recalibration of the brain to its sensory environment. In another experiment, an ambiguous touch stimulus was interrupted by a three-second break, after which subjects tended to experience the same direction as before the break, suggesting that the prior interpretation was somehow retained in memory and used to reinterpret the ambiguous stimulus.

Real-world objects often stimulate multiple senses simultaneously, and our brains must combine these disparate stimuli into a unified interpretation of the world. The authors used their tactile illusion to explore the interaction between touch and vision. They instructed their subjects to make vertical or horizontal eye movements during the ambiguous touch stimuli. Subjects perceived that the direction of tactile motion shifted into alignment with the direction of the eye movements, but only if the head and finger were also aligned. Tilting the head sideways 90 degrees produced a shift to the other direction--suggesting that the tactile and visuomotor systems are somehow aligned with respect to the external world.

"We don't yet understand what's happening in the brain during these illusions," says Konkle. "But we think this illusion will be a useful new tool to understand more about the similarities between different sensory modalities and how they all work together."

This work was funded by the National Health and Medical Research Council of Australia, the U.S. Department of Defense, McGill University, the Natural Sciences and Engineering Research Council of Canada, the McGovern Institute for Brain Research at MIT and the Mitsui Foundation.

Genes could explain memory differences between men and women

By Richard Gray, Science Correspondent

When it comes to memory it is clear that men and women are simply not on the same wavelength.

While men may fail to match a woman's ability to remember the date of an anniversary, they are better at storing a seemingly endless cache of facts and figures.

Scientists believe they have now uncovered the reason for this difference between the sexes – the two sexes make memories in different ways.

Researchers at the Institute of Psychiatry, King's College London, have found that males use different genes from females when making the new connections in the brain that are needed to create long-term memories. They believe this might explain why men are far better at remembering "tactical" memories, such as travel directions and trivia, while women form more "emotional" memories such as birthdays, wedding anniversaries and details about the world around them.

Professor Peter Giese, who led the Medical Research Council-funded research, said they had identified two genes that seemed to be important for learning and making memories in males but not females.

He said: "It is unexpected that there should be such a difference within a species, but then we have to remember that males and females are far from identical at the genetic level as males have an X and Y chromosome while females have two X chromosomes.

"It is conceivable that the differences we found do account for the differences in the way the memories of men and women perform in different circumstances."

The researchers used mice to study the role that certain genes play in how long-term memories are made in males and females. Using a series of tests, such as a maze, they were able to show that male mice were faster at making the spatial memories that allowed them to learn a route out of the maze.

Professor Giese and his team then bred mice that lacked two key genes and found that the males were no longer able to learn the route out of a maze. The females, however, were unaffected by the loss of these genes. He said: "We see these sex differences in humans too as males and females use different strategies when it comes to remembering a route through a city, for example. In some tasks males are better than females and in other occasions females are better than males.

"These genetic differences could be very important in studying diseases like Alzheimer's, where memory is affected. Females are affected by Alzheimer's more than males, so it could mean the way females make memories is more vulnerable to disease."

His findings follow research elsewhere that is revealing just how different the brains of men and women really are.

One study at Harvard Medical School found that parts of the frontal lobe, which houses the decision-making and problem-solving functions, are larger in women compared to men. The limbic cortex, which regulates emotions, is also larger in women.

The parietal cortex, which is involved in space perception and balance, is bigger in men.

Professor Carey Cooper, a psychologist who specialises in sex differences at Lancaster University, said: "It is probably a combination of the genetics and hard wiring of the brain together with the social imprinting of gender that has led to the behavioural differences we now see between men and women."

Al Gore's call for renewable energy sets us up for a useful failure

By John Timmer

Last Thursday, former presidential candidate and Nobel Laureate Al Gore gave a speech in which he called for a national effort to get the entire US electric grid operating on a carbon-neutral basis within a decade. In its aftermath, much of the attention has been focused on whether the idea is actually achievable—Gore says it is; many say otherwise. To a large extent, however, this may not be the most important question. Even if the plan is destined for failure, it's worth considering where it would leave the country if we actually tried it.

Going green is inevitable

A grid based on a combination of renewable and nuclear energy is pretty much inevitable. Fossil fuels are a finite resource, and the world will ultimately run short on them; demand may make their price prohibitive well before that happens. Supplies of coal, which provide roughly half of the US electrical generating capacity, will last a bit longer than other fuels, suggesting we may wind up increasing our reliance on it.

But coal has several disadvantages, starting with the fact that it produces the most pollutants per unit of energy. Domestic coal is now relatively inexpensive, but that's partly a function of eased mine safety enforcement and environmental standards that allow it to be obtained by mountaintop removal. Changes to these policies could greatly increase its cost, as could any carbon tax; there will also be increased competition for the supply as global energy demand increases. The net result is that even coal doesn't look appealing in the long term.

So, the question is not so much whether there are advantages to going carbon-neutral—we will do it anyway, eventually—but rather what the advantages of getting there fast are. This being Al Gore, one of the advantages he noted was a reduced impact on the climate. Gore would have done well to mention ocean acidification as an additional problem; the scientific community's conclusions on the climate remain controversial in some circles, but there have been far fewer questions raised about the potential impact of atmospheric carbon dioxide on the oceans, and clear and accessible examples of its effects on aquatic organisms are now available.

Gore moved past the environmental concerns rapidly, however, and focused on economics. Here, his arguments echoed those of politicians who are promoting green power sources, namely that renewable power will create jobs in the US, and that's something our economy could use. The benefits are a bit oversold, given that China is probably as capable of producing solar panels and wind turbines as anyone else, but the renewable facilities themselves will be run and maintained in the US.

Gore also argues that costs for renewable power will go down, while fossil fuels will only get more expensive. Again, this is generally right, but probably an oversimplification. Costs for the silicon used in solar panels have gone down but, should we undertake a massive expansion in photovoltaic capacity such as the one Gore proposes, demand may cause them to shoot up again for a while. At the same time, if renewables successfully cut the demand for fossil fuels, the prices of those fuels may drop. Still, the long-term trends are inevitable: the supply of renewable energy is unlimited and the technology used to obtain it should become cheaper and more efficient with time.

Can we do this in a decade?

Probably not. To get a sense of the scale of the problem, consider the project just announced by Texas: $5 billion for 18.5 gigawatts' worth of electric grid, designed to get wind power from the Texas panhandle to its population centers. It will take five years to construct, and that doesn't include the cost or time of putting the generating capacity in place. Similar issues face states with solar potential, given that the best areas for solar, like the Mojave and Sonoran deserts, are sparsely populated and thus largely off the grid.

Building both the generation and transmission capacity, and the manufacturing capacity behind them, will be difficult enough to accomplish in 10 years, and is likely to strain the markets for the raw materials involved (thus raising costs). We'll have to build a parallel capacity for storing power to smooth out the low points in renewable generating systems, then build replacements for our aging nuclear capacity. Things will go wrong, won't be finished in time, and won't work to planned capacity.

To make matters more challenging, Gore is proposing that the economics of renewable power will get so good, and capacity increase so rapidly, that gasoline use would drop as plug-in hybrids and battery-powered cars take over. Thus, the electric supply would not only have to meet future needs extrapolated from today's usage patterns, but add significant additional capacity in order to power a portion of the country's commuter vehicles.

Even if the US could summon the political will to engage in this project, finishing it in a decade is almost certainly not going to happen. But that doesn't necessarily mean that trying to do it is a bad idea.

Why Gore's plan may be a useful failure

On a practical level, building some renewable facilities will be essential simply to understand how a renewable grid will work. Without starting the process, it can be difficult to tell which approaches—which form of storage, which type of photovoltaic installation, etc.—will scale and make the most sense for wide deployment. We'll also need to know whether we can reach the sort of excess generation capacity that can make Gore's goal of electric vehicles possible, or if we should be looking elsewhere (say, to biofuels) for our future transportation needs.

But right now, the lack of a clear, short-term goal for renewable power is inducing paralysis. Licensing of new nuclear plants remains a baroque process, and there has been little effort put towards finding a long-term solution to the processing and storage of nuclear wastes. The Bureau of Land Management is so indifferent that it temporarily stopped accepting new applications for solar facilities. Clean coal remains as elusive as nuclear fusion, and the project intended to be a pioneering technology demonstration now appears unlikely to be built. A pilot solar thermal project in California is apparently held up because of concerns that the electric grid that will connect it may disrupt the habitat of an endangered species.

The government is not only failing internally, but it's failing to send any signals to the business community. The development of renewable power industries is being approached with some hesitancy by private industry, as fears persist that there will be a repeat of the events of the 1980s, when fossil fuel prices dropped and the lack of a long-term energy policy helped kill the startups that had formed in the wake of the energy problems of the 1970s. Long-term planning is impossible for any such business, as there has been no direction provided about future carbon restrictions at the national level, leaving the companies facing a patchwork of state and regional planning and legislation.

Thus, the lack of any goal for the short term is contributing to a paralysis that not only leaves the nation in no position to move towards a renewable future, but hinders the ability of businesses or the public to take even the smallest steps in this direction. Adopting Gore's proposal, if nothing else, should force the federal government to streamline the approval process for renewable and nuclear generation facilities, and it will give businesses a better sense of how the future will develop.

It might also force the public to deal with some uncomfortable truths. Nuclear power will be an essential bridge to a renewable future, but the public has rarely come to terms with the presence of nuclear facilities. Even wind turbines have famously been subject to "not in my back yard" complaints. Similarly, the public wants a safe, reliable, and organized power grid, but is remarkably reluctant to pay the cost of building, maintaining, and inspecting one. Even if we reject a grand, national program, the debate may force the US public to come to terms with its internally inconsistent desires.

In short, we're going to be renewable eventually, and there are some distinct advantages to starting down that road sooner, rather than later. But, right now, neither the government nor the public appears to even know where to begin. Going renewable in a decade may not be achievable either on the practical or political levels, but simply considering what might need to be done to get there will be essential for us to make any progress at all.

The floating ecopolis

LONDON, England (CNN) -- The concept may be radical, but it might just have to be if the worst predictions of climate change are realized.

The Lilypad as imagined by architect Vincent Callebaut moored off the coast of Monaco.

The Lilypad, a floating ecopolis for climatic refugees, is the creation of Belgian architect Vincent Callebaut.

"It is," he says, "a true amphibian, half aquatic and half terrestrial city, able to accommodate 50,000 inhabitants and inviting biodiversity."

Callebaut imagines his structure at 250 times the scale of a lilypad, with a skin made of polyester fibres coated in titanium dioxide, which would react with ultraviolet light and absorb atmospheric pollution.

The Lilypad comprises three marinas and three mountain regions, with streets and structures strewn with foliage. "The goal is to create a harmonious coexistence of humans and nature," said Callebaut.

With a central fresh water lagoon acting as ballast, the whole construction would be carbon neutral utilizing solar, thermal, wind, hydraulic, tidal and osmotic energies.

With high density populations living in low-lying areas -- The Netherlands, Polynesia, Bangladesh -- the ecopolis, its creator believes, could be the answer to mass human displacement that global warming is predicted to cause.

In its most recent report, published in 2007, the Intergovernmental Panel on Climate Change predicted sea levels will rise by 60-90 cm during this century. Some climate scientists like James Hansen think that if greenhouse gas emissions aren't checked then those figures might be much, much worse.

In practice, Callebaut envisages the Lilypad sailing the seas, following currents like a futuristic cruise ship. He also thinks that it could "widen sustainability in offshore territories of the most developed countries such as Monaco".

You can't help thinking that the well-heeled residents of the Principality might have a thing or two to say about 50,000 climatic refugees bobbing around in the harbour, but you cannot fault Callebaut's ambition.

His previous creations -- showcased on his website -- reveal an imagination working at full throttle with sustainable design lying at its heart.

Anti-Smog, a prototype of depolluting architecture, and Ecomic, an ecotower rising up from the foundations of Aztec ruins, are two further examples of his eco design credentials. The Perfumed Jungle, Fields in Fields and The Fractured Monolith may sound like titles for various genres of novel but are, in fact, names for sustainable projects in Callebaut's growing portfolio.

Now all he needs is to find someone brave enough to build on the vision he has created.

Seven Best National Parks for Visiting Old Growth Forests

Serra do Divisor National Park

This park includes a huge swath of Amazon rainforest, notably the Serra do Divisor mountain range along the Brazilian-Peruvian border.

Photo by islandspice

The Amazon rainforest is as large as Western Europe or the entire United States. It covers 5 percent of the world’s land, and is thought to be the most diverse ecosystem on Earth – home to nearly 10 percent of the world’s mammals and 15 percent of the world’s terrestrial plant species.

It is home to more than 20 million people, including an estimated 220,000 people from 180 different indigenous nations. This forest ecosystem is also one of the most threatened on the planet.

Muir Woods National Monument

The ancient forest ecosystems of North America are extremely diverse. Included in this system are the boreal forest belt stretching between Newfoundland and Alaska; the coastal temperate rainforest of Northern California, Oregon, Washington, Alaska, and Western Canada; and the myriad residual areas of temperate forest surviving in more remote regions.

These forests store massive amounts of carbon, which helps to stabilize the climate. They also provide habitat for large mammals such as the grizzly bear, grey wolf, and puma.

Muir Woods National Monument is home to one of the last coastal stands of redwood in the San Francisco Bay Area.

Defensores del Chaco National Park

The temperate forest ecosystem of South America, which covers areas of Southern Chile and Argentina, represents the largest tract of essentially undisturbed temperate forest in the world.

The Great Chaco and Yungas Rainforests of Argentina are neighboring ecosystems within this forest complex. Rich in biodiversity, they are home to rare jaguars.

The forests here are being destroyed faster than almost anywhere else in the world. The rate of destruction has accelerated even further since Monsanto introduced genetically engineered soya beans to Argentina.

Lake Khovsgol National Park

The Snow Forests of Asian Russia have contiguous tracts of land ranging from the Arctic zone in northeastern Sakha to the subtropical region along the Amur and Ussuri river basins to the south. Because of its large size, the Amur-Sakhalin region shelters more types of plants and animals than any other temperate forest in the world. Many of these species are unique to this area and exist nowhere else.

The Snow Forests of Asian Russia are also home to indigenous peoples including the Nanai of the Khabarovsk region.

Photo by mr-c

Ovre Pasvik National Park

The last ancient forests of Europe encompass the last few remaining tracts in Scandinavia, together with the adjoining forests of European Russia. This contiguous forest area provides habitat for many species that require large tracts of unbroken land, such as bears, flying squirrels, and the highly endangered eagle owl.

These boreal forests are also home to tens of thousands of indigenous peoples, such as the reindeer-herding Saami.

Rinjani National Park

These contiguous forests stretch from South East Asia, across the islands of Indonesia, to Papua New Guinea and the Solomon Islands in the Pacific. The island of New Guinea, the world's second largest island, has the largest continuous tracts of primeval forest in the Asia Pacific region.

The cultural diversity of this area is astounding - more than 1,000 languages are spoken on the island of New Guinea alone.

The Paradise Forests are home to a rich diversity of species, many of which occur nowhere else on earth. The Sumatran Tiger, the Orangutan, and the Rafflesia, a one-meter-wide flower, all reside here.

Virunga National Park

Home to part of the Congo rainforest, this park lies within the second largest rainforest on earth after the Amazon. This enormous forest covers an area three times the size of France, and plays a vital role in regulating the global climate. It is the fourth largest forest carbon reservoir of any country in the world.

The gorilla, chimpanzee, and bonobo - primates that are our closest relatives - depend on the Congo for survival. This forest is also home to 270 species of mammals, of which 39 are unique to this area.

Tens of millions of people, including Bantu farmers, the Twa people, and fishing communities, depend on the Congo for their survival.

Regional causes of forest loss and degradation vary, but the primary factors are agricultural expansion, settlement, mining, shifting agricultural crops, and infrastructure development. Recent research by the World Resources Institute (WRI) indicates that, “commercial logging poses by far the greatest danger to frontier forests…affecting more than 70 percent of the world’s threatened frontiers.”

Community Connection

What can you do to help? Besides visiting these places and studying the issues facing them firsthand, check out the Rainforest Action Network and Nativeforest.org.

Are you a member of a conservation organization, or do you know someone who is? We encourage you to join our network of organizations at Matador, where you’ll find an audience of thousands of travelers and environmentally conscious people worldwide.

Ellen Wilson

Ellen Wilson is a freelance writer and photographer. Trained as a wildlife biologist, she has returned to school to obtain teaching credentials.

Wetlands could unleash "carbon bomb"

By Deborah Zabarenko, Environment Correspondent

WASHINGTON (Reuters) - The world's wetlands, threatened by development, dehydration and climate change, could release a planet-warming "carbon bomb" if they are destroyed, ecological scientists said on Sunday.

Wetlands contain 771 billion tons of greenhouse gases, one-fifth of all the carbon on Earth and about the same amount of carbon as is now in the atmosphere, the scientists said before an international conference linking wetlands and global warming.

If all the wetlands on the planet released the carbon they hold, it would contribute powerfully to the climate-warming greenhouse effect, said Paulo Teixeira, coordinator of the Pantanal Regional Environment Program in Brazil.

"We could call it the carbon bomb," Teixeira said by telephone from Cuiaba, Brazil, site of the conference. "It's a very tricky situation."

Some 700 scientists from 28 nations are meeting this week at the INTECOL International Wetlands Conference at the edge of Brazil's vast Pantanal wetland to look for ways to protect these endangered areas.

Wetlands are not just swamps: they also include marshes, peat bogs, river deltas, mangroves, tundra, lagoons and river flood plains.

Together they account for 6 percent of Earth's land surface and store 20 percent of its carbon. They also produce 25 percent of the world's food, purify water, recharge aquifers and act as buffers against violent coastal storms.

Historically, wetlands have been regarded as an impediment to civilization. About 60 percent of wetlands worldwide have been destroyed in the past century, mostly due to draining for agriculture. Pollution, dams, canals, groundwater pumping, urban development and peat extraction add to the destruction.

IMAGE PROBLEM

"Too often in the past, people have unwittingly considered wetlands to be problems in need of a solution, yet wetlands are essential to the planet's health," said Konrad Osterwalder, UN Under Secretary-General and rector of United Nations University, one of the hosts of the meeting.

So far, the impacts of climate change are minor compared to human depredations, the scientists said in a statement. As is the case with other environmental problems, it is far easier and cheaper to maintain wetlands than try to rebuild them later.

As the globe warms, water from wetlands is likely to evaporate, while rising sea levels could change wetlands' salinity or completely inundate them.

Even so, wetland rehabilitation is a viable alternative to artificial flood control for coping with the larger, more frequent floods and severe storms forecast for a warmer world.

Northern wetlands, where permanently frozen soil locks up billions of tons of carbon, are at risk from climate change because warming is forecast to be more extreme at high latitudes, said Eugene Turner of Louisiana State University, a participant in the conference.

The melting of wetland permafrost in the Arctic and the resulting release of carbon into the atmosphere may be "unstoppable" in the next 20 years, but wetlands closer to the equator, like those in Louisiana, can be restored, he said.

Teixeira admitted wetlands have an image problem with the public, which is generally well-disposed to saving the rainforest but not the swamp.

"People don't have a good impression about wetlands, because they don't know about the environmental service that wetlands provide to us," he said.

Original here

Hundreds of Dead Baby Penguins Wash Up on Rio de Janeiro's Beaches


It is difficult to imagine what must have been going through the heads of Rio de Janeiro beachgoers in recent months as they have seen hundreds of baby penguins wash up onshore dead. At last count, more than 400 penguins, swept from the shores of Patagonia and Antarctica, have been found dead on Rio de Janeiro's beaches, reports the AP's Michael Astor.

Is pollution or overfishing to blame?
Though not an uncommon occurrence -- live and dead penguins are regularly swept in by ocean currents -- officials say it is the first time that they've seen so many dead penguins washing onshore in such a short period of time. While some are suggesting pollution may be to blame for the unprecedented number of deaths, others believe overfishing may have pushed the penguins to swim too far offshore -- leaving them vulnerable to hostile currents.

Another one to pin on climate change?
Erli Costa, a biologist at Federal University, has a different theory: He thinks rapidly fluctuating weather patterns, influenced by climate change, may be altering ocean currents and making the seas more treacherous. Since most of the penguins washing up are young, he postulates that they had just left their nests in search of food and succumbed to the fast-moving currents. If true, this is especially worrisome, as it suggests that Rio de Janeiro and other regions can expect to see more such events in the coming years.

Zoos and shelters in Rio de Janeiro have been doing their best to accommodate the arrivals of some live birds, but many are feeling overwhelmed by the sheer number being swept in.

Original here