Friday, February 29, 2008

Five myths about the satellite smash-up

Last week's Pentagon operation to bring down a falling spy satellite may have been widely termed a "shootdown" of precision accuracy — but the reality is more complex, and much messier.

As military officials take stock of the event's physical and political fallout, it's worth dispelling some of the misconceptions and myths that could otherwise cloud the thinking of policymakers and the public during the debate over past and future "shootdowns."

Myth No. 1: The missile "shot down" the satellite.

Reality: Hitting a satellite with a missile is not at all like hitting a bird with a bullet and watching it plummet to the ground. An orbiting satellite stays in orbit not because of its power or guidance, but merely because of its forward speed. An attack that does not substantially change that orbital velocity cannot drive the satellite out of orbit, no matter how much physical damage it does.

The only practical way to remove such targets from orbit is by slowing them down. In practice, that occurs as a result of air drag, an effect that can take hours, weeks, or centuries depending on the thickness of the air at the satellite’s altitude. Breaking a big spacecraft into smaller pieces does increase the effects of air drag — as demonstrated dramatically last week — but it is the key role of air drag that makes the critical causal link between "shooting" and "downing" the target.
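
To put rough numbers on this, here is a minimal back-of-the-envelope sketch in Python. The altitude of about 250 kilometers is an illustrative assumption, not a reported parameter of the intercepted satellite.

```python
# Why "downing" a satellite means removing speed, not just hitting it.
# Circular-orbit speed from standard constants; the altitude is an illustrative assumption.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of Earth, kg
R = 6.371e6        # mean radius of Earth, m
alt = 250e3        # assumed altitude, m

r = R + alt
v = math.sqrt(G * M / r)   # speed needed to stay in this circular orbit
print(f"Orbital speed at {alt/1e3:.0f} km: {v/1e3:.2f} km/s")   # roughly 7.8 km/s

# Unless an impact (or, over time, air drag) removes a large fraction of that forward
# speed, the wreckage keeps orbiting. Shattering the craft mainly helps by raising each
# fragment's area-to-mass ratio, so drag bleeds off that speed much faster.
```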

Myth No. 2: Falling satellites aren’t really hazardous, and since they’ve never hurt anybody before, they were unlikely to hurt anybody this time. Hence, there must have been a secret "real reason" for the missile mission.

Reality: First, counting on a string of successfully dodged bullets is no open-ended guarantee of being bullet-proof forever. The odds have a way of catching up with you, and defying them is an all-too-common fallacy called “normalization of deviance.” At NASA, this attitude laid the foundation for the Challenger and Columbia shuttle disasters.

Second, past safe outcomes did not come about because countries simply let their big satellites fall to Earth at random. Just the opposite is true — for decades, major spacefaring powers have taken deliberate and expensive steps to mitigate the ground-impact hazards of satellites.

All Russian spacecraft and U.S. military satellites heavier than 15,000 pounds are deliberately steered into untraveled expanses of the far southern Pacific Ocean. NASA steered its Compton Gamma Ray Observatory into a precisely planned atmospheric re-entry in 2000, and tried (but failed) to do the same with the Skylab space station in 1979.

In last week's case, the Pentagon said it resorted to the missile-intercept option because the spy satellite's guidance system was inoperable. Now, the mix of motivations for making the missile attack can be debated — but the up-front official claim about mitigating hazard cannot be glibly dismissed.

Myth No. 3: The hydrazine on the spy satellite was unlikely to reach the ground in any concentration worth worrying about.

Reality: Space officials were so concerned about the satellite's full tank of hydrazine fuel because they believed it had frozen solid, due to the low temperatures aboard the spacecraft. They feared that the titanium-shielded "toxic iceberg" would survive intact all the way to the ground and disperse around the crash site, not in the upper atmosphere. Safety officials had never been faced with this type of falling material before.

How dangerous is hydrazine? The chemical is considered toxic as well as flammable. U.S. space workers have indeed survived massive short-term exposure to the chemical during fueling accidents, but they did so due to the immediate application of pre-deployed safety measures.

The U.S. might have been held legally responsible for damage following the impact of such a hazardous cargo in a region with active agricultural exports or tourism.

As with the Palomares incident 42 years ago, in which two U.S. nuclear weapons fell to earth in Spain after an aircraft accident, people outside the region might be so spooked that they stop buying the regional exports and stop visiting its recreational facilities. The lost business alone could have cost hundreds of millions of dollars — compared with the estimated $60 million cost of the missile intercept.

Myth No. 4: The missile was aimed directly at the fuel tank, in order to pierce it and let the hazardous contents leak out.

Reality: Sure, the fuel tank was the missile's main target — but the missile didn’t have to hit the tank to crack it open. It’s hard to imagine how the warhead’s guidance system could have spotted the tank anyhow, inside the blob that was the image of the entire satellite. Hitting the target dead center and thus smashing the entire satellite to smithereens was the easiest way to ensure maximum damage to the tank.

Myth No. 5: The satellite disintegrated into more than 3,000 pieces because the fuel exploded.

Reality: Some Pentagon officials seemed to imply this, as evidence that they had achieved the goal of destroying the tank. But the kinetic energy involved in the ultra-high-speed collision was more than enough to cause the violent shattering — it certainly was orders of magnitude greater than the chemical energy that would have been liberated from the ignition of the entire fuel supply, even assuming it wasn’t frozen. That collisional energy was also the reason that some pieces of the target satellite got thrown forward so energetically, even though the missile hit the satellite from the front.

Most of the pieces fell through the atmosphere and burned up within a couple of days of the intercept. As of Tuesday, the Air Force Space Command was reportedly tracking 17 fragments that were still in orbit.

What's the harm in just letting all these myths lie? The danger is that the topic of weapons in space is a serious one requiring serious debate, especially in this election year. Hanging onto the technical myths could lead to misconceptions on one side of the debate ("our missiles were so accurate they could make a precision strike on the fuel tank") or the other ("the shootdown created a cloud of toxic debris that's still in orbit").

If we can "shoot down" the fuzzy thinking that has frustrated a serious exchange of views on this important national security issue, that would represent a much more enduring contribution to the safety of this planet than just protecting one random spot from half a ton of plummeting poison.

© 2008 MSNBC Interactive

Original here

British scientists create 'revolutionary' drug that prevents breast cancer developing

The drug could 'vaccinate' women against the disease

A drug that could prevent thousands of young women developing breast cancer has been created by scientists.

If given regularly to those with a strong family history of the cancer, researchers say it could effectively "vaccinate" them against a disease they are almost certain to develop.

The drug, which attacks tumours caused by genetic flaws, could spare those who have the rogue genes the trauma of having their breasts removed.

Currently, a high proportion of women told they have inherited the rogue genes choose to have a mastectomy as a preventative measure.

Researchers hope such a "vaccine" will be available within a decade. Flawed BRCA genes, which can be inherited from either parent, are responsible for around 2,000 of the 44,000 cases of breast cancer each year in the UK.

Women with the rogue genes have an 85 per cent chance of developing the disease - eight times that of the average woman.

Initial tests suggest that the drug, known only as AG014699, could also be free of the side-effects associated with other cancer treatments, including pain, nausea and hair loss.

The drug, which is being tested on patients in Newcastle upon Tyne, works by exploiting the "Achilles' heel" of hereditary forms of breast cancer - their limited ability to repair damage to their DNA.

Normal cells have two ways of fixing themselves, allowing them to grow and replicate, but cells in BRCA tumours have only one.

The drug, which is part of the class of anti-cancer medicines called PARP inhibitors, blocks this mechanism and stops the tumour cells from multiplying.

The researchers say the drug could also be used against other forms of cancer, including prostate and pancreatic, although further tests are needed.

Researcher Dr Ruth Plummer, senior lecturer in medical oncology at Newcastle University, said: "The implications for women and their families are huge because if you have the gene, there is a 50 per cent risk you will pass it on to your children. You are carrying a time bomb."

Original here

A "Reset Button" for the Brain Could Cure Alzheimers

With a little help, our brains can be trained to heal themselves. After a traumatic brain injury, some of your brain cells go into reset mode, reverting to a stem cell-like state. Using these "reset cells," a group of German researchers were able to coax the brains of injured mice to regrow neurons to replace damaged tissue (the images above are micrographs of the cells regrowing over time).

Though their methods are far from perfect, this breakthrough could help replace dead or damaged brain cells in people suffering from Alzheimer's, as well as those with other types of brain injury. It's just a matter of extending the brain's natural self-healing powers.

According to an article in the Proceedings of the National Academy of Sciences today:

Magdalena Götz and colleagues found that cells called astrocytes expand and multiply after brain injury. The authors induced brain injury in mice, then observed as quiescent astrocytes activated themselves and became reactive, causing reactive gliosis, which is the universal cellular reaction to brain injury. The researchers found that the reactive astrocytes remained astrocytes in the cerebral cortex, whereas in a cell culture they could be coaxed to switch to different brain cell types, including neurons. These results identify astrocytes as a source of stem cells in the injury site and show that other types of brain cells do not have this potential. The authors conclude that the cells provide a promising cell type to initiate repair in humans after brain injury.

Original here

Thursday, February 28, 2008

Numbers Guy

According to Stanislas Dehaene, humans have an inbuilt “number sense” capable of some basic calculations and estimates. The problems start when we learn mathematics and have to perform procedures that are anything but instinctive.

One morning in September, 1989, a former sales representative in his mid-forties entered an examination room with Stanislas Dehaene, a young neuroscientist based in Paris. Three years earlier, the man, whom researchers came to refer to as Mr. N, had sustained a brain hemorrhage that left him with an enormous lesion in the rear half of his left hemisphere. He suffered from severe handicaps: his right arm was in a sling; he couldn’t read; and his speech was painfully slow. He had once been married, with two daughters, but was now incapable of leading an independent life and lived with his elderly parents. Dehaene had been invited to see him because his impairments included severe acalculia, a general term for any one of several deficits in number processing. When asked to add 2 and 2, he answered “three.” He could still count and recite a sequence like 2, 4, 6, 8, but he was incapable of counting downward from 9, differentiating odd and even numbers, or recognizing the numeral 5 when it was flashed in front of him.

To Dehaene, these impairments were less interesting than the fragmentary capabilities Mr. N had managed to retain. When he was shown the numeral 5 for a few seconds, he knew it was a numeral rather than a letter and, by counting up from 1 until he got to the right integer, he eventually identified it as a 5. He did the same thing when asked the age of his seven-year-old daughter. In the 1997 book “The Number Sense,” Dehaene wrote, “He appears to know right from the start what quantities he wishes to express, but reciting the number series seems to be his only means of retrieving the corresponding word.”

Dehaene also noticed that although Mr. N could no longer read, he sometimes had an approximate sense of words that were flashed in front of him; when he was shown the word “ham,” he said, “It’s some kind of meat.” Dehaene decided to see if Mr. N still had a similar sense of number. He showed him the numerals 7 and 8. Mr. N was able to answer quickly that 8 was the larger number—far more quickly than if he had had to identify them by counting up to the right quantities. He could also judge whether various numbers were bigger or smaller than 55, slipping up only when they were very close to 55. Dehaene dubbed Mr. N “the Approximate Man.” The Approximate Man lived in a world where a year comprised “about 350 days” and an hour “about fifty minutes,” where there were five seasons, and where a dozen eggs amounted to “six or ten.” Dehaene asked him to add 2 and 2 several times and received answers ranging from three to five. But, he noted, “he never offers a result as absurd as 9.”

In cognitive science, incidents of brain damage are nature’s experiments. If a lesion knocks out one ability but leaves another intact, it is evidence that they are wired into different neural circuits. In this instance, Dehaene theorized that our ability to learn sophisticated mathematical procedures resided in an entirely different part of the brain from a rougher quantitative sense. Over the decades, evidence concerning cognitive deficits in brain-damaged patients has accumulated, and researchers have concluded that we have a sense of number that is independent of language, memory, and reasoning in general. Within neuroscience, numerical cognition has emerged as a vibrant field, and Dehaene, now in his early forties, has become one of its foremost researchers. His work is “completely pioneering,” Susan Carey, a psychology professor at Harvard who has studied numerical cognition, told me. “If you want to make sure the math that children are learning is meaningful, you have to know something about how the brain represents number at the kind of level that Stan is trying to understand.”

Dehaene has spent most of his career plotting the contours of our number sense and puzzling over which aspects of our mathematical ability are innate and which are learned, and how the two systems overlap and affect each other. He has approached the problem from every imaginable angle. Working with colleagues both in France and in the United States, he has carried out experiments that probe the way numbers are coded in our minds. He has studied the numerical abilities of animals, of Amazon tribespeople, of top French mathematics students. He has used brain-scanning technology to investigate precisely where in the folds and crevices of the cerebral cortex our numerical faculties are nestled. And he has weighed the extent to which some languages make numbers more difficult than others. His work raises crucial issues about the way mathematics is taught. In Dehaene’s view, we are all born with an evolutionarily ancient mathematical instinct. To become numerate, children must capitalize on this instinct, but they must also unlearn certain tendencies that were helpful to our primate ancestors but that clash with skills needed today. And some societies are evidently better than others at getting kids to do this. In both France and the United States, mathematics education is often felt to be in a state of crisis. The math skills of American children fare poorly in comparison with those of their peers in countries like Singapore, South Korea, and Japan. Fixing this state of affairs means grappling with the question that has taken up much of Dehaene’s career: What is it about the brain that makes numbers sometimes so easy and sometimes so hard?

Dehaene’s own gifts as a mathematician are considerable. Born in 1965, he grew up in Roubaix, a medium-sized industrial city near France’s border with Belgium. (His surname is Flemish.) His father, a pediatrician, was among the first to study fetal alcohol syndrome. As a teen-ager, Dehaene developed what he calls a “passion” for mathematics, and he attended the École Normale Supérieure in Paris, the training ground for France’s scholarly élite. Dehaene’s own interests tended toward computer modelling and artificial intelligence. He was drawn to brain science after reading, at the age of eighteen, the 1983 book “Neuronal Man,” by Jean-Pierre Changeux, France’s most distinguished neurobiologist. Changeux’s approach to the brain held out the tantalizing possibility of reconciling psychology with neuroscience. Dehaene met Changeux and began to work with him on abstract models of thinking and memory. He also linked up with the cognitive scientist Jacques Mehler. It was in Mehler’s lab that he met his future wife, Ghislaine Lambertz, a researcher in infant cognitive psychology.

By “pure luck,” Dehaene recalls, Mehler happened to be doing research on how numbers are understood. This led to Dehaene’s first encounter with what he came to characterize as “the number sense.” Dehaene’s work centered on an apparently simple question: How do we know whether numbers are bigger or smaller than one another? If you are asked to choose which of a pair of Arabic numerals—4 and 7, say—stands for the bigger number, you respond “seven” in a split second, and one might think that any two digits could be compared in the same very brief period of time. Yet in Dehaene’s experiments, while subjects answered quickly and accurately when the digits were far apart, like 2 and 9, they slowed down when the digits were closer together, like 5 and 6. Performance also got worse as the digits grew larger: 2 and 3 were much easier to compare than 7 and 8. When Dehaene tested some of the best mathematics students at the École Normale, the students were amazed to find themselves slowing down and making errors when asked whether 8 or 9 was the larger number.

Dehaene conjectured that, when we see numerals or hear number words, our brains automatically map them onto a number line that grows increasingly fuzzy above 3 or 4. He found that no amount of training can change this. “It is a basic structural property of how our brains represent number, not just a lack of facility,” he told me.
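
A toy simulation makes the fuzzy-number-line idea concrete. This is only an illustrative sketch, not Dehaene's model: each number is coded as its logarithm plus Gaussian noise (the noise level is an arbitrary assumption), and a comparison counts as hard in proportion to how often the noisy codes come out in the wrong order.

```python
# Toy model of a logarithmically compressed, noisy mental number line.
import math, random

random.seed(0)
NOISE = 0.12   # noise on the log scale; arbitrary assumption

def misorder_rate(a, b, trials=20000):
    """Fraction of trials in which noisy codes for a and b come out in the wrong order."""
    errors = 0
    for _ in range(trials):
        xa = math.log(a) + random.gauss(0, NOISE)
        xb = math.log(b) + random.gauss(0, NOISE)
        if (xa > xb) != (a > b):
            errors += 1
    return errors / trials

# Distance effect: far-apart digits are easy, close ones are hard.
print("2 vs 9:", misorder_rate(2, 9))   # essentially never confused
print("5 vs 6:", misorder_rate(5, 6))   # confused noticeably often

# Size effect: the same gap gets harder as the numbers grow,
# because log spacing compresses the upper end of the line.
print("2 vs 3:", misorder_rate(2, 3))
print("7 vs 8:", misorder_rate(7, 8))
```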

In 1987, while Dehaene was still a student in Paris, the American cognitive psychologist Michael Posner and colleagues at Washington University in St. Louis published a pioneering paper in the journal Nature. Using a scanning technique that can track the flow of blood in the brain, Posner’s team had detailed how different areas became active in language processing. Their research was a revelation for Dehaene. “I remember very well sitting and reading this paper, and then debating it with Jacques Mehler, my Ph.D. adviser,” he told me. Mehler, whose focus was on determining the abstract organization of cognitive functions, didn’t see the point of trying to locate precisely where in the brain things happened, but Dehaene wanted to “bridge the gap,” as he put it, between psychology and neurobiology, to find out exactly how the functions of the mind—thought, perception, feeling, will—are realized in the gelatinous three-pound lump of matter in our skulls. Now, thanks to new technologies, it was finally possible to create pictures, however crude, of the brain in the act of thinking. So, after receiving his doctorate, he spent two years studying brain scanning with Posner, who was by then at the University of Oregon, in Eugene. “It was very strange to find that some of the most exciting results of the budding cognitive-neuroscience field were coming out of this small place—the only place where I ever saw sixty-year-old hippies sitting around in tie-dyed shirts!” he said.

Dehaene is a compact, attractive, and genial man; he dresses casually, wears fashionable glasses, and has a glabrous dome of a head, which he protects from the elements with a chapeau de cowboy. When I visited him recently, he had just moved into a new laboratory, known as NeuroSpin, on the campus of a national center for nuclear-energy research, a dozen or so miles southwest of Paris. The building, which was completed a year ago, is a modernist composition in glass and metal filled with the ambient hums and whirs and whooshes of brain-scanning equipment, much of which was still being assembled. A series of arches ran along one wall in the form of a giant sine wave; behind each was a concrete vault built to house a liquid-helium-cooled superconducting electromagnet. (In brain imaging, the more powerful the magnetic field, the sharper the picture.) The new brain scanners are expected to show the human cerebral anatomy at a level of detail never before seen, and may reveal subtle anomalies in the brains of people with dyslexia and with dyscalculia, a crippling deficiency in dealing with numbers which, researchers suspect, may be as widespread as dyslexia. One of the scanners was already up and running. “You don’t wear a pacemaker or anything, do you?” Dehaene asked me as we entered a room where two researchers were fiddling with controls. Although the scanner was built to accommodate humans, inside, I could see from the monitor, was a brown rat. Researchers were looking at how its brain reacted to various odors, which were puffed in every so often. Then Dehaene led me upstairs to a spacious gallery where the brain scientists working at NeuroSpin are expected to congregate and share ideas. At the moment, it was empty. “We’re hoping for a coffee machine,” he said.

Dehaene has become a scanning virtuoso. On returning to France after his time with Posner, he pressed on with the use of imaging technologies to study how the mind processes numbers. The existence of an evolved number ability had long been hypothesized, based on research with animals and infants, and evidence from brain-damaged patients gave clues to where in the brain it might be found. Dehaene set about localizing this facility more precisely and describing its architecture. “In one experiment I particularly liked,” he recalled, “we tried to map the whole parietal lobe in a half hour, by having the subject perform functions like moving the eyes and hands, pointing with fingers, grasping an object, engaging in various language tasks, and, of course, making small calculations, like thirteen minus four. We found there was a beautiful geometrical organization to the areas that were activated. The eye movements were at the back, the hand movements were in the middle, grasping was in the front, and so on. And right in the middle, we were able to confirm, was an area that cared about number.”

The number area lies deep within a fold in the parietal lobe called the intraparietal sulcus (just behind the crown of the head). But it isn’t easy to tell what the neurons there are actually doing. Brain imaging, for all the sophistication of its technology, yields a fairly crude picture of what’s going on inside the skull, and the same spot in the brain might light up for two tasks even though different neurons are involved. “Some people believe that psychology is just being replaced by brain imaging, but I don’t think that’s the case at all,” Dehaene said. “We need psychology to refine our idea of what the imagery is going to show us. That’s why we do behavioral experiments, see patients. It’s the confrontation of all these different methods that creates knowledge.”

Dehaene has been able to bring together the experimental and the theoretical sides of his quest, and, on at least one occasion, he has even theorized the existence of a neurological feature whose presence was later confirmed by other researchers. In the early nineteen-nineties, working with Jean-Pierre Changeux, he set out to create a computer model to simulate the way humans and some animals estimate at a glance the number of objects in their environment. In the case of very small numbers, this estimate can be made with almost perfect accuracy, an ability known as “subitizing” (from the Latin word subitus, meaning “sudden”). Some psychologists think that subitizing is merely rapid, unconscious counting, but others, Dehaene included, believe that our minds perceive up to three or four objects all at once, without having to mentally “spotlight” them one by one. Getting the computer model to subitize the way humans and animals did was possible, he found, only if he built in “number neurons” tuned to fire with maximum intensity in response to a specific number of objects. His model had, for example, a special four neuron that got particularly excited when the computer was presented with four objects. The model’s number neurons were pure theory, but almost a decade later two teams of researchers discovered what seemed to be the real item, in the brains of macaque monkeys that had been trained to do number tasks. The number neurons fired precisely the way Dehaene’s model predicted—a vindication of theoretical psychology. “Basically, we can derive the behavioral properties of these neurons from first principles,” he told me. “Psychology has become a little more like physics.”
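
The "number neurons" in that account can be sketched in a few lines. This is a toy illustration under assumed parameters (the tuning width is arbitrary), not the actual Dehaene-Changeux network: each unit responds maximally to its preferred quantity and progressively less to neighboring quantities, with tuning defined on a logarithmic scale.

```python
# Toy tuned "number neurons": peak response at a preferred numerosity,
# falling off for neighbors; tuning is Gaussian on a log scale.
import math

WIDTH = 0.25   # tuning width on the log scale; arbitrary assumption

def response(preferred, n):
    """Firing rate (0..1) of a unit tuned to `preferred` when shown `n` objects."""
    d = math.log(n) - math.log(preferred)
    return math.exp(-(d * d) / (2 * WIDTH * WIDTH))

# A "four neuron": fires hardest for 4, less for 3 and 5, much less for 1 or 8.
four_neuron = {n: round(response(4, n), 2) for n in range(1, 9)}
print(four_neuron)
```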

But the brain is the product of evolution—a messy, random process—and though the number sense may be lodged in a particular bit of the cerebral cortex, its circuitry seems to be intermingled with the wiring for other mental functions. A few years ago, while analyzing an experiment on number comparisons, Dehaene noticed that subjects performed better with large numbers if they held the response key in their right hand but did better with small numbers if they held the response key in their left hand. Strangely, if the subjects were made to cross their hands, the effect was reversed. The actual hand used to make the response was, it seemed, irrelevant; it was space itself that the subjects unconsciously associated with larger or smaller numbers. Dehaene hypothesizes that the neural circuitry for number and the circuitry for location overlap. He even suspects that this may be why travellers get disoriented entering Terminal 2 of Paris’s Charles de Gaulle Airport, where small-numbered gates are on the right and large-numbered gates are on the left. “It’s become a whole industry now to see how we associate number to space and space to number,” Dehaene said. “And we’re finding the association goes very, very deep in the brain.”

Last winter, I saw Dehaene in the ornate setting of the Institut de France, across the Seine from the Louvre. There he accepted a prize of a quarter of a million euros from Liliane Bettencourt, whose father created the cosmetics group L’Oréal. In a salon hung with tapestries, Dehaene described his research to a small audience that included a former Prime Minister of France. New techniques of neuroimaging, he explained, promise to reveal how a thought process like calculation unfolds in the brain. This isn’t just a matter of pure knowledge, he added. Since the brain’s architecture determines the sort of abilities that come naturally to us, a detailed understanding of that architecture should lead to better ways of teaching children mathematics and may help close the educational gap that separates children in the West from those in several Asian countries. The fundamental problem with learning mathematics is that while the number sense may be genetic, exact calculation requires cultural tools—symbols and algorithms—that have been around for only a few thousand years and must therefore be absorbed by areas of the brain that evolved for other purposes. The process is made easier when what we are learning harmonizes with built-in circuitry. If we can’t change the architecture of our brains, we can at least adapt our teaching methods to the constraints it imposes.

For nearly two decades, American educators have pushed “reform math,” in which children are encouraged to explore their own ways of solving problems. Before reform math, there was the “new math,” now widely thought to have been an educational disaster. (In France, it was called les maths modernes, and is similarly despised.) The new math was grounded in the theories of the influential Swiss psychologist Jean Piaget, who believed that children are born without any sense of number and only gradually build up the concept in a series of developmental stages. Piaget thought that children, until the age of four or five, cannot grasp the simple principle that moving objects around does not affect how many of them there are, and that there was therefore no point in trying to teach them arithmetic before the age of six or seven.

Piaget’s view had become standard by the nineteen-fifties, but psychologists have since come to believe that he underrated the arithmetic competence of small children. Six-month-old babies, exposed simultaneously to images of common objects and sequences of drumbeats, consistently gaze longer at the collection of objects that matches the number of drumbeats. By now, it is generally agreed that infants come equipped with a rudimentary ability to perceive and represent number. (The same appears to be true for many kinds of animals, including salamanders, pigeons, raccoons, dolphins, parrots, and monkeys.) And if evolution has equipped us with one way of representing number, embodied in the primitive number sense, culture furnishes two more: numerals and number words. These three modes of thinking about number, Dehaene believes, correspond to distinct areas of the brain. The number sense is lodged in the parietal lobe, the part of the brain that relates to space and location; numerals are dealt with by the visual areas; and number words are processed by the language areas.

Nowhere in all this elaborate brain circuitry, alas, is there the equivalent of the chip found in a five-dollar calculator. This deficiency can make learning that terrible quartet—“Ambition, Distraction, Uglification, and Derision,” as Lewis Carroll burlesqued them—a chore. It’s not so bad at first. Our number sense endows us with a crude feel for addition, so that, even before schooling, children can find simple recipes for adding numbers. If asked to compute 2 + 4, for example, a child might start with the first number and then count upward by the second number: “two, three is one, four is two, five is three, six is four, six.” But multiplication is another matter. It is an “unnatural practice,” Dehaene is fond of saying, and the reason is that our brains are wired the wrong way. Neither intuition nor counting is of much use, and multiplication facts must be stored in the brain verbally, as strings of words. The list of arithmetical facts to be memorized may be short, but it is fiendishly tricky: the same numbers occur over and over, in different orders, with partial overlaps and irrelevant rhymes. (Bilinguals, it has been found, revert to the language they used in school when doing multiplication.) The human memory, unlike that of a computer, has evolved to be associative, which makes it ill-suited to arithmetic, where bits of knowledge must be kept from interfering with one another: if you’re trying to retrieve the result of multiplying 7 X 6, the reflex activation of 7 + 6 and 7 X 5 can be disastrous. So multiplication is a double terror: not only is it remote from our intuitive sense of number; it has to be internalized in a form that clashes with the evolved organization of our memory. The result is that when adults multiply single-digit numbers they make mistakes ten to fifteen per cent of the time. For the hardest problems, like 7 X 8, the error rate can exceed twenty-five per cent.

Our inbuilt ineptness when it comes to more complex mathematical processes has led Dehaene to question why we insist on drilling procedures like long division into our children at all. There is, after all, an alternative: the electronic calculator. “Give a calculator to a five-year-old, and you will teach him how to make friends with numbers instead of despising them,” he has written. By removing the need to spend hundreds of hours memorizing boring procedures, he says, calculators can free children to concentrate on the meaning of these procedures, which is neglected under the educational status quo. This attitude might make Dehaene sound like a natural ally of educators who advocate reform math, and a natural foe of parents who want their children’s math teachers to go “back to basics.” But when I asked him about reform math he wasn’t especially sympathetic. “The idea that all children are different, and that they need to discover things their own way—I don’t buy it at all,” he said. “I believe there is one brain organization. We see it in babies, we see it in adults. Basically, with a few variations, we’re all travelling on the same road.” He admires the mathematics curricula of Asian countries like China and Japan, which provide children with a highly structured experience, anticipating the kind of responses they make at each stage and presenting them with challenges designed to minimize the number of errors. “That’s what we’re trying to get back to in France,” he said. Working with his colleague Anna Wilson, Dehaene has developed a computer game called “The Number Race” to help dyscalculic children. The software is adaptive, detecting the number tasks where the child is shaky and adjusting the level of difficulty to maintain an encouraging success rate of seventy-five per cent.
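
The adaptive behavior described for "The Number Race" (easing off when the child struggles, pushing slightly harder after each success, so that accuracy hovers near seventy-five per cent) resembles a standard weighted staircase procedure. The sketch below is a generic illustration of that technique with assumed step sizes, not the software's actual code.

```python
# Generic weighted staircase: small step up after a correct answer, larger step down
# after an error. With a 1:3 ratio the difficulty settles where the child is right
# about 75% of the time (p * 1 = (1 - p) * 3  =>  p = 0.75).
STEP_UP = 1     # assumed step sizes, for illustration only
STEP_DOWN = 3

def next_difficulty(level, answered_correctly, lo=1, hi=100):
    """Return the difficulty level for the next trial."""
    level += STEP_UP if answered_correctly else -STEP_DOWN
    return max(lo, min(hi, level))

# Example run: True = correct answer on that trial.
level = 10
for outcome in [True, True, True, False, True, True, False, True]:
    level = next_difficulty(level, outcome)
print("difficulty after 8 trials:", level)
```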

Despite our shared brain organization, cultural differences in how we handle numbers persist, and they are not confined to the classroom. Evolution may have endowed us with an approximate number line, but it takes a system of symbols to make numbers precise—to “crystallize” them, in Dehaene’s metaphor. The Mundurukú, an Amazon tribe that Dehaene and colleagues, notably the linguist Pierre Pica, have studied recently, have words for numbers only up to five. (Their word for five literally means “one hand.”) Even these words seem to be merely approximate labels for them: a Mundurukú who is shown three objects will sometimes say there are three, sometimes four. Nevertheless, the Mundurukú have a good numerical intuition. “They know, for example, that fifty plus thirty is going to be larger than sixty,” Dehaene said. “Of course, they do not know this verbally and have no way of talking about it. But when we showed them the relevant sets and transformations they immediately got it.”

The Mundurukú, it seems, have developed few cultural tools to augment the inborn number sense. Interestingly, the very symbols with which we write down the counting numbers bear the trace of a similar stage. The first three Roman numerals, I, II, and III, were formed by using the symbol for one as many times as necessary; the symbol for four, IV, is not so transparent. The same principle applies to Chinese numerals: the first three consist of one, two, and three horizontal bars, but the fourth takes a different form. Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing. (“That’s a beautiful little fact, but I don’t think it’s coded in our brains any longer,” Dehaene observed.)

Today, Arabic numerals are in use pretty much around the world, while the words with which we name numbers naturally differ from language to language. And, as Dehaene and others have noted, these differences are far from trivial. English is cumbersome. There are special words for the numbers from 11 to 19, and for the decades from 20 to 90. This makes counting a challenge for English-speaking children, who are prone to such errors as “twenty-eight, twenty-nine, twenty-ten, twenty-eleven.” French is just as bad, with vestigial base-twenty monstrosities, like quatre-vingt-dix-neuf (“four twenty ten nine”) for 99. Chinese, by contrast, is simplicity itself; its number syntax perfectly mirrors the base-ten form of Arabic numerals, with a minimum of terms. Consequently, the average Chinese four-year-old can count up to forty, whereas American children of the same age struggle to get to fifteen. And the advantages extend to adults. Because Chinese number words are so brief—they take less than a quarter of a second to say, on average, compared with a third of a second for English—the average Chinese speaker has a memory span of nine digits, versus seven digits for English speakers. (Speakers of the marvellously efficient Cantonese dialect, common in Hong Kong, can juggle ten digits in active memory.)
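
One common interpretation, not spelled out in the article, is that verbal memory span scales with speech rate because speakers of any language can hold roughly the same duration of inner speech in mind. A quick check using only the figures quoted above shows the two ratios roughly agree.

```python
# Do the article's own numbers fit a "fixed duration of inner speech" account?
chinese_digit = 0.25   # seconds per digit ("less than a quarter of a second")
english_digit = 1 / 3  # seconds per digit ("a third of a second")
chinese_span, english_span = 9, 7   # digits of memory span, as quoted

print("speech-rate advantage:", round(english_digit / chinese_digit, 2))  # ~1.33
print("memory-span advantage:", round(chinese_span / english_span, 2))    # ~1.29
# The ratios roughly match: shorter number words let more digits fit into
# the same rehearsal window.
```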

In 2005, Dehaene was elected to the chair in experimental cognitive psychology at the Collège de France, a highly prestigious institution founded by Francis I in 1530. The faculty consists of just fifty-two scholars, and Dehaene is the youngest member. In his inaugural lecture, Dehaene marvelled at the fact that mathematics is simultaneously a product of the human mind and a powerful instrument for discovering the laws by which the human mind operates. He spoke of the confrontation between new technologies like brain imaging and ancient philosophical questions concerning number, space, and time. And he pronounced himself lucky to be living in an era when advances in psychology and neuroimaging are combining to “render visible” the hitherto invisible realm of thought.

For Dehaene, numerical thought is only the beginning of this quest. Recently, he has been pondering how the philosophical problem of consciousness might be approached by the methods of empirical science. Experiments involving subliminal “number priming” show that much of what our mind does with numbers is unconscious, a finding that has led Dehaene to wonder why some mental activity crosses the threshold of awareness and some doesn’t. Collaborating with a couple of colleagues, Dehaene has explored the neural basis of what is known as the “global workspace” theory of consciousness, which has elicited keen interest among philosophers. In his version of the theory, information becomes conscious when certain “workspace” neurons broadcast it to many areas of the brain at once, making it simultaneously available for, say, language, memory, perceptual categorization, action-planning, and so on. In other words, consciousness is “cerebral celebrity,” as the philosopher Daniel Dennett has described it, or “fame in the brain.”

In his office at NeuroSpin, Dehaene described to me how certain extremely long workspace neurons might link far-flung areas of the human brain together into a single pulsating circuit of consciousness. To show me where these areas were, he reached into a closet and pulled out an irregularly shaped baby-blue plaster object, about the size of a softball. “This is my brain!” he announced with evident pleasure. The model that he was holding had been fabricated, he explained, by a rapid-prototyping machine (a sort of three-dimensional printer) from computer data obtained from one of the many MRI scans that he has undergone. He pointed to the little furrow where the number sense was supposed to be situated, and observed that his had a somewhat uncommon shape. Curiously, the computer software had identified Dehaene’s brain as an “outlier,” so dissimilar are its activation patterns from the human norm. Cradling the pastel-colored lump in his hands, a model of his mind devised by his own mental efforts, Dehaene paused for a moment. Then he smiled and said, “So, I kind of like my brain.”

Original here

Misperceptions meet state of the art in evolution research

As an employee of the National Center for Science Education, Nick Matzke was involved with everything from situations that never made the press to coaching the lawyers in the Dover trial, which gained international attention. One thing that apparently became clear is that, due to the highly technical material and a flood of misinformation on the topic, the public (and even many scientists) simply don't know what the current state of knowledge is when it comes to evolution. As part of an effort to rectify that, the NCSE and the AAAS's Dialogue on Science, Ethics, and Religion organized a session on the state of the art in our understanding of evolution, which Matzke moderated.

Four speakers took on topics that appear to be frequently misunderstood by the public. One of these—the origin of life—isn't directly part of evolutionary theory, but is frequently associated with it by the public. The remaining topics covered major events in the evolutionary history that produced humans, including the origin of bilateral animals during the Cambrian explosion, the origin of tetrapods, and the evolution of human ancestors. Throughout the talks, there were two recurrent themes: we can identify major environmental changes that might have sparked new selective pressures, and many of the major adaptations we view as designed for a specific lifestyle actually originated as an adaptation for something else entirely.

The origin of life

Evolutionary theory, both as proposed by Darwin and elaborated since, deals with the diversification of modern living organisms from a limited number of ancestral living organisms. But the lack of a strong theory for the origin of life is actually treated as an argument against evolution by many of the opponents of teaching the theory. Many of the principles of evolution, including heritable variations and selective pressures, are also applied by origin of life researchers. As such, the two topics appear inextricably linked.

The discussion of life's origins was handled by Andy Ellington of the University of Texas - Austin. He started by noting that simply defining life is as much of a philosophical question as a biological one. He settled on the following: "a self replicating system capable of Darwinian evolution," and focused on getting from naturally forming chemicals to that point. To do so, Ellington developed three different themes.

Chemicals in living organisms can form without life


An RNA ligase ribozyme

The basic idea has been recognized for over a century, but the work of Stanley Miller was cited for triggering the modern era of scientific work on the topic. Since the classic Miller-Urey experiments, science has steadily expanded the range of essential molecules that can be produced under conditions that might reasonably be expected to have been present on the early Earth.

Ellington emphasized that progress has been slow—we knew how cyanide could react to form the DNA component adenine in the 1960s, but it took over three decades to recognize that a few more reactions converted it to its relative, guanine. And the roadblocks continue to fall. After all attempts to produce sugars created a tar-like sludge, someone eventually found that a small amount of borate could stabilize ribose, another component of RNA, as it formed from simpler precursors.

The first molecules that could replicate led directly to modern life

With the components of nucleic acids in place, Ellington traced a path through the RNA world to a molecule that could self-replicate. Past attempts to jump to a complex, self-replicating RNA molecule seem to have been on the wrong track. Short palindromic RNA sequences can apparently help catalyze the formation of complementary sequences, meaning what's needed is actually an RNA that can link these short sequences into longer, more complex ones. A number of such sequences, termed RNA ligases, have been identified. Several labs have shown that these ligases can then be improved by an essentially Darwinian process of random mutation followed by selection for increased efficiency.

Modern RNA activities tell us about the RNA world

Ellington's final point was that we can still see remnants of the RNA world in aspects of biology that are common to all life. He noted that many of the cofactors used by modern proteins, including ATP itself, are derivatives of the chemical components of RNA. Researchers have also been able to evolve RNAs that successfully bind these cofactors, which suggests that proteins would only need to have gradually replaced these RNAs. That replacement, Ellington suggested, has never actually been completed: the central core of the ribosome, a complex essential for protein production in all organisms, turns out to be formed from RNA. During questions, he also emphasized that basic cellular metabolism uses some amino acids as intermediates, and suggested that proteins resulted from early RNA "life" simply using what it had lying around, tying in nicely with the theme of preadaptation.

Ars spoke to Dr. Ellington after the talk and asked him about the separate thread of origin of life research that focuses on identifying the energy-harvesting reactions required for the first life. He was very excited about the potential for user-generated genomes to help unify the two fields. The ability to customize a genome would not only help scientists identify the very minimal metabolism necessary for life, but would eventually allow researchers to start replacing proteins with their catalytic RNA equivalents. Ellington suggested that the result—a cell with a hybrid RNA/protein world—would eventually allow us to explore the transition to the first cells.

Ellington's summary of the state of the art is that "we'll never know exactly what happened, but we're getting a really good idea of what is possible."

The Cambrian explodes and fish get limbs

The Cambrian explosion


A Halwaxiid, poised to explode into three branches of animal life

Douglas Erwin of the Smithsonian then made the jump to the origin of modern animal life during the Cambrian explosion—as he put it, "3 billion years of boredom later..." Erwin presented the Cambrian explosion as a matter of three big ideas as well: biological challenges, ecological opportunities, and developmental potential. The biological challenges of the era are pretty obvious: in his view, global glaciations left glacial sediments in the tropics and are likely to have shut down or severely limited the global carbon cycle.

These changes, however, were accompanied by two events that, in Erwin's view, were essential enablers of a broad radiation of species. Both had the common feature of allowing many organisms access to a resource that was essentially unlimited, and thus free from competition. The first of these was oxygen, which reached unusually high levels in the Cambrian atmosphere. Results published while this article was in preparation reveal that the radiation of animal life closely tracked oxygenation of the ancient ocean. The second was nutrient-rich sediments. Tracks from the first burrowing animals appear in sediments just prior to the Cambrian, and Erwin argues that these animals kept resources in the sediments circulating within the biological community long after they would have otherwise settled out and been buried.


This Cambrian jellyfish probably shared many genes with the first bilateral animals

Erwin also described animal life that was poised to explode. The prior era was filled with the Ediacaran Fauna, which he described as, "no eyes, no appendages, lots of fronds, and maybe some guts." But that era also generated fossil embryos that suggest that bilateral animals predated the Cambrian. More telling, however, have been the findings of modern genomics and evo-devo. Genomic studies reveal that many of the genes involved in producing complex animals predate animals themselves, and some of the key regulators of bilateral animal development exist in Cnidarians, which don't share that body plan. Other work has revealed that genetic networks of regulatory genes that are used in appendage and body plan specification probably predate the origins of either limbs or a body plan.

In short, the genetic tools were in place for millions of years before the Cambrian, but it took the Cambrian's unique combination of environmental challenges and opportunities to force organisms to deploy them in new adaptive combinations.

Vanishing gaps in the vertebrate invasion of land

A key event in the origin of modern humans occurred in swamps nearly 400 million years ago. Prior to that time, vertebrates made do with fins and life in the water. Ted Daeschler of the Academy of Natural Sciences in Philadelphia reviewed our latest understanding of how those fish wound up on land, with limbs to propel them.


Which is a fish, which is a tetrapod? Trick question—they're all tetrapods.

Daeschler pointed out that a few decades ago, we had two species that don't even appear on the diagram here, Eusthenopteron and Ichthyostega, and a big gap in between them. Although scientists always want to know more, the gap wasn't a huge problem for them, as they could recognize subtle features of skeletons that the lobe-finned fish shared with the earliest tetrapods. But it was a problem in terms of public relations, as the public had a hard time tracking these subtleties, allowing opponents of evolution to focus on the gap and declare it unbridgeable.

In the 1980s and 90s, other species, such as Panderichthys and Acanthostega, began to fill this gap. Daeschler indicated that a clear pattern emerged, one that linked the appearance of these species with a specific environment and one that represented a new ecological opportunity. All of the fossils were found on a band that, given the then-current arrangement of the continents, was equatorial. The specific environment, however, was one that hadn't existed previously: broad, alluvial valleys and flood plains that were transformed in the wake of the origin of trees. The recognition that this environment spurred tetrapod evolution has led directly to the discovery of Gogonasus and, perhaps the most famous transitional species ever, Tiktaalik.

In this environment, many of the features of the transitional species were preadaptive. Lobed fins aided maneuvering in a complex environment in the same way that limbs later did, in water or out. Muscle attachment sites in the bones of the fins worked equally well when used in legs. For paleontologists, the discoveries that filled the gaps revealed that this major transition occurred through a series of forms that were mosaic, with features added to the tetrapod repertoire in an order that's essentially random. Tiktaalik has a broad, fish-like snout, but the far end of its skull has been reordered to allow the first flexible neck seen in a tetrapod.

Two messages were clear from Daeschler's summary. The first is that there's so much left to discover; we don't know which gap will wind up filled next, just that gaps continue to be filled with rich information. He'll continue chasing road crews as they dig through Pennsylvania for as long as they'll let him. The second message is that it's time to let go of the false distinctions that are left over from Linnean times and only serve to confuse the public. In Daeschler's view, all of these animals, on both sides of the former gaps, are tetrapods. Some have limbs, some have fins, but they're clearly all part of a boundary- and gap-less transition.

Modern human origins and learning

The origins of modern humans

The final scientific speaker in the session was John Relethford, an anthropologist in the SUNY system. He had so many big messages that he settled on a top-ten list to present them. The first item was simply that humans have evolved, period. The evidence is so overwhelming that Relethford feels that any remaining argument is simply between two religious perspectives on that fact; science has moved on. Item two was to emphasize that we did not evolve from modern apes. Ape is both a generic and a species term, and biologists need to be careful to use it correctly, because we're confusing the public by being sloppy.


Meet a few of our many relatives

His third message was that scientists study human origins—the plural part is important. The toolkit that we regard as human, including upright walking, tool use, brain size, etc. all arose at different times, some separated by millions of years. A correlate of this was point four: "humanity's birth was feet first," as he put it. Upright walking may date back over six million years, and was definitely present four million years ago. At the time, there were no tools and our ancestors had ape-like teeth and cranial capacity. Relethford suggested we're still not sure what walking adapted us for, but it clearly kept us going for millions of years before we realized it liberated our forelimbs to manipulate sophisticated tools.


Big brains took their time

That delay might have been due to point five, the fact that cranial capacity increased very slowly and gradually over the course of human evolution. This illustrated Relethford's idea six: there's no free lunch. Any adaptation has a cost, and the advantages of expanding brain size were constantly balanced against the selective cost of a big brain's increased energy use and heat output. There was also more than one way to achieve these balances as, for much of their history, our ancestors were not alone. There were many overlapping Homo and Australopithecus species in the past, as Relethford noted in point seven, and the question of what constitutes a species is often contentious when it comes to our ancestors.

Relethford's final points backed out to the big picture of science and humanity in general. Eighth on his list was the contention that we should always expect the unexpected, as new discoveries represent the strength of science, not its weakness. He suggested that if people didn't like the excited confusion caused by H. floresiensis, then they probably shouldn't be paying paleontologists. He also voiced disdain for those who speculate that some form of alien intervention was necessary to produce sophisticated humans or the great works of prehistory. "Our ancestors were not dummies," he stated as point nine, suggesting that this type of thinking was little more than a generation gap taken to an extreme.

His final point was that the full package of modern human traits took millions of years to evolve, so questions as to where we're headed are somewhat irrelevant. In the time span we should be concerned with, Relethford suggested, all the relevant evolution will be cultural.

The state of the art meets the public

With the state of the art established, the final speaker, Martin Storksdieck of the Institute for Learning Innovation, looked at how to get that information to a public that has such a hard time accepting what science is discovering. He argued that, while most of the attention has focused on childhood education, we really should be going after the parents. Everyone is a lifelong learner, Storksdieck said, but once people leave school, that learning becomes a voluntary matter that's largely driven by individual taste.

Storksdieck discussed a number of key aspects of this voluntary learning. He argued that a surprising amount of it is faith-based; adults don't have the time or need to learn large frameworks like evolution, so they're often willing to accept or reject information based on reasons beyond its consistency with scientific understanding. As an example, he noted his own understanding of chemistry was weak, so he'd simply have to accept what Andy Ellington told him about the RNA world. The result is that what's accepted or not becomes largely a matter of social influences.

Here, Storksdieck offered two specific suggestions. The first is to get people in positions of leadership involved, as people pay attention to them, regardless of their grip on the facts. His example was Thabo Mbeki of South Africa, who set the country's battle against AIDS back significantly simply by expressing doubt in our scientific and medical understanding of the condition. His other suggestion was that we should, as he put it, keep preaching to the choir. Enthused learners are the best communicators of information, and arming them with more of what we know is the best way to get that information before the public.

The series of talks was possibly the best overview of the state of knowledge in any field that I have ever seen, and the enthusiasm of the researchers and their excitement about the topics was palpable. I expect that, if the public saw more presentations like this, which revealed not only the full depth of our understanding, but also the enthusiasm, humor, and humanity of the people that have generated that understanding, then the teaching of evolution would generate only a small fraction of the resistance that it currently does.

Original here

Blind Irishman sees with the aid of son's tooth in his eye

DUBLIN (AFP) - An Irishman blinded by an explosion two years ago has had his sight restored after doctors inserted his son's tooth in his eye, he said on Wednesday.

Bob McNichol, 57, from County Mayo in the west of the country, lost his sight in a freak accident when red-hot liquid aluminium exploded at a recycling business in November 2005.

"I thought that I was going to be blind for the rest of my life," McNichol told RTE state radio.

After doctors in Ireland said there was nothing more they could do, McNichol heard about a miracle operation called Osteo-Odonto-Keratoprosthesis (OOKP) being performed by Dr Christopher Liu at the Sussex Eye Hospital in Brighton in England.

The technique, pioneered in Italy in the 1960s, involves creating a support for an artificial cornea from the patient's own tooth and the surrounding bone.

The procedure used on McNichol involved his son Robert, 23, donating a tooth, its root and part of the jaw.

McNichol's right eye socket was rebuilt, part of the tooth was inserted, and a lens was fitted into a hole drilled in the tooth.

The first operation lasted ten hours and the second five hours.

"It is pretty heavy going," McNichol said. "There was a 65 percent chance of me getting any sight.

"Now I have enough sight for me to get around and I can watch television. I have come out from complete darkness to be able to do simple things," McNichol said.

Original here

TED 2008: How Good People Turn Evil, From Stanford to Abu Ghraib

As an expert witness in the defense of an Abu Ghraib guard, Philip Zimbardo had access to many images (NSFW) of abuse taken by the guards. His TED presentation puts together a short video of some of the unpublished photos, with sound effects added by Zimbardo. Many of the images are explicit and gruesome, depicting nudity, degradation, simulated sex acts and guards posing with corpses. Viewer discretion is advised.

MONTEREY, California -- Psychologist Philip Zimbardo has seen good people turn evil, and he thinks he knows why.

Zimbardo will speak Thursday afternoon at the TED conference, where he plans to illustrate his points by showing a three-minute video, obtained by Wired.com, that features many previously unseen photographs from the Abu Ghraib prison in Iraq (disturbing content).

In March 2006, Salon.com published 279 photos and 19 videos from Abu Ghraib, one of the most extensive documentations to date of abuse in the notorious prison. Zimbardo claims, however, that many images in his video -- which he obtained while serving as an expert witness for an Abu Ghraib defendant -- have never before been published.

The Abu Ghraib prison made international headlines in 2004 when photographs of military personnel abusing Iraqi prisoners were published around the world. Seven soldiers were convicted in courts martial and two, including Specialist Lynndie England, were sentenced to prison.

Zimbardo conducted a now-famous experiment at Stanford University in 1971, involving students who posed as prisoners and guards. Five days into the experiment, Zimbardo halted the study when the student guards began abusing the prisoners, forcing them to strip naked and simulate sex acts.

His book, The Lucifer Effect: Understanding How Good People Turn Evil, explores how a "perfect storm" of conditions can make ordinary people commit horrendous acts.

He spoke with Wired.com about what Abu Ghraib and his prison study can teach us about evil and why heroes are, by nature, social deviants.

Wired: Your work suggests that we all have the capacity for evil, and that it's simply environmental influences that tip the balance from good to bad. Doesn't that absolve people from taking responsibility for their choices?

Philip Zimbardo: No. People are always personally accountable for their behavior. If they kill, they are accountable. However, what I'm saying is that if the killing can be shown to be a product of the influence of a powerful situation within a powerful system, then it's as if they are experiencing diminished capacity and have lost their free will or their full reasoning capacity.

Situations can be sufficiently powerful to undercut empathy, altruism, morality and to get ordinary people, even good people, to be seduced into doing really bad things -- but only in that situation.

Understanding the reason for someone's behavior is not the same as excusing it. Understanding why somebody did something -- where that why has to do with situational influences -- leads to a totally different way of dealing with evil. It leads to developing prevention strategies to change those evil-generating situations, rather than the current strategy, which is to change the person.

Wired: You were an expert defense witness in the court-martial of Sgt. Chip Frederick, an Abu Ghraib guard. What were the situational influences in his case?

Zimbardo: Abu Ghraib was under bombardment all the time. In the prison, five soldiers and 20 Iraqi prisoners get killed. That means automatically any soldier working there is under high fear and high stress. Then the insurgency starts in 2003, and they start arresting everyone in sight. When Chip Frederick [starts working at Abu Ghraib] in September, there are 200 prisoners there. Within three months there's a thousand prisoners with a handful of guards to take care of them, so they're overwhelmed. Frederick and the others worked 12-hour shifts. How many days a week? Seven. How many days without a day off? Forty. That kind of stress reduces decision-making and critical thinking and rationality. But that's only the beginning.

He [complained] to higher-ups on the record, "We have mentally ill patients who cover themselves with [excrement]. We have people with tuberculosis that shouldn't be in this population. We have kids mixed with adults."

And they tell him, "It's a war zone. Do your job. Do whatever you have to do."

Wired: How did what happened at Abu Ghraib compare to your Stanford prison study?

Zimbardo: The military intelligence, the CIA and the civilian interrogator corporation, Titan, told the MPs [at Abu Ghraib], "It is your job to soften the prisoners up. We give you permission to do something you ordinarily are not allowed to do as a military policeman -- to break the prisoners, to soften them up, to prepare them for interrogation." That's permission to step across the line from what is typically restricted behavior to now unrestricted behavior.

In the same way in the Stanford prison study, I was saying [to the student guards], "You have to be powerful to prevent further rebellion." I tell them, "You're not allowed, however, to use physical force." By default, I allow them to use psychological force. In five days, five prisoners are having emotional breakdowns.

The situational forces that were going on in [Abu Ghraib] -- the dehumanization, the lack of personal accountability, the lack of surveillance, the permission to get away with anti-social actions -- it was like the Stanford prison study, but in spades.

Those sets of things are found any time you really see an evil situation occurring, whether it's Rwanda or Nazi Germany or the Khmer Rouge.

Wired: But not everyone at Abu Ghraib responded to the situation in the same way. So what makes one person in a situation commit evil acts while another in the same situation becomes a whistle-blower?

Zimbardo: There's no answer, based on what we know about a person, that we can predict whether they're going to be a hero whistle-blower or the brutal guard. We want to believe that if I was in some situation [like that], I would bring with it my usual compassion and empathy. But you know what? When I was the superintendent of the Stanford prison study, I was totally indifferent to the suffering of the prisoners, because my job as prison superintendent was to focus on the guards.

As principal [scientific] investigator [of the experiment], my job was to care about what happened to everybody because they were all under my experimental control. But once I switched to being the prison superintendent, I was a different person. It's hard to believe that, but I was transformed.

Wired: Do you think it made any difference that the Abu Ghraib guards were reservists rather than active duty soldiers?

Zimbardo: It made an enormous difference, in two ways. They had no mission-specific training, and they had no training to be in a combat zone. Secondly, the Army reservists in a combat zone are the lowest form of animal life within the military hierarchy. They're not real soldiers, and they know this. In Abu Ghraib the only thing lower than the army reservist MPs were the prisoners.

Wired: So it's a case of people who feel powerless in their lives seizing power over someone else.

Zimbardo: Yes, victims become victimizers. In Nazi concentration camps, the Jewish capos were worse than the Nazis, because they had to prove that they deserved being in this position.

Wired: You've said that the way to prevent evil actions is to teach the "banality of kindness" -- that is, to get society to exemplify ordinary people who engage in extraordinary moral actions. How do you do this?

Zimbardo: If you can agree on a certain number of things that are morally wrong, then one way to counteract them is by training kids. There are some programs, starting in the fifth grade, which get kids to think about the heroic mentality, the heroic imagination.

To be a hero you have to take action on behalf of someone else or some principle and you have to be deviant in your society, because the group is always saying don't do it; don't step out of line. If you're an accountant at Arthur Andersen, everyone who is doing the defrauding is telling you, "Hey, be one of the team."

Heroes have to always, at the heroic decisive moment, break from the crowd and do something different. But a heroic act involves a risk. If you're a whistle-blower you're going to get fired, you're not going to get promoted, you're going to get ostracized. And you have to say it doesn't matter.

Most heroes are more effective when they're social heroes rather than isolated heroes. A single person or even two can get dismissed by the system. But once you have three people, then it's the start of an opposition.

So what I'm trying to promote is not only the importance of each individual thinking "I'm a hero" and waiting for the right situation to come along in which I will act on behalf of some people or some principle, but also, "I'm going to learn the skills to influence other people to join me in that heroic action."

Original here

1,301 Fluorescent Bulbs Lit Solely by Magnetic Fields

This field has 1,301 fluorescent bulbs planted in it, and they're all glowing. They aren't plugged into anything, however; they're powered solely by the magnetic fields produced by the power lines above. It's all a large art project by Richard Box, and if you're really interested in it you can order a DVD of the whole thing from him. If you're cheaper and less interested, just peruse our gallery for the cool shots.


Encyclopedia of Life Is Alive!

First 30,000 Represents Just 1.6% of Known Species


The first 30,000 species have been added to an ambitious on-line catalog of the world's diverse life forms, the Encyclopedia of Life.

For anyone fascinated by the natural world, this is big news. It's an inspiration. Plus, it might finally help us all remember how it is those taxonomic divisions fit together. (Was that "kingdom, phylum, class, order, family, genus, species" ... or "kingdom, class, phylum, family, order, genus, species"?)

The database will also be a tool for scientists and policymakers looking to understand, protect and restore the world's biodiversity, both for its own sake and for various species' potential to provide useful and valuable services to humanity. (Think everything from a cure for cancer to replacement pollinators, should the honeybee crisis spiral downward.) Some have suggested the world is in the midst of its sixth great extinction event, this one caused not by volcanism, meteor strikes or other catastrophe, but instead by human pollution and encroachment into wild habitats.

“The EOL provides an extraordinary window onto the living world, one that will greatly accelerate and expand the potential for biological and biomedical discovery,” says Gary G. Borisy, director and chief executive officer of the Marine Biological Laboratory (MBL) in Woods Hole, Mass., and a member of the EOL Steering Committee and Distinguished Advisory Board.

The 30,000 species now in the database are a minuscule sample, not even 2% of the 1.8 million species known to science. No surprise, then, that it will take until 2017 to fill the database with 250 years of scientific exploration and discovery.

"It is exciting to anticipate the scientific chords we might hear once 1.8 million notes are brought together through this instrument," says Jim Edwards, Executive Director of the EOL. “Potential EOL users are professional and citizen scientists, teachers, students, media, environmental managers, families and artists. The site will link the public and scientific community in a collaborative way that’s without precedent in scale.”

Original here

Coal plant to test capturing carbon dioxide

It may not have the panache of a Toyota Prius or the sizzle of the Academy Awards bid to "go green."

But the USA is quietly opening a more significant front this week in the battle against global warming by targeting its biggest source: power plants.

A Wisconsin coal-fired power plant operated by We Energies is scheduled to launch a pilot project to capture a portion of the carbon dioxide produced as the coal is burned. It will be the first time a U.S. power plant has corralled CO2, the main greenhouse gas, before it floats out of the smokestack.

Power plants produce nearly 40% of U.S. carbon emissions; the bulk of that is from coal plants.

The project is a small step on a long road. Alstom, the technology provider, will capture just 3% of the carbon and will immediately release it rather than storing it underground. Carbon storage is widely deemed the biggest hurdle in the worldwide effort to reduce power plant CO2 emissions.

Yet, the pilot program shows that even though the Bush administration recently canceled the clean coal plant called FutureGen, industry is forging ahead, if in a more scattershot style, to strike at the single biggest source of carbon discharges. The Pleasant Prairie, Wis., trial is one of a series of carbon-capture projects Alstom and others are planning at power plants around the nation in the next decade.

The year-long effort, estimated to cost at least $10 million, is being funded by We Energies, Alstom, the Electric Power Research Institute and 35 companies.

"It's a necessary first step," says Robert Hilton, head of business development for Alstom's global environmental business.

Clean coal plants are viewed as vital to fighting global warming. Gas-fired plants emit far less carbon than coal, but natural gas prices are volatile. Wind and solar power are intermittent. Nuclear reactors are emissions-free but pricey and could take many years to build. Despite recent price increases, coal is fairly cheap and abundant.

At the Wisconsin plant, Alstom has built a 90-foot-high addition criss-crossed by huge pipes and heat exchangers to capture the carbon, using a process called chilled ammonia. After coal is burned in a boiler, ammonium carbonate absorbs about 90% of the resulting CO2 to form ammonium bicarbonate, a solid and liquid. The carbon will then be separated under high pressure and released into the air as a gas.
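As a rough sketch of the chemistry being described (a generic illustration of the ammonium carbonate/bicarbonate cycle, not Alstom's proprietary process conditions): the absorbent picks up CO2 and water to form ammonium bicarbonate, and heating the loaded solution under pressure runs the reaction in reverse, releasing a concentrated stream of CO2 and regenerating the absorbent.

    (NH4)2CO3 + CO2 + H2O  <=>  2 NH4HCO3
    (absorption proceeds to the right; regeneration under heat and pressure runs it back to the left)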

Chilling the carbon and other flue gases eliminates contaminants, such as sulfur dioxide, and permits a much greater amount of carbon to be absorbed, Hilton says. That means the carbon capture uses far less electricity, freeing the power for the grid.

One concern about ammonia is its volatility. "You don't want it coming up the stack," says Howard Herzog, principal research engineer for the MIT Energy and Environment lab.

Hilton says scrubbers will prevent any ammonia from escaping.

The Wisconsin pilot program will be followed by similar but larger trials by Alstom at American Electric Power plants in West Virginia and Oklahoma. Those projects will store CO2 underground or pump it to oil fields to boost output.

Alstom has said carbon capture and storage should be widely available by 2019.

By capturing CO2 after it is produced, Alstom's technology can be used with hundreds of today's traditional pulverized coal plants, Hilton says. General Electric and Siemens are developing technology for a new type of plant that turns coal into synthetic gases, filtering out the CO2 before the gases are burned, a simpler process.

Such plants are 20% cheaper than traditional coal plants, assuming both types add carbon capture and storage, says MIT professor John Deutch.

The FutureGen plant, scheduled to be built in Mattoon, Ill., would have used gasification technology. The Department of Energy canceled it last month, citing construction costs that would have pushed its price tag to nearly $2 billion, most of which the DOE would have funded. Instead, the DOE says it will help fund several smaller gasification plants spearheaded by the power industry around the country.

Deutch says only the federal government can oversee the challenging task of burying carbon in rock formations. Researchers must ensure that the carbon doesn't contaminate water supplies, and officials must determine who is liable if the CO2 leaks to neighboring properties, he says.

Hilton, however, thinks Deutch is underestimating private efforts. "Sometimes government programs prolong a product coming to market," he says.

Original here

Wednesday, February 27, 2008

NASA Takes Aim at Moon with Double Sledgehammer

Scientists are priming two spacecraft to slam into the moon's South Pole to see if the lunar double whammy reveals hidden water ice.

The Earth-on-moon violence may raise eyebrows, but NASA's history shows that such missions can yield extremely useful scientific observations.

"I think that people are apprehensive about it because it seems violent or crude, but it's very economical," said Tony Colaprete, the principal investigator for the mission at NASA's Ames Research Center in Moffett Field, Calif.

NASA's previous Lunar Prospector mission detected large amounts of hydrogen at the moon's poles before crashing itself into a crater at the lunar South Pole. Now the much larger Lunar Crater Observation and Sensing Satellite (LCROSS) mission, set for a February 2009 moon crash, will take aim and discover whether some of that hydrogen is locked away in the form of frozen water.

LCROSS will piggyback on the Lunar Reconnaissance Orbiter (LRO) mission for an Oct. 28 launch atop an Atlas 5 rocket equipped with a Centaur upper stage. While the launch will ferry LRO to the moon in about four days, LCROSS is in for a three-month journey to reach its proper moon-smashing position. Once within range, the Centaur upper stage doubles as the main 4,400-pound (2,000 kg) impactor spacecraft for LCROSS.

The smaller Shepherding Spacecraft will guide Centaur towards its target crater, before dropping back to watch - and later fly through - the plume of moon dust and debris kicked up by Centaur's impact. The shepherding vehicle is packed with a light photometer, a visible light camera and four infrared cameras to study the Centaur's lunar plume before it turns itself into a second impactor and strikes a different crater about four minutes later.

"This payload delivery represents a new way of doing business for the center and the agency in general," said Daniel Andrews, LCROSS project manager at Ames, in a statement. "LCROSS primarily is using commercial-off-the-shelf instruments on this mission to meet the mission's accelerated development schedule and cost restraints."

Figuring out the final destinations for the $79 million LCROSS mission is "like trying to drive to San Francisco and not knowing where it is on the map," Colaprete said. He and other mission scientists hope to use observations from LRO and the Japanese Kaguya (Selene) lunar orbiter to map crater locations before LCROSS dives in.

"Nobody has ever been to the poles of the moon, and there are very unique craters - similar to Mercury - where sunlight doesn't reach the bottom," Colaprete said. Earth-based radar has also helped illuminate some permanently shadowed craters. By the time LCROSS arrives, it can zero in on its 19 mile (30 km) wide targets within 328 feet (100 meters).

Scientists want the impactor spacecraft to hit smooth, flat areas away from large rocks, which would ideally allow the impact plume to rise up out of the crater shadows into sunlight. That in turn lets LRO and Earth-based telescopes see the results.

"By understanding what's in these craters, we're examining a fossil record of the early solar system and would occurred at Earth 3 billion years ago," Colaprete said. LCROSS is currently aiming at target craters Faustini and Shoemaker, which Colaprete likened to "fantastic time capsules" at 3 billion and 3.5 billion years old.

LCROSS researchers anticipate more than a 90 percent chance that the impactors will find some form of hydrogen at the poles. There is an off chance that the impactors will hit a newer crater that lacks water, but scientists can learn about the distribution of hydrogen either way.

"We take [what we learn] to the next step, whether it's rovers or more impactors," Colaprete said.

LCROSS is just the latest mission to apply brute force in the service of science.

The Deep Impact mission made history in 2005 by sending a probe crashing into comet Tempel 1. Lunar Prospector ended with a grazing strike on the moon in 1999, and the European Space Agency's SMART-1 satellite dove into the lunar surface in 2006.

LCROSS will take a much more head-on approach than either Lunar Prospector or SMART-1, slamming into the moon's craters at a steep angle, with greater mass, at 1.6 miles per second (2.5 km/s). The overall energy of the impact will equal 100 times that of Lunar Prospector and kick up some 1,102 tons of debris and dust.
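As a rough cross-check of those figures (a back-of-the-envelope estimate using only the mass and speed quoted in this article, not NASA's official impact modeling), the Centaur's kinetic energy works out to a few gigajoules:

    # Back-of-the-envelope kinetic energy of the Centaur impactor,
    # using the approximate figures quoted above (assumed, not official).
    mass_kg = 2000.0                # ~4,400-pound upper stage
    speed_m_per_s = 2500.0          # ~1.6 miles per second
    energy_joules = 0.5 * mass_kg * speed_m_per_s ** 2
    print(f"{energy_joules:.2e} J")                              # ~6.3e+09 J
    print(f"{energy_joules / 4.184e9:.1f} tons TNT equivalent")  # ~1.5 tons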

"It's a cost-effective, relatively low-risk way of doing initial exploration," Colaprete said, comparing the mission's approach to mountain prospectors who used crude sticks of dynamite to blow up gully walls and sift for gold. Scientists are discussing similar missions for exploring asteroids and planets such as Mars.

Nevertheless, Colaprete said they "may want to touch the moon a bit more softly" after LCROSS has its day.

Original here