
Sunday, March 9, 2008

Ted Haggard Tells Richard Dawkins He is "Arrogant"

dawkins v haggard

This couple want a deaf child. Should we try to stop them?

From embryo selection to abortion, fertility treatment to stem cell research, medical advances have created a furious ethical debate. Now MPs must decide how far science should be allowed to go. Gaby Hinsliff and Robin McKie report

Like any other three-year-old child, Molly has brought joy to her parents. Bright-eyed and cheerful, Molly is also deaf - and that is an issue which vexes her parents, though not for the obvious reasons. Paula Garfield, a theatre director, and her partner, Tomato Lichy, an artist and designer, are also deaf and had hoped to have a child who could not hear.

'We celebrated when we found out about Molly's deafness,' says Lichy. 'Being deaf is not about being disabled, or medically incomplete - it's about being part of a linguistic minority. We're proud, not of the medical aspect of deafness, but of the language we use and the community we live in.'

Now the couple are hoping to have a second child, one they also wish to be deaf - and that desire has brought them into a sharp confrontation with Parliament. The government's Human Fertilisation and Embryology (HFE) bill, scheduled to go through the Commons this spring, will block any attempt by couples like Garfield and Lichy to use modern medical techniques to ensure their children are deaf. The bill is a jumbo-sized piece of legislation intended to pull together all aspects of reproductive science in Britain and pave the way for UK scientists to lead the field in embryology. But in trying to do so, the civil servants drafting the bill have provoked a great deal of unrest.

'Paula is now in her early 40s,' says Lichy. 'Our first daughter was born naturally, but due to Paula's age, we may need IVF for the second.' The trouble is that, according to clause 14/4/9 of the bill, the selection of a hearing child through IVF is permitted, but embryos found to have deafness genes will be automatically discarded. 'This sends out a clear and direct message that the government thinks deaf people are better off not being born,' says Steve Emery, a sign-language expert at Heriot-Watt University.

This point is backed by Lichy. 'It is a cornerstone of modern society and law that deaf and hearing people have equal rights. If hearing people were to have the right to throw away a deaf embryo, then we as deaf people should also have the right to throw away a hearing embryo.'

Garfield and Lichy say they will continue to try for a second baby naturally and will be happy if it is able to hear. Yet their affront over the blocking of their use of IVF is intense. 'I find this shocking, detestable and utterly inhuman,' says Lichy. 'I'm a governor at a school for deaf children. When I visit, I see a class of 30 deaf children, all happily signing to each other, running about and creating chaos. Some will be artists, some will be accountants, some may go to Oxbridge, following the path that several of my deaf friends have already beaten. If they had been conceived via IVF, and detected as deaf at that stage, then all would have been aborted before birth.'

The government is unlikely to change its mind over this issue, it appears. Nevertheless, it illustrates the intense feelings that surround the embryology bill. Its passage through the Commons is set to be one of the most passionately debated in recent years. Designed to update the HFE Act of 1990, which in turn updated the Abortion Act of 1967, the new bill has been drafted to ensure the law is compatible with modern medical practice. Given the issues - from abortion to hybrid embryos - it covers, the bill was always going to stir controversy.

This point is demonstrated, somewhat unexpectedly, at Dr Warren Hern's clinic in Boulder, Colorado. Hern is one of a handful of specialists worldwide willing to perform abortions beyond 24 weeks' gestation, the legal cut-off point in Britain for terminating a pregnancy. And he is increasingly seeing British women for terminations that would be against the law in their home country, despite the fact that British providers - nervous of entering a legal grey area - refuse to refer them to him.

'They find me on the internet. The consequences of the refusal to refer is that they are further along, higher risk and higher cost (when I do see them),' Hern told The Observer. 'This is cruel and unusual punishment for women who need this service in issues of foetal abnormality.'

Depending on the bill's passage, his clinic could soon become a lot busier, however. Tory backbencher and former nurse Nadine Dorries is to table an amendment to the bill which would reduce the upper limit for abortions in Britain from 24 to 20 weeks: David Cameron has pledged to support it. Dorries argues there is evidence that babies above this age are sentient - capable of feeling pain - although the scientific evidence is hotly contested.

Dorries first thought there was no chance of changing the law but is now more confident: 'I only have to walk through the House of Commons and MPs say: "I am with you on 20 weeks, I don't want to go any lower, don't want to ban abortion, but I'm with you."'

Pro-choice campaigners, however, argue that the 2 per cent of abortions carried out every year after 20 weeks often involve severe abnormalities discovered only at the 20-week scan, or drastic changes in personal circumstances such as a woman being abandoned by the father. There are also frightened teenagers unable to accept that they are pregnant or to seek help early enough. Changing the law, they argue, would simply drive these women - if they could afford it - to fly somewhere like Colorado. 'People find ways,' says Louise Hutchins of the pro-choice pressure group Abortion Rights.

The Commons debate will concentrate on viability, the age at which a baby is considered to have a good chance of survival outside the womb. Public Health Minister Dawn Primarolo, who will steer the bill in the Commons, will tell MPs the medical consensus remains that babies cannot be considered viable below 24 weeks, which should remain the legal limit.

'The care of premature babies is clearly improving but it hasn't improved to the point where you can move the point of viability,' she told The Observer. 'There just is a certain time limit when things like lungs are formed. Clearly if the science changes we would have to make that clear to Parliament, but it hasn't.'

MPs will have a free vote on abortion, a traditional issue of conscience, but Labour's chief whip has rebuffed pleas from anti-abortion MPs to be given the same freedom over other controversial parts of the bill, from provisions on stem-cell therapy to animal-human hybrids.

A further issue provoking controversy has been the decision to include clauses that would require explicit donor consent for all tissue used to create lines of stem cells, while another clause would block the use of any tissue from children, even if their parents give consent. This has caused considerable concern because scientists take DNA from tissue of individuals with genetic conditions, insert this into a human or animal egg cell and then create stem cells, which can be grown in laboratories. These cell lines have the same genetic defects as the patient. New therapies can then be tested on them.

The new bill, as it stands, would have a devastating impact on this kind of medical research, as leading UK scientists - including three Nobel prize winners, Sir Martin Evans, Sir Paul Nurse, and Sir John Sulston - recently warned. The law would be retrospective, so cell lines on which scientists are now working would have to be thrown out. In addition, in the case of tissue taken from children, there is the problem that many of them have conditions that mean they will not reach adulthood and, therefore, could never give consent for taking their tissue at a later date.

The issues vex scientists. To date, however, the government has refused to back down, further inflaming debate. How the bill will finally emerge from the ensuing negotiations is difficult to determine. One option being discussed is for chief whip Geoff Hoon to impose a 'soft whip', meaning Catholic cabinet ministers such as Ruth Kelly and Paul Murphy can be conveniently absent from the vote over issues concerning abortion, rather than having to choose between their government and their consciences.

But Gordon Brown is said to be heavily personally committed to the bill, and to the cause of medical research. His second son Fraser has cystic fibrosis - one of the conditions that could ultimately be cured by stem cell therapy.

Original here

Airborne is no "miracle cold buster," but this powder may be

Amid the news of a $23-million court settlement by the makers of Airborne — a supplement that's earned hundreds of millions of dollars in sales with the claim that it boosts the immune system — biomedical engineers are publishing research on a powder that could turn out to be the real thing. As this ScienCentral News video explains, the new powder could first be used to help fight cancer.


Interviewee: Tarek Fahmy, Yale University
Length: 1 min 32 sec
Produced by Brad Kloza
Edited by Brad Kloza/Chris Bergendorff

Countdown begins for Tuesday space shuttle launch

By Irene Klotz

CAPE CANAVERAL, Fla., March 8 (Reuters) - Countdown clocks at the Kennedy Space Center in Florida began ticking on Saturday toward Tuesday's launch of space shuttle Endeavour carrying a Japanese lab section and Canadian-built robot for the International Space Station.

Liftoff is targeted for 2:28 a.m. EDT/0628 GMT. Meteorologists predicted clear skies and light breezes, with a 90 percent chance conditions would be suitable for liftoff.

The seven-man Endeavour crew arrived at the Florida spaceport early on Saturday, delayed several hours by a cold front pushing through central Florida that whipped up winds, thunderstorms and sporadic heavy rain.

"We've had some interesting weather over the last 24 hours," shuttle weather officer Todd McNamara told reporters on Saturday.

The crew includes two veteran NASA astronauts: commander Dominic Gorie and lead spacewalker Richard Linnehan; rookies Greg Johnson, Michael Foreman, Robert Behnken and Garrett Reisman; and Japan's Takao Doi, who flew on a shuttle research mission in 1997.

Reisman will replace France's Leopold Eyharts as a member of the space station crew.

"We all just wanted to convey how excited we are to be here for launch week," Gorie said after the crew's belated arrival. "We've got a very, very ambitious flight schedule, but with a great orbiter waiting for us and this great crew, we're going to have a great mission."

BUSY SCHEDULE

The shuttle is scheduled to spend 16 days in orbit, NASA's longest planned mission to the space station so far. With just 11 flights remaining to the orbital outpost, NASA wants to squeeze in as much construction and maintenance time as possible before the shuttles are retired in two years.

A 12th shuttle mission to upgrade and repair the Hubble Space Telescope is planned for later this year.

During their 12 days at the station, Endeavour's astronauts plan to conduct five spacewalks to install the first part of Japan's Kibo complex and set up a robot to help with station maintenance and other tasks. The main part of Kibo, which is Japanese for "hope," is due to arrive in May.

Japan has spent about 250 billion yen, or more than $2.4 billion, to develop Kibo, said Hiroki Furihata, deputy director of the Japan Aerospace Exploration Agency liaison office at Kennedy Space Center.

Also on Saturday, Europe, which recently began operating its space station laboratory Columbus, was preparing for the debut flight of its unmanned station cargo ship aboard a massive Ariane 5 rocket from Kourou in French Guiana. Liftoff was scheduled for 11:03 p.m. EST/0403 GMT on Sunday.

The vessel, known as the Automated Transfer Vehicle, or ATV, will idle in orbit during Endeavour's flight before closing in on the station for several rendezvous and navigation tests prior to berthing. Europe plans to fly one ATV a year for the next five years to help keep the station supplied with fuel, food and other gear.

The space station, which is about 60 percent complete, is a $100 billion project of 15 nations. Next year, the station's crew size is expected to double from three to six members.

Endeavour's flight, which will be NASA's 122nd shuttle mission, is the second of six shuttle flights NASA plans for this year. (Editing by Eric Walsh)

Original here

Dark Halos Discovered on Mercury

"The halos are really exceptional," says MESSENGER science team member Clark Chapman of the Southwest Research Institute in Boulder, Colorado. "We've never seen anything like them on Mercury before and their formation is a mystery."

Consider the following:

The two craters at the bottom of the frame are located in Mercury's giant Caloris Basin, a thousand mile wide depression formed billions of years ago when Mercury collided with a comet or asteroid. For scale, the larger of the two is about 40 miles wide. Both craters have dark rims or "halos" and the one on the left is partially filled with an unknown shiny material.

Chapman offers two possible explanations for the halos:

1. The Layer Cake Theory--There could be a layer of dark material under the surface of Caloris Basin, resulting in chocolate-colored rims around craters that penetrate to just the right depth. If such a subterranean layer exists, however, it cannot be unique to the Basin. "We've found a number of dark halos outside of Caloris as well—for instance, these two near Mercury's south pole."

2. The Impact Glass Model--Thermal energy from the impacts melted some of Mercury's rocky surface. Perhaps molten rock splashed to the edge of the craters where it re-solidified as a dark, glassy substance. Similar "impact melts" are found around craters on Earth and the Moon. If this hypothesis is correct, future astronauts on Mercury exploring the crater rims would find themselves crunching across fields of tiny glass shards.

Chapman notes that the Moon also has some dark haloed craters--"Tycho is a well-known example." But lunar halos tend to be subtle and/or fragmentary. "The ones we see on Mercury are much more eye-catching and distinct."

The difference may be gravity. Lunar gravity is low. Any dark material flying out of a crater on the Moon travels a great distance, spreading out in a diffuse layer that can be difficult to see. The surface gravity of Mercury, on the other hand, is more than twice as strong as the Moon's. On Mercury, debris can't fly as far; it lands in concentrated form closer to the impact site where it can catch the attention of the human eye.
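
The gravity argument lends itself to a quick back-of-the-envelope check (my illustration, not part of the article): for ballistic ejecta on an airless body the range scales roughly as v²/g, so the same debris lands about 2.3 times closer to its crater on Mercury than on the Moon. The launch speed and angle below are arbitrary; the surface gravity values are standard approximate figures.

```python
import math

def ejecta_range_m(v_mps: float, angle_deg: float, g: float) -> float:
    """Idealized ballistic range on an airless body (flat-surface approximation)."""
    return v_mps**2 * math.sin(math.radians(2 * angle_deg)) / g

G_MOON = 1.62     # m/s^2, approximate lunar surface gravity
G_MERCURY = 3.70  # m/s^2, approximate Mercurian surface gravity

v, angle = 500.0, 45.0  # arbitrary example: debris launched at 500 m/s, 45 degrees
moon = ejecta_range_m(v, angle, G_MOON)
mercury = ejecta_range_m(v, angle, G_MERCURY)
print(f"Moon:    {moon / 1000:.0f} km")
print(f"Mercury: {mercury / 1000:.0f} km")
print(f"Ratio (Moon/Mercury): {moon / mercury:.2f}")  # ~2.3: ejecta stays much closer on Mercury
```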

Right: Another dark-haloed crater near Mercury's south pole.

None of this explains the shiny-bottomed crater: "That is an even bigger mystery," says Chapman. Superficially, the bright patch resembles an expanse of ice glistening in the sun, but that's not possible. The surface temperature of the crater at the time of the photo was around 400 degrees Celsius. Perhaps the shiny material is part of another subsurface layer, bright mixed with dark; that would be the Marbled Layer Cake Theory. "I haven't heard any really convincing explanations from our science team," he adds. "We don't yet know what the material is, why it is so bright, or why it is localized in this particular crater."

Fortunately, MESSENGER may have gathered data researchers need to solve the puzzle. Spectrometers onboard the spacecraft scanned the craters during the flyby; the colors they measured should eventually reveal the minerals involved. "The data are still being calibrated and analyzed," says Chapman.

And if those data don't yield an answer….?

There are still two more flybys—one in Oct. 2008 and another in Sept. 2009—before MESSENGER enters Mercury orbit in 2011. In the fullness of time "we'll get to the bottom of this mystery"—and probably many more mysteries yet to be revealed.

Original here

WMAP Reveals Neutrinos, End of Dark Ages, First Second of Universe

NASA released this week five years of data collected by the Wilkinson Microwave Anisotropy Probe (WMAP) that refines our understanding of the universe and its development. It is a treasure trove of information, including at least three major findings:

WMAP cosmic microwave fluctuations over the full sky with five years of data. Colors represent the tiny temperature fluctuations of the remnant glow from the infant universe: red regions are warmer and blue are cooler. Credit: WMAP Science Team
  • New evidence that a sea of cosmic neutrinos permeates the universe
  • Clear evidence the first stars took more than a half-billion years to create a cosmic fog
  • Tight new constraints on the burst of expansion in the universe's first trillionth of a second
"We are living in an extraordinary time," said Gary Hinshaw of NASA's Goddard Space Flight Center in Greenbelt, Md. "Ours is the first generation in human history to make such detailed and far-reaching measurements of our universe."

WMAP measures a remnant of the early universe - its oldest light. The conditions of the early times are imprinted on this light. It is the result of what happened earlier, and a backlight for the later development of the universe. This light lost energy as the universe expanded over 13.7 billion years, so WMAP now sees the light as microwaves. By making accurate measurements of microwave patterns, WMAP has answered many longstanding questions about the universe's age, composition and development.

The universe is awash in a sea of cosmic neutrinos. These almost weightless sub-atomic particles zip around at nearly the speed of light. Millions of cosmic neutrinos pass through you every second.

"A block of lead the size of our entire solar system wouldn’t even come close to stopping a cosmic neutrino,” said science team member Eiichiro Komatsu of the University of Texas at Austin.

Relative constituents of the universe today and when the universe was 380,000 years old, 13.7 billion years ago. Neutrinos used to be a larger fraction of the energy of the universe than they are now. Credit: WMAP Science Team
WMAP has found evidence for this so-called "cosmic neutrino background" from the early universe. Neutrinos made up a much larger part of the early universe than they do today.

Microwave light seen by WMAP from when the universe was only 380,000 years old shows that, at the time, neutrinos made up 10% of the universe, atoms 12%, dark matter 63%, photons 15%, and dark energy was negligible. In contrast, estimates from WMAP data show the current universe consists of 4.6% atoms, 23% dark matter, 72% dark energy and less than 1% neutrinos.
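
Purely as a reader's aid (not part of the NASA release), the two composition snapshots quoted above can be laid side by side; the photon and present-day neutrino entries are rough placeholders for "negligible" and "less than 1 percent".

```python
# Composition figures as quoted in the article (percent of total energy density).
early = {"neutrinos": 10, "atoms": 12, "dark matter": 63, "photons": 15, "dark energy": 0}
today = {"neutrinos": 1, "atoms": 4.6, "dark matter": 23, "photons": 0, "dark energy": 72}
# Placeholders: "less than 1 percent" neutrinos entered as 1, negligible shares as 0.

for name in early:
    print(f"{name:12s} {early[name]:5.1f}%  ->  {today[name]:5.1f}%")
print("totals:", sum(early.values()), sum(today.values()))  # each set sums to roughly 100%
```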

Cosmic neutrinos existed in such huge numbers they affected the universe’s early development. That, in turn, influenced the microwaves that WMAP observes. WMAP data suggest, with greater than 99.5% confidence, the existence of the cosmic neutrino background - the first time this evidence has been gleaned from the cosmic microwaves.

Much of what WMAP reveals about the universe is because of the patterns in its sky maps. The patterns arise from sound waves in the early universe. As with the sound from a plucked guitar string, there is a primary note and a series of harmonics, or overtones. The third overtone, now clearly captured by WMAP, helps to provide the evidence for the neutrinos.

The hot and dense young universe was a nuclear reactor that produced helium. Theories based on the amount of helium seen today predict a sea of neutrinos should have been present when helium was made. The new WMAP data agree with that prediction, along with precise measurements of neutrino properties made by Earth-bound particle colliders.

Another breakthrough derived from WMAP data is clear evidence the first stars took more than a half-billion years to create a cosmic fog. The data provide crucial new insights into the end of the "dark ages," when the first generation of stars began to shine. The glow from these stars created a thin fog of electrons in the surrounding gas that scatters microwaves, in much the same way fog scatters the beams from a car’s headlights.

The first peak reveals a specific spot size for early-universe sound waves, just as the length of a guitar string gives a specific note. The second and third peaks are the harmonics. Credit: WMAP Science Team
"We now have evidence that the creation of this fog was a drawn-out process, starting when the universe was about 400 million years old and lasting for half a billion years," said WMAP team member Joanna Dunkley of the University of Oxford in the U.K. and Princeton University in Princeton, N.J. "These measurements are currently possible only with WMAP."

A third major finding arising from the new WMAP data places tight constraints on the astonishing burst of growth in the first trillionth of a second of the universe, called “inflation”, when ripples in the very fabric of space may have been created. Some versions of the inflation theory now are eliminated. Others have picked up new support.

"The new WMAP data rule out many mainstream ideas that seek to describe the growth burst in the early universe," said WMAP principal investigator, Charles Bennett, of The Johns Hopkins University in Baltimore, Md. "It is astonishing that bold predictions of events in the first moments of the universe now can be confronted with solid measurements."

The five-year WMAP data were released this week, and results were issued in a set of seven scientific papers submitted to the Astrophysical Journal.

Prior to the release of the new five-year data, WMAP already had made a pair of landmark finds. In 2003, the probe's determination that there is a large percentage of dark energy in the universe erased remaining doubts about dark energy's very existence. That same year, WMAP also pinpointed the 13.7 billion year age of the universe.

Additional WMAP science team institutions are: the Canadian Institute for Theoretical Astrophysics, Columbia University, University of British Columbia, ADNET Systems, University of Chicago, Brown University, and UCLA.

Original here

Mankind to be Represented in Space by Doritos Ad

Apparently unconvinced that the whole of Planet Earth is a large enough customer base, Doritos is teaming up with astronomers to broadcast their advertising into space.

The sum total of mankind’s achievements as a species will be represented to aliens in the form of a Doritos advertisement. Image by Med

The advertisement will be targeted at part of the Ursa Major constellation, a zone astronomers believe contains the conditions for life. Doritos obviously thinks the predicted inhabitants of Ursa Major are “extreme” enough to handle colored, flavor-dusted tortilla chips and will not blow us up for bombarding their planets with unwanted advertising for a product that doesn’t exist there.

This isn’t the first time advertisements have been broadcast into the cosmos; TV broadcast signals have been slipping into the cosmos since I Love Lucy. It is, however, the first time a company has actually tried to broadcast their commercials directly into space.

The publicity stunt is one more way for Doritos to hype their product and one more humiliation for astronomers, many of whom are desperately searching for funding as observatories face closure in the face of government budget cuts.

The advertisement will be broadcast from the EISCAT Space Centre in Svalbard, Norway, near the site of the “doomsday” Global Seed Vault. Using a 500MHz ultra-high-frequency radar, the space centre will send a 30-second ad, chosen during a contest, 42 light years away to the zone known as Ursa Major, also called the Great Bear or the Plough. There is a star there called 47 UMa that has orbiting planets which could possibly harbor life.

Much like the uproar following NASA’s decision to broadcast a Beatles song at the North Star, some scientists started freaking out about aliens getting annoyed and attacking us. Most scientists, however, believe aliens either don’t exist, won’t receive the advertisement, or really like Doritos.

Prof Tony van Eyken is the director of EISCAT. He says: “Broadcasting an advert extra-terrestrially is a big and exciting step for everyone on Earth as up until now we have only tended to listen for incoming transmissions.”

“In this case we are giving somebody the opportunity to create this message as a way to say hello on behalf of mankind,” he added.

I’m not so much sold on the idea of broadcasting advertisements to aliens as an “exciting step”. I don’t know about you, but I would prefer that aliens’ first “hello from mankind” be something slightly classier than an advertisement for flavored tortilla chips.

Original here

Elusive bird spotted near Papua New Guinea

Species thought extinct rediscovered; Beck’s petrel not seen for 80 years

An adult Beck's petrel photographed off the north-east coast of Papua New Guinea in August 2007. The Beck's petrel had not been seen for almost 80 years.

LONDON - A bird species not seen for 80 years has been rediscovered near Papua New Guinea, experts said Friday. The Beck's petrel, long thought to be extinct, was photographed last summer by an Israeli ornithologist in the Bismarck Archipelago, a group of islands northeast of New Guinea.

Hadoram Shirihai, who was leading an expedition to find the bird, photographed more than 30 Beck's petrels. Shirihai's photographs and his report were published in "The Bulletin of the British Ornithologists' Club" on Friday.

Britain's Royal Society for the Protection of Birds, and BirdLife International — a Cambridge conservation group — both said on Friday that their committees of experts had reviewed Shirihai's evidence and agreed he had found a Beck's petrel.

"I don't think there's much doubt about it," said BirdLife International spokesman Nick Askew.

The pictures are the first hard evidence of the Beck's petrel's existence since unconfirmed sightings of the bird were reported in Australia two years ago.

Beck's petrels are seabirds related to albatrosses and shearwaters. They are dark brown with pale bellies and tube-like noses. At first glance they look similar to the Tahitian petrel, one of 66 different petrel species, but Beck's are smaller and have narrower wings than the Tahitians.

The last known specimen of the Beck's petrel before its rediscovery was collected in 1929 and the species is currently categorized as critically endangered by BirdLife International.

Shirihai compared a dead petrel he brought back with the data collected by Rollo Beck in the late 1920s to verify his was a genuine Beck's petrel.

The ornithologist has previously helped discover several new species in Europe, the Middle East and North Africa, the Royal Society for the Protection of Birds said. He is one of the very few people to have visited almost every sub-Antarctic island and the breeding grounds of all forms of albatrosses, the society said.

Similar discoveries have touched off controversy in the past.

In 2004, ornithologists in the United States took grainy videos of an ivory-billed woodpecker, a magnificent bird thought extinct for decades. After the 2005 announcement, other experts said the sighting in an Arkansas swamp seemed to be a more common woodpecker. Three years later the debate still goes on.

Copyright 2008 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

Original here

MIT researchers demonstrate protective role of microRNA

Genetic snippets linked to cancer also key to embryonic cell development


Snippets of genetic material that have been linked to cancer also play a critical role in normal embryonic development in mice, according to a new paper from MIT cancer biologists.

The work, reported in the March 7 issue of Cell, shows that a family of microRNAs--short strands of genetic material--protects mouse cells during development and allows them to grow normally. But that protective role could backfire: The researchers theorize that when these microRNAs become overactive, they can help keep alive cancer cells that should otherwise die--providing another reason to target microRNAs as a treatment for cancer.

Discovered only a decade ago, microRNAs bind to messenger RNAs (mRNAs), preventing them from delivering protein assembly instructions, thereby inhibiting gene expression. The details of how microRNAs act are not yet fully understood.

"The scientific community is busy trying to understand what specific biological functions these microRNAs affect," said Andrea Ventura, lead author of the paper and postdoctoral associate in the Koch Institute for Integrative Cancer Research at MIT (formerly known as the Center for Cancer Research).

Ventura--who works in the laboratory of Tyler Jacks, director of the Koch Institute--and his colleagues studied the function of a family of microRNAs known as the miR-17~92 cluster.

Previous research has shown that the miR-17~92 cluster is overactive in some cancers, especially those of the lungs and B cells.

To better understand these microRNAs' role in cancer, the researchers decided to study their normal function. Knocking out microRNA genes and observing the effects can offer clues into how microRNA helps promote cancer when overexpressed.

They found that when miR-17~92 was knocked out in mice, the animals died soon after birth, apparently because their lungs were too small. Also, their B cells, a type of immune cell, died in an early stage of cell development.

This suggests that miR-17~92 is critical to the normal development of lung cells and B cells. In B cells, these microRNAs are likely acting to promote cell survival by suppressing a gene that induces cell death, said Ventura.

"Understanding why these things are happening provides important insight into how microRNAs affect tumorigenesis," he said.

The researchers theorize that when miR-17~92 becomes overactive in cancer cells, it allows cells that should undergo programmed cell death to survive.

Blocking microRNAs that have become overactive holds promise as a potential cancer treatment. Research is now being done on molecules that prevent microRNAs from binding to their target mRNA.

More work needs to be done to make these inhibitors into stable and deliverable drugs, but Ventura said it's possible it could be done in the near future.

The exact genes targeted by miR-17~92 are not known, but one strong suspect is a gene called Bim, which promotes cell death. However, a single microRNA can have many targets, so it's likely there are other genes involved.

The researchers also studied the effects of knocking out two other microRNA clusters that are closely related to miR-17~92 but located elsewhere in the genome.

They found that if the other two microRNA clusters are knocked out but miR-17~92 remains intact, the mice develop normally. However, if miR-17~92 and one of these similar clusters are removed, the mice die before birth, suggesting there is some kind of synergistic effect between these microRNA families.

Other MIT authors of the paper are Amanda Young, graduate student in biology; Monte Winslow, postdoctoral fellow in the Center for Cancer Research (CCR); Laura Lintault, staff affiliate in the CCR; Alex Meissner, faculty member at the Broad Institute of MIT and Harvard; Jamie Newman, graduate student in biology; Denise Crowley, staff affiliate at the CCR; Rudolf Jaenisch, professor of biology and member of the Whitehead Institute for Biomedical Research; Phillip Sharp, MIT Institute Professor; and Jacks, who is also a professor of biology.

The research was funded by the National Institutes of Health and the National Cancer Institute.

Original here

EYEING THE EVOLUTIONARY PAST

As we survey nature, the eyes of various creatures reveal the underlying means by which a single attribute can express itself over millions of years.

Eyes are a primary organ of the human experience: Not only are we profoundly visual animals, but eyes are organs of expression, they are indicators of health and beauty, and we have a cultural tradition of regarding them as windows into the mind. We value eyes in other animals, too. If you have a pet, you know that one of the signs of its domesticity and your friendliness toward one another is a willingness to look into each other's eyes and acknowledge each other's gaze, while looking into the eyes of a wild animal is a challenge and a threat. The eyes that we like most are those most similar to our own, that share an evolutionary affinity with us, and the farther we move from our own lineage, the more disquieting the face and eyes of an animal can be. Eyes are actually remarkably diverse and, although one might think that visual function is fairly straightforward, and that there wouldn't be that many ways to put together a visual sensor, nature has found many ways to do it.

While eyes are common in larger animal species, about a third of all animal phyla lack eyes altogether; sea urchins do not bother with them, nor do many worms. Another third have eyes that look rudimentary to us: spots and patches and pits that can sense whether it's night or day or whether a shadow is passing overhead, but that do not form any kind of image. The final third have true image-forming eyes that can capture a picture of what's going on around them and pass that on to some kind of brain or nerve net. The phyla that have true eyes are a diverse subset of the multicellular animals, including jellyfish and sea anemones, molluscs, annelid worms, onychophora (velvet worms), arthropods, and us chordates, which is a strange distribution. It's as if eyes popped up in scattered lineages interspersed with groups that lack them. For a long time, one of the hypotheses to explain all these eyes was that they evolved independently, multiple times within the animal kingdom.

If you look at the deep structure of eyes and their cellular components, that impression is reinforced. For instance, from the outside, octopus eyes look remarkably human. They are fluid-filled eyeballs with an iris, a pupil, and a lens. On the inside, they are fundamentally different. The light-sensitive layer, the retina, is inside out in humans relative to the octopus. Our photoreceptor cells all point to the back of the eye, while theirs point forward toward the lens. Octopus eyes also have all the nervous wiring exiting out the back, rather than being draped over the photoreceptors. They have an organization that is visually superior to ours, but the important point is that the structural details are so radically different that we know cephalopod eyes did not evolve from chordate eyes or vice versa. They evolved independently from simpler precursors along different lines, and the external similarities are a result of convergence, not evolutionary relationship.

The compound eyes of arthropods are even more radically different. Instead of a single lens focusing an image on an array of photoreceptors, they have compound eyes with multiple lenses, each focusing a part of the visual field on a small set of photoreceptors. Furthermore, the photoreceptors are a different kind of cell. We chordates have ciliary photoreceptors, where cells have modified a kind of motile appendage called a cilium into an antenna for collecting light. Arthropods have rhabdomeric photoreceptors, which instead fold up one side of the cell into deeply corrugated furrows to increase the surface area for collecting light. Ciliary and rhabdomeric photoreceptors also use different G proteins—signal transduction proteins that activate enzymes in the cell in response to the reception of light—and different opsins (a c-opsin vs. an r-opsin), the proteins that carry the photoreceptive pigments. Different cells, different arrangements, different optical elements, different proteins—eyes are overwhelming in their diversity.

To make matters worse, cephalopods with their human-looking eyes use rhabdomeric photoreceptors, like the ones used in the compound eyes of insects. At the same time, those cephalopod cousins, clams, have mantle eyes that use ciliary receptors. It begins to look like genetic chaos, as if animals just slapped together eyes with any old components on hand, and without much respect for any kind of evolutionary unity. Eyes must be incredibly easy to evolve, or we're missing some important unifying principle in their construction. The answer has been found by reaching farther and farther back into evolutionary history, using analysis of the diversity to find the core, common elements of eyes.

One clue can be found in the polychaete worm, Platynereis. Not all animals are limited to just two eyes, and Platynereis is typical. It has one set of eyes for the larval form, and another for the adult form. As an adult it has, in addition to a pair of lens-type eyes with rhabdomeric photoreceptors, another pair, of the ciliary type, embedded in its brain. It has both!

Lest you think this is an obscure condition found only in bizarre marine invertebrates, humans have been found to carry a vestige of a similar condition: Our photoreceptors use c-opsin, but we also have cells in our retinas that use the rhabdomeric form, r-opsin, thought to be important in light/dark detection for setting circadian rhythms, but which don't form images. This tells us something about the last common ancestor of animals—that it might possibly have had multiple kinds of receptors and eyes, and that what we observe in the diversity of extant eyes is not that it is easy to evolve an eye, but that it is easy to lose one or the other kind of eye in a lineage.

The key to figuring out the evolutionary relationships is to look at the most distantly related group in the set of eyed animals, which in this case is the cnidarians, jellyfish and anemones. Recent work by Plachetzki, Degnan, and Oakley has found that this clade has both the ciliary and rhabdomeric opsins, making the division into two kinds of photoreceptors an ancient one, occurring at least 600 million years ago.

This ancient animal probably had very simple eye spots with no image-forming ability, but still needed some diversity in eye function. It needed to be able to sense both slow, long-duration events such as the changing of day into night, and more rapid events, such as the shadow of a predator moving overhead. These two forms arose by a simple gene duplication event and concomitant specialization of association with specific G proteins, which has also been found to require relatively few amino acid changes. Over hundreds of millions of years, this simple molecular divergence has been amplified through a cascade of small changes into the multitude of diverse forms we see now. There is a fundamental unity that arose early, but has been obscured by the accumulation of evolutionary change. Even the eyes of a scorpion carry an echo of our kinship, not in their superficial appearance, but deep down in the genes from which they are built.

Original here

NIST 'Quantum Logic Clock' Rivals Mercury Ion as World's Most Accurate Clock


NIST physicist Till Rosenband adjusts the quantum logic clock, which derives its “ticks” from the natural vibrations of an aluminum ion (electrically charged atom). The aluminum ion is trapped together with one beryllium ion inside the copper-colored chamber in the foreground. Credit: Copyright Geoffrey Wheeler

An atomic clock that uses an aluminum atom to apply the logic of computers to the peculiarities of the quantum world now rivals the world's most accurate clock, based on a single mercury atom. Both clocks are at least 10 times more accurate than the current U.S. time standard.

The measurements were made in a yearlong comparison of the two next-generation clocks, both designed and built at the Commerce Department's National Institute of Standards and Technology (NIST). The clocks were compared with record precision, allowing scientists to measure the relative frequencies of the two clocks to 17 digits—the most accurate measurement of this type ever made. The comparison produced the most precise results yet in the worldwide quest to determine whether some of the fundamental constants that describe the universe are changing slightly over time, a hot research question that may alter basic models of the cosmos.

The research is described in the March 6 issue of Science Express. The aluminum and mercury clocks are both based on natural vibrations in ions (electrically charged atoms) and would neither gain nor lose one second in over 1 billion years—if they could run for such a long time—compared to about 80 million years for NIST-F1, the U.S. time standard based on neutral cesium atoms.
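
In the fractional-frequency terms metrologists usually quote (my own rough conversion, not a figure from the article), "one second in a billion years" works out to a few parts in 10^17, and NIST-F1's 80 million years to a few parts in 10^16:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s

def fractional_uncertainty(years_to_drift_one_second: float) -> float:
    """Fractional frequency error implied by gaining or losing one second over the given span."""
    return 1.0 / (years_to_drift_one_second * SECONDS_PER_YEAR)

print(f"ion clocks (~1 billion years/second):  {fractional_uncertainty(1e9):.1e}")   # ~3e-17
print(f"NIST-F1    (~80 million years/second): {fractional_uncertainty(8e7):.1e}")   # ~4e-16
```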

The mercury clock was first demonstrated in 2000 and is now four times better than its last published evaluation in 2006, thanks to ongoing improvements in the clock design and operation. The mercury clock continues its reign as the world’s most accurate for now, by a margin of 20 percent over the aluminum clock, but the designers say both experimental clocks could be improved further.

“The aluminum clock is very accurate because it is insensitive to background magnetic and electric fields, and also to temperature,” says Till Rosenband, the NIST physicist who built the clock and is the first author of the new paper. “It has the lowest known sensitivity of any atomic clock to temperature, which is one of the most difficult uncertainties to calibrate.”

Both the aluminum clock and the mercury clock are based on ions vibrating at optical frequencies, which are 100,000 times higher than microwave frequencies used in NIST-F1 and other similar time standards around the world. Because optical clocks divide time into smaller units, they can be far more precise than microwave standards. NIST scientists have several other optical atomic clocks in development, including one based on thousands of neutral strontium atoms. The strontium clock recently achieved twice the accuracy of NIST-F1, but still trails the mercury and aluminum clocks.

Highly accurate clocks are used to synchronize telecommunications networks and deep-space communications, and for satellite navigation and positioning. Next-generation clocks may also lead to new types of gravity sensors, which have potential applications in exploration for underground natural resources and fundamental studies of the Earth.

Laboratories around the world are developing optical clocks based on a variety of different designs and atoms; it is not yet clear which design will emerge as the best candidate for the next international standard.

The new paper provides the first published evaluation of the operational quantum logic clock, so-named because it is based on the logical reasoning process used in quantum computers (see sidebar below for details). The clock is a spin-off of NIST research on quantum computers, which grew out of earlier atomic clock research. Quantum computers, if they can be built, will be capable of solving certain types of complex problems that are impossible or prohibitively costly or time consuming to solve with today’s technologies.

The NIST quantum logic clock uses two different kinds of ions, aluminum and beryllium, confined closely together in an electromagnetic trap and slowed by lasers to nearly “absolute zero” temperatures. Aluminum is a stable source of clock ticks, but its properties cannot be detected easily with lasers. The NIST scientists applied quantum computing methods to share information from the aluminum ion with the beryllium ion, a workhorse of their quantum computing research. The scientists can detect the aluminum clock’s ticks by observing light signals from the beryllium ion.

NIST’s tandem ion approach is unique among the world’s atomic clocks and has a key advantage: “You can pick from a bigger selection of atoms,” explains NIST physicist Jim Bergquist, who built the mercury clock. “And aluminum has a lot of good qualities—better than mercury’s.”

An optical clock can be evaluated precisely only by comparison to another clock of similar accuracy serving as a “ruler.” NIST scientists used the quantum logic clock to measure the mercury clock, and vice versa. In addition, based on fluctuations in the frequencies of the two clocks relative to each other over time, NIST scientists were able to search for a possible change over time in a fundamental quantity called the fine-structure constant. This quantity measures the strength of electromagnetic interactions in many areas of physics, from studies of atoms and molecules to astronomy. Some evidence from astronomy has suggested the fine-structure constant may be changing very slowly over billions of years. If such changes are real, scientists would have to dramatically change their theories of the fundamental nature of the universe.

The NIST measurements indicate that the value of the fine-structure constant is not changing by more than 1.6 quadrillionths of 1 percent per year, with an uncertainty of 2.3 quadrillionths of 1 percent per year (a quadrillionth is a millionth of a billionth). The result is small enough to be “consistent with no change,” according to the paper. However, it is still possible that the fine-structure constant is changing at a rate smaller than anyone can yet detect. The new NIST limit is approximately 10 times smaller than the best previous measurement of the possible present-day rate of change in the fine-structure constant. The mercury clock is an especially useful tool for such tests because its frequency fluctuations are magnified by any changes in this constant.
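
For readers who prefer scientific notation, the same bound can be restated (my conversion, not NIST's wording): a quadrillionth of 1 percent is 10^-17, so the quoted rate limit and uncertainty are about 1.6×10^-17 and 2.3×10^-17 per year.

```python
QUADRILLIONTH = 1e-15
PERCENT = 1e-2

limit_per_year = 1.6 * QUADRILLIONTH * PERCENT        # 1.6e-17 per year
uncertainty_per_year = 2.3 * QUADRILLIONTH * PERCENT  # 2.3e-17 per year
print(f"bound on fractional change: {limit_per_year:.1e} +/- {uncertainty_per_year:.1e} per year")
# The bound is smaller than its uncertainty, i.e. consistent with no change at all.
```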

Where the ‘Quantum Logic Clock’ Gets Its Name

The NIST quantum logic clock is so named because it borrows techniques that are key to quantum computers, which would solve problems using quantum mechanics, nature’s instruction book for the smallest particles of matter and light. Logic is reasoning that determines an action or result based on which one of different possible options is received as input. In the NIST clock, the input options are two different quantum states, or internal energy levels, of an aluminum ion. Information about this state is transferred to a beryllium ion, which, depending on the input, produces different signals that are easily detected.

NIST scientists use lasers to cool the two ions, which are held 4 thousandths of a millimeter apart in an electromagnetic trap. Aluminum is the larger of the two ions, while the beryllium emits light under the conditions of this experiment. Scientists hit the ions with pulses from a “clock laser” within a narrow frequency range. If the laser frequency is at the center of the frequency range, the precise “resonance frequency” of aluminum, this ion jumps to a higher energy level, or 1 in the binary language of computers. Otherwise, the ion remains in the lower energy state, or 0.

If there is no change in the aluminum ion, then another laser pulse causes both ions to begin rocking side to side in unison because of their physical proximity and the interaction of their electrical charges. An additional laser pulse converts this motion into a change in the internal energy level of the beryllium ion. This pulse reverses the direction of the ion’s magnetic “spin,” and the beryllium goes dark, a signal that the aluminum remained in the 0 state.

On the other hand, if the aluminum ion jumps to the higher energy level, then the additional laser pulses fail to stimulate a shared rocking motion and have no effect on the beryllium ion, which keeps emitting light. Scientists detect this light as a signal that the aluminum ion jumped from 0 to 1.

The goal is to tune the clock laser to the exact frequency that prompts the aluminum to jump from 0 to 1. The actual measurement of the ticking of the clock is provided not by the ions but rather by the clock laser’s precisely tuned center frequency, which is measured with a “frequency comb,” a tool for measuring very high optical frequencies, or colors of light.
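
The readout logic described above amounts to a simple truth table: beryllium goes dark if the aluminum ion stayed in 0, and keeps fluorescing if it jumped to 1. The sketch below restates that logic schematically; the Lorentzian "excitation probability" and the frequency scan at the end are simplified stand-ins of my own, not a model of the actual NIST apparatus.

```python
import random

def excitation_probability(laser_freq: float, resonance_freq: float, linewidth: float) -> float:
    """Toy Lorentzian stand-in for the chance the clock laser flips the aluminum ion from 0 to 1."""
    detuning = laser_freq - resonance_freq
    return 1.0 / (1.0 + (2.0 * detuning / linewidth) ** 2)

def interrogate(laser_freq: float, resonance_freq: float, linewidth: float) -> bool:
    """One clock cycle: probe the aluminum ion, then read its state out via the beryllium ion.

    Returns True if the beryllium ion keeps fluorescing (aluminum jumped to 1),
    False if it goes dark (aluminum remained in 0).
    """
    aluminum_excited = random.random() < excitation_probability(laser_freq, resonance_freq, linewidth)
    # Quantum-logic readout: the shared rocking motion maps the aluminum state onto beryllium.
    #   aluminum stayed in 0 -> motion is excited -> beryllium flips and goes DARK
    #   aluminum jumped to 1 -> no shared motion -> beryllium keeps EMITTING LIGHT
    beryllium_fluoresces = aluminum_excited
    return beryllium_fluoresces

# Servo idea: scan the clock laser and steer it toward the frequency with the highest jump rate.
resonance, linewidth = 1.0, 1e-3   # arbitrary units, for illustration only
for offset in (-2e-3, -1e-3, 0.0, 1e-3, 2e-3):
    jumps = sum(interrogate(resonance + offset, resonance, linewidth) for _ in range(1000))
    print(f"detuning {offset:+.0e}: jump fraction {jumps / 1000:.2f}")
```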

Original here

OUT OF THE BLUE

Can a thinking, remembering, decision-making, biologically accurate brain be built from a supercomputer?


A computer simulation of the upper layer of a rat brain neocortical column. Here neurons light up in a "global excitatory state" of blues and yellows. Courtesy of Alain Herzog/EPFL

In the basement of a university in Lausanne, Switzerland, sit four black boxes, each about the size of a refrigerator and filled with 2,000 IBM microchips stacked in repeating rows. Together they form the processing core of a machine that can handle 22.8 trillion operations per second. It contains no moving parts and is eerily silent. When the computer is turned on, the only thing you can hear is the continuous sigh of the massive air conditioner. This is Blue Brain.

The name of the supercomputer is literal: Each of its microchips has been programmed to act just like a real neuron in a real brain. The behavior of the computer replicates, with shocking precision, the cellular events unfolding inside a mind. "This is the first model of the brain that has been built from the bottom-up," says Henry Markram, a neuroscientist at Ecole Polytechnique Fédérale de Lausanne (EPFL) and the director of the Blue Brain project. "There are lots of models out there, but this is the only one that is totally biologically accurate. We began with the most basic facts about the brain and just worked from there."

Before the Blue Brain project launched, Markram had likened it to the Human Genome Project, a comparison that some found ridiculous and others dismissed as mere self-promotion. When he launched the project in the summer of 2005, as a joint venture with IBM, there was still no shortage of skepticism. Scientists criticized the project as an expensive pipedream, a blatant waste of money and talent. Neuroscience didn't need a supercomputer, they argued; it needed more molecular biologists. Terry Sejnowski, an eminent computational neuroscientist at the Salk Institute, declared that Blue Brain was "bound to fail," for the mind remained too mysterious to model. But Markram's attitude was very different. "I wanted to model the brain because we didn't understand it," he says. "The best way to figure out how something works is to try to build it from scratch."

The Blue Brain project is now at a crucial juncture. The first phase of the project—"the feasibility phase"—is coming to a close. The skeptics, for the most part, have been proven wrong. It took less than two years for the Blue Brain supercomputer to accurately simulate a neocortical column, which is a tiny slice of brain containing approximately 10,000 neurons, with about 30 million synaptic connections between them. "The column has been built and it runs," Markram says. "Now we just have to scale it up." Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. "If we build this brain right, it will do everything," Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? "When I say everything, I mean everything," he says, and a mischievous smile spreads across his face.

Henry Markram is tall and slim. He wears jeans and tailored shirts. He has an aquiline nose and a lustrous mop of dirty blond hair that he likes to run his hands through when contemplating a difficult problem. He has a talent for speaking in eloquent soundbites, so that the most grandiose conjectures ("In ten years, this computer will be talking to us.") are tossed off with a casual air. If it weren't for his bloodshot, blue eyes—"I don't sleep much," he admits—Markram could pass for a European playboy.

But the playboy is actually a lab rat. Markram starts working around nine in the morning, and usually doesn't leave his office until the campus is deserted and the lab doors are locked. Before he began developing Blue Brain, Markram was best known for his painstaking studies of cellular connectivity, which one scientist described to me as "beautiful stuff...and yet it must have been experimental hell." He trained under Dr. Bert Sakmann, who won a Nobel Prize for pioneering the patch clamp technique, allowing scientists to monitor the flux of voltage within an individual brain cell, or neuron, for the first time. (This involves piercing the membrane of a neuron with an invisibly sharp glass pipette.) Markram's technical innovation was "patching" multiple neurons at the same time, so that he could eavesdrop on their interactions. This experimental breakthrough promised to shed light on one of the enduring mysteries of the brain, which is how billions of discrete cells weave themselves into functional networks. In a series of elegant papers published in the late 1990s, Markram was able to show that these electrical conversations were incredibly precise. If, for example, he delayed a neuron's natural firing time by just a few milliseconds, the entire sequence of events was disrupted. The connected cells became strangers to one another.

When Markram looked closer at the electrical language of neurons, he realized that he was staring at a code he couldn't break. "I would observe the cells and I would think, 'We are never going to understand the brain.' Here is the simplest possible circuit—just two neurons connected to each other—and I still couldn't make sense of it. It was still too complicated."


Cables running from the Blue Gene/L supercomputer to the storage unit. The 2,000-microchip Blue Gene machine is capable of processing 22.8 trillion operations per second—just enough to model a 1-cubic-mm column of rat brain. Courtesy of Alain Herzog/EPFL

Neuroscience is a reductionist science. It describes the brain in terms of its physical details, dissecting the mind into the smallest possible parts. This process has been phenomenally successful. Over the last 50 years, scientists have managed to uncover a seemingly endless list of molecules, enzymes, pathways, and genes. The mind has been revealed as a Byzantine machine. According to Markram, however, this scientific approach has exhausted itself. "I think that reductionism peaked five years ago," he says. "This doesn't mean we've completed the reductionist project, far from it. There is still so much that we don't know about the brain. But now we have a different, and perhaps even harder, problem. We're literally drowning in data. We have lots of scientists who spend their life working out important details, but we have virtually no idea how all these details connect together. Blue Brain is about showing people the whole."

In other words, the Blue Brain project isn't just a model of a neural circuit. Markram hopes that it represents a whole new kind of neuroscience. "You need to look at the history of physics," he says. "From Copernicus to Einstein, the big breakthroughs always came from conceptual models. They are what integrated all the facts so that they made sense. You can have all the data in the world, but without a model the data will never be enough."

Markram has good reason to cite physics—neuroscience has almost no history of modeling. It's a thoroughly empirical discipline, rooted in the manual labor of molecular biology. If a discovery can't be parsed into something observable—like a line on a gel or a recording from a neuron—then, generally, it's dismissed. The sole exception is computational neuroscience, a relatively new field that also uses computers to model aspects of the mind. But Markram is dismissive of most computational neuroscience. "It's not interested enough in the biology," he says. "What they typically do is begin with a brain function they want to model"—like object detection or sentence recognition—"and then try to see if they can get a computer to replicate that function. The problem is that if you ask a hundred computational neuroscientists to build a functional model, you'll get a hundred different answers. These models might help us think about the brain, but they don't really help us understand it. If you want your model to represent reality, then you've got to model it on reality."

Of course, the hard part is deciphering that reality in the first place. You can't simulate a neuron until you know how a neuron is supposed to behave. Before the Blue Brain team could start constructing their model, they needed to aggregate a dizzying amount of data. The collected works of modern neuroscience had to be painstakingly programmed into the supercomputer, so that the software could simulate our hardware. The problem is that neuroscience is still woefully incomplete. Even the simple neuron, just a sheath of porous membrane, remains a mostly mysterious entity. How do you simulate what you can't understand?

Markram tried to get around "the mystery problem" by focusing on a specific section of a brain: a neocortical column in a two-week-old rat. A neocortical column is the basic computational unit of the cortex, a discrete circuit of flesh that's 2 mm long and 0.5 mm in diameter. The gelatinous cortex consists of thousands of these columns—each with a very precise purpose, like processing the color red or detecting pressure on a patch of skin, and a basic structure that remains the same, from mice to men. The virtue of simulating a circuit in a rodent brain is that the output of the model can be continually tested against the neural reality of the rat, a gruesome process that involves opening up the skull and plunging a needle into the brain. The point is to electronically replicate the performance of the circuit, to build a digital doppelganger of a biological machine.

Felix Schürmann, the project manager of Blue Brain, oversees this daunting process. He's 30 years old but looks even younger, with a chiseled chin, lean frame, and close-cropped hair. His patient manner is that of someone used to explaining complex ideas in simple sentences. Before the Blue Brain project, Schürmann worked at the experimental fringes of computer science, developing simulations of quantum computing. Although he's since mastered the vocabulary of neuroscience, referencing obscure acronyms with ease, Schürmann remains most comfortable with programming. He shares a workspace with an impressively diverse group—the 20 or so scientists working full-time on Blue Brain's software originate from 14 different countries. When we enter the hushed room, the programmers are all glued to their monitors, fully absorbed in the hieroglyphs on the screen. Nobody even looks up. We sit down at an empty desk and Schürmann opens his laptop.


In Markram's laboratory, state-of-the-art equipment allows for computer-controlled, simultaneous recordings of the tiny electrical currents that form the basis of nerve impulses. Here, a technique known as "patch clamp" provides direct access to seven individual neurons and their chemical synaptic interactions. The patch clamp robot—at work 24 hours a day, seven days a week—helped the Blue Brain team speed through 30 years of research in six months. Inset, a system integrates a bright-field microscope with computer-assisted reconstruction of neuron structure. The entire setup is enclosed inside a "Faraday cage" to reduce electromagnetic interference and mounted on a floating table to minimize vibrations. Courtesy of Alain Herzog/EPFL

The computer screen is filled with what look like digitally rendered tree branches. Schürmann zooms out so that the branches morph into a vast arbor, a canopy so dense it's practically opaque. "This," he proudly announces, "is a virtual neuron. What you're looking at are the thousands of synaptic connections it has made with other [virtual] neurons." When I look closely, I can see the faint lines where the virtual dendrites are subdivided into compartments. At any given moment, the supercomputer is modeling the chemical activity inside each of these sections so that a single simulated neuron is really the sum of 400 independent simulations. This is the level of precision required to accurately imitate just one of the 100 billion cells—each of them unique—inside the brain. When Markram talks about building a mind from the "bottom-up," these intracellular compartments are the bottom. They are the fundamental unit of the model.
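For a concrete picture of what "the sum of 400 independent simulations" means, the toy sketch below treats a neuron as a chain of passive membrane compartments, each with its own voltage, coupled to its neighbors by an axial conductance. It is only a schematic of the general compartmental-modeling idea, with invented parameters; the Blue Brain models add active ion channels and reconstructed morphologies on top of this skeleton.

```python
# Toy multi-compartment neuron: each compartment is a leaky patch of membrane,
# coupled to its neighbours by an axial conductance. All parameters are
# illustrative; only the figure of ~400 compartments comes from the article.
import numpy as np

N_COMP = 400          # compartments per cell
DT = 0.025            # time step, ms
C_M = 1.0             # membrane capacitance (arbitrary units)
G_LEAK = 0.1          # leak conductance
E_LEAK = -65.0        # resting potential, mV
G_AXIAL = 0.5         # coupling between neighbouring compartments

v = np.full(N_COMP, E_LEAK)        # membrane voltage in each compartment
i_inject = np.zeros(N_COMP)
i_inject[0] = 2.0                  # inject current at the "soma" end

for step in range(4000):           # 100 ms of simulated time
    # axial current from neighbouring compartments (no flux past the ends)
    v_left = np.concatenate(([v[0]], v[:-1]))
    v_right = np.concatenate((v[1:], [v[-1]]))
    i_axial = G_AXIAL * (v_left - v) + G_AXIAL * (v_right - v)
    # every compartment is updated independently from its own currents
    dv = (-(G_LEAK * (v - E_LEAK)) + i_axial + i_inject) / C_M
    v += DT * dv

print(f"soma end: {v[0]:.1f} mV, far end: {v[-1]:.1f} mV")
```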

But how do you get these simulated compartments to act in a realistic manner? The good news is that neurons are electrical processors: They represent information as ecstatic bursts of voltage, just like a silicon microchip. Neurons control the flow of electricity by opening and closing different ion channels, specialized proteins embedded in the cellular membrane. When the team began constructing their model, the first thing they did was program the existing ion channel data into the supercomputer. They wanted their virtual channels to act just like the real thing. However, they soon ran into serious problems. Many of the experiments used inconsistent methodologies and generated contradictory results, which were too irregular to model. After several frustrating failures—"The computer was just churning out crap," Markram says—the team realized that if they wanted to simulate ion channels, they needed to generate the data themselves.
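What "programming the ion channel data into the supercomputer" amounts to, in schematic form, is describing each channel type by voltage-dependent opening and closing rates and letting the simulator integrate the resulting open fraction over time. The sketch below uses a single two-state channel with placeholder rate functions; it is not the team's measured kinetics, only the shape of the bookkeeping.

```python
# A two-state (closed <-> open) voltage-gated channel, in the spirit of
# Hodgkin-Huxley gating variables. The rate functions are illustrative
# placeholders, not fitted to any real recordings.
import math

def alpha(v_mv):   # closed -> open rate (1/ms), rises with depolarisation
    return 0.1 * math.exp(v_mv / 20.0)

def beta(v_mv):    # open -> closed rate (1/ms), falls with depolarisation
    return 0.125 * math.exp(-v_mv / 30.0)

def open_fraction(v_mv, t_ms=20.0, dt=0.01):
    """Fraction of channels open after holding the membrane at v_mv."""
    p_open = 0.0
    for _ in range(int(t_ms / dt)):
        dp = alpha(v_mv) * (1.0 - p_open) - beta(v_mv) * p_open
        p_open += dt * dp
    return p_open

for v in (-80, -40, 0, 40):        # a simple voltage-clamp protocol
    print(f"{v:+4d} mV -> open fraction {open_fraction(v):.3f}")
```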

That's when Schürmann leads me down the hall to Blue Brain's "wet lab." At first glance, the room looks like a generic neuroscience lab. The benches are cluttered with the usual salt solutions and biotech catalogs. There's the familiar odor of agar plates and astringent chemicals. But then I notice that, tucked in the corner of the room, is a small robot. The machine is about the size of a microwave and consists of a beige plastic tray filled with a variety of test tubes and a delicate metal claw holding a pipette. The claw is constantly moving back and forth across the tray, taking tiny sips from its buffet of different liquids. I ask Schürmann what the robot is doing. "Right now," he says, "it's recording from a cell. It does this 24 hours a day, seven days a week. It doesn't sleep and it never gets frustrated. It's the perfect postdoc."

The science behind the robotic experiments is straightforward. The Blue Brain team genetically engineers Chinese hamster ovary cells to express a single type of ion channel—the brain contains more than 30 different types of channels—then they subject the cells to a variety of physiological conditions. That's when the robot goes to work. It manages to "patch" a cell about 50 percent of the time, which means that it can generate hundreds of data points a day, or about 10 times more than an efficient lab technician. Markram refers to the robot as "science on an industrial scale," and is convinced that it's the future of lab work. "So much of what we do in science isn't actually science," he says. "I say let robots do the mindless work so that we can spend more time thinking about our questions."
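The throughput claim can be sanity-checked with simple arithmetic. In the sketch below, only the 50 percent success rate and the roughly tenfold advantage over a technician come from the article; the attempt cadence is an assumption chosen to land in the "hundreds of data points a day" range.

```python
# Back-of-envelope check on the robot's throughput. The attempt cadence and
# the technician baseline are assumptions; only the 50% success rate and the
# ~10x claim come from the article.
ATTEMPTS_PER_HOUR = 30          # assumed pace of automated patch attempts
SUCCESS_RATE = 0.5              # from the article
HOURS_PER_DAY = 24              # the robot never sleeps

robot_points_per_day = ATTEMPTS_PER_HOUR * SUCCESS_RATE * HOURS_PER_DAY
technician_points_per_day = robot_points_per_day / 10   # the article's ~10x claim

print(f"robot: ~{robot_points_per_day:.0f} recordings/day")
print(f"implied technician baseline: ~{technician_points_per_day:.0f}/day")
```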

According to Markram, the patch clamp robot helped the Blue Brain team redo 30 years of research in six months. By analyzing the genetic expression of real rat neurons, the scientists could then start to integrate these details into the model. They were able to construct a precise map of ion channels, figuring out which cell types had which kind of ion channel and in what density. This new knowledge was then plugged into Blue Brain, allowing the supercomputer to accurately simulate any neuron anywhere in the neocortical column. "The simulation is getting to the point," Schürmann says, "where it gives us better results than an actual experiment. We get the same data, but with less noise and human error." The model, in other words, has exceeded its own inputs. The virtual neurons are more real than reality.


A simulated neuron from a rat brain showing "spines"—tiny knobs protruding from the dendrites that will eventually form synapses with other neurons. Pyramidal cells such as these (so-called because of their triangular shape) comprise about 80 percent of cerebral cortex mass. Courtesy of BBP/EPFL

Every brain is made of the same basic parts. A sensory cell in a sea slug works just like a cortical neuron in a human brain. It relies on the same neurotransmitters and ion channels and enzymes. Evolution only innovates when it needs to, and the neuron is a perfect piece of design.

In theory, this meant that once the Blue Brain team created an accurate model of a single neuron, they could multiply it to get a three-dimensional slice of brain. But that was just theory. Nobody knew what would happen when the supercomputer began simulating thousands of brain cells at the same time. "We were all emotionally prepared for failure," Markram says. "But I wasn't so prepared for what actually happened."

After assembling a three-dimensional model of 10,000 virtual neurons, the scientists began feeding the simulation electrical impulses, which were designed to replicate the currents constantly rippling through a real rat brain. Because the model focused on one particular kind of neural circuit—a neocortical column in the somatosensory cortex of a two-week-old rat—the scientists could feed the supercomputer the same sort of electrical stimulation that a newborn rat would actually experience.

It didn't take long before the model reacted. After only a few electrical jolts, the artificial neural circuit began to act just like a real neural circuit. Clusters of connected neurons began to fire in close synchrony: the cells were wiring themselves together. Different cell types obeyed their genetic instructions. The scientists could see the cellular looms flash and then fade as the cells wove themselves into meaningful patterns. Dendrites reached out to each other, like branches looking for light. "This all happened on its own," Markram says. "It was entirely spontaneous." For the Blue Brain team, it was a thrilling breakthrough. After years of hard work, they were finally able to watch their make-believe brain develop, synapse by synapse. The microchips were turning themselves into a mind.

But then came the hard work. The model was just a first draft. And so the team began a painstaking editing process. By comparing the behavior of the virtual circuit with experimental studies of the rat brain, the scientists could test out the verisimilitude of their simulation. They constantly fact-checked the supercomputer, tweaking the software to make it more realistic. "People complain that Blue Brain must have so many free parameters," Schürmann says. "They assume that we can just input whatever we want until the output looks good. But what they don't understand is that we are very constrained by these experiments." This is what makes the model so impressive: It manages to simulate a real neocortical column—a functional slice of mind—by simulating the particular details of our ion channels. Like a real brain, the behavior of Blue Brain naturally emerges from its molecular parts.

In fact, the model is so successful that its biggest restrictions are now technological. "We have already shown that the model can scale up," Markram says. "What is holding us back now are the computers." The numbers speak for themselves. Markram estimates that in order to accurately simulate the trillion synapses in the human brain, you'd need to be able to process about 500 petabytes of data (peta being a million billion, or 10 to the fifteenth power). That's about 200 times more information than is stored on all of Google's servers. (Given current technology, a machine capable of such power would be the size of several football fields.) Energy consumption is another huge problem. The human brain requires about 25 watts of electricity to operate. Markram estimates that simulating the brain on a supercomputer with existing microchips would generate an annual electrical bill of about $3 billion. But if computing speeds continue to develop at their current exponential pace, and energy efficiency improves, Markram believes that he'll be able to model a complete human brain on a single machine in ten years or less.
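The figures in this paragraph invite some back-of-envelope arithmetic. In the sketch below, the 500-petabyte estimate, the 200-fold comparison with Google, the $3 billion annual bill, and the 25-watt brain come from the article; the electricity price is an assumed round number, so the implied power draw is only indicative.

```python
# Rough arithmetic behind the scaling claims. The electricity price is an
# assumption; the other inputs are quoted in the article.
PETA_BYTES = 1e15

brain_model_storage = 500 * PETA_BYTES          # Markram's estimate
implied_google_storage = brain_model_storage / 200
print(f"implied Google storage at the time: ~{implied_google_storage / PETA_BYTES:.1f} PB")

annual_bill_usd = 3e9                           # article's figure
price_per_kwh = 0.10                            # assumed electricity price, USD
kwh_per_year = annual_bill_usd / price_per_kwh
avg_draw_watts = kwh_per_year * 1000 / (365 * 24)
print(f"implied continuous draw: ~{avg_draw_watts / 1e9:.1f} GW, "
      f"versus ~25 W for a human brain")
```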

For now, however, the mind is still the ideal machine. Those intimidating black boxes from IBM in the basement are barely sufficient to model a thin slice of rat brain. The nervous system of an invertebrate exceeds the capabilities of the fastest supercomputer in the world. "If you're interested in computing," Schürmann says, "then I don't see how you can't be interested in the brain. We have so much to learn from natural selection. It's really the ultimate engineer."


An entire neocortical column lights up with electrical activity. Modeled on a two-week-old rodent brain, this 0.5 mm by 2 mm slice is the basic computational unit of the brain and contains about 10,000 neurons. This microcircuit is repeated millions of times across the rat cortex—and many times more in the brain of a human. Courtesy of Alain Herzog/EPFL

Neuroscience describes the brain from the outside. It sees us through the prism of the third person, so that we are nothing but three pounds of electrical flesh. The paradox, of course, is that we don't experience our matter. Self-consciousness, at least when felt from the inside, feels like more than the sum of its cells. "We've got all these tools for studying the cortex," Markram says. "But none of these methods allows us to see what makes the cortex so interesting, which is that it generates worlds. No matter how much I know about your brain, I still won't be able to see what you see."

Some philosophers, like Thomas Nagel, have argued that this divide between the physical facts of neuroscience and the reality of subjective experience represents an epistemological dead end. No matter how much we know about our neurons, we still won't be able to explain how a twitch of ions in the frontal cortex becomes the Technicolor cinema of consciousness.

Markram takes these criticisms seriously. Nevertheless, he believes that Blue Brain is uniquely capable of transcending the limits of "conventional neuroscience," breaking through the mind-body problem. According to Markram, the power of Blue Brain is that it can transform a metaphysical paradox into a technological problem. "There's no reason why you can't get inside Blue Brain," Markram says. "Once we can model a brain, we should be able to model what every brain makes. We should be able to experience the experiences of another mind."

When listening to Markram speculate, it's easy to forget that the Blue Brain simulation is still just a single circuit, confined within a silent supercomputer. The machine is not yet alive. And yet Markram can be persuasive when he talks about his future plans. His ambitions are grounded in concrete steps. Once the team is able to model a complete rat brain—that should happen in the next two years—Markram will download the simulation into a robotic rat, so that the brain has a body. He's already talking to a Japanese company about constructing the mechanical animal. "The only way to really know what the model is capable of is to give it legs," he says. "If the robotic rat just bumps into walls, then we've got a problem."

Installing Blue Brain in a robot will also allow it to develop like a real rat. The simulated cells will be shaped by their own sensations, constantly revising their connections based upon the rat's experiences. "What you ultimately want," Markram says, "is a robot that's a little bit unpredictable, that doesn't just do what we tell it to do." His goal is to build a virtual animal—a rodent robot—with a mind of its own.

But the question remains: How do you know what the rat knows? How do you get inside its simulated cortex? This is where visualization becomes key. Markram wants to simulate what that brain experiences. It's a typically audacious goal, a grand attempt to get around an ancient paradox. But if he can really find a way to see the brain from the inside, to traverse our inner space, then he will have given neuroscience an unprecedented window into the invisible. He will have taken the self and turned it into something we can see.


A close-up view of the rat neocortical column, rendered in three dimensions by a computer simulation. The large cell bodies (somas) can be seen branching into thick axons and forests of thinner dendrites. Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech

Schürmann leads me across the campus to a large room tucked away in the engineering school. The windows are hermetically sealed; the air is warm and heavy with dust. A lone Silicon Graphics supercomputer, about the size of a large armoire, hums loudly in the center of the room. Schürmann opens the back of the computer to reveal a tangle of wires and cables, the knotted guts of the machine. This computer doesn't simulate the brain; rather, it translates the simulation into visual form. The vast data sets generated by the IBM supercomputer are rendered as short films, hallucinatory voyages into the deep spaces of the mind. Schürmann hands me a pair of 3-D glasses, dims the lights, and starts the digital projector. The music starts first: "The Blue Danube" by Strauss. The classical waltz is soon accompanied by the vivid image of an interneuron, its spindly limbs reaching through the air. The imaginary camera pans around the brain cell, revealing the subtle complexities of its form. "This is a random neuron plucked from the model," Schürmann says. He then hits a few keys and the screen begins to fill with thousands of colorful cells. After a few seconds, the colors start to pulse across the network, as the virtual ions pass from neuron to neuron. I'm watching the supercomputer think.

Rendering cells is easy, at least for the supercomputer. It's the transformation of those cells into experience that's so hard. Still, Markram insists that it's not impossible. The first step, he says, will be to decipher the connection between the sensations entering the robotic rat and the flickering voltages of its brain cells. Once that problem is solved—and that's just a matter of massive correlation—the supercomputer should be able to reverse the process. It should be able to take its map of the cortex and generate a movie of experience, a first-person view of reality rooted in the details of the brain. As the philosopher David Chalmers likes to say, "Experience is information from the inside; physics is information from the outside." By shuttling between these poles of being, the Blue Brain scientists hope to show that these different perspectives aren't so different at all. With the right supercomputer, our lucid reality can be faked.
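Markram's "massive correlation" step can be caricatured as an encoding model fit in one direction and inverted in the other. The toy below fits a linear map from stimulus features to simulated voltages and then decodes a new stimulus back from activity; the real problem is nonlinear and vastly larger, and nothing here reflects the Blue Brain team's actual methods, only the encode-then-decode logic.

```python
# Encode-then-decode in miniature: learn a map from stimulus to "voltages",
# then invert it to recover the stimulus from activity. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_stimulus, n_neurons = 2000, 5, 50

stimuli = rng.normal(size=(n_samples, n_stimulus))          # sensory input features
true_encoding = rng.normal(size=(n_stimulus, n_neurons))    # the unknown "code"
voltages = stimuli @ true_encoding + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Encoding model: least-squares fit of stimulus -> voltages
W, *_ = np.linalg.lstsq(stimuli, voltages, rcond=None)

# Decoding: given new activity, recover the stimulus with the pseudoinverse
new_stimulus = rng.normal(size=(1, n_stimulus))
new_voltages = new_stimulus @ true_encoding
decoded = new_voltages @ np.linalg.pinv(W)

print("true stimulus:   ", np.round(new_stimulus, 2))
print("decoded stimulus:", np.round(decoded, 2))
```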

"There is nothing inherently mysterious about the mind or anything it makes," Markram says. "Consciousness is just a massive amount of information being exchanged by trillions of brain cells. If you can precisely model that information, then I don't know why you wouldn't be able to generate a conscious mind." At moments like this, Markram takes on the deflating air of a magician exposing his own magic tricks. He seems to relish the idea of "debunking consciousness," showing that it's no more metaphysical than any other property of the mind. Consciousness is a binary code; the self is a loop of electricity. A ghost will emerge from the machine once the machine is built right.

And yet, Markram is candid about the possibility of failure. He knows that he has no idea what will happen once the Blue Brain is scaled up. "I think it will be just as interesting, perhaps even more interesting, if we can't create a conscious computer," Markram says. "Then the question will be: 'What are we missing? Why is this not enough?'"

Niels Bohr once declared that the opposite of a profound truth is also a profound truth. This is the charmed predicament of the Blue Brain project. If the simulation is successful, if it can turn a stack of silicon microchips into a sentient being, then the epic problem of consciousness will have been solved. The soul will be stripped of its secrets; the mind will lose its mystery. However, if the project fails—if the software never generates a sense of self, or manages to solve the paradox of experience—then neuroscience may be forced to confront its stark limitations. Knowing everything about the brain will not be enough. The supercomputer will still be a mere machine. Nothing will have emerged from all of the information. We will remain what can't be known.

Original here

Startup Makes Cheap Solar Film Cells ... With an Inkjet Printer

Konarka has developed its affordable Power Plastic film with several manufacturing techniques, from an early proprietary printing process (pictured above in image rotated for space) to a new breakthrough with inkjet printers that should dramatically reduce costs, with applications including sensors and RFID. (Photograph by David A. White/Konarka)

This year could bring the Silicon Valley-funded renaissance in solar power we've all been waiting for. First, San Jose-based Nanosolar began delivering its affordable thin-film solar coating, followed by a construction boom in American solar thermal power plants, which use fields of mirrors to concentrate sunlight into heat-driven power—essentially a reflective counterpart to geothermal. Now, for the first time, the solar cell revolution is arriving by droplet.

Konarka Technologies, the Massachusetts-based company we first recognized with a 2005 Breakthrough Award for its affordable Power Plastic solar film, said this week that it has successfully manufactured those thin solar cells using an inkjet printer. In addition to decreasing production costs because it relies on existing inkjet technology, the printable Power Plastic cells can be applied to a range of small-scale, highly variable power opportunities, from indoor sensors to small RFID installations.

With printers now capable of producing solar cells, other companies might be able to use plastics and other colors in developing new kinds of power-packing film. But the inkjet process is just one of several different manufacturing techniques Konarka has been busy demonstrating for its solar collectors over the last three years. "Compared to current PV technologies, the Power Plastic has an advantage in flexibility, greater sensitivity to low light and versatility," Konarka president and CEO Rick Hess says of the film cells, which are fused from liquid containing semiconducting polymers.

By 2009 at the latest, Konarka plans to bring multiple forms of its product to market—everything from tiny cells for sensors to fabric-based and larger building panels. Hess says the company is currently working with U.S. Green Building Council LEED designers on custom installations.

Perhaps more promising are all the as-yet-unknown applications for the flexible, plastic solar panels. "We constantly receive calls from innovators who have read about the cells and propose unique—sometimes wild and crazy—concepts for the technology," Hess tells PM.

The burning question for DIYers and eco-conscious geeks alike remains whether we can expect to see rolls of Power Plastic on the shelves of home improvement stores anytime soon. Not exactly, Hess says. "Check back in two years and we'll have an update."

Original here