Sunday, February 8, 2009

New Potentially Hazardous Asteroid Discovered

Written by Nancy Atkinson

Image where PHA 2009 BD81 (left) was discovered. PHA 2008 EV5 is on the right. Image courtesy Robert Holmes.


While observing a known asteroid on January 31, 2009, astronomer Robert Holmes of the Astronomical Research Institute near Charleston, Illinois found another high-speed object moving through the same field of view. The object has now been confirmed to be a previously undiscovered Potentially Hazardous Asteroid (PHA), with several possible Earth impact risks after 2042. This relatively small near-Earth asteroid, designated 2009 BD81, will make its closest approach to Earth this year on February 27, passing a comfortable 7 million kilometers away. In 2042, current projections have it passing within 5.5 Earth radii (approximately 31,800 km, or 19,800 miles), with an even closer approach in 2044.

Data from the NASA/JPL risk page show 2009 BD81 to be fairly small, with a diameter of 0.314 km (about 1,000 ft). Holmes, one of the world's most prolific near-Earth object (NEO) observers, said the chance of this asteroid hitting Earth in 33 years or so is currently quite small, with odds of about 1 in 2 million, but follow-up observations are needed to provide precise calculations of the asteroid's potential future orbital path.
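As a rough sanity check, the sizes and distances quoted above reduce to simple unit arithmetic. The sketch below assumes a mean Earth radius of 6,371 km and the standard foot-to-kilometer factor; neither value comes from the article:

```python
# Quick unit check of the figures quoted for 2009 BD81.
EARTH_RADIUS_KM = 6371.0      # assumed mean Earth radius
KM_PER_FOOT = 0.0003048

diameter_km = 0.314
diameter_ft = diameter_km / KM_PER_FOOT          # ~1,030 ft, i.e. "about 1,000 ft"

approach_km = 31_800
approach_radii = approach_km / EARTH_RADIUS_KM   # roughly 5 Earth radii

print(f"{diameter_ft:.0f} ft, {approach_radii:.1f} Earth radii")
```

Run as-is, this confirms the 0.314 km diameter matches the "about 1,000 ft" figure, and shows the quoted 31,800 km works out to roughly five Earth radii.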

Holmes operates his one-man observatory at ARI as part of NASA's Near Earth Observation program and the Killer Asteroid Project. He also produces images for educational and public outreach programs like the International Astronomical Search Collaboration (IASC), run by Patrick Miller at Hardin-Simmons University in Texas, which gives students and teachers the opportunity to make observations and discoveries.

In just the past couple of years, Holmes has found 250 asteroids, 6 supernovae, and one comet, C/2008 N1 (Holmes). However, he said he would trade them all for this single important NEO discovery.

"I was doing a follow up observation of asteroid 2008 EV5," Holmes told Universe Today, "and there was another object moving right next to it, so it was a pretty easy observation, actually. But you just have to be in the right place at the right time. If I had looked a few hours later, it would have moved away and I wouldn't have seen it."

A map generated by Holmes showing the path of 2009 BD 81. Credit: Robert Holmes


A few hours later, teacher S. Kirby of Ranger High School in Texas, who was taking part in a training class on using the data Holmes collects, measured 2008 EV5 in Holmes' images and also found the new object. Shortly afterward, K. Dankov, a student from the Bulgarian Academy of Science in Bulgaria who is part of ARO's education and public outreach program, noticed the new asteroid as well. Holmes listed both observers as co-discoverers, along with another astronomer who made confirming follow-up observations of what is now 2009 BD81.

Holmes is a tireless observer. Last year alone he made 10,252 follow-up observations of previously discovered NEOs, more than 2,000 more than the second-ranked observatory, according to the NEO Dynamics website, based in Pisa, Italy.

Holmes has two telescopes, a 24-inch and 32-inch.
Holmes' 24 inch telescope.  Courtesy Robert Holmes
He works night after night to provide real-time images for the IASC program, uploading his images to an FTP site throughout the night so students and teachers can access the data and make their own measurements and analyses. IASC is a network of observatories in 13 countries around the world.

Holmes is proud of the work he does for education, and proud of the students and teachers who participate.

"They do a great job," he said. "A lot of the teachers are doing this entirely on their own, taking it upon themselves to create a hands-on research class in their schools." Holmes said that recently two students who had been involved with IASC in high school decided to enter astrophysics programs in college.

"I feel like we are making a difference in science and education," he said, "and it is exciting to feel like you're making a contribution, not just following up NEOs but in people's lives."

Holmes also holds some of the faintest observations made by anyone in the world.

"My telescopes won't go to 24th magnitude," Holmes said, "but I've got several 23rd magnitudes."

"Getting faint observations is one of the things NASA wants to achieve, so that's one of the things I worked diligently on," Holmes continued. The statistics on the NEO Dynamics site, which shows graphs and comparisons of various observatories, bear that out clearly.

To what does Holmes attribute his success? "It's obviously not the huge number of nights we have in Illinois to work," Holmes said. The east-central region of Illinois is known for its cloudy winter weather, when astronomical "seeing" is often at its poorest.

"However, I work every single night if it's clear, even if it's a full moon," he said. "Most observatories typically shut down three days on either side of a full moon. But I keep working right on through. I found that with the telescopes I work with, I've been able to get to the 22nd magnitude even on a full moon night. Last year, I got about 187 nights of observing, which is the same number as the big observatories in the Southwest once you take off their cloudy nights and the six nights a month they don't work around full moons. Sometimes you just have to work harder, and work when others aren't, to be able to catch up. That's how we are able to do it, by working every single chance we have."

He works alone at the observatory, running the pair of telescopes, and doing programming on the fly. "I refresh the confirmation page of new discoveries every hour so I can chase down any new discovery anyone has found," he said. "If I just pre-programmed everything I wouldn't have a fraction of the observations I have each year. I'd miss way too many because some of the objects are moving so fast."

Holmes said objects that are really close to Earth can move 5,000 arc-seconds an hour. "I've seen them go a full hour of right ascension per day, and that's pretty quick. They can go across the sky in four or five days," said Holmes. "And there have been some that have gone from virtually 50 degrees north to 50 degrees south in one night. That was a screaming-fast object, and you can't preprogram for something like that; you actually have to be running the telescope manually."
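Those rates are easy to put in context with plain unit arithmetic, no ephemeris required. A minimal sketch of the conversions behind Holmes' figures:

```python
# Convert the quoted sky-motion rates into more familiar units.
ARCSEC_PER_DEG = 3600.0

# "5,000 arc-seconds an hour" for a very close object:
rate_deg_per_day = 5000 * 24 / ARCSEC_PER_DEG   # ~33 degrees per day

# "A full hour of right ascension per day" is 1/24 of a full circle:
one_hour_ra_deg = 360 / 24                      # 15 degrees (at the celestial equator)

# At ~33 deg/day, crossing the ~180 degrees of visible sky takes:
days_to_cross = 180 / rate_deg_per_day          # ~5.4 days

print(f"{rate_deg_per_day:.1f} deg/day, {days_to_cross:.1f} days to cross the sky")
```

The result, about five and a half days, is consistent with Holmes' "four or five days" estimate.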

Overhead view of 2009 BD81's path. Courtesy Robert Holmes


2009 BD81 is listed as a “risk” object on the NASA/JPL website. This is the 1,015th PHA discovered to date.

"It ranks high as a NEO in general," said Holmes, "although not in a super-high category as far as the Torino scale," which categorizes the impact hazard of NEOs. "At this point it's considered a virtual impactor, and that is typically as high a rating as you get at this point."

"Because it is a virtual impactor, it will remain on that webpage and ask for observations every single night until it is removed as a virtual impactor or becomes too faint to see," said Holmes. "In the past year, we've removed 23 virtual hazardous objects, which means there have been enough observations that the orbit of that object is no longer considered a threat to our planet."

2009 BD81.  Courtesy Robert Holmes


Because of the small number of observations of 2009 BD81 so far, the calculated chance of it hitting Earth is small. "The odds are really small right now," said Holmes. "However, the shorter the orbital arc, the wider the path of potential impact at that point. The longer the arc gets, the narrower the cone of opportunity of impact becomes, and once that cone is no longer pointing at Earth in the future, it is removed as a possible impactor."

Holmes said the excitement of this discovery has been exhilarating. "It's been a lot of fun. The energy level gets pretty high when you have something like this show up," he said. "It's pretty rare, and this is the first time I've ever had a NEO discovery. I've had several hundred asteroids, and just since the beginning of the school year we have had about 40 asteroids that students and teachers have discovered in the program. So having this as a NEO is kind of a nice thing."

Holmes said he'll track 2009 BD81 as long as he possibly can.


Holmes previously was a commercial photographer who had over 4,500 photographs published in over 50 countries. "At first astronomy was just a hobby in the evening," said Holmes. "I worked with schools, which used the data and made some discoveries of supernovae and asteroids. It came to a point where it was really hard to work all day as a photographer and work all night in astronomy getting data for students." So, he chose astronomy over photography.

Holmes now works under a grant from NASA to use astrometry to follow-up new asteroid discoveries for the large sky surveys and help students look for new asteroid discoveries for educational outreach programs.

One would assume that as a former commercial photographer, Holmes would attempt to capture the beauty of the night sky in photographs, but that's not the case.

"The only thing I'm really interested in is the scientific and educational aspect of astronomy," said Holmes. "I've never taken a single color, pretty picture of the sky in the half a million images I've taken. It's always been for research or education."

Holmes is considered a professional astronomer by the Minor Planet Center and the International Astronomical Union because he is funded by NASA, which meant he wasn't eligible to receive the Edgar Wilson Award when he discovered a comet last year.

Because of Holmes' outstanding astronomical work, he is also an adjunct faculty member in the physics department at Eastern Illinois University in Charleston, Illinois.


Stars Form At Record Speeds In Infant Galaxy

The level of star-forming activity in the Orion-KL region (marked by the rectangle) in the Orion nebula is comparable to that of the central region of J1148+5251, but confined to a much smaller volume of space. (Credit: NASA, ESA, Robberto (STScI/ESA), Orion Treasury Project Team)

When galaxies are born, do their stars form everywhere at once, or only within a small core region? Recent measurements of an international team led by scientists from the Max Planck Institute for Astronomy provide the first concrete evidence that star-forming regions in infant galaxies are indeed small - but also hyperactive, producing stars at astonishingly high rates.

Galaxies, including our own Milky Way, consist of hundreds of billions of stars. How did such gigantic galactic systems come into being? Did a central region of stars form first and then grow over time? Or did stars form at the same time throughout the entire galaxy? An international team led by researchers from the Max Planck Institute for Astronomy is now much closer to being able to answer these questions.

The researchers studied one of the most distant known galaxies, a so-called quasar with the designation J1148+5251. Light from this galaxy takes 12.8 billion years to reach Earth; astronomical observations thus show the galaxy as it appeared 12.8 billion years ago, providing a glimpse of the very early stages of galactic evolution, less than a billion years after the Big Bang.

With the IRAM Interferometer, a German-French-Spanish radio telescope, the researchers were able to obtain images of a very special kind: they recorded the infrared radiation emitted by J1148+5251 at a specific frequency associated with ionized carbon atoms, which is a reliable indicator of ongoing star formation.

The resulting images show sufficient detail to allow, for the first time, the measurement of the size of a very early star-forming region. With this information, the researchers were able to conclude that, at that time, stars were forming in the core region of J1148+5251 at record rates - any faster and star formation would have been in conflict with the laws of physics.

"This galaxy's rate of star production is simply astonishing," says the article's lead author, Fabian Walter of the Max Planck Institute for Astronomy. "Every year, this galaxy's central region produces new stars with the combined mass of more than a thousand suns." By contrast, the rate of star formation within our own galaxy, the Milky Way, is roughly one solar mass per year.

Close to the physical limit

It has been known for some time that young galaxies can produce impressive amounts of new stars, but overall activity is only part of the picture. Without knowing the star-forming region's size, it is impossible to compare star formation in early galaxies with theoretical models, or with star-forming regions in our own galaxy.

With a diameter of a mere 4000 light-years (by comparison: the Milky Way galaxy's diameter amounts to 100,000 light-years), the star-forming core of J1148+5251 is extremely productive. In fact, it is close to the limits imposed by physical law. Stars are formed when cosmic clouds of gas and dust collapse under their own gravity. As the clouds collapse, temperatures rise, and internal pressure starts to build. Once that pressure has reached certain levels, all further collapse is brought to a halt, and no additional stars can form. The result is an upper limit on how many stars can form in a given volume of space in a given period of time.

Remarkably, the star-forming core of J1148+5251 reaches this absolute limit. This extreme level of activity can be found in parts of our own galaxy, but only on much smaller scales. For example, there is a region within the Orion nebula (Fig. 2) that is just as active as what we have observed. Fabian Walter: "But in J1148+5251, we are dealing with what amounts to a hundred million of these smaller regions combined!" Earlier observations of different galaxies had suggested an upper limit that amounts to a tenth of the value now observed in J1148+5251.

Growth from within

The compact star-forming region of J1148+5251 provides a highly interesting data point for researchers modelling the evolution of young galaxies. Going by this example, galaxies grow from within: in the early stages of star formation, there is a core region in which stars form very quickly. Presumably, such core regions grow over time, mainly as a result of collisions and mergers between galaxies, resulting in the significantly larger star-filled volume of mature galaxies.

The key to these results is one novel measurement: the first resolved image of an extremely distant quasar's star-forming central region, clearly showing the region's apparent diameter, and thus its size. This measurement is quite a challenge in itself. At a distance of almost 13 billion light-years (corresponding to a redshift of z = 6.42), the star-forming region, with its diameter of 4000 light-years, has an angular diameter of 0.27 seconds of arc - the size of a one euro coin, viewed at a distance of roughly 18 kilometres (or a pound coin, viewed at a distance of roughly 11 miles).
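The coin comparison is a direct small-angle calculation, and it can be checked with a few lines of arithmetic. In the sketch below, the coin diameters (about 23.25 mm for a one-euro coin, about 22.5 mm for a pound coin) are assumed values, not from the press release:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600   # ~206,265 arcseconds per radian

def angular_size_arcsec(diameter_m, distance_m):
    """Apparent angular size of a small object (small-angle approximation)."""
    return diameter_m / distance_m * ARCSEC_PER_RAD

euro = angular_size_arcsec(0.02325, 18_000)        # 1-euro coin at 18 km
pound = angular_size_arcsec(0.0225, 11 * 1609.34)  # pound coin at 11 miles

print(f'{euro:.2f}" and {pound:.2f}"')   # both land near 0.26 to 0.27 arcseconds
```

Both values reproduce the 0.27-arcsecond figure to within rounding, confirming the analogy.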

There is one further handicap: the observations rely on electromagnetic radiation with a characteristic wavelength, which is associated with ionized carbon atoms. At this wavelength, the star-forming regions of J1148+5251 outshine even the quasar's ultra-bright core. Due to the fact that the universe is expanding, the radiation is shifted towards longer wavelengths as it travels towards Earth ("cosmological redshift"), reaching our planet in the form of radio waves with a wavelength of about one millimetre. But, owing to the general nature of waves, it is more than a thousand times more difficult to resolve minute details at a wavelength of one millimetre, compared with visible light.

Observations at the required wavelength and level of detail became possible only as recently as 2006, thanks to an upgrade of the IRAM Interferometer, a compound radio telescope on the Plateau de Bure in the French Alps.

Future telescopes

Use of the characteristic radiation of ionized carbon to detect and create images of star-forming regions of extremely distant astronomical objects had been suggested some time ago. A significant portion of the observational program for ALMA, a compound radio telescope currently under construction in Northern Chile, relies on this observational approach. But up until the measurements of Fabian Walter and his colleagues, this technique had not been demonstrated in practice. Quoting Walter: "The early stages of galaxy evolution, roughly a billion years after the Big Bang, will be a major area of study for years to come. Our measurements open up a new window on star-forming regions in very young galaxies".


Astronomers spot cosmic dust fountain



A Hubble Space Telescope image of the Red Rectangle, approximately 2,300 light years from Earth in the constellation Monoceros. What appears to be the central star is actually a pair of closely orbiting stars. Particle outflow from the stars interacts with a surrounding disk of dust, possibly accounting for the X shape. This image spans approximately a third of a light year at the distance of the Red Rectangle. (Photo: Van Winckel, M. Cohen, H. Bond, T. Gull, ESA, NASA)

"We not only do not know what the stuff is, but we do not know where it is made or how it gets into space," said Donald York, the Horace B. Horton Professor in Astronomy and Astrophysics at the University of Chicago.

But now York, the University of Toledo's Adolf Witt and their collaborators have observed a double-star system that displays all the characteristics that astronomers suspect are associated with dust production. The Astrophysical Journal will publish a paper reporting their discovery in March.

The double star system, designated HD 44179, sits within what astronomers call the Red Rectangle, an interstellar cloud of gas and dust (nebula) located approximately 2,300 light years from Earth.


An artist’s rendition of the possible appearance of the double star system in the Red Rectangle nebula. The details of the image follow the observations that a team of astronomers at the University of Toledo and the University of Chicago has made using the 3.5-meter telescope at Apache Point Observatory in New Mexico. The image spans a distance of approximately 1 astronomical unit (93 million miles, the average distance between the Earth and sun). (Image by Steven Lane)

One of the double stars is of a type that astronomers regard as a likely source of dust. These stars, unlike the sun, have already burned all the hydrogen in their cores. Labeled post-AGB (post-asymptotic giant branch) stars, these objects collapsed after burning their initial hydrogen, until they could generate enough heat to burn a new fuel, helium.

Dust in the solar wind

During this transition, which takes place over tens of thousands of years, these stars lose an outer layer of their atmosphere. Dust may form in this cooling layer, and radiation pressure from the star's interior pushes the dust away from the star, along with a fair amount of gas.

In double-star systems, a disk of material from the post-AGB star may form around the second smaller, more slowly evolving star. "When disks form in astronomy, they often form jets that blow part of the material out of the original system, distributing the material in space," York explained.

This seems to be the phenomenon that Witt's team observed in the Red Rectangle, probably the best example so far discovered. The discovery has wide-ranging implications, because dust is critical to scientific theories about how stars form.

"If a cloud of gas and dust collapses under its own gravity, it immediately gets hotter and starts to evaporate," York said. Something, possibly dust, must immediately cool the cloud to prevent it from reheating.

The giant star sitting in the Red Rectangle is among those that are far too hot to allow dust condensation within their atmospheres. And yet a giant ring of dusty gas encircles it.

Witt's team made approximately 15 hours of observations on the double star over a seven-year period with the 3.5-meter telescope at Apache Point Observatory in New Mexico. "Our observations have shown that it is most likely the gravitational or tidal interaction between our Red Rectangle giant star and a close sun-like companion star that causes material to leave the envelope of the giant," said Witt, an emeritus distinguished university professor of astronomy.

Some of this material ends up in a disk of accumulating dust that surrounds that smaller companion star. Gradually, over a period of approximately 500 years, the material spirals into the smaller star.

Bipolar behavior

Just before this happens, the smaller star ejects a small fraction of the accumulated matter in opposite directions via two gaseous jets, called "bipolar jets."

Other quantities of the matter pulled from the envelope of the giant end up in a disk that skirts both stars, where it cools. "The heavy elements like iron, nickel, silicon, calcium and carbon condense out into solid grains, which we see as interstellar dust, once they leave the system," Witt explained.

Cosmic dust production has eluded telescopic detection because it only lasts for perhaps 10,000 years—a brief period in the lifetime of a star. Astronomers have observed other objects similar to the Red Rectangle in Earth's neighborhood of the Milky Way. This suggests that the process Witt's team has observed is quite common when viewed over the lifetime of the galaxy.

"Processes very similar to what we are observing in the Red Rectangle nebula have happened maybe hundreds of millions of times since the formation of the Milky Way," said Witt, who teamed up with longtime friends at Chicago for the study.

Witt (Ph.D.,'67) and York (Ph.D.,'71) first met in graduate school at Chicago's Yerkes Observatory, where Lew Hobbs, now Professor Emeritus in Astronomy & Astrophysics, had just joined the University faculty. Other co-authors include Julie Thorburn of Yerkes Observatory; Uma Vijh, University of Toledo; and Jason Aufdenberg, Embry-Riddle Aeronautical University in Florida.

The team had set out to achieve a relatively modest goal: find the Red Rectangle's source of far-ultraviolet radiation. The Red Rectangle displays several phenomena that require far-ultraviolet radiation as a power source. "The trouble is that the very luminous central star in the Red Rectangle is not hot enough to produce the required UV radiation," Witt said, so he and his colleagues set out to find it.

It turned out neither star in the binary system is the source of the UV radiation, but rather the hot, inner region of the disk swirling around the secondary, which reaches temperatures near 20,000 degrees. Their observations, Witt said, "have been greatly more productive than we could have imagined in our wildest dreams."

Provided by University of Chicago


Annoying Stickler Insists On Every Detail Of Space Mission Being Exactly Right

CAPE CANAVERAL, FL— Moments after having their shuttle launch delayed, Discovery astronauts complained once again Monday about John Wilkins—that annoying little program manager who insists on every detail of every space mission being exactly right.


Wilkins [inset] nags the flight crew about every single directional coordinate.

Wilkins, who is reportedly always double-checking launch parameters for no good reason, and sticking his nose into parts of the spacecraft that have always worked just fine, delayed the NASA flight for the third time this past month.

"It's always 'Are the solid rocket boosters functioning at full capacity?' and 'Do the liquid oxygen prevalves operate as required?' with John," Discovery commander James Reid said. "If it weren't for that guy, we'd already be in space by now."

In addition to his insistence on mission coordinates being 100 percent accurate, Wilkins reportedly spends all his time obsessing about Discovery's general purpose computers, which ignite the main engines and ensure that the craft can safely reach the speed of 18,000 mph.

"Is there anything John doesn't worry about?" said Michael Dennigan, the shuttle crew's second-in-command. "This isn't rocket science—you'd think he'd try to relax a bit."

Since he was assigned to it last year, Wilkins has aborted the NASA mission for a wide range of seemingly unimportant reasons, including a 4-inch crack in the exterior hull of the ship, the failure of several engine cut-off sensors, and what has been described as "the smallest of possible thunderstorms."


Discovery astronauts run through "about the 30,000th" emergency landing simulation.

Following their latest delay, Discovery crew members were seen throwing up their arms, shaking their heads in disgust, and letting out a unified, exasperated sigh.

"The goal of this mission is to launch into space both safely and successfully," announced Wilkins, who then spent an interminable number of hours tinkering with the retrieval system responsible for guiding the spacecraft back to Earth. "It is of chief importance that everything goes as planned."

The insufferable perfectionist's fixation goes beyond ship maintenance and safety, however. In the past six months, Wilkins has totally smothered Discovery astronauts, forcing them to complete endless navigation simulations, practice sea survival techniques despite the lack of water in space, and train for all kinds of hypothetical emergency evacuations, the vast majority of which will never even happen.

"I'm healthy, willing, and able," pilot Harold King said. "Come on. This is NASA we're talking about here. Everything is going to be fine."

In its 51-year history, NASA has launched more than 150 successful manned missions into space, and, with only three recorded disasters, many at the Kennedy Space Center have reportedly begun to lose their patience.

In the past week, Cape Canaveral sources have observed the astronauts pass the time by pacing back and forth, bouncing tennis balls against the wall, and pretending as if they were already in space. Throughout it all, Wilkins has continued his incessant fiddling, going so far as to fix the ship's onboard radio early Thursday morning.

"He's making the flight engineers review the ship for two days? I can see it from here. It's fine," James Reid said. "The equipment's good, the fuel's good, the gyroscopic compass that keeps us from floating aimlessly out into the vacuum of space is good. What more does he want?"

Added Reid, "I'm starting to hope our shuttle disintegrates just to spite that guy."


Giant star factory found in early galaxy

by Maggie McKee

The heart of the nearby galaxy Arp 220, shown in this Hubble image, is bursting with star birth. But a much larger starburst region has been found in a galaxy in the early universe (Image: NASA/ESA/C Wilson/McMaster University)


A stellar factory millions of times larger than anything comparable in the Milky Way has been identified in a galaxy in the very early universe. The work bolsters the case that massive galaxies formed very quickly - in spectacular bursts of star formation - soon after the big bang.

Regions of intense star formation, called starbursts, span a few light years at most in the Milky Way, and less than a few hundred light years in nearby, bright galaxies such as Arp 220 (pictured). But it has not been clear how large the stellar nurseries were in the early universe.

To find out, researchers led by Fabian Walter of the Max Planck Institute for Astronomy in Heidelberg, Germany, carefully scrutinised a distant galaxy whose light has taken so long to reach Earth that it appears as it was just 870 million years after the big bang.

Warm dust

It is visible at such distances because it hosts a beacon-like quasar, a bright region created by superheated gas falling towards a colossal black hole at the galaxy's core.

The quasar, called J114816.64+525150.3, is so bright that it overwhelms the surrounding galaxy's light at visible and near-infrared wavelengths. But the galaxy's gas and warm dust can be detected at radio and far-infrared wavelengths.

Using an array of telescopes in the French Alps, the team measured the galaxy's ionised carbon, which emits a strong signal at far-infrared wavelengths. Far-infrared radiation is thought to be a signature of dust that has been heated up by nearby star formation.

Maximum rate

The ionised carbon spanned a region at the heart of the galaxy about 5000 light years across. Based on the galaxy's brightness at far-infrared wavelengths, this starburst region is thought to produce an astounding 1000 Sun-like stars every year.

That is "about 1000 times higher than the star-formation rate of the Milky Way", says team member Chris Carilli, chief scientist at the National Radio Astronomy Observatory in Socorro, New Mexico.

"It's forming stars at the maximal rate allowed . . . on scales that are 10^6 or 10^8 times larger in volume" than similar regions in the Milky Way, he continues. "That's remarkable."

Merging galaxies?

The immense scale of the stellar factory is probably due to the fact that there was a lot more gas around in the early universe, Carilli says. Matter in the universe was indeed much denser soon after the big bang, since space itself has expanded over time.

But researchers don't know what ignited the star birth in the first place. Mergers between galaxies can trigger gas clouds to collapse into stars (and cause matter to fall into a galaxy's central black hole, turning on a quasar). However, it's not clear from the observations whether or not a merger was involved in this case.

An alternative theory, put forward recently by a team led by Avishai Dekel of Hebrew University, suggests that cold gas flowing into galaxies - in either smooth streams or clumps - may trigger starbursts. "This may be an example of this phenomenon," Dekel told New Scientist.

Sudden and dramatic

Carilli says gas may have fallen onto the galaxy over a long period of time, but some gravitational disturbance - perhaps a merger - must have suddenly kick-started its star birth. "Empirically, what we're seeing is a pretty dramatic event," he says.

The discovery suggests massive galaxies like this one, which is a blob-shaped 'elliptical' rather than a spiral, "form fairly quickly, relatively early in the universe", says Carilli.

Indeed, a similar process appears to have occurred in the Milky Way. The stars that form a bulge around its centre - essentially a miniature elliptical galaxy - all seem to have formed around the same time, in a starburst billions of years ago.

'Pathological objects'

But Carilli says it's too soon to say whether all galaxies formed their stars so quickly, or whether some eked out stars slowly over time.

That's because only the heaviest, brightest objects can be seen at such distances. "All we can study are 'pathological' objects - rare, extreme luminosity objects," he says. "They don't really tell us about normal galaxy formation; they tell us about massive galaxy formation."

A project called ALMA (Atacama Large Millimeter/submillimeter Array), which will be the world's largest array of millimetre-wave telescopes when it is completed in 2012, will shed light on more typical galaxies. It will be able to study objects 100 times less massive than can be detected with current telescopes.

Original here

Planet Harddrive

[Image: "Conceptual diagram of satellite triangulation," courtesy of the Office of NOAA Corps Operations (ONCO)].

For several years I've been fascinated by what might be called the geological nature of harddrives – how certain mineral arrangements of metal and ferromagnetism result in our technological ability to store memories, save information, and leave previous versions of the present behind.
A harddrive, though, would be a geological object as much as a technical one; it is a content-rich, heavily processed re-configuration of the earth's surface.

[Image: Geometry in the sky. "Diagram showing conceptual photographs of how satellite versus star background would appear from three different locations on the surface of the earth," courtesy of the Office of NOAA Corps Operations (ONCO)].

This reminds me of another ongoing fantasy of mine, which is that perhaps someday we won't actually need harddrives at all: we'll simply use geology itself.
In other words, what if we could manipulate the earth's own magnetic field and thus program data into the natural energy curtains of the planet?
The earth would become a kind of spherical harddrive, with information stored in those moving webs of magnetic energy that both surround and penetrate its surface.
This extends yet further into an idea that perhaps whole planets out there, turning in space, are actually the harddrives of an intelligent species we otherwise have yet to encounter – like mnemonic Death Stars, they are spherical data-storage facilities made of content-rich bedrock – or, perhaps more interestingly, we might even yet discover, in some weird version of the future directed by James Cameron from a screenplay by Jules Verne, that the earth itself is already encoded with someone else's data, and that, down there in crustal formations of rock, crystalline archives shimmer.
I'm reminded of a line from William S. Burroughs's novel The Ticket That Exploded, in which we read that beneath all of this, hidden in the surface of the earth, is "a vast mineral consciousness near absolute zero thinking in slow formations of crystal."

[Image: "An IBM HDD head resting on a disk platter," courtesy of Wikipedia].

In any case, this all came to mind again last night when I saw an article in New Scientist about how 3D holograms might revolutionize data storage. One hologram-encoded DVD, for instance, could hold an incredible 1000GB of information.
So how would these 3D holograms be formed?
"A pair of laser beams is used to write data into discs of light-sensitive plastic, with both aiming at the same spot," the article explains. "One beam shines continuously, while the other pulses on and off to encode patches that represent digital 0s and 1s."
The question, then, would be whether or not you could build a geotechnical version of this, some vast and slow-moving machine – manufactured by Komatsu – that moves over exposed faces of bedrock and "encodes" that geological formation with data. You would use it to inscribe information into the planet.
To use a cheap pun, you could store terrabytes of information.
But it'd be like some new form of plowing in which the furrows you produce are not for seeds but for data. An entirely new landscape design process results: a fragment of the earth formatted to store encrypted files.
Data gardens.
They can even be read by satellite.

[Image: The "worldwide satellite triangulation camera station network," courtesy of NOAA's Geodesy Collection].

Like something out of H.P. Lovecraft – or the most unhinged imaginations of early European explorers – future humans will look down uneasily at the earth they walk upon, knowing that vast holograms span that rocky darkness, spun like inexplicable cobwebs through the planet.
Beneath a massive stretch of rock in the remotest state-owned corner of Nevada, top secret government holograms await their future decryption.
The planet thus becomes an archive.

Original here

40,000 planets could be home to aliens

By Ben Leach

Solar system: 40,000 planets could be home to aliens. Photo: GETTY

Researchers have calculated that up to 37,964 worlds in our galaxy are hospitable enough to be home to creatures at least as intelligent as ourselves.

Astrophysicist Duncan Forgan created a computer programme that collated all the data on the 330 or so planets known to man and worked out what proportion would have conditions suitable for life.

The estimate, which took into account factors such as temperature and availability of water and minerals, was then extrapolated across the Milky Way.

Mr Forgan believes that the life forms would not be amoebas wriggling under a microscope but species at least as advanced as humans.

Mr Forgan, who believes it will take 300 to 400 years for us to make contact with our neighbours, said: "I believe the estimate of 361 intelligent civilisations to be the most accurate.

"These would certainly be the most Earth-like civilisations but the bigger figures are certainly possible. We can't rule them out.

"Most of the other planets we have looked at are older than our own – so I would expect to see more advanced civilisations than ours existing."

Original here

New Robot Could Explore Treacherous Terrain on Mars

Written by Nancy Atkinson

Axel concept as a tethered marsupial rover for steep terrain access. Credit: JPL


If you've looked at the high-resolution HiRISE images from the Mars Reconnaissance Orbiter, or had the chance to explore the new Google Mars, you know Mars is full of craters, mountains, gullies, and all sorts of interesting – and dangerous – terrain. Areas like these, with layered deposits, sediments, fracturing and faulting, are just the type of places to look for the sources of the methane being produced on Mars. But it's much too risky to send our current style of rovers, including the 2011 Mars Science Laboratory (MSL), into such treacherous terrain. So engineers from JPL, along with students at the California Institute of Technology, have designed and tested a versatile, low-mass robot that could be added to larger rovers like MSL – one that can rappel off cliffs, travel nimbly over steep and rocky terrain, and explore deep craters.

This prototype rover, called Axel, might help future robotic spacecraft better explore and investigate foreign worlds such as Mars. On Earth, Axel might assist in search-and-rescue operations.

Watch a video showing an Axel test-run at the JPL Mars yard.

"Axel extends our ability to explore terrains that we haven't been able to explore in the past, such as deep craters with vertically-sloped promontories," said Axel's principal investigator, Issa A.D. Nesnas, of JPL's robotics and mobility section. "Also, because Axel is relatively low-mass, a mission may carry a number of Axel rovers. That would give us the opportunity to be more aggressive with the terrain we would explore, while keeping the overall risk manageable."

Nesnas said Axel is like a yo-yo — it is on a tether attached to a larger rover and can go up and down the sides of craters, canyons and gullies, exploring regions not safe for other rovers.

Axel's tether system (and inside electronics) Credit: Axel website


The simple and elegant design of Axel, which can operate both upside down and right side up, uses only three motors: one to control each of its two wheels and a third to control a lever. The lever contains a scoop to gather lunar or planetary material for scientists to study, and it also adjusts the robot's two stereo cameras, which can tilt 360 degrees.
Axel's different possible configurations.  Credit: JPL


Axel's cylindrical body has computing and wireless communications capabilities and an inertial sensor to operate autonomously. It also sports a tether that Axel can unreel to descend from a larger lander, rover or anchor point. The rover can use different wheel types, from large foldable wheels to inflatable ones, which help the rover tolerate a hard landing and handle rocky terrain.

Axel has been in development since 1999, and students from Caltech, Purdue University, and Arkansas Tech University have collaborated with JPL over the years to develop this versatile rover.

Original here

Green Comet Approaches Earth

In 1996, a 7-year-old boy in China bent over the eyepiece of a small telescope and saw something that would change his life--a comet of flamboyant beauty, bright and puffy with an active tail. At first he thought he himself had discovered it, but no, he learned, two men named "Hale" and "Bopp" had beat him to it. Mastering his disappointment, young Quanzhi Ye resolved to find his own comet one day.

And one day, he did.

Fast forward to a summer afternoon in July 2007. Ye, now 19 years old and a student of meteorology at China's Sun Yat-sen University, bent over his desk to stare at a black-and-white star field. The photo had been taken a few nights earlier by Taiwanese astronomer Chi Sheng Lin on "sky patrol" at the Lulin Observatory. Ye's finger moved from point to point – and stopped. One of the stars was not a star: it was a comet, and this time Ye saw it first.

Comet Lulin, named after the observatory in Taiwan where the discovery-photo was taken, is now approaching Earth. "It is a green beauty that could become visible to the naked eye any day now," says Ye.

Amateur astronomer Jack Newton sends this photo from his backyard observatory in Arizona:

[Photo: Comet Lulin imaged by Jack Newton on Feb. 1st]

"My retired eyes still cannot see the brightening comet," says Newton, "but my 14-inch telescope picked it up quite nicely on Feb. 1st."

The comet makes its closest approach to Earth (0.41 AU) on Feb. 24, 2009. Current estimates peg the maximum brightness at 4th or 5th magnitude, which means dark country skies would be required to see it. No one can say for sure, however, because this appears to be Lulin's first visit to the inner solar system and its first exposure to intense sunlight. Surprises are possible.

Lulin's green color comes from the gases that make up its Jupiter-sized atmosphere. Jets spewing from the comet's nucleus contain cyanogen (CN: a poisonous gas found in many comets) and diatomic carbon (C2). Both substances glow green when illuminated by sunlight in the near-vacuum of space.

In 1910, many people panicked when astronomers revealed Earth would pass through the cyanogen-rich tail of Comet Halley. False alarm: the wispy tail of the comet couldn't penetrate Earth's dense atmosphere, and even if it had, there wasn't enough cyanogen to cause real trouble. Comet Lulin will cause even less trouble than Halley did. At closest approach in late February, Lulin will stop 38 million miles short of Earth, utterly harmless.
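
The two distance figures quoted for closest approach, 0.41 AU and 38 million miles, are the same number in different units; the conversion is simple arithmetic:

```python
# Convert the comet's closest-approach distance from astronomical
# units to kilometres and miles (1 AU = 149,597,871 km).
AU_KM = 149_597_871        # kilometres per astronomical unit
KM_PER_MILE = 1.609_344    # kilometres per statute mile

d_au = 0.41                  # Comet Lulin's closest approach to Earth
d_km = d_au * AU_KM          # roughly 61.3 million km
d_miles = d_km / KM_PER_MILE # roughly 38.1 million miles

print(f"{d_km/1e6:.1f} million km = {d_miles/1e6:.1f} million miles")
```

For comparison, that is about 160 times the Earth-Moon distance, which is why the comet poses no hazard whatsoever.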

To see Comet Lulin with your own eyes, set your alarm for 3 am. The comet rises a few hours before the sun and may be found about 1/3rd of the way up the southern sky before dawn. Here are some dates when it is especially easy to find:

Feb. 6th: Comet Lulin glides by Zubenelgenubi, a double star at the fulcrum of Libra's scales. Zubenelgenubi is not only fun to say (zuBEN-el-JA-newbee), but also a handy guide. You can see Zubenelgenubi with your unaided eye (it is about as bright as the stars of the Big Dipper); binoculars pointed at the binary star reveal Comet Lulin in beautiful proximity. [sky map]

Feb. 16th: Comet Lulin passes Spica in the constellation Virgo. Spica is a star of first magnitude and a guidepost even city astronomers cannot miss. A finderscope pointed at Spica will capture Comet Lulin in the field of view, placing the optics within a nudge of both objects. [sky map]

Feb. 24th: Closest approach! On this special morning, Lulin will lie just a few degrees from Saturn in the constellation Leo. Saturn is obvious to the unaided eye, and Lulin could be as well. If this doesn't draw you out of bed, nothing will. [sky map]

Ye notes that Comet Lulin is remarkable not only for its rare beauty, but also for its rare manner of discovery. "This is a 'comet of collaboration' between Taiwanese and Chinese astronomers," he says. "The discovery could not have been made without a contribution from both sides of the Strait that separates our countries. Chi Sheng Lin and other members of the Lulin Observatory staff enabled me to get the images I wanted, while I analyzed the data and found the comet."

Somewhere this month, Ye imagines, another youngster will bend over an eyepiece, see Comet Lulin, and feel the same thrill he did gazing at Comet Hale-Bopp in 1996. And who knows where that might lead...?

"I hope that my experience might inspire other young people to pursue the same starry dreams as myself," says Ye.

Original here

How will the solar system end?

by Stephen Battersby


MyCn18 is a young planetary nebula, located about 8,000 light-years away. Planetary nebulae are shells of gas and dust, which stars eject when they run out of fuel. This Hubble image reveals the true shape of MyCn18 to be an hourglass with an intricate pattern of "etchings" in its walls (Image: Raghvendra Sahai and John Trauger (JPL) / WFPC2 science team / NASA)

We live in uninteresting times. Since the ructions that created the planets in the solar system's first 100 million years (see "How was the solar system built?") - and apart from an early migration of the giant planets and the odd colliding comet not swept safely aside by Jupiter - nothing much has really been happening. The planets circle like clockwork, the sun burns steadily, and even delicate life has survived on at least one world.

It cannot last. Something unpleasant is bound to shatter this comfortable calm.

Our sun will die, of course, about six billion years from now. But things could get ugly long before that. The steady gyrations of the solar system today may conceal the seeds of chaos. Even the tiniest of irregularities can build up over time, gradually altering the paths of the planets. Between now and final sundown, it has been calculated, there is a roughly 2 per cent chance of catastrophe. Mars might drift too close to Jupiter and be thrown out of the solar system. If we're very unlucky, hot-headed Mercury could run wild and smash into Earth.

Meanwhile, the sun will slowly get brighter. Within 2 billion years, its heat will probably kill off life on Earth's surface. Mars, on the other hand - if it is still there - should gain a cosier climate. Even if it is dead today, it could one day come to life.

But again, not forever. When the sun's core burns up the last of its hydrogen fuel, the whole structure of the star will radically rearrange. It will slowly bloat to more than a million times its present volume, becoming a red giant. That giant will swallow Mercury and Venus and, according to the latest simulations, probably Earth too.

Baked by the sky-filling sun, and stained redder than ever, Mars will now definitively be dead. The icy moons of Saturn and Jupiter might in turn become hospitable. Saturn's giant moon Titan is particularly promising, as it already holds a rich soup of organic molecules. The red giant's heat could leave once-icy Titan with a global bath of water and ammonia where those organic molecules could form life.

Any creatures that bob to the surface of these outer moons would look up at a rather different sky. By that time, the Milky Way will probably have collided with our neighbouring galaxy Andromeda to form a unified "Milkomeda", where violent bursts of star formation - the nurseries of a new generation of solar systems - will light up the heavens for a time.

Any late flowering of life in our solar system, if it happens at all, will not last long. After its brief escapade as a red giant, the sun's inner furnace will finally fail, and it will cast off its outer layers and shrink into a tiny white dwarf. The briefly balmy Titan will freeze over once more. Its host planet Saturn, together with the other denizens of the outer solar system, will orbit on for tens of billions of years more, until treachery from within or marauders from without do for them, too. Jupiter or Saturn could eject their lighter comrades, Uranus and Neptune, or passing stars could strip away any planet, even massive Jupiter.

The future is never certain, though, and alternative endings can be written. There is a slim chance that the whole solar system, sun and all, might be thrown out of Milkomeda intact. Out in the emptiness of intergalactic space, the planets would be safe from marauders. There they could continue to circle our darkening star until their energy is eventually sapped and they spiral inwards. One by one as they hit the black-dwarf sun, a few final flares will rage against the dying of the light.

Original here

Spirit Resumes Driving

Spirit stopped short on its drive, apparently from the right front wheel encountering the partially buried rock visible next to that wheel. Image Credit: NASA/JPL-Caltech

Mars Exploration Rover Mission Status Report

PASADENA, Calif. -- NASA's Mars Exploration Rover Spirit resumed driving Saturday after engineers gained confidence from diagnostic activities earlier in the week evaluating how well the rover senses its orientation.

Spirit drove about 30 centimeters (1 foot) Saturday, during the 1,806th Martian day, or sol, of what was originally planned as a 90-day mission. The rover team had commanded a longer drive, but Spirit stopped short after its right-front wheel, which no longer turns, struck a partially buried rock. The rover drivers prepared commands Monday for the next drive in a slightly different direction to get around that rock.

A diagnostic test on Sol 1805 provided an evaluation of how accurately Spirit's accelerometers sense the rover's orientation, or attitude. The testing was a follow-up to Spirit's mistaken calculation of where to expect to see the sun on Sol 1802. The Sol 1805 results indicate the accelerometers may have a bias of about three degrees, which would explain why Spirit pointed a camera about three degrees away from the sun's actual position on Sol 1802. However, the Sol 1805 test also showed that Spirit's gyroscopes are operating properly, which convinced engineers that the rover could safely resume driving. Only the gyroscopes are used for orientation information during driving.

Diagnostic tests last week also checked possible explanations for the behavior during one period of activity on Spirit's Sol 1800, when the rover did not save information into its non-volatile flash memory; that information was lost when the rover next powered down.

"We may not find any data that will explain what happened on Sol 1800, but there's no evidence that whatever happened then has recurred on subsequent sols," said Jacob Matijevic of the rover engineering team at NASA's Jet Propulsion Laboratory, Pasadena. One possibility is that a cosmic-ray hit could have temporarily put Spirit temporarily into a mode that disables use of the flash memory. The team intentionally used that mode -- relying only on volatile random-access memory -- during recovery from a memory problem five years ago on Spirit.

Spirit is just north of a low plateau called "Home Plate." It spent 2008 on a north-facing slope on the edge of Home Plate so that its solar panels stayed tilted toward the winter sun for maximum electrical output.

Spirit drove down off Home Plate on Jan. 6, 2009. It subsequently checked whether a patch of nearby soil, called "Stapledon," had a high concentration of silica, like a silica-rich patch of soil Spirit discovered east of Home Plate in 2007. The earlier discovery was interpreted as evidence left by a hot-spring or steam-vent environment. Examination with Spirit's alpha particle X-ray spectrometer confirmed silica at Stapledon. This indicates that the environment that deposited the silica was not limited to the location found earlier.

JPL, a division of the California Institute of Technology, Pasadena, manages the Mars Exploration Rover project for the NASA Science Mission Directorate, Washington. Spirit and its twin, Opportunity, landed on Mars in January 2004 and have operated 20 times longer than their original prime missions.

Guy Webster (818) 354-6278
Jet Propulsion Laboratory, Pasadena, Calif.

Original here

A talk with Mario Livio

Is mathematics the language of the universe?

By Carolyn Y. Johnson

"Mathematics is somewhat special in that it has an incredible longevity...what was true once remains true forever."

MARIO LIVIO IS an astrophysicist, a man whose work and worldview are inextricably intertwined with mathematics. Like most scientists, he depends on math and an underlying faith in its incredible power to explain the universe. But over the years, he has been nagged by a bewildering thought. Scientific progress, in everything from economics to neurobiology to physics, depends on math's power to describe the world. But what is math? Why should its abstract concepts be so uncannily good at explaining reality?

The question may seem irrelevant. As long as math works, why not just go with it? But Livio felt himself pulled into a deep question that reaches to the very foundation of science - and of reality itself. The language of the universe appears to be mathematics: Formulas describe how our planet revolves around the sun, how a boat floats, how light glints off the water. But is mathematics a human tool, or is reality, in some fundamental way, mathematics?

Or, put another way: "Is God a Mathematician?" This is the title of Livio's new book, in which he joins a long line of modern thinkers who have questioned "the unreasonable effectiveness of mathematics," in the words of Nobel Laureate Eugene Wigner.

Livio, an astrophysicist at the Space Telescope Science Institute in Baltimore, concludes that math has to be thought of, at least in part, as a human invention. That's a profoundly weird notion in a world where math has always had special status, untainted by people's opinions and biases. Religion, politics, and picking a great work of art can all incite vigorous debate, while 2 + 2 = 4 has always seemed like a cold, hard fact. Math isn't a figment of our imagination, but perhaps it isn't quite as far from great art as we thought.

IDEAS: Can you imagine an alternate universe in which we invent a different type of math? What might that look like?

LIVIO: Let me start with this silly idea - the isolated jellyfish. Imagine that all the intelligence resided not in humans, but in some isolated jellyfish at the bottom of the Pacific Ocean. This jellyfish - all it would feel would be the pressure of the water, the temperature of the water, the motion of the water. Would this jellyfish have invented the natural numbers - 1, 2, 3, 4, 5, and so on? I think probably not, because there's nothing to count there - everything this creature would have felt would have been continuous rather than discrete, so this creature might have invented a completely different type of mathematics.

IDEAS: So, instead of jellyfish math we have math that reflects our abilities?

LIVIO: Why did the ancient Babylonians and Greeks and so on start with arithmetic and geometry? I think that largely this is because of our particular perception system. We are very, very good at seeing edges of things; we can very well tell what is an object, what is the background of the object; we can tell individual objects very well, we can also tell very well whether a line is straight or not, just with our eye.

LIVIO: The mathematics we use to explain the universe was not chosen arbitrarily. Imagine that you do this very simple experiment where you put pebbles in a jar. You put in three pebbles, then five more, and you want the mathematical tool that will predict how many pebbles are in the jar. You think this is idiotic - it's addition - three plus five. But if you do the same experiment with drops of oil - three drops and five drops - you just have a pool of oil; the same tool does not apply.

Humans at some level chose the mathematical tools based on them being suitable for the particular problem.

IDEAS: But even if mathematics is at least partly an invention, you also talk about the "unreasonable effectiveness of math." What do you mean by that?

LIVIO: Mathematicians often formulate new branches of mathematics with absolutely no application in mind. In fact, they are often proud that these branches have no applications at all. Then, sometimes decades, sometimes centuries later, that mathematics is discovered to provide precisely the type of model that physicists need to describe the universe.

IDEAS: So people are out there devising alternate forms of math in their heads, but then finding them reflected in the world?

LIVIO: Every now and then, people invent new concepts, and that starts a new branch of mathematics. At the beginning, humans only had positive numbers, then they had negative numbers, and then they added imaginary numbers, which is a big branch of mathematics. These are the numbers you don't really see in your everyday life in a way, but entire branches of physics and engineering use imaginary numbers.

An example I talk about in the book is knot theory. It's an incredible thing, because for years people did not deal with knots in a mathematical way; then they developed knot theory, and found that things like string theory, our best candidate for a theory of everything, and [also] the treatment of the way DNA replicates, require concepts from knot theory.

IDEAS: How is math different from other human inventions, like art?

LIVIO: Mathematics is somewhat special in that it has an incredible longevity . . . what was true once remains true forever. The formula for the area of a circle that Archimedes discovered more than 2,000 years ago is still true.

At the same time, the physics of Aristotle no longer holds. We no longer have the same theory of the universe that Aristotle had or Pythagoras had; we don't sing the same music as the ancient Greeks.

So when things in mathematics are found to not be true, we have a name for that: we call it a mistake.

Original here

New World Wolves and Coyotes Owe Debt to Dogs

Daniel Stahler/Associated Press

Researchers have determined that black-coated wolves, like these in Yellowstone National Park, got their distinctive color from dogs.


In a bit of genetic sleuthing, a team of researchers has determined that black wolves and coyotes in North America got their distinctive color from dogs that carried a gene mutation to the New World.

The finding presents a rare instance in which a genetic mutation from a domesticated animal has benefited wild animals by enriching their “genetic legacy,” the scientists write in Thursday’s Science Express, the online edition of the journal Science. Because black wolves are more common in forested areas than on the tundra, the researchers concluded that melanism — the pigmentation that resulted from the mutation — must give those animals an adaptive advantage.

Although common in many species, melanism in dogs follows a unique genetic pathway, said Dr. Gregory S. Barsh, a professor of genetics and pediatrics at the Stanford University School of Medicine and the senior author of the paper.

Last year, Dr. Barsh and his laboratory identified a gene mutation responsible for the protein beta-defensin 3, which regulates melanism in dogs. After finding that the same mutation was responsible for black wolves and black coyotes in North America, and for black wolves from the Italian Apennines where wolves have recently hybridized with free-ranging dogs, the researchers set out to discover where and when the mutation evolved.

Comparing large sections of wolf, dog and coyote genomes, Dr. Barsh and his colleagues concluded that the mutation arose in dogs 12,779 to 121,182 years ago, with a preferred date of 46,886 years ago. Because the first domesticated dogs are estimated to date back just 15,000 to 40,000 years ago in East Asia, the researchers said that they could not determine with certainty whether the mutation arose first in wolves that predate that time, or in dogs at an early date in their domestication.

Robert K. Wayne, an evolutionary biologist at the University of California, Los Angeles, who studies canine evolution and is a co-author on the Science paper, said in an interview that he believed the mutation occurred first in dogs. But even if it arose first in wolves, he said, it was passed on to dogs who brought it to the New World and then passed it to wolves and coyotes soon after their arrival.

Dr. Wayne and his colleagues have dated the presence of dogs in Alaska to about 14,000 years ago and are now checking ancient dog remains from across the Americas for the mutation.

The researchers concluded that the mutation is subject to positive selection, meaning that it serves some adaptive purpose. Cross-breeding produces offspring with one set of genes from each parent, in this case a dog and a wolf. If all subsequent breeding takes place among wolves, the dog genes eventually vanish, unless one or more of them helps the organism survive.
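
The logic of that argument can be made concrete with a toy Wright-Fisher simulation (purely illustrative, not the researchers' model): in a finite population, a rare allele usually drifts to extinction unless selection favours it.

```python
import random

def simulate_allele(pop_size, p0, s, generations, rng):
    """Track a mutant allele's frequency under Wright-Fisher
    reproduction with selection coefficient s; return final frequency."""
    p = p0
    for _ in range(generations):
        # Selection biases each offspring draw toward the fitter allele.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Genetic drift: the next generation is a finite random sample.
        count = sum(rng.random() < p_sel for _ in range(pop_size))
        p = count / pop_size
        if p in (0.0, 1.0):  # allele lost or fixed
            break
    return p

rng = random.Random(42)
reps = 200
neutral_lost = sum(
    simulate_allele(100, 0.05, 0.0, 500, rng) == 0.0 for _ in range(reps))
selected_lost = sum(
    simulate_allele(100, 0.05, 0.1, 500, rng) == 0.0 for _ in range(reps))
print(f"neutral allele lost in {neutral_lost}/{reps} runs, "
      f"selected allele lost in {selected_lost}/{reps} runs")
```

Runs with s = 0 lose the rare allele far more often than runs with a 10 percent fitness advantage, which is the signature of positive selection the researchers inferred for the black-coat mutation.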

Scientists have not yet identified the mutation’s purpose, but they suggested that its association with forested habitats meant the prevalence of melanism should increase as forests expand northward.

In an interview, Dr. Barsh observed that beta-defensin is involved in providing immunity to viral and bacterial skin infections, which might be more common in forested, warmer environments.

Marc Bekoff, a behavioral ethologist from the University of Colorado, who was not involved in the project, said more work was needed to show what adaptive advantage black coats might provide. But, Dr. Bekoff added, “This is an important paper that among other things should make us revisit and likely revise what we mean by a ‘pure’ species.”

Original here

New Open-source Software Permits Faster Desktop Computer Simulations Of Molecular Motion

A snapshot from a molecular dynamic simulation of the folding of a mutant protein found in chicken intestines. New open-source software permits faster simulations of molecular motion on desktop computers. (Credit: Courtesy of Daniel Ensign)

Whether vibrating in place or taking part in protein folding to ensure cells function properly, molecules are never still. Simulating molecular motions provides researchers with information critical to designing vaccines and helps them decipher the bases of certain diseases, such as Alzheimer's and Parkinson's, that result from molecular motion gone awry.

In the past, researchers needed either supercomputers or large computer clusters to run simulations. Or they had to be content to run only a tiny fraction of the process on their desktop computers. But a new open-source software package developed at Stanford University is making it possible to run complex simulations of molecular motion on desktop computers at much faster speeds than was previously possible.

"Simulations that used to take three years can now be completed in a few days," said Vijay Pande, an associate professor of chemistry at Stanford University and principal investigator of the Open Molecular Mechanics (OpenMM) project. "With this first release of OpenMM, we focused on small molecular systems simulated and saw speedups of 100 times faster than before."

OpenMM is a collaborative project between Pande's lab and Simbios, the National Center for Physics-based Simulation of Biological Structures at Stanford, which is supported by the National Institutes of Health. The project is described in a paper that was scheduled to be posted online Feb. 3 in the "Early View" section of the Journal of Computational Chemistry.

The key to the accelerated simulations OpenMM makes possible is its use of modern graphics processing units (GPUs), which cost just a few hundred dollars. At its core, OpenMM relies on GPU acceleration, a set of advanced hardware and software technologies that enable GPUs, working in concert with the system's central processing unit (CPU), to accelerate applications beyond just creating or manipulating graphics.

The icing on the molecular-simulation cake is that the software has no allegiance to any particular brand of GPU, meaning it is, as computer geeks like to say, "brand agnostic." OpenMM will enable molecular dynamics (MD) simulations to work on most of the high-end GPUs used today in laptop and desktop computers.

This is a boon to MD developers. Converting their code to run on just one GPU product is a challenging project by itself. And until now, if developers wanted to accelerate their MD software on different brands of GPUs, they would have to write multiple versions of their code. OpenMM provides a common interface.
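The common-interface idea can be sketched abstractly. All class and method names below are hypothetical illustrations of the pattern, not OpenMM's actual API:

```python
class Backend:
    """One implementation per hardware target (CPU, NVIDIA GPU, ATI GPU...)."""
    def compute_forces(self, positions):
        raise NotImplementedError

class ReferenceCPUBackend(Backend):
    """Plain-Python fallback; a GPU backend would override the same method."""
    def compute_forces(self, positions):
        return [-x for x in positions]   # toy harmonic pull toward the origin

class Simulator:
    """MD code written once against the interface runs on any backend."""
    def __init__(self, backend):
        self.backend = backend

    def step(self, positions, velocities, dt=0.01):
        forces = self.backend.compute_forces(positions)
        velocities = [v + f * dt for v, f in zip(velocities, forces)]
        positions = [x + v * dt for x, v in zip(positions, velocities)]
        return positions, velocities

sim = Simulator(ReferenceCPUBackend())
pos, vel = sim.step([1.0, -2.0], [0.0, 0.0])
```

The simulation logic never mentions a vendor; swapping hardware means swapping one backend object, which is the productivity win the article describes.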

"OpenMM will allow researchers to focus on the science at hand instead of the hardware," Pande said. "Researchers will see a jump in productivity and resourcefulness from computers they already own." With OpenMM, researchers can use GPUs to perform massively parallel calculations.

OpenMM fits squarely with Simbios' mission of providing computational tools to stimulate research in biology and medicine, according to Russ Altman, principal investigator of Simbios and chair of the Department of Bioengineering at Stanford. "OpenMM will be a tool that unifies the MD community," he said. "Instead of difficult, disparate efforts to recode existing MD packages to enjoy the speedups provided by GPUs, OpenMM will bring GPUs to existing packages and allow researchers to focus on discovery."

The new release of OpenMM includes a version of the widely used MD package GROMACS that integrates the OpenMM library, enabling it to be sped up on high-end NVIDIA and AMD/ATI graphics cards. Close collaborations with AMD (which owns the ATI brand) and NVIDIA were critical for getting OpenMM to run on their GPUs.

"Cross-platform solutions like OpenMM enable a much broader community of researchers to leverage GPU acceleration capabilities like ATI Stream technology," said Patricia Harrell, director of Stream Computing at AMD. "AMD is committed to supporting open, cross-platform tools that allow researchers to focus on solving problems with their GPU of choice."

NVIDIA is similarly committed to OpenMM. "OpenMM promises to further increase the adoption of GPU technology among the molecular dynamics community," said Andy Keane, general manager, GPU Computing at NVIDIA. "We'll continue our close collaboration with Stanford on OpenMM so that current and future libraries can maximally leverage the power of the GPU."

OpenMM incorporates specially developed algorithms that allow MD software to take full advantage of the GPU architecture. In fact, the OpenMM code is at the heart of the GPU implementations of the Folding@home project, which uses the horsepower of GPUs and CPUs in computers around the world to simulate protein folding. The current release uses an implicit solvent model, in which all the surrounding fluid, such as water, is represented as one continuous medium, rather than having each water molecule represented individually (an explicit solvent model). Future releases will allow the modeling of explicit solvent.
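One standard way a continuum solvent enters the equations of motion -- a generic textbook illustration, not necessarily the specific model in this release -- is Langevin dynamics, where the water's drag and thermal jostling are folded into a friction term and a random force instead of explicit water molecules:

```python
import math
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def langevin_step(x, v, dt=0.001, gamma=1.0, kT=1.0, m=1.0, k=1.0):
    # Force from the solute's own potential (a toy harmonic bond)
    f = -k * x
    # The implicit solvent: no water molecules are simulated; their drag
    # appears as -gamma*v and their thermal kicks as Gaussian noise
    noise = math.sqrt(2.0 * gamma * kT / dt) * rng.gauss(0.0, 1.0)
    v += dt * (f - gamma * v + noise) / m
    x += dt * v
    return x, v

x, v = 1.0, 0.0
for _ in range(50_000):
    x, v = langevin_step(x, v)
```

An explicit-solvent run would instead add thousands of water molecules to the force loop at every step -- exactly the cost the implicit model avoids.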

A free workshop on OpenMM and OpenMM Zephyr, an easy-to-use application for running and visualizing accelerated MD simulations, will be offered sometime in the next three months. Anyone interested in learning about using OpenMM and OpenMM Zephyr is welcome. A workshop on Feb. 12 is already filled.

Original here

Cognitive Computing Project Aims to Reverse-Engineer the Mind

By Priya Ganapati

Imagine a computer that can process text, video and audio in an instant, solve problems on the fly, and do it all while consuming just 10 watts of power.

It would be the ultimate computing machine if it were built with silicon instead of human nerve cells.

Compare that to current computers, which require extensive, custom programming for each application, consume hundreds of watts of power, and are still not fast enough. So it's no surprise that some computer scientists want to go back to the drawing board and try building computers that more closely emulate nature.

"The plan is to engineer the mind by reverse-engineering the brain," says Dharmendra Modha, manager of the cognitive computing project at IBM Almaden Research Center.

In what could be one of the most ambitious computing projects ever, neuroscientists, computer engineers and psychologists are coming together in a bid to create an entirely new computing architecture that can simulate the brain's abilities for perception, interaction and cognition. All that, while being small enough to fit into a lunch box and consuming extremely small amounts of power.

The 39-year-old Modha, a computer science engineer born in Mumbai, India, has helped assemble a coalition of the country's best researchers in a collaborative project that spans five universities, among them Stanford, Cornell and Columbia, in addition to IBM.

The researchers' goal is first to simulate a human brain on a supercomputer. Then they plan to use new nano-materials to create logic gates and transistor-based equivalents of neurons and synapses, in order to build a hardware-based, brain-like system. It's the first attempt of its kind.

In October, the group bagged a $5 million grant from Darpa -- just enough to get the first phase of the project going. If successful, they say, we could have the basics of a new computing system within the next decade.

"The idea is to do software simulations and build hardware chips that would be based on what we know about how the brain and neural circuits work," says Christopher Kello, an associate professor at the University of California-Merced who's involved in the project.

Computing today is based on the von Neumann architecture, a design whose building blocks -- the control unit, the arithmetic logic unit and the memory -- are the stuff of Computing 101. But that architecture presents two fundamental problems: The connection between the memory and the processor can get overloaded, limiting the speed of the computer to the pace at which it can transfer data between the two. And it requires specific programs written to perform specific tasks.

In contrast, the brain distributes memory and processing functions throughout the system, learning through situations and solving problems it has never encountered before, using a complex combination of reasoning, synthesis and creativity.

"The brain works in a massively multi-threaded way," says Charles King, an analyst with Pund-IT, a research and consulting firm. "Information is coming in through all five senses in a very nonlinear fashion, and [the brain] creates logical sense out of it."

The brain is composed of billions of interlinked neurons, or nerve cells that transmit signals. Each neuron receives input from 8,000 other neurons and sends an output to another 8,000. If the input is enough to agitate the neuron, it fires, transmitting a signal through its axon in the direction of another neuron. The junction between two neurons is called a synapse, and that's where signals move from one neuron to another.
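The fire-above-a-threshold behavior described here is often abstracted with a leaky integrate-and-fire model -- a standard textbook simplification, not necessarily the neuron model IBM's simulation uses:

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: a common abstraction of a spiking neuron."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current   # integrate incoming signal, with decay
        if v >= threshold:       # enough agitation: the neuron fires
            spikes.append(1)
            v = 0.0              # reset after the spike leaves via the axon
        else:
            spikes.append(0)
    return spikes

weak = lif_neuron([0.05] * 20)   # sub-threshold input: never fires
burst = lif_neuron([0.6] * 4)    # strong input: fires every other step
```

In hardware terms, each call to this update rule is what a cluster of transistors -- or, as discussed later in the article, a nanoscale device -- would have to implement billions of times over.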

"The brain is the hardware," says Modha, "and from it arises processes such as sensation, perception, action, cognition, emotion and interaction." Of this, the most important is cognition, the seat of which is believed to reside in the cerebral cortex.

The structure of the cerebral cortex is the same in all mammals. So researchers started with a real-time simulation of a small brain, about the size of a rat's, in which they put together simulated neurons connected through a digital network. It took 8 terabytes of memory on a 32,768-processor BlueGene/L supercomputer to make it happen.

The simulation doesn't replicate the rat brain itself, but rather imitates just the cortex. Despite being incomplete, the simulation is enough to offer insights into the brain's high-level computational principles, says Modha.

The human cortex has about 22 billion neurons and 220 trillion synapses, making it roughly 400 times larger than the rat-scale model. A supercomputer capable of running a software simulation of the human brain doesn't exist yet. Researchers would require a machine with a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes -- a scale that supercomputer technology isn't expected to hit for at least three years.
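A bit of back-of-envelope arithmetic, using only the figures quoted in this article, shows what those numbers imply:

```python
human_neurons = 22e9       # cortical neurons, as quoted above
human_synapses = 220e12    # cortical synapses, as quoted above
scale_factor = 400         # human cortex vs. the rat-scale model
memory_bytes = 3.2e15      # the 3.2-petabyte requirement, in bytes

# Implied size of the rat-scale simulation already run on BlueGene/L
rat_neurons = human_neurons / scale_factor          # 55 million neurons

# What the 3.2-petabyte budget works out to per synapse
bytes_per_synapse = memory_bytes / human_synapses   # ~14.5 bytes

print(f"rat-scale model: ~{rat_neurons / 1e6:.0f} million neurons")
print(f"memory budget: ~{bytes_per_synapse:.1f} bytes per synapse")
```

Roughly 14 bytes per synapse leaves room for little more than a connection index and a weight, which gives a sense of how lean even a 3.2-petabyte simulation would have to be.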

While waiting for the hardware to catch up, Modha is hoping the coalition's partners can inch toward their own targets.

Software simulation of the human brain is just one half of the solution. The other is to create a new chip design that will mimic the neuron and synaptic structure of the brain.

That's where Kwabena Boahen, associate professor of bioengineering at Stanford University, hopes to help. Boahen, along with other Stanford professors, has been working on implementing neural architectures in silicon.

One of the main challenges to building this system in hardware, explains Boahen, is that each neuron connects to others through 8,000 synapses. It takes about 20 transistors to implement a synapse, so building the silicon equivalent of 220 trillion synapses is a tall order, indeed.
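The arithmetic behind Boahen's point follows directly from the figures quoted above:

```python
synapses = 220e12             # human-cortex synapse count quoted earlier
transistors_per_synapse = 20  # Boahen's estimate for one silicon synapse

total_transistors = synapses * transistors_per_synapse
# 4.4 quadrillion transistors -- for comparison, a high-end CPU of the
# era held on the order of a billion
print(f"{total_transistors:.1e}")
```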

"You end up with a technology where the cost is very unfavorable," says Boahen. "That's why we have to use nanotech to implement synapses in a way that will make them much smaller and more cost-effective."

Boahen and his team are trying to create a device smaller than a single transistor that can do the job of 20 transistors. "We are essentially inventing a new device," he says.

Meanwhile, at the University of California-Merced, Kello and his team are creating a virtual environment that could train the simulated brain to experience and learn. They are using the Unreal Tournament videogame engine to help train the system. When it's ready, it will be used to teach the neural networks how to make decisions and learn along the way.

Modha and his team say they want to create a fundamentally different approach. "What we have today is a way where you start with the objective and then figure out an algorithm to achieve it," says Modha.

Cognitive computing hopes to change that perspective. The researchers say they want to create an algorithm capable of handling most problems thrown at it.

The virtual environment should help the system learn. "Here there are no instructions," says Kello. "What we have are basic learning principles so we need to give neural circuits a world where they can have experiences and learn from them."

Getting there will be a long, tough road. "The materials are a big challenge," says Kello. "The nanoscale engineering of a circuit that is programmable, extremely small and requires extremely low power is an enormous engineering feat."

There are also concerns that the $5 million Darpa grant and IBM's largesse -- researchers and resources -- while enough to get the project started, may not be sufficient to see the project through to the end.

Then there's the difficulty of explaining that mimicking the cerebral cortex isn't exactly the same as recreating the brain. The cerebral cortex is associated with functions such as thought, computation and action, while other parts of the brain handle emotions, co-ordination and vital functions. These researchers haven't even begun to address simulating those parts yet.

Original here