Tuesday, June 3, 2008

Astronomy Picture of the Day

Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2008 June 3

The Dark River to Antares
Credit & Copyright: Máximo Ruiz

Explanation: Connecting the Pipe Nebula to the bright star Antares is a flowing dark cloud nicknamed the Dark River. The murkiness of the Dark River is caused by absorption of background starlight by dust, although the nebula contains mostly hydrogen and molecular gas. Antares, the bright star that appears yellow just below the center of the frame, is embedded in the colorful Rho Ophiuchi nebula clouds. The Dark River, pictured above across the upper left, spans over 20 times the angular diameter of the Moon and lies about 500 light years distant. Other types of nebulas visible here include red emission nebulas and blue reflection nebulas.

Original here

A scoop from Mars! Lander digs in

Soil shows white flecks that could be ice or salt, scientists say


NASA / JPL-Caltech / UA / MPI via EPA
A photograph taken by NASA's Phoenix Mars Lander shows Martian soil within the scoop on the end of the probe's robotic arm. Scientists speculate that the white patches on the right side of the image could be ice or salts that precipitated into the soil. The color was acquired by illuminating the scoop with red, green, and blue light-emitting diodes.

LOS ANGELES - NASA’s newest spacecraft got down and dirty on Mars, taking its first practice scoop of Martian soil ahead of the actual dig expected later this week, scientists said Monday.

The test dig made Sunday by Phoenix Mars Lander’s 8-foot-long robotic arm uncovered bright specks in the soil believed to be ice or salt.

“We see this nice streak of white material,” said Pat Woida, senior engineer at the University of Arizona at Tucson, which is directing the mission. “We don’t know what this material is yet.”

Scientists expect to find out definitively what the specks are made of once the lander begins conducting its chemical tests.

Phoenix landed in the Martian arctic plains on May 25 to begin a three-month hunt, aimed at finding out whether the far northern latitudes could support primitive life. Its main task is to excavate trenches in the permafrost in search of evidence of past water and organic compounds considered the chemical building blocks of life. The cost of the mission is $420 million.

Close-up images beamed back by the lander over the weekend revealed that its three legs are resting on what appears to be a slab of ice. It apparently was uncovered when the spacecraft’s thrusters blew away the topsoil. Also over the weekend, engineers fixed a nagging short-circuit problem on one of the lander’s instruments.

Image: Mars as seen by Phoenix Mars Lander
NASA via Reuters
This color image, acquired by NASA's Phoenix Mars Lander on Sunday, shows the place where the test dig was made.

With the practice dig out of the way, scientists will scour the landscape for a prime spot for the lander to perform three side-by-side digs. Phoenix will deliver the scoopfuls of dirt to its miniature ovens, and vapors from the heating will be analyzed for traces of organic compounds. Later digs will focus on bringing samples to its microscope and wet chemistry lab.

“We’re ready to go,” said Ray Arvidson of Washington University, who is known by team members as the “dig czar.” “We’re pretty excited to get on with business here.”

Since Phoenix’s arrival, scientists have roped off no-digging zones to preserve parts of the landing site. They’ve also named rocks and other geologic features after fairy-tale and nursery rhyme characters, including “Humpty Dumpty” and “Alice.”

© 2008 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

Original here


Dark, Perhaps Forever

BALTIMORE — Mario Livio tossed his car keys in the air.

They rose ever more slowly, paused, shining, at the top of their arc, and then in accordance with everything our Galilean ape brains have ever learned to expect, crashed back down into his hand.

That was the whole problem, explained Dr. Livio, a theorist at the Space Telescope Science Institute here on the Johns Hopkins campus.

A decade ago, astronomers discovered that what is true for your car keys is not true for the galaxies. Having been impelled apart by the force of the Big Bang, the galaxies, in defiance of cosmic gravity, are picking up speed on a dash toward eternity. If they were keys, they would be shooting for the ceiling.

“That is how shocking this was,” Dr. Livio said.

It is still shocking. Although cosmologists have adopted a cute name, dark energy, for whatever is driving this apparently antigravitational behavior on the part of the universe, nobody claims to understand why it is happening, or its implications for the future of the universe and of the life within it, despite thousands of learned papers, scores of conferences and millions of dollars’ worth of telescope time. It has led some cosmologists to the verge of abandoning their fondest dream: a theory that can account for the universe and everything about it in a single breath.

“The discovery of dark energy has greatly changed how we think about the laws of nature,” said Edward Witten, a theorist at the Institute for Advanced Study in Princeton, N.J.

This fall, NASA and the Department of Energy plan to invite proposals for a $600 million satellite mission devoted to dark energy. But some scientists fear that might not be enough. When astronomers and physicists gathered at the Space Telescope Science Institute recently to take stock of the revolution, their despair of getting to the bottom of the dark energy mystery anytime soon, if ever, was palpable, even as they anticipate a flood of new data from the sky in coming years. When it came time for one physicist to discuss new ideas about dark energy, he showed a blank screen.

The institute’s director, Matt Mountain, said that dark energy had given this generation of astronomers a rare opportunity, and he admonished them to use it wisely.

“We are placing a large bet,” Dr. Mountain said, “using our credibility as collateral, that we as a community know what we are doing.”

But many stressed that it was going to be a long march with no clear end in sight. Lawrence Krauss of Case Western Reserve University told them, “In spite of the fact that you are liable to spend the rest of your lives measuring stuff that won’t tell us what we want to know, you should keep doing it.”

Scuffling in the Dark

Through myriad techniques and observations, cosmologists have recently arrived, after decades of strife, at a robust but dark consensus regarding a cosmos in which stars and galaxies, as well as the humans who gawk at them, amount to barely more than a disputatious froth. It was born 13.7 billion years ago in the Big Bang. By weight it is 4 percent atoms and 22 percent so-called dark matter of unknown identity — perhaps elementary particles that will be discovered at the Large Hadron Collider starting up outside Geneva this year. That leaves 74 percent for the weight of whatever began causing the cosmos to accelerate about five billion years ago.

As far as astronomers can tell, there is no relation between dark matter, the particles, and dark energy other than the name, but you never know. Some physicists are even willing to burn down their old sainted Einstein and revise his theory of gravity, general relativity, to make the cosmic discrepancies go away. There is in fact a simple explanation for the dark energy, Dr. Witten pointed out, one whose tangled history goes all the way back to Einstein, but it is also the most troubling.

“Dark energy has the somewhat unusual property that it was embarrassing before it was discovered,” he said.

In 1917, Einstein invented a fudge factor known as the cosmological constant, a sort of cosmic repulsion to balance gravity and keep the universe in balance. He abandoned his constant when the universe was discovered to be expanding, but quantum physics resurrected it by showing that empty space should be foaming with energy that had the properties of Einstein’s constant.

Alas, all attempts to calculate the amount of this energy come up with an unrealistically huge number, enough energy to blow away the contents of the cosmos like leaves in a storm before stars or galaxies could form. Nothing could live there.

Dr. Witten and other physicists used to think this conundrum “would somehow go away.” Something was missing in physicists’ understanding of physics, the logic went. The constant was really zero for deep reasons that, when revealed, would lead physicists closer to an understanding of what they call “the vacuum,” that is to say, the structure of reality.

“It seems now that the answer is not really zero,” Dr. Witten said.

Requiem for a Dream

Einstein’s constant is the most economical explanation for dark energy, Dr. Witten said. The others, involving new force fields or tinkering with Einstein’s gravity, are hard to make work and raise more questions than they answer. But if dark energy is the cosmological constant, it is smaller than predicted by a shocking factor of 10^60. No fundamental principles can explain why Einstein’s constant, or any physical parameter, could be so small without being zero, Dr. Witten said. Zero can be a fundamental number, he said, but not a 1 with 59 zeroes between it and the decimal point.

As a result, he said, maybe physicists should give up trying to explain that number and look instead for a theory that generates all kinds of universes, a so-called multiverse.

That idea has been given mathematical form by string theory, which portrays the constituents of nature as tiny wriggling strings, an elegant idea that in principle explains all the forces of nature but in practice leads to at least 10^500 potential universes.

This maze was an embarrassment for string theory. As Dr. Witten, one of the leaders of the field, said, “I am tempted to say this was an embarrassment of my youth.”

“Who needs that mess?” he recalled thinking. “There is just one world we live in.”

Now, Dr. Witten allowed, dark energy might have transformed this fecundity from a vice into a virtue, a way to generate universes where you can find any cosmological constant you want. We just live in one where life is possible, just as fish only live in water.

“This interpretation of string theory might be close to the truth,” Dr. Witten said. But that truth comes at a cost.

“Before the discovery of the dark energy, quantum physicists tended to assume that the ‘vacuum’ we live in has some deep meaning that reflects nature’s deepest secrets,” Dr. Witten said. But if ours is only one of a zillion in a haystack, there is nothing special about it, no secret to be found.

It could still turn out that dark energy is some as-yet-undiscovered “fifth force,” say, or the result of not understanding gravity. In that case, Dr. Witten said, “All the old viewpoints would be correct,” and physicists could go back to dreaming of a final theory.

“I’d be happy if that happened,” he said. “Our reward would be to go back to where we were, not understanding the cosmological constant.”

The notion that there are a zillion universes, whose individual properties are just a cosmic dice throw, is a story that has been told before and “raises the blood pressure of many physicists seriously,” as Dr. Livio put it. But the idea has rarely been mentioned by Dr. Witten, who is seen in the community as a symbol of the old Einsteinian ideal.

Dr. Witten said he was just doing his duty to explain what dark energy meant to physics.

“As for how I feel personally, I am not sure what to say,” he said in an e-mail message. “I wasn’t terribly enthusiastic the first, or even second, time I heard the proposal of a multiverse. But none of us were consulted when the universe was created.”

Astronomy of the Invisible

The trouble started in 1998 when two competing teams of astronomers, one led by Saul Perlmutter of the Lawrence Berkeley National Laboratory in California and the other by Brian Schmidt of the Australian National University, discovered that the expansion of the universe was inexplicably accelerating.

Both teams were using a kind of exploding star known as a Type Ia supernova as standard candles — objects whose distance can be inferred from their apparent brightness and a few other tricks of the trade — to investigate the history and fate of the universe. They found, on the basis of a few dozen of these stars, that the more distant ones were dimmer than expected, meaning that they had been carried farther away by the cosmic expansion than expected, meaning that the universe was speeding up. The car keys were streaking for the ceiling.
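The standard-candle bookkeeping the teams relied on can be sketched with the distance modulus, m - M = 5 log10(d / 10 pc): if you know an object's intrinsic brightness, its apparent brightness tells you its distance. A minimal illustration in Python (the peak absolute magnitude used here is a standard approximate value for Type Ia supernovae, not a figure from the article):

```python
# Standard-candle distance from the distance modulus:
#   m - M = 5 * log10(d / 10 pc)   =>   d = 10 pc * 10**((m - M) / 5)

M_TYPE_IA = -19.3  # approximate peak absolute magnitude of a Type Ia supernova

def luminosity_distance_pc(apparent_mag, absolute_mag=M_TYPE_IA):
    """Distance in parsecs inferred from apparent vs. absolute magnitude."""
    return 10.0 * 10.0 ** ((apparent_mag - absolute_mag) / 5.0)

# A supernova observed at apparent magnitude 24 would lie billions of
# parsecs away (ignoring the redshift corrections that matter at such
# distances, which is where the acceleration signal actually hides).
d = luminosity_distance_pc(24.0)
print(f"{d:.3e} pc")
```

A supernova that comes out dimmer than this relation predicts for its redshift is farther away than expected, which is the observation that pointed to acceleration.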

The groups quibble about who saw and said what first, but they have shared in a cavalcade of awards and prizes, among them the $1 million Shaw Prize in 2006 and the $500,000 Gruber Cosmology Prize, awarded last fall at Cambridge University in England, where Dr. Perlmutter and Dr. Schmidt lectured jointly, trading sentences.

Since then myriad collaborations have joined in the hunt for these exploding stars. In Baltimore, Dr. Perlmutter reported on a new analysis of “the world’s data set,” more than 300 supernovas observed by various groups, which he said would provide the tightest constraints on the nature of dark energy “for at least the next 15 minutes.”

Dr. Perlmutter’s results, along with all the others that were presented over the next four days, were consistent with Einstein’s cosmological constant, plus or minus 10 percent, but with just about everything else the theorists can throw into the pot, as well.

Nor is there any solid evidence yet that dark energy is or is not varying with time — if it is not constant, it cannot be Einstein’s constant. Adam Riess of the Johns Hopkins space telescope institute, a key member of Dr. Schmidt’s team, said, “The biggest thing we could learn is by ruling that out.”

He added, “We have a suspect, but we’re not ready to convict anyone yet.”

Dr. Perlmutter said, “The challenge is to make dramatic improvements in the quality of the data,” adding, “The next decade should be a very fertile time.”

Astronomers have developed a smorgasbord of other ways of tracking the effect of dark energy. They have learned how to map the growth of clusters of galaxies by analyzing how the clusters’ gravity distorts the light from galaxies far behind them. Gravity makes the clusters grow; dark energy holds them back.

“We can see dark matter, and in principle even invisible clusters,” said Henk Hoekstra of the University of Victoria in Canada.

Another technique is to simply count the clusters at different times in the cosmic past, the way one might count trees to gauge the growth of a forest. Yet another method is to use sound waves from the hot, early days of the universe, which have left an imprint on the distribution of galaxies today — a 500-million-light-year “bump” — as a cosmic yardstick for measuring the universe as it grew.

Each of these methods has its own strengths and weaknesses, and experts agree that it will be necessary to marry the results from many methods to zoom in on the properties of dark energy. They also agree that the best place to do that is in space.

The Big Bake-Off

Last year a committee from the National Academy of Sciences recommended that a dark energy observatory be the next mission in an astrophysics program called Beyond Einstein.

There are now three competitors angling for the job: Dr. Perlmutter’s SNAP, for Supernova Acceleration Probe; Adept, or Advanced Dark Energy Telescope, led by Charles Bennett of Johns Hopkins; and Destiny, for Dark Energy Space Telescope, led by Tod Lauer of the National Optical Astronomy Observatory in Tucson.

Also in the works, just to add spice, is a European mission known as Euclid, which could fly in 2017, if it is approved by the European Space Agency. NASA and the Department of Energy, working together, expect to make a final selection for the dark energy mission — known colloquially as J-dem for Joint Dark Energy Mission — next spring and launch it in the middle of the next decade.

That sounds like progress, but some astronomers, including the former members of the academy committee itself, have complained that $600 million is less than half of the $1.2 billion to $1.5 billion the academy committee estimated was necessary to do the job. In a recent letter to Michael Salamon, NASA scientist in charge of the project, 11 of the committee members, including both of its chairmen, urged NASA to raise the cost cap on the mission, writing, “Cutting the budget in half would probably make the attainment of these goals impossible.”

NASA’s $600 million does not include the cost of launching the satellite, so the discrepancy is not as big as it looks. But in Baltimore, Jon Morse, director of astrophysics at NASA headquarters, warned that if the astronomers wanted to spend a billion dollars, some other astronomy mission would have to come off the table.

NASA has to live within its means, Dr. Morse said in an interview.

“Otherwise,” he said, “Beyond Einstein becomes beyond reality.”

A Hole in the Future?

Whatever proposal is eventually selected, the dark energy satellite will return a tidal wave of data about the universe and its weird denizens, both visible and invisible. This data is likely to transform astronomy in unpredictable ways, but there is no guarantee that it will nail the mystery of dark energy.

Both alternatives to the constant — some weird energy field in space, or a modification to Einstein’s theory of gravity — could vary wildly over the course of history. But Paul Steinhardt, a theorist from Princeton University, argued that they would tend to mimic the cosmological constant so closely that the different models could not be distinguished within the projected error limits of a few percent. He called this blur of ignorance “the J-dem hole.” The specter of the J-dem hole dominated a panel discussion later in the week devoted to the question, “How well do we have to do?”

The answer, said Dr. Krauss of Case Western, was “better than you will be able to do.”

The only real job, he said, is to distinguish dark energy from the cosmological constant. “If we don’t answer that question, we won’t have learned a thing,” Dr. Krauss said.

He compared the present situation with the development of quantum mechanics, the paradoxical-sounding rules that govern the interior of the atom, which overturned science in the 1920s.

That revolution, he pointed out, stemmed from theorists’ inability to explain the so-called black body radiation emitted from a hot glowing object. The solution did not come from more and more precise measurements of the black body spectrum, but rather from the heads of people like Niels Bohr and Werner Heisenberg, who envisioned new ways that atoms could work and weird new laws of nature.

“We really need new theory, and we have none,” Dr. Krauss said.

In the meantime, astronomers could get lucky. Despite Dr. Steinhardt’s analysis, measurements of dark energy’s strength could converge on a value not quite the same as Einstein’s constant. Or it could turn out that it has changed over cosmic time and is not constant. Einstein and Dr. Witten would be off the hook.

Michael Turner, a University of Chicago cosmologist who coined the term “dark energy,” said you could measure the health of a field by the big questions it takes on, and addressing Dr. Morse of NASA, who was moderating the discussion, as well as his colleagues, he said, “You have a job, to go knock on everyone’s door and say this is the opportunity of a lifetime.”

Dr. Krauss said, “It would be crazy to talk ourselves out of this.”

He added: “You have to do what you can. You would be crazy not to look.”

Original here

How to become a presidential hero

Ares 5 illustration
Promising to reexamine NASA’s implementation of the exploration vision, including such vehicles as the Ares 5 (above), could be a winning proposition for a presidential candidate. (credit: NASA)

Representatives of the three major presidential candidates participated on a panel at the International Space Development Conference (ISDC), the National Space Society’s annual conference in Washington, DC last week, to present candidates’ views. There were no revelations: lots of replies amounted to little more than “we’ll have to study that.” Only Hillary Clinton’s rep, Lori Garver, provided really engaging and direct responses to moderator Miles O’Brien’s questions, particularly on issues such as humans to Mars. (Clinton seems to be in favor of eventual human Mars missions, although she doesn’t exactly flaunt it.)

As with some other key sessions at ISDC this year, audience members were not given the opportunity to ask questions. That’s really too bad, because I’m sure audience members would have injected a lot more life and spirited debate into the proceedings. As it was, we got what amounted to exhortations by NASA to support the Vision for Space Exploration (VSE) and ISS programs exactly as they are.

Coming away from these sessions, as well as Stephen Metschan’s talk on DIRECT in a much smaller afternoon breakout session, I’ve begun to think that a presidential candidate could make a real splash by casting himself or herself as a reformer of NASA.

To wit: taking a vocal stance on NASA and exploration that is visionary yet fiscally responsible might play very well to a populace weary of issues that deal only with negatives, such as war and the economy. We really need a hero these days, and spaceflight is one of those few areas Americans can point to with ready, justifiable pride. At the same time, this is hardly the point at which a battle cry of “Mars or bust, damn the cost!” can be made.

This is not about the merits or demerits of the specific shuttle-derived vehicles that the DIRECT team advocates. Overall, they make several excellent general points. If we’re truly trying to build a robust family of launch vehicles that will take us into the next 30 or 40 years of spaceflight, to pursue whatever specific goals we choose—and do that basically within the constraints of NASA’s current budget—the Ares 1/5 route doesn’t make much sense.

A far more reasonable approach would leave the vast majority of our unique launch facilities, experience, and workforce in place. The overriding principle would be to use whatever immediate technology we have at hand, and rigorously keep new development work and “requirements creep” at a minimum.

This approach would also assiduously avoid duplicating existing capabilities, especially those available or nearly available commercially. (A particular question: why develop Ares 1 when EELVs can do the job once they’re human-rated? And why not create an incentive for Boeing and Lockheed Martin to do that on their own dime, in exchange for flight guarantees?) Instead, NASA is absorbed in the creation of Ares 1 when it could be focusing on getting its Orion Crew Exploration Vehicle (CEV) built and tested, along with the J-2X engine that is probably required for any new large launcher family. (For now, we’ll leave aside why a variant of the in-production RS-68 engine wouldn’t be a wiser, nearer-term choice for the upper stages of at least some missions.)

The point is that there is no shortage of opinion that NASA’s approach is confoundingly complex and inefficient. That perception is undiminished by the marked absence of real persuasion on NASA’s part that it is set on the right course.

Enter the presidential hero. A candidate or new president-elect could score some quick political points not only by making an eloquent case for returning to exploration, and keeping America the leader in something it does best, but taking NASA to task on its VSE implementation, and setting it straight.

That last point doesn’t have to be a negative, either. A newly inaugurated President Clinton, McCain, or Obama could give NASA a big public challenge to do better, while instructing the agency not to damage or modify the nation’s precious STS infrastructure. Let him or her challenge NASA to come up with a better plan to get its new launcher online sooner and cheaper.

Frankly, if the immediate concern is about getting Orion spacecraft to the ISS, I think we could find a way to human-rate EELVs to carry those, even if it meant creating a first-pass CEV with reduced capability, with room for growth into the full-fledged CEV intended for flight beyond the ISS. At least we’d have American vehicles flying a new American spacecraft, something certainly doable within a single presidential term.

Let the president also challenge NASA to come up with a more reasonable plan to transition from that to launch vehicles with more capability than EELV by adapting the shuttle infrastructure in a more direct way, whether it looks very much like DIRECT or something else. When that’s ready, we can move the CEV back to flying from Launch Complexes 39A and B, and begin to work on a truly capable exploration program using a more sensible adaptation of the STS infrastructure and workforce.

In this way, the president would be a true hero. He or she would make an early, positive example of real leadership by keeping our commitment to a return to exploration and to the International Space Station, while making NASA more accountable to fiscal reality and public expectation. All over the tiny (albeit inordinately visible) portion of the federal budget involved.

What could be more of a slam dunk for a new president?



VC Cash in Tow, Space Tourist Biz Moves Beyond Early Adopters

Reporting once again from the 2008 International Space Development Conference, PM columnist and Instapundit blogger Glenn Reynolds analyzes the influx of money into suborbital flight—and what that could mean for your vacation to the moon.
As Virgin Galactic's SpaceShipTwo (left) and XCOR's Lynx jet (top right) race to suborbital space, early billionaire tourists like Greg Olson (bottom right) see lower-cost trips to the moon in the not-so-distant future. (Photographs Courtesy of Virgin Galactic, XCOR and Nancy Ostertag/NSS)
WASHINGTON — After spending the weekend here with the elite minds of research and exploration for the next generation in space, I'm confident that several frontiers are making steady progress. Beyond China and the globalization of government space agencies, there's movement on everything from asteroid defense (which is about more than just 99942 Apophis) to the Google Lunar X Prize (which now has four more teams competing) and space-based solar power (which we'll see some action on, bringing tremendous energy-independence benefits).

But just like last year, the biggest driver of excitement at the National Space Society's 27th International Space Development Conference was the booming growth in suborbital tourism. For the companies building spacecraft, there's money to spare. For you and me? There's actually money to save in getting up there some day.

Sure, the crowd of space activists here likes pretty much everything about space. But most people are space activists because they want to go themselves. (In fact, “I want to GO!” is a popular T-shirt slogan.) And the space tourism industry offers the prospect not only of jump-starting commercial space efforts in general, but of letting people fly into space at a price that many can actually afford.

With the race between Virgin Galactic and XCOR in full throttle to get the first spacecraft ready for suborbital vacations, the message at the tourism panels here was that the entire industry is moving fast, with plenty of cash—and a variety of business plans from the smaller players. Space tourism has attracted over $1.2 billion in investment, mostly from individual "angel investors," of which only about 25 percent has been spent. Revenues last year were $268 million, up from $175 million the year before.

And even though most of the competitors haven't flown yet, the space tourism market is a proven phenomenon, with several people having paid top dollar already to fly on the Russian Soyuz to the International Space Station (and more to come). Two of them, Anousheh Ansari and Greg Olson, spoke about their $20 million tickets to space and made clear that they thought it was money well spent. Asked if they'd spend $100 million for a trip to the moon, both said yes, though Olson added, "I'd have to sell another company first."

Will one or both of them manage the trip? Bring it on, I say. As with the first folks who shelled out big bucks for laptop computers and HDTVs, these early adopters are developing a market that will one day make access to space affordable for lots more of us. Which is great, because I want to go myself, and I'm afraid I don't have a hundred million to spare!

Original here

Astronomers find tiny planet orbiting tiny star

An international team of astronomers led by David Bennett of the University of Notre Dame has discovered an extra-solar planet of about three Earth masses orbiting a star with a mass so low that its core may not be large enough to maintain nuclear reactions. The result was presented Monday (June 2) at the American Astronomical Society annual meeting in St. Louis.

The planet, referred to as MOA-2007-BLG-192Lb, sets a record as the lowest-mass planet known to orbit a normal star. Its host, MOA-2007-BLG-192L, lies about 3,000 light-years away and is the lowest-mass host star known to have a companion with a planetary mass ratio. The host’s mass is about 6 percent of the mass of the sun. Such an object is called a brown dwarf, because this is slightly below the mass needed to sustain nuclear reactions in the core. But the measurement uncertainty also permits a host mass slightly above 8 percent of a solar mass, which would make MOA-2007-BLG-192L a very low-mass hydrogen-burning star.
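Those percentages are easier to picture after a quick conversion. The sketch below uses standard approximate values (the Sun/Jupiter mass ratio and the roughly 0.08-solar-mass hydrogen-burning limit); neither number comes from the article itself:

```python
M_SUN_IN_JUPITERS = 1047.6     # approximate Sun/Jupiter mass ratio
HYDROGEN_BURNING_LIMIT = 0.08  # solar masses, approximate

host_mass = 0.06  # best-fit mass of MOA-2007-BLG-192L, in solar masses

# 6 percent of a solar mass is roughly 63 Jupiter masses, comfortably
# below the threshold for sustained hydrogen fusion in the core.
print(f"Host mass: ~{host_mass * M_SUN_IN_JUPITERS:.0f} Jupiter masses")
if host_mass < HYDROGEN_BURNING_LIMIT:
    print("Below the hydrogen-burning limit: likely a brown dwarf")
else:
    print("Above the limit: a very low-mass star")
```

The measurement uncertainty quoted in the article (up to slightly above 8 percent of a solar mass) is what leaves the brown-dwarf-versus-star question open.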

“Our discovery indicates that even the lowest mass stars can host planets,” Bennett said. “No planets have previously been found to orbit stars with masses less than about 20 percent of that of the sun, but this finding suggests that we should expect very low-mass stars near the sun to have planets with a mass similar to that of the Earth. This is of particular interest because it may be possible to use NASA’s planned James Webb Space Telescope to search for signs of life on Earth-mass planets orbiting low-mass stars in the vicinity of the sun.”

The discovery of the MOA-2007-BLG-192L star-planet system was made by the Microlensing Observations in Astrophysics (MOA), which includes Bennett, and the Optical Gravitational Lensing Experiment (OGLE) collaborations using the gravitational microlensing method.

Gravitational microlensing takes advantage of the fact that light is bent as its rays pass close to a massive object, like a star. The gravity of the intervening object, or lens star, warps surrounding space and acts like a giant magnifying glass. As predicted by Albert Einstein and later confirmed, this phenomenon causes an apparent brightening of the light from a background “source” star. The effect is seen only if the astronomer’s telescope lies in almost perfect alignment with the source star and the lens star.

Astronomers are then able to detect planets orbiting the lens star if the light from the background star also is warped by one or more planets.
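For readers who want the numbers: a point lens magnifies a background source by a standard formula that depends only on their angular separation u, expressed in units of the lens's Einstein radius. A minimal sketch (the function name is ours, not from the article):

```python
import math

def magnification(u):
    """Point-source, point-lens microlensing magnification.

    u is the source-lens angular separation in units of the
    lens's Einstein radius.
    """
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))
```

At u = 1 the source brightens by about 34 percent, and the magnification climbs steeply as the alignment tightens, which is why microlensing events demand such precise alignments.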

The primary challenge of the microlensing method is that the precise alignments needed for planetary microlensing signals are quite rare and brief, often lasting less than a day. This discovery was made possible by the new MOA-II telescope at New Zealand’s Mt. John Observatory, using the MOA-cam3 camera, which is able to image an area of sky 13 times larger than the area of the full moon in a single image.

“The new MOA telescope-camera system allows us to monitor virtually all of the known microlensing events for planetary signals,” Bennett said. “We would not have made this discovery without it.”

The microlensing observations provided evidence that the host star has a mass of about 6 percent of the mass of the sun. This was confirmed by high angular resolution adaptive optics images with the Very Large Telescope (VLT) at the European Southern Observatory in Chile. These images confirm that the planetary host is either a brown dwarf or a very low-mass star.

The planet orbits its host star or brown dwarf with an orbital radius similar to that of Venus. But the host is likely to be between 3,000 and 1 million times fainter than the sun, so the top of the planet’s atmosphere is likely to be colder than Pluto. However, the planet is likely to maintain a massive atmosphere that would allow warmer temperatures at lower altitudes. It is even possible that interior heating by radioactive decays would be sufficient to make the surface as warm as the Earth, but theory suggests that the surface may be completely covered by a very deep ocean.

This result also supports the 1996 prediction by Bennett and Sun Hong Rhie that the microlensing method should be sensitive to Earth-mass planets.

“I’ll hazard a prediction that the first extra-solar Earth-mass planet will be found by microlensing,” Bennett said. “But we’ll have to be very quick to beat the radial velocity programs and NASA’s Kepler mission, which will be launched in early 2009.”

A paper describing this result has been accepted for publication in the Astrophysical Journal and is scheduled for publication in the Sept. 1 edition. Bennett’s work is funded by the National Science Foundation and the National Aeronautics and Space Administration.

In addition to Bennett, the MOA group is composed of astronomers from Nagoya University, Konan University, Nagano National College of Technology, and Tokyo Metropolitan College of Aeronautics in Japan, as well as Massey University, the University of Auckland, Mt. John Observatory, the University of Canterbury, and Victoria University in New Zealand.

The OGLE group comprises astronomers from Warsaw University Observatory in Poland, the Universidad de Concepción in Chile, and the University of Cambridge in England. Additional collaborators who provided the VLT data and analysis are from the Institut d’Astrophysique de Paris, the Observatoire Midi-Pyrénées, and the Observatoire de Paris in France, the European Southern Observatory in Chile, and Heidelberg University in Germany.

Original here

Live longer, hang out with young people

Hanging out with younger, healthier people might help the elderly to live longer, suggests a study of fruit flies.

The research also supports the notion that old people are more likely to thrive if they live with a younger peer group, or with their children and grandchildren, than if they live with their aged peers in a home.

    A Drosophila fruit fly: social interaction boosted the fly's lifespan

    Scientists have already gathered a range of evidence that having a social network is healthier than leading a solitary life: the healthy effects of attending church could be as significant as those enjoyed by people who give up smoking, according to one study of 4,000 elderly people in North Carolina.

    Another study at the University of Chicago found that loneliness is a major risk factor in increasing blood pressure and could raise the risk of death from stroke and heart disease.

    However, the underlying reason why being sociable has health effects has not been well understood. Now, fruit flies are set to provide the answer, after the discovery that fast-ageing flies that socialise with normal flies live longer than if they live with their peers.

    In a study published in the Proceedings of the National Academy of Sciences, Dr Hongyu Ruan and Prof Chun-Fang Wu of the University of Iowa used the fruit fly Drosophila melanogaster to examine the molecular networks that govern the effects of social interactions on the ageing process.


    The authors bred a particular strain of mutant fly with a greatly reduced lifespan and raised the flies in the same vial as normal fruit flies.

    What was striking was that the mutant flies that lived with normal flies survived nearly twice as long as mutants housed with other mutants.

    In addition, flies with shorter lifespans housed with the normal flies had improved physical responses and better survived environmental stresses compared to those that remained among the mutant population, according to the authors.

    The mutation that cuts lifespan, by interfering with an enzyme that mops up harmful radicals, mirrors deficits in a number of age-dependent diseases in humans, including Parkinson's, Huntington's, and Alzheimer's diseases, leading the team to suggest that their research may aid in therapies for these illnesses.

    "Our results provide a definitive case of beneficial social interaction on lifespan and a useful entry point for analysing the underlying molecular networks and physiological mechanisms," they conclude.

    Original here

    Magnetic Movie

    Natural magnetic fields are revealed as chaotic, ever-changing geometries as scientists from NASA’s Space Sciences Laboratory excitedly describe their discoveries.


    Duration

    4'56"
    18.2MB


    Credits

    A film by Semiconductor: Ruth Jarman & Joe Gerhardt
    Photographed and Recorded at the Space Sciences Laboratory UC Berkeley
    Space Physicists in order of appearance: Janet Luhmann, Bill Abbett, David Brain, Stephen Mende
    VLF Radio Recordings Stephen P McGreevy


    Synopses


    The secret lives of invisible magnetic fields are revealed as chaotic, ever-changing geometries. All action takes place around NASA’s Space Sciences Laboratory, UC Berkeley, to recordings of space scientists describing their discoveries. Actual VLF audio recordings control the evolution of the fields as they delve into our inaudible surroundings, revealing recurrent ‘whistlers’ produced by fleeting electrons. Are we observing a series of scientific experiments, the universe in flux, or a documentary of a fictional world?


    Technical information

    Animated photographs, using sound-controlled CGI and 3D compositing.

    Original here

    Ancient man killed 'love rivals'

    Talheim burial pit
    Marks on the skulls showed they had been hit by an axe

    Prehistoric man may have executed rivals from neighbouring tribes to steal their women, research has found.

    A study of 7,000-year-old skeletons, led by Durham University scientists, found that one of the burial groups consisted only of men and children.

    This indicated that the women were spared and their capture could have been the motive for the attack.

    The findings, from a burial pit in Talheim, Germany, are published in the journal Antiquity.

    The 34 skeletons were discovered in the 1980s, but new studies of different types (isotopes) of atoms in their teeth show that they came from three groups - locals, cattle-herders and a "family" of a man, woman and two children.

    All the skeletons bore marks to the left side of the skull showing that they were hit in the head with an axe, indicating they were executed while bound.

    'Tribal warfare'

    The scientists concluded the absence of local females meant they were captured instead.

    Dr Alex Bentley, from Durham University's Anthropology Department, said: "It seems this community was specifically targeted, as could happen in a cycle of revenge between rival groups.

    "Although resources and population were undoubtedly factors in central Europe around that time, women appear to be the immediate reason for the attack.

    "Our analysis points to the local women being regarded as somehow special and therefore being kept alive."

    Dr Bentley added: "It looks like tribal warfare on a small scale.

    "It's crucial for a group which has a very small population to have access to mates."

    Original here

    3 People Who Are Pushing the Edge of Science

    Growing electronics with viruses, finding alien life, and quantum privacy protection.

    by Jane Bosveld; illustrations by Riccardo Vecchio

    Angela Belcher

    Edge work: “Programming” viruses to perform useful tasks

    Why? It is clean and efficient.
    Where? MIT
    Initial response: “I was called insane.”

    When 40-year-old materials chemist Angela Belcher was a child, she wanted to be an inventor. “I would try to build things out of scrap material that we had in the garage,” she says. To her disappointment, everything she made had already been invented. Then, in college, she “fell in love with large molecules” and found a whole new way to build things.

    Although Belcher was interested in DNA, the molecules she most loved were proteins. She wrote her doctoral thesis on how abalone grow their rough outer shells and pearl-like inner shells, the main difference between the two being a simple shift in protein sequences. “It’s pretty amazing,” she says. “If organisms like abalone have precise control at a genetic level, I realized it might be possible to program an organism to grow other kinds of material. Why not use genetic information to build a protein that can grow a semiconductor?”

    In a series of experiments at MIT, Belcher, working with a team of about 30 students and postdocs, has successfully programmed viruses to incorporate, then grow, a variety of inorganic materials, including nanoscale semiconductors, solar cells, and magnetic storage materials. Separately, she is using yeasts as scaffold organisms because of their ability to grow many different materials. “We look at yeasts as factories,” she explains. “Instead of Budweiser, there’s Nanoweiser.”

    Belcher has begun working with the U.S. Army on nanoscale batteries that would weigh a fraction of what current batteries weigh and be woven into a soldier’s uniform. She is also training viruses to “find mistakes in materials and give off a signal.” One possible application: spraying viruses on an airplane fuselage to check for microscopic defects. In addition, the National Cancer Institute is funding Belcher to use viruses to find peptides that can specifically identify cancer cells.

    “We have a long way to go,” Belcher says. “But one of the things I like about biology is that you have evolution on your side.”


    Dimitar Sasselov

    Edge work: Finding life on planets outside our solar system
    Why? We have to know.
    Where? Harvard University
    Initial response: “People are always very excited.”

    In his quiet, modest way, Dimitar Sasselov is working to answer one of science’s most explosive questions: Is there other life in the universe? Sasselov, a 46-year-old Harvard University astronomer and director of the university’s Origins of Life Initiative, is looking for life-sustaining extrasolar planets—planets that are circling suns in other solar systems. Among the 270 extrasolar planets discovered so far, there is probably one living world, according to Sasselov.

    Sasselov says a planet needs to have two things to sustain life. First, it must allow complex biochemistry to develop. For this to occur, the temperature on the planet has to fall within a certain range. Too far from its star and the surface may be too cold to support the necessary reactions; too close and it may be too hot. The second must-have for life is a recycling of gases and minerals from the planet’s interior to its exterior—known as the carbon cycle —which keeps the atmosphere in balance over long periods so life can emerge and survive.

    The alien life we are most likely to find will be microbial, Sasselov explains. In fact, he expects that the first living planet we discover will resemble what Earth looked like a billion years ago, when life had not yet evolved beyond bacteria, simple algae, and other microorganisms. “But Earth is just one possible pathway for the emergence of viable biomolecules from chemistry,” he says. “Are there multiple pathways? Do all chemical pathways converge to one or two or three possible ones to produce life?” Sasselov is working with planetary scientists and cosmochemists to answer these questions by analyzing concentrations of molecules in the universe and on the extrasolar planets they suspect may harbor life.

    Sasselov doesn’t think that discovering life on another planet will change much on Earth. “It did not make a big difference 450 years ago whether Earth was the center of the solar system or whether the sun was,” Sasselov explains. “It’s the same now. Nothing really would change.” People would, however, realize that “the place we live in is much bigger than we ever imagined,” he admits. “That is world-changing.”


    Gilles Brassard

    Edge work: Using quantum mechanics to protect our privacy
    Why? It will make electronic communications more secure.
    Where? Université de Montréal
    Initial response: “Very few people took it seriously.”

    Privacy wonks should love Gilles Brassard. He is the guy who has delivered their seemingly impossible desire: an absolutely confidential way to send electronic messages. Unfortunately, it involves quantum mechanics, the twilight zone of physics. Brassard, a 52-year-old professor of computer science at the Université de Montréal, turned the wild idea of using the quantum world to send messages electronically into something real. Soon it may be essential.

    Quantum cryptography ensures complete privacy because any attempt to observe the transmission will change the message. It is a basic principle of quantum mechanics: The act of observing affects the thing observed. “If I send you information in the form of quantum signals and someone tries to eavesdrop on that signal,” Brassard explains, “the act of eavesdropping will disturb the signal. It will also alert the recipient if the transmission has been compromised.”

    As a child Brassard wanted to be a mathematician, but he became fascinated with programming when he took a computer science course at the Université de Montréal, which he entered at age 13. A decade later, in 1979, he became fascinated with how the strange properties of quantum mechanics could be harnessed to send confidential messages without an elaborate encoded key, as required by conventional cryptography. In 1983 he codeveloped BB84, the first practical quantum cryptography scheme, and he continued to refine it for years.
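The disturbance Brassard describes can be demonstrated with a toy, classical simulation of the BB84 intercept-and-resend scenario. In this sketch (our own illustration, not Brassard's code), each qubit is modeled as a bit plus a basis; a measurement in the wrong basis returns a random bit, so an eavesdropper shows up as roughly a 25 percent error rate in the sifted key:

```python
import random

def bb84_error_rate(n_bits=2000, eavesdrop=False, seed=1):
    """Toy BB84: returns the error rate Alice and Bob observe
    on the positions where their measurement bases matched."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    channel = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        # Eve measures each qubit in a random basis and resends it;
        # a wrong-basis measurement destroys the original bit value.
        channel = []
        for bit, basis in zip(alice_bits, alice_bases):
            eve_basis = rng.randint(0, 1)
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            channel.append((eve_bit, eve_basis))

    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if basis == bb else rng.randint(0, 1)
                for (bit, basis), bb in zip(channel, bob_bases)]

    # Sift: keep only positions where Alice's and Bob's bases agree
    kept = [(a, b) for a, ab, b, bb in
            zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    return sum(a != b for a, b in kept) / len(kept)
```

Without Eve, the sifted bits agree perfectly; with Eve, the error rate near 25 percent alerts Alice and Bob that the channel was observed.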

    Today, along with physicists like Christopher A. Fuchs of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, he is also reexamining the foundations of quantum mechanics to see where information fits in. Brassard suspects that underlying the fundamental laws of the universe are information theory axioms rather than waves or particles. “I don’t have any formal training as a physicist,” he says, “but sometimes that’s good. It helps you see things differently.”

    Original here


    Back to 1984: Scientists develop technology to read your mind

    Pittsburgh (PA) – Scientists from Carnegie Mellon University claim to have found a way to predict brain activity when someone thinks about specific words. So far, the catalog of words supported by the technology is limited to only 60. However, the simple fact that computers are able to detect thoughts based on words not only offers new opportunities in research areas such as thought disorders, but also puts the meaning of Big Brother into a whole new category.

    Depending on how you look at Carnegie Mellon’s announcement, the research results presented can be both impressive and scary. It is the first time we know of that scientists are actually able to dive into your thoughts and uncover, with very little doubt, which words you are currently thinking of. If these findings are in fact accurate and any indication of what we may see in the not-too-distant future, we could see giant leaps in brain activity research, providing more insight into the study of autism, disorders of thought such as paranoid schizophrenia, and semantic dementias such as Pick's disease. Of course, it could also be another dimension of privacy invasion.

    Computer scientist Tom Mitchell and cognitive neuroscientist Marcel Just, both of Carnegie Mellon University, had previously shown that functional magnetic resonance imaging (fMRI) can detect and locate brain activity when a person thinks about a specific word. Using this data, the researchers claim to have developed a computational model that enabled a computer to correctly determine what word a research subject was thinking about by analyzing brain scan data.

    Based on this technology, Just and Mitchell said that fMRI data allowed them to develop a complex computational model that can predict the brain activation patterns associated with concrete nouns, or things that we experience through our senses, even if the computer did not already have the fMRI data for that specific noun: 60 nouns were organized in twelve categories including animals, body parts, buildings, clothing, insects, vehicles and vegetables. The model also analyzed a text corpus, or a set of texts that contained more than a trillion words, noting how each noun was used in relation to a set of 25 verbs associated with sensory or motor functions. Combining the brain scan information with the analysis of the text corpus, the computer then predicted the brain activity pattern of thousands of other concrete nouns.
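The description above suggests a linear model: the predicted activation of each brain voxel is a weighted sum of the noun's 25 verb co-occurrence features. A hypothetical sketch (all names and dimensions invented for illustration; the real model fit its per-voxel weights to fMRI training data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features = 500, 25   # illustrative sizes, not the real scan resolution

# Per-voxel weights, learned from fMRI training data (random stand-ins here)
weights = rng.normal(size=(n_voxels, n_features))

# The noun's normalized co-occurrence with 25 sensory-motor verbs,
# derived from a large text corpus
features = rng.random(n_features)
features /= features.sum()

# Predicted whole-brain activation pattern for a noun never scanned
predicted_activation = weights @ features
```

The point of such a model is exactly what the article describes: once the weights are learned, an activation pattern can be predicted for any concrete noun whose corpus features are known, even one never shown to a subject in the scanner.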

    The result? According to the scientists, the computer can effectively predict what each participant's brain activation patterns would look like when each thought about these words, even without having seen the patterns associated with those words in advance.

    "We believe we have identified a number of the basic building blocks that the brain uses to represent meaning," Mitchell said. "Coupled with computational methods that capture the meaning of a word by how it is used in text files, these building blocks can be assembled to predict neural activation patterns for any concrete noun. And we have found that these predictions are quite accurate for words where fMRI data is available to test them."

    "We are fundamentally perceivers and actors," Just said. "So the brain represents the meaning of a concrete noun in areas of the brain associated with how people sense it or manipulate it. The meaning of an apple, for instance, is represented in brain areas responsible for tasting, for smelling, for chewing. An apple is what you do with it. Our work is a small but important step in breaking the brain's code."

    The scientists also found “significant activation” in other areas, including frontal areas associated with planning functions and long-term memory. When someone thinks of an apple, for instance, this might trigger memories of the last time the person ate an apple, or initiate thoughts about how to obtain an apple, they said.

    As interesting as these research results sound, the advances in brain scanning could make you feel uncomfortable, and Eric Arthur Blair, better known by his pseudonym George Orwell, might have said that his vision of the society outlined in “1984” is not impossible after all, at least from a technology point of view. “1984” was published in 1949; Blair died in 1950.

    Original here

    Scientists probe giant squid sex secrets

    230 kg squid netted off Portland

    The six-metre long, 230 kilogram squid netted off Portland in Victoria's south-west on Monday May 26, 2008. (Fisheries Victoria)

    Victorian scientists are preparing to dissect a giant squid caught off the state's south-west coast last week, hoping to find out more about the enigmatic marine creature.

    Early this morning staff from Museum Victoria collected a huge block of ice containing the creature, the largest giant squid yet discovered in Australia.

    The animal has three hearts and blue blood, boasts a donut-shaped brain that surrounds its oesophagus, is six metres long, and weighs 230 kilograms.

    Dr Mark Norman, the senior curator of molluscs with Museum Victoria, says the latest discovery provides him and fellow scientists with a unique opportunity.

    "This giant squid is really interesting. It is the biggest we have seen in Australia and it is a little bit bigger than some we've seen [worldwide]," he said.

    "But the exceptional thing about it is that it is in very good condition in that the eyes are still intact, all the skin is still on it, the fin is still attached."

    He says because its food must be swallowed through the ring-shaped brain, the head and mouth have had to adapt.

    "They've got this giant beak like a giant parrot with a tongue covered in teeth, and they have to puree the food so they don't get a splitting headache every time they swallow."

    While giant squids periodically wash up on beaches around the world and scientists have studied them in fits and bursts, Dr Norman says there are still huge gaps in knowledge.

    Next on the agenda for the frozen body is intensive research.

    "Our aim is that we will do a proper autopsy and dissection," he said.

    Group sex

    He says the reproductive habits of giant squids are particularly interesting and will be the focus of much study.

    "[We will look at] whether it has been mated or not. Whether it is a male or female.

    "Giants have very strange sexual behaviour where the male has a metre-long muscular penis that he uses a bit like a nail gun and shoots cords of sperm under the skin of the female's arms and she carries the sperm around with her until she is ready to lay her big jelly mass of a million eggs.

    "[We want to find out] whether they gather somewhere together to mass-breed.

    "If we get some sperm out of the arms of this animal then we can do paternity studies and see whether it was multiple males mating with her or a single male.

    "It is very, very strange for the systems going on down there, but because these animals are probably few and far between, we don't know how often they get together to mate."

    With the animal in such good condition, the museum has been presented with a rare opportunity to glean knowledge on one of the world's most intriguing creatures.

    But the general public will not be forgotten, and the giant mollusc is destined to be a museum piece eventually.

    "Once we have done all the tests and things, we are going to do the 'Frankenstein' job and stitch it all back together and then put it on display at the museum," he said.

    Adapted from an AM report by Alison Caldwell.

    Original here

    Humans can 'see' future

    Our visual system has evolved to compensate for neural delays

    In this so-called Hering illusion, the straight lines near the central point (vanishing point) appear to curve outward. This illusion occurs because our brains are predicting the way the underlying scene would look in the next moment if we were moving toward the middle point.
    Mark Changizi, RPI

    Humans can see into the future, says a cognitive scientist. It's nothing like the alleged predictive powers of Nostradamus, but we do get a glimpse of events one-tenth of a second before they occur.

    And the mechanism behind that can also explain why we are tricked by optical illusions.

    Researcher Mark Changizi of Rensselaer Polytechnic Institute in New York says it starts with a neural lag that most everyone experiences while awake. When light hits your retina, about one-tenth of a second goes by before the brain translates the signal into a visual perception of the world.

    Scientists already knew about the lag, yet they have debated over exactly how we compensate, with one school of thought proposing our motor system somehow modifies our movements to offset the delay.

    Image: Ball illusion
    In this illusion, looming toward the center leads to the brightness expanding outward, while looming away does the opposite. Credit: Mark Changizi, RPI

    Changizi now says it's our visual system that has evolved to compensate for neural delays, generating images of what will occur one-tenth of a second into the future. That foresight keeps our view of the world in the present. It gives you enough heads up to catch a fly ball (instead of getting socked in the face) and maneuver smoothly through a crowd. His research on this topic is detailed in the May/June issue of the journal Cognitive Science.

    That same seer ability can explain a range of optical illusions, Changizi found.

    "Illusions occur when our brains attempt to perceive the future, and those perceptions don't match reality," Changizi said.

    Here's how the foresight theory could explain the most common visual illusions — geometric illusions that involve shapes: Something called the Hering illusion, for instance, looks like bike spokes around a central point, with vertical lines on either side of this central, so-called vanishing point. The illusion tricks us into thinking we are moving forward, and thus, switches on our future-seeing abilities. Since we aren't actually moving and the figure is static, we misperceive the straight lines as curved ones.

    "Evolution has seen to it that geometric drawings like this elicit in us premonitions of the near future,” Changizi said. "The converging lines toward a vanishing point (the spokes) are cues that trick our brains into thinking we are moving forward — as we would in the real world, where the door frame (a pair of vertical lines) seems to bow out as we move through it — and we try to perceive what that world will look like in the next instant."

    Grand unified theory

    Image: Orbison illusion
    Mark Changizi, RPI
    In the Orbison illusion, the squares closer to the image's center seem larger than the outside squares. The radiating lines and center point trick us into thinking we are moving forward toward the center point. The result is a perception of what the image would look like a tenth of a second into the future.

    In real life, when you are moving forward, it's not just the shape of objects that changes, he explained. Other variables, such as the angular size (how much of your visual field the object takes up), speed and contrast between the object and background, will also change.

    For instance, if two objects are about the same distance in front of you, and you move toward one of the objects, that object will speed up more in the next moment, appear larger, have lower contrast (because something that is moving faster gets more blurred), and literally get nearer to you compared with the other object.
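The angular-size change is plain geometry: an object of width w at distance d subtends an angle of 2·arctan(w/2d), so it fills more of your visual field as you approach. A quick illustration (numbers invented):

```python
import math

def angular_size_deg(width_m, distance_m):
    """Angular size, in degrees, of an object seen face-on."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Halving your distance to a 1 m wide object roughly doubles its angular size:
far  = angular_size_deg(1.0, 10.0)   # about 5.7 degrees
near = angular_size_deg(1.0, 5.0)    # about 11.4 degrees
```

The visual system's forecast, in Changizi's account, is anticipating exactly this kind of change a tenth of a second ahead.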

    Changizi realized the same future-seeing process could explain several other types of illusions. In what he refers to as a "grand unified theory," Changizi organized 50 kinds of illusions into a matrix of 28 categories. The results can successfully predict how certain variables, such as proximity to the central point or size, will be perceived.

    Changizi says that finding a theory that works for so many different classes of illusions is "a theorist's dream."

    Most other ideas put forth to explain illusions have explained one or just a few types, he said.

    The theory is "a big new player in the debate about the origins of illusions," Changizi said. "All I'm hoping for is that it becomes a giant gorilla on the block that can take some punches."

    © 2008 LiveScience.com. All rights reserved.

    Original here

    Types and Processes Gallery - Pyroclastic Fall

    Redoubt


    A dramatic, mushroom-shaped eruption column, lit by the rising sun, rises above Alaska's Redoubt volcano on April 21, 1990. Clouds of this shape, which are produced when the upper part of an eruption column attains neutral buoyancy and is spread out above the troposphere-stratosphere boundary, are common during powerful explosive eruptions. This column at Redoubt, however, did not originate from an eruption at the summit crater, but is an ash column that is rising buoyantly above a pyroclastic flow sweeping down the volcano's north flank.

    Photo by Joyce Warren, 1990 (courtesy of U.S. Geological Survey).

    Original here

    What's that name?

    You know the feeling that something is on the tip of your tongue? It offers deep insights into the nature of the mind.


    LATE IN 1988, a 41-year-old Italian hardware clerk arrived in his doctor's office with a bizarre complaint. Although he could recognize people, and remember all sorts of information about them, he had no idea what to call them. He'd lost the ability to remember any personal name, even the names of close friends and family members. He was forced to refer to his wife as "wife."

    (Ryan Lane/Istock Photo)

    more stories like this

    A few months before, the man, known as LS in the scientific literature, had been in a serious accident. He was thrown from his horse and the left side of his skull took the brunt of the impact. At first, it seemed as if the man had been lucky. A battery of routine tests had failed to detect any abnormalities. But now he appeared stuck with this peculiar form of amnesia, so that the names of people were perpetually on the tip of his tongue. It was agonizing.

    In the years since, scientists have come to a much firmer understanding of this phenomenon. It's estimated that, on average, people have a tip-of-the-tongue moment at least once a week. Perhaps it occurs when you run into an old acquaintance whose name you can't remember, although you know that it begins with the letter "T." Or perhaps you struggle to recall the title of a recent movie, even though you can describe the plot in perfect detail. Researchers have located the specific brain areas that are activated during such moments, and even captured images of the mind when we are struggling to find these forgotten words.

    This research topic has become surprisingly fruitful. It has allowed scientists to explore many of the most mysterious aspects of the human brain, including the relationship between the conscious and unconscious, the fragmentary nature of memory, and the mechanics of language. Others, meanwhile, are using the frustrating state to learn about the aging process, illuminating the ways in which, over time, the brain becomes less able to access its own storehouse of information.

    "The tip-of-the-tongue state is a fundamental side effect of the way our mind is designed," says Bennett Schwartz, a psychologist at Florida International University who studies the phenomenon.

    One of the key lessons of tip-of-the-tongue research is that the human brain is a cluttered place. Our knowledge is filed away in a somewhat slapdash fashion, so that names are stored separately from faces and the sound of a word and the meaning of a word are kept in distinct locations. Sometimes when we forget something, the memory is not so much lost as misplaced.

    The messy reality of the mind contradicts the conventional metaphor of memory, which assumes that the brain is like a vast and well-organized file cabinet. According to this theory, we're able to locate the necessary memory because it has been sorted according to some logical system. But this metaphor is misleading. The brain isn't an immaculate file cabinet - it's more like an untidy desk covered with piles of paper.

    Under normal circumstances, we don't notice the clutter because we still manage to find what we're looking for. However, during a tip-of-the-tongue experience, a crucial piece of knowledge gets lost. What's interesting is that, even though the mind can't remember the information, it's convinced that it's around somewhere in the mess. This is a universal experience: The vast majority of languages, from Afrikaans to Hindi to Arabic, even rely on tongue metaphors to describe the tip-of-the-tongue moment. And this is what has drawn the attention of neuroscience: If we've forgotten a person's name, then why are we so convinced that we remember it? What does it mean to know something without being able to access it?

    . . .

    For some researchers, the most interesting aspect of such moments is what they reveal about metacognition, a term that refers to the ways in which we reflect on our own thought processes. (We can think, in other words, about how we think.) Until recently, metacognition was largely ignored as a scientific subject because it seemed too abstract for experiments.

    While researchers had long realized that metacognition could be applied to things like mental states and emotions - you know when you're sleepy or angry - it wasn't clear that it could also be applied to particular pieces of knowledge, like the name of a person.

    "That seems like it would be a full-time job," says Schwartz. "There's a lot of stuff in your head."

    How might the mind keep track of its own contents? For the last several decades, scientists have assumed that the brain contains some innate indexing system, akin to a card catalog in a library, that allows it to immediately realize that it can produce a specific piece of knowledge. This is known as the "direct access" model, since it implies that the conscious brain has direct access to the vast contents of the unconscious.

    The tip-of-the-tongue experience, however, is leading researchers to question this straightforward model. According to this new theory, the brain doesn't have firsthand access to its own memories. Instead, it makes guesses based upon the other information that it can recall. For instance, if we can remember the first letter of someone's name, then the conscious brain assumes that we must also know his or her name, even if we can't recall it right away. This helps explain why people are much more likely to experience a tip-of-the-tongue state when they can recall more information about the word or name they can't actually remember.

    Perhaps the most surprising feature of the tip-of-the-tongue moment - a fleeting and infrequent experience - is that it can even be studied scientifically. Scientists say, however, that it's actually quite easy to trigger. The experiments go like this: A subject is given the definition of a rather obscure word, such as "goods that have been imported or exported illegally." Then, they are asked whether or not they can produce the word (contraband). A small percentage of the people will then say that, although they know the word, they can't quite recall it: it's on the tip of their tongue.

    Brain-imaging studies of tip-of-the-tongue states provide further evidence of how, exactly, the brain keeps track of its own knowledge. Research led by Daniel Schacter, a psychologist at Harvard, has demonstrated that tip-of-the-tongue states activate a distinct network of brain areas in the frontal lobes, including the prefrontal cortex and anterior cingulate cortex. These areas are typically associated with so-called higher brain functions and, during the tip-of-the-tongue moment, they seem to be performing two separate tasks. First, the frontal lobes are responsible for making the metacognitive judgment. And then, once we realize that we probably know what we can't remember, parts of the frontal lobe are in charge of organizing the search for that missing memory. They scour the stacks of the unconscious, as they try to figure out where we mislaid that pesky name.

    According to Schacter, the tip-of-the-tongue moment demonstrates a peculiar feature of memory: different aspects of a memory are stored separately in the brain. When we think about a friend, all of our memories of that friend aren't filed away in a single location. Instead, different aspects of the memory are distributed throughout the brain, so that a proper name is separated from a visual memory of a face.

    "When we remember something, that memory feels unified," Schacter says. "But the reality is that you assemble each memory out of lots of different pieces. A tip-of-the-tongue state occurs when one of the pieces gets lost."

    A similar fragmentation is at work in the production of language. Lise Abrams, a psychologist at the University of Florida, has demonstrated that, in many cases, the key to remembering a word that has been on the tip of the tongue is to encounter another word that shares a first syllable with the one we are trying to remember. For instance, when subjects are trying to recall "bandanna," they are much more likely to come up with the solution if they are given "banish" as a hint. "Banish" and "bandanna" mean very different things, but they activate the same network of brain cells devoted to the sound of the words.

    The connections can be even more indirect. Abrams has shown that showing people a picture of a motorcycle can help them remember the word "biopsy." Because the idea of a motorcycle is connected in the brain to the concept of "bike," which shares a first syllable with "biopsy," the seemingly irrelevant cue becomes an effective hint.

    "By seeing what allows people to find the answer," Abrams says, "you can really trace all the different ways language is processed in the brain."

    The research suggests why the tip-of-the-tongue experience becomes so much more common with age. Numerous studies have documented the effects of the aging process on the frontal lobes, with the areas shrinking in size and decreasing in density. As a result, the frontal lobes become less effective at searching the rest of the cortex for specific pieces of information. This suggests that lapses in memory become more common not just because the memories have faded, but because it is harder and harder to find them. The memory is there, but it looms, frustratingly, just out of reach.


    Jonah Lehrer is an editor at large at Seed magazine and author of "Proust Was a Neuroscientist."

    Original here

    Hot Tech to Watch for the Next Four Years

    The lifespan of technology is such that it’s hard enough to buy a computer that will last you more than three years, let alone remain state of the art after six months. So when the Gartner Group – an information technology research and advisory firm – releases its “Top 10 Technologies” list, it isn’t for “the next decade,” but rather “for the next four years.”

    Such a list has just been released by the world’s leading technology research center, and appears below.

    1. Multicore and hybrid processors

    2. Virtualization and fabric computing

    3. Social networks and social software

    4. Cloud computing and cloud/Web platforms

    5. Web mashups

    6. User interface

    7. Ubiquitous computing

    8. Contextual computing

    9. Augmented reality

    10. Semantics

    Now, even for me, some of these terms are fine on their own, but put up against each other, or against actual technology, they get a little baffling. So I’m going to go through them one by one and see what they all mean.

    Multicore and hybrid processors

    The most popular and widespread multicore processors at the moment are Intel’s Core 2 Duo and Core 2 Quad chips, which are steadily raising processor performance in retail computers. The future will see computing speeds continue to rise as we move into the realm of 8 cores and up. And while there aren’t really many retail programs that will make much use of this yet, the science community and other such groups will be able to make the most of the extra cores.

    Virtualization and fabric computing

    Virtualization is, to take one example, the ability to run Windows Vista applications on a Mac laptop running Leopard. It allows programs from two operating systems – with their respective pros and cons – to be used on the same computer, without the need for a second box sitting around, and without the need to emulate the hardware. (Thanks to my friend JB for helping me work this one out.)

    Social networks and social software

    This is not a category that needs much in the way of explanation. However, its uses will eventually expand beyond the frivolous, social ones we see on sites like Facebook and MySpace.

    Programs like Second Life are already hosting business meetings, and websites like LinkedIn are providing people with the means to get in touch with others within their own fields, businesses and groups.

    Cloud computing and cloud/Web platforms

    Cloud computing is definitely going to be a big part of our future, well beyond the next four years. The idea is that we will not necessarily host our information on one computer or device, but rather in the cloud – over the internet, or whatever process follows it – making it accessible from our work computers, home computers, personal devices and everywhere else.

    Web mashups

    The best-known example of a web mashup combines two web services – in one instance, Google Maps and real estate listings – to plot real estate information directly on a Google Map.

    So, in essence, it’s the combination of two web services to create a new and more useful application.
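To make that "combination of two web services" concrete, here is a minimal sketch. It is illustrative only: the two "services" are stand-in dictionaries rather than real HTTP APIs, and every name in it (`geocoder`, `listings`, `mashup`) is hypothetical, not part of any actual mashup platform.

```python
# Illustrative sketch only: two stand-in "services" (plain dicts here;
# a real mashup would call HTTP APIs) joined into one combined result.

# Hypothetical service #1: a mapping service that geocodes addresses.
geocoder = {
    "12 Oak St": (34.05, -118.24),
    "98 Elm Ave": (34.10, -118.30),
}

# Hypothetical service #2: real estate listings with asking prices.
listings = {
    "12 Oak St": 525_000,
    "98 Elm Ave": 610_000,
}

def mashup(geocoder, listings):
    """Join the two services on address, yielding map-ready records."""
    return [
        {"address": addr, "lat_lon": geocoder[addr], "price": price}
        for addr, price in listings.items()
        if addr in geocoder
    ]

for record in mashup(geocoder, listings):
    print(record)
```

The join itself is the whole trick: neither service knows about the other, and the new value comes entirely from lining them up on a shared key.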

    User interface

    User interface, or UI, is definitely going to shift over the next few years. Bill Gates was recently quoted as saying that the mouse will be obsolete in a few years, replaced by touch screens. The iPod Touch and the Microsoft Surface are examples of the touch-screen technology Gates is referring to: using your fingertips to control, resize, move and change anything from images to data sets.

    Ubiquitous computing

    Want your fridge connected to the internet so that it can order more milk when yours goes bad? Want to turn on the lighting or heating when you are on your way home from work? Want your life to be interconnected by the devices you use? Ubiquitous computing is also, funnily enough, called pervasive computing.

    In addition, it means that, akin to cloud computing, your information can follow you from device to device. JB – who has helped me out with this article – describes it as wanting “the football game to follow” him around. In other words, he sits in the car listening to the game's audio, walks into the house and it's on the TV, then heads upstairs and it's on his computer.

    Contextual computing

    This is basically the idea that your computing devices will be able to adapt to whatever context you find yourself in. For example, when you undock your laptop at work, is it 12 p.m. or 5 p.m.? In other words, are you heading to a meeting – and thus don’t need anything special – or are you heading home, and thus need your calendar updated and your email checked?

    This is also going to be used for mobile devices such as your phone. Google recently awarded a grant to students at MIT to develop an application for the Android platform that let the device sense whether you were outside, in a meeting or at home, and swap profiles accordingly.
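The profile-swapping idea above can be sketched in a few lines. This is not the MIT students' code – it's a made-up illustration, and the context values and profile names (`"silent"`, `"loud"`, and so on) are assumptions for the example.

```python
# Illustrative sketch: pick a device profile from sensed context.
# The contexts and profile names here are hypothetical.

def choose_profile(location, in_meeting):
    """Map a sensed context to a ringer profile.

    A real contextual-computing system would get `location` and
    `in_meeting` from sensors and a calendar, not from arguments.
    """
    if in_meeting:
        return "silent"    # never ring during a meeting
    if location == "home":
        return "loud"      # safe to be noisy at home
    if location == "outside":
        return "vibrate"   # audible ring may be missed or rude
    return "normal"        # default everywhere else

print(choose_profile("office", in_meeting=True))   # silent
print(choose_profile("home", in_meeting=False))    # loud
```

The interesting engineering problem is not this lookup, of course, but sensing the context reliably in the first place.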

    Augmented reality

    We’ve often seen examples of this in futuristic movies: people wearing goggles, or contact lenses, that pop up video calls, text, pictures and so on. That’s what we’re talking about when we use the term augmented reality. It’s basically augmenting your real-world view with technology. (Thanks again to JB for help with this.)

    Semantics

    The semantic web is a term that is being thrown around a lot these days, and it is, at its most basic level, the ability for a search engine to understand what you are talking about. In the future, the ability for a computer to understand what you are asking it – by context, rather than just by popularity, as most search algorithms work today – will help us get our work done quicker.

    If you want a very basic summary of what Gartner is suggesting, it is that technology in the future will make us all very lazy. It could be spun to say that we’ll all be more efficient, but I have a deep and intricate relationship with humanity, and I know that those who choose efficiency over laziness are few and far between.

    Posted by Josh Hill.

    Original here

    Key to All Optical Illusions Discovered

    Humans can see into the future, says a cognitive scientist. It's nothing like the alleged predictive powers of Nostradamus, but we do get a glimpse of events one-tenth of a second before they occur.

    And the mechanism behind that can also explain why we are tricked by optical illusions.

    Researcher Mark Changizi of Rensselaer Polytechnic Institute in New York says it starts with a neural lag that most everyone experiences while awake. When light hits your retina, about one-tenth of a second goes by before the brain translates the signal into a visual perception of the world.

    Scientists already knew about the lag, yet they have debated over exactly how we compensate, with one school of thought proposing our motor system somehow modifies our movements to offset the delay.

    Changizi now says it's our visual system that has evolved to compensate for neural delays, generating images of what will occur one-tenth of a second into the future. That foresight keeps our view of the world in the present. It gives you enough of a heads-up to catch a fly ball (instead of getting socked in the face) and maneuver smoothly through a crowd. His research on this topic is detailed in the May/June issue of the journal Cognitive Science.

    Explaining illusions

    That same seer ability can explain a range of optical illusions, Changizi found.

    "Illusions occur when our brains attempt to perceive the future, and those perceptions don't match reality," Changizi said.

    Here's how the foresight theory could explain the most common visual illusions — geometric illusions that involve shapes: Something called the Hering illusion, for instance, looks like bike spokes around a central point, with vertical lines on either side of this central, so-called vanishing point. The illusion tricks us into thinking we are moving forward, and thus, switches on our future-seeing abilities. Since we aren't actually moving and the figure is static, we misperceive the straight lines as curved ones.

    "Evolution has seen to it that geometric drawings like this elicit in us premonitions of the near future,” Changizi said. "The converging lines toward a vanishing point (the spokes) are cues that trick our brains into thinking we are moving forward — as we would in the real world, where the door frame (a pair of vertical lines) seems to bow out as we move through it — and we try to perceive what that world will look like in the next instant."

    Grand unified theory

    In real life, when you are moving forward, it's not just the shape of objects that changes, he explained. Other variables, such as the angular size (how much of your visual field the object takes up), speed and contrast between the object and background, will also change.

    For instance, if two objects are about the same distance in front of you, and you move toward one of the objects, that object will speed up more in the next moment, appear larger, have lower contrast (because something that is moving faster gets more blurred), and literally get nearer to you compared with the other object.

    Changizi realized the same future-seeing process could explain several other types of illusions. In what he refers to as a "grand unified theory," Changizi organized 50 kinds of illusions into a matrix of 28 categories. The results can successfully predict how certain variables, such as proximity to the central point or size, will be perceived.

    Changizi says that finding a theory that works for so many different classes of illusions is "a theorist's dream."

    Most other ideas put forth to explain illusions have explained one or just a few types, he said.

    The theory is "a big new player in the debate about the origins of illusions," Changizi told LiveScience. "All I'm hoping for is that it becomes a giant gorilla on the block that can take some punches."

    Original here