Friday, June 6, 2008

Biggest Eyes on the Universe Get Makeover

Bigger and Better

In astronomy, size matters. And for decades radio astronomers have been able to boast that they had the largest telescopes on the planet.

They still do, with some now virtually the size of continents. But until recently the technology used to run them was getting pretty clunky and outdated.

An international effort is now underway to upgrade the world's giant radio telescopes with 21st century technology. The improvements will increase their sensitivity up to 10 times, opening up whole new heavenly realms. At the same time, new and more specialized radio telescope arrays are being built to peer into the universe's earliest star-forming era.

21st Century Radio

"We're leapfrogging several generations of technological progress," said Fred K.Y. Lo, director of the National Radio Astronomy Observatory (NRAO).

The flagship of NRAO is the almost 30-year-old Very Large Array (VLA) near Socorro, N.M., which combines the waves of 27 radio telescopes, each 25 meters in diameter, that are spread out over the desert to create a virtual telescopic dish the size of a small city.

VLA is undergoing a total upgrade, starting with the key element in radio telescope arrays -- the computer correlator that blends all the radio data from all the dishes.

Arrays of radio telescopes combine the radio waves they collect to vastly enhance the resolution of their cosmic images. This has been both possible and necessary for decades: radio waves from space can be tens of meters long, which makes them far easier to line up and combine than visible light waves, which are only millionths of a meter long. But those long wavelengths also mean that any single dish produces less sharp, lower-resolution images.

"Radio waves are very long and so you need very large telescopes," said Rick Perley, who is working on the 21st century Expanded VLA, or EVLA.

To illustrate: A meter-wide visible light telescope is a couple of million times wider than the wavelength of light it gathers. It's analogous to having a computer monitor with lots of very high-density pixels -- that makes for a sharper image. Visible light telescopes can resolve a piece of sky just a few arc seconds across -- a patch at least 60 times smaller than the human eye can make out.

Because radio waves are millions of times longer than visible light waves, a 10-meter radio dish might resolve only an area of sky the size of the moon. The only way to counteract this is to aim many widely separated radio telescopes at the same target and combine their signals -- gathering a lot more pixels, in other words -- to sharpen the image.
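To put rough numbers on that comparison, here is a minimal sketch of the diffraction limit involved; the 500-nanometer and 21-centimeter wavelengths are illustrative choices, not figures quoted in the article.

```python
# Rough sketch of the diffraction limit: angular resolution ~ 1.22 * wavelength / aperture diameter.
import math

def resolution_arcsec(wavelength_m, diameter_m):
    """Approximate diffraction-limited resolution of a single aperture, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600

# A 1 m optical mirror at 500 nm: ~0.13 arcsec (in practice the atmosphere blurs ground-based
# telescopes to roughly an arcsecond).
print(resolution_arcsec(500e-9, 1.0))

# A 10 m radio dish at the 21 cm hydrogen line: ~5300 arcsec, about 1.5 degrees --
# a patch of sky a few Moon-widths across, the ballpark the article describes.
print(resolution_arcsec(0.21, 10.0))
```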

For this reason, one of the most important upgrades for the VLA and other radio telescopes is the correlator, which synchronizes the signals from all the radio dishes to create a single high-resolution radio image. The EVLA correlator is being built by Canadian researchers and engineers and, like most new computers, will be able to handle much more data much faster.
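As a toy picture of what a correlator does, the sketch below cross-correlates two simulated antenna signals to recover the delay between them; the real EVLA correlator performs this job for dozens of antennas and thousands of frequency channels in real time, so treat this only as a conceptual illustration.

```python
# Toy correlator sketch: recover the relative delay between two noisy copies of one signal.
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(size=4096)               # stand-in for the radio emission from a source
delay = 37                                   # true geometric delay between the dishes, in samples
ant_a = source + 0.5 * rng.normal(size=4096)
ant_b = np.roll(source, delay) + 0.5 * rng.normal(size=4096)

xcorr = np.correlate(ant_b, ant_a, mode="full")
lag = np.argmax(xcorr) - (len(ant_a) - 1)    # lag with the strongest correlation
print(lag)                                   # ~37: the delay the correlator must compensate for
```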

Original here

Astronomy Picture of the Day

Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.

2008 June 6

Two-Armed Spiral Milky Way
Illustration Credit: R. Hurt (SSC), JPL-Caltech, NASA
Survey Credit: GLIMPSE

Explanation: Gazing out from within the Milky Way, our own galaxy's true structure is difficult to discern. But an ambitious survey effort with the Spitzer Space Telescope now offers convincing evidence that we live in a large galaxy distinguished by two main spiral arms (the Scutum-Centaurus and Perseus arms) emerging from the ends of a large central bar. In fact, from a vantage point that viewed our galaxy face-on, astronomers in distant galaxies would likely see the Milky Way as a two-armed barred spiral similar to this artist's illustration. Previous investigations have identified a smaller central barred structure and four spiral arms. Astronomers still place the Sun about a third of the way in from the Milky Way's outer edge, in a minor arm called the Orion Spur. To locate the Sun and identify the Milky Way's newly mapped features, just place your cursor over the image.

Original here

Astronomy study proves mathematics theorem

The gravitational lens G2237 + 0305, dubbed the "Einstein Cross", shows four images of a very distant bright galaxy called a quasar whose light has been bent by a relatively nearby galaxy acting as a gravitational lens (Image: NASA)

A gravitational lens can do more than reveal details of the distant universe. In an unexpected collision of astrophysics and algebra, it seems that this cosmic mirage can also be used to peer into the heart of pure mathematics.

In a gravitational lens, the gravity of stars and other matter can bend the light of a much more distant star or galaxy, often fracturing it into several separate images (see image at right). Several years ago, Sun Hong Rhie, then at the University of Notre Dame in Indiana, US, was trying to calculate just how many images there can be.

It depends on the shape of the lens – that is, how the intervening matter is scattered. Rhie was looking at a lens consisting of a cluster of small, dense objects such as stars or planets. If the light from a distant galaxy reaches us having passed through a cluster of say, four stars, she wondered, then how many images might we see?

She managed to construct a case where just four stars could split the galaxy into 15 separate images, by arranging three stars in an equilateral triangle and putting a fourth in the middle.

Later, she found that a similar shape works in general for a lens made of n stars (as long as n is greater than one), producing 5n - 5 images. She suspected that was the maximum number possible, but she couldn't prove it.

At about the same time, two mathematicians were working on a seemingly unrelated problem. They were trying to extend one of the foundation stones of mathematics, called the fundamental theorem of algebra.

Algebraic roots

It governs polynomial equations, which involve a variable raised to powers. For example, the equation x³ + 4x - 3 = 0 is a polynomial of degree three – the highest power of x is x³.

The fundamental theorem of algebra, proved back in the 18th century, says that a polynomial of degree n has exactly n solutions, counting repeated roots. (Though in general, the variable x has to be a complex number, involving the square root of -1.)

"The fundamental theorem of algebra has been a true beacon, where modern algebra started," says Dmitry Khavinson of the University of South Florida in Tampa, US.

Khavinson and Genevra Neumann of the University of Northern Iowa in Cedar Falls, US, wanted to take this further, by looking at more complicated mathematical objects called rational harmonic functions. These involve one polynomial divided by another.

Upper limit

In 2004, they proved that for one simple class of rational harmonic functions, there could never be more than 5n - 5 solutions. But they couldn't prove that this was the tightest possible limit; the true limit could have been lower.

It turned out that Khavinson and Neumann were working on the same problem as Rhie. To calculate the position of images in a gravitational lens, you must solve an equation containing a rational harmonic function.

When mathematician Jeff Rabin of the University of California, San Diego, US, pointed out a preprint describing Rhie's work, the two pieces fell into place. Rhie's lens completes the mathematicians' proof, and their work confirms her conjecture. So 5n - 5 is the true upper limit for lensed images.
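For readers who want to see the equation in question, here is a compact sketch in standard textbook notation (the symbols are mine, not New Scientist's): the images of a source lensed by n point masses are the solutions of a conjugate-polynomial equation, and the Khavinson-Neumann bound caps how many there can be.

```latex
% Lens equation for n point masses, with positions on the sky written as complex numbers:
% w = source position, z = image position, z_j and eps_j = position and scaled mass of star j.
w = z - \sum_{j=1}^{n} \frac{\varepsilon_j}{\bar{z} - \bar{z}_j}
% Conjugating gives \bar{z} = r(z) with r(z) = \bar{w} + \sum_j \varepsilon_j / (z - z_j),
% a rational function of degree n -- the "rational harmonic function" of the article.
% Khavinson and Neumann's bound, attained by Rhie's configuration:
N_{\text{images}} \;\le\; 5n - 5 \quad (n \ge 2), \qquad n = 4 \;\Rightarrow\; \text{at most } 15.
```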

"This kind of exchange of ideas between math and physics is important to both fields," Rabin told New Scientist.

Rhie no longer works in academia, having run out of funding. "I didn't even bother to submit my papers to journals because I had been so much harassed by the referees [of earlier papers]," she told New Scientist. "I was new to gravitational lensing at that time. What I said and the way I said it must have been unfamiliar to the gravitational lensing experts."

Spread out

Theoretically, the work is valid for any type of gravitational lens, but its practical applications are not yet clear. That's because the objects in Rhie's sample lens all lie in the same plane and are simple point masses, with nothing between them.

Actual gravitational lenses tend to be much more complicated, and can be made up of clusters of hundreds of galaxies. These are spread out over large regions of space and contain a lot of gas between and within individual galaxies.

And although there are gravitational lenses made up of just a few stars or planets, they produce images that are too close together for present-day telescopes to resolve.

But such "microlensing" can reveal the existence of planets around other stars. And in the future, a technique called optical interferometry, which links together the observations of more than one telescope, might make it possible to see the multiple lensed images produced by the planets of another star system.

Original here


5 Things You Didn't Know: DNA

It's inside all of us, telling us how to behave, how to function and how to grow. It's your deoxyribonucleic acid (DNA) and it contains the genetic instructions that essentially make you who you are. In the last decade alone, our use and manipulation of DNA has skyrocketed. We now have genetically modified foods and microorganisms, genetic testing for disease and we even know the full sequence of the human genome -- all attributable to DNA. However, with such power comes controversy. Did you know that the human genome was almost commercialized? Well it was -- almost. Here’s a little more on that story, and a few other things you likely didn’t know about our double-helix friend, DNA.

1- The human genome was almost commercialized

Most are not aware, but the project to sequence the human genome (all of the DNA a human possesses) was a vicious race that pitted public interests against private ones.
The public arm of the race was led by the Human Genome Project (HGP), an international consortium of scientists and researchers working with a price tag of a mere $3 billion U.S. The project began in 1990 and was expected to reach completion in 15 years. There were, however, complications.

In 1998, a privately funded quest was launched by J. Craig Venter, president of Celera Genomics. Working on a budget of only $300 million, Celera's effort was projected to proceed faster than the HGP -- a notion that enraged leaders of the public project, especially when Celera’s intentions were made clear. Celera announced that it would seek patent protection on specific genes and would not permit free distribution of the human genome to public databases. Instead, Celera planned to host a separate database on its private website and charge for any analyses requested by external researchers.

The announcement by Celera sent leaders of the HGP into a frenzy. With both teams in a bid to win the race, dueling media reports began to emerge. A progress update would be released by one party, and then the claim would be disputed by the other. After several failed attempts at a partnership, the two teams managed to publish their results almost simultaneously in February of 2001, citing that the race had sped up the project, to everyone's benefit.

2- DNA testing is used to authenticate foods like caviar

Who knew that the caviar industry was so rife with mystery, excitement and illegal activity? It’s likely that most have no idea. The choice delicacy of the affluent, caviar has for years been regulated worldwide by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (C.I.T.E.S.). C.I.T.E.S. has closely monitored stocks of caviar coming from the Caspian Sea -- the body of water responsible for some 90% of the world’s caviar -- in an effort to protect endangered species, particularly sturgeon. As a result, DNA testing has become a staple method of authentication to ensure that incoming caviar is properly labeled and legitimate. Quite often, however, the testing yields an unfortunate result.
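Conceptually, the authentication works by reading a short stretch of mitochondrial DNA from the roe and comparing it against reference sequences for each sturgeon species. The toy sketch below illustrates that comparison; the sequences are invented placeholders rather than real sturgeon barcodes, and real labs use validated markers and curated databases.

```python
# Toy species check by sequence similarity -- the reference sequences below are invented
# placeholders for illustration, not real sturgeon DNA barcodes.
REFERENCES = {
    "beluga (Huso huso)":                 "ATGCGTACGTTAGCCTAAGC",
    "sevruga (Acipenser stellatus)":      "ATGCGTACGATAGCGTAAGC",
    "osetra (Acipenser gueldenstaedtii)": "ATGCATACGTTAGCGTGAGC",
}

def percent_identity(a, b):
    """Fraction of matching bases between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_match(query):
    """Return the reference species whose sequence most closely matches the query."""
    return max(REFERENCES.items(), key=lambda kv: percent_identity(query, kv[1]))

sample = "ATGCGTACGTTAGCCTAAGC"   # pretend this was read from an imported tin of caviar
species, ref = best_match(sample)
print(species, f"{percent_identity(sample, ref):.0%}")
```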

Back in 2006, a Toronto company was fined for importing caviar from three rare, protected sturgeon species -- the beluga, sevruga and osetra -- whose caviar can fetch up to $150 per ounce. Such reports are merely the tip of the iceberg for an industry dominated by organized crime. "The appetite of smugglers for profit has the potential to extinguish them [sturgeon] from the Earth," said Tom Sansonetti, assistant attorney general of the Justice Department's Environment and Natural Resources Division, in a past interview. Smuggling of caviar is, in fact, so widespread that Environment Canada estimated the global black market for caviar to be worth between $200 million and $500 million in 2005.

3- Full genome DNA testing can be had for as little as $1,000

Have you ever wondered what you looked like on the inside, you know, on a DNA level? Pacific Biosciences, a California biotech company, predicts it will soon be able to sequence an entire human genome in just four minutes -- and only for a cool grand.

Now, to some of you, $1,000 is a lot of money, but just think about the big picture. Pacific Biosciences is promising a service that took the Human Genome Project almost 10 years and $3 billion to achieve -- available in four minutes and for far less. The ramifications are truly remarkable, establishing a new precedent in personal genomics. In January 2008, Knome and the Beijing Genomics Institute (BGI) announced that they would sequence entire genomes for $350,000. And if you think that’s expensive, an anonymous Chinese citizen paid $1.3 million to have their genome sequenced by BGI in 2007. Talk about a market in flux.

So, what’s the point? Paying such ludicrous prices to have your genome sequenced is kind of silly at this time, as most researchers don’t yet know where to begin when looking for genes that cause disease. Sure, some hazardous genes have been identified, but sifting through the entire genome for a detailed prediction is like finding a needle in a haystack.

4- The first-ever DNA evidence cleared a murder suspect

In 1983 and 1986 two schoolgirls were found raped and murdered in the small town of Narborough, Leicestershire, sparking a murder hunt that led to the conviction of local man Colin Pitchfork -- but not before police arrested and nearly convicted their prime suspect, an unnamed local boy.

Apparently, the young boy knew physical details about one of the bodies and actually confessed to one of the murders. Convinced that the boy had committed both crimes, officers sought further confirmation. Using a crude DNA extraction technique, Dr. Peter Gill analyzed semen samples from the victims and found that they did not match the DNA of the young boy. The boy thus became the first-ever suspect exonerated by DNA evidence.

Amazingly, police then conducted the world’s first DNA-led mass screening, sampling blood from virtually all the men in three surrounding villages. After 5,000 samples, however, no match was found -- the murderer had pulled the old switcheroo. A friend of the murderer was overheard some time later talking about how he had given his own blood while posing as Colin Pitchfork. Pitchfork, a local baker, was soon arrested and sentenced to life in prison for the double murder.

Since this historic case, DNA evidence has become a primary tool in numerous criminal convictions, providing an unparalleled level of confidence: the odds that an individual with a matching DNA profile was selected by chance alone are about one in one billion. There’s no arguing with that.
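That one-in-a-billion figure comes from multiplying together the population frequency of the matching genotype at each tested marker. A simplified sketch of the arithmetic follows; the per-locus frequencies are made up for illustration, and real casework uses validated frequency databases plus corrections for relatives and population structure.

```python
# Simplified random-match-probability sketch; the per-locus genotype frequencies are invented.
from math import prod

locus_frequencies = [0.12, 0.09, 0.11, 0.10, 0.08, 0.13, 0.09, 0.10, 0.11]

match_probability = prod(locus_frequencies)        # multiply across independent markers
print(f"1 in {1 / match_probability:,.0f}")        # roughly one in a billion with these numbers
```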
DNA is a part of us and will continue to be a strong part of scientific studies and research.

5- DNA says Genghis Khan was a prolific lover

Genghis Khan, one of history's most fearsome warriors, was the ruler and emperor of the Mongol Empire. According to recent DNA evidence, it is also very likely that he personally helped populate his empire.

In 2003, an international group of 23 geneticists published results of a study that examined the Y chromosomes of 2,123 men from across Asia. The study found that nearly 8% of the men living in the region of the former Mongol empire carried nearly identical Y-chromosomes. The Y chromosome is passed down from father to son to grandson, and so on. Extrapolating the data a bit further, the researchers estimated that nearly 16 million descendants carried similar Y chromosomes.

While there are various explanations as to how this phenomenon could have arisen, the authors attribute the most likely scenario to the prolific seed-sowing of none other than Genghis Khan. Khan’s empire spanned most of Asia, and he frequently slaughtered all who opposed him, thus limiting genetic variation. Khan often got first pick of the women he conquered, whom he frequently raped, and some of his sons were noted as having as many as 40 children. Putting all the evidence together, one can’t help but consider Genghis Khan the most prolific man history has ever known -- except for maybe Jesus, whose purported DNA was recently unearthed according to a controversial documentary, The Lost Tomb of Jesus -- but that’s an entirely different story.

Searched

DNA is the building block of all living organisms, and with our increasing understanding of disease, we are entering a new genetic era in which testing our DNA will become as commonplace as turning your head and coughing.

Interest

Like a first love, DNA will attract infatuation that will burn through the ages -- at least for science buffs. As long as humankind quests to further human health, we will tinker with DNA. Whether the public remains interested depends on how far we choose to manipulate DNA. With human cloning and other controversial bombshells waiting just around the corner, expect DNA to remain in the public eye for decades to come.

Original here

Agilent ups ante with new GC/MS and LC/MS technology

The American Society for Mass Spectrometry meeting (ASMS) was the scene of a lot of activity from Agilent this week as the lab instrument specialist unleashed new mass spectrometer products at the show.
It was a chance for the research community to get a sneak preview of new technology that Agilent says smashes the sensitivity barriers that restrict current gas chromatography/mass spectrometry (GC/MS) and liquid chromatography/mass spectrometry (LC/MS) machines.

LC-MS is used to analyse a wider range of compounds than GC-MS. Large, polar and thermally labile analytes that are not amenable to GC may be studied by LC-MS without derivatization.

Agilent's 7000A triple quadrupole GC/MS is designed to achieve femtogram-level sensitivity and make high-speed Multiple Reaction Monitoring (MRM) accessible to a range of commercial and government users.

The 7000A backs up that claim by delivering femtogram-level sensitivity: the system will detect 100 fg of octafluoronaphthalene (OFN) on column at greater than a 100:1 signal-to-noise ratio in MS/MS mode using Autotune parameters.

The mass range is 1,050 u, and a fast MRM speed of 500 transitions per second enables users to determine more compounds per ion group than with comparable instruments.

Agilent also unveiled the 6460 triple quadrupole LC/MS, a system that lowers detection limits fivefold compared with older instruments, allowing sub-femtogram detection levels for many compounds.

The system is designed to excel at the analysis of trace-level environmental and food contaminants, pharmaceutical compounds, metabolites and protein biomarkers.

"In just two years, the 6400 triple quad LC/MS product line has demonstrated huge performance gains," said Gustavo Salem, Agilent vice president and general manager, LC/MS Division.

"Dramatic sensitivity gains, faster polarity switching, faster and more MRMs in a method, using either time segments or scheduling, and HPLC improvements are a few examples," he added.

The 6460 packs some serious technology to achieve this performance and includes a resistively coated sampling capillary that enhances ion transmission and enables rapid ion-polarity switching with minimal ion loss.

Also included is a high-capacity vacuum system with a second turbomolecular pump that increases conductance throughout the mass analyzer.

According to BCC Research, the US mass spectrometry market was worth about $1.3bn in 2005, with an average annual growth rate of almost 9 per cent projected through 2010, when it is expected to reach roughly $2bn.

This market includes standalone mass spectrometry along with hyphenated techniques that couple gas or liquid chromatography to mass spectrometry.

It is estimated that the total US market exceeded $3.6bn in 2005 and will grow at an average rate of 7.7 per cent per year through 2010, when it is forecast to achieve sales of more than $5.2bn.

Original here

Exponential Technologies: Cheer Up World—We Are On the Verge of Great Things

At the recent World Science Festival in New York City, Ray Kurzweil outlined why he is certain that the future isn’t as dreary as it’s been painted, and why we are closer to the incredible than we think: exponential upward curves can be deceptively gradual in the beginning, but when things start happening, they happen fast. Here is a selection of his predicted trajectories for these “miracles,” based on his educated assessment of where science and technology stand at present.

· Within 5 years, the exponential progress in nanoengineering will make solar power cost-competitive with fossil fuels

· Within 10 years we will have a pill that allows us all to eat whatever we feel like and never gain any unwanted weight

· In 15 years, life expectancies will start rising faster than we age

· In about 20 years 100% of our energy will come from clean and renewable sources, and a computer will pass the Turing Test by carrying on a conversation that is indistinguishable from a human’s.

Commenting on the validity of Kurzweil’s predictions, John Tierney notes in the New York Times that Kurzweil has been uncannily accurate in the past:

“It may sound too good to be true, but even his critics acknowledge he’s not your ordinary sci-fi fantasist. He is a futurist with a track record and enough credibility for the National Academy of Engineering to publish his sunny forecast for solar energy. He makes his predictions using what he calls the Law of Accelerating Returns, a concept he illustrated at the festival with a history of his own inventions for the blind.

In 1976, when he pioneered a device that could scan books and read them aloud, it was the size of a washing machine. Two decades ago he predicted that “early in the 21st century” blind people would be able to read anything anywhere using a handheld device. In 2002 he narrowed the arrival date to 2008. On Thursday night at the festival, he pulled out a new gadget the size of a cellphone, and when he pointed it at the brochure for the science festival, it had no trouble reading the text aloud. This invention, Dr. Kurzweil said, was no harder to anticipate than some of the predictions he made in the late 1980s, like the explosive growth of the Internet in the 1990s and a computer chess champion by 1998.”

Kurzweil backed up his claims at the conference with charts and graphs that showed some of the exponential advancements of the past. One graph showed how computing power has grown since the first electromechanical machines more than a century ago. Initially, computing power doubled every three years. At mid-century, it began to double every two years, the rate that inspired Moore’s Law. It now takes only a year. Another graph showed technological change going back millions of years, starting with stone tools and working its way up to modern computers.

“Certain aspects of technology follow amazingly predictable trajectories,” Kurzweil noted. Hopefully, the popular sci-fi plot where uncontrolled science and technology dooms mankind has gotten it backwards. If Kurzweil is right, the future isn’t as bleak as many claim, and science may well turn out to be our savior.

Posted by Rebecca Sato

Original here

Plastic brain outsmarts experts

Training can increase fluid intelligence, once thought to be fixed at birth



Fluid intelligence, an aspect of a person's IQ, allows people to solve unfamiliar problems by understanding relationships between various concepts independent of previous knowledge or skills. Research shows that training...

Can human beings rev up their intelligence quotients, or are they stuck with IQs set by their genes at birth? Until recently, nature seemed to be the clear winner over nurture.

But new research, led by Swiss postdoctoral fellows Susanne M. Jaeggi and Martin Buschkuehl, working at the University of Michigan in Ann Arbor, suggests that at least one aspect of a person's IQ can be improved by training a certain type of memory.

Most IQ tests attempt to measure two types of intelligence--crystallized and fluid intelligence. Crystallized intelligence draws on existing skills, knowledge and experiences to solve problems by accessing information from long-term memory.

Fluid intelligence, on the other hand, draws on the ability to understand relationships between various concepts, independent of any previous knowledge or skills, to solve new problems. The research shows that this part of intelligence can be improved through memory training.

"When it comes to improving intelligence, many researchers have thought it was not possible," says Jaeggi. "Our findings clearly show this is not the case. Our brain is more plastic than we might think."

Jaeggi, Buschkuehl and Walter Perrig from Bern University, Switzerland, along with Jon Jonides, their National Science Foundation-supported colleague from the University of Michigan, reasoned that just as crystallized intelligence relies on long-term memory, fluid intelligence relies on short-term memory, or "working memory," as it is more accurately called. This is the same type of memory people use to remember a phone number or an e-mail address for a short time, but beyond that, working memory refers to the ability to both manipulate and use information briefly stored in the mind in the face of distraction.

Researchers gathered four groups of volunteers and trained their working memories using a complex training task called "dual n-back training," which presented both auditory and visual cues that participants had to temporarily store and recall.
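In a dual n-back task, the participant watches a stream of positions on a grid while hearing a stream of letters, and must flag whenever the current position or letter matches the one presented n trials earlier. The sketch below shows only that core matching logic; it is an illustration of the task, not the researchers' actual training software, and it omits timing, presentation and the adaptive difficulty they used.

```python
# Minimal dual n-back sketch -- illustrative only, not the study's training program.
import random

def make_stream(length, items):
    """Generate a random stimulus stream (visual positions or auditory letters)."""
    return [random.choice(items) for _ in range(length)]

def nback_targets(stream, n):
    """Indices at which the current item matches the item n steps earlier."""
    return {i for i in range(n, len(stream)) if stream[i] == stream[i - n]}

n = 2
positions = make_stream(20, range(9))      # visual stream: 9 possible grid positions
letters   = make_stream(20, "CHKLQRST")    # auditory stream: spoken letters

print("visual matches at trials:", sorted(nback_targets(positions, n)))
print("audio matches at trials: ", sorted(nback_targets(letters, n)))
```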

Participants received the training during a half-hour session held once a day for either eight, 12, 17 or 19 days. For each of these training periods, researchers tested participants' gains in fluid intelligence. They compared the results against those of control groups to be sure the volunteers actually improved their fluid intelligence, not merely their test-taking skills.

The results were surprising. While the control groups made gains, presumably because they had practice with the fluid intelligence tests, the trained groups improved considerably more than the control groups. Further, the longer the participants trained, the larger were their intelligence gains.

"Our findings clearly show that training on certain memory tasks transfer to fluid intelligence," says Jaeggi. "We also find that individuals with lower fluid intelligence scores at pre-test could profit from the training."

The results are significant because improved fluid intelligence scores could translate into improved general intelligence as measured by IQ tests. General intelligence is a key to determining life outcomes such as academic success, job performance and occupational advancement.

Researchers also surmise that this same type of memory training may help children with developmental problems and older adults who face memory decline. But that remains to be seen, because the test results are based on assessments of young, healthy adult participants.

"Even though it currently appears very hard to improve these conditions, there might be some memory training related to intelligence that actually helps," says Jaeggi. "The saying 'use it or lose it' is probably appropriate here."

Since it is not known whether the improvements in fluid intelligence last after the training stops, researchers currently are measuring long-term fluid intelligence gains with both laboratory testing and long-term field work. Researchers say it will be some time before a complete data set is available to draw any conclusions.

Original here



Pretty on the Inside

Detailed 3-D images of cells reveal the inner beauty of biology.
Biological beauty: This image of two adjoining cells preparing to divide was made with a new high-resolution 3-D microscope developed at the University of California.
Credit: Lothar Schermelleh, Peter Carlton

There is a revolution afoot in microscopy, as biophysicists come up with ways to image the nanoscale structures of living cells. Using a new technique called 3-D structured-illumination microscopy, researchers at the University of California, San Francisco, have made some of the most detailed optical images yet of the interior workings of cells, and they are gorgeous.

The resolution of conventional microscopes is limited by the size of the spot of light used to scan a surface. For more than a hundred years, biophysicists have run up against a fundamental limit: using lenses, it's not possible to focus light down to a spot size smaller than half its wavelength. So the inner workings of living cells have been impossible to resolve. Biologists have sequenced the genome, but it's still something of a mystery how DNA, RNA, proteins, and other molecules interact in live cells. These parts are visible using electron microscopy, but this process can only be employed on dead cells. Images of live cells taken with conventional light microscopes reveal only a blur. Understanding the inner workings of cells could shed light on disease.

"We threw the conventional microscope out the window and began again," says John Sedat, a professor of biochemistry and biophysics at the University of California, San Francisco. Instead of focusing a small spot of light onto cells, the new microscope, which has a resolution of about 100 nanometers, illuminates cells with stripes of light called an interference pattern. When a fine cellular structure, such as a single cluster of proteins embedded in a cell nucleus, reflects this light, it changes the pattern slightly. The microscope collects this light; software is used to interpret changes in its pattern and create an image.

Sedat and his group played a major role in developing this technique, initially for two-dimensional imaging. Their new work, described this week in the journal Science, involved creating 3-D images of the nucleus, the structure that holds the lion's share of the genome. The next step, says Sedat, is to decrease the amount of cell-damaging light needed to make the pictures to ensure that the cells remain healthy during the imaging process.


Original here

CleanTech Biofuels to Turn Dirty Diapers Into Ethanol


CleanTech Biofuels is serious about turning garbage into fuel and sincerely hopes you’ll ignore the fact that your car’s fuel tank could be carrying what’s left of little Timmy’s soiled nappies.

The company has announced that it’s investigating suitable sites for commercial garbage-to-ethanol facilities — leading baby-owners everywhere to rejoice that they may never again have to feel guilty about throwing out enough diapers each day to put the elephant in this commercial to shame (and can I just be the first to say “WTF?” to that commercial).

Over the last month CleanTech Biofuels has formed major partnerships with Green Tech America and HFTA UCal Berkeley to purchase and develop novel equipment and methods they hope will make the production of ethanol from garbage a reality. CleanTech boasts that their technology can be used to produce ethanol locally using waste that would otherwise end up in landfills — potentially reducing waste disposed of in those landfills by as much as 90%. From a recent CleanTech press release:

It is estimated that Americans [each] produce 4.4 pounds of waste per day, or 229 million tons of trash annually nationwide. This waste represents a virtually endless source of cellulosic feedstock for the production of biofuels that potentially will be available to CleanTech at almost no cost, and in some locations at a profit.
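The two figures in that press release are easy to sanity-check against each other; in the rough sketch below, the roughly 300 million U.S. population is an assumed value, not a number from CleanTech.

```python
# Back-of-the-envelope check of the press-release figures; the population is an assumption.
pounds_per_person_per_day = 4.4
us_population = 300e6            # assumed, roughly the 2008 U.S. population
short_ton_lbs = 2000

annual_tons = pounds_per_person_per_day * us_population * 365 / short_ton_lbs
print(f"{annual_tons / 1e6:.0f} million tons per year")  # ~241 million, same ballpark as the quoted 229
```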

The comment about receiving feedstock at a profit is what really intrigues me. As far as I know, there are no other types of ethanol production facilities that have the potential to receive feedstock at a profit. In fact, in most cases this is a major sticking point between making cellulosic ethanol at an acceptable price and seeing dreams go down the tubes.

CleanTech isn’t alone in the push to make ethanol from waste. BlueFire Ethanol (why do all these ethanol company names have to be two words shoved together but both still capitalized?) recently announced that it will be starting construction of a facility within weeks to convert landfill waste into ethanol, and Coskata Inc. is also constructing a demonstration facility that will use municipal waste as a feedstock.

It appears that these companies are on the path to becoming major competitors. They should just merge now and avoid the future pain. Does CleanFireCoskataBlueTech sound like a good name to you?

The great part about making fuel from garbage is that many communities already pay fees to garbage companies to accept trash - referred to as “tipping fees.” CleanTech is looking to site their facilities in communities with favorable tipping fees, allowing them to get paid before they even start selling the ethanol.

If CleanTech or BlueFire are successful, their ethanol could be the cheapest around — and you could relax knowing that those old Pokemon cards you finally threw out might actually be doing some good.

Original here

$45 trillion needed to combat warming

By JOSEPH COLEMAN, Associated Press Writer

TOKYO - The world needs to invest $45 trillion in energy in coming decades, build some 1,400 nuclear power plants and vastly expand wind power in order to halve greenhouse gas emissions by 2050, according to an energy study released Friday.

The report by the Paris-based International Energy Agency envisions an "energy revolution" that would greatly reduce the world's dependence on fossil fuels while maintaining steady economic growth.

"Meeting this target of 50 percent cut in emissions represents a formidable challenge, and we would require immediate policy action and technological transition on an unprecedented scale," IEA Executive Director Nobuo Tanaka said.

A U.N. network of scientists concluded last year that emissions have to be cut by at least half by 2050 to avoid an increase in world temperatures of between 3.6 and 4.2 degrees Fahrenheit above pre-18th century levels.

Scientists say temperature increases beyond that could trigger devastating effects, such as widespread loss of species, famines and droughts, and swamping of heavily populated coastal areas by rising oceans.

Environment ministers from the Group of Eight industrialized countries and Russia backed the 50 percent target in a meeting in Japan last month and called for it to be officially endorsed at the G-8 summit in July.

The IEA report mapped out two main scenarios: one in which emissions are reduced to 2005 levels by 2050, and a second that would bring them to half of 2005 levels by mid-century.

The scenario for deeper cuts would require massive investment in energy technology development and deployment, a wide-ranging campaign to dramatically increase energy efficiency, and a wholesale shift to renewable sources of energy.

Assuming an average 3.3 percent global economic growth over the 2010-2050 period, governments and the private sector would have to make additional investments of $45 trillion in energy, or 1.1 percent of the world's gross domestic product, the report said.

That would be an investment more than three times the current size of the entire U.S. economy.
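The relationship between the $45 trillion total and the 1.1 percent-of-GDP figure can be reproduced roughly by summing world output over 2010-2050 at the report's 3.3 percent growth rate; in the sketch below, the $50 trillion starting world GDP is an assumed figure, not one taken from the IEA report.

```python
# Rough reproduction of the IEA framing; the ~$50 trillion starting world GDP is an assumption.
start_gdp_trillion = 50.0     # assumed world GDP in 2010, in trillions of dollars
growth = 0.033                # the report's assumed average annual growth rate

cumulative_gdp = sum(start_gdp_trillion * (1 + growth) ** year for year in range(41))  # 2010-2050
print(f"cumulative GDP ~ ${cumulative_gdp:,.0f} trillion")        # ~ $4,200 trillion
print(f"1.1% of that   ~ ${0.011 * cumulative_gdp:,.0f} trillion")  # ~ $46 trillion, near the report's $45T
```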

The second scenario also calls for an accelerated ramp-up of so-called "carbon capture and storage" technology, which allows coal-fired power plants to catch emissions and inject them underground.

The study said that an average of 35 coal-fired and 20 gas-fired power plants would have to be fitted with carbon capture and storage equipment each year between 2010 and 2050.

In addition, the world would have to construct 32 new nuclear power plants each year, and wind-power capacity would have to grow by 17,000 turbines annually. Nations would have to achieve an eight-fold reduction in carbon intensity — the amount of carbon emitted per unit of energy — in the transport sector.

Such action would drastically reduce oil demand to 27 percent of 2005 demand. Failure to act would lead to a doubling of energy demand and a 130 percent increase in carbon dioxide emissions by 2050, IEA officials said.

"This development is clearly not sustainable," said Dolf Gielen, an IEA energy analyst and leader for the project.

Gielen said most of the $45 trillion forecast investment — about $27 trillion — would be borne by developing countries, which will be responsible for two-thirds of greenhouse gas emissions by 2050.

Most of the money would be in the commercialization of energy technologies developed by governments and the private sector.

"If industry is convinced there will be policy for serious, deep CO2 emission cuts, then these investments will be made by the private sector," Gielen said.

Original here

Greenhouse Graveyard: New Progress for Big Global Warming Fix

Scientists admit it will be tough to capture a key greenhouse gas and bury the CO2 in the ground, in rock or underwater. What’s even tougher for carbon sequestration: figuring out where to store it.
Locking Up CO2: Once carbon dioxide (CO2) has been stripped from smokestacks, engineers face a bigger problem: where to put it. A popular proposal is to pump it as a liquid into geologic formations underground (the depths would vary depending on location). Other ideas include adding it to a rock formation in Oman or spreading it onto the ocean floor. (Illustration by Headcase Design)
The 6-ft.-high steel wellhead jutting up from a weedy field next to the R.E. Burger coal-fired power plant in eastern Ohio doesn't look momentous. If anything, it resembles a supersize fire hydrant. Yet the wellhead and the borehole extending through a mile and a half of rock below it represent an ambitious goal: nothing short of entombing global warming.
Sponsored by the Department of Energy, the Electric Power Research Institute and a host of public and private partners, this experiment is one of several designed to bury carbon dioxide (CO2)—the climate-changing byproduct of burning fossil fuels—permanently, deep in the Earth. Called carbon sequestration, the process seems straightforward: Capture the gas, just as power plants today filter out pollutants such as soot or sulfur dioxide, then find places—underground, in the oceans or elsewhere—to dispose of it.

In practice, simply capturing CO2 is a daunting task, both energy-intensive and costly. But this pales beside the task of storing so much carbon. Bradley Jones, a vice president at the giant Texas utility TXU, has compared the dilemma to a small dog chasing a car. "Once he catches it, he's got to figure out what to do with it."

A single 1000-megawatt coal-fired power plant can send 6 million tons of CO2 up its stack annually—as much as two million cars. Hundreds of such plants around the world spew more than one-third of the 25 billion metric tons of CO2 humans pump into the atmosphere each year, with no sign of slowing. More than 100 new coal-burning power plants are on utility company drawing boards in the United States. China plans to commission about one new coal-burning plant every week for the next five years. In October, an international study based on a series of meetings of national academies of science concluded that coal emissions present the single greatest challenge in combating global warming.

Even compressed to a liquid, the amount of CO2 produced by a 1000-megawatt power plant over its 60-year lifetime is staggering: the equivalent of 3 billion barrels of oil. Underground storage for that much CO2 would be six times larger than what the oil industry calls a giant—a field with reserves of at least 500 million barrels. Multiply that by hundreds of power plants, and the sequestration challenge might seem overwhelming. According to Howard Herzog, a senior research engineer at the Massachusetts Institute of Technology, it would become a new global industry. "The amount of oil we consume in one day might be similar to the amount of CO2 we'll have to handle daily," he says. Scientists and engineers are laying the groundwork for carbon storage now, with proposals that range from proven technology to borderline fantastical.
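That 3-billion-barrel comparison follows from the plant's lifetime CO2 output and the density of the compressed gas; in the rough check below, the 0.7-tonne-per-cubic-meter density for supercritical CO2 at reservoir conditions is an assumed value, not a figure from the article.

```python
# Rough volume check for a 1000 MW coal plant's lifetime CO2; the density figure is an assumption.
annual_co2_tons = 6e6         # from the article: ~6 million tons per year
lifetime_years = 60
density_t_per_m3 = 0.7        # assumed density of supercritical CO2 at reservoir conditions
m3_per_barrel = 0.159

volume_m3 = annual_co2_tons * lifetime_years / density_t_per_m3
print(f"{volume_m3 / m3_per_barrel / 1e9:.1f} billion barrels")  # ~3.2 billion, matching the article
```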

Lock It in Saline Vaults

Later this year, a prototype ammonia-based filtering device will begin capturing a fraction of the CO2 from the Burger power plant. The recovered gas, pressurized to a supercritical state, will flow 5000 ft. down the borehole into a vast formation of porous sandstone filled with brine.

In a small trailer a few feet from the wellhead, site manager Phil Jagucki, of Battelle Laboratories, points to a diagram of rock strata—and to a layer of dense and, project sponsors hope, impermeable rock that lies above the sandstone. "That's the cap rock, the containment layer that should keep the carbon where we put it," Jagucki says.

It should work: Similar formations entomb oil and natural gas for millions of years—or at least until drillers punch through. But are there enough geologic containers with tightly sealed lids to hold industry's CO2? A recent study estimated that deep saline formations in Pennsylvania could store 300 years' worth of emissions from the state's 79 coal-fired plants.

Since 1996, the Norwegian oil company Statoil has injected 10 million metric tons of CO2 into sandstone below the floor of the North Sea. Seismic time-lapse surveys show that, so far, a thick layer of shale has prevented the CO2 from migrating out. Statoil scientists say that this single saline formation is so large that, in theory, it could store all the carbon dioxide produced over the next several centuries by Europe's power plants.

Use It to Recover Oil

On the windswept plains of North Dakota sits another example of carbon capture—and 205 miles to the north, a different version of carbon storage. Since 2002, the pure stream of carbon dioxide produced by the Great Plains Synfuels Plant—a byproduct of synthesizing natural gas from a soft brown coal called lignite—has been compressed by 20,000-hp engines and piped to the Canadian province of Saskatchewan. There, it is forced about a mile underground into a formerly depleted oil field, scouring out petroleum that otherwise wouldn't have been recovered and replacing it with the greenhouse gas. Over the next 20 years the energy company EnCana expects to increase the field's total yield by about half while storing some 20 million tons of carbon dioxide.

The process, known as enhanced oil recovery, isn't new: For years drillers have been injecting commercially produced carbon dioxide to recover oil. But this is the first project to use CO2 captured from waste.
To lock up harmful CO2, Norway’s Statoil injects the gas into an aquifer below the North Sea.

Use It to Recover Gas

Not every major emission source will be near a suitable oil field. So where else might the CO2 go? One promising answer: back from whence it came, into coal fields. Methane (natural gas) is tenaciously bound, or adsorbed, to the surface of coal. Some early experiments have shown that carbon dioxide gloms onto coal even more readily. When pumped into mines with unrecoverable seams, CO2 displaces the methane, which can then be brought to the surface and sold; the coal, meanwhile, locks up the carbon dioxide.

Turn It Into Rock

Late in 2008, scientists at the Pacific Northwest National Laboratory plan to begin experimentally injecting 1000 tons of CO2 into porous volcanic basalt in Washington state. Within two to three years, the lab's studies suggest, the carbon dioxide will begin a chemical transformation to a mineral.

First, some of the CO2 will react with water trapped in the basalt, forming weak carbonic acid. "Think of the acidity of orange juice," says laboratory fellow Peter McGrail. The acid should dissolve calcium in the basalt, which in turn will react with more CO2 to form calcium carbonate—in effect, limestone. According to McGrail, such basalt occurs worldwide, including under large expanses of India, with enormous storage potential.
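Written out, the mineralization McGrail describes is ordinary carbonate chemistry; the schematic reactions below are a simplified summary, not the full reaction network studied in the basalt experiments.

```latex
% Schematic carbonation of CO2 injected into wet basalt (simplified overall reactions).
\mathrm{CO_2 + H_2O \;\longrightarrow\; H_2CO_3}
\qquad
\mathrm{Ca^{2+} + H_2CO_3 \;\longrightarrow\; CaCO_3\!\downarrow + 2\,H^+}
```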

Columbia University geologist Peter Kelemen has begun researching a way to use surface formations to seize CO2 straight from the atmosphere. About half of the Sultanate of Oman, a Kansas-size country on the Persian Gulf, is dominated by a rock typically found in the ocean crust. Called peridotite, it reacts with carbon dioxide and water to form carbonates.

Kelemen says that deposits of peridotite in Oman are so extensive—in fact, sufficient to store the excess CO2 now in the air many times over—it might be possible to accelerate this natural process and solidify significant amounts of the greenhouse gas.

Pipe It Into the Ocean

In a more unlikely scenario, some scientists have proposed that large quantities of carbon dioxide could be stored on the bottom of the deepwater ocean, where high pressures would compress the gas into liquid form. Denser than seawater, the liquefied CO2 would pool on the seabed. But no one yet knows whether it would stay put or harm the marine environment.

In the nearer term, oil recovery and other established technologies offer the best odds. MIT's Herzog says he's concerned that policymakers are not moving more aggressively. Yet he remains confident that massive sequestration infrastructure can be built, whatever form it takes.

"Do you think," Herzog says, "when Henry Ford drove his first car down the road he envisioned the infrastructure that's arisen—millions of miles of highway, millions of gas stations, supertankers hauling oil around the world? We're not technologically ready to build all the infrastructure now, but we know enough to start."

Original here

The Milky Way Gets a Facelift

Picture of the Milky Way

Fresh look.
Recent surveys of the Milky Way show it contains a prominent central bar feature (bottom), distinguishing it from other galaxies of the classic spiral variety (top).

Credit: (top) NASA/Spitzer Space Telescope; (bottom) NASA/JPL-Caltech/R. Hurt (SSC/Caltech)

Forget what you thought the Milky Way looked like. The galaxy is far from the simple and elegant spiral-armed structure so often portrayed. New observations, presented today at the 212th meeting of the American Astronomical Society in St. Louis, Missouri, reveal, among other things, that the Milky Way is missing two of the four spiral arms it was thought to have. The findings should force a significant rethinking about how the Milky Way evolved and how its stars formed.

Mapping the Milky Way is extremely difficult. William Herschel first tried it in 1785 by counting stars with his telescope. But even with improved modern instruments, astronomers have faced several challenges. For starters, our solar system sits on the Milky Way's outskirts, along a branch of one of the spiral arms, so there isn't a panoramic view of the main structure. In addition, our vantage point is obscured by many stars and large clouds of interstellar dust.

Now two teams have pierced that veil with unprecedented clarity. One team used the Spitzer Space Telescope, which can see through dust, to chart the positions and orbital speeds of more than 110 million stars. They discovered a big surprise: Two of the galaxy's four spiral arms are actually just small side-branches. On the other hand, the central bar of the galaxy turns out to be nearly twice as big as previously thought, Spitzer team member Robert Benjamin of the University of Wisconsin, Madison, said at a teleconference today.

Meanwhile, another team probed the galaxy with the Very Long Baseline Array, which links widely separated radio dishes into a single instrument so powerful that you could use it to read a newspaper on the moon. They have learned that many young stars in the spiral arms orbit the galactic center more slowly than calculations suggest. Analyses of their motions reveal what happened: the stars were born when gravity compressed interstellar gas clouds, and the new stars got kicked out of the circular orbits of their parent clouds and into more elliptical paths.

"The new data give a much, much better picture of what's going on," says theoretical astrophysicist Avi Loeb, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, who was not involved in the survey. So theorists should be able to improve their models of the Milky Way significantly and "perhaps gain a better understanding of how galaxies of its type are organized."

Original here

Team hopes to use new technology to search for ETs

A Johns Hopkins astronomer is a member of a team briefing fellow scientists about plans to use new technology to take advantage of recent, promising ideas on where to search for possible extraterrestrial intelligence in our galaxy.


Richard Conn Henry, a professor in the Henry A. Rowland Department of Physics and Astronomy at Johns Hopkins' Zanvyl Krieger School of Arts and Sciences, is joining forces with Seth Shostak of the SETI Institute and Steven Kilston of the Henry Foundation Inc., a Silver Spring, Md., think tank, to search a swath of the sky known as the ecliptic plane. They propose to use the new Allen Telescope Array, operated as a partnership between the SETI Institute in Mountain View, Calif., and the Radio Astronomy Laboratory at the University of California, Berkeley.

Comprising hundreds of specially produced small dishes that marry modern, miniaturized electronics and innovative technologies with computer processing, the ATA provides researchers with the capability to search for possible signals from technologically advanced civilizations elsewhere in our galaxy – if, in fact, such civilizations exist and are transmitting in this direction.

Employing this new equipment in a unique, targeted search for possible civilizations enhances the chances of finding one, in the same way that a search for a needle in a haystack is made easier if one knows at least approximately where the needle was dropped, said Henry, who is speaking about the proposal at the American Astronomical Society annual meeting in St. Louis.

According to the researchers, the critical place to look is in the ecliptic, a great circle around the sky that represents the plane of Earth's orbit. The sun, as viewed from Earth, appears annually to pass along this circle. Any civilization that lies within a fraction of a degree of the ecliptic could annually detect Earth passing in front of the sun. This ecliptic band comprises only about 3 percent of the sky.


"If those civilizations are out there – and we don't know that they are – those that inhabit star systems that lie close to the plane of the Earth's orbit around the sun will be the most motivated to send communications signals toward Earth," Henry said, "because those civilizations will surely have detected our annual transit across the face of the sun, telling them that Earth lies in a habitable zone, where liquid water is stable. Through spectroscopic analysis of our atmosphere, they will know that Earth likely bears life.

"Knowing where to look tremendously reduces the amount of radio telescope time we will need to conduct the search," he said.

Most of the 100 billion stars in our Milky Way galaxy are located in the galactic plane, forming another great circle around the sky. The two great circles intersect near Taurus and Sagittarius, two constellations opposite each other in the Earth's sky – areas where the search will initially concentrate.

"The crucial implication is that this targeted search in a favored part of the sky -- the ecliptic stripe, if you will – may provide us with significantly better prospects for detecting extraterrestrials than has any previous search effort," Kilston said.

Ray Villard of the Space Telescope Science Institute, who will join the team in its observations, said that in November 2001, STScI publicized Hubble Space Telescope observations of a transiting planet and "it occurred to me that alien civilizations along the ecliptic would likely be doing similar observations to Earth."

"Once they had determined Earth to be habitable, they might initiate sending signals," Villard said.

Shostak of SETI notes that the Allen Telescope Array is ideal for the team's plans to search the entire ecliptic over time, and not just the intersections of the ecliptic and galactic planes.

The team's presentation at the AAS meeting also explores possible scenarios for the appearance of civilizations in our galaxy.

"These models are nothing but pure speculation. But hey … it is educational to explore possibilities," Henry said. "We have no idea how many – if any – other civilizations there are in our galaxy. One critical factor is how long a civilization – for example, our own – remains in existence. If, as we dearly hope, the answer is many millions of years, then even if civilizations are fairly rare, those in our ecliptic plane will have learned of our existence. They will know that life exists on Earth and they will have the patience to beam easily detectable radio (or optical) signals in our direction, if necessary, for millions of years in the hope, now realized, that a technological civilization will appear on Earth."


Original here

Discovery of Earth-scale Exoplanet Signals Multitude of Planets in Universe

Astronomers have found the first Earth-scale exoplanet, three thousand light-years away. The planet was discovered by the science-fiction-sounding method of gravitational microlensing, and shows that there might be far more planets out there than we ever suspected.

The planet is three times the size of Earth, which might sound like a fairly significant difference, but you have to remember that in astronomical terms even being within a factor of ten is an amazing similarity. Every other exoplanet yet discovered has been much larger, many times the size of our own solar system's heavyweight gas giant Jupiter. This has more to do with the sensitivity of our measurements than with the makeup of the universe, however, which is where new tools and methods like the Microlensing Observations in Astrophysics (MOA) telescope-camera based in New Zealand come in.

Gravitational lensing was first used as a proof of general relativity, the theory that a massive object like a star bends space and light around it. If two stars and the Earth are in precise alignment, the middle star will bend the light from the farther star toward Earth, making it appear brighter than it otherwise would - that is to say, 'lensing' it. Microlensing studies examine these events even more carefully and can reveal planets orbiting the lens star from minute variations in the focused light.
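For reference, what a microlensing survey such as MOA actually measures is a temporary brightening described by a standard formula; the notation below is the usual textbook form rather than anything quoted in this article, and a planet around the lens star shows up as a brief extra blip on this otherwise smooth curve.

```latex
% Magnification of a point source by a single point-mass lens: u is the lens-source separation
% in units of the Einstein angle theta_E; D_L and D_S are the distances to the lens and source.
A(u) = \frac{u^{2} + 2}{u\sqrt{u^{2} + 4}},
\qquad
\theta_E = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_S - D_L}{D_L D_S}}
```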

That's exactly how the international team of scientists discovered the Earth-scale exoplanet, which you’d really think they’d give a snappier name than MOA-2007-BLG-192Lb. It’s also interesting because of the star it orbits - which almost isn't a star at all. At 6-8% of the sun's mass it’s tiny, far smaller than any star previously observed to host planets. There is debate as to whether it can even support fusion reactions, or whether it's a "brown dwarf" - a failed coulda-been star that never sparked into nuclear light and is now slowly radiating away its internal heat until it runs out and goes cold.

This great success in planetary survey techniques raises hopes for the discovery of many, many more planets - not only do we have an accurate and convincingly demonstrated technology, but it seems we have many more places to point it as well.

Posted by Luke McKinney.

Original here

Mars Lander 'Tweets' at Space Fans

Phoenix Sends Messages from Mars to Cell Phones, IM Via Twitter


4:46 p.m., May 25: Atmospheric entry has started. time to get REALLY nervous. Now I'm in the "seven minutes of terror."
4:50 p.m.: Parachute is open!!!!! come on rocketssssss!!!!!
4:54 p.m.: I've landed!!!!!!!!!!!!!
4:55 p.m.: Cheers! Tears!! I'm here!

Phoenix lander (ABC News Photo Illustration)

When the Phoenix Lander, NASA's spacecraft that is looking for signs of life -- or at least ice -- on Mars, landed two weeks ago, it had an audience of thousands, but it was all online, not on cable or the nightly TV newscast.

Web-savvy space fans have tuned in to the spacecraft's Twitter page, where Phoenix has been "tweeting" daily messages back from space. The page has quickly become one of Twitter's most popular feeds and turned Phoenix into a rising star.

Of course, this Phoenix is not a cutesy, talking robot turned spacecraft. Instead, the lander's adventures appear online at Twitter.com, the social networking and micro-blogging site. The part of Phoenix is played by a human: Veronica McGregor, the media director of NASA's Jet Propulsion Laboratory.

"The feedback has been tremendous," McGregor told me from California, where JPL is based. "Every day, we get hundreds and hundreds of replies. I put out a question one night. Should I reply to everybody or individuals [when answering questions]. Within minutes, I had 95 responses and every single person said, 'No, send them to all of us.'"

Twitter.com allows users to send out 140-character messages to their Twitter feed via e-mail, instant messaging or text-messaging. Twitter operates somewhat like a personal RSS feed; people who subscribe to your feed will receive all the messages you send out.
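
As a toy illustration of what that 140-character ceiling means in practice, here is a short Python sketch that splits a longer mission update into tweet-sized pieces on word boundaries. It is purely illustrative -- the helper is hypothetical and has nothing to do with the tooling JPL or Twitter actually use.

```python
def split_for_twitter(update, limit=140):
    """Split a long mission update into chunks that each fit within
    Twitter's 140-character limit, breaking on word boundaries.
    Hypothetical helper, for illustration only."""
    chunks, current = [], ""
    for word in update.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            chunks.append(current)
            current = word
    if current:
        chunks.append(current)
    return chunks

update = ("Atmospheric entry has started. Time to get really nervous -- "
          "the 'seven minutes of terror' are here. Parachute is open, the "
          "legs are down, and I'm looking forward to my first dig in the "
          "dirt with my robotic arm and scoop.")
for i, chunk in enumerate(split_for_twitter(update), 1):
    print(f"[{i}] ({len(chunk)} chars) {chunk}")
```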

That means that the nearly 17,000 subscribers Phoenix's feed has netted since it began nearly a month ago have been receiving wide-eyed, enthusiastic messages like this one:

June 1: Looking forward to an exciting day on Mars; My first dig in the dirt! Team calls this a "dig and dump" test of my robotic arm and scoop.

McGregor, a former space reporter for CNN, writes as Phoenix about a dozen times a day, before and after work. She normally doesn't have time to post to Twitter during her regular workday.

May 28: It's very humbling, and thrilling that so many people care to follow. Want to be careful I don't "over-twitter" my welcome.

In addition to straightforward updates, McGregor spends a lot of time answering questions posed to her by other users about the weather, the cameras and how long it takes a signal to travel back to Earth. (Answer: about 15 minutes, at roughly 186,000 miles per second.) This isn't the first time NASA has ventured into Web 2.0, but it is the first time it's been so successful. JPL has a Facebook page with just a few hundred users, and NASA has also blogged about other missions in the past without attracting much attention.
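
That 15-minute figure is easy to sanity-check: at the roughly 170 million miles separating Earth and Mars at the time (a distance quoted later in this piece, and one that varies widely with orbital geometry), a radio signal travelling at about 186,000 miles per second takes just over 15 minutes one way. A quick check in Python:

```python
# Back-of-the-envelope check of the one-way signal delay from Mars.
# 170 million miles is the Earth-Mars distance cited elsewhere in this
# piece; the true figure swings between roughly 35 and 250 million miles.
distance_miles = 170e6
speed_of_light_mi_per_s = 186_000

delay_minutes = distance_miles / speed_of_light_mi_per_s / 60
print(f"One-way signal delay: {delay_minutes:.1f} minutes")  # about 15.2
```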

"It was during one of our weekly staff meetings, and we were talking about the landing blog we were going to do," McGrego said. "One staffer said, 'Why don't we try Twitter?' Half of us in the room, including me, said, 'What's Twitter?'"

By the time Phoenix landed, the site had 3,000 subscribers, and it is still growing. In the past 24 hours, it has netted more than 1,500 new subscribers.

"It's always hard for us to get our missions in the news. The news editors think that the public has a certain appetite for these things, and maybe they do for a full-length story," she said.

But as the project continued, McGregor said, she kept receiving messages from followers saying they liked getting short updates.

"I think that's the key thing," she said. "People just love getting that tidbit of news without having to sit down and read an entire story." For McGregor, the challenge is how to keep people interested in the mission, especially during the downtime of a coming Mars fall.

"I said I do it as long as people are following and as long as Phoenix is working," she said. "Eventually we'll have to sign-off and say goodbye."

June 3: Sci team is done with today's news briefing. They'll be back on the clock to start the Martian work shift at 9:30pm PDT. Mars time is rough.

Original here

Mars lander gets digging practice


One scoop: white patches on the right of the image may be salts or water ice

The US space agency's new robotic craft on Mars has been commanded to carry out a second practice dig before beginning its real work.

Controllers want another test run to perfect their technique before Phoenix begins excavating Martian soil for scientific purposes.

When the arm collected and released its first scoopful of soil on Sunday, some of the sample stuck to the scoop.

Phoenix touched down successfully on Mars' northern plains on 25 May (GMT).

"The team felt they weren't really comfortable yet with the digging and dumping process," said the mission's chief scientist, Peter Smith, from the University of Arizona, Tucson.

"They haven't really mastered it."

The extra practice means the earliest Phoenix will flex its 2.25m-long robotic arm to claw below the planet's northern plains for scientific study is Wednesday.

Beach play

Professor Smith likened Phoenix's efforts to those of a child playing on the beach with a sand pail and shovel.

"But we're doing it blind from 170 million miles away," he said.

Phoenix test site (Nasa)
Phoenix has already taken a scoop of Martian soil

Phoenix landed near the Martian north pole, which is thought to hold large stores of water-ice just below the surface.

It will carry out a three-month mission to study Mars' geological history and determine whether the Martian environment could once have supported life.

Images taken by the robotic arm camera revealed the spacecraft may have uncovered patches of water-ice when its thrusters blew away loose soil during the landing.

Phoenix got its first touch of Martian soil on Sunday when it scooped up and then dumped a handful of soil in a region dubbed the "Knave of Hearts".

Icy sample?

The scoop contained intriguing white specks that could be surface ice or salt.

For the second practice "dig and dump", engineers told the robot to go slightly deeper in the same region and use the camera on its arm to take photos this time.

Oven doors only partially opened (Nasa)
Doors to one of the ovens failed to open properly

Phoenix has its own laboratory on board. Any dirt and ice it scoops up will be shovelled into several small ovens to be heated.

The resulting gases will be analysed by a variety of scientific instruments.

New photos sent back by Phoenix showed one of the spring-loaded doors on the oven complex had failed to open all the way.

Scientists hope Mars' midday temperatures will make the door less sticky, but the oven can still be used even with its door only partially open, said Professor Smith.

Original here

Egypt uncovers 'missing' pyramid of a pharaoh

SAQQARA, Egypt - Egyptian archaeologists unveiled on Thursday a 4,000-year-old "missing pyramid" that is believed to have been discovered by an archaeologist almost 200 years ago and never seen again.

Zahi Hawass, Egypt's antiquities chief, said the pyramid appears to have been built by King Menkauhor, an obscure pharaoh who ruled for only eight years.

In 1842, German archaeologist Karl Richard Lepsius mentioned it among his finds at Saqqara, referring to it as number 29 and calling it the "Headless Pyramid" because only its base remains. But the desert sands covered the discovery, and no archaeologist since has been able to find Menkauhor's resting place.

"We have filled the gap of the missing pyramid," Hawass told reporters on a tour of the discoveries at Saqqara, the necropolis and burial site of the rulers of ancient Memphis, the capital of Egypt's Old Kingdom, about 12 miles south of Cairo.

The team also announced the discovery of part of a ceremonial procession road where high priests, their faces obscured by masks, once carried mummified sacred bulls worshipped in the ancient Egyptian capital of Memphis.

The pyramid's base — or the superstructure, as archaeologists call it — was found after a 25-foot-high mound of sand was removed over the past year and a half by Hawass' team.

Hawass said the style of the pyramid indicates it was from the Fifth Dynasty, a period that began in 2465 B.C. and ended in 2325 B.C. That would put it about two centuries after the completion of the Great Pyramid of Giza, believed to have been finished in 2500 B.C.

Further evidence of its date, Hawass said, was the discovery inside the pyramid of a gray granite sarcophagus lid of the type used at that time.

The rectangular base, at the bottom of a 15-foot-deep pit dug out by workers, gives little indication of how imposing the pyramid might once have been. Heaps of huge rocks, many still partially covered in sand and dust, mark the pyramid's walls and entrance, and a burial chamber was discovered inside.

Archaeologists have not found a cartouche — a pharaoh's name in hieroglyphs — of the pyramid's owner. But Hawass said that based on the estimated date of the pyramid he was convinced it belonged to Menkauhor.

Work continues at the site, where Hawass said he expected to unearth "subsidiary" pyramids around Menkauhor's main one, and hoped to find inscriptions there to back up his claim.

The partial ceremonial procession road unveiled Thursday dates back to the Ptolemaic period, which ran for about 300 years before 30 B.C.

It runs alongside Menkauhor's pyramid, leading from a mummification chamber toward the Saqqara Serapeum, a network of underground tombs where sacred bulls were interred, discovered by French archaeologist Auguste Mariette in 1850.

A high priest would carry the mummified bulls' remains down the procession road — he was said to be the only human allowed to walk on it — to the chambers where the bulls would be placed in sarcophagi, Hawass said.

Ancient Egyptians considered Apis bulls to be incarnations of the city god of Memphis, connected with fertility and the sun cult. A bull would be chosen for its deep black coloring and was required to have a single white mark between the horns. Selected by priests and honored until death, it was then mummified and buried in the underground galleries of the Serapeum.

The procession route's discovery "adds an important part to our knowledge of the Old Kingdom and its rituals," Hawass said.

The sprawling archaeological site at Saqqara is most famous for the Step Pyramid of King Djoser — the oldest of Egypt's over 100 pyramids, built in the 27th century B.C.

Although archaeologists have been exploring Egypt for some 200 years, Hawass says only a third of what lies underground in Saqqara has been discovered.

"You never know what secrets the sands of Egypt hide," he said. "I always believe there will be more pyramids to discover."

Original here

Holodeck 1.0? Star Trek-style 3-D Displays Make Their Debut


(Credit: iStockphoto)

Star Trek's holodeck is a famous science fiction concept. Crewmembers could walk through the garden of their childhood home, re-enact famous historical events or watch full, 3-D performances of famous plays. It was a rich source of story lines for the Star Trek writers because the holodeck offered so many opportunities to work, rest and play.

Crewmembers could also learn by using simulations to acquire new skills or execute training drills. They could simulate surgery, flight, and engine repairs in a truly realistic environment.

The holodeck is still science fiction, but last year researchers took the first confident steps towards its realisation with the COHERENT project. This EU-funded research project developed a commercial, true 3-D display that could one day be called Holodeck version 1.0. It is called HoloVizio.

Innovation intensive

The HoloVizio is a 3-D screen that will allow designers to visualise true 3-D models of cars, engines or components. Better yet, gesture recognition means that observers can manipulate the models by waving their hands in front of the screen, a capability that offers enormous scope for collaboration across the globe.
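
To make the idea concrete, here is a deliberately simplified sketch of how tracked hand motion might be turned into model rotation in a gesture interface. It illustrates the general concept only -- the function, its parameters and the mapping are hypothetical and are not taken from the COHERENT project's actual gesture-recognition pipeline.

```python
from dataclasses import dataclass

@dataclass
class Orientation:
    """Model orientation in degrees: yaw about the vertical axis,
    pitch about the horizontal axis."""
    yaw: float = 0.0
    pitch: float = 0.0

def apply_hand_motion(pose, dx, dy, sensitivity=0.25):
    """Map a tracked hand displacement (dx, dy, in camera pixels) to a
    rotation of the displayed model. Hypothetical mapping for illustration,
    not the COHERENT software."""
    pose.yaw = (pose.yaw + dx * sensitivity) % 360.0
    pose.pitch = max(-90.0, min(90.0, pose.pitch + dy * sensitivity))
    return pose

pose = Orientation()
for dx, dy in [(40, 0), (120, -30), (-60, 10)]:  # simulated hand sweeps
    pose = apply_hand_motion(pose, dx, dy)
    print(f"yaw = {pose.yaw:6.1f}  pitch = {pose.pitch:6.1f}")
```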

"The aim of the COHERENT project was to create a new networked holographic audio-visual platform to support real-time collaborative 3-D interaction between geographically distributed teams," explains Akos Demeter, spokesperson for the project.

Two applications drove the design of the basic networked audiovisual components – a collaborative visualisation system for the medical sector and a collaborative design review system for the automotive industry.

The researchers based the display component on innovative holographic techniques that can present, at natural human interaction scale, realistic animated 3-D images simultaneously to an unlimited number of freely moving viewers.

No goggles required

The upshot is that users do not need goggles, and the 3-D image is maintained as they move about – both in contrast with early attempts at holographic displays. But the real star of the COHERENT project is not simply the display. The researchers made exciting advances in enabling applications that show the system's real potential.

The COMEDIA application, for example, uses raw data from medical imaging devices to create 3-D models of anatomy. The development was led by COHERENT partner CRS4 Visual Computing, which demonstrated the system to 50 clinicians in Italy.

"The strength of the COMEDIA system is related to the collaboration, discussion and evaluation of clinical cases, since it provides users with an immediate 3-D understanding of the anatomy shown," explains Demeter.

COMEDIA led to the 'Holo-Heart' series of seminars last year.

Art's hidden secrets

CRS4 also developed rendering and visualisation software that may reveal the artistic secrets of the great masters, like Michelangelo. A scan of his famous David revealed that the eyes diverge.

It is impossible to see this by standing in front of the statue, because of its height and the position of the left forearm. But it becomes clear when viewed through the COHERENT system, and theorists posit that Michelangelo wanted to present two different faces of the same character.

COHERENT also produced the COLLAUDA application for collaborative automotive design. The application, developed with CS Systemes d’Information and Peugeot in France, led to a series of demonstrations for potential end users.

The demonstration led to a new project collaboration, named ARIVA, which starts in June 2008.

Oil exploration

Finally, COHERENT's researchers explored the potential of applying holographic systems to oil exploration, using Shell's data. The system displayed real examples of subsurface data. Holografika, the Hungarian research company behind the HoloVizio system, developed much of the core technology used by the project.

In all, the team developed useful applications for a leading-edge, emergent technology, explored promising commercial opportunities and refined holographic and allied systems for real-world use. The research also stimulated enormous interest in the area and prompted a wave of activity in the sector. But history, perhaps, will remember the COHERENT project as the precursor to a real-world holodeck.

Original here



Boeing Successfully Fires 25 kW Solid-State Lasers, Laser Weapons One Step Closer to Being a Reality

Boeing has just tested its new thin-disk laser, the most powerful solid-state laser ever made. It fires at over 25 kilowatts, and the tests demonstrated that the design can scale to a 100-kilowatt laser in the coming years. A 100 kW solid-state laser would set a new record of its own, though it still has plenty of challenges to overcome, including shedding the excess heat such a powerful laser generates and maintaining beam quality over distance. But even a 25 kW laser is extremely powerful. As the press release says, it "will damage, disable or destroy targets at the speed of light, with little to no collateral damage, supporting missions on the battlefield and in urban operations." Hit the jump for the full release.
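
Before the release itself, a quick bit of arithmetic puts those power figures in perspective: delivered energy is just power multiplied by firing time. The release says only "multi-second durations," so the run times in the sketch below are assumptions for illustration, not Boeing's numbers.

```python
# Energy delivered by a continuous-wave laser: E = P * t.
# 25 kW is the demonstrated power level; 100 kW is the scaled-up goal.
# The firing durations are assumed values -- the press release gives
# only "multi-second durations" without exact figures.
for power_kw in (25.0, 100.0):
    for duration_s in (1.0, 3.0, 5.0):
        energy_kj = power_kw * duration_s
        print(f"{power_kw:5.0f} kW for {duration_s:.0f} s -> {energy_kj:6.0f} kJ delivered")
```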

Boeing Fires New Thin-Disk Laser, Achieving Solid-State Laser Milestone

ST. LOUIS, June 03, 2008 — The Boeing Company [NYSE: BA] fired its new thin-disk laser system repeatedly in recent tests, achieving the highest known simultaneous power, beam quality and run time for any solid-state laser to date.

In each laser firing at Boeing's facility in West Hills, Calif., the high-energy laser achieved power levels of over 25 kilowatts for multi-second durations, with a measured beam quality suitable for a tactical weapon system. The Boeing laser integrates multiple thin-disk lasers into a single system. Through these successful tests, the Boeing team has proven the concept of scalability to a 100-kilowatt-class system based on the same architecture and technology.

"Solid-state lasers will revolutionize the battlefield by giving the warfighter an ultra-precision engagement capability that can dramatically reduce collateral damage," said Scott Fancher, vice president and general manager of Boeing Missile Defense Systems. "These successful tests show that Boeing has made solid progress toward making this revolutionary capability a reality."

The thin-disk laser is an initiative to demonstrate that solid-state laser technologies are now ready to move out of the laboratory and into full development as weapon systems. Solid-state lasers are powered by electricity, making them highly mobile and supportable on the battlefield. The Boeing laser represents the most electrically efficient solid-state laser technology known. The system is designed to meet the rapid-fire, rapid-retargeting requirements of area-defense, anti-missile and anti-mortar tactical high-energy laser systems. It is also ideal for non-lethal, ultra-precision strike missions urgently needed by warfighters in war zones.

"This accomplishment demonstrates Boeing's commitment to advancing the state of the art in directed energy technology," said Gary Fitzmire, vice president and program director of Boeing Directed Energy Systems. "These successful tests are a significant milestone toward providing reliable and supportable lasers to U.S. warfighters."

Boeing's approach incorporates a series of commercial-off-the-shelf, state-of-the-art lasers used in the automotive industry. These industrial lasers have demonstrated exceedingly high reliability, supportability and maintainability.

A high-power solid-state laser will damage, disable or destroy targets at the speed of light, with little to no collateral damage, supporting missions on the battlefield and in urban operations.

Original here