Thursday, July 31, 2008
In a discovery that could qualify as one of the most important in the history of space exploration, NASA’s Phoenix Mission may have confirmed the presence of water ice on Mars, Popular Mechanics has learned. The scheduling of a press conference for Thursday at 2 p.m. Eastern by NASA and the University of Arizona has raised hopes in the space community that scientists will announce the breakthrough. When pressed for details, a spokesperson for the agency’s Jet Propulsion Laboratory refused to elaborate beyond saying that the Phoenix team would unveil new findings from the ongoing robotic mission. If the rumor holds true, it would be the first direct confirmation of water ice beyond Earth.
Data from recent missions to Mars has been building toward a confirmation of the presence of water ice. However, “this would be the first time we held it in our hands, so to speak,” says Bryan DeBates, a senior aerospace education specialist at the Space Foundation. Evidence from other locations in the solar system, including Earth’s moon, Saturn’s moon Enceladus and Jupiter’s moon Europa, has strongly hinted at the presence of water—NASA confirmed a liquid lake on Saturn’s moon Titan on Wednesday—but no direct observation of water has been made.
If the presence of water is confirmed on a small patch near the lander, the volume of ice on the Martian surface could be “extraordinary,” says Mary Bourke, a planetary research scientist with the Planetary Science Institute in Tucson, Ariz. The landing site was chosen because the porosity of the soil appeared to lend itself to a build-up of water ice—and that type of soil is widespread on Mars. The presence of widespread water would make any mission to establish a manned Martian base far more feasible. (Of course, the presence of water would also greatly increase the likelihood that life exists, or once did exist, on the planet.)
Staff operating the Phoenix Lander from mission control in Arizona have been closely watching a patch of what they believed to be ice that was uncovered as the lander descended. The Phoenix Mars Lander has been scraping samples and dropping them into a spectrometer that heats the samples to determine their chemical composition. On Sunday, the Phoenix team changed the way it dug to reduce friction that prematurely heated the samples. On Tuesday, NASA scientists stated that the surface of the patch, dubbed “Snow Queen,” has changed between June and July. A camera mounted on a robotic arm captured those changes, which include 4-in. cracks and a visibly rougher surface texture.
Sources within NASA's Jet Propulsion Laboratory hinted that the new findings could reveal details of the atmosphere as well as the ground—perhaps indicating that researchers have learned new lessons about the dynamics of the Martian environment from the way the exposed material reacts to the carbon dioxide-rich atmosphere. The fractures could have appeared as ice sublimated off the surface. It’s also possible that temperature changes made the surface crack. However, any atmospheric findings could have come independently of observations of the ice surface, since Phoenix carries dedicated atmospheric instruments, including a lidar (laser radar). For some Mars researchers, that would be fine. “Research into Mars’ atmosphere really needs to be beefed up,” Bourke says. “Any information about it could be invaluable.”
Also, two European orbiters circling Mars have recently captured geological features that indicate the planet once had standing water for thousands of years, including river valleys and 13,000-ft. waterfalls. The next step after confirming the ice would be finding out whether life ever existed—or still does, in some deep Martian aquifer. “NASA’s mantra has been ‘Follow the ice to find life,’” says DeBates. And if life does not exist there, earthlings may import some. “Establishing a Mars colony is going to be a lot easier to maintain, having water there,” DeBates says.
Earthlings might be scrambling to find liquid hydrocarbons buried in our planet, but Saturn's moon Titan has plenty to spare.
"This is the first observation that really pins down that Titan has a surface lake filled with liquid," said the paper's lead author, University of Arizona professor Robert Brown.
The new observations affirm that Titan is one of the likeliest places to look for life in our solar system. Some astrobiologists have speculated that life could develop in the moon's hydrocarbon lakes, although it would have to be substantially different from known life on Earth, which requires liquid water.
Mixed in solution with the ethane, the lake is also believed to contain nitrogen, methane, and a variety of other simple hydrocarbons.
The Cassini-Huygens probe determined the chemical composition of the liquid by the way it reflected light, a technique known as spectrometry that has provided most of our knowledge about other planets' atmospheric compositions.
"It was hard for us to accept the fact that the feature was so black when we first saw it," Brown said. "More than 99.9 percent of the light that reaches the lake never gets out again. For it to be that dark, the surface has to be extremely quiescent, mirror smooth. No naturally produced solid could be that smooth."
Further, the scientists saw the specific absorption signature of ethane, which absorbs light at exactly 2-micron wavelengths.
These kinds of measurements are made more difficult by the hydrocarbon haze that engulfs the moon, making it hard to actually see the Titanic ground. Cassini scientists have to take advantage of narrow observation windows. One of these occurred in December 2007, allowing them to catch this view of the lake, Ontario Lacus. At 7,800 square miles, it's slightly larger than the Earthbound Lake Ontario.
Ethane is the byproduct of a solar-energy-induced reaction that transforms atmospheric methane, aka natural gas. Scientists believe ultrafine particles of ethane fall from the atmosphere to the surface and fill the lake.
Here on earth, ethane is used to create ethylene, which is used as an all-purpose chemical precursor and is the world's most-produced organic compound.
Brown and his team will publish their results in the July 31 issue of the journal Nature.
Image: Courtesy NASA. Ontario Lacus, Latin for "Lake Ontario."
This animation consists of two close-up images of "Snow Queen," taken several days apart, by the Robotic Arm Camera (RAC) aboard NASA's Phoenix Mars Lander.
TUCSON, Ariz. -- A distinctive hard-surface feature called "Snow Queen" beneath NASA's Phoenix Mars Lander visibly changed sometime between mid-June and mid-July, close-up images from the Robotic Arm Camera show.
Cracks as long as 10 centimeters, or about four inches, have appeared. A seven-millimeter (less than one-third inch) pebble or clod not seen there before has popped up on the surface. And some smooth texture on Snow Queen has subtly roughened.
Phoenix's Robotic Arm Camera, or RAC, took its first close-up image of Snow Queen on May 31, 2008, the sixth Martian day, or sol, after the May 25 landing. Thruster exhaust blew away surface soil covering Snow Queen as Phoenix landed, exposing a hard layer comprising several smooth, rounded cavities.
"Images taken since landing showed these fractures didn't form in the first 20 sols of the mission," Phoenix co-investigator Mike Mellon of the University of Colorado, Boulder, said. "We might expect to see additional changes in the next 20 sols."
Mellon, who has spent most of his career studying permafrost, said long-term monitoring of Snow Queen and other icy soil cleared by Phoenix landing and trenching operations is unprecedented for science. It's the first chance to see visible changes in Martian ice at a place where temperatures are cold enough that the ice doesn't immediately sublimate, or vaporize, away. Phoenix scientists discovered that centimeter-sized chunks of ice scraped up in the Dodo-Goldilocks trench lasted several days before vanishing.
The Phoenix team has been watching ice in the Dodo-Goldilocks and Snow White trenches in views from the lander's Surface Stereo Imager as well as RAC, but only RAC can view Snow Queen near a strut under the lander.
The fact that RAC is attached to the robotic arm is both an advantage and a disadvantage. The advantage is that RAC can take close-ups of Snow Queen, while the Surface Stereo Imager can't see Snow Queen at all from the topside of the spacecraft. The disadvantage is that the robotic arm has so many tasks to perform that RAC can't be used for monitoring trench ice at some opportune times. Also, RAC hasn't been used to take up-close images of other icy places under the spacecraft cleared on landing because it would require the robotic arm to make a difficult and complex series of moves.
"I've made a list of hypotheses about what could be forming cracks in Snow Queen, and there are difficulties with all of them," Mellon said.
One possibility is that temperature changes over many sols, or Martian days, have expanded and contracted the surface enough to create stress cracks. It would take a fairly rapid temperature change to form fractures like this in ice, Mellon said.
Another possibility is the exposed layer has undergone a phase change that has caused it to shrink. An example of a phase change could be a hydrated salt losing its water after days of surface exposure, causing the hard layer to shrink and crack. "I don't think that's the best explanation because dehydration of salt would first form a thin rind and finer cracks," Mellon said.
"Another possibility is that these fractures were already there, and they appeared because ice sublimed off the surface and revealed them," he said.
As for the small pebble that popped up on Snow Queen after 21 sols -- it might be a piece that broke free from the original surface or it might be a piece that fell down from somewhere else. "We have to study the shadows a little more to understand what's happening," Mellon said.
The Phoenix mission is led by Peter Smith of The University of Arizona with project management at the Jet Propulsion Laboratory and development partnership at Lockheed Martin, located in Denver. International contributions come from the Canadian Space Agency; the University of Neuchatel; the universities of Copenhagen and Aarhus, Denmark; Max Planck Institute, Germany; and the Finnish Meteorological Institute.
Decades ago, nutritionists lobbied the US government to add iodine to salt, because most US residents weren't getting enough iodine in their diets. Today, most of us here in the US accept that salt comes with iodine (though you can buy it without). Chemists like UC Santa Barbara's Bruce Lipshutz, who studies CoQ10, hope that in the future we will also accept the idea that CoQ10 comes in drinking water, perhaps along with several other vital vitamins and enzymes. So even if you want to grow old and die in the old-fashioned way, you may not be able to — at least, if you plan to drink water.
HOUSTON -- There is real hope that what’s happening in a Houston lab might lead to a cure for HIV.
“We have found an innovative way to kill the virus by finding this small region of HIV that is unchangeable,” Dr. Sudhir Paul of the University of Texas Medical School at Houston said.
Dr. Paul and Dr. Miguel Escobar aren’t talking about just suppressing HIV – they’re talking about destroying it permanently by arming the immune system with a new weapon lab tests have shown to be effective.
Ford Stuart has been HIV positive for 15 years. He’s on a powerful drug cocktail that keeps the disease in check.
“I’m on four different medications. Three of them are brand new, and it’s the first time that I’ve ever been non-detectible,” Stuart said. “I’m down to about – just for the HIV – about nine pills per day, five in the morning and four at night.”
But Stuart knows HIV mutates, and eventually it will learn how to outsmart his medications.
“The virus is truly complex and has many tricks up its sleeve,” Paul said.
But Dr. Paul thinks he’s cracked a code.
“We’ve discovered the weak spot of HIV,” he said.
Paul and his team have zeroed in on a section of a key protein in HIV’s structure that does not mutate.
“The virus needs at least one constant region, and that is the essence of calling it the Achilles heel,” Paul said.
That Achilles heel is the doctors’ way in. They take advantage of it with something called an abzyme.
Abzymes are naturally produced by some people, such as lupus patients. When the researchers applied the abzyme to HIV, it permanently disarmed the virus.
“What we already have in our hand are the abzymes that we could be infusing into the human subjects with HIV infection, essentially to remove the virus,” Paul said.
Basically, their idea could be used to control the disease for people who already have it and prevent infection for those at risk.
The theory has held up in lab and animal testing. The next step is human trials.
Meanwhile, every day in Houston, three people are diagnosed with HIV.
The doctors still need funding to launch human trials. In the world of HIV research, that’s often where things fall apart.
“Clinical trials are very expensive,” Paul said.
“That is the worry of the researcher. This is what nightmares are made of – that after 30 years of work, you find it doesn’t work,” Paul said.
But so far, it is working.
“This is the holy grail of HIV research, to develop a preventative vaccine,” Paul said.
“If we can get the viral loads down to a manageable level, that will preclude the need for these conventional drugs,” Escobar said.
Still, even if everything goes well, it’s at least five years before the research could help people with HIV.
The doctors know people like Ford Stuart are waiting.
“There are so many people struggling with the disease because it affects not only your body, but also your psyche, how you perceive yourself,” he said.
If nothing else, the research is promising for the tens of millions waiting for a cure.
It seems like some researchers from Radboud University in the Netherlands took advantage of the recent Four Days Marches of Nijmegen walking event for a little experiment earlier this month, where they convinced ten volunteers to swallow an RFID pill as part of a study to monitor body temperature. Apparently, the pills recorded and transmitted the walkers' core temperature to a receiver in their backpacks every ten seconds, which in turn sent the data via Bluetooth to a GPS-enabled phone that then relayed it to the operations center at Radboud. With all that info at their disposal, the researchers were able to monitor each walker and alert them if their temperature was reaching a dangerous level, or even alert others nearby if they weren't responding (which apparently wasn't necessary). As you might have guessed, the researchers are already hard at work planning an even larger test for next year's event, which they hope could eventually lead to the system being used at marathons and other sports events.
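The alerting step of a pipeline like this reduces to a simple per-reading threshold check at the operations center. The sketch below is purely illustrative: the 40 °C cutoff, the function name, and the data shapes are assumptions, not details from the Radboud study.

```python
from typing import Optional

DANGER_C = 40.0  # assumed danger threshold in degrees Celsius (illustrative)

def check_reading(walker_id: str, temp_c: float,
                  danger_c: float = DANGER_C) -> Optional[str]:
    """Return an alert message if the core temperature is dangerous, else None."""
    if temp_c >= danger_c:
        return "ALERT: walker %s core temperature %.1f C" % (walker_id, temp_c)
    return None

# Each pill reports every ten seconds; simulate a short stream of readings.
readings = [("A", 37.2), ("B", 40.3), ("A", 37.4)]
alerts = [a for a in (check_reading(w, t) for w, t in readings) if a]
print(alerts)  # only walker B trips the threshold
```

In the real system this check would run server-side on the relayed data, with a follow-up escalation (alerting nearby walkers) if the flagged participant did not respond.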
Littered across the Internet are dozens of home videos of people putting a lit match into their microwave oven, turning it on and waiting for the inevitable chaos to ensue: a spitting, sputtering ball of brilliant white fire that seems to hang magically in the air until it floats upward and scorches the hell out of the microwave ceiling.
Some of the mischievous miscreants responsible for turning their kitchens into science experiments claim to have recreated a mysterious natural phenomenon known as “ball lightning,” which resembles the fiery spheres created in the microwave and is thought to be the byproduct of lightning strikes. But are these glowing orbs created in an appliance normally reserved for reheating leftovers really the same thing as ball lightning?
Several scientists in the relatively small field of “ball-lightning-ology” say that it isn’t quite the same thing. “It’s not the same as the ball lightning that we are talking about,” says Antonio Pavão, a professor of chemistry at the Federal University of Pernambuco in Brazil who has successfully created a ball lightning-like phenomenon in his lab.

Reports often describe naturally occurring ball lightning as a luminescent white-blue or white-orange ball, on average about the size of a grapefruit. It can move through the air on its own for seconds or even minutes, bouncing off most things it touches until it either fades away or explodes. Sightings are reported most often during thunderstorms, when lightning actively strikes the ground. Not to be taken lightly, ball lightning has reportedly even killed people.
But until recently scientists did not take the phenomenon very seriously, and some did not even think it existed at all. Several theories were floated around to explain the mechanism of ball lightning, but only one has gained ground in recent years. It was proposed by John Abrahamson and James Diniss, professors of chemical engineering at the University of Canterbury in Christchurch, New Zealand.
The inspiration for their theory was the glass-like globules of fused silica, called fulgurites, found in the ground after lightning hits silicate-rich soil (silicates are compounds containing silicon and oxygen). They proposed that, in addition to the formation of fulgurites, inconceivably small particles of pure silicon, smaller than 100 nanometers, were vaporized and ejected into the air during a lightning strike.
Once in the air, these silicon particles would begin to condense together and react with oxygen in the atmosphere, giving off heat and light and creating a fiery sphere of ball lightning.
A nice theory, but only in the last several years have scientists been able to verify it by creating something similar to ball lightning in the lab. For example, a group at Tel-Aviv University in Israel created a ball lightning-like effect by shooting microwaves at blocks of silicate. However, the effect only lasted for a scant 30 milliseconds once the microwaves were turned off.
More recently, Pavão’s research group in Brazil created a ball lightning-like effect that lasted anywhere from eight to ten seconds, which more closely mimics the behavior of the real thing. Pavão sent an electric current into a wafer of pure silicon, conditions strikingly similar to those of real lightning hitting the ground.
According to both Pavão and Abrahamson, the spectacle that you can create in your microwave oven is more like the Tel-Aviv University group’s experiment than actual ball lightning. Outside energy from the microwave is sustaining the fireball instead of internal chemical energy caused by reaction with the atmosphere. “The important difference is the lifetime of the balls,” says Pavão. “The natural phenomenon is different because there is no need for an additional source of energy and the lifetime is minutes.”
Stanford's famed high-energy physics laboratory is in a tussle with the U.S. Department of Energy over naming rights to the Stanford Linear Accelerator Center, better known as SLAC.
The Energy Department, which pays SLAC's annual $300 million bill, wants to rename the lab to reflect changes in the direction of research since the facility opened 46 years ago.
On top of that, Energy Department officials have filed an application to trademark whatever new name the lab is finally given so no one else can use it.
University officials oppose any trademark plan that includes the Stanford name, and many SLAC scientists oppose any name change at all. Protest petitions have been circulating at the lab for weeks.
One eminent physicist, who asked to remain anonymous because he didn't want his name entangled in a political brouhaha, called the renaming issue "bizarre, petty and even stupid."
Stanford officials, who insist the government can't trademark the venerable university's name, have countered with an offer that would give federal officials a royalty-free "perpetual" license to use the Stanford and SLAC names.
"SLAC's record is pretty distinguished, and with the university's offer of a license to use the Stanford name, what more do they need?" said Burton Richter, SLAC's former director and one of the Nobelists who won his prize there. "I'm really bewildered."
Devon Streit, an associate director of the Energy Department's science office, said SLAC is a national laboratory and the federal agency has renamed many of its 16 other national labs to reflect their new research directions. It also has trademarked their names to protect them from commercial exploitation, and the same will be true of any new name that's chosen for SLAC, Streit said.
A new name for SLAC, she said, should reflect the new missions and research efforts that scientists there are now undertaking.
Over the past five decades, the atom-smashing work of SLAC's 2-mile-long linear accelerator has revealed countless secrets of the subnuclear particles that make up all matter, and three Nobel prizes have gone to scientists working there. Particle physics has long been SLAC's prime focus.
Now the lab has at least two new major research missions, including a focus on photon science that deals with the fundamental particles of light, and particle astrophysics that explores the riddles of everything from supernovas and black holes to the origin of the universe.
"We're unbelievably excited about these new missions for the lab," Streit said.
For photon science, SLAC engineers and scientists are building a huge new device called the Linac Coherent Light Source, a free-electron laser to produce hard X-rays - 10 billion times brighter than any X-ray beam on Earth. The laser will probe the arrangement of atoms in materials of all kinds, including the atomic properties of living molecules.
"That machine will be really cool," Streit said.
SLAC's director, Persis Drell, is on vacation and couldn't be reached for comment, but a university spokesman said Drell has accepted that the laboratory's name will be changed and has asked SLAC staff members to propose new ones. A short list of suggested names will be forwarded to Stanford President John L. Hennessy, but Energy Department officials will have the final say. No deadline has been set.
Opponents believe any name change will hurt the lab's ability to draw scientific talent.
"The Stanford Linear Accelerator Center is manifestly identified with Stanford University, and this connection is critical for recruiting the best scientists and engineers," said Martin Breidenbach, a particle physicist at SLAC and a Stanford professor. "Changing the name will weaken that link."
Drell agrees, according to the Stanford News Service. "SLAC has a long, illustrious history and the name evokes that history," Drell is quoted as saying.
Construction of the linear accelerator on the Stanford campus began in 1962, and when it was finished under the leadership of the now-deceased Wolfgang K.H. Panofsky, it was hailed not only as a feat of engineering and science, but also because it was completed on time and below its budget of $114 million - something no big scientific machine of the time had ever equaled.
Once again boosting its reputation as a country intent on helping the environment, Spain has announced that it intends to put 1 million electric cars on its roads by 2014. This is part of the Zapatero government’s plan to reduce energy use and increase the country’s overall energy efficiency.
Spain’s minister of industry, business and tourism Miguel Sebastian said Tuesday that the plan should gain approval from Spain’s Council of Ministers on Friday, and should then be carried out this year and on through to 2011.
“Electric vehicles are the future and the driver of the industrial revolution,” Sebastian said in testimony to a congressional panel. “Every time we ease off the accelerator, we boost national income and employment.”
The plan will cost some 245 million euros and is made up of 31 separate measures. By enacting it, Spain stands to save between 5.8 and 6.4 million tons of oil over the three-year period, according to industry estimates.
According to Sebastian, Spain has been trying for a while to cut oil imports. Over the past year alone, he noted, the country spent 17 billion euros on oil imports.
Editor's Note: With this post we welcome John Pendlebury to Celsias. John lives in Ireland and writes about how technology affects the world.
In an arid region of the western U.S. known as the Great Basin, the desert floor has recently been reaching temperatures in excess of 1,300 degrees Fahrenheit. No, this isn't due to global warming, but it may be part of the solution to it. A Utah-based company called IAUS (International Automated Systems Inc.) has developed a solar lens technology that transmits solar energy with an efficiency of 92%.
A California energy consortium has invested in the first stage of the project. Twenty specially designed solar towers are being erected close to the Great Basin in Delta, Utah. Each tower holds four solar lenses that follow the sun as it crosses the clear blue desert sky. Each lens will focus the sun's rays onto specially designed heat exchangers that will convert the solar energy to super-heated steam. The heat exchangers double as high-efficiency turbines that will drive electrical generators to produce alternating current output.
Later stages will involve placing 1000 towers over 700 acres of desert. With each tower having a capacity to produce 100 kW of power, the entire field stands to produce close to 100 MW of power when finished. That's enough energy to power 50,000 average Californian homes. Once generated, the power will travel around five miles to be integrated with the U.S. national power grid.
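The capacity figures above can be sanity-checked with a few lines of arithmetic. All numbers below come from the article itself; the kilowatts-per-home value is just the article's implied ratio, not an independent estimate.

```python
# Quick check of the quoted IAUS capacity figures.
towers = 1000
capacity_per_tower_kw = 100

total_mw = towers * capacity_per_tower_kw / 1000  # kW -> MW
print(total_mw)  # 100.0, matching the article's "close to 100 MW"

# Implied average draw per home, given the 50,000-home figure.
homes = 50_000
kw_per_home = total_mw * 1000 / homes
print(kw_per_home)  # 2.0 kW per average Californian home
```

The implied 2 kW continuous draw per home is a rough planning figure; it assumes the full 100 MW output rather than an averaged capacity factor.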
The key to the success of the project are the unique thin-film solar lenses. Lenses of this size are typically heavy and expensive to produce. IAUS have developed a technique of embedding magnifying material into cheap, light, rolled plastic. The plastic is composited into extremely large Fresnel lenses. The lenses are light, relatively cheap to manufacture and easy to maintain. This compares favourably with traditional solar collectors.
The plant is located in one of the best solar locations in the country due to its high altitude and thin air. Solar energy is absorbed as it travels through our atmosphere, so placing a solar energy plant in a rarefied environment allows more solar radiation to be captured. IAUS also point out that the land on which the final solar plant is to be situated is one tenth the price of equivalent land in California. Combined with the comparatively inexpensive cost of the plant equipment, this means that the entire facility would cost roughly half of what a coal fired power plant would cost to construct.
The solar power plant will produce no pollutants, and any CO2 emitted in its construction will be quickly offset by its operation. Coal and other fossil fuels must be extracted from the earth and transported to antiquated furnaces for burning, whereas the solar plant's fuel is delivered daily by the sun. Although the sun does not always shine on the plant, the company believe that, using a heat storage mechanism, they can deliver power around the clock at an estimated production cost of 5 to 10 cents per kilowatt-hour. With such competitive production costs, IAUS say that their solar power plant will not only beat the price of coal, but be the first commercial solar power plant to compete favourably with gas-powered stations.
Every hour, enough solar energy falls on the surface of the earth to satisfy the power needs of the whole planet for an entire year. Yet at present only 1% of the world's energy is derived from solar power. Will the Utah solar power plant be the nexus that changes all of that? Let's hope so. IAUS have a similar project under way in Texas, and interest in the solar power project has been observed as far afield as China and Australia.
Independent geneticist J. Craig Venter raced an international consortium of scientists to map the human genome in the 1990s. Now he's putting the same cutting-edge science to work on today's energy crisis, engineering a whole new generation of biofuels. In a rare in-depth interview, we talked to Venter recently about his latest project to save the world, as well as historical flubs, today's presidential candidates and the future of genetics. —Chris Ladd
So how did you get from mapping the human genome to creating biofuels?
We considered the biggest issues facing society that we thought we could impact. What's happening to the environment and getting weaned off oil and coal are the biggest issues out there.
Is it similar to the genome project? More daunting?
Nobody thought that such a massive project as sequencing the human genome could be undertaken by a single team, like we did. But that challenge is minor compared to trying to replace the 30 billion barrels of oil that we use globally each year, and the 3 billion tons of coal. The scale of that is beyond my imagination.
I think the real challenge won't necessarily come from biology, because biology is infinitely scalable, but from engineering. [If we can overcome that,] we have the potential to stop using oil and coal hopefully within the next 10 to 20 years, and even start reducing the CO2 concentrations in the atmosphere.
How do you plan to do that?
We're working on what we call second-, third- and fourth-generation fuels. Like corn-based ethanol, a first-generation biofuel, our second- and third-generation fuels start with sugar as the feedstock. But unlike it, we're making fuels that have very high energy content, don't mix with water and have very low freezing points—well under 100 degrees below zero centigrade. They have the potential of working in high-altitude aircraft.
And the fourth-generation fuels?
We're using a unique type of algae that we've genetically engineered to turn sunlight and CO2 into C8 and C10 and larger lipids. The people that initially grew algae viewed it as farming—you know, you grow a bunch of algae and then you harvest it. But it's totally different if the algae are chemical factories. Ours continuously secrete these molecules, so we get constant production of something that can basically be used right away as biodiesel.
So they perform better than traditional biofuels—but will they actually be better for the environment?
Because we actually have to feed them concentrated CO2, we can take CO2 streams from power plants, cement plants and other places. People view CO2 as a contaminant—they want to bury it in the ground or pump it into wells to hide or sequester it. We want to take all that waste product and convert it into fuel.
When do you hope to have these fuels in people's cars?
Our goal is to have multiple things on the market within five years. We're looking now at how to scale this up. Our molecules are much higher energy density [than ethanol], but even so we need to produce hundreds of billions of gallons if we're really going to make a dent in oil use.
This is national security. We seem to be fighting wars at least in part over oil, we're sending most of our money to the Middle East and other places, and we're investing as a nation almost nothing in alternatives.
Since the 1970s oil crisis, a number of policies have been enacted to increase energy independence. Do you think they've been effective?
Had we followed intellectually where we were back in the Carter era, we wouldn't have a lot of the problems we do today. We've had a lot of short-term thinking from administrations that basically trades off the health of the planet for economic gain for the business community—and for their own re-election. We don't reward our leaders for making long-term beneficial decisions for society. It's like the stock market—all that matters is the next quarter, not where you are 10 years from now.
Do you think there's potential for change with the current presidential candidates?
I think either candidate would be orders of magnitude better than what we had in this administration, but I think Obama would be a few orders of magnitude better than McCain. Although McCain has been a longtime supporter of changes in the CAFE standards—trying to get higher-mileage cars—and has consistently been shouted down by his colleagues.
What about McCain's recent pledge to offer a $300 million prize for a better electric car battery?
Industry is very motivated to make new batteries. Whoever makes a better battery is going to make a fortune, and having a government incentive to do that doesn't necessarily move it along. In fact, if it's like the human genome project, it could just slow it down.
Where do you see the science of genetics going in the near future?
I see it as parallel to the electronics industry in the 1940s and '50s, a stage when all of the things that enabled computers came out of just a few handfuls of components—resistors, transistors, capacitors—and people were pretty much limited only by their imagination. My team has discovered more than 20 million new genes, so we're in a biological universe. There are no fundamental limits. I think we're going to see the next 25 years as some of the most innovative in the history of science.
Spectators look at a Pomona, California, scene where bricks collapsed into an alley from an unoccupied building during a magnitude 5.4 earthquake on July 29th. (Photograph by David McNew/Getty Images)
Yesterday morning, Los Angeles dodged another bullet. The earthquake that originated near Chino Hills, roughly 35 miles east of downtown L.A., was powerful enough to rattle homes and damage a hotel near the epicenter. But with a magnitude of 5.4, it was classified by the United States Geological Survey (USGS) as a moderate quake—one of 39 such events in the country this year. A moderate earthquake could pose a serious threat in some regions, particularly in places like New York City, where many brownstones were built more than a century ago. In Southern California, where seismic upheaval is practically routine, this quake left few signs of its passage.
“Engineered structures are meant to withstand a 5.4 earthquake,” says Jamie Steidl, a research seismologist at the University of California at Santa Barbara’s Institute for Crustal Studies. “Even non-engineered, old, unreinforced masonry structures should still be okay. There’s lots of old stuff in Long Beach, and in some of these cities that have been around awhile—older brick buildings that aren’t reinforced. But at this magnitude, we’re not even pushing what the building code was 80 years ago.” The quake preparedness of Los Angeles was put to the test yesterday, but only barely.
The Chino Hills event, minor as it may have been, was a reminder of the United States’ earthquake vulnerability. In Japan and Mexico, researchers have developed earthquake early warning systems, which can detect seismic activity and trigger a sequence of automated responses. This is a frantic sort of race, since the waves created by an earthquake propagate at some 3 kilometers, or nearly 2 miles, per second. In Japan, where quakes tend to start in offshore subduction zones, some areas would have a minute or more to prepare for the worst. “There’s a whole bunch you can do in 60 seconds,” says Thomas Jordan, director of the Southern California Earthquake Center (SCEC). “Shutting off gas mains. Conditioning the electrical grid for what’s going to happen. In hospital situations, especially during surgery, there’s a lot you can do.”
So far, Japan’s early warning system hasn’t done very much—it failed to detect the country’s last two moderate quakes. But in the United States, the outlook is even worse, since no such earthquake early warning system exists, though some preliminary research is underway. “Right now, we’re just fiddling with the concepts,” says Jordan. “We’re not into operational testing, yet.” Coincidentally, says Jordan, a Caltech team reported that its experimental detection gear had been off-line when the Chino Hills earthquake hit.
Realistically, however, if the recent quake had been severe, closer to the 6.7 magnitude that the USGS says is almost certain to hit the state in the next few decades, an earthquake early warning system wouldn’t have helped. The quake simply occurred too close to Los Angeles, with the ground-shaking waves hitting the city in less than 20 seconds. That’s why most of the research into early warning is focused on the San Andreas fault, which can produce earthquakes as close as 40 km (25 miles) from L.A., or as far as 200 km (nearly 125 miles) south of the city. With enough distance, a system-wide alert becomes viable. “Think of an earthquake as a cascade of events,” Jordan says. “They can generate tsunamis, which take some time to hit. Fire following earthquakes, that’s one of the biggest problems you can have. So you get the firetrucks ready, the station doors open. If you know what is happening, you can begin to prepare for what is going to happen later in that cascade.”
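To put numbers on those warning windows: with shaking waves traveling at roughly 3 kilometers per second, the warning time is essentially the distance to the epicenter divided by the wave speed, minus however long detection and alerting take. Here is a quick sketch of that arithmetic (the 3 km/s wave speed comes from the figure quoted above; the 5-second detection-and-alert latency is an illustrative assumption, not a figure from the USGS or SCEC):

```python
# Rough earthquake early-warning time estimate.
# Assumptions (illustrative): damaging waves travel at ~3 km/s,
# and detecting the quake plus issuing the alert takes ~5 seconds.
WAVE_SPEED_KM_S = 3.0
ALERT_LATENCY_S = 5.0

def warning_time(distance_km: float) -> float:
    """Seconds of warning before shaking arrives, floored at zero."""
    return max(0.0, distance_km / WAVE_SPEED_KM_S - ALERT_LATENCY_S)

for label, d in [("Chino Hills to downtown L.A.", 56),
                 ("nearest San Andreas segment", 40),
                 ("far San Andreas segment", 200)]:
    print(f"{label} ({d} km): ~{warning_time(d):.0f} s of warning")
```

The numbers line up with the article: a quake 56 km from downtown leaves well under 20 seconds, while a distant San Andreas rupture 200 km away could allow roughly a minute.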
As limited as an earthquake early warning system might be, the potential benefits—particularly in Southern California—seem clear. “It’s something we should be pushing a lot harder than we’re pushing. And we’ve fallen behind other countries. We’ve been a little remiss, to be honest,” Jordan says. He believes a system could be up and running in California in five years, at the earliest. That’s assuming that government agencies like the National Science Foundation and the USGS greenlight additional funding for research. Unfortunately, Jordan thinks it could take a large disaster to make that happen.
In the meantime, the SCEC is helping to prepare for just such a disaster, with the United States’ largest earthquake drill. Scheduled for this November, the Great Southern California Shakeout will test the region’s response to a simulated 7.8 magnitude quake at the southern end of the San Andreas fault. Using supercomputers, seismologists have created a scenario that calculates where the most severe damage would occur, how many fires might be started, and how many lives could be lost. The event will include at least 5 million participants throughout the region, from schools and firefighters to agencies like FEMA. “In a recent meeting, the L.A. County Fire Chief told us, ‘We’ve never really thought this through,’” Jordan says. “A lot of the standard operating procedures wouldn’t apply. That’s what we learned from Katrina. A big enough hammer blow shatters the system. We want to make sure that when that hammer comes down this time, and it’s going to come, the system doesn’t break.”
In just another example in a long line of such moves, the US Army is beginning to realize that turning its operations green…er is not only good for publicity, but cheaper as well. Going green was never solely about scoring points on the PR board; it has, from the start, been the cheaper option across the board.
The Army has begun pushing for environmental sustainability at all of its bases, starting with Fort Bragg in North Carolina. And they’re thinking it through as well; not only are they considering the current footprint (I’m not going to say it), they’re planning for the future too. Since 2001, each village set up within Fort Bragg for training purposes has been built from shipping containers, cutting the cost from $400,000 to $25,000 and keeping the containers out of the solid waste stream.
But the goal is not just to save money; it’s to save lives as well.
One of the most common reports from the early days of the Iraq war and the War on Terror in Afghanistan was of convoys encountering IEDs (improvised explosive devices) along the side of the road.
The main reason these convoys had to make the long trek to forward command posts was to haul fuel from point A to point B. And the more trucks in a convoy, the more soldiers it took, and thus the more people at risk.
“If we can reduce consumption on our forward operating bases by using renewable energy, let’s say wind or solar instead of a diesel generator outside the tent … then we can reduce the number of these supply convoys that need to come forward that are getting hit by these IEDs,” said Tad Davis, deputy assistant secretary for environment, safety and occupational health.
Another saving the Army has made of late is spraying its tents with foam insulation. After a recent survey of U.S. forward bases in Djibouti, Kuwait, Iraq and Afghanistan showed that 85 percent or more of their power went to air conditioning—both for comfortable sleeping and to keep communications equipment cool—something had to be done. The foam insulation has since been shown to cut energy loss by 45 percent.
One area where the military’s green push has hit a sticking point is its vehicles. Keeping our troops safe is a priority, and shouldn’t be sidelined for anything; hence, many vehicles have to rely on heavy armor rather than lighter, more fuel-efficient designs. However, according to Davis, “There’s emerging technology that is providing lighter-weight armor, so I think at some point … you’re going to see more hybrid vehicles in the tactical military fleet.”
And as for the notion that the US military is the biggest emitter of greenhouse gases on the planet, Davis questions it, and hopes that an online emissions-tracking program started in June will settle the question favorably.
The Portland Business Journal reports that Oregon has just been given the go-ahead by The Oregon Energy Facility Siting Council to build a 909 MW wind farm in the north-central part of the state. That’s enough energy to power 200,000 homes.
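That homes figure is easy to sanity-check: nameplate capacity times a typical wind capacity factor gives average output, which then divides out across households. A back-of-envelope sketch (the 30 percent capacity factor and 12,000 kWh-per-year household consumption are illustrative assumptions, not numbers from the siting council):

```python
# Back-of-envelope check on the "200,000 homes" claim.
# Assumptions (illustrative): ~30% capacity factor for wind,
# ~12,000 kWh/year consumption per household.
NAMEPLATE_MW = 909
CAPACITY_FACTOR = 0.30
KWH_PER_HOME_YEAR = 12_000
HOURS_PER_YEAR = 8_760

annual_kwh = NAMEPLATE_MW * 1_000 * CAPACITY_FACTOR * HOURS_PER_YEAR
homes = annual_kwh / KWH_PER_HOME_YEAR
print(f"~{homes:,.0f} homes")  # lands right around 200,000
```

Under those assumptions the math works out to roughly 200,000 homes, so the reported figure is plausible.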
The Shepherd’s Flat Wind Farm will contain 303 wind turbines and will double the state’s wind-generating capacity. It will boost the local economy by creating 250 to 300 new jobs, and lease payments to landowners will supplement farm incomes.
However, the farm does face one challenge: Northwest power agencies claim to only be able to handle 1500 more megawatts of wind power on the grid. With new renewable energy projects popping up all over the place, it might be time to start thinking about some serious solutions to this problem.
If all goes according to plan, the Oregon wind farm—scheduled to be in operation by 2010—will ultimately be overtaken in capacity by T. Boone Pickens’ 4000 MW Texas wind farm, which should be completed by 2014.

Wednesday, July 30, 2008
Go outside on a dark, moonless night. Look up. Is it December or January? Check out Betelgeuse, glowing dully red at Orion’s shoulder, and Rigel, a laser blue at his knee. A month later, yellow Capella rides high in Auriga.
Is it July? Find Vega, a sapphire in Lyra, or Antares, the orange-red heart of Scorpius.
In fact, any time of the year you can find colors in the sky. Most stars look white, but the brightest ones show color. Red, orange, yellow, blue… almost all the colors of the rainbow. But hey, wait a sec. Where are the green stars? Shouldn’t we see them?
Nope. It’s a very common question, but in fact we don’t see any green stars at all. Here’s why.
Take a blowtorch (figuratively!) and heat up an iron bar. After a moment it will glow red, then orange, then bluish-white. Then it’ll melt. Better use a pot holder.
Why does it glow? Any matter above the temperature of absolute zero (about -273 Celsius) will emit light. The amount of light it gives off, and more importantly the wavelength of that light, depends on the temperature. The warmer the object, the shorter the wavelength.
Cold objects emit radio waves. Extremely hot objects emit ultraviolet light, or X-rays. Over a fairly narrow range of temperatures, objects emit visible light (wavelengths from roughly 400 nanometers to about 700 nm).
Mind you — and this is critical in a minute — the objects don’t emit a single wavelength of light. Instead, they emit photons in a range of wavelengths. If you were to use some sort of detector that is sensitive to the wavelengths of light emitted by an object, and then plotted the number of them versus wavelength, you get a lopsided plot called a blackbody curve (the reason behind that name isn’t important here, but you can look it up if you care — just set your SafeSearch Filtering to "on". Trust me here). It’s a bit like a bell curve, but it cuts off sharply at shorter wavelengths, and tails off at longer ones.
Here’s an example of several curves, corresponding to various temperatures of objects (taken from online lecture notes at UW).
The x-axis is wavelength (color, if you like), and the spectrum of visible colors is superposed for reference. You can see the characteristic shape of the blackbody curve. As the object gets hotter, the peak shifts to the left, to shorter wavelengths.
An object that is at 4500 Kelvin (about 4200 Celsius or 7600 F) peaks in the orange part of the spectrum. Warm it up to 6000 Kelvin (about the temperature of the Sun, 5700 C or 10,000 F) and it peaks in the blue-green. Heat it up more, and the peak moves into the blue, or even toward shorter wavelengths. In fact, the hottest stars put out most of their light in the ultraviolet, at shorter wavelengths than we can see with our eyes.
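The peak-versus-temperature relationship described here is Wien’s displacement law: the peak wavelength is just a constant divided by the temperature. A quick check of the numbers above:

```python
# Wien's displacement law: lambda_peak = b / T,
# with b ≈ 2.898e-3 m·K (the Wien displacement constant).
WIEN_B = 2.898e-3  # m·K

def peak_wavelength_nm(temp_k: float) -> float:
    """Peak emission wavelength of a blackbody, in nanometres."""
    return WIEN_B / temp_k * 1e9  # metres -> nanometres

print(peak_wavelength_nm(4500))  # ~644 nm, in the orange
print(peak_wavelength_nm(6000))  # ~483 nm, in the blue-green
```

Those two results match the orange and blue-green peaks quoted in the paragraph above.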
Now wait a sec (again)… if the Sun peaks in the blue-green, why doesn’t it look blue-green?
Ah, this is the key question! It’s because it might peak in the blue-green, but it still emits light at other colors.
Look at the graph for an object as hot as the Sun. That curve peaks at blue-green, so it emits most of its photons there. But it still emits some that are bluer, and some that are redder. When we look at the Sun, we see all these colors blended together. Our eyes mix them up to produce one color: white. Yes, white. Some people say the Sun is yellow, but if it were really yellow to our eyes, then clouds would look yellow, and snow would too (all of it, not just some of it in your back yard where your dog hangs out).
OK, so the Sun doesn’t look green. But can we fiddle with the temperature to get a green star? Maybe one that’s slightly warmer or cooler than the Sun?
It turns out that no, you can’t. A warmer star will put out more blue, and a cooler one more red, but no matter what, our eyes just won’t see that as green.
The fault lies not in the stars (well, not entirely), but within ourselves.
Our eyes have light-sensitive cells in them called rods and cones. Rods are basically the brightness detectors, and are blind to color. Cones see color, and there are three kinds: ones sensitive to red, others to blue, and the third to green. When light hits them, each gets triggered by a different amount; red light (say, from a strawberry) really gets the red cones juiced, but the blue and green cones are rather blasé about it.
Most objects don’t emit (or reflect) one color, so the cones are triggered by varying amounts. An orange, for example, gets the red cones going about twice as much as the green ones, but leaves the blue ones alone. When the brain receives the signal from the three cones, it says "This must be an object that is orange." If the green cones are seeing just as much light as the red, with the blue ones not seeing anything, we interpret that as yellow. And so on.
So the only way to see a star as being green is for it to be only emitting green light. But as you can see from the graph above, that’s pretty much impossible. Any star emitting mostly green will be putting out lots of red and blue as well, making the star look white. Changing the star’s temperature will make it look orange, or yellow, or red, or blue, but you just can’t get green. Our eyes simply won’t see it that way.
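You can see this numerically with Planck’s law: integrate a Sun-like 6000 K blackbody curve over crude blue, green and red bands, and no band wins by much, which is why the blend reads as white. A sketch (the three 100-nm-wide bands are a deliberate oversimplification of real cone sensitivities):

```python
import math

# Planck's law gives the spectral radiance of a blackbody; here we
# compare the energy a 6000 K star pours into crude blue/green/red bands.
H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance (arbitrary overall scale)."""
    return (1.0 / wavelength_m**5) / math.expm1(H * C / (wavelength_m * K * temp_k))

def band_energy(lo_nm: float, hi_nm: float, temp_k: float, steps: int = 200) -> float:
    """Midpoint-rule integral of the Planck curve over a wavelength band."""
    step = (hi_nm - lo_nm) / steps
    return sum(planck((lo_nm + (i + 0.5) * step) * 1e-9, temp_k) * step
               for i in range(steps))

bands = {"blue": (400, 500), "green": (500, 600), "red": (600, 700)}
energy = {name: band_energy(lo, hi, 6000) for name, (lo, hi) in bands.items()}
peak = max(energy.values())
for name, e in energy.items():
    print(f"{name:5s}: {e / peak:.2f}")
```

Run it and the three bands come out within about 20 percent of each other; none dominates, so the eye blends them to white rather than green.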
That’s why there are no green stars. The way stars emit light, combined with the way our eyes perceive color, pretty much guarantees it.
But that doesn’t bug me. If you’ve ever put your eye to a telescope and seen gleaming Vega or ruddy Antares or the deeply orange Arcturus, you won’t mind much either. Stars don’t come in all colors, but they come in enough colors, and they’re fantastically beautiful because of it.
July 29, 2008 (Computerworld) NASA last week launched a new interactive Web site, jointly developed with the non-profit Internet Archive, which initially combines some 21 separately stored and managed NASA imagery collections into a single online resource featuring enhanced search, visual and metadata capabilities.
The new portal stores more than 140,000 digitized high-resolution NASA photographs, audio and film clips. The launch of the site marks the end of the first phase of a five-year joint NASA-Internet Archive effort to ultimately make millions of items from NASA's historic image collections accessible online to the public and to researchers, noted Debbie Rivera, manager of strategic alliance at NASA.
The first content available on NASA's imagery Web site includes photos and video of the early Apollo moon missions, views of the solar system from the Hubble Space Telescope and photos and videos showing the evolution of spacecraft and in-flight designs.
The five-year joint development agreement signed in 2007 will also lead to the embedding of Web 2.0 tools into the site. For example, engineers are developing Wikis and blogs for users to share content. The team has already started adding metatags to improve search results, Rivera said. "There's a lot more to come," she added. "This is only the beginning."
In about a year, the partnership will tackle the enormous task of on-site digitizing of still images, films, film negatives and audio content currently stored on analog media across NASA field centers, Rivera said. Speed is essential, she noted, as some of NASA's older analog recordings and film footage of events as far back as 1915 are "disintegrating." "This is one of the largest aspects of this partnership," she admitted.
Internet Archive, founded in 1996 to create an Internet-based library, will manage and host NASA's new interactive image gallery on the cluster of 2,000 Linux servers at its San Francisco headquarters, said John Hornstein, director of the NASA images project for the group. The non-profit currently runs 2 petabytes of storage, Hornstein said.
Hornstein acknowledged some hiccups following the launch of the site last week, when servers crashed, causing intermittently sluggish response times. In addition, software behind the zoom-in functionality for thumbnail images on the NASA site is still being debugged. He downplayed any lingering effects, however. "We're just finding where the issues are, and we don't see any of this as an ongoing problem," remarked Hornstein.
Internet Archive is using software donated by Luna Imaging Inc. to help develop and support the NASA images project.
The International Space Station as captured by the crew of STS-124 aboard Space Shuttle Discovery on June 11, 2008. (Photograph by NASA)
The International Space Station isn't scheduled to be completed for two more years, but a growing chorus of engineers and executives is already brainstorming about what to do with the ISS after its life span ends in 2015. Given how long it has taken to put together the actual pieces in space—the Japanese experiment module Kibo was finally installed just a few weeks ago—and the tens of billions of dollars sunk into the station, it's understandable that many would like to see the working power of the ISS extended to 2020 or beyond. Plans range from the humble, like guiding it into a fiery reentry, to the ludicrous, like driving the station to the moon and parking it there.
I say ludicrous because of the complex engineering issues involved with rethinking the space station's life span. It is, after all, made up of many separate pieces that have been delivered and assembled at different times. The original core, the Russian Zarya module, has a rated lifetime on orbit of 15 years, but it was launched almost 10 years ago and should be removed and deorbited in 2013. In practice, of course, space hardware often lasts long past its design life, and there may be ways of refurbishing those things on orbit (e.g., seals) that might degrade, while the basic structure remains sound. Nonetheless, an aging station will raise the same issues that are currently forcing the space shuttle into retirement to avoid an expensive "recertification"—particularly since no one really knows what that means, since it was never "certified" in the first place.
Will China, Russia or Hoteliers Close in on a Post-U.S. ISS?

The biggest immediate problem with operating toward an indefinite future is a basic one: transportation to change out crews and deliver cargo such as food, water and clean clothes. After the shuttle is retired (planned for 2010), the only available way of getting to the ISS will be the Russian Soyuz, which is currently in use. NASA says its new Orion/Ares launch system could resupply the ISS when it's ready in 2015, but there are doubts that NASA will be involved with the station past 2015. The agency remains noncommittal; the United States government has no policy covering the ISS's retirement. "[Michael Griffin] has said that he'd like to see it continue beyond that period, but that's a decision for the next administration, or perhaps the one after it," says NASA spokesman John Yembrick. "There is no specific plan for the facility after 2015." (As I outlined for PM earlier this year, John McCain and Barack Obama's space policies have begun to lay out ideas on that front.)
If America's focus on the moon and Mars removes the U.S. from the ISS partnership, there would be nothing to keep the remaining partners from bringing in new players, such as China, with their own resupply capabilities. The station's foreign partners and other stakeholders recently met to discuss the issue. And there's real interest from the Europeans and Japanese in developing their own independent capabilities to provide crew and cargo transfer. European countries are also talking to Russia about a joint effort for a new crew vehicle.
Some space program proponents want to see the United States stay involved with the ISS. "I'd like to think that we will continue to support it and participate actively," says Marty Hauser, vice president of operations research and analysis for the Space Foundation, an advocacy group based in Colorado Springs. "It is a good model for an international alliance that we may ultimately need to see [the planned U.S. trip to the moon and Mars] to fruition, given how expensive it is going to be. We may not be able or willing to afford it on our own."
Hauser adds that the ISS could be used to support NASA's moon and Mars missions, noting that it could be used for simulating long-duration, deep-space missions and as a long-term testbed for needed exploration technologies, in addition to its planned use as a weightless research facility.
Another player who wants to see the ISS continue on well past its sell-by date is Tom Pickens, son of famous oilman T. Boone Pickens and CEO and chairman of Spacehab. The company recently signed an agreement with NASA to use the ISS for biological and other weightless-environment research, and for some space on a few of the remaining shuttle flights for final assembly of the station. "The ability to utilize the unique microgravity environment for industrial processing purposes is expected to revolutionize a myriad of industries," he argues. "We believe the utilization of the ISS as a national lab will have a significant social and economic impact and shows great promise of saving lives and providing thousands of new jobs in the coming years."
It would make sense for Robert Bigelow, whose company is building a space station for private use, to be interested in using the ISS as a hotel, but Bigelow Aerospace would rather build from scratch. "We're pretty focused on building our own, for our own purposes in our own orbits," said Chris Reed, director of publicity for the company. "That's keeping us pretty busy without thinking about using the ISS."
Can Engineers Really Move the ISS?

Earlier this month, Michael Benson jump-started a lot of the public talk about the real future of the ISS in an op-ed for The Washington Post. Deciding that the station is simply in the wrong place, Benson proposed that it be refitted as an interplanetary spaceship.
Benson's idea suffers from several flaws. The ISS is designed for operations in low Earth orbit (LEO), but that is a unique environment. Had trips beyond that altitude been the station's intended use, both the requirements and the design would have looked very different. Tom Jones, a four-time shuttle astronaut and PM's guru on space who's looked at the future of the ISS here before, notes that the station is designed for LEO and should stay there: "It isn't designed to operate for long periods of time without resupply of things like food, water and spare parts for maintenance. You'd have to develop a duplicate interplanetary system anyway just to deliver the supplies and rotate the crew."
Another flaw to making the station a spaceship is a lack of radiation shielding: Once out of LEO, the crew and the electronics, including solar panels, would be vulnerable to normal radiation levels and spikes caused by solar flares. Picking a thruster that will not damage the ISS's structure is another major challenge.
But there's a more plausible potential relocation: In order to get a useful amount of cargo to the ISS with the space shuttle, the station is saddled with a lower altitude than would be optimal. This low orbit produces more atmospheric drag and less time for the solar panels to spend in daylight. Also, more damage is done to the structure by monatomic oxygen in the upper atmosphere, which reacts with and "rusts" it over time. Other rockets, because their upper stages are so much smaller than the shuttle orbiter, are much less sensitive, and there isn't as much payload penalty for them to go higher. So it's likely that, with the retirement of the shuttle, the nominal orbit of the space station will be raised in altitude, reducing corrosion and increasing the amount of energy that the panels can collect per orbit.
What Will the Dying Days Look Like?

So far, all space stations ever built have met a fiery end, burning up in the atmosphere—with a few pieces even reaching the ground. If nothing is done to save it, the ISS will suffer the same fate. According to the 1967 Outer Space Treaty, to which the U.S. is a signatory, the U.S. government is responsible and liable for all objects put into space by U.S. entities, governmental or commercial. If title and keys of the ISS were to be transferred to some other public or private organization, the responsibility would remain with the U.S. government. If it is handed to another government, that government would be responsible.
A derelict space station is a hazard to others sharing its orbit, and eventually, its orbit will decay if control is not maintained—and it could come down anywhere. That was the case when Skylab, America's first space station, had a violent retirement in 1979. The station had no systems aboard to deorbit it on command, so it spiraled down, slowly at first and then rapidly as it got into thicker air. Several large pieces crashed in Australia, without injury or property damage. But an ISS death trip will likely be controlled, similar to the way the Russian Mir station came back to Earth. The Russians brought Mir down deliberately with a deorbit burn that aimed the station at an uninhabited part of the South Pacific. Like Skylab's, Mir's demise became a major cultural event, with Taco Bell, in a publicity stunt, floating a large target in the ocean and promising free tacos for a day if any piece hit it. While such things are not precise, the Russians did a pretty good job of it, with the biggest pieces landing in the ocean near Fiji. No one got any free tacos.
"That's 500 billion planets out there, and bear in mind there are 100 billion other galaxies. To think this [the Earth] is the only place where anything interesting is happening, you have got to be really audacious to take that point of view."
Seth Shostak, SETI senior astronomer
Some leading astronomers are quite confident that mankind will make contact with intelligent alien life within two decades. The search for extraterrestrial life will leap forward next year when NASA launches the Kepler space telescope. The instrument will be constantly scanning the same 100,000 stars over its four-year mission with the exciting objective of discovering Earth-sized planets in the habitable zones around suns.
This will allow SETI to home in on where the odds of finding life are greatest. Currently, SETI’s mission to find life on other planets is like trying to find the proverbial needle in a haystack. But once Kepler identifies the planets most likely to sustain life, the team at SETI will be able to focus on those solar systems using deep-space listening equipment. This will be a huge upgrade from their present work of randomly scanning the outer reaches of space for some kind of sign or signal. Also upping the ante is the recent discovery of Earth-like planets outside our solar system, which has led astrophysicists to conclude that Earth-like planets are likely relatively common in our galaxy.
"Everything has caused us to become more optimistic," said American astrophysicist Dr Frank Drake in a recent BBC documentary. "We really believe that in the next 20 years or so, we are going to learn a great deal more about life beyond Earth and very likely we will have detected that life and perhaps even intelligent life elsewhere in the galaxy."
However, some astrophysicists have warned that we humans may be blinded by our familiarity with carbon and Earth-like conditions. In other words, what we’re looking for may not even lie in our version of a “sweet spot”. After all, even here on Earth, one species’ “sweet spot” is another species’ worst nightmare. In any case, it is not beyond the realm of possibility that our first encounter with extraterrestrial life will not be a solely carbon-based occasion.
Alternative biochemists speculate that there are several atoms and solvents that could potentially spawn life. Because carbon has worked under the conditions on Earth, we tend to assume the same must be true throughout the universe. In reality, many other elements could potentially do the trick. Even counterintuitive elements such as arsenic may be capable of supporting life under the right conditions. Even on Earth, some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines, and several other small life forms use arsenic to generate energy and facilitate growth. Chlorine and sulfur are also possible elemental replacements for carbon: sulfur is capable of forming long-chain molecules like carbon, and some terrestrial bacteria have already been discovered that survive on sulfur rather than oxygen, reducing sulfur to hydrogen sulfide.
Nitrogen and phosphorus could also potentially form biochemical molecules. Phosphorus is similar to carbon in that it can form long chain molecules on its own, which would conceivably allow for formation of complex macromolecules. When combined with nitrogen, it can create quite a wide range of molecules, including rings.
So what about water? Isn’t at least water essential to life? Not necessarily. Ammonia, for example, has many of the same properties as water. An ammonia or ammonia-water mixture stays liquid at much colder temperatures than plain water. Such biochemistries may exist outside the conventional water-based "habitability zone". One example of such a location would be right here in our own solar system on Saturn's largest moon Titan.
Hydrogen fluoride, methanol, hydrogen sulfide, hydrogen chloride, and formamide have all been suggested as solvents that could theoretically support alternative biochemistry. All of these “water replacements” have pros and cons when considered in our terrestrial environment. What needs to be considered is that with a radically different environment come radically different reactions. Water and carbon might be the very last things capable of supporting life in some extreme planetary conditions.
At any rate, the odds of there being some type of life somewhere out there are good. As for intelligent life, well, that will depend on the definition of intelligence. There are a lot of other intelligent species here on Earth besides humans that we don’t generally regard as such. In spite of many Star Trek episodes to the contrary, the odds of alien life forms having evolved to talk, look and act exactly like super hot humans are slim to none. If life is out there, it will have evolved according to its particular niche in the universe and will likely be quite foreign to us in the way it looks, communicates and thinks. We might not even be able to recognize hypothetical life forms as alive in the sense that we understand life. In fact, it would be more “miraculous” if we could effectively communicate with extraterrestrial life than to find that it exists. From that perspective, even if there are other life forms out there, we’d still be alone in the universe. Of course, that doesn’t mean we shouldn’t look for the answers.
Posted by Rebecca Sato.