Thursday, May 22, 2008

How could anyone not know that Titan's atmosphere is mostly nitrogen?

The question, which springs from this mea culpa post by Phil Plait on his always-entertaining Bad Astronomy blog, speaks to one half of a serious issue that has puzzled me for many years and, in more recent times, caused me considerable concern as a parent.

If everyone makes mistakes -- and they do -- why are so many people unwilling to accept the mistakes of others (not to mention their own)?

From Plait's post:

Well, I blew it, and I suppose I should make it official.

In my second video answering questions from sixth graders, I said that Titan's atmosphere is mostly methane.

Bzzzzzt. It's mostly nitrogen (specifically, N2, like in Earth's atmosphere). It's only about 1% methane.

A simple slip of the tongue, albeit repeated twice, he explains.

In this case, comments on the post are understanding, sympathetic and even amusing -- "It's OK, dude. It's hard to answer questions like that when you're ducking to avoid sniper fire." That's as would be expected, given that those writing are fans of the blogger.

But it isn't always that way, as anyone who has put their words and thoughts before the public for any length of time can attest. I cannot imagine there are too many writers who have not been on the receiving end of this question: "How could someone who doesn't know (fill in the blank) write for (fill in the blank)?"

My temptation has always been to reply that I have considered their point, tendered my resignation, and expect that they will help pay for next week's groceries.

One of those responding to Plait's mistake raises a more useful point:

In actuallity, your providing those 6th graders (and many others) with an invaluable lesson about how science works. Science makes mistakes. Science can change when better answers are found. I think those kids will be much richer in their quest for knowledge because of it. Kudos.

Yes, I left the spelling errors in there on purpose; can't tell you whether the writer intended to do the same.

After all, it's OK to make mistakes, not only in spelling, science, journalism and responding to bloggers, but in all walks of life. That doesn't mean that mistakes don't have consequences or that there isn't a world of difference between saying methane when you mean nitrogen ... and, say, drunken driving.

But, in general, it's OK to make mistakes. I've said as much to my daughter Emma countless times (as has her Mom), including a stretch when it was a bedtime ritual that I would utter that exact phrase 100 times as fast as possible while I counted and she giggled. That she can be so devastated by her own mistakes -- at age 6 -- is no laughing matter.

Emma knows that her Dad writes stories on the Internet ... and that he makes mistakes. Tonight I'll get to show her the story about the really smart scientist who made (for him) a really silly one. Maybe that will help.

Not sure what to do about the infallible and those unwilling to forgive.

Welcome regulars and passersby. Here are a few more recent Buzzblog items. And, if you'd like to receive Buzzblog via e-mail newsletter, here's where to sign up.

The REAL sticking point between Microsoft and Yahoo!

Worst of the lot for two years running: PCMall and PCConnection.

YouTube's down, everybody panic.

Photographer/Internet 1, Bully 0.

"Bribe" or miswording: You make the call.

This Year's 25 Geekiest 25th Anniversaries.

Google renames the Persian Gulf.

Top 10 Buzzblog posts for '07: Verizon's there, of course, along with Gates, Wikipedia and the guy who lost a girlfriend to Blackberry's blackout.

8 can't-miss tech predictions ... for 1998

Original here

Birth cry of a supernova

Very, very cool news today: for the first time in history, astronomers have unambiguously observed the exact moment when a star explodes.

Whoa.

The Quick Version

NGC 2770 is a galaxy at the relatively close distance of 84 million light years. On January 9, 2008, a massive star in it exploded, and instead of finding out days or weeks later, astronomers caught it in the act, right at the moment, in flagrante delicto. The image above, from the Gemini observatory, shows the galaxy and its new supernova.

We’ve seen lots of stars explode; thousands in fact. But because of the mechanics of how a star actually explodes, by the time we notice the light getting brighter, the explosion may be hours or even days old. This time, because it was caught so early, astronomers will learn a whole passel of new knowledge about supernovae.

That’s the brief summary… but this event has a rich back story. Pardon me the lengthy description, but this is very cool stuff, and I think you’ll enjoy having the details.

The Death of a Star

I’ve explained before how massive stars explode. After a few million years of generating energy by fusing light elements into heavier ones (hydrogen to helium, helium to carbon, and so on), the core runs out of fuel. Iron builds up in the very center of the star, and no star in the Universe has what it takes to fuse iron: fusing it costs more energy than it yields. It’s like ash piling up in a fireplace. At some point, so much iron builds up that it cannot support its own weight, and the ball of iron collapses.

In a millisecond, more than the mass of the Sun’s worth of iron collapses from an object the size of the Earth to a ball only about 20 kilometers across. Weirdly, it happens so quickly that the surrounding layers don’t have time to react, to fall down (think Wile E. Coyote running over a cliff’s edge). While they hesitate and just start to fall, all hell breaks loose below them.

Artist’s drawing of a supernova (artwork by Dana Berry)

The collapse of the core generates a vast explosion, a shock wave of incomprehensible power that moves out from the surface of the collapsed core into the layers of gas still surrounding it. Like a tsunami of energy, this shock wave works its way up to the surface of the star. The energy is so huge that the material it slams into gets heated to millions of degrees. When the shock wave breaks out through the surface, it bellows its freedom to the Universe with a flash of X-rays, a brief but incredibly brilliant release of high-energy light.
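
To put a rough number on "incomprehensible power," here's a back-of-the-envelope sketch (my approximation, not from the observations): the gravitational energy liberated by the collapse is of order G*M^2/R, assuming a 1.4-solar-mass core shrinking to a ball 20 kilometers across.

```python
# Order-of-magnitude estimate of the gravitational energy released
# by core collapse, E ~ G * M^2 / R. All values are assumptions.
G = 6.674e-11            # gravitational constant (m^3 kg^-1 s^-2)
M = 1.4 * 1.989e30       # core mass: ~1.4 solar masses, in kg
R = 10_000               # final radius: 20 km across -> 10 km, in m

E = G * M**2 / R
print(f"E ~ {E:.1e} joules")  # ~5e46 J, around 10^53 erg
```

That's hundreds of times more energy than the Sun will radiate over its entire lifetime, released in under a second, with most of it carried off by neutrinos.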

After that flash (what’s called the shock breakout), the explosion truly begins. The outer layers of the star — gas that can have many times the mass of the Sun, octillions of tons — blast outwards. The star tears itself apart, and becomes a supernova.

So — and this is important — the X-ray flash is emitted immediately, but the bright visible light follows it later.

Millions of light years away, astronomers on Earth patrol the sky with telescopes. We use ones that are sensitive to visible light, the kind we see with our eyes. They can see large parts of the sky easily, and that’s good since we never know where the next supernova will be. But that also means we don’t see the shock breakout; we notice the star exploding usually days after the actual event occurs.

We really want to see the supernova as early as possible; the physics, the mechanics of the explosion depend critically on what happens early on, so the younger we see the new supernova, the better we can tune our models and understand precisely how and why stars explode. X-ray telescopes could detect the first moments of the event, but they typically have too small a field of view to catch one as it occurs. The odds are simply way too small, and no one has ever actually witnessed the predicted X-ray flash from the shock breakout.

Until now.

Now, finally, we have seen a supernova go off right at the moment it happened. And, ironically, it was an accident, an amazing coincidence, that allowed it to happen.

The Astronomer, the Coincidence, the Mobilization

NASA’s Swift satellite is sensitive to X-rays; it’s designed to detect the flash of light from gamma-ray bursts, which are a particular flavor of supernova. But Swift can be used to observe older events, too. Alicia Soderberg, an astronomer at Princeton, had gotten time on Swift to observe SN2007uy, a supernova that had exploded the month before. This was a routine observation, and on January 9, 2008, she was actually on travel, ironically giving a talk about supernovae!

When she got back from her talk, she logged onto the Swift archive to look at the data as it came in, and got a huge surprise. There was a second, new source of X-rays in the field of view… and it was incredibly bright. She quickly realized what she was seeing: a new supernova caught in the act, the X-ray flash of the shock breakout detected for the first time. She realized Swift had caught the birth of supernova 2008D as it was happening.

The image above shows the pre- and post-discovery Swift images of the event. The upper images are in ultraviolet light, and show the galaxy NGC 2770. The bottom images show the same field, but in X-rays, where the galaxy itself is dim, but stars and star-forming gas clouds are bright. The images in the left column were taken on January 7, 2008, and those in the right column two days later, during the shock breakout. You can see how SN2008D is rather unremarkable in ultraviolet, but in X-rays is tremendously bright, washing out everything else in the galaxy.

Mind you, the X-ray flash from the supernova only lasted about five minutes. If Swift had not been looking at exactly that spot at exactly that moment, this once-in-a-lifetime opportunity would have been lost.

That’s how cool this is.

I can only imagine what Soderberg was thinking when she saw that image on the lower right. Wow. But she acted quickly. She and her colleagues immediately mobilized a team of astronomers using telescopes across the world — and above it — to observe the newly born supernova. Using the Gemini telescope (the same one that made the beautiful picture at the top of this post) they quickly got spectra, which confirmed it: she had bagged an exploding star. The energies of a supernova are so great that the outer layers explode outwards at a fraction of the speed of light, and velocities measured from SN2008D indicated expansion rates of more than 10,000 kilometers per second — fast enough to cross the entire Earth in just over one second, and faster than the expansion in a typical supernova.
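
That Earth-crossing comparison is easy to sanity-check; the 10,000 km/s comes from the measured spectra, and Earth's mean diameter of about 12,742 km is the standard value.

```python
# Sanity check: how long does material moving at 10,000 km/s
# take to cross the Earth?
earth_diameter_km = 12_742   # mean diameter of Earth
ejecta_speed_km_s = 10_000   # expansion rate measured for SN2008D

crossing_time_s = earth_diameter_km / ejecta_speed_km_s
print(f"{crossing_time_s:.2f} seconds")  # ~1.27 s, just over one second
```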

The Aftermath

Because so many people observed this supernova from so early on, a vast wealth of knowledge was collected. The progenitor star probably started out life with about 30 times the mass of the Sun. Over a few million years, it shed quite a bit of its mass through a dense, super-solar wind, blowing off most (but probably not all) of its outer layers. When the core collapsed, the shock wave tore through what was left of the star’s envelope. Because there wasn’t as much material as usual surrounding the core, the energy of the blast could accelerate the gas outward at an unusually high speed. It’s also been determined that the explosion wasn’t symmetric: it wasn’t a perfect sphere, with the gas expanding in every direction equally. Instead it was off-center, with gas on one side of the explosion moving outward faster than on the other. This has been seen before, but never so early on.

All of this adds up to an incredible boon for astronomers. The observations collected are yielding a huge amount of information not just on this particular event, but on the basic parameters of supernova explosions themselves. Because of this happy coincidence — a new supernova occurring near an older one, and just as a powerful and sensitive X-ray observatory was pointed in the right direction, and with attentive scientists keeping an eye on their data — astronomers will take a huge step forward in understanding these tremendous explosions.

The Importance

You should understand something else here, too. As the blast wave moves through the gas of the star, the elements in that gas undergo an explosive fusion, creating new, heavier elements. Elements like iron, calcium, and gold. The hemoglobin in your blood, the bones in your body, and the wedding ring on your finger — all of these can trace their lineage back to a star that exploded like SN2008D. Every heavy element in the Universe was created in such an event, in the heart and fury of a supernova.

We owe our very existence to stars that explode.

That’s why work like this is important. Through science like this we can determine our own origins, from the hydrogen that formed a millisecond after the Big Bang, through elements built up in normal stars like the Sun, through heavy elements created in supernovae… to us.

That’s where science leads. We look out to the farthest reaches of the Universe, and we wind up seeing ourselves.

Original here

Who Owns the Moon? The Case for Lunar Property Rights

Published in the June 2008 issue.

The moon has been in plain view for all of human history, but it's only within the past few decades that it's been possible to travel there. And for just about as long as the moon has been within reach, people have been arguing about lunar property rights: Can astronauts claim the moon for king and country, as in the Age of Discovery? Are corporations allowed to expropriate its natural resources, and individuals to own its real estate?

The first article on the subject, "High Altitude Flight and National Sovereignty," was written by Princeton legal scholar John Cobb Cooper in 1951. Various theoretical discussions followed, with some scholars arguing that the moon had to be treated differently than earthbound properties and others claiming that property laws in space shouldn't differ from those on Earth.

With the space race in full flower, though, the real worry was national sovereignty. Both the United States and the Soviet Union wanted to reach the moon first but, in fact, each was more worried about what would happen if they arrived second. Fears that the competition might trigger World War III led to the 1967 Outer Space Treaty, which was eventually ratified by 62 countries. According to article II of the treaty, "Outer Space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means."

So national appropriation was out, along with fortifications, weapons and military installations. But what about private property rights—personal and corporate? Some scholars argue that property rights can exist only under a nation's dominion, but most believe that property rights and sovereignty can be distinct.

In something of an admission that this is the case, nations that thought the Outer Space Treaty didn't go far enough proposed a new agreement, the Moon Treaty, in 1979. It explicitly barred private property rights on the moon. It also provided that any development, extraction and management of resources would take place under the supervision of an international authority that would divert a share of the profits, if any, to developing countries.

The Carter administration liked the Moon Treaty, but space activists, fearful that the sharing requirement would subjugate American mineral claims to international partners, pressured the Senate, ensuring that the United States didn't ratify it. Although the Moon Treaty has entered into force among its 13 signatories, none of those nations is a space power.

So property rights on the moon are still the subject of international discussion. But would anyone buy lunar land? And what would it take to establish good title?

The answer to the first question is clearly "yes." Lots of people would buy lunar land—and, in fact, lots of people have, sort of. Dennis Hope, owner of Lunar Embassy, says he's sold 500 million acres as "novelties." Each parcel is about the size of a football field and costs $16 to $20. Buyers choose the location—except for the Sea of Tranquility and the Apollo landing sites, which Hope has placed off-limits.

To convey good title, Hope essentially wrote the U.N. to say he was going to begin selling lunar property. When the U.N. didn't respond with an objection, he asserted that this allowed him to proceed. Although I regard his claim to good title as dubious, his customers have created a constituency to recognize his position. If he sells enough lunar property, it may become a self-fulfilling prophecy.

So there's demand, even for iffy titles. But what would it take to establish title, rather than Dennis Hope's approximation? That's not so clear. In maritime salvage law, which also deals with property rights beyond national territory, actually being there is key: Those who reach a wreck first and secure the property are generally entitled to a percentage of what they recover. There's even some case law allowing that presence to be robotic rather than human. Traditionally, claims to unclaimed property require long-term presence, effective control and some degree of improvement. Those aren't bad rules for lunar property, either. But who would recognize such titles?

Individual nations might. In the 1980 Deep Seabed Hard Mineral Resources Act, the United States recognized deep-sea mining rights outside its own territory without claiming sovereignty over the seabed. There's nothing to stop Congress from passing a similar law relating to the moon. For that matter, there's nothing to stop other nations from doing the same.

Ideally, title would be recognized by an international agreement that all nations would endorse. The 1979 Moon Treaty was a flop, but there's no reason the space powers couldn't agree on a new treaty that recognizes property rights and encourages investment. After all, the international climate has warmed to property rights and capitalism over the past 30 years.

I'd like to see something along these lines. Property rights attract private capital and, with government space programs stagnating, a lunar land rush may be just what we need to get things going again. I'll take a nice parcel near one of the lunar poles, please, with a peak high enough to get year-round sunlight and some crater bottoms deep enough to hold ice. Come visit me sometime!

PM contributing editor, Instapundit blogger and University of Tennessee law professor Glenn Harlan Reynolds is the author (with Robert P. Merges) of Outer Space: Problems of Law and Policy.

Original here

Creationism Creeps into U.S. Classrooms

One in eight U.S. high school biology teachers presents creationism or intelligent design in a positive light in the classroom, a new survey shows, despite a recent federal court ruling against the practice.

And a quarter of the nation's high school biology teachers say they devoted at least one or two classroom hours to the topics, with about half presenting them favorably and half presenting them as an invalid alternative.

Those results come from a nationally representative, random sample of 939 teachers who filled out surveys between March 5 and May 1, 2007, on questions concerning the teaching of evolution. The figures have a 3 percent margin of error.
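
That 3 percent figure matches what the standard worst-case margin-of-error formula gives for a sample of 939, as a quick check shows:

```python
# Worst-case margin of error at 95% confidence for a sample of 939.
import math

n = 939    # survey sample size
p = 0.5    # worst-case proportion (maximizes the margin)
z = 1.96   # z-score for 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"{margin:.1%}")  # ~3.2%, which rounds to the reported 3 percent
```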

The research, funded by the National Science Foundation, also revealed that between 12 percent and 16 percent of the nation's biology teachers are creationists, and about one in six of them have a "young Earth" orientation, which means they believe that human beings were created by God in their present form within the past 10,000 years.

Scientists, on the other hand, agree that humans evolved from a common primate ancestor in a process that stretches back tens of millions of years. The theory of evolution on which this is based is one of the most well-supported theories in science.

By design

The highly publicized Dover ruling in 2005 banned the teaching of intelligent design in Pennsylvania public school science classes, and there have been many other legal victories at the state and local level for the teaching of evolution, but there is a disconnect between these rulings, science and what really happens in high school biology classes, said study leader Michael B. Berkman, a political scientist at Penn State University.

In the end, it is teachers, more than court cases, that determine what is presented in science class, the new research suggests.

"The status of evolution in the biology and life sciences curriculum remains highly problematic and threatened," writes Berkman and his colleagues, including Eric Plutzer and Julianna Sandell Pacheco, both of Penn State, in a peer-reviewed essay on the survey in the latest issue of the journal PLoS Biology.

Berkman and Plutzer have a longstanding project that focuses on the responsiveness of school districts to public opinion.

"This issue [the teaching of evolution] is particularly interesting in that context because the public opinion on it is in many ways so far away from where the experts are," Berkman told LiveScience. For instance, about 38 percent of Americans would prefer that creationism be taught instead of evolution, according to a 2005 poll by the Pew Forum on Religion and Public Life.

Hottest button

Other details of the survey results:

- The majority of biology teachers spend between 3 and 15 hours on evolution, which the National Academy of Sciences considers to be the most important concept of biology.

- The majority of teachers spend no more than five hours on human evolution.

- Only 23 percent of teachers strongly agreed that evolution is the unifying theme for their biology or life sciences courses, though the majority of teachers see evolution as essential to high school biology.

- The more biology or life sciences classes taken in college by a teacher, the more evolution they taught.

"This is the hottest of the hot buttons," Berkman said of the teaching of evolution. Even the strongest legal ruling "still gives boards of education, school districts and especially teachers considerable leeway."

Victory in the courts and state standards will not ensure that evolution is included in high school science classes, Berkman and his colleagues conclude. A bigger impact would come by focusing on certification standards for high school biology teachers, such as requiring all teachers to complete a course in evolutionary biology.

Berkman said he didn't know if this was likely to happen, but he hopes the new research "captures the attention of people who make decisions on that, science educators as well as scientists."

Original here

The "Trust Me" Drug That Makes You Take Social Risks

What if you could convince people to trust you and take risks for you with just a few drops of liquid surreptitiously placed in their water? There would be no drunkenness, no rufie-esque glazed eyes: just pure, human trust created via chemicals. The person wouldn't even know they'd been dosed. A study coming out tomorrow in the journal Neuron explains how this scenario is possible today, with just a small dose of the brain chemical oxytocin.


Oxytocin is a chemical associated with many of the "pleasurable" feelings you have, from basic trust, to love and orgasm. Researchers in Switzerland theorized that people playing social trust games might change their behaviors if given doses of oxytocin, since the chemical might artificially enhance their willingness to trust someone. Indeed, they were right: subjects dosed with oxytocin were willing to trust people even after they'd been explicitly told that those people had behaved in untrustworthy ways in the past. People who had not been dosed did not trust the "untrustworthy" people.

According to a release from Neuron:

In their experiments, the researchers asked volunteer subjects to play two types of games—a trust game and a risk game. In the trust game, subjects were asked to contribute money, with the understanding that a human trustee would invest the money and decide whether to return the profits, or betray the subjects' trust and keep all the money. In the risk game, the subjects were told that a computer would randomly decide whether their money would be repaid or not.

The subjects also received doses of either the brain chemical oxytocin (OT) or a placebo via nasal spray. They chose OT because studies by other researchers had shown that OT specifically increases people's willingness to trust others.

During the games, the subjects' brains were scanned using functional magnetic resonance imaging. This common analytical technique involves using harmless magnetic fields and radio waves to map blood flow in brain regions, which reflects brain activity.

The researchers found that—in the trust game, but not the risk game—OT reduced activity in two brain regions: the amygdala, which processes fear, danger and possibly risk of social betrayal; and an area of the striatum, part of the circuitry that guides and adjusts future behavior based on reward feedback.

Baumgartner and colleagues concluded that their findings showed that oxytocin affected the subjects' responses specifically related to trust . . . "If subjects face social risks, such as in the trust game, those who received placebo respond to the feedback with a decrease in trusting behavior while subjects with OT demonstrate no change in their trusting behavior although they were informed that their interaction partners did not honor their trust in roughly 50% of the cases."
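
To make the two-game design concrete, here's a toy sketch of the trust game's structure and the reported placebo/OT difference; the payoffs, stakes and adjustment rule are illustrative guesses, not the study's actual parameters.

```python
import random

# Toy sketch of the trust game described above. The payoffs, stakes
# and adjustment rule are illustrative guesses, not the study's numbers.

def play_round(invested, partner_betrays):
    """The investment is tripled; the trustee either returns half the
    pot or betrays the subject and keeps it all (assumed payoffs)."""
    pot = invested * 3
    return 0 if partner_betrays else pot // 2

def run_subject(dosed_with_ot, rounds=10):
    """Reported effect: placebo subjects cut their stake after betrayal
    feedback, while oxytocin (OT) subjects keep trusting at full level."""
    invested, winnings = 10, 0             # starting stake (assumed)
    for _ in range(rounds):
        betrayed = random.random() < 0.5   # partners betray ~50% of rounds
        winnings += play_round(invested, betrayed)
        if betrayed and not dosed_with_ot:
            invested = max(invested - 2, 0)
    return invested, winnings

random.seed(1)
for label, dosed in (("placebo", False), ("oxytocin", True)):
    stake, winnings = run_subject(dosed)
    print(f"{label}: final stake {stake}, total returned {winnings}")
```

In the risk game, the same payoff is decided by a computer's coin flip rather than a human trustee; that contrast is what let the researchers attribute the effect to social trust rather than general risk tolerance.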

So basically you've got the world's scariest date-rape drug ever — one that persuades people to trust the untrustworthy and take risks with them. The researchers don't see it that way, however. They think it means there's potential to help people with social phobias who have trouble responding with normal trust levels in situations that call for it. I'm all for that, but I'm not looking forward to hearing about oxytocin parties in dorms.

Brain's Trust Machinery Identified [Eurekalert]

Original here

Garbage Will Lead the Biofuel Revolution

“Don’t let invasive biofuel crops attack your country” was the warning delivered by concerned scientists yesterday at a UN meeting in Germany. Scientists from the Global Invasive Species Program (GISP), the Nature Conservancy and the International Union for Conservation of Nature all warned that bioenergy crops could prove ecologically and economically disastrous, as many of the proposed energy crops are in fact invasive species.

This warning could easily be aimed at entrepreneurs and venture capitalists, and be read as: “Don’t let invasive biofuel crops attack your business plan/investment.” Indeed, while Coskata and other cellulosic biofuel startups are stressing “non-food feedstocks,” perhaps the real next step is “non-crop feedstocks” — to that end, many biofuel startups are targeting garbage, waste and leftovers as feedstocks that are both low-cost and low-risk.

In their new report, “A Risk Assessment of Invasive Alien Species Promoted for Biofuels,” GISP lists 28 plant species already being cultivated for biofuel use that are classified as invasive. And “invasives,” as defined by the National Invasive Species Council, can be subject to stifling federal and state regulations.

This is a serious potential regulatory monkey wrench for the biofuel startups depending on America’s cropland being quickly converted to green waves of energy grasses. But it could be a boon for startups focusing on using waste products as biofuels. Agricultural wastes, like corn stover and sugarcane bagasse, are being targeted in the U.S. by startups like Mascoma and Coskata and in Brazil by Brenco and KiOR.

Coskata’s first biorefineries will use currently available feedstocks — wood chips, sugarcane waste and municipal trash. The company estimates that municipal waste could be used to produce 8-10 billion gallons of fuel annually. The untapped market of industrial waste could double that. “Industrial waste gases off of steel mills could provide another 10 billion gallons,” said Wes Bolsen, Coskata’s chief marketing officer and VP. “Those gases are exactly what Coskata’s microbes could eat. They’re burning bug food. Coskata is actively approaching steel producers to turn those gases into fuel.”

But Coskata still believes there’s a place for sustainably and safely grown energy crops. “The energy crop market is a few years away,” Bolsen told us. “We’re going to wait and see it develop in a sustainable way before we decide to build a $400 million biorefinery dependent on any one of those crops.”

The logical conclusion of this progression is Craig Venter’s much-vaunted “fourth-generation biofuels.” Venter has said that by next year, his startup, Synthetic Genomics, will be producing octane from the ultimate waste product: carbon dioxide.

GISP recommends risk assessment and information gathering before any country moves full-steam ahead with energy crop plantations. That might be a little late, as both the U.S. and the EU have already passed legislation mandating increases in non-food biofuels. In the meantime, we haven’t seen anyone raise objections to using garbage as a feedstock… yet.

To read more on biofuels:

An A to Z of the Biofuel Economy

15 Algae Startups Bringing Pond Scum to Fuel Tanks

Primer: What You Need to Know About Brazilian Biofuels

Original here

World’s Largest College-Based Solar Farm Coming to Florida

Sun setting over Grayton Beach in Northwest Florida. (Image credit: Ebyabe at Wikimedia Commons under a GNU Free Documentation license.)

The Sunshine State might have a lot of catching up to do when it comes to solar energy installations, but it’s now on a fast track toward big improvements.

The tide began turning when Gov. Charlie Crist, a Republican with a strong environmental sentiment and an affinity for renewable energy, first took office. Then came the debut earlier this year of Florida’s largest solar array to date, a 250-kilowatt installation in Sarasota County.

And now comes the news that Florida Gulf Coast University (FGCU) in Ft. Myers has been singled out by state lawmakers for an $8.5 million allocation to build a 16-acre solar farm on its campus. While the funding still needs a final OK from Crist, who’s likely to approve, the money would help FGCU construct what would be the largest university-based solar farm in the world.

Once the allocation is cleared by the governor, FGCU officials plan to move forward aggressively, aiming to begin construction in October and finish by next summer. Upon completion, the state-funded solar farm is expected to provide 100 percent of the campus’ energy needs.

While that project alone could save the school $22 million in energy costs over the next three decades, university officials have even greater ambitions. In addition to the state funds, they hope to collect enough private donations to cover the cost of another solar array that could generate an additional megawatt of power.

With developments like this, Florida’s energy security future looks a little brighter every day.

Original here

How To Reduce Vampire Power

Vampires, Phantoms, and Bears, Oh My!

Okay, so there aren't any bears in this story. But there are vampires, phantoms, idlers, and warts. In this case, however, we're talking about vampire power, phantom loads, idling standby current, and wall warts. They all basically refer to the same thing: electronic devices with two sharp, pointy teeth that latch into your wall sockets and suck blood...err...electricity all day, all night, whether on or "off," whether charging batteries or not. These devices include TV's, VCR's, DVD players, answering machines, iPods, cell phones, stereos, laptops, desktops, anything with a remote, anything with a charger, anything with a clock display. They are everywhere. Lurking.


Top 10 ways for you to fight the vampires

  1. Unplug your devices. It's as simple as that. Pull TV/computer/stereo/etc power cords out of the outlet. If they're not in use or if they're totally unnecessary (are you really going to ever use that VCR again?), unplug.
  2. Reduce your demand. Sure, electronic gizmos are fun. But do you really need 2 TVs for one room? If the answer is yes, then at least follow number 6's advice!
  3. Use the other off switch. Many devices also have an 'off' switch in the back. For example, most computers come with one 'soft' power switch on the front, which takes it from standby to on. Separately, there is usually a real 'on/off' switch located in the back on the power supply (near where the power cord goes in).
  4. Plug your devices and chargers into a power strip. And when you're not using those devices, turn off your power strip.
  5. Remove chargers from the wall when you're not charging. Your cell phone charger, iPod charger, laptop charger, etc. keeps drawing electricity even if your phone/iPod/laptop/etc isn't charging. So if your phone says "Charge complete" (or worse, isn't even attached to your charger), pull out the charger.
  6. If you're in the market for new electronics, buy Energy Star qualified. Energy Star takes standby power into account and their qualified devices draw less than the average when in their "off" mode. Some of their best electronic items include cordless phones and audio equipment.
  7. Get a phone that tells you to unplug it. Nokia announced in May 2007 that it will be rolling out new phones with audible alerts (they say, "Battery is full, please unplug the charger.") This feature will first appear in models 1200, 1208 and 1650 (they will most likely start in Europe).
  8. For your various computer accessories, try a smart strip. These work really well when it's not feasible to be constantly unplugging your devices. Check out the Isole Plug Load Control. This power strip saves energy by monitoring occupancy. The Smart Strip Power Strip monitors power differences between computers and peripherals. This way, when you shut down your computer, the Smart Strip automatically shuts off the accessories (see the toy sketch after this list for the basic idea). The Mini Power Minder also works by communicating between your computer and your accessory.
  9. To learn about the power consumption of your electronics, look into a Kill-A-Watt. This device will tell you about the efficiency of your electronics, whether turned on or "off." It can actually be kind of fun (and definitely enlightening) to run around your house and see how much juice each piece of equipment takes, in both on and standby mode. You'll likely be surprised. (If you want something a little more hardcore, try Watts Up?).
  10. If you're up for a whole house project, check out GreenSwitch, a wireless home energy control system that lets you cut off power to your various electronics quite easily. For other whole house devices, here's a wiki that might be right up your alley.
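
Here's that toy sketch of the logic a power-sensing smart strip implements; the threshold and readings are made-up illustration values, not any vendor's actual firmware:

```python
# Toy model of a power-sensing smart strip: watch the draw on the
# control outlet (your computer) and switch the peripheral outlets
# to match. Threshold and readings are illustrative assumptions.

THRESHOLD_WATTS = 15.0  # below this, assume the computer is "off"

class SmartStrip:
    def __init__(self):
        self.peripherals_on = True

    def update(self, control_outlet_watts):
        should_be_on = control_outlet_watts >= THRESHOLD_WATTS
        if should_be_on != self.peripherals_on:
            self.peripherals_on = should_be_on
            print("peripheral outlets", "ON" if should_be_on else "OFF")

strip = SmartStrip()
for watts in (120.0, 95.0, 3.5, 2.0, 110.0):  # simulated readings
    strip.update(watts)
```

The hardware version does essentially the same thing, with a current sensor on the control outlet driving a relay for the other outlets.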


(Image from GOOD Magazine)

Basics of vampire power

Most people assume that when you turn something off, it actually turns off and stops drawing power. Unfortunately, that's not true in the case of most electric devices. Most of them just hover in standby mode, waiting for you to 'turn on' the power again.

A 1999 study in New Zealand conducted by the Energy Efficiency and Conservation Authority indicated that 40% of microwave ovens used more electricity to power the clock and the keypad over the course of the year than actually heating food. Big screen TV's (and their respective cable and satellite boxes) can draw up to 30 watts when off. A computer left turned on can potentially draw as much current as a refrigerator. And what about those chargers? Even when your cell phone (or other battery operated device) isn't charging, even if it's not attached to the charger at all, the charger is still drawing power. All of it may add as much as 10% to your energy bill.
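
To see what a single vampire costs, here's a quick back-of-the-envelope calculation, assuming a 5-watt standby draw and an 11-cent-per-kWh rate (both illustrative numbers):

```python
# Annual cost of one device's standby draw. Wattage and electricity
# rate are illustrative assumptions.
standby_watts = 5.0    # assumed standby draw of one device
rate_per_kwh = 0.11    # assumed residential electricity rate ($/kWh)

kwh_per_year = standby_watts * 24 * 365 / 1000
cost_per_year = kwh_per_year * rate_per_kwh
print(f"{kwh_per_year:.1f} kWh/yr -> ${cost_per_year:.2f}/yr")  # 43.8 kWh, ~$4.82
```

A few dollars sounds trivial, but multiply it by the twenty or thirty always-plugged devices in a typical home and the aggregate numbers below start to make sense.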

This is bad news for your wallet and bad news for the environment. Studies conducted by the Lawrence Berkeley National Laboratory estimate that standby power consumption in the US accounts for 5% of all residential power consumption. That means Americans spend more than $3.5 billion annually on wasted power. It also means that our standby power is responsible for an estimated 27 million tons of carbon dioxide emissions annually.

The International Energy Agency (IEA) estimates that globally standby power is responsible for 1% of carbon dioxide emissions (to contextualize that number, it is estimated that 2-3% of CO2 emissions are from air travel). And let's be honest. Those numbers are probably growing given the affinity many of us have for new gadgets and fancy appliances.

What's being done on the manufacturer and policy side?

  • Some manufacturers are making appliances and electronics more efficient (we applaud them): Energy Star takes standby power into consideration when evaluating products.
  • In 1997, the EU negotiated with consumer electronic manufacturers to reduce standby losses of TV's and VCR's; in 2000, the EU worked on an agreement to reduce standby losses of audio equipment; in 2003, an agreement was reworked for TV's and DVD players.
  • In 1999, the IEA launched the One Watt Initiative, an international action plan to reduce standby power in all appliances to one watt by 2010. The plan would reduce CO2 emissions by 50 million tons if OECD countries participated (that's the equivalent of taking 18 million cars off the road). In 2000, Australia endorsed the One Watt Initiative.
  • In 2001, President Bush signed Executive Order 13221 requiring the federal government to purchase electronics with one watt or lower of standby draw.
  • On January 1, 2006, a California Energy Commission regulation went into effect limiting standby power consumption of consumer-electronic devices, including DVD players and stereos. Under this legislation, TV's and DVD players that consume more than three watts in standby mode are illegal, power adapters are limited to 0.75 watts (which will fall to 0.5 watts in January 2008), and as of 2007, stereos without permanent display clocks are limited to 2 watts, while those with clocks are limited to 4 watts.

Additional Resources