Wednesday, June 29, 2011

Shale Gas and the New York Times

Shale Gas and the New York Times: The Challenge from Energy In Depth (A ‘Dewey-Defeats-Truman’ Energy Moment?)
by Robert Bradley Jr. From MasterResource
June 29, 2011

[The factual rebuttal by Chris Tucker and Jeff Eshelman of Energy In Depth (a project of the Independent Petroleum Association of America, or IPAA) is a serious moment in the energy debate. MasterResource reproduces their rebuttal in total and invites comments, particularly from the ‘peak oil’ community that received the front-page article of their dreams (or nightmares, depending on the ultimate outcome of this fact-versus-fact debate).]

“What [the New York Times] isn’t entitled to, at least in our view, is to represent its piece as an original investigation; not when the story was essentially outsourced to a well-known critic of the industry whose predictions on shale’s imminent collapse grow less defensible (and more difficult to find on his website) by the day. Nor do we believe The Times is entitled to mislead its readers on the expertise of those whose “leaked” emails — many written in 2008 and 2009 — are used to form the basis of the story, especially when real-world production numbers from 2010 and 2011 directly contradict those speculative accounts.”

- Chris Tucker and Jeff Eshelman, June 28, 2011

The United States produced more natural gas in 2010 than at any point in the previous 37 years, a stunning reversal of fortune given the country’s supply picture earlier this decade, and one that would not have been possible without the massive volumes of American energy that continue to be generated from shale.

So what happens from here? By now, you’ve likely heard the stories and seen the estimates, with everyone from IEA to EIA to PGC to MIT projecting a future in which shale’s production trajectory continues along an aggressive upward path, delivering literally quadrillions of cubic feet of clean-burning natural gas to generations of consumers not only in the United States, but around the world. It’s a view that’s supported by the preponderance of science and a majority of scientists, not to mention one that’s continuously reinforced by new data.

Over the weekend, The New York Times sought to advance a contrarian view on the subject, and to that view The Times (and reporter Ian Urbina) is more than entitled. What it’s not entitled to, at least in our view, is to represent its piece as an original investigation; not when the story was essentially outsourced to a well-known critic of the industry whose predictions on shale’s imminent collapse grow less defensible (and more difficult to find on his website) by the day. Nor do we believe The Times is entitled to mislead its readers on the expertise of those whose “leaked” emails — many written in 2008 and 2009 — are used to form the basis of the story, especially when real-world production numbers from 2010 and 2011 directly contradict those speculative accounts.

Against that backdrop, we attempt below to pull back the curtain a bit on some of the tricks employed in The Times’ latest front-page assault on responsible natural gas development:
Trick #1: Suggest that the Barnett, Haynesville, and Fayetteville shales are “not performing as industry expected” without actually defining what that means – and exclude mention of the extraordinary production growth currently being witnessed across all three plays.

· WSJ sets the stage: “As recently as 2000, shale gas was 1% of America’s gas supplies; today it is 25%. Prior to the shale breakthrough, U.S. natural gas reserves were in decline, prices exceeded $15 per million British thermal units, and investors were building ports to import liquid natural gas. Today, proven reserves are the highest since 1971, prices have fallen close to $4 and ports are being retrofitted for LNG exports.” (Wall Street Journal editorial, June 25, 2011)

· According to data from IHS Global Insight and UBS, daily production of natural gas in the Barnett Shale has more than doubled over the past four years, even as the number of rigs operating in the play has decreased by more than 50 percent. In Arkansas, the Fayetteville Shale now delivers more than 2.5 billion cubic feet of natural gas per day, even with 40 percent fewer rigs in service today compared to the summer of 2008.

· According to Forbes reporter Christopher Helman: “The shale play that started it all, the Barnett of northern Texas, is today producing more than ever … despite there being half as many rigs working the land than there was two years ago (when production was 5.3 bcfd). As analyst Dan Pickering of Tudor, Pickering & Holt wrote in a note this morning, ‘If wells are declining faster than expected, the Barnett would not be at record production with reduced rig count.’”

· And then there’s the Haynesville, which recently surpassed the Barnett as the most productive shale field in North America. According to EIA, the Haynesville is responsible for producing more than 5.5 billion cubic feet of natural gas a day, even with 40 fewer rigs operating today than were there last spring.

· The Times mentions none of these data, declaring instead that shale development is uneconomical based on the belief that its recovery rates are too low. Of course, if that were actually true, the price of natural gas would presumably be a lot higher today than it is right now owing to reduced supply, which would improve shale’s economic fundamentals — and thus destroy The Times’ thesis about profitability.

Trick #2: Whatever you do: Avoid mention of the Marcellus Shale, since including anything on that would really wreck your story!

· Tellingly, Urbina largely avoids mention of the supply picture associated with the Marcellus Shale, mentioning the play only a single time in his entire 2,500-word piece. That decision proved to be a good one: On Sunday, the same day The Times’ piece ran, the Associated Press published its own account of production potential in the Marcellus, including new details on a series of record-breaking wells in Pennsylvania.

· From the AP piece: “Each of the Cabot Oil & Gas Corp. wells in Susquehanna County is capable of producing 30 million cubic feet per day — believed to be a record for the Marcellus and enough gas to supply nearly 1,000 homes for a year. The landowners attached to the wells, who leased the well access, numbering fewer than 25, are splitting hundreds of thousands of dollars in monthly royalties. … Cabot’s wells, and Marcellus wells in general, are not running at full tilt, mainly because the infrastructure required to take the gas from wellhead to market is not yet fully in place. An oversupply of natural gas and the availability of crews to fracture the wells are other limiting factors.”

· Same story for other Marcellus operators: “Range [Resources] has boosted its estimate of the amount of natural gas it will ultimately be able to harvest from its Marcellus Shale wells, telling investors this month that it plans to triple production to 600 million cubic feet per day by the end of 2012. Another major player, Chesapeake Energy Corp., has likewise reported a dramatic increase in expected well production. Early on, the Oklahoma City-based driller predicted that each well would yield 3.5 billion cubic feet of gas over its life span. That amount has since doubled, to more than 7 billion cubic feet, and continues to go up.”

· Perhaps Urbina decided to keep quiet on the Marcellus because so much of its production data is already available online, making it much easier for the public to challenge and refute vague assertions about resource depletion. According to John Hanger, former secretary of the Pennsylvania Department of Environmental Protection (DEP): “All the reader is told about the Marcellus is that a Penn State professor reports well production is meeting or exceeding expectations in the Marcellus. No charts or bar graphs. No data. Nothing. Why? Very inconvenient facts for the Ponzi/Enron narrative is the answer.”

Trick #3: Allow discredited peak energy activist Art Berman to write, edit and review your piece, but be careful not to quote him too often.

· Although unfamiliar to most mainstream audiences, Arthur Berman is well-known in energy circles as a professional opponent of resource plenitude, serving on the board of the Association for the Study of Peak Oil & Gas (ASPO-USA), which promotes “cooperative initiatives in an era of depleting petroleum resources.” Mr. Berman has written extensively on the subject, but his work has been rebutted on several occasions – most notably by the energy investment firm Tudor, Pickering & Holt in this memorandum.

· Although Mr. Berman’s work was channeled by Urbina to attack the economics of shale development, Berman himself was forced to backtrack on earlier statements made about the Haynesville in 2009, admitting that his reserve estimates then “were too low.”

· In April 2009, Berman said it was “difficult to imagine that the Haynesville Shale can become commercial when per-well reserves are similar to those of the Barnett Shale at more than twice the cost.” Only two months later, in June 2009, Berman had changed his tune, saying that “I now think that the Haynesville Shale reserve estimates that I presented previously were too low.” Unfortunately, this new perspective was not reflected in The Times’ treatment of the Haynesville, currently the most productive shale field on the entire continent.

· From the ASPO-USA website: “If Berman is right, we will not see large increases in shale gas production through 2011, or some companies will go belly up, or both.”

· Consistent with his view that the world will soon run out of oil and natural gas, Berman has put himself on record, as recently as this spring, in support of a ban on cars and trucks: “The other piece that nobody wants to hear is that we can’t go on living like we are. … The idea of private transport needs to go away. The idea that you can just drive yourself anywhere you want to, whenever you want to, and – oh, well the answer is, ‘I’ll just get an electric car.’ No, that’s not the answer.” (Arthur Berman, Cornell Law School, April 1, 2011; 03:44:50 to 03:45:25)

Trick #4: Tell your readers that Deborah Rogers does some work for the Federal Reserve Bank of Dallas, but don’t mention that she also works for environmental groups seeking an outright ban on hydraulic fracturing — even though most folks would agree that’s relevant here.

· Urbina quotes Deborah Rogers several times in his story (and even includes a picture) — describing her as “a member of the advisory committee of the Federal Reserve Bank of Dallas.” What Urbina fails to mention is that Ms. Rogers is also an active “steering committee member” of the Oil and Gas Accountability Project (OGAP), an activist group that considers natural gas to be a “filthy energy” source, and has worked in New York and Pennsylvania to institute bans on hydraulic fracturing.

· Last year, Ms. Rogers was a featured speaker at OGAP’s “People’s Oil and Gas Summit” in Pittsburgh, even directing her own local anti-shale group in Texas to pitch in as a sponsor for the event. In advocating for her position, Ms. Rogers rarely mentions her involvement with the Federal Reserve Bank – but often mentions her work as an artisanal cheese maker and goat farmer in Fort Worth.

· Urbina reports that Ms. Rogers “started studying well data from shale companies in October 2009 after attending a speech by the chief executive of Chesapeake.” In fact, Rogers was tied in with OGAP long before she attended that event, working with OGAP contractor and supporter Alisa Rich to prepare a paper in May 2009 that sought to blame air quality impacts on natural gas development.

· According to an independent report commissioned by the city of Fort Worth, the Rich paper is “based on very limited data” and “too general and limited” to arrive at the conclusions that it did. Speaking about the Rich paper, the authors wrote: “Reasonably possible sources for contamination, other than gas well operations, appear to have been ignored.”

Trick #5: Ignore the insights of independent reservoir engineers; instead, base your story on cherry-picked comments, often from firms that no one has ever heard of.

· In his piece, Urbina uses several quotes from an email exchange between a “federal energy analyst” and a “geologist at Chesapeake” in which both parties appear to adopt a skeptical position on shale. A careful review of the emails in question, though, reveals that at a key moment in the exchange, the geologist appears to be referring to the prospect of shale oil, not natural gas (notice his use of “bbls” as a term of measurement).

· That review also suggests that Urbina may have intentionally excluded from his reporting important statements from those emails challenging his core premise — such as this one, made by that same geologist: “Even at low ends of estimations, [they] are world class, huge reserves of resources still that are recoverable at extremely low costs when compared to other drilling.” (p. 29)

· McClendon letter to Chesapeake employees: “Isn’t it completely illogical when this reporter argues that shale gas wells are underperforming, yet acknowledges that gas prices are less than half the price they were three years ago? Today gas shale production represents 25% of US natural gas production, if it were underperforming, how come gas prices are so low when US gas demand is at a record high?”

· CNBC’s Jim Cramer weighs in: “This is Urbina’s seventh hit job on the natural gas industry in five months and, while the reporter defends this new series on the basis of his reporting quoting a series of skeptics — many anonymous — it is hardly unbiased. In fact, it is absurd on the face of it.”

· As for the rest of the “leaked” emails captured by The Times, the vast majority – literally hundreds of pages’ worth – appear to have been written by people working for fairly obscure companies in the energy space. Indeed, some of them (like a group called Haddington Ventures) don’t even analyze upstream oil and natural gas.

· Council on Foreign Relations reacts: “There’s a pattern: Urbina was clearly looking for negative views of shale gas, and had no problem finding them. Given the massive size of the industry, and the number of financial bets being placed upon the sector, that shouldn’t be a surprise. What is a surprise is that Urbina hasn’t done much to put them in context. … [B]y choosing to indulge in hype rather than digging down into the real substance, [The Times] missed an opportunity to spark a useful debate too.” (CFR’s Michael Levi, June 27, 2011)

· Real geologists write in to EID: “Shale plays are unconventional for a reason. These wells typically have high initial production rates but will inevitably drop over time, as the fracture depletes. After this point, the well will continue to produce at steady (but lower) rates for a long period of time while it depletes the tight rock surrounding the fracture interface. This is not a surprise and is absolutely taken into account when running the economics.”
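
For readers who want the mechanics: the pattern those geologists describe (a steep early drop followed by a long, shallow tail) is conventionally modeled with the Arps hyperbolic decline formula. Below is a minimal Python sketch of that model; every parameter is an illustrative assumption, not an actual figure for the Barnett or any other play.

```python
# Minimal sketch of the decline behaviour described above, using the standard
# Arps hyperbolic decline model. All parameters are illustrative assumptions.

def arps_rate(qi, di, b, t):
    """Production rate at year t, for initial rate qi (Mcf/day),
    initial decline di (1/year) and hyperbolic exponent b."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

qi, di, b = 5000.0, 0.9, 1.2  # hypothetical well: 5 MMcf/day initial rate

for year in (0, 1, 2, 5, 10, 20):
    print(f"year {year:2d}: {arps_rate(qi, di, b, year):7.0f} Mcf/day")

# Output shows the steep early drop and the long, shallow tail the
# geologists say operators already build into their economics.
```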

Harvesting solar power in space

Harvesting solar power in space, for use on Earth, comes a step closer to reality
Jun 23rd 2011 | from The Economist

THE idea of collecting solar energy in space and beaming it to Earth has been around for at least 70 years. In “Reason”, a short story by Isaac Asimov that was published in 1941, a space station transmits energy collected from the sun to various planets using microwave beams.

The advantage of intercepting sunlight in space, instead of letting it find its own way through the atmosphere, is that so much of it otherwise gets absorbed by the air. By converting it to the right frequency first (one of the so-called windows in the atmosphere, in which little energy is absorbed) a space-based collector could, enthusiasts claim, yield on average five times as much power as one located on the ground.
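
A rough sense of where a factor of five can come from: compare the solar constant above the atmosphere with a round-the-clock average at a good ground site once night, weather and air absorption are taken into account. Both values in this sketch are rounded textbook figures, not numbers from the article.

```python
# Back-of-envelope check on the "five times as much power" claim.
# Both values are rounded textbook figures (assumptions).

solar_constant = 1361.0  # W/m^2, sunlight above the atmosphere, around the clock
ground_average = 250.0   # W/m^2, generous 24-hour average at a sunny ground site

print(f"space/ground ratio: {solar_constant / ground_average:.1f}")  # ~5.4
```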

The disadvantage is cost. Launching and maintaining suitable satellites would be ludicrously expensive. But perhaps not, if the satellites were small and the customers specialised. Military expeditions, rescuers in disaster zones, remote desalination plants and scientific-research bases might be willing to pay for such power from the sky. And a research group based at the University of Surrey, in England, hopes that in a few years it will be possible to offer it to them.

This summer, Stephen Sweeney and his colleagues will test a laser that would do the job which Asimov assigned to microwaves. Certainly, microwaves would work: a test carried out in 2008 transmitted useful amounts of microwave energy between two Hawaiian islands 148km (92 miles) apart, so penetrating the 100km of the atmosphere would be a doddle. But microwaves spread out as they propagate. A collector on Earth that was picking up power from a geostationary satellite orbiting at an altitude of 35,800km would need to be spread over hundreds of square metres. Using a laser means the collector need be only tens of square metres in area.
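
The size difference follows from diffraction: a transmitted beam spreads to a spot roughly (wavelength × distance ÷ transmitter aperture) across, and the laser's wavelength is about five orders of magnitude shorter than a microwave's. A sketch of that scaling, with aperture sizes that are purely illustrative assumptions:

```python
# Diffraction-limited spot size: roughly wavelength * distance / aperture.
# The aperture sizes below are illustrative assumptions, not mission figures.

def spot_diameter_m(wavelength_m, distance_m, aperture_m):
    return wavelength_m * distance_m / aperture_m

geo_m = 3.58e7  # ~35,800 km from geostationary orbit to the ground

microwave = spot_diameter_m(0.122, geo_m, 500.0)  # 2.45 GHz beam, 500 m dish
laser = spot_diameter_m(1.5e-6, geo_m, 3.0)       # 1.5 micron beam, 3 m optics

print(f"microwave spot: ~{microwave / 1000:.1f} km across")  # kilometres
print(f"laser spot:     ~{laser:.0f} m across")              # tens of metres
```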

Dr Sweeney’s team, working in collaboration with Astrium, a satellite-and-space company that is part of EADS, a European aerospace group, will test the system in a large aircraft hangar in Germany. The beam itself will be produced by a device called a fibre laser. This generates the coherent light of a laser beam in the core of a long, thin optical fibre. That means the beam produced is of higher quality than other lasers, is extremely straight (even by the exacting standards of a normal laser beam) and can thus be focused onto a small area. Another bonus is that such lasers are becoming more efficient and ever more powerful.

In the case of Dr Sweeney’s fibre laser, the beam will have a wavelength of 1.5 microns, making it part of the infra-red spectrum. This wavelength corresponds to one of the best windows in the atmosphere. The beam will be aimed at a collector on the other side of the hangar, rather than several kilometres away. The idea is to test the effects on the atmospheric window of various pollutants, and also of water vapour, by releasing them into the building.

Assuming all goes well, the next step will be to test the system in space. That could happen about five years from now, perhaps using a laser on the International Space Station to transmit solar power collected by its panels to Earth. Such an experimental system would deliver but a kilowatt of power, as a test. In 10-15 years Astrium hopes it will be possible to deploy a complete, small-scale orbiting power station producing significantly more than that from its own solar cells.

Other researchers, in America and Japan, are also looking at using lasers rather than microwaves to transmit power through the atmosphere. NASA, America’s space agency, has started using them to beam energy to remotely controlled drones. Each stage of converting and transmitting power results in a loss of efficiency, but with technological improvements these losses are being reduced. Some of the latest solar cells, for instance, can convert sunlight into electricity with an efficiency of more than 40%. In the 1980s, 20% was thought good.
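
Because the stages sit in series, the end-to-end figure is the product of the individual efficiencies, which is why a gain at any one stage matters so much. A sketch with invented stage values (only the 40% cell figure comes from the paragraph above):

```python
# End-to-end delivery is the product of the stage efficiencies.
# All values except the 40% cell figure are illustrative assumptions.

stages = {
    "collector solar cells": 0.40,  # "more than 40%" cited above
    "electricity to laser":  0.50,
    "atmospheric window":    0.90,
    "ground receiver":       0.50,
}

total = 1.0
for name, efficiency in stages.items():
    total *= efficiency
    print(f"after {name:22s}: {total:.1%}")  # ends near 9% overall
```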

Whether the Astrium system will remain a specialised novelty or will be the forerunner of something more like the cosmic power stations of Asimov’s imagination is anybody’s guess. But if it comes to pass at all, it will be an intriguing example, like the geostationary communications satellites dreamed up by Asimov’s contemporary, Arthur C. Clarke, of the musings of a science-fiction author becoming science fact.

Wednesday, June 22, 2011

Wind or Nuclear?

Mises Daily: Wednesday, July 08, 2009 by Ray Harvey

Energy is like a river; it exists in two ways: flows and stores.

When you store energy, you create a dam to capture it.

What environmentalists call "renewable energy" is really just the stored energy of the sun.

In actuality, there's no such thing as "renewable energy": all energy, even that of the sun, is limited.

Fossil fuels are energy stores as well — specifically, they are solar energy stored over millions of years — and they are highly concentrated, ten times more so than, for instance, wood.

In terms of wind and raw solar energy, the flow is exceptionally diluted: solar is ten to fifty times less concentrated than fossil fuel. When you can't concentrate it, then the only way to harvest it is to use more and more land. That's the limiting factor for both sun and wind energy.

T. Boone Pickens's now-infamous plan would require 1,200 square miles for a single power plant.

Compare that to nuclear, which would require only one square mile.
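
The comparison is really one of power per unit of land. Here is a rough sketch of the implied power densities; the 4 GW output for the wind plan and the 1 GW reactor are assumptions for illustration, not figures from the article:

```python
# Power per unit of land implied by the areas above. The 4 GW wind figure
# and the 1 GW reactor are illustrative assumptions.

SQ_MILE_M2 = 2.59e6  # square metres per square mile

wind_w_per_m2 = 4e9 / (1200 * SQ_MILE_M2)    # ~4 GW over 1,200 square miles
nuclear_w_per_m2 = 1e9 / (1 * SQ_MILE_M2)    # ~1 GW on one square mile

print(f"wind:    ~{wind_w_per_m2:.1f} W/m^2")    # about 1.3 W/m^2
print(f"nuclear: ~{nuclear_w_per_m2:.0f} W/m^2")  # about 390 W/m^2
```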

Coal is extraordinarily abundant — we'll never run out — and pound-for-pound contains twice as much energy as wood. Coal is a concentrated storehouse of energy.

Octane molecules in gasoline, however, are even more concentrated. In fact, they're the densest store of carbon energy we've ever discovered. Pound-for-pound, gasoline possesses four times as much energy as coal. There's a popular misconception today that gasoline is inefficient and wasteful. Nothing could be more inaccurate.

Gasoline molecules are not only by far the densest form of carbon energy we've ever discovered; they're also easy to transfer because they're fluid. These are two of the greatest reasons we've adopted gasoline.

Nuclear, on the other hand, is something else entirely. The public hasn't even begun to grasp nuclear energy.

These are the facts:

A handful of uranium contains more energy than 100 boxcars full of coal.
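
The arithmetic behind claims of this sort comes from published energy densities. A sketch with rounded values; the boxcar load is an assumption:

```python
# Rounded published energy densities; the boxcar load is an assumption.

fission_j_per_kg = 8.0e13  # complete fission of U-235, roughly 80 TJ/kg
coal_j_per_kg = 2.4e7      # bituminous coal, roughly 24 MJ/kg
boxcar_kg = 100_000        # assume ~100 tonnes of coal per boxcar

kg_coal_per_kg_uranium = fission_j_per_kg / coal_j_per_kg
print(f"1 kg of fissioned uranium ~ {kg_coal_per_kg_uranium:,.0f} kg of coal")
print(f"that is ~{kg_coal_per_kg_uranium / boxcar_kg:.0f} boxcar loads per kg")
```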

Consumption of energy creates more energy, not less.

Despite years of government subsidies (regulators, for instance, have forced utility companies to buy "renewables"), these same renewables generate only about 0.9 percent of our total electricity.

The most efficient solar panels currently in use (on the space station) are costly, and their conversion efficiency is about twenty percent, which is not very much.

Twelve miles of solar reflectors generate about 300 megawatts, a minuscule amount. Furthermore, those reflectors must be kept squeaky clean and maintained to the hilt, or they won't work.

At our current level of technology, no conceivable mix of solar, wind, or wave can meet even half the demand for energy.

If, however, wind, wave, and solar are to become more efficient, it is only science and technology — as opposed to environmentalism's plan of blasting us back into the Dark Ages — that will get them there.

We begin to know about a resource only when we begin to use it. Knowing about that resource includes a cursory calculation of its quantity.

The more we use of it, therefore, the better we become at finding it and calculating its quantity, extracting it and refining it. Thus, the more we use of a resource, the more of it we're able to find.

This may sound counterintuitive, but only at first: then you glimpse its awesome logic. The entire history of resource use and extraction has followed this pattern without deviation.

Boone Pickens is calling for massive subsidization of the wind-power industry.

As with ethanol and recycling and a host of other issues, you must ask yourself again, if these things are so efficient, why do they need to be subsidized? Answer: they're not so efficient.

Energies that require massive subsidization benefit absolutely no one; the only reason they need to be subsidized is that they cannot compete on the open market.

That fact alone tells you everything you need to know about them: they're simply not good enough yet.

When they are, the free market will adopt them naturally.

The reason wind power still won't get us very far is that transmitting this power is such a huge difficulty.

Wind is also unpredictable; it's therefore hard to integrate into an electrical grid, since grids have to maintain a voltage balance, or you'll get brownouts, blackouts, and power surges that destroy equipment by the ton.

The "grid," incidentally, refers to the entire energy infrastructure. It even includes the electrical wires that go into your house.

Grid operators spend their whole lives trying to balance supply and demand on the grid.

Energy demand changes all throughout the day, all throughout the year. In summer, for instance, demand is higher. Late at night, demand is lower.

Grid operators balance all this.

Factor in the wind, which you cannot predict more than, at most, five hours in advance, and try pulling all that wind power into a grid, and you'll begin to see how impossible the task is.

Wind needs constant backup.

"Spinning reserve" on an electrical grid refers to the amount of backup power that is sitting there, waiting to go at a moment's notice in case something goes wrong. In general, twenty percent extra power is the standard spinning reserve on the grid. Wind can indeed supplement a grid with this needed twenty percent spinning reserve, but it cannot come close to replacing fossil fuel.

Here's what you don't see in the fine print: The vast majority of wind energy needs to be transmitted. Thus, you'll need to step up voltage to 765 kilovolts (which is a lot) so that wind doesn't lose all its energy in the transmitting process. That infrastructure alone — forget the actual windfarms — will cost billions.
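
Why such an extreme voltage helps: for a fixed power, line current falls as voltage rises, and resistive loss goes as the square of the current. A simplified single-conductor sketch; the line resistance and the 1 GW figure are assumptions:

```python
# Resistive line loss for a fixed power at different voltages.
# Simplified single-conductor model; P and R are illustrative assumptions.

def loss_fraction(power_w, voltage_v, resistance_ohm):
    current_a = power_w / voltage_v                   # I = P / V
    return current_a ** 2 * resistance_ohm / power_w  # I^2 R as a share of P

P, R = 1e9, 10.0  # 1 GW delivered over a long line of 10 ohms (assumed)

for kv in (138, 345, 765):
    print(f"{kv:3d} kV: {loss_fraction(P, kv * 1e3, R):5.1%} lost")
```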

We'll also have windmills covering the entire great plains. Quoting energy expert William Tucker, "If Boone Pickens's dream is realized, you'll be able to drive from Texas to North Dakota without ever being out of sight of a windmill, just as in Denmark."

That is, except for Boone Pickens's backyard. Said Pickens, "I'm not going to have the windmills on my ranch: they're ugly."

Indeed.

And that, in part, is why people are already objecting. Windmills are taller than the Statue of Liberty, and they're loud; the Audubon Society calls them "condor Cuisinarts."

Wind comes strongest along mountain crests. Thus the Blue Ridge Mountains, the Adirondacks, the Appalachians, and so on would all have their ridges lined with these monstrosities. Yet environmentalists object to the building of one small nuclear plant, which, compared with a windfarm, is tiny.

Uranium generates gigantic amounts of energy in a very small space, which wind and solar combined cannot come close to. Those who say otherwise — those who are antinuclear, in other words — have brought the world 400 million more tons of coal used per year, because for thirty years now, since the Three Mile Island accident in 1979, we've been using more coal.

The partial meltdown of the uranium core in 1979 at Three Mile Island was so overblown by antinuclear groups that it went virtually unnoticed that the containment vessel at Three Mile Island had done its job and prevented any significant release of radioactivity.

Uranium is abundant, clean, and safe — in technological societies.

The catastrophe at Chernobyl — which, once again, sent green groups worldwide scurrying to their soapboxes — only happened because that state-run reactor was astonishingly unsafe: in the words of Peter Huber, "You couldn't have operated a toaster oven out of it."

Few scientists disagree that the discovery of energy at the nucleus of the atom is the greatest scientific feat of the 20th century. All this talk about how we need to "discover a new form of energy" therefore misses the point: we've already done so. It's called nuclear energy. And it's amazing.

We discovered that the concentration of energy in the nucleus of the atom is 2 million times as great as energy in the shell of an atom.

There are tiny amounts of uranium residue in coal; those trace residuals have more energy potential than all the coal itself.

Chemical energy, which is everything from wood to crude oil to gasoline to coal, consists of playing with the electrons, changing their energy state. With nuclear, however, the big discovery was that there's far more energy in the nucleus of the atom. Therefore, it produces a far, far smaller "footprint."

In fact, there's really no such thing as "nuclear waste": a nuclear reactor is refueled by its waste. In other words, almost all "waste" can be recycled. Indeed, ninety-five percent of a spent nuclear fuel rod is natural uranium, and so it can be put right back in the ground, just as it was found.

The radioactive part constitutes only about five percent, but of that, half is uranium and plutonium, and so it can be recycled as fuel — specifically mixed-oxide fuel, which is exactly what the French have been doing for twenty-five years now.

After twenty-five years, the French store all their so-called waste in one room, under La Hague, which is about the size of a basketball gymnasium.
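
Taking the article's own percentages at face value, the mass balance works out as follows (the 500 kg rod is an assumed round number for illustration):

```python
# Mass balance using the percentages above; the rod mass is an assumption.

rod_kg = 500.0

uranium_kg = 0.95 * rod_kg       # recyclable as natural uranium
radioactive_kg = 0.05 * rod_kg
mox_kg = 0.50 * radioactive_kg   # uranium and plutonium, usable as MOX fuel
long_term_waste_kg = radioactive_kg - mox_kg

recyclable_kg = uranium_kg + mox_kg
print(f"recyclable:      {recyclable_kg:5.1f} kg ({recyclable_kg / rod_kg:.1%})")
print(f"long-term waste: {long_term_waste_kg:5.1f} kg "
      f"({long_term_waste_kg / rod_kg:.1%})")
```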

Why haven't you heard this? In 1974, a writer for the New Yorker magazine named John McPhee published a highly influential book called The Curve of Binding Energy, which convinced President Jimmy Carter (et al.) that people could steal used plutonium from nuclear plants and make bombs with it. But this is untrue. Nevertheless, solely on the basis of this detrimental misinformation, our country now has fifty thousand tons of nuclear "waste," because our government won't allow nuclear plants to reuse it.

The stated policy of the Department of Energy (DOE) is "not to reprocess" a perfectly reusable byproduct — and all for absolutely no good reason. That is why Yucca Mountain is unnecessarily, and at great cost, being built in southwestern Nevada to store a nuclear "waste" that could instead be simply and efficiently reused.

Nuclear "waste" is also used for medical isotopes. Over forty percent of medicine now is nuclear medicine. Currently, we must import all our nuclear isotopes because we're not allowed to use any of our own. This is not only profligate; it's a kind of lunacy.

We're the only country in the world that doesn't reuse its nuclear byproducts. Nuclear energy is the cleanest, most efficient energy we have — by light years. Anyone who tells you differently is flat-out wrong.

Sunday, June 19, 2011

The Difference Engine: Gut feeling

The Economist, Jun 17th 2011, 17:00 by N.V. | LOS ANGELES

THE bean sprouts contaminated with a particularly nasty strain of Escherichia coli, a bug that normally lives quietly in the gut of humans and other animals, have now sickened over 3,250 people in Germany and caused 37 deaths. Since the outbreak began in May, a quarter of those infected have developed haemolytic uraemic syndrome (HUS)—a potentially fatal complication that affects the blood, kidneys and nervous system.

The genetic sequence of the bacterium in question (a wholly new version of a strain of E.coli called O104:H4) has been found by scientists in Germany and China to contain at least eight genes that make it resistant to the majority of antibiotics. Many of the patients with HUS will need kidney transplants or require dialysis for the rest of their lives.

The source of the tainted bean sprouts has been traced to an organic farm in northern Germany. The owner claims not to have used cattle manure, nor any of the three dozen or so non-organic additives widely employed in organic farming. Apparently, the only ingredients were seeds and water. The usual procedure for sprouting is to steam the selected seeds in drums at a temperature of 38°C. Such conditions are ripe for breeding bacteria.

The question is how the O104:H4 got there in the first place. The usual route is via animal faeces that have contaminated the water used for sprouting, or from manure used directly as organic fertiliser. But both have been ruled out. By all accounts, the farm also complied with the industry’s highest standards of personal hygiene. The conclusion is that the seeds themselves must have been contaminated beforehand.

Microbiologists have long known that E.coli can bind tightly to the surface of seeds and even penetrate them, and then lie dormant for months. On germination, the population of bacteria can expand 100,000 times or more. Apart from contaminating the seeds, the bacteria get inside the stem tubers as the seeds begin to sprout. No amount of washing can then eradicate the bugs completely.
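
That expansion is fast as well as large: under warm, moist sprouting conditions E. coli can double roughly every 30 minutes (a textbook figure), so a 100,000-fold increase takes only hours.

```python
# How long a 100,000-fold expansion takes at a textbook ~30-minute
# doubling time for E. coli under warm, moist conditions.

import math

expansion = 100_000
doubling_time_min = 30.0

doublings = math.log2(expansion)           # ~16.6 doublings
hours = doublings * doubling_time_min / 60.0

print(f"{doublings:.1f} doublings, ~{hours:.1f} hours")  # ~8.3 hours
```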

The outbreak in Germany is just the latest in a long string of food scares associated with E.coli. In 1996, a sequence of outbreaks linked to contaminated radish sprouts in Japan sickened some 12,000 people and caused a dozen or so deaths. Like the current outbreak in Germany, the Japanese outbreaks (of a more common strain known as O157:H7) also caused bloody diarrhoea and HUS. The good news is that such food-borne infections are on the wane—at least in the United States. Thanks to better reporting methods, stepped up inspections and improved hygiene measures generally, the number of dangerous O157:H7 infections has been halved since the mid-1990s.

Unfortunately, that is not the case with Salmonella. According to the Centers for Disease Control and Prevention (CDC) in Atlanta, the number of confirmed cases of Salmonella infection—especially from raw meat, eggs and vegetables—increased by 10% in 2010. Memories are still strong of last year's scare when 500m tainted eggs had to be withdrawn from the American market after 2,000 people became infected, though mercifully no-one died.

All told, the CDC reckons that one in six Americans is infected annually by food- or water-borne diseases such as Salmonella, E.coli, Campylobacter and noroviruses. Some 130,000 wind up in hospital each year, and about 3,000 die as a result of complications. In statistical terms, a fatality rate of 0.001% would seem a monumental achievement for public health. But the point is that those 3,000 annual deaths from food poisoning could easily be avoided, and millions of people spared the incapacitating symptoms of food poisoning.
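
The quoted rate checks out against those CDC figures; the United States population used below is a rounded 2010 value, an assumption on my part.

```python
# Checking the quoted rate against the CDC figures above.
# The population figure is a rounded 2010 value (an assumption).

population = 310e6
infected = population / 6   # "one in six Americans" each year
deaths = 3000.0

print(f"deaths / population: {deaths / population:.4%}")  # ~0.001%
print(f"deaths / infected:   {deaths / infected:.4%}")
```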

It is practically impossible to prevent at least some bugs getting into food in the field, no matter how stringent the hygiene rules. And washing fresh produce removes little more than surface dirt. The only answer is irradiation. That means treating food with high-energy bursts of electrons or photons to attack the micro-organisms’ DNA, preventing them from spitting out dangerous toxins and proliferating.

The food industry welcomes the idea. Irradiation destroys 99.9% of common pathogens, reduces the need for chemical pesticides and fumigants, extends shelf life by slowing down the ripening process, and eliminates the need to quarantine fruit and vegetables from abroad. Irradiation is used widely in France for ensuring the safety of deboned poultry meat. Frozen seafood and frogs’ legs are similarly treated in Belgium, France and the Netherlands. Elsewhere, irradiation is employed extensively to eradicate bacteria and moulds in spices, dried vegetables and seasonings. In America, irradiation has long been approved for killing the pathogens in meat, and (following the E.coli scare in 2006) for treating spinach and lettuce.

The World Health Organisation, the American Medical Association and the American Dietetic Association, among others, are strongly in favour of food irradiation. Many medical researchers and food scientists would like to see irradiation become the fourth pillar of public health, taking its place alongside chlorination, vaccination and pasteurisation. They see the benefits as far outweighing any risks the technology may entail—especially as it is now done, not by bombarding food with radiation from X-ray machines or radionuclides such as cobalt-60, but with a beam of electrons from an emitter similar to the kind found in traditional television sets.

But despite its private enthusiasm for irradiation, the food industry is leery of embracing the technology in public. In America, the Food and Drug Administration requires that irradiated-food packages carry the international “Radura” symbol (coined to symbolise "irradiation" and "durability") along with the words “Treated with/by irradiation”. Few supermarkets have been willing to stock such products, fearing customers will mistakenly associate the wording with nuclear fallout.

Advocacy groups such as the Centre for Food Safety, the Food and Water Watch and the Organic Consumers Association have opposed food irradiation, not on grounds that the technology is risky, but because it does not address the root cause of outbreaks—namely, the unsanitary conditions found on many farms and in food processing plants. Such concerns are genuine. Unfortunately, though the authorities have stepped up inspections to improve hygiene, there is no way that the food industry, no matter how scrupulous, can be made bug-free using disinfectants and washing alone. Only additional processing with irradiation can ensure that.

In his weekly column in the Wall Street Journal, Matt Ridley, a former science editor of The Economist, noted that Germany offers a classic example of what is known as the “precautionary principle”—the notion that the burden of proof is on the innovator to demonstrate that a new technology is safe before it can be approved. The precautionary principle holds all new technologies to far higher standards than existing ones.

In Europe, for instance, genetically modified foods must be labelled so that they can be traced “from farm to fork”. Yet, organic crops fertilised with animal manure have no such requirements—even though they pose a far higher risk to human health. Under United States Department of Agriculture rules, excrement used as an organic fertiliser must be composted to a sterilising temperature of over 70°C, and the treated crop then kept for 120 days before being harvested. Given the exigencies of the business, that rarely happens.

By the same token, America requires that irradiation of food be shown to be not just beneficial, but to do no harm whatsoever. According to Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, that is a standard to which even medical products such as hip joints and vaccines cannot hope to aspire.

The irony, as Mr Ridley points out, is that when, in 2000, the European Commission proposed that irradiation be allowed for a greater range of foods and at higher doses, it was the German government (fearful of the country’s vociferous green movement) that vetoed the idea. Harvested bean sprouts would seem a perfect candidate for such treatment. Having now witnessed the tragic consequences of allowing a dangerous pathogen like O104:H4 to get loose in the country’s food supply, one can only hope that Germany will now lead the world in embracing the merits of irradiating food. Along with presumably countless other consumers, your correspondent would heartily welcome such a move.

Friday, June 17, 2011

THE DEMISE OF SUNSPOTS—DEEP COOLING AHEAD?

Don J. Easterbrook, Professor of Geology, Western Washington University, Bellingham, WA

The three studies released by NSO’s Solar Synoptic Network this week, predicting the virtual vanishing of sunspots for the next several decades and the possibility of a solar minimum similar to the Maunder Minimum, came as stunning news. According to Frank Hill,

“the fact that three completely different views of the Sun point in the same direction is a powerful indicator that the sunspot cycle may be going into hibernation.”

The last time sunspots vanished from the sun for decades was during the Maunder Minimum, from 1645 to 1700 AD, which was marked by drastic cooling of the climate and the maximum cold of the Little Ice Age.

What happened the last time sunspots disappeared?

Abundant physical evidence from the geologic past provides a clear record of former periods of global cooling, and we can use that record to project global climate into the future—the past is the key to the future. So what can we learn from past sunspot history and climate change?


Galileo’s perfection of the telescope in 1609 allowed scientists to see sunspots for the first time. From 1610 A.D. to 1645 A.D., very few sunspots were seen, despite the fact that many scientists with telescopes were looking for them, and from 1645 to 1700 AD sunspots virtually disappeared from the sun (Fig. 1). During this interval of greatly reduced sunspot activity, known as the Maunder Minimum, global climates turned bitterly cold (the Little Ice Age), demonstrating a clear correspondence between sunspots and cool climate. After 1700 A.D., the number of observed sunspots increased sharply from nearly zero to more than 50 (Fig. 1) and the global climate warmed.

FIGURE 1. Sunspots during the Maunder Minimum (modified from Eddy, 1976).

The Maunder Minimum was not the beginning of The Little Ice Age—it actually began about 1300 AD—but it marked perhaps the bitterest part of the cooling. Temperatures dropped ~4°C (~7°F) in ~20 years in mid- to high latitudes. The colder climate that ensued for several centuries was devastating. The population of Europe had become dependent on cereal grains as their main food supply during the Medieval Warm Period and when the colder climate, early snows, violent storms, and recurrent flooding swept Europe, massive crop failures occurred. Winters in Europe were bitterly cold, and summers were rainy and too cool for growing cereal crops, resulting in widespread famine and disease. About a third of the population of Europe perished.

Glaciers all over the world advanced and pack ice extended southward in the North Atlantic. Glaciers in the Alps advanced and overran farms and buried entire villages. The Thames River and canals and rivers of the Netherlands frequently froze over during the winter. New York Harbor froze in the winter of 1780 and people could walk from Manhattan to Staten Island. Sea ice surrounding Iceland extended for miles in every direction, closing many harbors. The population of Iceland decreased by half and the Viking colonies in Greenland died out in the 1400s because they could no longer grow enough food there. In parts of China, warm weather crops that had been grown for centuries were abandoned. In North America, early European settlers experienced exceptionally severe winters.

So what can we learn from the Maunder? Perhaps most important is that the Earth’s climate is related to sunspots. The cause of this relationship is not understood, but it definitely exists. The second thing is that cooling of the climate during sunspot minima imposes great suffering on humans—global cooling is much more damaging than global warming.

Global cooling during other sunspot minima

The global cooling that occurred during the Maunder Minimum was neither the first nor the only such event. The Maunder was preceded by the Sporer Minimum (~1410–1540 A.D.) and the Wolf Minimum (~1290–1320 A.D.) and succeeded by the Dalton Minimum (1790–1830), the unnamed 1880–1915 minimum, and the unnamed 1945–1977 minimum (Fig. 2). Each of these periods is characterized by low numbers of sunspots, cooler global climates, and changes in the rate of production of 14C and 10Be in the upper atmosphere. As shown in Fig. 2, each minimum was a time of global cooling, recorded in the advance of alpine glaciers.

Figure 2. Correspondence of cold periods and solar minima from 1500 to 2000 AD. Each of the five solar minima was a time of sharply reduced global temperatures (blue areas).

The same relationship between sunspots and temperature is also seen between sunspot numbers and temperatures in Greenland and Antarctica (Fig. 3). Each of the four minima in sunspot numbers seen in Fig. 3 also occurs in Fig. 2. All of them correspond to advances of alpine glaciers during each of the cool periods.


Figure 3. Correlation of sunspot numbers and temperatures in Greenland and Antarctica (modified from Usoskin et al., 2004).

Figure 4 shows the same pattern between solar variation and temperature. Temperatures were cooler during each solar minimum.


Figure 4. Solar irradiance and temperature from 1750 to 1990 AD. During this 250-year period, the two curves follow remarkably similar patterns (modified from Hoyt and Schatten, 1997). Each solar minimum corresponds to climatic cooling.

What can we learn from this historic data? Clearly, a strong correlation exists between solar variation and temperature. Although this correlation is too robust to be merely coincidental, exactly how solar variations are translated into climatic changes on Earth is not clear. For many years, solar scientists considered variation in solar irradiance to be too small to cause significant climate changes. However, Svensmark (Svensmark and Calder, 2007; Svensmark and Friis-Christensen, 1997; Svensmark et al., 2007) has proposed a new concept of how the sun may impact Earth’s climate. Svensmark recognized the importance of cloud generation as a result of ionization in the atmosphere caused by cosmic rays. Clouds reflect incoming sunlight and tend to cool the Earth. The amount of cosmic radiation is greatly affected by the sun’s magnetic field, so during times of weak solar magnetic field, more cosmic radiation reaches the Earth. Thus, perhaps variation in the intensity of the solar magnetic field may play an important role in climate change.

Are we headed for another Little Ice Age?

In 1999, the year after the high temperatures of the 1998 El Nino, I became convinced that geologic data of recurring climatic cycles (ice core isotopes, glacial advances and retreats, and sun spot minima) showed conclusively that we were headed for several decades of global cooling and presented a paper to that effect (Fig. 5). The evidence for this conclusion was presented in a series of papers from 2000 to 2011 (The data are available in several GSA papers, my website, a 2010 paper, and in a paper scheduled to be published in Sept 2011). The evidence consisted of temperature data from isotope analyses in the Greenland ice cores, the past history of the PDO, alpine glacial fluctuations, and the abrupt Pacific SST flips from cool to warm in 1977 and from warm to cool in 1999. Projection of the PDO to 2040 forms an important part of this cooling prediction.



Figure 5. Projected temperature changes to 2040 AD. Three possible scenarios are shown: (1) cooling similar to the 1945-1977 cooling, (2) cooling similar to the 1880-1915 cooling, and (3) cooling similar to the Dalton Minimum (1790-1820). Cooling similar to the Maunder Minimum would be an extension of the Dalton curve off the graph.

So far, my cooling prediction seems to be coming to pass, with no global warming above the 1998 temperatures and a gradually deepening cooling since then. However, until now, I have suggested that it was too early to tell which of these possible cooling scenarios was most likely. If we are indeed headed toward a disappearance of sunspots similar to the Maunder Minimum during the Little Ice Age, then perhaps my most dire prediction may come to pass. As I have said many times over the past 10 years, time will tell whether my prediction is correct or not. The announcement that sunspots may disappear totally for several decades is very disturbing because it could mean that we are headed for another Little Ice Age during a time when world population is predicted to increase by 50% with sharply increasing demands for energy, food production, and other human needs. Hardest hit will be poor countries that already have low food production, but everyone would feel the effect of such cooling. The clock is ticking. Time will tell!

References

D’Aleo, J., Easterbrook, D.J., 2010. Multidecadal tendencies in ENSO and global temperatures related to multidecadal oscillations: Energy & Environment, vol. 21 (5), p. 436–460.

Easterbrook, D.J., 2000, Cyclical oscillations of Mt. Baker glaciers in response to climatic changes and their correlation with periodic oceanographic changes in the Northeast Pacific Ocean: Geological Society of America, Abstracts with Programs, vol. 32, p.17.

Easterbrook, D.J., 2001, The next 25 years; global warming or global cooling? Geologic and oceanographic evidence for cyclical climatic oscillations: Geological Society of America, Abstracts with Programs, vol. 33, p.253.

Easterbrook, D.J., 2005, Causes and effects of late Pleistocene, abrupt, global, climate changes and global warming: Geological Society of America, Abstracts with Programs, vol. 37, p.41.

Easterbrook, D.J., 2006, Causes of abrupt global climate changes and global warming; predictions for the coming century: Geological Society of America, Abstracts with Programs, vol. 38, p. 77.

Easterbrook, D.J., 2006, The cause of global warming and predictions for the coming century: Geological Society of America, Abstracts with Programs, vol. 38, p.235-236.

Easterbrook, D.J., 2007, Geologic evidence of recurring climate cycles and their implications for the cause of global warming and climate changes in the coming century: Geological Society of America Abstracts with Programs, vol. 39, p. 507.

Easterbrook, D.J., 2007, Late Pleistocene and Holocene glacial fluctuations; implications for the cause of abrupt global climate changes: Geological Society of America, Abstracts with Programs, vol. 39, p.594

Easterbrook, D.J., 2007, Younger Dryas to Little Ice Age glacier fluctuations in the Fraser Lowland and on Mt. Baker, Washington: Geological Society of America, Abstracts with Programs, vol. 39, p.11.

Easterbrook, D.J., 2007, Historic Mt. Baker glacier fluctuations—geologic evidence of the cause of global warming: Geological Society of America, Abstracts with Programs, vol. 39, p. 13.

Easterbrook, D.J., 2008, Solar influence on recurring global, decadal, climate cycles recorded by glacial fluctuations, ice cores, sea surface temperatures, and historic measurements over the past millennium: Abstracts of American Geophysical Union Annual Meeting, San Francisco.

Easterbrook, D.J., 2008, Implications of glacial fluctuations, PDO, NAO, and sun spot cycles for global climate in the coming decades: Geological Society of America, Abstracts with Programs, vol. 40, p. 428.

Easterbrook, D.J., 2008, Correlation of climatic and solar variations over the past 500 years and predicting global climate changes from recurring climate cycles: Abstracts of 33rd International Geological Congress, Oslo, Norway.

Easterbrook, D.J., 2009, The role of the oceans and the Sun in late Pleistocene and historic glacial and climatic fluctuations: Geological Society of America, Abstracts with Programs, vol. 41, p. 33.

Eddy, J.A., 1976, The Maunder Minimum: Science, vol. 192, p. 1189–1202.

Hoyt, D.V. and Schatten, K.H., 1997, The Role of the sun in climate change: Oxford University, 279 p.

Svensmark, H. and Calder, N., 2007, The chilling stars: A new theory of climate change: Icon Books, Allen and Unwin Pty Ltd, 246 p.

Svensmark, H. and Friis-Christensen, E., 1997, Variation of cosmic ray flux and global cloud cover—a missing link in solar–climate relationships: Journal of Atmospheric and Solar-Terrestrial Physics, vol. 59, p. 1125–1132.

Svensmark, H., Pedersen, J.O., Marsh, N.D., Enghoff, M.B., and UggerhĂžj, U.I., 2007, Experimental evidence for the role of ions in particle nucleation under atmospheric conditions: Proceedings of the Royal Society, vol. 463, p. 385–396.

Usoskin, I.G., Mursula, K., Solanki, S.K., Schussler, M., and Alanko, K., 2004, Reconstruction of solar activity for the last millennium using 10Be data: Astronomy and Astrophysics, vol. 413, p. 745–751.

Thursday, June 16, 2011

‘Resources are Not, Resources Become’

Eagle Ford Oil: ‘Resources are Not, Resources Become’ (and new jobs galore without government subsidy, President Obama)
by Greg Rehmke
June 16, 2011

“Nothing is more fatal to a realistic and usable understanding of resources than the failure to differentiate between the constants of natural science and the relatives of social science, between the totality of the universe or of the planet earth … and … the ever-changing resources of a given group of people at a given time and place…. One has but to recall some of the most precious resources of our age—electricity, oil, nuclear energy—to see who is right, the exponent of the static school who insists that ‘resources are,’ or the defender of the dynamic, functional, operational school who insists that ‘resources become.’”

- Erich Zimmermann, World Resources and Industries (New York: Harper & Brothers, 1951), p. 11.

Resource optimists are continually rewarded by oil and gas drillers. One can only imagine what world production would be like if private property rights and profit/loss entrepreneurship were the norm, as they are in much of the United States.

The only good news is that politically shackled mineral production is ‘reserved’ for the future, further refuting the peak oil (or peak anything else) proponents who fail to see that politics and not potential is the limit to growth.

Eagle Ford: What ‘Peak Oil’?

Consider Eagle Ford, where a new shale drilling technology is creating a private-sector Strategic Petroleum Reserve.

Last month, a Houston Chronicle feature reported on the fast expansion of oil drilling along the 400-mile-long Eagle Ford shale formation. While most new shale operations in the U.S. and Eastern Europe have been drilling for gas, at Eagle Ford the drilling is for black gold, Texas Tea.

The Chron.com article focuses on new jobs and tax revenues, but the bigger story is the new oil expected to flow over the next 20–30 years across some six million acres. The article quotes sources expecting 20,000–30,000 wells (you read that right!) to be drilled, ultimately producing up to ten billion barrels of oil.

If ten billion barrels over 30 years is a reasonable estimate, that comes to about a million barrels produced each day from this one large shale formation.
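
The arithmetic behind that figure, for anyone who wants to check it:

```python
# Ten billion barrels spread over 30 years, in barrels per day.

total_barrels = 10e9
years = 30

per_day = total_barrels / (years * 365)
print(f"~{per_day:,.0f} barrels per day")  # ~913,000, i.e. about a million
```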

The Railroad Commission of Texas maintains a webpage on Eagle Ford. Oil production jumped from 308,000 barrels in 2009 to over 3 million barrels in 2010, and combined production for January and February 2011 was 614,000 barrels. This map shows the Eagle Ford Shale Play size and current producing oil and gas wells. Natural gas production, by the way, quadrupled from 2009 to nearly 80 billion cubic feet in 2010.

How long will this rapid increase of oil production continue? Consider that the first Eagle Ford well, drilled by Petrohawk, was announced in late 2008. Today is only mid-2011! A similar find in California might have progressed by now to just a few Department of Conservation hearings, against a background of protests by environmentalists and Hollywood players.

But Eagle Ford is in Texas, not California, and by 2010 there were 72 producing oil leases, up from 40 in 2009. From the Railroad Commission website: “Drilling Permit Processing Time as of May 23, 2011: Expedited Permits: approximately 2 Business day[sic], Standard Permits: approximately 6 Business days.”

Drilling permits issued in Texas dropped nearly in half, from 24,000 to 12,200, from 2008 to 2009, when oil prices and the economy collapsed. But drilling permits issued climbed again to 18,000 in 2010.

Expect oil production to continue to grow in Texas. And expect the recent shale oil production in Texas to stimulate new exploration in and around other shale gas fields.

A Private SPR?

The Federal Government’s Strategic Petroleum Reserve (SPR) holds 725 million barrels, which it can release at up to 3.5 million barrels a day. As thousands of new oil wells begin to produce across Eagle Ford, these secure flows provide further reason to reduce the oil stockpiled at taxpayer expense (and federal debt interest expense) in the SPR.
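
For scale, those two figures imply roughly 200 days of full-rate drawdown:

```python
# Days of drawdown implied by the SPR figures above.

spr_barrels = 725e6
max_drawdown_per_day = 3.5e6

print(f"~{spr_barrels / max_drawdown_per_day:.0f} days at full drawdown")  # ~207
```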

The oil abundance of Eagle Ford is yet another reason to get the federal government out of the oil business.

On The Hijacking of the American Meteorological Society (AMS)

by Bill Gray Professor Emeritus, Colorado State University
(AMS Fellow, Charney Award recipient, and over 50-year member)

June 2011

I am very disappointed at the downward path the AMS has been following for the last 10-15 years in its advocacy of the Anthropogenic Global Warming (AGW) hypothesis. The society has officially taken a position many of us AMS members do not agree with. We believe that humans are having little or no significant influence on the global climate and that the many General Circulation Model (GCM) results and the four IPCC reports do not give realistic or accurate future projections. To take this position, which so many of its members do not necessarily agree with, shows that the AMS is following more of a political than a scientific agenda.

The AMS Executive Director Keith Seitter and the other AMS higher-ups and the Council have not shown the scientific maturity and wisdom we would expect of our AMS leaders. I question whether they know just how far off-track the AMS has strayed since they foolishly took such a strong pro-AGW stance.

The American Meteorological Society (AMS) was founded in 1919 as an organization dedicated to advancing scientific knowledge of weather and climate. It has been a wonderful beacon for fostering new understanding of how the atmosphere and oceans function. But this strong positive image is now becoming tarnished as a result of the AMS leadership’s capitulating to the lobby of the climate modelers and to the outside environmental and political pressure groups who wish to use the current AMS position on AGW to help justify the promotion of their own special interests. The effectiveness of the AMS as an objective scientific organization is being greatly compromised.


We AMS members have allowed a small group of AMS administrators, climate modelers, and CO2 warming sympathizers to maneuver the internal workings of our society to support AGW policies irrespective of what our rank-and-file members might think. This small organized group of AGW sympathizers has indeed hijacked our society.

The AMS should be acting as a facilitator for the scientific debate on the pro and con aspects of the AGW hypothesis, not taking a side in the issue. The AMS has not held the kind of open and honest scientific debates on the AGW hypothesis that it should have. Why has it dodged open discussion of such an important issue? I've been told that the American Economic Association does not take sides on controversial economic issues but acts primarily to stimulate back-and-forth discussion. This is what the AMS should have been doing but has not.

The global warming James Hansen predicted before the Senate in 1988 is turning out to be very much less than he projected. He cannot explain why there has been no significant global warming over the last 10-12 years.

Many of us AMS members believe that the modest global warming we have observed is of natural origin and due to multi-decadal and multi-century changes in the globe’s deep ocean circulation resulting from salinity variations. These changes are not associated with CO2 increases. Most of the GCM modelers have little experience in practical meteorology. They do not realize that the strongly chaotic nature of the atmosphere-ocean climate system does not allow for skillful initial value numerical climate prediction. The GCM simulations are badly flawed in at least two fundamental ways:

1. Their upper-tropospheric water-vapor feedback loop is grossly wrong. They assume that increases in atmospheric CO2 will cause large, very unrealistic increases in upper-tropospheric water vapor; most of their model warming follows from these invalid water-vapor assumptions. Their handling of rainfall processes is also quite inadequate.
2. They lack an understanding and treatment of the fundamental role of the deep ocean circulation (the Meridional Overturning Circulation, or MOC) and of how a changing ocean circulation (driven by salinity variations) can bring about wind, rainfall, and surface-temperature changes independent of radiation and greenhouse-gas changes. These ocean processes are not properly incorporated in their models. They assume the physics of global warming is entirely a product of radiation changes and radiation-feedback processes, and they neglect variations in global evaporation, which depend more on surface wind speed and on the temperature difference between the ocean surface and the overlying air. These are major deficiencies.
The Modelers' Free Ride. It is surprising that the GCM modelers have been able to get away with their unrealistic modeling efforts for so long. One explanation is that they have received strong support from Senator (later Vice President) Al Gore and other politicians who for over three decades have attempted to make political capital out of rising CO2 measurements. Another is that many environmental and political groups (including the mainstream media) have been eager to use the GCM climate results to justify their own special interests that fly under the global-warming banner. A third is that the modelers have not been challenged by their peer climate-modeling groups, who have apparently seen possibilities for similar research-grant support and publicity by copying Hansen and the earlier GCM modelers.

I anticipate that we are going to experience a modest, naturally driven global cooling over the next 15-20 years, similar to the weak global cooling that occurred between the early 1940s and the mid-1970s. Note that CO2 amounts were also rising during that earlier cooling period, which is the opposite of the expected CO2-temperature association.

An expected 15-20 year cooling will occur (in my view) because of the strong ocean Meridional Overturning Circulation (MOC) that has become established over the last decade and a half and ought to continue for another couple of decades. I attribute most of the general global warming of the last century and a half, since the mid-1800s (the start of the industrial revolution), to a long multi-century slowdown in the ocean's MOC. Increases in CO2 could have contributed only a small fraction (0.1-0.2°C) of the roughly 0.7°C surface warming that has been observed since 1850. Natural processes have had to be responsible for most of the observed warming over the last century and a half.

Debate. The AMS is the most relevant of our country's scientific societies in the sense that its members have the most extensive scientific and technical background in meteorology and climate. It should have been a leader in helping to adjudicate the claims of the AGW advocates and their skeptical critics. Our country's Anglo-Saxon-derived legal system is based on the idea that the best way to get to the truth is to have opposing sides of a contentious issue present their differing views in open debate before a nonpartisan jury. Nothing like this has happened with the AGW issue. Instead of organizing meetings with free and open debate on the basic physics and the likelihood of AGW-induced climate changes, the leaders of the society (with the backing of the society's AGW enthusiasts) have chosen to trust the climate models fully and to deliberately avoid open debate on this issue. I know of no AMS-sponsored conference where the AGW hypothesis has been given open and free discussion. For a long time I have wanted a forum in which to express my skepticism of the AGW hypothesis, but no such opportunity ever came within the AMS framework. Attempts to publish my skeptical views have been difficult. One rejection stated that I was too far outside mainstream thinking; another, that my ideas had already been discredited. A number of AGW skeptics have told me they have had similar experiences.

The climate modelers and their supporters deny the need for open debate of the AGW question on the grounds that the issue has already been settled by their model results. They have taken this view because they know that the physics within their models, and the long range of their forecast periods, would likely not withstand knowledgeable and impartial review. They simply will not debate the issue. As a defense against criticism they have resorted to a general denigration of those of us who do not support their AGW hypothesis. AGW skeptics are sometimes tagged (I have been) as no longer being credible scientists. Skeptics are often denounced as tools of the fossil-fuel industry. A type of McCarthyism against AGW skeptics has been on display for a number of years.

Recent AMS Awardees. Since 2000 the AMS has awarded its highest annual award (the Rossby Research Medal) to the following AGW advocates or sympathizers: Susan Solomon (00), V. Ramanathan (02), Peter Webster (04), Jagadish Shukla (05), Kerry Emanuel (07), Isaac Held (08), and James Hansen (09). Its second-highest award (the Charney Award) has gone to AGW advocates or sympathizers Kevin Trenberth (00), Rich Rotunno (04), Graeme Stephens (05), Robert D. Cess (06), Alan Betts (07), Gerald North (08), and Warren Washington and Gerald Meehl (09). The other Rossby and Charney awardees during this period are not known to be critics of the AGW hypothesis.

The AGW bias within the AMS policy-makers is so entrenched that it would be impossible for well-known and established scientists who are AGW skeptics (Fred Singer, Pat Michaels, Bill Cotton, Roger Pielke, Sr., Roy Spencer, John Christy, Joe D'Aleo, Bob Balling, Jr., Craig Idso, Willie Soon, and others) ever to receive an AMS award, irrespective of the uniqueness or brilliance of their research.

What Working Meteorologists Say. My interactions over the years with a broad segment of AMS members (people I have met through my seasonal hurricane forecasting and other activities, who have spent sizable portions of their careers down in the meteorological trenches of observation and forecasting) indicate that a majority of them do not agree that humans are the primary cause of global warming. These working meteorologists are too experienced and too sophisticated to be hoodwinked by the lobby of global climate modelers and their associated propagandists. I suggest that the AMS survey those of its members who actually work with real-time weather and climate data to see how many agree that humans have been the main cause of global warming, and whether there was justification for the AMS's 2009 Rossby Research Medal (its highest award) going to James Hansen.

Global Environmental Problems. There is no question that global population growth and increasing industrialization have caused many environmental problems: air and water pollution, industrial contamination, unwise land use, and hundreds of other human-induced environmental irritants. But these human-induced problems will not be solved by a draconian effort to reduce CO2 emissions. CO2 is not a pollutant but a fertilizer. Humankind needs fossil-fuel energy to maintain its industrial lifestyle and to expand that lifestyle so as to better handle these many other, non-CO2 environmental problems. There appears to be a misconception among many people that by reducing CO2 we are dealing with our most pressing environmental problem. Not so.

It must be remembered that advanced industrial societies do more for the global environment than poor societies do. Greatly reducing CO2 emissions, and paying a great deal more for the renewable energy we would then need, would lower our nation's standard of living and leave us less able to relieve our own and the globe's many environmental, political, and social problems.

Obtaining a Balanced View on AGW. To understand what is really occurring with the AGW question, one must now bypass the AMS, the mainstream media, and the mainline scientific journals. They have mostly been preconditioned to accept the AGW hypothesis and, in general, frown on anyone who does not agree that AGW is, next to nuclear war, our society's most serious long-range problem.

To obtain any kind of balanced back-and-forth discussion of AGW, one has to consult the many web blogs written by both advocates and skeptics of AGW. These blogs are the only source of real open debate on the validity of the AGW hypothesis; it is here that the real science of the AGW question is taking place. Over the last few years the weight of evidence presented in these blog discussions has begun to swing against the AGW hypothesis. As the globe fails to warm as the GCMs predicted, the American public is gradually losing its belief in the earlier claims of Gore, Hansen, and the many other AGW advocates.

Prediction. The AMS is going to be judged in future years as having foolishly sacrificed its sterling scientific reputation for political and financial expediency. I am sure that hundreds of our older deceased AMS members are rolling in their graves over what has become of their and our great society.

Wednesday, June 15, 2011

THE LOOMING THREAT OF GLOBAL COOLING

Geological Evidence for Prolonged Cooling Ahead and its Impacts

Prof. Don J. Easterbrook, Dept. of Geology -- Western Washington University -- Bellingham, WA 98225

The past is the key to the future--to understand present-day climate changes, we need to know how climate has behaved in the past. In order to predict where we are heading, we need to know where we've been. Thus, one of the best ways to predict what climate changes lie ahead is to look for patterns in past climate changes.

Numerous, abrupt, short-lived warming and cooling episodes, much more intense than recent warming/cooling, occurred during the last Ice Age and in the 10,000 years that followed, none of which could have been caused by changes in atmospheric CO2 because they happened before CO2 began to rise sharply around 1945. This paper documents the geologic evidence for these sudden climate fluctuations, which show a remarkably consistent pattern over decades, centuries, and millennia.

Among the surprises that emerged from oxygen isotope analyses of Greenland and Antarctic ice cores was the recognition of very sudden, short–lived climate changes. The ice core records show that such abrupt climate changes have been large, very rapid, and globally synchronous. Climate shifts, up to half the difference between Ice Age and interglacial conditions, occurred in only a few decades.

Ten major, intense periods of abrupt climate change occurred over the past 15,000 years, and another 60 smaller, sudden climate changes have occurred in the past 5,000 years. The intensity and suddenness of these climatic fluctuations are astonishing.

Several times, temperatures rose or fell by 9–15°F in a century or less.

The dramatic melting of continental glaciers in North America, Europe, and Asia that began 15,000 years ago was interrupted by sudden cooling 12,800 years ago, dropping the world back into the Ice Age. Continental and alpine glaciers all over the world ceased their retreat and re-advanced. This cold period, the Younger Dryas, lasted for 1300 years and ended abruptly with sudden, intense warming 11,500 years ago. The climate in Greenland warmed about 9° F in about 30 years and 15° F over 40 years. During the Younger Dryas cold period, glaciers not only expanded significantly, but also fluctuated repeatedly, in some places as many as nine times.

Temperatures during most of the last 10,000 years were somewhat higher than at present until about 3,000 years ago. For the past 700 years, the Earth has been coming out of the Little Ice Age, generally warming with alternating warm and cool periods.

Both the Medieval Warm Period and the Little Ice Age have long been well established and documented with strong geologic evidence. GeoRef lists 485 papers on the Medieval Warm Period and 1,413 on the Little Ice Age, a total of nearly 1,900 published papers on the two periods. Thus, when Mann et al. (1998) contended that neither event had happened and that climate had not changed in 1,000 years (the infamous "hockey stick" graph), geologists didn't take them seriously; they thought either (1) the trees used for the climate reconstruction were not climate-sensitive, or (2) the data had been used inappropriately. As those nearly 1,900 published papers show, the Medieval Warm Period and the Little Ice Age most certainly happened, and the Mann et al. "hockey stick" is nonsense, not supported by any credible evidence.

The oxygen isotope record for the Greenland GISP ice core over the past 500 years shows a remarkably regular alternation of warm and cool periods. The vertical blue lines at the bottom of the graph below show the time intervals between each warm/cool period. The average interval is 27 years, the same as the interval between Pacific Ocean warm and cool phases shown by the Pacific Decadal Oscillation (see below).
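Mechanically, that 27-year figure is just the mean spacing between successive warm/cool transitions. A minimal sketch of the calculation follows; the transition years below are invented placeholders, not the actual GISP dates:

```python
# Hypothetical warm/cool transition years (placeholders, not GISP data)
transition_years = [1500, 1528, 1554, 1583, 1609, 1637, 1663, 1691]

# Spacing between each successive pair of transitions
intervals = [b - a for a, b in zip(transition_years, transition_years[1:])]
mean_interval = sum(intervals) / len(intervals)
print(f"Mean warm/cool interval: {mean_interval:.0f} years")  # ~27 here
```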

Global warming is real, but it did not begin in 1945 at the time of greatly increased CO2 emissions. Two periods of global warming (1915–1945 and 1977–1998), and two periods of global cooling (1880–1915 and 1945–1977) occurred in the 20th century. Atmospheric CO2 began to rise sharply right after WWII in 1945 but was accompanied by global cooling for 30 years, rather than by warming, and the earlier warm period from 1915 to 1945 took place before CO2 began to rise significantly.

During each of the two warm periods of the past century, alpine glaciers retreated and during each of the two cool periods glaciers advanced. The timing of the glacier advances and retreats coincides almost exactly with global temperature changes and with Pacific Ocean surface temperatures (PDO).

The Pacific Ocean has two modes, a warm mode and cool mode, and regularly switches back and forth between modes in a 25-30 year repeating cycle known as the Pacific Decadal Oscillation (PDO). When the PDO is in its warm mode, the climate warms and when it is in its cool mode the climate cools. Glacier fluctuations are driven by climatic changes, which are driven by ocean surface temperatures (PDO).

During the cool PDO mode, ocean surface temperatures in the eastern Pacific are cool. This was typical of the global cooling from 1945 to 1977. During the warm PDO mode, ocean surface temperatures in the eastern Pacific are warm. This was typical of the global warming from 1977 to 1998. The abrupt shift of the Pacific from the cool mode to the warm mode in a single year (1977), marking the beginning of the last warm cycle, has been termed the "Great Pacific Climate Shift." There is a direct correlation between PDO mode and global temperature.

The ocean surface temperature in the eastern Pacific off the coast of North America was warm in 1997. In 1999, the PDO switched from its warm mode to its cool mode and has since remained cool, as shown by satellite imagery. Adding the PDO record for the past decade to the PDO record for the century provides an interesting pattern. The PDO 1915–1945 warm mode, the 1945–1977 cool mode, the 1977–1998 warm mode, and the switch from warm to cool mode in 1999 all match corresponding global climate changes and strongly suggest:

1. The PDO has a regular cyclic pattern, with alternating warm and cool modes every 25-30 years.

2. The PDO has accurately matched each global climate change over the past century and may be used as a predictive tool (see the sketch after this list).

3. Since the switch of the PDO from warm to cool in 1999, global temperatures have not exceeded the 1998 high.

4. Each time the PDO has changed from one mode to the other, it has stayed in the new mode for 25-30 years; thus, the switch from warm to cool in 1999 is now entrenched, and the PDO will undoubtedly stay in its cool mode for several more decades.

5. With the PDO in its cool mode for another several decades, we can expect several more decades of cooling.
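The "direct correlation" invoked in point 2 is, computationally, nothing more than a correlation coefficient between a PDO-mode series and a temperature series. Here is a toy illustration in Python; both series below are invented for demonstration and are not observational data:

```python
# Toy decadal series, invented for illustration (not observations):
# PDO mode (+1 = warm, -1 = cool) and global temperature anomaly (deg F)
pdo_mode  = [+1, +1, +1, -1, -1, -1, +1, +1, -1]
temp_anom = [0.2, 0.3, 0.4, 0.2, 0.1, 0.0, 0.4, 0.7, 0.6]

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"PDO-temperature correlation: {pearson(pdo_mode, temp_anom):+.2f}")
```

A value near +1 would indicate the close tracking the author describes; whether the real records support that is exactly the claim at issue.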

In 2000, the Intergovernmental Panel on Climate Change (IPCC) predicted global warming of 1°F per decade, or about 10°F by 2100. By 2010, temperatures should therefore have been 1°F warmer than in 2000. That didn't happen, so their climate models failed to predict even ten years ahead.
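That check is just a linear extrapolation of the quoted per-decade rate; for reference, a one-function sketch:

```python
def predicted_warming_f(rate_per_decade_f, years):
    """Linear extrapolation of warming at a fixed per-decade rate (deg F)."""
    return rate_per_decade_f * years / 10

print(predicted_warming_f(1.0, 10))   # by 2010: 1.0 deg F above 2000
print(predicted_warming_f(1.0, 100))  # by 2100: 10.0 deg F above 2000
```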

The blue curves of projected cooling are based on PDO patterns over the past century and temperature patterns over the past 500 years. Three possible scenarios are shown: (1) global cooling similar to that of 1945 to 1977, (2) global cooling similar to the cool period from 1880 to 1915, and (3) global cooling similar to the Dalton Minimum of 1790 to 1820.

The possibility of temperatures dropping to the level of the Dalton Minimum is suggested by the sun's recent passage from a solar grand maximum toward a solar grand minimum similar to that of the Dalton Minimum. The unusually long sunspot cycle 23 and the solar magnetic index suggest that a solar minimum similar to the Dalton is very possible. A fourth possibility is that we may be approaching another Maunder-type minimum and another Little Ice Age. Time will tell which curve is correct.

IMPACTS OF GLOBAL COOLING

That global warming is over, at least for a few decades, might seem to be a relief. However, the bad news is that global cooling is even more harmful to humans than global warming and a cause for even greater concern because:

1. A recent study showed that extreme cold kills twice as many people as extreme heat does.

2. Global cooling will reduce food production because of shorter, cooler growing seasons and bad weather during harvests. This is already happening in the Midwestern U.S., China, India, and elsewhere in the world. Hardest hit will be third-world countries where millions are already near starvation.

3. Per capita energy demand will increase, especially for heating.

4. The ability to cope with problems related to the population explosion will decrease. World population is projected to reach more than 9 billion by 2050, an increase of 50%. That means substantially greater demand for food and energy at a time when the cooling climate is reducing the supply of both.

CONCLUSIONS

Numerous, abrupt, short-lived warming and cooling episodes, much more intense than recent warming/cooling, occurred during the last Ice Age, none of which could have been caused by changes in atmospheric CO2.

Climate changes in the geologic record show a regular pattern of alternating warming and cooling, in 25-30 year cycles, over the past 500 years.

Strong correlations between solar changes, the PDO, glacier advance and retreat, and global climate allow us to project a consistent pattern into the future.

Projected cooling for the next several decades follows those same patterns, with the three possible scenarios noted above: cooling similar to that of 1945 to 1977, to the cool period of 1880 to 1915, or to the Dalton Minimum of 1790 to 1820.

Expect global cooling for the next 2-3 decades that will be far more damaging than global warming would have been.

Tuesday, June 14, 2011

Wind Energy's Ghosts

http://www.americanthinker.com/2010/02/wind_energys_ghosts_1.html

Shrinking Workers' Income

The wealthy don't create jobs; customers create jobs. Workers' share of national income is collapsing. No matter how many tax cuts you give the rich and corporations, they are not going to create any jobs if their customers have no money to buy the stuff they make. George H.W. Bush was right: it is "voodoo economics." Even the inventor of supply-side economics, Ludwig von Mises, admitted it was a scam before he died, but it is still a conservative mantra.

Monday, June 13, 2011

Latin Phrases

http://en.wikipedia.org/wiki/List_of_Latin_phrases_(full)

A Brief History of the Corporation

http://www.ribbonfarm.com/2011/06/08/a-brief-history-of-the-corporation-1600-to-2100/

Sunday, June 12, 2011

7 Lessons Public School Teaches

Are we really turning out independent, skeptical, curious and adaptable citizens? Critics have had their doubts for decades, for example:

From 7 Lessons Public School Teaches by John Taylor Gatto, New Society Publishers:

Students learn to accept:

1. Confusion: accept confusion as your destiny.
2. Hierarchy: you must stay in the class where you belong.
3. Indifference: do not care about anything too much.
4. Emotional dependency: surrender your will and rights to the predestined chain of command, which can withdraw those rights.
5. Intellectual dependency: curiosity has no important place, only conformity; good people wait for an expert to tell them what to do.
6. Provisional self-esteem: your self-respect should depend on expert opinion; children should not trust themselves or their parents but must rely on the evaluations of certified officials.
7. Controlled society: constant surveillance and denial of privacy; no one can be trusted, and privacy is not legitimate.