Thomas Vales Presentation

Thomas Vales, the Laboratory Coordinator for the Electrical & Computer Engineering Department at Suffolk University’s College of Arts and Sciences, gleefully treated our class to some welcome edutainment about the evolution of electricity.

Mr. Vales began by introducing us to the Peltier device, whose underlying effect was discovered in 1834.  It is built from junctions of dissimilar conductors sandwiched between plates; when one side is cooled and the other heated, the temperature difference drives an electric current.  The military uses these devices mainly in submarines for silent generation of power.  They also turn up in plug-in gadgets powered through automobile lighter sockets, such as the containers used to keep beverages hot or cold on the road.  Mr. Vales finally related that, because they are very inefficient, Peltier devices see relatively sparing commercial use.
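
For a rough sense of the physics at work, here is a back-of-envelope sketch of how a temperature difference translates into voltage via the Seebeck relation; the module coefficient and temperatures are assumptions for illustration, not figures from Mr. Vales's demonstration.

```python
# A back-of-envelope look at thermoelectric generation using the Seebeck
# relation V = S * dT. The module coefficient and temperatures below are
# invented numbers, not measurements from the demonstration.

seebeck_coeff = 0.05   # volts per kelvin for a whole module (assumed)
t_hot = 80.0           # hot-side temperature, deg C (assumed)
t_cold = 20.0          # cold-side temperature, deg C (assumed)

delta_t = t_hot - t_cold                   # temperature difference, K
open_circuit_v = seebeck_coeff * delta_t   # V = S * dT
print(f"dT = {delta_t:.0f} K -> open-circuit voltage ~ {open_circuit_v:.1f} V")
```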

The Peltier Device in the two cups and on the spinning wheel. The Mendocino Motor is illuminated by the lamp to receive "solar" power.

Mr. Vales then discussed the Mendocino Motor, a solar-powered device that is suspended – or levitated – on magnetic bearings and has no practical application beyond demonstration.

The penultimate discussion traced the history of the Tesla coil, with an introduction to the magnanimous genius who developed it, Nikola Tesla.  (Tesla sought no monetary gain for his inventions because he simply loved his work!  As a result, he died penniless.)  The Serbian engineer, born in what is now Croatia, emigrated to the United States in 1884 and went on to hold roughly 300 patents, the best known of which covered his method of generating and distributing electricity by alternating current (AC).  He was backed financially by the entrepreneurial engineer George Westinghouse, who latched on to Tesla’s AC system to compete with Thomas Edison’s direct current (DC) system.  In the ensuing “war of the currents,” Edison staged public demonstrations in which animals were electrocuted with alternating current, hoping to convince the public that his rival’s system was too dangerous to adopt.  Experts agree that alternating current is better and easier to transmit across the country: transformers can step AC up to high voltage for low-loss, long-distance lines and back down for safe household use, something the DC systems of the day could not do.  Westinghouse used Tesla’s alternating current to electrify Niagara Falls, which proved to be “the first large system to supply electricity for multiple uses (railway, lighting, power) from one circuit.” ( http://www.pbs.org/wgbh/theymadeamerica/whomade/westinghouse_lo.html )
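
To see why stepping the voltage up matters, consider a quick sketch of transmission loss: for a fixed power delivery, the loss in the line scales with the square of the current, so higher voltage means dramatically less waste.  The load and line resistance below are invented numbers chosen only to show the trend.

```python
# Why utilities step voltage up for transmission: for a fixed power
# delivery, line loss scales as I^2 * R. The 10 kW load and 5-ohm line
# are invented, illustrative numbers.

power = 10_000.0   # watts to deliver (assumed)
line_r = 5.0       # line resistance in ohms (assumed)

for volts in (120.0, 1_200.0, 12_000.0):
    current = power / volts       # I = P / V at the sending voltage
    loss = current ** 2 * line_r  # P_loss = I^2 * R
    print(f"{volts:8.0f} V -> {current:7.2f} A -> {loss:10.1f} W lost in the line")

# At 120 V the line would waste more power than the load even draws,
# which is exactly why low-voltage DC could not serve distant customers.
```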

Mr. Vales demonstrating a light bulb being powered by the Tesla coil.

The grand finale was Mr. Vales’s demonstration of how the Tesla coil wirelessly transmits electricity to power lights.  The Tesla coil is a resonant transformer: its wound primary and secondary coils build up very high voltages at radio frequency, throwing off dramatic sparks.  The coil’s oscillating electromagnetic field excites the neon and argon gas inside nearby bulbs, causing them to glow with no wires attached.  (The sparks also produce ozone, which can prove harmful if too much is inhaled.)  This dynamic video is a dramatic display of the Tesla coil at work.
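
For the curious, the coil's operating frequency can be estimated from the standard LC resonance formula; the inductance and capacitance below are ballpark assumptions, not measurements of Mr. Vales's coil.

```python
import math

# A Tesla coil is a resonant transformer; its operating frequency follows
# the LC resonance formula f = 1 / (2 * pi * sqrt(L * C)). The secondary
# inductance and top-load capacitance below are ballpark assumptions.

inductance = 0.05       # henries (assumed)
capacitance = 20e-12    # farads (assumed)

freq = 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))
print(f"resonant frequency ~ {freq / 1e3:.0f} kHz")   # ~159 kHz with these values
```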

The Tesla coil, with Mr. Vales and Dr. Shatz in the background.

Clean Energy Subsidies

It would seem prudent and appropriate for the United States federal government to subsidize clean energy.  Doing so would not only decrease our dependence on foreign energy sources; it would also contribute significantly to shrinking our carbon footprint, thereby helping to save the world from its projected rendezvous with ever more cataclysmic planetary conditions and weather systems brought on by global warming and its offspring.  We currently subsidize the ethanol industry to the tune of billions of dollars, even though ethanol production emits scarcely less CO2 than gasoline production.  The oil industry is even more heavily subsidized, as the following graphic shows.

According to a Chicago Tribune article from 18 September 2011, “The FBI is investigating what happened with Solyndra, a solar panel company that got a $535 million government-backed loan with the help of the Obama White House over the objections of federal budget analysts.”  The piece goes on to say that the scandal is nothing more than politics as usual – the good old quid pro quo in which corporations make political contributions and the recipients repay the donors with largesse from the government cash trough.  It is also “reported that the U.S. Department of Energy employee who helped monitor the Solyndra loan guarantee was one of Obama’s top fundraisers.”

President Obama is investing much political capital in the so-called green industry.  USA Today ran a story about the President’s outmaneuvering Congress to further his green agenda: “Secretary of State Hillary Rodham Clinton announced Thursday, accompanied by officials from Bangladesh, Canada, Ghana, Mexico and Sweden, a joint effort to curb the short-lived emissions of pollutants including soot (also called black carbon), methane and hydrofluorocarbons that account for 30% to 40% of global warming.  The United States plans to contribute $12 million and Canada $3 million over two years to begin the project, which will be run by the United Nations Environment Program….’One of the benefits of focusing on pollutants that are short-lived is, if we can reduce them significantly, we will have a noticeable effect on our climate in relatively short order,’ Clinton said at the State Department announcement. Scientists estimate that cutting these emissions can help prevent millions of deaths from pollution and lower global temperatures 0.5 degrees Celsius by 2050.”  Sounds like a comparatively cheap price to pay.  I would place my bet on subsidizing.

There are many views out there on whether or not the United States should pump money into the renewable energy sector.  The best case for subsidies that I found was contained in this U.S. News & World Report article, where Eric Pooley argues, “The benefits of our investments in clean energy couldn’t be clearer. The U.S. was a significant net exporter of solar energy products with total net exports of $1.9 billion in 2010. Solar jobs have doubled in the U.S. to 100,000 since 2009, and last year alone U.S. solar energy installations created a combined $6.0 billion in direct value, of which $4.4 billion accrued to the U.S….If our goal as a nation is to flourish in the next energy boom, if we want to claim our share of the $2.3 trillion clean energy market, we can’t walk away from clean energy subsidies—so long as conventional fuels enjoy an artificial competitive advantage. It’s time to put Solyndra behind us, heed its lessons, and get back to the business of growing American business, cleaning up our power sector, and securing our energy supply.”

Hydraulic Fracturing – Fracking

Hydraulic fracturing, or “fracking”, is a method of extracting natural gas and oil from shale rock formations thousands of feet underground by drilling vertically and then horizontally.  Depending on whom one listens to, fracking is either a godsend or a certain path to destruction.  There were news reports that the recent earthquake that reverberated from Virginia to New England was the result of fracking.  An economic boom is occurring in some Texas towns and the Dakotas thanks to fracking.  It seems that America’s efforts to reduce its dependence on foreign sources of oil and natural gas are proceeding by all available means.  To drive home that last point, today’s (21 February 2012) Financial Times mentioned, in an article about a landmark US/Mexico oil deal, that “US oil production has risen to its highest level since 2002, largely as a result of the development of onshore ‘tight oil’, opened up by the techniques of horizontal drilling and hydraulic fracturing, or ‘fracking’, used to produce shale gas.”

A New Year’s Day 2012 article in the Oakland Press was very optimistic about the prospects of fracking putting people to work in the once-mighty but now economically hard-hit industrial state of Michigan.  “Hydraulic fracturing — popularly known as fracking —  involves pumping pressurized water, sand and chemicals underground to open fissures and improve the flow of oil or gas to the surface.  It has been used in Michigan since the 1950s, in more than 12,000 wells, mostly in northern regions of the state. Most commonly, a straight vertical well has been used; horizontal wells — where the vertical well reaches its depth and then is extended horizontally underground — are increasingly used by producers where appropriate to the producing formation, according to the Michigan Oil and Gas Producers Education Foundation. The approach used depends upon the geologic conditions and the economic reality of a particular situation…It carries the potential of thousands of jobs for Michigan, energy independence and, possibly, a cleaner environment.”

It has been an edifying exercise writing these blogs because of the intimacy with which I am delving into the problems and solutions of climate change and energy.  No longer are terms like demand response, emissions trading, and fracking just cursory items gleaned from news headlines.  These terms now arrest my attention because they transcend the surface and bore into the deeper echelons of my consciousness.

Now, being an eternal optimist, I prefer to dwell on the positive aspects of things while still giving due attention to the negatives.  Though fracking can pollute water and otherwise damage aquifers, the case is made that the technology should be allowed an opportunity to evolve to a point where the drawbacks are, at the least, minimal.

Richard Epstein, in an article in the Hoover Institution journal Defining Ideas entitled “The Fracking Panacea”, writes:  “A number of recent reports have indicated that new techniques of ‘fracking’ are able by intense hydraulic pressures to unlock huge amounts of oil and gas reserves from once abandoned sites.  Right now a land boom is taking place in the conspicuous locations of yesterday, such as the Permian Basin, in West Texas, the Eagle Ford region in Central Texas, and the Bakken in the Dakotas.  The numbers are quite staggering.  Production in the Bakken has jumped from virtually nothing a few years ago to 400,000 barrels a day today, with the prospects that better technology will push that total up to a million barrels a day by 2020.  Similar gains are reported in both the Permian Basin and Eagle Ford.  It is almost as if the laws of scarcity have been repealed.  Daniel Yergin, a notable energy expert, puts the point in geopolitical terms: ‘This is like adding another Venezuela or Kuwait by 2020, except these tight oil fields are in the United States.'”

After going over the minuses of fracking, Epstein concludes:  “The dogmatic stance of some environmental groups is not defensible in light of the potential gains from fracking.  But we must proceed cautiously.  Here is one intermediate strategy that bears promise.  Start fracking in remote regions of Texas and the Dakotas, and hope that improved fracking techniques will allow exploration at a greater range of sites.  As with nuclear power, we should not flatly prohibit dangerous technologies.  Once blocked, those technologies will be ever more difficult to improve.  No one can be sure to get the balance right all the time.  But by the same token, no one should be carried away by extreme positions.”

Just as I am personally reluctant to take some new medicine that promises to cure my immediate ill, for fear of ending up a party in some future class-action suit when that same medicine turns out to have a catastrophic side effect, it is a given that the same reservations hold true for whoever is in the line of fire of a fracking operation.  The “not in my backyard” dictum applies in the present situation.

On 30 March 2011 President Obama instructed Energy Secretary Steven Chu “to work with other agencies, the natural gas industry, states, and environmental experts to improve the safety of shale gas development.”  (DOE website)  Secretary Chu is charged with overseeing the work of a committee of renowned experts to figure out a way to keep Americans safe as we explore ever-new methods of manumitting ourselves from the shackles of dependence on foreign oil.  At least that is what one would think.

In the EPA Briefing to the SEAB Natural Gas Subcommittee to Examine Fracking Issues published on 19 May 2011, page 4 noted the following:  “Topics that are not within the scope of the study include:  air quality, impacts on land and aquatic ecosystems, seismic risks, public safety [emphasis mine], and occupational risks.”  I sit corrected.

Emissions Trading

Burton Richter, the author of our text, mentioned that one of the methods of helping to reduce carbon emissions is to figure out some fair and useful way to levy a fee on corporations that contribute significantly to emitting CO2 into the atmosphere.  Specifically, he wrote, “One…way is to see how much it would cost to eliminate the emission from our present coal-based system and include that as a fee paid to the government if you emit the greenhouse gas – no emissions means no fee to be paid….” (Beyond Smoke and Mirrors, pp. 88-89)

Apparently, the European Union has been doing just what the author suggests since 01 January 2005.  The Financial Times recently published an article entitled “Cheap and dirty” analyzing the emissions trading scheme (ETS) enacted by the European Union seven years ago.  Under the scheme, corporations buy permits, each of which allows the emission of one ton of carbon dioxide; these permits can be sold to others or saved for future use, depending on the needs of the company.  (The website of the European Commission gives a better explanation of exactly how the scheme works.)  According to the European Commission’s website on the policy, by 2020 emissions will be 21% lower than they were in 2005.  (The website was last updated on 15 November 2010.)

For instance, Alcoa buys a million permits, which gives it the right to pump a million tons (about 2.2 billion pounds) of carbon dioxide into the atmosphere.  Alcoa only needs 100,000 permits, so it decides to sell 400,000 permits to BP and hold on to the remaining half-million for a “rainy day”, as it were.  The rationale behind this scheme was to encourage corporations to figure out innovative ways to manufacture their products without emitting greenhouse gases, or else pay an ostensibly prohibitive fee for the “privilege” of doing business as usual.

The ETS is rooted in the Kyoto Protocol and has its genesis in that great bastion of innovation, the United States of America.  “The EU ETS would likely not have come into existence without the Kyoto Protocol, but the story of that relationship contains its share of irony.  Briefly, emissions trading is an American institutional innovation in environmental regulation that was forced into the negotiations on the Kyoto Protocol by the United States in late 1997 in the face of strong opposition from the EU.  Resistance to the concept continued until the new American president pulled the United States out of the Kyoto Protocol in 2001, after which European opposition to emissions trading faded.  Thereafter, the EU ETS became an indispensable instrument of European climate change policy and the primary means by which the EU member states would meet their obligations under the Kyoto Protocol.”  (ftp://ftp.cba.uri.edu/classes/graham/CarbonIFRS/Carbon/carbon%20markets/The_European_Union_Emissions_Trading.pdf)

Now, the EU policy was enacted before the economic downturn and seemed promising at the outset, but things have deteriorated since, according to the FT.  “Proponents hoped that by putting a price on carbon and forcing companies to pay for their emissions, it would prod [the companies] to pour money into green technologies and greater efficiency.  But, as a result of a subsequent recession and poor management, the market is saturated – and could be for years to come – with permits that give companies the right to emit carbon without penalty.  That has led to a prolonged slump in the carbon price.  At roughly $9 per ton, compared with a peak of nearly $40 in July 2008, it is a fraction of what policymakers and analysts had forecast it would have reached by now – and well below the levels necessary to justify the desired investments….It is a mechanism borrowed from the US, which introduced a cap-and-trade approach to contain the industrial pollution fouling lakes and rivers in the 1990s.  The appeal was its seeming simplicity:  all policymakers had to do was place a cap on the bloc’s annual emissions, then stand back and let companies find the most cost-effective way to reduce them.  They could invest in cleaner technology or opt to buy permits on the secondary market from more efficient rivals.  Companies embraced the idea because they believed it would be less intrusive than other forms of regulation.”

If Alcoa had bought those permits in July 2008, it would have cost the company $40 million, but the same permits today would run about $9 million.  Today, if it costs, say, $15 million in penalties for Alcoa to emit that carbon into the atmosphere without a permit, and $25 million to upgrade to a more energy-efficient process that would greatly reduce or eliminate emissions, then it makes better business sense for Alcoa to simply pay for the right to emit rather than spend the loot on upgrading.  Conversely, at pre-recession July 2008 prices, Alcoa would have had more incentive to invest in upgrading.
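
That reasoning boils down to a simple cost comparison, sketched below with the post's hypothetical figures (the $25 million upgrade cost and both permit prices are illustrative, not real Alcoa numbers).

```python
# A sketch of the permit-vs-upgrade choice described above. The permit
# prices echo the post's figures; the $25m upgrade cost is hypothetical.

def cheaper_route(permit_bill, upgrade_bill):
    """Pick the cheaper way to cover a year's emissions."""
    return "buy permits" if permit_bill < upgrade_bill else "upgrade the plant"

tons = 1_000_000          # the million-ton Alcoa example above
upgrade_cost = 25e6       # hypothetical one-time cost of cleaner equipment

for year, price_per_ton in (("July 2008", 40.0), ("today", 9.0)):
    permit_cost = tons * price_per_ton
    print(f"{year}: permits cost ${permit_cost / 1e6:.0f}m -> "
          f"{cheaper_route(permit_cost, upgrade_cost)}")
# July 2008: $40m in permits -> upgrade the plant
# today:     $9m in permits  -> buy permits
```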

The policy is not quite going according to plan and is even ridiculed by business executives within the European Union.  A financial analyst from the Swiss banking giant UBS called the ETS “a joke” and went on to say that while the concept is theoretically wholesome, things go awry when, in his words, “it gets exposed to political reality”.  “In an effort to overcome business opposition, the European Commission, the EU’s executive arm, set a generous cap and allowed national governments to lavish favored industries with free permits.  In some sectors, such as electricity, companies subsequently reaped millions of euros in windfall profits by passing on the market price of the permits to customers even when they themselves paid nothing for them….”(Financial Times)

The EU has run smack dab into the “invisible hand” of the capitalist free-market system, which is making it rethink and retool its ETS policy.  For all intents and purposes, its trial-and-error approach has positioned it on the cutting edge of global emissions reduction policy, even though the present policy is at a low point.

They are moving forward with bold, but unpopular, applications of their policy.  One of those is “to force foreign airlines to pay for the carbon pollution generated by each flight landing at its airports.  [P]olicymakers justified their action as the only way to forge a global solution to one of the fastest-growing sources of greenhouse gas emissions.” (“Cheap and dirty“)  China, Russia, India, and the U.S. are against this policy and are meeting in Moscow this week to discuss it.

Regarding the ETS, the EU is seriously considering driving up the price of emissions trading allowances by removing over one billion permits from the market.  Such an action would restrict supply and conceivably make the remaining permits more valuable.  China is homing in on what the Europeans are doing so that it can figure out how best to proceed in forging its own emissions control policy.  My guess is that the whole world is watching, as the whole world’s fate depends upon lowering the buildup of greenhouse gases in the atmosphere.

Electricity

Our experiment on electricity was the most edifying experiment, for me, thus far in this course.  Though I am not fully able to grasp what electricity is, Dr. Shatz attacked my ignorance by relating that, “In order to produce electricity, something has to be moving to modify the magnetic field….  The more you change the magnetic field, the more electricity is generated.”

So, what we did was shake a flashlight containing a magnet and a coil of copper wire.  Shaking the magnet caused a disturbance in the magnetic field through the coil, and electricity was the result.  That electricity was used to power the light in the apparatus.  Below you will find the results of our experiment as contained in an Excel spreadsheet.
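
For anyone who wants the formula behind the flashlight, the induced voltage follows Faraday's law of induction; the coil turns, flux change, and shake speed below are rough guesses, since we did not measure them in class.

```python
# Faraday's law behind the shake flashlight: emf = N * (dPhi / dt).
# The turn count, flux change, and pass time are rough assumptions,
# since we did not measure the coil or magnet in class.

turns = 1500          # turns of copper wire in the coil (assumed)
delta_flux = 2e-4     # flux change as the magnet passes, in webers (assumed)
pass_time = 0.05      # seconds per pass of the magnet (assumed)

emf = turns * delta_flux / pass_time   # magnitude of the induced voltage
print(f"induced emf ~ {emf:.1f} V per shake")   # ~6 V with these numbers
```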

SCI 184 Electricity Experiment

Energy

On 31 January 2012 we conducted an experiment in class to help us better understand the concept of energy.  Though definitions there are in superabundance, it is as difficult for me to fully grasp the concept of energy as it is for most people to fully grasp the concept of love.  I just know that love is and know what the experience feels like.  Likewise, I know that energy exists and associate it with my experiences therewith, e.g., movement.

As with most technical terms, one is led on an interminable journey through the dictionary in order to get a solid understanding of the word.  Energy was defined in class as the ability to do work, which is in turn defined as the process by which energy is transferred from one object to another.  Each definition contains the term which is sought to be defined – a seeming conundrum, that.

Enter the discipline of science, where empirical experimentation is the order of the day.  And our class project sought to further demystify this ubiquitous term, energy, by conducting an exercise to gauge how energy is utilized to raise a certain mass to a certain height.

Our device was a pulley with a cylindrical mass attached to it.  The pulley was rigged to lift the mass via a computer software program.  We did a total of 8 runs: on four of them we varied the mass while the power level remained constant at 50%, and on the other four we varied the power level while the mass remained constant at 0.25 kg.

Speed (RPM)   Battery Discharge (mV)   Mass (kg)   Power Level (%)   Time (s)    Acceleration   Height (m)   g (m/s^2)   mgh (J)

Constant Power Level
0.08871       138                      0.24        50                1702.17     5.21E-05       0.2          9.8         0.4704
0.076954      138                      0.23        50                1734.802    4.44E-05       0.2          9.8         0.4508
0.087218      83                       0.13        50                1784.809    4.89E-05       0.2          9.8         0.2548
0.068573      41                       0.25        50                2447.497    2.80E-05       0.2          9.8         0.49

Constant Mass
0.072024      70                       0.25        60                1872.049    3.85E-05       0.2          9.8         0.49
0.081327      138                      0.25        80                1932.529    4.21E-05       0.2          9.8         0.49
0.07907       27                       0.25        100               1996.121    3.96E-05       0.2          9.8         0.49
0.08298       27                       0.25        105               2034.625    4.08E-05       0.2          9.8         0.49

As can be seen from the above data, when the mass is constant and the power levels are different, potential energy (mgh) is constant.  But, when the mass changes while the power levels remain constant, we see that potential energy (mgh) changes.  Potential energy was defined in class as energy that a body has that is not being spent.
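
As a quick sanity check, here is the mgh arithmetic for the first run in the table.

```python
# Checking the first run of the table: potential energy = m * g * h.
m, g, h = 0.24, 9.8, 0.2       # mass (kg), gravity (m/s^2), height (m)
print(m * g * h)               # 0.4704 J, matching the mgh column above
```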

Increasing Gas Mileage By Eliminating the Use of Gas?

All roads seem to point back to the late 1960s and early 1970s, when the movement to make automobiles more ecologically friendly and fuel efficient was launched in earnest.  Authors Sperling and Gordon insist that Ralph Nader’s consumer safety campaign, which kicked off in the sixties, and the establishment of Earth Day in 1970 were instrumental in persuading Congress to pass, and President Richard Nixon to sign into law, the Clean Air Act Amendments of 1970.  “An aggressive campaign was begun to reduce pollution from gasoline combustion engines, forcing the insular, maturing automotive industry to embrace innovative pollution-reduction technology and, soon after, safety and energy innovations as well,” the authors write.

The Demand Response post pointed to OPEC as the main culprit motivating the electricity industry to look inward and come up with substantive solutions to wean itself off its heavy dependence on foreign oil.  OPEC’s oil essentially single-handedly fuelled the transportation sector back then, so that crisis reverberated through the automobile industry even more than through the electricity industry.

“One-fourth of all the oil consumed by humans in our entire history will be consumed from 2000 to 2010.  And if the world continues on its current path, it will consume as much oil in the next several decades as it has throughout its entire history to date.  The increasing consumption of oil, and the carbon dioxide emissions resulting from it, are the direct result of dramatic growth in oil-burning motor vehicles worldwide.  Barring dramatic events  such as wars, economic depressions, or newfound political leadership, these trends will continue….Motor vehicles are fundamentally unchanged from a century ago….Perhaps most important, the vast majority of vehicles on the road are still powered by an inherently inefficient technology – the four-stroke internal combustion engine developed by Nikolaus Otto in 1867 and first incorporated in a car by Karl Benz in 1885.  These ancient engines are still fueled by petroleum, essentially the sole fuel for all global mobility.  Gasoline engines still waste more than two-thirds of the fuel they burn and directly emit 20 pounds of CO2 into the air for every gallon of fuel burned.” (From Two Billion Cars:  Driving Toward Sustainability by Daniel Sperling and Deborah Gordon)

As can be seen from the above timeline, the first hybrid vehicle was actually developed at the very end of the 19th century by a young engineer named Ferdinand Porsche.  Porsche’s design placed battery-powered electric motors in the front wheel hubs to drive the wheels directly.  It is interesting to note that this innovation was also used in what was the first 4×4, pictured below.

Fast forward to today and we find that the automobile industry has only recently started mass-producing vehicles based on technology that is over a hundred years old.  This New York Times article detailed the development of the technology.

In the interim, other methods of improving fuel efficiency included the use of the biofuel ethanol, which is made from corn.  In Burton Richter’s book, Beyond Smoke and Mirrors:  Climate Change and Energy In The 21st Century, he states:  “…[C]orn ethanol takes about as much energy to make as is in the gasoline it displaces, results in the emission of nearly as much greenhouse gas as gasoline, and has driven up the price of food.”  Natural gas is also used as an alternative fuel to further reduce the carbon footprints of motor vehicles, but it is still not widely available.

We are now seeing automobile manufacturers hustling to improve their offerings of more eco-friendly and fuel-efficient vehicles.  Plug-in hybrid electric vehicles (PHEVs) and hydrogen fuel cell models are now in production or are being developed for use by the masses.  Manufacturers seem to be trying to increase gas mileage by eventually eliminating the use of gas altogether, for the sake of maintaining our planet for generations to come.

LEGO NXT Project

In all honesty, this will not be a very long post, as I did not get too into this project.  Perhaps the project just did not take to me because of my “non-dexterousness”, as a friend once described my tendency towards clumsiness.
A box full of small things in close proximity to my presence invariably seems to mystically activate Murphy’s Law, which posits that if something can go wrong, it will go wrong.  Having set the box up on the table and opened it, I accidentally knocked the whole box, with its diminutive contents, off the table and onto the floor.  This happened twice through my agency and once thanks to one of our lab partners.
The first roadblock, if you will, that we ran into in setting up the car was locating the proper pieces needed to build it.  After much trial-and-error experimenting with pieces substituted for the prescribed parts, we finally put the ‘whip’ together and were able to make it do what it do.
In measuring the fractional error when we changed the power from 75 to 95, we came up with a result of 0.032.  Though it does not figure into the formula, the number of wheel turns was 1.61389.  In getting the fractional error, we measured the distance the car travelled after setting the power to 95; that distance was 0.27 meters.  Knowing the distance that the computer program measured, which was 0.27871 meters, we were able to use the following formula to determine the fractional error:
fractional error = |ruler measurement - program measurement| / ((ruler measurement + program measurement) / 2)
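
Plugging our measurements into that formula reproduces the value we reported:

```python
# The fractional-error calculation from the run described above.
ruler = 0.27          # distance measured by hand, in meters
program = 0.27871     # distance reported by the NXT program, in meters

frac_err = abs(ruler - program) / ((ruler + program) / 2)
print(round(frac_err, 3))      # 0.032, the value we reported
```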

Demand Response

There is an oft-repeated dictum by one of the professors in the Applied Legal Studies Program that “money makes the blind see.”  It is my neophytic contention that nearly every innovation which permeates the industrial world is arrived at because of a need or desire for more money; such innovations also serve to save money.

The Federal Energy Regulatory Commission (FERC) defines demand response as “changes in electric usage by demand-side resources from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized.”

Because of the 1970s energy crises, when OPEC, which was at the time (and still is) the major supplier of oil to the industrialized economies, especially the United States, flexed its muscle and quadrupled the price of oil, the world was forced to rethink energy policy.  After decades of being wholly dependent on Middle East oil – blinded, if you will – the world opened its eyes when OPEC effectively closed the spigots with its prohibitive pricing scheme.  That spigot had quenched the energy thirst of developed countries that were unaware of how addicted to oil and other fossil fuels their nations had become.

Amid the scramble to liberate themselves from that addiction/affliction, the blind, unable to simply rub spit-moistened dirt on their eyes to cure their affliction, turned within and figured out several ways to help themselves.  Domestic oil exploration and production efforts were intensely stepped up.  Alongside those efforts, conservation was studied and found to be equally effective.

Demand response grew out of these conservation efforts.  In a paper entitled “Economic Principles of Demand Response in Electricity”, Larry Ruff, who has a PhD in economics from Stanford and a BS in physics from the California Institute of Technology, gives us this historical and academic perspective:  “Until the 1970s, price was usually ignored in forecasting electricity demand even in the long run.  When price-independent demand forecasts were used during the oil crisis of the 1970s to justify a rapid expansion of generating capacity, technological critics argued that it would be cheaper to reduce demand than to increase supply, at least to some large extent.  Then some economists pointed out that regulated prices below the costs of incremental supplies give consumers too little incentive to conserve and make it unprofitable to expand supply.  Under these conditions, a utility with an obligation to supply can improve consumers’ incentives and reduce its own full-cost-recovery prices by paying for demand reductions.  As long as such payments do not exceed the difference between marginal costs and retail prices they are ‘price corrections’ rather than subsidies….”

After experimentation throughout the 1980s and 90s, Dr. Ruff continues:  “…electricity demand is strongly affected by prices and ignoring this reality will lead to costly mistakes; today, nobody would think of forecasting electricity demand without considering prices as critical explanatory variables….More importantly…these events also demonstrate the importance of correctly understanding and applying basic economic principles in the design of demand reduction policies and programs.”

In the November 2011 edition of BACnet and the Smart Grid, David Holmberg, PhD, contributed an article entitled “Demand Response and Standards”, in which he sketches an enthusiastically optimistic evolutionary picture of demand response (DR) and concludes:  “Options for facility interaction with the power grid are increasing daily as standards, technology, regulations and markets advance, and as DR programs expand in scope and number.  The concept of demand response has expanded from a traditional view of utility peak shaving via central dispatch to that of facility interaction with markets, grid operations, and service providers, enabled by a common protocol and standard information models….The grid is steadily getting smarter, with technology, regulations and legislative changes pushing it forward.  In fact, the U.S. must move toward a system where buildings and other consumer facilities act as ‘demand response’ resources to help maintain grid reliability.  We will be integrating more intermittent renewables and depending on buildings to become more intelligent, shifting and balancing load and demand peaks in response to price and grid reliability signals.  You, too, will be carried along by this wave, so make plans to surf the rising water.”

So, the understanding that I get from all the jargon is that demand response is a method the power companies have devised to help them provide power more efficiently during peak operating times by encouraging consumers, through financial incentives, to cut back on the electricity they would otherwise use.  This helps control production costs in the electricity industry, ostensibly reduces the industry’s carbon footprint, and at the same time helps consumers save on their power bills.
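
To make the idea concrete, here is a toy sketch of a consumer-side load responding to a price signal; the threshold and load figures are invented for illustration, and real demand-response programs are dispatched through far richer signals and incentives.

```python
# A toy sketch of the demand-response idea: curtail discretionary load
# when the price signal spikes. The threshold and load figures below are
# invented; real programs use far richer signals and incentive payments.

PRICE_THRESHOLD = 0.30   # dollars per kWh that triggers curtailment (assumed)

def load_to_run(price_per_kwh, baseline_kw, sheddable_kw):
    """Return how much load to run this hour, given the current price."""
    if price_per_kwh > PRICE_THRESHOLD:
        return baseline_kw - sheddable_kw   # shed the discretionary load
    return baseline_kw                      # run normally off-peak

print(load_to_run(0.12, baseline_kw=5.0, sheddable_kw=2.0))  # 5.0 kW, normal hour
print(load_to_run(0.45, baseline_kw=5.0, sheddable_kw=2.0))  # 3.0 kW, peak event
```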

Japan’s Reaction to Fukushima Daiichi Nuclear Power Plant Explosion

A magnitude 9.0 earthquake and the resulting tsunami devastated Japan on 11 March 2011.  The earthquake caused widespread physical damage and wreaked havoc on the island nation’s infrastructure.  As can be imagined, roads, bridges, power lines, and water systems were heavily damaged, if not altogether destroyed.  The most far-reaching and significant casualty was the Daiichi Nuclear Power Plant in Fukushima.  (To see some amazing before and after photos of the Plant and Japan, go here.)

The significance arises, of course, because we are dealing with nuclear power and the real threat of widespread radiation and contamination.  It is far-reaching in that, geographically, fallout from leaked radioactive material can potentially stretch around the globe, and, medically, people can develop all types of diseases and ailments, the most feared of which is any form of aggressive cancer.

It is unfortunate when those affected by current disasters have not learned from past catastrophes, as was the case with the Japanese handling of the evacuation of those in and near the Fukushima Daiichi Nuclear Power Plant.  Now, comparing Japan’s catastrophe to the Chernobyl incident in the former Soviet Union (in what is today the independent nation of Ukraine), one would be tempted to say that Japan handled the situation magnificently.  But, according to scientists and academics, such was not the case.

News reports have inveighed against what they perceive to be the sloppy handling of the aftermath of the earthquake and tsunami.  According to a 26 December 2011 Financial Times article on the incident, “The operator of the Fukushima nuclear power plant and its regulators all failed in their duty to adequately prepare for and respond promptly to a major emergency….Tokyo Electric Power, the operator of the Fukushima plant, and its regulators were so unprepared for a major nuclear emergency that they lacked even the basic safety measures to respond to a disaster of the scale that hit Fukushima Daiichi Nuclear Power Plant in the wake of the March 11 tsunami….”

There are even reports that warnings predicting precisely the scenario which occurred were ignored or possibly even ridiculed.  The Financial Times relates in its 06 May 2011 issue that a perspicacious nuclear engineer and member of the Japanese Communist party saw the handwriting on the wall after studying the aftermath of the Three Mile Island nuclear affair in the United States in 1979.  “Five years ago,” reports the FT, “he had outlined in parliamentary debate how a combination of earthquake and tsunami could knock out a Japanese atomic station’s cooling systems.  He was concerned less about flooding than the risk that water intakes might be left exposed when the sea drew back before a tsunami’s arrival.  Still, his insistence that the government needed to prepare for catastrophic cooling failure was prescient.  Reactor overheating could cause steam or hydrogen explosions, he warned.  ‘It is essential to have proper countermeasures in place for something that could be close to a Chernobyl.'”

Personally, I am unsure whether or not things could have been handled better by officials in positions of influence.  No matter how well one prepares for disaster, things can and oftentimes do go wrong.  The Japanese, who were atomically bombed by the U.S. during World War II, are a resilient people and are in bounce-back mode from this devastating blow.  Because of their rapport with the world scientific community and innovative abilities in technological matters, the world will most definitely profit from the research that will be gained from their experience.  Just like after World War II, they are forging ahead.

From an analysis done by the Financial Times and published on 10 November 2011, the government, along with the scientific, academic, and tech communities, is doing some good things.  “For now, the government’s main hope of addressing fears is through better monitoring, more detailed data and a decontamination programme intended to reduce radiation to levels everyone can accept….Whatever the details, this will be a huge undertaking, requiring removal and disposal of vast amounts of contaminated soil and vegetable matter….As part of a government trial, over four days, teams of workers have pressure-sprayed the roof and removed 5-10cm of the garden soil.  The effort has cut radiation levels from above the 20 millisievert a year limit to about one-quarter of that….”  The average annual per capita radiation dose in the United States is 6.2 millisieverts.  (A millisievert measures the absorption of radiation by the human body.  Click here for more on how radiation is measured, from the U.S. Environmental Protection Agency’s website.)
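
For scale, the arithmetic in that passage works out as follows (the one-quarter figure is the FT's; the rest is simple division):

```python
# The dose arithmetic from the FT passage above.
limit = 20.0                # the cited limit, millisieverts per year
after_cleanup = limit / 4   # "about one-quarter of that"
us_average = 6.2            # average annual US per capita dose, mSv
print(after_cleanup, us_average)   # 5.0 mSv/yr vs 6.2 mSv/yr
```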

A government official even drank decontaminated water from one of Daiichi’s stricken reactors to reassure the people that recovery efforts were moving forward.  (Though he was visibly shaking as he poured the water into the glass, his courage and willingness to encourage confidence in recovery efforts is nothing less than heroic.)

The inventive nature of Japan’s citizens was quickened by the earthquake’s devastation.  One invention in particular gives workers a measure of comfort and security whilst working in the nuclear hot zone: the HAL, or Hybrid Assistive Limb, was modified to help workers bear the weight of the cumbersome protective vests they must wear while containing and cleaning up the plant.  (http://www.nytimes.com/2012/01/22/business/inventions-offer-tools-to-endure-future-disasters.html?scp=6&sq=fukushima%20daiichi%20&st=cse)

So, after all is said and done, things are looking up for the people of Japan as they positively deal with the aftermath of the earthquake and tsunami.  They have done it before and are certain to rise from the ashes again.