Author Archives: mchapin2

Keystone XL: Canada, Water and Dependency

The Keystone XL pipeline connects the oil sands of Alberta to ports on the Gulf of Mexico. It consists of four phases. Phase 1, finished in 2010, linked the oil sands to refineries in Illinois[1] with a total of 3,456 kilometers of pipe, 2,219 km in the United States and 864 km in Canada. Phase 2, a 468 km line finished in 2012, linked the Phase 1 pipeline at Steele City, Kansas to the oil storage hub at Cushing, Oklahoma.[2] Phase 3 was split into two segments, 3a and 3b. Phase 3a, completed in February 2014, runs 700 km from Cushing to Port Arthur, Texas. Phase 3b, which is still in progress, is a 76 km pipeline that will link into the Houston area.

Phase 4, which would cut through Montana to shorten the route, is the controversial portion of the project. One major issue is that Phase 4 would originally have passed over the Ogallala Aquifer in Nebraska, which provides water to 82% of the inhabitants of the Great Plains area and 30% of American irrigation water.[4] An oil spill there could be catastrophic, contaminating the water supply that the region’s population and economy depend on. As such, President Obama rejected this proposed route.[5]

The current issue is whether the Keystone Pipeline would cause more pollution than it is worth. Current research suggests it would produce a considerable amount of carbon emissions[6], but some still argue that the economic gain would be worthwhile. These arguments ignore the fact that whatever its immediate benefit, new oil and gas infrastructure locks in dependence on fossil fuels and delays the transition to less polluting alternatives. A new pipeline may bring some short-term economic benefit, mostly to the oil companies, but the long-term cost of reinforcing fossil fuel dependence and creating even more pollution is simply not worth it.

There needs to be a clarification on what purpose the pipeline will serve. Is it there to provide more energy, or is it there to provide more supply? These may seem like the same thing, but there is a clear difference. If more energy is the goal, the best solution would be to construct more nuclear reactors and expand the current power grid. Nuclear reactors can be built anywhere and are much more space-efficient, making them a clearly superior system. If more supply is what is required, the issue is economic rather than social. The goal here would be to create profit, but even that would be better done by creating long-term systems that can continue selling power rather than constructing infrastructure around an inherently limited resource like fossil fuels. Once the oil sands are used up, all we will be left with is a huge dirty metal pipe underneath the ground. Continued reliance on and expansion of fossil fuel energy does not make sense from a social or economic standpoint.

  1. http://www.downstreamtoday.com/news/article.aspx?a_id=6776&AspxAutoDetectCookieSupport=1
  2. http://www.ipi.org/ipi_issues/detail/keystone-has-been-shipping-canadian-oil-to-the-u-s-for-years
  3. http://www.theglobeandmail.com/globe-investor/the-politics-of-pipe-keystones-troubled-route/article2282805/singlepage/#articlecontent
  4. https://www.princeton.edu/~achaney/tmve/wiki100k/docs/Ogallala_Aquifer.html
  5. http://www.npr.org/blogs/thetwo-way/2011/11/14/142322356/transcanada-says-it-will-reroute-keystone-xl-pipeline
  6. http://www.sei-international.org/publications?pid=2450

Tom Vales, Nikola Tesla and Alternating Current

Two types of electric current exist: alternating current (AC) and direct current (DC). In alternating current, the voltage periodically reverses polarity, while direct current flows in only one direction. Alternating current dominates high-power applications, like powering buildings or large motors, while direct current dominates low-power applications, such as batteries.

At one point, direct current was also used for high-power applications. Early DC systems had to serve customers within a short distance of the generating station, because there was no practical way to change DC voltage levels. The major advantage of alternating current is that it is compatible with a transformer: it can be transmitted at very high voltage (hundreds of thousands of volts) and then stepped down to a much lower voltage (a hundred or so volts) where it is actually used. High transmission voltage means low current, which keeps resistive losses in the wires small. The direct current of that era could not be transformed this way, so DC stations had to be very close to whatever they were powering.
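The advantage of high transmission voltage can be made concrete with a little arithmetic. For a fixed delivered power P = V × I, raising the voltage lowers the current, and the resistive loss in the line goes as I²R. A minimal sketch (the 100 kW load and 5 Ω line resistance are made-up illustrative numbers, not figures from the talk):

```python
# Resistive line loss for the same delivered power at two voltages.
# Loss = I^2 * R, with I = P / V. All numbers are illustrative assumptions.

def line_loss_watts(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Power dissipated in the line when delivering power_w at voltage_v."""
    current_a = power_w / voltage_v
    return current_a ** 2 * resistance_ohm

POWER = 100_000.0   # 100 kW delivered to the load
RESISTANCE = 5.0    # assumed line resistance in ohms

high_v_loss = line_loss_watts(POWER, 100_000.0, RESISTANCE)  # transformer-stepped AC
low_v_loss = line_loss_watts(POWER, 1_000.0, RESISTANCE)     # low-voltage transmission

print(f"loss at 100 kV: {high_v_loss:.0f} W")  # 5 W
print(f"loss at 1 kV:   {low_v_loss:.0f} W")   # 50,000 W
```

Stepping the voltage up by a factor of 100 cuts the current by 100 and the loss by 10,000, which is exactly why the transformer made long-distance AC transmission practical.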

As Tom Vales explained, the modern alternating current system was developed by Nikola Tesla and championed by George Westinghouse, while direct current was championed by Thomas Edison. The competition between the two camps, the War of Currents, apparently consisted mostly of Thomas Edison electrocuting animals while George Westinghouse and co. actually demonstrated the advantages of AC over DC.

Today, alternating current is used overwhelmingly more than direct current in high power applications such as powering buildings, while direct current is predominant in low power applications such as batteries, personal electronics, and electric cars.

Nikola Tesla, the AC pioneer, also created various other devices. One of these, shown by Tom Vales in the presentation, is the Tesla coil, a resonant transformer that Tesla hoped could transmit power wirelessly. Wireless power may seem like a great thing, but it would cause serious problems by interfering with other transmitting devices such as cell phones, internet hotspots and anything else that communicates wirelessly with the outside world. Tesla also believed his coils could tap atmospheric electricity as a source of energy, but this technology was never fully developed.

Demand Response: Extreme Weather, British Television and FERCing Rule 745

Demand response programs let consumers lower their energy usage during a time of high demand in exchange for some sort of incentive, such as money or a lowered energy bill.[1] Demand response is seen as a method of balancing supply and demand when demand is higher than supply. In most other industries, this would be called a shortage. Demand response periods generally occur when energy production facilities are under maintenance or demand is abnormally high.[2]

Demand response is currently present in California[3] and the United States in general, and may soon become common in the United Kingdom[4]. It is generally declared during times of peak demand, such as extreme weather, high temperatures or a TV pickup (mass usage of appliances, especially tea kettles in England, during the breaks in a popular television program[5]). The general idea is that some people choosing to lower their energy demand will balance out others increasing theirs, preventing demand from overtaking supply. Because demand response is only needed during rare periods of peak demand, it is seen as more cost-efficient than simply building a new power station.

Demand response was regulated by the US Federal Energy Regulatory Commission’s, or FERC’s, Rule 745, which required that consumers who reduced their energy demand be paid the same wholesale market price that generators receive for supplying power. Rule 745 was recently struck down in a court case on the grounds that it intruded on the authority of the states over retail electricity. It remains to be seen what exactly the result will be, with some states appearing to implement their own versions of Rule 745 while others have rejected it completely.
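As a toy illustration of the compensation question, a Rule 745-style payment can be sketched as follows: a participant who curtails load during a demand response event is paid the wholesale price for each megawatt-hour not consumed, the same rate a generator would earn for producing it. All figures here (the baseline, the curtailment, the $200/MWh price) are made up for illustration:

```python
# Toy Rule 745-style payment: curtailed energy is compensated at the
# wholesale market price, the same rate generators receive.
# All numbers below are illustrative assumptions.

def dr_payment(baseline_mwh: float, actual_mwh: float, wholesale_price: float) -> float:
    """Pay for the load reduction (baseline minus actual), floored at zero."""
    curtailed = max(baseline_mwh - actual_mwh, 0.0)
    return curtailed * wholesale_price

# A factory that normally draws 12 MWh during the event hour drops to
# 8 MWh while the wholesale price spikes to $200/MWh.
payment = dr_payment(baseline_mwh=12.0, actual_mwh=8.0, wholesale_price=200.0)
print(f"demand response payment: ${payment:,.2f}")  # $800.00
```

The court fight was not over this arithmetic but over who gets to set the rate: paying retail customers the full wholesale price is what critics argued stepped on state jurisdiction.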

The Automobile Industry and Energy Efficiency

There are many different methods of increasing energy efficiency in an automobile, with varying degrees of adoption for each. The US Department of Energy has assembled a list of common energy-efficient engine and transmission technologies.

One of the common engine technologies is Variable Valve Timing & Lift (VVT&L), which regulates air and fuel flow to optimize the efficiency of the engine. VVT&L offers a potential efficiency increase of 5% and an estimated saving of $1,500. It is used by companies including Honda, Toyota, Nissan and Mazda.

Another technology is Cylinder Deactivation, which shuts down cylinders when they are not in use. This yields an estimated 7.5% efficiency increase and saves about $2,300. It was originally developed during WWII, though it only became widespread in the 1980s when companies like Cadillac adopted it; Mercedes-Benz then developed the modern design in the 1990s.

Turbocharging & Supercharging allow smaller engines to deliver the same power, yielding a 7.5% efficiency increase and savings of $2,300. Turbochargers are now used by many modern cars. Adding Direct Fuel Injection brings a further 12% efficiency gain and savings of $3,600.

The final engine technology in widespread use is the Integrated Starter and Generator, which turns the engine off when the vehicle comes to a stop and restarts it when the driver accelerates. This yields an 8% increase in efficiency and saves roughly $2,400.

Transmission technologies have also been developed for greater efficiency. The first is continuously variable transmission (CVT), which uses pulleys to produce a continuous range of engine/wheel speed ratios, yielding a 6% increase and savings of $1,800.

There is also automated manual transmission, which functions as a combination of manual and automatic transmission. This yields a 7% increase in efficiency and savings of $2,100.
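One caveat on the percentages above: when several technologies are combined, their gains do not simply add. If each independently cuts fuel use per mile by its stated fraction, the improvements compound multiplicatively. A quick sketch using figures from this section (treating the technologies as fully independent, which is an assumption; in practice interactions between them can shrink the combined benefit):

```python
# Efficiency gains compound multiplicatively rather than additively.
# Percentages are the ones quoted in this section; independence between
# technologies is an assumption made for illustration.

gains = {
    "Variable Valve Timing & Lift": 0.05,
    "Cylinder Deactivation": 0.075,
    "Integrated Starter/Generator": 0.08,
    "Continuously Variable Transmission": 0.06,
}

combined = 1.0
for gain in gains.values():
    combined *= 1.0 + gain  # each technology scales efficiency by (1 + gain)
combined -= 1.0

naive_sum = sum(gains.values())

print(f"naive additive total: {naive_sum:.1%}")  # 26.5%
print(f"compounded total:     {combined:.1%}")   # 29.2%
```

The gap is small with four technologies but grows as more are stacked, which is why published "combined package" estimates are not just the sum of a list.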

Overall, these technologies are solid and respectable steps toward energy efficiency. Though not as sweeping as a complete shift to electricity generated by nuclear power, they are still a very good thing from an environmental standpoint.

General Mindless Outrage: GMOs and the appeal to nature

Genetically modified foods are generally opposed on two grounds, one rooted in tradition and the other in anticonsumerism. Neither is correct, though both are misguided rather than actively malicious. Well, usually.

The major complaint against genetically modified organisms always boils down to ‘they aren’t natural’. Non-GMO food is called ‘organic’. Clearly, then, the two words are being used as synonyms. This implies that genetically modified foods, being the opposite of organic foods, must be inorganic. Yet even the most ardent anti-GMO activist would not claim that genetically modified organisms are not organisms. The word organic has been perverted, transformed from referring to organic structure into a synonym for natural (itself a rather abused word). The funny part is, all food is organic, yet almost none of it is truly natural (defined in this sense as ‘unmodified by humanity’). Since the dawn of agriculture, humanity has been modifying its food sources into forms better suited for human consumption. A very good example of this is maize (or corn). Maize originated from teosinte, a group of grass species in the genus Zea. Visual: Teosinte (left), Maize-Teosinte Hybrid (middle), Maize (right).

The only difference between so-called genetically modified organisms and so-called organic food is that organic food is genetically modified through agriculture while GMOs are genetically modified in a laboratory. I believe it is the involvement of a laboratory, rather than concerns about naturalness, that is the true source of GMO hysteria. The same hysteria about scientists abusing nature can be seen in everything from monster movies, to the abject horror alternative medicine supporters feel toward ‘allopaths’, to the anti-vaxxers, to the people who believe nuclear reactors will make everyone grow a third arm. It is fear of the unknown, the fear that their way of life will be crushed under the feet of scientific progress. And rather than reforming their way of life to survive, these people instead wallow in ignorance and blindly attack anything seen as ‘playing god’. Another issue often brought up is that genetic modification adds alien genes from other species into a plant, and that this could harm human health. However, plant genes cannot insert themselves into animals, and genes rarely survive digestion anyway. This may be a legitimate misunderstanding of science on the part of some, but it is often just a rationalization.

The other major reason for opposing genetically modified organisms is the fear that they are a tool corporations use to solidify their control over the economy. This is why ‘poor’ farmers are so often trotted out by anti-GMO activists, to show how the corporations and their evil magical teleporting seeds are destroying the lives of hardworking Americans in the name of profit. I find this position more sympathetic than the former, if only because it generally makes a few more rational points before descending into irrationality. Agricultural corporations controlling a massive amount of farmland is indeed troubling if they have a history of abusing workers, but scapegoating GMOs not only fails to address that problem but actually obscures it. By blaming GMOs for agricultural abuses, activists draw attention away from more legitimate concerns and set their own agenda back.