Solar Cell Lab

Objective

The objective of this experiment was to analyze how light intensity and light color affect a solar cell's efficiency in generating voltage. We aimed to derive a distinct relationship between each variable and the voltage produced.

Procedure

To perform this experiment we needed a flashlight, a small solar panel, colored filters, a ruler, and a computer-based data recorder. Our first step was to ensure that our solar panel was properly connected to the data recorder. Once we knew data was being recorded properly, we gathered data for the single variable of light intensity. To do this, we used the ruler to measure 5 different distances above the solar panel: 0 cm, 4 cm, 8 cm, 12 cm, and 16 cm. At each of these distances, we turned the flashlight on, pointed directly at the solar panel, and let the computer record the voltage produced over a 30-second span. The computer gave us 10 numbers for each trial, and we found the average for each set of 10 in Excel. After averaging all 5 sets of data, we were left with 5 measures of voltage, one corresponding to each intensity level. The data is shown below.

Distance (cm):      0            4              8              12        16

Voltage (V):     0.4690    0.4394     0.3972    0.4061   0.3882  

After gathering this data, we then singled out the variable of light color. To do this, we kept the intensity of light controlled at a distance of 0 cm for all trials. However, every time we turned on the flashlight and recorded the voltage output, we had a different color light filter between the flashlight and solar panel. The first trial let only purple light through to the solar cell. The second let only yellow light, the third only pink light, and the fourth only red light. We averaged each of the four data sets just as in the first stage of the experiment. Our results were 4 measures of voltage: one corresponding to each color of light.

Color:            Purple     Yellow      Pink        Red

Voltage (V):   0.3689     0.4639      0.4382     0.4215

Conclusion

In order to draw conclusions for the data sets of each variable, we used Microsoft Excel to graph our data. We will first analyze the graph representing the relationship between light intensity and voltage.

[Graph: Voltage vs. distance]

As we can see from our graph above, none of our voltage measurements are too far from the linear trend line. This indicates that there is a relatively strong correlation between the distance of the flashlight from the solar panel and the voltage that the solar panel produces. We can also see that the slope of the trend line is negative. This tells us that as the distance between the flashlight and the solar panel increases, the voltage that is produced decreases. What we also have to take into account, though, is that we are analyzing the relationship between light intensity and voltage output. We know that as distance increases, light intensity decreases. So in this case, despite the negative slope of the graph, we can conclude that there is a positive relationship between light intensity and voltage output, meaning that as light intensity increases, so does the voltage output.
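As a check on the trend line, the fit can be reproduced from the table in the procedure section. A small sketch (assuming Python with numpy, which was not part of the lab):

```python
import numpy as np

# Distance/voltage data from the intensity trials above
distance_cm = np.array([0, 4, 8, 12, 16])
voltage_v = np.array([0.4690, 0.4394, 0.3972, 0.4061, 0.3882])

# Fit a degree-1 polynomial: the same linear trend line Excel draws
slope, intercept = np.polyfit(distance_cm, voltage_v, 1)
print(f"slope = {slope:.5f} V/cm, intercept = {intercept:.4f} V")
```

The slope comes out to roughly -0.005 V/cm: negative, just as the graph shows.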

To draw conclusions for the variable of light color, we will consider the bar graph from our results below.

[Graph: Voltage by light color]

If we look closely, we can see that each color yields a different voltage output. What I notice is that the more transparent colors, such as yellow and pink, yield a higher voltage than less transparent colors, like red and purple. I would explain this by reasoning that the more light a solar panel is allowed to absorb by a color's transparency, the more voltage the solar panel can produce. This would indicate a positive relationship between transparency and voltage output.

Another significant thing that this graph shows is the stark difference in voltage produced between cooler hues like purple and warmer hues like red. While their transparencies are closely related, they lie on opposite ends of the light frequency spectrum, as shown below.

[Figure: The visible light spectrum]

According to this figure, the wavelength of purple light is much shorter than that of red light (roughly 300 nanometers shorter). Comparing their difference in voltage output, we can conclude that this solar cell absorbs longer light wavelengths more efficiently than shorter wavelengths. This would explain why purple light yields less voltage than red light.
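To put that 300-nanometer gap in perspective, wavelength and frequency are related by c = fλ. An illustrative calculation with round-number wavelengths (assumed textbook values, not measurements from this lab):

```python
c = 3.0e8        # speed of light, m/s
purple_nm = 400  # assumed rough wavelength of purple light
red_nm = 700     # assumed rough wavelength of red light

for name, wl in [("purple", purple_nm), ("red", red_nm)]:
    f = c / (wl * 1e-9)  # frequency = c / wavelength
    print(f"{name}: {wl} nm -> {f:.2e} Hz")
```

The roughly 300 nm separating the two ends of the visible spectrum corresponds to nearly a factor-of-two difference in frequency.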

Global Solar Energy

Introduction

[Figure: Exponential growth of global solar output]

In the collective effort to wean world energy demand off fossil fuels, solar energy has become an increasingly attractive substitute. Countries around the world are building solar panels to satisfy energy demand on large and small scales. There are multiple reasons why solar energy output has been able to grow in the recent past, and we can see the impact of each reason when discussing examples of its success. The first and most potent reason, I would argue, is the advance in the technology of solar energy production. The second most powerful reason is an increase in political will to utilize solar power. The third reason is improved integration of solar energy production. After discussing the impacts of these three factors specifically, the exponential rise of the graph above will seem more realistic. We will start by discussing the technological improvements in the solar industry.

New Technology

Technological advancement is the force that drives the cost of energy production down, and the efficiency of production up. With solar energy, this force is critical in order to give energy producers and consumers more of an incentive than just “being green” to use solar panels. Research and development of photovoltaics has been part of the reason why solar energy is making progress in solving the renewable energy problem.

[Figure: Perovskite cell efficiency, showing comparatively better R&D results]

One potential improvement in photovoltaics is the perovskite solar cell. These cells can complement, or substitute for, the silicon semiconductor within the panel. The materials that would take its place are lead, ammonia, and iodine, used to construct a crystalline layer in place of the silicon cell. The benefit of this method is significant. First, the materials and construction of the crystalline perovskite cell are much less costly than the processing of silicon. This allows for cheaper production and lower prices for solar panels. Second, the perovskite cell can absorb higher-frequency radiation than silicon, while remaining transparent to the light silicon absorbs. So when used as a complement to the silicon layer, a solar panel's absorption range is increased, raising its efficiency by about 5.6%. This would increase the benefit of buying solar panels for commercial and residential use.

Political Will

Germany is an excellent example of success in solar energy production driven by political will. Today, on sunny days photovoltaics can satisfy about half of the country's electricity demand. For this to be achieved, all that was required was an early start, and the political power to create market incentives to back solar energy production. German politicians passed an electricity feed-in law in 1991, followed by the Renewable Energy Sources Act in 2000. Under these acts, government policies were fitted to promote the expansion of renewable energy. Solar providers were able to sell their electricity to consumers just as easily as conventional producers. The law also required the government to ensure the financial stability of the solar industry, which in turn encouraged more investment in solar. So, after 25 years of government protection and incubation in the marketplace, solar energy has thrived in a country where relative sunshine is low. This example of success should prompt other nations to follow suit.

Integration

The US Department of Energy's SunShot Initiative is a project aimed at introducing solar power into the energy grid through efficient integration. This would also allow solar power to be part of the utilities market, expanding its production. Efficient integration would in turn achieve something similar to Germany's initiative as far as breaking cost barriers. Through the cooperation of utility companies and government to dissolve fees and regulations for solar energy transmission, they can provide the solar option to a wider range of consumers. The main task, however, is actually building infrastructure that incorporates solar plants into the grid. These additions would benefit from more advanced technology, such as information exchange similar to a smart grid, and improved reliability that can compete with non-renewables. If SunShot can achieve these goals, then solar energy will be a viable option for a majority of consumers in the US, which is a big step in reducing domestic greenhouse gas emissions.

Sources

“Systems Integration.” Energy.gov. US Department of Energy, n.d. Web. 24 Feb. 2016.

3pcontributor. “How Germany Became a Solar Superpower.” Triple Pundit People Planet Profit. N.p., 12 Aug. 2015. Web. 24 Feb. 2016.

Gilpin, Lyndsey. “8 Crazy New Solar Research Breakthroughs – TechRepublic.” TechRepublic. N.p., 25 Feb. 2015. Web. 24 Feb. 2016.

Tran, Kevin. “Perovskite Solar Cells.” Stanford University, 7 Dec. 2015. Web. 24 Feb. 2016.

Tesla’s Wireless Energy Transfer

Introduction

Nikola Tesla's legacy is visible in modern life every single day, and his experiments with alternating current (AC) electricity remain essential to the power grid. As of now, the AC current that is generated at power plants is transmitted through an incredibly complex network of copper wires and substations, right into our home power outlets. So, a physical connection is required from an electric generator to all units that it intends to power. Nikola Tesla foresaw this challenge in transmitting his prized AC current. Through considerable experimentation, Tesla developed a shortcut: sending power through thin air. The contraption that achieved this was the Tesla coil, and its functions are still utilized in modern wireless power and information transmission.

Tesla’s Coil

Tesla's invention was able to force electricity through the air from one coil to another, but how was this done? The Tesla coil itself was a large copper conductor coil and a capacitor, which was able to store electric charge like a battery. As a generator pumps more and more electricity into the capacitor, it eventually discharges into the copper coil, which, when charged, creates a magnetic field in the air around it. As more electric charge is deposited, the magnetic field grows larger and stronger, until it is within range of the receiving coil, which in turn conducts charge through the magnetic field until it has a field of its own. This is how the coil uses electromagnetic induction. These two interacting fields allow greater and greater amounts of electric current to flow back and forth through the empty space between the two coils. If the charge continues to increase from the generator, then the flow of energy can build to a capacity that causes a visible bolt of lightning between the two coils.
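The energy transfer described above is most effective when the primary and secondary circuits share the same resonant frequency, which for an LC circuit is f = 1/(2π√(LC)). A minimal sketch with hypothetical component values (chosen for illustration, not Tesla's actual hardware):

```python
import math

# Hypothetical component values for a small coil
L = 30e-6   # inductance, henries (assumed)
C = 0.1e-6  # capacitance, farads (assumed)

# Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))
f = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"resonant frequency ≈ {f / 1000:.0f} kHz")
```

Tuning both coils to the same frequency is what lets a small driving current build up into the large oscillations the essay describes.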

The Fault

So if this technology existed in Tesla's time, then why do we still have a network of copper wire strung along our national energy grid? The problem with this method of power transmission is that it is inefficient for carrying electrical current long distances. The energy can only be transferred between coils when the space between them, or the spark gap, can be penetrated. The spark gap and magnetic field produced are very small compared to the distance between a power plant and a home. Furthermore, even a standard magnetic field required a massive amount of voltage to be able to transfer electricity. This fact, compounded with the loss of electric energy in transmission, makes this kind of power transfer very inefficient for a large-scale power supply. This shortcoming does not mean Tesla's invention was not revolutionary, however. The technology that Tesla introduced has essential modern applications.

Tesla’s Legacy

Inventions such as radio and television signals, wifi, and microwaves are all uses of wireless energy transfer. These variations of energy transfer are very useful today because they can be broadcast over very large distances, and require a reasonable amount of power to do so. Today, especially in the information revolution, society could not function without any form of wireless energy transfer. The only way we could stay connected would be through written letters, landlines, and newspapers. So we may not use wireless transfer of electricity on this scale, but it was nonetheless a pivotal step in information and communication technologies.

As far as induction used to transmit a power supply, there are still smaller-scale applications today. It is a useful technique for charging smaller appliances or batteries, such as phone charging pads and electric toothbrush charging docks. In addition, research is still being performed today to expand the uses of wireless electric transmission by extending the range of electromagnetic induction and the voltage that can be transferred. Whether Tesla's coil is used to charge a phone or broadcast information across the country, it is undoubtedly one of the most important resources available to modern technology.

Works Cited

“Wireless Non-Radiative Energy Transfer.” Paper presented at the 2006 AIP Industrial Physics Forum, 14 Nov. 2006, San Francisco, CA. http://www.mit.edu/~soljacic/AIP_press.pdf

Dickerson, Kelly. “Wireless Electricity? How the Tesla Coil Works.” LiveScience. TechMedia Network, 10 July 2014. Web. 19 Feb. 2016.

“Who Invented Radio?” Tesla: Life and Legacy. PBS, n.d. Web. 19 Feb. 2016.


Generator Lab

Objective

In this experiment, our objective was to derive a relationship for amount of voltage that could be produced by means of a manually operated generator. We would do this by calculating the voltage produced from five separate sets of shaking. In other words, we mean to see how efficiently the shake generator can convert kinetic energy into electric energy.

Experiment Procedure

Before we could look for a relationship between the kinetic energy input into the generator and the voltage output, we needed a reliable set of data. To obtain a reliable measurement of the kinetic energy input, we counted the number of shakes performed in a strict 30-second time limit. To obtain a reliable measurement of the voltage output, we used a computer program connected to the generator. When we shook the generator, the AC current created by the magnetic field disruption was detected and recorded by the program. This happened every 1.5 seconds in a 30-second time span, giving us a set of 20 measurements per shake session. Once a set of measurements was recorded, we used Microsoft Excel to square each measurement and add the results to get a total measure of real electric output in volts (V). This can be shown by the following equation.

Total Voltage = m₁² + m₂² + … + m₂₀²
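The Excel step can be sketched in a few lines. The readings below are made up for illustration; only the squaring-and-summing procedure matches our method:

```python
# 20 hypothetical AC readings; real readings alternate in sign like these
readings = [2.1, -1.8, 2.4, -2.0, 1.9] * 4

# Square each measurement, then sum, as in the equation above
total = sum(m * m for m in readings)
print(f"Total = {total:.2f}")
```

Squaring makes the alternating-sign AC readings all contribute positively to the total instead of canceling each other out.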

This procedure was repeated 5 times with varying numbers of shakes (kinetic energy inputs) to produce the following set of data.

# of Shakes:    84            80           82             115            76

Voltage (V):   98.100   76.298    86.706     130.410     32.094

Conclusion

[Graph: Voltage vs. number of shakes]

Using Excel, we were able to plot our data onto the graph above. Although the scatter plot makes visible a certain margin of error, the trend line shows a relationship between the number of shakes and the voltage produced. The positive slope of the trend line tells us that the more kinetic energy, or number of shakes, performed on the generator, the more voltage output will be produced. This makes sense, because generally more input should yield more output in any system. The slope of the trend line tells us even more. According to Excel, the equation of the trend line is y = 1.8913x - 80.578. Here we can see the slope can be rounded to about 2. Slope is rise over run, so this number allows us to conclude that the generator produced an average of about 2 more volts of output per additional shake performed on it.
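Excel's trend line can be double-checked with a least-squares fit of the same data (a sketch assuming Python with numpy, which was not part of the lab):

```python
import numpy as np

# Shake counts and summed voltage measures from the table above
shakes = np.array([84, 80, 82, 115, 76])
voltage = np.array([98.100, 76.298, 86.706, 130.410, 32.094])

# Least-squares line, equivalent to Excel's linear trend line
slope, intercept = np.polyfit(shakes, voltage, 1)
print(f"y = {slope:.4f}x {intercept:+.3f}")
```

This reproduces y = 1.8913x - 80.578: about 2 extra volts of output per additional shake.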

Pulley Lab

Objective

The objective of this experiment was to study Newton's second law of motion: F = ma (force equals mass times acceleration).

Procedure

In this experiment, we used a mechanical pulley to lift a load of a certain mass up against gravity. The pulley was connected to a computer program, which allowed us to exert precise amounts of force on the mass, as well as record its exact acceleration. We lifted the mass a total of 10 times.

The first five times, we kept the force exerted constant. The only variable was the amount of mass being lifted. With each lift, the mass was increased. From this set of trials, we recorded the following data:

Mass (kg)     Acc.                                                                      Chart 1
0.02      85.22
0.06      79.96
0.1        76.85
0.14      69.95
0.18     67.85

From this data, we can see that there is a negative relationship between mass and acceleration. Since the slope of the trend line is negative, we can conclude that as mass increases, a constant force exerted will cause acceleration to decrease.

The second set of 5 lifts was recorded using a constant mass. This time, the only variable was the amount of force being exerted:

Force     Acc.

60          48.11                                                                       Chart 2
50          29.03
40          16.37
30          7.33
20          1.57

In this trend, we see the opposite relationship as the first set of data. Force exerted and acceleration share a positive relationship; this is why the trend line of the graph has a positive slope. The greater the force exerted on a mass, the larger the acceleration.

Conclusion

The relationships that we have gathered from our data do agree with Newton's second law of motion, force = mass × acceleration. From this formula, we can mathematically see why mass and acceleration must have a negative relationship when the force is fixed. If the equation is to remain balanced, then one value on the right must decrease as the other increases so that the total does not change. The same logic holds for the second relationship found: if mass is fixed, then force and acceleration must have a positive relationship in order for the equation to remain balanced.
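Both trends follow directly from rearranging F = ma into a = F/m. A quick sketch with made-up forces and masses (not the lab's actual settings):

```python
# Newton's second law rearranged: a = F / m
def acceleration(force_n: float, mass_kg: float) -> float:
    return force_n / mass_kg

# Fixed force, increasing mass: acceleration falls
print([round(acceleration(2.0, m), 1) for m in (0.02, 0.06, 0.10)])
# Fixed mass, increasing force: acceleration rises
print([round(acceleration(f, 0.10), 1) for f in (20, 40, 60)])
```

The two printed sequences mirror the two trends recorded in Charts 1 and 2: one decreasing, one increasing.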

SpaceX

Introduction

SpaceX is a multi-billion-dollar company based in California. Founded in 2002, SpaceX is an entirely private-sector corporation that launches cargo into orbit, as well as performs research and innovation in rocket science and space exploration. As of now, the company has in operation 2 rockets for launching spacecraft, and 1 spacecraft, called the Dragon, to carry cargo into orbit and to space stations. The company uses these vehicles to provide space station resupply services to NASA, as well as transport services for any customer willing to pay the cost (over $60 million). These vehicles were designed and produced entirely by SpaceX, and they continue to innovate with a very bold long-term goal: facilitating the transport of humans to populate Mars. Of course, there is a lot to be done before this goal can be reached. As of now, SpaceX is working toward it by developing revolutionary technologies in spacecraft launch and operation.

The Dragon

SpaceX's only operational spacecraft, the Dragon, became in 2012 the first privately owned spacecraft to visit a space station. As of now, its missions are unmanned, and it carries cargo. The Dragon was originally designed, however, to transport both cargo and astronauts, and it is equipped with a pressurized cabin with seating and controls. As of now, SpaceX is improving on the Dragon to make it a safe transportation vehicle for astronauts. The upgraded Crew Dragon would be a spacecraft that breaks the barriers of what was thought possible in the private market. With an astronaut-operated "space taxi," the possibilities for the future are wide open. A private enterprise would have the ability to sell a ticket to space. This is the first step into a whole new type of space exploration.

Falcon 9

The Falcon 9 is the standard rocket used by SpaceX to launch their spacecraft into orbit. But their standard rocket is no ordinary rocket. This rocket is designed to be very sturdy and smart; sturdy enough to survive reentry into Earth's atmosphere, and smart enough to perform a landing back on the surface. Ordinary rockets are designed to burn up upon reentry due to the high temperatures reached during descent. This means that for every launch, a new rocket must be built, costing tens of millions of dollars. The goal set for the Falcon 9, however, is to build it well enough to remain intact and fully operational for multiple launches into space and returns back to Earth. The goal is close, too: their first-stage landing test failed only because the rocket did not come down perfectly upright. Once the goal is reached, founder Elon Musk projects that reusable rockets will allow SpaceX to facilitate launches at a dramatically reduced cost, and therefore at a much more affordable price for their customers. Technology that renders space launch affordable for the consumer would be a large step toward space tourism and multiplanetary population.

Falcon Heavy

This rocket pushes the boundary of power in rockets. Essentially, the Falcon Heavy is three Falcon 9 rockets in one. This incredible lifting power, according to SpaceX, allows the rocket to send into orbit a payload that is "…equivalent to a 737 jetliner loaded with passengers, crew, luggage and fuel…" A second feat of operation is the landing function of the rocket's first-stage boosters. The Falcon Heavy is designed to take after the Falcon 9: if the main part of the vehicle can safely reenter and land, then transport costs will be cut dramatically. This rocket's launch power has yet to be demonstrated, but the launch date is now fast approaching in the spring. If the demonstration is a success, the Falcon Heavy will prove itself to be the second most powerful rocket ever launched, stopping short only of the 1973 launch of the Saturn V moon rocket. If it passes its test, its first job will be to lift a payload of 37 satellites into orbit. SpaceX also hopes that its long-term jobs will be to launch larger manned spacecraft into orbit and deeper into space.

Applications

During class, we talked about forms of energy, and how energy is defined by the ability to do work. When we discuss rocket science, we are referring to the same idea of doing work. It can be compared to something as simple as lifting a barbell above your head, just on a much larger scale. For comparison, we will observe the Falcon Heavy rocket. This vehicle uses energy to do work against gravity, just like we do when we lift a weight vertically. But the Falcon Heavy does an enormous amount of work, which can be imagined by analyzing the formula for work against gravity, W = mgh. The mass in this case is the payload (comparable to a loaded 737 jetliner) in addition to the mass of the rocket itself. Multiplying by the acceleration due to gravity (9.8 m/s²) gives the weight, and multiplying this massive figure by the height of Earth's atmosphere gives the work done. By doing this much work, the rocket converts a large amount of kinetic energy into potential energy.
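The scale of that work can be roughed out with W = mgh. The numbers below are loose assumptions for illustration (a payload on the order of a loaded 737, lifted to the edge of the atmosphere, treating g as constant and ignoring the rocket's own mass and fuel):

```python
payload_kg = 54_000  # assumed: roughly a loaded 737 jetliner
g = 9.8              # acceleration due to gravity, m/s^2
height_m = 100_000   # assumed: ~100 km, the edge of the atmosphere

# Work done against gravity: W = m * g * h
work_j = payload_kg * g * height_m
print(f"W ≈ {work_j:.2e} J")
```

Even this simplified estimate lands around 5 × 10¹⁰ joules, which gives a sense of the scale the essay describes.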

When the rocket has disconnected from the craft, it has reached its maximum altitude and has completely converted its energy into potential energy. Then the unit falls back to Earth, accelerating at 9.8 m/s², which when falling from space allows it to reach incredible speeds. As the unit keeps gaining kinetic energy, friction between the unit and the air causes heating, so much so that the high temperature often makes the rocket burn and disintegrate before its potential energy can ever be fully transferred back to kinetic. The amount of energy used, and work done, is amazing, and that is why retrieving a rocket intact from space is an incredible feat.

Works Cited

“Company.” SpaceX. SPACE EXPLORATION TECHNOLOGIES CORP, n.d. Web. 11 Feb. 2016.

“SpaceX’s Huge Falcon Heavy Rocket Finally Has A Launch Date.” Tech Times RSS. N.p., 12 Sept. 2015. Web. 11 Feb. 2016.

Howell, Elizabeth. “SpaceX: First Private Flights to Space Station.”Space.com. N.p., 9 Feb. 2016. Web. 11 Feb. 2016.

Robotics Lab

1. Objective

The objective of the robotics lab was to acquire an understanding of the accuracy and care it takes to gather legitimate data. We did this by calculating and analyzing the margin of error in a set of data.

2. Experiment Procedure

To perform our experiment, we needed to gather two sets of data recording the same information: one set recorded with natural human error, and a control set without any human error. For our sample data we recorded the distance traveled by a robotic car in two different ways. The data with human error was gathered by our best judgment of how far the car's wheels had traveled along the side of a ruler. The control set with no human error was gathered using a computer program connected to the car. We used the formula Circumference = 2πr to calculate and enter the distance one wheel rotation would make the car travel. The program used this information, along with its sensory data on how many times the wheel turned during its trip, to calculate for us the exact distance traveled by the car every time. We reset and recorded both numbers three times on the same setting of 100% power and 1 second of travel time. We then repeated the process with a different setting of 75% power and 1 second travel time. A third time, we recorded trips with a setting of 50% power and 2 seconds travel time. This gave us a total of 9 manually measured distance values that could be compared to 9 computer-calculated distance values, which are the following:

Measured:    37.5  36.9  35.9  24.5  25.4  26.0  32.6  32.3  34.0

Calculated:   39.5  39.5  37.6  26.1  26.1  26.1  32.4  33.2  33.5

We then took this data and compared it. We calculated the percent of error in each pair by using  the formula provided in class. Our results are the following:

Percent error: 5.19%  6.8%  4.65%  6.8%  2.7%  0.38%  0.62%  2.83%  1.36%
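The comparison can be sketched with a standard percent-error formula, |measured − accepted| / accepted × 100, treating the computer-calculated distances as the accepted values. (The formula provided in class may differ slightly, so small deviations from the percentages reported above are expected.)

```python
def percent_error(measured: float, accepted: float) -> float:
    # Standard percent error relative to the accepted value
    return abs(measured - accepted) / accepted * 100

measured = [37.5, 36.9, 35.9, 24.5, 25.4, 26.0, 32.6, 32.3, 34.0]
calculated = [39.5, 39.5, 37.6, 26.1, 26.1, 26.1, 32.4, 33.2, 33.5]

errors = [percent_error(m, c) for m, c in zip(measured, calculated)]
print([round(e, 2) for e in errors])
```

Either way the errors are computed, the spread between the best and worst trials is what stands out.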

3. Conclusions

Given the care we took to record the most accurate measurements that we could, we did not expect our percent error to be very large. At a maximum of 6.8%, I believe we succeeded in keeping our margin of error relatively small. What did strike me, though, was the wide inconsistency of error between trials. I thought our measurements had a consistent level of accuracy, but our error ranged from 0.38% to 6.8%. Although the scale of our measurements did keep our margin of error relatively small, the lack of consistency in error still showed me how much human error can warp data on a larger scale. This being known, machines are best suited to gather large sets of data, and even then not all data can be trusted to be absolutely precise, because it is very hard to maintain both accuracy and consistency.

Energy Grid Infrastructure

Introduction

[Figure: Map of energy demand across the US]

Since the earliest utilization of electric power in US cities, the demand for electric power has only grown. With this demand, the infrastructure to generate and transport electricity to homes and businesses across the country has grown to keep pace. But the demand for electricity is growing faster as technology and our nation's economy develop. I fear our power grid may soon become too meager to sustain a power-hungry nation such as the US. The technology is becoming obsolete, and this shortcoming may stifle future economic growth. It seems that it would be in the nation's best interest to invest time and money today to create an energy infrastructure that can support the growth of tomorrow.

Your grandfather’s grid

[Figure: A power transformer produced by General Electric, 1913]

The rapid expansion of the electricity grid during the 20th century has made for an odd and overly complex power grid. The power plants and transmission networks that bring electricity to your home have developed very inconsistently, because they have been shaped by both government and private companies over the years. As a result, there is no national power grid in the sense of a uniform system. Power networks are indeed categorized and regulated by state, region, and so on, but this only compounds the complexity and inefficiency of the electric grid. The biggest shortcomings of the energy grid that has been operating since the 20th century are the following:

Scarce information- The many different layers and patchworks of transmission jurisdiction across the country render the gathering of big data on our country's energy usage very difficult. In addition, the consumer is not able to track the fluctuating price of electricity by the hour, preventing the option of informed power use to cut individual cost. This is because the vast majority of household electric meters lack the sensory ability to gather information on how energy is used.

Inefficiency- Because our grid is composed of mostly generation and transmission mechanisms, energy that is produced must be transferred instantaneously from the plant to the utility that draws the electricity. According to Daniel Snyder, “…today’s network is designed to always operate at peak capacity. Yet peak loads are only required about 5 percent of the year.” This presents a clear issue of inefficiency in how the electric power plants allocate power.

Blackouts- Oftentimes it is this poor allocation that causes blackouts: when too much power is drawn in a short period of time, a power plant will fail due to the unattainable amount of power demanded. To exacerbate the problem, the complexity of the transmission grid will often cause a much larger blackout than necessary in the case of an overload or natural disaster.

The solution

[Figure: Smart grid circuit board. Image courtesy of energy.gov]

The smart grid is today's technological solution to the increased need for information and efficiency in the power grid. The smart grid would be a power grid that is uniform and consistent across the country. But what makes the "smart grid" smart? The main upgrade in the smart grid is a built-in computer that not only distributes electricity but also gathers information. Digital sensors in the standard residential or commercial electric meter allow for the gathering and two-way transmission of information between the consumer and the power distribution center. This allows for informed consumption of energy, and informed distribution of energy. It is essentially the information revolution brought to our power grid. Like the internet, abundant information will be available for more efficient allocation, and the grid will not have to run at maximum capacity at all times.

However, the task of replacing the entire nation's energy grid is incredibly large and costly. Like any investment, there will be risk and a major cost. So it would be wise to analyze these costs and risks carefully, as well as the benefits, before we conclude whether a nationwide smart grid would improve our country's well-being.

Costs and Risks

The most glaring cost would be the funding required by the government for a project of this scale. The majority of the grid would be paid for by American taxpayers, and what is not would further burden the national debt. Then, once the grid is installed, there would still be a large federal operations sector to pay for to keep the grid working. Furthermore, the standardization of a national grid would present a great risk for our future. Technology is advancing quickly, which means the smart grid will eventually be obsolete. But will that day come too soon? If a technological breakthrough does render the smart grid inefficient by comparison, revamping the energy grid again will likely be even more costly than the investment we face today. This raises the question of whether we are willing to accept that long-term risk, and put real faith into one standard energy system, for the benefit of the foreseeable future.

Benefits

That being said, as of today the benefits do indeed seem to outweigh the risks. The smart grid effectively solves all the issues I have mentioned with our nation's current energy grid. Not only does its efficiency save large amounts of energy from being wasted, but it also creates a more stable, safe, and secure power grid with today's digital communication technology. This would mean lower electric prices, fewer emissions, and faster recoveries from blackouts. A uniform system of jurisdiction would eliminate today's confusion, and comprehensive power distribution would be able to adapt to rising energy demand in the future. Overall, the smart grid would make life for the foreseeable future much easier and better, and this is why the smart grid is a good investment in our country.

Works cited

“Smart Grid.” Smart Grid. Department of Energy, n.d. Web. 05 Feb. 2016.

Snyder, Daniel. “Pros and Cons of the Smart Grid.” All Articles RSS. Kinoji, n.d. Web. 05 Feb. 2016.

“What Is the Smart Grid?” What Is the Smart Grid? SmartGrid.gov, n.d. Web. 05 Feb. 2016.

“Power Grid Technology.” BURN An Energy Journal. American Public Media, 04 Aug. 2011. Web. 05 Feb. 2016.