Obama’s Solyndra Scandal

The solar technology developed by Solyndra LLC appeared to be a promising answer to energy concerns back in 2009. Impressed by Solyndra’s promise of a better future through its high-quality solar panels, the United States Department of Energy, under the Obama administration, agreed to loan the company $535 million “to create so-called ‘green jobs,’ which we were told were the key to the future economic growth” (Barone). This step toward cleaner energy backfired in 2011 when Solyndra abruptly declared bankruptcy, costing the federal government more than half a billion dollars and tarnishing the previously unblemished reputation of Barack Obama’s presidency.

According to the information on the still-accessible Solyndra website, the company’s process of equipping buildings with solar energy seems flawless and promising, leaving little doubt as to why the Obama administration jumped at the chance to back its work.

President Obama with Solyndra workers

Each Solyndra system is made up of “cylindrical copper indium gallium diselenide (CIGS modules) and thin-film technology,” can withstand 130 mph winds, comes with a 25-year warranty, and costs less time and money to install than any other commercialized solar construction (solyndra.com). Drawn to the innovation presented by this up-and-coming solar manufacturer, the Department of Energy included the steep loan in its 2009 stimulus package; however, the DOE and private investors such as Oklahoma billionaire George Kaiser were unaware that the company was sliding toward bankruptcy and would never be able to pay back the federal government. Wary of Solyndra’s financial issues, PricewaterhouseCoopers audited the company in 2010 and found that Solyndra “had accumulated losses of $558 million [in] its five years of existence” (Barone). The full impact of the company’s financial turmoil became national news on August 31, 2011, when Solyndra filed for bankruptcy, “putting 1,100 out of work and potentially leaving taxpayers on the hook for $527 million” (Higgins). This news amounted to a breakdown of the federal government’s “green jobs” plan, as well as a scandal for the Obama administration over its willingness to hand over the American people’s money to a failing firm.

Following the eruption of the Solyndra scandal, the Republican Party jumped at the chance to criticize and blame President Obama, attacking his administration not only for providing this huge loan to a jeopardized company but also for ensuring that private investors’ loans would be repaid before taxpayers’ money.

Solyndra solar panels

Counteracting the Republican claims, Democrats attempted to soften the blow of the Solyndra scandal by labeling it a “bipartisan scandal, noting that the Bush administration began the loan guarantee process [in January 2009]” (Higgins). Whether or not the Obama administration was aware of Solyndra’s financial troubles, the scandal has become an important factor in the debate over his reelection this year.

 

Sources:

Barone, Michael. “Obama tainted by loan guarantees to solar firms.” Washington Examiner. 14 Sept. 2011. AEI. Web. 27 Feb. 2012.

Chinn, Paul. “President Obama Speaks at Solar Panel Manufacturing Facility.” Chicago Tribune. Photo. 18 Sept. 2011. Web. 28 Feb. 2012.

Energy BySolar. “Flat Roof Mount.” Photo. Web. 28 Feb. 2012.

Higgins, Sean. “Solyndra Scandal Continues To Grow For White House.” Investors Business Daily. 15 Sept. 2011: A01. Business Source Complete. Web. 27 Feb. 2012.

Reuters. “A sign at the entrance to the headquarters of bankrupt Solyndra LLC is shown in Fremont, Calif.” International Business Times. Photo. 24 Sept. 2011. Web. 28 Feb. 2012.

Solyndra. “Technology/Products.” Solyndra.com. Web. 27 Feb. 2012.

 

Solar Cell Experiment

The lab experiment we completed in the last class used a photovoltaic cell to measure the voltage produced by an outside light source. To complete this lab, my group aligned the computer-connected solar cell with a ruler in order to measure its distance from the designated light source, which in this case was a miniature flashlight.

There were two parts to this lab. The first portion required us to shine the flashlight directly onto the panels of the solar cell from different distances (as measured by the ruler), allowing the computer to record the voltage for each trial. Based on what we learned before the experiment, the farther the solar cell moved from the light source, the lower the voltage readings on the computer screen should be.

As we began this part of the lab experiment, we started to notice flaws in our equipment that could affect our results. The first time we ran the voltage-recording program on the computer to measure the solar cell’s output with no light source, we noticed that our numbers were unexpectedly high despite the lack of light; apparently, the readings were supposed to be lower than the ones we received, which ranged between .10 and .20. For the next four trials, we ran the program while moving the solar cell back in increments, starting at 1 centimeter, then 10 centimeters, then 20, and finally 30 centimeters. Our odd results indicated that something in the equipment was preventing the program from producing accurate readings, as the voltage numbers fluctuated from high to extremely low and then crept back up again. Likewise, our presumably faulty results were also skewed in the troublesome Excel sheet, and we had no choice but to redo the entire experiment.

For our second round of trials, we skipped the no-light baseline and began with the solar cell 1 centimeter away from the flashlight for the first trial. As Fig. 1 shows, the voltage results ranged from .5 to .6, a reasonable high for a light source shining so close to the solar cell panels.

Fig. 1

However, for the second trial, our fears from the faulty first experiment returned as the results from 10 centimeters away ranged from .2 to .3, followed by an increase to between .3 and .4 when the solar cell was moved 20 centimeters away. Finally, as Fig. 1 shows, the voltage stayed between .3 and .4 even when the solar cell was 30 centimeters from the flashlight. At the end of this portion of the lab, we were instructed to calculate the average voltage for each trial and create a scatter chart of the data; as Fig. 2 shows, there is a clear inconsistency in our averages.

Fig. 2

Logically, the numbers should have gradually decreased as the solar cell moved farther from its light source; considering how careful we were to prevent human error in this second round of trials, we can only conclude that flaws in our equipment yielded such odd results.
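Since our Excel sheet gave us trouble in the first round, it is worth noting that this averaging and trend check can also be scripted. The sketch below assumes hypothetical per-distance voltage readings (placeholders, not our actual data; only the distances come from the trials above), averages them, and flags whether the averages fall off with distance the way they logically should:

# Minimal sketch (assumed data): average the readings for each distance and
# check whether the averages decrease as the solar cell moves away.
trials = {
    1: [0.55, 0.58, 0.52],     # distance (cm) -> sample voltage readings (V)
    10: [0.25, 0.28, 0.22],
    20: [0.35, 0.33, 0.38],
    30: [0.34, 0.36, 0.33],
}

averages = {d: sum(v) / len(v) for d, v in trials.items()}
for distance in sorted(averages):
    print(f"{distance:>2} cm: average = {averages[distance]:.3f} V")

# Voltage should fall monotonically with distance, so any increase from one
# distance to the next (like the jump we saw from 10 cm to 20 cm) is suspect.
ordered = [averages[d] for d in sorted(averages)]
print("Averages decrease with distance:", all(a >= b for a, b in zip(ordered, ordered[1:])))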

The second part of the solar cell lab involved placing thin, colored filters in front of the light source to alter the light reaching the solar cell, which was kept at a constant distance from the flashlight. Choosing a distance of 10 centimeters, we first placed the red film in front of the flashlight, ran the voltage program, and watched the results range between .3 and .4, for which we calculated an average of approximately .36892. For the next trial, we used the green filter, which produced voltage results between .4 and .5 and yielded an average of .49851.

Fig. 3

The final trial, using the blue film, gave us results mainly in the .4 range, dipping down to .39 only once; its average was .4202. To conclude the experiment, we graphed the average results of this portion of the lab as a bar chart, shown in Fig. 3. As the graph makes clear, the blue and green filters yielded higher voltage results than the red film, suggesting that the red filter absorbs more of the flashlight’s light before it can reach the solar cell.
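The filter comparison boils down to the three averages reported above; a few lines of the same kind of script rank them (the averages are the ones we calculated, and nothing else is assumed):

# Average voltages reported above for each filter color (V).
filter_averages = {"red": 0.36892, "green": 0.49851, "blue": 0.4202}

# Rank the filters from highest to lowest average output.
for color, avg in sorted(filter_averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{color:>5}: {avg:.5f} V")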

 

Hydraulic Fracturing

The process of hydraulic fracturing (commonly referred to as “hydrofracking”) was first used in the Mt. Airy Quarry in North Carolina in the early 1900s to separate granite from the underground bedrock. Beginning in 1949 and continuing for the next 40 years, scientists refined hydraulic fracturing in order to reach the desirable natural resources trapped in the rock beneath the Earth’s surface, finally debuting the process we know today in 1988. Oil shale, composed of organic matter (kerogen) along with clay minerals such as illite, kaolinite, and smectite, must have its trapped hydrocarbons drawn out of the pores of the rock, along with the similar resource, shallow coalbed methane (CBM), providing the United States with a domestic alternative to foreign fuels.

Under high pressure, a mixture of water, sand, and chemicals is blasted down the hydrofracking well. Originally the mixture was limited to water and sand, but operators began adding chemicals to thicken the water, lessen friction within the well, and eliminate bacteria, among other purposes.

Diagram of hydraulic fracturing process.

Once the fluid is inside the wellbore, its pressure is intended to fracture the surrounding rock. When the wellbore has reached maximum capacity, proppants (which can range from sand to ceramic or aluminum beads) are pumped in to prop open the newly created fractures. Following this procedure, the liquid is returned to the surface for disposal or treatment, while the proppants are left underground to allow the gas and oil to flow (Chesapeake Energy).

While some Americans see hydraulic fracturing as an innovative, forward-looking way to obtain the natural resources we need at a more local level, others worry about the environmental impacts of the process, unable to see beyond these concerns to the benefits hydrofracking offers. According to the United States Environmental Protection Agency, potential impacts of hydraulic fracturing sites include stress on surface water and groundwater, since large volumes of water are withdrawn to provide effective pressure to the system; another possibility is the contamination of drinking water in the event of a wastewater spill or an error in a site’s construction. It is the resurfaced water previously used underground for pressurizing that worries the American public and the EPA alike, as its disposal is neither simple nor harmless.

Because federal law prohibits hydraulic fracturing sites from discharging their wastewater into United States waterways, companies must be provided with other means of disposing of this used liquid, which can contain total dissolved solids (TDS), naturally occurring radioactive material, or metals, all of which are harmful to humans as well as to unaccustomed parts of the environment. There are three possible forms of disposal for this wastewater. First, if the water is low enough in pollutant content, it may be used to supplement freshwater sources. If its pollutant levels exceed EPA standards, underground injection can be applied, yet this does not eliminate the entire volume of wastewater. The remainder must be sent to either a public or private treatment facility for final disposal; however, the EPA has found in recent years that these facilities are not effectively equipped for such a delicate and potentially harmful disposal process, and it is seeking to offer improvements within the next few years (EPA).

As of 2007, over 4,000 hydraulic fracturing sites were operating in the United States, and the number has continued to grow as the price of imported fuel rises (Natural Gas Americas). The map of the American Northeast identifies the central locations of shale reserves, showing that many of the most heavily used hydrofracking facilities are located in North Carolina and Maryland (including the cited source, Chesapeake Energy).

Location of the most potent reserve, the Marcellus Formation.

However, the Southeast is also home to many known locations of underground shale gas and oil, in states including Texas, Louisiana, and Arkansas, with reserves stretching into the Midwest through Kansas, Nebraska, and North Dakota. Although the EPA is still investigating the effectiveness of the process and its relationship with the environment, the fact that hydraulic fracturing provides a substantial domestic source of fuel is hard to dispute.

 

Sources:

Chesapeake Energy, Inc. http://www.hydraulicfracturing.com

Geology.com. http://geology.com/articles/hydraulic-fracturing/

Natural Gas Americas. “A Short History of Hydraulic Fracturing.” 21 Jul. 2010. Web. 17 Feb. 2012.

United States Environmental Protection Agency. “Natural Gas Extraction – Hydraulic Fracturing.” 15 Feb. 2012. Web. 17 Feb. 2012.

 

Electricity Generating Lab

Last week’s lab experiment applied what we learned as Faraday’s Law by generating electricity in a flashlight-like tube; inside this tube, a magnet traveled back and forth through a coil of wire, creating electricity that was stored in a capacitor for later use as illumination. Before beginning the experiment, we learned that the more times we shook the tube, the more electricity would be generated. After hooking the flashlight to the computer for recording purposes, the computer read the voltage in the tube 30 times (once every second) for each trial, producing a waveform chart of the results. We were instructed to record five different trials with the tube: for the first trial, we were not supposed to shake the tube at all, simply observing the voltage results displayed by the computer. For the four following trials, we were to increase the intensity of our shaking and observe those results as well.

The first trial involved resting the flashlight tube upright on the table and running the generator program as we simply sat by and watched the voltage results appear on the computer screen.

Fig. 1

Even though we were not shaking the tube, and therefore not creating new electricity between the magnet and the tube’s coils, voltages still appeared. According to our results, shown in Fig. 1, no negative voltages were recorded by the system, yet the readout on the screen showed the voltage varying up and down within the positive range. Worried that we had made an error in our first trial, we repeated the procedure with the flashlight tube lying on its side; to our relief, the results were similar and likewise produced no negative voltages. With our results transferred immediately into a Microsoft Excel sheet, we then calculated the sum of squares of the first trial’s readings and found it to be .955, although at that point we had no sense of how low this value would prove to be.
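Since the rest of this write-up compares trials by that sum of squares, here is a minimal sketch of the calculation; the readings listed are placeholders rather than our recorded data, and the one-reading-per-second sampling matches the setup described above:

# Sum of squared voltage readings for one trial (placeholder data).
readings = [0.12, -0.05, 0.18, 0.03, -0.15, 0.09]   # volts, one reading per second

sum_of_squares = sum(v ** 2 for v in readings)
print(f"sum of squares = {sum_of_squares:.3f}")

# Squaring makes positive and negative swings count equally, which is why
# harder shaking (bigger swings in both directions) drives this number up.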

 

For the second trial, we decided to shake the tube exactly 10 times within the 30-second timespan, keeping in mind that our shaking intensity was meant to be low at this point in the experiment. While some of the voltage readings looked as if the tube had been at rest, others were dramatically different, some dropping low into the negative range, as can be seen in Fig. 2. One aspect of this trial worth noting is the slight inconsistency of our shakes: some were lighter than others, and they were irregularly spaced throughout the 30 seconds.

Fig. 2

After calculating the sum of squares in our Excel sheet, we found it to be a drastic increase from the first trial, jumping from .955 to 32.614. The third trial yielded similarly dramatic drops into negative numbers as we shook the tube a bit more vigorously 20 times. For this trial (shown in Fig. 3), the sum of squares came out to 81.697, a surprisingly large increase over the second trial given that the recorded voltages did not look very different.

The fourth trial involved more intensity and 30 shakes within a 30-second time period. Our results for this trial can be seen in Fig. 4, where the voltage spikes toward the end of the 30 seconds, suddenly jumping up to 6.477; perhaps this is because the magnet had been continuously creating energy that was all captured in a single reading at that point in the trial.

Fig. 3

After this jump in voltage, the numbers fall drastically back down despite the constant intensity and spacing of the shakes. Interestingly, the sum of squares for this trial was 44.447, significantly lower than that of the more dramatic third trial.

Fig. 4

For the fifth and final trial, we increased the intensity of the shakes and their frequency to 40 within the 30-second timespan. As we tried to get a feel for how often to shake the tube to reach 40 shakes in this period, our shaking speed was off at the beginning of the trial, as shown in Fig. 5. After this slight hiccup, we reached a steady shaking speed and the recorded results leveled out to what we had come to consider “normal.” Our final sum of squares jumped up yet again to its highest value, 143.9.

In conclusion, we believe that the only anomalous trial was Trial 4, given its surprisingly small results and sum of squares despite the increase to 30 shakes. After generating all of this electricity by shaking the tube so that the magnet traveled back and forth through the coils inside, we pressed the rubber button and were blinded by the blue fluorescence of the bulb.

Fig. 5

The next time you fall asleep with your emergency flashlight on your bedside table, consider the voltage-generating work required to keep your safety light shining!

 

New Year, New Cars: A Better Future

Vehicles debuting in 2012 must average no less than 35 miles per gallon.

As the severity of global warming becomes impossible to ignore, the world’s leading automobile companies are working alongside national governments to produce more fuel-efficient and environmentally friendly vehicles. Although eliminating fossil fuels from the automobile industry entirely is years away, current strides are being made to reduce the amount of carbon released into the air by raising production standards for gas mileage and lowering the acceptable level of greenhouse gas emissions. Beginning this year, America’s annual vehicle models must adhere to “the nation’s first-ever law requiring a reduction in greenhouse gas emissions” (Clayton). It is important to note that only newly produced cars will have to abide by this law; unfortunately, the majority of cars on American roads today are used or older models, and with the trend of SUV purchases not far in our past, those who cannot afford new cars are left with vehicles that cannot help but emit greater quantities of greenhouse gases.

Although previous standards for emissions and fuel economy were set decades ago, the automobile industry has had little incentive to exceed them, being charged only “$55 per vehicle for every 1 mpg that their fleet average mpg falls short of the relevant standard,” which is why governmental enforcement of the newer models is needed (Anderson et al. 3). The new emissions law requires that by 2016, vehicles produce no more than 9 ounces of greenhouse gases per mile traveled and average no less than 35.5 miles per gallon; however, meeting this goal will cost the automobile industry an extra “$52 billion – adding about $950 to the price of a car. But consumers should be able to save enough over three years to pay for the extra cost,” suggesting that the added sticker price will eventually be paid back at the fuel pump (Clayton).
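As a rough sanity check on that payback claim, the short calculation below estimates annual fuel savings and a payback period. Every input besides the quoted $950 is an assumption chosen for illustration (the annual mileage, gas price, and old fleet average are not from Clayton’s article):

# Rough payback estimate; all inputs are illustrative assumptions.
extra_cost = 950.0               # added price per car, from the quoted estimate ($)
miles_per_year = 12_000          # assumed annual driving distance
old_mpg, new_mpg = 27.5, 35.5    # assumed current fleet average vs. the 2016 target
gas_price = 3.50                 # assumed price per gallon ($)

gallons_saved = miles_per_year / old_mpg - miles_per_year / new_mpg
annual_savings = gallons_saved * gas_price
print(f"Annual fuel savings: ${annual_savings:,.0f}")
print(f"Payback period: {extra_cost / annual_savings:.1f} years")

With these particular assumptions the payback works out to roughly three years, which lines up with the claim quoted above.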

With the introduction of the 2012 vehicle models and their compliance with the new carbon laws, prospective buyers have a wide range of economically and ecologically advanced options. During the last week of January, the city of Portland, Oregon, held its annual automobile exhibition to introduce hybrid models such as the new Toyota Prius V wagon (which gets an estimated 53 miles per gallon) and efficient luxury vehicles like the sleek Fiat 500, which gets an estimated 38 miles per gallon on the highway (Rafter). Strides toward eliminating gasoline altogether were also presented at the Portland International Auto Show, namely Honda’s natural-gas version of its popular Civic and the Nissan LEAF’s reliance on roadside electrical charging stations.

A 2012 Honda Civic refueling at one of the nation's few natural gas stations.

Although these two examples demonstrate the automobile industry’s progress in reducing greenhouse gas emissions, the popularity of these vehicles has yet to spread nationwide because of the inconvenience of limited refueling locations (Rafter). What seems apparent is that the West Coast of the United States, including Oregon and California, is leading the nation in efforts to provide drivers with more efficient vehicles. With the help of future improvements in the automobile industry, these developments will hopefully spread across the country to the East Coast.

 

Sources:

Anderson, Soren, Fischer, Carolyn, Parry, Ian, Sallee, James M. “Automobile Fuel Economy Standards: Impacts, Efficiency, and Alternatives.” Resources for the Future, Oct. 2010. Web. 8 Feb. 2012.

Muttley. Photograph. Car Humor, 27 Jan. 2012. Web. 13 Feb. 2012.

Clayton, Mark. “Auto emissions: New greenhouse gas caps raise gas mileage standards.” Christian Science Monitor, Apr. 2010: N.PAG. MasterFILE Premier. Web. 8 Feb. 2012.

Honda. 2012 Honda Civic Natural Gas. Photograph. Softpedia, 4 Oct. 2011. Web. 13 Feb. 2012.

Rafter, Michelle V. “8 New Gas-Saving Car Trends.” SecondAct, 27 Jan. 2012. Web. 8 Feb. 2012.

 

Power/Pulley Experiment

Prior to our second robotics lab, my class learned about force and motion through Isaac Newton’s fundamental laws of physics. Of the three laws we discussed, his second applies most directly to our experiment from February 6th: an object’s net force is equal to its mass multiplied by its acceleration (F = ma). According to the information provided to us, as an object’s mass increases, its acceleration under a fixed force decreases; conversely, if an object’s mass is fixed and the applied force (in our case, the motor’s power level) increases, its acceleration also increases. This was the ideal preview for our robotics experiment, in which we used a motorized pulley and recorded its behavior across four trials at the same mass and four trials at the same power level.
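A quick numerical illustration of that relationship (the 0.23 kg mass matches the object used below; the forces themselves are made up for the example):

# Newton's second law: F = m * a, so a = F / m.
def acceleration(force_newtons: float, mass_kg: float) -> float:
    return force_newtons / mass_kg

# Fixed force, increasing mass: acceleration drops.
for m in (0.1, 0.2, 0.4):
    print(f"F = 1.0 N, m = {m:.1f} kg -> a = {acceleration(1.0, m):.1f} m/s^2")

# Fixed mass, increasing force: acceleration rises.
for f in (0.5, 1.0, 2.0):
    print(f"F = {f:.2f} N, m = 0.23 kg -> a = {acceleration(f, 0.23):.2f} m/s^2")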

My group was given a pendulum-like weight to suspend from a provided structure, creating a pulley system with the computerized motor at its base. The gold object hanging from the structure could be taken apart to decrease its mass. The instructions were to let the motor run for approximately one second (although this proved difficult to time manually) and to measure the object’s stopping height with a ruler.

For the first set of trials, we held the object’s mass constant at 0.23 kilograms and varied the power level entered into the LabView program. The results of this trial group can be seen in the first four columns of this Microsoft Excel chart:

Fig. 1

 

Based on the outcomes of these four trials, we concluded that the object accelerates faster as the power level, or force, is increased. In the second set of trials (shown in the lower four columns of Figure 1), we held the power level at 75% and only reduced the mass of the object. Compared with the data from the previous trial set, the greater force resulted in increased acceleration, which stayed fairly constant across the four experiments.

Unfortunately for my group, an unknown portion of our data was skewed in the LabView program, leaving us with shaky results for the final calculations we had to add to our experiments. The two new variables were the object’s potential energy and the overall power used by the computerized motor. In the chart in Figure 1, the object’s potential energy appears in the column labeled mgh: the mass multiplied by the gravitational constant (9.8 m/s²) and the measured height. The power used by the motor falls under the column labeled mgh/t, which divides that same value by the allotted time. As Figure 2 shows, the comparison of the programmed power level and the power used by the motor follows a fairly consistent curve, varying only slightly in the portion of the experiment run at the single 75% power level. Despite our technological issues, the steady increase in power usage with the programmed level is what we expected to see by the end of the lab.

Fig. 2
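For reference, the two derived columns are straightforward to reproduce; the mass below comes from the lab, while the height and time are placeholder values rather than our recorded measurements:

# Potential energy (mgh) and average motor power (mgh/t) for one pulley trial.
g = 9.8          # gravitational acceleration (m/s^2)
mass = 0.23      # object mass from the lab (kg)
height = 0.15    # example stopping height (m); placeholder, not measured data
time = 1.0       # approximate motor run time (s)

potential_energy = mass * g * height    # the "mgh" column (joules)
power_used = potential_energy / time    # the "mgh/t" column (watts)
print(f"mgh   = {potential_energy:.3f} J")
print(f"mgh/t = {power_used:.3f} W")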

 

The Benefits of Demand Response Programs

The concept of demand response exists as a conscious reaction to the excessive energy use of recent decades. As electrical consumers participating in demand response programs reduce their electricity demand, they experience financial savings while also contributing to environmental friendliness. In terms of the supply/demand ratio, the concept is meant to cut the demand for electricity, thereby increasing the availability of the electrical supply.

Demonstration of the Bonneville Power Administration's DR project.

Demand response programs can be broken down into two primary classifications: Incentive Based Programs and Price Based Programs. The former offers financial incentives to participants, while the latter relies on preset prices designed to encourage a decrease in energy use in order to realize savings. These two categories are further divided according to the form of consumer participation:

 

Demand Response Programs:

I)              Incentive Based Programs (IBP)

A. Classical:

1)    direct control – utility can shut down participating equipment remotely

2)    curtailable programs – load reduced to predetermined set values

B. Market Based:

1)    demand bidding – customers compete to offer the lowest-priced reductions

2)    emergency DR – customers paid during emergencies

3)    ancillary services – customers paid to be on bidding standby

II)            Price Based Programs (PBP)

* rates based on peak period usage and the following conditions:

A. time of use

B. critical peak pricing

C. extreme day pricing

D. extreme day critical peak pricing

E. real time pricing
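For readers who prefer code to outlines, the same taxonomy can be written as a small data structure; the names simply mirror the outline above and do not come from any utility’s actual API:

# The demand response program taxonomy above, as a nested Python structure.
DEMAND_RESPONSE_PROGRAMS = {
    "Incentive Based Programs (IBP)": {
        "Classical": ["direct control", "curtailable programs"],
        "Market Based": ["demand bidding", "emergency DR", "ancillary services"],
    },
    "Price Based Programs (PBP)": [
        "time of use",
        "critical peak pricing",
        "extreme day pricing",
        "extreme day critical peak pricing",
        "real time pricing",
    ],
}

def list_programs(tree, indent=0):
    """Print the taxonomy with indentation that matches its nesting."""
    if isinstance(tree, dict):
        for name, children in tree.items():
            print("  " * indent + name)
            list_programs(children, indent + 1)
    else:
        for name in tree:
            print("  " * indent + "- " + name)

list_programs(DEMAND_RESPONSE_PROGRAMS)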

 

Electrical customers who choose to take part in one of these energy-saving programs experience numerous benefits beyond the aforementioned monetary payback. According to a study conducted by McKinsey & Company in 2001, an estimated “$10-15 billion in annual benefits can be achieved from participation of all customers in dynamic pricing programs on a wide scale across the US” (R. Walawalkar et al. 1553). Participants also gain the safety and convenience of a demand response program, since such programs have been shown to help prevent power outages. For those concerned about the effects of climate change driven by the overwhelming amounts of energy expended over the past couple of decades, the long-term benefits of demand response programs speak to the importance of this development; some of the primary environmental benefits “include better land utilization as a result of avoided/deferred new electricity infrastructure… air and water quality improvement… and reduction of natural resources depletion” (Albadi and El-Saadany 1991).

The costs required to take part in one of these programs are far less financially burdensome than the consequences of continued excessive energy use; as long as customers abide by the conditions of their chosen program, the only cost they experience is the initial setup of the required demand response technology, which includes “smart thermostats, peak load controls, energy management systems, and onsite generation units” (Albadi and El-Saadany 1992). Although these installations may seem expensive and overwhelming, the estimated savings from demand response outweigh the starting cost. Within the next decade, the United States Federal Energy Regulatory Commission (FERC) predicts nationwide savings of over $60 billion “if demand response is incorporated into RTO market design and operations” (R. Walawalkar et al. 1554).

Thanks to the increased availability of demand response programs, electrical consumers now have the opportunity to save financially and personally while conserving energy in a world where it is wrongly viewed as endless and inconsequential. As Virginia Representative Rick Boucher notes in his 2003 report, the technology required by demand response programs allows participants to better visualize the effects of their electrical use: “By enabling consumers to access accurate data on their usage of electricity and its costs, the use of advanced meters would empower consumers to make more informed and accurate decisions regarding their usage” (Boucher). If FERC achieves its goal of nationwide exposure to demand response benefits, perhaps the resulting cutbacks in electricity use will help the environment in the fight against the threatening effects of climate change.

Sources:

Albadi, M. H., El-Saadany, E.F. “A summary of demand response in electricity markets.” Elsevier: Electric Power Systems Research 78 (2007): 1989-1996. Web. 4 Feb. 2012.

Bonneville Power Administration. How Demand Response Works. 2010. www.bpa.gov. Web. 4 Feb. 2012.

Boucher, Rick. “Smart meters mean smarter consumers.” The Hill: Special Energy Section. 2003. Web. 4 Feb. 2012.

Chevva, Konda Reddy, Fernands, Stephen, Thakur, Netra, Walawalkar, Rahul. “Evolution and current status of demand response (DR) in electricity markets: Insights from PJM and NYISO.” Elsevier: Energy 35 (2009): 1553-1560. Web. 4 Feb. 2012.

Experimental Robotics Lab Update

During our initial robotics lab activity on Monday, January 20, Angela Bray, Phil Sommer, and I built a computerized Lego car fondly nicknamed “Rover.” As we familiarized ourselves with Rover and the LabView program, we experimented with different functions and the car’s reactions once each program was compiled. After successfully programming Rover to drive in a straight line while playing a cheerful tune, we moved on to testing the wheel power functions directed to Port A and Port C on the Lego car. It was then that we discovered that varying the power to each port caused the car to stray from its head-on direction; by programming one port to output a higher power level than the other, we made Rover drive in a circle.

Much to our pleasure, this week’s lab assignment involved experimenting with the port power levels in exactly this way. To complete one of the tasks, the power levels had to differ enough that the circle traveled by the car had a diameter exceeding 2 feet. Using our knowledge of power variation from the earlier informal trial, we succeeded in creating a large circle by programming one port to output a power level of 50 and the other a level of 25.
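A back-of-the-envelope way to estimate the size of such a circle uses standard differential-drive geometry: the car’s center turns on a circle of radius (b/2)(v_outer + v_inner)/(v_outer - v_inner), where b is the distance between the wheels. The sketch below is only illustrative; the wheel separation and wheel speeds are assumptions, and the motors’ actual speed is not exactly proportional to the programmed power level:

# Estimate the turning-circle diameter of a differential-drive car.
# All numbers are illustrative assumptions, not measurements from the lab.
def circle_diameter(v_outer: float, v_inner: float, wheel_separation: float) -> float:
    """Diameter of the path traced by the car's center (same units as wheel_separation)."""
    radius = (wheel_separation / 2) * (v_outer + v_inner) / (v_outer - v_inner)
    return 2 * radius

wheel_separation_cm = 12.0        # assumed distance between Rover's wheels
v_outer, v_inner = 20.0, 10.0     # assumed wheel speeds (cm/s) for the two ports
print(f"Estimated circle diameter: {circle_diameter(v_outer, v_inner, wheel_separation_cm):.1f} cm")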

The second portion of our robotics activity required us to measure the distance traveled by Rover for given power levels, and then to compare our ruler measurements with the distances calculated by the LabView program. To quantify the discrepancy between the ruler and the program’s computation, we were given the fractional error equation:

 

fractional error = | (d_ruler - d_program) / [(d_ruler + d_program) / 2] |

 

We completed four different trial scenarios for this experiment, starting with the lowest speed and increasing from there, each run lasting only 1 second. For the first trial, we programmed both ports to output a power level of 25. Upon compiling the program, Rover jolted forward and stopped at 6 centimeters according to our measurement. LabView calculated that Rover had traveled 5.617 centimeters and that his wheels had turned a third of a full rotation. The calculated fractional error for this trial was about 6.6 percent.
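The fractional error figures quoted in these trials are easy to reproduce; the function below is just the equation given earlier, and the example plugs in the trial 1 numbers:

# Fractional error between the ruler measurement and LabView's calculation.
def fractional_error(d_ruler: float, d_program: float) -> float:
    return abs((d_ruler - d_program) / ((d_ruler + d_program) / 2))

# Trial 1: measured 6 cm, LabView reported 5.617 cm.
error = fractional_error(6.0, 5.617)
print(f"fractional error = {error:.3f} ({error:.1%})")   # roughly 6.6%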

During the second trial, we increased the output power to 55 and watched as Rover sped along and stopped abruptly at the 16-centimeter mark, which matched the distance reported by LabView. This time, Rover’s wheels completed 95% of a full rotation to reach that distance.

By the third trial, we began to notice a trend developing in the relationship between our distance measurements and the LabView calculations. When Rover’s power level was increased to 75 (with the wheels turning 1.47 times), his higher speed caused his path to skew as he veered to the left, making us worry about the accuracy of our measurements. According to our ruler, Rover stopped at the 23.5-centimeter mark, while LabView reported that the car had stopped at precisely 25 centimeters. The fractional error between the ruler and the program came out to 6.18 percent.

The final trial, with a new power level of 80 to both Port A and Port C, proved the most difficult to complete, as Rover drove in a curve away from his designated track; frustrated by this complication, we repeated the trial and measured that the car had traveled approximately 28 centimeters over 1.55 wheel rotations. However, these results did not agree with those provided by LabView, which stated that Rover had driven only 26 centimeters. This trial yielded the largest fractional error of the experiment, at 7.4 percent.

As we increased the output power, our measurements diverged further from LabView’s calculations, and the accuracy of our measurements was compromised by the car’s veering. In conclusion, we determined that a higher power level produces a greater distance, yet with the higher speed come the skewed direction of the car and the risk of collecting inaccurate data.