Wednesday, June 12, 2024

“Ethical Concerns on the Deployment of Self-driving Cars”: A Policy and Ethical Case Study Analysis


Alec Gremer
University of South Florida
LIS4414.001U23.50440 Information Policy and Ethics
Dr. John N. Gathegi
June 12th, 2023


“The Ethical Dilemma of Self-Driving Cars”: A Policy and Ethical Case Study Analysis


Self-driving cars are an advanced technology with the potential to revolutionize how we travel. Alongside their promise of enhanced safety, reduced traffic, and increased accessibility, however, self-driving cars present a variety of ethical, legal, and information technology policy challenges. For this analysis, ten resources that offer insight into the complex field of self-driving automobile technology will be examined. These materials shed light on the ethical, legal, and information technology policy concerns surrounding self-driving cars. The main issues include algorithmic accountability, human-machine interaction, liability, safety, and regulatory frameworks. To ensure the ethical, safe, and responsible use of autonomous vehicles in society, these issues must be resolved.

The TED-Ed lesson “The Ethical Dilemma of Self-Driving Cars” by Patrick Lin introduces the moral conundrums self-driving cars face, concentrating on the difficulties in implementing moral decision-making in autonomous vehicles.

The video’s central ethical challenge concerns how moral decision-making can be programmed into self-driving cars. Although self-driving cars are designed to do as little harm as possible, Lin observes that there are some circumstances in which “harm to others or the passenger is unavoidable” (Lin 2015). This raises important questions about how self-driving automobiles should prioritize the safety of the various people involved in potentially fatal circumstances. In addition, Lin emphasizes that “self-driving cars don’t have the human ability to make moral choices” (Lin 2015) and stresses the difficulty of formulating general moral standards for these machines’ programming.

The ethical conundrums of self-driving cars also raise important legal problems. According to Lin, “liability will be a major issue” (Lin 2015) in incidents involving driverless vehicles. When a vehicle is not being driven by a human, determining culpability becomes difficult. Lin notes that “makers and programmers of self-driving cars could be liable for accidents” (Lin 2015). This highlights the need for new legislation that specifies the division of liability and accountability in collisions involving autonomous vehicles, ensuring that all parties involved are subject to the law.

The material also deals with algorithmic responsibility and the information technology policy concerns related to self-driving automobiles. According to Lin, “the algorithms of self-driving cars need to be transparent and accountable” (Lin 2015). Building public trust and ensuring that autonomous vehicle decision-making processes are clear and equitable both depend on transparency. Lin emphasizes the importance of answering questions such as “Who decides what ethical decisions the car should make?” (Lin 2015) and “How should these algorithms be audited and certified?” (Lin 2015).

The Brookings article highlights the potential advantages of self-driving automobiles, such as improved safety and accessibility, and stresses the need for thorough policy frameworks to deal with the legal and regulatory issues raised by autonomous vehicles. These frameworks ought to account for liability difficulties, privacy issues, cybersecurity hazards, and possible workforce effects. On the legal front, the material underlines the necessity of legislation to resolve liability questions: “A legislative solution will need to specify how liability for accidents involving autonomous vehicles is determined” (Karsten 2016). To protect the rights of all parties involved, it is essential to establish precise rules for determining fault and accountability in incidents involving self-driving cars. On information technology policy, the resource emphasizes the significance of data privacy and cybersecurity: “Policies will need to ensure that consumer data is protected and not used for nefarious purposes” (Karsten 2016). Robust cybersecurity measures and privacy regulations should be in place to safeguard the sensitive information collected by autonomous vehicles.

The Brookings study examines the legal ramifications of autonomous vehicles, with a focus on product liability in incidents involving them. Given the shift in responsibility from human drivers to autonomous systems, legislation must address how liability is allocated in such situations to ensure justice and accountability. The report recommends that “to the extent possible, the law should allocate liability for the consequences of accidents involving autonomous vehicles in a way that parallels the current law” (Karsten 2017). It also emphasizes the difficulty of assessing responsibility when human drivers are no longer in charge, and it underlines the need for law that makes the division of responsibility clear and guarantees justice and accountability.

This article emphasizes the dangers of over-reliance on semi-autonomous vehicle technology. While these systems offer convenience and safety features, there is a risk of drivers becoming complacent or disengaged from the driving task. Policies should encourage driver vigilance to prevent potential accidents resulting from driver distraction or misuse of autonomous features. The resource states, “Automakers have a responsibility to communicate to consumers the limits and capabilities of their semi-autonomous systems” (Villasenor 2014). Policy measures should encourage responsible usage and emphasize the shared responsibility between drivers and autonomous technology.
This resource highlights the continuing safety issues that autonomous vehicles encounter. Despite their potential, autonomous vehicles are not yet as safe as human drivers on the road. Ethical issues come into play when considering how and when self-driving cars should be used on public roads, taking into account their present limits and the potential risks to other road users. The resource’s legal discussion centers on liability for accidents involving autonomous vehicles: “Determining who is at fault for a crash involving an autonomous car can be tricky” (Hsu 2017). Conventional liability models may not be sufficient when technology is at fault. Legal frameworks are needed to establish culpability and guarantee that those responsible for creating, producing, and operating self-driving vehicles are held liable in the event of an accident.
The website also emphasizes information technology policy concerns around the safety of self-driving cars, underlining how crucial it is to thoroughly test and validate the technology before deploying it. According to the article, “Many researchers agree that self-driving cars must be at least 10 times safer than human drivers before they are deployed on a large scale” (Hsu 2017). To protect public safety, policies and regulations should demand extensive safety testing, certification procedures, and continual monitoring of self-driving car performance.

Concerns regarding the lack of transparency in the development of self-driving cars are also raised by the resource. To enable impartial safety assessments and hold manufacturers responsible, it proposes that “companies should be required to publish information about their cars’ performance” (Hsu 2017). For the public to have confidence in the safety of self-driving cars, transparency in technology development and information exchange is essential.

The next article discusses California’s regulatory framework for self-driving cars, which covers the issuance of licenses for testing autonomous vehicles. Policymakers must strike a balance between promoting innovation and ensuring public safety through appropriate legislation, oversight, and continuous assessment of autonomous systems. The article also tackles the moral issues raised by the use of autonomous vehicles. According to the article, “self-driving cars must be designed and programmed to prioritize the safety of all road users” (Spectrum 2019). This raises questions about how autonomous vehicles should make decisions in complex scenarios where several safety concerns must be weighed against one another. Self-driving cars must be developed and programmed with ethical considerations in mind to ensure that their behavior prioritizes safety and is consistent with societal norms.

This resource raises the difficulties self-driving automobiles encounter when accurately detecting and reacting to bicycles on the road. Keeping cyclists and other vulnerable road users safe is a matter of both ethics and safety. Technological improvements and governmental initiatives should address these issues to reduce the risks of autonomous vehicles and bicycles sharing the road. The article claims that “Cyclists are vulnerable road users, and their interactions with autonomous vehicles raise complex ethical issues” (Spectrum 2019). Self-driving cars face an ethical conundrum when they must make split-second judgments that compromise the safety of vulnerable road users such as cyclists. Addressing these problems requires programming self-driving cars to prioritize the safety of cyclists and other vulnerable road users. Policy initiatives should concentrate on creating efficient human-machine interfaces and user-centered design concepts that promote safety, usability, and public acceptance in order to ensure the successful integration of self-driving automobiles into society.

The National Highway Traffic Safety Administration’s website has details on the safety characteristics of automated vehicles. It highlights the agency’s role in developing policies and conducting safety assessments for the creation and application of autonomous vehicles. Policy issues include promoting stakeholder cooperation in order to enable the safe deployment of self-driving cars and the ongoing evaluation of their safety performance. The NHTSA is crucial for creating safety standards, conducting research, and collaborating with business stakeholders to generate regulatory guidelines. The source claims that “NHTSA is committed to working collaboratively with stakeholders to develop and deploy automated vehicle technologies that advance safety while providing appropriate regulatory oversight” (Lynberg 2018). Legal frameworks must address liability, data privacy, cybersecurity, and other legal challenges to guarantee public safety and responsibility.

The ethical and legal issues surrounding algorithms employed in autonomous systems, such as self-driving cars, are highlighted in the ACM statement on algorithmic accountability. Transparency, equity, and potential biases in decision-making algorithms are major issues. To avoid biased outcomes in the deployment and use of self-driving cars, policy measures should assure accountability, transparency, and the mitigation of algorithmic biases. The resource emphasizes the moral issues raised by self-driving cars’ usage of algorithms. To assure fairness and eliminate any biases, it highlights the necessity of accountability and openness in algorithmic decision-making. According to the resource, “Ethical considerations are raised when algorithms produce results that are biased, violate privacy, or otherwise negatively affect individuals or groups” (ACM 2017). When algorithms make choices that could jeopardize people’s rights, privacy, or safety, ethical problems result. It is necessary to create ethical frameworks and norms that control the creation and use of algorithms in self-driving automobiles in order to allay these worries.

The next TED Talk highlights the ethical concerns presented by online “filter bubbles”: content and information personalized to each individual based on their online behavior and interests. The concern is how filter bubbles may affect people’s access to diverse ideas and information, and their ability to make well-informed decisions. Eli Pariser warns that “this moves us very quickly toward a world in which the Internet is showing us what it thinks we want to see, but not necessarily what we need to see” (Pariser 2011). The manipulation of information, together with consumers’ ignorance of the biases and limitations of the content they consume, creates an ethical dilemma. Combating the detrimental effects of filter bubbles requires encouraging transparency, diversity, and user empowerment.

Self-driving cars have great potential to increase transportation safety and effectiveness. To ensure their responsible and ethical deployment, however, ethical, legal, and information technology policy issues must be resolved. The assignment of liability, safety standards, regulatory frameworks, human-machine interaction, and algorithmic accountability are among the major issues. By proactively addressing these challenges, policymakers can encourage the adoption of self-driving cars while preserving public trust and safety.

Friday, April 26, 2024

LIS 4317 Final Project: Fuel Economy Data from the U.S. Dept. of Energy

 

LIS 4317 Final Project: Fuel Economy Data from the U.S. Dept. of Energy


The problem statement for my final project is to investigate the association between vehicle features and CO2 emissions in a dataset of fuel efficiency data. The objective is to ascertain the extent to which CO2 emissions differ among various vehicle classes. This investigation is essential for evaluating the environmental impact of automobiles and pinpointing areas where fuel efficiency regulations could be strengthened. The hypothesis is that certain vehicle classes, such as larger cars or those with bigger engines, have higher CO2 emissions than others.



This issue is set within the larger framework of transportation research and environmental sustainability. The relationship between vehicle characteristics and emissions, particularly CO2 emissions, has been well studied in the past. Numerous techniques, such as statistical evaluations and graphics, have been used to investigate these connections. For example, to see how emissions vary amongst various car models, researchers have employed bar charts, box plots, and scatter plots. Additionally, research in this field has looked into how regulations and technology developments can lower the emissions from vehicles.



To address the problem, a visual analytics approach is used: a box plot showing the distribution of CO2 emissions by vehicle class. The dataset is first summarized by vehicle class, and the box plot is then created with the R ggplot2 library, with CO2 emissions on the y-axis and vehicle class on the x-axis. This provides an easy-to-understand graphic representation of the variation in CO2 emissions across vehicle classes, facilitating insight into possible trends or patterns.
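The approach above can be sketched in R with ggplot2; the data frame and its column names (`VehicleClass`, `CO2`) are stand-ins for the actual fuel economy fields, with simulated values in place of the real dataset:

```r
library(ggplot2)

# Simulated stand-in for the fuel economy data; column names are assumptions
set.seed(42)
fuel <- data.frame(
  VehicleClass = rep(c("Compact", "SUV", "Truck"), each = 50),
  CO2 = c(rnorm(50, 250, 30), rnorm(50, 350, 40), rnorm(50, 420, 50))
)

# Box plot of the CO2 distribution for each vehicle class
p <- ggplot(fuel, aes(x = VehicleClass, y = CO2)) +
  geom_boxplot() +
  labs(x = "Vehicle class", y = "CO2 emissions (g/mile)",
       title = "CO2 emissions by vehicle class")
p
```

Because a box plot shows the median, quartiles, and outliers per class, it conveys the spread of emissions, not just a single summary number per class.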

Overall, this solution offers a structured and informative approach to examining the relationship between vehicle characteristics and CO2 emissions, contributing to the broader understanding of environmental sustainability in the automotive industry. The visual representation accurately portrays the data without distortion, resulting in a low lie factor. There are no unnecessary or excessive decorative elements in the visualization, ensuring clarity and focus on the data. The visualization adheres to best practices outlined by Stephen Few, such as clear labeling, appropriate use of color, and minimal distractions, and it fits the best practices outlined in our textbook for effective visual analysis.

Wednesday, April 3, 2024

LIS 4317: Module #13 Assignment

 

Module 13 Assignment



The animation produced with the animation package and the R programming language provides a powerful visual representation of random sampling from a uniform distribution. Each frame of the animation plots ten randomly selected values from the uniform distribution, with the y-axis always bounded between 0 and 1 to facilitate comparison across frames. As the animation proceeds, viewers can see the randomness and variability of the generated numbers; some frames show clustered dots, while others show a more scattered distribution. Saving the animation as a GIF adds an interactive, dynamic element to presentations, blog posts, or instructional materials, making it easily shareable and a useful tool for presenting ideas connected to random sampling.

The code uses a loop to generate each frame: the plot() function creates the scatter plots, and Sys.sleep() adds a small delay between frames so the animation plays at a visible pace. Finally, the animation is saved as a GIF file with the saveGIF() function, making the visual presentation simple to share and distribute. This straightforward yet effective animation clearly communicates the concept of random sampling and enlivens static R data display.
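The steps described above can be sketched as follows; the frame count, delay, and output file name are illustrative choices, and saveGIF() requires an external converter such as ImageMagick to be installed:

```r
library(animation)

# Each frame plots 10 draws from Uniform(0, 1); fixed y-limits ease comparison
saveGIF({
  for (i in 1:20) {
    plot(runif(10), ylim = c(0, 1),
         xlab = "Index", ylab = "Sampled value",
         main = paste("Random uniform sample, frame", i))
    Sys.sleep(0.1)  # brief pause so each frame is visible when previewing
  }
}, movie.name = "uniform_sampling.gif", interval = 0.2)
```

Holding the y-axis fixed across frames is what makes the frame-to-frame variability of the samples visually comparable.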

LIS 4317: Module # 12

 

Module # 12




I installed and loaded the necessary packages for network visualization: GGally, igraph, and ggplot2. A random network was successfully generated with the erdos.renyi.game function from the igraph package and then visualized with the ggnet2 function from the GGally package, producing a basic depiction of the network topology.
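A minimal sketch of this workflow follows; the node count, edge probability, and styling are illustrative choices, and GGally's ggnet2 additionally depends on the network and sna packages being installed:

```r
library(igraph)
library(GGally)

set.seed(42)
# Random Erdos-Renyi graph: 30 nodes, each edge present with probability 0.1
g <- erdos.renyi.game(30, p.or.m = 0.1, type = "gnp")

# ggnet2 accepts an adjacency matrix, which it coerces to a network object
net_plot <- ggnet2(as_adjacency_matrix(g, sparse = FALSE),
                   node.size = 4, node.color = "steelblue")
net_plot
```

Passing the adjacency matrix sidesteps the igraph-to-network class conversion issues described above.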

There were a number of difficulties and mistakes throughout the procedure. I first ran into issues when trying to use functions from the network package, such as rgraph and as.network, because I was calling the wrong functions and didn't have the necessary dependencies. I also made the mistake of applying functions from the network and sna packages when they were not required for the visualization, which caused confusion and needless difficulty.

Despite several significant obstacles and setbacks along the way, the process of developing the social network visualization was ultimately successful. These challenges offered valuable teaching moments and emphasized how crucial it is to pay close attention to function usage, package dependencies, and error interpretation during development.

Friday, March 29, 2024

Final Project: Comparative Analysis of Fuel Efficiency in Various Vehicle Types

 

Final Project: Comparative Analysis of Fuel Efficiency in Various Vehicle Types

Step 1: Choosing a Dataset

Dataset: Fuel Economy Data from the US Department of Energy (http://www.fueleconomy.gov/feg/download.shtm)

Step 2: Sampling and Hypothesis

Sample Size: 250 vehicles

Null Hypothesis (H0): There is no significant difference in fuel efficiency between different vehicle types.

Alternative Hypothesis (H1): There is a significant difference in fuel efficiency (MPG/City) between different vehicle types.

Step 3: Write-up Summary

This study aims to determine whether different types of vehicles have statistically significant differences in fuel efficiency. Customers place a high value on fuel economy, and knowledge about the capabilities of various car models can help lawmakers and consumers alike.

This study is consistent with what was discussed in class on analysis of variance (ANOVA) and hypothesis testing. Topics previously discussed in class laid the groundwork for choosing suitable statistical techniques to evaluate the differences in fuel efficiency between various car models.

I will use an ANOVA to answer the research question. ANOVA is well suited to analyzing differences in fuel economy between vehicle types because it permits the comparison of means across multiple groups. Vehicle type (compact, SUV, etc.) is the categorical variable, and fuel efficiency is the continuous variable.

The following R code was used to conduct the ANOVA:
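A minimal sketch of this analysis, assuming the sampled data sit in a data frame with hypothetical column names `VehicleType` and `MPG_City`, and using simulated values in place of the actual 250-vehicle sample:

```r
# Simulated stand-in for the sampled fuel economy data; names are assumptions
set.seed(1)
fuel <- data.frame(
  VehicleType = rep(c("Compact", "Midsize", "SUV", "Pickup", "Minivan"),
                    each = 50),
  MPG_City = c(rnorm(50, 28, 3), rnorm(50, 25, 3), rnorm(50, 20, 2),
               rnorm(50, 17, 2), rnorm(50, 21, 2))
)

# One-way ANOVA: does mean city MPG differ across vehicle types?
fit <- aov(MPG_City ~ VehicleType, data = fuel)
summary(fit)  # a small p-value supports rejecting H0 at the chosen alpha
```

The F-test in the summary compares the between-group variance in MPG to the within-group variance, which is exactly the comparison the hypotheses above call for.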

Step 4: Generate Visualization and Abstract

Visualization

To show the distribution of fuel efficiency for each type of vehicle, I created a boxplot. This graphical approach allows a clear comparison of the central tendency and spread of fuel efficiency across different vehicle types.
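The boxplot can be sketched with base R as follows; the column names (`VehicleType`, `MPG_City`) are hypothetical, and simulated data stand in for the actual sample:

```r
# Simulated stand-in for the sampled fuel economy data; names are assumptions
set.seed(1)
fuel <- data.frame(
  VehicleType = rep(c("Compact", "SUV", "Pickup"), each = 50),
  MPG_City = c(rnorm(50, 28, 3), rnorm(50, 20, 2), rnorm(50, 17, 2))
)

# One box per vehicle type: medians, quartiles, and outliers side by side
boxplot(MPG_City ~ VehicleType, data = fuel,
        xlab = "Vehicle type", ylab = "City MPG",
        main = "Fuel efficiency by vehicle type")
```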

The purpose of this study is to determine whether there are any statistically significant differences in fuel efficiency across different vehicle classes using ANOVA. The boxplot provides insights into the possible effects on customers and the car industry by graphically illustrating the difference in fuel efficiency. The results will advance our knowledge of how different car models differ in terms of fuel efficiency, which will have consequences for consumer decisions as well as environmental concerns.

Thursday, March 28, 2024

LIS 4317: Module # 11 Assignment

 
Module # 11 Assignment



For this assignment I created a scatter plot with marginal histograms using R and the ggplot2 package. I first installed and loaded the required packages, including ggplot2 and ggExtra. Next, I prepared data on annual budget expenditures per capita. The scatter plot was then created with ggplot2 in a basic style, with a linear regression line added to visualize the trend. ggExtra's ggMarginal function was used to add the marginal histograms, with the histogram type specified and margins drawn on both axes.
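The steps above can be sketched as follows; the toy per-capita spending data and its column names are stand-ins for the assignment's actual dataset:

```r
library(ggplot2)
library(ggExtra)

# Toy data standing in for annual budget expenditures per capita
set.seed(1)
df <- data.frame(
  year = 2000:2023,
  spend_per_capita = 1000 + 25 * (0:23) + rnorm(24, 0, 60)
)

# Basic scatter plot with a linear regression trend line
p <- ggplot(df, aes(x = year, y = spend_per_capita)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  theme_minimal()

# Attach marginal histograms on both axes
ggMarginal(p, type = "histogram", margins = "both")
```

The marginal histograms summarize each variable's univariate distribution alongside the bivariate pattern shown by the scatter plot itself.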

This procedure made it possible to create a thorough visualization that combined summaries of the marginal distribution with insights from scatter plots, enabling a deeper comprehension of the properties of the data.

Wednesday, March 20, 2024

LIS 4370: Module # 11 Debugging and defensive programming in R

 

Module # 11 Debugging and defensive programming in R


Bugged Code:

tukey_multiple <- function(x) {
   outliers <- array(TRUE, dim = dim(x))
   for (j in 1:ncol(x))
    {
    outliers[, j] <- outliers[, j] && tukey.outlier(x[, j])
    }
  outlier.vec <- vector(length = nrow(x))
  for (i in 1:nrow(x))
    { outlier.vec[i] <- all(outliers[i, ]) } return(outlier.vec) }



Corrected Code:

tukey_multiple <- function(x) {
  outliers <- array(FALSE, dim = dim(x))  # Corrected initialization
  for (j in 1:ncol(x)) {
    outliers[, j] <- tukey.outlier(x[, j])  # Corrected logic
  }
  outlier.vec <- vector(length = nrow(x))
  for (i in 1:nrow(x)) {
    outlier.vec[i] <- any(outliers[i, ])  # Corrected logic
  }
  return(outlier.vec)
}


Explanation:

As soon as I looked over the code, I saw that the loop's update of the outliers array might be flawed: the array's initialization and update appeared to deviate from the logic of the Tukey method. I found debugging to be a demanding but gratifying task. The outliers array was initialized with FALSE rather than TRUE to prepare it for being overwritten by the output of the tukey.outlier function. The update of the outliers array inside the loop was corrected to properly record the outliers for each column of the input matrix x. Then, the logic inside the second loop was adjusted to appropriately detect rows containing at least one outlier.

The tukey_multiple function's problem was successfully fixed after closely examining the code and understanding the underlying logic.
