The rise of technology has brought a proliferation of decision-making algorithms that seem to make life easier. But behind that ease, these algorithms can carry biases that harm certain groups.
Mason Engineering’s Huzefa Rangwala, a professor of computer science, and Aditya Johri, a professor of information sciences and technology, along with Alex Monea, an assistant professor in the College of Humanities and Social Sciences, received a grant from the National Science Foundation (NSF) to educate engineering and technology students on how biased algorithms affect different facets of society.
“People think that algorithms are inherently neutral, but they aren’t. To modify and train algorithms to do certain things, you use data, and the sources of the data themselves often have biases, so the end result of your algorithm is often also biased,” says Johri.
When algorithms trained on biased data are used to make decisions about people, the results can disproportionately affect certain populations. “One example is criminal risk recidivism algorithms that predict if people should be granted parole based on risk factors associated with them,” says Rangwala. “Unfortunately, several studies have reported that these algorithms are not fair, and they do not give the same risk factors for different demographics and groups of people.
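To see how the bias Johri and Rangwala describe can flow from historical data into a model’s decisions, consider a minimal illustrative sketch. It is not drawn from the researchers’ work: the data is entirely synthetic and the risk model is a toy logistic regression. When the training labels are skewed against one group, that group ends up wrongly flagged as high risk more often, even though both groups have identical underlying risk.

```python
# Illustrative sketch only (synthetic data, toy model): a risk classifier
# trained on biased historical labels flags one group as high risk more
# often, even though both groups' true risk is identically distributed.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One "true risk" feature, identically distributed in both groups.
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
risk = rng.normal(0, 1, n)

# Historical labels are biased: group B is recorded as "reoffended" more
# often than its true risk warrants (simulating skewed records).
true_outcome = (risk + rng.normal(0, 1, n)) > 1.0
label = true_outcome | ((group == 1) & (rng.random(n) < 0.15))

# Fit a tiny logistic regression on (risk, group) by gradient descent.
X = np.column_stack([risk, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / n   # gradient of logistic loss

pred_high_risk = (1 / (1 + np.exp(-X @ w))) > 0.5

# False positive rate per group: flagged high risk despite no true outcome.
for g, name in [(0, "group A"), (1, "group B")]:
    mask = (group == g) & ~true_outcome
    print(name, "false positive rate:", pred_high_risk[mask].mean().round(3))
```

The point of the sketch is the one Johri makes above: nothing in the model code is unfair on its face. The disparity enters entirely through the labels the model learns from, which is why the bias is easy to miss when only the algorithm is inspected.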
“We need to consider how this technology we create may be used in the future by folks who may not understand the technology. If a judge uses criminal risk recidivism algorithms to decide who does and doesn’t get parole, then the judge should understand the potential problems associated with it,” he says.
Beyond the justice system, algorithms are also used to screen job candidates, decide who gets loans, guide urban planning, and even rank Google image search results, says Johri.
Rangwala, Johri, and Monea hope to use their $299,989 NSF grant to implement new teaching methods in upper-level technology courses to educate future engineers, computer scientists, and technologists on how these biases affect society.
When new technology and algorithms are created, equity problems often take a back seat, says Monea. “Ethical concerns seem to be addressed primarily in later stages through patches, updates, etc., but once these algorithms are implemented, their ethical problems can be self-reinforcing.”
The best way Rangwala and Johri have found to teach students about these ethical concerns is through role-playing and case-based learning, a model that helps students see all sides of the ethical problems algorithms pose. They plan to implement these techniques in their upper-level computer science and information sciences and technology courses this fall.
“My primary role in the project is to examine the cultural, social, political, and ethical implications of specific types of algorithms and datasets and to help select case studies that can be turned into scenarios,” says Monea.
Once the techniques are implemented in their classes, Rangwala and Johri would like to see students share their knowledge. “The hope is that through this training, students will be better prepared for the technology workforce and act in a socially conscious and ethically responsible manner,” says Rangwala. “And that they will understand the potential impact their work might have on people and society.”