AI Child Develops Human Prejudice

Early AI Systems Develop Human-Inspired Bias


Abstract

In the age of machines and data, Artificial Intelligence presents one potential advantage over human reasoning: objective decision-making. Various scientists and researchers have warned about the consequences of letting AI run the world (Bottou, 2013). One controversy, however, has long been ignored. The article Left Unchecked, Artificial Intelligence Can Become Prejudiced All on Its Own by Dan Robitzski explores the possibility that Artificial Intelligence could, in fact, be prejudiced. Several studies have shown that machines can learn to treat those outside their own group as less valuable than themselves. Further research has shown that it does not take high cognitive ability to develop prejudice, so artificially intelligent agents could easily learn to exclude others from their groups.

Principles Behind ML Bias in Decision Making

Earlier intelligent systems were fed data from selected human populations. The datasets used to train these systems were curated and summarized by humans, so the systems picked up the same biases those people held. Modern AI systems present a different prejudice problem: they can develop prejudice on their own (Collins, 2018). By running simulations, scientists have been able to observe how virtual agents interact and form groups. These simulations have also helped researchers trace how prejudice evolves in a society and the conditions that foster its growth. Because these machines can identify and replicate discriminatory human behavior, they are susceptible to human-like biases.
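For intuition, the sketch below implements a toy version of such a simulation in Python. The two-group setup, the single "prejudice" trait, and the payoff-based imitation rule are illustrative assumptions of mine, not the exact model used in any published study; the point is only to show how group-directed behavior can spread among agents through simple social learning.

```python
import random

random.seed(42)

N_AGENTS = 100
N_ROUNDS = 200
DONATION_COST = 1.0     # what a donor pays to give
DONATION_BENEFIT = 2.0  # what a recipient gains

class Agent:
    def __init__(self):
        self.group = random.choice(("red", "blue"))
        # Prejudice = probability of refusing to donate to the out-group.
        self.prejudice = random.random()
        self.payoff = 0.0

agents = [Agent() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    # Donation step: each agent meets a random partner and decides
    # whether to donate; in-group partners always receive a donation.
    for donor in agents:
        partner = random.choice(agents)
        if partner is donor:
            continue
        if partner.group == donor.group or random.random() > donor.prejudice:
            donor.payoff -= DONATION_COST
            partner.payoff += DONATION_BENEFIT
    # Imitation step: each agent copies the prejudice level of a
    # randomly chosen agent with a higher payoff (social learning).
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

mean_prejudice = sum(a.prejudice for a in agents) / N_AGENTS
print(f"Mean prejudice after {N_ROUNDS} rounds: {mean_prejudice:.2f}")
```

Running variations of this loop with different costs, benefits, and imitation rules is how researchers probe which conditions let prejudice take hold and which suppress it.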

FIGURE 1: Neural network scales for each weight (Developers Club, 2015).

Prejudice in AI has significant implications for public policy and administration. For instance, in 2016, a computer program used by various US courts was found to be prejudiced against black prisoners. According to reports, the system, known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), falsely labeled black defendants as likely to reoffend at nearly twice the rate of white defendants. Programs like COMPAS have been used extensively across the United States, where they influence judge and jury decisions (Robitzski, 2018). Bias in AI systems thus has far-reaching implications for governance. Prejudice is just one of many concerns that may stand in the way of realizing AI's promise, and further advances will raise more questions about the regulation of AI, machine learning, autonomous systems, and data technologies.
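As an illustration, the sketch below shows the kind of disparity check auditors apply to tools like COMPAS: comparing false positive rates (defendants flagged high-risk who did not go on to reoffend) across groups. The records here are invented for demonstration, not real COMPAS output.

```python
def false_positive_rate(records):
    """Fraction of non-reoffenders who were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical audit records: risk label vs. actual outcome per defendant.
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"Group {group} false positive rate: {false_positive_rate(subset):.2f}")
```

A large gap between the two printed rates is precisely the kind of evidence reporters cited against COMPAS.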

AI promises to improve human life through learning algorithms. Properly implemented, AI can deliver a wide array of useful applications, economic growth, and a better quality of life. Like any technological revolution, however, AI comes with shortfalls, and prejudice ranks high among them. So far, innovators, programmers, and developers have worked to eradicate the bias that comes from annotated data. Yet, as research and simulations show, AI can reproduce and automate these biases (Spector, 2006). While regulation may stifle the evolution of this potentially game-changing technology, unbiased AI can arise only if the underlying dataset does not encode human-like stereotypes. Current legislation simply fails to address discrimination in big data and machine learning. Transparency in the inner workings of AI systems can help expose prejudice in the underlying datasets.
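As a concrete example of such transparency, the sketch below audits a training set by reporting outcome rates across a sensitive attribute before any model is trained. The dataset, attribute, and label names are hypothetical, chosen only to illustrate the check.

```python
from collections import defaultdict

# Hypothetical training rows: sensitive attribute plus outcome label.
training_data = [
    {"gender": "female", "label": "approved"},
    {"gender": "female", "label": "denied"},
    {"gender": "male",   "label": "approved"},
    {"gender": "male",   "label": "approved"},
    {"gender": "male",   "label": "approved"},
]

# Count each (group, label) combination in the dataset.
counts = defaultdict(lambda: defaultdict(int))
for row in training_data:
    counts[row["gender"]][row["label"]] += 1

# Report per-group label rates; a skew here warns that a model trained
# on this data may simply automate the imbalance.
for group, labels in counts.items():
    total = sum(labels.values())
    for label, n in labels.items():
        print(f"{group}: {label} rate = {n / total:.2f}")
```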

Conclusion

Prejudice is an emerging controversy in the field of Artificial Intelligence. AI systems were once touted for their ability to work without bias. However, owing to the stereotypes embedded in their underlying datasets, these systems can learn to treat those outside their group as inferior. AI systems also learn biased behavior from datasets curated by humans. This prejudice presents a public administration challenge, since government bodies have begun to rely on AI systems for decision-making. Technology companies and administrators should therefore establish rules governing the development and deployment of AI systems. If every individual is to reap the benefits of AI, these systems should be built with enough transparency to reveal any source of biased behavior.

Resources

Sponsor

Peter W. Ross, Ph.D., CUNY
Philosophy of Mind, Cognitive Science
