Algorithms Can’t Fix Racism

Recent reports on the expanded use of algorithms (such as recent pieces in Forbes and The New Republic) reveal that our technology is reflecting and absorbing our systemic disease. It should come as no surprise; after all, who developed these technologies? Navneet Alang writes in the TNR piece: “Since machine learning and AI operate through collecting, filtering, and then learning from and analyzing existing data, they will replicate existing structural biases unless they are designed explicitly to account for and counteract that.”

Algorithms have already produced harmful outcomes in facial-recognition systems and government-run healthcare, highlighting underlying inequalities and inefficiencies. Although algorithms can help eliminate individual biases, systemic biases find their way into them and can be even more destructive at scale.

Sherrilyn Ifill, President of the NAACP Legal Defense Fund, argues that authorities should address bias in law enforcement and criminal justice before facial recognition is deployed; only then will tax dollars be spent efficiently on building AI that is equitable. Ifill notes that facial-recognition systems have already proved less accurate at identifying people with darker skin, so the results would therefore be unreliable if such a system were combined as-is with an offender database composed primarily of persons of color. The rough sketch below shows how those two facts compound.
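
To make the compounding concrete, here is a minimal Python sketch. Every number in it is an invented assumption for illustration: the false-match rates, the database mix, and the search volume are not measurements; only the direction of the disparity (higher error rates on darker-skinned faces) reflects what testing has reported.

```python
# Hypothetical illustration: unequal false-match rates compound with a
# skewed database. All numbers are invented assumptions; only the
# direction of the error-rate gap reflects published testing.

# Assumed per-comparison false-match probability, by group.
fmr = {"darker_skin": 1e-5, "lighter_skin": 2e-6}

# Assumed composition of the offender photo database.
database = {"darker_skin": 7000, "lighter_skin": 3000}

searches = 10_000  # hypothetical number of searches run per year

for group, n_photos in database.items():
    # Each search compares the probe against every photo in the group,
    # so expected false matches scale with error rate AND group size.
    expected = searches * fmr[group] * n_photos
    print(f"{group}: ~{expected:.0f} expected false matches per year")
```

Under these made-up numbers, roughly nine out of ten false matches land on darker-skinned people, even though every search is processed identically. That is the compounding Ifill warns about: a biased error rate pointed at a skewed database multiplies, rather than dilutes, the existing disparity.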

Another area where algorithms are not the cure is Pennsylvania’s Allegheny County child welfare system, which uses an algorithmic screening tool to evaluate calls reporting child endangerment. The county gauges racial bias by the rate at which those calls are turned into investigations. Virginia Eubanks, author of Automating Inequality, argues that this measure misses the underlying societal problem entirely: Black and biracial families are reported for child endangerment at roughly 3.5 times the rate of white families. Even if the percentage of calls investigated is identical across groups, the number of cases opened on families of color ends up far greater, as the short example after this paragraph shows.
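
A few lines of arithmetic make Eubanks’ point concrete. In this sketch the call counts and the 30% screen-in rate are hypothetical; only the 3.5x referral disparity comes from her reporting.

```python
# Toy arithmetic: an equal screen-in rate applied to unequal referral
# volumes still produces unequal numbers of investigations. Counts are
# invented; only the 3.5x referral disparity is drawn from Eubanks.

referrals = {"white": 1000, "black_and_biracial": 3500}  # 3.5x the calls
screen_in_rate = 0.30  # assume the tool opens 30% of calls for everyone

for group, calls in referrals.items():
    investigations = calls * screen_in_rate
    print(f"{group}: {calls} calls -> {investigations:.0f} investigations")

# Both groups are screened in at the same 30% rate, yet 3.5x as many
# Black and biracial families end up under investigation, because the
# bias entered upstream, in who gets reported in the first place.
```

The tool can look perfectly fair by its own metric while leaving the upstream disparity untouched.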

As AI moves into our everyday lives, it is important that we organize against the unchecked influence of these systems. Unfortunately, even as public awareness grows about the need for oversight and transparency, faulty systems are being deployed at a rapid pace.

Reality Changing Observations:

1. What can public health, welfare, and education systems do to ensure their algorithms are free from bias?

2. Why is it so hard for developers to design algorithms without biases?

3. What can we do to foster equality in our neighborhoods and local government?
