What’s AI to Do in a World Where Ethics Are Subjective?

Everyone seems to agree that AI systems should be developed and used ethically. The harder question is who decides what those ethics are, and who holds AI's creators accountable.

One example of this ethical variance is facial recognition, which police departments in the US, UK, and China have adopted even though the software misidentifies women and people with darker skin at higher rates. Law enforcement must weigh individual privacy and the software's known error rates against public safety. In contrast, civil rights organizations such as the NAACP prioritize their grave concerns about racial profiling in facial recognition software.
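To make the stakes concrete, here is a toy sketch of why a single headline accuracy number can hide the problem. The predictions, group names, and rates below are invented for illustration; real audits of commercial systems, such as the Gender Shades study, documented exactly this kind of disparity across demographic groups.

```python
from collections import defaultdict

# Invented toy predictions from a hypothetical face matcher:
# (demographic group, predicted match, actual match)
results = [
    ("group_a", True,  True),
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_a", True,  False),
    ("group_b", True,  True),
    ("group_b", False, True),
    ("group_b", True,  False),
    ("group_b", False, True),
]

counts = defaultdict(lambda: {"errors": 0, "total": 0})
for group, predicted, actual in results:
    counts[group]["total"] += 1
    counts[group]["errors"] += predicted != actual

# Overall accuracy can look acceptable while one group
# bears a much higher error rate than another.
for group, c in sorted(counts.items()):
    print(f"{group}: error rate = {c['errors'] / c['total']:.0%}")
```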

For tech platforms, ethical standards for what constitutes hate speech vary greatly between users. One proposed solution is to incentivize free expression while discouraging harmful and false content, scoring posts by user interactions such as upvotes and downvotes, as sketched below.
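A minimal sketch of what such vote-based scoring might look like follows. This is not any platform's actual algorithm; the weights and the flag signal are assumptions chosen for illustration.

```python
def moderation_score(upvotes: int, downvotes: int, flags: int,
                     downvote_weight: float = 1.5,
                     flag_weight: float = 5.0) -> float:
    """Score a post from user interactions; low scores get demoted.
    The weights here are illustrative assumptions, not real values."""
    return upvotes - downvote_weight * downvotes - flag_weight * flags

# Hypothetical posts: (upvotes, downvotes, user flags for harmful content)
posts = {"post_a": (120, 10, 0), "post_b": (40, 80, 12)}

for post_id, (up, down, flags) in posts.items():
    score = moderation_score(up, down, flags)
    print(f"{post_id}: score={score:.1f} -> {'demote' if score < 0 else 'keep'}")
```

The design choice is that community signals, not a central editor, set the threshold; the obvious weakness is that the votes encode the community's own biases, which is the subjectivity problem all over again.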

The Moral Machine, created by MIT, attempts to use crowdsourcing to determine a universal morality for self-driving cars. According to an article in the MIT Technology Review, "They asked millions of people from around the world to weigh in on variations of the classic 'trolley problem' by choosing who a car should try to prioritize in an accident. The results show huge variation across different cultures."
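A toy aggregation along these lines, loosely inspired by the Moral Machine setup, is sketched below; the regions, scenario choices, and tallies are all invented.

```python
from collections import Counter, defaultdict

# Invented crowdsourced responses: (respondent's region, chosen priority)
responses = [
    ("region_1", "spare_pedestrians"),
    ("region_1", "spare_pedestrians"),
    ("region_1", "spare_passengers"),
    ("region_2", "spare_passengers"),
    ("region_2", "spare_passengers"),
    ("region_2", "spare_pedestrians"),
]

by_region = defaultdict(Counter)
for region, choice in responses:
    by_region[region][choice] += 1

# The majority answer flips between regions, which is the study's core
# finding: there is no single answer to hard-code into a car.
for region, counts in sorted(by_region.items()):
    choice, votes = counts.most_common(1)[0]
    print(f"{region}: majority = {choice} ({votes}/{sum(counts.values())})")
```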

Similarly, Google has struggled to establish ethical standards, facing pushback from its own employees over its involvement in military defense projects. And even when standards are written down, a recent study found that reading a code of ethics does not change the behavior of software engineers.

One approach could potentially bypass the subjective nature of ethics: pointing AI toward actions that prioritize international human rights. Philip Alston, an international legal scholar at NYU's School of Law, explains the appeal: "[Human rights are] in the constitution… They're in the bill of rights; they've been interpreted by courts." On this view, an AI's actions should never strip away basic human rights. A universal code for AI, then, would rest not on ethics but on rights: developers would work with civil rights groups and researchers to assess human rights impacts throughout the AI's life cycle.
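One way to read this proposal in engineering terms is as a hard-constraint filter: a rights violation rules an action out before any other objective is weighed. The sketch below is a minimal illustration of that idea; the rights list, action names, and utility scores are invented.

```python
# Rights treated as hard constraints, not as one weight among many.
# The rights, actions, and utility scores are illustrative only.
PROTECTED_RIGHTS = {"privacy", "due_process", "non_discrimination"}

candidate_actions = [
    {"name": "flag_for_human_review", "violates": set(), "utility": 0.6},
    {"name": "auto_deny_application", "violates": {"due_process"}, "utility": 0.9},
]

def permissible(action: dict) -> bool:
    """An action is ruled out if it infringes any protected right."""
    return not (action["violates"] & PROTECTED_RIGHTS)

allowed = [a for a in candidate_actions if permissible(a)]
if allowed:
    best = max(allowed, key=lambda a: a["utility"])
    print(f"chosen action: {best['name']}")
else:
    print("no permissible action; defer to a human")
```

Note the contrast with a weighted sum: the higher-utility action is rejected outright because it violates a right, rather than being traded off against public benefit.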

Reality Changing Observations:

1. What are the similarities and differences between a code based on ethics and one on human rights?

2. Why is it important for AI’s behavior to be unambiguous and accountable?

3. How would you solve the trolley problem?
