The Ethical and Moral Dilemma of Artificial Intelligence

Abstract

When the American Library Association (ALA) updated its core competencies in 2022, it marked the first time the library profession intentionally incorporated "concepts of social justice, equity, diversity, and inclusion." Despite this commitment, Galleries, Libraries, Archives, and Museums (GLAMs) have moved in recent years to adopt and implement machine learning and artificial intelligence. On the one hand, GLAMs are known as learning institutions and serve as touchpoints through which communities encounter breakthrough technologies. On the other hand, implementing AI and machine learning in these institutions directly contradicts the core competencies and risks severing the trust built with marginalized and underrepresented communities. Stanford researchers tested three popular AI models—ChatGPT, Google AI, and RoBERTa—to examine whether they responded differently to identical crimes when the defendants were of different races, one white and one Black. The language models exhibited attitudes "even more negative than the most negative experimentally recorded human attitudes about African Americans" (p. 149). Overall, the AI was more likely to convict the Black defendant, sentence them to prison, and even opt for the death penalty for the same crime. In the United States, the adoption rate of artificial intelligence in libraries currently sits at 21%. While that may seem low, awareness and use will only increase as GLAMs become ever more susceptible to budget cuts, poor working conditions, and divestment from public initiatives. The findings here show that using AI will only further marginalize underrepresented groups while making workflows marginally easier for those who already benefit from privileged conditions.

Notes

Rights