New Publication: Online Extremism, AI, and (Human) Content Moderation
Michael Barnes has published a new article in Feminist Philosophy Quarterly, as part of a special issue on Feminism, Social Justice, and Artificial Intelligence. Michael’s paper is entitled ‘Online Extremism, AI, and (Human) Content Moderation’. You can access the paper here: https://ojs.lib.uwo.ca/index.php/fpq/article/view/14295.
Abstract:
This paper has three main goals: (1) to clarify the role of artificial intelligence (AI)—along with algorithms more broadly—in online radicalization that results in “real world violence,” (2) to argue that technological solutions (like better AI) are inadequate proposals for this problem, for both technical and social reasons, and (3) to demonstrate that platform companies’ (e.g., Meta, Google) statements of preference for technological solutions function as a type of propaganda that serves to erase the work of the thousands of human content moderators and to conceal the harms they experience. I argue that the proper assessment of these important, related issues must be free of the obfuscation that the “better AI” proposal generates. For this reason, I describe the AI-centric solutions favoured by major platform companies as a type of obfuscating and dehumanizing propaganda.