The Moral Case for Using Language Model Agents for Recommendation
In this paper, Seth Lazar, Luke Thorburn, Tian Jin, and Luca Belli argue that current recommender systems degrade our global information environment through surveillance, concentration of power, and erosion of user agency. As an alternative, the authors propose building recommenders on language model agents, which could match content to users' preferences effectively while better respecting their privacy and autonomy.
Read the full paper here.