- cross-posted to:
- technology@lemmy.world
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models https://aclanthology.org/2023.acl-long.656.pdf
That’s a large part of the point. Launder the biases into an algorithm so you can blame the algorithm for enforcing them while taking no responsibility yourself. It’s how every automated policing tool has ever worked.