Artificial intelligence might be the cutting edge of futuristic tech, but by its nature, it’s also rooted in the past and present.
The technology learns from human behavior, which, as hundreds of years of history show, often veers into racism, misogyny and many other forms of inequality. So, as conversations about confronting systemic discrimination have grown louder in recent months across institutions and industries, they have also become central debates in the world of AI.
The consequences of this discussion aren’t just academic. They’ll play out in the algorithms that increasingly help decide the course of people’s lives, whether through job application filters, loan approval software or facial recognition.
The key question is whether an AI that’s trained on real human data can be taught not to reproduce racism.
Many in the industry say it can, but not…