When an Amazon hiring algorithm picked up on words in resumes that are associated with women — “Wellesley College,” let’s say — and ended up rejecting women applicants, that algorithm was doing what it was programmed to do (find applicants that match the workers Amazon has typically preferred) but not what the company presumably wants (find the best applicants, even if they happen to be women).
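The mechanism is worth making concrete. Below is a hypothetical sketch (synthetic data, not Amazon's actual system): a logistic-regression screener is trained on historical hiring decisions that favored men, with gender never given as an input. The model nonetheless learns a negative weight on a proxy feature, here a flag for attending a women's college, because that feature correlates with the group the historical data disfavored.

```python
import math
import random

random.seed(0)

# Synthetic resumes. "womens_college" is a proxy feature: only women
# (some of them) have it set, but gender itself is never a model input.
def make_resume():
    is_woman = random.random() < 0.5
    return {
        "skill": random.gauss(0, 1),  # true qualification signal
        "womens_college": 1.0 if (is_woman and random.random() < 0.4) else 0.0,
        "is_woman": is_woman,
    }

# Historical labels: past hiring penalized women regardless of skill.
def historical_hire(r):
    penalty = -1.5 if r["is_woman"] else 0.0
    return 1 if (r["skill"] + penalty + random.gauss(0, 0.5)) > 0 else 0

data = [make_resume() for _ in range(2000)]
labels = [historical_hire(r) for r in data]

# Fit logistic regression on (skill, womens_college) by gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(200):
    gw, gb, n = [0.0, 0.0], 0.0, len(data)
    for r, y in zip(data, labels):
        x = [r["skill"], r["womens_college"]]
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    w = [w[0] - lr * gw[0] / n, w[1] - lr * gw[1] / n]
    b -= lr * gb / n

print(f"weight on skill:          {w[0]:+.2f}")
print(f"weight on womens_college: {w[1]:+.2f}")  # negative: learned proxy bias
```

The model does exactly what it was trained to do, match past hires, and in doing so acquires a negative coefficient on the proxy feature. Dropping gender from the inputs did not remove the bias; it only hid it inside a correlated variable.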
The Amazon case illustrates how AI can perpetuate existing biases at scale: a model trained on a biased history reproduces that history, automatically and opaquely. Guarding against this requires transparency and accountability, so we can understand how a system reaches its decisions and hold someone responsible when the outcomes are harmful. It also requires scrutinizing the data sets used to train these systems, which often encode the very biases we want to eliminate, and working to make them representative. Finally, it requires deciding, as a society, how AI's benefits are distributed, so that they are shared fairly rather than accruing along existing lines of gender, race, and privilege.