Artificial intelligence with the biases of its designers

Nice piece from Kate Crawford in the New York Times about how predictive technologies used by Google and others can go wrong when applied outside the contexts they were trained on: “Artificial Intelligence’s White Guy Problem”

If we look at how systems can be discriminatory now, we will be much better placed to design fairer artificial intelligence. But that requires far more accountability from the tech community. Governments and public institutions can do their part as well: As they invest in predictive technologies, they need to commit to fairness and due process.