You’re driving a car, and suddenly someone walks in front of you.

You have two choices:

  • You hit the person.
  • You steer toward a wall and potentially kill yourself and your friend in the passenger’s seat.

Which one do you pick?

I’ll spare you the discomfort of the decision and tell you the truth: you won’t choose either. You’ll be driven by your instincts, do whatever they dictate in that split second, and deal with the consequences.

This is not true for self-driving cars. That split second is enough for their digital brain to evaluate the situation and make a decision, while our wet brain is still trying to keep our sphincter tight to avoid shitting our pants.

Unfortunately (or maybe fortunately), we’re not spared from the uncomfortable task of deciding people’s fate. Whatever decision the self-driving car makes, someone must have coded it.

Who should decide how to code a car that can make decisions about people’s lives? Everyone, of course. A decision that can impact any human should be made collectively by every human.

This is the goal of the Moral Machine, an online platform developed by MIT that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes.

There, you can vote on the fate of people in various harmful situations while sitting on your couch.

An example of the Moral Machine’s questions

In theory, the answers should form a complete picture of human morality, which we can use to teach self-driving cars how to behave when human lives are at risk.

But are we sure that picture is complete?

The analysis of entries from 492,291 people showed that culture has a great impact on these decisions (you can read the full research here). For example, people in South America seem to have a much stronger preference for sparing women or fit people than those in the Western or Eastern clusters.

Given the striking differences observed, I can’t help but think about all the people who won’t be included in this data-collection effort, even though it can affect their fate as well.

Are we sure we’re including the farmer from rural America in the conversation? The indigenous tribe from Brazil? The Kenyan Maasai warrior?

Just looking at a map of the Moral Machine’s users, the answer is exactly what I feared: we’re not.

This may not feel relevant to you with self-driving cars (yet), but think about the people working at Facebook or Google. They can literally click a button and deploy an AI system that will affect the lives of billions of people.

We have a social responsibility to include these people in the conversation. So far, this conversation has been exclusive to the same old group: educated people from rich countries with some interest in technology. If I want to be even more disillusioned, I’d say it’s probably even more restricted: mostly white men in tech.

This has to change.

We need a more inclusive conversation about AI, because it will affect all humanity. And to do that, we need to bring it outside of the usual bubbles of tech companies, geek websites or elite universities. We need to bring it to primary schools, elderly homes, rural communities, and maybe even dance theatres.

And we need everyone’s help to do so.