I’ve been thinking a lot about how we create responsible AI. Specifically, I’ve been reading more about the trolley problem and the moral conundrum it inevitably raises.
The trolley problem is essentially this:
There’s a runaway trolley barreling toward five workers. You’re watching from afar, standing by a lever. If you pull the lever, the trolley will divert to a second track. However, on that second track stands one worker who will be struck if you intervene. Thus, you face a moral quandary: Do you pull the lever, steering the trolley toward the single worker, or do you let it run its course and kill the five?
Trolleyology, as it’s called, raises a range of ethical debates that will shape how AI is used alongside humans in the years to come. According to a recent poll, 81% of executives believe AI will work alongside humans as an aid and trusted advisor within the next two years. The urgency of reckoning with our moral instincts has never been more apparent.