Tech ethicist Olivia Gambelin outlines benefits and consequences
11/12/2021
SF Examiner
This week, Cruise, the self-driving car company owned by General Motors, deployed its first driverless robotaxi in San Francisco.
None other than Cruise co-founder and CTO Kyle Vogt was in the back seat, filming his experience on his iPhone and carrying with him his 3-year-old son’s favorite toy truck for support.
“I always thought we were trying to build the magic carpet for cities,” he said to the camera, his face overtaken by a child-like smile. “You just get on it and whisper where you want to go and then you’re magically whisked away.”
But these aren’t magic carpets. These are autonomous, 3,500-plus pound hunks of metal roaming San Francisco’s streets. And that raises the question: Who is responsible when someone, eventually, is hit by a self-driving car?
That’s a question tech ethicist Olivia Gambelin has thought a lot about. She’s the founder and CEO of Ethical Intelligence, a San Francisco tech ethics firm, and she has researched the ethics of self-driving cars specifically.
Born and raised in Silicon Valley, Gambelin always thought she might enter tech — but philosophy, she says, was always her “intellectual guilty pleasure.” She fuses her upbringing with her intellectual curiosity in her consultancy, giving her a nuanced perspective on how to use ethics as a tool for innovation.
“I found out that there wasn’t really a career path already, but that there was a strong need for these kinds of services and expertise,” she says. “So I started that company myself.”
In 2019, she published research analyzing some of the ethical issues facing self-driving cars. But she’s not all about critiquing tech. Instead, she specializes in providing “Ethics as a Service,” helping companies use ethics as a catalyst for innovation. She broke down some of the biggest ethical issues in the self-driving space, and how tech companies can turn those dilemmas into opportunities to innovate.
How does one offer ‘Ethics as a Service’? I come to a company and look at a value like transparency, for example, and say, ‘OK, this is a value that matters to you, and this is a value that matters to the society that you’re functioning in. What does transparency mean in your specific context?’
The benefit of having an ethics team is that this decision-making is mentally heavy. People get this thing called ‘moral fatigue,’ where, if they’re constantly making moral decisions with a high impact on individuals and society, they get tired, and they get hung up on things that they don’t realize they’re getting hung up on.
Ethicists are trained both to spot the key decisions that will have the most impact and to think them through. It’s not us coming in and saying, ‘I’m morally superior and you’re going to follow my values.’ It’s me coming in and saying, ‘This decision matters. Here are the possible consequences and benefits.’
What is the biggest ethical dilemma with self-driving cars? Responsibility. There is a fixation on this question of ‘Who is responsible when the car harms someone?’ The fact that we’re fixated on it means that it’s a real concern. It matters to the broader society that someone will be held responsible.
What are some of the other ethical concerns? The other two big areas have to do with urban planning and environmental impacts. These are massive computers driving around, and an enormous amount of energy goes into them. What is the environmental impact of that? I realize we’re talking about the automobile industry, which isn’t good for the environment in the first place. But do self-driving cars actually help, or are they causing further harm?
Then there’s a societal impact that’s going to come with self-driving cars. If these cars start to take over, like what Uber did to the taxi industry, then what happens? With Uber, there are still drivers earning an income. But with self-driving cars, those people are going to be out of jobs if they don’t have access to the education to retrain and learn how to run the back end at these companies. There will be a growing economic gap between those blue-collar jobs and tech jobs.
Are the ethical considerations any different if a self-driving car hits people one in a thousand times versus one in a million? How does the probability of harm play into ethical decisions? Take, for example, how we have different classifications for criminal offenses when it comes to homicide. There are different levels, like involuntary versus voluntary manslaughter. It’s the intentionality that changes the severity of the punishment.
That mapped over into my research on self-driving cars. When the probability of running someone over is one in a thousand, it’s classified in people’s minds as ‘accidental.’ We’re upset that it’s running over one in a thousand people, but there’s this sense of ‘well, the probability of that accident happening was pretty high.’ We calculate our risk differently.
When the probability switches to one in a million, we’re now looking at a situation that people believe shouldn’t have happened. We believe that the company should have the technology figured out enough so that these accidents are no longer happening.
The fact is that when the probability of harm decreases, our risk appetite also decreases. We’re much less forgiving.
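To put those two failure rates in concrete terms, here is a minimal back-of-the-envelope sketch; the daily ride volume is a hypothetical figure chosen purely for illustration, not a number from Cruise or from Gambelin’s research:

```python
# Rough comparison of the two per-ride failure rates discussed above.
RIDES_PER_DAY = 100_000  # hypothetical citywide ride volume, for illustration only

for label, p_harm in [("one in a thousand", 1 / 1_000),
                      ("one in a million", 1 / 1_000_000)]:
    expected = RIDES_PER_DAY * p_harm  # expected incidents per day at this rate
    print(f"{label}: ~{expected:g} expected incidents per day")
```

At that hypothetical volume, the first rate would produce roughly 100 incidents a day and the second roughly one every ten days, which helps explain why each individual crash at the lower rate reads to the public as a defect rather than an accident.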
How can self-driving car companies utilize ethics to catalyze innovation? It’s all about sitting down and going ‘What is the purpose of this technology?’ In this case, it’s getting a person from A to B. Then you ask, ‘Why do we need to get from A to B?’
When you brainstorm, you come up with new answers. For example, take accessibility. Self-driving cars could be a beautiful solution for someone who can’t normally get around because of a physical limitation, or who has social anxiety and is nervous interacting with people. Then one can use that fact and think, “How do we design this technology so that someone with social anxiety can use features built for them to call a driverless car and get out of the house?”
Similarly, take emergency responders who need to get from point A to point B. Can we design a specific offering for emergency responders in a disaster? What if, in the next big earthquake — knock on wood — these cars automatically mobilized to transport first responders so they can search and rescue more efficiently?
Driverless taxis are the core product, but the innovation comes with ethics when you explore how that core product can be further advanced.
These answers have been edited for clarity.
SF Examiner Link: https://www.sfexaminer.com/news/as-cruise-launches-driverless-rides-we-probe-the-ethicis-of-risk/