Fei-Fei Li: Managing AI Crisis Talk with a Human Touch

Fei-Fei Li is the Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab

When Elon Musk attended the Stanford Artificial Intelligence Lab’s bi-monthly salon discussion—derived from the 18th-century French custom in which guests gather to converse with an inspiring host—he asked Lab director Fei-Fei Li a pointed question about the future: “Aren’t you worried?”

While Musk has been at the forefront of the crisis conversation regarding the future implications of AI, Li has been on the other side, moving the technology forward by teaching computers to think (machine learning) and see (machine vision). 

“I can’t be worried,” she responded. 

Li believes the answer to what she calls “AI crisis talk” is simple: diversity. According to Li, AI depends not just on great code, but also on the human intention behind it. As an Asian-American woman in a field dominated by white men, Li seems uniquely qualified to introduce diverse human intention to her groundbreaking research.

What are some ways you combat the AI “crisis talk”?
When I became director of the AI Lab, I realized that we lacked a platform to actually sit down and talk about AI in the greater context of society—law, ethics, philosophy—because we’re all so focused on developing the technology. So I created the salon discussion for this purpose.

This conversation is important because, as someone researching and developing this technology, you’re not just writing code to decide when a self-driving car should avoid a pedestrian. Your decision is going to touch on ethics. What if you have a choice between hitting a tree that might injure the driver and hitting a stroller with a baby in it? I don’t have an answer, but we need to be thinking about it.

How has your background propelled your dedication to growing diversity in AI research?
It’s been a long journey. My past in this country is very different from that of a typical middle-class kid who has had a computer since age five. My path started in a Chinese restaurant, working as a cleaning girl. That gave me a deeper connection to a different part of society.

Also, I’m a mother. Being a mother gives you a very rich and deep insight into humanity. I have to believe in the benevolent power of my technology. We need to be thinking responsibly, designing responsibly. Because I am a mother, I think about this deeply.

You believe that a humane view of the technology will create diversity. How come?
People from more diverse walks of life are much more attracted to a humanistic mission statement. If they can choose between cancer research that cures breast cancer and AI research that produces the next cool gadget, they’ll choose the cancer research. Injecting humanism into technology—which is necessary for the technology itself—is also a way to attract diverse talent to the technology world.

Technology, no matter how powerful, is in the hands of humans. So from educating humanistic technologists all the way to designing responsible policies and laws, the whole society—the entire spectrum—needs to be involved.

In this multi-part feature with WIRED Brand Lab, Lenovo looks at six extraordinary innovators who work relentlessly to move their fields forward. Check out all six stories from the series here.

Rahil Arora leads Lenovo’s Customer Stories program.