“It’s really important that we bring society along and that we’re not operating in this black box and people think there’s only a few people controlling the future,” Chesky said in an interview with NBC News’s Lester Holt at the Aspen Ideas Festival on Thursday. Altman joined Chesky for the interview.
After OpenAI’s ChatGPT debuted to the public in 2022, artificial intelligence burst into the mainstream as companies raced to implement and profit from advanced large language models while the public and ethicists worried about the societal ramifications of letting the technology run unchecked.
Will AI take people’s jobs? Will it interfere with elections? Worst of all: Could it destroy humanity?
Both tech leaders emphasized the importance of including broader society in the conversation about AI development to allay some of those fears.
“I think that if everyone here could feel like they could participate and they could have their input into it, then I don’t think there’s a huge thing to fear,” Chesky said. “I think the thing to fear is something we don’t understand or [we’re] left out of, and something that runs away from us that we can’t control. And so that’s the future we don’t want to live in.”
Altman, too, highlighted the need to get “feedback from society.”
“We need to learn how to make safe technology,” he said. “We need to figure out how to build safe products, and that includes an ongoing dialogue with society.”
Just seven months ago, Altman was briefly ousted from his chief executive role at OpenAI before returning to the organization with a new board. A few former board members accused Altman of lying to colleagues and creating a toxic culture through “psychological abuse.”
“Sam didn’t inform the board that he owned the OpenAI startup fund, even though he constantly was claiming to be an independent board member with no financial interest in the company,” Helen Toner, a former OpenAI board member, said in an interview in May.
Several OpenAI executives announced in the past month that they were leaving the company, including Jan Leike, who led its now-dissolved safety group. Leike joined a chorus of OpenAI critics questioning the company’s commitment to safety as it pursues artificial general intelligence, AI that surpasses human capabilities.
An OpenAI spokesperson did not immediately respond to a request for comment.
Airbnb CEO Chesky was optimistic about AI’s impact on the future.
While artists raise alarms about AI’s potential to diminish creative work, Chesky, who went to the Rhode Island School of Design, sees the technology as “an incredible tool for artists.” While researchers fear that AI will exacerbate the loneliness epidemic, Chesky believes the tool will “help bring people together.”
“At the end of the day, it’s not the technology; it’s the people with the technology,” Chesky said, referring to those who are building with AI. “It always comes down to the people, their values and, ‘Are they good people?’”