• Sam Altman showed up at Morgan Stanley’s tech conference in San Francisco last week.
  • The event was closed to the public and the media.
  • Morgan Stanley analysts just summarized some of what was discussed.

OpenAI CEO Sam Altman showed up to Morgan Stanley’s tech conference last week. The event was closed to the public and the media, so details have been thin about what was discussed. Until now.

On Monday, Morgan Stanley analysts published a roundup of comments from the event, including some of what Altman and OpenAI discussed.

OpenAI is one of the world’s most valuable startups, working on AI technology that could revolutionize business and broader society. And Altman is arguably the second most famous tech executive in the world, behind Elon Musk.

That makes it worth checking in on Altman’s thinking regularly, especially when he shares it in a semi-private setting.

Here are some highlights from the event, according to Morgan Stanley analysts:

AI = deflation?

Altman spoke about how AI could be deflationary, calling this one of the technology’s most underappreciated and misunderstood impacts among investors.

“This potential deflationary impact is consistent with our Thematic work where we have highlighted the potential for higher global efficiency and productivity… which would help offset inflation,” Morgan Stanley’s analysts wrote.

The cost to access and use generative AI models has been collapsing, partly because new techniques have made building top models easier. As I wrote last year, there are now tons of great models, so supply and demand are pushing them toward commodity status, and hence lower prices.

That is likely great news for developers and businesses that rely on these models: they are paying far less than they were a year ago.

Still not enough GPUs

OpenAI highlighted significant capacity constraints, saying that its large fleet of graphics processing units is “completely saturated.” That is true both in the training stage, where AI models are built, and in the inference stage, where those models are run.
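
For readers less familiar with the jargon, here is a minimal PyTorch sketch of what those two stages look like in code. The toy model and data are illustrative assumptions, not anything OpenAI-specific; real LLMs have billions of parameters, but the split between the two stages is the same.

```python
import torch
import torch.nn as nn

# Toy stand-in for an AI model; real LLMs are vastly larger.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training stage: GPUs crunch through data to adjust the model's weights.
inputs = torch.randn(32, 10)          # a batch of example inputs
targets = torch.randint(0, 2, (32,))  # their labels
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()                       # compute gradients
optimizer.step()                      # update the weights

# Inference stage: the trained model answers new queries; no weight updates.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10))
```

Both stages run on the same GPUs, which is why demand for compute stays high even after a model is finished training.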

OpenAI also said that it has never experienced a situation “where it can’t sell out access to its GPUs at reasonable margins.”

“At a high level these comments are consistent with our thematic framework where a small number of leading companies (or individuals) are focused on training/building the leading LLM…which will require significant compute,” Morgan Stanley’s analysts wrote, referring to large language models.

There’s been some hand-wringing lately about how strong GPU demand may be in the future, so OpenAI’s comments are notable.

OpenAI isn’t worried about training data

Compute may be scarce, but OpenAI isn’t worried about the supply of data to train its models.

The startup highlighted its ability to use its GPUs and existing AI models to create more data. This is known as synthetic data, and it’s becoming increasingly useful for certain stages of the model-creation process.
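
To make the idea concrete, here is a minimal sketch of synthetic data generation using the OpenAI Python SDK: an existing model is prompted to produce new labeled examples that could later feed training. The model name, prompt, and helper function are illustrative assumptions, not OpenAI’s actual pipeline.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_synthetic_examples(topic: str, n: int = 5) -> list[str]:
    """Ask an existing model to write new training examples on a topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable existing model works here
        messages=[{
            "role": "user",
            "content": f"Write {n} short question-answer pairs about {topic}, "
                       "one pair per line.",
        }],
    )
    # Each line becomes a candidate training example for a future model.
    return response.choices[0].message.content.splitlines()

examples = generate_synthetic_examples("basic arithmetic")
```

The point is that spare GPU cycles plus a strong existing model can manufacture new training data on demand, which is why OpenAI sees compute, not data, as the binding constraint.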

“Data is not a constraint in the way that compute is a constraint (or even the way energy may one day be),” Morgan Stanley’s analysts wrote.

OpenAI didn’t respond to a request for comment on Monday.

Do you have a story to share about AI? Contact this reporter at abarr@businessinsider.com.
