AZEEM AZHAR: Hi there, I’m Azeem Azhar. For the past decade, I’ve studied exponential technologies, their emergence, rapid uptake, and the opportunities they create. I wrote a book about this in 2021 called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the fast pace of change in the field of artificial intelligence, and that’s why I’m excited to share a series of weekly insights with you where we can delve into some of the most intriguing questions about AI. In today’s brief note, I’ll speak about how I see this latest wave of AI development and innovation changing the web. I believe that large language models have immense potential to revolutionize how we access and use information online. In fact, they will be the successor to Web 2.0, a web underpinned by AI. Here’s how I think this revolution could come about.
There’s a widespread feeling that Web 2.0 had played itself out, going from a social web to services that were very, very closed off and controlled by the major online aggregators like Facebook and others. And so, as we know, for the last few years there’s been this hunt for what might be Web 3.0, and the theory was that Web 3.0 could be built on an own-your-own-data model on the blockchain. As we know, more than a decade in, very few people are using products like that.
There’s a lot of retrenchment, and the developers in the Web 3.0 space have, in some sense, gone back to basics on the one hand, while on the other hand, regulators are crawling over some of the egregious and outrageous claims and behaviors that came from that segment. While that was happening, AI started to really, really improve, and I think that as we see the maturity of these LLMs, we actually start to see the outlines of a new set of behavioral interactions across the internet on the one hand, and a whole new class of information supply, and an information economy that supports it, on the other. And that is really coming through the LLM.
If you think about what the Web was really about, it was really about finding, accessing, and making use of information. I mean, that was what the system that Sir Tim Berners-Lee prototyped and built at CERN in 1989 was all about, and the direction of travel was about making that system richer and richer and richer. Search engines came along. Initially, they were just keyword search engines, and they got more and more sophisticated and sped up our ability to access that information. At the same time, publishing tools made it easier for more of us to publish into that space.
And then we got into this sort of mire, this molasses, in the late 2000s and early 2010s, when business models were starting to take hold and everything started to look a little bit like it had lost its luster. So we’re looking for that new paradigm, and what I see within LLMs is exactly a provision of that paradigm. It’s not problem-free by any means, but it is providing a completely new interaction mechanism for us to make use of these distributed computer systems that we have on the internet.
And the thing that’s fascinating is that even though all these products, ChatGPT and GPT-4 and the other ones that I use, are in beta, and often not even released as products, as is the case with ChatGPT, I’m using them more than I ever really used any of this crypto Web 3.0 stuff for practical day-to-day purposes.
And just this week, OpenAI announced plugins. This is the ability to connect the chatbots through to underlying services like WolframAlpha for maths or Kayak for flight bookings or Shopify for commerce and that starts to become really, really powerful because it gives us the conversational interface to highly trusted transactional or other information.
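To make that concrete, here is a minimal sketch of the pattern plugins embody: the model, instead of answering from its own weights, emits a structured call that the host routes to a trusted service, and the result flows back into the conversation. The service handlers and the dispatch scheme here are hypothetical stand-ins, not the real plugin APIs of OpenAI, WolframAlpha, or Kayak.

```python
# Hypothetical stand-in for a maths service like WolframAlpha; a real
# plugin would call the provider's HTTP API rather than evaluate locally.
def maths_service(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))

# Hypothetical stand-in for a flight-search service like Kayak.
def flight_service(route: str) -> str:
    return f"3 flights found for {route}"

# Registry mapping plugin names to handlers (names are illustrative).
PLUGINS = {"maths": maths_service, "flights": flight_service}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the named plugin and
    return the trusted result to be woven back into the chat."""
    handler = PLUGINS[tool_call["plugin"]]
    return handler(tool_call["argument"])

# Rather than guessing the answer (and risking hallucination),
# the model emits a structured call, e.g.:
call = {"plugin": "maths", "argument": "17 * 23"}
print(dispatch(call))  # -> 391
```

The key design point is that the language model supplies the conversational layer while factual or transactional answers come from the underlying service, which is what makes the combination trustworthy.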
If you think about that, it addresses some of those weaknesses that LLMs have and that we still can’t iron out, namely their tendency to hallucinate: to do quite well and then blow up by inserting random information. Now I can get that data back from a trusted source instead. But the other thing is that we can start to get these LLMs to do so much that, as one of my colleagues said to me, “Wow, it looks like you wouldn’t even bother to make any software anymore because you just type it into the LLM and it starts to produce what you need.”
Again, we’re only seeing the hints of this, but I think it would be reasonable to assume that this will get better and better rather than worse and worse. So you then get into this space where, actually, the Web could be largely mediated through interfaces that are driven by LLMs of one type or another.
Then we get this question of, will it be a single winner? Will it be one size fits all, as it was with web search, which was essentially Google winning everything? Or will it be something that becomes much more fragmented by use case, as we saw within social media? My sense is that we’ll end up with a number of these different services for a couple of pretty good reasons. First, there will be concerns about pushing information, especially corporate information, into hosted, generalized LLMs.
So, you’ll start to see organizations wanting to deliver their knowledge, both internally and to customers, through their own systems run effectively on-prem, that’s on-premise. You’ll also start to see countries wanting to maintain their own cultural, political, and historical context in these interfaces. They will have learned from the experience of Facebook in particular, which tried to provide singular standards for behavior, user registration, and the kinds of attributes you cared about, standards that were cooked up in a Harvard dorm room and then applied globally.
And I think, nationally, both countries and the people within them will start to think it’s quite important to reflect our cultural heritage, our historical narratives, and our wider context. So, I would expect that you will start to see some localized, nationalized, community-driven LLMs being used as interfaces to these systems.
And I think the final part of all of that is that the devices also really, really matter, and I’ve written about this. I would expect very much that we’ll start to see privacy-safe, on-device LLMs, ones that don’t fall foul of the data harvesting or the privacy concerns that we see when we’re using things like ChatGPT or TikTok or Facebook, but rather ones where the technology lives and resides on a device that is fundamentally ours because we’ve paid for it. The data is stored in a secure, private enclave, largely on-device, that only we have access to, for example, the way that Apple Photos works on your iPhone.
And I think all of that points to there being a large number of these different systems that will work in slightly different ways. Now, what I’m not so sure of, and I need to think a little bit more about, is whether we’ll see a single super dominant player in the market. I think that in some segments that may start being the case. And in particular because GPT-4 is better than the publicly available alternatives for information that is not too sensitive, it’s quite hard to see why they wouldn’t be able to maintain some of that momentum for quite a bit longer.
But I don’t think that the network effects play in quite the same way they did when Google was first rolling out its search engine. So, I don’t have a confirmed view on that yet; I’m still discovering. But what I do feel quite certain about is that we now have a sense of what follows Web 2.0: it will be something that is increasingly mediated by these large language models, connected through to trusted sources and services below. And I think that will really start to change the interaction patterns, and fundamentally the business model, of the web.
Well, thanks for tuning in. If you truly want to grasp the ins and outs of AI, visit www.exponentialview.co, where I share these insights with hundreds of thousands of readers.