In today’s column, I examine the most likely pathways to get us from today’s AI to the vaunted AGI (artificial general intelligence). This is a mighty big open question, and AI makers and humongous tech firms alike are placing bets on which path will be the winner-winner chicken dinner when it comes to attaining AGI.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Strawman Dates On Attaining AGI
Since attaining AGI seems more plausible in the relatively near term than achieving ASI, let’s put our minds to trying to foresee how AGI is going to be reached. I will use some strawman dates to help illuminate the murky matter.
Recent surveys of AI specialists suggest a consensus guess of the year 2040 as the date by which AGI will have been accomplished. Numerous AI luminaries are touting that we will arrive at AGI sooner, such as within the next three to five years, thus staking their brazen claims on the years 2028 to 2030. I find this doubtful. They are also using Jedi mind tricks to twist the definition of AGI into something a lot less than what AGI is really supposed to denote, which helps bolster their emboldened date forecasts. For my analysis of the various predicted dates and assorted definitions of AGI, see the link here.
The strawman we will use here is the year 2040. That gives us a runway of 15 years. It is useful to put some thought into how those fifteen years are going to play out.
Timeline Considerations
As you well know, we are currently sitting just about mid-way through the year 2025. Trying to envision arriving at AGI in the year 2040 seems like a daunting task. It is quite a long distance in time from our present-day AI status.
No worries, we will do a divide-and-conquer approach to see what we can come up with.
One possibility is that the advances in AI occur smoothly on a year-by-year basis, ultimately culminating in AGI. Assume that each year there is an incremental advancement, and that the advancement is roughly the same amount of progression each year. In other words, if we improve AI by about 7% of the way per year over roughly 15 years, AGI becomes a reality by 2040 (I’m using rounded numbers for this thought exercise).
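To make the rounded numbers concrete, here is a back-of-the-envelope sketch (my own hypothetical model, not any industry metric) that treats progress toward AGI as a gap closed by a fixed fraction each year, which is the essence of the linear path:

```python
import math

# Hypothetical linear-path model: "progress toward AGI" is a gap,
# and a fixed fraction of that gap is closed each year. Purely an
# illustrative thought exercise, not a real forecast.
def years_to_agi(annual_gain: float, target: float = 1.0) -> int:
    """Whole years needed to accumulate `target` total progress
    at `annual_gain` progress per year."""
    return math.ceil(target / annual_gain)

# Closing roughly 7% of the gap per year takes about 15 years,
# matching the 2025-to-2040 runway.
print(years_to_agi(0.07))
```

Note that 1 / 15 is closer to 6.7% per year; the 7% figure in the thought exercise is simply that fraction rounded up.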
Some AI prognosticators believe that simply incrementing AI each year is not the ticket to success. Their view is that the current methodologies and practices are not going to scale up. Concerns are that everyone in AI is pretty much part of a massive one-size-fits-all mindset, blindly pursuing the same kinds of algorithms and approaches. Only if we break free of this malaise and come up with radically new ideas will AGI be attained. For more on this heated debate over AI progression, see my coverage at the link here.
The Bet On A Miracle
Here’s what vocal critics of the incremental approach say is potentially going to happen. Their hope is pinned on the idea that an enterprising AI developer will miraculously see beyond the bounds of existing AI and derive a groundbreaking new approach that no one has yet even imagined. This breakthrough will be the Holy Grail that gets us to AGI. Shortly after inventing or figuring out this incredible innovation, AGI will be right around the corner.
Consider how this gives a different perspective on the timeline.
Maybe the incremental approach muddles along for a dozen years. Some progress is made, accompanied by ongoing self-congratulation. But AGI doesn’t seem within view. Investors in this AI grow perturbed and ask hard questions about when AGI is finally going to be attained.
Boom, out of nowhere, an enterprising AI developer comes up with an incredible breakthrough, doing so around year 13 or 14. Then, this breakthrough is rapidly nurtured into becoming AGI.
In that scenario, there are twelve years of modest incremental progress that is then suddenly punctuated by a new way of devising AI. Once that occurs, in relatively short order the vaunted AGI is figured out. Variations on that timeline are roughly the same in the sense that over the fifteen years, there is a sudden transformative eureka about AI that puts AGI in the picture. Perhaps this happens in year 10 instead of year 13. Or maybe it occurs at the last moment, arising in year 14.
A disconcerting problem with that timeline is that it bets on a kind of miracle occurring during the AGI pursuit. You might have seen the popular cartoon of two scientists standing at a chalkboard filled with arcane equations, with a noticeable gap in the middle. One scientist asks the other what goes in that gap. The response: a miracle goes in that spot.
Seven Major Pathways
I’ve come up with seven major pathways by which AI might advance to become AGI. The first listed path is the incremental progression trail. The AI industry tends to refer to this as the linear path; it is essentially slow and steady. The idea of a sudden miracle happening is usually dubbed the moonshot path. Besides those two avenues, there are five more.
Here’s my list of all seven major pathways getting us from contemporary AI to the treasured AGI:
- (1) Linear path (slow-and-steady): This AGI path captures the gradualist view, whereby AI advancement accumulates a step at a time via scaling, engineering, and iteration, ultimately arriving at AGI.
- (2) S-curve path (plateau and resurgence): This AGI path reflects historical trends in the advancement of AI (e.g., early AI winters), and allows for leveling-up via breakthroughs after stagnation.
- (3) Hockey stick path (slow start, then rapid growth): This AGI path emphasizes the impact of a momentous key inflection point that reimagines and redirects AI advancements, possibly arising via theorized emergent capabilities of AI.
- (4) Rambling path (erratic fluctuations): This AGI path accounts for heightened uncertainty in advancing AI, including overhype-disillusionment cycles, and could also be punctuated by externally impactful disruptions (technical, political, social).
- (5) Moonshot path (sudden leap): Encompasses a radical and unanticipated discontinuity in the advancement of AI, such as the famed envisioned intelligence explosion or similar grand convergence that spontaneously and nearly instantaneously arrives at AGI (for my in-depth discussion on the intelligence explosion, see the link here).
- (6) Never-ending path (perpetual muddling): This represents the harshly skeptical view that AGI may be unreachable by humankind, but we keep trying anyway, plugging away with an enduring hope and belief that AGI is around the next corner.
- (7) Dead-end path (AGI can’t seem to be attained): This indicates that there is a chance that humans might arrive at a dead-end in the pursuit of AGI, which might be a temporary impasse or could be a permanent one such that AGI will never be attained no matter what we do.
You can apply those seven possible pathways to whatever timeline you want. I used the fifteen-year runway to AGI in 2040 as an illustrative example. It could be that 2050 is more likely, so this will play out over 25 years. If 2028 is the AGI arrival year, the pathway is going to be markedly compressed.
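The timeline compression can be quantified by inverting the earlier linear thought exercise: given an assumed AGI arrival year, how much of the remaining gap would have to close each year? This is my own illustrative sketch; the years and rates are hypothetical, not forecasts.

```python
# Hypothetical inversion of the linear thought exercise: for a given
# AGI arrival year, compute the fraction of the remaining gap that
# must be closed annually. Illustrative only, not a forecast.
def required_annual_rate(arrival_year: int, start_year: int = 2025) -> float:
    """Fraction of the gap that must close each year for a linear
    path landing exactly on `arrival_year`."""
    years = arrival_year - start_year
    if years <= 0:
        raise ValueError("arrival year must be after the start year")
    return 1.0 / years

# A 2028 arrival demands roughly a third of the gap per year,
# versus about 7% for 2040 and 4% for 2050.
for year in (2028, 2040, 2050):
    print(year, f"{required_annual_rate(year):.1%}")
```

The point of the sketch is simply that pulling the arrival date in from 2040 to 2028 multiplies the required annual progress fivefold, which is why compressed timelines lean so heavily on a moonshot-style discontinuity rather than steady increments.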
Placing Your Bets
How does a belief in one pathway over another pathway shape the placing of your bets?
If the linear path is where you are putting your poker chips, it would seem that all that needs to happen is to continue doing what is already being done right now. Keep the ship steady and presumably on course. Don’t let anything distract from that direction.
The sudden leap to AGI via the moonshot path would appear to necessitate a maverick departure from what is being done at this time. Do whatever is feasible to think outside the prevailing box. Fund those wild and wide-eyed new ideas. Nurture them along and do not let the myopic pressures of others convince you otherwise.
Similar strategies apply to each respective pathway.
I’m betting you are avidly curious as to which of the seven pathways is thought to be the most likely. In addition, you might be mildly interested in which of the seven is seen as the least likely.
In talking with many of my fellow AI researchers, a casual and highly informal sense is that the S-curve is the most likely. This generally aligns with high-tech development curves. It also abides by the belief that what we are doing now isn’t going to scale up. During a period of a plateau, some new change is going to nudge us forward and open the door to scaling up. It won’t be a miracle breakthrough. Instead, ingenuity and novelty will help move the needle.
Which of the seven pathways suits your fancy?
In terms of the least likely of the pathways, the same ad hoc semblance of AI colleagues speculates that the moonshot won’t be the rescuer that gets us to AGI. In their minds, the miracle cure gets worse odds than lightning striking you while a meteor lands on your head. Maybe this skepticism reflects a belief that what we know is what we know, and that there isn’t something extraordinary we haven’t yet devised.
I certainly don’t want that sentiment to dampen any AI innovators from stretching boundaries and trying outsized new ideas. Please keep your spirit strong. Do not let naysayers stop you from your heart’s pursuit.
As the famous American art historian Bernard Berenson remarked: “Miracles happen to those who believe in them.” The same might happen with attaining AGI.