In today’s column, I examine the ongoing debate about how we will know that artificial general intelligence (AGI) is getting close to being attained. Some say it will be blatantly obvious that we are nearing the vaunted AGI attainment, and thus no special means will be needed to make that determination. Others assert that achieving AGI is going to be complex and convoluted, such that only the topmost AI experts will be able to tell us that AGI is almost here. In a sense, the belief is that a form of scientific consensus by AI experts will be the mainstay telltale.
But scientific consensus has its hiccups and gotchas, and perhaps a more prudent approach would be via the use of convergence-of-evidence, also known as consilience.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the more distant possibility of achieving artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
The AI Experts Consensus Method
Right now, efforts to forecast when AGI is going to be attained consist principally of two paths.
First, there are highly vocal AI luminaries making individualized, brazen predictions. Their headiness generates outsized media headlines. Those prophecies seem to be coalescing toward the year 2030 as a targeted date for AGI. A somewhat quieter path is the advent of periodic surveys or polls of AI experts. This wisdom-of-the-crowd approach is a form of scientific consensus. As I discuss at the link here, the latest polls seem to suggest that AI experts generally believe that we will reach AGI by the year 2040.
Should you be swayed by the AI luminaries or more so by the AI experts and their scientific consensus?
Historically, the use of scientific consensus as a method of understanding scientific postures has been relatively popular and construed as the standard way of doing things. If you rely on an individual scientist, they might have their own quirky view of the matter. The beauty of consensus is that a majority or more of those in a given realm are putting their collective weight behind whatever position is being espoused.
The old adage is that two heads are better than one. In the case of scientific consensus, it might be dozens, hundreds, or thousands of heads that are better than one.
Scientific Consensus Is Not Absolute
Scientific consensus turns out to not be absolute or perfect.
We are presently amid societal angst about scientific consensus. The polarization of society has divided our beliefs about scientific consensus. For example, there is much heated debate about the role of scientific consensus in the matter of the Covid origins.
To be abundantly clear, scientific consensus is not to be equated with scientific absolute certainty. Those are different beasts. Consensus implies that a scientific viewpoint lands on a particular status or condition, more so than it might otherwise. There are bound to be those outside of the consensus who do not support the consensus opinion. It isn’t utter unanimity but instead a consensus.
You can look back in recent history and see that scientific consensus has had its ups and downs. A frequently cited instance is that we were told for the longest time that Pluto was a planet. This belief was supported wholeheartedly by scientific consensus. In 2006, the scientific community pulled the rug out from under us and said that Pluto wasn’t a planet (nowadays, it is classified as a dwarf planet). Scientific consensus changed.
The good news about scientific consensus includes these four salient points:
- (1) Scientific consensus is based on collective insight versus individual-only opinion.
- (2) Scientific consensus gives clarity and stability to what we know about scientific facets.
- (3) Scientific consensus serves as a building block for constructing holistic scientific theories.
- (4) Scientific consensus is flexible and can adapt as our understanding of the world changes.
The bad news about scientific consensus includes these four crucial points:
- (1) Scientific consensus can turn out to be wrong and yet we were earlier led to assume it was unquestionably right.
- (2) Scientific consensus is somewhat insidious since it is hard to stridently disagree with a consensus viewpoint.
- (3) Scientific consensus might be reached simply due to bird-of-a-feather convention and not due to hardcore scientific reasoning.
- (4) Scientific consensus at times becomes dogma that no one dares refute.
Convergence-of-Evidence aka Consilience
You might be wondering: if scientific consensus has these assorted vulnerabilities and weaknesses, is there any other viable means of grasping the nature of scientific findings and positions?
Yes, but you probably haven’t heard of the prominent alternative that rarely gets airtime. I’m referring to convergence-of-evidence, also known as concordance-of-evidence or consilience. The principle is pretty straightforward and fully sensible.
It goes like this. We dutifully seek out evidence from a multitude of sources and use that evidence to essentially converge on a scientific posture or status. It is best if the sources are independent of each other. I say that because a bunch of sources all drawn from the same drinking well isn’t going to make for a healthy convergence. The convergence would simply be the same even though you had amassed a ton of evidence.
The idea is that we can put our shoulders behind a convergence-of-evidence that comes from different sources that each arrived at their positions via different and separate means. It is quite impressive when lots of rigorous pursuits happen to reach the same conclusion. The weight underlying the conclusion can be said to be sturdy.
Is convergence-of-evidence perfect?
Nope. It is not. Just like the weaknesses of scientific consensus, there are similar weaknesses associated with consilience. The odds of being off seem lessened, though, with convergence-of-evidence.
Ideally, you would use all three methods to determine where things are at. You would bring together individual scientific opinions, along with scientific consensus, and notably have convergence-of-evidence in hand too. If all three of those avenues seem to agree, you have yourself a compelling case. Furthermore, if there are notable disparities between the conclusions of the three prongs, it is a helpful heads-up that something is afoot, and you need to keep your eyes wide open.
Convergence-of-Evidence And AGI
This brings us to the big reveal, namely that in addition to AI luminaries having their predictions about the attainment of AGI, plus having a form of scientific consensus via the use of AI expert surveys, we ought to also include convergence-of-evidence toward AGI in the mix. Sadly, there isn’t much of a movement yet in the AGI arena toward a convergence-of-evidence or consilience. I am optimistic that we will gradually and inexorably get there.
Let’s get the ball rolling.
I offer a brief sketch of what kind of evidence we would want to encompass in a convergence-of-evidence framework for identifying the nearness of attaining AGI. There would need to be a concerted effort to land on firm metrics and standardize the approach. If a standard isn’t formulated, everyone will be hawking their particular set of evidence, and it will be a chaotic mess.
Here are six vital evidentiary factors that could be part of a well-devised convergence-of-evidence concerning the attaining of AGI:
- (1) AI technological and empirical factors. Incorporate genuine well-tested AI advancement benchmarks that can be early signs of AGI attainment being reached. Devise additional measures to gauge emergent capabilities, cross-domain competencies, and predominant scaling laws about AI. This will be considered the primary factor while the rest of the factors are seen as secondary though still essential and altogether worthwhile.
- (2) Neuroscientific related factors. This subset is a bit controversial, but the belief by some is that AGI will inevitably be based on or shaped via human neuroscientific parallels. Not everyone concurs with that notion. In any case, measures such as alignment with human cognitive architecture and neurosymbolic capacities might be included in this subset.
- (3) AI economic and societal factors. If you assume that AGI will be a steady attainment and not a sudden overnight success, the advancing AI will presumably be seeping into society and our economic affairs. For example, the advanced AI that is nearing AGI would potentially be substituted for human cognition labor. This could be readily measured. It isn’t purely a technological metric and so there are bound to be arguments about whether this ought to be included.
- (4) AI expert consensus factors. Rather than hoping that convergence-of-evidence for AGI would be compared side-by-side with AI expert scientific consensus, we might as well toss it into the consilience per se. Various carefully selected and methodical surveys of AI expert opinions on the nearing of AGI would be incorporated directly into the convergence-of-evidence. A portfolio of such surveys or consensus gatherings would be included rather than just a singular chosen one.
- (5) AI research directional shifts factors. An AI expert consensus consists of what AI experts have to say, which might or might not correlate with what they are actually doing. To ascertain what AI experts are working on, an analysis of how AI research is shifting could be devised and included in the convergence-of-evidence. For example, if AI research is becoming increasingly secretive, that could be due to a belief on the researchers’ part that they have some secret sauce leading to AGI. Of course, such a movement could be for other reasons too.
- (6) AI governance and AI regulatory factors. If governments and lawmakers believe that AGI is nearing, there is likely to be a burst of lawmaking and regulatory action. This doesn’t mean that they are correct in their belief that AGI is nearing. They might be misled or jump the gun. It is nonetheless a sign or indicator that ought to be considered. This factor could include nationwide laws and strategies associated with AGI, international AGI stipulations and agreements, import/export controls about AGI, and so on.
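To make the six-factor framework a bit more concrete, here is a minimal sketch in Python of how the evidentiary factors might be rolled up into a single consilience reading, along with a divergence measure flagging when the evidence sources are not converging. To be clear, the factor names, weights, and scores below are entirely my illustrative assumptions for the sake of the sketch; no standardized metrics of this kind yet exist.

```python
# Hypothetical sketch of a convergence-of-evidence rollup for AGI nearness.
# Factor names, weights, and scores are illustrative assumptions only.

FACTORS = {
    # factor name: (weight, score in [0, 1] indicating estimated nearness to AGI)
    "technological_empirical": (0.30, 0.55),  # primary factor, weighted highest
    "neuroscientific":         (0.10, 0.30),
    "economic_societal":       (0.15, 0.40),
    "expert_consensus":        (0.20, 0.50),
    "research_direction":      (0.15, 0.45),
    "governance_regulatory":   (0.10, 0.35),
}

def consilience_score(factors):
    """Weighted average of factor scores (weights need not sum to 1)."""
    total_weight = sum(w for w, _ in factors.values())
    return sum(w * s for w, s in factors.values()) / total_weight

def divergence(factors):
    """Spread between the most and least optimistic factor scores.

    A large spread is the heads-up that something is afoot: the
    independent evidence sources are not actually converging.
    """
    scores = [s for _, s in factors.values()]
    return max(scores) - min(scores)

print("consilience score:", round(consilience_score(FACTORS), 4))
print("divergence:", round(divergence(FACTORS), 4))
```

A real framework would, of course, need much more than a weighted average, including checks that the factors truly are independent of one another, but even a toy rollup like this illustrates why standardizing the metrics matters: change the weights and the verdict changes with them.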
Next Steps On Convergence-of-Evidence AGI
I outlined a productive handful of possibilities or factors for composing an AGI consilience framework. I’m sure you might have additional thoughts and suggestions.
That’s great; please proceed to put together something, and let’s get this underway.
I’ll give the final word for now to Albert Einstein: “To raise new questions, new possibilities, to regard old problems from a new angle, requires creative imagination and marks real advance in science.” That’s precisely why a suitable convergence-of-evidence framework for AGI is sorely needed.