Investors are pouring hundreds of billions of dollars into the AI industry right now, and much of that is going toward the development of a still theoretical technology: artificial general intelligence.
OpenAI, the maker of the buzzy chatbot ChatGPT, has made creating AGI a top priority. Its Big Tech competitors, including Google, Meta, and Microsoft, are devoting their top researchers to the same goal.
But not everyone’s definition of AGI is the same, leading to some confusion over just how close the industry is to inventing this world-changing technology.
Generally speaking, AGI is simply an advanced AI that can reason like humans. For some, it’s more than that. Ian Hogarth, an investor and co-author of the annual “State of AI” report, defined it as a “God-like AI.” Tom Everitt, an AGI safety researcher at DeepMind, described AGI as AI systems that can solve tasks in ways that aren’t limited to how they are trained.
Andrew Ng, a leading AI researcher, said in a recent interview with Techsauce that AGI should be able to do “any intellectual tasks that a human can” — learning to drive a car, fly a plane, or write a Ph.D. thesis.
According to Ng, though, we’re still decades away from seeing anything close to that.
“I hope we get there in our lifetime, but I’m not sure,” he said, adding that companies that claim AGI is imminent use dubious definitions of the term. “Some companies are using very non-standard definitions of AGI, and if you redefine AGI to be a lower bar, then of course we could get there in 1 to 2 years.”