Artificial General Intelligence: Meaning, Characteristics, Principles, and More

9 mins read
Syed Aquib Ur Rahman
Assistant Manager
Updated on Dec 15, 2023 19:51 IST

Artificial General Intelligence (AGI) represents the quest to create machines mirroring human-like intelligence. AGI aims to possess reasoning, problem-solving abilities, creativity, and learning across diverse domains. Defined by attributes like logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness, AGI is guided by six principles focusing on capabilities, generality, cognition, and progressive development. Levels of AGI range from task-specific Narrow AI to Superhuman Narrow AI surpassing human abilities in specific tasks.


Out of curiosity, the author asked OpenAI's GPT-4, currently among the most popular large language models, whether it could be considered AGI, hoping to settle the confusion, or at least explore the limits of the term.

Here is the response:

“As of my last update in April 2023, AGI had not yet been achieved. Current AI systems, including advanced ones like GPT-4, are still considered Narrow AI. They are highly specialized and lack the general, adaptable intelligence that characterizes human cognition. The development of AGI is a subject of extensive research and is considered a significant challenge in the field of AI. There are also numerous ethical, societal, and safety considerations that come with the development of AGI, given its potential impact on various aspects of life and work.”

Artificial General Intelligence (AGI) can, in theory, be described as a system that meets certain criteria for matching or exceeding human performance. Celebrated experts Sam Altman and Mustafa Suleyman agree that AGI could be equated with having ‘human cognitive skills better than the smartest human’ (Forbes). 

But there is still no single definition of AGI. Nor can there be. 

Predictions about how quickly AI will progress keep shifting, and with them the meaning of the term. Besides being hailed (or not) as the next stage in AI's evolution, AGI carries foreseeable risks too: that it could go out of control through its ability to ‘deceive’ and ‘displace’ humans, doing as much harm as good in geopolitics, labour, and the military. 

This underlines the need for a set of principles that define the term more precisely. Computing researchers, including Shane Legg and Meredith Ringel Morris, have classified the different levels of capability or behaviour that could count as AGI. 

The goal of making AI as capable as, or better than, humans can be traced back to the 1950s. Researchers, including John McCarthy and Marvin Minsky, set out to ‘simulate’ intelligence by making machines use human language, ‘solve problems now reserved for humans, and improve themselves.’ Through the 1970s, the goal was to make machines capable of educating themselves quickly. But, as Shane Legg et al. note in Levels of AGI: Operationalizing Progress on the Path to AGI (2023), the term itself was first used in 1997. 

While exploring this theoretical concept, terms such as narrow AI and autonomy will come up; they will be explained along the way. Let's get into it, then.

What Is Artificial General Intelligence (AGI)?

Artificial General Intelligence (AGI) is a form of artificial intelligence with the ability to understand, learn, adapt, and apply its intelligence across a wide range of tasks and domains, comparable to the cognitive abilities of a human being. 

AGI aims to exhibit human-like intelligence, including reasoning, problem-solving, creativity, understanding natural language, learning from experience, and generalising knowledge to new situations.

Characteristics of AGI

Researchers such as Yoshihiro Maruyama associate AGI with the following attributes. 

Logic 

This refers to the ability of AGI to engage in abstract thinking, reasoning, and problem-solving. Logic is a fundamental aspect of most AI systems. But in the context of AGI, it would involve a more advanced and generalised form of reasoning applicable across various domains.

Autonomy

Autonomy in AGI means the ability to operate independently, make decisions without human intervention, and adapt to new situations or environments. This involves self-directed learning and decision-making, a key difference from many current AI systems that require specific instructions or human guidance.

Resilience 

This attribute implies the ability of AGI to cope with errors, uncertainties, and unexpected challenges. Resilience in AI involves not just robustness in the face of technical issues, but also the capacity to handle complex, real-world environments that are unpredictable and constantly changing.

Integrity 

Integrity for AGI encompasses reliability, trustworthiness, and ethical behaviour. This means the AGI should operate in a manner that is consistent, transparent, and aligned with human values and ethical principles.

Morality 

This involves the AGI's ability to understand and apply ethical considerations and moral values in its decision-making process. It's a particularly challenging aspect, as it requires the AGI to navigate complex, often subjective, and culturally-dependent moral landscapes.

Emotion 

This attribute refers to the AGI's ability to recognize, interpret, and possibly even simulate human emotions. Emotional intelligence in AGI would enhance its interactions with humans, making it more capable of empathy and social understanding.

Embodiment 

Embodiment means the AGI would have a physical presence or be connected to the physical world through sensors or effectors. This is based on the theory that intelligence is not just a cognitive process but also involves interaction with the physical environment.

Embeddedness 

This concept relates to the AGI being part of a larger system or environment, interacting with and being influenced by its surroundings. It emphasises the interconnectedness of AGI with societal, ecological, and technological systems.

Six Principles of Artificial General Intelligence

In the paper Levels of AGI, there are six principles, derived from various earlier and more recent (mis)conceptions of the term.
Take the Turing Test of 1950, for example. Alan Turing proposed it as a means of assessing a machine's intelligence through text-based conversation: a human evaluator interacts with both a machine and another human and tries to work out, from their responses, which one is the machine. But the test tends to focus on a machine's ability to mimic human responses, which may not demonstrate genuine intelligence or understanding.

Another example is OpenAI's working definition of AGI - “highly autonomous systems that outperform humans at most economically valuable work.” This looks only at the economic value of work. It ignores creativity and the understanding of emotions, because they don't always directly make money. (Perhaps you can imagine how megalomaniacal a vision this can create in powerful organisations bent on replacing labour!)

So, onto the six principles…

Focusing on the Capabilities instead of Processes

A definition of Artificial General Intelligence (AGI) should focus on capabilities rather than processes. This principle highlights what an AGI can achieve, not how it achieves it. It also excludes requirements found in earlier approaches, such as human-like thinking processes or qualities such as consciousness and sentience. 

Focusing on Generality and Performance

Here is a popular AGI definition from Mark Gubrud, who defines AGI as “AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.”

Or take another definition - “shorthand for any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.”

Both definitions talk about performance and generality, but these notions need a deeper treatment, which is what the next principle addresses. 

Focusing on Cognitive and Metacognitive Tasks

The authors note that physical embodiment could possibly be essential for gaining the world knowledge needed to succeed in some cognitive tasks. And although AGI definitions often prioritise mental tasks over physical ones, recent progress shows that AI's physical skills still lag behind its cognitive abilities.

Or, at least, embodiment might represent one possible path to success in certain cognitive domains. But the ability to perform physical tasks should not be treated as a strict requirement for reaching AGI. 

Instead, metacognitive abilities, such as learning new tasks or knowing when to seek clarification or help, can be considered crucial prerequisites for AI systems to achieve generality.

Focusing on Potential and not Deployment

Deciding whether a system is AGI should depend on how well it can perform tasks, not on whether it has been deployed in the real world. Requiring real-world deployment introduces additional hurdles, including legal and ethical issues.

Focusing on Ecological Validity 

Tasks picked for measuring AGI progress should mirror real-life activities that people value across different areas, not just economically. This may mean moving away from traditional AI metrics that are easy to automate but fail to capture the skills an AGI would actually be valued for.

Focusing on the Path to AGI, not a Single Endpoint

This principle is about embracing the various definitions of AGI as levels along a path, while also identifying how capable AI currently is at each stage, rather than fixating on a single endpoint. 

Levels of AGI

The levels of AGI can be categorised as follows. 

  • Narrow AI refers to AI systems focused on specific tasks or a defined set of tasks. Examples include calculator software or compilers for programming languages.
  • Emerging AGI represents AI systems that are somewhat better than unskilled humans and showcase a broader range of capabilities. Examples include ChatGPT, Bard, and Llama 2.
  • Competent Narrow AI indicates AI systems performing at least at the 50th percentile of skilled adults but within a limited scope. Examples include toxicity detectors like Jigsaw and smart speakers such as Siri, Alexa, or Google Assistant.
  • Expert Narrow AI describes AI systems performing at the 90th percentile of skilled adults within a specific area. Examples include spelling and grammar checkers like Grammarly or generative image models like Imagen and DALL-E 2.
  • Virtuoso Narrow AI refers to AI systems that surpass most skilled humans (at least the 99th percentile) in a particular domain. Examples include Deep Blue, AlphaGo, and other advanced game-playing AIs.
  • Superhuman Narrow AI denotes AI systems that outperform all humans in specific tasks. Examples include AlphaFold for protein structure prediction, and AlphaZero and Stockfish in chess.

These categories outline the capabilities and performance levels of AI systems, ranging from narrow task-focused abilities to broader, more general, and superior performances in specific domains.
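
To make these thresholds concrete, here is a minimal Python sketch of how the level labels above could be assigned from a performance percentile and a narrow/general flag. It is only an illustration of this list, not the full matrix from the Levels of AGI paper; the function name agi_level and the percentile figures in the examples are assumptions for illustration.

    # Illustrative sketch only: percentile = performance relative to skilled adults
    # on the relevant tasks; general = True for systems with a broad range of
    # capabilities, False for task-specific (narrow) systems.
    def agi_level(percentile: float, general: bool) -> str:
        scope = "AGI" if general else "Narrow AI"
        if percentile >= 100:   # outperforms all humans
            return f"Superhuman {scope}"
        if percentile >= 99:    # at least the 99th percentile of skilled adults
            return f"Virtuoso {scope}"
        if percentile >= 90:    # 90th percentile
            return f"Expert {scope}"
        if percentile >= 50:    # 50th percentile
            return f"Competent {scope}"
        # Below the 50th percentile: broad systems count as Emerging AGI,
        # while task-specific systems are simply Narrow AI in the list above.
        return f"Emerging {scope}" if general else scope

    # Hypothetical placements of the examples above (percentiles are illustrative):
    print(agi_level(40, general=True))    # Emerging AGI, e.g. ChatGPT, Bard, Llama 2
    print(agi_level(95, general=False))   # Expert Narrow AI, e.g. Grammarly, DALL-E 2
    print(agi_level(100, general=False))  # Superhuman Narrow AI, e.g. AlphaFold, Stockfish

In the paper itself, a system's level also depends on the breadth of tasks it performs well on, so reducing it to a single percentile number is a simplification.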

How Does Artificial General Intelligence (AGI) Differ from Artificial Intelligence (AI)?

Artificial General Intelligence (AGI) and Artificial Intelligence (AI) differ in their capabilities. AGI, in theory, comes closer to human intellect and is almost instinctual: it would be able to handle a range of tasks autonomously, adapting on its own without being retrained for anything new that falls outside the ‘vocabulary’ created through its programming and training. 

AI, on the other hand, approximates aspects of human cognition only through preprogramming. It cannot learn new tasks on its own, as it is not instinctual. 

FAQs

What is AGI's goal?

AGI aims to replicate human-like intelligence, enabling machines to perform tasks autonomously across various domains.

How is AGI different from AI?

AGI is more akin to human intellect, instinctual in learning and adapting, while AI relies on preprogramming and lacks instinctual capabilities.

What are AGI's key attributes?

AGI exhibits logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness in its functioning.

What are the principles defining AGI?

Six principles focus on AGI's capabilities, generality, cognition, potential, ecological validity, and progressive development.

What are the levels of AGI?

AGI levels range from Narrow AI (task-specific) to Superhuman Narrow AI (outperforming humans in specific tasks) based on performance and capability.

 
About the Author
Syed Aquib Ur Rahman
Assistant Manager

Aquib is a seasoned wordsmith, having penned countless blogs for Indian and international brands. These days, he's all about digital marketing and core management subjects - not to mention his unwavering commitment ...