A Human Cognitive Imitation Engine: OpenCog

Amit Singh Bhatti
Apr 5, 2021


When we think about AI today, it is mostly about using neural networks to solve worldly problems. Extrapolating networks of networks to solve every problem in the world is an endless sheet that humans will keep stitching, since there is no upper bound on intelligence, a point thoroughly explained in François Chollet's paper On the Measure of Intelligence. While neural networks work well on some of the tasks in our daily lives using raw computing capability, I believe there is more than just matrix multiplication in our cognitive system, something that makes us think differently from the rest of the species on the planet; otherwise I would not be writing this blog on this platform. On point, I suppose!

In this blog, we will see how the current field of AI defines general intelligence and what one such framework has to offer as a stepping stone towards AGI.

What is Intelligence?

Historically, since the term intelligence was coined, humans have benchmarked their own capabilities on specific tasks and then made comparative evaluations against AI machines to declare a sense of intelligence. The problem with that kind of measure is, first, that it is very human-centric; second, the individuals on the jury side of the evaluation are a limited pool of so-called experts who cannot define intelligence for the masses, since they speak from their own wisdom, which is itself a narrow form of acquired intelligence.

The next issue with benchmarking is: how do you benchmark metrics such as creativity? These kinds of metrics are open-ended (POETs) and cannot be quantitatively reduced to a single number.

Then the question arises: what exactly is intelligence, and how can we measure it? Is it something observable? If yes, how? And even if it is observable, how do we quantify that observable phenomenon?

What we are doing in machine learning today is information compression and sample-efficient learning: interpolating unseen data points among learned data points (manifold learning), or extrapolating learned data distributions to generate new samples within the same manifold topology. So far we still have not answered what exactly intelligence is; let us try to answer that by considering two views.

  • Intelligence as a collection of task-specific skills
  • Intelligence as a general learning ability

Being able to perform a pool of tasks at a level comparable to humans, and then being able to understand, imagine, and create in novel situations by drawing on prior knowledge, is what defines "human-observable intelligence".

Hierarchy of intelligence

Another point to note is that building general intelligence top-down is more achievable, more intuitive, and less complex than the bottom-up approach. Thus, the focus should be on measuring the general learning ability of a system rather than its task-specific skills.

What is wrong with Pure DL Systems?

  1. Deep learning systems lack a physical understanding of the environment; the way humans interact with the environment is physics-driven. Hybrid architectures are needed, i.e. combining deep learning with neurosymbolic AI. These matrix-multiplication machines also lack any self-reprogramming ability.
  2. They lack strong generalization. What these models produce as output is an interpolation in a learned manifold vector space. Generative models, too, create samples only by picking latent variables within that same manifold. They cannot be truly creative.
  3. The approximative nature of DNNs also makes them poorly interpretable, so humans cannot rely on their results in novel situations. Basically, the disadvantages of DNNs are the strengths of symbolic systems, which inherently possess compositionality and interpretability, and can exhibit true generalization.

What is AGI?

When we add the term artificial to define an entity, we are essentially providing a disclaimer that it is just an approximation of the real phenomenon. But I certainly feel that what we have created is compute-based intelligence rather than intelligence with respect to human intelligence, so it is a real intelligence in the world of algorithms, or what one might call an "intelligent algorithm".

Coming back to the human-defined notion of intelligence for these systems: Artificial General Intelligence is the field of computer science that creates systems performing on par with humans across different activities such as vision, speech, text, emotions, motor movements, etc., i.e. a unified, single compute-based system that can do all these tasks in a coherent fashion.

The core principles of a general intelligence system are:

  1. Perception: Making sense of the environment is the very essence of cognitive interaction. An appropriate cognitive sense generates an understandable model of the surroundings, which forms the basis of the right stimulus in the environment.
  2. Memory: Perception needs to be entangled with memory, which creates a timeline of experiences the system should remember. Memory forms the basis of informed stimulus in the surroundings.
  3. Prediction: Interaction with the environment takes the form of a decision the system makes, which can be formulated as a prediction generated by the system.
  4. Action: The tangible outcome after a cognitive decision has been made is formulated as an action in the environment.
  5. Goals: Tangible interactions in the environment are made with respect to an incentive-driven intent, which is basically the goal the system wants to achieve in the environment.

For example, consider a bot that has been asked to pour tea into cups from a tea kettle. The perception module first helps the bot make sense of the environment, which includes sensing the kettle and cups in the kitchen. The memory elements help it recall what the kettle and cups look like and where they are placed. The prediction module helps it make an inference from the human's command, i.e. pour the tea into the cups. Based on this inference, the bot decides on an action, which is performed using perception, memory, and prediction in coherence. Finally, the bot recognizes that the action has been completed and the goal achieved through a feedback loop of perception, memory, prediction, and action, checked against the goal as the reference quantity.

So we see that the five elements above were used repeatedly in the task the bot performed in the environment it was exposed to.
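The five-element loop can be sketched in code. Below is a minimal, illustrative sketch of the tea-pouring example, assuming a toy dictionary-based environment; every class and method name here is hypothetical and does not belong to any real framework.

```python
# Illustrative sketch only: a perceive -> remember -> predict -> act loop
# checked against a goal, as in the tea-pouring example. All names invented.

class TeaBot:
    def __init__(self, goal):
        self.goal = goal          # e.g. "all cups filled"
        self.memory = []          # timeline of past observations

    def perceive(self, environment):
        """Build a model of the surroundings (kettle and cups)."""
        observation = {"kettle": environment.get("kettle"),
                       "cups": environment.get("cups", [])}
        self.memory.append(observation)   # entangle perception with memory
        return observation

    def predict(self, observation, command):
        """Infer what the command implies given what is currently seen."""
        if command == "pour tea" and observation["kettle"] == "full":
            return [("pour", cup) for cup in observation["cups"]
                    if cup["level"] == "empty"]
        return []

    def act(self, plan):
        """Execute the decided actions, mutating the environment state."""
        for _, cup in plan:
            cup["level"] = "full"

    def goal_reached(self, observation):
        return all(cup["level"] == "full" for cup in observation["cups"])

    def run(self, environment, command):
        # Feedback loop: perception, memory, prediction, action, goal check.
        while True:
            obs = self.perceive(environment)
            if self.goal_reached(obs):
                return True
            plan = self.predict(obs, command)
            if not plan:
                return False      # no applicable action; give up
            self.act(plan)

env = {"kettle": "full",
       "cups": [{"level": "empty"}, {"level": "empty"}]}
bot = TeaBot(goal="all cups filled")
print(bot.run(env, "pour tea"))   # True once every cup is full
```

The point of the sketch is only the control flow: each pass through the loop exercises all five elements, and the goal acts as the reference quantity for the feedback.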

Approaches Towards AGI

Symbolic Approach


The symbolic approach is the school of thought that holds that universal knowledge can be represented as a graph. This graph representation can be formulated as a hierarchy of concepts learned by the system. Symbolic systems have large representational power and can perform high-level logic and reasoning. In practice, however, most symbolic approaches fall short at learning and at lower-level tasks like perception.
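As a toy illustration of the symbolic view, knowledge can be held as a set of "is-a" edges forming a concept hierarchy, and queried with a transitive-closure rule. The names below are invented for this sketch and do not correspond to any particular symbolic engine.

```python
# A toy concept hierarchy: knowledge as a graph of "is-a" (inheritance) edges,
# over which a high-level logical query can be answered by chaining edges.

is_a = {
    ("cat", "mammal"),
    ("dog", "mammal"),
    ("mammal", "animal"),
    ("animal", "living_thing"),
}

def inherits(child, ancestor, edges=is_a):
    """Transitive closure of is-a: does `child` inherit from `ancestor`?"""
    if (child, ancestor) in edges:
        return True
    return any(inherits(mid, ancestor, edges)
               for (c, mid) in edges if c == child)

print(inherits("cat", "living_thing"))  # cat -> mammal -> animal -> living_thing
print(inherits("cat", "dog"))           # no is-a chain exists
```

The representational power is evident (one rule answers queries over the whole hierarchy), but so is the weakness the paragraph notes: nothing in this structure learns the edges from raw perception.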

Connectionist Approach

This approach builds intelligence from networks of simple, sub-symbolic learning units. The individually sub-optimal learning units are connected together to form collaboratively learned abstract concepts. Most of the AI systems we work with today are connectionist architectures, even though their purpose is not AGI.

Hybrid Approach

CogPrime Hybrid Graph

The hybrid school of thought combines groups of symbolic learners in connection with other groups of learners to build a hypergraph. Consider the case of CogPrime, which is a collected pool of representation learners, where each pool interacts with other pools as a unit in the graph; you could call it a graph of graphs. In the next section, we will look at the OpenCog framework, which lays down the foundation of such AGI systems.
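A hypergraph differs from an ordinary graph in that a link can connect any number of nodes, and links can themselves be targets of other links. The following minimal sketch shows the idea; the class names are invented for illustration and are not the actual CogPrime or OpenCog data model.

```python
# Minimal hypergraph sketch: a Link may span any number of targets, and a
# Link may point at another Link, letting structure be layered on structure.

class Node:
    def __init__(self, name):
        self.name = name

class Link:
    def __init__(self, kind, targets):
        self.kind = kind
        self.targets = targets    # nodes *or* other links

kettle, cup, pour = Node("kettle"), Node("cup"), Node("pour")
# One hyperedge relating three concepts at once:
event = Link("evaluation", [pour, kettle, cup])
# A link over a link: tagging the whole event as important.
tagged = Link("important", [event])

print(len(event.targets), tagged.targets[0] is event)  # 3 True
```

This "links over links" property is what lets different pools of learners treat each other's outputs as first-class units in one shared graph.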

What is OpenCog?

OpenCog, or Open Cognition, is an open-source project from Ben Goertzel. It takes an integrative-architecture approach to creating an ecosystem of intelligent computing models that can solve most problems related to the cognitive domain.

OpenCog is not a model of human neural or cognitive structure or activity. It draws heavily on knowledge about human intelligence, especially cognitive psychology, but it also deviates from the known nature of human intelligence in many ways, with the goal of providing maximally humanly meaningful general intelligence on available computer hardware.

The first generation of AGI research was confined by technological limitations, and the focus started with expert systems, or "narrow AI" as we call them today. In the decades since the 1950s, cognitive science and neuroscience have taught us a lot about what a cognitive architecture needs to look like to support roughly human-like general intelligence.

There has been a growing focus on human-like, intelligence-based compute systems; conferences such as BICA (Biologically Inspired Cognitive Architectures) and the AGI conference series (AGI-08, AGI-09, AGI-10, AGI-11) have drawn attention toward solving the problem in that way.

Notable previous attempts at this problem include Stan Franklin's "Consciousness is computational: The LIDA model of global workspace theory" and Joscha Bach's Principles of Synthetic Intelligence. The LIDA paper concluded that "the conscious (as well as the non-conscious) aspects of human thinking, planning and perception are produced by adaptive, biological algorithms. We propose that machine consciousness may be produced by similar adaptive algorithms running on the machine." (Global Workspace Theory)

One question that arises is: why have we not been able to develop AI systems that behave exactly the way the human cognitive system works?

  1. An intelligent system depends on creating high-level abstractions of the low-level concepts in the environment.
  2. An algorithm that precisely creates abstractions of the surroundings the way humans do has not yet been figured out.
  3. Achieving the emergence of these abstractions within a system formed by integrating a number of different AI algorithms and structures is tricky. It requires careful attention to the manner in which these algorithms and structures are integrated, and so far that integration has not been done in the right way.

Cognitive Synergy — Secret Sauce

The brain works in synergy with the whole body, and this is the key missing piece. Cognitive synergy is the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating as closely as the components of the brain or body, and thus giving rise to appropriate emergent structures and dynamics. The cognitive synergy ensuing from integrating multiple symbolic and subsymbolic learning and memory components in an appropriate cognitive architecture and environment can yield robust intelligence at the human level and ultimately beyond.


OpenCog as an integrative system has:

  1. Cognitive Synergy as defined above.
  2. Glocal memory, i.e. a hybrid memory that is neither purely localized nor purely global.
  3. A cognitive ability to learn objects similar to the way human children learn.
  4. Teaching the system human language to supply it with some in-built linguistic facility, in the form of rule-based and statistical NLP systems.
  5. Handling all kinds of memory: declarative, procedural, episodic, sensory, intentional, and attentional.
  6. Complex structures:
    Hierarchical network, representing both a spatiotemporal hierarchy and an approximate "default inheritance".
    Heterarchical network of associativity, roughly aligned with the hierarchical network.
    Self network, an approximate micro-image of the whole network.
    Inter-reflecting networks, modeling self and others, reflecting a "mirror house" design pattern.
  7. Uncertain logic is a good way to handle declarative knowledge. To deal with the problems facing a human-level AGI, an uncertain logic must integrate imprecise probability and fuzziness with a broad scope of logical constructs. PLN is one good realization.
  8. Evolutionary program learning: a method for handling difficult learning problems.
  9. Economical activation spreading: an economical and intelligent strategy for handling attentional knowledge. Artificial economics is an effective approach to activation spreading and Hebbian learning in the context of neural-symbolic networks. ECAN (economic attention allocation) is one good realization.
  10. Simulation to handle episodic memory.
  11. Self-improvement using super-compilation and automated theorem-proving.

The above was a high-level view of what AGI systems are and an overview of OpenCog. The key concepts of OpenCog will be covered in the next piece, where we will look at AtomSpace, the knowledge-representation DB and query engine of OpenCog; Probabilistic Logic Networks, i.e. learning under uncertainty; MOSES, the meta-optimizing evolutionary learning algorithm; and the economic attention allocation pointed out in the key points above.

As an end note, I would say I do not think there is such a thing as general intelligence, because we cannot generalize cognitive abilities to N dimensions while the human perception system understands only three. Our understanding of intelligence is not universal intelligence, so whatever we build is human-centric intelligence rather than general intelligence; calling these systems human general intelligence would be more justified.


References

  1. On the Measure of Intelligence: https://arxiv.org/pdf/1911.01547.pdf
  2. Approaches to AGI: https://medium.com/@kevn.wanf/approaches-to-artificial-general-intelligence-5a2654c48b8c
  3. What is AGI: https://towardsdatascience.com/what-is-artificial-general-intelligence-5b395e63f88b
  4. Approach to AGI (figure): https://www.researchgate.net/figure/Approach-to-artificial-general-intelligence-Instead-of-trying-to-solve-complex-but_fig1_300646042
  5. CogPrime overview: https://wiki.opencog.org/w/CogPrime_Overview