A Human Cognitive Imitation Engine: OpenCog


While we think about AI today, it is mostly about using neural networks to solve worldly problems and calling that intelligence. Extrapolating networks of networks to solve every problem in the world is an endless sheet that humans will keep on stitching, since there is no upper bound on intelligence, something explained thoroughly in François Chollet's paper "On the Measure of Intelligence". I believe that while neural networks, backed by modern computing, solve many of the tasks in our daily lives well, there is more than just matrix multiplication in our cognitive system, something that makes us think differently from the rest of the species on the planet; otherwise I wouldn't be writing this blog on this platform. On point, I suppose!

  • Intelligence as a general learning ability
  • Hierarchy of intelligence

What is wrong with Pure DL Systems?

  1. Deep learning systems, firstly, lack a physical understanding of the environment; the way humans interact with the environment is physics-driven. Hybrid architectures are needed, i.e. combining neural networks with neurosymbolic AI. A self-reprogramming ability is also something these matrix-multiplication machines lack.
  2. They lack strong generalization: what these models produce as output is an interpolation in a learned manifold of the training data. Generative models, too, create samples only by picking latent variables from that manifold. They cannot be truly creative.
  3. The approximative nature of DNNs also makes them poorly interpretable, so humans cannot rely on their results in novel situations. Essentially, the disadvantages of DNNs are the strengths of symbolic systems, which inherently possess compositionality and interpretability, and can exhibit true generalization.
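To make the interpolation point concrete, here is a toy sketch of my own (not any specific model): a linear "decoder" whose image is a 2-D plane standing in for a learned manifold. Interpolating between latent vectors can only ever produce points on that same plane, never anything outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "decoder": a fixed linear map from a 2-D latent space to a 5-D
# output space. Its image is a 2-D plane (the "learned manifold").
W = rng.normal(size=(5, 2))
decode = lambda z: W @ z

z_a = np.array([1.0, 0.0])
z_b = np.array([0.0, 1.0])

# Interpolating in latent space only yields points on that same plane:
for t in np.linspace(0.0, 1.0, 5):
    x = decode((1 - t) * z_a + t * z_b)
    # Every output is a linear combination of the columns of W,
    # i.e. it never leaves the 2-D manifold the decoder spans.
    assert np.allclose(x, (1 - t) * decode(z_a) + t * decode(z_b))
```

Real decoders are nonlinear, so the manifold is curved rather than flat, but the point stands: the outputs are confined to the surface the model has learned.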

What is AGI?

  1. Memory: Perception needs to be entangled with memory, which creates a timeline of experiences that the system should remember. Memory forms the basis of an informed response to stimuli in the surroundings.
  2. Prediction: Interaction with the environment takes the form of a decision that the system makes, which can be formulated as a prediction generated by the system.
  3. Action: The tangible outcome after a cognitive decision has been made is formulated as an action in the environment.
  4. Goals: The tangible interactions in the environment are made with respect to an incentive-driven intent, which is the goal that the system wants to achieve in the environment.
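The four ingredients above can be wired together in a minimal agent loop. Everything here (the `ToyAgent` class, the "left"/"right" action space) is hypothetical scaffolding, just to show memory informing prediction, prediction driving action, and the goal supplying the incentive.

```python
import random

random.seed(0)  # reproducible illustration

class ToyAgent:
    """Hypothetical agent wiring together memory, prediction, action, goal."""

    def __init__(self, goal):
        self.goal = goal      # 4. Goal: incentive-driven intent
        self.memory = []      # 1. Memory: timeline of (stimulus, action, reward)

    def predict(self, stimulus):
        # 2. Prediction: pick an action informed by remembered rewarded outcomes
        rewarded = [a for (s, a, r) in self.memory if s == stimulus and r > 0]
        return rewarded[-1] if rewarded else random.choice(["left", "right"])

    def act(self, stimulus):
        # 3. Action: the tangible outcome of the decision, recorded in memory
        action = self.predict(stimulus)
        reward = 1 if action == self.goal else 0
        self.memory.append((stimulus, action, reward))
        return action, reward

agent = ToyAgent(goal="right")
for _ in range(20):
    agent.act("wall")
```

Once a rewarded experience enters memory, the agent stops guessing and repeats what worked, which is the entanglement of perception and memory that point 1 describes.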

Approaches Towards AGI

Symbolic Approach


Connectionist Approach

This approach connects many simple sub-symbolic learning units together. These individually sub-optimal units are what we connect to form collaborative abstract concepts. Most of the AI systems we work with today are connectionist architectures, despite the fact that their purpose is not AGI.

Hybrid Approach

Cogprime Hybrid Graph

What is OpenCog?

OpenCog, or open cognition, is an open-source project from Ben Goertzel that takes an integrative architectural approach to create an ecosystem of intelligent computing models able to solve most problems in the cognitive domain. Two obstacles have so far stood in the way of such systems:

  1. An algorithm that precisely creates an abstraction of the surroundings, the way humans do, has not yet been figured out.
  2. Achieving the emergence of these abstractions within a system formed by integrating a number of different AI algorithms and structures is tricky. It requires careful attention to the manner in which these algorithms and structures are integrated, and so far that integration has not been done correctly.


OpenCog as an integrative system has:

  1. Glocal memory, i.e. a hybrid memory that is neither purely localized nor purely global.
  2. A cognitive ability to learn about objects similar to the way human children learn.
  3. Teaching the system human language to supply it with some in-built linguistic facility, in the form of rule-based and statistical NLP systems.
  4. Handling all kinds of memory: declarative, procedural, episodic, sensory, intentional, and attentional.
  5. Complex structures:
    Hierarchical network, representing both a spatiotemporal hierarchy and an approximate "default inheritance".
    Heterarchical network of associativity, roughly aligned with the hierarchical network.
    Self network — an approximate micro image of the whole network.
    Inter-reflecting networks — modeling self and others, reflecting a "mirror house" design pattern.
  6. Uncertain logic is a good way to handle declarative knowledge. To deal with the problems facing a human-level AGI, an uncertain logic must integrate imprecise probability and fuzziness with a broad scope of logical constructs. PLN is one good realization.
  7. Evolutionary program learning — a method to handle difficult learning problems.
  8. Economical activation spreading — an economical and intelligent strategy to handle attentional knowledge. Artificial economics is an effective approach to activation spreading and Hebbian learning in the context of neural-symbolic networks. ECAN (Economic Attention Allocation) is a good realization.
  9. Simulation to handle episodic memory.
  10. Self-improvement using super-compilation and automated theorem-proving.
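Point 6 can be made concrete with a toy truth-value type. This is a simplified sketch of the idea behind PLN, not OpenCog's actual formulas: every statement carries both a strength (how probable it is) and a confidence (how much evidence backs it), and chaining inferences compounds the uncertainty.

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    """PLN-style truth value: a probability-like strength plus a confidence."""
    strength: float
    confidence: float

def deduce(ab: TruthValue, bc: TruthValue) -> TruthValue:
    # Simplified deduction: A->B and B->C combine into A->C.
    # Strength chains multiplicatively here (a crude stand-in for PLN's
    # actual deduction rule); confidence can only shrink, reflecting that
    # uncertainty compounds across inference steps.
    return TruthValue(
        strength=ab.strength * bc.strength,
        confidence=ab.confidence * bc.confidence * 0.9,  # per-step discount
    )

# "Socrates is a man" and "men are mortal", each held with some uncertainty:
socrates_man = TruthValue(strength=0.99, confidence=0.95)
man_mortal = TruthValue(strength=0.98, confidence=0.90)
print(deduce(socrates_man, man_mortal))
```

The key design point survives the simplification: unlike crisp logic, the conclusion is never held more firmly than the premises that produced it.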
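Similarly, the "artificial economics" of point 8 can be sketched as a toy importance economy (my own simplification, not the real ECAN): every atom pays rent on its short-term importance into a central bank, and recently useful atoms earn it back as wages, so attention drifts toward what is useful while total importance is conserved.

```python
# Toy attention economy. Atom names and the rent/wage parameters are
# hypothetical; only the rent-and-wage mechanism mirrors the ECAN idea.
atoms = {"cat": 10.0, "dog": 5.0, "quantum": 1.0}
recently_used = {"cat"}

def ecan_step(atoms, recently_used, rent=0.5, wage=2.0):
    bank = 0.0
    for name in atoms:                 # every atom pays rent into the bank
        tax = min(rent, atoms[name])
        atoms[name] -= tax
        bank += tax
    if recently_used:                  # useful atoms earn wages from the bank
        share = min(wage, bank / len(recently_used))
        for name in recently_used:
            atoms[name] += share
            bank -= share
    return bank                        # leftover funds stay banked

leftover = ecan_step(atoms, recently_used)
# "cat" gains importance; unused atoms slowly lose it.
```

Because wages are paid out of collected rent, the total importance in circulation never grows, which is what makes the allocation "economical" rather than an unbounded activation spread.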


References

  1. F. Chollet, On the Measure of Intelligence: https://arxiv.org/pdf/1911.01547.pdf
  2. Approaches to AGI: https://medium.com/@kevn.wanf/approaches-to-artificial-general-intelligence-5a2654c48b8c
  3. What is AGI: https://towardsdatascience.com/what-is-artificial-general-intelligence-5b395e63f88b
  4. Approach to AGI (figure): https://www.researchgate.net/figure/Approach-to-artificial-general-intelligence-Instead-of-trying-to-solve-complex-but_fig1_300646042
  5. CogPrime Overview: https://wiki.opencog.org/w/CogPrime_Overview

Homo Bayesian