A new brain model of language


Hierarchical Temporal Memory including Cortical Learning Algorithms


Semantic Folding Theory
and its application in Semantic Fingerprinting

A Cortical.io White Paper Version 1.0
Author: Francisco E. De Sousa Webber

With Semantic Folding:

  • words, sentences and whole texts can be compared with each other
  • complex NLU operations can be computed highly efficiently
  • the system needs only small amounts of training data
  • the system is easily debuggable.

The theory

The theory developed by Jeff Hawkins and Subutai Ahmad from Numenta sees the human neocortex as a 2D sheet of modular, homologous microcircuits that process any kind of information in a consistent data format called Sparse Distributed Representations (SDRs).
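The core property of SDRs can be illustrated with a minimal sketch (purely for illustration; the vector size, sparsity and data below are hypothetical, not Numenta's actual parameters): an SDR is a large, mostly-zero binary vector, conveniently modeled as the set of its active bit positions, and two SDRs are compared by counting the bits they share.

```python
# Illustrative sketch of Sparse Distributed Representations (SDRs).
# An SDR is a large binary vector in which only a small fraction of
# bits are active; here each SDR is modeled as the set of its active
# bit indices. All values below are made up for illustration.

def overlap(sdr_a: set, sdr_b: set) -> int:
    """Similarity of two SDRs = number of shared active bit positions."""
    return len(sdr_a & sdr_b)

# Toy SDRs (active bits out of a notional 2048-bit vector)
cat = {3, 17, 101, 512, 900}
dog = {3, 17, 101, 640, 1420}
car = {55, 230, 777, 1024, 1999}

print(overlap(cat, dog))  # related concepts share many active bits
print(overlap(cat, car))  # unrelated concepts share few or none
```

Because active bits are so sparse, a large overlap is very unlikely to occur by chance, which is what makes overlap a meaningful similarity measure.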


Francisco Webber from Cortical.io took the HTM theory as a starting point to create Semantic Folding, a data-encoding mechanism for inputting language semantics into HTM networks.

In Semantic Folding, sparse distributed word vectors (semantic fingerprints) are dynamically positioned on a topographical, two-dimensional semantic map such that semantically related word vectors are placed close to each other.
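A toy sketch of this idea (the grid size, word data and similarity measure below are illustrative assumptions, not Cortical.io's implementation): a word's semantic fingerprint is the set of cells it activates on the 2D map, related words activate nearby or overlapping cells, and two fingerprints can be compared with a set-overlap measure such as Jaccard similarity.

```python
# Hypothetical toy semantic map: cells of a 2D grid stand for semantic
# contexts, and a word's fingerprint is the set of (row, col) positions
# it activates. All fingerprints below are invented for illustration.

def similarity(fp_a: set, fp_b: set) -> float:
    """Jaccard similarity between two semantic fingerprints."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Two senses of "jaguar" share some map regions but not others
jaguar_animal  = {(10, 11), (10, 12), (11, 11), (90, 40)}
jaguar_vehicle = {(90, 40), (90, 41), (91, 40), (10, 11)}

print(similarity(jaguar_animal, jaguar_vehicle))  # partial overlap
```

The topographical layout is what makes the representation debuggable: the active cells of a fingerprint can be inspected directly, and each region of the map corresponds to an interpretable semantic neighborhood.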


Semantic Folding

A new model for intelligent text processing

What is the difference from other machine learning approaches?
