A trained Retina represents every word by a binary vector of 16K bits. The vector is rendered as an image so that it can be compared intuitively with the fingerprints of other words. We call this a fingerprint. Every colored bit corresponds to a single semantic feature, automatically generated during Retina creation. Bits that are grouped together have related meanings. It is not necessary to identify the meaning of the individual bits.
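As a minimal sketch of this representation, the 16K-bit vector can be modeled as a sparse set of active bit positions laid out on a 128 x 128 grid, where neighboring cells carry related semantic features. The function name and the grid dimensions are illustrative assumptions, not the actual Retina internals:

```python
import numpy as np

FP_BITS = 16_384  # 16K bits per fingerprint
SIDE = 128        # assumed 128 x 128 grid for rendering

def fingerprint_image(active_bits):
    """Render a sparse fingerprint (indices of set bits) as a 2D grid.

    Nearby cells correspond to related semantic features, so the
    resulting image can be compared visually with other fingerprints.
    """
    fp = np.zeros(FP_BITS, dtype=np.uint8)
    fp[list(active_bits)] = 1
    return fp.reshape(SIDE, SIDE)

# hypothetical fingerprint with a handful of active semantic features
img = fingerprint_image([0, 1, 129, 130, 16_000])
print(img.shape)       # (128, 128)
print(int(img.sum()))  # 5
```

In a real Retina the active bits are produced automatically during training; here they are hard-coded purely for illustration.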
This demo accepts two words, expressions, or text snippets as input. Each input is converted into its fingerprint representation and displayed as an image. In the middle pane, the two fingerprints are overlaid, and bits shared between the two are rendered in black. Play around with different word pairs and you will quickly see how semantically related they are.
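The overlay step can be sketched as a bitwise AND of the two binary fingerprints: a bit is black exactly when it is active in both. The scoring function below is an assumption for illustration (overlap normalized by the sparser fingerprint), not the demo's actual metric:

```python
import numpy as np

def overlay(fp_a, fp_b):
    """Shared bits of two binary fingerprints (bitwise AND)."""
    return fp_a & fp_b

def overlap_score(fp_a, fp_b):
    """Relatedness as the fraction of shared active bits,
    normalized by the fingerprint with fewer active bits."""
    shared = int((fp_a & fp_b).sum())
    smaller = min(int(fp_a.sum()), int(fp_b.sum()))
    return shared / smaller if smaller else 0.0

# two toy 16-bit fingerprints standing in for 16K-bit ones
a = np.array([1,1,0,0, 1,0,0,0, 0,1,0,0, 0,0,0,1], dtype=np.uint8)
b = np.array([1,0,0,0, 1,0,1,0, 0,1,0,0, 0,0,0,0], dtype=np.uint8)
print(int(overlay(a, b).sum()))  # 3 shared bits (rendered black)
print(overlap_score(a, b))       # 0.75
```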
This demo makes use of the fundamental semantic comparison capability of fingerprints generated by a Retina. This time the system takes a word, expression, or text snippet and scans its whole dictionary for the most related entries. We use Euclidean distance as the comparison measure on the fingerprints. The most related fingerprints are then converted back into their corresponding words and displayed.
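The dictionary scan can be sketched as a nearest-neighbor search under Euclidean distance. The function name, the toy terms, and the 4-bit fingerprints are all hypothetical stand-ins for the real 16K-bit dictionary:

```python
import numpy as np

def most_similar(query_fp, dictionary, k=3):
    """Scan a term -> fingerprint dictionary and return the k terms whose
    fingerprints have the smallest Euclidean distance to the query."""
    scored = sorted(
        (float(np.linalg.norm(query_fp.astype(float) - fp.astype(float))), term)
        for term, fp in dictionary.items()
    )
    return [term for _, term in scored[:k]]

# toy 4-bit fingerprints standing in for 16K-bit ones
dictionary = {
    "dog": np.array([1, 1, 0, 1], dtype=np.uint8),
    "cat": np.array([1, 1, 0, 0], dtype=np.uint8),
    "car": np.array([0, 0, 1, 1], dtype=np.uint8),
}
query = np.array([1, 1, 0, 1], dtype=np.uint8)  # identical to "dog"
print(most_similar(query, dictionary, k=2))     # ['dog', 'cat']
```

Note that the vectors are cast to float before subtracting, because unsigned integer subtraction would wrap around and distort the distance.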
This list of related terms represents the term's context. Many words have several senses and therefore several context sets. This demo shows the largest contexts for your term and groups them into separate lists. You can also use the part-of-speech filter to further refine your list of related terms.
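The part-of-speech filter can be sketched as a simple selection over tagged related terms. The terms and tags below are hypothetical examples, not output of the actual system:

```python
# hypothetical related-terms list, each term paired with a POS tag
related = [("bark", "NOUN"), ("bark", "VERB"), ("howl", "VERB"), ("puppy", "NOUN")]

def filter_pos(terms, pos):
    """Keep only related terms whose part-of-speech tag matches."""
    return [term for term, tag in terms if tag == pos]

print(filter_pos(related, "NOUN"))  # ['bark', 'puppy']
```

Note that a word like "bark" appears once per sense, which is why the demo groups related terms into several context lists in the first place.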