Neural networks operate on numerical data. However, there is no straightforward way to convert text or graph data into numbers.

One way around this problem is to create embeddings for words and graphs. An embedding of a word, or of a node in a graph, is a vector of numbers that captures the information a neural network needs to work with it. This means that the embeddings of two contextually similar words will be similar, and will therefore have a large dot product (equivalently, a high cosine similarity). Although embeddings are a relatively recent development, they are now the go-to way to create efficient representations of words and graphs.
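As a quick sketch of this idea, the snippet below compares hand-made toy vectors with cosine similarity; the vectors are invented for illustration, not learned embeddings from any real model.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: the dot product of two vectors, normalized by their lengths."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-dimensional "embeddings" (made-up values, purely illustrative).
king = np.array([0.9, 0.8, 0.1, 0.2])
queen = np.array([0.85, 0.75, 0.2, 0.15])
banana = np.array([0.1, 0.2, 0.9, 0.8])

print(cosine_similarity(king, queen))   # close to 1: contextually similar words
print(cosine_similarity(king, banana))  # much smaller: unrelated words
```

With real learned embeddings (for example, from word2vec or GloVe) the same comparison works the same way, just in a few hundred dimensions rather than four.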

Similarly, nodes that are close together in a graph will be separated by a smaller distance in a two-dimensional embedding space.
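The same distance intuition can be sketched with hypothetical two-dimensional node embeddings (the coordinates below are invented for illustration):

```python
import numpy as np

# Hypothetical 2-D embeddings of three graph nodes (illustrative values only).
node_a = np.array([0.1, 0.2])
node_b = np.array([0.15, 0.25])  # assumed to be a neighbour of node_a in the graph
node_c = np.array([0.9, 0.8])    # assumed to be far from node_a in the graph

# Euclidean distance between the embedding vectors.
print(np.linalg.norm(node_a - node_b))  # small: nearby nodes map close together
print(np.linalg.norm(node_a - node_c))  # large: distant nodes map far apart
```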

All of these embeddings are learned and represented in a Euclidean space; it is therefore fair to argue that such embeddings are limited by the mathematics and dimensionality of Euclidean spaces.