Language, at its core, is a vessel that transports an idea from one person’s mind into another’s. An idea, such as this post, is broken into discrete chunks of meaning that, in English, we commonly call words. We insert punctuation to add or clarify meaning in a group of words.
Each meaning chunk, whether a word or a punctuation mark, can be given a meaning potential magnitude and a meaning context distribution. For example, the word “chunk” would sit fairly low on the meaning potential scale, as it is quite well defined. Its context distribution would be fairly tight as well, because it doesn’t do much to the rest of the sentence besides serving as the noun. This is getting awfully close to the mathematical definition of a vector (magnitude and direction). Given this, one could map or graph a word, a sentence, or a whole book. As an example, I put together a quick graph for the sentence “The quick brown fox jumps over the lazy dog.” In this graph, the X-axis locates each word’s context distribution and the Z-axis shows that distribution’s magnitude.
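As a minimal sketch of this idea: treat each word’s context distribution as a vector of weights over the other words, and take its length as the meaning potential magnitude. All of the weights below are illustrative guesses, not measurements.

```python
# Sketch: a word's "context distribution" as a vector of weights over the
# other words in the sentence; its length stands in for the magnitude.
# Every number here is an illustrative guess, not output of any model.
import math

sentence = ["The", "quick", "brown", "fox", "jumps",
            "over", "the", "lazy", "dog", "."]

# Hypothetical context distributions: word -> {word it acts on: weight}
context = {
    # "The" acts on the noun phrase it introduces, "fox" most strongly
    "The": {"quick": 0.2, "brown": 0.2, "fox": 0.6},
    # "fox", as the subject, spreads a little weight over everything
    "fox": {w: 0.1 for w in sentence if w != "fox"},
}

def magnitude(dist):
    """Treat a context distribution as a vector and return its length."""
    return math.sqrt(sum(w * w for w in dist.values()))

for word, dist in context.items():
    print(word, round(magnitude(dist), 3))
```

A tightly focused word like “The” ends up with most of its weight on a few neighbors, while a broadly acting word like “fox” spreads thin weight everywhere; the vector length captures that difference in one number.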
A textual explanation of this graph would go like this: “The” acts on “quick”, “brown”, and “fox”, which gives it a distribution over those three words, with “fox” highest in magnitude. “quick” acts mainly on “fox” and, to a slightly lesser extent, on “brown”. Ditto with “brown”. “fox” acts on everything, as it is the subject of the sentence and core to its meaning. “jumps”, the verb of the sentence, acts on “fox” alone. “over” acts on “dog” in addition to “fox”. “the” acts on “dog” alone. Ditto with “lazy”. “dog” acts on everything as well, but at a lesser magnitude than “fox”. Of course, “.” closes the thought, so it acts on everything, but with a reduced magnitude.
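The walkthrough above can be written down directly as an “acts on” table. Again, every weight is a guess chosen only to mirror the prose:

```python
# Encoding the textual walkthrough as an "acts on" table.
# Weights are illustrative guesses that mirror the prose, nothing more.
words = ["The", "quick", "brown", "fox", "jumps",
         "over", "the", "lazy", "dog", "."]

acts_on = {
    "The":   {"quick": 0.3, "brown": 0.3, "fox": 0.8},
    "quick": {"fox": 0.7, "brown": 0.2},
    "brown": {"fox": 0.7, "quick": 0.2},
    # the subject acts on everything, strongly
    "fox":   {w: 0.8 for w in words if w != "fox"},
    "jumps": {"fox": 0.9},
    "over":  {"fox": 0.6, "dog": 0.6},
    "the":   {"dog": 0.8},
    "lazy":  {"dog": 0.8},
    # "dog" also acts on everything, but at lower magnitude than "fox"
    "dog":   {w: 0.4 for w in words if w != "dog"},
    # "." closes the thought: everything, reduced magnitude
    ".":     {w: 0.2 for w in words if w != "."},
}

# How widely each word's influence spreads (number of words it acts on):
spread = {w: len(targets) for w, targets in acts_on.items()}
print(spread)
```

With the table in this form, the claims in the paragraph become checkable: “fox”, “dog”, and “.” each touch all nine other words, while “jumps” touches only one.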
Of course, this doesn’t take into account the actual definition of a word. The definition of a word is really the translation of an idea or concept into a certain set of characters or even images, and that set is usually governed by rules that vary per language. Because both the pattern and the character set differ so greatly between languages, there isn’t much we can do here mathematically between two very distant languages. Closely related languages are another story entirely.
This method could potentially be useful in automated intelligence research, or really anywhere programming is used to decipher meaning from words. Anyways, just a thought.