There is explicit knowledge of language, which consists of that language's rules, including its grammar and syntax, and there is implicit knowledge of language, which consists of the things we subconsciously know (feel) to be correct without ever being explicitly told that they are.
Three categories of implicit language knowledge, out of possibly many more, are sounds, meanings, and formal patterns.
Examples of implicit sound knowledge include the varying air dynamics of different sounds and the way the pronunciation of the same letter changes as a function of the preceding vowel.
Implicit knowledge of meaning includes knowing the specific meanings of words, knowing that word meanings vary with factors like context and convention, and the fact that having a large vocabulary does not imply knowing many explicit word definitions.
Formal patterns are understood implicitly through the subtle semantic differences between constructions. For example, one could say "Alice baked Bob a cake" or "Alice baked a cake for Bob", but only the latter would be used if the cake were intended to be thrown at Bob rather than eaten.
I am curious about the nature of vector-embedded implicit lingual space (as well as implicit vector-embedded lingual space), and how it can be accounted for, and even exploited, in developing the next generation of agentic systems, which would benefit from additional lingual features that enable something like latent space interpolation.
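To make "latent space interpolation" concrete, here is a minimal sketch. It assumes we already have embedding vectors for a few phrasings; the toy vocabulary and the random vectors below are placeholders rather than a real embedding model. The sketch linearly interpolates between the embeddings of the two cake constructions and reports which stored phrase each intermediate point lands nearest to by cosine similarity.

```python
import numpy as np

# Placeholder embedding table: random vectors standing in for a real
# sentence embedding model (illustrative only, not meaningful semantics).
rng = np.random.default_rng(0)
vocab = ["baked Bob a cake", "baked a cake for Bob", "threw a cake at Bob"]
embeddings = {phrase: rng.normal(size=8) for phrase in vocab}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def interpolate(a, b, alpha):
    """Linear interpolation between embeddings a and b, for alpha in [0, 1]."""
    return (1 - alpha) * a + alpha * b

def nearest(query):
    """Return the stored phrase whose embedding is closest to the query point."""
    return max(vocab, key=lambda p: cosine(embeddings[p], query))

# Walk between the two constructions and inspect the intermediate points.
a = embeddings["baked Bob a cake"]
b = embeddings["baked a cake for Bob"]
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    point = interpolate(a, b, alpha)
    print(f"alpha={alpha:.2f} -> nearest: {nearest(point)}")
```

With random placeholder vectors the interpolation is only structural; the interesting question is whether a real embedding model would place the implicit semantic contrast between the two constructions somewhere along that path.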
Acknowledgments: The crux of this post was inspired by a lecture by Adele Goldberg at Princeton (PSY309: Psychology of Language).