A linguistics researcher has used supercomputers to develop a method for helping computers learn natural language.
Instead of hard-coding human logic or deciphering dictionaries to teach computers language, University of Texas at Austin linguistics researcher Katrin Erk decided to try a different tactic: feed computers a vast body of text and use the implicit connections between the words to build a map of relationships.
Erk said her starting intuition was that a person could visualize the different meanings of a word as points in space.
Sometimes those points are far apart, she said, like a battery charge and criminal charges; sometimes they are close together, like criminal charges and accusations.
The meaning of a word in a particular context is a point in this space, she said, so people no longer have to decide how many senses a word has; instead they can say that "this use of the word is close to this usage in another sentence, but far away from the third use."
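The idea of placing word usages in a space where distance reflects similarity of meaning can be illustrated with a toy sketch. This is not Erk's actual method, just a minimal distributional example: build co-occurrence vectors for words from a tiny made-up corpus, then compare them with cosine similarity, so that words appearing in similar contexts (like "charge" and "accusation" in legal sentences) end up closer together.

```python
# Toy distributional-semantics sketch (illustrative only, not Erk's method):
# represent each word by the counts of words that occur near it, then
# measure closeness of meanings via cosine similarity of those vectors.
from collections import Counter
import math

corpus = [
    "the battery charge ran low",
    "police filed a criminal charge",
    "the criminal accusation was serious",
]

def context_vector(word, sentences, window=2):
    """Count the words that appear within `window` tokens of `word`."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[tokens[j]] += 1
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (0.0 to 1.0)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

v_charge = context_vector("charge", corpus)
v_accusation = context_vector("accusation", corpus)
print(cosine(v_charge, v_accusation))  # nonzero: shared legal context
```

Real systems use far larger corpora and dimensionality-reduced vectors, but the principle is the same: proximity in the space, not a fixed inventory of senses, captures how one usage relates to another.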