The term “Artificial Intelligence” conjures up visions of self-aware machines like Skynet and HAL. But in reality, AI research isn’t focused on building a thinking machine. Artificial Intelligence is a broad class of techniques that we use to make otherwise dumb computers act a little bit smarter in very particular ways.
Usually our algorithms do exactly one thing well: a chess-playing computer can beat anyone in the world at chess, but can’t drive a car or understand a sentence. My research teams are generally focused on that last one: enabling computers to do a better job of interpreting language in some useful way.
Most modern AI is able to learn on its own from data, a category of algorithms broadly called “machine learning.” The only hard-coded rules we give the computer deal with techniques for finding patterns in data and generalizing from examples. Then we simply show it lots of examples, usually labeled with some sort of answer. Many advances are happening now in the fields of “representation learning” and “deep learning,” which involve techniques for finding meaning in complicated combinations of really simple inputs (like pixels in an image or words in a sentence).
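To make the “learning from labeled examples” idea concrete, here’s a deliberately tiny sketch of my own (not from any project described here, and with invented example data): the only hard-coded rule is how to count words, while everything the program “knows” about sentiment comes from the labeled examples it’s shown.

```python
# Toy machine learning: the program is told HOW to count words,
# but learns WHICH words signal which label purely from examples.
# All example data below is invented for illustration.
from collections import Counter

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label new text by which class its words co-occurred with more."""
    words = text.lower().split()
    pos = sum(counts["pos"][w] for w in words)
    neg = sum(counts["neg"][w] for w in words)
    return "pos" if pos >= neg else "neg"

examples = [
    ("great movie loved it", "pos"),
    ("wonderful great acting", "pos"),
    ("terrible movie hated it", "neg"),
    ("awful boring plot", "neg"),
]
model = train(examples)
print(predict(model, "loved the acting"))   # → pos
print(predict(model, "boring and awful"))   # → neg
```

Real systems replace the word counts with learned numeric representations (that’s where “representation learning” and “deep learning” come in), but the shape of the process is the same: examples in, generalization out.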
We’re hiring! If you’re looking for a compelling research career and have an interest in any of the topics mentioned here, please reach out.