01. June 2019

Paraphrasing by imagination

Tasks in machine learning often require large amounts of training data. Somehow, humans don't. In our latest paper we show how this is possible. We reduce a problem that seems to have nothing to do with vision, paraphrasing (comparing two sentences), to a vision-and-language problem. In the process, we do paraphrasing without seeing a single example of a paraphrase!

Under review; ask us for a preprint.

04. February 2019

How vision helps you learn language

The visual context in which a sentence is uttered is an extremely powerful cue to what that sentence might mean. In a recent paper we show how you can learn the structure and meaning of language even if you never see a single example of those structures. This mechanism for learning language from videos and sentences gets us closer to understanding how children learn.

See more here!


20. January 2019

Deep sampling-based planning

Teach your sampling-based planner new tricks. In a recent paper we show how a deep network can guide a planner, how only a few examples are needed to make this happen, and how this generalizes to new situations. Even better, when the network is confused, you fall back to having a regular sampling-based planner!
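A minimal sketch of the core idea, assuming the details: a `learned_proposal` stand-in plays the role of the deep network (here a toy goal-biased distribution with a made-up confidence score), and the planner only trusts its suggestion when that confidence clears a threshold, otherwise falling back to ordinary uniform sampling. Everything here is hypothetical illustration, not the paper's actual architecture.

```python
import random

def learned_proposal(goal):
    # Hypothetical stand-in for the deep network: it suggests a sample
    # (here, a point near the goal) and reports how confident it is.
    confidence = 0.8  # assumed fixed; a real network would output this
    sample = (goal[0] + random.gauss(0, 0.1),
              goal[1] + random.gauss(0, 0.1))
    return sample, confidence

def uniform_sample(bounds):
    # The regular sampling-based planner's sampler: uniform over the space.
    (lo_x, hi_x), (lo_y, hi_y) = bounds
    return (random.uniform(lo_x, hi_x), random.uniform(lo_y, hi_y))

def guided_sample(goal, bounds, threshold=0.5):
    # Trust the network's suggestion when it is confident; when it is
    # confused, devolve to a plain sampling-based planner.
    sample, confidence = learned_proposal(goal)
    if confidence >= threshold:
        return sample
    return uniform_sample(bounds)

# Example: draw one guided sample toward a goal in a 2 x 2 workspace.
point = guided_sample(goal=(1.0, 1.0), bounds=((0.0, 2.0), (0.0, 2.0)))
```

Because the fallback is the unmodified uniform sampler, the planner keeps the completeness properties of the underlying sampling-based method even when the network is unhelpful.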

See more here!


Andrei's photo

I'm a research scientist at MIT and the Center for Brains, Minds, and Machines, working on language, vision, and robotics, with a touch of neuroscience. I focus on how language can be grounded in perception, how it is acquired by children, and how robots can use language to communicate with us.

The lab page
The CBMM page
abarbu
@_abarbu_
abarbu
Google Scholar
andrei@0xab.com
abarbu@mit.edu