ELI Robot Arm Demos

This is a demonstration of a multi-modal instructional dialog system developed at IBM's T.J. Watson Research Center. The robot knows about objects, counting, and color, and can resolve natural language phenomena such as pronoun references. It can also recognize gestures and perform some of its own. In the course of interaction it can learn about new objects, both their appearance and their names. Using a learned name, it can then consult online data sources to find out about things like recommended dosages and generic substitutions. Finally, the robot is able to learn new actions through a form of indexicalized verbal scripting. This video was produced by Jonathan Connell with help from Etienne Marcheret, Sharath Pankanti, Michiharu Kudoh, and Risa Nishiyama. More details are given in our AGI-12 paper (see http://researcher.watson.ibm.com/researcher/files/us-jconnell/ELI_arm.pdf).
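
To give a rough sense of what "indexicalized verbal scripting" could mean in practice, here is a minimal Python sketch, not taken from the ELI system itself: a taught action is stored as a sequence of primitive steps whose indexical arguments ("it", "the box") stay symbolic and are only bound to concrete referents when the script is run in a fresh dialog context. All names here (VerbalScript, teach_step, run, the primitive set) are hypothetical illustrations.

```python
# Illustrative sketch only: the ELI system's internal representation is not
# public, and every identifier below is a hypothetical stand-in.

class VerbalScript:
    """A named action learned as a sequence of primitive steps.

    Indexical arguments such as "it" are stored symbolically and are
    resolved against the current dialog context at invocation time.
    """

    def __init__(self, name):
        self.name = name
        self.steps = []  # list of (primitive_name, args) pairs

    def teach_step(self, primitive, *args):
        # Indexicals like "it" are stored as-is, unresolved.
        self.steps.append((primitive, args))

    def run(self, primitives, bindings):
        # Replay each step, substituting whatever the dialog context
        # currently binds each indexical to.
        for primitive, args in self.steps:
            resolved = [bindings.get(a, a) for a in args]
            primitives[primitive](*resolved)


# Teaching "put it away" as grasp -> move -> release.
script = VerbalScript("put it away")
script.teach_step("grasp", "it")
script.teach_step("move_to", "the box")
script.teach_step("release")

# Stand-in motor primitives; a real robot would command the arm here.
primitives = {
    "grasp": lambda obj: print(f"grasping {obj}"),
    "move_to": lambda loc: print(f"moving to {loc}"),
    "release": lambda: print("releasing"),
}

# At run time the indexicals are bound to whatever the dialog
# context currently picks out.
script.run(primitives, {"it": "the red bottle", "the box": "bin #3"})
```

The point of keeping the arguments symbolic is that the same taught script can later be applied to a different object simply by invoking it in a new context, without re-teaching the motion sequence.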
