Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12259/41145
Type of publication: research article
Type of publication (PDB): Article in conference proceedings in other databases (P1c)
Field of Science: Informatics (N009)
Author(s): Tamošiūnaitė, Minija; Markelic, Irene; Kulvičius, Tomas; Wörgötter, Florentin
Title: Generalizing objects by analyzing language
Is part of: IEEE-RAS 11th International Conference on Humanoid Robots, Bled, Slovenia, October 26-28, 2011. Piscataway: IEEE Press
Extent: p. 557-563
Date: 2011
ISBN: 9781612848686
Abstract: Generalizing objects in an action context by a robot, for example addressing the problem "Which items can be cut with which tools?", is a difficult and unresolved problem. Answering such a question defines a complete action class, which robots cannot do so far. We use a bootstrapping mechanism similar to that known from human language acquisition, and combine language analysis with image analysis to create action classes built around the verb (action) in an utterance. A human teaches the robot a sentence, for example "Cut a sausage with a knife", from which the machine generalizes the arguments (nouns) that the verb takes and searches for possible alternative nouns. Then, by way of an internet-based image search and a classification algorithm, image classes for the alternative nouns are extracted, by which a large "picture book" of the possible objects involved in an action is created. This concludes the generalization step. Using the same classifier, the machine can now also perform a recognition procedure. Without having seen the objects before, it can analyze a visual scene, discovering, for example, a cucumber and a mandolin, which match the earlier found nouns, allowing it to suggest actions like "I could cut a cucumber with a mandolin". The algorithm for generalizing objects by analyzing language (GOAL) presented here thus allows generalization and recognition of objects in an action context. It can then be combined with methods for action execution (e.g. action generation based on human demonstration) to execute so far unknown actions.
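The pipeline the abstract describes (parse the taught utterance into verb and argument slots, generalize each noun slot to alternative nouns, then match a recognized scene against the resulting action class) can be sketched minimally in Python. This is a hypothetical illustration, not the authors' implementation: a hand-coded table of slot alternatives stands in for the paper's corpus analysis and internet image search, and the function names are invented for this sketch.

```python
import re

# Toy stand-in for the language-analysis / image-search steps: nouns that
# can fill the same argument slot of a verb. In the real GOAL pipeline
# these alternatives are discovered, not fixed in a table.
SLOT_ALTERNATIVES = {
    ("cut", "object"): ["sausage", "cucumber", "bread"],
    ("cut", "instrument"): ["knife", "mandolin", "saw"],
}

def parse_utterance(sentence):
    """Extract (verb, object, instrument) from 'VERB a OBJ with a INSTR'."""
    m = re.match(r"(\w+) an? (\w+) with an? (\w+)", sentence.lower())
    if not m:
        raise ValueError("utterance does not match the expected pattern")
    return m.group(1), m.group(2), m.group(3)

def generalize(sentence):
    """Build the action class: every object/instrument pair the verb admits."""
    verb, obj, instr = parse_utterance(sentence)
    objects = SLOT_ALTERNATIVES.get((verb, "object"), [obj])
    instruments = SLOT_ALTERNATIVES.get((verb, "instrument"), [instr])
    return [(verb, o, i) for o in objects for i in instruments]

def suggest(scene_items, action_class):
    """Match items recognized in a visual scene against the action class."""
    return [f"I could {v} a {o} with a {i}"
            for v, o, i in action_class
            if o in scene_items and i in scene_items]
```

For example, `generalize("Cut a sausage with a knife")` yields nine (verb, object, instrument) triples, and `suggest({"cucumber", "mandolin"}, ...)` produces the suggestion "I could cut a cucumber with a mandolin", mirroring the scenario in the abstract.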
Internet: https://hdl.handle.net/20.500.12259/41145
Affiliation(s): Faculty of Informatics, Vytautas Magnus University
Appears in Collections: University Research Publications
