ETL-Humanoid: A Research Vehicle for Open-Ended Action Imitation


Ontology type: schema:Chapter     


Chapter Info

DATE

2003-06-30

AUTHORS

Yasuo Kuniyoshi , Gordon Cheng , Akihiko Nagakubo

ABSTRACT

The capability of action imitation constitutes a fundamental basis of higher human intelligence. In this paper, we first analyze the concept of imitation and mark the essential problems underlying imitation, e.g. redundant sensory and motor degrees of freedom, adaptive mapping, and strong embodiment. It supports a synthetic approach to understanding imitation capabilities, which requires a versatile humanoid robot platform. An overview of our full-body humanoid robot system is presented, which is signified by its versatile physical capabilities and complete open architecture. Then we present our early experiment on multi-modal architecture and behavior imitation with the humanoid. It can continuously interact with humans through visual, auditory and motion modalities in an unmodified everyday environment. And when a person attends to the robot, starting to show a dual arm motion, the robot spontaneously starts to copy it.

PAGES

67-82

References to SciGraph publications

  • 1999. The Cog Project: Building a Humanoid Robot. In: Computation for Metaphors, Analogy, and Agents
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5

    DOI

    http://dx.doi.org/10.1007/3-540-36460-9_5

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1030298129



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "University of Tokyo", 
              "id": "https://www.grid.ac/institutes/grid.26999.3d", 
              "name": [
                "Dept. of Mechano-Informatics, School of Information Science and Technology, The Univ.of Tokyo, Japan"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kuniyoshi", 
            "givenName": "Yasuo", 
            "id": "sg:person.013372311431.62", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013372311431.62"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "name": [
                "ATR Human Information Science Labs., Japan"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Cheng", 
            "givenName": "Gordon", 
            "id": "sg:person.011214060205.82", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011214060205.82"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "National Institute of Advanced Industrial Science and Technology", 
              "id": "https://www.grid.ac/institutes/grid.208504.b", 
              "name": [
                "National Institute of Advanced Industrial Science and Technology, Japan"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Nagakubo", 
            "givenName": "Akihiko", 
            "id": "sg:person.010547235761.75", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010547235761.75"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "https://doi.org/10.4324/9781315802794", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001676034"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s0893-6080(96)00043-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015271032"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/s0893-6080(99)00070-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1024464879"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1016/0921-8890(95)00054-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030224728"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/3-540-48834-0_5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1036490056", 
              "https://doi.org/10.1007/3-540-48834-0_5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/70.338535", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1061216127"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iros.2000.895195", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093255774"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/robot.2000.846360", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1093826163"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iros.1997.655104", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1094837300"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/iros.2000.895198", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095629380"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "https://doi.org/10.1109/robot.1998.677288", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1095706289"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2003-06-30", 
        "datePublishedReg": "2003-06-30", 
        "description": "The capability of action imitation constitutes a fundamental basis of higher human intelligence. In this paper, we first analyze the concept of imitation and mark the essential problems underlying imitation, e.g. redundant sensory and motor degrees of freedom, adaptive mapping, and strong embodiment. It supports a synthetic approach to understanding imitation capabilities, which requires a versatile humanoid robot platform. An overview of our full-body humanoid robot system is presented, which is signified by its versatile physical capabilities and complete open architecture. Then we present our early experiment on multi-modal architecture and behavior imitation with the humanoid. It can continuously interact with humans through visual, auditory and motion modalities in an unmodified everyday environment. And when a person attends to the robot, starting to show a dual arm motion, the robot spontaneously starts to copy it.", 
        "editor": [
          {
            "familyName": "Jarvis", 
            "givenName": "Raymond Austin", 
            "type": "Person"
          }, 
          {
            "familyName": "Zelinsky", 
            "givenName": "Alexander", 
            "type": "Person"
          }
        ], 
        "genre": "chapter", 
        "id": "sg:pub.10.1007/3-540-36460-9_5", 
        "inLanguage": [
          "en"
        ], 
        "isAccessibleForFree": false, 
        "isPartOf": {
          "isbn": [
            "978-3-540-00550-6"
          ], 
          "name": "Robotics Research", 
          "type": "Book"
        }, 
        "name": "ETL-Humanoid: A Research Vehicle for Open-Ended Action Imitation", 
        "pagination": "67-82", 
        "productId": [
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/3-540-36460-9_5"
            ]
          }, 
          {
            "name": "readcube_id", 
            "type": "PropertyValue", 
            "value": [
              "90558861de6eb56cbfe84410c154ada2bb97fb008bc6c8b054903eedc565e28e"
            ]
          }, 
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1030298129"
            ]
          }
        ], 
        "publisher": {
          "location": "Berlin, Heidelberg", 
          "name": "Springer Berlin Heidelberg", 
          "type": "Organisation"
        }, 
        "sameAs": [
          "https://doi.org/10.1007/3-540-36460-9_5", 
          "https://app.dimensions.ai/details/publication/pub.1030298129"
        ], 
        "sdDataset": "chapters", 
        "sdDatePublished": "2019-04-16T05:26", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000345_0000000345/records_64103_00000001.jsonl", 
        "type": "Chapter", 
        "url": "https://link.springer.com/10.1007%2F3-540-36460-9_5"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular linked-data format that is fully compatible with plain JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5'
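A response in this format can be consumed with nothing more than a JSON parser. The sketch below (illustrative only; it parses a minimal excerpt of the record shown above rather than fetching it over the network) pulls out the title and author names — note that the endpoint returns a one-element JSON array that must be unwrapped first.

```python
import json

# Minimal excerpt of the JSON-LD record shown above. The real response
# carries many more keys; only the ones used below are reproduced here.
record_json = '''
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "ETL-Humanoid: A Research Vehicle for Open-Ended Action Imitation",
    "datePublished": "2003-06-30",
    "pagination": "67-82",
    "author": [
      {"familyName": "Kuniyoshi", "givenName": "Yasuo", "type": "Person"},
      {"familyName": "Cheng", "givenName": "Gordon", "type": "Person"},
      {"familyName": "Nagakubo", "givenName": "Akihiko", "type": "Person"}
    ]
  }
]
'''

record = json.loads(record_json)[0]  # unwrap the one-element list
title = record["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
print(title)
print(", ".join(authors))
```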

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5'
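Because each N-Triples line is a self-contained subject–predicate–object statement, simple lines can be split without a grammar. The sketch below is a naive, illustrative parse only — real N-Triples streams (blank nodes, escaped literals, datatype tags) should go through a proper RDF library such as rdflib. The sample triple restates this record's pagination, with the prefixed names from the table expanded to full IRIs.

```python
# Naive split of a simple N-Triples line into (subject, predicate, object).
# Illustrative only; use a real RDF parser for production data.
def parse_ntriple(line):
    subject, predicate, rest = line.split(" ", 2)
    obj = rest.rstrip().rstrip(" .")  # drop the terminating " ."
    return subject, predicate, obj

# One triple from this record (schema:pagination "67-82"), written out
# in N-Triples form with full IRIs.
sample = ('<http://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5> '
          '<http://schema.org/pagination> "67-82" .')
s, p, o = parse_ntriple(sample)
print(o)  # the literal, quotes included
```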

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/3-540-36460-9_5'
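All four curl commands hit the same URL and differ only in the Accept header — the endpoint chooses the serialization by content negotiation. A minimal sketch of the same pattern outside the shell (the `ACCEPT` table and `build_request` helper are hypothetical names; the request is built but deliberately not sent, so no network access is needed):

```python
from urllib.request import Request

# MIME types accepted by the SciGraph endpoint, per the curl examples above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(pub_id, fmt):
    """Build (but do not send) a content-negotiation request for one record."""
    url = f"https://scigraph.springernature.com/{pub_id}"
    return Request(url, headers={"Accept": ACCEPT[fmt]})

req = build_request("pub.10.1007/3-540-36460-9_5", "turtle")
print(req.get_header("Accept"))
```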


     

    This table displays all metadata directly associated with this object as RDF triples.

    122 TRIPLES      23 PREDICATES      37 URIs      19 LITERALS      8 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/3-540-36460-9_5 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nae2f90e666a54a25a73cdca074edf20b
    4 schema:citation sg:pub.10.1007/3-540-48834-0_5
    5 https://doi.org/10.1016/0921-8890(95)00054-2
    6 https://doi.org/10.1016/s0893-6080(96)00043-3
    7 https://doi.org/10.1016/s0893-6080(99)00070-2
    8 https://doi.org/10.1109/70.338535
    9 https://doi.org/10.1109/iros.1997.655104
    10 https://doi.org/10.1109/iros.2000.895195
    11 https://doi.org/10.1109/iros.2000.895198
    12 https://doi.org/10.1109/robot.1998.677288
    13 https://doi.org/10.1109/robot.2000.846360
    14 https://doi.org/10.4324/9781315802794
    15 schema:datePublished 2003-06-30
    16 schema:datePublishedReg 2003-06-30
    17 schema:description The capability of action imitation constitutes a fundamental basis of higher human intelligence. In this paper, we first analyze the concept of imitation and mark the essential problems underlying imitation, e.g. redundant sensory and motor degrees of freedom, adaptive mapping, and strong embodiment. It supports a synthetic approach to understanding imitation capabilities, which requires a versatile humanoid robot platform. An overview of our full-body humanoid robot system is presented, which is signified by its versatile physical capabilities and complete open architecture. Then we present our early experiment on multi-modal architecture and behavior imitation with the humanoid. It can continuously interact with humans through visual, auditory and motion modalities in an unmodified everyday environment. And when a person attends to the robot, starting to show a dual arm motion, the robot spontaneously starts to copy it.
    18 schema:editor N5a9b9a5384984bab8536b7e639ad3b52
    19 schema:genre chapter
    20 schema:inLanguage en
    21 schema:isAccessibleForFree false
    22 schema:isPartOf Ne13dbe0f4ad24041809072d8f0b3993d
    23 schema:name ETL-Humanoid: A Research Vehicle for Open-Ended Action Imitation
    24 schema:pagination 67-82
    25 schema:productId N01588ee227dd4ae19388bdda35bc150c
    26 N3abba8fd62654267bb1f12698ec0e293
    27 N522b1aa4e2924185bdb459ee25d441d2
    28 schema:publisher N8d9cf56faa8e4c47b71c86d7eb7aab1b
    29 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030298129
    30 https://doi.org/10.1007/3-540-36460-9_5
    31 schema:sdDatePublished 2019-04-16T05:26
    32 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    33 schema:sdPublisher N48c0ad0d2e5b4476b4fb38b8882fa404
    34 schema:url https://link.springer.com/10.1007%2F3-540-36460-9_5
    35 sgo:license sg:explorer/license/
    36 sgo:sdDataset chapters
    37 rdf:type schema:Chapter
    38 N01588ee227dd4ae19388bdda35bc150c schema:name readcube_id
    39 schema:value 90558861de6eb56cbfe84410c154ada2bb97fb008bc6c8b054903eedc565e28e
    40 rdf:type schema:PropertyValue
    41 N34b458074c274920b80b050bdaf60a76 rdf:first sg:person.010547235761.75
    42 rdf:rest rdf:nil
    43 N3abba8fd62654267bb1f12698ec0e293 schema:name dimensions_id
    44 schema:value pub.1030298129
    45 rdf:type schema:PropertyValue
    46 N48c0ad0d2e5b4476b4fb38b8882fa404 schema:name Springer Nature - SN SciGraph project
    47 rdf:type schema:Organization
    48 N522b1aa4e2924185bdb459ee25d441d2 schema:name doi
    49 schema:value 10.1007/3-540-36460-9_5
    50 rdf:type schema:PropertyValue
    51 N5a9b9a5384984bab8536b7e639ad3b52 rdf:first Nca5e047fbd0f44b6af4da3f51dea5924
    52 rdf:rest N9be6c49385e341a9aa7f863106d8c059
    53 N5ba85af253dc419b84038f4320e3138a schema:familyName Zelinsky
    54 schema:givenName Alexander
    55 rdf:type schema:Person
    56 N6244eaf609074be1ace88e7d5742edb4 schema:name ATR Human Information Science Labs., Japan
    57 rdf:type schema:Organization
    58 N8d9cf56faa8e4c47b71c86d7eb7aab1b schema:location Berlin, Heidelberg
    59 schema:name Springer Berlin Heidelberg
    60 rdf:type schema:Organisation
    61 N9be6c49385e341a9aa7f863106d8c059 rdf:first N5ba85af253dc419b84038f4320e3138a
    62 rdf:rest rdf:nil
    63 Nac9243f49787486b818d933bd1dff30a rdf:first sg:person.011214060205.82
    64 rdf:rest N34b458074c274920b80b050bdaf60a76
    65 Nae2f90e666a54a25a73cdca074edf20b rdf:first sg:person.013372311431.62
    66 rdf:rest Nac9243f49787486b818d933bd1dff30a
    67 Nca5e047fbd0f44b6af4da3f51dea5924 schema:familyName Jarvis
    68 schema:givenName Raymond Austin
    69 rdf:type schema:Person
    70 Ne13dbe0f4ad24041809072d8f0b3993d schema:isbn 978-3-540-00550-6
    71 schema:name Robotics Research
    72 rdf:type schema:Book
    73 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    74 schema:name Information and Computing Sciences
    75 rdf:type schema:DefinedTerm
    76 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    77 schema:name Artificial Intelligence and Image Processing
    78 rdf:type schema:DefinedTerm
    79 sg:person.010547235761.75 schema:affiliation https://www.grid.ac/institutes/grid.208504.b
    80 schema:familyName Nagakubo
    81 schema:givenName Akihiko
    82 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010547235761.75
    83 rdf:type schema:Person
    84 sg:person.011214060205.82 schema:affiliation N6244eaf609074be1ace88e7d5742edb4
    85 schema:familyName Cheng
    86 schema:givenName Gordon
    87 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011214060205.82
    88 rdf:type schema:Person
    89 sg:person.013372311431.62 schema:affiliation https://www.grid.ac/institutes/grid.26999.3d
    90 schema:familyName Kuniyoshi
    91 schema:givenName Yasuo
    92 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013372311431.62
    93 rdf:type schema:Person
    94 sg:pub.10.1007/3-540-48834-0_5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036490056
    95 https://doi.org/10.1007/3-540-48834-0_5
    96 rdf:type schema:CreativeWork
    97 https://doi.org/10.1016/0921-8890(95)00054-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030224728
    98 rdf:type schema:CreativeWork
    99 https://doi.org/10.1016/s0893-6080(96)00043-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015271032
    100 rdf:type schema:CreativeWork
    101 https://doi.org/10.1016/s0893-6080(99)00070-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024464879
    102 rdf:type schema:CreativeWork
    103 https://doi.org/10.1109/70.338535 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061216127
    104 rdf:type schema:CreativeWork
    105 https://doi.org/10.1109/iros.1997.655104 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094837300
    106 rdf:type schema:CreativeWork
    107 https://doi.org/10.1109/iros.2000.895195 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093255774
    108 rdf:type schema:CreativeWork
    109 https://doi.org/10.1109/iros.2000.895198 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095629380
    110 rdf:type schema:CreativeWork
    111 https://doi.org/10.1109/robot.1998.677288 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095706289
    112 rdf:type schema:CreativeWork
    113 https://doi.org/10.1109/robot.2000.846360 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093826163
    114 rdf:type schema:CreativeWork
    115 https://doi.org/10.4324/9781315802794 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001676034
    116 rdf:type schema:CreativeWork
    117 https://www.grid.ac/institutes/grid.208504.b schema:alternateName National Institute of Advanced Industrial Science and Technology
    118 schema:name National Institute of Advanced Industrial Science and Technology, Japan
    119 rdf:type schema:Organization
    120 https://www.grid.ac/institutes/grid.26999.3d schema:alternateName University of Tokyo
    121 schema:name Dept. of Mechano-Informatics, School of Information Science and Technology, The Univ.of Tokyo, Japan
    122 rdf:type schema:Organization
     



