Automated Collection And Labeling Of Object Data


Ontology type: sgo:Patent     


Patent Info

DATE

2016-10-13T00:00

AUTHORS

THIBODEAU, BRYAN J.; REVOW, MICHAEL; JALOBEANU, MIHAI; SHIRAKYAN, GRIGOR

ABSTRACT

Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.
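
To make the described workflow concrete, here is a minimal sketch of the collection-and-labeling loop in Python. It is not taken from the patent text: Pose, arm.move_to, arm.current_pose, and camera.capture are hypothetical stand-ins for whatever mechanical device and sensors an actual system would use.

# Minimal sketch of the collection/labeling loop described in the abstract.
# The arm and camera interfaces (move_to, current_pose, capture) are hypothetical.
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Pose:
    """6-DOF pose of the object within the three-dimensional workspace."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def collect_labeled_images(arm, cameras, object_id: str, poses: List[Pose]) -> list:
    """Maneuver the object through each pose, capture one image per sensor,
    and label every image with the reported pose and the object identity."""
    records = []
    for i, pose in enumerate(poses):
        arm.move_to(pose)                       # mechanical device maneuvers the object
        reported = arm.current_pose()           # pose data is reported by the device itself
        for cam_id, camera in enumerate(cameras):
            image_path = camera.capture(f"{object_id}_p{i}_c{cam_id}.png")
            records.append({
                "object_id": object_id,         # information identifying the object
                "pose": asdict(reported),       # data specifying the pose
                "sensor": cam_id,
                "image": image_path,
            })
    return records                              # labeled images form the per-object database / training set

The returned records correspond to the labeled images the abstract describes: each pairs an image with the pose supplied by the mechanical device and the object identity, ready to be stored or used to train a detector and classifier.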

Related SciGraph Publications

  • 2014. Camera Calibration in COMPUTER VISION

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2790", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "name": "THIBODEAU, BRYAN J.", 
            "type": "Person"
          }, 
          {
            "name": "REVOW, MICHAEL", 
            "type": "Person"
          }, 
          {
            "name": "JALOBEANU, MIHAI", 
            "type": "Person"
          }, 
          {
            "name": "SHIRAKYAN, GRIGOR", 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-0-387-31439-6_164", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1050178479", 
              "https://doi.org/10.1007/978-0-387-31439-6_164"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-0-387-31439-6_164", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1050178479", 
              "https://doi.org/10.1007/978-0-387-31439-6_164"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2016-10-13T00:00", 
        "description": "

    Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.

    ", "id": "sg:patent.WO-2016164326-A1", "keywords": [ "automated collection", "physical object", "real-world environment", "mechanical device", "maneuver", "workspace", "input", "sensor", "database", "detector", "classifier", "environment" ], "name": "AUTOMATED COLLECTION AND LABELING OF OBJECT DATA", "recipient": [ { "id": "https://www.grid.ac/institutes/grid.419815.0", "type": "Organization" } ], "sameAs": [ "https://app.dimensions.ai/details/patent/WO-2016164326-A1" ], "sdDataset": "patents", "sdDatePublished": "2019-04-18T10:18", "sdLicense": "https://scigraph.springernature.com/explorer/license/", "sdPublisher": { "name": "Springer Nature - SN SciGraph project", "type": "Organization" }, "sdSource": "s3://com-uberresearch-data-patents-target-20190320-rc/data/sn-export/402f166718b70575fb5d4ffe01f064d1/0000100128-0000352499/json_export_01346.jsonl", "type": "Patent" } ]
     

    Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML (see the license info for terms of use).
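
    Since the record is plain JSON-LD, it can be inspected with ordinary JSON tooling. The sketch below is only an illustration (not part of SciGraph) and assumes the record shown above has been saved locally as patent.json, a hypothetical filename.

    # Sketch: pull a few fields out of the JSON-LD record shown above.
    # Assumes the record was saved locally as patent.json (hypothetical filename).
    import json

    with open("patent.json") as f:
        records = json.load(f)          # the record is a JSON array containing one object

    patent = records[0]
    print(patent["name"])               # AUTOMATED COLLECTION AND LABELING OF OBJECT DATA
    print(patent["datePublished"])      # 2016-10-13T00:00
    print([a["name"] for a in patent["author"]])
    print(", ".join(patent["keywords"]))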

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.WO-2016164326-A1'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.WO-2016164326-A1'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.WO-2016164326-A1'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.WO-2016164326-A1'
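
    The same content negotiation can be done from code. The sketch below mirrors the curl commands above using the third-party requests library; any HTTP client with custom headers would work equally well.

    # Sketch: fetch the record in each RDF serialization via HTTP content negotiation,
    # mirroring the curl examples above.
    import requests

    URL = "https://scigraph.springernature.com/patent.WO-2016164326-A1"
    FORMATS = {
        "jsonld":   "application/ld+json",
        "nt":       "application/n-triples",
        "ttl":      "text/turtle",
        "rdf.xml":  "application/rdf+xml",
    }

    for ext, mime in FORMATS.items():
        resp = requests.get(URL, headers={"Accept": mime})
        resp.raise_for_status()
        with open(f"WO-2016164326-A1.{ext}", "w", encoding="utf-8") as f:
            f.write(resp.text)
        print(f"saved {ext} ({len(resp.content)} bytes)")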


     

    This table displays all metadata directly associated with this object as RDF triples.

    53 TRIPLES      15 PREDICATES      28 URIs      20 LITERALS      2 BLANK NODES

    Subject Predicate Object
    1 sg:patent.WO-2016164326-A1 schema:about anzsrc-for:2746
    2 anzsrc-for:2790
    3 schema:author N9da8899a2cf647ae91b819452bbe09a1
    4 schema:citation sg:pub.10.1007/978-0-387-31439-6_164
    5 schema:datePublished 2016-10-13T00:00
    6 schema:description <p num="0000">Data about a physical object in a real-world environment is automatically collected and labeled. A mechanical device is used to maneuver the object into different poses within a three-dimensional workspace in the real-world environment. While the object is in each different pose an image of the object is input from one or more sensors and data specifying the pose is input from the mechanical device. The image of the object input from each of the sensors for each different pose is labeled with the data specifying the pose and with information identifying the object. A database for the object that includes these labeled images can be generated. The labeled images can also be used to train a detector and classifier to detect and recognize the object when it is in an environment that is similar to the real-world environment.</p>
    7 schema:keywords automated collection
    8 classifier
    9 database
    10 detector
    11 environment
    12 input
    13 maneuver
    14 mechanical device
    15 physical object
    16 real-world environment
    17 sensor
    18 workspace
    19 schema:name AUTOMATED COLLECTION AND LABELING OF OBJECT DATA
    20 schema:recipient https://www.grid.ac/institutes/grid.419815.0
    21 schema:sameAs https://app.dimensions.ai/details/patent/WO-2016164326-A1
    22 schema:sdDatePublished 2019-04-18T10:18
    23 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    24 schema:sdPublisher Nbd67c59bb3394ace8330ce5642dcfd72
    25 sgo:license sg:explorer/license/
    26 sgo:sdDataset patents
    27 rdf:type sgo:Patent
    28 N2f0ae5c4d3a9425391f9bdc2229c04e4 schema:name SHIRAKYAN, GRIGOR
    29 rdf:type schema:Person
    30 N7c15b8ee82224e079f690fd57d8959cd rdf:first N2f0ae5c4d3a9425391f9bdc2229c04e4
    31 rdf:rest rdf:nil
    32 N9da8899a2cf647ae91b819452bbe09a1 rdf:first Nf0762e1fdebc4f9b8ef586eccfb4d88d
    33 rdf:rest Nc75faa80940d4d5989c96920f59edfdd
    34 Nbd35b045741c4e6890287be64cc07b18 schema:name REVOW, MICHAEL
    35 rdf:type schema:Person
    36 Nbd67c59bb3394ace8330ce5642dcfd72 schema:name Springer Nature - SN SciGraph project
    37 rdf:type schema:Organization
    38 Nbeba28333c274c28ba3ac2f7aff686ea schema:name JALOBEANU, MIHAI
    39 rdf:type schema:Person
    40 Nc65863d6488f4e4eb233663a156c066c rdf:first Nbeba28333c274c28ba3ac2f7aff686ea
    41 rdf:rest N7c15b8ee82224e079f690fd57d8959cd
    42 Nc75faa80940d4d5989c96920f59edfdd rdf:first Nbd35b045741c4e6890287be64cc07b18
    43 rdf:rest Nc65863d6488f4e4eb233663a156c066c
    44 Nf0762e1fdebc4f9b8ef586eccfb4d88d schema:name THIBODEAU, BRYAN J.
    45 rdf:type schema:Person
    46 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
    47 rdf:type schema:DefinedTerm
    48 anzsrc-for:2790 schema:inDefinedTermSet anzsrc-for:
    49 rdf:type schema:DefinedTerm
    50 sg:pub.10.1007/978-0-387-31439-6_164 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050178479
    51 https://doi.org/10.1007/978-0-387-31439-6_164
    52 rdf:type schema:CreativeWork
    53 https://www.grid.ac/institutes/grid.419815.0 rdf:type schema:Organization
     



