Object Classification In Image Data Using Machine Learning Models


Ontology type: sgo:Patent     


Patent Info

DATE

2018-05-30T00:00

AUTHORS

Farooqi, Waqas Ahmad; Lipps, Jonas; SCHMIDT, ECKEHARD; FRICKE, THOMAS; VERZANO, NEMRUDE

ABSTRACT

Combined color and depth data for a field of view is received. Thereafter, using at least one bounding polygon algorithm, at least one proposed bounding polygon is defined for the field of view. It can then be determined, using a binary classifier having at least one machine learning model trained using a plurality of images of known objects, whether each proposed bounding polygon encapsulates an object. The image data within each bounding polygon that is determined to encapsulate an object can then be provided to a first object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. Further, the image data within each bounding polygon that is determined to encapsulate an object is provided to a second object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. A final classification for each bounding polygon is then determined based on the output of the first classifier machine learning model and the output of the second classifier machine learning model.
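The staged pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the `crop_fn` helper, the scikit-learn-style `predict_proba` interface, the 0.5 acceptance threshold, and the averaging of the two classifiers' class probabilities are all assumptions, since the abstract does not specify how the two outputs are combined.

```python
import numpy as np

def classify_objects(polygons, crop_fn, binary_clf, clf_a, clf_b):
    """Filter proposed bounding polygons with a binary object/no-object
    classifier, then classify each surviving crop with two independently
    trained models and combine their outputs into a final class label.

    crop_fn(polygon) -> feature vector for the image data inside that polygon.
    Each classifier exposes predict_proba(X) -> (n, n_classes) array.
    """
    results = {}
    for poly in polygons:
        crop = crop_fn(poly)
        # Stage 1: does this proposed polygon actually encapsulate an object?
        if binary_clf.predict_proba([crop])[0][1] < 0.5:
            continue
        # Stage 2: two independent object classifiers score the same crop.
        p_a = clf_a.predict_proba([crop])[0]
        p_b = clf_b.predict_proba([crop])[0]
        # Stage 3: final classification from both outputs (here: averaged
        # class probabilities, one possible combination rule).
        results[poly] = int(np.argmax((p_a + p_b) / 2.0))
    return results
```

With three dummy classifiers, a polygon that passes the binary stage receives the argmax of the averaged class probabilities; polygons the binary stage rejects are dropped from the result.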

Related SciGraph Publications

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "name": "Farooqi, Waqas Ahmad", 
        "type": "Person"
      }, 
      {
        "name": "Lipps, Jonas", 
        "type": "Person"
      }, 
      {
        "name": "SCHMIDT, ECKEHARD", 
        "type": "Person"
      }, 
      {
        "name": "FRICKE, THOMAS", 
        "type": "Person"
      }, 
      {
        "name": "VERZANO, NEMRUDE", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/978-3-319-10584-0_23", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1024540204", 
          "https://doi.org/10.1007/978-3-319-10584-0_23"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/roman.2016.7745248", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094021710"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2018-05-30T00:00", 
    "description": "Combined color and depth data for a field of view is received. Thereafter, using at least one bounding polygon algorithm, at least one proposed bounding polygon is defined for the field of view. It can then be determined, using a binary classifier having at least one machine learning model trained using a plurality of images of known objects, whether each proposed bounding polygon encapsulates an object. The image data within each bounding polygon that is determined to encapsulate an object can then be provided to a first object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. Further, the image data within each bounding polygon that is determined to encapsulate an object is provided to a second object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. A final classification for each bounding polygon is then determined based on the output of the first classifier machine learning model and the output of the second classifier machine learning model.\n", 
    "id": "sg:patent.EP-3327616-A1", 
    "keywords": [
      "image data", 
      "color", 
      "depth", 
      "algorithm", 
      "polygon", 
      "classifier", 
      "machine", 
      "plurality", 
      "encapsulate", 
      "first object", 
      "classification", 
      "output"
    ], 
    "name": "OBJECT CLASSIFICATION IN IMAGE DATA USING MACHINE LEARNING MODELS", 
    "recipient": [
      {
        "id": "https://www.grid.ac/institutes/grid.19008.30", 
        "type": "Organization"
      }
    ], 
    "sameAs": [
      "https://app.dimensions.ai/details/patent/EP-3327616-A1"
    ], 
    "sdDataset": "patents", 
    "sdDatePublished": "2019-04-18T10:22", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-patents-target-20190320-rc/data/sn-export/402f166718b70575fb5d4ffe01f064d1/0000100128-0000352499/json_export_01758.jsonl", 
    "type": "Patent"
  }
]
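Because the record is plain JSON, it can be consumed with any JSON parser before reaching for a dedicated JSON-LD library. The snippet below works on an abbreviated copy of a few fields from the record above; note that SciGraph wraps the record object in a top-level list.

```python
import json

# Abbreviated copy of the SciGraph record shown above.
record_json = """
[{"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
  "id": "sg:patent.EP-3327616-A1",
  "author": [{"name": "Farooqi, Waqas Ahmad", "type": "Person"},
             {"name": "Lipps, Jonas", "type": "Person"}],
  "keywords": ["image data", "color", "depth"],
  "type": "Patent"}]
"""

records = json.loads(record_json)
patent = records[0]  # the record object sits inside a one-element list
authors = [a["name"] for a in patent["author"]]
print(patent["id"], authors, patent["keywords"])
```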
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.EP-3327616-A1'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.EP-3327616-A1'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.EP-3327616-A1'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.EP-3327616-A1'
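The same content negotiation can be done from code. The sketch below maps each format name to the Accept header used in the curl examples above and builds the corresponding request with the standard library; whether the live endpoint still honors these headers is not verified here, and the request is only constructed, not sent.

```python
import urllib.request

# MIME types mirroring the curl examples above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "xml": "application/rdf+xml",
}

def build_request(record_id, fmt):
    """Build a content-negotiated request for a SciGraph record."""
    url = f"https://scigraph.springernature.com/{record_id}"
    return urllib.request.Request(url, headers={"Accept": ACCEPT[fmt]})

req = build_request("patent.EP-3327616-A1", "turtle")
print(req.full_url, req.get_header("Accept"))
```

Sending the request would then be `urllib.request.urlopen(req)`, subject to the license terms linked above.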


 

This table displays all metadata directly associated to this object as RDF triples.

57 TRIPLES      15 PREDICATES      28 URIs      20 LITERALS      2 BLANK NODES

Subject Predicate Object
1 sg:patent.EP-3327616-A1 schema:about anzsrc-for:2746
2 schema:author N0e892b0bd16a4f13a0ec0e258cc740ca
3 schema:citation sg:pub.10.1007/978-3-319-10584-0_23
4 https://doi.org/10.1109/roman.2016.7745248
5 schema:datePublished 2018-05-30T00:00
6 schema:description <p id="pa01" num="0001">Combined color and depth data for a field of view is received. Thereafter, using at least one bounding polygon algorithm, at least one proposed bounding polygon is defined for the field of view. It can then be determined, using a binary classifier having at least one machine learning model trained using a plurality of images of known objects, whether each proposed bounding polygon encapsulates an object. The image data within each bounding polygon that is determined to encapsulate an object can then be provided to a first object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. Further, the image data within each bounding polygon that is determined to encapsulate an object is provided to a second object classifier having at least one machine learning model trained using a plurality of images of known objects, to classify the object encapsulated within the respective bounding polygon. A final classification for each bounding polygon is then determined based on the output of the first classifier machine learning model and the output of the second classifier machine learning model. <img id="iaf01" file="imgaf001.tif" wi="78" he="107" img-content="drawing" img-format="tif"/></p>
7 schema:keywords algorithm
8 classification
9 classifier
10 color
11 depth
12 encapsulate
13 first object
14 image data
15 machine
16 output
17 plurality
18 polygon
19 schema:name OBJECT CLASSIFICATION IN IMAGE DATA USING MACHINE LEARNING MODELS
20 schema:recipient https://www.grid.ac/institutes/grid.19008.30
21 schema:sameAs https://app.dimensions.ai/details/patent/EP-3327616-A1
22 schema:sdDatePublished 2019-04-18T10:22
23 schema:sdLicense https://scigraph.springernature.com/explorer/license/
24 schema:sdPublisher Nffa0bd2c16454f14ada4921ef33ca11f
25 sgo:license sg:explorer/license/
26 sgo:sdDataset patents
27 rdf:type sgo:Patent
28 N0e892b0bd16a4f13a0ec0e258cc740ca rdf:first N4f37189abbb6426cbaccc2de97b8f953
29 rdf:rest N738c46c7c8324c4184991ae6a600d280
30 N238d8aa2a19346d1a553556ea316c83c schema:name VERZANO, NEMRUDE
31 rdf:type schema:Person
32 N23c2cde85364483da76e99522997feb6 rdf:first N341b573c98da48c4a8961acfc9d60472
33 rdf:rest Ncd421674cf0b4562a1290f830224e1e2
34 N341b573c98da48c4a8961acfc9d60472 schema:name FRICKE, THOMAS
35 rdf:type schema:Person
36 N484a3715b75440f097d8645707b2be16 schema:name SCHMIDT, ECKEHARD
37 rdf:type schema:Person
38 N4f37189abbb6426cbaccc2de97b8f953 schema:name Farooqi, Waqas Ahmad
39 rdf:type schema:Person
40 N738c46c7c8324c4184991ae6a600d280 rdf:first Nbfedb3c095f842dca5dc605b94882bfa
41 rdf:rest Nc0bb4e6ee73b4336bbc55240081ab71c
42 Nbfedb3c095f842dca5dc605b94882bfa schema:name Lipps, Jonas
43 rdf:type schema:Person
44 Nc0bb4e6ee73b4336bbc55240081ab71c rdf:first N484a3715b75440f097d8645707b2be16
45 rdf:rest N23c2cde85364483da76e99522997feb6
46 Ncd421674cf0b4562a1290f830224e1e2 rdf:first N238d8aa2a19346d1a553556ea316c83c
47 rdf:rest rdf:nil
48 Nffa0bd2c16454f14ada4921ef33ca11f schema:name Springer Nature - SN SciGraph project
49 rdf:type schema:Organization
50 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
51 rdf:type schema:DefinedTerm
52 sg:pub.10.1007/978-3-319-10584-0_23 schema:sameAs https://app.dimensions.ai/details/publication/pub.1024540204
53 https://doi.org/10.1007/978-3-319-10584-0_23
54 rdf:type schema:CreativeWork
55 https://doi.org/10.1109/roman.2016.7745248 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094021710
56 rdf:type schema:CreativeWork
57 https://www.grid.ac/institutes/grid.19008.30 rdf:type schema:Organization
 



