Training constrained deconvolutional networks for road scene semantic segmentation


Ontology type: sgo:Patent     


Patent Info

DATE

2018-03-13T00:00

AUTHORS

German ROS SANCHEZ , Simon Stent , Pablo ALCANTARILLA

ABSTRACT

A source deconvolutional network (S-Net) is adaptively trained to perform semantic segmentation. Image data is then input to the S-Net and its outputs are measured. The same image data and the measured outputs of the S-Net are then used to train a target deconvolutional network. The target deconvolutional network is defined by substantially fewer numerical parameters than the source network.
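The scheme in the abstract, train a large source network, record its outputs on image data, then train a compact target network to reproduce those outputs, is the teacher-student (knowledge distillation) pattern. Below is a minimal numerical sketch of that pattern only; the linear "networks", sizes, and learning rate are all illustrative stand-ins, not taken from the patent (whose networks are deconvolutional):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "source" network: a fixed linear map from per-pixel features to
# class scores. Built from rank-4 factors so a small student can match it.
n_pix, n_feat, n_cls = 256, 64, 10
W_src = rng.normal(size=(n_feat, 4)) @ rng.normal(size=(4, n_cls))

def source_net(feats):
    """Return the source network's class scores (its 'measured outputs')."""
    return feats @ W_src

# "Target" network with substantially fewer parameters: a low-rank
# factorisation A @ B in place of the source's full weight matrix.
rank = 4
A = 0.1 * rng.normal(size=(n_feat, rank))
B = 0.1 * rng.normal(size=(rank, n_cls))

x = rng.normal(size=(n_pix, n_feat))   # image data, as per-pixel features
y_src = source_net(x)                  # measured outputs of the source net

# Train the target on the SAME image data to reproduce the SOURCE outputs
# (plain gradient descent on the mean-squared matching error).
lr = 0.005
for _ in range(4000):
    err = x @ A @ B - y_src
    A -= lr * (x.T @ err @ B.T) / n_pix
    B -= lr * (A.T @ (x.T @ err)) / n_pix

mse = float(np.mean((x @ A @ B - y_src) ** 2))
n_tgt, n_src = A.size + B.size, W_src.size
print(f"target params: {n_tgt}, source params: {n_src}, matching MSE: {mse:.4f}")
```

The target here stores 296 parameters against the source's 640; the distillation loss drives the small network toward the large one's input-output behaviour rather than toward ground-truth labels.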

Related SciGraph Publications

JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/2746", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "name": "German ROS SANCHEZ", 
        "type": "Person"
      }, 
      {
        "name": "Simon Stent", 
        "type": "Person"
      }, 
      {
        "name": "Pablo ALCANTARILLA", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "https://doi.org/10.1016/j.patrec.2008.04.005", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1001414384"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2012.231", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1003742061"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-015-0816-y", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1009767488", 
          "https://doi.org/10.1007/s11263-015-0816-y"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-014-0733-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1017073734", 
          "https://doi.org/10.1007/s11263-014-0733-5"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-007-0090-8", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1027534025", 
          "https://doi.org/10.1007/s11263-007-0090-8"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2009.109", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061743700"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2014.2299799", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061744621"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1145/1015706.1015720", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1063148832"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2018-03-13T00:00", 
    "description": "A source deconvolutional network is adaptively trained to perform semantic segmentation. Image data is then input to the source deconvolutional network and outputs of the S-Net are measured. The same image data and the measured outputs of the source deconvolutional network are then used to train a target deconvolutional network. The target deconvolutional network is defined by a substantially fewer numerical parameters than the source deconvolutional network.", 
    "id": "sg:patent.US-9916522-B2", 
    "keywords": [
      "network", 
      "segmentation", 
      "image data", 
      "input", 
      "output", 
      "net", 
      "same image", 
      "parameter"
    ], 
    "name": "Training constrained deconvolutional networks for road scene semantic segmentation", 
    "recipient": [
      {
        "id": "https://www.grid.ac/institutes/grid.410825.a", 
        "type": "Organization"
      }
    ], 
    "sameAs": [
      "https://app.dimensions.ai/details/patent/US-9916522-B2"
    ], 
    "sdDataset": "patents", 
    "sdDatePublished": "2019-04-18T10:22", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-patents-target-20190320-rc/data/sn-export/402f166718b70575fb5d4ffe01f064d1/0000100128-0000352499/json_export_01704.jsonl", 
    "type": "Patent"
  }
]
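Because JSON-LD is plain JSON, the record above can be inspected with ordinary JSON tooling. A small sketch using only Python's standard library, keyed to the field names in the record (a trimmed copy of the record is inlined here; the full record carries more fields):

```python
import json

# A trimmed copy of the JSON-LD record above.
record_text = """
[
  {
    "id": "sg:patent.US-9916522-B2",
    "name": "Training constrained deconvolutional networks for road scene semantic segmentation",
    "datePublished": "2018-03-13T00:00",
    "author": [
      {"name": "German ROS SANCHEZ", "type": "Person"},
      {"name": "Simon Stent", "type": "Person"},
      {"name": "Pablo ALCANTARILLA", "type": "Person"}
    ]
  }
]
"""

patent = json.loads(record_text)[0]               # the record is a one-element list
authors = [a["name"] for a in patent["author"]]   # author objects carry a "name" key
print(patent["id"], patent["datePublished"])
print(", ".join(authors))
```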
 

Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/patent.US-9916522-B2'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/patent.US-9916522-B2'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/patent.US-9916522-B2'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/patent.US-9916522-B2'
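The same content negotiation can be done from Python with the standard library: the Accept header selects the serialisation, exactly as in the curl calls above. This sketch only builds the request (pass it to `urllib.request.urlopen()` to actually fetch, assuming the endpoint is still live):

```python
from urllib.request import Request

# MIME types for the four serialisations listed above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> Request:
    """Build a content-negotiated request for this patent record."""
    url = "https://scigraph.springernature.com/patent.US-9916522-B2"
    return Request(url, headers={"Accept": ACCEPT[fmt]})

req = build_request("turtle")
print(req.full_url, req.get_header("Accept"))
```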


 

This table displays all metadata directly associated to this object as RDF triples.

65 TRIPLES      15 PREDICATES      30 URIs      16 LITERALS      2 BLANK NODES

Subject Predicate Object
1 sg:patent.US-9916522-B2 schema:about anzsrc-for:2746
2 schema:author Na3862619a4b94d0a9b17c9a2e8ce8d7e
3 schema:citation sg:pub.10.1007/s11263-007-0090-8
4 sg:pub.10.1007/s11263-014-0733-5
5 sg:pub.10.1007/s11263-015-0816-y
6 https://doi.org/10.1016/j.patrec.2008.04.005
7 https://doi.org/10.1109/tpami.2009.109
8 https://doi.org/10.1109/tpami.2012.231
9 https://doi.org/10.1109/tpami.2014.2299799
10 https://doi.org/10.1145/1015706.1015720
11 schema:datePublished 2018-03-13T00:00
12 schema:description <p id="p-0001" num="0000">A source deconvolutional network is adaptively trained to perform semantic segmentation. Image data is then input to the source deconvolutional network and outputs of the S-Net are measured. The same image data and the measured outputs of the source deconvolutional network are then used to train a target deconvolutional network. The target deconvolutional network is defined by a substantially fewer numerical parameters than the source deconvolutional network.</p>
13 schema:keywords image data
14 input
15 net
16 network
17 output
18 parameter
19 same image
20 segmentation
21 schema:name Training constrained deconvolutional networks for road scene semantic segmentation
22 schema:recipient https://www.grid.ac/institutes/grid.410825.a
23 schema:sameAs https://app.dimensions.ai/details/patent/US-9916522-B2
24 schema:sdDatePublished 2019-04-18T10:22
25 schema:sdLicense https://scigraph.springernature.com/explorer/license/
26 schema:sdPublisher Nf74dd5fc4626421e8c4dc1ccf0036b68
27 sgo:license sg:explorer/license/
28 sgo:sdDataset patents
29 rdf:type sgo:Patent
30 N1d202555a4204b4d828cb2174c277996 schema:name Pablo ALCANTARILLA
31 rdf:type schema:Person
32 N324da387df36453baff9623b1ea2a1cf rdf:first N1d202555a4204b4d828cb2174c277996
33 rdf:rest rdf:nil
34 N9d33d0ea47104d96ab2831797dd5ccff schema:name Simon Stent
35 rdf:type schema:Person
36 Na3862619a4b94d0a9b17c9a2e8ce8d7e rdf:first Nd5092578d91547618638f82a2b78c10f
37 rdf:rest Nba84ada8f14841b89e26dc8b22ff9c40
38 Nba84ada8f14841b89e26dc8b22ff9c40 rdf:first N9d33d0ea47104d96ab2831797dd5ccff
39 rdf:rest N324da387df36453baff9623b1ea2a1cf
40 Nd5092578d91547618638f82a2b78c10f schema:name German ROS SANCHEZ
41 rdf:type schema:Person
42 Nf74dd5fc4626421e8c4dc1ccf0036b68 schema:name Springer Nature - SN SciGraph project
43 rdf:type schema:Organization
44 anzsrc-for:2746 schema:inDefinedTermSet anzsrc-for:
45 rdf:type schema:DefinedTerm
46 sg:pub.10.1007/s11263-007-0090-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027534025
47 https://doi.org/10.1007/s11263-007-0090-8
48 rdf:type schema:CreativeWork
49 sg:pub.10.1007/s11263-014-0733-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017073734
50 https://doi.org/10.1007/s11263-014-0733-5
51 rdf:type schema:CreativeWork
52 sg:pub.10.1007/s11263-015-0816-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1009767488
53 https://doi.org/10.1007/s11263-015-0816-y
54 rdf:type schema:CreativeWork
55 https://doi.org/10.1016/j.patrec.2008.04.005 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001414384
56 rdf:type schema:CreativeWork
57 https://doi.org/10.1109/tpami.2009.109 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061743700
58 rdf:type schema:CreativeWork
59 https://doi.org/10.1109/tpami.2012.231 schema:sameAs https://app.dimensions.ai/details/publication/pub.1003742061
60 rdf:type schema:CreativeWork
61 https://doi.org/10.1109/tpami.2014.2299799 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061744621
62 rdf:type schema:CreativeWork
63 https://doi.org/10.1145/1015706.1015720 schema:sameAs https://app.dimensions.ai/details/publication/pub.1063148832
64 rdf:type schema:CreativeWork
65 https://www.grid.ac/institutes/grid.410825.a rdf:type schema:Organization
 



