DeepTAM: Deep Tracking and Mapping


Ontology type: schema:Chapter | Open Access: True


Chapter Info

DATE

2018-10-06

AUTHORS

Huizhong Zhou , Benjamin Ummenhofer , Thomas Brox

ABSTRACT

We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms. We compare favorably against strong classic and deep-learning-powered dense depth algorithms.
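The abstract describes accumulating photometric information in a cost volume centered at the current depth estimate. The following NumPy sketch shows one plausible way such a plane-sweep cost volume can be built; it is illustrative only, not the authors' implementation, and all names and parameters here (K, R, t, band_width, num_labels) are assumptions for this example.

# Illustrative sketch (not the authors' code): build a photometric cost
# volume whose depth hypotheses form a band around a current depth estimate.
# Intensities are assumed to lie in [0, 1].
import numpy as np

def plane_sweep_cost_volume(keyframe, image, depth_est, K, R, t,
                            band_width=0.5, num_labels=32):
    """Return a cost volume of shape (num_labels, H, W).

    keyframe, image : (H, W) grayscale images
    depth_est       : (H, W) current depth estimate for the keyframe
    K               : (3, 3) camera intrinsics
    R, t            : rotation (3, 3) and translation (3,) from the keyframe
                      camera to the current camera
    """
    H, W = keyframe.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(np.float64)
    rays = np.linalg.inv(K) @ pix                      # back-projection rays

    # Depth hypotheses as relative offsets around the current estimate.
    offsets = np.linspace(-band_width, band_width, num_labels)
    cost_volume = np.zeros((num_labels, H, W))

    for i, off in enumerate(offsets):
        depth = np.clip(depth_est.reshape(-1) * (1.0 + off), 1e-3, None)
        pts = rays * depth                             # 3D points in keyframe camera
        pts_cam2 = R @ pts + t[:, None]                # move into the current camera
        proj = K @ pts_cam2
        z = proj[2]
        x = proj[0] / np.maximum(z, 1e-9)
        y = proj[1] / np.maximum(z, 1e-9)

        # Nearest-neighbour lookup; pixels that warp outside the image or
        # behind the camera receive a fixed high cost.
        xi = np.round(x).astype(int)
        yi = np.round(y).astype(int)
        valid = (xi >= 0) & (xi < W) & (yi >= 0) & (yi < H) & (z > 0)
        warped = np.full(H * W, np.nan)
        warped[valid] = image[yi[valid], xi[valid]]

        err = np.abs(warped - keyframe.reshape(-1))
        err[~valid] = 1.0
        cost_volume[i] = err.reshape(H, W)

    return cost_volume

In the paper's setting, a network then combines such a cost volume with the keyframe image to update the depth prediction; the sketch above only covers the geometric accumulation step.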

PAGES

851-868

Book

TITLE

Computer Vision – ECCV 2018

ISBN

978-3-030-01269-4
978-3-030-01270-0

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50

DOI

http://dx.doi.org/10.1007/978-3-030-01270-0_50

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1107454859


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service: JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Freiburg, Freiburg, Germany", 
          "id": "http://www.grid.ac/institutes/grid.5963.9", 
          "name": [
            "University of Freiburg, Freiburg, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zhou", 
        "givenName": "Huizhong", 
        "id": "sg:person.015337214304.05", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015337214304.05"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Freiburg, Freiburg, Germany", 
          "id": "http://www.grid.ac/institutes/grid.5963.9", 
          "name": [
            "University of Freiburg, Freiburg, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Ummenhofer", 
        "givenName": "Benjamin", 
        "id": "sg:person.010435022672.00", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010435022672.00"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Freiburg, Freiburg, Germany", 
          "id": "http://www.grid.ac/institutes/grid.5963.9", 
          "name": [
            "University of Freiburg, Freiburg, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Brox", 
        "givenName": "Thomas", 
        "id": "sg:person.012443225372.65", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2018-10-06", 
    "datePublishedReg": "2018-10-06", 
    "description": "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6\u00a0DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.", 
    "editor": [
      {
        "familyName": "Ferrari", 
        "givenName": "Vittorio", 
        "type": "Person"
      }, 
      {
        "familyName": "Hebert", 
        "givenName": "Martial", 
        "type": "Person"
      }, 
      {
        "familyName": "Sminchisescu", 
        "givenName": "Cristian", 
        "type": "Person"
      }, 
      {
        "familyName": "Weiss", 
        "givenName": "Yair", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-030-01270-0_50", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-030-01269-4", 
        "978-3-030-01270-0"
      ], 
      "name": "Computer Vision \u2013 ECCV 2018", 
      "type": "Book"
    }, 
    "keywords": [
      "cost volume", 
      "current camera image", 
      "depth map estimation", 
      "image-based priors", 
      "current depth estimate", 
      "camera tracking", 
      "camera pose", 
      "deep learning", 
      "pose hypotheses", 
      "camera motion", 
      "keyframe images", 
      "dataset bias", 
      "Deep Tracking", 
      "art results", 
      "camera images", 
      "learning problem", 
      "mapping network", 
      "depth prediction", 
      "tracking algorithm", 
      "map estimation", 
      "more accurate predictions", 
      "tracking", 
      "synthetic viewpoint", 
      "algorithm", 
      "images", 
      "depth estimates", 
      "depth algorithm", 
      "accurate prediction", 
      "large number", 
      "depth measurements", 
      "pose", 
      "network", 
      "learning", 
      "mapping", 
      "priors", 
      "information", 
      "prediction", 
      "performance", 
      "system", 
      "viewpoint", 
      "estimation", 
      "motion", 
      "number", 
      "use", 
      "measurements", 
      "volume", 
      "state", 
      "results", 
      "increment", 
      "respect", 
      "problem", 
      "approach", 
      "competes", 
      "estimates", 
      "bias", 
      "hypothesis"
    ], 
    "name": "DeepTAM: Deep Tracking and Mapping", 
    "pagination": "851-868", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1107454859"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-030-01270-0_50"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-030-01270-0_50", 
      "https://app.dimensions.ai/details/publication/pub.1107454859"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-12-01T06:51", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/chapter/chapter_323.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-030-01270-0_50"
  }
]
 

Download the RDF metadata as: json-ld, nt, turtle, or xml (see license info).

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50'
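
The same content negotiation can be done from Python; the sketch below assumes the third-party requests package and reads a few fields whose names match the JSON-LD record shown above.

# Fetch the record as JSON-LD and print a few fields.
import requests

URL = "https://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50"

resp = requests.get(URL, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

record = resp.json()[0]          # the payload is a JSON array with one object
print(record["name"])            # "DeepTAM: Deep Tracking and Mapping"
print(record["datePublished"])   # "2018-10-06"
print([a["familyName"] for a in record["author"]])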

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50'
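
Because N-Triples is line-based, batch filtering can be a plain line scan. A small Python sketch follows; the file name record.nt is hypothetical, e.g. the output of the curl command above saved to disk.

# Keep only the keyword triples from a locally saved N-Triples dump.
with open("record.nt") as f:
    for line in f:
        if "schema.org/keywords" in line:
            print(line.strip())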

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-030-01270-0_50'


 

This table displays all metadata directly associated with this object as RDF triples.

144 TRIPLES      22 PREDICATES      80 URIs      73 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-030-01270-0_50 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N6a4e1d39064d4e76af0358494d7152e2
4 schema:datePublished 2018-10-06
5 schema:datePublishedReg 2018-10-06
6 schema:description We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.
7 schema:editor N5852ac2c40c84f3ba0a339dcfa83df25
8 schema:genre chapter
9 schema:isAccessibleForFree true
10 schema:isPartOf N43a7141f2f634d1590fee55dea3f56f0
11 schema:keywords Deep Tracking
12 accurate prediction
13 algorithm
14 approach
15 art results
16 bias
17 camera images
18 camera motion
19 camera pose
20 camera tracking
21 competes
22 cost volume
23 current camera image
24 current depth estimate
25 dataset bias
26 deep learning
27 depth algorithm
28 depth estimates
29 depth map estimation
30 depth measurements
31 depth prediction
32 estimates
33 estimation
34 hypothesis
35 image-based priors
36 images
37 increment
38 information
39 keyframe images
40 large number
41 learning
42 learning problem
43 map estimation
44 mapping
45 mapping network
46 measurements
47 more accurate predictions
48 motion
49 network
50 number
51 performance
52 pose
53 pose hypotheses
54 prediction
55 priors
56 problem
57 respect
58 results
59 state
60 synthetic viewpoint
61 system
62 tracking
63 tracking algorithm
64 use
65 viewpoint
66 volume
67 schema:name DeepTAM: Deep Tracking and Mapping
68 schema:pagination 851-868
69 schema:productId N525a8960a8574ea5bb801c657b88f2e5
70 N7cfc22fb320943f7a104acd6aafaa4c4
71 schema:publisher N8af1ec45b59c4db2af22b6cc3e5578f6
72 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454859
73 https://doi.org/10.1007/978-3-030-01270-0_50
74 schema:sdDatePublished 2022-12-01T06:51
75 schema:sdLicense https://scigraph.springernature.com/explorer/license/
76 schema:sdPublisher N77f634ea14be43569d2a5ae46ea2cae3
77 schema:url https://doi.org/10.1007/978-3-030-01270-0_50
78 sgo:license sg:explorer/license/
79 sgo:sdDataset chapters
80 rdf:type schema:Chapter
81 N11731cc312d941e9a106f7559e801370 rdf:first Nd480918247b64c338fc0e431595cd6b7
82 rdf:rest N777f7988cb2d49029abbe0f12ddc00e0
83 N188a1494e8ed4f1ca1dbe3aa101fa0e6 schema:familyName Hebert
84 schema:givenName Martial
85 rdf:type schema:Person
86 N2bd5057e1ba44f33ab3c9c198bede7da schema:familyName Weiss
87 schema:givenName Yair
88 rdf:type schema:Person
89 N43a7141f2f634d1590fee55dea3f56f0 schema:isbn 978-3-030-01269-4
90 978-3-030-01270-0
91 schema:name Computer Vision – ECCV 2018
92 rdf:type schema:Book
93 N525a8960a8574ea5bb801c657b88f2e5 schema:name doi
94 schema:value 10.1007/978-3-030-01270-0_50
95 rdf:type schema:PropertyValue
96 N54c0e4539e424db6b497ea159096b04b rdf:first N188a1494e8ed4f1ca1dbe3aa101fa0e6
97 rdf:rest N11731cc312d941e9a106f7559e801370
98 N5852ac2c40c84f3ba0a339dcfa83df25 rdf:first Nceebf4ef95fe4a9d8e17abbae505feae
99 rdf:rest N54c0e4539e424db6b497ea159096b04b
100 N6a4e1d39064d4e76af0358494d7152e2 rdf:first sg:person.015337214304.05
101 rdf:rest N9b3fffe6b4b142588e8e6560e6105813
102 N777f7988cb2d49029abbe0f12ddc00e0 rdf:first N2bd5057e1ba44f33ab3c9c198bede7da
103 rdf:rest rdf:nil
104 N77f634ea14be43569d2a5ae46ea2cae3 schema:name Springer Nature - SN SciGraph project
105 rdf:type schema:Organization
106 N7cfc22fb320943f7a104acd6aafaa4c4 schema:name dimensions_id
107 schema:value pub.1107454859
108 rdf:type schema:PropertyValue
109 N8af1ec45b59c4db2af22b6cc3e5578f6 schema:name Springer Nature
110 rdf:type schema:Organisation
111 N9b3fffe6b4b142588e8e6560e6105813 rdf:first sg:person.010435022672.00
112 rdf:rest Ne8f28daa09684cf392189145c82eac93
113 Nceebf4ef95fe4a9d8e17abbae505feae schema:familyName Ferrari
114 schema:givenName Vittorio
115 rdf:type schema:Person
116 Nd480918247b64c338fc0e431595cd6b7 schema:familyName Sminchisescu
117 schema:givenName Cristian
118 rdf:type schema:Person
119 Ne8f28daa09684cf392189145c82eac93 rdf:first sg:person.012443225372.65
120 rdf:rest rdf:nil
121 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
122 schema:name Information and Computing Sciences
123 rdf:type schema:DefinedTerm
124 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
125 schema:name Artificial Intelligence and Image Processing
126 rdf:type schema:DefinedTerm
127 sg:person.010435022672.00 schema:affiliation grid-institutes:grid.5963.9
128 schema:familyName Ummenhofer
129 schema:givenName Benjamin
130 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010435022672.00
131 rdf:type schema:Person
132 sg:person.012443225372.65 schema:affiliation grid-institutes:grid.5963.9
133 schema:familyName Brox
134 schema:givenName Thomas
135 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65
136 rdf:type schema:Person
137 sg:person.015337214304.05 schema:affiliation grid-institutes:grid.5963.9
138 schema:familyName Zhou
139 schema:givenName Huizhong
140 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015337214304.05
141 rdf:type schema:Person
142 grid-institutes:grid.5963.9 schema:alternateName University of Freiburg, Freiburg, Germany
143 schema:name University of Freiburg, Freiburg, Germany
144 rdf:type schema:Organization
 



