A Naturalistic Open Source Movie for Optical Flow Evaluation


Ontology type: schema:Chapter     


Chapter Info

DATE

2012

AUTHORS

Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, Michael J. Black

ABSTRACT

Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set, suggesting that further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.

PAGES

611-625

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44

DOI

http://dx.doi.org/10.1007/978-3-642-33783-3_44

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1004909083


Indexing Status: Check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: Browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Washington, Seattle, WA, USA", 
          "id": "http://www.grid.ac/institutes/grid.34477.33", 
          "name": [
            "University of Washington, Seattle, WA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Butler", 
        "givenName": "Daniel J.", 
        "id": "sg:person.015332011533.57", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015332011533.57"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419534.e", 
          "name": [
            "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wulff", 
        "givenName": "Jonas", 
        "id": "sg:person.011017261733.90", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011017261733.90"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Georgia Institute of Technology, Atlanta, GA, USA", 
          "id": "http://www.grid.ac/institutes/grid.213917.f", 
          "name": [
            "Georgia Institute of Technology, Atlanta, GA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Stanley", 
        "givenName": "Garrett B.", 
        "id": "sg:person.01352253415.84", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01352253415.84"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419534.e", 
          "name": [
            "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Black", 
        "givenName": "Michael J.", 
        "id": "sg:person.01077541547.92", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01077541547.92"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2012", 
    "datePublishedReg": "2012-01-01", 
    "description": "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.", 
    "editor": [
      {
        "familyName": "Fitzgibbon", 
        "givenName": "Andrew", 
        "type": "Person"
      }, 
      {
        "familyName": "Lazebnik", 
        "givenName": "Svetlana", 
        "type": "Person"
      }, 
      {
        "familyName": "Perona", 
        "givenName": "Pietro", 
        "type": "Person"
      }, 
      {
        "familyName": "Sato", 
        "givenName": "Yoichi", 
        "type": "Person"
      }, 
      {
        "familyName": "Schmid", 
        "givenName": "Cordelia", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-33783-3_44", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-642-33782-6", 
        "978-3-642-33783-3"
      ], 
      "name": "Computer Vision \u2013 ECCV 2012", 
      "type": "Book"
    }, 
    "keywords": [
      "optical flow algorithm", 
      "flow algorithm", 
      "data sets", 
      "optical flow data", 
      "optical flow evaluation", 
      "ground truth optical flow", 
      "optical flow estimation", 
      "real scenes", 
      "Middlebury evaluation", 
      "open source", 
      "open-source 3D", 
      "complex data", 
      "graphic data", 
      "flow data sets", 
      "motion blur", 
      "optical flow", 
      "defocus blur", 
      "evaluation websites", 
      "synthetic data", 
      "realistic data", 
      "flow estimation", 
      "algorithm", 
      "long sequences", 
      "natural motion", 
      "blur", 
      "scene", 
      "Sintel", 
      "large motion", 
      "important features", 
      "complexity", 
      "set", 
      "movies", 
      "flow data", 
      "video", 
      "specular reflection", 
      "websites", 
      "terms of size", 
      "images", 
      "metrics", 
      "data", 
      "evaluation", 
      "features", 
      "estimation", 
      "motion", 
      "flow evaluation", 
      "method", 
      "research", 
      "difficulties", 
      "terms", 
      "use", 
      "sequence", 
      "further research", 
      "results", 
      "source", 
      "size", 
      "real films", 
      "diversity", 
      "atmospheric effects", 
      "flow", 
      "films", 
      "reflection", 
      "conditions", 
      "effect"
    ], 
    "name": "A Naturalistic Open Source Movie for Optical Flow Evaluation", 
    "pagination": "611-625", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1004909083"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-33783-3_44"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-33783-3_44", 
      "https://app.dimensions.ai/details/publication/pub.1004909083"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-11-24T21:18", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/chapter/chapter_415.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-642-33783-3_44"
  }
]
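
Once retrieved, a record like the one above can be inspected with ordinary JSON tooling. The following is a minimal Python sketch, not an official SciGraph client; the field names (`name`, `datePublished`, `author`, `productId`) come from the record above, and the inline record is a hand-trimmed excerpt of it:

```python
import json

# Trimmed excerpt of the SciGraph JSON-LD record shown above.
record_jsonld = """
[
  {
    "name": "A Naturalistic Open Source Movie for Optical Flow Evaluation",
    "datePublished": "2012",
    "pagination": "611-625",
    "author": [
      {"familyName": "Butler", "givenName": "Daniel J.", "type": "Person"},
      {"familyName": "Wulff", "givenName": "Jonas", "type": "Person"},
      {"familyName": "Stanley", "givenName": "Garrett B.", "type": "Person"},
      {"familyName": "Black", "givenName": "Michael J.", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue",
       "value": ["10.1007/978-3-642-33783-3_44"]}
    ]
  }
]
"""

def summarize(jsonld_text):
    """Pull title, year, authors, and DOI out of a SciGraph publication record."""
    record = json.loads(jsonld_text)[0]
    authors = [f"{a['givenName']} {a['familyName']}" for a in record["author"]]
    doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
    return {"title": record["name"], "year": record["datePublished"],
            "authors": authors, "doi": doi}

summary = summarize(record_jsonld)
print(summary["title"])  # A Naturalistic Open Source Movie for Optical Flow Evaluation
print(summary["doi"])    # 10.1007/978-3-642-33783-3_44
```

The record is a plain JSON array whose first element holds the publication, so no dedicated JSON-LD library is required for simple field extraction.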
 

Download the RDF metadata as: json-ld, nt, turtle, or xml.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'
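
The same content negotiation can be done from Python's standard library. This sketch mirrors the curl commands above; the `Accept` values are taken directly from them, and no request is actually sent (uncommenting the `urlopen` line would perform the fetch):

```python
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44"

# MIME types accepted by the SciGraph endpoint, per the curl examples above.
FORMATS = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt):
    """Prepare a content-negotiated request for one of the supported RDF formats."""
    req = urllib.request.Request(SCIGRAPH_URL)
    req.add_header("Accept", FORMATS[fmt])
    return req

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
# data = urllib.request.urlopen(req).read()  # would fetch the serialized record
```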


 

This table displays all metadata directly associated with this object as RDF triples.

169 TRIPLES      22 PREDICATES      88 URIs      81 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-642-33783-3_44 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N7d0262a9eb1b4c8f97d0585d97e197ee
4 schema:datePublished 2012
5 schema:datePublishedReg 2012-01-01
6 schema:description Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.
7 schema:editor N0fdd3169aa31428f9c8965de9aa9f8fc
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf N98cc01ab0428486db4cdf316f11715ed
11 schema:keywords Middlebury evaluation
12 Sintel
13 algorithm
14 atmospheric effects
15 blur
16 complex data
17 complexity
18 conditions
19 data
20 data sets
21 defocus blur
22 difficulties
23 diversity
24 effect
25 estimation
26 evaluation
27 evaluation websites
28 features
29 films
30 flow
31 flow algorithm
32 flow data
33 flow data sets
34 flow estimation
35 flow evaluation
36 further research
37 graphic data
38 ground truth optical flow
39 images
40 important features
41 large motion
42 long sequences
43 method
44 metrics
45 motion
46 motion blur
47 movies
48 natural motion
49 open source
50 open-source 3D
51 optical flow
52 optical flow algorithm
53 optical flow data
54 optical flow estimation
55 optical flow evaluation
56 real films
57 real scenes
58 realistic data
59 reflection
60 research
61 results
62 scene
63 sequence
64 set
65 size
66 source
67 specular reflection
68 synthetic data
69 terms
70 terms of size
71 use
72 video
73 websites
74 schema:name A Naturalistic Open Source Movie for Optical Flow Evaluation
75 schema:pagination 611-625
76 schema:productId N4cf4e4838e714dc2bc72236ce964a0e0
77 Nf25424f3a03949988d957cad66d04438
78 schema:publisher Ncc41c56df64244b79c97346dc8de914d
79 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004909083
80 https://doi.org/10.1007/978-3-642-33783-3_44
81 schema:sdDatePublished 2022-11-24T21:18
82 schema:sdLicense https://scigraph.springernature.com/explorer/license/
83 schema:sdPublisher Nc6a41f1a50f84ffb9c6f6fe7869d285f
84 schema:url https://doi.org/10.1007/978-3-642-33783-3_44
85 sgo:license sg:explorer/license/
86 sgo:sdDataset chapters
87 rdf:type schema:Chapter
88 N09483aa37ed240ccb3dd05fb679a083e rdf:first Naa4083aeff54430dbbbc6937c455d71b
89 rdf:rest N3d0e9ee0600f4e4397c0394827d711e6
90 N0fdd3169aa31428f9c8965de9aa9f8fc rdf:first N28dd31bebc97432498477fca728d3851
91 rdf:rest Ne3f75b1fdf2c4e639d3e963f85cec244
92 N28dd31bebc97432498477fca728d3851 schema:familyName Fitzgibbon
93 schema:givenName Andrew
94 rdf:type schema:Person
95 N3d0e9ee0600f4e4397c0394827d711e6 rdf:first Nf7f7047beb8342479cd6137019d0a8a2
96 rdf:rest rdf:nil
97 N4a6c605ebf8e4cc98fda43241da83ef6 schema:familyName Perona
98 schema:givenName Pietro
99 rdf:type schema:Person
100 N4c7a0bc384c947139437510f06d00bb5 schema:familyName Lazebnik
101 schema:givenName Svetlana
102 rdf:type schema:Person
103 N4cf4e4838e714dc2bc72236ce964a0e0 schema:name dimensions_id
104 schema:value pub.1004909083
105 rdf:type schema:PropertyValue
106 N61657d1dbd08446090e36b11e654885c rdf:first sg:person.01077541547.92
107 rdf:rest rdf:nil
108 N76bc47ed8d32407eaac48d2bc63d5889 rdf:first N4a6c605ebf8e4cc98fda43241da83ef6
109 rdf:rest N09483aa37ed240ccb3dd05fb679a083e
110 N7d0262a9eb1b4c8f97d0585d97e197ee rdf:first sg:person.015332011533.57
111 rdf:rest Nc8acf6d981fb43ba8e241563cb28bdc5
112 N98cc01ab0428486db4cdf316f11715ed schema:isbn 978-3-642-33782-6
113 978-3-642-33783-3
114 schema:name Computer Vision – ECCV 2012
115 rdf:type schema:Book
116 Naa4083aeff54430dbbbc6937c455d71b schema:familyName Sato
117 schema:givenName Yoichi
118 rdf:type schema:Person
119 Nc6a41f1a50f84ffb9c6f6fe7869d285f schema:name Springer Nature - SN SciGraph project
120 rdf:type schema:Organization
121 Nc8acf6d981fb43ba8e241563cb28bdc5 rdf:first sg:person.011017261733.90
122 rdf:rest Nf2571c70a6bb40ebafffe14f6f0717f5
123 Ncc41c56df64244b79c97346dc8de914d schema:name Springer Nature
124 rdf:type schema:Organisation
125 Ne3f75b1fdf2c4e639d3e963f85cec244 rdf:first N4c7a0bc384c947139437510f06d00bb5
126 rdf:rest N76bc47ed8d32407eaac48d2bc63d5889
127 Nf25424f3a03949988d957cad66d04438 schema:name doi
128 schema:value 10.1007/978-3-642-33783-3_44
129 rdf:type schema:PropertyValue
130 Nf2571c70a6bb40ebafffe14f6f0717f5 rdf:first sg:person.01352253415.84
131 rdf:rest N61657d1dbd08446090e36b11e654885c
132 Nf7f7047beb8342479cd6137019d0a8a2 schema:familyName Schmid
133 schema:givenName Cordelia
134 rdf:type schema:Person
135 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
136 schema:name Information and Computing Sciences
137 rdf:type schema:DefinedTerm
138 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
139 schema:name Artificial Intelligence and Image Processing
140 rdf:type schema:DefinedTerm
141 sg:person.01077541547.92 schema:affiliation grid-institutes:grid.419534.e
142 schema:familyName Black
143 schema:givenName Michael J.
144 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01077541547.92
145 rdf:type schema:Person
146 sg:person.011017261733.90 schema:affiliation grid-institutes:grid.419534.e
147 schema:familyName Wulff
148 schema:givenName Jonas
149 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011017261733.90
150 rdf:type schema:Person
151 sg:person.01352253415.84 schema:affiliation grid-institutes:grid.213917.f
152 schema:familyName Stanley
153 schema:givenName Garrett B.
154 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01352253415.84
155 rdf:type schema:Person
156 sg:person.015332011533.57 schema:affiliation grid-institutes:grid.34477.33
157 schema:familyName Butler
158 schema:givenName Daniel J.
159 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015332011533.57
160 rdf:type schema:Person
161 grid-institutes:grid.213917.f schema:alternateName Georgia Institute of Technology, Atlanta, GA, USA
162 schema:name Georgia Institute of Technology, Atlanta, GA, USA
163 rdf:type schema:Organization
164 grid-institutes:grid.34477.33 schema:alternateName University of Washington, Seattle, WA, USA
165 schema:name University of Washington, Seattle, WA, USA
166 rdf:type schema:Organization
167 grid-institutes:grid.419534.e schema:alternateName Max-Planck Institute for Intelligent Systems, Tübingen, Germany
168 schema:name Max-Planck Institute for Intelligent Systems, Tübingen, Germany
169 rdf:type schema:Organization
 



