A Naturalistic Open Source Movie for Optical Flow Evaluation


Ontology type: schema:Chapter     


Chapter Info

DATE

2012

AUTHORS

Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, Michael J. Black

ABSTRACT

Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.

PAGES

611-625

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44

DOI

http://dx.doi.org/10.1007/978-3-642-33783-3_44

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1004909083



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Washington, Seattle, WA, USA", 
          "id": "http://www.grid.ac/institutes/grid.34477.33", 
          "name": [
            "University of Washington, Seattle, WA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Butler", 
        "givenName": "Daniel J.", 
        "id": "sg:person.015332011533.57", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015332011533.57"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419534.e", 
          "name": [
            "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wulff", 
        "givenName": "Jonas", 
        "id": "sg:person.011017261733.90", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011017261733.90"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Georgia Institute of Technology, Atlanta, GA, USA", 
          "id": "http://www.grid.ac/institutes/grid.213917.f", 
          "name": [
            "Georgia Institute of Technology, Atlanta, GA, USA"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Stanley", 
        "givenName": "Garrett B.", 
        "id": "sg:person.01352253415.84", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01352253415.84"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany", 
          "id": "http://www.grid.ac/institutes/grid.419534.e", 
          "name": [
            "Max-Planck Institute for Intelligent Systems, T\u00fcbingen, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Black", 
        "givenName": "Michael J.", 
        "id": "sg:person.01077541547.92", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01077541547.92"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2012", 
    "datePublishedReg": "2012-01-01", 
    "description": "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.", 
    "editor": [
      {
        "familyName": "Fitzgibbon", 
        "givenName": "Andrew", 
        "type": "Person"
      }, 
      {
        "familyName": "Lazebnik", 
        "givenName": "Svetlana", 
        "type": "Person"
      }, 
      {
        "familyName": "Perona", 
        "givenName": "Pietro", 
        "type": "Person"
      }, 
      {
        "familyName": "Sato", 
        "givenName": "Yoichi", 
        "type": "Person"
      }, 
      {
        "familyName": "Schmid", 
        "givenName": "Cordelia", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-642-33783-3_44", 
    "isAccessibleForFree": false, 
    "isPartOf": {
      "isbn": [
        "978-3-642-33782-6", 
        "978-3-642-33783-3"
      ], 
      "name": "Computer Vision \u2013 ECCV 2012", 
      "type": "Book"
    }, 
    "keywords": [
      "optical flow algorithm", 
      "flow algorithm", 
      "data sets", 
      "optical flow data", 
      "optical flow evaluation", 
      "ground truth optical flow", 
      "optical flow estimation", 
      "real scenes", 
      "Middlebury evaluation", 
      "open source", 
      "open-source 3D", 
      "complex data", 
      "graphic data", 
      "flow data sets", 
      "motion blur", 
      "optical flow", 
      "defocus blur", 
      "evaluation websites", 
      "synthetic data", 
      "realistic data", 
      "flow estimation", 
      "algorithm", 
      "long sequences", 
      "natural motion", 
      "blur", 
      "scene", 
      "Sintel", 
      "large motion", 
      "important features", 
      "complexity", 
      "set", 
      "movies", 
      "flow data", 
      "video", 
      "specular reflection", 
      "websites", 
      "terms of size", 
      "images", 
      "metrics", 
      "data", 
      "evaluation", 
      "features", 
      "estimation", 
      "motion", 
      "flow evaluation", 
      "method", 
      "research", 
      "difficulties", 
      "terms", 
      "use", 
      "sequence", 
      "further research", 
      "results", 
      "source", 
      "size", 
      "real films", 
      "diversity", 
      "atmospheric effects", 
      "flow", 
      "films", 
      "reflection", 
      "conditions", 
      "effect"
    ], 
    "name": "A Naturalistic Open Source Movie for Optical Flow Evaluation", 
    "pagination": "611-625", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1004909083"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-642-33783-3_44"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-642-33783-3_44", 
      "https://app.dimensions.ai/details/publication/pub.1004909083"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-11-24T21:18", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/chapter/chapter_415.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-642-33783-3_44"
  }
]
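
Once retrieved, a JSON-LD record like the one above can be consumed with ordinary JSON tooling. As a minimal sketch (using a trimmed excerpt of the record, not the full document), the title and author names can be pulled out like this:

```python
import json

# A trimmed excerpt of the SciGraph JSON-LD record shown above.
record_jsonld = """
[
  {
    "name": "A Naturalistic Open Source Movie for Optical Flow Evaluation",
    "datePublished": "2012",
    "author": [
      {"familyName": "Butler", "givenName": "Daniel J.", "type": "Person"},
      {"familyName": "Wulff", "givenName": "Jonas", "type": "Person"},
      {"familyName": "Stanley", "givenName": "Garrett B.", "type": "Person"},
      {"familyName": "Black", "givenName": "Michael J.", "type": "Person"}
    ]
  }
]
"""

records = json.loads(record_jsonld)
chapter = records[0]

# Build "Given Family" strings in publication order.
authors = [f"{a['givenName']} {a['familyName']}" for a in chapter["author"]]
print(chapter["name"])
print(", ".join(authors))
```

Note that this treats the record as plain JSON; a full JSON-LD processor would additionally expand terms against the `@context`.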
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'
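
Because each N-Triples statement is one line ending in ` .`, batch processing needs nothing more than line-by-line splitting. A minimal sketch (simplified: it does not handle escaped characters or literals containing datatype/language tags specially, which is adequate for a quick pass over SciGraph output):

```python
def parse_ntriple(line: str):
    """Split one N-Triples statement into (subject, predicate, object).

    Assumes the subject and predicate are IRIs or blank-node labels
    without embedded spaces, as in SciGraph's N-Triples output.
    """
    line = line.rstrip()
    if not line.endswith(" ."):
        raise ValueError("not a terminated N-Triples statement")
    body = line[:-2]
    subject, predicate, obj = body.split(" ", 2)
    return subject, predicate, obj

# Example: the genre triple from this record.
triple = parse_ntriple(
    '<https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44> '
    '<https://schema.org/genre> "chapter" .'
)
print(triple)
```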

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-642-33783-3_44'
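
The four commands above differ only in the Accept header, so the same content negotiation can be scripted with the standard library alone. A sketch (the request object is constructed but not sent, since actually fetching requires network access; the `ACCEPT` mapping mirrors the curl examples):

```python
import urllib.request

# Map each serialization to its Accept header, per the curl examples above.
ACCEPT = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def scigraph_request(pub_id: str, fmt: str) -> urllib.request.Request:
    """Build a content-negotiated request for one SciGraph publication."""
    url = f"https://scigraph.springernature.com/pub.{pub_id}"
    return urllib.request.Request(url, headers={"Accept": ACCEPT[fmt]})

req = scigraph_request("10.1007/978-3-642-33783-3_44", "turtle")
print(req.full_url)
print(req.get_header("Accept"))
```

To actually download, pass the request to `urllib.request.urlopen(req)` and read the response body.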


 

This table displays all metadata directly associated with this object as RDF triples.

169 triples · 22 predicates · 88 URIs · 81 literals · 7 blank nodes

Subject Predicate Object
1 sg:pub.10.1007/978-3-642-33783-3_44 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N0923483ba62a4bdca0e6b3d13fbaf4f7
4 schema:datePublished 2012
5 schema:datePublishedReg 2012-01-01
6 schema:description Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.
7 schema:editor Nc6460cbdbe684f5e990f3e45671a6a54
8 schema:genre chapter
9 schema:isAccessibleForFree false
10 schema:isPartOf Nce109629ca19446792aa64c068c6b17b
11 schema:keywords Middlebury evaluation
12 Sintel
13 algorithm
14 atmospheric effects
15 blur
16 complex data
17 complexity
18 conditions
19 data
20 data sets
21 defocus blur
22 difficulties
23 diversity
24 effect
25 estimation
26 evaluation
27 evaluation websites
28 features
29 films
30 flow
31 flow algorithm
32 flow data
33 flow data sets
34 flow estimation
35 flow evaluation
36 further research
37 graphic data
38 ground truth optical flow
39 images
40 important features
41 large motion
42 long sequences
43 method
44 metrics
45 motion
46 motion blur
47 movies
48 natural motion
49 open source
50 open-source 3D
51 optical flow
52 optical flow algorithm
53 optical flow data
54 optical flow estimation
55 optical flow evaluation
56 real films
57 real scenes
58 realistic data
59 reflection
60 research
61 results
62 scene
63 sequence
64 set
65 size
66 source
67 specular reflection
68 synthetic data
69 terms
70 terms of size
71 use
72 video
73 websites
74 schema:name A Naturalistic Open Source Movie for Optical Flow Evaluation
75 schema:pagination 611-625
76 schema:productId N80656960fbda412da3ce85b1eed432aa
77 Nec9f240446434050a88b8950b94f928b
78 schema:publisher N5972894badca479a921b8af9d402740a
79 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004909083
80 https://doi.org/10.1007/978-3-642-33783-3_44
81 schema:sdDatePublished 2022-11-24T21:18
82 schema:sdLicense https://scigraph.springernature.com/explorer/license/
83 schema:sdPublisher N7ad458465f7b4e5c94b44505265f832f
84 schema:url https://doi.org/10.1007/978-3-642-33783-3_44
85 sgo:license sg:explorer/license/
86 sgo:sdDataset chapters
87 rdf:type schema:Chapter
88 N028849b2b6b44118a0e1b4b0980aacff rdf:first Nc571afe6f7f5445db901ec771e309e94
89 rdf:rest rdf:nil
90 N0923483ba62a4bdca0e6b3d13fbaf4f7 rdf:first sg:person.015332011533.57
91 rdf:rest Nb3bd5d0c6df3422cb55cf7f4bee6c1ca
92 N5972894badca479a921b8af9d402740a schema:name Springer Nature
93 rdf:type schema:Organisation
94 N6ac7a0a3cdb64d388e1a2dddc18632f2 schema:familyName Lazebnik
95 schema:givenName Svetlana
96 rdf:type schema:Person
97 N6fb66f49b2144311b6e72c0a5977322e schema:familyName Fitzgibbon
98 schema:givenName Andrew
99 rdf:type schema:Person
100 N7ad458465f7b4e5c94b44505265f832f schema:name Springer Nature - SN SciGraph project
101 rdf:type schema:Organization
102 N80656960fbda412da3ce85b1eed432aa schema:name doi
103 schema:value 10.1007/978-3-642-33783-3_44
104 rdf:type schema:PropertyValue
105 N9be6638e6af042d897d56aabb0ef9114 schema:familyName Perona
106 schema:givenName Pietro
107 rdf:type schema:Person
108 N9e710c5f67be4df38f321de10550988a rdf:first N9be6638e6af042d897d56aabb0ef9114
109 rdf:rest Na7620f70e1214c0780eb1b7f2405dd4e
110 Na7620f70e1214c0780eb1b7f2405dd4e rdf:first Nb4842a0af3b246e498e64022af7bda88
111 rdf:rest N028849b2b6b44118a0e1b4b0980aacff
112 Nb3bd5d0c6df3422cb55cf7f4bee6c1ca rdf:first sg:person.011017261733.90
113 rdf:rest Nd3f621bdb05f4ac58166dd14cbef6ee4
114 Nb4842a0af3b246e498e64022af7bda88 schema:familyName Sato
115 schema:givenName Yoichi
116 rdf:type schema:Person
117 Nc1ef28b6a199416eb39019ed3e19dfc8 rdf:first N6ac7a0a3cdb64d388e1a2dddc18632f2
118 rdf:rest N9e710c5f67be4df38f321de10550988a
119 Nc571afe6f7f5445db901ec771e309e94 schema:familyName Schmid
120 schema:givenName Cordelia
121 rdf:type schema:Person
122 Nc6460cbdbe684f5e990f3e45671a6a54 rdf:first N6fb66f49b2144311b6e72c0a5977322e
123 rdf:rest Nc1ef28b6a199416eb39019ed3e19dfc8
124 Nce109629ca19446792aa64c068c6b17b schema:isbn 978-3-642-33782-6
125 978-3-642-33783-3
126 schema:name Computer Vision – ECCV 2012
127 rdf:type schema:Book
128 Nd3f621bdb05f4ac58166dd14cbef6ee4 rdf:first sg:person.01352253415.84
129 rdf:rest Nd61b4e22345f4577a7318621ef8d652e
130 Nd61b4e22345f4577a7318621ef8d652e rdf:first sg:person.01077541547.92
131 rdf:rest rdf:nil
132 Nec9f240446434050a88b8950b94f928b schema:name dimensions_id
133 schema:value pub.1004909083
134 rdf:type schema:PropertyValue
135 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
136 schema:name Information and Computing Sciences
137 rdf:type schema:DefinedTerm
138 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
139 schema:name Artificial Intelligence and Image Processing
140 rdf:type schema:DefinedTerm
141 sg:person.01077541547.92 schema:affiliation grid-institutes:grid.419534.e
142 schema:familyName Black
143 schema:givenName Michael J.
144 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01077541547.92
145 rdf:type schema:Person
146 sg:person.011017261733.90 schema:affiliation grid-institutes:grid.419534.e
147 schema:familyName Wulff
148 schema:givenName Jonas
149 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011017261733.90
150 rdf:type schema:Person
151 sg:person.01352253415.84 schema:affiliation grid-institutes:grid.213917.f
152 schema:familyName Stanley
153 schema:givenName Garrett B.
154 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01352253415.84
155 rdf:type schema:Person
156 sg:person.015332011533.57 schema:affiliation grid-institutes:grid.34477.33
157 schema:familyName Butler
158 schema:givenName Daniel J.
159 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015332011533.57
160 rdf:type schema:Person
161 grid-institutes:grid.213917.f schema:alternateName Georgia Institute of Technology, Atlanta, GA, USA
162 schema:name Georgia Institute of Technology, Atlanta, GA, USA
163 rdf:type schema:Organization
164 grid-institutes:grid.34477.33 schema:alternateName University of Washington, Seattle, WA, USA
165 schema:name University of Washington, Seattle, WA, USA
166 rdf:type schema:Organization
167 grid-institutes:grid.419534.e schema:alternateName Max-Planck Institute for Intelligent Systems, Tübingen, Germany
168 schema:name Max-Planck Institute for Intelligent Systems, Tübingen, Germany
169 rdf:type schema:Organization
 



