Motion Based Foreground Detection and Poselet Motion Features for Action Recognition


Ontology type: schema:Chapter · Open Access: True


Chapter Info

DATE

2015-04-17

AUTHORS

Erwin Kraft , Thomas Brox

ABSTRACT

For action recognition, the actor(s) and the tools they use as well as their motion are of central importance. In this paper, we propose separating foreground items of an action from the background on the basis of motion cues. As a consequence, separate descriptors can be defined for the foreground regions, while combined foreground-background descriptors still capture the context of an action. Also a low-dimensional global camera motion descriptor can be computed. Poselet activations in the foreground area indicate the actor and its pose. We propose tracking these poselets to obtain detailed motion features of the actor. Experiments on the Hollywood2 dataset show that foreground-background separation and the poselet motion features lead to consistently favorable results, both relative to the baseline and in comparison to the current state-of-the-art.

PAGES

350-365

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23

DOI

http://dx.doi.org/10.1007/978-3-319-16814-2_23

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1050278454



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany", 
          "id": "http://www.grid.ac/institutes/grid.461635.3", 
          "name": [
            "Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Kraft", 
        "givenName": "Erwin", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Freiburg, Georges-K\u00f6hler-Allee 52, Freiburg, Germany", 
          "id": "http://www.grid.ac/institutes/grid.5963.9", 
          "name": [
            "University of Freiburg, Georges-K\u00f6hler-Allee 52, Freiburg, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Brox", 
        "givenName": "Thomas", 
        "id": "sg:person.012443225372.65", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2015-04-17", 
    "datePublishedReg": "2015-04-17", 
    "description": "For action recognition, the actor(s) and the tools they use as well as their motion are of central importance. In this paper, we propose separating foreground items of an action from the background on the basis of motion cues. As a consequence, separate descriptors can be defined for the foreground regions, while combined foreground-background descriptors still capture the context of an action. Also a low-dimensional global camera motion descriptor can be computed. Poselet activations in the foreground area indicate the actor and its pose. We propose tracking these poselets to obtain detailed motion features of the actor. Experiments on the Hollywood2 dataset show that foreground-background separation and the poselet motion features lead to consistently favorable results, both relative to the baseline and in comparison to the current state-of-the-art.", 
    "editor": [
      {
        "familyName": "Cremers", 
        "givenName": "Daniel", 
        "type": "Person"
      }, 
      {
        "familyName": "Reid", 
        "givenName": "Ian", 
        "type": "Person"
      }, 
      {
        "familyName": "Saito", 
        "givenName": "Hideo", 
        "type": "Person"
      }, 
      {
        "familyName": "Yang", 
        "givenName": "Ming-Hsuan", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-319-16814-2_23", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-319-16813-5", 
        "978-3-319-16814-2"
      ], 
      "name": "Computer Vision -- ACCV 2014", 
      "type": "Book"
    }, 
    "keywords": [
      "motion features", 
      "motion", 
      "separation", 
      "foreground-background separation", 
      "motion cues", 
      "current state", 
      "pose", 
      "experiments", 
      "motion descriptors", 
      "foreground regions", 
      "features", 
      "foreground detection", 
      "separate descriptors", 
      "results", 
      "comparison", 
      "foreground area", 
      "show", 
      "favorable results", 
      "detection", 
      "area", 
      "dataset show", 
      "tool", 
      "region", 
      "descriptors", 
      "state", 
      "basis", 
      "art", 
      "poselets", 
      "action recognition", 
      "importance", 
      "recognition", 
      "action", 
      "central importance", 
      "consequences", 
      "background", 
      "context", 
      "items", 
      "cues", 
      "activation", 
      "baseline", 
      "actors", 
      "foreground items", 
      "paper", 
      "camera motion descriptor", 
      "poselet activations"
    ], 
    "name": "Motion Based Foreground Detection and Poselet Motion Features for Action Recognition", 
    "pagination": "350-365", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1050278454"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-319-16814-2_23"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-319-16814-2_23", 
      "https://app.dimensions.ai/details/publication/pub.1050278454"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-12-01T06:47", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/chapter/chapter_171.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-319-16814-2_23"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'
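The JSON-LD returned by the first curl call above can be consumed with any ordinary JSON library. Below is a minimal Python sketch; the `summarize_record` helper is illustrative (not part of any SciGraph API), and `sample` is a trimmed copy of the record shown earlier on this page.

```python
import json

def summarize_record(jsonld: str) -> dict:
    """Extract a few key fields from a SciGraph JSON-LD record string.

    Field names follow the record shown above; SciGraph wraps the
    record object in a top-level JSON array, hence the [0].
    """
    record = json.loads(jsonld)[0]
    return {
        "title": record["name"],
        "doi_url": record["url"],
        "published": record["datePublished"],
        "authors": [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]],
    }

# A trimmed copy of the JSON-LD record above, used here for illustration.
sample = """[{
  "name": "Motion Based Foreground Detection and Poselet Motion Features for Action Recognition",
  "url": "https://doi.org/10.1007/978-3-319-16814-2_23",
  "datePublished": "2015-04-17",
  "author": [
    {"givenName": "Erwin", "familyName": "Kraft"},
    {"givenName": "Thomas", "familyName": "Brox"}
  ]
}]"""

info = summarize_record(sample)
print(info["authors"])  # ['Erwin Kraft', 'Thomas Brox']
```

In a live script one would replace `sample` with the response body of the `application/ld+json` request shown above.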


 

 



