Motion Based Foreground Detection and Poselet Motion Features for Action Recognition


Ontology type: schema:Chapter      Open Access: True


Chapter Info

DATE

2015-04-17

AUTHORS

Erwin Kraft , Thomas Brox

ABSTRACT

For action recognition, the actor(s) and the tools they use as well as their motion are of central importance. In this paper, we propose separating foreground items of an action from the background on the basis of motion cues. As a consequence, separate descriptors can be defined for the foreground regions, while combined foreground-background descriptors still capture the context of an action. Also a low-dimensional global camera motion descriptor can be computed. Poselet activations in the foreground area indicate the actor and its pose. We propose tracking these poselets to obtain detailed motion features of the actor. Experiments on the Hollywood2 dataset show that foreground-background separation and the poselet motion features lead to consistently favorable results, both relative to the baseline and in comparison to the current state-of-the-art.

PAGES

350-365

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23

DOI

http://dx.doi.org/10.1007/978-3-319-16814-2_23

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1050278454



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany", 
          "id": "http://www.grid.ac/institutes/grid.461635.3", 
          "name": [
            "Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Kraft", 
        "givenName": "Erwin", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Freiburg, Georges-K\u00f6hler-Allee 52, Freiburg, Germany", 
          "id": "http://www.grid.ac/institutes/grid.5963.9", 
          "name": [
            "University of Freiburg, Georges-K\u00f6hler-Allee 52, Freiburg, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Brox", 
        "givenName": "Thomas", 
        "id": "sg:person.012443225372.65", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65"
        ], 
        "type": "Person"
      }
    ], 
    "datePublished": "2015-04-17", 
    "datePublishedReg": "2015-04-17", 
    "description": "For action recognition, the actor(s) and the tools they use as well as their motion are of central importance. In this paper, we propose separating foreground items of an action from the background on the basis of motion cues. As a consequence, separate descriptors can be defined for the foreground regions, while combined foreground-background descriptors still capture the context of an action. Also a low-dimensional global camera motion descriptor can be computed. Poselet activations in the foreground area indicate the actor and its pose. We propose tracking these poselets to obtain detailed motion features of the actor. Experiments on the Hollywood2 dataset show that foreground-background separation and the poselet motion features lead to consistently favorable results, both relative to the baseline and in comparison to the current state-of-the-art.", 
    "editor": [
      {
        "familyName": "Cremers", 
        "givenName": "Daniel", 
        "type": "Person"
      }, 
      {
        "familyName": "Reid", 
        "givenName": "Ian", 
        "type": "Person"
      }, 
      {
        "familyName": "Saito", 
        "givenName": "Hideo", 
        "type": "Person"
      }, 
      {
        "familyName": "Yang", 
        "givenName": "Ming-Hsuan", 
        "type": "Person"
      }
    ], 
    "genre": "chapter", 
    "id": "sg:pub.10.1007/978-3-319-16814-2_23", 
    "isAccessibleForFree": true, 
    "isPartOf": {
      "isbn": [
        "978-3-319-16813-5", 
        "978-3-319-16814-2"
      ], 
      "name": "Computer Vision -- ACCV 2014", 
      "type": "Book"
    }, 
    "keywords": [
      "motion features", 
      "motion", 
      "separation", 
      "foreground-background separation", 
      "motion cues", 
      "current state", 
      "pose", 
      "experiments", 
      "motion descriptors", 
      "foreground regions", 
      "features", 
      "foreground detection", 
      "separate descriptors", 
      "results", 
      "comparison", 
      "foreground area", 
      "show", 
      "favorable results", 
      "detection", 
      "area", 
      "dataset show", 
      "tool", 
      "region", 
      "descriptors", 
      "state", 
      "basis", 
      "art", 
      "poselets", 
      "action recognition", 
      "importance", 
      "recognition", 
      "action", 
      "central importance", 
      "consequences", 
      "background", 
      "context", 
      "items", 
      "cues", 
      "activation", 
      "baseline", 
      "actors", 
      "foreground items", 
      "paper", 
      "camera motion descriptor", 
      "poselet activations"
    ], 
    "name": "Motion Based Foreground Detection and Poselet Motion Features for Action Recognition", 
    "pagination": "350-365", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1050278454"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/978-3-319-16814-2_23"
        ]
      }
    ], 
    "publisher": {
      "name": "Springer Nature", 
      "type": "Organisation"
    }, 
    "sameAs": [
      "https://doi.org/10.1007/978-3-319-16814-2_23", 
      "https://app.dimensions.ai/details/publication/pub.1050278454"
    ], 
    "sdDataset": "chapters", 
    "sdDatePublished": "2022-10-01T06:55", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20221001/entities/gbq_results/chapter/chapter_283.jsonl", 
    "type": "Chapter", 
    "url": "https://doi.org/10.1007/978-3-319-16814-2_23"
  }
]
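Since JSON-LD is plain JSON, the record above can be consumed without any RDF tooling; a minimal Python sketch (the field names are taken from the record itself, and `record_json` is a trimmed excerpt of it):

```python
import json

# A trimmed excerpt of the SciGraph JSON-LD record shown above.
record_json = """
[
  {
    "id": "sg:pub.10.1007/978-3-319-16814-2_23",
    "name": "Motion Based Foreground Detection and Poselet Motion Features for Action Recognition",
    "datePublished": "2015-04-17",
    "pagination": "350-365",
    "author": [
      {"familyName": "Kraft", "givenName": "Erwin", "type": "Person"},
      {"familyName": "Brox", "givenName": "Thomas", "type": "Person"}
    ],
    "sameAs": ["https://doi.org/10.1007/978-3-319-16814-2_23"]
  }
]
"""

record = json.loads(record_json)[0]  # the record is a one-element JSON array
authors = [f"{a['givenName']} {a['familyName']}" for a in record["author"]]
print(record["name"])
print(", ".join(authors))            # -> Erwin Kraft, Thomas Brox
print(record["datePublished"], record["pagination"])
```

To resolve the `sg:` and `schema:` prefixes properly, the `@context` document referenced at the top of the record would be needed; for simple field extraction, treating it as plain JSON suffices.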
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular linked data format that is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23'
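The four curl calls above all hit the same URL and differ only in the Accept header (HTTP content negotiation). A sketch of the same pattern in Python, using only the standard library (the `build_request` helper and `FORMATS` mapping are illustrative names, not part of any SciGraph API):

```python
import urllib.request

# The same content negotiation as the curl calls above: the serialization
# is selected via the Accept header, not the URL.
FORMATS = {
    "json-ld":   "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle":    "text/turtle",
    "rdf/xml":   "application/rdf+xml",
}

def build_request(pub_uri: str, fmt: str) -> urllib.request.Request:
    """Build a GET request for pub_uri asking for the given RDF serialization."""
    return urllib.request.Request(pub_uri, headers={"Accept": FORMATS[fmt]})

req = build_request(
    "https://scigraph.springernature.com/pub.10.1007/978-3-319-16814-2_23",
    "turtle",
)
print(req.get_header("Accept"))  # the header curl sets with -H
# data = urllib.request.urlopen(req).read()  # uncomment to actually fetch
```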


 

This table displays all metadata directly associated with this object as RDF triples.

128 TRIPLES      22 PREDICATES      69 URIs      62 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/978-3-319-16814-2_23 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nfa90760335254f0bb1f9367bfa847889
4 schema:datePublished 2015-04-17
5 schema:datePublishedReg 2015-04-17
6 schema:description For action recognition, the actor(s) and the tools they use as well as their motion are of central importance. In this paper, we propose separating foreground items of an action from the background on the basis of motion cues. As a consequence, separate descriptors can be defined for the foreground regions, while combined foreground-background descriptors still capture the context of an action. Also a low-dimensional global camera motion descriptor can be computed. Poselet activations in the foreground area indicate the actor and its pose. We propose tracking these poselets to obtain detailed motion features of the actor. Experiments on the Hollywood2 dataset show that foreground-background separation and the poselet motion features lead to consistently favorable results, both relative to the baseline and in comparison to the current state-of-the-art.
7 schema:editor N114f982ad3d14eeb92a8f13ca11a59a3
8 schema:genre chapter
9 schema:isAccessibleForFree true
10 schema:isPartOf Nca0c2b3fb9bf4912bd971bee7f3d5c52
11 schema:keywords action
12 action recognition
13 activation
14 actors
15 area
16 art
17 background
18 baseline
19 basis
20 camera motion descriptor
21 central importance
22 comparison
23 consequences
24 context
25 cues
26 current state
27 dataset show
28 descriptors
29 detection
30 experiments
31 favorable results
32 features
33 foreground area
34 foreground detection
35 foreground items
36 foreground regions
37 foreground-background separation
38 importance
39 items
40 motion
41 motion cues
42 motion descriptors
43 motion features
44 paper
45 pose
46 poselet activations
47 poselets
48 recognition
49 region
50 results
51 separate descriptors
52 separation
53 show
54 state
55 tool
56 schema:name Motion Based Foreground Detection and Poselet Motion Features for Action Recognition
57 schema:pagination 350-365
58 schema:productId N390d9600cded4daabe32af01c2292f55
59 N444230770b594eaea16bea8e8ffedf80
60 schema:publisher N3b59f8df8e104f5da33a278ee66ef269
61 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050278454
62 https://doi.org/10.1007/978-3-319-16814-2_23
63 schema:sdDatePublished 2022-10-01T06:55
64 schema:sdLicense https://scigraph.springernature.com/explorer/license/
65 schema:sdPublisher Ne85285eb84bd46b58d719906a11b7d0f
66 schema:url https://doi.org/10.1007/978-3-319-16814-2_23
67 sgo:license sg:explorer/license/
68 sgo:sdDataset chapters
69 rdf:type schema:Chapter
70 N114f982ad3d14eeb92a8f13ca11a59a3 rdf:first N4b51564f3eb5493da5e4868c6d9782cb
71 rdf:rest N56176027ac8d4f28a159a08fa53b97e7
72 N390d9600cded4daabe32af01c2292f55 schema:name doi
73 schema:value 10.1007/978-3-319-16814-2_23
74 rdf:type schema:PropertyValue
75 N3b59f8df8e104f5da33a278ee66ef269 schema:name Springer Nature
76 rdf:type schema:Organisation
77 N43ffa381fb8a483f9cd074dcc01268a1 schema:familyName Yang
78 schema:givenName Ming-Hsuan
79 rdf:type schema:Person
80 N444230770b594eaea16bea8e8ffedf80 schema:name dimensions_id
81 schema:value pub.1050278454
82 rdf:type schema:PropertyValue
83 N45e837bf7f9447169b26f115b606aa65 rdf:first N43ffa381fb8a483f9cd074dcc01268a1
84 rdf:rest rdf:nil
85 N4b51564f3eb5493da5e4868c6d9782cb schema:familyName Cremers
86 schema:givenName Daniel
87 rdf:type schema:Person
88 N56176027ac8d4f28a159a08fa53b97e7 rdf:first Nb16e23aaa21642168dcefb1477a5fc01
89 rdf:rest N9992022502ee4aefb0ffeca9df8fbd9e
90 N68b291e4ffd745739516e6ec74a47ceb schema:affiliation grid-institutes:grid.461635.3
91 schema:familyName Kraft
92 schema:givenName Erwin
93 rdf:type schema:Person
94 N7d6634f3ad3a4866886afd7d8c65a041 rdf:first sg:person.012443225372.65
95 rdf:rest rdf:nil
96 N9992022502ee4aefb0ffeca9df8fbd9e rdf:first Ncb3b74b0eafe46d8ad52119e347b1fcf
97 rdf:rest N45e837bf7f9447169b26f115b606aa65
98 Nb16e23aaa21642168dcefb1477a5fc01 schema:familyName Reid
99 schema:givenName Ian
100 rdf:type schema:Person
101 Nca0c2b3fb9bf4912bd971bee7f3d5c52 schema:isbn 978-3-319-16813-5
102 978-3-319-16814-2
103 schema:name Computer Vision -- ACCV 2014
104 rdf:type schema:Book
105 Ncb3b74b0eafe46d8ad52119e347b1fcf schema:familyName Saito
106 schema:givenName Hideo
107 rdf:type schema:Person
108 Ne85285eb84bd46b58d719906a11b7d0f schema:name Springer Nature - SN SciGraph project
109 rdf:type schema:Organization
110 Nfa90760335254f0bb1f9367bfa847889 rdf:first N68b291e4ffd745739516e6ec74a47ceb
111 rdf:rest N7d6634f3ad3a4866886afd7d8c65a041
112 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
113 schema:name Information and Computing Sciences
114 rdf:type schema:DefinedTerm
115 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
116 schema:name Artificial Intelligence and Image Processing
117 rdf:type schema:DefinedTerm
118 sg:person.012443225372.65 schema:affiliation grid-institutes:grid.5963.9
119 schema:familyName Brox
120 schema:givenName Thomas
121 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012443225372.65
122 rdf:type schema:Person
123 grid-institutes:grid.461635.3 schema:alternateName Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany
124 schema:name Fraunhofer ITWM, Fraunhofer-Platz 1, Kaiserslautern, Germany
125 rdf:type schema:Organization
126 grid-institutes:grid.5963.9 schema:alternateName University of Freiburg, Georges-Köhler-Allee 52, Freiburg, Germany
127 schema:name University of Freiburg, Georges-Köhler-Allee 52, Freiburg, Germany
128 rdf:type schema:Organization
 



