Visual node prediction for visual tracking


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-01-30

AUTHORS

Heng Yuan, Wen-Tao Jiang, Wan-Jun Liu, Sheng-Chong Zhang

ABSTRACT

A novel visual tracking algorithm based on visual node (VN) prediction is proposed in this paper. First, we count the distribution area and gray levels of the larger probability density in the VN. Then, all the frequencies of the VN are calculated, of which the weaker frequency gradient is removed by filtration. The stronger frequency gradient of the VN is reserved. Finally, we estimate the optimal object position by maximizing the likelihood of node clusters, which are formed by VNs. Extensive experiments show that the proposed approach has good adaptability to variable-structure tracking and outperforms the state-of-the-art trackers.

PAGES

1-10


Journal

TITLE

Multimedia Systems

ISSUE

N/A

VOLUME

N/A


Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1

DOI

http://dx.doi.org/10.1007/s00530-019-00603-1

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1111781034


Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
Incoming Citations: browse incoming citations for this publication using opencitations.net.

JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0104", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Statistics", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/01", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Mathematical Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Liaoning Technical University", 
          "id": "https://www.grid.ac/institutes/grid.464369.a", 
          "name": [
            "Centre for Image and Visual Information Calculating Research, Liaoning Technical University, 125105, Huludao, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Yuan", 
        "givenName": "Heng", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Liaoning Technical University", 
          "id": "https://www.grid.ac/institutes/grid.464369.a", 
          "name": [
            "Centre for Image and Visual Information Calculating Research, Liaoning Technical University, 125105, Huludao, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Jiang", 
        "givenName": "Wen-Tao", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Liaoning Technical University", 
          "id": "https://www.grid.ac/institutes/grid.464369.a", 
          "name": [
            "Centre for Image and Visual Information Calculating Research, Liaoning Technical University, 125105, Huludao, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Liu", 
        "givenName": "Wan-Jun", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "name": [
            "Key Laboratory of Electro-optical Information Control and Security Technology, China Electronic Technology Corporation, 300300, Tianjin, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zhang", 
        "givenName": "Sheng-Chong", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s11263-014-0736-2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1016069434", 
          "https://doi.org/10.1007/s11263-014-0736-2"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11263-014-0763-z", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1019914009", 
          "https://doi.org/10.1007/s11263-014-0763-z"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.cviu.2016.02.003", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037372500"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.neucom.2016.06.048", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1051815714"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1049/el.2016.3011", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1056758035"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2016.2531283", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061644863"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2016.2614135", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061645242"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2016.2537330", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061745047"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2016.2609928", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061745160"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2017.2676346", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1084206485"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2017.2699791", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1085304034"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2016.156", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1093690223"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2016.468", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094816219"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.patcog.2018.03.029", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1101838500"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.patcog.2018.05.017", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1104269819"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2018.2864965", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1106137171"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2018.00511", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1110720645"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2019-01-30", 
    "datePublishedReg": "2019-01-30", 
    "description": "A novel visual tracking algorithm based on visual node (VN) prediction is proposed in this paper. First, we count the distribution area and gray levels of the larger probability density in the VN. Then, all the frequencies of the VN are calculated, of which the weaker frequency gradient is removed by filtration. The stronger frequency gradient of the VN is reserved. Finally, we estimate the optimal object position by maximizing the likelihood of node clusters, which are formed by VNs. Extensive experiments show that the proposed approach has good adaptability to variable-structure tracking and outperforms the state-of-the-art trackers.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s00530-019-00603-1", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1284647", 
        "issn": [
          "0942-4962", 
          "1432-1882"
        ], 
        "name": "Multimedia Systems", 
        "type": "Periodical"
      }
    ], 
    "name": "Visual node prediction for visual tracking", 
    "pagination": "1-10", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "ffe36aa51b3dec9bfd91a2a7b142b845b4bbe8d39a56cb9773d2de6a8c0c1da6"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s00530-019-00603-1"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1111781034"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s00530-019-00603-1", 
      "https://app.dimensions.ai/details/publication/pub.1111781034"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-11T08:59", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000327_0000000327/records_114961_00000000.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://link.springer.com/10.1007%2Fs00530-019-00603-1"
  }
]
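
Once the JSON-LD payload has been retrieved (see the curl examples below), the common fields can be picked out with nothing more than the standard json module. A minimal sketch, assuming the record has been saved locally as record.json (a hypothetical filename):

import json

# The SciGraph payload is a JSON array containing a single record object.
with open("record.json", encoding="utf-8") as fh:
    record = json.load(fh)[0]

print(record["name"])                 # article title
print(record["datePublished"])        # 2019-01-30
print(record["isPartOf"][0]["name"])  # journal: Multimedia Systems

# Authors are schema.org Person objects; each affiliation is a nested Organization.
for person in record["author"]:
    org = person.get("affiliation", {}).get("name", [""])[0]
    print(person["givenName"], person["familyName"], "-", org)

# The DOI is one of the productId PropertyValue entries.
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")
print("DOI:", doi)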
 

The RDF metadata for this record can be downloaded as JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1'
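
The same content negotiation works from any HTTP client. A minimal sketch in Python using the requests library (the choice of library is an assumption, not something SciGraph prescribes):

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1"

# Ask for JSON-LD; swap the Accept header for application/n-triples,
# text/turtle or application/rdf+xml to get the other serializations.
resp = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
resp.raise_for_status()

records = resp.json()       # the same JSON array shown earlier on this page
print(records[0]["name"])   # Visual node prediction for visual tracking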


 

This table displays all metadata directly associated to this object as RDF triples.

127 TRIPLES      21 PREDICATES      41 URIs      16 LITERALS      5 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s00530-019-00603-1 schema:about anzsrc-for:01
2 anzsrc-for:0104
3 schema:author N98855b9f5a074b41a7212912c9f51302
4 schema:citation sg:pub.10.1007/s11263-014-0736-2
5 sg:pub.10.1007/s11263-014-0763-z
6 https://doi.org/10.1016/j.cviu.2016.02.003
7 https://doi.org/10.1016/j.neucom.2016.06.048
8 https://doi.org/10.1016/j.patcog.2018.03.029
9 https://doi.org/10.1016/j.patcog.2018.05.017
10 https://doi.org/10.1049/el.2016.3011
11 https://doi.org/10.1109/cvpr.2016.156
12 https://doi.org/10.1109/cvpr.2016.468
13 https://doi.org/10.1109/cvpr.2018.00511
14 https://doi.org/10.1109/tip.2016.2531283
15 https://doi.org/10.1109/tip.2016.2614135
16 https://doi.org/10.1109/tip.2017.2676346
17 https://doi.org/10.1109/tip.2017.2699791
18 https://doi.org/10.1109/tpami.2016.2537330
19 https://doi.org/10.1109/tpami.2016.2609928
20 https://doi.org/10.1109/tpami.2018.2864965
21 schema:datePublished 2019-01-30
22 schema:datePublishedReg 2019-01-30
23 schema:description A novel visual tracking algorithm based on visual node (VN) prediction is proposed in this paper. First, we count the distribution area and gray levels of the larger probability density in the VN. Then, all the frequencies of the VN are calculated, of which the weaker frequency gradient is removed by filtration. The stronger frequency gradient of the VN is reserved. Finally, we estimate the optimal object position by maximizing the likelihood of node clusters, which are formed by VNs. Extensive experiments show that the proposed approach has good adaptability to variable-structure tracking and outperforms the state-of-the-art trackers.
24 schema:genre research_article
25 schema:inLanguage en
26 schema:isAccessibleForFree false
27 schema:isPartOf sg:journal.1284647
28 schema:name Visual node prediction for visual tracking
29 schema:pagination 1-10
30 schema:productId N37563b800e034d84a6265c52d1525887
31 N6a2e147ef5534fc6bb76179bba128d83
32 Ndf7662a7c60248a580a5d935cba13b8d
33 schema:sameAs https://app.dimensions.ai/details/publication/pub.1111781034
34 https://doi.org/10.1007/s00530-019-00603-1
35 schema:sdDatePublished 2019-04-11T08:59
36 schema:sdLicense https://scigraph.springernature.com/explorer/license/
37 schema:sdPublisher Nd6aebbd23ff242c3a8bf3bd5478079e5
38 schema:url https://link.springer.com/10.1007%2Fs00530-019-00603-1
39 sgo:license sg:explorer/license/
40 sgo:sdDataset articles
41 rdf:type schema:ScholarlyArticle
42 N37563b800e034d84a6265c52d1525887 schema:name dimensions_id
43 schema:value pub.1111781034
44 rdf:type schema:PropertyValue
45 N600189e6f4674ca582ae2ccea4d3dc22 schema:affiliation Na31e6a76d4ba4758b4f4a038835ecdda
46 schema:familyName Zhang
47 schema:givenName Sheng-Chong
48 rdf:type schema:Person
49 N61c07b8ea8e84f658e7490bf168cff77 rdf:first N7683e0b9e3d2417097df0bf3b7e1056c
50 rdf:rest Nee0f9a96d9b84ec68cb3cfa3cec03660
51 N6a2e147ef5534fc6bb76179bba128d83 schema:name readcube_id
52 schema:value ffe36aa51b3dec9bfd91a2a7b142b845b4bbe8d39a56cb9773d2de6a8c0c1da6
53 rdf:type schema:PropertyValue
54 N6d80d79d90ec4bbe97d5966b0ff39211 schema:affiliation https://www.grid.ac/institutes/grid.464369.a
55 schema:familyName Yuan
56 schema:givenName Heng
57 rdf:type schema:Person
58 N7683e0b9e3d2417097df0bf3b7e1056c schema:affiliation https://www.grid.ac/institutes/grid.464369.a
59 schema:familyName Jiang
60 schema:givenName Wen-Tao
61 rdf:type schema:Person
62 N98855b9f5a074b41a7212912c9f51302 rdf:first N6d80d79d90ec4bbe97d5966b0ff39211
63 rdf:rest N61c07b8ea8e84f658e7490bf168cff77
64 Na31e6a76d4ba4758b4f4a038835ecdda schema:name Key Laboratory of Electro-optical Information Control and Security Technology, China Electronic Technology Corporation, 300300, Tianjin, China
65 rdf:type schema:Organization
66 Naf31cd471ac14c3996ecc4c53248c91f schema:affiliation https://www.grid.ac/institutes/grid.464369.a
67 schema:familyName Liu
68 schema:givenName Wan-Jun
69 rdf:type schema:Person
70 Nb17d34106e03471b8e5fc419201ada2f rdf:first N600189e6f4674ca582ae2ccea4d3dc22
71 rdf:rest rdf:nil
72 Nd6aebbd23ff242c3a8bf3bd5478079e5 schema:name Springer Nature - SN SciGraph project
73 rdf:type schema:Organization
74 Ndf7662a7c60248a580a5d935cba13b8d schema:name doi
75 schema:value 10.1007/s00530-019-00603-1
76 rdf:type schema:PropertyValue
77 Nee0f9a96d9b84ec68cb3cfa3cec03660 rdf:first Naf31cd471ac14c3996ecc4c53248c91f
78 rdf:rest Nb17d34106e03471b8e5fc419201ada2f
79 anzsrc-for:01 schema:inDefinedTermSet anzsrc-for:
80 schema:name Mathematical Sciences
81 rdf:type schema:DefinedTerm
82 anzsrc-for:0104 schema:inDefinedTermSet anzsrc-for:
83 schema:name Statistics
84 rdf:type schema:DefinedTerm
85 sg:journal.1284647 schema:issn 0942-4962
86 1432-1882
87 schema:name Multimedia Systems
88 rdf:type schema:Periodical
89 sg:pub.10.1007/s11263-014-0736-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016069434
90 https://doi.org/10.1007/s11263-014-0736-2
91 rdf:type schema:CreativeWork
92 sg:pub.10.1007/s11263-014-0763-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1019914009
93 https://doi.org/10.1007/s11263-014-0763-z
94 rdf:type schema:CreativeWork
95 https://doi.org/10.1016/j.cviu.2016.02.003 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037372500
96 rdf:type schema:CreativeWork
97 https://doi.org/10.1016/j.neucom.2016.06.048 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051815714
98 rdf:type schema:CreativeWork
99 https://doi.org/10.1016/j.patcog.2018.03.029 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101838500
100 rdf:type schema:CreativeWork
101 https://doi.org/10.1016/j.patcog.2018.05.017 schema:sameAs https://app.dimensions.ai/details/publication/pub.1104269819
102 rdf:type schema:CreativeWork
103 https://doi.org/10.1049/el.2016.3011 schema:sameAs https://app.dimensions.ai/details/publication/pub.1056758035
104 rdf:type schema:CreativeWork
105 https://doi.org/10.1109/cvpr.2016.156 schema:sameAs https://app.dimensions.ai/details/publication/pub.1093690223
106 rdf:type schema:CreativeWork
107 https://doi.org/10.1109/cvpr.2016.468 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094816219
108 rdf:type schema:CreativeWork
109 https://doi.org/10.1109/cvpr.2018.00511 schema:sameAs https://app.dimensions.ai/details/publication/pub.1110720645
110 rdf:type schema:CreativeWork
111 https://doi.org/10.1109/tip.2016.2531283 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061644863
112 rdf:type schema:CreativeWork
113 https://doi.org/10.1109/tip.2016.2614135 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061645242
114 rdf:type schema:CreativeWork
115 https://doi.org/10.1109/tip.2017.2676346 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084206485
116 rdf:type schema:CreativeWork
117 https://doi.org/10.1109/tip.2017.2699791 schema:sameAs https://app.dimensions.ai/details/publication/pub.1085304034
118 rdf:type schema:CreativeWork
119 https://doi.org/10.1109/tpami.2016.2537330 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061745047
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1109/tpami.2016.2609928 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061745160
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1109/tpami.2018.2864965 schema:sameAs https://app.dimensions.ai/details/publication/pub.1106137171
124 rdf:type schema:CreativeWork
125 https://www.grid.ac/institutes/grid.464369.a schema:alternateName Liaoning Technical University
126 schema:name Centre for Image and Visual Information Calculating Research, Liaoning Technical University, 125105, Huludao, China
127 rdf:type schema:Organization
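
The triple view above can be reproduced programmatically. A minimal sketch with rdflib (assuming rdflib 6 or later, and that the schema: prefix expands to http://schema.org/ as in the SciGraph context):

import requests
from rdflib import Graph, URIRef

URL = "https://scigraph.springernature.com/pub.10.1007/s00530-019-00603-1"

# Fetch the N-Triples serialization and load it into an rdflib graph.
resp = requests.get(URL, headers={"Accept": "application/n-triples"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="nt")

print(len(g), "triples")                       # should match the count reported above
print(len(set(g.predicates())), "predicates")

# List every work cited by this article (assumes schema: -> http://schema.org/).
for _, _, cited in g.triples((None, URIRef("http://schema.org/citation"), None)):
    print(cited)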
 



