Video Super-resolution using Edge-based Optical Flow and Intensity Prediction


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2018-12

AUTHORS

Jen-Wen Wang, Ching-Te Chiu

ABSTRACT

Full-image motion prediction is widely used in video super-resolution (VSR); it yields outstanding outputs for arbitrary scenes but incurs a huge time cost. In this paper, we propose an edge-based motion and intensity prediction scheme that reduces the computation cost while simultaneously maintaining good quality. The key to reducing the computation cost is to focus on extracted edges rather than the whole frame when finding the motion vectors (optical flow) of the video sequence, in accordance with the human visual system (HVS). Bi-directional optical flow is usually adopted to increase prediction accuracy, but it also increases computation time. Here we propose to obtain the backward flow from the foregoing forward-flow prediction, which effectively saves this heavy load. We perform a series of experiments and comparisons between existing VSR methods and our proposed edge-based method on different sequences and upscaling factors. The results reveal that our proposed scheme successfully preserves the quality of the super-resolved sequences while achieving about a 4x speed-up in computation time.
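The backward-flow reuse described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration of the general idea (for each pixel x with forward vector f(x), the backward flow at the target x + f(x) is approximately -f(x)), not the authors' exact algorithm; the nearest-neighbour splatting below is a simplification.

```python
import numpy as np

def backward_from_forward(flow_fwd):
    """Approximate the backward flow field from a dense forward flow field.

    For each pixel (y, x) with forward vector (dx, dy), splat the negated
    vector onto the rounded target location (y + dy, x + dx). Targets that
    fall outside the frame are dropped. Hypothetical sketch, not the
    paper's implementation.
    """
    h, w, _ = flow_fwd.shape
    flow_bwd = np.zeros_like(flow_fwd)
    for y in range(h):
        for x in range(w):
            dx, dy = flow_fwd[y, x]
            tx = int(round(x + dx))
            ty = int(round(y + dy))
            if 0 <= tx < w and 0 <= ty < h:
                flow_bwd[ty, tx] = -flow_fwd[y, x]
    return flow_bwd

# A uniform shift of (+2, +1): every pixel moves 2 right and 1 down.
fwd = np.zeros((8, 8, 2))
fwd[..., 0] = 2.0
fwd[..., 1] = 1.0
bwd = backward_from_forward(fwd)
print(bwd[5, 5])  # → [-2. -1.]
```

This reuse step is what lets the method skip a second full optical-flow pass: the backward field is derived from the forward one rather than re-estimated.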

PAGES

1699-1711

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s11265-017-1310-2

DOI

http://dx.doi.org/10.1007/s11265-017-1310-2

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1099705767



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record in an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "National Tsing Hua University", 
          "id": "https://www.grid.ac/institutes/grid.38348.34", 
          "name": [
            "Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wang", 
        "givenName": "Jen-Wen", 
        "id": "sg:person.015277747535.60", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015277747535.60"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "National Tsing Hua University", 
          "id": "https://www.grid.ac/institutes/grid.38348.34", 
          "name": [
            "Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Chiu", 
        "givenName": "Ching-Te", 
        "id": "sg:person.016261545177.29", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016261545177.29"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/3-540-61123-1_171", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1009407696", 
          "https://doi.org/10.1007/3-540-61123-1_171"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/b978-0-08-050753-8.50042-5", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1034256146"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/bf00985891", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1040551933", 
          "https://doi.org/10.1007/bf00985891"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1049/iet-ipr.2010.0430", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1056829025"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1049/iet-ipr.2010.0489", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1056829029"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/29.56062", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061144638"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/83.650116", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061239669"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/lsp.2009.2028106", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061377569"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/lsp.2010.2059700", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061377799"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/lsp.2014.2332118", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061378811"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tassp.1978.1163154", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061518481"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tcsvt.2010.2087454", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061575635"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tcsvt.2012.2201669", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061575938"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tcsvt.2013.2242631", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061576075"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2004.834669", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061641068"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2008.2008067", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642010"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2009.2012906", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642158"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2009.2023703", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642250"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2010.2041408", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642425"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2010.2042115", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642434"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2010.2045707", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642478"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2011.2106793", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061642744"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2011.2173204", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061643036"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tip.2012.2192127", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061643185"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tmi.1983.4307610", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061694052"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tnnls.2013.2262001", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061718324"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tnnls.2013.2281313", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061718420"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2013.127", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061744445"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1587/transinf.e96.d.1569", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1068094645"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/acssc.2012.6489325", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094168801"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/sitis.2013.41", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094567525"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/acssc.2012.6489318", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094944619"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/globalsip.2014.7032279", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095062388"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/sips.2013.6674482", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095678171"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/icip.2008.4711835", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1095812571"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2018-12", 
    "datePublishedReg": "2018-12-01", 
    "description": "Full-image based motion prediction is widely used in video super-resolution (VSR) that results outstanding outputs with arbitrary scenes but costs huge time complexity. In this paper, we propose an edge-based motion and intensity prediction scheme to reduce the computation cost while maintain good enough quality simultaneously. The key point of reducing computation cost is to focus on extracted edges rather than the whole frame when finding motion vectors (optical flow) of the video sequence in accordance with human vision system (HVS). Bi-directional optical flow is usually adopted to increase the prediction accuracy but it also increase the computation time. Here we propose to obtain the backward flow from foregoing forward flow prediction which effectively save the heavy load. We perform a series of experiments and comparisons between existed VSR methods and our proposed edge-based method with different sequences and upscaling factors. The results reveal that our proposed scheme can successfully keep the super-resolved sequence quality and get about 4x speed up in computation time.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s11265-017-1310-2", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1297359", 
        "issn": [
          "0922-5773", 
          "1939-8115"
        ], 
        "name": "Journal of Signal Processing Systems", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "12", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "90"
      }
    ], 
    "name": "Video Super-resolution using Edge-based Optical Flow and Intensity Prediction", 
    "pagination": "1699-1711", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "16e1a2fce2caa4188540e2d612aa2b9d79d2742bf3043446d8ed452c42e232ea"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s11265-017-1310-2"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1099705767"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s11265-017-1310-2", 
      "https://app.dimensions.ai/details/publication/pub.1099705767"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-10T15:09", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000001_0000000264/records_8663_00000559.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://link.springer.com/10.1007%2Fs11265-017-1310-2"
  }
]
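Because JSON-LD is plain JSON, the record above can be read with the standard library alone. The sketch below embeds a trimmed copy of the record for illustration and pulls out a few fields; the trimming is mine, the field names and values come from the record.

```python
import json

# Trimmed excerpt of the SciGraph record shown above.
record_jsonld = '''
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "id": "sg:pub.10.1007/s11265-017-1310-2",
    "name": "Video Super-resolution using Edge-based Optical Flow and Intensity Prediction",
    "datePublished": "2018-12",
    "pagination": "1699-1711",
    "type": "ScholarlyArticle"
  }
]
'''

records = json.loads(record_jsonld)  # a JSON-LD document is a JSON array/object
article = records[0]
print(article["name"])        # the article title
print(article["pagination"])  # 1699-1711
```

For full linked-data processing (expansion, compaction, conversion to triples) a dedicated JSON-LD processor would be needed, but simple field extraction needs nothing beyond `json`.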
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11265-017-1310-2'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11265-017-1310-2'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11265-017-1310-2'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11265-017-1310-2'
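The same content negotiation works from any HTTP client, not just curl. A small Python sketch using only the standard library (the helper name `scigraph_request` is mine):

```python
from urllib.request import Request, urlopen

def scigraph_request(doi, accept="application/ld+json"):
    """Build a content-negotiated request for a SciGraph publication record.

    Mirrors the curl examples above: the Accept header selects the
    serialization (JSON-LD, N-Triples, Turtle, or RDF/XML).
    """
    url = f"https://scigraph.springernature.com/pub.{doi}"
    return Request(url, headers={"Accept": accept})

req = scigraph_request("10.1007/s11265-017-1310-2")
print(req.full_url)
print(req.get_header("Accept"))  # application/ld+json
# To actually download the record (requires network access):
# data = urlopen(req).read().decode("utf-8")
```

Swapping the `accept` argument to `application/n-triples`, `text/turtle`, or `application/rdf+xml` selects the other formats listed above.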


 

This table displays all metadata directly associated with this object as RDF triples.

175 TRIPLES      21 PREDICATES      62 URIs      19 LITERALS      7 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s11265-017-1310-2 schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N3425db4e008f45eabf79b7c07258325f
4 schema:citation sg:pub.10.1007/3-540-61123-1_171
5 sg:pub.10.1007/bf00985891
6 https://doi.org/10.1016/b978-0-08-050753-8.50042-5
7 https://doi.org/10.1049/iet-ipr.2010.0430
8 https://doi.org/10.1049/iet-ipr.2010.0489
9 https://doi.org/10.1109/29.56062
10 https://doi.org/10.1109/83.650116
11 https://doi.org/10.1109/acssc.2012.6489318
12 https://doi.org/10.1109/acssc.2012.6489325
13 https://doi.org/10.1109/globalsip.2014.7032279
14 https://doi.org/10.1109/icip.2008.4711835
15 https://doi.org/10.1109/lsp.2009.2028106
16 https://doi.org/10.1109/lsp.2010.2059700
17 https://doi.org/10.1109/lsp.2014.2332118
18 https://doi.org/10.1109/sips.2013.6674482
19 https://doi.org/10.1109/sitis.2013.41
20 https://doi.org/10.1109/tassp.1978.1163154
21 https://doi.org/10.1109/tcsvt.2010.2087454
22 https://doi.org/10.1109/tcsvt.2012.2201669
23 https://doi.org/10.1109/tcsvt.2013.2242631
24 https://doi.org/10.1109/tip.2004.834669
25 https://doi.org/10.1109/tip.2008.2008067
26 https://doi.org/10.1109/tip.2009.2012906
27 https://doi.org/10.1109/tip.2009.2023703
28 https://doi.org/10.1109/tip.2010.2041408
29 https://doi.org/10.1109/tip.2010.2042115
30 https://doi.org/10.1109/tip.2010.2045707
31 https://doi.org/10.1109/tip.2011.2106793
32 https://doi.org/10.1109/tip.2011.2173204
33 https://doi.org/10.1109/tip.2012.2192127
34 https://doi.org/10.1109/tmi.1983.4307610
35 https://doi.org/10.1109/tnnls.2013.2262001
36 https://doi.org/10.1109/tnnls.2013.2281313
37 https://doi.org/10.1109/tpami.2013.127
38 https://doi.org/10.1587/transinf.e96.d.1569
39 schema:datePublished 2018-12
40 schema:datePublishedReg 2018-12-01
41 schema:description Full-image based motion prediction is widely used in video super-resolution (VSR) that results outstanding outputs with arbitrary scenes but costs huge time complexity. In this paper, we propose an edge-based motion and intensity prediction scheme to reduce the computation cost while maintain good enough quality simultaneously. The key point of reducing computation cost is to focus on extracted edges rather than the whole frame when finding motion vectors (optical flow) of the video sequence in accordance with human vision system (HVS). Bi-directional optical flow is usually adopted to increase the prediction accuracy but it also increase the computation time. Here we propose to obtain the backward flow from foregoing forward flow prediction which effectively save the heavy load. We perform a series of experiments and comparisons between existed VSR methods and our proposed edge-based method with different sequences and upscaling factors. The results reveal that our proposed scheme can successfully keep the super-resolved sequence quality and get about 4x speed up in computation time.
42 schema:genre research_article
43 schema:inLanguage en
44 schema:isAccessibleForFree false
45 schema:isPartOf N2aae862f01b94a9da48eabd63875b036
46 Ne6d0ec79d1cf43189a1b655950e3e2d3
47 sg:journal.1297359
48 schema:name Video Super-resolution using Edge-based Optical Flow and Intensity Prediction
49 schema:pagination 1699-1711
50 schema:productId N640488f0ac1b4150a438bc373596e58f
51 Nb8ffbbe3b2fe46569174811b8728d08c
52 Nef181134a32c42b7a12d1d12aa0eb58d
53 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099705767
54 https://doi.org/10.1007/s11265-017-1310-2
55 schema:sdDatePublished 2019-04-10T15:09
56 schema:sdLicense https://scigraph.springernature.com/explorer/license/
57 schema:sdPublisher N19716c4a04e5466caa23229d6038c3d1
58 schema:url https://link.springer.com/10.1007%2Fs11265-017-1310-2
59 sgo:license sg:explorer/license/
60 sgo:sdDataset articles
61 rdf:type schema:ScholarlyArticle
62 N0d83eb4e93024b17ab683f0af8ab879e rdf:first sg:person.016261545177.29
63 rdf:rest rdf:nil
64 N19716c4a04e5466caa23229d6038c3d1 schema:name Springer Nature - SN SciGraph project
65 rdf:type schema:Organization
66 N2aae862f01b94a9da48eabd63875b036 schema:issueNumber 12
67 rdf:type schema:PublicationIssue
68 N3425db4e008f45eabf79b7c07258325f rdf:first sg:person.015277747535.60
69 rdf:rest N0d83eb4e93024b17ab683f0af8ab879e
70 N640488f0ac1b4150a438bc373596e58f schema:name readcube_id
71 schema:value 16e1a2fce2caa4188540e2d612aa2b9d79d2742bf3043446d8ed452c42e232ea
72 rdf:type schema:PropertyValue
73 Nb8ffbbe3b2fe46569174811b8728d08c schema:name doi
74 schema:value 10.1007/s11265-017-1310-2
75 rdf:type schema:PropertyValue
76 Ne6d0ec79d1cf43189a1b655950e3e2d3 schema:volumeNumber 90
77 rdf:type schema:PublicationVolume
78 Nef181134a32c42b7a12d1d12aa0eb58d schema:name dimensions_id
79 schema:value pub.1099705767
80 rdf:type schema:PropertyValue
81 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
82 schema:name Information and Computing Sciences
83 rdf:type schema:DefinedTerm
84 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
85 schema:name Artificial Intelligence and Image Processing
86 rdf:type schema:DefinedTerm
87 sg:journal.1297359 schema:issn 0922-5773
88 1939-8115
89 schema:name Journal of Signal Processing Systems
90 rdf:type schema:Periodical
91 sg:person.015277747535.60 schema:affiliation https://www.grid.ac/institutes/grid.38348.34
92 schema:familyName Wang
93 schema:givenName Jen-Wen
94 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015277747535.60
95 rdf:type schema:Person
96 sg:person.016261545177.29 schema:affiliation https://www.grid.ac/institutes/grid.38348.34
97 schema:familyName Chiu
98 schema:givenName Ching-Te
99 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016261545177.29
100 rdf:type schema:Person
101 sg:pub.10.1007/3-540-61123-1_171 schema:sameAs https://app.dimensions.ai/details/publication/pub.1009407696
102 https://doi.org/10.1007/3-540-61123-1_171
103 rdf:type schema:CreativeWork
104 sg:pub.10.1007/bf00985891 schema:sameAs https://app.dimensions.ai/details/publication/pub.1040551933
105 https://doi.org/10.1007/bf00985891
106 rdf:type schema:CreativeWork
107 https://doi.org/10.1016/b978-0-08-050753-8.50042-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034256146
108 rdf:type schema:CreativeWork
109 https://doi.org/10.1049/iet-ipr.2010.0430 schema:sameAs https://app.dimensions.ai/details/publication/pub.1056829025
110 rdf:type schema:CreativeWork
111 https://doi.org/10.1049/iet-ipr.2010.0489 schema:sameAs https://app.dimensions.ai/details/publication/pub.1056829029
112 rdf:type schema:CreativeWork
113 https://doi.org/10.1109/29.56062 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061144638
114 rdf:type schema:CreativeWork
115 https://doi.org/10.1109/83.650116 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061239669
116 rdf:type schema:CreativeWork
117 https://doi.org/10.1109/acssc.2012.6489318 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094944619
118 rdf:type schema:CreativeWork
119 https://doi.org/10.1109/acssc.2012.6489325 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094168801
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1109/globalsip.2014.7032279 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095062388
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1109/icip.2008.4711835 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095812571
124 rdf:type schema:CreativeWork
125 https://doi.org/10.1109/lsp.2009.2028106 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061377569
126 rdf:type schema:CreativeWork
127 https://doi.org/10.1109/lsp.2010.2059700 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061377799
128 rdf:type schema:CreativeWork
129 https://doi.org/10.1109/lsp.2014.2332118 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061378811
130 rdf:type schema:CreativeWork
131 https://doi.org/10.1109/sips.2013.6674482 schema:sameAs https://app.dimensions.ai/details/publication/pub.1095678171
132 rdf:type schema:CreativeWork
133 https://doi.org/10.1109/sitis.2013.41 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094567525
134 rdf:type schema:CreativeWork
135 https://doi.org/10.1109/tassp.1978.1163154 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061518481
136 rdf:type schema:CreativeWork
137 https://doi.org/10.1109/tcsvt.2010.2087454 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061575635
138 rdf:type schema:CreativeWork
139 https://doi.org/10.1109/tcsvt.2012.2201669 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061575938
140 rdf:type schema:CreativeWork
141 https://doi.org/10.1109/tcsvt.2013.2242631 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061576075
142 rdf:type schema:CreativeWork
143 https://doi.org/10.1109/tip.2004.834669 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061641068
144 rdf:type schema:CreativeWork
145 https://doi.org/10.1109/tip.2008.2008067 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642010
146 rdf:type schema:CreativeWork
147 https://doi.org/10.1109/tip.2009.2012906 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642158
148 rdf:type schema:CreativeWork
149 https://doi.org/10.1109/tip.2009.2023703 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642250
150 rdf:type schema:CreativeWork
151 https://doi.org/10.1109/tip.2010.2041408 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642425
152 rdf:type schema:CreativeWork
153 https://doi.org/10.1109/tip.2010.2042115 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642434
154 rdf:type schema:CreativeWork
155 https://doi.org/10.1109/tip.2010.2045707 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642478
156 rdf:type schema:CreativeWork
157 https://doi.org/10.1109/tip.2011.2106793 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061642744
158 rdf:type schema:CreativeWork
159 https://doi.org/10.1109/tip.2011.2173204 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061643036
160 rdf:type schema:CreativeWork
161 https://doi.org/10.1109/tip.2012.2192127 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061643185
162 rdf:type schema:CreativeWork
163 https://doi.org/10.1109/tmi.1983.4307610 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061694052
164 rdf:type schema:CreativeWork
165 https://doi.org/10.1109/tnnls.2013.2262001 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061718324
166 rdf:type schema:CreativeWork
167 https://doi.org/10.1109/tnnls.2013.2281313 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061718420
168 rdf:type schema:CreativeWork
169 https://doi.org/10.1109/tpami.2013.127 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061744445
170 rdf:type schema:CreativeWork
171 https://doi.org/10.1587/transinf.e96.d.1569 schema:sameAs https://app.dimensions.ai/details/publication/pub.1068094645
172 rdf:type schema:CreativeWork
173 https://www.grid.ac/institutes/grid.38348.34 schema:alternateName National Tsing Hua University
174 schema:name Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
175 rdf:type schema:Organization
 



