DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-08-05

AUTHORS

Giorgos Kordopatis-Zilos, Christos Tzelepis, Symeon Papadopoulos, Ioannis Kompatsiaris, Ioannis Patras

ABSTRACT

In this paper, we address the problem of high performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically propose either: (i) fine-grained approaches employing spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost or (ii) coarse-grained approaches representing/indexing videos as global vectors, where the spatio-temporal structure is lost, providing low performance but also having low computational cost. In this work, we propose a Knowledge Distillation framework, called Distill-and-Select (DnS), that, starting from a well-performing fine-grained Teacher Network, learns: (a) Student Networks at different retrieval performance and computational efficiency trade-offs and (b) a Selector Network that, at test time, rapidly directs samples to the appropriate student to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures and arrive at different trade-offs of performance and efficiency, i.e., speed and storage requirements, including fine-grained students that store/index videos using binary representations. Importantly, the proposed scheme allows Knowledge Distillation in large, unlabelled datasets; this leads to good students. We evaluate DnS on five public datasets on three different video retrieval tasks and demonstrate (a) that our students achieve state-of-the-art performance in several cases and (b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, the proposed method achieves similar mAP to the teacher but is 20 times faster and requires 240 times less storage space. The collected dataset and implementation are publicly available: https://github.com/mever-team/distill-and-select.

PAGES

2385-2407

References to SciGraph publications

  • 2018-10-07. Modality Distillation with Multiple Stream Networks for Action Recognition in COMPUTER VISION – ECCV 2018
  • 2018-10-09. Graph Distillation for Action Detection with Privileged Modalities in COMPUTER VISION – ECCV 2018
  • 2016-12-02. An image-based near-duplicate video retrieval and localization using improved Edit distance in MULTIMEDIA TOOLS AND APPLICATIONS
  • 2012. Negative Evidences and Co-occurences in Image Retrieval: The Benefit of PCA and Whitening in COMPUTER VISION – ECCV 2012
  • 2016-12-31. Compact CNN Based Video Representation for Efficient Video Copy Detection in MULTIMEDIA MODELING
  • 2020-11-03. Attention-Based Query Expansion Learning in COMPUTER VISION – ECCV 2020
  • 2016-12-31. Near-Duplicate Video Retrieval by Aggregating Intermediate CNN Layers in MULTIMEDIA MODELING
  • 2021-03-22. Knowledge Distillation: A Survey in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-05-04. Multiscale video sequence matching for near-duplicate detection and retrieval in MULTIMEDIA TOOLS AND APPLICATIONS
  • 2019-12-24. An Efficient Hierarchical Near-Duplicate Video Detection Algorithm Based on Deep Semantic Features in MULTIMEDIA MODELING
  • 2014. VCDB: A Large-Scale Database for Partial Copy Detection in Videos in COMPUTER VISION – ECCV 2014
  • 2018-10-09. Video Re-localization in COMPUTER VISION – ECCV 2018
  • Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3

    DOI

    http://dx.doi.org/10.1007/s11263-022-01651-3

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1150025283



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Queen Mary University of London, Mile End Road, E1 4NS, London, UK", 
              "id": "http://www.grid.ac/institutes/grid.4868.2", 
              "name": [
                "Information Technologies Institute, Centre for Research and Technology Hellas, Thessalon\u00edki, Greece", 
                "Queen Mary University of London, Mile End Road, E1 4NS, London, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kordopatis-Zilos", 
            "givenName": "Giorgos", 
            "id": "sg:person.013210773377.30", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013210773377.30"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Queen Mary University of London, Mile End Road, E1 4NS, London, UK", 
              "id": "http://www.grid.ac/institutes/grid.4868.2", 
              "name": [
                "Queen Mary University of London, Mile End Road, E1 4NS, London, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Tzelepis", 
            "givenName": "Christos", 
            "id": "sg:person.015035157403.08", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015035157403.08"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Information Technologies Institute, Centre for Research and Technology Hellas, Thessalon\u00edki, Greece", 
              "id": "http://www.grid.ac/institutes/grid.423747.1", 
              "name": [
                "Information Technologies Institute, Centre for Research and Technology Hellas, Thessalon\u00edki, Greece"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Papadopoulos", 
            "givenName": "Symeon", 
            "id": "sg:person.0706132635.30", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0706132635.30"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Information Technologies Institute, Centre for Research and Technology Hellas, Thessalon\u00edki, Greece", 
              "id": "http://www.grid.ac/institutes/grid.423747.1", 
              "name": [
                "Information Technologies Institute, Centre for Research and Technology Hellas, Thessalon\u00edki, Greece"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Kompatsiaris", 
            "givenName": "Ioannis", 
            "id": "sg:person.01102744015.20", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01102744015.20"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Queen Mary University of London, Mile End Road, E1 4NS, London, UK", 
              "id": "http://www.grid.ac/institutes/grid.4868.2", 
              "name": [
                "Queen Mary University of London, Mile End Road, E1 4NS, London, UK"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Patras", 
            "givenName": "Ioannis", 
            "id": "sg:person.013746631002.23", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013746631002.23"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-51811-4_21", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1051108242", 
              "https://doi.org/10.1007/978-3-319-51811-4_21"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01453-z", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1136588160", 
              "https://doi.org/10.1007/s11263-021-01453-z"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-642-33709-3_55", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1053473906", 
              "https://doi.org/10.1007/978-3-642-33709-3_55"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10593-2_24", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1036132330", 
              "https://doi.org/10.1007/978-3-319-10593-2_24"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-37731-1_61", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1123689741", 
              "https://doi.org/10.1007/978-3-030-37731-1_61"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01237-3_7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463338", 
              "https://doi.org/10.1007/978-3-030-01237-3_7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01264-9_4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107502741", 
              "https://doi.org/10.1007/978-3-030-01264-9_4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58604-1_11", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132270011", 
              "https://doi.org/10.1007/978-3-030-58604-1_11"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11042-018-5862-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1103788044", 
              "https://doi.org/10.1007/s11042-018-5862-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01264-9_11", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107502710", 
              "https://doi.org/10.1007/978-3-030-01264-9_11"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-51811-4_47", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1051770638", 
              "https://doi.org/10.1007/978-3-319-51811-4_47"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11042-016-4176-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1015210096", 
              "https://doi.org/10.1007/s11042-016-4176-6"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2022-08-05", 
        "datePublishedReg": "2022-08-05", 
        "description": "In this paper, we address the problem of high performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically propose either: (i) fine-grained approaches employing spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost or (ii) coarse-grained approaches representing/indexing videos as global vectors, where the spatio-temporal structure is lost, providing low performance but also having low computational cost. In this work, we propose a Knowledge Distillation framework, called Distill-and-Select (DnS), that starting from a well-performing fine-grained Teacher Network learns: (a) Student Networks at different retrieval performance and computational efficiency trade-offs and (b) a Selector Network that at test time rapidly directs samples to the appropriate student to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures and arrive at different trade-offs of performance and efficiency, i.e., speed and storage requirements, including fine-grained students that store/index videos using binary representations. Importantly, the proposed scheme allows Knowledge Distillation in large, unlabelled datasets\u2014this leads to good students. We evaluate DnS on five public datasets on three different video retrieval tasks and demonstrate (a) that our students achieve state-of-the-art performance in several cases and (b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, the proposed method achieves similar mAP with the teacher but is 20 times faster and requires 240 times less storage space. The collected dataset and implementation are publicly available: https://github.com/mever-team/distill-and-select.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-022-01651-3", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.9244762", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.9385254", 
            "type": "MonetaryGrant"
          }, 
          {
            "id": "sg:grant.7443904", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "10", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "130"
          }
        ], 
        "keywords": [
          "retrieval performance", 
          "storage space", 
          "content-based video retrieval", 
          "video retrieval tasks", 
          "computational cost", 
          "times less storage space", 
          "high retrieval performance", 
          "large-scale datasets", 
          "knowledge distillation framework", 
          "computational efficiency", 
          "less storage space", 
          "spatio-temporal representation", 
          "high computational cost", 
          "high performance", 
          "low computational cost", 
          "video retrieval", 
          "indexing video", 
          "video indexing", 
          "index videos", 
          "retrieval tasks", 
          "high computational efficiency", 
          "knowledge distillation", 
          "distillation framework", 
          "art performance", 
          "public datasets", 
          "similarity calculation", 
          "teacher network", 
          "student network", 
          "storage requirements", 
          "computational speed", 
          "collected dataset", 
          "different architectures", 
          "binary representation", 
          "global vector", 
          "dataset", 
          "network", 
          "video", 
          "retrieval", 
          "test time", 
          "spatio-temporal structure", 
          "low performance", 
          "framework", 
          "indexing", 
          "performance", 
          "representation", 
          "current methods", 
          "architecture", 
          "distills", 
          "specific configuration", 
          "task", 
          "cost", 
          "efficiency", 
          "implementation", 
          "speed", 
          "selects", 
          "scheme", 
          "space", 
          "requirements", 
          "method", 
          "maps", 
          "vector", 
          "time", 
          "demonstrate", 
          "work", 
          "students", 
          "configuration", 
          "similar maps", 
          "DN", 
          "state", 
          "structure", 
          "cases", 
          "distillation", 
          "best students", 
          "teachers", 
          "calculations", 
          "samples", 
          "approach", 
          "paper", 
          "problem", 
          "appropriate students"
        ], 
        "name": "DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval", 
        "pagination": "2385-2407", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1150025283"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-022-01651-3"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-022-01651-3", 
          "https://app.dimensions.ai/details/publication/pub.1150025283"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-11-24T21:09", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221124/entities/gbq_results/article/article_939.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-022-01651-3"
      }
    ]
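Since JSON-LD is plain JSON, the record above can be consumed with nothing but the standard library. A minimal sketch, assuming the field layout shown above (`record_json` is a trimmed stand-in for the full record, not the complete document):

```python
import json

# Trimmed stand-in for the SciGraph JSON-LD record shown above;
# field names ("name", "datePublished", "author") match the real record.
record_json = """
[
  {
    "name": "DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval",
    "datePublished": "2022-08-05",
    "author": [
      {"givenName": "Giorgos", "familyName": "Kordopatis-Zilos", "type": "Person"},
      {"givenName": "Christos", "familyName": "Tzelepis", "type": "Person"}
    ]
  }
]
"""

record = json.loads(record_json)[0]  # SciGraph wraps the record in a one-element list
title = record["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
print(title)
print(", ".join(authors))
```

The same pattern extends to the other keys (`citation`, `isPartOf`, `productId`); each is a list of small objects keyed by schema.org property names.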
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3'
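    All four curl commands above hit the same URL and select the serialization purely via HTTP content negotiation (the `Accept` header). A stdlib-only Python sketch of the same idea; `fetch_scigraph` and `FORMATS` are illustrative names, not part of any SciGraph API:

    ```python
    import urllib.request

    # MIME type per serialization, matching the curl examples above.
    FORMATS = {
        "json-ld": "application/ld+json",
        "n-triples": "application/n-triples",
        "turtle": "text/turtle",
        "rdf-xml": "application/rdf+xml",
    }

    def fetch_scigraph(pub_uri: str, fmt: str = "json-ld") -> bytes:
        """Fetch one serialization of a SciGraph record via the Accept header."""
        req = urllib.request.Request(pub_uri, headers={"Accept": FORMATS[fmt]})
        with urllib.request.urlopen(req) as resp:  # performs the network call
            return resp.read()

    # e.g. fetch_scigraph("https://scigraph.springernature.com/pub.10.1007/s11263-022-01651-3", "turtle")
    ```

    The server decides the response format from the header, so one endpoint serves all four representations.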


     

    This table displays all metadata directly associated to this object as RDF triples.

    223 TRIPLES      21 PREDICATES      116 URIs      96 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-022-01651-3 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N5f66ffc8d2fd45c080b8e0d77e49388d
    4 schema:citation sg:pub.10.1007/978-3-030-01237-3_7
    5 sg:pub.10.1007/978-3-030-01264-9_11
    6 sg:pub.10.1007/978-3-030-01264-9_4
    7 sg:pub.10.1007/978-3-030-37731-1_61
    8 sg:pub.10.1007/978-3-030-58604-1_11
    9 sg:pub.10.1007/978-3-319-10593-2_24
    10 sg:pub.10.1007/978-3-319-51811-4_21
    11 sg:pub.10.1007/978-3-319-51811-4_47
    12 sg:pub.10.1007/978-3-642-33709-3_55
    13 sg:pub.10.1007/s11042-016-4176-6
    14 sg:pub.10.1007/s11042-018-5862-3
    15 sg:pub.10.1007/s11263-021-01453-z
    16 schema:datePublished 2022-08-05
    17 schema:datePublishedReg 2022-08-05
    18 schema:description In this paper, we address the problem of high performance and computationally efficient content-based video retrieval in large-scale datasets. Current methods typically propose either: (i) fine-grained approaches employing spatio-temporal representations and similarity calculations, achieving high performance at a high computational cost or (ii) coarse-grained approaches representing/indexing videos as global vectors, where the spatio-temporal structure is lost, providing low performance but also having low computational cost. In this work, we propose a Knowledge Distillation framework, called Distill-and-Select (DnS), that starting from a well-performing fine-grained Teacher Network learns: (a) Student Networks at different retrieval performance and computational efficiency trade-offs and (b) a Selector Network that at test time rapidly directs samples to the appropriate student to maintain both high retrieval performance and high computational efficiency. We train several students with different architectures and arrive at different trade-offs of performance and efficiency, i.e., speed and storage requirements, including fine-grained students that store/index videos using binary representations. Importantly, the proposed scheme allows Knowledge Distillation in large, unlabelled datasets—this leads to good students. We evaluate DnS on five public datasets on three different video retrieval tasks and demonstrate (a) that our students achieve state-of-the-art performance in several cases and (b) that the DnS framework provides an excellent trade-off between retrieval performance, computational speed, and storage space. In specific configurations, the proposed method achieves similar mAP with the teacher but is 20 times faster and requires 240 times less storage space. The collected dataset and implementation are publicly available: https://github.com/mever-team/distill-and-select.
    19 schema:genre article
    20 schema:isAccessibleForFree true
    21 schema:isPartOf Nda27bce1aace428a82d2d6ae0561687b
    22 Ne692d3f4f64747a38371691a1819195d
    23 sg:journal.1032807
    24 schema:keywords DN
    25 approach
    26 appropriate students
    27 architecture
    28 art performance
    29 best students
    30 binary representation
    31 calculations
    32 cases
    33 collected dataset
    34 computational cost
    35 computational efficiency
    36 computational speed
    37 configuration
    38 content-based video retrieval
    39 cost
    40 current methods
    41 dataset
    42 demonstrate
    43 different architectures
    44 distillation
    45 distillation framework
    46 distills
    47 efficiency
    48 framework
    49 global vector
    50 high computational cost
    51 high computational efficiency
    52 high performance
    53 high retrieval performance
    54 implementation
    55 index videos
    56 indexing
    57 indexing video
    58 knowledge distillation
    59 knowledge distillation framework
    60 large-scale datasets
    61 less storage space
    62 low computational cost
    63 low performance
    64 maps
    65 method
    66 network
    67 paper
    68 performance
    69 problem
    70 public datasets
    71 representation
    72 requirements
    73 retrieval
    74 retrieval performance
    75 retrieval tasks
    76 samples
    77 scheme
    78 selects
    79 similar maps
    80 similarity calculation
    81 space
    82 spatio-temporal representation
    83 spatio-temporal structure
    84 specific configuration
    85 speed
    86 state
    87 storage requirements
    88 storage space
    89 structure
    90 student network
    91 students
    92 task
    93 teacher network
    94 teachers
    95 test time
    96 time
    97 times less storage space
    98 vector
    99 video
    100 video indexing
    101 video retrieval
    102 video retrieval tasks
    103 work
    104 schema:name DnS: Distill-and-Select for Efficient and Accurate Video Indexing and Retrieval
    105 schema:pagination 2385-2407
    106 schema:productId N36784e8120254d24a67867c825cf918d
    107 Ne5dfde0288444fd4817a5bcd79d51ccb
    108 schema:sameAs https://app.dimensions.ai/details/publication/pub.1150025283
    109 https://doi.org/10.1007/s11263-022-01651-3
    110 schema:sdDatePublished 2022-11-24T21:09
    111 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    112 schema:sdPublisher Na3cd54f99ee24989b25fc36234d2dfd8
    113 schema:url https://doi.org/10.1007/s11263-022-01651-3
    114 sgo:license sg:explorer/license/
    115 sgo:sdDataset articles
    116 rdf:type schema:ScholarlyArticle
    117 N36784e8120254d24a67867c825cf918d schema:name doi
    118 schema:value 10.1007/s11263-022-01651-3
    119 rdf:type schema:PropertyValue
    120 N5f66ffc8d2fd45c080b8e0d77e49388d rdf:first sg:person.013210773377.30
    121 rdf:rest N832c414689ed466093e665256291fd7b
    122 N832c414689ed466093e665256291fd7b rdf:first sg:person.015035157403.08
    123 rdf:rest N8848ff4833164133875c0e6b3075519d
    124 N8848ff4833164133875c0e6b3075519d rdf:first sg:person.0706132635.30
    125 rdf:rest Nf287f6196c8242ceadd622ff7df504d7
    126 N9f17a9d77228419c894b4537be37e4e0 rdf:first sg:person.013746631002.23
    127 rdf:rest rdf:nil
    128 Na3cd54f99ee24989b25fc36234d2dfd8 schema:name Springer Nature - SN SciGraph project
    129 rdf:type schema:Organization
    130 Nda27bce1aace428a82d2d6ae0561687b schema:issueNumber 10
    131 rdf:type schema:PublicationIssue
    132 Ne5dfde0288444fd4817a5bcd79d51ccb schema:name dimensions_id
    133 schema:value pub.1150025283
    134 rdf:type schema:PropertyValue
    135 Ne692d3f4f64747a38371691a1819195d schema:volumeNumber 130
    136 rdf:type schema:PublicationVolume
    137 Nf287f6196c8242ceadd622ff7df504d7 rdf:first sg:person.01102744015.20
    138 rdf:rest N9f17a9d77228419c894b4537be37e4e0
    139 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    140 schema:name Information and Computing Sciences
    141 rdf:type schema:DefinedTerm
    142 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    143 schema:name Artificial Intelligence and Image Processing
    144 rdf:type schema:DefinedTerm
    145 sg:grant.7443904 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01651-3
    146 rdf:type schema:MonetaryGrant
    147 sg:grant.9244762 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01651-3
    148 rdf:type schema:MonetaryGrant
    149 sg:grant.9385254 http://pending.schema.org/fundedItem sg:pub.10.1007/s11263-022-01651-3
    150 rdf:type schema:MonetaryGrant
    151 sg:journal.1032807 schema:issn 0920-5691
    152 1573-1405
    153 schema:name International Journal of Computer Vision
    154 schema:publisher Springer Nature
    155 rdf:type schema:Periodical
    156 sg:person.01102744015.20 schema:affiliation grid-institutes:grid.423747.1
    157 schema:familyName Kompatsiaris
    158 schema:givenName Ioannis
    159 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01102744015.20
    160 rdf:type schema:Person
    161 sg:person.013210773377.30 schema:affiliation grid-institutes:grid.4868.2
    162 schema:familyName Kordopatis-Zilos
    163 schema:givenName Giorgos
    164 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013210773377.30
    165 rdf:type schema:Person
    166 sg:person.013746631002.23 schema:affiliation grid-institutes:grid.4868.2
    167 schema:familyName Patras
    168 schema:givenName Ioannis
    169 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013746631002.23
    170 rdf:type schema:Person
    171 sg:person.015035157403.08 schema:affiliation grid-institutes:grid.4868.2
    172 schema:familyName Tzelepis
    173 schema:givenName Christos
    174 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015035157403.08
    175 rdf:type schema:Person
    176 sg:person.0706132635.30 schema:affiliation grid-institutes:grid.423747.1
    177 schema:familyName Papadopoulos
    178 schema:givenName Symeon
    179 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.0706132635.30
    180 rdf:type schema:Person
    181 sg:pub.10.1007/978-3-030-01237-3_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463338
    182 https://doi.org/10.1007/978-3-030-01237-3_7
    183 rdf:type schema:CreativeWork
    184 sg:pub.10.1007/978-3-030-01264-9_11 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107502710
    185 https://doi.org/10.1007/978-3-030-01264-9_11
    186 rdf:type schema:CreativeWork
    187 sg:pub.10.1007/978-3-030-01264-9_4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107502741
    188 https://doi.org/10.1007/978-3-030-01264-9_4
    189 rdf:type schema:CreativeWork
    190 sg:pub.10.1007/978-3-030-37731-1_61 schema:sameAs https://app.dimensions.ai/details/publication/pub.1123689741
    191 https://doi.org/10.1007/978-3-030-37731-1_61
    192 rdf:type schema:CreativeWork
    193 sg:pub.10.1007/978-3-030-58604-1_11 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132270011
    194 https://doi.org/10.1007/978-3-030-58604-1_11
    195 rdf:type schema:CreativeWork
    196 sg:pub.10.1007/978-3-319-10593-2_24 schema:sameAs https://app.dimensions.ai/details/publication/pub.1036132330
    197 https://doi.org/10.1007/978-3-319-10593-2_24
    198 rdf:type schema:CreativeWork
    199 sg:pub.10.1007/978-3-319-51811-4_21 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051108242
    200 https://doi.org/10.1007/978-3-319-51811-4_21
    201 rdf:type schema:CreativeWork
    202 sg:pub.10.1007/978-3-319-51811-4_47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1051770638
    203 https://doi.org/10.1007/978-3-319-51811-4_47
    204 rdf:type schema:CreativeWork
    205 sg:pub.10.1007/978-3-642-33709-3_55 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053473906
    206 https://doi.org/10.1007/978-3-642-33709-3_55
    207 rdf:type schema:CreativeWork
    208 sg:pub.10.1007/s11042-016-4176-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1015210096
    209 https://doi.org/10.1007/s11042-016-4176-6
    210 rdf:type schema:CreativeWork
    211 sg:pub.10.1007/s11042-018-5862-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1103788044
    212 https://doi.org/10.1007/s11042-018-5862-3
    213 rdf:type schema:CreativeWork
    214 sg:pub.10.1007/s11263-021-01453-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1136588160
    215 https://doi.org/10.1007/s11263-021-01453-z
    216 rdf:type schema:CreativeWork
    217 grid-institutes:grid.423747.1 schema:alternateName Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloníki, Greece
    218 schema:name Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloníki, Greece
    219 rdf:type schema:Organization
    220 grid-institutes:grid.4868.2 schema:alternateName Queen Mary University of London, Mile End Road, E1 4NS, London, UK
    221 schema:name Information Technologies Institute, Centre for Research and Technology Hellas, Thessaloníki, Greece
    222 Queen Mary University of London, Mile End Road, E1 4NS, London, UK
    223 rdf:type schema:Organization
     



