Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2022-08-24

AUTHORS

Peng Ye, Baopu Li, Tao Chen, Jiayuan Fan, Zhen Mei, Chen Lin, Chongyan Zuo, Qinghua Chi, Wanli Ouyang

ABSTRACT

Semantic segmentation is a popular research topic in computer vision, and many efforts have been made on it with impressive results. In this paper, we intend to search an optimal network structure that can run in real-time for this problem. Towards this goal, we jointly search the depth, channel, dilation rate and feature spatial resolution, which results in a search space consisting of about $$2.78\times 10^{324}$$ possible choices. To handle such a large search space, we leverage differential architecture search methods. However, the architecture parameters searched using existing differential methods need to be discretized, which causes the discretization gap between the architecture parameters found by the differential methods and their discretized version as the final solution for the architecture search. Hence, we relieve the problem of discretization gap from the innovative perspective of solution space regularization. Specifically, a novel Solution Space Regularization (SSR) loss is first proposed to effectively encourage the supernet to converge to its discrete one. Then, a new Hierarchical and Progressive Solution Space Shrinking method is presented to further achieve high efficiency of searching. In addition, we theoretically show that the optimization of SSR loss is equivalent to the $$L_{0}$$-norm regularization, which accounts for the improved search-evaluation gap.
Comprehensive experiments show that the proposed search scheme can efficiently find an optimal network structure that yields an extremely fast speed (175 FPS) of segmentation with a small model size (1 M) while maintaining comparable accuracy.

PAGES

2674-2694

References to SciGraph publications

  • 2018-10-06. BiSeNet: Bilateral Segmentation Network for Real-Time Semantic Segmentation in COMPUTER VISION – ECCV 2018
  • 2020-11-03. Progressive DARTS: Bridging the Optimization Gap for NAS in the Wild in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-10-07. ICNet for Real-Time Semantic Segmentation on High-Resolution Images in COMPUTER VISION – ECCV 2018
  • 2020-11-09. BigNAS: Scaling up Neural Architecture Search with Big Single-Stage Models in COMPUTER VISION – ECCV 2020
  • 2020-11-16. Fair DARTS: Eliminating Unfair Advantages in Differentiable Architecture Search in COMPUTER VISION – ECCV 2020
  • 2021-09-03. BiSeNet V2: Bilateral Network with Guided Aggregation for Real-Time Semantic Segmentation in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2018-12-07. Semantic Understanding of Scenes Through the ADE20K Dataset in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2020-10-07. Shape Adaptor: A Learnable Resizing Module in COMPUTER VISION – ECCV 2020
  • 2021-02-19. Real-Time Semantic Segmentation via Auto Depth, Downsampling Joint Decision and Feature Aggregation in INTERNATIONAL JOURNAL OF COMPUTER VISION
    Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z

    DOI

    http://dx.doi.org/10.1007/s11263-022-01663-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1150460627


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "School of Information Science and Technology, Fudan University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.8547.e", 
              "name": [
                "School of Information Science and Technology, Fudan University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ye", 
            "givenName": "Peng", 
            "id": "sg:person.010125133262.27", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010125133262.27"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Oracle Health and AI, Oracle, USA", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Oracle Health and AI, Oracle, USA"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Li", 
            "givenName": "Baopu", 
            "id": "sg:person.01234252240.13", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01234252240.13"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Information Science and Technology, Fudan University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.8547.e", 
              "name": [
                "School of Information Science and Technology, Fudan University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chen", 
            "givenName": "Tao", 
            "id": "sg:person.012201615160.43", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012201615160.43"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Academy for Engineering and Technology, Fudan University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.8547.e", 
              "name": [
                "Academy for Engineering and Technology, Fudan University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fan", 
            "givenName": "Jiayuan", 
            "id": "sg:person.010521024225.97", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010521024225.97"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "School of Information Science and Technology, Fudan University, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/grid.8547.e", 
              "name": [
                "School of Information Science and Technology, Fudan University, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Mei", 
            "givenName": "Zhen", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "University of Oxford, Oxford, England", 
              "id": "http://www.grid.ac/institutes/grid.4991.5", 
              "name": [
                "University of Oxford, Oxford, England"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lin", 
            "givenName": "Chen", 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Huawei Inc. China, Huawei, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Huawei Inc. China, Huawei, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zuo", 
            "givenName": "Chongyan", 
            "id": "sg:person.010124733255.13", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010124733255.13"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Huawei Inc. China, Huawei, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "Huawei Inc. China, Huawei, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Chi", 
            "givenName": "Qinghua", 
            "id": "sg:person.011517674255.08", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011517674255.08"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Shanghai AI Laboratory, Shanghai, China", 
              "id": "http://www.grid.ac/institutes/None", 
              "name": [
                "University of Sydney, Sydney, Australia", 
                "Shanghai AI Laboratory, Shanghai, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Ouyang", 
            "givenName": "Wanli", 
            "id": "sg:person.01033623146.37", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01033623146.37"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-030-58610-2_39", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1131467703", 
              "https://doi.org/10.1007/978-3-030-58610-2_39"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01219-9_25", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463210", 
              "https://doi.org/10.1007/978-3-030-01219-9_25"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01515-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1140869497", 
              "https://doi.org/10.1007/s11263-021-01515-2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58571-6_41", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132443227", 
              "https://doi.org/10.1007/978-3-030-58571-6_41"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01261-8_20", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107502671", 
              "https://doi.org/10.1007/978-3-030-01261-8_20"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-020-01396-x", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132292096", 
              "https://doi.org/10.1007/s11263-020-01396-x"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-021-01433-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1135466459", 
              "https://doi.org/10.1007/s11263-021-01433-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-018-1140-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1110448859", 
              "https://doi.org/10.1007/s11263-018-1140-0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-58555-6_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1132654591", 
              "https://doi.org/10.1007/978-3-030-58555-6_28"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2022-08-24", 
        "datePublishedReg": "2022-08-24", 
        "description": "Semantic segmentation is a popular research topic in computer vision, and many efforts have been made on it with impressive results. In this paper, we intend to search an optimal network structure that can run in real-time for this problem. Towards this goal, we jointly search the depth, channel, dilation rate and feature spatial resolution, which results in a search space consisting of about $$2.78\\times 10^{324}$$ possible choices. To handle such a large search space, we leverage differential architecture search methods. However, the architecture parameters searched using existing differential methods need to be discretized, which causes the discretization gap between the architecture parameters found by the differential methods and their discretized version as the final solution for the architecture search. Hence, we relieve the problem of discretization gap from the innovative perspective of solution space regularization. Specifically, a novel Solution Space Regularization (SSR) loss is first proposed to effectively encourage the supernet to converge to its discrete one. Then, a new Hierarchical and Progressive Solution Space Shrinking method is presented to further achieve high efficiency of searching. In addition, we theoretically show that the optimization of SSR loss is equivalent to the $$L_{0}$$-norm regularization, which accounts for the improved search-evaluation gap. Comprehensive experiments show that the proposed search scheme can efficiently find an optimal network structure that yields an extremely fast speed (175 FPS) of segmentation with a small model size (1 M) while maintaining comparable accuracy.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11263-022-01663-z", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1032807", 
            "issn": [
              "0920-5691", 
              "1573-1405"
            ], 
            "name": "International Journal of Computer Vision", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "11", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "130"
          }
        ], 
        "keywords": [
          "optimal network structure", 
          "semantic segmentation", 
          "search space", 
          "Time Semantic Segmentation", 
          "large search space", 
          "smaller model size", 
          "architecture parameters", 
          "network structure", 
          "popular research topic", 
          "architecture search", 
          "computer vision", 
          "search scheme", 
          "regularization loss", 
          "comprehensive experiments", 
          "model size", 
          "search method", 
          "segmentation", 
          "research topic", 
          "impressive results", 
          "new hierarchical", 
          "dilation rate", 
          "comparable accuracy", 
          "norm regularization", 
          "fast speed", 
          "discrete ones", 
          "final solution", 
          "regularization", 
          "supernet", 
          "search", 
          "possible choices", 
          "Hierarchical", 
          "vision", 
          "scheme", 
          "space", 
          "accuracy", 
          "method", 
          "optimization", 
          "high efficiency", 
          "version", 
          "spatial resolution", 
          "differential method", 
          "goal", 
          "topic", 
          "speed", 
          "innovative perspective", 
          "efficiency", 
          "solution", 
          "gap", 
          "experiments", 
          "efforts", 
          "channels", 
          "parameters", 
          "one", 
          "perspective", 
          "structure", 
          "results", 
          "resolution", 
          "choice", 
          "size", 
          "addition", 
          "loss", 
          "rate", 
          "depth", 
          "problem", 
          "paper"
        ], 
        "name": "Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation", 
        "pagination": "2674-2694", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1150460627"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11263-022-01663-z"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11263-022-01663-z", 
          "https://app.dimensions.ai/details/publication/pub.1150460627"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:44", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_921.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11263-022-01663-z"
      }
    ]
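
Since JSON-LD is fully compatible with JSON, the record above can be consumed with any ordinary JSON tooling. A minimal sketch with Python's standard `json` module, using a trimmed, illustrative subset of the record (the field names `author`, `familyName`, `givenName`, and `name` match the actual record; the embedded string here is an abbreviated copy, not the full data):

```python
import json

# Trimmed, illustrative subset of the SciGraph JSON-LD record above.
record_text = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "author": [
      {"familyName": "Ye", "givenName": "Peng", "type": "Person"},
      {"familyName": "Li", "givenName": "Baopu", "type": "Person"}
    ],
    "id": "sg:pub.10.1007/s11263-022-01663-z",
    "name": "Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation",
    "type": "ScholarlyArticle"
  }
]
"""

records = json.loads(record_text)
article = records[0]

# Collect "Given Family" author strings in the order listed in the record.
authors = [f"{a['givenName']} {a['familyName']}" for a in article["author"]]
print(article["name"])
print(", ".join(authors))  # Peng Ye, Baopu Li
```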
     

    The RDF metadata can be downloaded as JSON-LD, N-Triples, Turtle, or RDF/XML (see license info).

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z'
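
The same content negotiation the curl commands perform can be done from any HTTP client: the serialization is selected purely by the `Accept` header, not by the URL. A sketch with Python's standard `urllib` (the `build_request` helper is illustrative, not a SciGraph API; calling `urlopen(req)` would perform the actual fetch):

```python
from urllib.request import Request

URL = "https://scigraph.springernature.com/pub.10.1007/s11263-022-01663-z"

# The four media types offered above, keyed by an illustrative short name.
ACCEPT_TYPES = {
    "json-ld": "application/ld+json",
    "nt": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> Request:
    """Build a GET request for the record in the chosen RDF serialization."""
    return Request(URL, headers={"Accept": ACCEPT_TYPES[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
```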


     

    This table displays all metadata directly associated to this object as RDF triples.

    225 TRIPLES      21 PREDICATES      98 URIs      81 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11263-022-01663-z schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N33994e278e4f4d328521438f59ffd9ee
    4 schema:citation sg:pub.10.1007/978-3-030-01219-9_25
    5 sg:pub.10.1007/978-3-030-01261-8_20
    6 sg:pub.10.1007/978-3-030-58555-6_28
    7 sg:pub.10.1007/978-3-030-58571-6_41
    8 sg:pub.10.1007/978-3-030-58610-2_39
    9 sg:pub.10.1007/s11263-018-1140-0
    10 sg:pub.10.1007/s11263-020-01396-x
    11 sg:pub.10.1007/s11263-021-01433-3
    12 sg:pub.10.1007/s11263-021-01515-2
    13 schema:datePublished 2022-08-24
    14 schema:datePublishedReg 2022-08-24
    15 schema:description Semantic segmentation is a popular research topic in computer vision, and many efforts have been made on it with impressive results. In this paper, we intend to search an optimal network structure that can run in real-time for this problem. Towards this goal, we jointly search the depth, channel, dilation rate and feature spatial resolution, which results in a search space consisting of about $$2.78\times 10^{324}$$ possible choices. To handle such a large search space, we leverage differential architecture search methods. However, the architecture parameters searched using existing differential methods need to be discretized, which causes the discretization gap between the architecture parameters found by the differential methods and their discretized version as the final solution for the architecture search. Hence, we relieve the problem of discretization gap from the innovative perspective of solution space regularization. Specifically, a novel Solution Space Regularization (SSR) loss is first proposed to effectively encourage the supernet to converge to its discrete one. Then, a new Hierarchical and Progressive Solution Space Shrinking method is presented to further achieve high efficiency of searching. In addition, we theoretically show that the optimization of SSR loss is equivalent to the $$L_{0}$$-norm regularization, which accounts for the improved search-evaluation gap.
Comprehensive experiments show that the proposed search scheme can efficiently find an optimal network structure that yields an extremely fast speed (175 FPS) of segmentation with a small model size (1 M) while maintaining comparable accuracy.
    16 schema:genre article
    17 schema:isAccessibleForFree true
    18 schema:isPartOf N14837ee673f541a5804ed8f651153a32
    19 Nfe9401bad3324d0f972872d609ca118f
    20 sg:journal.1032807
    21 schema:keywords Hierarchical
    22 Time Semantic Segmentation
    23 accuracy
    24 addition
    25 architecture parameters
    26 architecture search
    27 channels
    28 choice
    29 comparable accuracy
    30 comprehensive experiments
    31 computer vision
    32 depth
    33 differential method
    34 dilation rate
    35 discrete ones
    36 efficiency
    37 efforts
    38 experiments
    39 fast speed
    40 final solution
    41 gap
    42 goal
    43 high efficiency
    44 impressive results
    45 innovative perspective
    46 large search space
    47 loss
    48 method
    49 model size
    50 network structure
    51 new hierarchical
    52 norm regularization
    53 one
    54 optimal network structure
    55 optimization
    56 paper
    57 parameters
    58 perspective
    59 popular research topic
    60 possible choices
    61 problem
    62 rate
    63 regularization
    64 regularization loss
    65 research topic
    66 resolution
    67 results
    68 scheme
    69 search
    70 search method
    71 search scheme
    72 search space
    73 segmentation
    74 semantic segmentation
    75 size
    76 smaller model size
    77 solution
    78 space
    79 spatial resolution
    80 speed
    81 structure
    82 supernet
    83 topic
    84 version
    85 vision
    86 schema:name Efficient Joint-Dimensional Search with Solution Space Regularization for Real-Time Semantic Segmentation
    87 schema:pagination 2674-2694
    88 schema:productId N1de64b72cc3f4ad4a96c3d4698c8ca7f
    89 N9953c7c402c64c74b21653a1991176df
    90 schema:sameAs https://app.dimensions.ai/details/publication/pub.1150460627
    91 https://doi.org/10.1007/s11263-022-01663-z
    92 schema:sdDatePublished 2022-12-01T06:44
    93 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    94 schema:sdPublisher Nb7ecd48233e047db80befae19c2026be
    95 schema:url https://doi.org/10.1007/s11263-022-01663-z
    96 sgo:license sg:explorer/license/
    97 sgo:sdDataset articles
    98 rdf:type schema:ScholarlyArticle
    99 N14837ee673f541a5804ed8f651153a32 schema:issueNumber 11
    100 rdf:type schema:PublicationIssue
    101 N1de64b72cc3f4ad4a96c3d4698c8ca7f schema:name doi
    102 schema:value 10.1007/s11263-022-01663-z
    103 rdf:type schema:PropertyValue
    104 N2e182a3a423f4b39b3a6f6d18602b0fd rdf:first sg:person.010521024225.97
    105 rdf:rest Nf85c213bbe1e4f50a96a43e086b3327a
    106 N33994e278e4f4d328521438f59ffd9ee rdf:first sg:person.010125133262.27
    107 rdf:rest Nf48ecaa5fe0c435c86a2d38d6b6a2c02
    108 N557d7374250e43eb93d7fe123df150ce rdf:first sg:person.012201615160.43
    109 rdf:rest N2e182a3a423f4b39b3a6f6d18602b0fd
    110 N65664fa815244f19a926343f5bec601f rdf:first sg:person.01033623146.37
    111 rdf:rest rdf:nil
    112 N800f5dd2e608491ba7e340490c26a69f schema:affiliation grid-institutes:grid.8547.e
    113 schema:familyName Mei
    114 schema:givenName Zhen
    115 rdf:type schema:Person
    116 N85486521d3be4ce9b43b0f4f4ffab715 rdf:first sg:person.011517674255.08
    117 rdf:rest N65664fa815244f19a926343f5bec601f
    118 N9953c7c402c64c74b21653a1991176df schema:name dimensions_id
    119 schema:value pub.1150460627
    120 rdf:type schema:PropertyValue
    121 Na380f64f32c448e2ab66c0795854dd63 schema:affiliation grid-institutes:grid.4991.5
    122 schema:familyName Lin
    123 schema:givenName Chen
    124 rdf:type schema:Person
    125 Na7734e25158540cd9df4a6d3d6e654d5 rdf:first Na380f64f32c448e2ab66c0795854dd63
    126 rdf:rest Nc5c0f1100ebd4a7aa86681a18883cfb1
    127 Nb7ecd48233e047db80befae19c2026be schema:name Springer Nature - SN SciGraph project
    128 rdf:type schema:Organization
    129 Nc5c0f1100ebd4a7aa86681a18883cfb1 rdf:first sg:person.010124733255.13
    130 rdf:rest N85486521d3be4ce9b43b0f4f4ffab715
    131 Nf48ecaa5fe0c435c86a2d38d6b6a2c02 rdf:first sg:person.01234252240.13
    132 rdf:rest N557d7374250e43eb93d7fe123df150ce
    133 Nf85c213bbe1e4f50a96a43e086b3327a rdf:first N800f5dd2e608491ba7e340490c26a69f
    134 rdf:rest Na7734e25158540cd9df4a6d3d6e654d5
    135 Nfe9401bad3324d0f972872d609ca118f schema:volumeNumber 130
    136 rdf:type schema:PublicationVolume
    137 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    138 schema:name Information and Computing Sciences
    139 rdf:type schema:DefinedTerm
    140 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    141 schema:name Artificial Intelligence and Image Processing
    142 rdf:type schema:DefinedTerm
    143 sg:journal.1032807 schema:issn 0920-5691
    144 1573-1405
    145 schema:name International Journal of Computer Vision
    146 schema:publisher Springer Nature
    147 rdf:type schema:Periodical
    148 sg:person.010124733255.13 schema:affiliation grid-institutes:None
    149 schema:familyName Zuo
    150 schema:givenName Chongyan
    151 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010124733255.13
    152 rdf:type schema:Person
    153 sg:person.010125133262.27 schema:affiliation grid-institutes:grid.8547.e
    154 schema:familyName Ye
    155 schema:givenName Peng
    156 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010125133262.27
    157 rdf:type schema:Person
    158 sg:person.01033623146.37 schema:affiliation grid-institutes:None
    159 schema:familyName Ouyang
    160 schema:givenName Wanli
    161 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01033623146.37
    162 rdf:type schema:Person
    163 sg:person.010521024225.97 schema:affiliation grid-institutes:grid.8547.e
    164 schema:familyName Fan
    165 schema:givenName Jiayuan
    166 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010521024225.97
    167 rdf:type schema:Person
    168 sg:person.011517674255.08 schema:affiliation grid-institutes:None
    169 schema:familyName Chi
    170 schema:givenName Qinghua
    171 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.011517674255.08
    172 rdf:type schema:Person
    173 sg:person.012201615160.43 schema:affiliation grid-institutes:grid.8547.e
    174 schema:familyName Chen
    175 schema:givenName Tao
    176 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012201615160.43
    177 rdf:type schema:Person
    178 sg:person.01234252240.13 schema:affiliation grid-institutes:None
    179 schema:familyName Li
    180 schema:givenName Baopu
    181 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01234252240.13
    182 rdf:type schema:Person
    183 sg:pub.10.1007/978-3-030-01219-9_25 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463210
    184 https://doi.org/10.1007/978-3-030-01219-9_25
    185 rdf:type schema:CreativeWork
    186 sg:pub.10.1007/978-3-030-01261-8_20 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107502671
    187 https://doi.org/10.1007/978-3-030-01261-8_20
    188 rdf:type schema:CreativeWork
    189 sg:pub.10.1007/978-3-030-58555-6_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132654591
    190 https://doi.org/10.1007/978-3-030-58555-6_28
    191 rdf:type schema:CreativeWork
    192 sg:pub.10.1007/978-3-030-58571-6_41 schema:sameAs https://app.dimensions.ai/details/publication/pub.1132443227
    193 https://doi.org/10.1007/978-3-030-58571-6_41
    194 rdf:type schema:CreativeWork
    195 sg:pub.10.1007/978-3-030-58610-2_39 schema:sameAs https://app.dimensions.ai/details/publication/pub.1131467703
    196 https://doi.org/10.1007/978-3-030-58610-2_39
    197 rdf:type schema:CreativeWork
    198 sg:pub.10.1007/s11263-018-1140-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1110448859
    199 https://doi.org/10.1007/s11263-018-1140-0
    200 rdf:type schema:CreativeWork
    201 sg:pub.10.1007/s11263-020-01396-x schema:sameAs https://app.dimensions.ai/details/publication/pub.1132292096
    202 https://doi.org/10.1007/s11263-020-01396-x
    203 rdf:type schema:CreativeWork
    204 sg:pub.10.1007/s11263-021-01433-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1135466459
    205 https://doi.org/10.1007/s11263-021-01433-3
    206 rdf:type schema:CreativeWork
    207 sg:pub.10.1007/s11263-021-01515-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1140869497
    208 https://doi.org/10.1007/s11263-021-01515-2
    209 rdf:type schema:CreativeWork
    210 grid-institutes:None schema:alternateName Huawei Inc. China, Huawei, China
    211 Oracle Health and AI, Oracle, USA
    212 Shanghai AI Laboratory, Shanghai, China
    213 schema:name Huawei Inc. China, Huawei, China
    214 Oracle Health and AI, Oracle, USA
    215 Shanghai AI Laboratory, Shanghai, China
    216 University of Sydney, Sydney, Australia
    217 rdf:type schema:Organization
    218 grid-institutes:grid.4991.5 schema:alternateName University of Oxford, Oxford, England
    219 schema:name University of Oxford, Oxford, England
    220 rdf:type schema:Organization
    221 grid-institutes:grid.8547.e schema:alternateName Academy for Engineering and Technology, Fudan University, Shanghai, China
    222 School of Information Science and Technology, Fudan University, Shanghai, China
    223 schema:name Academy for Engineering and Technology, Fudan University, Shanghai, China
    224 School of Information Science and Technology, Fudan University, Shanghai, China
    225 rdf:type schema:Organization
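
Each row in the table above is a subject–predicate–object statement; in the N-Triples download (`nt`), every statement is one line terminated by a period. A naive line-splitting sketch for simple statements (real N-Triples parsing, with quoted literals containing spaces and escape sequences, needs a proper parser such as the third-party rdflib library):

```python
# Naive N-Triples line splitter: subject, predicate, then the rest as object.
# Only handles simple statements whose object has no embedded " . " or escapes.
def split_triple(line: str) -> tuple[str, str, str]:
    line = line.rstrip().rstrip(".").strip()
    subject, predicate, obj = line.split(" ", 2)
    return subject, predicate, obj

s, p, o = split_triple(
    "<https://doi.org/10.1007/s11263-022-01663-z> "
    '<http://schema.org/pagination> "2674-2694" .'
)
print(p)  # <http://schema.org/pagination>
print(o)  # "2674-2694"
```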
     