Semantic convolutional features for face detection


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2021-10-30

AUTHORS

The-Anh Pham

ABSTRACT

Convolutional neural networks play a key role in addressing many computer vision applications. Traditionally, convolutional features are learned in a hierarchical manner along the network-depth dimension to create multi-scale feature maps. As a result, strong semantic features are derived only at the top-level layers. This paper proposes a novel feature pyramid fashion that produces semantic features at all levels of the network, specifically for the problem of face detection. In particular, a Semantic Convolutional Box (SCBox) is presented that merges the features from different layers in a bottom-up fashion. The proposed lightweight detector is built by stacking alternating SCBox and Inception residual modules to learn visual features along both the depth and width dimensions of the network. In addition, recently introduced objective functions (e.g., focal and CIoU losses) are incorporated to effectively address the problem of unbalanced data, resulting in stable training. The proposed model has been validated on the standard benchmarks FDDB and WIDER FACE, in comparison with state-of-the-art methods. Experiments showed promising results in terms of both processing time and detection accuracy. For instance, the proposed network achieves an average precision of 96.8% on FDDB and 82.4% on WIDER FACE, and reaches an inference speed of 106 FPS on a moderate GPU configuration or 20 FPS on a CPU machine.
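For reference, the focal and CIoU losses named in the abstract have the following standard forms in the object-detection literature (the paper may use its own weights or variants of these):

$$FL(p_t) = -\alpha_t \, (1 - p_t)^{\gamma} \log(p_t)$$

where $p_t$ is the predicted probability of the true class, $\alpha_t$ is a class-balancing weight, and $\gamma$ is the focusing parameter that down-weights easy examples; and

$$\mathcal{L}_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v, \qquad v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$

where $b$ and $b^{gt}$ are the centers of the predicted and ground-truth boxes, $\rho$ is the Euclidean distance between them, and $c$ is the diagonal length of the smallest box enclosing both.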

PAGES

3

References to SciGraph publications

  • 2018-10-05. PyramidBox: A Context-Assisted Single Shot Face Detector in COMPUTER VISION – ECCV 2018
  • 2014-07-14. Adaboost face detector based on Joint Integral Histogram and Genetic Algorithms for feature extraction process in SPRINGERPLUS
  • 2014. Face Detection without Bells and Whistles in COMPUTER VISION – ECCV 2014
  • 2020-03-12. YOLO-face: a real-time face detector in THE VISUAL COMPUTER
  • 2016-10-08. A Fast Deep Convolutional Neural Network for Face Detection in Big Visual Data in ADVANCES IN BIG DATA
  • 2016-09-17. A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection in COMPUTER VISION – ECCV 2016
  • 2016-09-17. Face Detection with End-to-End Integration of a ConvNet and a 3D Model in COMPUTER VISION – ECCV 2016
  • 2016-09-17. SSD: Single Shot MultiBox Detector in COMPUTER VISION – ECCV 2016
  • 2017-08-02. CMS-RCNN: Contextual Multi-Scale Region-Based CNN for Unconstrained Face Detection in DEEP LEARNING FOR BIOMETRICS
  • 2014. Edge Boxes: Locating Object Proposals from Edges in COMPUTER VISION – ECCV 2014
  • 2018-10-06. Receptive Field Block Net for Accurate and Fast Object Detection in COMPUTER VISION – ECCV 2018
  • 2014. Joint Cascade Face Detection and Alignment in COMPUTER VISION – ECCV 2014
  • 2019-02-19. Single-Shot Scale-Aware Network for Real-Time Face Detection in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2019-02-14. Face detection based on evolutionary Haar filter in PATTERN ANALYSIS AND APPLICATIONS
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y

    DOI

    http://dx.doi.org/10.1007/s00138-021-01245-y

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1142264004


    Indexing Status: check whether this publication has been indexed by Scopus and Web of Science using the SN Indexing Status Tool.
    Incoming Citations: browse incoming citations for this publication using opencitations.net.

    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "Hong Duc University (HDU), Thanh Hoa city, Vietnam", 
              "id": "http://www.grid.ac/institutes/grid.444885.1", 
              "name": [
                "Hong Duc University (HDU), Thanh Hoa city, Vietnam"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Pham", 
            "givenName": "The-Anh", 
            "id": "sg:person.012065350461.71", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012065350461.71"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-46487-9_26", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1050938134", 
              "https://doi.org/10.1007/978-3-319-46487-9_26"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10044-019-00784-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1112141614", 
              "https://doi.org/10.1007/s10044-019-00784-5"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10593-2_47", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1049590036", 
              "https://doi.org/10.1007/978-3-319-10593-2_47"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01252-6_24", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454735", 
              "https://doi.org/10.1007/978-3-030-01252-6_24"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46493-0_22", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1047850584", 
              "https://doi.org/10.1007/978-3-319-46493-0_22"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-61657-5_3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1090940503", 
              "https://doi.org/10.1007/978-3-319-61657-5_3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-46448-0_2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017177111", 
              "https://doi.org/10.1007/978-3-319-46448-0_2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01240-3_49", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107463384", 
              "https://doi.org/10.1007/978-3-030-01240-3_49"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/2193-1801-3-355", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001763265", 
              "https://doi.org/10.1186/2193-1801-3-355"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-019-01159-3", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1112225390", 
              "https://doi.org/10.1007/s11263-019-01159-3"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00371-020-01831-7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1125613124", 
              "https://doi.org/10.1007/s00371-020-01831-7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10599-4_8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1011488332", 
              "https://doi.org/10.1007/978-3-319-10599-4_8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-47898-2_7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1084902637", 
              "https://doi.org/10.1007/978-3-319-47898-2_7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-10602-1_26", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1011079920", 
              "https://doi.org/10.1007/978-3-319-10602-1_26"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-10-30", 
        "datePublishedReg": "2021-10-30", 
        "description": "Convolutional neural networks have been extensively used as the key role to address many computer vision applications. Traditionally, learning convolutional features is performed in a hierarchical manner along the dimension of network depth to create multi-scale feature maps. As a result, strong semantic features are derived at the top-level layers only. This paper proposes a novel feature pyramid fashion to produce semantic features at all levels of the network for specially addressing the problem of face detection. Particularly, a Semantic Convolutional Box (SCBox) is presented by merging the features from different layers in a bottom-up fashion. The proposed lightweight detector is stacked of alternating SCBox and Inception residual modules to learn the visual features in both the dimensions of network depth and width. In addition, the newly introduced objective functions (e.g., focal and CIoU losses) are incorporated to effectively address the problem of unbalanced data, resulting in stable training. The proposed model has been validated on the standard benchmarks FDDB and WIDER FACES, in comparison with the state-of-the-art methods. Experiments showed promising results in terms of both processing time and detection accuracy. For instance, the proposed network achieves an average precision of 96.8%\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym}\n\t\t\t\t\\usepackage{amsfonts}\n\t\t\t\t\\usepackage{amssymb}\n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$96.8\\%$$\\end{document} on FDDB, 82.4%\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym}\n\t\t\t\t\\usepackage{amsfonts}\n\t\t\t\t\\usepackage{amssymb}\n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$82.4\\%$$\\end{document} on WIDER FACES, and gains an inference speed of 106 FPS on a moderate GPU configuration or 20 FPS on a CPU machine.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01245-y", 
        "inLanguage": "en", 
        "isAccessibleForFree": false, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "1", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "33"
          }
        ], 
        "keywords": [
          "convolutional features", 
          "face detection", 
          "network depth", 
          "semantic features", 
          "multi-scale feature maps", 
          "computer vision applications", 
          "convolutional neural network", 
          "strong semantic features", 
          "top-level layer", 
          "vision applications", 
          "CPU machine", 
          "inference speed", 
          "residual module", 
          "GPU configurations", 
          "feature maps", 
          "neural network", 
          "visual features", 
          "stable training", 
          "average precision", 
          "art methods", 
          "detection accuracy", 
          "lightweight detectors", 
          "pyramid fashion", 
          "hierarchical manner", 
          "processing time", 
          "unbalanced data", 
          "FDDB", 
          "network", 
          "objective function", 
          "promising results", 
          "FPS", 
          "different layers", 
          "WIDER", 
          "features", 
          "machine", 
          "detection", 
          "module", 
          "accuracy", 
          "instances", 
          "applications", 
          "fashion", 
          "maps", 
          "precision", 
          "speed", 
          "box", 
          "training", 
          "model", 
          "data", 
          "results", 
          "method", 
          "experiments", 
          "dimensions", 
          "configuration", 
          "detector", 
          "manner", 
          "terms", 
          "time", 
          "layer", 
          "state", 
          "key role", 
          "function", 
          "comparison", 
          "bottom", 
          "addition", 
          "depth", 
          "levels", 
          "role", 
          "width", 
          "problem", 
          "paper", 
          "novel feature pyramid fashion", 
          "feature pyramid fashion", 
          "Semantic Convolutional Box", 
          "Convolutional Box", 
          "SCBox", 
          "Inception residual modules", 
          "standard benchmarks FDDB", 
          "benchmarks FDDB", 
          "moderate GPU configuration", 
          "Semantic convolutional features"
        ], 
        "name": "Semantic convolutional features for face detection", 
        "pagination": "3", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1142264004"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01245-y"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01245-y", 
          "https://app.dimensions.ai/details/publication/pub.1142264004"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T18:58", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_900.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01245-y"
      }
    ]
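
    Because the JSON-LD above is ordinary JSON, it can be inspected with any JSON tooling. The snippet below is a minimal Python sketch (not an official SciGraph client) that assumes the record has been saved locally as record.jsonld (a hypothetical filename) and extracts the title, publication date, and cited DOIs.

    import json

    # Load the JSON-LD payload shown above; it is a JSON array holding one
    # ScholarlyArticle record.
    with open("record.jsonld", encoding="utf-8") as f:
        article = json.load(f)[0]

    print(article["name"])             # article title
    print(article["datePublished"])    # 2021-10-30

    # Each cited publication carries a resolvable DOI URL in its "sameAs" list.
    for ref in article.get("citation", []):
        doi_urls = [u for u in ref.get("sameAs", []) if "doi.org" in u]
        print(ref["id"], *doi_urls)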
     

    Download the RDF metadata as JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y'
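
    As a rough equivalent of the curl commands above, the following Python sketch (assuming the third-party requests library is installed) performs the same content negotiation; swapping the Accept header selects a different serialization.

    import requests

    URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01245-y"

    # Request the JSON-LD serialization; use application/n-triples, text/turtle,
    # or application/rdf+xml for the other formats listed above.
    response = requests.get(URL, headers={"Accept": "application/ld+json"})
    response.raise_for_status()

    record = response.json()[0]
    print(record["name"], "-", record["url"])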


     

    This table displays all metadata directly associated to this object as RDF triples.

    194 TRIPLES      22 PREDICATES      119 URIs      97 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01245-y schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author N99caea1f89d14450a0c50fb31015a6b3
    4 schema:citation sg:pub.10.1007/978-3-030-01240-3_49
    5 sg:pub.10.1007/978-3-030-01252-6_24
    6 sg:pub.10.1007/978-3-319-10593-2_47
    7 sg:pub.10.1007/978-3-319-10599-4_8
    8 sg:pub.10.1007/978-3-319-10602-1_26
    9 sg:pub.10.1007/978-3-319-46448-0_2
    10 sg:pub.10.1007/978-3-319-46487-9_26
    11 sg:pub.10.1007/978-3-319-46493-0_22
    12 sg:pub.10.1007/978-3-319-47898-2_7
    13 sg:pub.10.1007/978-3-319-61657-5_3
    14 sg:pub.10.1007/s00371-020-01831-7
    15 sg:pub.10.1007/s10044-019-00784-5
    16 sg:pub.10.1007/s11263-019-01159-3
    17 sg:pub.10.1186/2193-1801-3-355
    18 schema:datePublished 2021-10-30
    19 schema:datePublishedReg 2021-10-30
    20 schema:description Convolutional neural networks have been extensively used as the key role to address many computer vision applications. Traditionally, learning convolutional features is performed in a hierarchical manner along the dimension of network depth to create multi-scale feature maps. As a result, strong semantic features are derived at the top-level layers only. This paper proposes a novel feature pyramid fashion to produce semantic features at all levels of the network for specially addressing the problem of face detection. Particularly, a Semantic Convolutional Box (SCBox) is presented by merging the features from different layers in a bottom-up fashion. The proposed lightweight detector is stacked of alternating SCBox and Inception residual modules to learn the visual features in both the dimensions of network depth and width. In addition, the newly introduced objective functions (e.g., focal and CIoU losses) are incorporated to effectively address the problem of unbalanced data, resulting in stable training. The proposed model has been validated on the standard benchmarks FDDB and WIDER FACES, in comparison with the state-of-the-art methods. Experiments showed promising results in terms of both processing time and detection accuracy. For instance, the proposed network achieves an average precision of 96.8% on FDDB, 82.4% on WIDER FACES, and gains an inference speed of 106 FPS on a moderate GPU configuration or 20 FPS on a CPU machine.
    21 schema:genre article
    22 schema:inLanguage en
    23 schema:isAccessibleForFree false
    24 schema:isPartOf N87c43a29fc86437b89481914f3505da5
    25 Nf23dce25d64e4748a495bce008f11546
    26 sg:journal.1045266
    27 schema:keywords CPU machine
    28 Convolutional Box
    29 FDDB
    30 FPS
    31 GPU configurations
    32 Inception residual modules
    33 SCBox
    34 Semantic Convolutional Box
    35 Semantic convolutional features
    36 WIDER
    37 accuracy
    38 addition
    39 applications
    40 art methods
    41 average precision
    42 benchmarks FDDB
    43 bottom
    44 box
    45 comparison
    46 computer vision applications
    47 configuration
    48 convolutional features
    49 convolutional neural network
    50 data
    51 depth
    52 detection
    53 detection accuracy
    54 detector
    55 different layers
    56 dimensions
    57 experiments
    58 face detection
    59 fashion
    60 feature maps
    61 feature pyramid fashion
    62 features
    63 function
    64 hierarchical manner
    65 inference speed
    66 instances
    67 key role
    68 layer
    69 levels
    70 lightweight detectors
    71 machine
    72 manner
    73 maps
    74 method
    75 model
    76 moderate GPU configuration
    77 module
    78 multi-scale feature maps
    79 network
    80 network depth
    81 neural network
    82 novel feature pyramid fashion
    83 objective function
    84 paper
    85 precision
    86 problem
    87 processing time
    88 promising results
    89 pyramid fashion
    90 residual module
    91 results
    92 role
    93 semantic features
    94 speed
    95 stable training
    96 standard benchmarks FDDB
    97 state
    98 strong semantic features
    99 terms
    100 time
    101 top-level layer
    102 training
    103 unbalanced data
    104 vision applications
    105 visual features
    106 width
    107 schema:name Semantic convolutional features for face detection
    108 schema:pagination 3
    109 schema:productId Nb8514595d37c48a390dd1cc7f9ff30b4
    110 Ne2c29fd8e8f342d08eb349d6e7540cc7
    111 schema:sameAs https://app.dimensions.ai/details/publication/pub.1142264004
    112 https://doi.org/10.1007/s00138-021-01245-y
    113 schema:sdDatePublished 2022-01-01T18:58
    114 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    115 schema:sdPublisher N19bc1b262cd9498ebf3855a5e26edce9
    116 schema:url https://doi.org/10.1007/s00138-021-01245-y
    117 sgo:license sg:explorer/license/
    118 sgo:sdDataset articles
    119 rdf:type schema:ScholarlyArticle
    120 N19bc1b262cd9498ebf3855a5e26edce9 schema:name Springer Nature - SN SciGraph project
    121 rdf:type schema:Organization
    122 N87c43a29fc86437b89481914f3505da5 schema:issueNumber 1
    123 rdf:type schema:PublicationIssue
    124 N99caea1f89d14450a0c50fb31015a6b3 rdf:first sg:person.012065350461.71
    125 rdf:rest rdf:nil
    126 Nb8514595d37c48a390dd1cc7f9ff30b4 schema:name doi
    127 schema:value 10.1007/s00138-021-01245-y
    128 rdf:type schema:PropertyValue
    129 Ne2c29fd8e8f342d08eb349d6e7540cc7 schema:name dimensions_id
    130 schema:value pub.1142264004
    131 rdf:type schema:PropertyValue
    132 Nf23dce25d64e4748a495bce008f11546 schema:volumeNumber 33
    133 rdf:type schema:PublicationVolume
    134 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    135 schema:name Information and Computing Sciences
    136 rdf:type schema:DefinedTerm
    137 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    138 schema:name Artificial Intelligence and Image Processing
    139 rdf:type schema:DefinedTerm
    140 sg:journal.1045266 schema:issn 0932-8092
    141 1432-1769
    142 schema:name Machine Vision and Applications
    143 schema:publisher Springer Nature
    144 rdf:type schema:Periodical
    145 sg:person.012065350461.71 schema:affiliation grid-institutes:grid.444885.1
    146 schema:familyName Pham
    147 schema:givenName The-Anh
    148 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012065350461.71
    149 rdf:type schema:Person
    150 sg:pub.10.1007/978-3-030-01240-3_49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107463384
    151 https://doi.org/10.1007/978-3-030-01240-3_49
    152 rdf:type schema:CreativeWork
    153 sg:pub.10.1007/978-3-030-01252-6_24 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454735
    154 https://doi.org/10.1007/978-3-030-01252-6_24
    155 rdf:type schema:CreativeWork
    156 sg:pub.10.1007/978-3-319-10593-2_47 schema:sameAs https://app.dimensions.ai/details/publication/pub.1049590036
    157 https://doi.org/10.1007/978-3-319-10593-2_47
    158 rdf:type schema:CreativeWork
    159 sg:pub.10.1007/978-3-319-10599-4_8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011488332
    160 https://doi.org/10.1007/978-3-319-10599-4_8
    161 rdf:type schema:CreativeWork
    162 sg:pub.10.1007/978-3-319-10602-1_26 schema:sameAs https://app.dimensions.ai/details/publication/pub.1011079920
    163 https://doi.org/10.1007/978-3-319-10602-1_26
    164 rdf:type schema:CreativeWork
    165 sg:pub.10.1007/978-3-319-46448-0_2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017177111
    166 https://doi.org/10.1007/978-3-319-46448-0_2
    167 rdf:type schema:CreativeWork
    168 sg:pub.10.1007/978-3-319-46487-9_26 schema:sameAs https://app.dimensions.ai/details/publication/pub.1050938134
    169 https://doi.org/10.1007/978-3-319-46487-9_26
    170 rdf:type schema:CreativeWork
    171 sg:pub.10.1007/978-3-319-46493-0_22 schema:sameAs https://app.dimensions.ai/details/publication/pub.1047850584
    172 https://doi.org/10.1007/978-3-319-46493-0_22
    173 rdf:type schema:CreativeWork
    174 sg:pub.10.1007/978-3-319-47898-2_7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1084902637
    175 https://doi.org/10.1007/978-3-319-47898-2_7
    176 rdf:type schema:CreativeWork
    177 sg:pub.10.1007/978-3-319-61657-5_3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1090940503
    178 https://doi.org/10.1007/978-3-319-61657-5_3
    179 rdf:type schema:CreativeWork
    180 sg:pub.10.1007/s00371-020-01831-7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1125613124
    181 https://doi.org/10.1007/s00371-020-01831-7
    182 rdf:type schema:CreativeWork
    183 sg:pub.10.1007/s10044-019-00784-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112141614
    184 https://doi.org/10.1007/s10044-019-00784-5
    185 rdf:type schema:CreativeWork
    186 sg:pub.10.1007/s11263-019-01159-3 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112225390
    187 https://doi.org/10.1007/s11263-019-01159-3
    188 rdf:type schema:CreativeWork
    189 sg:pub.10.1186/2193-1801-3-355 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001763265
    190 https://doi.org/10.1186/2193-1801-3-355
    191 rdf:type schema:CreativeWork
    192 grid-institutes:grid.444885.1 schema:alternateName Hong Duc University (HDU), Thanh Hoa city, Vietnam
    193 schema:name Hong Duc University (HDU), Thanh Hoa city, Vietnam
    194 rdf:type schema:Organization
     



