XOR-ed visual secret sharing scheme with robust and meaningful shadows based on QR codes


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2019-12-09

AUTHORS

Longdan Tan, Yuliang Lu, Xuehu Yan, Lintao Liu, Xuan Zhou

ABSTRACT

Quick response (QR) codes are becoming increasingly popular in various areas of life due to the advantages of the error correction capacity, the ability to be scanned quickly and the capacity to contain meaningful content. The distribution of dark and light modules of a QR code looks random, but the content of a code can be decoded by a standard QR reader. Thus, a QR code is often used in combination with visual secret sharing (VSS) to generate meaningful shadows. There may be some losses in the process of distribution and preservation of the shadows. To recover secret images with high quality, it is necessary to consider the scheme’s robustness. However, few studies examine robustness of VSS combined with QR codes. In this paper, we propose a robust (k, n)-threshold XOR-ed VSS (XVSS) scheme based on a QR code with the error correction ability. Compared with OR-ed VSS (OVSS), XVSS can recover the secret image losslessly, and the amount of computation needed is low. Since the standard QR encoder does not check if the padding codewords are correct during the encoding phase, we replace padding codewords by initial shadows shared from the secret image using XVSS to generate QR code shadows. As a result, the shadows can be decoded normally, and their error correction abilities are preserved. Once all the shadows have been collected, the secret image can be recovered losslessly. More importantly, if some conventional image attacks, including rotation, JPEG compression, Gaussian noise, salt-and-pepper noise, cropping, resizing, and even the addition of camera and screen noises are performed on the shadows, the secret image can still be recovered. The experimental results and comparisons demonstrate the effectiveness of our scheme.
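The lossless XOR recovery the abstract refers to can be illustrated with a minimal (n, n) sketch: every shadow looks random on its own, yet XOR-ing all of them reproduces the secret bit-for-bit. This is only an illustration of the XOR property, not the paper's (k, n)-threshold XVSS construction or its QR-code embedding; the function names and the use of byte strings in place of image bits are assumptions.

import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Bitwise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list[bytes]:
    # n - 1 purely random shadows, plus one that folds in the secret.
    shadows = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shadows, secret)   # secret XOR r1 XOR ... XOR r_{n-1}
    return shadows + [last]

def recover(shadows: list[bytes]) -> bytes:
    # XOR all shadows together; with every shadow present, recovery is exact.
    return reduce(xor_bytes, shadows)

if __name__ == "__main__":
    secret = b"\x10\x7f\x00\xabQR"        # stand-in for secret-image bits
    shadows = share(secret, 4)
    assert recover(shadows) == secret      # lossless recovery with all shadows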

PAGES

5719-5741

References to SciGraph publications

  • 2014-11-29. 2D Barcodes for visual cryptography in MULTIMEDIA TOOLS AND APPLICATIONS
  • 2016-08-18. Developing Visual Cryptography for Authentication on Smartphones in INDUSTRIAL IOT TECHNOLOGIES AND APPLICATIONS
  • 1995. Visual cryptography in ADVANCES IN CRYPTOLOGY — EUROCRYPT'94
  • 2015-10-27. An enhanced threshold visual secret sharing based on random grids in JOURNAL OF REAL-TIME IMAGE PROCESSING
  • 2005-10. XOR-based Visual Cryptography Schemes in DESIGNS, CODES AND CRYPTOGRAPHY
  • 2016-06-30. Exploiting the Error Correction Mechanism in QR Codes for Secret Sharing in INFORMATION SECURITY AND PRIVACY
  • 2012. Authenticating Visual Cryptography Shares Using 2D Barcodes in DIGITAL FORENSICS AND WATERMARKING
  • 2013-12-24. Visual secret sharing based on random grids with abilities of AND and XOR lossless recovery in MULTIMEDIA TOOLS AND APPLICATIONS
  • 2016-10-04. Perfect contrast XOR-based visual cryptography schemes via linear algebra in DESIGNS, CODES AND CRYPTOGRAPHY
  • 2017-02-09. Progressive visual secret sharing for general access structure with multiple decryptions in MULTIMEDIA TOOLS AND APPLICATIONS

Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0

    DOI

    http://dx.doi.org/10.1007/s11042-019-08351-0

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1123224418



    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "National University of Defense Technology, Anhui, China", 
              "id": "http://www.grid.ac/institutes/grid.412110.7", 
              "name": [
                "National University of Defense Technology, Anhui, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Tan", 
            "givenName": "Longdan", 
            "id": "sg:person.013130434267.25", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013130434267.25"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "National University of Defense Technology, Anhui, China", 
              "id": "http://www.grid.ac/institutes/grid.412110.7", 
              "name": [
                "National University of Defense Technology, Anhui, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Lu", 
            "givenName": "Yuliang", 
            "id": "sg:person.015112370271.93", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015112370271.93"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "National University of Defense Technology, Anhui, China", 
              "id": "http://www.grid.ac/institutes/grid.412110.7", 
              "name": [
                "National University of Defense Technology, Anhui, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Yan", 
            "givenName": "Xuehu", 
            "id": "sg:person.010467364517.31", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010467364517.31"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "National University of Defense Technology, Anhui, China", 
              "id": "http://www.grid.ac/institutes/grid.412110.7", 
              "name": [
                "National University of Defense Technology, Anhui, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Liu", 
            "givenName": "Lintao", 
            "id": "sg:person.013517427271.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013517427271.32"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "National University of Defense Technology, Anhui, China", 
              "id": "http://www.grid.ac/institutes/grid.412110.7", 
              "name": [
                "National University of Defense Technology, Anhui, China"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Zhou", 
            "givenName": "Xuan", 
            "id": "sg:person.016432265776.07", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016432265776.07"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-642-32205-1_17", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1016435694", 
              "https://doi.org/10.1007/978-3-642-32205-1_17"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/bfb0053419", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1008839066", 
              "https://doi.org/10.1007/bfb0053419"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10623-004-3816-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1027834328", 
              "https://doi.org/10.1007/s10623-004-3816-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-40253-6_25", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1004151043", 
              "https://doi.org/10.1007/978-3-319-40253-6_25"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-44350-8_19", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1041548380", 
              "https://doi.org/10.1007/978-3-319-44350-8_19"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11042-013-1784-2", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1021413130", 
              "https://doi.org/10.1007/s11042-013-1784-2"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11554-015-0540-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1022432477", 
              "https://doi.org/10.1007/s11554-015-0540-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11042-014-2365-8", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1043974419", 
              "https://doi.org/10.1007/s11042-014-2365-8"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11042-017-4421-7", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1083744330", 
              "https://doi.org/10.1007/s11042-017-4421-7"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s10623-016-0285-5", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1030004022", 
              "https://doi.org/10.1007/s10623-016-0285-5"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2019-12-09", 
        "datePublishedReg": "2019-12-09", 
        "description": "Quick response (QR) codes are becoming increasingly popular in various areas of life due to the advantages of the error correction capacity, the ability to be scanned quickly and the capacity to contain meaningful content. The distribution of dark and light modules of a QR code looks random, but the content of a code can be decoded by a standard QR reader. Thus, a QR code is often used in combination with visual secret sharing (VSS) to generate meaningful shadows. There may be some losses in the process of distribution and preservation of the shadows. To recover secret images with high quality, it is necessary to consider the scheme\u2019s robustness. However, few studies examine robustness of VSS combined with QR codes. In this paper, we propose a robust (k, n)-threshold XOR-ed VSS (XVSS) scheme based on a QR code with the error correction ability. Compared with OR-ed VSS (OVSS), XVSS can recover the secret image losslessly, and the amount of computation needed is low. Since the standard QR encoder does not check if the padding codewords are correct during the encoding phase, we replace padding codewords by initial shadows shared from the secret image using XVSS to generate QR code shadows. As a result, the shadows can be decoded normally, and their error correction abilities are preserved. Once all the shadows have been collected, the secret image can be recovered losslessly. More importantly, if some conventional image attacks, including rotation, JPEG compression, Gaussian noise, salt-and-pepper noise, cropping, resizing, and even the addition of camera and screen noises are performed on the shadows, the secret image can still be recovered. The experimental results and comparisons demonstrate the effectiveness of our scheme.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s11042-019-08351-0", 
        "isAccessibleForFree": true, 
        "isFundedItemOf": [
          {
            "id": "sg:grant.8304044", 
            "type": "MonetaryGrant"
          }
        ], 
        "isPartOf": [
          {
            "id": "sg:journal.1044869", 
            "issn": [
              "1380-7501", 
              "1573-7721"
            ], 
            "name": "Multimedia Tools and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "9-10", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "79"
          }
        ], 
        "keywords": [
          "visual secret sharing", 
          "secret image", 
          "QR code", 
          "error correction ability", 
          "meaningful shadows", 
          "visual secret sharing scheme", 
          "standard QR reader", 
          "addition of cameras", 
          "secret sharing scheme", 
          "quick response code", 
          "correction ability", 
          "amount of computation", 
          "error correction capacity", 
          "VSS schemes", 
          "image attacks", 
          "JPEG compression", 
          "secret sharing", 
          "sharing scheme", 
          "QR reader", 
          "initial shadows", 
          "correction capacity", 
          "scheme robustness", 
          "pepper noise", 
          "response codes", 
          "meaningful content", 
          "images", 
          "code", 
          "experimental results", 
          "process of distribution", 
          "Gaussian noise", 
          "codewords", 
          "robustness", 
          "scheme", 
          "areas of life", 
          "high quality", 
          "encoder", 
          "light module", 
          "sharing", 
          "camera", 
          "noise", 
          "shadow", 
          "computation", 
          "attacks", 
          "module", 
          "effectiveness", 
          "compression", 
          "advantages", 
          "ability", 
          "quality", 
          "cropping", 
          "results", 
          "readers", 
          "process", 
          "content", 
          "amount", 
          "area", 
          "capacity", 
          "preservation", 
          "combination", 
          "comparison", 
          "distribution", 
          "addition", 
          "rotation", 
          "life", 
          "phase", 
          "loss", 
          "study", 
          "paper", 
          "salt"
        ], 
        "name": "XOR-ed visual secret sharing scheme with robust and meaningful shadows based on QR codes", 
        "pagination": "5719-5741", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1123224418"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s11042-019-08351-0"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s11042-019-08351-0", 
          "https://app.dimensions.ai/details/publication/pub.1123224418"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-12-01T06:40", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20221201/entities/gbq_results/article/article_820.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s11042-019-08351-0"
      }
    ]
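    A small sketch of how the fields shown in the record above can be read with the Python standard library, assuming the JSON-LD has been saved locally as record.json (a hypothetical file name used here for illustration):

import json

with open("record.json", encoding="utf-8") as fh:
    records = json.load(fh)          # the record above is a one-element JSON array

article = records[0]
print(article["name"])               # article title
print(article["datePublished"])      # 2019-12-09
doi = next(p["value"][0] for p in article["productId"] if p["name"] == "doi")
print(doi)                           # 10.1007/s11042-019-08351-0
print(len(article["citation"]), "cited SciGraph publications")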
     

    Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0'
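    The same content negotiation can be done from Python; the sketch below mirrors the curl commands above using only the standard library. The URL and Accept headers are taken from this page, and error handling is omitted for brevity.

import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/s11042-019-08351-0"

def fetch(accept: str) -> bytes:
    # Request the record with the given Accept header (content negotiation).
    req = urllib.request.Request(URL, headers={"Accept": accept})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# JSON-LD parses straight into a Python structure
record = json.loads(fetch("application/ld+json"))

# N-Triples, Turtle, and RDF/XML come back as plain text
turtle = fetch("text/turtle").decode("utf-8")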


     

    This table displays all metadata directly associated with this object as RDF triples.

    196 TRIPLES      21 PREDICATES      103 URIs      85 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s11042-019-08351-0 schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nedd42d1195d3449f99b8464697064067
    4 schema:citation sg:pub.10.1007/978-3-319-40253-6_25
    5 sg:pub.10.1007/978-3-319-44350-8_19
    6 sg:pub.10.1007/978-3-642-32205-1_17
    7 sg:pub.10.1007/bfb0053419
    8 sg:pub.10.1007/s10623-004-3816-4
    9 sg:pub.10.1007/s10623-016-0285-5
    10 sg:pub.10.1007/s11042-013-1784-2
    11 sg:pub.10.1007/s11042-014-2365-8
    12 sg:pub.10.1007/s11042-017-4421-7
    13 sg:pub.10.1007/s11554-015-0540-4
    14 schema:datePublished 2019-12-09
    15 schema:datePublishedReg 2019-12-09
    16 schema:description Quick response (QR) codes are becoming increasingly popular in various areas of life due to the advantages of the error correction capacity, the ability to be scanned quickly and the capacity to contain meaningful content. The distribution of dark and light modules of a QR code looks random, but the content of a code can be decoded by a standard QR reader. Thus, a QR code is often used in combination with visual secret sharing (VSS) to generate meaningful shadows. There may be some losses in the process of distribution and preservation of the shadows. To recover secret images with high quality, it is necessary to consider the scheme’s robustness. However, few studies examine robustness of VSS combined with QR codes. In this paper, we propose a robust (k, n)-threshold XOR-ed VSS (XVSS) scheme based on a QR code with the error correction ability. Compared with OR-ed VSS (OVSS), XVSS can recover the secret image losslessly, and the amount of computation needed is low. Since the standard QR encoder does not check if the padding codewords are correct during the encoding phase, we replace padding codewords by initial shadows shared from the secret image using XVSS to generate QR code shadows. As a result, the shadows can be decoded normally, and their error correction abilities are preserved. Once all the shadows have been collected, the secret image can be recovered losslessly. More importantly, if some conventional image attacks, including rotation, JPEG compression, Gaussian noise, salt-and-pepper noise, cropping, resizing, and even the addition of camera and screen noises are performed on the shadows, the secret image can still be recovered. The experimental results and comparisons demonstrate the effectiveness of our scheme.
    17 schema:genre article
    18 schema:isAccessibleForFree true
    19 schema:isPartOf N2d4123c85a364a26aecbf93c07ccb2e4
    20 N51739979bd0e44bdae2a66959bf5b0f4
    21 sg:journal.1044869
    22 schema:keywords Gaussian noise
    23 JPEG compression
    24 QR code
    25 QR reader
    26 VSS schemes
    27 ability
    28 addition
    29 addition of cameras
    30 advantages
    31 amount
    32 amount of computation
    33 area
    34 areas of life
    35 attacks
    36 camera
    37 capacity
    38 code
    39 codewords
    40 combination
    41 comparison
    42 compression
    43 computation
    44 content
    45 correction ability
    46 correction capacity
    47 cropping
    48 distribution
    49 effectiveness
    50 encoder
    51 error correction ability
    52 error correction capacity
    53 experimental results
    54 high quality
    55 image attacks
    56 images
    57 initial shadows
    58 life
    59 light module
    60 loss
    61 meaningful content
    62 meaningful shadows
    63 module
    64 noise
    65 paper
    66 pepper noise
    67 phase
    68 preservation
    69 process
    70 process of distribution
    71 quality
    72 quick response code
    73 readers
    74 response codes
    75 results
    76 robustness
    77 rotation
    78 salt
    79 scheme
    80 scheme robustness
    81 secret image
    82 secret sharing
    83 secret sharing scheme
    84 shadow
    85 sharing
    86 sharing scheme
    87 standard QR reader
    88 study
    89 visual secret sharing
    90 visual secret sharing scheme
    91 schema:name XOR-ed visual secret sharing scheme with robust and meaningful shadows based on QR codes
    92 schema:pagination 5719-5741
    93 schema:productId N16b59d1c58e34c4a837d74ddbbc7f803
    94 N9acb814f8ad747d18f1e7a594662379f
    95 schema:sameAs https://app.dimensions.ai/details/publication/pub.1123224418
    96 https://doi.org/10.1007/s11042-019-08351-0
    97 schema:sdDatePublished 2022-12-01T06:40
    98 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    99 schema:sdPublisher N68d3efe827464a27ad50d39c9a6b5e80
    100 schema:url https://doi.org/10.1007/s11042-019-08351-0
    101 sgo:license sg:explorer/license/
    102 sgo:sdDataset articles
    103 rdf:type schema:ScholarlyArticle
    104 N0b77b9d1c82740bcacca7fb882ee26a5 rdf:first sg:person.013517427271.32
    105 rdf:rest Ne098cdf8ce614bd38e4a6384456b4b35
    106 N16b59d1c58e34c4a837d74ddbbc7f803 schema:name dimensions_id
    107 schema:value pub.1123224418
    108 rdf:type schema:PropertyValue
    109 N2d4123c85a364a26aecbf93c07ccb2e4 schema:volumeNumber 79
    110 rdf:type schema:PublicationVolume
    111 N51739979bd0e44bdae2a66959bf5b0f4 schema:issueNumber 9-10
    112 rdf:type schema:PublicationIssue
    113 N68d3efe827464a27ad50d39c9a6b5e80 schema:name Springer Nature - SN SciGraph project
    114 rdf:type schema:Organization
    115 N7572c096099e468fa5a220fd32d1ec43 rdf:first sg:person.015112370271.93
    116 rdf:rest Nba0103a6a0d04adaa1c14a1692ae0253
    117 N9acb814f8ad747d18f1e7a594662379f schema:name doi
    118 schema:value 10.1007/s11042-019-08351-0
    119 rdf:type schema:PropertyValue
    120 Nba0103a6a0d04adaa1c14a1692ae0253 rdf:first sg:person.010467364517.31
    121 rdf:rest N0b77b9d1c82740bcacca7fb882ee26a5
    122 Ne098cdf8ce614bd38e4a6384456b4b35 rdf:first sg:person.016432265776.07
    123 rdf:rest rdf:nil
    124 Nedd42d1195d3449f99b8464697064067 rdf:first sg:person.013130434267.25
    125 rdf:rest N7572c096099e468fa5a220fd32d1ec43
    126 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    127 schema:name Information and Computing Sciences
    128 rdf:type schema:DefinedTerm
    129 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    130 schema:name Artificial Intelligence and Image Processing
    131 rdf:type schema:DefinedTerm
    132 sg:grant.8304044 http://pending.schema.org/fundedItem sg:pub.10.1007/s11042-019-08351-0
    133 rdf:type schema:MonetaryGrant
    134 sg:journal.1044869 schema:issn 1380-7501
    135 1573-7721
    136 schema:name Multimedia Tools and Applications
    137 schema:publisher Springer Nature
    138 rdf:type schema:Periodical
    139 sg:person.010467364517.31 schema:affiliation grid-institutes:grid.412110.7
    140 schema:familyName Yan
    141 schema:givenName Xuehu
    142 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010467364517.31
    143 rdf:type schema:Person
    144 sg:person.013130434267.25 schema:affiliation grid-institutes:grid.412110.7
    145 schema:familyName Tan
    146 schema:givenName Longdan
    147 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013130434267.25
    148 rdf:type schema:Person
    149 sg:person.013517427271.32 schema:affiliation grid-institutes:grid.412110.7
    150 schema:familyName Liu
    151 schema:givenName Lintao
    152 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013517427271.32
    153 rdf:type schema:Person
    154 sg:person.015112370271.93 schema:affiliation grid-institutes:grid.412110.7
    155 schema:familyName Lu
    156 schema:givenName Yuliang
    157 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015112370271.93
    158 rdf:type schema:Person
    159 sg:person.016432265776.07 schema:affiliation grid-institutes:grid.412110.7
    160 schema:familyName Zhou
    161 schema:givenName Xuan
    162 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016432265776.07
    163 rdf:type schema:Person
    164 sg:pub.10.1007/978-3-319-40253-6_25 schema:sameAs https://app.dimensions.ai/details/publication/pub.1004151043
    165 https://doi.org/10.1007/978-3-319-40253-6_25
    166 rdf:type schema:CreativeWork
    167 sg:pub.10.1007/978-3-319-44350-8_19 schema:sameAs https://app.dimensions.ai/details/publication/pub.1041548380
    168 https://doi.org/10.1007/978-3-319-44350-8_19
    169 rdf:type schema:CreativeWork
    170 sg:pub.10.1007/978-3-642-32205-1_17 schema:sameAs https://app.dimensions.ai/details/publication/pub.1016435694
    171 https://doi.org/10.1007/978-3-642-32205-1_17
    172 rdf:type schema:CreativeWork
    173 sg:pub.10.1007/bfb0053419 schema:sameAs https://app.dimensions.ai/details/publication/pub.1008839066
    174 https://doi.org/10.1007/bfb0053419
    175 rdf:type schema:CreativeWork
    176 sg:pub.10.1007/s10623-004-3816-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1027834328
    177 https://doi.org/10.1007/s10623-004-3816-4
    178 rdf:type schema:CreativeWork
    179 sg:pub.10.1007/s10623-016-0285-5 schema:sameAs https://app.dimensions.ai/details/publication/pub.1030004022
    180 https://doi.org/10.1007/s10623-016-0285-5
    181 rdf:type schema:CreativeWork
    182 sg:pub.10.1007/s11042-013-1784-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1021413130
    183 https://doi.org/10.1007/s11042-013-1784-2
    184 rdf:type schema:CreativeWork
    185 sg:pub.10.1007/s11042-014-2365-8 schema:sameAs https://app.dimensions.ai/details/publication/pub.1043974419
    186 https://doi.org/10.1007/s11042-014-2365-8
    187 rdf:type schema:CreativeWork
    188 sg:pub.10.1007/s11042-017-4421-7 schema:sameAs https://app.dimensions.ai/details/publication/pub.1083744330
    189 https://doi.org/10.1007/s11042-017-4421-7
    190 rdf:type schema:CreativeWork
    191 sg:pub.10.1007/s11554-015-0540-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1022432477
    192 https://doi.org/10.1007/s11554-015-0540-4
    193 rdf:type schema:CreativeWork
    194 grid-institutes:grid.412110.7 schema:alternateName National University of Defense Technology, Anhui, China
    195 schema:name National University of Defense Technology, Anhui, China
    196 rdf:type schema:Organization
     



