Defect segmentation for multi-illumination quality control systems


Ontology type: schema:ScholarlyArticle · Open Access: True


Article Info

DATE

2021-09-23

AUTHORS

David Honzátko, Engin Türetken, Siavash A. Bigdeli, L. Andrea Dunbar, Pascal Fua

ABSTRACT

Thanks to recent advancements in image processing and deep learning techniques, visual surface inspection in production lines has become an automated process as long as all the defects are visible in a single or a few images. However, it is often necessary to inspect parts under many different illumination conditions to capture all the defects. Training deep networks to perform this task requires large quantities of annotated data, which are rarely available and cumbersome to obtain. To alleviate this problem, we devised an original augmentation approach that, given a small image collection, generates rotated versions of the images while preserving illumination effects, something that random rotations cannot do. We introduce three real multi-illumination datasets, on which we demonstrate the effectiveness of our illumination preserving rotation approach. Training deep neural architectures with our approach delivers a performance increase of up to 51% in terms of AuPRC score over using standard rotations to perform data augmentation.
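
The record itself does not detail the method, but the abstract suggests the core idea. As a purely hypothetical sketch, assume N point lights evenly spaced on a ring around the part and a stack of one image per light (both assumptions, not taken from the paper): an illumination-preserving rotation can rotate every image by one light-spacing step and cyclically permute the illumination channels so that shadows stay consistent with the new orientation, whereas a standard rotation moves the image content but leaves the apparent light direction fixed.

import numpy as np

def illumination_preserving_rotation(stack, k):
    # Hypothetical sketch, not the authors' implementation.
    # `stack` has shape (N, H, W): one grayscale image per light source,
    # with the N lights assumed evenly spaced on a ring (here N = 4, so
    # one step equals a 90-degree rotation). Rotating the images alone
    # would leave shadows pointing the wrong way; shifting the
    # illumination axis by the same step keeps geometry and lighting
    # consistent.
    rotated = np.rot90(stack, k=k, axes=(1, 2))  # rotate each image in-plane
    return np.roll(rotated, shift=k, axis=0)     # permute the light channels
                                                 # (shift sign depends on how
                                                 # the lights are indexed)

stack = np.random.rand(4, 128, 128)              # toy 4-light stack
augmented = illumination_preserving_rotation(stack, k=1)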

PAGES

118

References to SciGraph publications

  • 2018-10-06. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation in COMPUTER VISION – ECCV 2018
  • 2009-09-09. The Pascal Visual Object Classes (VOC) Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2019-07-06. A survey on Image Data Augmentation for Deep Learning in JOURNAL OF BIG DATA
  • 2014. Convolutional Neural Networks for Steel Surface Defect Detection from Photometric Stereo Images in ADVANCES IN VISUAL COMPUTING
  • 2018-10-31. Shadow identification and height estimation of defects by direct processing of grayscale images in THE INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
Identifiers

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z

    DOI

    http://dx.doi.org/10.1007/s00138-021-01244-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1141327588



    JSON-LD is the canonical representation for SciGraph data.


    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.5333.6", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
                "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Honz\u00e1tko", 
            "givenName": "David", 
            "id": "sg:person.015725135177.43", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015725135177.43"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "T\u00fcretken", 
            "givenName": "Engin", 
            "id": "sg:person.016662347741.34", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016662347741.34"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Bigdeli", 
            "givenName": "Siavash A.", 
            "id": "sg:person.016340442161.94", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016340442161.94"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Dunbar", 
            "givenName": "L. Andrea", 
            "id": "sg:person.01212242751.90", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01212242751.90"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.5333.6", 
              "name": [
                "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fua", 
            "givenName": "Pascal", 
            "id": "sg:person.01165407431.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01165407431.32"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-030-01234-2_49", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454614", 
              "https://doi.org/10.1007/978-3-030-01234-2_49"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-009-0275-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014796149", 
              "https://doi.org/10.1007/s11263-009-0275-4"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00170-018-2933-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107949450", 
              "https://doi.org/10.1007/s00170-018-2933-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s40537-019-0197-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1117799804", 
              "https://doi.org/10.1186/s40537-019-0197-0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-14249-4_64", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001194921", 
              "https://doi.org/10.1007/978-3-319-14249-4_64"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-09-23", 
        "datePublishedReg": "2021-09-23", 
        "description": "Thanks to recent advancements in image processing and deep learning techniques, visual surface inspection in production lines has become an automated process as long as all the defects are visible in a single or a few images. However, it is often necessary to inspect parts under many different illumination conditions to capture all the defects. Training deep networks to perform this task requires large quantities of annotated data, which are rarely available and cumbersome to obtain. To alleviate this problem, we devised an original augmentation approach that, given a small image collection, generates rotated versions of the images while preserving illumination effects, something that random rotations cannot do. We introduce three real multi-illumination datasets, on which we demonstrate the effectiveness of our illumination preserving rotation approach. Training deep neural architectures with our approach delivers a performance increase of up to 51% in terms of AuPRC score over using standard rotations to perform data augmentation.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01244-z", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "32"
          }
        ], 
        "keywords": [
          "terms of AUPRC", 
          "deep learning techniques", 
          "deep neural architecture", 
          "visual surface inspection", 
          "different illumination conditions", 
          "deep network", 
          "image collections", 
          "learning techniques", 
          "data augmentation", 
          "image processing", 
          "defect segmentation", 
          "neural architecture", 
          "augmentation approach", 
          "surface inspection", 
          "illumination conditions", 
          "performance increase", 
          "illumination effects", 
          "control system", 
          "production line", 
          "recent advancements", 
          "images", 
          "random rotation", 
          "AUPRC", 
          "segmentation", 
          "architecture", 
          "dataset", 
          "network", 
          "task", 
          "quality control system", 
          "standard rotation", 
          "processing", 
          "inspection", 
          "advancement", 
          "collection", 
          "thanks", 
          "effectiveness", 
          "system", 
          "version", 
          "large quantities", 
          "technique", 
          "rotation approach", 
          "data", 
          "illumination", 
          "augmentation", 
          "terms", 
          "process", 
          "part", 
          "rotation", 
          "quantity", 
          "lines", 
          "conditions", 
          "defects", 
          "increase", 
          "effect", 
          "approach", 
          "problem", 
          "original augmentation approach", 
          "small image collection", 
          "real multi-illumination datasets", 
          "multi-illumination datasets", 
          "multi-illumination quality control systems"
        ], 
        "name": "Defect segmentation for multi-illumination quality control systems", 
        "pagination": "118", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1141327588"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01244-z"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01244-z", 
          "https://app.dimensions.ai/details/publication/pub.1141327588"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-01-01T18:58", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220101/entities/gbq_results/article/article_890.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01244-z"
      }
    ]
     


    HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular linked-data format that is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'
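
    The same request in Python (a sketch using the third-party requests library; the field names and the one-element top-level array are taken from the JSON-LD listing above):

    import requests

    url = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z"
    # Content negotiation, exactly as in the curl command above.
    record = requests.get(url, headers={"Accept": "application/ld+json"}).json()[0]

    print(record["name"])                               # article title
    print(record["datePublished"])                      # 2021-09-23
    print([a["familyName"] for a in record["author"]])  # author surnames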

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'
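
    Because each line holds one complete triple, batch jobs can filter N-Triples with plain string operations and no RDF parser. A sketch (the http://schema.org/name predicate URI is an assumption about SciGraph's namespace):

    import requests

    url = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z"
    nt = requests.get(url, headers={"Accept": "application/n-triples"}).text

    # Keep only the schema:name statements, one triple per line.
    for line in nt.splitlines():
        if "<http://schema.org/name>" in line:
            print(line)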

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'
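
    To load any of these serializations into a queryable in-memory graph, an RDF library helps. A minimal sketch with Python's rdflib (an assumption; any RDF toolkit would do), parsing the Turtle response:

    import requests
    from rdflib import Graph, URIRef

    url = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z"
    ttl = requests.get(url, headers={"Accept": "text/turtle"}).text

    g = Graph()
    g.parse(data=ttl, format="turtle")

    # List every schema:name in the graph (article, journal, authors, ...).
    SCHEMA_NAME = URIRef("http://schema.org/name")
    for subject, name in g.subject_objects(SCHEMA_NAME):
        print(subject, "->", name)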


     

    All the metadata directly associated with this object amounts to 175 RDF triples: 22 predicates, 92 URIs, 78 literals, and 6 blank nodes.

     



