Defect segmentation for multi-illumination quality control systems


Ontology type: schema:ScholarlyArticle      Open Access: True


Article Info

DATE

2021-09-23

AUTHORS

David Honzátko, Engin Türetken, Siavash A. Bigdeli, L. Andrea Dunbar, Pascal Fua

ABSTRACT

Thanks to recent advancements in image processing and deep learning techniques, visual surface inspection in production lines has become an automated process as long as all the defects are visible in a single or a few images. However, it is often necessary to inspect parts under many different illumination conditions to capture all the defects. Training deep networks to perform this task requires large quantities of annotated data, which are rarely available and cumbersome to obtain. To alleviate this problem, we devised an original augmentation approach that, given a small image collection, generates rotated versions of the images while preserving illumination effects, something that random rotations cannot do. We introduce three real multi-illumination datasets, on which we demonstrate the effectiveness of our illumination preserving rotation approach. Training deep neural architectures with our approach delivers a performance increase of up to 51% in terms of AuPRC score over using standard rotations to perform data augmentation.

PAGES

118

References to SciGraph publications

  • 2018-10-06. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation in COMPUTER VISION – ECCV 2018
  • 2009-09-09. The Pascal Visual Object Classes (VOC) Challenge in INTERNATIONAL JOURNAL OF COMPUTER VISION
  • 2019-07-06. A survey on Image Data Augmentation for Deep Learning in JOURNAL OF BIG DATA
  • 2014. Convolutional Neural Networks for Steel Surface Defect Detection from Photometric Stereo Images in ADVANCES IN VISUAL COMPUTING
  • 2018-10-31. Shadow identification and height estimation of defects by direct processing of grayscale images in THE INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY
  • 2015-11-18. U-Net: Convolutional Networks for Biomedical Image Segmentation in MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION – MICCAI 2015
IDENTIFIERS

    URI

    http://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z

    DOI

    http://dx.doi.org/10.1007/s00138-021-01244-z

    DIMENSIONS

    https://app.dimensions.ai/details/publication/pub.1141327588



    JSON-LD is the canonical representation for SciGraph data.

    TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.

    [
      {
        "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
        "about": [
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Information and Computing Sciences", 
            "type": "DefinedTerm"
          }, 
          {
            "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
            "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
            "name": "Artificial Intelligence and Image Processing", 
            "type": "DefinedTerm"
          }
        ], 
        "author": [
          {
            "affiliation": {
              "alternateName": "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.5333.6", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
                "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Honz\u00e1tko", 
            "givenName": "David", 
            "id": "sg:person.015725135177.43", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015725135177.43"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "T\u00fcretken", 
            "givenName": "Engin", 
            "id": "sg:person.016662347741.34", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016662347741.34"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Bigdeli", 
            "givenName": "Siavash A.", 
            "id": "sg:person.016340442161.94", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016340442161.94"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.423798.3", 
              "name": [
                "Centre Suisse d\u2019Electronique et de Microtechnique (CSEM), Neuch\u00e2tel, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Dunbar", 
            "givenName": "L. Andrea", 
            "id": "sg:person.01212242751.90", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01212242751.90"
            ], 
            "type": "Person"
          }, 
          {
            "affiliation": {
              "alternateName": "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland", 
              "id": "http://www.grid.ac/institutes/grid.5333.6", 
              "name": [
                "\u00c9cole polytechnique f\u00e9d\u00e9rale de Lausanne (EPFL), Lausanne, Switzerland"
              ], 
              "type": "Organization"
            }, 
            "familyName": "Fua", 
            "givenName": "Pascal", 
            "id": "sg:person.01165407431.32", 
            "sameAs": [
              "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01165407431.32"
            ], 
            "type": "Person"
          }
        ], 
        "citation": [
          {
            "id": "sg:pub.10.1007/978-3-319-24574-4_28", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1017774818", 
              "https://doi.org/10.1007/978-3-319-24574-4_28"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-319-14249-4_64", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1001194921", 
              "https://doi.org/10.1007/978-3-319-14249-4_64"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/978-3-030-01234-2_49", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107454614", 
              "https://doi.org/10.1007/978-3-030-01234-2_49"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s00170-018-2933-6", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1107949450", 
              "https://doi.org/10.1007/s00170-018-2933-6"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1186/s40537-019-0197-0", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1117799804", 
              "https://doi.org/10.1186/s40537-019-0197-0"
            ], 
            "type": "CreativeWork"
          }, 
          {
            "id": "sg:pub.10.1007/s11263-009-0275-4", 
            "sameAs": [
              "https://app.dimensions.ai/details/publication/pub.1014796149", 
              "https://doi.org/10.1007/s11263-009-0275-4"
            ], 
            "type": "CreativeWork"
          }
        ], 
        "datePublished": "2021-09-23", 
        "datePublishedReg": "2021-09-23", 
        "description": "Thanks to recent advancements in image processing and deep learning techniques, visual surface inspection in production lines has become an automated process as long as all the defects are visible in a single or a few images. However, it is often necessary to inspect parts under many different illumination conditions to capture all the defects. Training deep networks to perform this task requires large quantities of annotated data, which are rarely available and cumbersome to obtain. To alleviate this problem, we devised an original augmentation approach that, given a small image collection, generates rotated versions of the images while preserving illumination effects, something that random rotations cannot do. We introduce three real multi-illumination datasets, on which we demonstrate the effectiveness of our illumination preserving rotation approach. Training deep neural architectures with our approach delivers a performance increase of up to 51% in terms of AuPRC score over using standard rotations to perform data augmentation.", 
        "genre": "article", 
        "id": "sg:pub.10.1007/s00138-021-01244-z", 
        "inLanguage": "en", 
        "isAccessibleForFree": true, 
        "isPartOf": [
          {
            "id": "sg:journal.1045266", 
            "issn": [
              "0932-8092", 
              "1432-1769"
            ], 
            "name": "Machine Vision and Applications", 
            "publisher": "Springer Nature", 
            "type": "Periodical"
          }, 
          {
            "issueNumber": "6", 
            "type": "PublicationIssue"
          }, 
          {
            "type": "PublicationVolume", 
            "volumeNumber": "32"
          }
        ], 
        "keywords": [
          "deep learning techniques", 
          "deep neural architectures", 
          "terms of AUPRC", 
          "visual surface inspection", 
          "different illumination conditions", 
          "deep network", 
          "image collection", 
          "learning techniques", 
          "data augmentation", 
          "image processing", 
          "defect segmentation", 
          "neural architecture", 
          "augmentation approach", 
          "surface inspection", 
          "illumination conditions", 
          "performance increase", 
          "illumination effects", 
          "control system", 
          "production line", 
          "random rotation", 
          "recent advancements", 
          "images", 
          "segmentation", 
          "AUPRC", 
          "architecture", 
          "standard rotation", 
          "dataset", 
          "network", 
          "quality control system", 
          "task", 
          "processing", 
          "rotation approach", 
          "inspection", 
          "advancement", 
          "collection", 
          "effectiveness", 
          "thanks", 
          "large quantities", 
          "system", 
          "version", 
          "technique", 
          "augmentation", 
          "data", 
          "illumination", 
          "terms", 
          "process", 
          "part", 
          "rotation", 
          "quantity", 
          "lines", 
          "conditions", 
          "defects", 
          "increase", 
          "effect", 
          "approach", 
          "problem"
        ], 
        "name": "Defect segmentation for multi-illumination quality control systems", 
        "pagination": "118", 
        "productId": [
          {
            "name": "dimensions_id", 
            "type": "PropertyValue", 
            "value": [
              "pub.1141327588"
            ]
          }, 
          {
            "name": "doi", 
            "type": "PropertyValue", 
            "value": [
              "10.1007/s00138-021-01244-z"
            ]
          }
        ], 
        "sameAs": [
          "https://doi.org/10.1007/s00138-021-01244-z", 
          "https://app.dimensions.ai/details/publication/pub.1141327588"
        ], 
        "sdDataset": "articles", 
        "sdDatePublished": "2022-05-10T10:28", 
        "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
        "sdPublisher": {
          "name": "Springer Nature - SN SciGraph project", 
          "type": "Organization"
        }, 
        "sdSource": "s3://com-springernature-scigraph/baseset/20220509/entities/gbq_results/article/article_880.jsonl", 
        "type": "ScholarlyArticle", 
        "url": "https://doi.org/10.1007/s00138-021-01244-z"
      }
    ]
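Since JSON-LD is fully compatible with plain JSON, the record above can be processed with the standard `json` module. Below is a minimal sketch that parses a trimmed copy of the record (embedded inline rather than fetched) and extracts a few bibliographic fields; note how the `\u00e1`-style escapes decode to accented characters.

```python
# Sketch: parse a trimmed copy of the SciGraph JSON-LD record with the
# standard json module and pull out a few bibliographic fields.
import json

record_jsonld = """
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Defect segmentation for multi-illumination quality control systems",
    "datePublished": "2021-09-23",
    "author": [
      {"familyName": "Honz\\u00e1tko", "givenName": "David", "type": "Person"}
    ],
    "type": "ScholarlyArticle"
  }
]
"""

# The top-level structure is a JSON array holding one record object.
record = json.loads(record_jsonld)[0]
title = record["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]

print(title)
print(authors)  # ['David Honzátko'] — the \\u escape decodes to "á"
```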
     



    HOW TO GET THIS DATA PROGRAMMATICALLY:

    JSON-LD is a popular format for linked data which is fully compatible with JSON.

    curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'

    N-Triples is a line-based linked data format ideal for batch operations.

    curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'

    Turtle is a human-readable linked data format.

    curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'

    RDF/XML is a standard XML format for linked data.

    curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z'
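The curl commands above all hit the same URL and select the serialization purely through HTTP content negotiation (the `Accept` header). The same pattern can be sketched in Python with the standard library; the format-to-MIME-type mapping below mirrors the four curl examples.

```python
# Sketch: fetch the SciGraph record via HTTP content negotiation,
# mirroring the curl examples above (same URL, same Accept headers).
from urllib.request import Request, urlopen

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01244-z"

# Serialization name -> MIME type, as listed in the curl examples.
ACCEPT = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> Request:
    """Build a request for the record in the given RDF serialization."""
    return Request(SCIGRAPH_URL, headers={"Accept": ACCEPT[fmt]})

def fetch(fmt: str) -> bytes:
    """Perform the request (requires network access)."""
    with urlopen(build_request(fmt)) as resp:
        return resp.read()

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
```

Calling `fetch("json-ld")` would return the same JSON-LD document shown above; the server chooses the serialization solely from the `Accept` header, so no format suffix is needed in the URL.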


     

    This table displays all metadata directly associated to this object as RDF triples.

    170 TRIPLES      22 PREDICATES      87 URIs      73 LITERALS      6 BLANK NODES

    Subject Predicate Object
    1 sg:pub.10.1007/s00138-021-01244-z schema:about anzsrc-for:08
    2 anzsrc-for:0801
    3 schema:author Nb6bcdd96d7ed4e5c956cf3917c2009ee
    4 schema:citation sg:pub.10.1007/978-3-030-01234-2_49
    5 sg:pub.10.1007/978-3-319-14249-4_64
    6 sg:pub.10.1007/978-3-319-24574-4_28
    7 sg:pub.10.1007/s00170-018-2933-6
    8 sg:pub.10.1007/s11263-009-0275-4
    9 sg:pub.10.1186/s40537-019-0197-0
    10 schema:datePublished 2021-09-23
    11 schema:datePublishedReg 2021-09-23
    12 schema:description Thanks to recent advancements in image processing and deep learning techniques, visual surface inspection in production lines has become an automated process as long as all the defects are visible in a single or a few images. However, it is often necessary to inspect parts under many different illumination conditions to capture all the defects. Training deep networks to perform this task requires large quantities of annotated data, which are rarely available and cumbersome to obtain. To alleviate this problem, we devised an original augmentation approach that, given a small image collection, generates rotated versions of the images while preserving illumination effects, something that random rotations cannot do. We introduce three real multi-illumination datasets, on which we demonstrate the effectiveness of our illumination preserving rotation approach. Training deep neural architectures with our approach delivers a performance increase of up to 51% in terms of AuPRC score over using standard rotations to perform data augmentation.
    13 schema:genre article
    14 schema:inLanguage en
    15 schema:isAccessibleForFree true
    16 schema:isPartOf N2f9ed854c65d4c3e865b0572dc41dd85
    17 Nc18593e1cb31492ca9301d83c5b59b16
    18 sg:journal.1045266
    19 schema:keywords AUPRC
    20 advancement
    21 approach
    22 architecture
    23 augmentation
    24 augmentation approach
    25 collection
    26 conditions
    27 control system
    28 data
    29 data augmentation
    30 dataset
    31 deep learning techniques
    32 deep network
    33 deep neural architectures
    34 defect segmentation
    35 defects
    36 different illumination conditions
    37 effect
    38 effectiveness
    39 illumination
    40 illumination conditions
    41 illumination effects
    42 image collection
    43 image processing
    44 images
    45 increase
    46 inspection
    47 large quantities
    48 learning techniques
    49 lines
    50 network
    51 neural architecture
    52 part
    53 performance increase
    54 problem
    55 process
    56 processing
    57 production line
    58 quality control system
    59 quantity
    60 random rotation
    61 recent advancements
    62 rotation
    63 rotation approach
    64 segmentation
    65 standard rotation
    66 surface inspection
    67 system
    68 task
    69 technique
    70 terms
    71 terms of AUPRC
    72 thanks
    73 version
    74 visual surface inspection
    75 schema:name Defect segmentation for multi-illumination quality control systems
    76 schema:pagination 118
    77 schema:productId Nad144f5d915d411292b20a3f981457b3
    78 Ncbc8175c7da54a6eb5528c8f833d5953
    79 schema:sameAs https://app.dimensions.ai/details/publication/pub.1141327588
    80 https://doi.org/10.1007/s00138-021-01244-z
    81 schema:sdDatePublished 2022-05-10T10:28
    82 schema:sdLicense https://scigraph.springernature.com/explorer/license/
    83 schema:sdPublisher N4a68347dd61d41b9b7a7f2f1629e7d8c
    84 schema:url https://doi.org/10.1007/s00138-021-01244-z
    85 sgo:license sg:explorer/license/
    86 sgo:sdDataset articles
    87 rdf:type schema:ScholarlyArticle
    88 N0ad9e389c92f4e028a62ed841e831b9d rdf:first sg:person.016662347741.34
    89 rdf:rest Nd3802703167b438b850c7167de25bc45
    90 N18bea57eed134769a03b72474574e3ef rdf:first sg:person.01165407431.32
    91 rdf:rest rdf:nil
    92 N2f9ed854c65d4c3e865b0572dc41dd85 schema:volumeNumber 32
    93 rdf:type schema:PublicationVolume
    94 N4a68347dd61d41b9b7a7f2f1629e7d8c schema:name Springer Nature - SN SciGraph project
    95 rdf:type schema:Organization
    96 N5b19bc4f2838491b92adfb2c0c00b440 rdf:first sg:person.01212242751.90
    97 rdf:rest N18bea57eed134769a03b72474574e3ef
    98 Nad144f5d915d411292b20a3f981457b3 schema:name dimensions_id
    99 schema:value pub.1141327588
    100 rdf:type schema:PropertyValue
    101 Nb6bcdd96d7ed4e5c956cf3917c2009ee rdf:first sg:person.015725135177.43
    102 rdf:rest N0ad9e389c92f4e028a62ed841e831b9d
    103 Nc18593e1cb31492ca9301d83c5b59b16 schema:issueNumber 6
    104 rdf:type schema:PublicationIssue
    105 Ncbc8175c7da54a6eb5528c8f833d5953 schema:name doi
    106 schema:value 10.1007/s00138-021-01244-z
    107 rdf:type schema:PropertyValue
    108 Nd3802703167b438b850c7167de25bc45 rdf:first sg:person.016340442161.94
    109 rdf:rest N5b19bc4f2838491b92adfb2c0c00b440
    110 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
    111 schema:name Information and Computing Sciences
    112 rdf:type schema:DefinedTerm
    113 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
    114 schema:name Artificial Intelligence and Image Processing
    115 rdf:type schema:DefinedTerm
    116 sg:journal.1045266 schema:issn 0932-8092
    117 1432-1769
    118 schema:name Machine Vision and Applications
    119 schema:publisher Springer Nature
    120 rdf:type schema:Periodical
    121 sg:person.01165407431.32 schema:affiliation grid-institutes:grid.5333.6
    122 schema:familyName Fua
    123 schema:givenName Pascal
    124 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01165407431.32
    125 rdf:type schema:Person
    126 sg:person.01212242751.90 schema:affiliation grid-institutes:grid.423798.3
    127 schema:familyName Dunbar
    128 schema:givenName L. Andrea
    129 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.01212242751.90
    130 rdf:type schema:Person
    131 sg:person.015725135177.43 schema:affiliation grid-institutes:grid.5333.6
    132 schema:familyName Honzátko
    133 schema:givenName David
    134 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015725135177.43
    135 rdf:type schema:Person
    136 sg:person.016340442161.94 schema:affiliation grid-institutes:grid.423798.3
    137 schema:familyName Bigdeli
    138 schema:givenName Siavash A.
    139 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016340442161.94
    140 rdf:type schema:Person
    141 sg:person.016662347741.34 schema:affiliation grid-institutes:grid.423798.3
    142 schema:familyName Türetken
    143 schema:givenName Engin
    144 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016662347741.34
    145 rdf:type schema:Person
    146 sg:pub.10.1007/978-3-030-01234-2_49 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107454614
    147 https://doi.org/10.1007/978-3-030-01234-2_49
    148 rdf:type schema:CreativeWork
    149 sg:pub.10.1007/978-3-319-14249-4_64 schema:sameAs https://app.dimensions.ai/details/publication/pub.1001194921
    150 https://doi.org/10.1007/978-3-319-14249-4_64
    151 rdf:type schema:CreativeWork
    152 sg:pub.10.1007/978-3-319-24574-4_28 schema:sameAs https://app.dimensions.ai/details/publication/pub.1017774818
    153 https://doi.org/10.1007/978-3-319-24574-4_28
    154 rdf:type schema:CreativeWork
    155 sg:pub.10.1007/s00170-018-2933-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1107949450
    156 https://doi.org/10.1007/s00170-018-2933-6
    157 rdf:type schema:CreativeWork
    158 sg:pub.10.1007/s11263-009-0275-4 schema:sameAs https://app.dimensions.ai/details/publication/pub.1014796149
    159 https://doi.org/10.1007/s11263-009-0275-4
    160 rdf:type schema:CreativeWork
    161 sg:pub.10.1186/s40537-019-0197-0 schema:sameAs https://app.dimensions.ai/details/publication/pub.1117799804
    162 https://doi.org/10.1186/s40537-019-0197-0
    163 rdf:type schema:CreativeWork
    164 grid-institutes:grid.423798.3 schema:alternateName Centre Suisse d’Electronique et de Microtechnique (CSEM), Neuchâtel, Switzerland
    165 schema:name Centre Suisse d’Electronique et de Microtechnique (CSEM), Neuchâtel, Switzerland
    166 rdf:type schema:Organization
    167 grid-institutes:grid.5333.6 schema:alternateName École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland
    168 schema:name Centre Suisse d’Electronique et de Microtechnique (CSEM), Neuchâtel, Switzerland
    169 École polytechnique fédérale de Lausanne (EPFL), Lausanne, Switzerland
    170 rdf:type schema:Organization
     



