Ontology type: schema:ScholarlyArticle
PUBLISHED: 2021-09-21
AUTHORS: M. J. Augustin, Vandana Ramesh, R. Krishna Prasad, Nitesh Gupta, M. Ramesh Kumar
ABSTRACT: Among the different manufacturing techniques available for composite aircraft structures, prepreg-based manual layup is widely used. During the fabrication process, the protective films of the prepregs or other materials used in the process can remain trapped as foreign objects between the layers. At present, inclusions are found during prepreg layup by visual inspection in the cleanroom. Visual inspection is challenging because the layup is usually carried out on large surfaces that are reflective by nature. This paper proposes a 3D laser-scanner-based approach for the detection of inclusions on flat and curved surfaces. Using a portable laser scanner, the surface of each layer is scanned and the resulting point cloud is compared with reference layer data. Thicknesses between two surfaces are computed with cloud-to-cloud, mesh-to-cloud and Hausdorff distances to enhance the visibility of inclusions. This approach was found to enhance the visibility of inclusions of 50 microns and above. The enhanced features are used to train a multiview convolutional neural network to mark the inclusion regions, which helps the inspector identify inclusion regions quickly and efficiently.
PAGES: 117
http://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2
DOI: http://dx.doi.org/10.1007/s00138-021-01241-2
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1141273311
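The comparison pipeline described in the abstract (scan each layer, compare its point cloud against a reference layer, compute cloud-to-cloud and Hausdorff distances) can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; it is a minimal example on synthetic data, assuming NumPy and SciPy are available, of how such distances could be computed between two layer scans.

# Minimal sketch (not the authors' code): comparing two layer scans as point clouds.
# Assumes NumPy and SciPy; point clouds are (N, 3) arrays in millimetres.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def cloud_to_cloud_distances(reference, current):
    # For every point in `current`, distance to its nearest neighbour in `reference`.
    tree = cKDTree(reference)
    dists, _ = tree.query(current)
    return dists

def hausdorff(reference, current):
    # Symmetric Hausdorff distance between the two clouds.
    d_ab = directed_hausdorff(reference, current)[0]
    d_ba = directed_hausdorff(current, reference)[0]
    return max(d_ab, d_ba)

rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 100.0, size=(5000, 3))   # stand-in for the reference layer scan
current = reference + np.array([0.0, 0.0, 0.2])       # next layer offset by a nominal ply thickness
current[:50, 2] += 0.05                               # a local 50-micron bump, e.g. an inclusion

d = cloud_to_cloud_distances(reference, current)
print("mean C2C distance (mm):", round(float(d.mean()), 4))
print("max  C2C distance (mm):", round(float(d.max()), 4))
print("Hausdorff distance (mm):", round(float(hausdorff(reference, current)), 4))

Points whose deviation exceeds the nominal ply thickness would be the candidate regions highlighted for the multiview CNN stage described in the abstract.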
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google SDTT.
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "National Aerospace Laboratories CSIR, Bangalore, India",
"id": "http://www.grid.ac/institutes/grid.462641.3",
"name": [
"National Aerospace Laboratories CSIR, Bangalore, India"
],
"type": "Organization"
},
"familyName": "Augustin",
"givenName": "M. J.",
"id": "sg:person.016105362036.43",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016105362036.43"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "National Aerospace Laboratories CSIR, Bangalore, India",
"id": "http://www.grid.ac/institutes/grid.462641.3",
"name": [
"National Aerospace Laboratories CSIR, Bangalore, India"
],
"type": "Organization"
},
"familyName": "Ramesh",
"givenName": "Vandana",
"id": "sg:person.016104537401.99",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016104537401.99"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "National Aerospace Laboratories CSIR, Bangalore, India",
"id": "http://www.grid.ac/institutes/grid.462641.3",
"name": [
"National Aerospace Laboratories CSIR, Bangalore, India"
],
"type": "Organization"
},
"familyName": "Prasad",
"givenName": "R. Krishna",
"type": "Person"
},
{
"affiliation": {
"alternateName": "National Aerospace Laboratories CSIR, Bangalore, India",
"id": "http://www.grid.ac/institutes/grid.462641.3",
"name": [
"National Aerospace Laboratories CSIR, Bangalore, India"
],
"type": "Organization"
},
"familyName": "Gupta",
"givenName": "Nitesh",
"id": "sg:person.010207276603.40",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010207276603.40"
],
"type": "Person"
},
{
"affiliation": {
"alternateName": "National Aerospace Laboratories CSIR, Bangalore, India",
"id": "http://www.grid.ac/institutes/grid.462641.3",
"name": [
"National Aerospace Laboratories CSIR, Bangalore, India"
],
"type": "Organization"
},
"familyName": "Kumar",
"givenName": "M. Ramesh",
"id": "sg:person.012373231301.51",
"sameAs": [
"https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012373231301.51"
],
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.1007/s00138-003-0132-3",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1022331956",
"https://doi.org/10.1007/s00138-003-0132-3"
],
"type": "CreativeWork"
}
],
"datePublished": "2021-09-21",
"datePublishedReg": "2021-09-21",
"description": "Among different manufacturing techniques available for composite aircraft structures, prepreg-based manual layup is widely used. During the fabrication process, the protective films of the prepregs or other materials used in the process could get inside as a foreign object between the layers. The present method of finding the inclusions during the prepreg layup is by visual inspection in the cleanroom. Carrying out visual inspection is challenging as the layup is usually carried out on large surfaces and reflective by nature. This paper proposes a 3D laser scanner-based approach for the detection of inclusion on flat and curved surfaces. Using the portable laser scanner, the surfaces of each layer are scanned and compared the resulting point clouds using with a reference layer data. Thicknesses between two surfaces are computed with Cloud to Cloud, Mesh to Cloud and Hausdorff distance to enhance the visibility of inclusions. It was found that this approach could enhance the visibility of inclusions over 50 micron and above. These enhanced features are used to train a multiview convolutional neural network to mark the inclusion regions, which can aid the inspector to identify the inclusion regions in a fast and efficient way.",
"genre": "article",
"id": "sg:pub.10.1007/s00138-021-01241-2",
"inLanguage": "en",
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1045266",
"issn": [
"0932-8092",
"1432-1769"
],
"name": "Machine Vision and Applications",
"publisher": "Springer Nature",
"type": "Periodical"
},
{
"issueNumber": "6",
"type": "PublicationIssue"
},
{
"type": "PublicationVolume",
"volumeNumber": "32"
}
],
"keywords": [
"manufacturing techniques",
"composite aircraft structures",
"detection of inclusions",
"different manufacturing techniques",
"laser scanner",
"manual layup",
"aircraft structures",
"prepreg layup",
"fabrication process",
"portable laser scanner",
"protective film",
"layup",
"curved surfaces",
"large surface",
"layer data",
"visual inspection",
"surface",
"inclusion regions",
"present method",
"layer",
"neural network",
"prepreg",
"point clouds",
"enhanced features",
"convolutional neural network",
"efficient way",
"inspection",
"cleanroom",
"films",
"foreign objects",
"thickness",
"materials",
"technique",
"process",
"scanner",
"microns",
"network",
"multiview convolutional neural networks",
"structure",
"cloud",
"detection",
"approach",
"method",
"inclusion",
"region",
"distance",
"inspectors",
"visibility",
"objects",
"features",
"way",
"nature",
"data",
"Hausdorff distance",
"paper"
],
"name": "Detection of inclusion by using 3D laser scanner in composite prepreg manufacturing technique using convolutional neural networks",
"pagination": "117",
"productId": [
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1141273311"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s00138-021-01241-2"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s00138-021-01241-2",
"https://app.dimensions.ai/details/publication/pub.1141273311"
],
"sdDataset": "articles",
"sdDatePublished": "2022-05-20T07:39",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-springernature-scigraph/baseset/20220519/entities/gbq_results/article/article_899.jsonl",
"type": "ScholarlyArticle",
"url": "https://doi.org/10.1007/s00138-021-01241-2"
}
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2'
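The same content negotiation can be scripted. The snippet below is a minimal Python sketch, assuming the third-party requests package, that fetches the JSON-LD serialization with an Accept header (mirroring the curl calls above) and reads a few fields of the record as plain JSON.

# Minimal sketch: fetch this record via content negotiation and read it as plain JSON.
# Assumes the third-party `requests` package; mirrors the curl examples above.
import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2"

response = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
response.raise_for_status()

record = response.json()[0]      # the payload is a one-element JSON-LD array, as shown above
print("Title:  ", record["name"])
print("Journal:", record["isPartOf"][0]["name"])
print("Authors:", ", ".join(f'{a["givenName"]} {a["familyName"]}' for a in record["author"]))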
This table displays all metadata directly associated to this object as RDF triples.
144 TRIPLES
22 PREDICATES
81 URIs
72 LITERALS
6 BLANK NODES
# | Subject | Predicate | Object |
---|---|---|---|
1 | sg:pub.10.1007/s00138-021-01241-2 | schema:about | anzsrc-for:08 |
2 | ″ | ″ | anzsrc-for:0801 |
3 | ″ | schema:author | N4ceafc0e69bf44f6bff8c7ba27492356 |
4 | ″ | schema:citation | sg:pub.10.1007/s00138-003-0132-3 |
5 | ″ | schema:datePublished | 2021-09-21 |
6 | ″ | schema:datePublishedReg | 2021-09-21 |
7 | ″ | schema:description | Among different manufacturing techniques available for composite aircraft structures, prepreg-based manual layup is widely used. During the fabrication process, the protective films of the prepregs or other materials used in the process could get inside as a foreign object between the layers. The present method of finding the inclusions during the prepreg layup is by visual inspection in the cleanroom. Carrying out visual inspection is challenging as the layup is usually carried out on large surfaces and reflective by nature. This paper proposes a 3D laser scanner-based approach for the detection of inclusion on flat and curved surfaces. Using the portable laser scanner, the surfaces of each layer are scanned and compared the resulting point clouds using with a reference layer data. Thicknesses between two surfaces are computed with Cloud to Cloud, Mesh to Cloud and Hausdorff distance to enhance the visibility of inclusions. It was found that this approach could enhance the visibility of inclusions over 50 micron and above. These enhanced features are used to train a multiview convolutional neural network to mark the inclusion regions, which can aid the inspector to identify the inclusion regions in a fast and efficient way. |
8 | ″ | schema:genre | article |
9 | ″ | schema:inLanguage | en |
10 | ″ | schema:isAccessibleForFree | false |
11 | ″ | schema:isPartOf | Ncc8397eac8874675872c3ffc97d2a4a1 |
12 | ″ | ″ | Ndcf9ae0348574201aa533c6d41817360 |
13 | ″ | ″ | sg:journal.1045266 |
14 | ″ | schema:keywords | Hausdorff distance |
15 | ″ | ″ | aircraft structures |
16 | ″ | ″ | approach |
17 | ″ | ″ | cleanroom |
18 | ″ | ″ | cloud |
19 | ″ | ″ | composite aircraft structures |
20 | ″ | ″ | convolutional neural network |
21 | ″ | ″ | curved surfaces |
22 | ″ | ″ | data |
23 | ″ | ″ | detection |
24 | ″ | ″ | detection of inclusions |
25 | ″ | ″ | different manufacturing techniques |
26 | ″ | ″ | distance |
27 | ″ | ″ | efficient way |
28 | ″ | ″ | enhanced features |
29 | ″ | ″ | fabrication process |
30 | ″ | ″ | features |
31 | ″ | ″ | films |
32 | ″ | ″ | foreign objects |
33 | ″ | ″ | inclusion |
34 | ″ | ″ | inclusion regions |
35 | ″ | ″ | inspection |
36 | ″ | ″ | inspectors |
37 | ″ | ″ | large surface |
38 | ″ | ″ | laser scanner |
39 | ″ | ″ | layer |
40 | ″ | ″ | layer data |
41 | ″ | ″ | layup |
42 | ″ | ″ | manual layup |
43 | ″ | ″ | manufacturing techniques |
44 | ″ | ″ | materials |
45 | ″ | ″ | method |
46 | ″ | ″ | microns |
47 | ″ | ″ | multiview convolutional neural networks |
48 | ″ | ″ | nature |
49 | ″ | ″ | network |
50 | ″ | ″ | neural network |
51 | ″ | ″ | objects |
52 | ″ | ″ | paper |
53 | ″ | ″ | point clouds |
54 | ″ | ″ | portable laser scanner |
55 | ″ | ″ | prepreg |
56 | ″ | ″ | prepreg layup |
57 | ″ | ″ | present method |
58 | ″ | ″ | process |
59 | ″ | ″ | protective film |
60 | ″ | ″ | region |
61 | ″ | ″ | scanner |
62 | ″ | ″ | structure |
63 | ″ | ″ | surface |
64 | ″ | ″ | technique |
65 | ″ | ″ | thickness |
66 | ″ | ″ | visibility |
67 | ″ | ″ | visual inspection |
68 | ″ | ″ | way |
69 | ″ | schema:name | Detection of inclusion by using 3D laser scanner in composite prepreg manufacturing technique using convolutional neural networks |
70 | ″ | schema:pagination | 117 |
71 | ″ | schema:productId | Nbb35ebda357c4db9969d4e7703cd3b27 |
72 | ″ | ″ | Nf0730fe580a34c918b31d7dd5254bdcf |
73 | ″ | schema:sameAs | https://app.dimensions.ai/details/publication/pub.1141273311 |
74 | ″ | ″ | https://doi.org/10.1007/s00138-021-01241-2 |
75 | ″ | schema:sdDatePublished | 2022-05-20T07:39 |
76 | ″ | schema:sdLicense | https://scigraph.springernature.com/explorer/license/ |
77 | ″ | schema:sdPublisher | Na39e2e114c5b452bb725d85a284e592a |
78 | ″ | schema:url | https://doi.org/10.1007/s00138-021-01241-2 |
79 | ″ | sgo:license | sg:explorer/license/ |
80 | ″ | sgo:sdDataset | articles |
81 | ″ | rdf:type | schema:ScholarlyArticle |
82 | N1e4627b481c341bbbff8fcf8d11e3c42 | rdf:first | sg:person.012373231301.51 |
83 | ″ | rdf:rest | rdf:nil |
84 | N4ceafc0e69bf44f6bff8c7ba27492356 | rdf:first | sg:person.016105362036.43 |
85 | ″ | rdf:rest | N6e5574d5ef6e4a4790eb5bf7b35f296c |
86 | N6e5574d5ef6e4a4790eb5bf7b35f296c | rdf:first | sg:person.016104537401.99 |
87 | ″ | rdf:rest | N7d61594dbf01481ca8104743912aa72e |
88 | N7d61594dbf01481ca8104743912aa72e | rdf:first | Nedf0dfad249045338e48bd0d34b9265f |
89 | ″ | rdf:rest | Ne35ec1ef1462482c87e5b8a4ca5405ab |
90 | Na39e2e114c5b452bb725d85a284e592a | schema:name | Springer Nature - SN SciGraph project |
91 | ″ | rdf:type | schema:Organization |
92 | Nbb35ebda357c4db9969d4e7703cd3b27 | schema:name | doi |
93 | ″ | schema:value | 10.1007/s00138-021-01241-2 |
94 | ″ | rdf:type | schema:PropertyValue |
95 | Ncc8397eac8874675872c3ffc97d2a4a1 | schema:volumeNumber | 32 |
96 | ″ | rdf:type | schema:PublicationVolume |
97 | Ndcf9ae0348574201aa533c6d41817360 | schema:issueNumber | 6 |
98 | ″ | rdf:type | schema:PublicationIssue |
99 | Ne35ec1ef1462482c87e5b8a4ca5405ab | rdf:first | sg:person.010207276603.40 |
100 | ″ | rdf:rest | N1e4627b481c341bbbff8fcf8d11e3c42 |
101 | Nedf0dfad249045338e48bd0d34b9265f | schema:affiliation | grid-institutes:grid.462641.3 |
102 | ″ | schema:familyName | Prasad |
103 | ″ | schema:givenName | R. Krishna |
104 | ″ | rdf:type | schema:Person |
105 | Nf0730fe580a34c918b31d7dd5254bdcf | schema:name | dimensions_id |
106 | ″ | schema:value | pub.1141273311 |
107 | ″ | rdf:type | schema:PropertyValue |
108 | anzsrc-for:08 | schema:inDefinedTermSet | anzsrc-for: |
109 | ″ | schema:name | Information and Computing Sciences |
110 | ″ | rdf:type | schema:DefinedTerm |
111 | anzsrc-for:0801 | schema:inDefinedTermSet | anzsrc-for: |
112 | ″ | schema:name | Artificial Intelligence and Image Processing |
113 | ″ | rdf:type | schema:DefinedTerm |
114 | sg:journal.1045266 | schema:issn | 0932-8092 |
115 | ″ | ″ | 1432-1769 |
116 | ″ | schema:name | Machine Vision and Applications |
117 | ″ | schema:publisher | Springer Nature |
118 | ″ | rdf:type | schema:Periodical |
119 | sg:person.010207276603.40 | schema:affiliation | grid-institutes:grid.462641.3 |
120 | ″ | schema:familyName | Gupta |
121 | ″ | schema:givenName | Nitesh |
122 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.010207276603.40 |
123 | ″ | rdf:type | schema:Person |
124 | sg:person.012373231301.51 | schema:affiliation | grid-institutes:grid.462641.3 |
125 | ″ | schema:familyName | Kumar |
126 | ″ | schema:givenName | M. Ramesh |
127 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012373231301.51 |
128 | ″ | rdf:type | schema:Person |
129 | sg:person.016104537401.99 | schema:affiliation | grid-institutes:grid.462641.3 |
130 | ″ | schema:familyName | Ramesh |
131 | ″ | schema:givenName | Vandana |
132 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016104537401.99 |
133 | ″ | rdf:type | schema:Person |
134 | sg:person.016105362036.43 | schema:affiliation | grid-institutes:grid.462641.3 |
135 | ″ | schema:familyName | Augustin |
136 | ″ | schema:givenName | M. J. |
137 | ″ | schema:sameAs | https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.016105362036.43 |
138 | ″ | rdf:type | schema:Person |
139 | sg:pub.10.1007/s00138-003-0132-3 | schema:sameAs | https://app.dimensions.ai/details/publication/pub.1022331956 |
140 | ″ | ″ | https://doi.org/10.1007/s00138-003-0132-3 |
141 | ″ | rdf:type | schema:CreativeWork |
142 | grid-institutes:grid.462641.3 | schema:alternateName | National Aerospace Laboratories CSIR, Bangalore, India |
143 | ″ | schema:name | National Aerospace Laboratories CSIR, Bangalore, India |
144 | ″ | rdf:type | schema:Organization |
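The summary above (144 triples, 22 predicates, 81 URIs, 72 literals, 6 blank nodes) can be re-derived from the N-Triples serialization. The following is a minimal sketch, assuming the third-party rdflib and requests packages; how the page buckets URIs versus predicates is not documented, so the URI count may differ slightly.

# Minimal sketch: parse the N-Triples serialization and recompute the summary counts.
# Assumes the third-party `rdflib` and `requests` packages.
import requests
from rdflib import Graph, URIRef, Literal, BNode

URL = "https://scigraph.springernature.com/pub.10.1007/s00138-021-01241-2"

nt = requests.get(URL, headers={"Accept": "application/n-triples"}, timeout=30).text
g = Graph()
g.parse(data=nt, format="nt")

nodes = set()
for s, p, o in g:
    nodes.update((s, o))             # subjects and objects; predicates are counted separately below

print("triples:    ", len(g))
print("predicates: ", len(set(g.predicates())))
print("URIs:       ", sum(isinstance(n, URIRef) for n in nodes))
print("literals:   ", sum(isinstance(n, Literal) for n in nodes))
print("blank nodes:", sum(isinstance(n, BNode) for n in nodes))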