Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2017-06-23

AUTHORS

Jinhua Lin, Yanjie Wang, Xin Li, Lu Wang

ABSTRACT

In a computer vision system, robustly reconstructing the complex 3D geometry of automobile castings is a challenging task. 3D scanning data are usually corrupted by noise and acquired at low resolution, which typically leads to incomplete matching and drift. To solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To relieve the interference of sensor noise and remain compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of the casting. The proposed network combines a geometric feature representation with a correlation metric function to robustly match local correspondences. The truncated distance field (TDF) around each key point is used to represent the 3D surface of the casting geometry, so the model can be embedded directly into 3D space to learn the geometric feature representation. Finally, training labels are generated automatically for deep learning from an existing RGB-D reconstruction algorithm, which yields the same global key matching descriptors. Experimental results show that the matching accuracy of the network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors perform well, retaining 81.6% matching accuracy at a 95% closed-loop rate. For sparse casting geometries where initial matching fails, the 3D object can still be reconstructed robustly by training the key descriptors. The method thus performs robust 3D reconstruction for complex automobile castings.
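
The abstract describes encoding the local surface around each key point as a truncated distance field (TDF) voxel grid before it is fed to the 3D convolutional network. The paper's actual grid size, voxel pitch, and truncation distance are not given in this record, so the NumPy sketch below uses illustrative values and a hypothetical function name; it shows the general TDF construction, not the authors' implementation.

import numpy as np
from scipy.spatial import cKDTree

def truncated_distance_field(points, keypoint, grid_size=30, voxel=0.01, trunc=0.05):
    """Sketch: TDF voxel grid of the local surface around one key point.

    points   : (N, 3) array of scanned surface points
    keypoint : (3,) array, centre of the local patch
    grid_size, voxel, trunc : illustrative values, not the paper's settings
    Returns a (grid_size, grid_size, grid_size) array of values in [0, 1].
    """
    half = grid_size * voxel / 2.0
    axis = np.linspace(-half + voxel / 2.0, half - voxel / 2.0, grid_size)
    gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
    centers = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + keypoint

    # Distance from each voxel centre to the nearest scanned surface point
    dist, _ = cKDTree(points).query(centers)

    # Truncate and normalise so the 3D CNN sees bounded inputs
    tdf = np.minimum(dist, trunc) / trunc
    return tdf.reshape(grid_size, grid_size, grid_size)

# Example: random stand-in point cloud and an arbitrary key point
pts = np.random.rand(2000, 3)
kp = np.array([0.5, 0.5, 0.5])
print(truncated_distance_field(pts, kp).shape)  # (30, 30, 30)

Each such TDF patch would then serve as the input volume for the 3D convolutional network described in the abstract.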

PAGES

24

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y

DOI

http://dx.doi.org/10.1007/s13319-017-0134-y

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1086119821



JSON-LD is the canonical representation for SciGraph data.

TIP: You can open this SciGraph record using an external JSON-LD service such as the JSON-LD Playground or Google's Structured Data Testing Tool (SDTT).

[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China", 
          "id": "http://www.grid.ac/institutes/grid.440668.8", 
          "name": [
            "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Lin", 
        "givenName": "Jinhua", 
        "id": "sg:person.014741635031.80", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014741635031.80"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Machinery and Electronics Engineering, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 130033, Changchun City, China", 
          "id": "http://www.grid.ac/institutes/grid.9227.e", 
          "name": [
            "Machinery and Electronics Engineering, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 130033, Changchun City, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wang", 
        "givenName": "Yanjie", 
        "id": "sg:person.012170355033.80", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012170355033.80"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China", 
          "id": "http://www.grid.ac/institutes/grid.440668.8", 
          "name": [
            "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Li", 
        "givenName": "Xin", 
        "id": "sg:person.013314766111.60", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013314766111.60"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China", 
          "id": "http://www.grid.ac/institutes/grid.440668.8", 
          "name": [
            "Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Wang", 
        "givenName": "Lu", 
        "id": "sg:person.014112346511.33", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014112346511.33"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s13319-016-0112-9", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1034364377", 
          "https://doi.org/10.1007/s13319-016-0112-9"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s13319-017-0117-z", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1083761437", 
          "https://doi.org/10.1007/s13319-017-0117-z"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2017-06-23", 
    "datePublishedReg": "2017-06-23", 
    "description": "In computer vision system, it is a challenging task to robustly reconstruct complex 3D geometries of automobile castings. However, 3D scanning data is usually interfered by noises, the scanning resolution is low, these effects normally lead to incomplete matching and drift phenomenon. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile casting. In order to relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolution neural network is established to match the local geometric features of automobile casting. The proposed neural network combines the geometric feature representation with the correlation metric function to robustly match the local correspondence. We use the truncated distance field(TDF) around the key point to represent the 3D surface of casting geometry, so that the model can be directly embedded into the 3D space to learn the geometric feature representation; Finally, the training labels is automatically generated for depth learning based on the existing RGB-D reconstruction algorithm, which accesses to the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, the closed loop rate is about 74.0% when the matching tolerance threshold \u03c4 is 0.2. The matching descriptors performed well and retained 81.6% matching accuracy at 95% closed loop. For the sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/s13319-017-0134-y", 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1136468", 
        "issn": [
          "2092-6731"
        ], 
        "name": "3D Research", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "3", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "8"
      }
    ], 
    "keywords": [
      "feature representation", 
      "neural network", 
      "computer vision system", 
      "convolution neural network", 
      "neural network model", 
      "local geometric features", 
      "scanning data", 
      "vision system", 
      "training labels", 
      "robust reconstruction", 
      "matching objects", 
      "learning model", 
      "matching accuracy", 
      "depth learning", 
      "challenging task", 
      "local correspondence", 
      "network model", 
      "automobile castings", 
      "reconstruction algorithm", 
      "loop rate", 
      "geometric features", 
      "matching failure", 
      "sensor noise", 
      "threshold \u03c4", 
      "network", 
      "experimental results", 
      "descriptors", 
      "complex 3D geometries", 
      "scanning resolution", 
      "metric functions", 
      "incomplete matching", 
      "accuracy", 
      "representation", 
      "drift phenomenon", 
      "key points", 
      "algorithm", 
      "task", 
      "learning", 
      "matching", 
      "noise", 
      "objects", 
      "reconstruction", 
      "labels", 
      "model", 
      "data", 
      "order", 
      "closed loop", 
      "features", 
      "system", 
      "space", 
      "correspondence", 
      "key descriptors", 
      "method", 
      "interference", 
      "point", 
      "distance", 
      "results", 
      "resolution", 
      "loop", 
      "geometry", 
      "function", 
      "failure", 
      "rate", 
      "casting", 
      "phenomenon", 
      "surface", 
      "effect", 
      "problem"
    ], 
    "name": "Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting", 
    "pagination": "24", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1086119821"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s13319-017-0134-y"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s13319-017-0134-y", 
      "https://app.dimensions.ai/details/publication/pub.1086119821"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-08-04T17:05", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220804/entities/gbq_results/article/article_746.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/s13319-017-0134-y"
  }
]
 

Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.

HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y'
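
The same content negotiation can be scripted. Below is a minimal Python sketch using the requests library (an assumed dependency, not part of the SciGraph instructions above) that fetches the JSON-LD record and reads a few fields shown earlier on this page.

import requests

URL = "https://scigraph.springernature.com/pub.10.1007/s13319-017-0134-y"

# The Accept header selects the serialization, exactly as in the curl examples above.
resp = requests.get(URL, headers={"Accept": "application/ld+json"}, timeout=30)
resp.raise_for_status()

record = resp.json()[0]  # the JSON-LD payload is a one-element array (see above)
print(record["name"])                               # article title
print(record["datePublished"])                      # 2017-06-23
print([a["familyName"] for a in record["author"]])  # Lin, Wang, Li, Wang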


 

This table displays all metadata directly associated with this object as RDF triples.

156 TRIPLES      21 PREDICATES      94 URIs      84 LITERALS      6 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s13319-017-0134-y schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author N668ed8d3fd0d4138b4ccd4cb9df9e253
4 schema:citation sg:pub.10.1007/s13319-016-0112-9
5 sg:pub.10.1007/s13319-017-0117-z
6 schema:datePublished 2017-06-23
7 schema:datePublishedReg 2017-06-23
8 schema:description In computer vision system, it is a challenging task to robustly reconstruct complex 3D geometries of automobile castings. However, 3D scanning data is usually interfered by noises, the scanning resolution is low, these effects normally lead to incomplete matching and drift phenomenon. In order to solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile casting. In order to relieve the interference of sensor noise and to be compatible with incomplete scanning data, a 3D convolution neural network is established to match the local geometric features of automobile casting. The proposed neural network combines the geometric feature representation with the correlation metric function to robustly match the local correspondence. We use the truncated distance field(TDF) around the key point to represent the 3D surface of casting geometry, so that the model can be directly embedded into the 3D space to learn the geometric feature representation; Finally, the training labels is automatically generated for depth learning based on the existing RGB-D reconstruction algorithm, which accesses to the same global key matching descriptor. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, the closed loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well and retained 81.6% matching accuracy at 95% closed loop. For the sparse geometric castings with initial matching failure, the 3D matching object can be reconstructed robustly by training the key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
9 schema:genre article
10 schema:isAccessibleForFree false
11 schema:isPartOf Ne37eefe4ecf9442993c8c62eeb5c546c
12 Nff632489c74244eabdec9a9f651b0df5
13 sg:journal.1136468
14 schema:keywords accuracy
15 algorithm
16 automobile castings
17 casting
18 challenging task
19 closed loop
20 complex 3D geometries
21 computer vision system
22 convolution neural network
23 correspondence
24 data
25 depth learning
26 descriptors
27 distance
28 drift phenomenon
29 effect
30 experimental results
31 failure
32 feature representation
33 features
34 function
35 geometric features
36 geometry
37 incomplete matching
38 interference
39 key descriptors
40 key points
41 labels
42 learning
43 learning model
44 local correspondence
45 local geometric features
46 loop
47 loop rate
48 matching
49 matching accuracy
50 matching failure
51 matching objects
52 method
53 metric functions
54 model
55 network
56 network model
57 neural network
58 neural network model
59 noise
60 objects
61 order
62 phenomenon
63 point
64 problem
65 rate
66 reconstruction
67 reconstruction algorithm
68 representation
69 resolution
70 results
71 robust reconstruction
72 scanning data
73 scanning resolution
74 sensor noise
75 space
76 surface
77 system
78 task
79 threshold τ
80 training labels
81 vision system
82 schema:name Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting
83 schema:pagination 24
84 schema:productId N1e07cf6e034c4d97b089d83d4e14013a
85 N90b795aa04cb4289bcdf448ff46bc65d
86 schema:sameAs https://app.dimensions.ai/details/publication/pub.1086119821
87 https://doi.org/10.1007/s13319-017-0134-y
88 schema:sdDatePublished 2022-08-04T17:05
89 schema:sdLicense https://scigraph.springernature.com/explorer/license/
90 schema:sdPublisher N058e05645f0d4e30b20503a7eced55fe
91 schema:url https://doi.org/10.1007/s13319-017-0134-y
92 sgo:license sg:explorer/license/
93 sgo:sdDataset articles
94 rdf:type schema:ScholarlyArticle
95 N058e05645f0d4e30b20503a7eced55fe schema:name Springer Nature - SN SciGraph project
96 rdf:type schema:Organization
97 N199fc295f737454daccb57e038f68c3c rdf:first sg:person.014112346511.33
98 rdf:rest rdf:nil
99 N1a5c0a33190c46d0870e7854f401d9c0 rdf:first sg:person.013314766111.60
100 rdf:rest N199fc295f737454daccb57e038f68c3c
101 N1e07cf6e034c4d97b089d83d4e14013a schema:name doi
102 schema:value 10.1007/s13319-017-0134-y
103 rdf:type schema:PropertyValue
104 N668ed8d3fd0d4138b4ccd4cb9df9e253 rdf:first sg:person.014741635031.80
105 rdf:rest N7453ed01b85e4cf4a47daa28f505a01a
106 N7453ed01b85e4cf4a47daa28f505a01a rdf:first sg:person.012170355033.80
107 rdf:rest N1a5c0a33190c46d0870e7854f401d9c0
108 N90b795aa04cb4289bcdf448ff46bc65d schema:name dimensions_id
109 schema:value pub.1086119821
110 rdf:type schema:PropertyValue
111 Ne37eefe4ecf9442993c8c62eeb5c546c schema:volumeNumber 8
112 rdf:type schema:PublicationVolume
113 Nff632489c74244eabdec9a9f651b0df5 schema:issueNumber 3
114 rdf:type schema:PublicationIssue
115 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
116 schema:name Information and Computing Sciences
117 rdf:type schema:DefinedTerm
118 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
119 schema:name Artificial Intelligence and Image Processing
120 rdf:type schema:DefinedTerm
121 sg:journal.1136468 schema:issn 2092-6731
122 schema:name 3D Research
123 schema:publisher Springer Nature
124 rdf:type schema:Periodical
125 sg:person.012170355033.80 schema:affiliation grid-institutes:grid.9227.e
126 schema:familyName Wang
127 schema:givenName Yanjie
128 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.012170355033.80
129 rdf:type schema:Person
130 sg:person.013314766111.60 schema:affiliation grid-institutes:grid.440668.8
131 schema:familyName Li
132 schema:givenName Xin
133 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.013314766111.60
134 rdf:type schema:Person
135 sg:person.014112346511.33 schema:affiliation grid-institutes:grid.440668.8
136 schema:familyName Wang
137 schema:givenName Lu
138 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014112346511.33
139 rdf:type schema:Person
140 sg:person.014741635031.80 schema:affiliation grid-institutes:grid.440668.8
141 schema:familyName Lin
142 schema:givenName Jinhua
143 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.014741635031.80
144 rdf:type schema:Person
145 sg:pub.10.1007/s13319-016-0112-9 schema:sameAs https://app.dimensions.ai/details/publication/pub.1034364377
146 https://doi.org/10.1007/s13319-016-0112-9
147 rdf:type schema:CreativeWork
148 sg:pub.10.1007/s13319-017-0117-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1083761437
149 https://doi.org/10.1007/s13319-017-0117-z
150 rdf:type schema:CreativeWork
151 grid-institutes:grid.440668.8 schema:alternateName Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China
152 schema:name Computer Application Technology, Changchun University of Technology, No. 229, Xiuzheng Road, 130012, Changchun City, Jilin Province, China
153 rdf:type schema:Organization
154 grid-institutes:grid.9227.e schema:alternateName Machinery and Electronics Engineering, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 130033, Changchun City, China
155 schema:name Machinery and Electronics Engineering, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 130033, Changchun City, China
156 rdf:type schema:Organization
 



