Efficient Online Segmentation for Sparse 3D Laser Scans


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2017-02-20

AUTHORS

Igor Bogoslavskyi, Cyrill Stachniss

ABSTRACT

The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.

PAGES

41-52

References to SciGraph publications

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y

DOI

http://dx.doi.org/10.1007/s41064-016-0003-y

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1083893630


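The abstract sketches the pipeline: remove the ground from the scan, then segment the remaining points directly on the 2.5D range image rather than on a full 3D point cloud. The authors' C++/ROS implementation is not reproduced here; purely as an illustration of the range-image idea, the sketch below runs a breadth-first flood fill over a range image with an angle-based neighbor test in the spirit of the paper. The function name, the angular step alpha, and the threshold theta are assumptions for illustration, and ground pixels are assumed to have already been zeroed out by a prior ground-removal step.

# Illustrative sketch only -- not the authors' C++/ROS implementation.
# Assumptions: `ranges` is a dense range image in meters with 0 where the
# scanner returned nothing, ground pixels were zeroed by ground removal,
# `alpha` is the angular step between neighboring beams (one value for
# both directions, a simplification), `theta` is a hypothetical threshold.
import math
from collections import deque

def segment_range_image(ranges, alpha=0.0035, theta=math.radians(10.0)):
    rows, cols = len(ranges), len(ranges[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 1
    for r0 in range(rows):
        for c0 in range(cols):
            if ranges[r0][c0] <= 0.0 or labels[r0][c0]:
                continue
            labels[r0][c0] = next_label
            queue = deque([(r0, c0)])
            while queue:  # breadth-first flood fill over image neighbors
                r, c = queue.popleft()
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rn, cn = r + dr, (c + dc) % cols  # wrap the 360 deg scan
                    if not 0 <= rn < rows or labels[rn][cn]:
                        continue
                    d1 = max(ranges[r][c], ranges[rn][cn])
                    d2 = min(ranges[r][c], ranges[rn][cn])
                    if d2 <= 0.0:
                        continue
                    # Angle between the longer beam and the line connecting
                    # the two measured points; a large angle means a shallow
                    # depth jump, i.e. likely the same object.
                    beta = math.atan2(d2 * math.sin(alpha),
                                      d1 - d2 * math.cos(alpha))
                    if beta > theta:
                        labels[rn][cn] = labels[r][c]
                        queue.append((rn, cn))
            next_label += 1
    return labels  # per-pixel segment ids; 0 marks ground/no-return pixels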

JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany", 
          "id": "http://www.grid.ac/institutes/grid.10388.32", 
          "name": [
            "Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Bogoslavskyi", 
        "givenName": "Igor", 
        "id": "sg:person.015107027235.47", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015107027235.47"
        ], 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany", 
          "id": "http://www.grid.ac/institutes/grid.10388.32", 
          "name": [
            "Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Stachniss", 
        "givenName": "Cyrill", 
        "id": "sg:person.015152144445.37", 
        "sameAs": [
          "https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015152144445.37"
        ], 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/978-3-642-28572-1_40", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1053108866", 
          "https://doi.org/10.1007/978-3-642-28572-1_40"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2017-02-20", 
    "datePublishedReg": "2017-02-20", 
    "description": "The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.", 
    "genre": "article", 
    "id": "sg:pub.10.1007/s41064-016-0003-y", 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1320463", 
        "issn": [
          "2512-2789", 
          "2512-2819"
        ], 
        "name": "PFG \u2013 Journal of Photogrammetry, Remote Sensing and Geoinformation Science", 
        "publisher": "Springer Nature", 
        "type": "Periodical"
      }, 
      {
        "issueNumber": "1", 
        "type": "PublicationIssue"
      }, 
      {
        "type": "PublicationVolume", 
        "volumeNumber": "85"
      }
    ], 
    "keywords": [
      "high-quality segmentation results", 
      "range image representation", 
      "autonomous navigation system", 
      "most mobile systems", 
      "individual objects", 
      "sparse 3D data", 
      "laser scans", 
      "mobile robot", 
      "mobile CPU", 
      "online segmentation", 
      "fast segmentation", 
      "image representation", 
      "source code", 
      "autonomous cars", 
      "fast execution", 
      "small computational demands", 
      "segmentation results", 
      "mobile systems", 
      "dynamic environment", 
      "point clouds", 
      "range images", 
      "computational demands", 
      "first processing step", 
      "frame rate", 
      "current image", 
      "perception cues", 
      "different objects", 
      "navigation system", 
      "single core", 
      "such systems", 
      "segmentation", 
      "objects", 
      "processing steps", 
      "images", 
      "implementation", 
      "CPU", 
      "robot", 
      "large number", 
      "execution", 
      "cloud", 
      "scene", 
      "system", 
      "scanner", 
      "computation", 
      "code", 
      "key focus", 
      "representation", 
      "effective method", 
      "sensors", 
      "cars", 
      "environment", 
      "data", 
      "method", 
      "demand", 
      "work", 
      "step", 
      "Further analysis", 
      "number", 
      "scans", 
      "focus", 
      "core", 
      "cues", 
      "ability", 
      "results", 
      "analysis", 
      "ground", 
      "ROS", 
      "rate", 
      "Hertz", 
      "approach", 
      "paper"
    ], 
    "name": "Efficient Online Segmentation for Sparse 3D Laser Scans", 
    "pagination": "41-52", 
    "productId": [
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1083893630"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s41064-016-0003-y"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s41064-016-0003-y", 
      "https://app.dimensions.ai/details/publication/pub.1083893630"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2022-08-04T17:05", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-springernature-scigraph/baseset/20220804/entities/gbq_results/article/article_739.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://doi.org/10.1007/s41064-016-0003-y"
  }
]
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y'
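
The same content negotiation works from any HTTP client. As a minimal sketch using only the Python standard library, the snippet below fetches the record as JSON-LD and prints a few fields; it assumes the endpoint returns the one-element JSON array shown earlier on this page.

# Minimal sketch: the same content negotiation as the curl examples above,
# using only the Python standard library.
import json
import urllib.request

URL = "https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y"
req = urllib.request.Request(URL, headers={"Accept": "application/ld+json"})
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)[0]  # endpoint returns a one-element array

print(record["name"])           # Efficient Online Segmentation for Sparse 3D Laser Scans
print(record["datePublished"])  # 2017-02-20
print([a["familyName"] for a in record["author"]])  # ['Bogoslavskyi', 'Stachniss']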


 

This table displays all metadata directly associated with this object as RDF triples.

139 TRIPLES      21 PREDICATES      96 URIs      87 LITERALS      6 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s41064-016-0003-y schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Neb8e4560431249e78e605a46078302b2
4 schema:citation sg:pub.10.1007/978-3-642-28572-1_40
5 schema:datePublished 2017-02-20
6 schema:datePublishedReg 2017-02-20
7 schema:description The ability to extract individual objects in the scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Such systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception cues, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before a further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data in a range image representation into different objects. A key focus of our work is a fast execution with several hundred Hertz. Our implementation has small computational demands so that it can run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on a 2.5D range image, which enables a fast segmentation for each 3D scan. This approach can furthermore handle sparse 3D data well, which is important for scanners such as the new Velodyne VLP-16 scanner. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method can operate at frame rates that are substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.
8 schema:genre article
9 schema:isAccessibleForFree false
10 schema:isPartOf N867a40c7abb94a2e8d34185b78272ac6
11 Ndf83cbf4875d4e8994affbefd4861700
12 sg:journal.1320463
13 schema:keywords CPU
14 Further analysis
15 Hertz
16 ROS
17 ability
18 analysis
19 approach
20 autonomous cars
21 autonomous navigation system
22 cars
23 cloud
24 code
25 computation
26 computational demands
27 core
28 cues
29 current image
30 data
31 demand
32 different objects
33 dynamic environment
34 effective method
35 environment
36 execution
37 fast execution
38 fast segmentation
39 first processing step
40 focus
41 frame rate
42 ground
43 high-quality segmentation results
44 image representation
45 images
46 implementation
47 individual objects
48 key focus
49 large number
50 laser scans
51 method
52 mobile CPU
53 mobile robot
54 mobile systems
55 most mobile systems
56 navigation system
57 number
58 objects
59 online segmentation
60 paper
61 perception cues
62 point clouds
63 processing steps
64 range image representation
65 range images
66 rate
67 representation
68 results
69 robot
70 scanner
71 scans
72 scene
73 segmentation
74 segmentation results
75 sensors
76 single core
77 small computational demands
78 source code
79 sparse 3D data
80 step
81 such systems
82 system
83 work
84 schema:name Efficient Online Segmentation for Sparse 3D Laser Scans
85 schema:pagination 41-52
86 schema:productId N87f86fc8d4634d729cb916175e54e360
87 Nd66f6581be16415ea319bffcc6cf9808
88 schema:sameAs https://app.dimensions.ai/details/publication/pub.1083893630
89 https://doi.org/10.1007/s41064-016-0003-y
90 schema:sdDatePublished 2022-08-04T17:05
91 schema:sdLicense https://scigraph.springernature.com/explorer/license/
92 schema:sdPublisher Ne49d81fc87cd44d194c166e8befc9509
93 schema:url https://doi.org/10.1007/s41064-016-0003-y
94 sgo:license sg:explorer/license/
95 sgo:sdDataset articles
96 rdf:type schema:ScholarlyArticle
97 N0c6e2489260a4fda96894221c5446236 rdf:first sg:person.015152144445.37
98 rdf:rest rdf:nil
99 N867a40c7abb94a2e8d34185b78272ac6 schema:issueNumber 1
100 rdf:type schema:PublicationIssue
101 N87f86fc8d4634d729cb916175e54e360 schema:name doi
102 schema:value 10.1007/s41064-016-0003-y
103 rdf:type schema:PropertyValue
104 Nd66f6581be16415ea319bffcc6cf9808 schema:name dimensions_id
105 schema:value pub.1083893630
106 rdf:type schema:PropertyValue
107 Ndf83cbf4875d4e8994affbefd4861700 schema:volumeNumber 85
108 rdf:type schema:PublicationVolume
109 Ne49d81fc87cd44d194c166e8befc9509 schema:name Springer Nature - SN SciGraph project
110 rdf:type schema:Organization
111 Neb8e4560431249e78e605a46078302b2 rdf:first sg:person.015107027235.47
112 rdf:rest N0c6e2489260a4fda96894221c5446236
113 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
114 schema:name Information and Computing Sciences
115 rdf:type schema:DefinedTerm
116 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
117 schema:name Artificial Intelligence and Image Processing
118 rdf:type schema:DefinedTerm
119 sg:journal.1320463 schema:issn 2512-2789
120 2512-2819
121 schema:name PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science
122 schema:publisher Springer Nature
123 rdf:type schema:Periodical
124 sg:person.015107027235.47 schema:affiliation grid-institutes:grid.10388.32
125 schema:familyName Bogoslavskyi
126 schema:givenName Igor
127 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015107027235.47
128 rdf:type schema:Person
129 sg:person.015152144445.37 schema:affiliation grid-institutes:grid.10388.32
130 schema:familyName Stachniss
131 schema:givenName Cyrill
132 schema:sameAs https://app.dimensions.ai/discover/publication?and_facet_researcher=ur.015152144445.37
133 rdf:type schema:Person
134 sg:pub.10.1007/978-3-642-28572-1_40 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053108866
135 https://doi.org/10.1007/978-3-642-28572-1_40
136 rdf:type schema:CreativeWork
137 grid-institutes:grid.10388.32 schema:alternateName Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany
138 schema:name Institute of Geodesy and Geoinformation, University of Bonn, Nussallee 15, 53115, Bonn, Germany
139 rdf:type schema:Organization
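
To query these triples programmatically instead of reading the table, the record can be loaded into an RDF library. Below is a minimal sketch using rdflib, an assumed third-party Python package that the SciGraph page itself does not prescribe; it parses the N-Triples serialization fetched as in the curl example above and reproduces counts like those in the table header.

# Sketch using rdflib, an assumed third-party package (pip install rdflib);
# the SciGraph service does not prescribe any particular tooling.
from rdflib import Graph, Namespace

URL = "https://scigraph.springernature.com/pub.10.1007/s41064-016-0003-y"

g = Graph()
g.parse(URL, format="nt")  # fetch the N-Triples serialization, as with curl

print(len(g), "triples")                        # should match the table header
print(len({p for _, p, _ in g}), "predicates")

# List the article's keywords. The http:// schema.org namespace is an
# assumption here; switch to https:// if the parsed data uses that form.
SCHEMA = Namespace("http://schema.org/")
for kw in g.objects(predicate=SCHEMA.keywords):
    print(kw)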
 



