Ontology type: schema:ScholarlyArticle
2019-02-27
AUTHORS: Gaohua Lin, Yongming Zhang, Gao Xu, Qixing Zhang
ABSTRACT: Research on video smoke detection has become a hot topic in fire disaster prevention and control because it enables early detection. Conventional methods use handcrafted features that rely on prior knowledge to recognize whether a frame contains smoke. Such methods are often designed for a fixed fire scene and are sensitive to the environment, resulting in false alarms. In this paper, we use convolutional neural networks (CNNs), which are state of the art for image recognition tasks, to identify smoke in video. We develop a joint detection framework based on Faster RCNN and 3D CNN. An improved Faster RCNN with non-maximum annexation is used to locate smoke targets from static spatial information. A 3D CNN then recognizes smoke by combining dynamic spatial–temporal information. Compared with common CNN methods that use single images for smoke detection, the 3D CNN improves recognition accuracy significantly. Different network structures and data processing methods for the 3D CNN are compared, including slow fusion and optical flow. Tested on a dataset comprising smoke video from multiple sources, the proposed frameworks perform very well in smoke location and recognition. The two-stream 3D CNN framework performs best, with a detection rate of 95.23% and a low false alarm rate of 0.39% on smoke video sequences.
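The core operation behind the 3D CNNs described in the abstract is a convolution over a stack of video frames (time × height × width), which is what lets the network combine spatial and temporal information. The following is a minimal didactic sketch of that operation in pure Python, not the authors' implementation; real networks would use a deep-learning framework, learned multi-channel kernels, and many layers:

```python
# Naive single-channel 3D convolution (valid padding, stride 1) over a
# T x H x W volume of frames -- a didactic sketch of the operation used
# by the 3D CNNs discussed in the abstract, NOT the paper's implementation.

def conv3d(volume, kernel):
    """Convolve a T x H x W volume with a t x h x w kernel."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    t, h, w = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for i in range(T - t + 1):          # slide over time
        plane = []
        for j in range(H - h + 1):      # slide over height
            row = []
            for k in range(W - w + 1):  # slide over width
                acc = 0.0
                for di in range(t):
                    for dj in range(h):
                        for dk in range(w):
                            acc += volume[i + di][j + dj][k + dk] * kernel[di][dj][dk]
                row.append(acc)
            plane.append(row)
        out.append(plane)
    return out

# Example: a 2x2x2 averaging kernel over a 3x3x3 volume of ones.
vol = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
ker = [[[0.125] * 2 for _ in range(2)] for _ in range(2)]
result = conv3d(vol, ker)
print(len(result), len(result[0]), len(result[0][0]))  # 2 2 2
print(result[0][0][0])                                 # 1.0
```

Because the kernel spans several consecutive frames, a single output value already mixes motion (temporal) and texture (spatial) information, which is the property the abstract contrasts against single-image CNN methods.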
PAGES: 1-21
http://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w
DOI: http://dx.doi.org/10.1007/s10694-019-00832-w
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1112435274
JSON-LD is the canonical representation for SciGraph data.
TIP: You can open this SciGraph record with an external JSON-LD service such as the JSON-LD Playground or the Google Structured Data Testing Tool.
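Since JSON-LD is fully compatible with plain JSON, the record below can be read with any JSON parser. A minimal Python sketch, using a hypothetical mini-record that mirrors the field names of the full record on this page:

```python
import json

# Hypothetical mini-record mirroring the structure of the SciGraph
# JSON-LD record below (an array of typed objects with an @context).
record = json.loads("""
[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
    "name": "Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks",
    "author": [
      {"familyName": "Lin", "givenName": "Gaohua", "type": "Person"}
    ],
    "type": "ScholarlyArticle"
  }
]
""")

article = record[0]
authors = [f"{a['givenName']} {a['familyName']}" for a in article["author"]]
print(article["name"])
print(authors)  # ['Gaohua Lin']
```

A full JSON-LD processor would additionally expand the compact keys against the `@context`; for simple field extraction like this, plain JSON access is enough.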
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "University of Science and Technology of China",
"id": "https://www.grid.ac/institutes/grid.59053.3a",
"name": [
"State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
],
"type": "Organization"
},
"familyName": "Lin",
"givenName": "Gaohua",
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Science and Technology of China",
"id": "https://www.grid.ac/institutes/grid.59053.3a",
"name": [
"State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
],
"type": "Organization"
},
"familyName": "Zhang",
"givenName": "Yongming",
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Science and Technology of China",
"id": "https://www.grid.ac/institutes/grid.59053.3a",
"name": [
"State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
],
"type": "Organization"
},
"familyName": "Xu",
"givenName": "Gao",
"type": "Person"
},
{
"affiliation": {
"alternateName": "University of Science and Technology of China",
"id": "https://www.grid.ac/institutes/grid.59053.3a",
"name": [
"State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
],
"type": "Organization"
},
"familyName": "Zhang",
"givenName": "Qixing",
"type": "Person"
}
],
"citation": [
{
"id": "sg:pub.10.1007/s10694-014-0453-y",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1023724711",
"https://doi.org/10.1007/s10694-014-0453-y"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10694-009-0110-z",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1030702027",
"https://doi.org/10.1007/s10694-009-0110-z"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.dsp.2013.07.003",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1033708337"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2014.223",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037471929"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1113/jphysiol.1962.sp006837",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037811822"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1117/1.2748752",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1053448110"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/5.726791",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061179979"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tpami.2016.2577031",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061745117"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tpami.2016.2599174",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061745144"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-63315-2_60",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1090831548",
"https://doi.org/10.1007/978-3-319-63315-2_60"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/978-3-319-65172-9_16",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1090941240",
"https://doi.org/10.1007/978-3-319-65172-9_16"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s11042-017-5090-2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1091307147",
"https://doi.org/10.1007/s11042-017-5090-2"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/access.2017.2747399",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1091480020"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2014.81",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094727707"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvpr.2016.119",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094850311"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.2991/ifmeita-16.2016.105",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1099210417"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/iccv.2017.617",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1100060634"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10694-017-0695-6",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1100165842",
"https://doi.org/10.1007/s10694-017-0695-6"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.proeng.2017.12.034",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1100899581"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/access.2018.2812835",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1101404038"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/3191442.3191450",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1103772782"
],
"type": "CreativeWork"
}
],
"datePublished": "2019-02-27",
"datePublishedReg": "2019-02-27",
"description": "Research on video smoke detection has become a hot topic in fire disaster prevention and control as it can realize early detection. Conventional methods use handcrafted features rely on prior knowledge to recognize whether a frame contains smoke. Such methods are often proposed for fixed fire scene and sensitive to the environment resulting in false alarms. In this paper, we use convolutional neural networks (CNN), which are state-of-the-art for image recognition tasks to identify smoke in video. We develop a joint detection framework based on faster RCNN and 3D CNN. An improved faster RCNN with non-maximum annexation is used to realize the smoke target location based on static spatial information. Then, 3D CNN realizes smoke recognition by combining dynamic spatial\u2013temporal information. Compared with common CNN methods using image for smoke detection, 3D CNN improved the recognition accuracy significantly. Different network structures and data processing methods of 3D CNN have been compared, including Slow Fusion and optical flow. Tested on a dataset that comprises smoke video from multiple sources, the proposed frameworks are shown to perform very well in smoke location and recognition. Finally, the framework of two-stream 3D CNN performs the best, with a detection rate of 95.23% and a low false alarm rate of 0.39% for smoke video sequences.",
"genre": "research_article",
"id": "sg:pub.10.1007/s10694-019-00832-w",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1122008",
"issn": [
"0015-2684",
"1572-8099"
],
"name": "Fire Technology",
"type": "Periodical"
}
],
"name": "Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks",
"pagination": "1-21",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"40b3021f48e8608987a011f31984bea29310c70bc8409bc56bda5923b95caf83"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s10694-019-00832-w"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1112435274"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s10694-019-00832-w",
"https://app.dimensions.ai/details/publication/pub.1112435274"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-11T10:20",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000348_0000000348/records_54331_00000002.jsonl",
"type": "ScholarlyArticle",
"url": "https://link.springer.com/10.1007%2Fs10694-019-00832-w"
}
]
Download the RDF metadata as: JSON-LD, N-Triples, Turtle, or RDF/XML.
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'
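The same content negotiation shown in the curl commands above can be done from Python's standard library by setting the `Accept` header on a request. This sketch only constructs the request; the actual fetch is left commented out because it requires network access:

```python
import urllib.request

# Build a request that negotiates for the JSON-LD serialization,
# mirroring: curl -H 'Accept: application/ld+json' <url>
url = "https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w"
req = urllib.request.Request(url, headers={"Accept": "application/ld+json"})

print(req.get_header("Accept"))  # application/ld+json
print(req.get_full_url())

# To actually fetch the record (network required):
# with urllib.request.urlopen(req) as resp:
#     data = resp.read().decode("utf-8")
```

Swapping the `Accept` value for `application/n-triples`, `text/turtle`, or `application/rdf+xml` selects the other serializations, exactly as in the curl examples.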
This table displays all metadata directly associated with this object as RDF triples: 141 triples, 21 predicates, 45 URIs, 16 literals, 5 blank nodes.