Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks


Ontology type: schema:ScholarlyArticle     


Article Info

DATE

2019-02-27

AUTHORS

Gaohua Lin, Yongming Zhang, Gao Xu, Qixing Zhang

ABSTRACT

Research on video smoke detection has become a hot topic in fire disaster prevention and control, as it can realize early detection. Conventional methods use handcrafted features that rely on prior knowledge to recognize whether a frame contains smoke. Such methods are often designed for a fixed fire scene and are sensitive to the environment, resulting in false alarms. In this paper, we use convolutional neural networks (CNN), which are the state of the art for image recognition tasks, to identify smoke in video. We develop a joint detection framework based on Faster RCNN and 3D CNN. An improved Faster RCNN with non-maximum annexation is used to realize smoke target location based on static spatial information. Then, 3D CNN realizes smoke recognition by combining dynamic spatial–temporal information. Compared with common CNN methods that use single images for smoke detection, 3D CNN improves the recognition accuracy significantly. Different network structures and data processing methods for 3D CNN have been compared, including Slow Fusion and optical flow. Tested on a dataset comprising smoke videos from multiple sources, the proposed frameworks are shown to perform very well in smoke location and recognition. Finally, the two-stream 3D CNN framework performs best, with a detection rate of 95.23% and a low false alarm rate of 0.39% on smoke video sequences.

PAGES

1-21

Identifiers

URI

http://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w

DOI

http://dx.doi.org/10.1007/s10694-019-00832-w

DIMENSIONS

https://app.dimensions.ai/details/publication/pub.1112435274



JSON-LD is the canonical representation for SciGraph data.


[
  {
    "@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json", 
    "about": [
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Artificial Intelligence and Image Processing", 
        "type": "DefinedTerm"
      }, 
      {
        "id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08", 
        "inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/", 
        "name": "Information and Computing Sciences", 
        "type": "DefinedTerm"
      }
    ], 
    "author": [
      {
        "affiliation": {
          "alternateName": "University of Science and Technology of China", 
          "id": "https://www.grid.ac/institutes/grid.59053.3a", 
          "name": [
            "State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Lin", 
        "givenName": "Gaohua", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Science and Technology of China", 
          "id": "https://www.grid.ac/institutes/grid.59053.3a", 
          "name": [
            "State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zhang", 
        "givenName": "Yongming", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Science and Technology of China", 
          "id": "https://www.grid.ac/institutes/grid.59053.3a", 
          "name": [
            "State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Xu", 
        "givenName": "Gao", 
        "type": "Person"
      }, 
      {
        "affiliation": {
          "alternateName": "University of Science and Technology of China", 
          "id": "https://www.grid.ac/institutes/grid.59053.3a", 
          "name": [
            "State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China"
          ], 
          "type": "Organization"
        }, 
        "familyName": "Zhang", 
        "givenName": "Qixing", 
        "type": "Person"
      }
    ], 
    "citation": [
      {
        "id": "sg:pub.10.1007/s10694-014-0453-y", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1023724711", 
          "https://doi.org/10.1007/s10694-014-0453-y"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s10694-009-0110-z", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1030702027", 
          "https://doi.org/10.1007/s10694-009-0110-z"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.dsp.2013.07.003", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1033708337"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2014.223", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037471929"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1113/jphysiol.1962.sp006837", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1037811822"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1117/1.2748752", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1053448110"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/5.726791", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061179979"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2016.2577031", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061745117"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/tpami.2016.2599174", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1061745144"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-63315-2_60", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1090831548", 
          "https://doi.org/10.1007/978-3-319-63315-2_60"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/978-3-319-65172-9_16", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1090941240", 
          "https://doi.org/10.1007/978-3-319-65172-9_16"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s11042-017-5090-2", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1091307147", 
          "https://doi.org/10.1007/s11042-017-5090-2"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/access.2017.2747399", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1091480020"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2014.81", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094727707"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/cvpr.2016.119", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1094850311"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.2991/ifmeita-16.2016.105", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1099210417"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/iccv.2017.617", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1100060634"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "sg:pub.10.1007/s10694-017-0695-6", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1100165842", 
          "https://doi.org/10.1007/s10694-017-0695-6"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1016/j.proeng.2017.12.034", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1100899581"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1109/access.2018.2812835", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1101404038"
        ], 
        "type": "CreativeWork"
      }, 
      {
        "id": "https://doi.org/10.1145/3191442.3191450", 
        "sameAs": [
          "https://app.dimensions.ai/details/publication/pub.1103772782"
        ], 
        "type": "CreativeWork"
      }
    ], 
    "datePublished": "2019-02-27", 
    "datePublishedReg": "2019-02-27", 
    "description": "Research on video smoke detection has become a hot topic in fire disaster prevention and control, as it can realize early detection. Conventional methods use handcrafted features that rely on prior knowledge to recognize whether a frame contains smoke. Such methods are often designed for a fixed fire scene and are sensitive to the environment, resulting in false alarms. In this paper, we use convolutional neural networks (CNN), which are the state of the art for image recognition tasks, to identify smoke in video. We develop a joint detection framework based on Faster RCNN and 3D CNN. An improved Faster RCNN with non-maximum annexation is used to realize smoke target location based on static spatial information. Then, 3D CNN realizes smoke recognition by combining dynamic spatial\u2013temporal information. Compared with common CNN methods that use single images for smoke detection, 3D CNN improves the recognition accuracy significantly. Different network structures and data processing methods for 3D CNN have been compared, including Slow Fusion and optical flow. Tested on a dataset comprising smoke videos from multiple sources, the proposed frameworks are shown to perform very well in smoke location and recognition. Finally, the two-stream 3D CNN framework performs best, with a detection rate of 95.23% and a low false alarm rate of 0.39% on smoke video sequences.", 
    "genre": "research_article", 
    "id": "sg:pub.10.1007/s10694-019-00832-w", 
    "inLanguage": [
      "en"
    ], 
    "isAccessibleForFree": false, 
    "isPartOf": [
      {
        "id": "sg:journal.1122008", 
        "issn": [
          "0015-2684", 
          "1572-8099"
        ], 
        "name": "Fire Technology", 
        "type": "Periodical"
      }
    ], 
    "name": "Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks", 
    "pagination": "1-21", 
    "productId": [
      {
        "name": "readcube_id", 
        "type": "PropertyValue", 
        "value": [
          "40b3021f48e8608987a011f31984bea29310c70bc8409bc56bda5923b95caf83"
        ]
      }, 
      {
        "name": "doi", 
        "type": "PropertyValue", 
        "value": [
          "10.1007/s10694-019-00832-w"
        ]
      }, 
      {
        "name": "dimensions_id", 
        "type": "PropertyValue", 
        "value": [
          "pub.1112435274"
        ]
      }
    ], 
    "sameAs": [
      "https://doi.org/10.1007/s10694-019-00832-w", 
      "https://app.dimensions.ai/details/publication/pub.1112435274"
    ], 
    "sdDataset": "articles", 
    "sdDatePublished": "2019-04-11T10:20", 
    "sdLicense": "https://scigraph.springernature.com/explorer/license/", 
    "sdPublisher": {
      "name": "Springer Nature - SN SciGraph project", 
      "type": "Organization"
    }, 
    "sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000348_0000000348/records_54331_00000002.jsonl", 
    "type": "ScholarlyArticle", 
    "url": "https://link.springer.com/10.1007%2Fs10694-019-00832-w"
  }
]
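The record above is plain JSON and can be consumed with any JSON library. As a minimal sketch (working on an abbreviated copy of the record rather than a live fetch), the title, DOI, and author names can be extracted like this:

```python
import json

# Abbreviated copy of the SciGraph JSON-LD record shown above
# (most fields omitted for brevity).
record_text = """
[
  {
    "name": "Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks",
    "author": [
      {"familyName": "Lin", "givenName": "Gaohua", "type": "Person"},
      {"familyName": "Zhang", "givenName": "Yongming", "type": "Person"},
      {"familyName": "Xu", "givenName": "Gao", "type": "Person"},
      {"familyName": "Zhang", "givenName": "Qixing", "type": "Person"}
    ],
    "productId": [
      {"name": "doi", "type": "PropertyValue", "value": ["10.1007/s10694-019-00832-w"]}
    ]
  }
]
"""

# The top-level value is a one-element list; the record is its first item.
record = json.loads(record_text)[0]

title = record["name"]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
# Pick the DOI out of the productId list by its "name" key.
doi = next(p["value"][0] for p in record["productId"] if p["name"] == "doi")

print(title)
print(", ".join(authors))
print(doi)
```

Note that multi-valued fields (`author`, `productId`, `value`) are always lists in this record, even when they hold a single entry, so the code indexes into them accordingly.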
 


HOW TO GET THIS DATA PROGRAMMATICALLY:

JSON-LD is a popular format for linked data which is fully compatible with JSON.

curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'

N-Triples is a line-based linked data format ideal for batch operations.

curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'

Turtle is a human-readable linked data format.

curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'

RDF/XML is a standard XML format for linked data.

curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w'
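The same content negotiation can be done from Python. The sketch below only builds the requests (the URL and Accept headers mirror the curl examples above); the `FORMATS` table and `build_request` helper are illustrative names, and the actual fetch is left as a comment so the example does not depend on network access:

```python
import urllib.request

SCIGRAPH_URL = "https://scigraph.springernature.com/pub.10.1007/s10694-019-00832-w"

# Map each RDF serialization to the Accept header used in the curl examples.
FORMATS = {
    "json-ld": "application/ld+json",
    "n-triples": "application/n-triples",
    "turtle": "text/turtle",
    "rdf-xml": "application/rdf+xml",
}

def build_request(fmt: str) -> urllib.request.Request:
    """Build a content-negotiated GET request for one serialization."""
    return urllib.request.Request(SCIGRAPH_URL, headers={"Accept": FORMATS[fmt]})

req = build_request("turtle")
print(req.get_header("Accept"))  # text/turtle
# To actually fetch the data:
#   body = urllib.request.urlopen(req).read().decode("utf-8")
```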


 

This table displays all metadata directly associated with this object as RDF triples.

141 TRIPLES      21 PREDICATES      45 URIs      16 LITERALS      5 BLANK NODES

Subject Predicate Object
1 sg:pub.10.1007/s10694-019-00832-w schema:about anzsrc-for:08
2 anzsrc-for:0801
3 schema:author Nfa5ea11c9d3f4842bbd5302785b76840
4 schema:citation sg:pub.10.1007/978-3-319-63315-2_60
5 sg:pub.10.1007/978-3-319-65172-9_16
6 sg:pub.10.1007/s10694-009-0110-z
7 sg:pub.10.1007/s10694-014-0453-y
8 sg:pub.10.1007/s10694-017-0695-6
9 sg:pub.10.1007/s11042-017-5090-2
10 https://doi.org/10.1016/j.dsp.2013.07.003
11 https://doi.org/10.1016/j.proeng.2017.12.034
12 https://doi.org/10.1109/5.726791
13 https://doi.org/10.1109/access.2017.2747399
14 https://doi.org/10.1109/access.2018.2812835
15 https://doi.org/10.1109/cvpr.2014.223
16 https://doi.org/10.1109/cvpr.2014.81
17 https://doi.org/10.1109/cvpr.2016.119
18 https://doi.org/10.1109/iccv.2017.617
19 https://doi.org/10.1109/tpami.2016.2577031
20 https://doi.org/10.1109/tpami.2016.2599174
21 https://doi.org/10.1113/jphysiol.1962.sp006837
22 https://doi.org/10.1117/1.2748752
23 https://doi.org/10.1145/3191442.3191450
24 https://doi.org/10.2991/ifmeita-16.2016.105
25 schema:datePublished 2019-02-27
26 schema:datePublishedReg 2019-02-27
27 schema:description Research on video smoke detection has become a hot topic in fire disaster prevention and control, as it can realize early detection. Conventional methods use handcrafted features that rely on prior knowledge to recognize whether a frame contains smoke. Such methods are often designed for a fixed fire scene and are sensitive to the environment, resulting in false alarms. In this paper, we use convolutional neural networks (CNN), which are the state of the art for image recognition tasks, to identify smoke in video. We develop a joint detection framework based on Faster RCNN and 3D CNN. An improved Faster RCNN with non-maximum annexation is used to realize smoke target location based on static spatial information. Then, 3D CNN realizes smoke recognition by combining dynamic spatial–temporal information. Compared with common CNN methods that use single images for smoke detection, 3D CNN improves the recognition accuracy significantly. Different network structures and data processing methods for 3D CNN have been compared, including Slow Fusion and optical flow. Tested on a dataset comprising smoke videos from multiple sources, the proposed frameworks are shown to perform very well in smoke location and recognition. Finally, the two-stream 3D CNN framework performs best, with a detection rate of 95.23% and a low false alarm rate of 0.39% on smoke video sequences.
28 schema:genre research_article
29 schema:inLanguage en
30 schema:isAccessibleForFree false
31 schema:isPartOf sg:journal.1122008
32 schema:name Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks
33 schema:pagination 1-21
34 schema:productId N485c867f4c1e41b7b034f3fc50b9ec7e
35 Nda0348db29ec4c75982726d1a359b672
36 Nfc2b1ca5d8f1435d80bbf4e7ff61b8bc
37 schema:sameAs https://app.dimensions.ai/details/publication/pub.1112435274
38 https://doi.org/10.1007/s10694-019-00832-w
39 schema:sdDatePublished 2019-04-11T10:20
40 schema:sdLicense https://scigraph.springernature.com/explorer/license/
41 schema:sdPublisher Nae1d879518574690a2c455b4513c37ae
42 schema:url https://link.springer.com/10.1007%2Fs10694-019-00832-w
43 sgo:license sg:explorer/license/
44 sgo:sdDataset articles
45 rdf:type schema:ScholarlyArticle
46 N35b0af0c6825437e9442affb473b24db rdf:first Ndd3d39f3481a4330a15d394a7ad7e707
47 rdf:rest Neec91fcd041c4782bbe52e5068304951
48 N485c867f4c1e41b7b034f3fc50b9ec7e schema:name doi
49 schema:value 10.1007/s10694-019-00832-w
50 rdf:type schema:PropertyValue
51 N67162331ee7d4fc49258bb40a240272e rdf:first Ndbb72b8441964581a71d385f35a6c55d
52 rdf:rest N35b0af0c6825437e9442affb473b24db
53 Nae1d879518574690a2c455b4513c37ae schema:name Springer Nature - SN SciGraph project
54 rdf:type schema:Organization
55 Nd4c41934d3884e189a095007ea5a2cc1 schema:affiliation https://www.grid.ac/institutes/grid.59053.3a
56 schema:familyName Lin
57 schema:givenName Gaohua
58 rdf:type schema:Person
59 Nda0348db29ec4c75982726d1a359b672 schema:name dimensions_id
60 schema:value pub.1112435274
61 rdf:type schema:PropertyValue
62 Ndbb72b8441964581a71d385f35a6c55d schema:affiliation https://www.grid.ac/institutes/grid.59053.3a
63 schema:familyName Zhang
64 schema:givenName Yongming
65 rdf:type schema:Person
66 Ndd3d39f3481a4330a15d394a7ad7e707 schema:affiliation https://www.grid.ac/institutes/grid.59053.3a
67 schema:familyName Xu
68 schema:givenName Gao
69 rdf:type schema:Person
70 Ne37f8ec50e1e4d1bb51aeef0f3ce77a2 schema:affiliation https://www.grid.ac/institutes/grid.59053.3a
71 schema:familyName Zhang
72 schema:givenName Qixing
73 rdf:type schema:Person
74 Neec91fcd041c4782bbe52e5068304951 rdf:first Ne37f8ec50e1e4d1bb51aeef0f3ce77a2
75 rdf:rest rdf:nil
76 Nfa5ea11c9d3f4842bbd5302785b76840 rdf:first Nd4c41934d3884e189a095007ea5a2cc1
77 rdf:rest N67162331ee7d4fc49258bb40a240272e
78 Nfc2b1ca5d8f1435d80bbf4e7ff61b8bc schema:name readcube_id
79 schema:value 40b3021f48e8608987a011f31984bea29310c70bc8409bc56bda5923b95caf83
80 rdf:type schema:PropertyValue
81 anzsrc-for:08 schema:inDefinedTermSet anzsrc-for:
82 schema:name Information and Computing Sciences
83 rdf:type schema:DefinedTerm
84 anzsrc-for:0801 schema:inDefinedTermSet anzsrc-for:
85 schema:name Artificial Intelligence and Image Processing
86 rdf:type schema:DefinedTerm
87 sg:journal.1122008 schema:issn 0015-2684
88 1572-8099
89 schema:name Fire Technology
90 rdf:type schema:Periodical
91 sg:pub.10.1007/978-3-319-63315-2_60 schema:sameAs https://app.dimensions.ai/details/publication/pub.1090831548
92 https://doi.org/10.1007/978-3-319-63315-2_60
93 rdf:type schema:CreativeWork
94 sg:pub.10.1007/978-3-319-65172-9_16 schema:sameAs https://app.dimensions.ai/details/publication/pub.1090941240
95 https://doi.org/10.1007/978-3-319-65172-9_16
96 rdf:type schema:CreativeWork
97 sg:pub.10.1007/s10694-009-0110-z schema:sameAs https://app.dimensions.ai/details/publication/pub.1030702027
98 https://doi.org/10.1007/s10694-009-0110-z
99 rdf:type schema:CreativeWork
100 sg:pub.10.1007/s10694-014-0453-y schema:sameAs https://app.dimensions.ai/details/publication/pub.1023724711
101 https://doi.org/10.1007/s10694-014-0453-y
102 rdf:type schema:CreativeWork
103 sg:pub.10.1007/s10694-017-0695-6 schema:sameAs https://app.dimensions.ai/details/publication/pub.1100165842
104 https://doi.org/10.1007/s10694-017-0695-6
105 rdf:type schema:CreativeWork
106 sg:pub.10.1007/s11042-017-5090-2 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091307147
107 https://doi.org/10.1007/s11042-017-5090-2
108 rdf:type schema:CreativeWork
109 https://doi.org/10.1016/j.dsp.2013.07.003 schema:sameAs https://app.dimensions.ai/details/publication/pub.1033708337
110 rdf:type schema:CreativeWork
111 https://doi.org/10.1016/j.proeng.2017.12.034 schema:sameAs https://app.dimensions.ai/details/publication/pub.1100899581
112 rdf:type schema:CreativeWork
113 https://doi.org/10.1109/5.726791 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061179979
114 rdf:type schema:CreativeWork
115 https://doi.org/10.1109/access.2017.2747399 schema:sameAs https://app.dimensions.ai/details/publication/pub.1091480020
116 rdf:type schema:CreativeWork
117 https://doi.org/10.1109/access.2018.2812835 schema:sameAs https://app.dimensions.ai/details/publication/pub.1101404038
118 rdf:type schema:CreativeWork
119 https://doi.org/10.1109/cvpr.2014.223 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037471929
120 rdf:type schema:CreativeWork
121 https://doi.org/10.1109/cvpr.2014.81 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094727707
122 rdf:type schema:CreativeWork
123 https://doi.org/10.1109/cvpr.2016.119 schema:sameAs https://app.dimensions.ai/details/publication/pub.1094850311
124 rdf:type schema:CreativeWork
125 https://doi.org/10.1109/iccv.2017.617 schema:sameAs https://app.dimensions.ai/details/publication/pub.1100060634
126 rdf:type schema:CreativeWork
127 https://doi.org/10.1109/tpami.2016.2577031 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061745117
128 rdf:type schema:CreativeWork
129 https://doi.org/10.1109/tpami.2016.2599174 schema:sameAs https://app.dimensions.ai/details/publication/pub.1061745144
130 rdf:type schema:CreativeWork
131 https://doi.org/10.1113/jphysiol.1962.sp006837 schema:sameAs https://app.dimensions.ai/details/publication/pub.1037811822
132 rdf:type schema:CreativeWork
133 https://doi.org/10.1117/1.2748752 schema:sameAs https://app.dimensions.ai/details/publication/pub.1053448110
134 rdf:type schema:CreativeWork
135 https://doi.org/10.1145/3191442.3191450 schema:sameAs https://app.dimensions.ai/details/publication/pub.1103772782
136 rdf:type schema:CreativeWork
137 https://doi.org/10.2991/ifmeita-16.2016.105 schema:sameAs https://app.dimensions.ai/details/publication/pub.1099210417
138 rdf:type schema:CreativeWork
139 https://www.grid.ac/institutes/grid.59053.3a schema:alternateName University of Science and Technology of China
140 schema:name State Key Laboratory of Fire Science, University of Science and Technology of China, 230026, Hefei, China
141 rdf:type schema:Organization
 



