Ontology type: schema:ScholarlyArticle
Published: 2019-02-28
AUTHORS: Yu Gao, Pengle Cheng
ABSTRACT: The damage caused by forest fires to forestry resources and the economy is severe. As one of the most important characteristics of an early forest fire, smoke is widely used as a signal of forest fire. In this paper, we propose a novel forest fire smoke detection method based on computer vision and a diffusion model. Unlike video-based methods that usually rely on image feature extraction, we try to find the shape of smoke at its generation stage. To combine vision and the diffusion model, the basic concept of the smoke root is proposed. In the frame-processing stage, no characteristics of fire smoke (such as texture, color, or frequency information) are extracted; continuous frames are used only to extract stable points in dynamic areas as smoke root candidate points. In the diffusion model simulation stage, all smoke root candidate point information is adopted by the model to generate the simulated smoke. Finally, a matching algorithm based on color, dynamic areas, and the simulated smoke is applied to obtain the final results. To reduce computational complexity, we ignore the simulation of smoke details such as texture and turbulence, and retain only the contour features in two-dimensional form. Experiments show that, provided a smoke root exists in the frames, the algorithm obtains stable detection results and a low false positive rate in cloudy scenes.
PAGES: 1-26
URL: http://scigraph.springernature.com/pub.10.1007/s10694-019-00831-x
DOI: http://dx.doi.org/10.1007/s10694-019-00831-x
DIMENSIONS: https://app.dimensions.ai/details/publication/pub.1112477939
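The frame-processing step in the abstract — extracting stable points in dynamic areas as smoke root candidates — can be sketched as follows. This is an illustrative interpretation using frame differencing, not the authors' implementation; the function name, thresholds, and the reading of "stable point in a dynamic area" as a persistently dynamic, spatially fixed pixel are all assumptions.

```python
def smoke_root_candidates(frames, motion_thresh=10, persistence=0.9):
    """Mark pixels that change noticeably in (nearly) every consecutive
    frame pair; such persistently dynamic, spatially fixed locations
    stand in for the paper's smoke root candidate points (hypothetical)."""
    n_pairs = len(frames) - 1
    height, width = len(frames[0]), len(frames[0][0])
    mask = [[False] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Count frame pairs in which this pixel changes noticeably
            hits = sum(
                abs(frames[t + 1][y][x] - frames[t][y][x]) > motion_thresh
                for t in range(n_pairs)
            )
            mask[y][x] = hits / n_pairs >= persistence
    return mask

# Six synthetic 4x4 grayscale frames: pixel (1, 1) flickers every frame
# (a persistent dynamic point), pixel (2, 2) changes only once.
frames = []
for t in range(6):
    f = [[0] * 4 for _ in range(4)]
    f[1][1] = 255 if t % 2 else 0
    f[2][2] = 255 if t >= 1 else 0
    frames.append(f)

candidates = smoke_root_candidates(frames)
print(candidates[1][1], candidates[2][2])  # → True False
```

In the paper's pipeline these candidate points would then seed the diffusion-model simulation; here only the candidate-selection idea is shown.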
JSON-LD is the canonical representation for SciGraph data.
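Because the canonical representation is plain JSON-LD, the record can be consumed with the standard library alone. The sketch below parses an abridged copy of the record shown below and pulls out the title's author names; the embedded string is a shortened excerpt, not the full record.

```python
import json

# Abridged excerpt of the JSON-LD record (full record appears below)
record_jsonld = """[{
  "name": "Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion Model",
  "datePublished": "2019-02-28",
  "author": [
    {"givenName": "Yu", "familyName": "Gao", "type": "Person"},
    {"givenName": "Pengle", "familyName": "Cheng", "type": "Person"}
  ],
  "sameAs": ["https://doi.org/10.1007/s10694-019-00831-x"]
}]"""

# JSON-LD is valid JSON, so json.loads is enough for simple field access
record = json.loads(record_jsonld)[0]
authors = [f'{a["givenName"]} {a["familyName"]}' for a in record["author"]]
print(authors)  # → ['Yu Gao', 'Pengle Cheng']
```

For @context-aware processing (expansion, compaction, framing) a dedicated JSON-LD processor would be needed; plain JSON access suffices for flat field lookups like this.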
[
{
"@context": "https://springernature.github.io/scigraph/jsonld/sgcontext.json",
"about": [
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/0801",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Artificial Intelligence and Image Processing",
"type": "DefinedTerm"
},
{
"id": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/08",
"inDefinedTermSet": "http://purl.org/au-research/vocabulary/anzsrc-for/2008/",
"name": "Information and Computing Sciences",
"type": "DefinedTerm"
}
],
"author": [
{
"affiliation": {
"alternateName": "Beijing Forestry University",
"id": "https://www.grid.ac/institutes/grid.66741.32",
"name": [
"School of Technology, Beijing Forestry University, No. 35 East Tsinghua Road, 100083, Beijing, Haidian, China"
],
"type": "Organization"
},
"familyName": "Gao",
"givenName": "Yu",
"type": "Person"
},
{
"affiliation": {
"alternateName": "Beijing Forestry University",
"id": "https://www.grid.ac/institutes/grid.66741.32",
"name": [
"School of Technology, Beijing Forestry University, No. 35 East Tsinghua Road, 100083, Beijing, Haidian, China"
],
"type": "Organization"
},
"familyName": "Cheng",
"givenName": "Pengle",
"type": "Person"
}
],
"citation": [
{
"id": "https://doi.org/10.1145/311535.311548",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1015455772"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.ijleo.2015.05.082",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1016512366"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/383259.383260",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1017619206"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1090/s0025-5718-1968-0242392-2",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1027150656"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.firesaf.2011.03.003",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1027359232"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1145/357994.358023",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1030198768"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1006/gmip.1996.0039",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1037359928"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.firesaf.2016.08.004",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1044207741"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1071/wf02048",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1050783990"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1023/b:visi.0000029664.99615.94",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1052687286",
"https://doi.org/10.1023/b:visi.0000029664.99615.94"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/tip.2010.2101613",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1061642729"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1561/2200000006",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1068001401"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1134/s1054661817010138",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1084223305",
"https://doi.org/10.1134/s1054661817010138"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s11760-017-1102-y",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1085103968",
"https://doi.org/10.1007/s11760-017-1102-y"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.3233/jifs-161605",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1086201338"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10694-017-0665-z",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1086351650",
"https://doi.org/10.1007/s10694-017-0665-z"
],
"type": "CreativeWork"
},
},
{
"id": "https://doi.org/10.1109/access.2017.2747399",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1091480020"
],
"type": "CreativeWork"
},
{
"id": "sg:pub.10.1007/s10694-017-0683-x",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1092747097",
"https://doi.org/10.1007/s10694-017-0683-x"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/wcica.2016.7578611",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1093534144"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/iciicii.2016.0045",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1093879221"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/cvprw.2012.6238924",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094191922"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/wcnm.2005.1544272",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094491028"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/isoen.2017.7968889",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1094763647"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/iecon.2016.7793196",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1095650815"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1016/j.proeng.2017.12.034",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1100899581"
],
"type": "CreativeWork"
},
{
"id": "https://doi.org/10.1109/ic4me2.2018.8465661",
"sameAs": [
"https://app.dimensions.ai/details/publication/pub.1107147809"
],
"type": "CreativeWork"
}
],
"datePublished": "2019-02-28",
"datePublishedReg": "2019-02-28",
"description": "The damage caused by forest fire to forestry resources and economy is quite serious. As one of the most important characters of early forest fire, smoke is widely used as a signal of forest fire. In this paper, we propose a novel forest fire smoke detection method based on computer vision and diffusion model. Unlike the video-based methods that usually rely on image characters extraction, we try to find the shape of smoke that is at the generation stage. To combine vision and diffusion model together, the basic concept of smoke root is proposed. In the frame processing stage, none characters of fire smoke are extracted (like texture, color, frequency information etc.), and continuous frames are only used to extract stable points in dynamic areas as the smoke root candidate points. In the diffusion model simulation stage, all smoke root candidate points information is adopted by the model to generate the simulation smoke. Finally, the match algorithm based on color, dynamic areas and simulation smoke is implemented to get the final results. In order to reduce the complexity of computation, we ignored the simulation process of the smoke details, such as texture and turbulence, and only retained the contour features in two-dimensional form. Experiments show that under the condition of smoke root existing in the frames, the algorithm can obtain stable detection results and low false positive rate in cloudy scenes.",
"genre": "research_article",
"id": "sg:pub.10.1007/s10694-019-00831-x",
"inLanguage": [
"en"
],
"isAccessibleForFree": false,
"isPartOf": [
{
"id": "sg:journal.1122008",
"issn": [
"0015-2684",
"1572-8099"
],
"name": "Fire Technology",
"type": "Periodical"
}
],
"name": "Forest Fire Smoke Detection Based on Visual Smoke Root and Diffusion Model",
"pagination": "1-26",
"productId": [
{
"name": "readcube_id",
"type": "PropertyValue",
"value": [
"a82c952a3c106980570b19d52ec49961c054834eddbd5f7b161caadd0879c8f1"
]
},
{
"name": "doi",
"type": "PropertyValue",
"value": [
"10.1007/s10694-019-00831-x"
]
},
{
"name": "dimensions_id",
"type": "PropertyValue",
"value": [
"pub.1112477939"
]
}
],
"sameAs": [
"https://doi.org/10.1007/s10694-019-00831-x",
"https://app.dimensions.ai/details/publication/pub.1112477939"
],
"sdDataset": "articles",
"sdDatePublished": "2019-04-11T11:04",
"sdLicense": "https://scigraph.springernature.com/explorer/license/",
"sdPublisher": {
"name": "Springer Nature - SN SciGraph project",
"type": "Organization"
},
"sdSource": "s3://com-uberresearch-data-dimensions-target-20181106-alternative/cleanup/v134/2549eaecd7973599484d7c17b260dba0a4ecb94b/merge/v9/a6c9fde33151104705d4d7ff012ea9563521a3ce/jats-lookup/v90/0000000352_0000000352/records_60366_00000004.jsonl",
"type": "ScholarlyArticle",
"url": "https://link.springer.com/10.1007%2Fs10694-019-00831-x"
}
]
Download the RDF metadata in any of the following formats:
JSON-LD is a popular format for linked data which is fully compatible with JSON.
curl -H 'Accept: application/ld+json' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00831-x'
N-Triples is a line-based linked data format ideal for batch operations.
curl -H 'Accept: application/n-triples' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00831-x'
Turtle is a human-readable linked data format.
curl -H 'Accept: text/turtle' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00831-x'
RDF/XML is a standard XML format for linked data.
curl -H 'Accept: application/rdf+xml' 'https://scigraph.springernature.com/pub.10.1007/s10694-019-00831-x'
The metadata directly associated with this object comprises 143 RDF triples using 21 predicates, with 50 URIs, 16 literals, and 5 blank nodes.