Forest fire-spot detection plays a vital role in wildfire emergency response. Given the shortcomings of existing models in sample quality, multi-scale detection, and generalization to multi-view imagery, a forest fire-spot detection method, FFD-YOLO (forest fire detection based on YOLO), is proposed on the basis of YOLOv7. First, a multi-view visible-light forest fire dataset for high-vantage-point detection, FFHPV (forest fire of high point view), is constructed to strengthen the model's learning of fire-spot knowledge across viewpoints. Second, omni-dimensional dynamic convolution is introduced to build a spatial pyramid pooling layer (OD-SPP), improving the extraction of fire-spot features from multi-view data. Finally, the bounding-box regression loss Wise-IoU (wise intersection over union), with its dynamic non-monotonic focusing mechanism, is adopted to reduce the impact of low-quality samples on model accuracy and to improve the detection of small fire-spot targets. Experimental results show that, compared with YOLOv7, FFD-YOLO improves precision by 3.9%, recall by 3.7%, mean average precision by 4.0%, and F1-score by 0.038. In comparative experiments against YOLOv5, YOLOv8, DDQ (dense distinct query), DINO (detection transformer with improved denoising anchor boxes), Faster R-CNN, Sparse R-CNN, Mask R-CNN, FCOS, and YOLOX, FFD-YOLO achieves the highest precision (75.3%), recall (73.8%), mean average precision (77.6%), and F1-score (0.745), validating the feasibility and effectiveness of the method.
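The Wise-IoU loss mentioned above scales the ordinary IoU loss by a distance-based attention term, so that boxes far from their ground truth are penalized more. As a rough illustration only (the dynamic non-monotonic focusing mechanism of the full WIoU v3 variant is omitted, and all function names here are illustrative, not the paper's code), the v1-style core can be sketched as:

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def wiou_v1_loss(pred, target):
    """WIoU v1-style loss: (1 - IoU) scaled by an attention factor
    exp(d^2 / (Wg^2 + Hg^2)), where d is the center distance and
    Wg, Hg are the dimensions of the smallest enclosing box
    (gradient-detached in the original formulation)."""
    xc, yc = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    xg, yg = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    wg = max(pred[2], target[2]) - min(pred[0], target[0])
    hg = max(pred[3], target[3]) - min(pred[1], target[1])
    r_wiou = np.exp(((xc - xg) ** 2 + (yc - yg) ** 2) / (wg ** 2 + hg ** 2))
    return r_wiou * (1.0 - iou(pred, target))
```

A perfectly overlapping prediction yields zero loss, while an offset box is penalized more heavily than plain 1 − IoU would suggest, which is the intuition behind down-weighting well-localized (often high-quality) samples relative to poorly localized ones.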
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
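The Mel-spectrogram front end of such an audio pipeline can be sketched in plain NumPy: frame the waveform, take the power spectrum of each frame, project it through a bank of triangular mel-scale filters, and take the log. All parameter values below (sample rate, FFT size, hop, number of mel bands) are illustrative assumptions, not the paper's settings; production pipelines typically use a library such as librosa, and the CNN classifier is omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters with centers evenly spaced on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):            # rising slope
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):           # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Log-mel spectrogram: windowed frames -> power spectrum -> mel bands -> log."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2     # (frames, n_fft//2 + 1)
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T     # (frames, n_mels)
    return np.log(mel + 1e-10)

# a 1-second synthetic 440 Hz tone stands in for a field recording
tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
spec = mel_spectrogram(tone)
```

The resulting 2-D array (time frames × mel bands) is what would be fed to the CNN as an image-like input, which is what lets standard convolutional architectures capture fire signatures such as crackling and roaring.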
Funding: funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.