Journal Articles
185 articles found
Research on the Application of Meteorological Observation Fields in Automotive Proving Grounds
1
Author: Chen Haijian. 《时代汽车》, 2024, No. 14, pp. 172-174, 178 (4 pages)
Automotive proving grounds are key sites for vehicle road testing, used to verify the quality and reliability of automotive products. Besides the test-track surfaces themselves, meteorological conditions are an important part of automotive road testing, with explicit requirements in GB/T 12534-1990 General Rules for Automotive Road Test Methods: for example, tests should be conducted in rain-free, fog-free weather, with relative humidity below 95%, air temperature between 0 °C and 40 °C, and wind speed no greater than 3 m/s. Meteorological conditions also serve as an important basis for road management at the proving ground: real-time wind speed, rainfall, and visibility data give site managers the reference they need to issue speed limits, traffic restrictions, and site closures, directly affecting how promptly road-testing safety is controlled. This article therefore studies the application of meteorological observation fields in automotive proving grounds, covering observation-field construction, meteorological services, and road management in abnormal weather.
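The GB/T 12534-1990 weather thresholds quoted above amount to a simple go/no-go check. The following is an illustrative sketch only, not code from the cited article; the function name and the way readings are encoded are assumptions:

```python
def road_test_permitted(humidity_pct, temp_c, wind_mps, raining, foggy):
    """Check the GB/T 12534-1990 weather thresholds quoted in the abstract:
    no rain or fog, relative humidity < 95%, 0-40 degC, wind <= 3 m/s."""
    return (not raining and not foggy
            and humidity_pct < 95
            and 0 <= temp_c <= 40
            and wind_mps <= 3)

# Example readings from a hypothetical on-site weather station
print(road_test_permitted(60, 25, 2.0, False, False))  # True: all limits met
print(road_test_permitted(60, 25, 4.5, False, False))  # False: wind too strong
```

In practice such a gate would be driven by the real-time observation feed the abstract describes, with hysteresis before lifting a closure.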
Exploratory Research on Defense against Natural Adversarial Examples in Image Classification
2
Authors: Yaoxuan Zhu, Hua Yang, Bin Zhu. Computers, Materials & Continua, 2025, No. 2, pp. 1947-1968 (22 pages)
The emergence of adversarial examples has revealed the inadequacies in the robustness of image classification models based on Convolutional Neural Networks (CNNs). In recent years in particular, the discovery of natural adversarial examples has posed significant challenges, as traditional defense methods against adversarial attacks have proven largely ineffective against them. This paper explores defenses against natural adversarial examples from three perspectives: the adversarial examples themselves, model architecture, and the dataset. First, it employs Class Activation Mapping (CAM) to visualize how models classify natural adversarial examples, identifying several typical attack patterns. Next, various common CNN models are analyzed to evaluate their susceptibility to these attacks, revealing that different architectures exhibit varying defensive capabilities; the study finds that as network depth increases, defenses against natural adversarial examples strengthen. Finally, the impact of dataset class distribution on defense capability is examined from two aspects: the number of classes in the training set and the number of predicted classes. Results indicate that reducing the number of training classes enhances the model's defense against natural adversarial examples, and that, for a fixed number of training classes, some CNN models show an optimal range of predicted classes for the best defense performance against these adversarial examples.
Keywords: image classification, convolutional neural network, natural adversarial example, dataset, defense against adversarial examples
A Survey of Adversarial Examples in Computer Vision: Attack, Defense, and Beyond
3
Authors: XU Keyizhi, LU Yajuan, WANG Zhongyuan, LIANG Chao. Wuhan University Journal of Natural Sciences, 2025, No. 1, pp. 1-20 (20 pages)
Recent years have witnessed the ever-increasing performance of Deep Neural Networks (DNNs) in computer vision tasks. However, researchers have identified a potential vulnerability: carefully crafted adversarial examples can easily mislead DNNs into incorrect behavior via the injection of imperceptible modifications to the input data. In this survey, we focus on (1) adversarial attack algorithms to generate adversarial examples, (2) adversarial defense techniques to secure DNNs against adversarial examples, and (3) important problems in the realm of adversarial examples beyond attack and defense, including theoretical explanations, trade-off issues, and benign attacks. Additionally, we draw a brief comparison between recently published surveys on adversarial examples and identify future directions for research, such as the generalization of methods and the understanding of transferability, that might be solutions to the open problems in this field.
Keywords: computer vision, adversarial examples, adversarial attack, adversarial defense
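The attacks this survey covers build adversarial examples by adding an imperceptible, bounded perturbation to the input; the canonical one-step Fast Gradient Sign Method illustrates the idea. This sketch uses plain NumPy on a toy linear model rather than any specific framework, and all names and values are illustrative:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon):
    """FGSM: step the input in the sign direction of the loss gradient,
    bounded by epsilon in the L-infinity norm."""
    return x + epsilon * np.sign(grad_wrt_x)

# Toy linear model: logit = w . x, so the loss gradient w.r.t. x is just w
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
x_adv = fgsm_perturb(x, w, epsilon=0.1)

print(x_adv)                      # each feature moved by +/- epsilon
print(np.max(np.abs(x_adv - x)))  # the perturbation stays within epsilon
```

Against a deep network the gradient would come from backpropagation, but the perturbation step is the same.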
Defending Adversarial Examples by a Clipped Residual U-Net Model
4
Authors: Kazim Ali, Adnan N. Qureshi, Muhammad Shahid Bhatti, Abid Sohail, Mohammad Hijji. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 2237-2256 (20 pages)
Deep learning-based systems have succeeded in many computer vision tasks. However, recent studies indicate that these systems are endangered by adversarial attacks, which can quickly spoil deep learning models, e.g., the various convolutional neural networks (CNNs) used in computer vision tasks from image classification to object detection. Adversarial examples are carefully designed by injecting a slight perturbation into clean images. The proposed CRU-Net defense model is inspired by state-of-the-art defense mechanisms such as MagNet, Generative Adversarial Network Defense, Deep Regret Analytic Generative Adversarial Networks Defense, Deep Denoising Sparse Autoencoder Defense, and Conditional Generative Adversarial Network Defense, and we show experimentally that our approach outperforms these previous defensive techniques. CRU-Net maps adversarial image examples back to clean images by eliminating the adversarial perturbation; the defensive approach is based on residual and U-Net learning. Extensive experiments on the MNIST and CIFAR10 datasets show that the proposed CRU-Net defense model prevents adversarial example attacks in white-box and black-box settings and improves the robustness of deep learning algorithms, especially in the computer vision field. We also report the similarity (SSIM and PSNR) between the original and restored clean image examples produced by the CRU-Net defense model.
Keywords: adversarial examples, adversarial attacks, defense method, residual learning, U-Net, cGAN, CRU-Net model
Auto-expanded multi query examples technology in content-based image retrieval (cited: 1)
5
Authors: Wang Xiaoling, Xie Kanglin. Journal of Southeast University (English Edition) (EI, CAS), 2005, No. 3, pp. 287-292 (6 pages)
In order to narrow the semantic gap in content-based image retrieval (CBIR), a novel retrieval technology called auto-extended multi query examples (AMQE) is proposed. It expands the single query image used in traditional image retrieval into multiple query examples so as to include more image features related to semantics. By retrieving images for each of the multiple query examples and integrating the retrieval results, more relevant images can be obtained. The property of the recall-precision curve of a general retrieval algorithm and the K-means clustering method are used to realize the expansion, according to the distances between image features of the initially retrieved images. Experimental results demonstrate that the AMQE technology can greatly improve the recall and precision of the original algorithms.
Keywords: content-based image retrieval, semantics, multi query examples, K-means clustering
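Expanding one query into a cluster of related examples, as AMQE does, can be sketched with a minimal K-means pass over the features of the initially retrieved images. This is an illustrative reconstruction, not the authors' code; the feature vectors are synthetic and the deterministic initialization is an assumption for reproducibility:

```python
import numpy as np

def kmeans(features, init_idx, iters=10):
    """Minimal K-means over image feature vectors with fixed initial centers."""
    centers = features[list(init_idx)].astype(float)
    for _ in range(iters):
        # assign each feature vector to its nearest center
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([features[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Synthetic features of "initially retrieved" images: one group near the
# query, one far away (all values illustrative)
rng = np.random.default_rng(1)
query = np.array([0.0, 0.0])
near_group = query + 0.1 * rng.normal(size=(10, 2))
far_group = np.array([5.0, 5.0]) + 0.1 * rng.normal(size=(10, 2))
feats = np.vstack([near_group, far_group])

labels, centers = kmeans(feats, init_idx=(0, 19))
best = np.linalg.norm(centers - query, axis=1).argmin()
extra_queries = feats[labels == best]  # these become the expanded queries
print(len(extra_queries))  # the 10 images nearest the query are kept
```

Each member of `extra_queries` would then be submitted as an additional query and the result lists merged, as the abstract describes.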
SOME EXAMPLES OF TRANSITION SPANS MINERALIZING AND ANALYSIS OF THEIR DYNAMICS
6
Authors: BI Hua (1), ZHAO Zhi-zhong (1), ZHU Wei-huang (2), YANG Yuan-gen (2), HUANG Lan (3), LIU Qiang (3). (1) Department of Resources, Environment and Tourism, Hainan Normal College, Haikou 571158, China; (2) Institute of Geochemistry, Chinese Academy of Sciences, Guiyang 550002, China; (3) Biology Department, Hainan Normal College, Haikou 571158, China. Geotectonica et Metallogenia, 2002, No. 1, pp. 89-92 (4 pages)
The transitional span is a special environment for deposits. Taking peat, oil-gas, and metallic deposits as examples, this paper discusses the spatial-temporal transitional characteristics of mineralization in transitional regions, points out the importance of mineralization in transition spans, and finally analyses their dynamics.
Keywords: transition spans, ore formation, examples, dynamics
Torsion in Groups of Integral Triangles
7
Author: Will Murray. Advances in Pure Mathematics, 2013, No. 1, pp. 116-120 (5 pages)
Let 0 < γ < π be a fixed Pythagorean angle. We study the abelian group Hγ of primitive integral triangles (a, b, c) for which the angle opposite side c is γ. Addition in Hγ is defined by adding the angles β opposite side b and modding out by π - γ. The only Hγ for which the structure is known is Hπ/2, which is free abelian. We prove that for general γ, Hγ has an element of order two iff 2(1 - cos γ) is a rational square, and it has elements of order three iff the cubic (2cos γ)x³ - 3x² + 1 = 0 has a rational solution 0 < x < 1. This shows that the set of values of γ for which Hγ has two-torsion is dense in [0, π], and similarly for three-torsion. We also show that there is at most one copy of either Z₂ or Z₃ in Hγ. Finally, we give some examples of higher-order torsion elements in Hγ.
Keywords: abelian groups, cubic equations, examples, free abelian, geometric constructions, group theory, integral triangles, law of cosines, primitive Pythagorean angles, Pythagorean triangles, Pythagorean triples, rational squares, three-torsion, torsion, torsion-free, two-torsion, triangle geometry
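The two-torsion criterion quoted in the abstract, that 2(1 - cos γ) be a rational square, is easy to test for specific rational values of cos γ. This sketch is illustrative and independent of the paper's notation:

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q: Fraction) -> bool:
    """A nonnegative rational n/d in lowest terms is a square iff
    both numerator and denominator are perfect squares."""
    if q < 0:
        return False
    n, d = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

def has_two_torsion(cos_gamma: Fraction) -> bool:
    """Criterion from the abstract: H_gamma has an element of order two
    iff 2(1 - cos(gamma)) is a rational square."""
    return is_rational_square(2 * (1 - cos_gamma))

print(has_two_torsion(Fraction(0)))      # gamma = pi/2: 2 is not a square -> False
print(has_two_torsion(Fraction(7, 8)))   # 2(1 - 7/8) = 1/4 = (1/2)^2 -> True
```

The γ = π/2 case agrees with the abstract's statement that Hπ/2 is free abelian (hence torsion-free).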
An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection
8
Authors: Younghoon Ban, Myeonghyun Kim, Haehyun Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3535-3563 (29 pages)
Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers; commercial antivirus systems in particular have been identified as vulnerable to such AEs. This paper first conducts black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on ambiguities in the PE format, as previously employed in evasion-attack research. By applying the perturbation techniques directly to PE binaries, the attack method eliminates the need to grapple with the problem-space/feature-space dilemma, a persistent challenge in many evasion-attack studies. Being a black-box attack, it can generate AEs that evade both DL-based and ML-based classifiers, and the generated AEs retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves a 65.6% evasion rate against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs bypassed detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Experiments on this approach verified that the defense model can effectively counter AEs generated by the perturbation techniques, alleviating the limitation of the most promising defense method, adversarial training, which is effective only against AEs included in the training classifiers.
Keywords: malware classification, machine learning, adversarial examples, evasion attack, cybersecurity
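The Overlay Append perturbation mentioned above exploits the fact that bytes appended after the end of a PE image are ignored by the loader, so the file remains executable while its byte-level features shift. A minimal, format-agnostic sketch; the payload bytes and the stand-in "PE file" are illustrative, and no real binary is involved:

```python
def overlay_append(pe_bytes: bytes, padding: bytes) -> bytes:
    """Append 'overlay' bytes after the end of the binary. Data past the
    declared sections is ignored at load time, so behavior is unchanged,
    but the features a static classifier extracts are perturbed."""
    return pe_bytes + padding

original = b"MZ" + b"\x00" * 64          # stand-in for a real PE file
perturbed = overlay_append(original, b"\xcc" * 128)

print(len(perturbed) - len(original))    # 128 bytes of overlay were added
print(perturbed.startswith(original))    # original content is untouched
```

The other perturbations in the paper (Section Append, Break Checksum, etc.) likewise edit regions the loader tolerates, which is why functionality verification is unnecessary.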
Examples for Clinical Use of Ma Zi Ren Wan
9
Authors: Zhang Shuwen, Duan Shumin. Journal of Traditional Chinese Medicine (SCIE, CAS, CSCD), 2002, No. 3, pp. 216-217 (2 pages)
Ma Zi Ren Wan (麻子仁丸), originally recorded in Treatise on Febrile Diseases (伤寒论), is composed of Ma Zi Ren (麻子仁, Fructus Cannabis), Bai Shao (白芍, Radix Paeoniae Alba), Zhi Shi (枳实, Fructus Aurantii Immaturus), Da Huang (大黄, Radix et Rhizoma Rhei), Hou Po (厚朴, Cortex Magnoliae Officinalis) and Xing Ren (杏仁, Semen Armeniacae Amarum). Good therapeutic results have been achieved by using Ma Zi Ren Wan in the treatment of febrile disease at the restoring stage, chronic consumptive diseases, hemorrhoids, disorders in women after delivery, chronic kidney disease, senile constipation, pulmonary heart disease, diabetes, coronary heart disease and hypertension. Some illustrative cases are introduced below.
Keywords: examples for clinical use of Ma Zi Ren Wan
An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments
10
Authors: Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj, Wael Said. Computers, Materials & Continua (SCIE, EI), 2023, No. 9, pp. 3859-3876 (18 pages)
Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations: the remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications, and image denoising also compromises the classification accuracy of original examples. To address these challenges, this paper proposes a novel AE detection technique that combines multiple traditional image-denoising algorithms with Convolutional Neural Network (CNN) structures. The detector integrates the classification results of different models as its input and computes its final output with a machine-learning voting algorithm. By analyzing the discrepancy between the model's predictions on original examples and on denoised examples, AEs are detected effectively. The technique reduces computational overhead without modifying the model structure or parameters, avoiding the error amplification caused by denoising. The approach demonstrates excellent detection performance against mainstream AE attacks, including Fast Gradient Sign Method (FGSM), Basic Iteration Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.
Keywords: deep neural networks, adversarial example, image denoising, adversarial example detection, machine learning, adversarial attack
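The core detection idea above, flagging an input when its prediction changes after denoising, can be sketched without any specific CNN. The classifier and denoisers below are deliberately trivial stand-ins, not the paper's models:

```python
import numpy as np

def detect_ae(x, classify, denoisers):
    """Flag x as adversarial if any denoised variant changes the
    predicted class (the paper votes over several such signals)."""
    base = classify(x)
    return any(classify(d(x)) != base for d in denoisers)

# Stand-in classifier: sign of the mean pixel value
classify = lambda x: int(np.mean(x) > 0)
# Stand-in denoisers: range clipping and simple attenuation
denoisers = [lambda x: np.clip(x, -1, 1), lambda x: x * 0.5]

clean = np.full(16, 0.4)                 # robust: denoising keeps its class
adversarial = clean.copy()
adversarial[:4] = -2.6                   # sparse, large-magnitude perturbation

print(detect_ae(clean, classify, denoisers))        # False: prediction stable
print(detect_ae(adversarial, classify, denoisers))  # True: clipping flips it
```

The perturbed input fools the raw classifier, but clipping removes enough of the perturbation to change the prediction, and that discrepancy is the detection signal.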
A new method of constructing adversarial examples for quantum variational circuits
11
Authors: Yan Jinge, Yan Lili, Zhang Shibin. Chinese Physics B (SCIE, EI, CAS, CSCD), 2023, No. 7, pp. 268-272 (5 pages)
A quantum variational circuit is a quantum machine learning model similar to a neural network. A crafted adversarial example can lead to incorrect results for the model, and using adversarial examples to train the model greatly improves its robustness. Existing methods obtain a gradient via automatic differentiation or finite differences and use it to construct adversarial examples. This paper proposes an innovative method for constructing adversarial examples of quantum variational circuits, in which the gradient is obtained by measuring the expected value of a qubit across a series of quantum circuits. The method can be used to construct adversarial examples for a quantum variational circuit classifier, and implementation results prove its effectiveness. Compared with the existing methods, our method requires fewer resources and is more efficient.
Keywords: quantum variational circuit, adversarial examples, quantum machine learning, quantum circuit
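Obtaining a gradient from expectation values measured on shifted circuits is the idea behind the parameter-shift rule; whether the paper uses exactly this rule is our assumption, and the simulation below is a classical stand-in for real measurements. For a single RX rotation on |0⟩ measured in Z, ⟨Z⟩(θ) = cos θ, and the shifted-expectation formula recovers the exact derivative:

```python
import numpy as np

def expval_z(theta):
    """<Z> after RX(theta) applied to |0>; analytically equals cos(theta)."""
    # State after RX(theta)|0> is [cos(theta/2), -i sin(theta/2)]
    amp0, amp1 = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return abs(amp0) ** 2 - abs(amp1) ** 2

def parameter_shift_grad(f, theta):
    """Parameter-shift rule: d<Z>/dtheta = (f(t + pi/2) - f(t - pi/2)) / 2.
    Each term is an expectation value a quantum device can measure."""
    return (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

theta = 0.7
print(parameter_shift_grad(expval_z, theta))  # close to -sin(0.7)
```

The gradient with respect to the *input* encoding angles, obtained the same way, is what an adversarial-example construction would ascend.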
Omni-Detection of Adversarial Examples with Diverse Magnitudes
12
Authors: Ke Jianpeng, Wang Wenqi, Yang Kang, Wang Lina, Ye Aoshuang, Wang Run. China Communications (SCIE, CSCD), 2024, No. 12, pp. 139-151 (13 pages)
Deep neural networks (DNNs) are potentially susceptible to adversarial examples that are maliciously manipulated by adding imperceptible perturbations to legitimate inputs, leading to abnormal behavior of models. Plenty of methods have been proposed to defend against adversarial examples, but the majority suffer from two weaknesses: 1) lack of generalization and practicality; 2) failure to deal with unknown attacks. To address these issues, we design the adversarial nature eraser (ANE) and feature map detector (FMD) to detect fragile and high-intensity adversarial examples, respectively. We then apply ensemble learning to compose our detector, dealing with adversarial examples of diverse magnitudes in a divide-and-conquer manner. Experimental results show that our approach achieves 99.30% and 99.62% Area under Curve (AUC) scores on average when tested with various Lp norm-based attacks on CIFAR-10 and ImageNet, respectively. Furthermore, our approach also shows potential in detecting unknown attacks.
Keywords: adversarial example detection, ensemble learning, feature maps, fragile and high-intensity adversarial examples
The algorithm AE_11 of learning from examples
13
Authors: ZHANG Hai-yi, BI Jian-dong. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2006, No. 2, pp. 226-232 (7 pages)
We first put forward the idea of a positive extension matrix (PEM) in this paper. Then an algorithm, AE_11, was built with the aid of the PEM. Finally, we made comparisons with our experimental results, and the final result was fairly satisfying.
Keywords: learning from examples, concept acquisition, inductive learning, knowledge acquisition
Treatment methods for petroleum-contaminated soil and example applications (cited: 5)
14
Authors: Yuan Fuli, Wang Baodong. 《石油化工安全环保技术》 (CAS), 2010, No. 6, pp. 58-61 (4 pages)
This article introduces treatment methods for petroleum-contaminated soil, including in-situ remediation (physical methods, chemical methods, bioremediation, and phytoremediation) and ex-situ remediation (the prepared-bed method, composting, and the bioreactor method). Combined with a practical case, three remediation schemes for petroleum-contaminated soil are proposed, providing a basis for similar treatments in the future.
Keywords: application, example, methods
A Survey on Adversarial Examples in Deep Learning (cited: 3)
15
Authors: Kai Chen, Haoqi Zhu, Leiming Yan, Jinwei Wang. Journal on Big Data, 2020, No. 2, pp. 71-84 (14 pages)
Adversarial examples are a hot topic in the field of deep learning security; their features, generation methods, and attack and defense methods are the focus of current research. This article explains the key technologies and theories of adversarial examples, from the concept and occurrence of adversarial examples to attacking methods, and lists possible reasons for their existence. It analyzes several typical generation methods in detail: Limited-memory BFGS (L-BFGS), Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), Iterative Least-likely Class Method (LLC), etc. From the perspective of the attack methods and causes of adversarial examples, the main defense techniques are then listed: preprocessing, regularization and adversarial training, distillation, etc., with the application scenarios and deficiencies of each pointed out. The article further discusses applications of adversarial examples, currently mainly adversarial evaluation and adversarial training. Finally, the overall research direction is prospected: many practical and theoretical problems remain to be solved, and finding the characteristics of adversarial examples, giving a mathematical description of their practical application prospects, and exploring universal generation methods and the generation mechanism of adversarial examples are the main future research directions.
Keywords: adversarial examples, generation methods, defense methods
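Of the generation methods listed, the Basic Iterative Method simply repeats small gradient-sign steps and clips the accumulated perturbation back into an epsilon-ball. A NumPy sketch on a toy linear model; all names and values are illustrative, not taken from the survey:

```python
import numpy as np

def bim_attack(x, grad_fn, epsilon, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps, with the
    accumulated perturbation clipped to the epsilon-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy linear model: the loss gradient w.r.t. x is a constant weight vector
w = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)
x_adv = bim_attack(x, lambda z: w, epsilon=0.3, alpha=0.1, steps=5)
print(x_adv)  # each coordinate saturates at +/- epsilon in the sign(w) direction
```

With `alpha * steps > epsilon`, the clip is what keeps the perturbation imperceptible; FGSM is the `steps=1`, `alpha=epsilon` special case.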
Environmental Implication of Subaqueous Lava Flows from a Continental Large Igneous Province: Examples from the Moroccan Central Atlantic Magmatic Province (CAMP)
16
Authors: S. El Ghilani, N. Youbi, J. Madeira, E. H. Chellai, Alberto López-Galindo, L. Martins, J. Mata. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2016, No. S1, p. 117 (1 page)
The Early Jurassic volcanic sequence of the Central Atlantic Magmatic Province (CAMP) of Morocco is classically subdivided into four stratigraphic units: the Lower, Middle, Upper and Recurrent Formations separated
Keywords: examples from the Moroccan Central Atlantic Magmatic Province (CAMP), environmental implication of subaqueous lava flows from a continental large igneous province
Adversarial Examples Protect Your Privacy on Speech Enhancement System
17
Authors: Mingyu Dong, Diqun Yan, Rangding Wang. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 1-12 (12 pages)
Speech is easily leaked imperceptibly: when people use their phones, the personal voice assistant is constantly listening and waiting to be activated, and private content in speech may be maliciously extracted through automatic speech recognition (ASR) by some phone applications. To guarantee that recognized speech content is accurate, speech enhancement technology is used to denoise the input speech. Speech enhancement has developed rapidly along with deep neural networks (DNNs), but adversarial examples can cause DNNs to fail, and this vulnerability can be used to protect the privacy in speech. In this work, we propose an adversarial method to degrade speech enhancement systems and thereby prevent the malicious extraction of private information from speech. Experimental results show that, after enhancement, the generated adversarial examples have most of the target speech content removed, or replaced with the content of a target speech. The word error rate (WER) between the recognition results of the enhanced original example and the enhanced adversarial example can reach 89.0%, while the WER of the targeted attack, between the enhanced adversarial example and the target example, is as low as 33.75%. The adversarial perturbation brings much more change than the perturbation itself: the ratio of the difference between the two enhanced examples to the adversarial perturbation can exceed 1.4430. The transferability between different speech enhancement models is also investigated; the method's low transferability can be used to ensure that the content in the adversarial example is not damaged, so the useful information can still be extracted by a friendly ASR. This work can prevent the malicious extraction of speech.
Keywords: adversarial example, speech enhancement, privacy protection, deep neural network
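The WER figures above follow the standard Levenshtein definition: word-level substitutions, insertions, and deletions divided by the reference length. A minimal sketch, not the authors' evaluation code; the sample sentences are invented:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance between word sequences,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("open the garage door", "open the door"))       # one deletion -> 0.25
print(wer("open the garage door", "shut the garage door"))  # one substitution -> 0.25
```

An 89.0% WER between the two enhanced transcripts, as reported above, means the enhanced adversarial example yields almost entirely different words than the enhanced original.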
Increasing the Safety of People Activity in Aggressive Potential Locations, Analyzed through the Probability Theory, Modeling/Simulation and Application in Underground Coal Mining
18
Authors: Emil Pop, Gabriel-Ioan Ilcea, Ionut-Alin Popa, Lorand Bogdanffy. 《Engineering(科研)》, 2019, No. 2, pp. 93-106 (14 pages)
This paper deals with increasing the safety of working in potentially aggressive locations equipped with a SCADA system and WSN sensors, using a "probabilistic strategy" in comparison with a "deterministic" one, via modeling/simulation and an application in underground coal mining. In general, three conditions can be considered: 1) an unfriendly environment that facilitates the risk of accidents; 2) aggressive equipment that can contribute to accidents; and 3) work-security breaches that can cause accidents. These conditions define the triangle of accidents and are customized here for underground coal mining, where methane gas is released as the coal seam is exploited; the first two conditions create a potentially explosive atmosphere. To allow people to work in a safe location, two things are needed: first, continuous monitoring of the potentially explosive atmosphere through the SCADA system, and second, the use of explosion-proof equipment. This method, named the "deterministic strategy", increases the safety of working, but explosions have not been completely eliminated. To increase safety further, the paper presents a new method based on hazard laws, named the "probabilistic strategy". This strategy was validated through modeling/simulation on the CupCarbon software platform and an application of WSN networks implemented on Arduino equipment. The paper ends with conclusions applicable to both strategies.
Keywords: accident, potential safety zone, triangle of accidents, hazard laws, deterministic strategy, probabilistic strategy, CupCarbon modeling and simulation, WSN applications, Arduino implementation, example