The dynamic, heterogeneous nature of Edge computing in the Internet of Things (Edge-IoT) and Industrial IoT (IIoT) networks brings unique and evolving cybersecurity challenges. This study maps cyber threats in Edge-IoT/IIoT environments to the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework by MITRE and introduces a lightweight, data-driven scoring model that enables rapid identification and prioritization of attacks. Inspired by the Factor Analysis of Information Risk model, our proposed scoring model integrates four key metrics: Common Vulnerability Scoring System (CVSS)-based severity scoring, Cyber Kill Chain-based difficulty estimation, Deep Neural Network-driven detection scoring, and frequency analysis based on dataset prevalence. By aggregating these indicators, the model generates comprehensive risk profiles, facilitating actionable prioritization of threats. The robustness and stability of the scoring model are validated through non-parametric correlation analysis using Spearman's and Kendall's rank correlation coefficients, demonstrating consistent performance across diverse scenarios. The approach culminates in a prioritized attack ranking that provides actionable guidance for risk mitigation and resource allocation in Edge-IoT/IIoT security operations. By leveraging real-world data to align MITRE ATT&CK techniques with CVSS metrics, the framework offers a standardized and practically applicable solution for consistent threat assessment in operational settings. The proposed lightweight scoring model delivers rapid and reliable results under dynamic cyber conditions, facilitating timely identification of attack scenarios and prioritization of response strategies. Our systematic integration of established taxonomies with data-driven indicators strengthens practical risk management and supports strategic planning in next-generation IoT deployments. Ultimately, this work advances adaptive threat modeling for Edge/IIoT ecosystems and establishes a robust foundation for evidence-based prioritization in emerging cyber-physical infrastructures.
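The four-metric aggregation described above can be sketched in a few lines. The weights, attack names, and indicator values below are hypothetical placeholders for illustration, not the paper's calibrated model:

```python
def composite_score(severity, difficulty, detection, frequency,
                    weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted aggregation of four indicators, each normalized to [0, 1]."""
    w1, w2, w3, w4 = weights
    return w1 * severity + w2 * difficulty + w3 * detection + w4 * frequency

# Hypothetical (severity, difficulty, detection, frequency) tuples per attack.
attacks = {
    "DDoS":       (0.90, 0.3, 0.8, 0.7),
    "MitM":       (0.70, 0.6, 0.5, 0.4),
    "Ransomware": (0.95, 0.7, 0.6, 0.3),
}

# Rank attacks by composite risk, highest first.
ranked = sorted(attacks, key=lambda a: composite_score(*attacks[a]), reverse=True)
print(ranked)  # → ['DDoS', 'Ransomware', 'MitM']
```

A stability check in the spirit of the paper would then compare such rankings across perturbed weights using Spearman's or Kendall's rank correlation.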
Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
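The mAP50 metric cited above counts a detection as correct when its Intersection over Union with a ground-truth box reaches 0.5. A minimal IoU computation, independent of any YOLO implementation, looks like this:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping by half their width: IoU = 50 / 150 = 1/3,
# below the 0.5 threshold used by mAP50.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```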
The influences of 2.5wt% Mn addition on the microstructure and mechanical properties of the Cu-11.9wt%Al-3.8wt%Ni shape memory alloy (SMA) were studied by means of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and differential scanning calorimetry (DSC). The experimental results show that the Mn addition considerably influences the austenite-martensite transformation temperatures and the kind of martensite in the Cu-Al-Ni alloy. The martensitic transformation changes from a mixed β1→β'1+γ'1 transformation to a single β1→β'1 martensite transformation, together with a decrease in transformation temperatures. In addition, the observations reveal that the grain size of the Cu-Al-Ni alloy can be controlled with the addition of 2.5wt% Mn, and thus its mechanical properties can be enhanced. The Cu-Al-Ni-Mn alloy exhibits better mechanical properties, with a high ultimate compression strength of 952 MPa and a ductility of 15%. These improvements are attributed to the decrease in grain size. However, the hardness decreases from Hv 230 to Hv 140 with the Mn addition.
The influence of aging on the microstructure and mechanical properties of the Cu-11.6wt%Al-3.9wt%Ni-2.5wt%Mn shape memory alloy (SMA) was studied by means of scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffractometry, and differential scanning calorimetry (DSC). Experimental results show that bainite, γ2, and α phase precipitates form in the alloy upon aging. After aging at 300°C, the bainitic precipitates appear at the early stages of aging, while the precipitates of the γ2 phase are observed for longer aging times. When the aging temperature increases, the bainite gradually evolves into the γ2 phase, and the equilibrium α phase (bcc) precipitates from the remaining parent phase. Thus, the bainite, γ2, and α phases appear, while the martensite phase disappears progressively in the alloy. The bainitic precipitates decrease the reverse transformation temperatures, while the γ2 phase precipitates increase these temperatures through a decrease of the solute content in the retained parent phase. On the other hand, these precipitates cause an increase in the hardness of the alloy.
People started posting textual tweets on Twitter as soon as the novel coronavirus (COVID-19) emerged. Analyzing these tweets can assist institutions in better decision-making and in prioritizing their tasks. Therefore, this study aimed to analyze 43 million tweets collected between March 22 and March 30, 2020, and to describe the trend of public attention given to topics related to the COVID-19 epidemic using evolutionary clustering analysis. The results indicated that unigram terms trended more frequently than bigram and trigram terms. A large number of tweets about COVID-19 were disseminated and received widespread public attention during the epidemic. High-frequency words such as "death", "test", "spread", and "lockdown" suggest that people feared being infected and that those who were infected feared death. The results also showed that people agreed to stay at home due to fear of the spread, and that they called for social distancing once they became aware of COVID-19. It can be suggested that social media posts may affect human psychology and behavior. These results may help governments and health organizations better understand the psychology of the public and, thereby, better communicate with the public to prevent and manage panic.
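The unigram/bigram/trigram trending comparison above rests on plain n-gram counting. A minimal sketch, using made-up tweets rather than the study's 43-million-tweet corpus:

```python
from collections import Counter

def ngram_counts(texts, n):
    """Count space-joined n-grams across a list of texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        counts.update(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

# Illustrative tweets only.
tweets = ["stay home stay safe", "lockdown now", "stay safe everyone"]
print(ngram_counts(tweets, 1).most_common(2))  # top unigrams
print(ngram_counts(tweets, 2).most_common(1))  # top bigram
```

Tracking how these counts shift across daily windows is what an evolutionary clustering analysis then operates on.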
An improved landmark-selection method with better selection capability using a single camera is presented and compared with a previous method. To improve performance, two methods were applied to landmark selection in an unfamiliar indoor environment. First, a modified visual attention method was proposed to automatically select a candidate region as a more useful landmark. In visual attention, candidate landmark regions were selected based on differences in ambient color and intensity in the image. Then, the more useful landmarks were selected by combining the candidate regions using clustering. As generally implemented, automatic landmark selection by vision-based simultaneous localization and mapping (SLAM) results in many useless landmarks, because the image features are distinguished from the surrounding environment but detected repeatedly. These useless landmarks create a serious problem for the SLAM system because they complicate data association. To address this, a method was proposed in which the robot initially collects landmarks through automatic detection while traversing the entire area where it performs SLAM, and then selects only those landmarks that exhibit high rarity through clustering, which enhances system performance. Experimental results show that this method of automatic landmark selection yields high-rarity landmarks. The average SLAM error decreases by 52% compared with conventional methods, and the accuracy of data association increases.
With the advent of the big data era, security issues in the context of artificial intelligence (AI) and data analysis are attracting research attention. In the metaverse, which will become a virtual asset in the future, users' communication, movement with characters, text elements, etc., are required to integrate the real and the virtual. However, users can be exposed to threats, particularly various hacker threats. For example, users' assets are exposed through the notices and mail alerts regularly sent to users by operators. In the future, hacker threats will increase, mainly due to naturally anonymous texts. Therefore, it is necessary to use the natural language processing technologies of artificial intelligence, especially term frequency-inverse document frequency, word2vec, gated recurrent units, recurrent neural networks, and long short-term memory. Additionally, several application versions are used. Currently, research on tasks and performance for algorithm application is underway. We propose a grouping algorithm that focuses on securing various bridgehead strategies to secure topics for security and safety within the metaverse. The algorithm comprises three modules: extracting topics from attacks, managing dimensions, and performing grouping. Consequently, we create 24 topic-based models. Assuming normal and spam mail attacks to verify our algorithm, the accuracy over the previous application version was increased by approximately 0.4% to 1.5%.
The kinetic, morphological, crystallographic, and magnetic characteristics of thermally induced martensites in the Fe-13.4wt%Mn-5.2wt%Mo alloy were investigated by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and Mössbauer spectroscopy. The experimental results reveal that two types of thermally induced martensite, ε (hcp) and α' (bcc), are formed in the as-quenched condition, and these transformations have athermal characters. Mo addition to the Fe-Mn alloy does not change the coexistence of ε and α' martensites for Mn contents between 10wt% and 15wt%. Besides, the Mössbauer spectra reveal a paramagnetic character with a singlet for the γ (fcc) austenite and ε martensite phases, and a ferromagnetic character with a broad sextet for the α' martensite phase. The volume fraction of α' martensite forming in the quenched alloy is much higher than that of the ε martensite.
The processing of sound signals has improved significantly in recent years. Techniques for sound signal processing focusing on music, beyond the speech area, are attracting attention due to the development of deep learning techniques. This study analyzes and processes music signals to generate two-dimensional tabular data and new music. For the analysis and processing part, we represented normalized waveforms for each input via frequency-domain signals. Then we examined shorter segments to observe the different wave patterns of different singers. A Fourier transform was applied to obtain the spectrogram of the music signals. A filterbank was applied to represent the spectrogram based on human hearing rather than on distance along the frequency dimension, and the final spectrogram was plotted on the Mel scale. For the generation part, we created two-dimensional tabular data for data manipulation. With the 2D data, any kind of analysis can be done, since it contains digit values for the music signals. Then, we generated new music by applying an LSTM to the song the audience preferred more. As a result, it was shown that the created music exhibits waveforms similar to the original music. This study is a step forward for music signal processing. If this study is expanded further, it can find the patterns that listeners like, so that music can be generated in a favorite singer's voice in the way that the listener prefers.
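The Mel scale mentioned above maps physical frequency to a perceptual pitch axis; the widely used formula is mel = 2595·log10(1 + f/700). A direct translation (independent of any particular audio library):

```python
import math

def hz_to_mel(f_hz):
    """Convert frequency in Hz to Mels (standard 2595*log10 formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(mel):
    """Inverse mapping: Mels back to Hz."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

# By construction, 1000 Hz lands close to 1000 Mels.
print(round(hz_to_mel(1000.0), 2))
```

Mel-filterbank spectrograms are built by spacing triangular filters uniformly on this Mel axis rather than uniformly in Hz.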
Decision trees are mainly used to classify data and predict data classes. A spatial decision tree has previously been designed using the Euclidean distance between objects to reflect spatial data characteristics. Even though this method captures the distance between objects in the spatial dimension, it fails to represent the distributions of spatial data and their relationships. But the distributions of spatial data and the relationships with their neighborhoods are very important in the real world. This paper proposes a decision tree based on spatial entropy that represents the distributions of spatial data through dispersion and dissimilarity. The ratio of dispersion to dissimilarity presents how the distribution of spatial data relates to non-spatial attributes. The experiment evaluates the accuracy and building time of the decision tree as compared to previous methods, and it shows that the proposed method provides efficient and scalable classification for spatial decision support.
Traffic shaping is one of the important control operations for guaranteeing Quality of Service (QoS) in optical burst switching (OBS) networks. The efficiency of traffic shaping is mainly determined by the token generation method. In this paper, token generation methods for traffic shaping are evaluated using three kinds of probability distribution, and are analyzed by simulation in terms of burst blocking probability, throughput, and correlation. The simulation results show that the token generation methods decrease the burst correlation of Label Switched Paths (LSPs) and relieve traffic congestion as well. The different burst arrival processes have a small impact on the blocking probability for OBS networks.
Key words: optical burst switching, traffic shaping, token generation, quality of service. CLC number: TP 929.11. Foundation item: Supported by the National Natural Science Foundation of China (60132030) and the Korea Science and Engineering Foundation (KOSEF) through the OIRC Project. Biography: Tang Wan (1974-), female, Ph.D. candidate; research direction: contention resolution and routing mechanisms in OBS networks.
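A token bucket is the classic realization of token-based traffic shaping: tokens accumulate at a fixed rate up to a capacity, and a burst is admitted only if enough tokens are available. The rates and burst sizes below are illustrative, and a real OBS shaper would operate on optical bursts per LSP rather than generic requests:

```python
class TokenBucket:
    """Minimal token-bucket traffic shaper (illustrative sketch)."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens generated per time unit
        self.capacity = capacity  # maximum tokens the bucket can hold
        self.tokens = capacity    # start full
        self.last = 0.0           # timestamp of the last request

    def allow(self, now, burst_size):
        # Replenish tokens for the elapsed interval, then try to spend them.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if burst_size <= self.tokens:
            self.tokens -= burst_size
            return True
        return False

bucket = TokenBucket(rate=10, capacity=20)
print(bucket.allow(0.0, 15))  # True: bucket starts full
print(bucket.allow(0.1, 15))  # False: only 6 tokens available
print(bucket.allow(2.0, 15))  # True: bucket refilled to capacity
```

Changing how tokens are generated (e.g., drawing inter-token intervals from different probability distributions, as the paper evaluates) alters the burstiness and correlation of the shaped traffic.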
Spam mail classification is considered a complex and error-prone task in the distributed computing environment. Various spam mail classification approaches are available, such as the naive Bayesian classifier, logistic regression, support vector machines, decision trees, recursive neural networks, and long short-term memory algorithms. However, they do not consider the document as a whole when analyzing spam mail content. These approaches use the bag-of-words method, which analyzes a large amount of text data and classifies features with the help of term frequency-inverse document frequency. Because there are many words in a document, these approaches consume a massive amount of resources and become infeasible when performing classification on multiple associated mail documents together. Thus, spam mail is not fully classified, and these approaches leave loopholes. We therefore propose a term frequency topic inverse document frequency model that considers the meaning of text data in a larger semantic unit by applying weights based on the document's topic. Moreover, the proposed approach reduces the scarcity problem through a frequency topic-inverse document frequency in singular value decomposition model. Our proposed approach also reduces the dimensionality, which ultimately increases the strength of document classification. Experimental evaluations show that the proposed approach classifies spam mail documents with higher accuracy using individual document-independent processing computation. Comparative evaluations show that the proposed approach performs better than the logistic regression model in the distributed computing environment, achieving 97.05%, 99.17%, and 96.59% for the higher document word frequencies.
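The proposed model weights terms by document topic, but it builds on the standard TF-IDF baseline, which can be computed directly. The toy documents below are illustrative only:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain TF-IDF over tokenized documents (a list of token lists).
    tf = term count / doc length; idf = ln(N / document frequency)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # each doc contributes once per distinct term
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return scores

docs = [["free", "offer", "now"], ["meeting", "now"], ["free", "free", "offer"]]
s = tf_idf(docs)
# "meeting" appears in only one document, so it scores higher than the
# more common "now" within document 1.
```

The paper's variant would replace the per-term IDF weight with a topic-aware weight before applying singular value decomposition for dimensionality reduction.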
r-learning, which is based on e-learning and u-learning, is defined as a learning support system in which intelligent robots provide verbal and nonverbal interactions in a ubiquitous computing environment. In order to guarantee the advantages of r-learning contents, which have no limits of time and place and offer the nonverbal interaction absent from e-learning contents, assessment criteria for r-learning contents have recently become urgently required. Therefore, reliable and valid assessment criteria were developed for nonverbal interaction contents in r-learning; the detailed research content is as follows. First, the assessment criteria for nonverbal interaction in r-learning contents are specified as gesture, facial expression, semi-verbal message, distance, physical contact, and time. Second, the validity of the developed assessment criteria is proved statistically. Consequently, the assessment criteria for nonverbal interaction contents will be helpful when choosing and producing better r-learning content, and the reliability of school education is ultimately improved.
Recent advances in 360 video streaming technologies have enhanced the immersive experience of video streaming services. In particular, there is immense potential for applying 360 video encoding formats to achieve highly immersive virtual reality (VR) systems. However, 360 video streaming requires considerable bandwidth, and its performance depends on several factors. Consequently, the optimization of 360 video bitstreams according to the viewport texture is crucial. Therefore, we propose an adaptive solution for VR systems using viewport-dependent tiled 360 video streaming. To increase the degrees of freedom of users, the Moving Picture Experts Group (MPEG) recently defined three degrees of freedom plus (3DoF+) and six degrees of freedom (6DoF) to support free user movement within camera-captured scenes. The proposed method supports 6DoF to allow users to move their heads freely. Herein, we propose viewport-dependent tiled 360 video streaming based on users' head movements. The proposed system generates an adaptive bitstream using tile sets that are selected according to a parameter set of the user's viewport area. This extracted bitstream is then transmitted to the user's computer. After decoding, the user's viewport is generated and rendered on a VR head-mounted display (HMD). Furthermore, we introduce certain approaches to reduce the motion-to-photon latency. The experimental results demonstrate that, in contrast with non-tiled streaming, the proposed method achieves high-performance 360 video streaming for VR systems, with a 25.89% BD-rate saving for Y-PSNR and a 61.16% saving in decoding time.
The analysis of large time-series datasets has profoundly enhanced our ability to make accurate predictions in many fields. However, unpredictable phenomena, such as extreme weather events or the novel coronavirus 2019 (COVID-19) outbreak, can greatly limit the ability of time-series analyses to establish reliable patterns. The present work addresses this issue by applying uncertainty analysis using a probability distribution function, and applies the proposed scheme within a preliminary study involving the prediction of power consumption for a single hotel in Seoul, South Korea, based on an analysis of 53,567 data items collected by the Korea Electric Power Corporation using robotic process automation. We first apply Facebook Prophet for the time-series analysis. The results demonstrate that the COVID-19 outbreak seriously compromised the reliability of the time-series analysis. Then, machine learning models are developed in the TensorFlow framework to conduct uncertainty analysis based on modeled relationships between electric power consumption and outdoor temperature. The benefits of the proposed uncertainty analysis for predicting the electricity consumption of the hotel building are demonstrated by comparing the results obtained when considering no uncertainty, aleatory uncertainty, epistemic uncertainty, and mixed aleatory and epistemic uncertainty. The minimum and maximum ranges of predicted electricity consumption are obtained when using mixed uncertainty. Accordingly, the application of uncertainty analysis using a probability distribution function greatly improved the predictive power of the analysis compared to time-series analysis alone.
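One common way to combine the two uncertainty types discussed above is to treat the spread across an ensemble of model predictions as epistemic variance, treat irreducible data noise as aleatory variance, and add the two. The sketch below assumes independent Gaussian sources and uses hypothetical consumption values; it is not the paper's TensorFlow model:

```python
import math
import statistics as stats

def mixed_uncertainty_band(model_predictions, noise_std, z=1.96):
    """Return a ~95% interval combining epistemic uncertainty (ensemble
    spread) with aleatory uncertainty (noise_std), assuming independence."""
    mean = stats.fmean(model_predictions)
    epistemic_var = stats.pvariance(model_predictions)
    total_std = math.sqrt(epistemic_var + noise_std ** 2)
    return mean - z * total_std, mean + z * total_std

# Hypothetical ensemble of hourly power-consumption predictions (kWh)
# plus an assumed sensor-noise standard deviation.
low, high = mixed_uncertainty_band([120.0, 125.0, 118.0], noise_std=4.0)
```

Because the variances add, the mixed band is always at least as wide as either source alone, which matches the paper's observation that mixed uncertainty yields the minimum and maximum predicted ranges.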
Beam splitting upon refraction in a triangular sonic crystal composed of aluminum cylinders in air is experimentally and numerically demonstrated to occur due to finite source size, which facilitates circumvention of a directional band gap. Experiments reveal that two distinct beams emerge at the crystal output, in agreement with the numerical results obtained through the finite-element method. Beam splitting occurs at sufficiently small source sizes comparable to the lattice periodicity determined by the spatial gap width in reciprocal space. The split beams propagate with equal amplitude, whereas beam splitting is destroyed for oblique incidence above a critical incidence angle.
Worldwide, efforts have been focused on addressing the COVID-19 pandemic; for example, governments have implemented countermeasures such as quarantining, pushing vaccine shots to minimize local spread, investigating and analyzing the virus's characteristics, and conducting epidemiological investigations through patient management and tracers. Therefore, researchers worldwide require funding to achieve these goals. Furthermore, there is a need for documentation to investigate and trace disease characteristics. However, it is time-consuming and resource-intensive to work with documents comprising many types of unstructured data. Therefore, in this study, natural language processing technology is used to automatically classify these documents. Currently used statistical methods include data cleansing, query modification, sentiment analysis, and clustering. However, owing to limitations with respect to the data, it is necessary to understand how to perform data analysis suitable for medical documents. To solve this problem, this study proposes a robust in-depth mixed subject and emotion model comprising three modules. The first is a subject and non-linear emotional module, which extracts topics from the data and supplements them with emotional figures. The second is a subject with singular value decomposition in the emotion model, a dimensional decomposition module that uses subject analysis and an emotion model. The third involves embedding with singular value decomposition using an emotion module, a dimensional decomposition method that uses emotion learning. The accuracy and other model measurements, such as the F1 score, area under the curve, and recall, are evaluated based on an article on Middle East respiratory syndrome. A high F1 score of approximately 91% is achieved. The proposed joint analysis method is expected to provide a better synergistic effect on the dataset.
In decision support systems for spatial site selection, fuzzy synthetic evaluation is a useful approach. However, the method cannot account for the randomness in factors. To remedy this problem, this paper proposes a cloud-based fuzzy approach that combines the advantages of the cloud transform and fuzzy synthetic evaluation. The cloud transform considers the randomness in the factors and produces a higher concept layer for data mining. At the same time, a check mechanism controls the quality of the partitions in the factors. The fuzzy approach is then used to obtain a final evaluation value that reflects both randomness and fuzziness, which makes the final result optimal. Finally, performance evaluations show that this approach requires less runtime and achieves higher accuracy than fuzzy synthetic evaluation alone. The experiments prove that the proposed method is faster and more accurate than the original method.
Pneumonia is a dangerous respiratory disease that makes breathing incredibly difficult and painful; thus, catching it early is crucial. Medical physicians' time is limited in outpatient settings due to the number of patients; therefore, automated systems can be a rescue. The input images from X-ray equipment are also highly variable due to variances in radiologists' experience. Therefore, radiologists require an automated system that can swiftly and accurately detect pneumonic lungs from chest X-rays. In medical classification, deep convolutional neural networks are commonly used. This research aims to use deep pretrained transfer learning models to accurately categorize CXR images into binary classes, i.e., Normal and Pneumonia. MDEV is a proposed novel ensemble approach that concatenates four heterogeneous transfer learning models, MobileNet, DenseNet-201, EfficientNet-B0, and VGG-16, which have been fine-tuned and trained on 5,856 CXR images. The evaluation metrics used in this research to contrast different deep transfer learning architectures include precision, accuracy, recall, AUC-ROC, and F1-score. The model effectively decreases training loss while increasing accuracy. The findings conclude that the proposed MDEV model outperforms cutting-edge deep transfer learning models, obtaining an overall precision of 92.26%, an accuracy of 92.15%, a recall of 90.90%, an AUC-ROC score of 90.9%, and an F-score of 91.49% with minimal data pre-processing, data augmentation, fine-tuning, and hyperparameter adjustment in classifying Normal and Pneumonia chests.
A significant number of cloud storage environments already implement deduplication technology. Due to the nature of the cloud environment, a storage server capable of accommodating large-capacity storage is required. As storage capacity increases, additional storage solutions are required. By leveraging deduplication, the cost problem can be fundamentally solved. However, deduplication poses privacy concerns due to its very structure. In this paper, we point out the privacy infringement problem and propose a new deduplication technique to solve it. In the proposed technique, since the user's map structure and files are not stored on the server, the file uploader list cannot be obtained through analysis of the server's meta-information, so the user's privacy is maintained. In addition, a personal identification number (PIN) can be used to solve the file ownership problem, and the technique provides advantages such as safety against insider breaches and sniffing attacks. The proposed mechanism requires approximately 100 ms of additional time to add an IDRef for distinguishing user-file pairs during typical deduplication; for small files, the time required for the additional operations is similar to the base operation time, but it becomes relatively smaller as the file size grows.
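Baseline content-hash deduplication can be sketched as follows. Note that this naive version keeps an uploader list on the server, which is precisely the metadata the proposed privacy-preserving scheme avoids storing; class and field names are illustrative:

```python
import hashlib

class DedupStore:
    """Naive server-side deduplication: one stored copy per content hash.
    The owners map makes the uploader list recoverable from server
    metadata, the privacy leak the paper's technique is designed to close."""

    def __init__(self):
        self.blobs = {}   # content hash -> file bytes (stored once)
        self.owners = {}  # content hash -> set of uploader ids

    def upload(self, user_id, data):
        digest = hashlib.sha256(data).hexdigest()
        is_duplicate = digest in self.blobs
        if not is_duplicate:
            self.blobs[digest] = data  # first upload stores the bytes
        self.owners.setdefault(digest, set()).add(user_id)
        return digest, is_duplicate

store = DedupStore()
h1, dup1 = store.upload("alice", b"report.pdf contents")
h2, dup2 = store.upload("bob", b"report.pdf contents")  # deduplicated
```

The second upload stores nothing new, so only one blob exists on the server, yet the owners map still reveals who uploaded the file, motivating the paper's removal of such per-user structures from the server.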
Funding: Supported by the "Regional Innovation System & Education (RISE)" program through the Seoul RISE Center, funded by the Ministry of Education (MOE) and the Seoul Metropolitan Government (2025-RISE-01-018-05), and supported by Quad Miners Corp.
Funding: Supported by the MSIT (Ministry of Science and ICT), Republic of Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2024-RS-2022-00156354), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Technology Development Program (RS-2023-00264489) funded by the Ministry of SMEs and Startups (MSS, Republic of Korea).
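The rank-correlation validation described in the abstract above can be reproduced in a few lines. The sketch below is a minimal pure-Python illustration of Spearman's and Kendall's coefficients applied to two hypothetical attack rankings; the score lists in the usage are invented for illustration and are not the paper's data.

```python
def rankdata(xs):
    """Assign ranks (1-based), averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman's rho: Pearson correlation of the ranks."""
    ra, rb = rankdata(a), rankdata(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra) * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den

def kendall(a, b):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(a)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

Two scoring runs that produce the same attack ordering yield coefficients of 1.0, which is the kind of consistency the validation checks for.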
Abstract: Fire can cause significant damage to the environment, economy, and human lives. If fire can be detected early, the damage can be minimized. Advances in technology, particularly in computer vision powered by deep learning, have enabled automated fire detection in images and videos. Several deep learning models have been developed for object detection, including applications in fire and smoke detection. This study focuses on optimizing the training hyperparameters of YOLOv8 and YOLOv10 models using Bayesian Tuning (BT). Experimental results on the large-scale D-Fire dataset demonstrate that this approach enhances detection performance. Specifically, the proposed approach improves the mean average precision at an Intersection over Union (IoU) threshold of 0.5 (mAP50) of the YOLOv8s, YOLOv10s, YOLOv8l, and YOLOv10l models by 0.26, 0.21, 0.84, and 0.63, respectively, compared to models trained with the default hyperparameters. The performance gains are more pronounced in the larger models, YOLOv8l and YOLOv10l, than in their smaller counterparts, YOLOv8s and YOLOv10s. Furthermore, YOLOv8 models consistently outperform YOLOv10, with mAP50 improvements of 0.26 for YOLOv8s over YOLOv10s and 0.65 for YOLOv8l over YOLOv10l when trained with BT. These results establish YOLOv8 as the preferred model for fire detection applications where detection performance is prioritized.
Abstract: The influences of 2.5wt% Mn addition on the microstructure and mechanical properties of the Cu-11.9wt%Al-3.8wt%Ni shape memory alloy (SMA) were studied by means of scanning electron microscopy (SEM), transmission electron microscopy (TEM), and differential scanning calorimetry (DSC). The experimental results show that the Mn addition considerably influences the austenite-martensite transformation temperatures and the type of martensite in the Cu-Al-Ni alloy. The martensitic transformation changes from a mixed β1→β'1+γ'1 transformation to a single β1→β'1 martensite transformation, together with a decrease in transformation temperatures. In addition, the observations reveal that the grain size of the Cu-Al-Ni alloy can be controlled with the addition of 2.5wt% Mn, and thus its mechanical properties can be enhanced. The Cu-Al-Ni-Mn alloy exhibits better mechanical properties, with a high ultimate compression strength of 952 MPa and a ductility of 15%. These improvements are attributed to the decrease in grain size. However, the hardness decreases from HV 230 to HV 140 with the Mn addition.
Abstract: The influence of aging on the microstructure and mechanical properties of Cu-11.6wt%Al-3.9wt%Ni-2.5wt%Mn shape memory alloy (SMA) was studied by means of scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), and differential scanning calorimetry (DSC). Experimental results show that bainite, γ2, and α phase precipitates form in the alloy upon aging. After aging at 300°C, bainitic precipitates appear at the early stages of aging, while precipitates of the γ2 phase are observed for longer aging times. When the aging temperature increases, the bainite gradually evolves into the γ2 phase, and the equilibrium α phase (bcc) precipitates from the remaining parent phase. Thus, the bainite, γ2, and α phases appear, while the martensite phase disappears progressively in the alloy. The bainitic precipitates decrease the reverse transformation temperatures, while the γ2 phase precipitates increase these temperatures owing to a decrease of solute content in the retained parent phase. On the other hand, these precipitates increase the hardness of the alloy.
Abstract: People started posting textual tweets on Twitter as soon as the novel coronavirus (COVID-19) emerged. Analyzing these tweets can assist institutions in better decision-making and in prioritizing their tasks. Therefore, this study aimed to analyze 43 million tweets collected between March 22 and March 30, 2020, and to describe the trend of public attention given to topics related to the COVID-19 epidemic using evolutionary clustering analysis. The results indicated that unigram terms trended more frequently than bigram and trigram terms. A large number of tweets about COVID-19 were disseminated and received widespread public attention during the epidemic. High-frequency words such as "death", "test", "spread", and "lockdown" suggest that people feared being infected and that those who were infected feared death. The results also showed that people agreed to stay at home due to fear of the spread, and that they called for social distancing once they became aware of COVID-19. It can be suggested that social media posts may affect human psychology and behavior. These results may help governments and health organizations better understand the psychology of the public and thereby communicate with them more effectively to prevent and manage panic.
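The unigram/bigram/trigram frequency comparison described above amounts to counting n-grams over tokenized tweets. A minimal sketch (the sample tweets are invented for illustration, not drawn from the study's dataset):

```python
from collections import Counter

def ngram_counts(texts, n):
    """Count n-grams (as token tuples) across a collection of texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return counts

# hypothetical tweets for illustration
tweets = ["lockdown starts today", "fear of the spread", "the spread is slowing"]
unigrams = ngram_counts(tweets, 1)
bigrams = ngram_counts(tweets, 2)
```

Ranking `counts.most_common()` per time window gives the trending-term view the study describes.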
Abstract: An improved method with better selection capability using a single camera is presented and compared with a previous method. To improve performance, two methods were applied to landmark selection in an unfamiliar indoor environment. First, a modified visual attention method was proposed to automatically select a candidate region as a more useful landmark. In visual attention, candidate landmark regions were selected based on differences in ambient color and intensity in the image. Then, the more useful landmarks were selected by combining the candidate regions using clustering. As generally implemented, automatic landmark selection by vision-based simultaneous localization and mapping (SLAM) results in many useless landmarks, because the image features are distinguishable from the surrounding environment but are detected repeatedly. These useless landmarks create a serious problem for the SLAM system because they complicate data association. To address this, a method was proposed in which the robot initially collects landmarks through automatic detection while traversing the entire area where it performs SLAM, and then selects only those landmarks that exhibit high rarity through clustering, which enhances system performance. Experimental results show that this method of automatic landmark selection yields high-rarity landmarks. The average SLAM error decreases by 52% compared with conventional methods, and the accuracy of data association increases.
Funding: This work was supported by the BK21 FOUR Project. W.H.P. received the grant.
Abstract: With the advent of the big data era, security issues in the context of artificial intelligence (AI) and data analysis are attracting research attention. In the metaverse, which will become a virtual asset in the future, users' communication, movement with characters, text elements, and so on must integrate the real and the virtual. However, users can be exposed to threats, particularly from hackers. For example, users' assets are exposed through notices and mail alerts regularly sent to users by operators. In the future, hacker threats will increase, mainly through naturally anonymous texts. Therefore, it is necessary to use the natural language processing technology of artificial intelligence, especially term frequency-inverse document frequency, word2vec, gated recurrent units, recurrent neural networks, and long short-term memory, along with their several application versions. Research on tasks and performance for algorithm application is currently underway. We propose a grouping algorithm that focuses on securing various bridgehead strategies to secure topics for security and safety within the metaverse. The algorithm comprises three modules: extracting topics from attacks, managing dimensions, and performing grouping. Consequently, we create 24 topic-based models. Assuming normal and spam mail attacks to verify our algorithm, accuracy increased by approximately 0.4%-1.5% over the previous application version.
Funding: Supported by the Kirikkale University Scientific Research Fund (Nos. 2008/34 and 2008/35).
Abstract: The kinetic, morphological, crystallographic, and magnetic characteristics of thermally induced martensites in Fe-13.4wt%Mn-5.2wt%Mo alloy were investigated by scanning electron microscopy (SEM), transmission electron microscopy (TEM), and Mössbauer spectroscopy. The experimental results reveal that two types of thermally induced martensite, ε (hcp) and α' (bcc), form in the as-quenched condition, and these transformations have athermal character. Mo addition to the Fe-Mn alloy does not change the coexistence of ε and α' martensites at Mn contents between 10wt% and 15wt%. Moreover, Mössbauer spectra reveal a paramagnetic character with a singlet for the γ (fcc) austenite and ε martensite phases and a ferromagnetic character with a broad sextet for the α' martensite phase. The volume fraction of α' martensite forming in the quenched alloy is much larger than that of the ε martensite.
Abstract: The processing of sound signals has improved significantly in recent years. Techniques for sound signal processing focused on music, beyond the speech area, are attracting attention owing to the development of deep learning. This study analyzes and processes music signals to generate two-dimensional tabular data and new music. For the analysis and processing part, we represented normalized waveforms for each input via frequency-domain signals. We then examined shorter segments to compare the wave patterns of different singers. A Fourier transform was applied to obtain spectrograms of the music signals. A filterbank was applied to represent the spectrogram according to the human ear rather than distance on the frequency dimension, and the final spectrogram was plotted on the Mel scale. For the generation part, we created two-dimensional tabular data for data manipulation. With the 2D data, many kinds of analysis can be performed, since it holds numeric values for the music signals. We then generated new music by applying an LSTM to the song that audiences preferred more. As a result, the generated music was shown to have waveforms similar to the original music. This study takes a step forward for music signal processing. If this study is expanded further, the patterns that listeners like could be found, so that music could be generated in a favorite singer's voice in the way the listener prefers.
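The Mel-scale warping mentioned above is commonly computed with the HTK-style formula mel = 2595·log10(1 + f/700). The sketch below assumes that standard variant, which may differ from the exact filterbank used in the study:

```python
import math

def hz_to_mel(f):
    # HTK-style Mel scale: perceptually motivated warping of frequency (Hz -> mel)
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # inverse mapping (mel -> Hz)
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

By design, low frequencies are spaced nearly linearly on the Mel axis while high frequencies are compressed, matching the resolution of human hearing.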
Abstract: Decision trees are mainly used to classify data and predict data classes. A spatial decision tree has previously been designed using the Euclidean distance between objects to reflect spatial data characteristics. Although this method accounts for the distance between objects in the spatial dimension, it fails to represent the distributions of spatial data and their relationships. However, the distributions of spatial data and the relationships with their neighborhoods are very important in the real world. This paper proposes a decision tree based on spatial entropy that represents the distributions of spatial data through dispersion and dissimilarity. The ratio of dispersion to dissimilarity indicates how the distribution of spatial data relates to the non-spatial attributes. The experiments evaluate the accuracy and building time of the decision tree compared with previous methods, and show that the proposed method provides efficient and scalable classification for spatial decision support.
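The spatial entropy above is defined via dispersion and dissimilarity. As an illustrative sketch only (not the authors' exact formulation), one can weight ordinary class entropy by the ratio of mean intra-class to mean inter-class distance, so that spatially well-separated classes lower the entropy:

```python
import math

def entropy(labels):
    """Plain Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def spatial_entropy(points, labels):
    """Illustrative spatial entropy: class entropy scaled by
    mean intra-class distance / mean inter-class distance."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    intra = inter = 0.0
    ni = ne = 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if labels[i] == labels[j]:
                intra += d; ni += 1
            else:
                inter += d; ne += 1
    if ni == 0 or ne == 0 or inter == 0:
        return entropy(labels)
    return entropy(labels) * (intra / ni) / (inter / ne)
```

With two tight, far-apart clusters the spatial entropy drops well below the plain entropy, which is the behavior a spatially aware split criterion rewards.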
Funding: Supported by the National Natural Science Foundation of China (60132030) and the Korea Science and Engineering Foundation (KOSEF) through the OIRC Project.
Abstract: Traffic shaping is one of the important control operations for guaranteeing Quality of Service (QoS) in optical burst switching (OBS) networks. The efficiency of traffic shaping is mainly determined by the token generation method. In this paper, token generation methods for traffic shaping are evaluated using three kinds of probability distribution and are analyzed in terms of burst blocking probability, throughput, and correlation by simulation. The simulation results show that the token generation methods decrease the burst correlation of Label Switched Paths (LSPs) and relieve traffic congestion as well. The different burst arrival processes have a small impact on the blocking probability for OBS networks. Keywords: optical burst switching, traffic shaping, token generation, quality of service. Biography: Tang Wan (1974-), female, Ph.D. candidate; research direction: contention resolution and routing mechanisms in OBS networks.
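Token generation for shaping is commonly modeled as a token bucket: tokens accrue at a configured rate up to a burst capacity, and a burst is admitted only if enough tokens are available. The sketch below uses a deterministic token rate for brevity, whereas the paper evaluates token generation under several probability distributions; the rate and capacity values are hypothetical:

```python
class TokenBucket:
    """Minimal token-bucket shaper: rate tokens/second, burst up to capacity."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # time of the last admission check

    def allow(self, now, size):
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size     # admit the burst and spend its tokens
            return True
        return False                # burst exceeds available tokens: shape/delay it
```

Replacing the deterministic refill with draws from different distributions reproduces the kind of comparison the abstract describes.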
Abstract: Spam mail classification is considered a complex and error-prone task in the distributed computing environment. Various spam mail classification approaches are available, such as the naive Bayesian classifier, logistic regression, support vector machines, decision trees, recursive neural networks, and long short-term memory algorithms. However, they do not consider the document as a whole when analyzing spam mail content. These approaches use the bag-of-words method, which analyzes a large amount of text data and classifies features with the help of term frequency-inverse document frequency. Because a document contains many words, these approaches consume a massive amount of resources and become infeasible when classifying multiple associated mail documents together. Thus, spam mail is not fully classified, and these approaches leave loopholes. We therefore propose a term frequency topic-inverse document frequency model that considers the meaning of text data in a larger semantic unit by applying weights based on the document's topic. Moreover, the proposed approach reduces the sparsity problem through a frequency topic-inverse document frequency in a singular value decomposition model. Our proposed approach also reduces the dimensionality, which ultimately strengthens document classification. Experimental evaluations show that the proposed approach classifies spam mail documents with higher accuracy using individual document-independent processing computation. Comparative evaluations show that the proposed approach performs better than the logistic regression model in the distributed computing environment, with higher document word frequencies of 97.05%, 99.17%, and 96.59%.
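The bag-of-words baseline that the proposal above builds on computes term frequency-inverse document frequency per document. A minimal sketch of that baseline (plain TF-IDF, not the proposed topic-weighted variant), over invented toy documents:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: (c / len(doc)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return weights

# hypothetical mail token lists for illustration
docs = [["free", "offer", "offer"], ["meeting", "offer"], ["meeting", "notes"]]
w = tfidf(docs)
```

Terms concentrated in few documents ("free", "notes") get higher weights than terms spread across the corpus, which is exactly the sparsity/specificity trade-off the proposed topic weighting then refines.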
Funding: Project (2011) supported by the research grant of Chungbuk National University, South Korea.
Abstract: r-learning, which builds on e-learning and u-learning, is defined as a learning support system in which intelligent robots provide verbal and nonverbal interaction in a ubiquitous computing environment. To guarantee the advantages of r-learning contents, which have no limits of time and place and offer nonverbal interaction absent from e-learning contents, assessment criteria for r-learning contents have recently become urgently required. Therefore, reliable and valid assessment criteria were developed for nonverbal interaction contents in r-learning, with the following detailed research content. First, assessment criteria for nonverbal interaction in r-learning contents are specified as gesture, facial expression, semi-verbal message, distance, physical contact, and time. Second, the validity of the developed assessment criteria is demonstrated statistically. Consequently, the assessment criteria for nonverbal interaction contents will be helpful when choosing and producing better r-learning content, ultimately improving the reliability of school education.
Funding: Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00765, Development of Compression and Transmission Technologies for Ultra High-Quality Immersive Videos Supporting 6DoF).
Abstract: Recent advances in 360 video streaming technologies have enhanced the immersive experience of video streaming services. In particular, there is immense potential for applying 360 video encoding formats to achieve highly immersive virtual reality (VR) systems. However, 360 video streaming requires considerable bandwidth, and its performance depends on several factors. Consequently, optimizing 360 video bitstreams according to the viewport texture is crucial. Therefore, we propose an adaptive solution for VR systems using viewport-dependent tiled 360 video streaming. To increase the degrees of freedom of users, the Moving Picture Experts Group (MPEG) recently defined three degrees of freedom plus (3DoF+) and six degrees of freedom (6DoF) to support free user movement within camera-captured scenes. The proposed method supports 6DoF, allowing users to move their heads freely. Herein, we propose viewport-dependent tiled 360 video streaming based on users' head movements. The proposed system generates an adaptive bitstream using tile sets that are selected according to a parameter set of the user's viewport area. This extracted bitstream is then transmitted to the user's computer. After decoding, the user's viewport is generated and rendered on a VR head-mounted display (HMD). Furthermore, we introduce certain approaches to reduce the motion-to-photon latency. The experimental results demonstrate that, in contrast with non-tiled streaming, the proposed method achieves high-performance 360 video streaming for VR systems, with a 25.89% BD-rate saving for Y-PSNR and a 61.16% saving in decoding time.
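Viewport-dependent tile selection can be illustrated geometrically: split the 360° yaw range into equal-width tiles and keep only those overlapping the user's current field of view. The sketch below is a simplified, yaw-only illustration with arbitrary tile count and FoV; the actual system also handles pitch and encoder-level tiling:

```python
def select_tiles(yaw_center, fov, num_tiles):
    """Return indices of equal-width yaw tiles overlapping the viewport.
    Angles in degrees; handles the viewport wrapping around 0 degrees."""
    width = 360.0 / num_tiles
    lo = (yaw_center - fov / 2) % 360
    hi = (yaw_center + fov / 2) % 360
    selected = []
    for t in range(num_tiles):
        a, b = t * width, (t + 1) * width   # tile's yaw interval [a, b)
        if lo <= hi:
            if a < hi and b > lo:           # ordinary overlap test
                selected.append(t)
        else:                                # viewport wraps around 0 degrees
            if a < hi or b > lo:
                selected.append(t)
    return selected
```

Only the selected tiles are fetched at high quality as the head moves, which is the source of the bandwidth and decoding-time savings reported above.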
Abstract: The analysis of large time-series datasets has profoundly enhanced our ability to make accurate predictions in many fields. However, unpredictable phenomena, such as extreme weather events or the novel coronavirus 2019 (COVID-19) outbreak, can greatly limit the ability of time-series analyses to establish reliable patterns. The present work addresses this issue by applying uncertainty analysis using a probability distribution function, within a preliminary study involving the prediction of power consumption for a single hotel in Seoul, South Korea, based on an analysis of 53,567 data items collected by the Korea Electric Power Corporation using robotic process automation. We first apply Facebook Prophet for time-series analysis. The results demonstrate that the COVID-19 outbreak seriously compromised the reliability of the time-series analysis. Machine learning models are then developed in the TensorFlow framework for conducting uncertainty analysis based on modeled relationships between electric power consumption and outdoor temperature. The benefits of the proposed uncertainty analysis for predicting the electricity consumption of the hotel building are demonstrated by comparing the results obtained when considering no uncertainty, aleatory uncertainty, epistemic uncertainty, and mixed aleatory and epistemic uncertainty. The minimum and maximum ranges of predicted electricity consumption are obtained when using mixed uncertainty. Accordingly, the application of uncertainty analysis using a probability distribution function greatly improved predictive power compared with time-series analysis alone.
Funding: Project supported by the Akdeniz University Scientific Research Projects Coordination Unit.
Abstract: Beam splitting upon refraction in a triangular sonic crystal composed of aluminum cylinders in air is experimentally and numerically demonstrated to occur due to finite source size, which facilitates circumvention of a directional band gap. Experiments reveal that two distinct beams emerge at the crystal output, in agreement with numerical results obtained through the finite-element method. Beam splitting occurs at sufficiently small source sizes comparable to the lattice periodicity, as determined by the spatial gap width in reciprocal space. The split beams propagate with equal amplitude, whereas beam splitting is destroyed for oblique incidence above a critical incidence angle.
Abstract: Substantial efforts have been focused on addressing the COVID-19 pandemic; for example, governments have implemented countermeasures such as quarantining, pushing vaccine shots to minimize local spread, investigating and analyzing the virus's characteristics, and conducting epidemiological investigations through patient management and tracers. Therefore, researchers worldwide require funding to achieve these goals. Furthermore, documentation is needed to investigate and trace disease characteristics. However, working with documents comprising many types of unstructured data is time-consuming and resource-intensive. Therefore, in this study, natural language processing technology is used to automatically classify these documents. Currently used statistical methods include data cleansing, query modification, sentiment analysis, and clustering. However, owing to limitations with respect to the data, it is necessary to understand how to perform data analysis suitable for medical documents. To solve this problem, this study proposes a robust in-depth model mixing subject and emotion, comprising three modules. The first is a subject and non-linear emotion module, which extracts topics from the data and supplements them with emotional figures. The second is a subject with singular value decomposition in the emotion model, a dimensional decomposition module that uses subject analysis and an emotion model. The third involves embedding with singular value decomposition using an emotion module, a dimensional decomposition method that uses emotion learning. The accuracy and other measures, such as the F1 score, area under the curve, and recall, are evaluated on an article on Middle East respiratory syndrome. A high F1 score of approximately 91% is achieved. The proposed joint analysis method is expected to provide a better synergistic effect on the dataset.
Funding: This research is supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Assessment).
Abstract: In decision support systems for spatial site selection, fuzzy synthetic evaluation is a useful approach. However, the method does not account for randomness in the factors. To remedy this problem, this paper proposes a cloud-based fuzzy approach that combines the advantages of the cloud transform and fuzzy synthetic evaluation. The cloud transform considers the randomness in the factors and produces a higher concept layer for data mining. At the same time, a check mechanism controls the quality of the partitions of the factors. The fuzzy approach is then used to obtain a final evaluation value that accounts for both randomness and fuzziness, making the final result closer to optimal. Finally, performance evaluations show that this approach requires less runtime and achieves higher accuracy than fuzzy synthetic evaluation alone, demonstrating that the proposed method is faster and more accurate than the original method.
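The fuzzy synthetic evaluation step above can be sketched as a weighted aggregation of factor-to-grade membership degrees. This is a minimal illustration with invented weights and memberships; the cloud transform that precedes it in the paper is omitted:

```python
def fuzzy_synthetic(weights, membership):
    """weights: importance per factor (summing to 1).
    membership[i][j]: degree to which factor i belongs to grade j.
    Returns a normalized score per evaluation grade."""
    grades = len(membership[0])
    scores = [sum(w * row[j] for w, row in zip(weights, membership))
              for j in range(grades)]
    total = sum(scores)
    return [s / total for s in scores]

# hypothetical example: two factors, two grades ("suitable", "unsuitable")
result = fuzzy_synthetic([0.7, 0.3], [[1.0, 0.0], [0.0, 1.0]])
```

The grade with the highest aggregated score is then taken as the site's evaluation.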
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A1A01052299).
Abstract: Pneumonia is a dangerous respiratory disease that makes breathing incredibly difficult and painful; thus, catching it early is crucial. Physicians' time is limited in outpatient settings due to the large number of patients; therefore, automated systems can be a rescue. The input images from X-ray equipment are also highly variable due to variances in radiologists' experience. Therefore, radiologists require an automated system that can swiftly and accurately detect pneumonic lungs from chest X-rays. Deep convolutional neural networks are commonly used in medical classification. This research aims to use deep pretrained transfer learning models to accurately categorize CXR images into binary classes, i.e., Normal and Pneumonia. The proposed MDEV is a novel ensemble approach that concatenates four heterogeneous transfer learning models: MobileNet, DenseNet-201, EfficientNet-B0, and VGG-16, which have been fine-tuned and trained on 5,856 CXR images. The evaluation metrics used in this research to contrast different deep transfer learning architectures include precision, accuracy, recall, AUC-ROC, and F1-score. The model effectively decreases training loss while increasing accuracy. The findings conclude that the proposed MDEV model outperforms cutting-edge deep transfer learning models, obtaining an overall precision of 92.26%, an accuracy of 92.15%, a recall of 90.90%, an AUC-ROC score of 90.9%, and an F1-score of 91.49% with minimal data pre-processing, data augmentation, fine-tuning, and hyperparameter adjustment in classifying Normal and Pneumonia chest X-rays.
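MDEV concatenates the feature outputs of four backbones before a classifier head. As a lighter-weight illustration of combining heterogeneous models, the sketch below uses soft voting over per-model class probabilities, which is not the paper's exact fusion method; the probability vectors are invented:

```python
def soft_vote(prob_lists):
    """Average class-probability vectors from several models, return
    (argmax class index, averaged probabilities)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# hypothetical [Normal, Pneumonia] probabilities from four models
preds = [[0.2, 0.8], [0.4, 0.6], [0.1, 0.9], [0.45, 0.55]]
label, avg = soft_vote(preds)
```

Averaging dampens the errors of any single backbone, which is the intuition behind ensembling heterogeneous architectures.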
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2019R1I1A3A01062789) (received by N. Park).
Abstract: A significant number of cloud storage environments already implement deduplication technology. Owing to the nature of the cloud environment, a storage server capable of accommodating large-capacity storage is required, and as storage capacity increases, additional storage solutions are needed. Leveraging deduplication can fundamentally solve this cost problem. However, deduplication poses privacy concerns due to its very structure. In this paper, we point out this privacy infringement problem and propose a new deduplication technique to solve it. In the proposed technique, since the user's map structure and files are not stored on the server, the file uploader list cannot be obtained through analysis of the server's meta-information, so user privacy is maintained. In addition, a personal identification number (PIN) can be used to solve the file ownership problem, providing advantages such as safety against insider breaches and sniffing attacks. The proposed mechanism requires approximately 100 ms of additional time to add an IDRef that distinguishes user-file pairs during typical deduplication; for smaller files, the additional operations take time comparable to the base operation, but relatively less time as the file size grows.
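Server-side deduplication in general keys stored content by its hash so that identical uploads are stored only once. The sketch below shows that generic baseline (a content hash plus a reference count, with no per-user map, loosely mirroring the privacy goal above); the PIN-based ownership logic of the paper is not modeled:

```python
import hashlib

class DedupStore:
    """Content-addressed store: identical uploads share one stored block."""
    def __init__(self):
        self.blocks = {}   # sha256 hex digest -> data (stored once)
        self.refs = {}     # sha256 hex digest -> reference count (no per-user map)

    def put(self, data):
        h = hashlib.sha256(data).hexdigest()
        if h not in self.blocks:
            self.blocks[h] = data          # first upload actually stores the bytes
        self.refs[h] = self.refs.get(h, 0) + 1
        return h                           # handle returned to the uploader

    def get(self, h):
        return self.blocks[h]
```

Because only hashes and counts are kept, inspecting the store's metadata reveals how many references a block has but not which users uploaded it.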