Terms of intelligence in 20th and 21th century mean the methods of automatic extraction, analysis, interpretation and use of information. Thus, the intelligence services in the future created an electronic database in...Terms of intelligence in 20th and 21th century mean the methods of automatic extraction, analysis, interpretation and use of information. Thus, the intelligence services in the future created an electronic database in which to their being classified intelligence products, users could choose between the latter themselves relevant information. The EU (European Union) that activities are carried out from at least in year 1996, terrorist attacks in year 200l is only accelerating. Proposals to increase surveillance and international cooperation in this field have been drawn up before September 11 2011. On the Web you can fmd a list of networks (Cryptome, 2011), which could be connected, or are under the control of the security service--NSA (National Security Agency). United States of America in year 1994 enacted a law for telephone communication--Digital Telephony Act, which would require manufacturers of telecommunications equipment, leaving some security holes for control. In addition, we monitor the Internet and large corporations. The example of the United States of America in this action reveals the organization for electronic freedoms against a telecom company that the NSA illegally gains access to data on information technology users and Internet telephony.展开更多
The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)an...The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.展开更多
The rapid digitalization of the energy sector has led to the deployment of large-scale smart metering systems that generate high-frequency time series data,creating new opportunities and challenges for energy anomaly ...The rapid digitalization of the energy sector has led to the deployment of large-scale smart metering systems that generate high-frequency time series data,creating new opportunities and challenges for energy anomaly detection.Accurate identification of anomalous patterns in building energy consumption is essential for optimizing operations,improving energy efficiency,and supporting grid reliability.This study investigates advanced feature engineering and machine learning modeling techniques for large-scale time series anomaly detection in building energy systems.Expanding upon previous benchmark frameworks,we introduce additional features such as oil price indices and solar cycle indicators,including sunset and sunrise times,to enhance the contextual understanding of consumption patterns.Our comparative modeling approach encompasses an extensive suite of algorithms,including KNeighborsUnif,KNeighborsDist,LightGBMXT,LightGBM,RandomForestMSE,CatBoost,ExtraTreesMSE,NeuralNetFastAI,XGBoost,NeuralNetTorch,and LightGBMLarge.Data preprocessing includes rigorous handling of missing values and normalization,while feature engineering focuses on temporal,environmental,and value-change attributes.The models are evaluated on a comprehensive dataset of smart meter readings,with performance assessed using metrics such as the Area Under the Receiver Operating Characteristic Curve(AUC-ROC).The results demonstrate that the integration of diverse exogenous variables and a hybrid ensemble of traditional tree-based and neural network models can significantly improve anomaly detection performance.This work provides new insights into the design of robust,scalable,and generalizable frameworks for energy anomaly detection in complex,real-world settings.展开更多
In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information...In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming more and more mature, and as a result, its use across numerous industries is now standard. China is still in the early stages of developing its integration of emergency medical services with modern information technology;despite our progress, there are still numerous obstacles and constraints to overcome. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. Information may be communicated in a fast, multiple, and effective manner by utilizing modern information technology. This study aims to examine the current state of this field’s development, current issues, and the field’s future course of development.展开更多
The rapid expansion of the Internet of Things(IoT)and Edge Artificial Intelligence(AI)has redefined automation and connectivity acrossmodern networks.However,the heterogeneity and limited resources of IoT devices expo...The rapid expansion of the Internet of Things(IoT)and Edge Artificial Intelligence(AI)has redefined automation and connectivity acrossmodern networks.However,the heterogeneity and limited resources of IoT devices expose them to increasingly sophisticated and persistentmalware attacks.These adaptive and stealthy threats can evade conventional detection,establish remote control,propagate across devices,exfiltrate sensitive data,and compromise network integrity.This study presents a Software-Defined Internet of Things(SD-IoT)control-plane-based,AI-driven framework that integrates Gated Recurrent Units(GRU)and Long Short-TermMemory(LSTM)networks for efficient detection of evolving multi-vector,malware-driven botnet attacks.The proposed CUDA-enabled hybrid deep learning(DL)framework performs centralized real-time detection without adding computational overhead to IoT nodes.A feature selection strategy combining variable clustering,attribute evaluation,one-R attribute evaluation,correlation analysis,and principal component analysis(PCA)enhances detection accuracy and reduces complexity.The framework is rigorously evaluated using the N_BaIoT dataset under k-fold cross-validation.Experimental results achieve 99.96%detection accuracy,a false positive rate(FPR)of 0.0035%,and a detection latency of 0.18 ms,confirming its high efficiency and scalability.The findings demonstrate the framework’s potential as a robust and intelligent security solution for next-generation IoT ecosystems.展开更多
In this paper,the problem of increasing information transfer authenticity is formulated.And to reach a decision,the control methods and algorithms based on the use of statistical and structural information redundancy ...In this paper,the problem of increasing information transfer authenticity is formulated.And to reach a decision,the control methods and algorithms based on the use of statistical and structural information redundancy are presented.It is assumed that the controllable information is submitted as the text element images and it contains redundancy,caused by statistical relations and non-uniformity probability distribution of the transmitted data.The use of statistical redundancy allows to develop the adaptive rules of the authenticity control which take into account non-stationarity properties of image data while transferring the information.The structural redundancy peculiar to the container of image in a data transfer package is used for developing new rules to control the information authenticity on the basis of pattern recognition mechanisms.The techniques offered in this work are used to estimate the authenticity in structure of data transfer packages.The results of comparative analysis for developed methods and algorithms show that their parameters of efficiency are increased by criterion of probability of undetected mistakes,labour input and cost of realization.展开更多
The goal of this manuscript is to present a research finding, based on a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors, in line with usable-security ...The goal of this manuscript is to present a research finding, based on a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors, in line with usable-security principles. The study followed literature search techniques, as well as theoretical and empirical methods of factor validation. The strategy used in literature search includes Boolean keywords search, and citation guides, using mainly web of science databases. As guided by study objectives, 9 SM socio-technical factors were identified, verified and validated. Both theoretical and empirical validation processes were followed. Thus, a theoretical validity test was conducted on 45 Likert scale items, involving 10 subject experts. From the score ratings of the experts, Content Validity Index (CVI) was calculated to determine the degree to which the identified factors exhibit appropriate items for the construct being measured, and 7 factors attained an adequate level of validity index. However, for reliability test, 32 respondents and 45 Likert scale items were used. Whereby, Cronbach’s alpha coefficient (α-values) were generated using SPSS. Subsequently, 8 factors attained an adequate level of reliability. Overall, the validated factors include;1) usability—visibility, learnability, and satisfaction;2) education and training—help and documentation;3) SM technology development—error handling, and revocability;4) information security —security, privacy, and expressiveness. In this case, the confirmed factors would add knowledge by providing a theoretical basis for rationalizing information security requirements on SM usage.展开更多
The COVID-19 outbreak initiated from the Chinese city of Wuhanand eventually affected almost every nation around the globe. From China,the disease started spreading to the rest of the world. After China, Italybecame t...The COVID-19 outbreak initiated from the Chinese city of Wuhanand eventually affected almost every nation around the globe. From China,the disease started spreading to the rest of the world. After China, Italybecame the next epicentre of the virus and witnessed a very high death toll.Soon nations like the USA became severely hit by SARS-CoV-2 virus. TheWorld Health Organisation, on 11th March 2020, declared COVID-19 a pandemic. To combat the epidemic, the nations from every corner of the worldhas instituted various policies like physical distancing, isolation of infectedpopulation and researching on the potential vaccine of SARS-CoV-2. Toidentify the impact of various policies implemented by the affected countrieson the pandemic spread, a myriad of AI-based models have been presented toanalyse and predict the epidemiological trends of COVID-19. In this work, theauthors present a detailed study of different articial intelligence frameworksapplied for predictive analysis of COVID-19 patient record. The forecastingmodels acquire information from records to detect the pandemic spreadingand thus enabling an opportunity to take immediate actions to reduce thespread of the virus. This paper addresses the research issues and correspondingsolutions associated with the prediction and detection of infectious diseaseslike COVID-19. It further focuses on the study of vaccinations to cope withthe pandemic. Finally, the research challenges in terms of data availability,reliability, the accuracy of the existing prediction models and other open issuesare discussed to outline the future course of this study.展开更多
This paper puts forward a communication programming method between robot and external computer based on RPC (Remote Produce Call) communication method, which realizes robot distributed controlling network system model...This paper puts forward a communication programming method between robot and external computer based on RPC (Remote Produce Call) communication method, which realizes robot distributed controlling network system model. And a new Robot off line programming method is built based on this communication method and network model. Further more, as an example, robot auto marking and auto cutting of shipbuilding profile system is developed, which proves the ideas of author’s off line programming and development methods of robot flexible automation system. As a result, this paper presents a new method for developing robot flexible automation system.展开更多
The work is devoted to the demonstration of the possibility of applying the formulas of information handling obtained in the theory of non-force interaction for the natural language processing. These formulas were obt...The work is devoted to the demonstration of the possibility of applying the formulas of information handling obtained in the theory of non-force interaction for the natural language processing. These formulas were obtained in computer experiments in modelling the movement and interaction of material objects by changing the amount of information that triggers this movement. The hypothesis, objective and tasks of the experimental research were defined. The methods and software tools were developed to conduct the experiments. To compare different results of the simulation of the processes in a human brain during speech production, there was a range of methods proposed to calculate the estimate of sequence of fragments of natural language texts including the methods based on linear approximation. The experiments confirmed that the formulas of information handling obtained in the theory of non-force interaction reflect the processes of language formation. It is shown that the offered approach can successfully be used to create systems of reactive artificial intelligence machines. Experimental and, presented in this work, practical results constitute that the non-force (informational) interaction formulae are generally valid.展开更多
Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the s...Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.展开更多
Genetic effect estimates for loci detected in quantitative trait locus (QTL) mapping experiments depend upon two factors. First, they are parameterizations of the genotypic values determined by the model of genetic ef...Genetic effect estimates for loci detected in quantitative trait locus (QTL) mapping experiments depend upon two factors. First, they are parameterizations of the genotypic values determined by the model of genetic effects. Second, they are consequently also affected by the regression method used to estimate the genotypic values from the observed marker genotypes and phenotypes. There are two common causes for marker-genotype data to be incomplete in those experiments—missing marker-genotypes and within-interval mapping. Different regression methods tend to differ in how this missing information is represented and handled. In this communication we explain why the estimates of genetic effects of QTL obtained using standard regression methods are not coherent with the model of genetic effects and indeed show intrinsic inconsistencies when there is incomeplete genotype information. We then describe the interval mapping by imputations (IMI) regression method and prove that it overcomes those problems. A numerical example is used to illustrate the use of IMI and the consequences of using current methods of choice. IMI enables researchers to obtain estimates of genetic effects that are coherent with the model of genetic effects used, despite incomplete genotype information. Furthermore, because IMI allows orthogonal estimation of genetic effects, it shows potential performance advantages for being implemented in QTL mapping tools.展开更多
This study systematically reviews the Internet of Things(IoT)security research based on literature from prominent international cybersecurity conferences over the past five years,including ACM Conference on Computer a...This study systematically reviews the Internet of Things(IoT)security research based on literature from prominent international cybersecurity conferences over the past five years,including ACM Conference on Computer and Communications Security(ACM CCS),USENIX Security,Network and Distributed System Security Symposium(NDSS),and IEEE Symposiumon Security and Privacy(IEEE S&P),along with other high-impact studies.It organizes and analyzes IoT security advancements through the lenses of threats,detection methods,and defense strategies.The foundational architecture of IoT systems is first outlined,followed by categorizing major threats into eight distinct types and analyzing their root causes and potential impacts.Next,six prominent threat detection techniques and five defense strategies are detailed,highlighting their technical principles,advantages,and limitations.The paper concludes by addressing the key challenges still confronting IoT security and proposing directions for future research to enhance system resilience and protection.展开更多
As blockchain technology advances,non-fungible tokens(NFTs)are emerging as unconventional assets in the commercial market.However,it is necessary to establish a comprehensive NFT ecosystem that addresses the prevailin...As blockchain technology advances,non-fungible tokens(NFTs)are emerging as unconventional assets in the commercial market.However,it is necessary to establish a comprehensive NFT ecosystem that addresses the prevailing public concerns.This study aimed to bridge this gap by analyzing user-generated content on prominent social media platforms such as Twitter,Weibo,and Reddit.Employing text clustering and topic modeling techniques,such as Latent Dirichlet Allocation,we constructed an analytical framework to delve into the intricacies of the NFT ecosystem.Our investigation revealed seven distinct topics from Twitter and Reddit data and eight topics from Weibo data.Weibo users predominantly engaged in reviews and critiques,whereas Twitter and Reddit users emphasized personal experiences and perceptions.The NFT ecosystem encompasses several crucial elements,including transactions,customers,infrastructure,products,environments,and perceptions.By identifying the prevailing trends and common issues,this study offers valuable guidance for the development of NFT ecosystems.展开更多
In the rapidly evolving landscape of intelligent transportation systems,the security and authenticity of vehicular communication have emerged as critical challenges.As vehicles become increasingly interconnected,the n...In the rapidly evolving landscape of intelligent transportation systems,the security and authenticity of vehicular communication have emerged as critical challenges.As vehicles become increasingly interconnected,the need for robust authentication mechanisms to safeguard against cyber threats and ensure trust in an autonomous ecosystem becomes essential.On the other hand,using intelligence in the authentication system is a significant attraction.While existing surveys broadly address vehicular security,a critical gap remains in the systematic exploration of Deep Learning(DL)-based authentication methods tailored to these communication paradigms.This survey fills that gap by offering a comprehensive analysis of DL techniques—including supervised,unsupervised,reinforcement,and hybrid learning—for vehicular authentication.This survey highlights novel contributions,such as a taxonomy of DL-driven authentication protocols,real-world case studies,and a critical evaluation of scalability and privacy-preserving techniques.Additionally,this paper identifies unresolved challenges,such as adversarial resilience and real-time processing constraints,and proposes actionable future directions,including lightweight model optimization and blockchain integration.By grounding the discussion in concrete applications,such as biometric authentication for driver safety and adaptive key management for infrastructure security,this survey bridges theoretical advancements with practical deployment needs,offering a roadmap for next-generation secure intelligent vehicular ecosystems for the modern world.展开更多
Purpose:Generally,the scientific comparison has been done with the help of the overall impact of scholars.Although it is very easy to compare scholars,but how can we assess the scientific impact of scholars who have d...Purpose:Generally,the scientific comparison has been done with the help of the overall impact of scholars.Although it is very easy to compare scholars,but how can we assess the scientific impact of scholars who have different research careers?It is very obvious,the scholars may gain a high impact if they have more research experience or have spent more time(in terms of research career in a year).Then we cannot compare two scholars who have different research careers.Many bibliometrics indicators address the time-span of scholars.In this series,the h-index sequence and EM/EM’-index sequence have been introduced for assessment and comparison of the scientific impact of scholars.The h-index sequence,EM-index sequence,and EM’-index sequence consider the yearly impact of scholars,and comparison is done by the index value along with their component value.The time-series indicators fail to give a comparative analysis between senior and junior scholars if there is a huge difference in both scholars’research careers.Design/methodology/approach:We have proposed the cumulative index calculation method to appraise the scientific impact of scholars till that age and tested it with 89 scholars data.Findings:The proposed mechanism is implemented and tested on 89 scholars’publication data,providing a clear difference between the scientific impact of two scholars.This also helps in predicting future prominent scholars based on their research impact.Research limitations:This study adopts a simplistic approach by assigning equal credit to all authors,regardless of their individual contributions.Further,the potential impact of career breaks on research productivity is not taken into account.These assumptions may limit the generalizability of our findings Practical implications:The proposed method can be used by respected institutions to compare their scholars impact.Funding agencies can also use it for similar purposes.Originality/value:This research adds to the existing literature by introducing a novel methodology for comparing the scientific impact of scholars.The outcomes of this research have notable implications for the development of more precise and unbiased research assessment frameworks,enabling a more equitable evaluation of scholarly contributions.展开更多
Distributed denial of service(DDoS)attacks are common network attacks that primarily target Internet of Things(IoT)devices.They are critical for emerging wireless services,especially for applications with limited late...Distributed denial of service(DDoS)attacks are common network attacks that primarily target Internet of Things(IoT)devices.They are critical for emerging wireless services,especially for applications with limited latency.DDoS attacks pose significant risks to entrepreneurial businesses,preventing legitimate customers from accessing their websites.These attacks require intelligent analytics before processing service requests.Distributed denial of service(DDoS)attacks exploit vulnerabilities in IoT devices by launchingmulti-point distributed attacks.These attacks generate massive traffic that overwhelms the victim’s network,disrupting normal operations.The consequences of distributed denial of service(DDoS)attacks are typically more severe in software-defined networks(SDNs)than in traditional networks.The centralised architecture of these networks can exacerbate existing vulnerabilities,as these weaknesses may not be effectively addressed in this model.The preliminary objective for detecting and mitigating distributed denial of service(DDoS)attacks in software-defined networks(SDN)is to monitor traffic patterns and identify anomalies that indicate distributed denial of service(DDoS)attacks.It implements measures to counter the effects ofDDoS attacks,and ensure network reliability and availability by leveraging the flexibility and programmability of SDN to adaptively respond to threats.The authors present a mechanism that leverages the OpenFlow and sFlow protocols to counter the threats posed by DDoS attacks.The results indicate that the proposed model effectively mitigates the negative effects of DDoS attacks in an SDN environment.展开更多
Multiple Sclerosis(MS)poses significant health risks.Patients may face neurodegeneration,mobility issues,cognitive decline,and a reduced quality of life.Manual diagnosis by neurologists is prone to limitations,making ...Multiple Sclerosis(MS)poses significant health risks.Patients may face neurodegeneration,mobility issues,cognitive decline,and a reduced quality of life.Manual diagnosis by neurologists is prone to limitations,making AI-based classification crucial for early detection.Therefore,automated classification using Artificial Intelligence(AI)techniques has a crucial role in addressing the limitations of manual classification and preventing the development of MS to advanced stages.This study developed hybrid systems integrating XGBoost(eXtreme Gradient Boosting)with multi-CNN(Convolutional Neural Networks)features based on Ant Colony Optimization(ACO)and Maximum Entropy Score-based Selection(MESbS)algorithms for early classification of MRI(Magnetic Resonance Imaging)images in a multi-class and binary-class MS dataset.All hybrid systems started by enhancing MRI images using the fusion processes of a Gaussian filter and Contrast-Limited Adaptive Histogram Equalization(CLAHE).Then,the Gradient Vector Flow(GVF)algorithm was applied to select white matter(regions of interest)within the brain and segment them from the surrounding brain structures.These regions of interest were processed by CNN models(ResNet101,DenseNet201,and MobileNet)to extract deep feature maps,which were then combined into fused feature vectors of multi-CNN model combinations(ResNet101-DenseNet201,DenseNet201-MobileNet,ResNet101-MobileNet,and ResNet101-DenseNet201-MobileNet).The multi-CNN features underwent dimensionality reduction using ACO and MESbS algorithms to remove unimportant features and retain important features.The XGBoost classifier employed the resultant feature vectors for classification.All developed hybrid systems displayed promising outcomes.For multiclass classification,the XGBoost model using ResNet101-DenseNet201-MobileNet features selected by ACO attained 99.4%accuracy,99.45%precision,and 99.75%specificity,surpassing prior studies(93.76%accuracy).It reached 99.6%accuracy,99.65%precision,and 99.55%specificity in binary-class classification.These results demonstrate the effectiveness of multi-CNN fusion with feature selection in improving MS classification accuracy.展开更多
Efficient resource management within Internet of Things(IoT)environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities.This study introduces a neural network-ba...Efficient resource management within Internet of Things(IoT)environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities.This study introduces a neural network-based model that uses Long-Short-Term Memory(LSTM)to optimize resource allocation under dynam-ically changing conditions.Designed to monitor the workload on individual IoT nodes,the model incorporates long-term data dependencies,enabling adaptive resource distribution in real time.The training process utilizes Min-Max normalization and grid search for hyperparameter tuning,ensuring high resource utilization and consistent performance.The simulation results demonstrate the effectiveness of the proposed method,outperforming the state-of-the-art approaches,including Dynamic and Efficient Enhanced Load-Balancing(DEELB),Optimized Scheduling and Collaborative Active Resource-management(OSCAR),Convolutional Neural Network with Monarch Butterfly Optimization(CNN-MBO),and Autonomic Workload Prediction and Resource Allocation for Fog(AWPR-FOG).For example,in scenarios with low system utilization,the model achieved a resource utilization efficiency of 95%while maintaining a latency of just 15 ms,significantly exceeding the performance of comparative methods.展开更多
Livestock transportation is a key factor that contributes to the spatial spread of brucellosis.To analyze the impact of sheep transportation on brucellosis transmission,we develop a human–sheep coupled brucellosis mo...Livestock transportation is a key factor that contributes to the spatial spread of brucellosis.To analyze the impact of sheep transportation on brucellosis transmission,we develop a human–sheep coupled brucellosis model within a metapopulation network framework.Theoretically,we examine the positively invariant set,the basic reproduction number,the existence,uniqueness,and stability of disease-free equilibrium and the existence of the endemic equilibrium of the model.For practical application,using Heilongjiang province as a case study,we simulate brucellosis transmission across 12 cities based on data using three network types:the BA network,the ER network,and homogeneous mixing network.The simulation results indicate that the network's average degree plays a role in the spread of brucellosis.For BA and ER networks,the basic reproduction number and cumulative incidence of brucellosis stabilize when the network's average degree reaches 4 or 5.In contrast,sheep transport in a homogeneous mixing network accelerates the cross-regional spread of brucellosis,whereas transportation in a BA network helps to control it effectively.Furthermore,the findings suggest that the movement of sheep is not always detrimental to controlling the spread of brucellosis.For cities with smaller sheep populations,such as Shuangyashan and Qitaihe,increasing the transport of sheep outward amplifies the spatial spread of the disease.In contrast,in cities with larger sheep populations,such as Qiqihar,Daqing,and Suihua,moderate sheep outflow can help reduce the spread.In addition,cities with large livestock populations play a dominant role in the overall transmission dynamics,underscoring the need for stricter supervision in these areas.展开更多
文摘Terms of intelligence in 20th and 21th century mean the methods of automatic extraction, analysis, interpretation and use of information. Thus, the intelligence services in the future created an electronic database in which to their being classified intelligence products, users could choose between the latter themselves relevant information. The EU (European Union) that activities are carried out from at least in year 1996, terrorist attacks in year 200l is only accelerating. Proposals to increase surveillance and international cooperation in this field have been drawn up before September 11 2011. On the Web you can fmd a list of networks (Cryptome, 2011), which could be connected, or are under the control of the security service--NSA (National Security Agency). United States of America in year 1994 enacted a law for telephone communication--Digital Telephony Act, which would require manufacturers of telecommunications equipment, leaving some security holes for control. In addition, we monitor the Internet and large corporations. The example of the United States of America in this action reveals the organization for electronic freedoms against a telecom company that the NSA illegally gains access to data on information technology users and Internet telephony.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.
文摘The rapid digitalization of the energy sector has led to the deployment of large-scale smart metering systems that generate high-frequency time series data,creating new opportunities and challenges for energy anomaly detection.Accurate identification of anomalous patterns in building energy consumption is essential for optimizing operations,improving energy efficiency,and supporting grid reliability.This study investigates advanced feature engineering and machine learning modeling techniques for large-scale time series anomaly detection in building energy systems.Expanding upon previous benchmark frameworks,we introduce additional features such as oil price indices and solar cycle indicators,including sunset and sunrise times,to enhance the contextual understanding of consumption patterns.Our comparative modeling approach encompasses an extensive suite of algorithms,including KNeighborsUnif,KNeighborsDist,LightGBMXT,LightGBM,RandomForestMSE,CatBoost,ExtraTreesMSE,NeuralNetFastAI,XGBoost,NeuralNetTorch,and LightGBMLarge.Data preprocessing includes rigorous handling of missing values and normalization,while feature engineering focuses on temporal,environmental,and value-change attributes.The models are evaluated on a comprehensive dataset of smart meter readings,with performance assessed using metrics such as the Area Under the Receiver Operating Characteristic Curve(AUC-ROC).The results demonstrate that the integration of diverse exogenous variables and a hybrid ensemble of traditional tree-based and neural network models can significantly improve anomaly detection performance.This work provides new insights into the design of robust,scalable,and generalizable frameworks for energy anomaly detection in complex,real-world settings.
文摘In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming more and more mature, and as a result, its use across numerous industries is now standard. China is still in the early stages of developing its integration of emergency medical services with modern information technology;despite our progress, there are still numerous obstacles and constraints to overcome. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. Information may be communicated in a fast, multiple, and effective manner by utilizing modern information technology. This study aims to examine the current state of this field’s development, current issues, and the field’s future course of development.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting ProjectNumber(PNURSP2025R97),PrincessNourah bint AbdulrahmanUniversity,Riyadh,Saudi Arabia.
文摘The rapid expansion of the Internet of Things(IoT)and Edge Artificial Intelligence(AI)has redefined automation and connectivity acrossmodern networks.However,the heterogeneity and limited resources of IoT devices expose them to increasingly sophisticated and persistentmalware attacks.These adaptive and stealthy threats can evade conventional detection,establish remote control,propagate across devices,exfiltrate sensitive data,and compromise network integrity.This study presents a Software-Defined Internet of Things(SD-IoT)control-plane-based,AI-driven framework that integrates Gated Recurrent Units(GRU)and Long Short-TermMemory(LSTM)networks for efficient detection of evolving multi-vector,malware-driven botnet attacks.The proposed CUDA-enabled hybrid deep learning(DL)framework performs centralized real-time detection without adding computational overhead to IoT nodes.A feature selection strategy combining variable clustering,attribute evaluation,one-R attribute evaluation,correlation analysis,and principal component analysis(PCA)enhances detection accuracy and reduces complexity.The framework is rigorously evaluated using the N_BaIoT dataset under k-fold cross-validation.Experimental results achieve 99.96%detection accuracy,a false positive rate(FPR)of 0.0035%,and a detection latency of 0.18 ms,confirming its high efficiency and scalability.The findings demonstrate the framework’s potential as a robust and intelligent security solution for next-generation IoT ecosystems.
文摘In this paper,the problem of increasing information transfer authenticity is formulated.And to reach a decision,the control methods and algorithms based on the use of statistical and structural information redundancy are presented.It is assumed that the controllable information is submitted as the text element images and it contains redundancy,caused by statistical relations and non-uniformity probability distribution of the transmitted data.The use of statistical redundancy allows to develop the adaptive rules of the authenticity control which take into account non-stationarity properties of image data while transferring the information.The structural redundancy peculiar to the container of image in a data transfer package is used for developing new rules to control the information authenticity on the basis of pattern recognition mechanisms.The techniques offered in this work are used to estimate the authenticity in structure of data transfer packages.The results of comparative analysis for developed methods and algorithms show that their parameters of efficiency are increased by criterion of probability of undetected mistakes,labour input and cost of realization.
文摘The goal of this manuscript is to present a research finding, based on a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors, in line with usable-security principles. The study followed literature search techniques, as well as theoretical and empirical methods of factor validation. The strategy used in literature search includes Boolean keywords search, and citation guides, using mainly web of science databases. As guided by study objectives, 9 SM socio-technical factors were identified, verified and validated. Both theoretical and empirical validation processes were followed. Thus, a theoretical validity test was conducted on 45 Likert scale items, involving 10 subject experts. From the score ratings of the experts, Content Validity Index (CVI) was calculated to determine the degree to which the identified factors exhibit appropriate items for the construct being measured, and 7 factors attained an adequate level of validity index. However, for reliability test, 32 respondents and 45 Likert scale items were used. Whereby, Cronbach’s alpha coefficient (α-values) were generated using SPSS. Subsequently, 8 factors attained an adequate level of reliability. Overall, the validated factors include;1) usability—visibility, learnability, and satisfaction;2) education and training—help and documentation;3) SM technology development—error handling, and revocability;4) information security —security, privacy, and expressiveness. In this case, the confirmed factors would add knowledge by providing a theoretical basis for rationalizing information security requirements on SM usage.
文摘The COVID-19 outbreak initiated from the Chinese city of Wuhanand eventually affected almost every nation around the globe. From China,the disease started spreading to the rest of the world. After China, Italybecame the next epicentre of the virus and witnessed a very high death toll.Soon nations like the USA became severely hit by SARS-CoV-2 virus. TheWorld Health Organisation, on 11th March 2020, declared COVID-19 a pandemic. To combat the epidemic, the nations from every corner of the worldhas instituted various policies like physical distancing, isolation of infectedpopulation and researching on the potential vaccine of SARS-CoV-2. Toidentify the impact of various policies implemented by the affected countrieson the pandemic spread, a myriad of AI-based models have been presented toanalyse and predict the epidemiological trends of COVID-19. In this work, theauthors present a detailed study of different articial intelligence frameworksapplied for predictive analysis of COVID-19 patient record. The forecastingmodels acquire information from records to detect the pandemic spreadingand thus enabling an opportunity to take immediate actions to reduce thespread of the virus. This paper addresses the research issues and correspondingsolutions associated with the prediction and detection of infectious diseaseslike COVID-19. It further focuses on the study of vaccinations to cope withthe pandemic. Finally, the research challenges in terms of data availability,reliability, the accuracy of the existing prediction models and other open issuesare discussed to outline the future course of this study.
文摘This paper puts forward a communication programming method between robot and external computer based on RPC (Remote Produce Call) communication method, which realizes robot distributed controlling network system model. And a new Robot off line programming method is built based on this communication method and network model. Further more, as an example, robot auto marking and auto cutting of shipbuilding profile system is developed, which proves the ideas of author’s off line programming and development methods of robot flexible automation system. As a result, this paper presents a new method for developing robot flexible automation system.
文摘The work is devoted to the demonstration of the possibility of applying the formulas of information handling obtained in the theory of non-force interaction for the natural language processing. These formulas were obtained in computer experiments in modelling the movement and interaction of material objects by changing the amount of information that triggers this movement. The hypothesis, objective and tasks of the experimental research were defined. The methods and software tools were developed to conduct the experiments. To compare different results of the simulation of the processes in a human brain during speech production, there was a range of methods proposed to calculate the estimate of sequence of fragments of natural language texts including the methods based on linear approximation. The experiments confirmed that the formulas of information handling obtained in the theory of non-force interaction reflect the processes of language formation. It is shown that the offered approach can successfully be used to create systems of reactive artificial intelligence machines. Experimental and, presented in this work, practical results constitute that the non-force (informational) interaction formulae are generally valid.
文摘Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.
基金CN was funded by The Graduate School in Mathematics and Computing(FMB),SwedenOC was funded by a EURYI Award and a Future Research Leaders grant from SSFJAC was funded by an“Isidro Parga Pondal”contract from the autonomous administration Xunta de Galicia and by research projects BFU2009-11988 and BFU2010-20003 form the Spanish Ministry of Science and Innovation.
文摘Genetic effect estimates for loci detected in quantitative trait locus (QTL) mapping experiments depend upon two factors. First, they are parameterizations of the genotypic values determined by the model of genetic effects. Second, they are consequently also affected by the regression method used to estimate the genotypic values from the observed marker genotypes and phenotypes. There are two common causes for marker-genotype data to be incomplete in those experiments—missing marker-genotypes and within-interval mapping. Different regression methods tend to differ in how this missing information is represented and handled. In this communication we explain why the estimates of genetic effects of QTL obtained using standard regression methods are not coherent with the model of genetic effects and indeed show intrinsic inconsistencies when there is incomeplete genotype information. We then describe the interval mapping by imputations (IMI) regression method and prove that it overcomes those problems. A numerical example is used to illustrate the use of IMI and the consequences of using current methods of choice. IMI enables researchers to obtain estimates of genetic effects that are coherent with the model of genetic effects used, despite incomplete genotype information. Furthermore, because IMI allows orthogonal estimation of genetic effects, it shows potential performance advantages for being implemented in QTL mapping tools.
文摘This study systematically reviews the Internet of Things(IoT)security research based on literature from prominent international cybersecurity conferences over the past five years,including ACM Conference on Computer and Communications Security(ACM CCS),USENIX Security,Network and Distributed System Security Symposium(NDSS),and IEEE Symposiumon Security and Privacy(IEEE S&P),along with other high-impact studies.It organizes and analyzes IoT security advancements through the lenses of threats,detection methods,and defense strategies.The foundational architecture of IoT systems is first outlined,followed by categorizing major threats into eight distinct types and analyzing their root causes and potential impacts.Next,six prominent threat detection techniques and five defense strategies are detailed,highlighting their technical principles,advantages,and limitations.The paper concludes by addressing the key challenges still confronting IoT security and proposing directions for future research to enhance system resilience and protection.
基金funded by the National Social Science Fund of China(22CTQ019).
文摘As blockchain technology advances,non-fungible tokens(NFTs)are emerging as unconventional assets in the commercial market.However,it is necessary to establish a comprehensive NFT ecosystem that addresses the prevailing public concerns.This study aimed to bridge this gap by analyzing user-generated content on prominent social media platforms such as Twitter,Weibo,and Reddit.Employing text clustering and topic modeling techniques,such as Latent Dirichlet Allocation,we constructed an analytical framework to delve into the intricacies of the NFT ecosystem.Our investigation revealed seven distinct topics from Twitter and Reddit data and eight topics from Weibo data.Weibo users predominantly engaged in reviews and critiques,whereas Twitter and Reddit users emphasized personal experiences and perceptions.The NFT ecosystem encompasses several crucial elements,including transactions,customers,infrastructure,products,environments,and perceptions.By identifying the prevailing trends and common issues,this study offers valuable guidance for the development of NFT ecosystems.
基金funded and supported by the UCSI University Research Excellence&Innovation Grant(REIG),REIG-ICSDI-2024/044.
文摘In the rapidly evolving landscape of intelligent transportation systems,the security and authenticity of vehicular communication have emerged as critical challenges.As vehicles become increasingly interconnected,the need for robust authentication mechanisms to safeguard against cyber threats and ensure trust in an autonomous ecosystem becomes essential.On the other hand,using intelligence in the authentication system is a significant attraction.While existing surveys broadly address vehicular security,a critical gap remains in the systematic exploration of Deep Learning(DL)-based authentication methods tailored to these communication paradigms.This survey fills that gap by offering a comprehensive analysis of DL techniques—including supervised,unsupervised,reinforcement,and hybrid learning—for vehicular authentication.This survey highlights novel contributions,such as a taxonomy of DL-driven authentication protocols,real-world case studies,and a critical evaluation of scalability and privacy-preserving techniques.Additionally,this paper identifies unresolved challenges,such as adversarial resilience and real-time processing constraints,and proposes actionable future directions,including lightweight model optimization and blockchain integration.By grounding the discussion in concrete applications,such as biometric authentication for driver safety and adaptive key management for infrastructure security,this survey bridges theoretical advancements with practical deployment needs,offering a roadmap for next-generation secure intelligent vehicular ecosystems for the modern world.
文摘Purpose:Generally,the scientific comparison has been done with the help of the overall impact of scholars.Although it is very easy to compare scholars,but how can we assess the scientific impact of scholars who have different research careers?It is very obvious,the scholars may gain a high impact if they have more research experience or have spent more time(in terms of research career in a year).Then we cannot compare two scholars who have different research careers.Many bibliometrics indicators address the time-span of scholars.In this series,the h-index sequence and EM/EM’-index sequence have been introduced for assessment and comparison of the scientific impact of scholars.The h-index sequence,EM-index sequence,and EM’-index sequence consider the yearly impact of scholars,and comparison is done by the index value along with their component value.The time-series indicators fail to give a comparative analysis between senior and junior scholars if there is a huge difference in both scholars’research careers.Design/methodology/approach:We have proposed the cumulative index calculation method to appraise the scientific impact of scholars till that age and tested it with 89 scholars data.Findings:The proposed mechanism is implemented and tested on 89 scholars’publication data,providing a clear difference between the scientific impact of two scholars.This also helps in predicting future prominent scholars based on their research impact.Research limitations:This study adopts a simplistic approach by assigning equal credit to all authors,regardless of their individual contributions.Further,the potential impact of career breaks on research productivity is not taken into account.These assumptions may limit the generalizability of our findings Practical implications:The proposed method can be used by respected institutions to compare their scholars impact.Funding agencies can also use it for similar purposes.Originality/value:This research adds to the existing literature by introducing a novel methodology for comparing the scientific impact of scholars.The outcomes of this research have notable implications for the development of more precise and unbiased research assessment frameworks,enabling a more equitable evaluation of scholarly contributions.
基金supported by the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2025).
文摘Distributed denial of service(DDoS)attacks are common network attacks that primarily target Internet of Things(IoT)devices.They are critical for emerging wireless services,especially for applications with limited latency.DDoS attacks pose significant risks to entrepreneurial businesses,preventing legitimate customers from accessing their websites.These attacks require intelligent analytics before processing service requests.Distributed denial of service(DDoS)attacks exploit vulnerabilities in IoT devices by launchingmulti-point distributed attacks.These attacks generate massive traffic that overwhelms the victim’s network,disrupting normal operations.The consequences of distributed denial of service(DDoS)attacks are typically more severe in software-defined networks(SDNs)than in traditional networks.The centralised architecture of these networks can exacerbate existing vulnerabilities,as these weaknesses may not be effectively addressed in this model.The preliminary objective for detecting and mitigating distributed denial of service(DDoS)attacks in software-defined networks(SDN)is to monitor traffic patterns and identify anomalies that indicate distributed denial of service(DDoS)attacks.It implements measures to counter the effects ofDDoS attacks,and ensure network reliability and availability by leveraging the flexibility and programmability of SDN to adaptively respond to threats.The authors present a mechanism that leverages the OpenFlow and sFlow protocols to counter the threats posed by DDoS attacks.The results indicate that the proposed model effectively mitigates the negative effects of DDoS attacks in an SDN environment.
文摘Multiple Sclerosis(MS)poses significant health risks.Patients may face neurodegeneration,mobility issues,cognitive decline,and a reduced quality of life.Manual diagnosis by neurologists is prone to limitations,making AI-based classification crucial for early detection.Therefore,automated classification using Artificial Intelligence(AI)techniques has a crucial role in addressing the limitations of manual classification and preventing the development of MS to advanced stages.This study developed hybrid systems integrating XGBoost(eXtreme Gradient Boosting)with multi-CNN(Convolutional Neural Networks)features based on Ant Colony Optimization(ACO)and Maximum Entropy Score-based Selection(MESbS)algorithms for early classification of MRI(Magnetic Resonance Imaging)images in a multi-class and binary-class MS dataset.All hybrid systems started by enhancing MRI images using the fusion processes of a Gaussian filter and Contrast-Limited Adaptive Histogram Equalization(CLAHE).Then,the Gradient Vector Flow(GVF)algorithm was applied to select white matter(regions of interest)within the brain and segment them from the surrounding brain structures.These regions of interest were processed by CNN models(ResNet101,DenseNet201,and MobileNet)to extract deep feature maps,which were then combined into fused feature vectors of multi-CNN model combinations(ResNet101-DenseNet201,DenseNet201-MobileNet,ResNet101-MobileNet,and ResNet101-DenseNet201-MobileNet).The multi-CNN features underwent dimensionality reduction using ACO and MESbS algorithms to remove unimportant features and retain important features.The XGBoost classifier employed the resultant feature vectors for classification.All developed hybrid systems displayed promising outcomes.For multiclass classification,the XGBoost model using ResNet101-DenseNet201-MobileNet features selected by ACO attained 99.4%accuracy,99.45%precision,and 99.75%specificity,surpassing prior studies(93.76%accuracy).It reached 99.6%accuracy,99.65%precision,and 99.55%specificity in binary-class classification.These results demonstrate the effectiveness of multi-CNN fusion with feature selection in improving MS classification accuracy.
基金funding of the Deanship of Graduate Studies and Scientific Research,Jazan University,Saudi Arabia,through Project Number:ISP-2024.
文摘Efficient resource management within Internet of Things(IoT)environments remains a pressing challenge due to the increasing number of devices and their diverse functionalities.This study introduces a neural network-based model that uses Long-Short-Term Memory(LSTM)to optimize resource allocation under dynam-ically changing conditions.Designed to monitor the workload on individual IoT nodes,the model incorporates long-term data dependencies,enabling adaptive resource distribution in real time.The training process utilizes Min-Max normalization and grid search for hyperparameter tuning,ensuring high resource utilization and consistent performance.The simulation results demonstrate the effectiveness of the proposed method,outperforming the state-of-the-art approaches,including Dynamic and Efficient Enhanced Load-Balancing(DEELB),Optimized Scheduling and Collaborative Active Resource-management(OSCAR),Convolutional Neural Network with Monarch Butterfly Optimization(CNN-MBO),and Autonomic Workload Prediction and Resource Allocation for Fog(AWPR-FOG).For example,in scenarios with low system utilization,the model achieved a resource utilization efficiency of 95%while maintaining a latency of just 15 ms,significantly exceeding the performance of comparative methods.
基金Project supported by the National Natural Science Foundation of China(Grant No.12101443,12371493)the Natural Science Foundation of Shanxi Province(Grant Nos.20210302124260 and 202303021221024)。
文摘Livestock transportation is a key factor that contributes to the spatial spread of brucellosis.To analyze the impact of sheep transportation on brucellosis transmission,we develop a human–sheep coupled brucellosis model within a metapopulation network framework.Theoretically,we examine the positively invariant set,the basic reproduction number,the existence,uniqueness,and stability of disease-free equilibrium and the existence of the endemic equilibrium of the model.For practical application,using Heilongjiang province as a case study,we simulate brucellosis transmission across 12 cities based on data using three network types:the BA network,the ER network,and homogeneous mixing network.The simulation results indicate that the network's average degree plays a role in the spread of brucellosis.For BA and ER networks,the basic reproduction number and cumulative incidence of brucellosis stabilize when the network's average degree reaches 4 or 5.In contrast,sheep transport in a homogeneous mixing network accelerates the cross-regional spread of brucellosis,whereas transportation in a BA network helps to control it effectively.Furthermore,the findings suggest that the movement of sheep is not always detrimental to controlling the spread of brucellosis.For cities with smaller sheep populations,such as Shuangyashan and Qitaihe,increasing the transport of sheep outward amplifies the spatial spread of the disease.In contrast,in cities with larger sheep populations,such as Qiqihar,Daqing,and Suihua,moderate sheep outflow can help reduce the spread.In addition,cities with large livestock populations play a dominant role in the overall transmission dynamics,underscoring the need for stricter supervision in these areas.