Remote sensing and web-based platforms have emerged as vital tools in the effective monitoring of mangrove ecosystems, which are crucial for coastal protection, biodiversity, and carbon sequestration. Utilizing satellite imagery and aerial data, remote sensing allows researchers to assess the health and extent of mangrove forests over large areas and time periods, providing insights into changes due to environmental stressors like climate change, urbanization, and deforestation. Coupled with web-based platforms, this technology facilitates real-time data sharing and collaborative research efforts among scientists, policymakers, and conservationists. Thus, there is a need to grow this research interest among experts working in this kind of ecosystem. The aim of this paper is to provide a comprehensive literature review of the role of remote sensing and web-based platforms in monitoring mangrove ecosystems. The paper used a thematic approach to extract specific information for the discussion, which helped demonstrate the efficiency of digital environmental monitoring. Web-based platforms and remote sensing represent a powerful toolset for environmental monitoring, particularly in the context of forest ecosystems. They make vital data accessible, promote collaboration among stakeholders, support evidence-based policymaking, and engage communities in conservation efforts. As experts confront the urgent challenges posed by climate change and environmental degradation, leveraging technology through web-based platforms is essential for fostering a sustainable future for the world's forests.
Large-scale deep-seated landslides pose a significant threat to human life and infrastructure. Therefore, closely monitoring these landslides is crucial for assessing and mitigating their associated risks. In this paper, the authors introduce the So Lo Mon framework, a comprehensive monitoring system developed for three large-scale landslides in the Autonomous Province of Bolzano, Italy. A web-based platform integrates various monitoring data (GNSS, topographic data, in-place inclinometers), providing a user-friendly interface for visualizing and analyzing the collected data. This facilitates the identification of trends and patterns in landslide behaviour, enabling the triggering of warnings and the implementation of appropriate mitigation measures. The So Lo Mon platform has proven to be an invaluable tool for managing the risks associated with large-scale landslides through non-structural measures and for guiding the design of countermeasure works. It serves as a centralized data repository, offering visualization and analysis tools. This information empowers decision-makers to make informed choices regarding risk mitigation, ultimately ensuring the safety of communities and infrastructure.
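As an illustration of the kind of non-structural measure such a platform can automate, the sketch below applies a simple velocity-threshold warning rule to a GNSS displacement series; the window length, thresholds, and function names are illustrative assumptions, not part of the So Lo Mon system.

```python
# Minimal sketch (not the So Lo Mon implementation): a threshold-based
# warning rule on a GNSS displacement time series. Window length and the
# velocity thresholds are illustrative placeholders.
import numpy as np

def displacement_velocity(timestamps_days, displacement_mm, window=7):
    """Estimate velocity (mm/day) by a linear fit over the last `window` samples."""
    t = np.asarray(timestamps_days[-window:], dtype=float)
    d = np.asarray(displacement_mm[-window:], dtype=float)
    slope, _ = np.polyfit(t, d, 1)   # mm per day
    return slope

def warning_level(velocity_mm_per_day, attention=1.0, alarm=5.0):
    """Map velocity to a qualitative warning level (thresholds are hypothetical)."""
    if velocity_mm_per_day >= alarm:
        return "alarm"
    if velocity_mm_per_day >= attention:
        return "attention"
    return "ordinary"

# Example: a slowly accelerating monitoring point.
days = np.arange(30)
disp = 0.02 * days ** 2          # synthetic displacement in mm
v = displacement_velocity(days, disp)
print(f"velocity = {v:.2f} mm/day -> level = {warning_level(v)}")
```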
The surge in smishing attacks underscores the urgent need for robust, real-time detection systems powered by advanced deep learning models. This paper introduces PhishNet, a novel ensemble learning framework that integrates transformer-based models (RoBERTa) and large language models (LLMs) (GPT-OSS 120B, LLaMA 3.3 70B, and Qwen3 32B) to significantly enhance smishing detection performance. To mitigate class imbalance, we apply synthetic data augmentation using T5 and leverage various text preprocessing techniques. Our system employs a dual-layer voting mechanism: weighted majority voting among LLMs and a final ensemble vote to classify messages as ham, spam, or smishing. Experimental results show an average accuracy improvement from 96% to 98.5% compared to the best standalone transformer, and from 93% to 98.5% compared to the LLMs across datasets. Furthermore, we present a real-time, user-friendly application to operationalize our detection model for practical use. PhishNet demonstrates superior scalability, usability, and detection accuracy, filling critical gaps in current smishing detection methodologies.
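To make the dual-layer voting concrete, the following sketch shows one way weighted majority voting among LLM judgments could feed a final ensemble decision over the ham/spam/smishing labels; the weights and tie-breaking rule are assumptions for illustration rather than PhishNet's actual configuration.

```python
# Illustrative sketch of a dual-layer voting scheme of the kind described;
# weights, tie-breaking, and model outputs are hypothetical, not PhishNet's.
from collections import Counter

def weighted_majority(votes, weights):
    """First layer: weighted majority vote among LLM predictions."""
    score = Counter()
    for label, w in zip(votes, weights):
        score[label] += w
    return score.most_common(1)[0][0]

def final_ensemble(transformer_pred, llm_pred):
    """Second layer: combine the transformer prediction with the aggregated
    LLM vote. The disagreement policy below is an assumption for this sketch."""
    if transformer_pred == llm_pred:
        return transformer_pred
    # On disagreement, prefer the more security-conservative label.
    severity = {"ham": 0, "spam": 1, "smishing": 2}
    return max((transformer_pred, llm_pred), key=severity.get)

llm_votes = ["smishing", "spam", "smishing"]   # e.g., three LLM judgments
llm_weights = [0.4, 0.25, 0.35]                # per-model weights (assumed)
print(final_ensemble("spam", weighted_majority(llm_votes, llm_weights)))
```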
Zero-click attacks represent an advanced cybersecurity threat, capable of compromising devices without user interaction. High-profile examples such as Pegasus, Simjacker, Bluebugging, and Bluesnarfing exploit hidden vulnerabilities in software and communication protocols to silently gain access, exfiltrate data, and enable long-term surveillance. Their stealth and ability to evade traditional defenses make detection and mitigation highly challenging. This paper addresses these threats by systematically mapping the tactics and techniques of zero-click attacks using the MITRE ATT&CK framework, a widely adopted standard for modeling adversarial behavior. Through this mapping, we categorize real-world attack vectors and better understand how such attacks operate across the cyber kill chain. To support threat detection efforts, we propose an Active Learning-based method to efficiently label the Pegasus spyware dataset in alignment with the MITRE ATT&CK framework. This approach reduces the effort of manually annotating data while improving the quality of the labeled data, which is essential for training robust cybersecurity models. In addition, our analysis highlights the structured execution paths of zero-click attacks and reveals gaps in current defense strategies. The findings emphasize the importance of forward-looking strategies such as continuous surveillance, dynamic threat profiling, and security education. By bridging zero-click attack analysis with the MITRE ATT&CK framework and leveraging machine learning for dataset annotation, this work provides a foundation for more accurate threat detection and the development of more resilient and structured cybersecurity frameworks.
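The sketch below illustrates the general shape of a pool-based active learning loop with uncertainty sampling, the family of methods the labeling approach builds on; the synthetic data, logistic regression model, and query budget are placeholders, not the paper's Pegasus/ATT&CK pipeline.

```python
# A minimal pool-based active learning loop with uncertainty (margin) sampling,
# sketched on synthetic data rather than the Pegasus spyware dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(10):                                # annotation budget: 10 queries
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    margin = np.abs(proba[:, 0] - proba[:, 1])     # small margin = most uncertain
    query = pool[int(np.argmin(margin))]
    labeled.append(query)                          # an "oracle" supplies the label
    pool.remove(query)

print(f"labeled {len(labeled)} samples, train accuracy "
      f"{model.score(X[labeled], y[labeled]):.2f}")
```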
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips the edges with the top-K highest scores, as determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets (NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1) to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves accuracy recoveries of up to 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
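As a minimal illustration of the final perturbation step only, the sketch below flips the top-K highest-scoring node pairs of an undirected adjacency matrix; the random scores stand in for the paper's implicit estimator, and the graph is synthetic.

```python
# Sketch of top-K edge flipping: given per-pair importance scores (random
# placeholders here, standing in for the implicit estimator), flip the K
# highest-scoring pairs of an undirected adjacency matrix.
import numpy as np

def flip_top_k_edges(adj, scores, k):
    """Flip (add or remove) the k highest-scoring node pairs."""
    n = adj.shape[0]
    iu, ju = np.triu_indices(n, k=1)              # candidate pairs (i < j)
    order = np.argsort(scores[iu, ju])[::-1][:k]  # top-K by score
    perturbed = adj.copy()
    for i, j in zip(iu[order], ju[order]):
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]
    return perturbed

rng = np.random.default_rng(0)
adj = (rng.random((8, 8)) > 0.7).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T          # symmetric, no self-loops
scores = rng.random((8, 8))                       # stand-in for estimator output
changed = np.abs(flip_top_k_edges(adj, scores, k=3) - adj).sum() // 2
print(int(changed), "edges flipped")
```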
Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational costs and offers limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
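The scoring-and-selection idea can be sketched in a few lines: token-level embedding-gradient L2 norms are aggregated to instruction level, then a sliding window picks the most influential contiguous segment. Everything below (dimensions, token-to-instruction map, window size) is a toy stand-in, not AIMA's implementation.

```python
# Sketch of the scoring/selection steps described: per-token gradient norms,
# aggregation to instruction level, and a sliding-window segment choice.
import numpy as np

def instruction_scores(token_grads, token_to_instr, n_instr):
    """Sum L2 norms of token embedding gradients per instruction."""
    norms = np.linalg.norm(token_grads, axis=1)          # one norm per token
    scores = np.zeros(n_instr)
    for tok_idx, instr_idx in enumerate(token_to_instr):
        scores[instr_idx] += norms[tok_idx]
    return scores

def best_window(scores, window):
    """Return (start, end) of the contiguous segment with the largest score sum."""
    sums = np.convolve(scores, np.ones(window), mode="valid")
    start = int(np.argmax(sums))
    return start, start + window

rng = np.random.default_rng(0)
token_grads = rng.normal(size=(40, 128))                 # 40 tokens, dim 128 (toy)
token_to_instr = rng.integers(0, 10, size=40)            # 10 instructions (toy map)
scores = instruction_scores(token_grads, token_to_instr, n_instr=10)
print("relocate instructions in range", best_window(scores, window=3))
```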
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
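As a hedged illustration of the sampling-plus-ensemble pipeline, the sketch below oversamples an imbalanced synthetic feature set with SMOTE and trains a random forest; SMOTE and the random forest are stand-ins for one of the eight sampling techniques and one of the two ensemble models, and the features are not the actual UNSW-NB15 or CICIoT-2023 metadata.

```python
# Illustrative imbalance-handling sketch: rebalance the training split, fit an
# ensemble classifier, and report F1 on held-out traffic. All data are synthetic.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)                  # 5% "attack" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # rebalance train set
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(f"F1 on held-out traffic: {f1_score(y_te, clf.predict(X_te)):.3f}")
```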
The exponential growth of the Internet of Things (IoT) has introduced significant security challenges, with zero-day attacks emerging as one of the most critical and challenging threats. Traditional Machine Learning (ML) and Deep Learning (DL) techniques have demonstrated promising early detection capabilities. However, their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints, high computational costs, and the costly, time-intensive process of data labeling. To address these challenges, this study proposes a Federated Learning (FL) framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks. By employing Deep Neural Networks (DNNs) and decentralized model training, the approach reduces computational complexity while improving detection accuracy. The proposed model demonstrates robust performance, achieving accuracies of 94.34%, 99.95%, and 87.94% on the publicly available Kitsune, Bot-IoT, and UNSW-NB15 datasets, respectively. Furthermore, its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets, TON-IoT and IoT-23, using a Deep Federated Learning (DFL) framework, underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments. Experimental results demonstrate superior performance over existing methods, establishing the proposed framework as an efficient and scalable solution for IoT security.
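A minimal FedAvg-style aggregation step, which underlies most federated learning frameworks of this kind, is sketched below with mocked local training; the client counts, rounds, and parameter sizes are arbitrary, and the paper's DFL setup is certainly more elaborate.

```python
# FedAvg-style aggregation sketch: each client trains locally (mocked here),
# then the server averages parameters weighted by client sample counts.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                      # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
global_w = np.zeros(10)
for rnd in range(3):                                        # communication rounds
    locals_w, sizes = [], []
    for client in range(5):
        # Stand-in for local DNN training: a noisy step away from the global model.
        locals_w.append(global_w + rng.normal(scale=0.1, size=10))
        sizes.append(rng.integers(100, 1000))
    global_w = fed_avg(locals_w, sizes)
print("global model after 3 rounds:", np.round(global_w, 3))
```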
The paper, against the backdrop of the web-based autonomous learning put forward by the recent college English teaching reform, aims to explore teachers' roles in this learning process as perceived by students, using questionnaires and interviews. It further analyzes the possible reasons why students perceive their teachers' roles in such a way, in the hope of providing some implications for web-based college English autonomous learning.
In light of the theory of constructivism, the interactive web-based college English teaching model is intended to facilitate "autonomy", "inquiry", and "cooperation" in learning English. This paper presents research in which the interactive web-based college English teaching model is used to reshape the teacher's and learner's roles in the classroom. Based on this research, an exploration is made, within the framework of the interactive web-based model, of the design of the "teaching model" and "learning model", their application, and related potential problems.
The thesis presents a comparative study of students' autonomous listening practice in a web-based autonomous learning center and traditional teacher-dominated listening practice in a traditional language lab. The purpose of the study is to find out how students' listening strategies differ between these two approaches and thereby determine which one better facilitates students' listening proficiency.
Metacognitive strategies are regarded as the most advanced of all learning strategies. This study focuses on the application of metacognitive strategies to English listening in the web-based self-access learning environment (WSLE) and tries to provide some references for students and teachers in vocational colleges.
The paper is a literature review aiming to examine the effectiveness of web-based college English learning, with a focus on learners' autonomous learning. Previous studies indicate that web-based learning can improve learners' autonomous learning, while also revealing a number of problems. Therefore, this paper first gives a summary and critique of research studies on web-based autonomous learning and the factors influencing learners' autonomous learning ability; then, areas that deserve further study are indicated.
Recently, with the rapid growth of information technology, many studies have been performed to implement Web-based manufacturing systems. Such technologies are expected to meet the needs of manufacturing industries that want to adopt E-manufacturing systems for globalization, agility, and digitalization in order to cope with rapidly changing market requirements. In this research, a real-time Web-based machine tool and machining process monitoring system is developed as the first step toward implementing an E-manufacturing system. In this system, the current variations of the main spindle and feeding motors are measured using hall sensors, and the relationship between the cutting force and the spindle motor RMS (root mean square) current at various spindle rotational speeds is obtained. Thermocouples are used to measure temperature variations of important heat sources of the machine tool. A rule-based expert system is also applied to decide whether the machining process and machine tool are in normal condition. Finally, the effectiveness of the developed system is verified through a series of experiments.
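Two of the described steps, computing the RMS of a sampled motor current and applying a simple normality rule, can be sketched as follows; the thresholds and the 60 Hz test waveform are illustrative assumptions, not values from the paper.

```python
# Sketch: RMS of a hall-sensor current waveform plus a toy rule-based check.
import numpy as np

def rms(samples):
    """Root mean square of a current waveform sampled by a hall sensor."""
    x = np.asarray(samples, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

def machine_state(spindle_rms_a, temperature_c,
                  rms_limit_a=12.0, temp_limit_c=70.0):
    """Toy expert-system rule: current and temperature must both stay in range."""
    if spindle_rms_a > rms_limit_a:
        return "abnormal: spindle load too high"
    if temperature_c > temp_limit_c:
        return "abnormal: overheating heat source"
    return "normal"

t = np.linspace(0, 1, 5000)
current = 10.0 * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)    # ~10 A RMS sine
print(f"{rms(current):.2f} A RMS ->", machine_state(rms(current), temperature_c=45.0))
```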
Objective: To explore the effect of a web-based real-time interactive intervention teaching model on the self-efficacy of Gestational Diabetes Mellitus (GDM) patients. Method: Based on the hospital's antenatal check-up archives from June 2018 to January 2019, patients diagnosed with GDM in the second trimester were randomly divided into a control group (100 cases) and an experimental group (121 cases). Patients in the control group received routine care following the diabetes mellitus one-day outpatient guidance, while patients in the experimental group received a social media real-time interactive teaching intervention in addition to routine care and followed a nursing intervention scheme based on the knowledge-attitude-practice model. GDM knowledge, self-efficacy, and self-management behavior indicators were compared between the two groups. Results: After the intervention, the self-efficacy scores, blood glucose monitoring frequency, and blood glucose compliance rates of the experimental group were significantly higher than those of the control group (P < 0.05). The post-intervention GDM knowledge scores of the experimental group were higher than those of the control group, but the difference was not statistically significant (P = 0.072). Conclusion: The web-based real-time interactive intervention teaching model can effectively improve the self-efficacy of GDM patients and promote the formation of healthy behaviors.
AIM: To assess the reliability of a web-based version of the Ocular Surface Disease Index in Chinese (C-OSDI) in patients with clinically diagnosed dry eye disease (DED). METHODS: A total of 254 Chinese participants (51% male, 129/254; mean age: 27.90±9.06 y) with DED completed paper- and web-based versions of the C-OSDI questionnaire in a randomized crossover design. Ophthalmic examination and DED diagnosis were performed before the participants were invited to join the study. Participants were randomly assigned to either group A (paper-based first and web-based second) or group B (web-based first and paper-based second). The final data analysis included participants who had successfully completed both versions of the C-OSDI. Demographic characteristics, test-retest reliability, and agreement of individual items, subscales, and total score were evaluated with intraclass correlation coefficients (ICC), Spearman rank correlation, the Wilcoxon test, and Rasch analysis. RESULTS: Reliability indexes were adequate: the Pearson correlation was greater than 0.8, ICCs ranged from 0.827 to 0.982, and the total C-OSDI score was not statistically different between the two versions. The mean-squares fit statistics were very low compared to 1, indicating that the responses to the items had a high degree of predictability under the model. When comparing favorability, 72% (182/254) of the participants preferred the web-based assessment. CONCLUSION: The web-based C-OSDI is reliable in assessing DED, and its correlation with the paper-based version is significant in all subscales and the overall total score. The web-based C-OSDI can be administered to assess individuals with DED, as participants predominantly favored online assessment.
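For readers unfamiliar with the paired test-retest statistics used here, the sketch below runs Pearson, Spearman, and Wilcoxon comparisons on simulated paper versus web total scores; the data are synthetic, and the ICC and Rasch analyses are omitted from this illustration.

```python
# Paired test-retest comparison sketch on simulated paper vs. web OSDI totals.
import numpy as np
from scipy.stats import spearmanr, wilcoxon, pearsonr

rng = np.random.default_rng(0)
paper = rng.uniform(0, 100, size=254)                           # total score, 0-100
web = np.clip(paper + rng.normal(scale=5, size=254), 0, 100)    # retest with noise

r, _ = pearsonr(paper, web)
rho, p_rho = spearmanr(paper, web)
stat, p_diff = wilcoxon(paper, web)     # tests whether the paired difference is zero
print(f"Pearson r={r:.2f}, Spearman rho={rho:.2f} (p={p_rho:.1e}), "
      f"Wilcoxon p={p_diff:.2f}")
```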
In this paper we propose a service-oriented architecture for spatial data integration (SOA-SDI) in the context of a large number of available spatial data sources physically sitting at different places, and we develop web-based GIS systems based on SOA-SDI, allowing client applications to pull in, analyze, and present spatial data from those available spatial data sources. The proposed architecture logically includes four layers or components: a layer of multiple data provider services, a data integration layer, a layer of backend services, and a front-end graphical user interface (GUI) for spatial data presentation. On the basis of the four-layered SOA-SDI framework, WebGIS applications can be quickly deployed, which shows that SOA-SDI has the potential to reduce software development effort and shorten the development period.
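A schematic rendering of the four-layer idea, with hypothetical class and method names standing in for real web services (for example OGC-style data providers), might look like the following sketch.

```python
# Schematic sketch of the 4-layer SOA-SDI idea in plain Python; in practice the
# providers and backend would be web services, and these names are hypothetical.
from typing import Dict, List

class DataProviderService:
    """Layer 1: one spatial data source exposing features as GeoJSON-like dicts."""
    def __init__(self, name: str, features: List[Dict]):
        self.name, self.features = name, features
    def get_features(self, layer: str) -> List[Dict]:
        return [f for f in self.features if f["layer"] == layer]

class IntegrationLayer:
    """Layer 2: pulls and merges features from all registered providers."""
    def __init__(self, providers: List[DataProviderService]):
        self.providers = providers
    def query(self, layer: str) -> List[Dict]:
        return [f for p in self.providers for f in p.get_features(layer)]

class BackendService:
    """Layer 3: analysis on integrated data (here, a trivial feature count)."""
    def __init__(self, integration: IntegrationLayer):
        self.integration = integration
    def count(self, layer: str) -> int:
        return len(self.integration.query(layer))

# Layer 4 (the front-end GUI) would call the backend over HTTP; a print stands in.
a = DataProviderService("cityA", [{"layer": "roads", "geom": "LINESTRING(...)"}])
b = DataProviderService("cityB", [{"layer": "roads", "geom": "LINESTRING(...)"}])
print("roads features:", BackendService(IntegrationLayer([a, b])).count("roads"))
```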