Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action-classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multimodal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, and boosting socio-economic development. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wide Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading license-free IoT RF protocols, ZigBee and LoRaWAN. Intensive field tests were conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee was tested in the Tadicherla open-cast coal mine in India; LoRaWAN field tests were conducted at an Associated Cement Companies (ACC) limestone mine in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models (Free Space, Egli, Okumura-Hata, COST 231-Hata, and Ericsson), combined with key performance metrics, was employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in the I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: Normalized Root Mean Square Error (NRMSE) of 0.030, Mean Square Error (MSE) of 4.950, Mean Absolute Percentage Error (MAPE) of 0.249, and Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839, and SI of 2.194. For the LoRaWAN protocol, the COST 231-Hata model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218, and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287, and SI of 3.927. These results support decision-making for mining communication needs and ensure reliable communications even in the face of formidable obstacles, providing valuable insights into optimizing wireless communication and paving the way for a more connected and productive future in the mining industry.
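The goodness-of-fit metrics used above (R², NRMSE, MSE, MAPE, SI) can be reproduced from paired measured and model-predicted path-loss values. The sketch below couples one common textbook form of the Egli model with those metrics; note that normalization conventions for NRMSE and SI vary between papers, so the mean-based normalizations chosen here are assumptions, not necessarily the study's.

```python
import math

def egli_path_loss_db(f_mhz, d_km, h_b, h_m):
    """One common textbook form of the Egli model (mobile antenna below 10 m)."""
    return (20 * math.log10(f_mhz) + 40 * math.log10(d_km)
            - 20 * math.log10(h_b) - 10 * math.log10(h_m) + 76.3)

def fit_metrics(measured, predicted):
    """R2, MSE, NRMSE, MAPE, SI for measured vs. model-predicted values."""
    n = len(measured)
    mean_y = sum(measured) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(measured, predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in measured)
    mse = ss_res / n
    rmse = math.sqrt(mse)
    return {
        "R2": 1 - ss_res / ss_tot,
        "MSE": mse,
        "NRMSE": rmse / mean_y,   # normalized by the mean (one convention)
        "MAPE": sum(abs(y - p) / abs(y) for y, p in zip(measured, predicted)) / n,
        "SI": 100 * rmse / mean_y,  # scatter index expressed as a percentage
    }
```

Fitting a candidate path-loss model then reduces to predicting loss at each measurement distance and ranking models by these metrics, as the study does across I2O and O2O scenarios.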
Advanced artificial intelligence technologies such as ChatGPT and other large language models (LLMs) have significantly impacted fields such as education and research in recent years. ChatGPT benefits students and educators by providing personalized feedback, facilitating interactive learning, and introducing innovative teaching methods. While many researchers have studied ChatGPT across various subject domains, few analyses have focused on the engineering domain, particularly on the risks of academic dishonesty and potential declines in critical thinking skills. To address this gap, this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis. First, we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation, error checking, and solution optimization. Second, we surveyed 125 users, predominantly engineering students, to analyze ChatGPT's role in academic support. Our findings reveal that 93.60% of respondents use ChatGPT for quick academic answers, particularly among early-stage university students, and that 84.00% find it helpful for sourcing research materials. The study also highlights ChatGPT's strengths in programming assistance, with 84.80% of users utilizing it for debugging and 86.40% for solving coding problems. However, limitations persist, with many users reporting inaccuracies in mathematical solutions and occasional false citations. Furthermore, the reliance on the free version by 96% of users underscores its accessibility but also suggests limitations in resource availability. This work provides key insights into ChatGPT's strengths and limitations, establishing a framework for responsible AI use in education and marking a milestone in understanding and optimizing AI's role in academia for sustainable future use.
Sentiment Analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting the predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
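Of the methodology families the review covers, the lexicon-based approach is the easiest to illustrate compactly: score a text by summing word polarities from a lexicon. The sketch below uses a tiny hand-made lexicon with one-token negation handling; the words and weights are invented for illustration, not drawn from any surveyed system.

```python
# Toy polarity lexicon (illustrative values only).
TOY_LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Sum word polarities; a negator flips the sign of the next token only."""
    score, flip = 0, 1
    for token in text.lower().split():
        word = token.strip(".,!?;:")
        if word in NEGATORS:
            flip = -1
            continue
        if word in TOY_LEXICON:
            score += flip * TOY_LEXICON[word]
        flip = 1  # negation scope is a single following token in this sketch
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return score, label
```

Real lexicon systems add intensity modifiers, longer negation scopes, and domain-tuned dictionaries; deep models such as the LSTMs highlighted in the review learn these effects from labeled data instead.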
We experimentally analyze the effect of optical power on time-delay-signature identification and random bit generation in a chaotic semiconductor laser with optical feedback. Owing to the inevitable noise introduced during photoelectric detection and analog-to-digital conversion, variations in the output optical power change the signal-to-noise ratio and thereby affect both time-delay-signature identification and random bit generation. Our results show that when the optical power is less than -14 dBm, the identified time delay signature degrades and the entropy of the chaotic signal increases as the optical power decreases. Moreover, random bit sequences extracted at lower optical power pass the randomness tests more easily.
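The time delay signature in such experiments is conventionally identified as the dominant peak of the autocorrelation function of the recorded intensity series. A self-contained sketch (synthetic noise standing in for the laser output, with a known 37-sample echo playing the role of the feedback delay) shows the idea:

```python
import random

def autocorr(x, lag):
    """Normalized autocorrelation of series x at the given lag."""
    n = len(x) - lag
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    cov = sum((x[i] - mu) * (x[i + lag] - mu) for i in range(n)) / n
    return cov / var

def find_time_delay(x, max_lag):
    """Lag (> 0) at which the autocorrelation peaks: the time delay signature."""
    return max(range(1, max_lag + 1), key=lambda k: autocorr(x, k))

# Synthetic stand-in signal: white noise plus an echo delayed by 37 samples.
random.seed(0)
noise = [random.gauss(0, 1) for _ in range(5000)]
delay = 37
signal = [noise[i] + 0.8 * noise[i - delay] if i >= delay else noise[i]
          for i in range(len(noise))]
```

A weaker echo amplitude (here 0.8) lowers the peak; in the experiment, reducing optical power buries the peak in detection noise, which is the degradation of the identified signature described above.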
The rapid expansion of Internet of Things (IoT) networks has introduced challenges in network management, primarily in maintaining energy efficiency and robust connectivity across an increasing array of devices. This paper introduces the Adaptive Blended Marine Predators Algorithm (AB-MPA), a novel optimization technique designed to enhance Quality of Service (QoS) in IoT systems by dynamically optimizing network configurations for improved energy efficiency and stability. Our results show significant improvements in network performance metrics such as energy consumption, throughput, and operational stability, indicating that AB-MPA effectively addresses the pressing needs of modern IoT environments. Nodes are initiated with 100 J of stored energy, and each node consumes 0.01 J per square meter, emphasizing energy-efficient networking. The algorithm also extends network lifetime to 7000 cycles for up to 200 nodes, with a maximum Packet Delivery Ratio (PDR) of 99% and a robust network throughput of up to 1800 kbps in more compact node configurations. This study proposes a viable solution to a critical problem and opens avenues for further research into scalable network management for diverse applications.
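The stated energy accounting (100 J per node, 0.01 J per square meter) can be sketched directly. The distance-squared transmission cost below is our reading of the "per square meter" figure, and the first-node-death lifetime definition is an assumption for illustration, not necessarily the paper's model:

```python
import math

INITIAL_ENERGY_J = 100.0  # each node starts with 100 J, as in the study
COST_PER_M2 = 0.01        # 0.01 J per square meter (assumed to scale with d^2)

def transmissions_until_depletion(distance_m):
    """Rounds a node can transmit over distance_m before its energy runs out,
    assuming energy per transmission = COST_PER_M2 * distance^2."""
    per_tx = COST_PER_M2 * distance_m ** 2
    return math.floor(INITIAL_ENERGY_J / per_tx)

def network_lifetime(distances_m):
    """Cycles until the first node dies (first-node-death lifetime metric)."""
    return min(transmissions_until_depletion(d) for d in distances_m)
```

Under this model, a node transmitting over 10 m spends 1 J per round and lasts 100 rounds, which is why optimizers such as AB-MPA favor shorter, more compact hop configurations.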
Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises from the intricate architecture and opacity of DNNs, which leave numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasures employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating such attacks. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across the key generation, encryption, and decryption phases. Based on this analysis, we train a unified Random Forest model using execution-context and temporal features, achieving R² values of up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. Because it is grounded in phase-level computational characteristics rather than any single algorithm, it provides generalizable insights that extend to a broader range of cryptographic algorithms with comparable phase-level execution patterns and to heterogeneous edge architectures.
The rapid advancement of 6G communication technologies and generative artificial intelligence (AI) is catalyzing a new wave of innovation at the intersection of networking and intelligent computing. On the one hand, 6G envisions a hyper-connected environment that supports ubiquitous intelligence through ultra-low latency, high throughput, massive device connectivity, and integrated sensing and communication. On the other hand, generative AI, powered by large foundation models, has emerged as a powerful paradigm capable of creating…
Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Quantum algorithms have demonstrated provable speedups over classical counterparts, yet establishing a comprehensive theoretical framework to understand the quantum advantage remains a core challenge. In this work, we decode the quantum search advantage by investigating the critical role of quantum state properties in random-walk-based algorithms. We propose three distinct variants of quantum random-walk search algorithms and derive exact analytical expressions for their success probabilities. These probabilities are fundamentally determined by specific initial-state properties: the coherence fraction governs the first algorithm's performance, while entanglement and coherence dominate the outcomes of the second and third algorithms, respectively. We show that an increased coherence fraction enhances the success probability, whereas greater entanglement and coherence reduce it in the latter two cases. These findings reveal fundamental insights into harnessing quantum properties for advantage and guide algorithm design. Our searches achieve Grover-like speedups and show significant potential for quantum-enhanced machine learning.
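For context on the "Grover-like speedup" claim: the textbook Grover success probability, which exact expressions of this kind generalize, has a simple closed form, p = sin²((2k+1)θ) with θ = arcsin(√(M/N)) for M marked items among N. The sketch below evaluates this standard result, not the paper's variant formulas.

```python
import math

def grover_success_probability(n_items, n_marked, iterations):
    """Success probability of textbook Grover search after k iterations:
    p = sin^2((2k + 1) * theta), theta = arcsin(sqrt(M / N))."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

def optimal_iterations(n_items, n_marked):
    """k ~ floor(pi / (4 * theta)): the O(sqrt(N / M)) Grover-like scaling."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return math.floor(math.pi / (4 * theta))
```

For N = 4 and one marked item, a single iteration already reaches success probability 1, the classic special case; the initial-state properties studied in the paper (coherence fraction, entanglement) modulate curves of this shape.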
In this study, ten wind turbines and fourteen solar photovoltaic (SPV) modules were employed to compare the potential of hydrogen production from wind and solar energy resources in the six geopolitical zones of Nigeria. The amount of hydrogen produced was considered a technical parameter, the cost of hydrogen production an economic index, and the amount of carbon(IV) oxide (CO₂) saved from the use of diesel fuel an environmental index. The results reveal that the ENERCON E-40 turbine yields the highest capacity factor at the Lagos, Jos, Sokoto, Bauchi, and Enugu sites, while the FUHRLAENDER GmbH turbine yields the highest capacity factor at Delta. The mean annual hydrogen production from wind ranged from 2.05 tons/annum at site S6 (Delta) to 17.33 tons/annum at site S3 (Sokoto), and the mean annual hydrogen production from SPV ranged from 64.33 tons/annum at site S1 (Lagos) to 140.28 tons/annum at site S6 (Delta). The cost of hydrogen production from wind was 6.3679 and 25.9007 $/kg for sites S3 and S6, respectively, and the cost of hydrogen production from SPV was 5.6659 and 6.1206 $/kg for sites S3 and S1, respectively. When the hydrogen was used to produce electricity via fuel cells, the amount of CO₂ saved annually from wind-based hydrogen generation was 137,267 kg/year at site S6 and 504,180 kg/year at site S3, while the amount saved using hydrogen produced from SPV was 615,400 kg/year and 1,341,899 kg/year at sites S1 and S6, respectively. The results also revealed that 75.55%, 88.93%, 80.28%, 80.54%, 85.65%, and 98.53% more hydrogen could be produced from SPV than from wind resources for sites S1–S6, respectively. This study serves as a source of reliable technical information for relevant government agencies, policy makers, and investors in making informed decisions on optimal investment in the hydrogen economy of Nigeria.
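The three indices compared above follow from simple energy accounting: hydrogen mass from annual electrical output, CO₂ avoided by displacing diesel generation, and a levelized cost per kilogram. The sketch below uses round illustrative constants (electrolyzer demand, diesel emission factor), which are assumptions for demonstration, not the study's values.

```python
# Illustrative figures (assumptions, not values from the study):
KWH_PER_KG_H2 = 50.0         # electrolyzer electricity demand, kWh per kg H2
DIESEL_CO2_KG_PER_KWH = 0.7  # CO2 emitted per kWh of diesel-generated power

def annual_hydrogen_kg(annual_energy_kwh):
    """Hydrogen mass producible from a year's renewable electricity output."""
    return annual_energy_kwh / KWH_PER_KG_H2

def co2_saved_kg(annual_energy_kwh):
    """CO2 avoided by displacing diesel generation with renewable power."""
    return annual_energy_kwh * DIESEL_CO2_KG_PER_KWH

def cost_per_kg(capital_cost_usd, annual_energy_kwh, lifetime_years):
    """Simple levelized cost of hydrogen: capex spread over lifetime output."""
    return capital_cost_usd / (annual_hydrogen_kg(annual_energy_kwh) * lifetime_years)
```

With these placeholders, 1 GWh/year yields 20 tons of hydrogen; site rankings like those reported for S1–S6 emerge from feeding each site's resource-dependent energy yield through the same chain.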
Rainfall-induced shallow landslides are among the most significant geological hazards, necessitating precise monitoring and prediction for effective disaster mitigation. Most studies on landslide prediction have focused on optimizing machine learning (ML) algorithms, while very limited attention has been paid to enhancing data quality for improved predictive performance. This study employs strategic data augmentation (DA) techniques to enhance the accuracy of shallow landslide prediction. Using five DA methods, namely singular spectrum analysis (SSA), moving averages (MA), wavelet denoising (WD), variational mode decomposition (VMD), and linear interpolation (LI), we apply strategies such as smoothing, denoising, trend decomposition, and synthetic data generation to improve the training dataset. Four machine learning algorithms, i.e., artificial neural network (ANN), recurrent neural network (RNN), one-dimensional convolutional neural network (CNN1D), and long short-term memory (LSTM), are used to forecast landslide displacement. A case study of a landslide in southwest China shows the effectiveness of our approach in predicting landslide displacements, despite the inherent limitations of the monitoring dataset. VMD proves the most effective for smoothing and denoising, improving R², RMSE, and MAPE by 172.16%, 71.82%, and 98.9%, respectively. SSA addresses missing data, while LI is effective with limited data samples, improving the same metrics by 21.6%, 52.59%, and 47.87%, respectively. This study demonstrates the potential of DA techniques to mitigate the impact of data defects on landslide prediction accuracy, with implications for similar cases.
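Two of the simpler DA operations mentioned, moving-average smoothing and linear interpolation, can be sketched directly on a raw displacement series. Window size and gap handling below are illustrative choices, not the study's settings.

```python
def moving_average(series, window):
    """Trailing moving average used to smooth a noisy displacement series."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def interpolate_gaps(series):
    """Fill None gaps by linear interpolation between known neighbours
    (leading/trailing gaps, with no neighbour on one side, are left as-is)."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            out[i] = out[a] + t * (out[b] - out[a])
    return out
```

The augmented series, rather than the raw sensor record, is what the ANN/RNN/CNN1D/LSTM forecasters are trained on; heavier methods such as VMD and SSA play the same role with stronger decomposition machinery.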
Software systems play increasingly important roles in modern society, and resilience against attacks is of great practical importance to crucial software systems; as a result, the structure and robustness of software systems have attracted a tremendous amount of interest in recent years. In this paper, based on the source code of Tar and MySQL, we propose an approach to generate coupled software networks and construct three kinds of directed software networks: the function call network, the weakly coupled network, and the strongly coupled network. The structural properties of these complex networks are extensively investigated. We find that the average influence and the average dependence over all functions are the same. Moreover, eight attacking strategies and two robustness indicators (the weakly connected indicator and the strongly connected indicator) are introduced to analyze the robustness of software networks. The analysis shows that the strongly coupled network is only weakly connected rather than strongly connected. For MySQL, the high in-degree strategy outperforms the other attacking strategies when the weakly connected indicator is used; on the other hand, the high out-degree strategy is a good choice when the strongly connected indicator is adopted. This work contributes to a better understanding of the structure and robustness of software networks.
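The weakly connected indicator and a high-in-degree attack can be illustrated on a toy directed call graph. This is a pure-Python sketch of the general technique; the study's actual networks are extracted from the Tar and MySQL source code.

```python
from collections import defaultdict

def largest_wcc(nodes, edges):
    """Size of the largest weakly connected component of a directed graph."""
    node_set = set(nodes)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)  # direction is ignored for weak connectivity
    seen, best = set(), 0
    for start in node_set:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            n = stack.pop()
            size += 1
            for m in adj[n]:
                if m in node_set and m not in seen:
                    seen.add(m)
                    stack.append(m)
        best = max(best, size)
    return best

def high_in_degree_attack(nodes, edges, n_remove):
    """Weakly connected indicator (largest weak component size) after
    removing the n_remove nodes with the highest in-degree."""
    indeg = defaultdict(int)
    for _, v in edges:
        indeg[v] += 1
    removed = set(sorted(nodes, key=lambda n: -indeg[n])[:n_remove])
    kept_nodes = [n for n in nodes if n not in removed]
    kept_edges = [(u, v) for u, v in edges
                  if u not in removed and v not in removed]
    return largest_wcc(kept_nodes, kept_edges)
```

On a four-function call chain a → b → c → d, removing the single highest-in-degree node splits the network, dropping the indicator from 4 to 2; the eight strategies in the paper differ only in how the removal order is chosen.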
This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality-control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration", "Groundwater Contaminant Measurements", and "Dominica COVID-19 Mortality", further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
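The baseline Burr XII building block, to which the type-I heavy-tailed modification is applied, has closed-form CDF, PDF, and quantile functions. The sketch below covers only this standard baseline, F(x) = 1 − (1 + xᶜ)⁻ᵏ for x ≥ 0; the TIHTBXII transformation itself is as developed in the paper.

```python
def burr_xii_cdf(x, c, k):
    """CDF of the baseline Burr XII distribution: F(x) = 1 - (1 + x^c)^(-k)."""
    return 1 - (1 + x ** c) ** (-k)

def burr_xii_pdf(x, c, k):
    """PDF obtained by differentiating the CDF:
    f(x) = c * k * x^(c-1) * (1 + x^c)^(-k-1)."""
    return c * k * x ** (c - 1) * (1 + x ** c) ** (-k - 1)

def burr_xii_quantile(p, c, k):
    """Inverse CDF, useful for Monte Carlo simulation:
    x = ((1 - p)^(-1/k) - 1)^(1/c)."""
    return ((1 - p) ** (-1 / k) - 1) ** (1 / c)
```

Feeding uniform random numbers through `burr_xii_quantile` is the standard way to generate the Monte Carlo samples on which estimators such as MLE, MPS, LS, and WLS are then compared.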
Airborne hyperspectral imaging spectrometers have been used for Earth observation over the past four decades. Despite the high sensitivity of push-broom hyperspectral imagers, they experience limited swath and wavelength coverage. In this study, we report the development of a push-broom airborne multimodular imaging spectrometer (AMMIS) that spans the ultraviolet (UV), visible near-infrared (VNIR), shortwave infrared (SWIR), and thermal infrared (TIR) wavelengths. As an integral part of China's High-Resolution Earth Observation Program, AMMIS is intended for civilian applications and for validating key technologies for future spaceborne hyperspectral payloads. It has been mounted on aircraft platforms such as the Y-5, Y-12, and XZ-60. Since 2016, AMMIS has been used to perform more than 30 flight campaigns and gather more than 200 TB of hyperspectral data. This study describes the system design, calibration techniques, performance tests, flight campaigns, and applications of AMMIS. The system integrates UV, VNIR, SWIR, and TIR modules, which can be operated in combination or individually depending on application requirements. Each module includes three spectrometers, utilizing field-of-view (FOV) stitching technology to achieve a 40° FOV, thereby enhancing operational efficiency. We designed advanced optical systems for all modules, particularly the TIR module, and employed cryogenic optical technology to maintain optical-system stability at 100 K. Both laboratory and in-flight calibrations were conducted to improve preprocessing accuracy and produce high-quality hyperspectral data. AMMIS features more than 1400 spectral bands, with spectral sampling intervals of 0.1 nm for UV, 2.4 nm for VNIR, 3 nm for SWIR, and 32 nm for TIR. In addition, the instantaneous fields of view (IFoVs) of the four modules are 0.5, 0.25, 0.5, and 1 mrad, respectively, with the VNIR module achieving an IFoV of 0.125 mrad in the high-spatial-resolution mode. This study reports on land-cover surveys, pollution-gas detection, mineral exploration, coastal-water detection, and plant investigations conducted using AMMIS, highlighting its excellent performance. Furthermore, we present three hyperspectral datasets with diverse scene distributions and categories suitable for developing artificial intelligence algorithms. This study paves the way for next-generation airborne and spaceborne hyperspectral payloads and serves as a valuable reference for hyperspectral sensor designers and data users.
Both evolutionary computation (EC) and multiagent systems (MAS) study the emergence of intelligence through the interaction and cooperation of a group of individuals. EC focuses on solving various complex optimization problems, while MAS provides a flexible model for distributed artificial intelligence. Since their group interaction mechanisms can be borrowed from each other, many studies have attempted to combine EC and MAS. With the rapid development of the Internet of Things, the confluence of EC and MAS has become more and more important, and related articles have shown a continuously growing trend over the last decades. In this survey, we first elaborate on the mutual assistance of EC and MAS from two aspects: agent-based EC and EC-assisted MAS. Agent-based EC aims to introduce characteristics of MAS into EC to improve the performance and parallelism of EC, while EC-assisted MAS aims to use EC to better solve optimization problems in MAS. Furthermore, we review studies that combine the cooperation mechanisms of EC and MAS, which greatly leverage the strengths of both sides. A description framework is built to organize existing studies. Promising future research directions are also discussed in conjunction with emerging technologies and real-world applications.
The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capabilities, the algorithm finds application across diverse optimization problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, thereby proposing a variant called PKEBA. A projection screening strategy mitigates its sensitivity to initial solutions, thereby enhancing the quality of the initial solution set. A kinetic adaptation strategy reforms exploration patterns, while an elite communication strategy enhances group interaction to keep the algorithm from stagnating in local optima. The effectiveness of the proposed PKEBA is then rigorously evaluated. Testing encompasses 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants. Moreover, real-world engineering problems are employed for further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
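The baseline on which PKEBA builds can be sketched in a few lines. This is the textbook Bat algorithm, frequency-tuned velocities plus a local random walk around the best solution, without the projection screening, kinetic adaptation, or elite communication strategies; all parameter values are illustrative.

```python
import random

def bat_algorithm(objective, dim, bounds, n_bats=20, n_iter=200, seed=1):
    """Minimal textbook Bat algorithm minimising `objective` over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=objective)[:]
    loudness, pulse = 0.9, 0.5  # fixed here; adaptive in the full algorithm
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = rng.uniform(0.0, 2.0)  # frequency tuning
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * freq
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if rng.random() > pulse:  # local random walk around the best bat
                cand = [min(hi, max(lo, b + 0.01 * rng.gauss(0, 1))) for b in best]
            else:
                cand = pos[i][:]
            # accept an improving candidate with probability `loudness`
            if objective(cand) < objective(best) and rng.random() < loudness:
                best = cand[:]
    return best
```

The sensitivity to the random initial population visible here is exactly what PKEBA's projection screening strategy targets, while its elite communication replaces the single-best attractor with richer group interaction.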
Based on the C-Coupler platform, the semi-unstructured Climate System Model, Synthesis Community Integrated Model version 2 (SYCIM2.0), has been developed at the School of Atmospheric Sciences, Sun Yat-sen University. SYCIM2.0 aims to meet the demand for seamless climate prediction through accurate climate simulations and projections. This paper provides an overview of SYCIM2.0 and highlights its key features, especially the coupling of an unstructured ocean model and the tuning process. An extensive evaluation of its performance, focusing on the East Asian Summer Monsoon (EASM), is presented based on long-term simulations with fixed external forcing. The results suggest that after nearly 240 years of integration, SYCIM2.0 achieves a quasi-equilibrium state, albeit with small trends in the net radiation flux at the top of the atmosphere (TOA) and Earth's surface, as well as in global mean near-surface temperature. Compared with observational and reanalysis data, the model realistically simulates the spatial patterns of sea surface temperature (SST) and precipitation centers, including their annual cycles, in addition to the lower-level wind fields in the EASM region. However, it exhibits a weakened and eastward-shifted Western Pacific Subtropical High (WPSH), resulting in an associated precipitation bias. SYCIM2.0 robustly captures the dominant mode of the EASM and its close relationship with the El Niño-Southern Oscillation (ENSO) but performs relatively poorly in simulating the second leading mode and the associated air–sea interaction processes. Further comprehensive evaluations of SYCIM2.0 will be conducted in future studies.
Background: In recent years, there has been a growing trend in the utilization of observational studies that make use of routinely collected healthcare data (RCD). These studies rely on algorithms to identify specific health conditions (e.g., diabetes or sepsis) for statistical analyses. However, there has been substantial variation in algorithm development and validation, leading to frequently suboptimal performance and posing a significant threat to the validity of study findings. Unfortunately, these issues are often overlooked. Methods: We systematically developed guidance for the development, validation, and evaluation of algorithms designed to identify health status (DEVELOP-RCD). Our initial efforts involved conducting both a narrative review and a systematic review of published studies on the concepts and methodological issues related to algorithm development, validation, and evaluation. Subsequently, we conducted an empirical study on an algorithm for identifying sepsis. Based on these findings, we formulated a specific workflow and recommendations for algorithm development, validation, and evaluation within the guidance. Finally, the guidance underwent independent review by a panel of 20 external experts, who then convened a consensus meeting to finalize it. Results: A standardized workflow for algorithm development, validation, and evaluation was established. Guided by specific health status considerations, the workflow comprises four integrated steps: assessing an existing algorithm's suitability for the target health status; developing a new algorithm using recommended methods; validating the algorithm using prescribed performance measures; and evaluating the impact of the algorithm on study results. Additionally, 13 good practice recommendations were formulated with detailed explanations. Furthermore, a practical study on sepsis identification was included to demonstrate the application of this guidance. Conclusions: The establishment of this guidance is intended to aid researchers and clinicians in the appropriate and accurate development and application of algorithms for identifying health status from RCD. This guidance has the potential to enhance the credibility of findings from observational studies involving RCD.
Funding: supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of the Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multimodal datasets, with six prior models that achieved good action classification performance, including I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value compared with the other approaches. Moreover, the multimodal model outperformed its single-modality variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multimodal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Abstract: The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wireless Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee is tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests are conducted at one of the Associated Cement Companies (ACC) limestone mines in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models, namely the Free Space, Egli, Okumura-Hata, Cost231-Hata, and Ericsson models, combined with key performance metrics, is employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R^(2)) of 0.907 and balanced error metrics: a Normalized Root Mean Square Error (NRMSE) of 0.030, a Mean Square Error (MSE) of 4.950, a Mean Absolute Percentage Error (MAPE) of 0.249, and a Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R^(2) value of 0.959, supported by strong correlation metrics: an NRMSE of 0.026, an MSE of 8.685, a MAPE of 0.685, a Mean Absolute Deviation (MAD) of 20.839, and an SI of 2.194. For the LoRaWAN protocol, the Cost231-Hata model achieved the highest R^(2) value of 0.921 in the I2O scenario, complemented by the lowest error metrics: an NRMSE of 0.018, an MSE of 1.324, a MAPE of 0.217, a MAD of 9.218, and an SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R^(2) value of 0.978, indicating a strong fit, with an NRMSE of 0.047, an MSE of 27.807, a MAPE of 27.494, a MAD of 37.287, and an SI of 3.927. This advancement in reliable communication networks promises to transform connectivity across the opencast mining landscape. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
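As a rough illustration of how such a model-versus-measurement comparison is scored, the sketch below implements the classical textbook form of the Egli path-loss model together with R^(2) and NRMSE. The frequency, antenna heights, and the range-based NRMSE normalization are illustrative assumptions; the paper's exact measurement setup and normalization convention are not given in the abstract.

```python
import math

def egli_path_loss_db(f_mhz, d_km, hb_m, hm_m):
    """Median Egli path loss in dB (standard textbook form; the study's
    exact variant and antenna heights are assumptions here)."""
    if hm_m <= 10:
        mobile_term = 76.3 - 10 * math.log10(hm_m)
    else:
        mobile_term = 85.9 - 20 * math.log10(hm_m)
    return (20 * math.log10(f_mhz) + 40 * math.log10(d_km)
            - 20 * math.log10(hb_m) + mobile_term)

def r_squared(measured, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot

def nrmse(measured, predicted):
    """RMSE normalized by the measured range (one common convention)."""
    n = len(measured)
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    return rmse / (max(measured) - min(measured))
```

Plugging in a 2.4 GHz carrier (the ZigBee band) shows the expected monotone growth of loss with distance, and the two metrics can then be applied to any measured-versus-predicted pair of series.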
Funding: supported by the Competitive Research Fund of the University of Aizu.
Abstract: Advanced artificial intelligence technologies such as ChatGPT and other large language models (LLMs) have significantly impacted fields such as education and research in recent years. ChatGPT benefits students and educators by providing personalized feedback, facilitating interactive learning, and introducing innovative teaching methods. While many researchers have studied ChatGPT across various subject domains, few analyses have focused on the engineering domain, particularly in addressing the risks of academic dishonesty and potential declines in critical thinking skills. To address this gap, this study explores both the opportunities and limitations of ChatGPT in engineering contexts through a two-part analysis. First, we conducted experiments with ChatGPT to assess its effectiveness in tasks such as code generation, error checking, and solution optimization. Second, we surveyed 125 users, predominantly engineering students, to analyze ChatGPT's role in academic support. Our findings reveal that 93.60% of respondents use ChatGPT for quick academic answers, particularly among early-stage university students, and that 84.00% find it helpful for sourcing research materials. The study also highlights ChatGPT's strengths in programming assistance, with 84.80% of users utilizing it for debugging and 86.40% for solving coding problems. However, limitations persist, with many users reporting inaccuracies in mathematical solutions and occasional false citations. Furthermore, the reliance on the free version by 96% of users underscores its accessibility but also suggests limitations in resource availability. This work provides key insights into ChatGPT's strengths and limitations, establishing a framework for responsible AI use in education. Highlighting areas for improvement marks a milestone in understanding and optimizing AI's role in academia for sustainable future use.
Funding: supported by the "Technology Commercialization Collaboration Platform Construction" project of the Innopolis Foundation (Project Number: 2710033536) and the Competitive Research Fund of The University of Aizu, Japan.
Abstract: Sentiment analysis, a significant domain within Natural Language Processing (NLP), focuses on extracting and interpreting subjective information, such as emotions, opinions, and attitudes, from textual data. With the increasing volume of user-generated content on social media and digital platforms, sentiment analysis has become essential for deriving actionable insights across various sectors. This study presents a systematic literature review of sentiment analysis methodologies, encompassing traditional machine learning algorithms, lexicon-based approaches, and recent advancements in deep learning techniques. The review follows a structured protocol comprising three phases: planning, execution, and analysis/reporting. During the execution phase, 67 peer-reviewed articles were initially retrieved, with 25 meeting predefined inclusion and exclusion criteria. The analysis phase involved a detailed examination of each study's methodology, experimental setup, and key contributions. Among the deep learning models evaluated, Long Short-Term Memory (LSTM) networks were identified as the most frequently adopted architecture for sentiment classification tasks. This review highlights current trends, technical challenges, and emerging opportunities in the field, providing valuable guidance for future research and development in applications such as market analysis, public health monitoring, financial forecasting, and crisis management.
Funding: supported in part by the National Natural Science Foundation of China (Grant Nos. 62005129 and 62175116).
Abstract: We experimentally analyze the effect of optical power on time delay signature identification and random bit generation in a chaotic semiconductor laser with optical feedback. Due to the inevitable noise introduced during photoelectric detection and analog-to-digital conversion, varying the output optical power changes the signal-to-noise ratio, which in turn affects time delay signature identification and random bit generation. Our results show that, when the optical power is less than -14 dBm, the identified time delay signature degrades and the entropy of the chaotic signal increases as the optical power decreases. Moreover, random bit sequences extracted at lower optical power pass the randomness tests more easily.
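Time delay signatures of this kind are commonly identified as a peak in the autocorrelation function of the intensity time series. The toy sketch below builds a surrogate signal (white noise plus a weak echo at a known lag, standing in for the feedback-induced correlation) and recovers the delay from the autocorrelation peak; the surrogate and its parameters are illustrative assumptions, not the paper's data.

```python
import random

def autocorrelation(x, max_lag):
    """Normalized autocorrelation C(tau) for lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    acf = []
    for lag in range(1, max_lag + 1):
        cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
        acf.append(cov / var)
    return acf

# Surrogate: noise plus a 0.8-weighted echo at lag 40, mimicking the
# correlation a feedback delay imprints on the laser intensity.
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(4000)]
delay = 40
signal = [noise[i] + 0.8 * noise[i - delay] if i >= delay else noise[i]
          for i in range(len(noise))]

acf = autocorrelation(signal, 100)
tds_lag = max(range(len(acf)), key=lambda k: acf[k]) + 1  # lag of the ACF peak
```

A weaker echo (lower signal-to-noise ratio) flattens this peak, which is the mechanism behind the degradation of the identified signature at low optical power described above.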
Abstract: The rapid expansion of Internet of Things (IoT) networks has introduced challenges in network management, primarily in maintaining energy efficiency and robust connectivity across an increasing array of devices. This paper introduces the Adaptive Blended Marine Predators Algorithm (AB-MPA), a novel optimization technique designed to enhance Quality of Service (QoS) in IoT systems by dynamically optimizing network configurations for improved energy efficiency and stability. Our results represent significant improvements in network performance metrics such as energy consumption, throughput, and operational stability, indicating that AB-MPA effectively addresses the pressing needs of modern IoT environments. Nodes are initiated with 100 J of stored energy, and each node consumes energy at 0.01 J per square meter, to emphasize energy-efficient networks. The algorithm also extends network lifetime to 7000 cycles for up to 200 nodes, with a maximum Packet Delivery Ratio (PDR) of 99% and a robust network throughput of up to 1800 kbps in more compact node configurations. This study proposes a viable solution to a critical problem and opens avenues for further research into scalable network management for diverse applications.
Funding: supported in part by the National Natural Science Foundation of China under Grants No. 62372087 and No. 62072076, the Research Fund of the State Key Laboratory of Processors under Grant No. CLQ202310, and the CSC scholarship.
Abstract: Deep neural networks (DNNs) have found extensive applications in safety-critical artificial intelligence systems, such as autonomous driving and facial recognition systems. However, recent research has revealed their susceptibility to backdoors maliciously injected by adversaries. This vulnerability arises from the intricate architecture and opacity of DNNs, which result in numerous redundant neurons embedded within the models. Adversaries exploit these vulnerabilities to conceal malicious backdoor information within DNNs, thereby causing erroneous outputs and posing substantial threats to the efficacy of DNN-based applications. This article presents a comprehensive survey of backdoor attacks against DNNs and the countermeasures employed to mitigate them. Initially, we trace the evolution of the concept from traditional backdoor attacks to backdoor attacks against DNNs, highlighting the feasibility and practicality of generating backdoor attacks against DNNs. Subsequently, we provide an overview of notable works encompassing various attack and defense strategies, facilitating a comparative analysis of their approaches. Through these discussions, we offer constructive insights aimed at refining these techniques. Finally, we extend our research perspective to the domain of large language models (LLMs) and synthesize the characteristics and developmental trends of backdoor attacks and defense methods targeting LLMs. Through a systematic review of existing studies on backdoor vulnerabilities in LLMs, we identify critical open challenges in this field and propose actionable directions for future research.
Funding: supported in part by the National Research Foundation of Korea (NRF) (No. RS-2025-00554650) and by the Chung-Ang University research grant in 2024.
Abstract: With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across the key generation, encryption, and decryption phases. Based on the analysis, we train a unified Random Forest model using execution context and temporal features, achieving R^(2) values up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our proposed approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. As our approach is grounded in phase-level computational characteristics rather than in any single algorithm, it provides generalizable insights that can be extended to a broader range of cryptographic algorithms that exhibit comparable phase-level execution patterns and to heterogeneous edge architectures.
Abstract: The rapid advancement of 6G communication technologies and generative artificial intelligence (AI) is catalyzing a new wave of innovation at the intersection of networking and intelligent computing. On the one hand, 6G envisions a hyper-connected environment that supports ubiquitous intelligence through ultra-low latency, high throughput, massive device connectivity, and integrated sensing and communication. On the other hand, generative AI, powered by large foundation models, has emerged as a powerful paradigm capable of creating.
Abstract: Cyber-Physical Systems (CPS) represent an integration of computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions to develop comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the digital transformation era.
Funding: supported by the Fundamental Research Funds for the Central Universities, the National Natural Science Foundation of China (Grant Nos. 12371132, 12075159, 12171044, 12071179, and 12405006), and the specific research fund of the Innovation Platform for Academicians of Hainan Province.
Abstract: Quantum algorithms have demonstrated provable speedups over classical counterparts, yet establishing a comprehensive theoretical framework to understand the quantum advantage remains a core challenge. In this work, we decode the quantum search advantage by investigating the critical role of quantum state properties in random-walk-based algorithms. We propose three distinct variants of quantum random-walk search algorithms and derive exact analytical expressions for their success probabilities. These probabilities are fundamentally determined by specific initial state properties: the coherence fraction governs the first algorithm's performance, while entanglement and coherence dominate the outcomes of the second and third algorithms, respectively. We show that an increased coherence fraction enhances the success probability, but greater entanglement and coherence reduce it in the latter two cases. These findings reveal fundamental insights into harnessing quantum properties for advantage and guide algorithm design. Our searches achieve Grover-like speedups and show significant potential for quantum-enhanced machine learning.
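The "Grover-like speedup" baseline these variants are measured against has a simple closed form. Starting from the uniform superposition, the success probability after t Grover iterations is sin^2((2t+1) theta) with theta = arcsin(sqrt(M/N)) for M marked items among N, so roughly pi/4 * sqrt(N/M) iterations suffice. The sketch below evaluates that closed form; note it covers only the standard uniform start state, not the modified initial states (coherence fraction, entanglement) the paper studies.

```python
import math

def grover_success_probability(n_items, n_marked, iterations):
    """P(success) after t Grover iterations from the uniform start state:
    sin^2((2t+1)*theta), theta = arcsin(sqrt(M/N))."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return math.sin((2 * iterations + 1) * theta) ** 2

def optimal_iterations(n_items, n_marked):
    """Iteration count that brings (2t+1)*theta closest to pi/2."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return round(math.pi / (4 * theta) - 0.5)

# N = 1024 with one marked item: about 25 iterations, i.e. O(sqrt(N)),
# drive the success probability to nearly 1.
t = optimal_iterations(1024, 1)
p = grover_success_probability(1024, 1, t)
```

Classically the same search needs on the order of N/2 queries, which is the quadratic gap the abstract's variants aim to retain under different initial state properties.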
Funding: supported by Abiola Ajimobi Technical University and the University of Ibadan.
Abstract: In this study, ten wind turbines and fourteen solar photovoltaic (SPV) modules were employed to compare the potential of hydrogen production from wind and solar energy resources in the six geopolitical zones of Nigeria. The amount of hydrogen produced was considered as a technical parameter, the cost of hydrogen production was considered as an economic index, and the amount of carbon(IV) oxide saved relative to the use of diesel fuel was considered as an environmental index. The results reveal that the ENERCON E-40 turbine yields the highest capacity factor at the Lagos, Jos, Sokoto, Bauchi, and Enugu sites, while the FUHRLAENDER, GMBH turbine yields the highest capacity factor at Delta. The mean annual hydrogen production from wind ranged from 2.05 tons/annum at site S6 (Delta) to 17.33 tons/annum at site S3 (Sokoto), and the mean annual hydrogen production from SPV ranged from 64.33 tons/annum at site S1 (Lagos) to 140.28 tons/annum at site S6 (Delta). The cost of hydrogen production from wind was 6.3679 and 25.9007 $/kg for sites S3 and S6, respectively, and the cost of hydrogen production from SPV was 5.6659 and 6.1206 $/kg for sites S3 and S1, respectively. The amount of CO_(2) saved annually from wind-based hydrogen generation, with the hydrogen used to produce electricity via fuel cells, was 137,267 kg/year at site S6 and 504,180 kg/year at site S3. The amount of CO_(2) saved using hydrogen produced from SPV was 615,400 kg/year and 1,341,899 kg/year at sites S1 and S6, respectively. The results also revealed that 75.55%, 88.93%, 80.28%, 80.54%, 85.65%, and 98.53% more hydrogen could be produced from SPV for sites S1–S6, respectively, compared to the wind resources. This study serves as a source of reliable technical information to relevant government agencies, policy makers, and investors in making informed decisions on optimal investment in the hydrogen economy of Nigeria.
Funding: supported by the National Natural Science Foundation of China (Grant No. 42101089), the Sichuan Science and Technology Program (2022YFS0586), and the Open Fund of the Key Laboratory of Mountain Hazards and Earth Surface Processes, Chinese Academy of Sciences.
Abstract: Rainfall-induced shallow landslides pose a significant geological hazard, necessitating precise monitoring and prediction for effective disaster mitigation. Most studies on landslide prediction have focused on optimizing machine learning (ML) algorithms, while very limited attention has been paid to enhancing data quality for improved predictive performance. This study employs strategic data augmentation (DA) techniques to enhance the accuracy of shallow landslide prediction. Using five DA methods, namely singular spectrum analysis (SSA), moving averages (MA), wavelet denoising (WD), variational mode decomposition (VMD), and linear interpolation (LI), we apply strategies such as smoothing, denoising, trend decomposition, and synthetic data generation to improve the training dataset. Four machine learning algorithms, i.e., artificial neural network (ANN), recurrent neural network (RNN), one-dimensional convolutional neural network (CNN1D), and long short-term memory (LSTM), are used to forecast landslide displacement. A case study of a landslide in southwest China shows the effectiveness of our approach in predicting landslide displacements, despite the inherent limitations of the monitoring dataset. VMD proves the most effective for smoothing and denoising, improving R^(2), RMSE, and MAPE by 172.16%, 71.82%, and 98.9%, respectively. SSA addresses missing data, while LI is effective with limited data samples, improving these metrics by 21.6%, 52.59%, and 47.87%, respectively. This study demonstrates the potential of DA techniques to mitigate the impact of data defects on landslide prediction accuracy, with implications for similar cases.
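Of the five DA methods listed, the moving average is the simplest to make concrete. The sketch below applies a centered moving average to a noisy displacement-like series and checks, via RMSE and MAPE, that the smoothed series sits closer to the underlying trend; the window size and the synthetic series are illustrative assumptions, not the paper's data or exact preprocessing.

```python
import math

def moving_average(series, window):
    """Centered moving average (window shrinks at the edges)."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo = max(0, i - half)
        hi = min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def rmse(y_true, y_pred):
    n = len(y_true)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n)

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes nonzero y_true)."""
    n = len(y_true)
    return 100 * sum(abs((a - b) / a) for a, b in zip(y_true, y_pred)) / n

# Synthetic displacement trend with alternating +/-0.4 measurement noise:
# smoothing pulls each point back toward the trend line.
trend = [1 + 0.5 * t for t in range(20)]
noisy = [v + ((-1) ** i) * 0.4 for i, v in enumerate(trend)]
smoothed = moving_average(noisy, 3)
```

The same pattern (augment, then score against held-out truth) applies to the other DA methods, with VMD or SSA substituted for the smoothing step.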
Funding: supported by the Beijing Education Commission Science and Technology Project (No. KM201811417005), the National Natural Science Foundation of China (No. 62173237), the Aeronautical Science Foundation of China (No. 20240055054001), the Open Fund of the State Key Laboratory of Satellite Navigation System and Equipment Technology (No. CEPNT2023A01), the Joint Fund of the Ministry of Natural Resources Key Laboratory of Spatiotemporal Perception and Intelligent Processing (No. 232203), the Civil Aviation Flight Technology and Flight Safety Engineering Technology Research Center of Sichuan (No. GY2024-02B), the Applied Basic Research Programs of Liaoning Province (No. 2025JH2/101300011), the General Project of the Liaoning Provincial Education Department (No. 20250054), and Research on Safety Intelligent Management Technology and Systems for Mixed Operations of General Aviation Aircraft in Low-Altitude Airspace (No. 310125011).
Abstract: Software systems play increasingly important roles in modern society, and the ability to withstand attacks is of great practical importance to crucial software systems; as a result, the structure and robustness of software systems have attracted a tremendous amount of interest in recent years. In this paper, based on the source code of Tar and MySQL, we propose an approach to generate coupled software networks and construct three kinds of directed software networks: the function call network, the weakly coupled network, and the strongly coupled network. The structural properties of these complex networks are extensively investigated. It is found that the average influence and the average dependence over all functions are the same. Moreover, eight attacking strategies and two robustness indicators (the weakly connected indicator and the strongly connected indicator) are introduced to analyze the robustness of software networks. This shows that the strongly coupled network is just a weakly connected network rather than a strongly connected one. For MySQL, the high in-degree strategy outperforms the other attacking strategies when the weakly connected indicator is used. On the other hand, the high out-degree strategy is a good choice when the strongly connected indicator is adopted. This work contributes to a better understanding of the structure and robustness of software networks.
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-DDRSP2501).
Abstract: This study introduces the type-I heavy-tailed Burr XII (TIHTBXII) distribution, a highly flexible and robust statistical model designed to address the limitations of conventional distributions in analyzing data characterized by skewness, heavy tails, and diverse hazard behaviors. We meticulously develop the TIHTBXII's mathematical foundations, including its probability density function (PDF), cumulative distribution function (CDF), and essential statistical properties, crucial for theoretical understanding and practical application. A comprehensive Monte Carlo simulation evaluates four parameter estimation methods: maximum likelihood (MLE), maximum product spacing (MPS), least squares (LS), and weighted least squares (WLS). The simulation results consistently show that as sample sizes increase, the bias and RMSE of all estimators decrease, with WLS and LS often demonstrating superior and more stable performance. Beyond theoretical development, we present a practical application of the TIHTBXII distribution in constructing a group acceptance sampling plan (GASP) for truncated life tests. This application highlights how the TIHTBXII model can optimize quality control decisions by minimizing the average sample number (ASN) while effectively managing consumer and producer risks. Empirical validation using real-world datasets, including "Active Repair Duration," "Groundwater Contaminant Measurements," and "Dominica COVID-19 Mortality," further demonstrates the TIHTBXII's superior fit compared to existing models. Our findings confirm the TIHTBXII distribution as a powerful and reliable alternative for accurately modeling complex data in fields such as reliability engineering and quality assessment, leading to more informed and robust decision-making.
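The base model being extended here is the classical Burr XII distribution, whose CDF and PDF have simple closed forms. The sketch below implements that baseline together with the log-likelihood that MLE maximizes; the type-I heavy-tailed transformation that defines TIHTBXII is not specified in the abstract, so only the classical two-parameter base is shown.

```python
import math

def burr_xii_cdf(x, c, k):
    """Classical Burr XII CDF: F(x) = 1 - (1 + x^c)^(-k), x > 0,
    with shape parameters c, k > 0 (base model, not the TIHTBXII variant)."""
    return 1 - (1 + x ** c) ** (-k)

def burr_xii_pdf(x, c, k):
    """Density obtained by differentiating the CDF above."""
    return c * k * x ** (c - 1) * (1 + x ** c) ** (-k - 1)

def log_likelihood(data, c, k):
    """Objective maximized by MLE, one of the four estimators compared."""
    return sum(math.log(burr_xii_pdf(x, c, k)) for x in data)
```

The heavy right tail comes from the polynomial decay of the survival function, (1 + x^c)^(-k), which is what makes the family attractive for skewed reliability and mortality data of the kind listed above.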
Funding: supported by the Shanghai Industrial Collaborative Innovation Fund (HCXBCY-2021-001) and the Academy of Finland (349229).
Abstract: Airborne hyperspectral imaging spectrometers have been used for Earth observation over the past four decades. Despite the high sensitivity of push-broom hyperspectral imagers, they experience limited swath and wavelength coverage. In this study, we report the development of a push-broom airborne multimodular imaging spectrometer (AMMIS) that spans the ultraviolet (UV), visible near-infrared (VNIR), shortwave infrared (SWIR), and thermal infrared (TIR) wavelengths. As an integral part of China's High-Resolution Earth Observation Program, AMMIS is intended for civilian applications and for validating key technologies for future spaceborne hyperspectral payloads. It has been mounted on aircraft platforms such as the Y-5, Y-12, and XZ-60. Since 2016, AMMIS has been used to perform more than 30 flight campaigns and gather more than 200 TB of hyperspectral data. This study describes the system design, calibration techniques, performance tests, flight campaigns, and applications of AMMIS. The system integrates UV, VNIR, SWIR, and TIR modules, which can be operated in combination or individually based on the application requirements. Each module includes three spectrometers, utilizing field-of-view (FOV) stitching technology to achieve a 40° FOV, thereby enhancing operational efficiency. We designed advanced optical systems for all modules, particularly for the TIR module, and employed cryogenic optical technology to maintain optical system stability at 100 K. Both laboratory and in-flight calibrations were conducted to improve preprocessing accuracy and produce high-quality hyperspectral data. AMMIS features more than 1400 spectral bands, with spectral sampling intervals of 0.1 nm for UV, 2.4 nm for VNIR, 3 nm for SWIR, and 32 nm for TIR. In addition, the instantaneous fields of view (IFoVs) of the four modules are 0.5, 0.25, 0.5, and 1 mrad, respectively, with the VNIR module achieving an IFoV of 0.125 mrad in the high-spatial-resolution mode. This study reports on land-cover surveys, pollution gas detection, mineral exploration, coastal water detection, and plant investigations conducted using AMMIS, highlighting its excellent performance. Furthermore, we present three hyperspectral datasets with diverse scene distributions and categories suitable for developing artificial intelligence algorithms. This study paves the way for next-generation airborne and spaceborne hyperspectral payloads and serves as a valuable reference for hyperspectral sensor designers and data users.
Funding: supported in part by the National Key Research and Development Project (2023YFE0206200), the National Natural Science Foundation of China (U23B2058), the Guangdong Regional Joint Foundation Key Project (2022B1515120076), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2025-00555463 & RS-2025-25456394), the Tianjin Top Scientist Studio Project (24JRRCRC00030), and the Tianjin Belt and Road Joint Laboratory (24PTLYHZ00250).
Abstract: Both evolutionary computation (EC) and multiagent systems (MAS) study the emergence of intelligence through the interaction and cooperation of a group of individuals. EC focuses on solving various complex optimization problems, while MAS provides a flexible model for distributed artificial intelligence. Since their group interaction mechanisms can be borrowed from each other, many studies have attempted to combine EC and MAS. With the rapid development of the Internet of Things, the confluence of EC and MAS has become more and more important, and related articles have shown a continuously growing trend during the last decades. In this survey, we first elaborate on the mutual assistance of EC and MAS from two aspects: agent-based EC and EC-assisted MAS. Agent-based EC aims to introduce characteristics of MAS into EC to improve the performance and parallelism of EC, while EC-assisted MAS aims to use EC to better solve optimization problems in MAS. Furthermore, we review studies that combine the cooperation mechanisms of EC and MAS, which greatly leverage the strengths of both sides. A description framework is built to elaborate on existing studies. Promising future research directions are also discussed in conjunction with emerging technologies and real-world applications.
Funding: Partially supported by MRC (MC_PC_17171), the Royal Society (RP202G0230), BHF (AA/18/3/34220), the Hope Foundation for Cancer Research (RM60G0680), GCRF (20P2PF11), the Sino-UK Industrial Fund (RP202G0289), LIAS (20P2ED10, 20P2RE969), the Data Science Enhancement Fund (20P2RE237), Fight for Sight (24NN201), the Sino-UK Education Fund (OP202006), and BBSRC (RM32G0178B8).
Abstract: The Bat algorithm, a metaheuristic optimization technique inspired by the foraging behaviour of bats, has been employed to tackle a wide range of optimization problems. Known for its ease of implementation, parameter tunability, and strong global search capability, the algorithm finds application across diverse problem domains. However, in the face of increasingly complex optimization challenges, the Bat algorithm encounters certain limitations, such as slow convergence and sensitivity to initial solutions. To tackle these challenges, the present study incorporates a range of optimization components into the Bat algorithm, proposing a variant called PKEBA. A projection screening strategy mitigates sensitivity to initial solutions by enhancing the quality of the initial solution set; a kinetic adaptation strategy reforms exploration patterns; and an elite communication strategy enhances group interaction to help the algorithm escape local optima. The effectiveness of the proposed PKEBA is then rigorously evaluated on 30 benchmark functions from IEEE CEC2014, featuring ablation experiments and comparative assessments against classical algorithms and their variants; real-world engineering problems are employed as further validation. The results conclusively demonstrate that PKEBA exhibits superior convergence and precision compared to existing algorithms.
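For readers unfamiliar with the baseline that PKEBA extends, the classical Bat algorithm can be sketched as follows (a minimal illustration of Yang's original scheme, not the PKEBA variant; all parameter values are illustrative defaults):

```python
import math
import random

def bat_algorithm(obj, dim, n_bats=20, n_iter=200, f_min=0.0, f_max=2.0,
                  alpha=0.9, gamma=0.9, lower=-5.0, upper=5.0, seed=0):
    """Minimise obj over [lower, upper]^dim with the classical Bat algorithm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats   # loudness A_i, decreases on acceptance
    rate = [0.5] * n_bats   # pulse emission rate r_i, increases over time
    best = min(pos, key=obj)[:]
    best_val = obj(best)
    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            # Frequency-tuned global move toward the current best.
            f = f_min + (f_max - f_min) * rng.random()
            cand = []
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * f
                cand.append(min(upper, max(lower, pos[i][d] + vel[i][d])))
            # Occasional local random walk around the best solution.
            if rng.random() > rate[i]:
                avg_loud = sum(loud) / n_bats
                cand = [min(upper, max(lower,
                            best[d] + 0.01 * avg_loud * rng.gauss(0, 1)))
                        for d in range(dim)]
            val = obj(cand)
            # Accept improvements probabilistically, then adapt A_i and r_i.
            if val <= obj(pos[i]) and rng.random() < loud[i]:
                pos[i] = cand
                loud[i] *= alpha
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if val < best_val:
                best, best_val = cand[:], val
    return best, best_val

sphere = lambda x: sum(v * v for v in x)
sol, val = bat_algorithm(sphere, dim=5)
```

The abstract's noted weaknesses are visible here: the quality of the random initial population directly shapes the search, which is what PKEBA's projection screening strategy is described as addressing.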
Funding: Funded by the National Natural Science Foundation of China (Grant Nos. U21A6001, 42261144687, and 42175173), a project supported by the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (Grant No. SML2023SP208), and the Guangdong Basic and Applied Basic Research Foundation (2023A1515240036).
Abstract: Based on the C-Coupler platform, the semi-unstructured climate system model Synthesis Community Integrated Model version 2 (SYCIM2.0) has been developed at the School of Atmospheric Sciences, Sun Yat-sen University. SYCIM2.0 aims to meet the demand for seamless climate prediction through accurate climate simulations and projections. This paper provides an overview of SYCIM2.0 and highlights its key features, especially the coupling of an unstructured ocean model and the tuning process. An extensive evaluation of its performance, focusing on the East Asian Summer Monsoon (EASM), is presented based on long-term simulations with fixed external forcing. The results suggest that after nearly 240 years of integration, SYCIM2.0 reaches a quasi-equilibrium state, albeit with small trends in the net radiation flux at the top of the atmosphere (TOA) and at Earth's surface, as well as in the global mean near-surface temperature. Compared with observational and reanalysis data, the model realistically simulates the spatial patterns of sea surface temperature (SST) and precipitation centers, including their annual cycles, as well as the lower-level wind fields in the EASM region. However, it exhibits a weakened and eastward-shifted Western Pacific Subtropical High (WPSH), resulting in an associated precipitation bias. SYCIM2.0 robustly captures the dominant mode of the EASM and its close relationship with the El Niño-Southern Oscillation (ENSO) but performs relatively poorly in simulating the second leading mode and the associated air–sea interaction processes. Further comprehensive evaluations of SYCIM2.0 will be conducted in future studies.
Funding: Supported by the National Natural Science Foundation of China (82225049, 72104155), the Sichuan Provincial Central Government Guides Local Science and Technology Development Special Project (2022ZYD0127), and the 1·3·5 Project for Disciplines of Excellence, West China Hospital, Sichuan University (ZYGD23004).
Abstract: Background: In recent years, there has been a growing trend in observational studies that make use of routinely collected healthcare data (RCD). These studies rely on algorithms to identify specific health conditions (e.g., diabetes or sepsis) for statistical analyses. However, there has been substantial variation in algorithm development and validation, leading to frequently suboptimal performance and posing a significant threat to the validity of study findings. Unfortunately, these issues are often overlooked. Methods: We systematically developed guidance for the development, validation, and evaluation of algorithms designed to identify health status (DEVELOP-RCD). Our initial efforts involved conducting both a narrative review and a systematic review of published studies on the concepts and methodological issues related to algorithm development, validation, and evaluation. Subsequently, we conducted an empirical study on an algorithm for identifying sepsis. Based on these findings, we formulated a specific workflow and recommendations for algorithm development, validation, and evaluation within the guidance. Finally, the guidance underwent independent review by a panel of 20 external experts, who then convened a consensus meeting to finalize it. Results: A standardized workflow for algorithm development, validation, and evaluation was established. Guided by specific health status considerations, the workflow comprises four integrated steps: assessing an existing algorithm's suitability for the target health status; developing a new algorithm using recommended methods; validating the algorithm using prescribed performance measures; and evaluating the impact of the algorithm on study results. Additionally, 13 good practice recommendations were formulated with detailed explanations, and a practical study on sepsis identification was included to demonstrate the application of the guidance. Conclusions: The establishment of this guidance is intended to aid researchers and clinicians in the appropriate and accurate development and application of algorithms for identifying health status from RCD. It has the potential to enhance the credibility of findings from observational studies involving RCD.