The malicious dissemination of hate speech via compromised accounts, automated bot networks, and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been trained in Spanish and (2) in-context learning techniques (zero- and few-shot learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal), and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall, and F1-score, together with stability-oriented metrics that assess the transition from zero-shot to few-shot prompting (Zero-to-Few Shot Retention and Zero-to-Few Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66%, depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range approximately 0%–39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen3 8B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation and the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
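The abstract names two stability metrics, Zero-to-Few Shot Retention and Zero-to-Few Shot Gain, without giving formulas. A minimal sketch of plausible definitions (these definitions are our assumption, not the paper's) could compare per-task F1-scores between the two prompting regimes:

```python
def zero_to_few_gain(f1_zero, f1_few):
    """Hypothetical definition: absolute F1 improvement from zero- to few-shot."""
    return f1_few - f1_zero

def zero_to_few_retention(f1_zero, f1_few, eps=1e-9):
    """Hypothetical definition: fraction of zero-shot F1 preserved, capped at 1."""
    return min(f1_few / (f1_zero + eps), 1.0)

# Illustrative (not reported) per-task scores: (zero-shot F1, few-shot F1)
scores = {"type_of_speech": (0.31, 0.45), "intensity": (0.40, 0.10)}
for task, (z, f) in scores.items():
    print(task, round(zero_to_few_gain(z, f), 3), round(zero_to_few_retention(z, f), 3))
```

A retention near 1 signals a stable transition between regimes, while a negative gain flags a task where extra in-context examples actually hurt.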
The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA-RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
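The core idea of learning from multiple logically equivalent reasoning chains can be sketched as a loss that averages per-token negative log-likelihood across the diverse chains for one problem. This is a simplified sketch of the general principle; the exact LESFT objective is not specified in the abstract:

```python
def multi_chain_loss(chain_token_logps):
    """Average NLL over logically equivalent but linguistically diverse chains.

    chain_token_logps: list of chains, each a list of per-token log-probabilities
    assigned by the model to that chain's tokens.
    """
    per_chain_nll = [-sum(lps) / len(lps) for lps in chain_token_logps]
    return sum(per_chain_nll) / len(per_chain_nll)

# Two equivalent chains for the same problem, with illustrative log-probs
loss = multi_chain_loss([[-0.1, -0.3], [-0.2, -0.2]])
```

Averaging over chains lets every gradient step carry supervisory signal from several valid solution paths, which is one way richer supervision per sample could arise.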
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimal time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, with the proposed technology, the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that apply no privacy protection strategy, while the congestion probability remains below 0.001%, as required for 6G services.
Graph neural networks (GNNs) have shown strong performance in node classification tasks, yet most existing models rely on uniform or shared-weight aggregation, lacking flexibility in modeling the varying strength of relationships among nodes. This paper proposes a novel graph coupling convolutional model that introduces an adaptive weighting mechanism to assign distinct importance to neighboring nodes based on their similarity to the central node. Unlike traditional methods, the proposed coupling strategy enhances the interpretability of node interactions while maintaining competitive classification performance. The model operates in the spatial domain, utilizing adjacency-list structures for efficient convolution and addressing the limitations of weight sharing through a coupling-based similarity computation. Extensive experiments are conducted on five graph-structured datasets (Cora, Citeseer, PubMed, Reddit, and BlogCatalog), as well as a custom topology dataset constructed from the Open University Learning Analytics Dataset (OULAD) educational platform. Results demonstrate that the proposed model achieves good classification accuracy while significantly reducing training time through direct second-order neighbor fusion and data preprocessing. Moreover, analysis of neighborhood order reveals that considering third-order neighbors offers limited accuracy gains but introduces considerable computational overhead, confirming the efficiency of first- and second-order convolution in practical applications. Overall, the proposed graph coupling model offers a lightweight, interpretable, and effective framework for multi-label node classification in complex networks.
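Similarity-driven neighbor weighting of the kind described can be sketched as cosine similarity between the central node's features and each neighbor's, normalized with a softmax. The specific coupling function is our assumption for illustration; the paper's exact formulation may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def coupled_aggregate(center, neighbors):
    """Aggregate neighbor features, weighting each neighbor by the softmax
    of its similarity to the central node (adaptive, not shared weights)."""
    sims = [cosine(center, n) for n in neighbors]
    exps = [math.exp(s) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(center)
    return [sum(w * n[d] for w, n in zip(weights, neighbors)) for d in range(dim)]
```

With a single neighbor the softmax weight is 1 and the aggregate reduces to that neighbor's features; dissimilar neighbors receive proportionally less influence.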
The author proposes a dual-layer source-grid-load-storage collaborative planning model based on Benders decomposition to optimize the low-carbon and economic performance of the distribution network. The model plans the configuration of photovoltaics (3.8 MW), wind power (2.5 MW), energy storage (2.2 MWh), and SVC (1.2 Mvar) through interaction between the upper and lower layers, and modifies lines 2–3, 8–9, etc. to improve transmission capacity and voltage stability. The author uses the normal distribution and the Monte Carlo method to model load uncertainty, combined with the Weibull distribution to describe wind-speed characteristics. Compared with the traditional three-layer model (TLM), the Benders-decomposition-based two-layer model (BLBD) achieves a 58.1% reduction in convergence time (5.36 vs. 12.78 h), a 51.1% reduction in iteration count (23 vs. 47), an 8.07% reduction in total cost (12.436 vs. 13.528 million yuan), and a 9.62% reduction in carbon emissions (12,456 vs. 13,782 t). After optimization, the peak-valley difference decreased from 4.1 MW to 2.9 MW, the renewable energy consumption rate reached 93.4%, and the energy storage efficiency was 87.6%. The model has been validated on the IEEE 33-node system, demonstrating its superiority in terms of economy, low carbon emissions, and reliability.
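The uncertainty modeling step (normally distributed load plus Weibull-distributed wind speed, sampled by Monte Carlo) can be sketched with standard-library inverse-transform sampling. The distribution parameters below are illustrative placeholders, not values from the paper:

```python
import math
import random

def sample_load(mean_mw, std_mw, rng):
    """Draw one load scenario from a normal distribution, truncated at zero."""
    return max(0.0, rng.gauss(mean_mw, std_mw))

def sample_wind_speed(scale, shape, rng):
    """Draw one wind speed from a Weibull distribution via the inverse CDF:
    v = scale * (-ln(1 - U))**(1/shape)."""
    u = rng.random()
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

rng = random.Random(42)
loads = [sample_load(3.0, 0.4, rng) for _ in range(1000)]       # MW, assumed stats
winds = [sample_wind_speed(8.0, 2.0, rng) for _ in range(1000)]  # m/s, assumed stats
```

Each Monte Carlo scenario pairs one load draw with one wind draw, and the planning layers are then evaluated over the scenario set.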
It is well known that aluminum and copper exhibit structural phase transformations in quasi-static and dynamic measurements, including shock-wave loading. However, the dependence of these phase transformations on the crystallographic direction of shock loading has not been revealed over a wide range of directions. In this work, we calculated the shock Hugoniot for aluminum and copper in different crystallographic directions ([100], [110], [111], [112], [102], [114], [123], [134], [221], and [401]) of shock compression using molecular dynamics (MD) simulations. The results showed a high pressure (>160 GPa for Cu and >40 GPa for Al) for the FCC-to-BCC transition. In copper, different characteristics of the phase transition are observed depending on the loading direction, with the [100] compression direction being the weakest. The FCC-to-BCC transition for copper lies in the range of 150–220 GPa, which is consistent with the existing experimental data. Due to the high transition pressure, the BCC phase transition in copper competes with melting. In aluminum, the FCC-to-BCC transition is observed for all studied directions at pressures between 40 and 50 GPa, well below melting. In all considered cases we observe the coexistence of HCP and BCC phases during the FCC-to-BCC transition, which is consistent with the experimental data and atomistic calculations; this HCP phase forms in the course of accompanying plastic deformation with dislocation activity in the parent FCC phase. The plasticity incipience is also anisotropic in both metals, owing to the difference in the projections of stress on the slip plane for different orientations of the FCC crystal. MD modeling results demonstrate a strong dependence of the FCC-to-BCC transition on the crystallographic direction in which the material is loaded in copper crystals. However, MD simulation data can only be obtained for specific points in the stereographic direction space; therefore, for a more comprehensive understanding of the phase-transition process, a feed-forward neural network was trained using the MD modeling data. The trained machine-learning model allowed us to construct continuous stereographic maps of phase transitions as a function of stress in the shock-compressed state of the metal. Due to the appearance and growth of multiple centers of the new phase, the FCC-to-BCC transition leads to the formation of a polycrystalline structure from the parent single crystal.
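The anisotropy of plasticity incipience attributed to "projections of stress on the slip plane" can be illustrated with a textbook Schmid-factor calculation over the twelve FCC {111}<110> slip systems. This is a standard crystallography computation used here for illustration, not code from the paper:

```python
import math

def schmid_factor(load_dir, plane_normal, slip_dir):
    """m = |cos(phi)| * |cos(lambda)|: the resolved-shear-stress fraction for
    uniaxial loading along load_dir on the given slip system."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    l, n, s = unit(load_dir), unit(plane_normal), unit(slip_dir)
    cos_phi = abs(sum(a * b for a, b in zip(l, n)))
    cos_lam = abs(sum(a * b for a, b in zip(l, s)))
    return cos_phi * cos_lam

def max_schmid_fcc(load_dir):
    """Maximum Schmid factor over the FCC {111}<110> slip systems."""
    normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    dirs = [(0, 1, -1), (1, 0, -1), (1, -1, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    best = 0.0
    for n in normals:
        for s in dirs:
            if sum(a * b for a, b in zip(n, s)) == 0:  # slip dir lies in plane
                best = max(best, schmid_factor(load_dir, n, s))
    return best
```

For [100] loading the maximum is 1/sqrt(6) ~ 0.408, while for [111] it drops to ~0.272, so the onset of dislocation slip, and hence of the accompanying HCP formation, shifts with loading direction.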
Owing to intensified globalization and informatization, the structures of the urban scale hierarchy and of urban networks between cities have become increasingly intertwined, resulting in different spatial effects. Therefore, this paper analyzes the spatial interaction between the urban scale hierarchy and urban networks in China from 2019 to 2023, drawing on Baidu migration data and employing a spatial simultaneous equation model. The results reveal a significant positive spatial correlation between cities with higher hierarchy and those with greater network centrality. Within a static framework, we identify a positive interaction between urban scale hierarchy and urban network centrality, while their spatial cross-effects manifest as negative neighborhood interactions based on geographical distance and positive cross-scale interactions shaped by network connections. Within a dynamic framework, changes in urban scale hierarchy and urban networks are mutually reinforcing, thereby widening disparities within the urban hierarchy. Furthermore, an increase in a city's network centrality has a dampening effect on the population growth of neighboring cities and network-connected cities. This study enhances understanding of the spatial organisation of urban systems and offers insights for coordinated regional development.
Large-scale language models (LLMs) have achieved significant breakthroughs in natural language processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-efficient fine-tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the trainable parameters of a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
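The trade-off between adapter rank and layer size can be made concrete by counting LoRA's trainable parameters: an adapter on a d_out x d_in weight adds an A (rank x d_in) and a B (d_out x rank) matrix. The layer dimensions below are LLaMA-like values assumed for illustration, not figures from the study:

```python
def lora_params(d_in, d_out, rank):
    """Trainable parameters added by a LoRA adapter on a d_out x d_in layer."""
    return rank * (d_in + d_out)

# Assumed dimensions: an MLP up-projection (larger) vs. an attention q_proj (smaller)
mlp_adapter = lora_params(4096, 11008, 8)
attn_adapter = lora_params(4096, 4096, 8)

# Fraction of the full layer's parameters that is trainable under LoRA
mlp_fraction = mlp_adapter / (4096 * 11008)
attn_fraction = attn_adapter / (4096 * 4096)
```

At equal rank the MLP adapter holds more absolute parameters yet a smaller fraction of its layer, which is consistent with the finding that larger layers tolerate rank reductions better.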
Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows remains underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. Fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, enables direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model are available at https://github.com/YYgroup/AutoCFD.
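The generate-run-correct orchestration described can be sketched as a generic control loop over caller-supplied callables (in the real system these would wrap the fine-tuned LLM and an OpenFOAM run; here they are stubs, and the loop structure itself is our assumption based on the abstract):

```python
def run_with_correction(generate, simulate, correct, prompt, max_iters=3):
    """Generic generate-run-correct loop.

    generate(prompt)      -> initial configuration
    simulate(config)      -> (ok: bool, log: str)
    correct(config, log)  -> revised configuration
    Returns (config, attempts_used) on success, (None, max_iters) on failure.
    """
    config = generate(prompt)
    for attempt in range(1, max_iters + 1):
        ok, log = simulate(config)
        if ok:
            return config, attempt
        config = correct(config, log)
    return None, max_iters

# Stub example: the solver fails once, then succeeds after one correction.
calls = {"n": 0}
def gen(prompt): return "cfg0"
def sim(config):
    calls["n"] += 1
    return (calls["n"] >= 2, "divergence in pressure solver")
def corr(config, log): return config + "+fix"

cfg, attempts = run_with_correction(gen, sim, corr, "lid-driven cavity flow")
```

Counting `attempts` per case is also how a first-attempt success rate like the reported 82.6% would be measured.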
In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the large language model LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Landslide susceptibility mapping (LSM) plays a crucial role in assessing geological risks. Current LSM techniques face a significant challenge in achieving accurate results due to uncertainties associated with regional-scale geotechnical parameters. To explore rainfall-induced LSM, this study proposes a hybrid model that combines a physically-based probabilistic model (PPM) with a convolutional neural network (CNN). The PPM effectively captures the spatial distribution of landslides by incorporating the probability of failure (POF), which accounts for the slope stability mechanism under rainfall conditions and characterizes the variation of POF caused by parameter uncertainties. The CNN is used as a binary classifier to capture the spatial and channel correlation between landslide conditioning factors and the probability of landslide occurrence. An OpenCV image enhancement technique is utilized to extract non-landslide points based on the POF of landslides. The proposed model comprehensively considers physical mechanics when selecting non-landslide samples, effectively filtering out samples that do not adhere to physical principles and reducing the risk of overfitting. For the landslide case of the Niangniangba area of Gansu Province, China, the proposed PPM-CNN hybrid model delivers higher prediction accuracy, with an area under the curve (AUC) value of 0.85, compared with the individual CNN model (AUC = 0.61) and the PPM (AUC = 0.74). The model can also consider the statistical correlation and non-normal probability distributions of model parameters. These results offer practical guidance for future research on rainfall-induced LSM at the regional scale.
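The probability-of-failure idea behind the PPM can be illustrated with a Monte Carlo loop over a simplified infinite-slope factor of safety. Both the slope model (no pore pressure) and the parameter statistics below are illustrative assumptions, not the paper's PPM:

```python
import math
import random

def factor_of_safety(c, phi_deg, gamma=19.0, depth=2.0, slope_deg=35.0):
    """Simplified infinite-slope FoS: resisting / driving stress.
    c in kPa, phi in degrees, unit weight gamma in kN/m^3, depth in m."""
    b = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c + gamma * depth * math.cos(b) ** 2 * math.tan(phi)
    driving = gamma * depth * math.sin(b) * math.cos(b)
    return resisting / driving

def probability_of_failure(n=20000, seed=0):
    """POF = fraction of parameter draws with FoS < 1 (assumed c, phi stats)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        c = max(0.0, rng.gauss(10.0, 3.0))   # cohesion, kPa
        phi = rng.gauss(30.0, 4.0)           # friction angle, degrees
        if factor_of_safety(c, phi) < 1.0:
            fails += 1
    return fails / n
```

Cells with low POF are physically implausible landslide locations, which is the logic the hybrid model uses to screen non-landslide samples.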
Timely and accurate forecasting of storm surges can effectively prevent typhoon storm surges from causing large economic losses and casualties in coastal areas. At present, numerical model forecasting consumes too many resources and takes too long to compute, while neural network forecasting lacks regional data to train regional forecasting models. In this study, we used the DUAL wind model to build typhoon wind fields and constructed a typhoon database of 75 processes in the northern South China Sea using the coupled Advanced Circulation-Simulating Waves Nearshore (ADCIRC-SWAN) model. Then, a neural network with a Res-U-Net structure was trained on the typhoon database to forecast the typhoon processes in the validation dataset, and an excellent storm surge forecasting effect was achieved in the Pearl River Estuary region. The storm surge forecasting for stronger typhoons was further improved by adding a branch structure and transfer learning.
This study discusses HIV disease with a novel kind of complex dynamical generalized and piecewise operator in the sense of the classical and Atangana-Baleanu (AB) derivatives of arbitrary order. The HIV infection model has a susceptible class, a recovered class, and a case of infection divided into three sub-levels or categories. The total time interval is split into two subintervals, which are investigated under the ordinary and fractional-order operators of the AB derivative, respectively. The proposed model is tested separately for existence and uniqueness of solutions on both intervals. The numerical solution of the proposed model is obtained by a piecewise numerical iterative scheme based on Newton's polynomial. The method is established for piecewise derivatives under the natural order and the non-singular Mittag-Leffler law. The crossover, or bending, characteristics in the dynamical system of HIV are easily examined through this approach, which has a memory effect useful for controlling the disease. This study uses a neural network (NN) technique to obtain a better set of weights with low residual errors, with the number of epochs set to 1000. The obtained figures represent the approximate solution and the absolute error, which are tested with the NN to train the data accurately.
Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling escalating into signaling storms that halted network operations and caused the respective telecom companies big financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. We present a thorough survey of the causes of signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is a tabular comparison of the possible causes and solutions/countermeasures with respect to their effect on several important network aspects, such as architecture, additional signaling, and fidelity. This paper presents an update and extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
Age-related osteoporosis poses a significant challenge in musculoskeletal health; this condition, characterized by reduced bone density and increased fracture susceptibility in older individuals, necessitates a better understanding of the underlying molecular and cellular mechanisms. Emerging evidence suggests that osteocytes are the pivotal orchestrators of bone remodeling and represent novel therapeutic targets for age-related bone loss. Our study uses the prematurely aged PolgD257A/D257A (PolgA) mouse model to scrutinize age- and sex-related alterations in musculoskeletal health parameters (frailty, grip strength, gait data), bone, and particularly the osteocyte lacuno-canalicular network (LCN). Moreover, a new quantitative in silico image analysis pipeline is used to evaluate alterations in the osteocyte network with aging. Our findings underscore the pronounced degenerative changes in musculoskeletal health parameters, bone, and the osteocyte LCN in PolgA mice as early as 40 weeks, with more prominent alterations evident in aged males. Given the comparable aging signs and age-related degeneration of bone and the osteocyte network observed in naturally aging mice and elderly humans, our findings suggest that PolgA mice serve as a valuable model for studying the cellular mechanisms underlying age-related bone loss.
[Objective] To construct an Escherichia coli mutant strain that accumulates pyruvate by genetic modification guided by a genome-scale metabolic network model. [Methods] Using a genome-scale metabolic network model as a guide, we simulated pyruvate production in E. coli, screened key genes in metabolic pathways, and developed gene editing procedures accordingly. We knocked out the acetate kinase gene ackA, phosphate acetyltransferase gene pta, alcohol dehydrogenase gene adhE, glycogen synthase gene glgA, glycogen phosphorylase gene glgP, phosphoribosyl pyrophosphate (PRPP) synthase gene prs, ribose 1,5-bisphosphate phosphokinase gene phnN, and transporter-encoding gene proP. Furthermore, we knocked in the transporter-encoding gene ompC, the flavodoxin gene fldA, and the D-serine ammonia-lyase gene dsdA. [Results] A shake-flask process with the genetically edited mutant strain MG1655-6-2 under anaerobic conditions produced pyruvate at a titer of 10.46 g/L and a yield of 0.69 g/g. Metabolomic analysis revealed a significant increase in the pyruvate level in the fermentation broth, accompanied by notable decreases in the levels of certain related metabolic byproducts. Through 5 L fed-batch fermentation and adaptive laboratory evolution, the strain finally achieved a pyruvate titer of 45.86 g/L. [Conclusion] This study illustrated the efficacy of a gene editing strategy predicted by a genome-scale metabolic network model in enhancing pyruvate accumulation in E. coli under anaerobic conditions and provided novel insights for microbial metabolic engineering.
Therapeutic monoclonal antibodies (mAbs) have garnered significant attention for their efficacy in treating a variety of diseases. However, some candidate antibodies exhibit non-specific binding to off-target proteins or other biomolecules, leading to high polyreactivity, which can compromise therapeutic efficacy and cause other complications, thereby reducing the approval rate of antibody drug candidates. Therefore, predicting the polyreactivity risk of therapeutic mAbs at an early stage of development is crucial. In this study, we fine-tuned six pre-trained protein language models (PLMs) to predict the polyreactivity of antibody sequences. The most effective model, named PolyXpert, demonstrated a sensitivity (SN) of 90.10%, specificity (SP) of 90.08%, accuracy (ACC) of 90.10%, F1-score of 0.9301, Matthews correlation coefficient (MCC) of 0.7654, and area under the curve (AUC) of 0.9672 on the external independent test dataset. These results suggest its potential as a valuable in-silico tool for assessing antibody polyreactivity and for selecting superior therapeutic mAb candidates for clinical development. Furthermore, we demonstrated that fine-tuned language model classifiers exhibit enhanced prediction robustness compared with classifiers trained on pre-trained model embeddings. PolyXpert is freely available at https://github.com/zzyywww/PolyXpert.
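The reported metrics (SN, SP, ACC, F1, MCC) are all standard functions of a binary confusion matrix, and computing them together is a useful sanity check when reproducing such results. The confusion-matrix counts below are illustrative, not the paper's:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sn = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)    # accuracy
    prec = tp / (tp + fp)                    # precision
    f1 = 2 * prec * sn / (prec + sn)
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / den if den else 0.0
    return {"SN": sn, "SP": sp, "ACC": acc, "F1": f1, "MCC": mcc}

m = binary_metrics(tp=50, fp=10, tn=30, fn=10)  # illustrative counts
```

Note that MCC uses all four cells of the matrix, which is why it is typically lower than accuracy on imbalanced data, matching the pattern in the reported figures (MCC 0.7654 vs. ACC 90.10%).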
Multi-agent systems often require good interoperability in the process of completing their assigned tasks. This paper first models the static structure and dynamic behavior of multi-agent systems based on a layered weighted scale-free community network and the susceptible-infected-recovered (SIR) model. To address the difficulty of describing changes in the structure and collaboration mode of the system under external factors, a two-dimensional Monte Carlo method and an improved dynamic Bayesian network are used to simulate the impact of external environmental factors on multi-agent systems. A collaborative information flow path optimization algorithm for agents under environmental factors is designed based on the Dijkstra algorithm. A method for evaluating system interoperability is designed based on simulation experiments, providing a reference for construction planning and optimization of the organizational application of the system. Finally, the feasibility of the method is verified through case studies.
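Since the path optimization is stated to build on the Dijkstra algorithm, a minimal weighted-graph implementation is a reasonable reference point (edge weights standing in for link costs between agents; the cost model itself is the paper's, not shown here):

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path on a weighted digraph {node: [(neighbor, weight), ...]}.
    Returns (cost, path); (inf, []) if dst is unreachable."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            path = [u]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

g = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
cost, path = dijkstra(g, "a", "c")
```

In the evaluation setting, rerunning this search after each simulated environmental perturbation yields the updated collaborative information flow paths.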
In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head (CH) nodes is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. A Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of local optima. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on a National Oceanic and Atmospheric Administration (NOAA) dataset. The results show that the DCRNN model-driven data reconstruction algorithm improves both reconstruction accuracy and reconstruction time performance. Co-simulation of FF-PIA clustering routing and DCRNN reveals that the proposed algorithm can effectively extend the network lifetime and reduce data transmission delay.
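The two search operators named in the abstract, Lévy flight and Gaussian perturbation of the best solution, can be sketched with the standard library. The Lévy step uses Mantegna's algorithm, a common choice in swarm optimizers; whether FF-PIA uses this exact construction is our assumption:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm:
    step = u / |v|^(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def gaussian_perturb(position, rng, scale=0.1):
    """Jitter the current best solution to help escape local optima."""
    return [x + rng.gauss(0.0, scale) for x in position]

rng = random.Random(1)
step = levy_step(rng)
best = gaussian_perturb([0.0, 0.0, 0.0], rng)
```

Heavy-tailed Lévy steps produce occasional long jumps for global exploration, while the small Gaussian jitter refines and diversifies the incumbent best, the balance the abstract describes.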
In recent years,discrete neuron and discrete neural network models have played an important role in the development of neural dynamics.This paper reviews the theoretical advantages of well-known discrete neuron models...In recent years,discrete neuron and discrete neural network models have played an important role in the development of neural dynamics.This paper reviews the theoretical advantages of well-known discrete neuron models,some existing discretized continuous neuron models,and discrete neural networks in simulating complex neural dynamics.It places particular emphasis on the importance of memristors in the composition of neural networks,especially their unique memory and nonlinear characteristics.The integration of memristors into discrete neural networks,including Hopfield networks and their fractional-order variants,cellular neural networks and discrete neuron models has enabled the study and construction of various neural models with memory.These models exhibit complex dynamic behaviors,including superchaotic attractors,hidden attractors,multistability,and synchronization transitions.Furthermore,the present paper undertakes an analysis of more complex dynamical properties,including synchronization,speckle patterns,and chimera states in discrete coupled neural networks.This research provides new theoretical foundations and potential applications in the fields of brain-inspired computing,artificial intelligence,image encryption,and biological modeling.展开更多
Funding: the research project LaTe4PoliticES (PID2022-138099OB-I00) funded by MCIN/AEI/10.13039/501100011033 and the European Fund for Regional Development (ERDF), a way to make Europe. Tomás Bernal-Beltrán is supported by the University of Murcia through the predoctoral programme.
Abstract: The malicious dissemination of hate speech via compromised accounts, automated bot networks and malware-driven social media campaigns has become a growing cybersecurity concern. Automatically detecting such content in Spanish is challenging due to linguistic complexity and the scarcity of annotated resources. In this paper, we compare two predominant AI-based approaches for the forensic detection of malicious hate speech: (1) fine-tuning encoder-only models that have been pretrained on Spanish and (2) in-context learning techniques (zero- and few-shot learning) with large-scale language models. Our approach goes beyond binary classification, proposing a comprehensive, multidimensional evaluation that labels each text by: (1) type of speech, (2) recipient, (3) level of intensity (ordinal) and (4) targeted group (multi-label). Performance is evaluated on an annotated Spanish corpus using standard metrics such as precision, recall and F1-score, together with stability-oriented metrics that assess the transition from zero-shot to few-shot prompting (Zero-to-Few-Shot Retention and Zero-to-Few-Shot Gain). The results indicate that fine-tuned encoder-only models (notably MarIA and BETO variants) consistently deliver the strongest and most reliable performance: in our experiments their macro F1-scores lie roughly in the range of 46%–66% depending on the task. Zero-shot approaches are much less stable and typically yield substantially lower performance (observed F1-scores range from approximately 0% to 39%), often producing invalid outputs in practice. Few-shot prompting (e.g., Qwen 38B, Mistral 7B) generally improves stability and recall relative to pure zero-shot, bringing F1-scores into a moderate range of approximately 20%–51%, but still falls short of fully fine-tuned models. These findings highlight the importance of supervised adaptation, and we discuss the potential of both paradigms as components in AI-powered cybersecurity and malware forensics systems designed to identify and mitigate coordinated online hate campaigns.
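The abstract names the two stability metrics but does not give their formulas. A minimal sketch under one plausible reading (absolute gain and relative retention of macro F1; the function names are hypothetical, not from the paper):

```python
def zfs_gain(f1_zero: float, f1_few: float) -> float:
    """Absolute F1 improvement when moving from zero-shot to few-shot prompting."""
    return f1_few - f1_zero

def zfs_retention(f1_zero: float, f1_few: float) -> float:
    """Ratio of few-shot to zero-shot F1: values above 1 mean few-shot
    retained and exceeded the zero-shot score. Undefined at f1_zero == 0,
    which matters here since some zero-shot runs scored 0%."""
    if f1_zero == 0:
        raise ValueError("retention undefined for a zero-shot F1 of 0")
    return f1_few / f1_zero

# Example with scores at the top of the ranges reported above:
print(round(zfs_gain(0.39, 0.51), 2))       # 0.12
print(round(zfs_retention(0.39, 0.51), 2))  # 1.31
```

The guard clause reflects a practical issue the abstract itself raises: zero-shot runs that produce invalid outputs can score 0%, making a ratio-based retention metric degenerate for those tasks.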
Funding: supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62276109. The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group Project number ORF-2025-585.
Abstract: The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Funding: funding from the European Commission through the Ruralities project (grant agreement no. 101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared to ad hoc networks with no privacy protection strategy, while congestion probability remains below 0.001%, as required in 6G services.
Funding: supported by the Sichuan Science and Technology Program [2023YFSY0026, 2023YFH0004] and Guangzhou Huashang University [2024HSZD01, HS2023JYSZH01].
Abstract: Graph neural networks (GNN) have shown strong performance in node classification tasks, yet most existing models rely on uniform or shared-weight aggregation, lacking flexibility in modeling the varying strength of relationships among nodes. This paper proposes a novel graph coupling convolutional model that introduces an adaptive weighting mechanism to assign distinct importance to neighboring nodes based on their similarity to the central node. Unlike traditional methods, the proposed coupling strategy enhances the interpretability of node interactions while maintaining competitive classification performance. The model operates in the spatial domain, utilizing adjacency-list structures for efficient convolution and addressing the limitations of weight sharing through a coupling-based similarity computation. Extensive experiments are conducted on five graph-structured datasets, including Cora, Citeseer, PubMed, Reddit, and BlogCatalog, as well as a custom topology dataset constructed from the Open University Learning Analytics Dataset (OULAD) educational platform. Results demonstrate that the proposed model achieves good classification accuracy while significantly reducing training time through direct second-order neighbor fusion and data preprocessing. Moreover, analysis of neighborhood order reveals that considering third-order neighbors offers limited accuracy gains but introduces considerable computational overhead, confirming the efficiency of first- and second-order convolution in practical applications. Overall, the proposed graph coupling model offers a lightweight, interpretable, and effective framework for multi-label node classification in complex networks.
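The core idea, weighting each neighbor by its similarity to the central node rather than sharing one weight, can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact model; the cosine similarity, softmax normalization, and all names are assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def coupled_aggregate(h, neighbors):
    """Similarity-weighted aggregation: each neighbor's contribution is a
    softmax over its cosine similarity to the central node, so closer
    neighbors (in feature space) carry more weight."""
    out = {}
    for v, nbrs in neighbors.items():
        sims = [cosine(h[u], h[v]) for u in nbrs]
        m = max(sims)
        exps = [math.exp(s - m) for s in sims]   # numerically stable softmax
        z = sum(exps)
        w = [e / z for e in exps]
        out[v] = [sum(wi * h[u][d] for wi, u in zip(w, nbrs))
                  for d in range(len(h[v]))]
    return out

# Toy graph: node 1 is similar to node 0, node 2 is orthogonal to it.
h = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0]}
adj = {0: [1, 2], 1: [0], 2: [0]}
agg = coupled_aggregate(h, adj)
print(agg[0])  # node 0's aggregate leans toward the similar neighbor, node 1
```

The weights also serve the interpretability claim: inspecting `w` per node shows which neighbors dominated each aggregation.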
Abstract: The author proposes a dual-layer source-grid-load-storage collaborative planning model based on Benders decomposition to optimize the low-carbon and economic performance of the distribution network. The model plans the configuration of photovoltaic (3.8 MW), wind power (2.5 MW), energy storage (2.2 MWh), and SVC (1.2 Mvar) through interaction between the upper and lower layers, and modifies lines 2–3, 8–9, etc. to improve transmission capacity and voltage stability. The author uses the normal distribution and the Monte Carlo method to model load uncertainty, and combines the Weibull distribution to describe wind speed characteristics. Compared to the traditional three-layer model (TLM), the Benders decomposition-based two-layer model (BLBD) achieves a 58.1% reduction in convergence time (5.36 vs. 12.78 h), a 51.1% reduction in iteration count (23 vs. 47 iterations), an 8.07% reduction in total cost (12.436 vs. 13.528 million yuan), and a 9.62% reduction in carbon emissions (12,456 vs. 13,782 t). After optimization, the peak-valley difference decreased from 4.1 to 2.9 MW, the renewable energy consumption rate reached 93.4%, and the energy storage efficiency was 87.6%. The model has been validated on the IEEE 33-node system, demonstrating its superiority in terms of economy, low carbon, and reliability.
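The reported percentages follow directly from the paired raw figures in the abstract; a quick check confirms they are internally consistent:

```python
def pct_reduction(new: float, old: float) -> float:
    """Percentage reduction of `new` relative to the baseline `old`."""
    return 100 * (old - new) / old

# Raw pairs from the abstract (BLBD vs. TLM):
print(round(pct_reduction(5.36, 12.78), 1))     # 58.1  convergence time, h
print(round(pct_reduction(23, 47), 1))          # 51.1  iteration count
print(round(pct_reduction(12.436, 13.528), 2))  # 8.07  total cost, M yuan
print(round(pct_reduction(12456, 13782), 2))    # 9.62  carbon emissions, t
```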
Funding: funded by the Ministry of Science and Higher Education of the Russian Federation, State assignments for research, registration No. 1024032600084-8-1.3.2. The study of grain growth and the formation of polycrystalline structure as a result of the phase transition (Section 6) was funded by the Russian Science Foundation, Project No. 24-71-00078, https://rscf.ru/en/project/24-71-00078/ (accessed on 01 December 2025). The study of the orientation dependence of the phase transition of aluminum in Section 3 was funded by the Russian Science Foundation, Project No. 24-19-00684, https://rscf.ru/en/project/24-19-00684/ (accessed on 01 December 2025).
Abstract: It is well known that aluminum and copper exhibit structural phase transformations in quasi-static and dynamic measurements, including shock wave loading. However, the dependence of phase transformations on crystallographic direction over a wide range of shock loading directions has not been revealed. In this work, we calculated the shock Hugoniot for aluminum and copper in different crystallographic directions ([100], [110], [111], [112], [102], [114], [123], [134], [221] and [401]) of shock compression using molecular dynamics (MD) simulations. The results showed a high pressure (>160 GPa for Cu and >40 GPa for Al) of the FCC-to-BCC transition. In copper, different characteristics of the phase transition are observed depending on the loading direction, with the [100] compression direction being the weakest. The FCC-to-BCC transition for copper is in the range of 150–220 GPa, which is consistent with the existing experimental data. Due to the high transition pressure, the BCC phase transition in copper competes with melting. In aluminum, the FCC-to-BCC transition is observed for all studied directions at pressures between 40 and 50 GPa, far beyond the melting. In all considered cases we observe the coexistence of HCP and BCC phases during the FCC-to-BCC transition, which is consistent with the experimental data and atomistic calculations; this HCP phase forms in the course of accompanying plastic deformation with dislocation activity in the parent FCC phase. The plasticity incipience is also anisotropic in both metals, which is due to the difference in the projections of stress on the slip plane for different orientations of the FCC crystal. MD modeling results demonstrate a strong dependence of the FCC-to-BCC transition on the crystallographic direction in which the material is loaded in the copper crystals. However, MD simulation data can only be obtained for specific points in the stereographic direction space; therefore, for a more comprehensive understanding of the phase transition process, a feed-forward neural network was trained using MD modeling data. The trained machine learning model allowed us to construct continuous stereographic maps of phase transitions as a function of stress in the shock-compressed state of the metal. Due to the appearance and growth of multiple centers of the new phase, the FCC-to-BCC transition leads to the formation of a polycrystalline structure from the parent single crystal.
Funding: under the auspices of the National Natural Science Foundation of China (Nos. 42371222, 41971167) and the Fundamental Scientific Research Funds of Central China Normal University (No. CCNU24ZZ120).
Abstract: Owing to intensified globalization and informatization, the structures of the urban scale hierarchy and urban networks between cities have become increasingly intertwined, resulting in different spatial effects. Therefore, this paper analyzes the spatial interaction between the urban scale hierarchy and urban networks in China from 2019 to 2023, drawing on Baidu migration data and employing a spatial simultaneous equation model. The results reveal a significant positive spatial correlation between cities with higher hierarchy and those with greater network centrality. Within a static framework, we identify a positive interaction between urban scale hierarchy and urban network centrality, while their spatial cross-effects manifest as negative neighborhood interactions based on geographical distance and positive cross-scale interactions shaped by network connections. Within a dynamic framework, changes in urban scale hierarchy and urban networks are mutually reinforcing, thereby widening disparities within the urban hierarchy. Furthermore, an increase in a city's network centrality has a dampening effect on the population growth of neighboring cities and network-connected cities. This study enhances understanding of the spatial organisation of urban systems and offers insights for coordinated regional development.
Funding: supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale language models (LLMs) have achieved significant breakthroughs in natural language processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-efficient fine-tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller layer. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
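The rank-versus-layer-size trade-off discussed above is easy to quantify: a rank-r LoRA adapter on a d_in x d_out weight adds r x (d_in + d_out) trainable parameters. A small sketch with illustrative shapes (the dimensions below are typical of 7B-class models, not taken from the paper):

```python
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters added by a rank-`rank` LoRA adapter:
    an A matrix of shape (d_in, rank) plus a B matrix of (rank, d_out)."""
    return rank * (d_in + d_out)

# Illustrative shapes: an MLP up-projection (4096 -> 11008) is much larger
# than a self-attention projection (4096 -> 4096), so the same rank buys
# proportionally more capacity on the MLP side.
mlp = lora_params(4096, 11008, rank=8)
attn = lora_params(4096, 4096, rank=8)
print(mlp, attn)  # 120832 65536
```

This is why cutting the rank on a large MLP layer removes more absolute parameters per unit of rank, consistent with the finding that larger layers tolerate smaller adapters.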
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52306126, 22350710788, 12432010, 11988102, 92270203) and the Xplore Prize.
Abstract: Configuring computational fluid dynamics (CFD) simulations typically demands extensive domain expertise, limiting broader access. Although large language models (LLMs) have advanced scientific computing, their use in automating CFD workflows is underdeveloped. We introduce a novel approach centered on domain-specific LLM adaptation. By fine-tuning Qwen2.5-7B-Instruct on NL2FOAM, our custom dataset of 28,716 natural-language-to-OpenFOAM configuration pairs with chain-of-thought (CoT) annotations, we enable direct translation from natural language descriptions to executable CFD setups. A multi-agent system orchestrates the process, autonomously verifying inputs, generating configurations, running simulations, and correcting errors. Evaluation on a benchmark of 21 diverse flow cases demonstrates state-of-the-art performance, achieving 88.7% solution accuracy and an 82.6% first-attempt success rate. This significantly outperforms larger general-purpose models such as Qwen2.5-72B-Instruct, DeepSeek-R1, and Llama3.3-70B-Instruct, while also requiring fewer correction iterations and maintaining high computational efficiency. The results highlight the critical role of domain-specific adaptation in deploying LLM assistants for complex engineering workflows. Our code and fine-tuned model have been deposited at https://github.com/YYgroup/AutoCFD.
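The generate-run-correct loop that the multi-agent system orchestrates can be sketched generically. This is a minimal illustration of the control flow only; the function names and error-feedback shape are assumptions, not the paper's agents:

```python
def run_with_correction(generate, simulate, max_iters=3):
    """Sketch of a generate -> simulate -> correct loop: the generator
    produces a configuration, the simulator either accepts it or returns
    an error message that is fed back for correction."""
    cfg = generate(None)                      # first attempt, no feedback
    for attempt in range(max_iters):
        ok, err = simulate(cfg)
        if ok:
            return cfg, attempt               # attempt == correction count
        cfg = generate(err)                   # regenerate using the error
    raise RuntimeError("no valid configuration within the iteration budget")

# Stub agents: the first draft fails once, the corrected one succeeds.
def gen(err):
    return "fixed" if err else "draft"

def sim(cfg):
    return (cfg == "fixed", None if cfg == "fixed" else "syntax error")

cfg, corrections = run_with_correction(gen, sim)
print(cfg, corrections)  # fixed 1
```

The "fewer correction iterations" result reported above corresponds to a smaller `corrections` count on average across benchmark cases.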
Abstract: In the rapidly evolving landscape of natural language processing (NLP) and sentiment analysis, improving the accuracy and efficiency of sentiment classification models is crucial. This paper investigates the performance of two advanced models, the large language model LLaMA and the NLP model BERT, in the context of airline review sentiment analysis. Through fine-tuning, domain adaptation, and the application of few-shot learning, the study addresses the subtleties of sentiment expressions in airline-related text data. Employing predictive modeling and comparative analysis, the research evaluates the effectiveness of Large Language Model Meta AI (LLaMA) and Bidirectional Encoder Representations from Transformers (BERT) in capturing sentiment intricacies. Fine-tuning, including domain adaptation, enhances the models' performance in sentiment classification tasks. Additionally, the study explores the potential of few-shot learning to improve model generalization using minimal annotated data for targeted sentiment analysis. By conducting experiments on a diverse airline review dataset, the research quantifies the impact of fine-tuning, domain adaptation, and few-shot learning on model performance, providing valuable insights for industries aiming to predict recommendations and enhance customer satisfaction through a deeper understanding of sentiment in user-generated content (UGC). This research contributes to refining sentiment analysis models, ultimately fostering improved customer satisfaction in the airline industry.
Funding: funding support from the National Natural Science Foundation of China (Grant Nos. U22A20594, 52079045). Hong-Zhi Cui acknowledges the financial support of the China Scholarship Council (Grant No. CSC: 202206710014) for his research at Universitat Politecnica de Catalunya, Barcelona.
Abstract: Landslide susceptibility mapping (LSM) plays a crucial role in assessing geological risks. Current LSM techniques face a significant challenge in achieving accurate results due to uncertainties associated with regional-scale geotechnical parameters. To explore rainfall-induced LSM, this study proposes a hybrid model that combines a physically-based probabilistic model (PPM) with a convolutional neural network (CNN). The PPM is capable of effectively capturing the spatial distribution of landslides by incorporating the probability of failure (POF), considering the slope stability mechanism under rainfall conditions. This significantly characterizes the variation of POF caused by parameter uncertainties. The CNN was used as a binary classifier to capture the spatial and channel correlation between landslide conditioning factors and the probability of landslide occurrence. An OpenCV image enhancement technique was utilized to extract non-landslide points based on the POF of landslides. The proposed model comprehensively considers physical mechanics when selecting non-landslide samples, effectively filtering out samples that do not adhere to physical principles and reducing the risk of overfitting. The results indicate that the proposed PPM-CNN hybrid model presents a higher prediction accuracy, with an area under the curve (AUC) value of 0.85 on the landslide case of the Niangniangba area of Gansu Province, China, compared with the individual CNN model (AUC = 0.61) and the PPM (AUC = 0.74). This model can also consider the statistical correlation and non-normal probability distributions of model parameters. These results offer practical guidance for future research on rainfall-induced LSM at the regional scale.
Funding: supported by the National Natural Science Foundation of China (Grant No. 42076214) and the Natural Science Foundation of Shandong Province (Grant No. ZR2024QD057).
Abstract: Timely and accurate forecasting of storm surges can effectively prevent typhoon storm surges from causing large economic losses and casualties in coastal areas. At present, numerical model forecasting consumes too many resources and takes too long to compute, while neural network forecasting lacks regional data to train regional forecasting models. In this study, we used the DUAL wind model to build typhoon wind fields, and constructed a typhoon database of 75 processes in the northern South China Sea using the coupled Advanced Circulation-Simulating Waves Nearshore (ADCIRC-SWAN) model. Then, a neural network with a Res-U-Net structure was trained using the typhoon database to forecast the typhoon processes in the validation dataset, and an excellent storm surge forecasting effect was achieved in the Pearl River Estuary region. The storm surge forecasting effect for stronger typhoons was improved by adding a branch structure and transfer learning.
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23066).
Abstract: This study directs the discussion of HIV disease with a novel kind of complex dynamical generalized and piecewise operator in the sense of classical and Atangana-Baleanu (AB) derivatives having arbitrary order. The HIV infection model has a susceptible class, a recovered class, and an infected class divided into three different sub-levels or categories. The total time interval is split into two sub-intervals, which are investigated for ordinary and fractional-order operators of the AB derivative, respectively. The proposed model is tested separately for unique solutions and existence on both intervals. The numerical solution of the proposed model is treated by a piecewise numerical iterative scheme based on Newton's polynomial. The proposed method is established for piecewise derivatives under natural order and the non-singular Mittag-Leffler law. The crossover or bending characteristics in the dynamical system of HIV are easily examined through this research, which has a memory effect for controlling the said disease. This study uses a neural network (NN) technique to obtain a better set of weights with low residual errors, with the number of epochs set to 1000. The obtained figures represent the approximate solution and absolute error, which are tested with the NN to train the data accurately.
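For readers unfamiliar with the operator, the Atangana-Baleanu derivative in the Caputo sense is commonly written as follows (this is the standard textbook form, not quoted from the paper itself):

```latex
{}^{ABC}_{\;\;0}D_t^{\alpha} f(t)
  = \frac{AB(\alpha)}{1-\alpha}
    \int_0^t f'(\tau)\,
    E_{\alpha}\!\left(-\frac{\alpha}{1-\alpha}(t-\tau)^{\alpha}\right)
    \mathrm{d}\tau,
  \qquad 0<\alpha<1,
```

where $E_{\alpha}(z)=\sum_{k=0}^{\infty} z^{k}/\Gamma(\alpha k+1)$ is the one-parameter Mittag-Leffler function and $AB(\alpha)$ is a normalization function with $AB(0)=AB(1)=1$. The non-singular Mittag-Leffler kernel is what gives the model the memory effect mentioned in the abstract, and the piecewise construction switches between the classical derivative and this operator on the two sub-intervals.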
Funding: the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).
Abstract: Control signaling is mandatory for the operation and management of all types of communication networks, including the Third Generation Partnership Project (3GPP) mobile broadband networks. However, it consumes important and scarce network resources such as bandwidth and processing power. There have been several reports of control signaling turning into signaling storms, halting network operations and causing the respective telecom companies big financial losses. This paper draws its motivation from such real network disaster incidents attributed to signaling storms. In this paper, we present a thorough survey of the causes of the signaling storm problems in 3GPP-based mobile broadband networks and discuss in detail their possible solutions and countermeasures. We provide relevant analytical models to help quantify the effect of the potential causes and the benefits of their corresponding solutions. Another important contribution of this paper is the comparison of the possible causes and solutions/countermeasures, concerning their effect on several important network aspects such as architecture, additional signaling, and fidelity, in the form of a table. This paper presents an update and an extension of our earlier conference publication. To our knowledge, no similar survey study exists on the subject.
Funding: the European Research Council (ERC Advanced MechAGE, ERC-2016-ADG-741883) and the Swiss National Science Foundation (no. 188522).
Abstract: Age-related osteoporosis poses a significant challenge in musculoskeletal health; this condition, characterized by reduced bone density and increased fracture susceptibility in older individuals, necessitates a better understanding of the underlying molecular and cellular mechanisms. Emerging evidence suggests that osteocytes are the pivotal orchestrators of bone remodeling and represent novel therapeutic targets for age-related bone loss. Our study uses the prematurely aged PolgD257A/D257A (PolgA) mouse model to scrutinize age- and sex-related alterations in musculoskeletal health parameters (frailty, grip strength, gait data), bone, and particularly the osteocyte lacuno-canalicular network (LCN). Moreover, a new quantitative in silico image analysis pipeline is used to evaluate alterations in the osteocyte network with aging. Our findings underscore the pronounced degenerative changes in musculoskeletal health parameters, bone, and the osteocyte LCN in PolgA mice as early as 40 weeks, with more prominent alterations evident in aged males. Our findings suggest that PolgA serves as a valuable mouse model for studying the cellular mechanisms underlying age-related bone loss, given the comparable aging signs and age-related degeneration of the bone and osteocyte network observed in naturally aging mice and elderly humans.
Funding: supported by the Hebei Provincial Key Research and Development Project (21372803D).
Abstract: [Objective] To construct an Escherichia coli mutant strain that accumulates pyruvate by genetic modification guided by a genome-scale metabolic network model. [Methods] Using a genome-scale metabolic network model as a guide, we simulated pyruvate production of E. coli, screened key genes in metabolic pathways, and developed gene editing procedures accordingly. We knocked out the acetate kinase gene ackA, phosphate acetyltransferase gene pta, alcohol dehydrogenase gene adhE, glycogen synthase gene glgA, glycogen phosphorylase gene glgP, phosphoribosyl pyrophosphate (PRPP) synthase gene prs, ribose 1,5-bisphosphate phosphokinase gene phnN, and transporter-encoding gene proP. Furthermore, we knocked in the transporter-encoding gene ompC, flavodoxin gene fldA, and D-serine ammonia lyase gene dsdA. [Results] A shake-flask process with the genetically edited mutant strain MG1655-6-2 under anaerobic conditions produced pyruvate at a titer of 10.46 g/L and a yield of 0.69 g/g. Metabolomic analysis revealed a significant increase in the pyruvate level in the fermentation broth, accompanied by notable decreases in the levels of certain related metabolic byproducts. Through 5 L fed-batch fermentation and adaptive laboratory evolution, the strain finally achieved a pyruvate titer of 45.86 g/L. [Conclusion] This study illustrated the efficacy of a gene editing strategy predicted by a genome-scale metabolic network model in enhancing pyruvate accumulation in E. coli under anaerobic conditions and provided novel insights for microbial metabolic engineering.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62371112 and 62071099) and the Sichuan Province Science and Technology Support Program (Grant No. 2024NSFSC0636).
Abstract: Therapeutic monoclonal antibodies (mAbs) have garnered significant attention for their efficacy in treating a variety of diseases. However, some candidate antibodies exhibit non-specific binding to off-target proteins or other biomolecules, leading to high polyreactivity, which can compromise therapeutic efficacy and cause other complications, thereby reducing the approval rate of antibody drug candidates. Therefore, predicting the polyreactivity risk of therapeutic mAbs at an early stage of development is crucial. In this study, we fine-tuned six pre-trained protein language models (PLMs) to predict the polyreactivity of antibody sequences. The most effective model, named PolyXpert, demonstrated a sensitivity (SN) of 90.10%, specificity (SP) of 90.08%, accuracy (ACC) of 90.10%, F1-score of 0.9301, Matthews correlation coefficient (MCC) of 0.7654, and an area under the curve (AUC) of 0.9672 on the external independent test dataset. These results suggest its potential as a valuable in-silico tool for assessing antibody polyreactivity and for selecting superior therapeutic mAb candidates for clinical development. Furthermore, we demonstrated that fine-tuned language model classifiers exhibit enhanced prediction robustness compared with classifiers trained on pre-trained model embeddings. PolyXpert is available at https://github.com/zzyywww/PolyXpert.
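The metrics reported above (SN, SP, ACC, F1, MCC) all derive from the four confusion-matrix counts of a binary classifier. A self-contained sketch with their standard definitions, illustrated on a balanced toy split (the counts are made up, not the paper's data):

```python
import math

def binary_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, accuracy, F1, and Matthews correlation
    coefficient from the confusion-matrix counts of a binary classifier."""
    sn = tp / (tp + fn)                       # sensitivity / recall
    sp = tn / (tn + fp)                       # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)     # accuracy
    prec = tp / (tp + fp)
    f1 = 2 * prec * sn / (prec + sn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sn, sp, acc, f1, mcc

# Toy confusion matrix: 90/100 positives and 90/100 negatives correct.
sn, sp, acc, f1, mcc = binary_metrics(tp=90, fp=10, tn=90, fn=10)
print(sn, sp, acc)  # all 0.9 on this symmetric example; MCC is 0.8
```

Note that MCC stays well below accuracy even on a balanced split, which is why the paper's MCC of 0.7654 alongside ~90% accuracy is a coherent pairing.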
Funding: supported by the Key R&D Projects in Jiangsu Province (BE2021729) and the Key Primary Research Project of the Primary Strengthening Program (KYZYJKKCJC23001).
Abstract: Multi-agent systems often require good interoperability in the process of completing their assigned tasks. This paper first models the static structure and dynamic behavior of multi-agent systems based on a layered weighted scale-free community network and the susceptible-infected-recovered (SIR) model. To address the difficulty of describing changes in the structure and collaboration mode of the system under external factors, a two-dimensional Monte Carlo method and an improved dynamic Bayesian network are used to simulate the impact of external environmental factors on multi-agent systems. A collaborative information flow path optimization algorithm for agents under environmental factors is designed based on the Dijkstra algorithm. A method for evaluating system interoperability is designed based on simulation experiments, providing a reference for construction planning and optimization of the organizational application of the system. Finally, the feasibility of the method is verified through case studies.
Funding: partially supported by the National Natural Science Foundation of China (62161016), the Key Research and Development Project of Lanzhou Jiaotong University (ZDYF2304), the Beijing Engineering Research Center of High-velocity Railway Broadband Mobile Communications (BHRC-2022-1), and Beijing Jiaotong University.
Abstract: In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSN) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing scheme based on the firefly and pigeon-inspired algorithm (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head (CH) nodes has been obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of locally optimal solutions. In addition, for WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset. The results show that the DCRNN model-driven data reconstruction algorithm improves both the reconstruction accuracy and the reconstruction time performance. FF-PIA and DCRNN clustering routing co-simulation reveals that the proposed algorithm can effectively improve performance in extending the network lifetime and reducing data transmission delay.
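The Lévy flight mechanism mentioned above is commonly implemented with Mantegna's algorithm, which draws heavy-tailed step lengths so the search occasionally makes large jumps. This is a generic sketch of that standard mechanism, not the paper's exact update rule:

```python
import math
import random

def levy_step(beta: float = 1.5) -> float:
    """One Lévy-flight step length via Mantegna's algorithm: the ratio
    u / |v|^(1/beta) of two Gaussians approximates a Lévy-stable draw,
    producing mostly small moves with occasional long jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)         # scale for the numerator draw
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(5)]
print(steps)  # mostly modest moves, with occasional large outliers
```

In a metaheuristic like FF-PIA, such a step would typically scale the candidate-position update, giving the global-search half of the global/local balance that the adaptive inertia weighting tunes.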
Funding: supported by the Natural Science Foundation of Hunan Province (Grant No. 2025JJ50368), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 24A0248), and the Guiding Science and Technology Plan Project of Changsha City (Grant No. kzd2501129).
Abstract: In recent years, discrete neuron and discrete neural network models have played an important role in the development of neural dynamics. This paper reviews the theoretical advantages of well-known discrete neuron models, some existing discretized continuous neuron models, and discrete neural networks in simulating complex neural dynamics. It places particular emphasis on the importance of memristors in the composition of neural networks, especially their unique memory and nonlinear characteristics. The integration of memristors into discrete neural networks, including Hopfield networks and their fractional-order variants, cellular neural networks, and discrete neuron models, has enabled the study and construction of various neural models with memory. These models exhibit complex dynamic behaviors, including superchaotic attractors, hidden attractors, multistability, and synchronization transitions. Furthermore, the present paper analyzes more complex dynamical properties, including synchronization, speckle patterns, and chimera states in discrete coupled neural networks. This research provides new theoretical foundations and potential applications in the fields of brain-inspired computing, artificial intelligence, image encryption, and biological modeling.