The subversive nature of information war lies not only in the information itself, but also in the circulation and application of information. It has always been a challenge to quantitatively analyze the function and effect of information flow through the command, control, communications, computer, kill, intelligence, surveillance, and reconnaissance (C4KISR) system. In this work, we propose a framework of force of information influence and methods for calculating the force of information influence between C4KISR nodes of sensing, intelligence processing, decision making and fire attack. Specifically, the basic concept of force of information influence between nodes in a C4KISR system is formally proposed and its mathematical definition is provided. Then, based on information entropy theory, a model of the force of information influence between C4KISR system nodes is constructed. Finally, simulation experiments were performed under an air defense and attack scenario. The experimental results show that, with the proposed force of information influence framework, we can effectively evaluate the contribution of information circulation through different C4KISR system nodes to the corresponding tasks. Our framework of force of information influence can also serve as an effective tool for the design and dynamic reconfiguration of C4KISR system architecture.
The security of information transmission and processing is becoming increasingly problematic due to unknown vulnerabilities and backdoors in cyberspace. However, there is a lack of effective theory to mathematically demonstrate the security of information transmission and processing under non-random noise (or vulnerability and backdoor attack) conditions in cyberspace. This paper proposes a security model for cyberspace information transmission and processing channels based on error correction coding theory. First, we analyze the fault tolerance and non-randomness problems of a Dynamic Heterogeneous Redundancy (DHR)-structured information transmission and processing channel under non-random noise or attacks. Second, we use a mathematical statistical method to demonstrate that, for non-random noise (or attacks) on discrete memoryless channels, there exists a DHR-structured channel and coding scheme that enables the average system error probability to be arbitrarily small. Finally, to construct suitable coding and heterogeneous channels, we take the Turbo code as an example and simulate the effects of different degrees of heterogeneity, redundancy, output vector length, verdict algorithm and dynamism on the system, providing important guidance for theory and engineering practice.
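The redundancy argument above can be made concrete with a toy sketch (not the paper's DHR model): suppose k independent heterogeneous executors each corrupt a bit with probability p, and a simple majority verdict decides the output. The residual verdict error shrinks rapidly as redundancy grows. All parameters here are illustrative.

```python
from math import comb

def majority_error_prob(p: float, k: int) -> float:
    """Probability that a k-channel majority verdict is wrong when each
    independent channel flips the bit with probability p (k must be odd)."""
    assert k % 2 == 1
    t = k // 2 + 1  # number of corrupted channels needed to flip the verdict
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(t, k + 1))

# Redundancy drives the verdict error toward zero whenever p < 0.5.
single = majority_error_prob(0.1, 1)  # 0.1
triple = majority_error_prob(0.1, 3)  # 0.028
quint  = majority_error_prob(0.1, 5)  # ~0.0086
```

The paper's claim concerns non-random noise and real coding schemes, which is subtler, but the monotone benefit of heterogeneous redundancy plus a verdict algorithm follows the same intuition.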
To reduce the misjudgment of outliers in vehicle temperature prediction and improve the accuracy of single-station processor prediction data, a Kalman filter multi-information fusion algorithm based on an optimized P-Huber weight function is proposed. The algorithm takes the Kalman filter (KF) as the overall framework and establishes the decision threshold based on the confidence level of the Chi-square distribution. At the same time, the abnormal-error judgment value is constructed with the Mahalanobis distance function, forming the three segments of the Huber weight function. This improves the accuracy of the interval judgment of outliers and assigns a reasonable weight, thereby improving the tracking accuracy of the algorithm. The data values of four important locations in the vehicle, obtained after optimized filtering, are processed by information fusion. Theoretical analysis shows that, compared with the standard Kalman filtering algorithm, the proposed algorithm can accurately track the actual temperature in the presence of abnormal errors, and multi-station data fusion can improve the overall fault tolerance of the system. The results show that the proposed algorithm effectively reduces the interference of abnormal errors on filtering, and the fused value is more stable and reliable.
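A three-segment robust weight of the kind described above can be sketched as follows. This is an illustrative form, not the paper's exact P-Huber function: the gate values k1 and k2 are assumptions standing in for the Chi-square-derived thresholds, and the residual is normalized as a scalar Mahalanobis-style distance.

```python
def huber_weight(residual: float, sigma: float,
                 k1: float = 1.345, k2: float = 3.0) -> float:
    """Three-segment Huber-style weight for a measurement residual:
    full weight inside k1, tapered between k1 and k2, and zero beyond
    k2 (the residual is treated as an outlier and rejected)."""
    d = abs(residual) / sigma  # scalar Mahalanobis-style distance
    if d <= k1:
        return 1.0
    if d <= k2:
        return k1 / d
    return 0.0

assert huber_weight(0.5, 1.0) == 1.0        # inlier: full weight
assert 0.0 < huber_weight(2.0, 1.0) < 1.0   # suspect: down-weighted
assert huber_weight(5.0, 1.0) == 0.0        # outlier: rejected
```

In a KF update, this weight would scale the measurement's influence (e.g., by inflating the measurement noise covariance), so abnormal errors perturb the state estimate less.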
The high complexity and uncertainty of air combat pose significant challenges to target intention prediction. Current interpolation methods for data pre-processing and wrangling have limitations in capturing interrelationships among intricate variable patterns. Accordingly, this study proposes a Mogrifier gated recurrent unit-D (Mog-GRU-D) model to address the combat target intention prediction problem under incomplete information. The proposed model directly processes missing data while reducing the independence between inputs and output states. A total of 1200 samples from twelve continuous moments are captured through a combat simulation system, each of which consists of seven-dimensional features. To benchmark the experiment, a missing-value dataset is generated by randomly removing 20% of the original data. Extensive experiments demonstrate that the proposed model obtains state-of-the-art performance with an accuracy of 73.25% when dealing with incomplete information. This study provides possible interpretations for the principle of the target interactive mechanism, highlighting the model's effectiveness for potential air warfare implementation.
In view of the imperfect supply chain management of prefabricated building, inadequate information interaction among the participating subjects, and untimely information updates, the integration and development of BIM technology with the prefabricated building supply chain is analyzed, and the problems existing in the current supply chain and the applications of BIM technology at various stages are elaborated. By analyzing the structural composition of the prefabricated building supply chain, an information sharing platform framework for the prefabricated building supply chain based on BIM is established, which serves as a valuable reference for managing prefabricated building supply chains. BIM technology aligns well with assembly construction, laying a solid foundation for their synergistic development and offering novel research avenues for the prefabricated building supply chain.
In this paper, a distributed adaptive dynamic programming (ADP) framework based on value iteration is proposed for multi-player differential games. In the game setting, players have no access to information about the other players' system parameters or control laws. Each player adopts an on-policy value iteration algorithm as the basic learning framework. To deal with the incomplete information structure, players collect a period of system trajectory data to compensate for the lack of information. The policy updating step is implemented by a nonlinear optimization problem aiming to search for the proximal admissible policy. Theoretical analysis shows that, by adopting proximal policy searching rules, the approximated policies can converge to a neighborhood of the equilibrium policies. The efficacy of our method is illustrated by three examples, which also demonstrate that the proposed method can accelerate the learning process compared with the centralized learning framework.
The existing Low-Earth-Orbit (LEO) positioning performance cannot meet the requirements of Unmanned Aerial Vehicle (UAV) clusters for high-precision real-time positioning under Global Navigation Satellite System (GNSS) denial conditions. Therefore, this paper proposes a UAV Clusters Information Geometry Fusion Positioning (UC-IGFP) method using pseudoranges from LEO satellites. A novel graph model for linking and computing between the UAV clusters and LEO satellites is established. By using probability to describe the positional states of UAVs and sensor errors, a distributed multivariate Probability Fusion Cooperative Positioning (PF-CP) algorithm is proposed to achieve high-precision cooperative positioning and integration of the cluster. Criteria for selecting the centroid of the cluster are set. A new Kalman filter algorithm suitable for UAV clusters is designed based on the global benchmark and Riemannian information geometry theory, which overcomes the discontinuity problem caused by changes of the cluster centroid. Finally, the UC-IGFP method achieves continuous high-precision LEO positioning of UAV clusters. The proposed method effectively addresses the positioning challenges caused by the strong directionality of signal beams from LEO satellites and the insufficient number of information-source constraints at the edge nodes of the cluster. It significantly improves the accuracy and reliability of LEO-UAV cluster positioning. The results of comprehensive simulation experiments show that the proposed method achieves a 30.5% performance improvement over mainstream positioning methods, with a positioning error of 14.267 m.
Startups form an information network that reflects their growth trajectories through information flow channels established by shared investors. However, traditional static network metrics overlook temporal dynamics and rely on single indicators to assess startups' roles in predicting future success, failing to comprehensively capture topological variations and structural diversity. To address these limitations, we construct a temporal information network using 14,547 investment records from 1013 global blockchain startups between 2004 and 2020, sourced from Crunchbase. We propose two dynamic methods to characterize the information flow: a temporal random walk (sTRW) for modeling information flow trajectories and temporal betweenness centrality (tTBET) for identifying key information hubs. These methods enhance walk coverage while ensuring random stability, allowing more effective identification of influential startups. By integrating sTRW and tTBET, we develop a comprehensive metric to evaluate a startup's influence within the network. In experiments assessing startups' potential for future success, where successful startups are defined as those that have undergone an M&A or IPO, incorporating this metric improves accuracy, recall, and F1 score by 0.035, 0.035, and 0.042, respectively. Our findings indicate that information flow from key startups to others diminishes as the network distance increases. Additionally, successful startups generally exhibit higher information inflows than outflows, suggesting that actively seeking investment-related information contributes to startup growth. Our research provides valuable insights for formulating startup development strategies and offers practical guidance for market regulators.
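The defining property of a temporal (time-respecting) random walk is that each hop may only use an edge whose timestamp is not earlier than the previous hop's. A minimal sketch follows; it is illustrative of the general idea, not the paper's sTRW variant, and the edge list is invented toy data.

```python
import random

def temporal_random_walk(edges, start, steps, seed=0):
    """Time-respecting random walk. `edges` is a list of (src, dst, t)
    tuples; each hop picks uniformly among outgoing edges whose
    timestamp t is >= the timestamp of the previous hop. Returns the
    visited node sequence."""
    rng = random.Random(seed)
    walk, t_now = [start], float("-inf")
    for _ in range(steps):
        cand = [(v, t) for u, v, t in edges if u == walk[-1] and t >= t_now]
        if not cand:
            break  # no time-consistent continuation
        v, t_now = rng.choice(cand)
        walk.append(v)
    return walk

edges = [("A", "B", 1), ("B", "C", 2), ("C", "A", 1), ("B", "D", 3)]
walk = temporal_random_walk(edges, "A", 3)
```

Note that after reaching C at time 2, the walk cannot return to A via the edge ("C", "A", 1), because that channel existed only before the walker arrived; this is exactly the temporal ordering that static walks ignore.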
Visual Place Recognition (VPR) technology aims to use visual information to judge the location of agents, which plays an irreplaceable role in tasks such as loop closure detection and relocalization. Previous VPR algorithms emphasize the extraction and integration of general image features while ignoring the mining of the salient features that play a key role in the discrimination of VPR tasks. To this end, this paper proposes a Domain-invariant Information Extraction and Optimization Network (DIEONet) for VPR. The core of the algorithm is a newly designed Domain-invariant Information Mining Module (DIMM) and a Multi-sample Joint Triplet Loss (MJT Loss). Specifically, DIMM incorporates the interdependence between different spatial regions of the feature map in the cascaded convolutional unit group, which enhances the model's attention to the domain-invariant static object class. MJT Loss introduces a "joint processing of multiple samples" mechanism into the original triplet loss and adds a new distance constraint term for positive and negative samples, so that the model can avoid falling into local optima during training. We demonstrate the effectiveness of our algorithm by conducting extensive experiments on several authoritative benchmarks. In particular, the proposed method achieves the best performance on the TokyoTM dataset with a Recall@1 of 92.89%.
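The general shape of a triplet loss extended with multiple samples and an extra positive-negative distance term can be sketched as below. This is a hedged illustration only: the exact joint form, margins, and distance metric of the paper's MJT Loss are not specified here, so `margin`, `pn_margin`, and the averaging scheme are all assumptions.

```python
def mjt_loss(anchor, positives, negatives, margin=0.5, pn_margin=1.0):
    """Triplet-style loss jointly evaluated over several positive and
    negative samples, plus an extra hinge term pushing each
    positive-negative pair apart (all margins are illustrative)."""
    def d(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    loss = 0.0
    for p in positives:
        for n in negatives:
            loss += max(0.0, d(anchor, p) - d(anchor, n) + margin)
            loss += max(0.0, pn_margin - d(p, n))  # extra "p-n" constraint
    return loss / (len(positives) * len(negatives))

a = [0.0, 0.0]
loss_good = mjt_loss(a, positives=[[0.1, 0.0]], negatives=[[3.0, 0.0]])  # 0.0
loss_bad  = mjt_loss(a, positives=[[2.0, 0.0]], negatives=[[0.2, 0.0]])  # > 0
```

A well-separated embedding (positive near the anchor, negative far away) incurs zero loss; an embedding that confuses the two is penalized both through the anchor and through the direct positive-negative term.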
As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In this model, the information layer describes green information spreading, the physical contact layer depicts green behavior propagation, and policy regulation is represented by an isolated node beneath the two layers. We then deduce the green behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, considering that some individuals are more likely than others to exert influence or become green nodes, and that the capacity of policy regulation is limited, an optimal scheme is given that optimizes policy interventions to most effectively promote green behavior. Subsequently, simulations are performed to validate the accuracy of the new model and the theoretical results. They reveal that policy regulation can promote the prevalence and outbreak of green behavior, and that green behavior is more likely to spread and become prevalent in an SF network than in an ER network. Additionally, optimal allocation is highly successful in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green behavior promotion would be most significant.
Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail, which finds wide applications in climate modeling, numerical weather forecasting, and renewable energy. Although deep-learning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales, supervised deep-learning-based downscaling methods suffer from insufficient high-resolution data in practice, and unsupervised methods struggle to accurately infer small-scale specifics from limited large-scale inputs due to small-scale uncertainty. This article presents DualDS, a dual-learning framework utilizing a Generative Adversarial Network-based neural network and subgrid-scale auxiliary information for climate downscaling. The learning method is unified in a two-stream framework through up- and down-samplers, where the down-sampler simulates the information loss process during upscaling, and the up-sampler reconstructs lost details and corrects errors incurred during upscaling. This dual-learning strategy eliminates the dependence on high-resolution ground-truth data in the training process and refines the downscaling results by constraining the mapping process. Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches, both qualitatively and quantitatively. Specifically, for a single surface-temperature downscaling task, our method is comparable with other unsupervised algorithms on the same dataset, achieving a 0.469 dB higher peak signal-to-noise ratio, 0.017 higher structural similarity, 0.08 lower RMSE, and the best correlation coefficient. In summary, this paper presents a novel approach to addressing small-scale uncertainty issues in unsupervised downscaling processes.
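For readers unfamiliar with the reported metric, peak signal-to-noise ratio (PSNR) is defined from the mean squared error against a known peak value. A minimal reference implementation (assuming data rescaled to a known peak, here 1.0):

```python
from math import log10

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of values whose maximum possible value is `peak`."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical signals
    return 10 * log10(peak**2 / mse)

# Halving the pointwise error raises PSNR by about 6 dB (20*log10(2)).
a = psnr([0.5, 0.5], [0.6, 0.6])    # error 0.1 per sample -> 20 dB
b = psnr([0.5, 0.5], [0.55, 0.55])  # error 0.05 per sample -> ~26 dB
```

This is why a 0.469 dB gain, while numerically small, corresponds to a consistent multiplicative reduction in mean squared error.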
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, using models to extract information from police incident data faces a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these issues, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Studies show that Graph Neural Networks (GNNs) are susceptible to minor perturbations. Therefore, analyzing adversarial attacks on GNNs is crucial in current research. Previous studies used Generative Adversarial Networks to generate a set of fake nodes and inject them into a clean GNN to poison the graph structure and evaluate the robustness of GNNs. In that attack process, the computation of new node connections and the attack loss are independent, which weakens the attack on the GNN. To improve this, a Fake Node Camouflage Attack based on Mutual Information (FNCAMI) algorithm is proposed. By incorporating a Mutual Information (MI) loss, the distribution of the nodes injected into the GNN becomes more similar to that of the original nodes, achieving better attack results. Since the ratio of the GNN and MI losses affects performance, we also design an adaptive weighting method. By adjusting the loss weights in real time through rate changes, larger loss values are obtained, avoiding local optima. The feasibility, effectiveness, and stealthiness of this algorithm are validated on four real datasets. Additionally, we use both global and targeted attacks to test the algorithm's performance. Comparisons with baseline attack algorithms and ablation experiments demonstrate the efficiency of the FNCAMI algorithm.
Brain age is an effective biomarker for diagnosing Alzheimer's disease (AD). To address the issue that existing brain age detection methods are inconsistent with the biological hypothesis that AD is accelerated aging of the brain, a mutual information-support vector regression (MI-SVR) brain age prediction model is proposed. First, the age deviation is introduced according to the biological hypothesis of AD. Second, a fitness function is designed based on the mutual information criterion. Third, support vector regression and the fitness function are used to obtain the predicted brain age and the fitness value of the subjects, respectively. The optimal age deviation is obtained by maximizing the fitness value. Finally, the proposed method is compared with several existing brain age detection methods. Experimental results show that the brain age obtained by the proposed method has better separability, better reflects the accelerated aging of AD, and is more helpful for improving the diagnostic accuracy of AD.
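The underlying "brain age gap" idea, i.e. a regressor trained on imaging features of healthy subjects predicts an age, and a positive gap (predicted minus chronological age) signals accelerated aging, can be sketched in a dependency-free way. Ordinary least squares stands in for the SVR here, and the single "atrophy" feature and its values are invented toy data, not the paper's.

```python
def fit_line(xs, ys):
    """Least-squares line ys ~ a*xs + b (a stand-in for the SVR regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy "atrophy" feature that grows with age; fit on healthy controls only.
ages    = [60, 65, 70, 75, 80]
feature = [1.0, 1.5, 2.0, 2.5, 3.0]
a, b = fit_line(feature, ages)

def brain_age_gap(f, chronological_age):
    """Predicted brain age minus chronological age; positive values
    indicate AD-like accelerated aging under this toy model."""
    return (a * f + b) - chronological_age

gap_healthy = brain_age_gap(2.0, 70)  # on the control trend -> ~0
gap_ad      = brain_age_gap(3.0, 70)  # AD-like atrophy at age 70 -> positive
```

The paper's contribution is in choosing the age deviation and features via a mutual-information fitness criterion; the gap computation itself is as simple as above.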
Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposes a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored for procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared to traditional methods, the proposed approach achieves an F1 score of 0.9698, representing a 4.85% improvement over the best baseline model. Moreover, both recall and precision are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capability. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities in procurement documents while balancing recall and precision effectively, demonstrating its adaptability in handling varying document formats. The results of ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and application directions, further clarifying its adaptability and value. The method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in various domains.
In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to increase the information entropy of these samples, thereby improving the robustness of neural networks trained with them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
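The MTD-plus-Monte-Carlo-plus-noise pipeline described above can be sketched minimally. This is a simplified illustration, not the paper's knowledge-guided variant: the diffusion bounds here just stretch the observed range in proportion to the sample spread (the `expand` factor is an assumption standing in for knowledge-derived boundaries), and the sensor readings are invented.

```python
import random
import statistics

def mtd_bounds(samples, expand=0.5):
    """Simplified mega-trend-diffusion bounds: stretch the observed
    range outward in proportion to the sample spread (illustrative)."""
    lo, hi = min(samples), max(samples)
    s = statistics.pstdev(samples)
    return lo - expand * s, hi + expand * s

def virtual_samples(samples, n, noise=0.01, seed=0):
    """Monte Carlo draws inside the diffusion bounds, plus small
    Gaussian noise to raise the information entropy of the set."""
    rng = random.Random(seed)
    lo, hi = mtd_bounds(samples)
    return [rng.uniform(lo, hi) + rng.gauss(0, noise) for _ in range(n)]

real = [21.0, 22.5, 23.0, 24.5]  # a few real sensor readings (toy data)
virt = virtual_samples(real, 200)
```

The virtual set covers a wider interval than the four real readings, which is what lets a downstream classifier see plausible conditions absent from the small real sample.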
Investigating whether sediment samples contain representative grain size distribution information is important for the accurate extraction of sediment characteristics and for related sedimentary record studies. This study comparatively analyzed the numerical and qualitative differences, and the degree of correlation, of 36 sets of characteristic grain size distribution parameters of surface sediment parallel samples from three sampling profiles at Jinsha Bay Beach in Zhanjiang, western Guangdong. At each sampling point, five parallel subsamples were collected at intervals of 0, 10, 20, 50, and 100 cm along the coastline. The findings indicate the following: 1) relatively large differences in the mean grain size of the different parallel samples (0.19–0.34 Φ), with smaller differences in the other characteristic grain sizes (D_(10), D_(50), and D_(90)); 2) small differences in the characteristic values of the various parallel-sample grain size parameters, with at least 33% of the combinations of qualitative results being inconsistent; 3) 50% of the regression equations between the skewness of different parallel samples showing no significant correlation; and 4) relative deviations of −47.91% to 27.63% and −49.20% to 2.08% between the particle size parameters of a single sample and the averaged parallel samples at intervals of 10 and 50 cm, respectively. Thus, small spatial differences, even within 100 cm, can considerably affect grain size parameters. Given the uncertainty about how representative a sample is, as it may only cover the area immediately surrounding the sampling station, researchers are advised to design parallel sample collection strategies based on the spatiotemporal distribution characteristics of the parameters of interest during sediment sample collection. This study provides a typical case of the comparative analysis of parallel-sample grain size parameters, with a focus on small-spatial-scale beach sediment, which contributes to an enhanced understanding of the accuracy and reliability of sediment sample collection strategies and the extraction of grain size information.
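The mean and skewness parameters compared above are conventionally computed from phi-scale percentiles of the cumulative grain size curve using the Folk-Ward graphic formulas. A minimal sketch (assuming the percentiles have already been interpolated from the cumulative curve; the numeric values below are toy inputs):

```python
def folk_ward_mean(p16, p50, p84):
    """Graphic mean grain size (phi units), Folk-Ward formula."""
    return (p16 + p50 + p84) / 3

def folk_ward_skewness(p5, p16, p50, p84, p95):
    """Inclusive graphic skewness; 0 for a symmetric distribution,
    positive when the fine (high-phi) tail is stretched."""
    return ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))

# A symmetric curve has zero skewness; stretching the fine tail flips
# the qualitative class even though the graphic mean barely moves.
mean_a = folk_ward_mean(1.0, 2.0, 3.0)                 # 2.0 phi
skew_a = folk_ward_skewness(0.5, 1.0, 2.0, 3.0, 3.5)   # 0.0 (symmetric)
skew_b = folk_ward_skewness(0.5, 1.0, 2.0, 3.4, 4.5)   # > 0 (fine-skewed)
```

This sensitivity of skewness to the tail percentiles is one reason the skewness of parallel samples correlated so poorly in the study above.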
Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under a trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted by the petrophysical information relating porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
The environment of low-altitude urban airspace is complex and variable due to numerous obstacles, non-cooperative aircraft, and birds. For Unmanned Aerial Vehicles (UAVs), leveraging environmental information to achieve three-dimensional collision-free trajectory planning is the prerequisite for ensuring airspace security. However, timely information on the surrounding situation is difficult for UAVs to acquire, which brings further security risks. As a mature technology in traditional civil aviation, Automatic Dependent Surveillance-Broadcast (ADS-B) realizes continuous surveillance of aircraft information. Consequently, we leverage ADS-B for surveillance and information broadcasting, and divide the airspace into multiple sub-airspaces to improve flight safety in UAV trajectory planning. In detail, we propose the secure Sub-airSpaces Planning (SSP) algorithm and the Particle Swarm Optimization Rapidly-exploring Random Trees (PSO-RRT) algorithm for UAV trajectory planning in low-altitude airspace. The performance of the proposed algorithms is verified by simulations, and the results show that SSP reduces both the maximum number of UAVs in a sub-airspace and the length of the trajectory, while PSO-RRT reduces the cost of the UAV trajectory within a sub-airspace.
Abstract: In this data explosion era, ensuring the secure storage, access, and transmission of information is imperative, encompassing all aspects ranging from safeguarding personal devices to formulating national information security strategies. Leveraging the potential offered by dual-type carriers for transportation and employing optical modulation techniques to develop highly reconfigurable ambipolar optoelectronic transistors enables effective implementation of information destruction after reading, thereby guaranteeing data security. In this study, a reconfigurable ambipolar optoelectronic synaptic transistor based on a poly(3-hexylthiophene) (P3HT) and poly[[N,N-bis(2-octyldodecyl)-napthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5′-(2,2′-bithiophene)] (N2200) blend film was fabricated through a solution-processed method. The resulting transistor exhibited a relatively large ON/OFF ratio of 10^(3) in both n- and p-type regions, and tunable photoconductivity after light illumination, particularly with green light. The photo-generated carriers could be effectively trapped under the gate bias, indicating its potential application in mimicking synaptic behaviors. Furthermore, the synaptic plasticity, including volatile/non-volatile and excitatory/inhibitory characteristics, could be finely modulated by electrical and optical stimuli. These optoelectronic reconfigurable properties enable the realization of light-assisted burn-after-reading of information. This study not only offers valuable insights for the advancement of high-performance ambipolar organic optoelectronic synaptic transistors but also presents innovative ideas for future information security access systems.
Funding: supported by the Natural Science Foundation Research Plan of Shanxi Province (2023JCQN0728).
Abstract: The subversive nature of information war lies not only in the information itself, but also in the circulation and application of information. It has always been a challenge to quantitatively analyze the function and effect of information flow through the command, control, communications, computer, kill, intelligence, surveillance, and reconnaissance (C4KISR) system. In this work, we propose a framework of force of information influence and the methods for calculating the force of information influence between C4KISR nodes of sensing, intelligence processing, decision making and fire attack. Specifically, the basic concept of force of information influence between nodes in a C4KISR system is formally proposed and its mathematical definition is provided. Then, based on information entropy theory, the model of force of information influence between C4KISR system nodes is constructed. Finally, simulation experiments have been performed under an air defense and attack scenario. The experimental results show that, with the proposed force of information influence framework, we can effectively evaluate the contribution of information circulation through different C4KISR system nodes to the corresponding tasks. Our framework of force of information influence can also serve as an effective tool for the design and dynamic reconfiguration of C4KISR system architecture.
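The entropy-based influence model itself is not given in the abstract; as a hedged illustration, mutual information is one standard information-theoretic way to quantify how strongly one node's output constrains another's, and could serve as a toy proxy for a pairwise "force of information influence" (the joint distributions below are invented):

```python
import numpy as np

def mutual_information(joint):
    """I(X; Y) in bits from a joint probability table P(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)    # marginal of the sending node
    py = joint.sum(axis=0, keepdims=True)    # marginal of the receiving node
    nz = joint > 0                           # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

# Deterministic sensing -> decision link: the decision state mirrors the
# sensed state, so one full bit of influence flows across the link.
strong = mutual_information([[0.5, 0.0],
                             [0.0, 0.5]])
# Independent link: the receiving node learns nothing, zero influence.
weak = mutual_information([[0.25, 0.25],
                           [0.25, 0.25]])
```

In such a proxy, a higher value for a sensing-to-decision link would indicate a larger contribution of that information flow to the downstream task.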
Funding: supported by the National Key R&D Program of China for Young Scientists: Cyberspace Endogenous Security Mechanisms and Evaluation Methods (No. 2022YFB3102800).
Abstract: The security of information transmission and processing due to unknown vulnerabilities and backdoors in cyberspace is becoming increasingly problematic. However, there is a lack of effective theory to mathematically demonstrate the security of information transmission and processing under non-random noise (or vulnerability backdoor attack) conditions in cyberspace. This paper first proposes a security model for cyberspace information transmission and processing channels based on error correction coding theory. First, we analyze the fault tolerance and non-randomness problem of a Dynamic Heterogeneous Redundancy (DHR) structured information transmission and processing channel under the condition of non-random noise or attacks. Secondly, we use a mathematical statistical method to demonstrate that for non-random noise (or attacks) on discrete memoryless channels, there exists a DHR-structured channel and coding scheme that enables the average system error probability to be arbitrarily small. Finally, to construct suitable coding and heterogeneous channels, we take the Turbo code as an example and simulate the effects of different heterogeneity, redundancy, output vector length, verdict algorithm and dynamism on the system, which provides important guidance for theory and engineering practice.
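As a minimal sketch of why heterogeneous redundancy can drive the system error rate down, assume (purely for illustration) that the DHR executors err independently with probability p and a simple majority verdict is used; the majority-error probability then shrinks as redundancy grows:

```python
from math import comb

def majority_error(p, n):
    """P(majority of n independent executors err), per-executor error p; n odd."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

single = majority_error(0.10, 1)   # one executor: error stays at 0.10
triple = majority_error(0.10, 3)   # 2-of-3 or 3-of-3 wrong: 0.028
five = majority_error(0.10, 5)     # smaller still
```

The independence assumption is exactly what heterogeneity is meant to approximate; correlated (non-random) attacks that defeat several executors at once would weaken this bound, which is the case the paper's analysis addresses.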
Funding: supported by the Natural Science Foundation of Gansu Province (No. 20JR5RA407).
Abstract: In order to reduce the erroneous judgment of outliers in vehicle temperature prediction and improve the accuracy of single-station processor prediction data, a Kalman filter multi-information fusion algorithm based on an optimized P-Huber weight function was proposed. The algorithm took the Kalman filter (KF) as the overall framework and established the decision threshold based on the confidence level of the Chi-square distribution. At the same time, the abnormal error judgment value was constructed with the Mahalanobis distance function, and the three segments of the Huber weight function were formed. This improves the accuracy of the interval judgment of outliers and assigns a reasonable weight, thereby improving the tracking accuracy of the algorithm. The data values of four important locations in the vehicle obtained after optimized filtering were processed by information fusion. According to theoretical analysis, compared with the Kalman filtering algorithm, the proposed algorithm could accurately track the actual temperature in the case of abnormal error, and multi-station data fusion processing could improve the overall fault tolerance of the system. The results showed that the proposed algorithm effectively reduced the interference of abnormal errors on filtering, and the synthetic value from fusion processing was more stable.
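A hedged scalar sketch of the gating-plus-weighting idea (not the paper's exact P-Huber formulation): innovations whose normalized square exceeds a Chi-square threshold are treated as suspected outliers, and their Kalman gain is shrunk by a Huber-style factor instead of being used at full weight. All constants below are illustrative:

```python
import numpy as np

def robust_kf(zs, q=1e-3, r=0.04, chi2_gate=6.63, huber_k=1.345):
    """Scalar Kalman filter with Chi-square outlier gating and Huber-style
    down-weighting of suspected outliers (illustrative constants)."""
    x, p = zs[0], 1.0
    est = []
    for z in zs:
        p += q                       # predict (random-walk state model)
        s = p + r                    # innovation variance
        nu = z - x                   # innovation
        d2 = nu * nu / s             # normalized squared innovation
        # full weight inside the gate; Huber-style shrink outside it
        w = 1.0 if d2 <= chi2_gate else huber_k / np.sqrt(d2)
        k = w * p / s
        x += k * nu
        p *= 1.0 - k
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true_temp = 25.0
zs = true_temp + 0.2 * rng.standard_normal(200)
zs[50] += 15.0                       # inject one abnormal error
est = robust_kf(zs)
```

Because the abnormal innovation is down-weighted rather than discarded, the estimate barely moves at the outlier step while ordinary measurements keep full influence.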
Funding: supported by the Aeronautical Science Foundation of China (2020Z023053002).
Abstract: High complexity and uncertainty of air combat pose significant challenges to target intention prediction. Current interpolation methods for data pre-processing and wrangling have limitations in capturing interrelationships among intricate variable patterns. Accordingly, this study proposes a Mogrifier gate recurrent unit-D (Mog-GRU-D) model to address the combat target intention prediction issue under the incomplete information condition. The proposed model directly processes missing data while reducing the independence between inputs and output states. A total of 1200 samples from twelve continuous moments are captured through the combat simulation system, each of which consists of seven-dimensional features. To benchmark the experiment, a missing-value dataset has been generated by randomly removing 20% of the original data. Extensive experiments demonstrate that the proposed model obtains state-of-the-art performance with an accuracy of 73.25% when dealing with incomplete information. This study provides possible interpretations for the principle of the target interactive mechanism, highlighting the model's effectiveness in potential air warfare implementation.
Funding: "Education Department of Hebei Funding Project for Cultivating the Innovative Capabilities of Graduate Students" (Project No. XJCX202510).
Abstract: In view of the imperfect supply chain management of prefabricated building, inadequate information interaction among the participating subjects, and untimely information updates, the integration of BIM technology with the prefabricated building supply chain is analyzed, and the problems existing in the current supply chain and the application of BIM technology at various stages are elaborated. By analyzing the structural composition of the prefabricated building supply chain, an information sharing platform framework for the prefabricated building supply chain based on BIM was established, which serves as a valuable reference for managing prefabricated building supply chains. BIM technology aligns well with assembly construction, laying a solid foundation for their synergistic development and offering novel research avenues for the prefabricated building supply chain.
Funding: supported by the Aeronautical Science Foundation of China (20220001057001) and an Open Project of the National Key Laboratory of Air-based Information Perception and Fusion (202437).
Abstract: In this paper, a distributed adaptive dynamic programming (ADP) framework based on value iteration is proposed for multi-player differential games. In the game setting, players have no access to the information of others' system parameters or control laws. Each player adopts an on-policy value iteration algorithm as the basic learning framework. To deal with the incomplete information structure, players collect a period of system trajectory data to compensate for the lack of information. The policy updating step is implemented by a nonlinear optimization problem aiming to search for the proximal admissible policy. Theoretical analysis shows that by adopting proximal policy searching rules, the approximated policies can converge to a neighborhood of the equilibrium policies. The efficacy of our method is illustrated by three examples, which also demonstrate that the proposed method can accelerate the learning process compared with the centralized learning framework.
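The distributed ADP scheme builds on classical value iteration; as a self-contained toy (a tabular one-dimensional chain, not the paper's differential-game setting), the Bellman backup iterated to a fixed point looks like:

```python
import numpy as np

def value_iteration(n_states=4, gamma=1.0, tol=1e-9):
    """Tabular value iteration on a 1-D chain: actions move left or right,
    each step costs 1, and the rightmost state is an absorbing goal."""
    V = np.zeros(n_states)
    while True:
        V_new = V.copy()
        for s in range(n_states - 1):            # last state is terminal
            left, right = max(s - 1, 0), s + 1
            V_new[s] = max(-1.0 + gamma * V[left], -1.0 + gamma * V[right])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration()   # optimal cost-to-go: -(distance to goal) per state
```

In the continuous-time, multi-player case the same backup is approximated from collected trajectory data rather than a known model, which is the gap the paper's compensation step addresses.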
Funding: supported in part by the National Natural Science Foundation of China (Nos. 62171375, 62271397, 62001392, 62101458, 62173276, 61803310 and 61801394) and the Shenzhen Science and Technology Innovation Program, China (No. JCYJ20220530161615033).
Abstract: The existing Low-Earth-Orbit (LEO) positioning performance cannot meet the requirements of Unmanned Aerial Vehicle (UAV) clusters for high-precision real-time positioning under Global Navigation Satellite System (GNSS) denial conditions. Therefore, this paper proposes a UAV Clusters Information Geometry Fusion Positioning (UC-IGFP) method using pseudoranges from LEO satellites. A novel graph model for linking and computing between the UAV clusters and LEO satellites was established. By utilizing probability to describe the positional states of UAVs and sensor errors, the distributed multivariate Probability Fusion Cooperative Positioning (PF-CP) algorithm is proposed to achieve high-precision cooperative positioning and integration of the cluster. Criteria to select the centroid of the cluster were set. A new Kalman filter algorithm suitable for UAV clusters was designed based on the global benchmark and Riemannian information geometry theory, which overcomes the discontinuity problem caused by the change of cluster centroids. Finally, the UC-IGFP method achieves continuous high-precision LEO positioning of UAV clusters. The proposed method effectively addresses the positioning challenges caused by the strong directionality of signal beams from LEO satellites and the insufficient number of information-source constraints at the edge nodes of the cluster. It significantly improves the accuracy and reliability of LEO-UAV cluster positioning. The results of comprehensive simulation experiments show that the proposed method has a 30.5% improvement in performance over mainstream positioning methods, with a positioning error of 14.267 m.
Funding: the National Natural Science Foundation of China (Grant Nos. 42001236, 71991481, and 71991480) and the Young Elite Scientist Sponsorship Program by Bast (Grant No. BYESS2023413).
Abstract: Startups form an information network that reflects their growth trajectories through information flow channels established by shared investors. However, traditional static network metrics overlook temporal dynamics and rely on single indicators to assess startups' roles in predicting future success, failing to comprehensively capture topological variations and structural diversity. To address these limitations, we construct a temporal information network using 14547 investment records from 1013 global blockchain startups between 2004 and 2020, sourced from Crunchbase. We propose two dynamic methods to characterize the information flow: temporal random walk (sTRW) for modeling information flow trajectories and temporal betweenness centrality (tTBET) for identifying key information hubs. These methods enhance walk coverage while ensuring random stability, allowing for more effective identification of influential startups. By integrating sTRW and tTBET, we develop a comprehensive metric to evaluate a startup's influence within the network. In experiments assessing startups' potential for future success, where successful startups are defined as those that have undergone M&A or IPO, incorporating this metric improves accuracy, recall, and F1 score by 0.035, 0.035, and 0.042, respectively. Our findings indicate that information flow from key startups to others diminishes as the network distance increases. Additionally, successful startups generally exhibit higher information inflows than outflows, suggesting that actively seeking investment-related information contributes to startup growth. Our research provides valuable insights for formulating startup development strategies and offers practical guidance for market regulators.
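The defining property of a temporal random walk such as sTRW is that successive edges must respect time order, so simulated information can only flow forward along investment events. A minimal sketch with an invented co-investment edge list:

```python
import random

def temporal_random_walk(edges, start, steps, rng):
    """Random walk that only follows edges whose timestamp is not earlier
    than the previously traversed edge, so information moves forward in time."""
    node, t, walk = start, float("-inf"), [start]
    for _ in range(steps):
        candidates = [(v, ts) for u, v, ts in edges if u == node and ts >= t]
        if not candidates:
            break                      # no time-respecting continuation
        node, t = rng.choice(candidates)
        walk.append(node)
    return walk

# invented co-investment events: (startup, co-funded startup, round year)
edges = [("A", "B", 2010), ("B", "C", 2012), ("C", "A", 2011), ("B", "D", 2009)]
walk = temporal_random_walk(edges, "A", 5, random.Random(42))
```

Note how the walk from A can never reach D: the only B-to-D edge (2009) precedes the A-to-B edge (2010), which is exactly the kind of path a static walk would wrongly allow.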
Funding: supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region under grant number 2022D01B186.
Abstract: Visual Place Recognition (VPR) technology aims to use visual information to judge the location of agents, which plays an irreplaceable role in tasks such as loop closure detection and relocation. It is well known that previous VPR algorithms emphasize the extraction and integration of general image features, while ignoring the mining of salient features that play a key role in the discrimination of VPR tasks. To this end, this paper proposes a Domain-invariant Information Extraction and Optimization Network (DIEONet) for VPR. The core of the algorithm is a newly designed Domain-invariant Information Mining Module (DIMM) and a Multi-sample Joint Triplet Loss (MJT Loss). Specifically, DIMM incorporates the interdependence between different spatial regions of the feature map in the cascaded convolutional unit group, which enhances the model's attention to the domain-invariant static object class. MJT Loss introduces the "joint processing of multiple samples" mechanism into the original triplet loss, and adds a new distance constraint term for "positive and negative" samples, so that the model can avoid falling into a local optimum during training. We demonstrate the effectiveness of our algorithm by conducting extensive experiments on several authoritative benchmarks. In particular, the proposed method achieves the best performance on the TokyoTM dataset with a Recall@1 metric of 92.89%.
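The exact MJT Loss is not specified in the abstract; as a hedged sketch of the "joint processing of multiple samples" idea, a triplet-style loss can penalize every negative that is not at least a margin farther from the anchor than the positive (all embeddings below are invented 2-D points):

```python
import numpy as np

def multi_negative_triplet_loss(anchor, positive, negatives, margin=0.3):
    """Triplet loss applied jointly over several negatives: each negative must
    sit at least `margin` farther from the anchor than the positive does."""
    d_pos = np.linalg.norm(anchor - positive)
    d_negs = np.linalg.norm(np.asarray(negatives) - anchor, axis=1)
    return float(np.maximum(0.0, d_pos - d_negs + margin).sum())

anchor = np.array([0.0, 0.0])         # query image embedding (invented)
positive = np.array([0.1, 0.0])       # same place, different viewpoint
easy_negs = [[5.0, 0.0], [0.0, 5.0]]  # clearly different places -> zero loss
hard_negs = [[0.2, 0.0], [0.0, 0.2]]  # confusable places -> positive loss

easy_loss = multi_negative_triplet_loss(anchor, positive, easy_negs)
hard_loss = multi_negative_triplet_loss(anchor, positive, hard_negs)
```

Processing all negatives of a batch jointly, rather than one triplet at a time, is what lets hard negatives keep contributing gradient after the easy ones are already satisfied.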
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62371253) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. KYCX24_1179).
Abstract: As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In the novel model, the information layer describes green information spreading, the physical contact layer depicts green behavior propagation, and policy regulation is symbolized by an isolated node beneath the two layers. Then, we deduce the green behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, subject to some individuals who are more likely to influence others or become green nodes and the limitations of the capacity of policy regulation, an optimal scheme is given that could optimize policy interventions to most effectively promote green behavior. Subsequently, simulations are performed to validate the preciseness and theoretical results of the new model. It reveals that policy regulation can promote the prevalence and outbreak of green behavior. Green behavior is also more likely to spread and become prevalent in the SF network than in the ER network. Additionally, optimal allocation is highly successful in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green behavior promotion would be most significant.
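The paper derives its green-behavior threshold with a microscopic Markov chain analysis; a much simpler single-layer mean-field analogue, shown here only to illustrate the SF-versus-ER observation, places the spreading threshold at the recovery rate divided by the largest adjacency eigenvalue, which is lower for hub-dominated (SF-like) topologies:

```python
import numpy as np

def adoption_threshold(adj, recovery=0.2):
    """Mean-field spreading threshold: recovery rate / largest eigenvalue of
    the adjacency matrix (a toy analogue, not the paper's three-layer MMCA)."""
    lam_max = np.max(np.linalg.eigvalsh(np.asarray(adj, dtype=float)))
    return recovery / lam_max

n = 6
star = np.zeros((n, n))              # hub-dominated topology (SF-like)
star[0, 1:] = star[1:, 0] = 1.0
ring = np.zeros((n, n))              # homogeneous cycle (ER-like)
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

star_th = adoption_threshold(star)   # 0.2 / sqrt(5), lower -> spreads easier
ring_th = adoption_threshold(ring)   # 0.2 / 2
```

The hub raises the leading eigenvalue, lowering the threshold; this is the intuition behind green behavior spreading more readily in the SF network.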
Funding: supported by the following funding bodies: the National Key Research and Development Program of China (Grant No. 2020YFA0608000); the National Science Foundation of China (Grant Nos. 42075142, 42375148, 42125503, and 42130608); FY-APP-2022.0609; the Sichuan Province Key Technology Research and Development project (Grant Nos. 2024ZHCG0168, 2024ZHCG0176, 2023YFG0305, 2023YFG-0124, and 23ZDYF0091); and the CUIT Science and Technology Innovation Capacity Enhancement Program project (Grant No. KYQN202305).
Abstract: Climate downscaling is used to transform large-scale meteorological data into small-scale data with enhanced detail, which finds wide applications in climate modeling, numerical weather forecasting, and renewable energy. Although deep-learning-based downscaling methods effectively capture the complex nonlinear mapping between meteorological data of varying scales, supervised deep-learning-based downscaling methods suffer from insufficient high-resolution data in practice, and unsupervised methods struggle with accurately inferring small-scale specifics from limited large-scale inputs due to small-scale uncertainty. This article presents DualDS, a dual-learning framework utilizing a Generative Adversarial Network-based neural network and subgrid-scale auxiliary information for climate downscaling. Such a learning method is unified in a two-stream framework through up- and downsamplers, where the downsampler is used to simulate the information loss process during the upscaling, and the upsampler is used to reconstruct lost details and correct errors incurred during the upscaling. This dual learning strategy can eliminate the dependence on high-resolution ground truth data in the training process and refine the downscaling results by constraining the mapping process. Experimental findings demonstrate that DualDS is comparable to several state-of-the-art deep learning downscaling approaches, both qualitatively and quantitatively. Specifically, for a single surface-temperature data downscaling task, our method is comparable with other unsupervised algorithms on the same dataset, and we can achieve a 0.469 dB higher peak signal-to-noise ratio, 0.017 higher structural similarity, 0.08 lower RMSE, and the best correlation coefficient. In summary, this paper presents a novel approach to addressing small-scale uncertainty issues in unsupervised downscaling processes.
Abstract: Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, utilizing models for extracting information from police incident data poses a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these issues, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Funding: supported by the Natural Science Basic Research Plan in Shaanxi Province of China (Program Nos. 2022JM-381 and 2017JQ6070), the National Natural Science Foundation of China (Grant No. 61703256), the Foundation of the State Key Laboratory of Public Big Data (No. PBD2022-08), and the Fundamental Research Funds for the Central Universities, China (Program Nos. GK202201014, GK202202003, and GK201803020).
Abstract: Studies show that Graph Neural Networks (GNNs) are susceptible to minor perturbations. Therefore, analyzing adversarial attacks on GNNs is crucial in current research. Previous studies used Generative Adversarial Networks to generate a set of fake nodes and injected them into a clean GNN to poison the graph structure and evaluate the robustness of GNNs. In the attack process, the computation of new node connections and the attack loss are independent, which affects the attack on the GNN. To improve this, a Fake Node Camouflage Attack based on Mutual Information (FNCAMI) algorithm is proposed. By incorporating a Mutual Information (MI) loss, the distribution of nodes injected into the GNN becomes more similar to that of the original nodes, achieving better attack results. Since the ratio between the GNN loss and the MI loss affects performance, we also design an adaptive weighting method. By adjusting the loss weights in real time through rate changes, larger loss values are obtained, avoiding local optima. The feasibility, effectiveness, and stealthiness of this algorithm are validated on four real datasets. Additionally, we use both global and targeted attacks to test the algorithm's performance. Comparisons with baseline attack algorithms and ablation experiments demonstrate the efficiency of the FNCAMI algorithm.
Funding: the Natural Science Foundation of Chongqing (No. cstb2022nscq-msx1575), the Science and Technology Research Program of Chongqing Municipal Education Commission (Nos. KJQN202201512, KJQN202001523 and KJZD-M202101501), the Chongqing University of Science and Technology Research Funding Projects (Nos. CKRC2022019 and CKRC2019042), and the Open Foundation of the Chongqing Key Laboratory for Oil and Gas Production Safety and Risk Control (No. cqsrc202113).
Abstract: Brain age is an effective biomarker for diagnosing Alzheimer's disease (AD). To address the issue that existing brain age detection methods are inconsistent with the biological hypothesis that AD is accelerated aging of the brain, a mutual information-support vector regression (MI-SVR) brain age prediction model is proposed. First, the age deviation is introduced according to the biological hypothesis of AD. Second, a fitness function is designed based on the mutual information criterion. Third, support vector regression and the fitness function are used to obtain the predicted brain age and fitness value of the subjects, respectively. The optimal age deviation is obtained by maximizing the fitness value. Finally, the proposed method is compared with some existing brain age detection methods. Experimental results show that the brain age obtained by the proposed method has better separability, can better reflect the accelerated aging of AD, and is more helpful for improving the diagnostic accuracy of AD.
Abstract: Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposed a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored for procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared to traditional methods, the proposed approach achieves an F1 score of 0.9698, representing a 4.85% improvement over the best baseline model. Moreover, both recall and precision rates are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capabilities. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities in procurement documents while balancing recall and precision effectively, demonstrating its adaptability in handling varying document formats. The results of ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and application directions, further clarifying its adaptability and value. This method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in various domains.
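The reported precision, recall, and F1 follow the standard exact-match definitions over extracted entities; a minimal sketch (the procurement slots and values below are invented):

```python
def entity_prf(predicted, gold):
    """Exact-match precision, recall and F1 over extracted entity sets."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# invented procurement entities: (slot, value)
gold = {("item", "CT scanner"), ("supplier", "Acme Med"), ("budget", "1.2M")}
pred = {("item", "CT scanner"), ("supplier", "Acme Med"), ("budget", "1.5M")}
p, r, f1 = entity_prf(pred, gold)    # 2 of 3 slots correct
```

With equal-sized prediction and gold sets, precision and recall coincide, which matches the abstract's report of both rates sitting near 97%.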
Funding: supported by the National Natural Science Foundation of China (No. 62306281), the Natural Science Foundation of Zhejiang Province (Nos. LQ23E060006 and LTGG24E050005), and the Key Research Plan of Jiaxing City (No. 2024BZ20016).
Abstract: In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. This acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to enhance the information entropy of these samples, thereby improving the robustness of neural networks trained with them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
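A hedged sketch of the sampling step: mega-trend diffusion widens a small sample's observed range by a diffusion margin, and Monte Carlo sampling inside those bounds yields virtual samples. The simplified symmetric bounds below stand in for the paper's knowledge-guided, LLM-informed design:

```python
import numpy as np

def mtd_bounds(samples, h=1.0):
    """Simplified symmetric mega-trend-diffusion bounds: widen the observed
    range by h sample standard deviations on each side (the paper instead
    derives the diffusion boundaries from LLM-acquired domain knowledge)."""
    x = np.asarray(samples, dtype=float)
    spread = h * x.std(ddof=1)
    return x.min() - spread, x.max() + spread

def virtual_samples(samples, n, rng):
    """Monte Carlo sampling inside the diffusion bounds."""
    lo, hi = mtd_bounds(samples)
    return rng.uniform(lo, hi, size=n)

real = [20.1, 20.8, 19.7, 21.0, 20.4]            # small real sample (invented)
lo, hi = mtd_bounds(real)
virt = virtual_samples(real, 500, np.random.default_rng(1))
```

The widened bounds let the virtual set cover plausible operating conditions the five real readings never hit, which is the "data completeness" the abstract is after.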
Funding: supported by the Innovation Driven Development Foundation of Guangxi (No. AD22080035), the Open Project Funding of the Key Laboratory of Tropical Marine Ecosystem and Bioresource, Ministry of Natural Resources (No. 2023-QN04), the Guangdong Provincial Ordinary University Youth Innovative Talent Project in 2024 (No. 2024KQNCX134), and the Guangdong Provincial Special Fund Project for Talent Development Strategy in 2024 (No. 2024R3005).
Abstract: Investigating whether sediment samples contain representative grain size distribution information is important for the accurate extraction of sediment characteristics and the conduct of related sedimentary record studies. This study comparatively analyzed the numerical and qualitative differences and the degree of correlation of 36 sets of characteristic parameters of surface sediment parallel-sample grain size distributions from three sampling profiles at Jinsha Bay Beach in Zhanjiang, western Guangdong. At each sampling point, five parallel subsamples were established at intervals of 0, 10, 20, 50, and 100 cm along the coastline. The research findings indicate the following: 1) relatively large differences in the mean values of the different parallel samples (0.19–0.34Φ), with smaller differences observed in other characteristic grain sizes (D_(10), D_(50), and D_(90)); 2) small differences in characteristic values among the various parallel-sample grain size parameters, with at least 33% of the combinations of qualitative results showing inconsistency; 3) 50% of the regression equations between the skewness of different parallel samples displaying no significant correlation; 4) relative deviations of −47.91% to 27.63% and −49.20% to 2.08% existing between the particle size parameters of a single sample and parallel samples (with the average obtained) at intervals of 10 and 50 cm, respectively. As such, small spatial differences, even within 100 cm, can considerably affect grain size parameters. Given the uncertain reasons underlying the representativeness of the samples, which may only cover the area immediately surrounding the sampling station, researchers are advised to design parallel sample collection strategies based on the spatiotemporal distribution characteristics of the parameters of interest during sediment sample collection. This study provides a typical case of the comparative analysis of parallel-sample grain size parameters, with a focus on small-spatial-scale beach sediment, which contributes to an enhanced understanding of the accuracy and reliability of sediment sample collection strategies and the extraction of grain size information.
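The characteristic grain sizes (mean, D10, D50, D90) and the single-versus-parallel relative deviation can be computed directly from percentile statistics; a minimal sketch with invented phi-unit subsample values (percentiles over raw measurements are a simplification of the cumulative-curve method used in sedimentology):

```python
import numpy as np

def grain_size_stats(phi_values):
    """Characteristic grain sizes of one subsample, in phi units."""
    phi = np.sort(np.asarray(phi_values, dtype=float))
    d10, d50, d90 = np.percentile(phi, [10, 50, 90])
    return {"mean": float(phi.mean()), "D10": d10, "D50": d50, "D90": d90}

def relative_deviation(single, parallel_mean):
    """Relative deviation (%) of one subsample's parameter vs the parallel average."""
    return 100.0 * (single - parallel_mean) / parallel_mean

# two invented parallel subsamples taken 10 cm apart
a = grain_size_stats([1.8, 2.0, 2.1, 2.3, 2.6])
b = grain_size_stats([1.9, 2.2, 2.4, 2.5, 2.8])
dev = relative_deviation(a["mean"], (a["mean"] + b["mean"]) / 2)
```

Comparing `dev` across parameters and spacings is the kind of check the study applies to its 36 real parallel-sample sets.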
Funding: supported by the Research Program of Fine Exploration and Surrounding Rock Classification Technology for Deep Buried Long Tunnels Driven by Horizontal Directional Drilling and Magnetotelluric Methods Based on Deep Learning under Grant E202408010, and the Sichuan Science and Technology Program under Grant 2024NSFSC1984 and Grant 2024NSFSC1990.
Abstract: Porosity is an important attribute for evaluating the petrophysical properties of reservoirs, and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under the trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance differences between the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
Fund: Supported by the National Key R&D Program of China (No. 2022YFB3104502), the National Natural Science Foundation of China (No. 62301251), the Natural Science Foundation of Jiangsu Province of China (No. BK20220883), the open research fund of the National Mobile Communications Research Laboratory, Southeast University, China (No. 2024D04), and the Young Elite Scientists Sponsorship Program by CAST (No. 2023QNRC001).
Abstract: The environment of low-altitude urban airspace is complex and variable due to numerous obstacles, non-cooperative aircraft, and birds. Enabling Unmanned Aerial Vehicles (UAVs) to leverage environmental information for three-dimensional collision-free trajectory planning is a prerequisite for airspace security. However, timely information on the surrounding situation is difficult for UAVs to acquire, which introduces further security risks. As a mature technology used in traditional civil aviation, Automatic Dependent Surveillance-Broadcast (ADS-B) provides continuous surveillance of aircraft information. Consequently, we leverage ADS-B for surveillance and information broadcasting, and divide the aerial airspace into multiple sub-airspaces to improve flight safety in UAV trajectory planning. Specifically, we propose the secure Sub-airSpaces Planning (SSP) algorithm and the Particle Swarm Optimization Rapidly-exploring Random Trees (PSO-RRT) algorithm for UAV trajectory planning in low-altitude airspace. The performance of the proposed algorithms is verified by simulations, and the results show that SSP reduces both the maximum number of UAVs per sub-airspace and the trajectory length, while PSO-RRT reduces the cost of UAV trajectories within a sub-airspace.
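The sub-airspace division above can be sketched as a uniform grid that maps each UAV position to a cell, so a planner can monitor the per-cell load that SSP tries to keep low. The grid-based division rule and function names are assumptions for illustration; the paper's actual partitioning may differ:

```python
import math

def assign_subairspace(pos, cell):
    """Map a 3-D position (x, y, z) to a sub-airspace index by uniform
    grid division; `cell` is the (dx, dy, dz) size of one sub-airspace."""
    return tuple(int(math.floor(p / c)) for p, c in zip(pos, cell))

def subairspace_load(uav_positions, cell):
    """Count UAVs per sub-airspace. SSP aims to keep the maximum of
    these counts small to reduce conflict risk during planning."""
    load = {}
    for pos in uav_positions:
        idx = assign_subairspace(pos, cell)
        load[idx] = load.get(idx, 0) + 1
    return load
```

For example, with a 100 m cell size, UAVs at (10, 10, 50) and (12, 11, 55) share sub-airspace (0, 0, 0), while one at (210, 10, 50) falls in (2, 0, 0).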
Fund: Supported by the National Natural Science Foundation of China (Grant No. 62304137), the Guangdong Basic and Applied Basic Research Foundation (Grant Nos. 2023A1515012479, 2024A1515011737, and 2024A1515010006), the Science and Technology Innovation Commission of Shenzhen (Grant No. JCYJ20220818100206013), the RSC Researcher Collaborations Grant (Grant No. C23-2422436283), the State Key Laboratory of Radio Frequency Heterogeneous Integration (Independent Scientific Research Program No. 2024010), the Project on Frontier and Interdisciplinary Research Assessment, Academic Divisions of the Chinese Academy of Sciences (Grant No. XK2023XXA002), and the NTUT-SZU Joint Research Program.
Abstract: In this era of data explosion, ensuring the secure storage, access, and transmission of information is imperative, encompassing all aspects from safeguarding personal devices to formulating national information security strategies. Leveraging the transport potential of dual-type carriers and employing optical modulation techniques to develop highly reconfigurable ambipolar optoelectronic transistors enables effective destruction of information after reading, thereby guaranteeing data security. In this study, a reconfigurable ambipolar optoelectronic synaptic transistor based on a blend film of poly(3-hexylthiophene) (P3HT) and poly[[N,N′-bis(2-octyldodecyl)-naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5′-(2,2′-bithiophene)] (N2200) was fabricated by a solution-processing method. The resulting transistor exhibited a relatively large ON/OFF ratio of 10^3 in both the n- and p-type regions, and tunable photoconductivity after light illumination, particularly with green light. The photo-generated carriers could be effectively trapped under the gate bias, indicating potential application in mimicking synaptic behaviors. Furthermore, the synaptic plasticity, including volatile/non-volatile and excitatory/inhibitory characteristics, could be finely modulated by electrical and optical stimuli. These reconfigurable optoelectronic properties enable light-assisted "burn after reading" of information. This study not only offers valuable insights for the advancement of high-performance ambipolar organic optoelectronic synaptic transistors, but also presents innovative ideas for future information security access systems.
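The volatile versus non-volatile plasticity distinction above is often modeled phenomenologically as an exponential relaxation of device conductance after a stimulus: a short retention time constant gives volatile behavior, a long one gives non-volatile retention. This is a generic textbook-style model sketched here for illustration, not the device equations of the study:

```python
import math

def conductance_decay(g0, g_rest, tau, t):
    """Post-stimulus conductance at time t: relaxation from the
    stimulated level g0 back toward the resting level g_rest with
    retention time constant tau. Small tau models a volatile
    (short-term) response; large tau models non-volatile retention."""
    return g_rest + (g0 - g_rest) * math.exp(-t / tau)
```

For instance, with the same stimulus, a device state with tau on the order of seconds forgets quickly (useful for self-destructing reads), while a state with tau of hours retains the written level.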