The advent of Grover’s algorithm presents a significant threat to classical block cipher security, spurring research into post-quantum secure cipher design. This study engineers quantum circuit implementations for three versions of the Ballet family of block ciphers. Ballet-p/k includes a modular-addition operation that is uncommon in lightweight block ciphers; quantum ripple-carry adders are implemented at both “32+32” and “64+64” scales to support it. The qubit count, quantum gate count, and quantum circuit depth of the three Ballet versions are then systematically evaluated under the quantum computing model, and key-recovery attack circuits based on Grover’s algorithm are constructed against each version. The comprehensive analysis shows that Ballet-128/128 fails to meet NIST security Level 1, while, when resource accounting is restricted to the Clifford+T gate set, the Ballet-128/256 and Ballet-256/256 quantum circuits attain Level 3.
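Quantum ripple-carry adders of the “32+32”/“64+64” kind mentioned above are commonly built from MAJ/UMA gate ladders (the Cuccaro construction). As an illustrative sketch, and not the paper's actual circuit, the gate-level logic can be simulated on classical bits, since every gate involved (CNOT, Toffoli) is classically reversible:

```python
# Bit-level simulation of a Cuccaro-style MAJ/UMA ripple-carry adder.
# Illustrative sketch only: function names and layout are ours, not the paper's.

def maj(s, c, b, a):
    """MAJ gate on state list s; afterwards s[a] holds the carry-out."""
    s[b] ^= s[a]          # CNOT a -> b
    s[c] ^= s[a]          # CNOT a -> c
    s[a] ^= s[c] & s[b]   # Toffoli (c, b) -> a

def uma(s, c, b, a):
    """UMA gate: undoes MAJ and leaves the sum bit in s[b]."""
    s[a] ^= s[c] & s[b]   # Toffoli (c, b) -> a
    s[c] ^= s[a]          # CNOT a -> c
    s[b] ^= s[c]          # CNOT c -> b

def ripple_add(x, y, n):
    """Return (x + y) mod 2**n via the forward MAJ ladder / backward UMA unwind."""
    # state layout: s[0] = ancilla, s[1..n] = x bits, s[n+1..2n] = y bits
    s = [0] + [(x >> i) & 1 for i in range(n)] + [(y >> i) & 1 for i in range(n)]
    maj(s, 0, n + 1, 1)
    for i in range(2, n + 1):
        maj(s, i - 1, n + i, i)       # carry ripples along the x register
    for i in range(n, 1, -1):
        uma(s, i - 1, n + i, i)       # unwind, writing sum bits into y
    uma(s, 0, n + 1, 1)
    return sum(s[n + 1 + i] << i for i in range(n))  # y register holds the sum
```

The x register and the ancilla are restored to their initial values by the UMA unwind, which is what makes this construction attractive as an in-place quantum adder.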
Wireless communication-enabled Cooperative Adaptive Cruise Control (CACC) is expected to improve the safety and traffic capacity of vehicle platoons. Existing CACC assumes a conventional communication delay with fixed Vehicular Communication Network (VCN) topologies. However, when the network is under attack, the communication delay may be much higher, and system stability may not be guaranteed. This paper proposes a novel communication Delay Aware CACC with Dynamic Network Topologies (DADNT). The main idea is that, across varying communication delays, the CACC should dynamically adjust the VCN topology to achieve the minimum inter-vehicle spacing, maximizing traffic capacity while guaranteeing stability and minimizing the following error. To this end, a multi-objective optimization problem is formulated, and a 3-step Divide-And-Conquer sub-optimal solution (3DAC) is proposed. Simulation results show that with 3DAC, the proposed DADNT reduces inter-vehicle spacing by 5%, 10%, and 14%, respectively, compared with traditional CACC with fixed one-vehicle, two-vehicle, and three-vehicle look-ahead network topologies, thereby improving traffic efficiency.
With the increasing demand for computational power in artificial intelligence (AI) algorithms, dedicated accelerators have become a necessity. However, the complexity of hardware architectures, the vast design search space, and the complex tasks of accelerators pose significant challenges. Traditional search methods can become prohibitively slow as the search space expands. A design space exploration (DSE) method based on transfer learning is proposed, which reduces the time spent on repeated training and uses multi-task models for different tasks on the same processor. The proposed method accurately predicts the latency and energy consumption associated with neural network accelerator design parameters, enabling faster identification of optimal outcomes than traditional methods. Compared with other DSE methods using a multilayer perceptron (MLP), it also requires shorter training time. Comparative experiments demonstrate that the proposed method improves the efficiency of DSE without compromising the accuracy of the results.
In covert communications, joint jammer selection and power optimization are important for improving performance. However, existing schemes usually assume a warden with a known location and perfect Channel State Information (CSI), which is difficult to achieve in practice. To be more practical, it is important to investigate covert communications against a warden with an uncertain location and imperfect CSI, which makes it difficult for legitimate transceivers to estimate the warden's detection probability. First, the uncertainty caused by the unknown warden location must be removed, so the Optimal Detection Position (OPTDP) of the warden is derived, which provides the best detection performance (i.e., the worst case for a covert communication). Then, to further avoid the impractical assumption of perfect CSI, the covert throughput is maximized using only the channel distribution information. Given this OPTDP-based worst case, the jammer selection, jamming power, transmission power, and transmission rate are jointly optimized to maximize the covert throughput (OPTDP-JP). To solve this coupled problem, a Heuristic algorithm based on the Maximum Distance Ratio (H-MAXDR) is proposed to provide a sub-optimal solution. First, according to the analysis of the covert throughput, the node with the maximum distance ratio (i.e., the ratio of the jammer's distance to the receiver over its distance to the warden) is selected as the friendly jammer (MAXDR). Then, the optimal transmission and jamming powers are derived, followed by the optimal transmission rate obtained via the bisection method. Numerical and simulation results show that although the warden's location is unknown, by assuming the warden is at the OPTDP, the proposed OPTDP-JP always satisfies the covertness constraint. In addition, with an uncertain warden and imperfect CSI, the covert throughput provided by OPTDP-JP is 80% higher than that of existing schemes when the covertness constraint is 0.9, showing the effectiveness of OPTDP-JP.
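The final step of H-MAXDR, finding the optimal transmission rate by bisection, only requires that feasibility be monotone in the rate. A generic sketch follows; the Rayleigh-fading outage condition used here is an illustrative stand-in, not the paper's actual covertness model:

```python
import math

def max_rate_bisection(feasible, lo=0.0, hi=20.0, tol=1e-9):
    """Largest rate R in [lo, hi] satisfying a monotone constraint:
    feasible(R) is True below some threshold and False above it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # mid is achievable: search higher
        else:
            hi = mid   # mid violates the constraint: search lower
    return lo

# Illustrative stand-in constraint: keep the Rayleigh-fading outage
# probability 1 - exp(-(2^R - 1)/snr) below eps at mean SNR `snr`.
def outage_ok(rate, snr=100.0, eps=0.1):
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / snr) <= eps

best = max_rate_bisection(outage_ok)
# For this toy constraint the threshold has a closed form:
# R* = log2(1 - snr * ln(1 - eps)), which the bisection should match.
closed = math.log2(1.0 - 100.0 * math.log(1.0 - 0.1))
```

Bisection is attractive here because each feasibility check may itself involve an expensive numerical evaluation, and the number of checks grows only logarithmically with the required precision.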
1. Introduction The rapid expansion of satellite constellations in recent years has generated massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails deploying computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Synthetic aperture radar (SAR) radio frequency identification (RFID) localization is widely used for automated guided vehicles (AGVs) in the industrial internet of things (IIoT). However, AGV speeds are limited by the phase difference (PD) between two neighboring readers. This paper proposes an inertial navigation system (INS) based SAR RFID localization method (ISRL) for AGVs that move nonlinearly. To relax the speed limitation, a new phase-unwrapping method based on the similarity of PDs (PU-SPD) is proposed to resolve the PD ambiguity when the AGV speed exceeds 60 km/h. For localization, the Gauss-Newton (GN) algorithm is employed, and an initial value estimation scheme based on variable substitution (IVE-VS) is proposed to improve its positioning accuracy and convergence rate; ISRL is thus a combination of IVE-VS and GN. Moreover, the Cramer-Rao lower bound (CRLB) and the speed limitation are derived. Simulation results show that ISRL converges after two iterations and achieves a positioning accuracy of 7.50 cm at a phase noise level σ = 0.18, which is 35% better than hyperbolic unbiased estimation localization (HyUnb).
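PU-SPD itself exploits the similarity of PDs; the textbook baseline such methods improve on is classic 1-D phase unwrapping, which restores a smooth phase sequence by snapping any jump larger than π back by a multiple of 2π. A minimal sketch of that baseline (not the paper's method):

```python
import math

def unwrap(phases):
    """Classic 1-D phase unwrapping: accumulate a 2*pi offset whenever
    consecutive wrapped samples jump by more than pi."""
    out = [phases[0]]
    offset = 0.0
    for prev, cur in zip(phases, phases[1:]):
        delta = cur - prev
        if delta > math.pi:
            offset -= 2.0 * math.pi   # wrapped upward: shift back down
        elif delta < -math.pi:
            offset += 2.0 * math.pi   # wrapped downward: shift back up
        out.append(cur + offset)
    return out

# Example: a linearly growing phase observed modulo 2*pi (values in (-pi, pi]).
true_phase = [0.4 * k for k in range(40)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap(wrapped)
```

This baseline fails exactly when the true phase can change by more than π between samples, which is the high-speed ambiguity regime the PU-SPD method is designed to handle.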
The shadow tomography problem introduced by [1] is an important problem in quantum computing. Given an unknown n-qubit quantum state ρ, the goal is to estimate tr(F_1ρ), ..., tr(F_Mρ) within an additive error ε, using as few copies of ρ as possible, where F_1, ..., F_M are known two-outcome measurements. In this paper, we consider the shadow tomography problem with a potentially inaccurate prediction σ of the true state ρ. This corresponds to practical cases where we possess prior knowledge of the unknown state. For example, in quantum verification or calibration, we may know which quantum state the quantum device is expected to generate, while the state it actually generates deviates from it. We introduce an algorithm whose sample complexity improves with the prediction quality: in the generic case, even if the prediction is arbitrarily bad, it matches the complexity of the best algorithm without prediction [2], while as the prediction improves, the sample complexity decreases smoothly to Õ(n log²M/ε³) when the trace distance between the prediction and the unknown state is O(ε). Furthermore, we conduct numerical experiments to validate our theoretical analysis. The experiments simulate noisy quantum circuits reflecting realistic scenarios in quantum verification or calibration. Notably, our algorithm outperforms the previous prediction-free approach in most settings.
Low-Earth-Orbit satellite constellation networks (LEO-SCN) can provide low-cost, large-scale, flexible-coverage wireless communication services. LEO-SCN are characterized by high dynamics and large topological sizes, so protocol development and application testing are challenging to carry out in a natural environment. Simulation platforms are a more effective means of technology demonstration, but currently available simulators offer a single function and limited simulation scale; a full-featured simulator is still lacking. In this paper, we apply parallel discrete-event simulation to LEO-SCN to support large-scale, complex system simulation at the packet level. To address the problem that single-process programs cannot cope with complex simulations containing numerous entities, we propose a parallel mechanism and the synchronization algorithms LP-NM and LP-YAWNS. In experiments, we use ns-3 to verify the speedup ratio and efficiency of these algorithms. The results show that the proposed mechanism can provide parallel simulation engine support for LEO-SCN.
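The YAWNS family that LP-YAWNS presumably builds on is a conservative window-based synchronization scheme: at each barrier, every logical process (LP) learns the globally earliest pending event, and all LPs may safely process events strictly before that time plus the minimum lookahead. A toy single-process sketch of the loop (illustrative only, not the paper's algorithm; the ring-forwarding model is our invention):

```python
import heapq

def simulate_windows(n_lps, initial_events, lookahead, horizon):
    """Toy YAWNS-style loop. Each round, a barrier computes the safe
    window [earliest, earliest + lookahead); every LP processes only the
    events inside it. Any newly scheduled event lies 'lookahead' in the
    future, so it can never land inside the current window -- this is the
    conservative-safety invariant.

    initial_events: list of (time, dst_lp). Returns processed (time, lp).
    """
    queues = {lp: [] for lp in range(n_lps)}
    for t, dst in initial_events:
        heapq.heappush(queues[dst], t)
    processed = []
    while any(queues.values()):
        earliest = min(q[0] for q in queues.values() if q)
        window_end = earliest + lookahead
        for lp in range(n_lps):
            q = queues[lp]
            while q and q[0] < window_end:
                t = heapq.heappop(q)
                processed.append((t, lp))
                nt = t + lookahead            # >= window_end by construction
                if nt < horizon:              # forward an event to the next LP
                    heapq.heappush(queues[(lp + 1) % n_lps], nt)
    return processed
```

The larger the lookahead relative to event density, the more events fit in each window and the fewer barriers are needed, which is what makes lookahead the key tuning knob of conservative parallel simulation.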
The high failure rates in clinical drug development based on animal models highlight the urgent need for more representative human models in biomedical research. In response to this demand, organoids and organ chips have been integrated for greater physiological relevance and dynamic, controlled experimental conditions. This innovative platform, organoids-on-a-chip technology, shows great promise in disease modeling, drug discovery, and personalized medicine, attracting interest from researchers, clinicians, regulatory authorities, and industry stakeholders. This review traces the evolution from organoids to organoids-on-a-chip, driven by the necessity for advanced biological models. We summarize the applications of organoids-on-a-chip in simulating physiological and pathological phenotypes and in therapeutic evaluation, highlighting how integrating organ-chip technologies such as microfluidic systems, mechanical stimulation, and sensor integration optimizes organoid cell types, spatial structure, and physiological functions, thereby expanding their biomedical applications. We conclude by addressing the current challenges in the development of organoids-on-a-chip and offering insights into its prospects. The advancement of organoids-on-a-chip is poised to enhance fidelity, standardization, and scalability, and the integration of cutting-edge technologies and interdisciplinary collaborations will be crucial for the progression of this technology.
A novel quantum search algorithm tailored for continuous optimization and spectral problems was recently proposed by a research team from the University of Electronic Science and Technology of China to broaden the frontiers of quantum computation and enrich its application landscape. Quantum computing has traditionally excelled at discrete search challenges, but many important applications, from large-scale optimization to advanced physics simulations, necessitate searching through continuous domains. These continuous search problems involve uncountably infinite solution spaces and bring about computational complexities far beyond those faced in conventional discrete settings. The paper, titled “Fixed-Point Quantum Continuous Search Algorithm with Optimal Query Complexity”, takes on the core challenge of performing search tasks in domains that may be uncountably infinite, offering theoretical and practical insights into achieving quantum speedups in such settings [1].
In the field of natural language processing, the rapid development of large language models (LLMs) has attracted increasing attention. LLMs have shown a high level of creativity in various tasks, but methods for assessing such creativity are inadequate. Assessing LLM creativity requires accounting for differences from humans and measuring multiple dimensions while balancing accuracy and efficiency. This paper aims to establish an efficient framework for assessing the level of creativity in LLMs. By adapting the modified Torrance Tests of Creative Thinking, the research evaluates the creative performance of various LLMs across 7 tasks, emphasizing 4 criteria: fluency, flexibility, originality, and elaboration. In this context, we develop a comprehensive dataset of 700 questions for testing and an LLM-based evaluation method. This study also presents a novel analysis of LLMs' responses to diverse prompts and role-play situations. We found that the creativity of LLMs falls short primarily in originality, while excelling in elaboration, and that the prompts and role-play settings of the model significantly influence creativity. The experimental results further indicate that collaboration among multiple LLMs can enhance originality. Notably, our findings reveal a consensus between human evaluations and LLMs regarding the personality traits that influence creativity. These findings underscore the significant impact of LLM design on creativity and bridge artificial intelligence and human creativity, offering insights into LLMs' creativity and potential applications.
Submodular maximization is a significant area of interest in combinatorial optimization with various real-world applications. In recent years, streaming algorithms for submodular maximization have gained attention, allowing real-time processing of large data sets by examining each piece of data only once. However, most current state-of-the-art algorithms are only applicable to monotone submodular maximization, and significant gaps remain in the approximation ratios between monotone and non-monotone objective functions. In this paper, we propose a streaming algorithm framework for non-monotone submodular maximization and use it to design deterministic streaming algorithms for the d-knapsack constraint and the knapsack constraint. Our 1-pass streaming algorithm for the d-knapsack constraint has a 1/(4(d+1)) − ε approximation ratio, using O(B log B/ε) memory and O(log B/ε) query time per element, where B = min(n, b) is the maximum number of elements the knapsack can store. As a special case of the d-knapsack constraint, we obtain a 1-pass streaming algorithm with a 1/8 − ε approximation ratio for the knapsack constraint. To our knowledge, no streaming algorithm previously existed for this constraint with a non-monotone objective function, even when d = 1. In addition, we propose a multi-pass streaming algorithm with a 1/6 − ε approximation that stores O(B) elements.
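The paper's 1/(4(d+1)) − ε algorithm is more involved, but the backbone of 1-pass streaming submodular maximization is a threshold rule on marginal gains: keep an arriving element only if its marginal contribution clears a threshold τ. A minimal sketch for the simpler monotone, cardinality-constrained case (illustrative only; the objective, stream, and τ below are toy choices, not the paper's setting):

```python
def threshold_stream(stream, f, k, tau):
    """One pass over the stream: keep element e iff the current solution
    has room (|S| < k) and the marginal gain f(S + e) - f(S) >= tau."""
    solution = []
    for e in stream:
        if len(solution) < k and f(solution + [e]) - f(solution) >= tau:
            solution.append(e)
    return solution

# Toy monotone submodular objective: coverage of ground-set elements.
def coverage(sets):
    return len(set().union(*sets)) if sets else 0

stream = [{1, 2}, {2, 3}, {1}, {4, 5, 6}]
picked = threshold_stream(stream, coverage, k=2, tau=2)
```

Full algorithms run many such thresholds in parallel (a geometric grid of τ values) and return the best resulting solution, which is where the O(B log B/ε) memory in the abstract comes from.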
With the rapid development of artificial intelligence, computational pathology has been seamlessly integrated into the entire clinical workflow, encompassing diagnosis, treatment, prognosis, and biomarker discovery. This integration has significantly enhanced clinical accuracy and efficiency while reducing the workload of clinicians. Traditionally, research in this field has depended on collecting and labeling large datasets for specific tasks, followed by developing task-specific computational pathology models. However, this approach is labor-intensive and does not scale efficiently to open-set identification or rare diseases. Given the diversity of clinical tasks, training individual models from scratch to address the whole spectrum of clinical tasks in the pathology workflow is impractical, which highlights the urgent need to transition from task-specific models to foundation models (FMs). In recent years, pathological FMs have proliferated. These FMs can be classified into three categories, namely pathology image FMs, pathology image-text FMs, and pathology image-gene FMs, each with distinct functionalities and application scenarios. This review provides an overview of the latest research advancements in pathological FMs, with a particular emphasis on their applications in oncology. The key challenges and opportunities presented by pathological FMs in precision oncology are also explored.
Quantum computing is a game-changing technology for global academia, research centers, and industries including computational science, mathematics, finance, pharmaceuticals, materials science, chemistry, and cryptography. Although it has seen a major boost in the last decade, we are still a long way from a full-fledged, mature quantum computer. We will therefore remain in the noisy intermediate-scale quantum (NISQ) era for a long time, working on quantum computing systems with dozens to thousands of qubits. An outstanding challenge, then, is to devise an application that can reliably carry out a nontrivial task of interest on near-term quantum devices with non-negligible quantum noise. To address this challenge, several near-term quantum computing techniques, including variational quantum algorithms, error mitigation, quantum circuit compilation, and benchmarking protocols, have been proposed to characterize and mitigate errors and to implement algorithms with a certain resistance to noise, so as to enhance the capabilities of near-term quantum devices and explore the boundaries of their ability to realize useful applications. In addition, the development of near-term quantum devices is inseparable from efficient classical simulation, which plays a vital role in quantum algorithm design and verification, error-tolerant verification, and other applications. This review provides a thorough introduction to these near-term quantum computing techniques, reports on their progress, and discusses their future prospects, which we hope will motivate researchers to undertake additional studies in this field.
1 Introduction The lifetime of wireless sensor networks (WSNs) is restricted by the limited energy of battery-powered sensor devices, making lifetime extension a critical problem in real applications [1]. Several aspects have been examined in previous work to extend the lifetime of WSNs, such as the deployment position, the network routing strategy, and the sensing range adjustment. Given that there are often many redundant sensors, a practical way to extend the lifetime of a WSN is to partition the sensors into k subsets, each of which can cover all the targets [2]. The subsets are then activated one by one, extending the lifetime of the WSN to k times the battery lifetime of a single sensor. The problem of finding the maximal k is abstracted as the set k-cover problem.
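The set k-cover idea above can be sketched with a simple greedy heuristic: repeatedly build a cover from the remaining sensors by always taking the one that covers the most still-uncovered targets, and stop when no further full cover can be completed. This greedy rule is a common heuristic for illustration, not necessarily the cited reference's algorithm:

```python
def greedy_set_k_covers(sensors, targets):
    """Partition sensors into disjoint covers, each monitoring every target.
    sensors: {name: set_of_covered_targets}. Returns a list of covers;
    activating them one by one multiplies network lifetime by len(result)."""
    remaining = set(sensors)
    covers = []
    while True:
        cover, uncovered = [], set(targets)
        pool = set(remaining)
        while uncovered:
            # pick the sensor covering the most still-uncovered targets
            best = max(pool, key=lambda s: len(sensors[s] & uncovered), default=None)
            if best is None or not (sensors[best] & uncovered):
                return covers            # cannot complete another full cover
            cover.append(best)
            uncovered -= sensors[best]
            pool.discard(best)
        remaining -= set(cover)
        covers.append(cover)

sensors = {"a": {1, 2}, "b": {1}, "c": {2}, "d": {1, 2}}
covers = greedy_set_k_covers(sensors, {1, 2})   # three disjoint full covers
```

Here the four sensors yield k = 3 disjoint covers ({a}, {d}, and {b, c} in some order), so the network lives three times as long as a single always-on sensor set.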
Dear Editor, Short-range wireless communications have been widely used in our daily life, but the pursuit of a better communication experience never stops, leading to more stringent requirements from emerging applications.^(1) For example, remote control applications such as telesurgery require a delay of less than 1 ms,^(2) and industrial closed-loop control applications such as automatic assembly lines have a reliability requirement of at least 99.999%.
Mutation-based greybox fuzzing has been one of the most prevalent techniques for security vulnerability discovery, and a great deal of research has been proposed to improve both its efficiency and effectiveness. Mutation-based greybox fuzzing generates input cases by mutating the input seed, i.e., applying a sequence of mutation operators to randomly selected mutation positions of the seed. However, existing research focuses on scheduling mutation operators, leaving the scheduling of mutation positions an overlooked aspect of fuzzing efficiency. This paper proposes a novel greybox fuzzing method, PosFuzz, that statistically schedules mutation positions based on their historical performance. PosFuzz uses the concept of an effective position distribution to represent the semantics of the input and to guide mutations. It first applies Good-Turing frequency estimation to calculate an effective position distribution for each mutation operator, and then leverages two sampling methods in different mutating stages to select positions from the distribution. We have implemented PosFuzz on top of AFL, AFLFast, and MOPT (called Pos-AFL, Pos-AFLFast, and Pos-MOPT, respectively) and evaluated them on the UNIFUZZ benchmark (20 widely used open-source programs) and the LAVA-M dataset. The results show that, under the same testing time budget, Pos-AFL, Pos-AFLFast, and Pos-MOPT outperform their counterparts in code coverage and vulnerability discovery ability. Compared with AFL, AFLFast, and MOPT, PosFuzz achieves 21% more edge coverage and finds 133% more paths on average. It also triggers 275% more unique bugs on average.
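Good-Turing frequency estimation, which PosFuzz borrows to smooth its position statistics, reserves probability mass for unseen outcomes based on how many outcomes were observed exactly once. A minimal standalone sketch of the (unsmoothed) estimator, separate from PosFuzz's pipeline:

```python
from collections import Counter

def good_turing(sample):
    """Return (p_unseen, adjusted_count) for a sample of outcomes.
    p_unseen = N1/N, where N1 = number of outcomes seen exactly once;
    adjusted_count(r) = (r + 1) * N_{r+1} / N_r (zero when N_{r+1} = 0)."""
    counts = Counter(sample)
    n_r = Counter(counts.values())   # N_r: how many distinct outcomes occur r times
    n = len(sample)
    p_unseen = n_r[1] / n
    def adjusted_count(r):
        return (r + 1) * n_r[r + 1] / n_r[r] if n_r[r] else 0.0
    return p_unseen, adjusted_count

# 'c' and 'd' each appear once, so 2/7 of the mass is reserved for
# positions never yet observed to be effective.
p0, adj = good_turing(["a", "a", "b", "b", "b", "c", "d"])
```

The appeal for fuzzing is exactly this reserved mass: positions that never produced interesting behavior still get a principled nonzero probability of being mutated again, rather than being starved by raw frequency counts.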
The rapid advancements in artificial intelligence (AI) are catalyzing transformative changes in atomic modeling, simulation, and design. AI-driven potential energy models have demonstrated the capability to conduct large-scale, long-duration simulations with the accuracy of ab initio electronic structure methods. However, the model generation process remains a bottleneck for large-scale applications. We propose a shift towards a model-centric ecosystem, wherein a large atomic model (LAM), pretrained across multiple disciplines, can be efficiently fine-tuned and distilled for various downstream tasks, thereby establishing a new framework for molecular modeling. In this study, we introduce the DPA-2 architecture as a prototype for LAMs. Pre-trained on a diverse array of chemical and materials systems using a multi-task approach, DPA-2 demonstrates superior generalization across multiple downstream tasks compared with traditional single-task pre-training and fine-tuning methodologies. Our approach sets the stage for the development and broad application of LAMs in molecular and materials simulation research.
Agile hardware development methodology has been widely adopted over the past decade. Despite the research progress, industry still doubts its applicability, especially for the functional verification of complicated processor chips. Functional verification commonly employs a simulation-based method of co-simulating the design under test with a reference model and checking the consistency of their outcomes given the same input stimuli. We observe limited collaboration and information exchange throughout the design and verification processes, leading to dramatic inefficiencies when applying the conventional functional verification workflow to agile development. In this paper, we propose workflow integration with collaborative task delegation and dynamic information exchange as the design principles to effectively address the challenges of functional verification under the agile development model. Based on workflow integration, we enhance the functional verification workflows with a series of novel methodologies and toolchains. The diff-rule based agile verification methodology (DRAV) reduces the overhead of building reference models with runtime execution information from designs under test. We present its RISC-V implementation, DiffTest, which adopts information probes to extract internal design behaviors for co-simulation and debugging. DiffTest further integrates two plugins, namely XFUZZ for effective test generation guided by design coverage metrics, and LightSSS for efficient fault analysis triggered by co-simulation mismatches. We present the integrated workflows for agile hardware development and demonstrate their effectiveness in designing and verifying RISC-V processors, with 33 functional bugs found in NutShell. We also illustrate the efficiency of the proposed toolchains with a case study on a functional bug in the L2 cache of XiangShan.
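The co-simulation check at the heart of this kind of verification reduces to one idea: step the design under test (DUT) and the reference model in lockstep on identical stimuli and flag the first architectural-state divergence. A hedged sketch (the function names and the toy accumulator models are ours, not DiffTest's API):

```python
def co_simulate(dut_step, ref_step, stimuli):
    """Lockstep co-simulation: apply each stimulus to both models and
    return (index, dut_state, ref_state) at the first mismatch, else None.
    dut_step / ref_step: pure functions (state, stimulus) -> new state."""
    dut_state, ref_state = {}, {}
    for i, stim in enumerate(stimuli):
        dut_state = dut_step(dut_state, stim)
        ref_state = ref_step(ref_state, stim)
        if dut_state != ref_state:
            return i, dut_state, ref_state   # first divergence point
    return None

# Toy models: the reference accumulates its inputs; the "DUT" has an
# injected bug and mis-adds when the input is 3.
ref_step = lambda s, x: {"acc": s.get("acc", 0) + x}
dut_step = lambda s, x: {"acc": s.get("acc", 0) + (x + 1 if x == 3 else x)}

mismatch = co_simulate(dut_step, ref_step, [1, 2, 3, 4])   # diverges at index 2
```

Reporting the first divergence point rather than just pass/fail is what makes such checks useful for debugging: the faulty stimulus and both states are available immediately, which is also the trigger point a fault-analysis tool can snapshot.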
Two optimization technologies, namely bypass and carry-control optimization, were demonstrated for enhancing the performance of a bit-slice Arithmetic Logic Unit (ALU) in 2n-bit Rapid Single-Flux-Quantum (RSFQ) microprocessors. These technologies not only shorten the calculation time but also resolve data hazards. The proposed bypass technology is applicable to any 2n-bit ALU, whether bit-serial, bit-slice, or bit-parallel. The high-performance bit-slice ALU was implemented using the 6 kA/cm^(2) Nb/AlOx/Nb junction fabrication process of the Superconducting Electronics Facility of the Shanghai Institute of Microsystem and Information Technology. It consists of 1693 Josephson junctions in an area of 2.46 × 0.81 mm^(2). All ALU operations of the MIPS32 instruction set are implemented, including two extended instructions, addition with carry (ADDC) and subtraction with borrow (SUBB). All ALU operations were successfully verified in SFQ testing based on OCTOPUX, and the measured DC bias current margin reaches 86%–104%. The ALU achieves a 100% utilization rate, regardless of carry/borrow read-after-write dependences between instructions.
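The carry-handling idea behind such optimizations can be illustrated at the block level: when every bit position in a block propagates (x_i XOR y_i = 1 for all i), the block's carry-out equals its carry-in, so the carry can bypass the block instead of rippling through it. A hedged functional sketch (block width and the skip counter are illustrative; the actual RSFQ design is gate-level hardware, not software):

```python
def carry_skip_add(x, y, n=16, block=4):
    """Block-wise addition with carry-skip detection. Returns the n-bit
    sum and how many blocks the carry bypassed (a rough latency proxy)."""
    mask = (1 << block) - 1
    carry, skips, result = 0, 0, 0
    for lo in range(0, n, block):
        xs, ys = (x >> lo) & mask, (y >> lo) & mask
        if (xs ^ ys) == mask:       # every position propagates:
            skips += 1              # carry-in skips straight to carry-out
        total = xs + ys + carry
        result |= (total & mask) << lo
        carry = total >> block      # functionally identical either way;
                                    # the skip only shortens the critical path
    return result, skips

s, k = carry_skip_add(0x00FA, 0x0005)   # both low blocks fully propagate
```

The sum is unchanged by the skip logic; what the bypass buys in hardware is a shorter carry path through fully-propagating blocks, which is the latency saving the abstract refers to.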
Funding: State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences (CLQ202516); the Fundamental Research Funds for the Central Universities of China (3282025047, 3282024051, 3282024009).
Funding: supported by the National Natural Science Foundation of China under Grant U21A20449, and in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2021013-2.
Funding: the National Key R&D Program of China (No. 2018AAA0103300); the National Natural Science Foundation of China (Nos. 61925208, U20A20227, U22A2028); the Chinese Academy of Sciences Project for Young Scientists in Basic Research (No. YSBR-029); and the Youth Innovation Promotion Association, Chinese Academy of Sciences.
Abstract: With the increasing demand for computational power in artificial intelligence (AI) algorithms, dedicated accelerators have become a necessity. However, the complexity of hardware architectures, the vast design search space, and the complex tasks of accelerators have posed significant challenges. Traditional search methods can become prohibitively slow as the search space continues to expand. A design space exploration (DSE) method based on transfer learning is proposed, which reduces the time spent on repeated training and uses multi-task models for different tasks on the same processor. The proposed method accurately predicts the latency and energy consumption associated with neural network accelerator design parameters, enabling faster identification of optimal outcomes than traditional methods. Compared with other DSE methods using a multilayer perceptron (MLP), the required training time is also shorter. Comparative experiments demonstrate that the proposed method improves the efficiency of DSE without compromising the accuracy of the results.
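The transfer-learning idea can be illustrated in miniature: pre-train a predictor on one accelerator task, then fine-tune it briefly on a related task, which reaches low error far faster than training from scratch. The sketch below uses a toy linear latency model and plain gradient descent as assumptions; the actual method uses multi-task neural predictors:

```python
def gd_linear(xs, ys, w=0.0, b=0.0, lr=0.5, steps=500):
    # Plain gradient descent on mean squared error for y ~ w*x + b.
    n = len(xs)
    for _ in range(steps):
        gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        gb = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

def mse(w, b, xs, ys):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs = [i / 10 for i in range(11)]
latency_src = [3.0 * x + 1.0 for x in xs]   # hypothetical source-task data
latency_tgt = [3.2 * x + 0.9 for x in xs]   # related target task

w0, b0 = gd_linear(xs, latency_src)                            # pre-train
w_warm, b_warm = gd_linear(xs, latency_tgt, w0, b0, steps=20)  # fine-tune
w_cold, b_cold = gd_linear(xs, latency_tgt, steps=20)          # from scratch
```

Because the tasks share structure, 20 fine-tuning steps from the pre-trained weights beat 20 steps from scratch, which is the efficiency argument the abstract makes.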
Funding: Supported by the CAS Project for Young Scientists in Basic Research under Grant YSBR-035, and the Jiangsu Provincial Key Research and Development Program under Grant BE2021013-2.
Abstract: In covert communications, joint jammer selection and power optimization are important for improving performance. However, existing schemes usually assume a warden with a known location and perfect Channel State Information (CSI), which is difficult to achieve in practice. To be more practical, it is important to investigate covert communications against a warden with an uncertain location and imperfect CSI, which makes it difficult for legitimate transceivers to estimate the warden's detection probability. First, the uncertainty caused by the unknown warden location must be removed, so the Optimal Detection Position (OPTDP) of the warden is derived, which provides the best detection performance (i.e., the worst case for covert communication). Then, to further avoid the impractical assumption of perfect CSI, the covert throughput is maximized using only channel distribution information. Given this OPTDP-based worst case, the jammer selection, jamming power, transmission power, and transmission rate are jointly optimized to maximize the covert throughput (OPTDP-JP). To solve this coupled problem, a Heuristic algorithm based on the Maximum Distance Ratio (H-MAXDR) is proposed to provide a sub-optimal solution. First, according to the analysis of the covert throughput, the node with the maximum distance ratio (i.e., the ratio of the distances from the jammer to the receiver and to the warden) is selected as the friendly jammer (MAXDR). Then, the optimal transmission and jamming power can be derived, followed by the optimal transmission rate obtained via the bisection method. Numerical and simulation results show that although the location of the warden is unknown, by assuming the warden is at the OPTDP, the proposed OPTDP-JP can always satisfy the covertness constraint. In addition, with an uncertain warden and imperfect CSI, the covert throughput provided by OPTDP-JP is 80% higher than that of existing schemes when the covertness constraint is 0.9, showing the effectiveness of OPTDP-JP.
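Two of the building blocks above are simple enough to sketch: the MAXDR jammer-selection rule (pick the candidate whose distance to the receiver is largest relative to its distance to the assumed worst-case warden position) and the bisection search for the largest rate satisfying a monotone covertness constraint. The geometry and feasibility function below are illustrative stand-ins, not the paper's channel model:

```python
from math import hypot

def select_jammer(jammers, receiver, warden):
    # MAXDR rule: maximize d(jammer, receiver) / d(jammer, warden),
    # i.e. prefer a jammer far from the legitimate receiver but close
    # to the (worst-case) warden position.
    def ratio(j):
        return (hypot(j[0] - receiver[0], j[1] - receiver[1])
                / hypot(j[0] - warden[0], j[1] - warden[1]))
    return max(jammers, key=ratio)

def max_feasible_rate(covert_ok, lo=0.0, hi=10.0, tol=1e-6):
    # Bisection for the largest rate still satisfying a monotone
    # covertness constraint: covert_ok(r) is True below some threshold.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if covert_ok(mid):
            lo = mid
        else:
            hi = mid
    return lo
```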
Funding: Supported in part by the National Natural Science Foundation of China (62025404), in part by the National Key Research and Development Program of China (2022YFB3902802), in part by the Beijing Natural Science Foundation (L241013), and in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA000000).
Abstract: 1. Introduction. The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails the deployment of computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Funding: Supported by the National Natural Science Foundation of China under Grant U21A20449, and the Zhongguancun Project under Grant 23120035.
Abstract: Synthetic aperture radar (SAR) radio frequency identification (RFID) localization is widely used for automated guided vehicles (AGVs) in the Industrial Internet of Things (IIoT). However, AGV speeds are limited by the phase difference (PD) of two neighboring readers. In this paper, an inertial navigation system (INS) based SAR RFID localization method (ISRL) is proposed for AGVs that move nonlinearly. To relax the speed limitation, a new phase-unwrapping method based on the similarity of PDs (PU-SPD) is proposed to resolve PD ambiguity when the AGV speed exceeds 60 km/h. For localization, the Gauss-Newton algorithm (GN) is employed, and an initial value estimation scheme based on variable substitution (IVE-VS) is proposed to improve its positioning accuracy and convergence rate. Thus, ISRL is a combination of IVE-VS and GN. Moreover, the Cramér-Rao lower bound (CRLB) and the speed limitation are derived. Simulation results show that ISRL converges after two iterations, and the positioning accuracy reaches 7.50 cm at a phase noise level σ = 0.18, which is 35% better than hyperbolic unbiased estimation localization (HyUnb).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62325210 and 62272441), the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDB28000000), the National Natural Science Foundation of China (Grant Nos. 62372006 and 92365117), and the Fundamental Research Funds for the Central Universities, Peking University.
Abstract: The shadow tomography problem introduced by [1] is an important problem in quantum computing. Given an unknown n-qubit quantum state ρ, the goal is to estimate tr(F_1ρ), ..., tr(F_Mρ) to within an additive error of ε, using as few copies of ρ as possible, where F_1, ..., F_M are known two-outcome measurements. In this paper, we consider the shadow tomography problem with a potentially inaccurate prediction σ of the true state ρ. This corresponds to practical cases where we possess prior knowledge of the unknown state. For example, in quantum verification or calibration, we may know which quantum state the device is expected to generate, while the state it actually generates may deviate from it. We introduce an algorithm with sample complexity Õ(n·max{d, ε}·log²M/ε⁴), where d is the trace distance between the prediction and the unknown state. In the generic case, even if the prediction is arbitrarily bad, our algorithm has the same complexity as the best algorithm without prediction [2]. At the same time, as the prediction quality improves, the sample complexity reduces smoothly to Õ(n·log²M/ε³) when the trace distance between the prediction and the unknown state is O(ε). Furthermore, we conduct numerical experiments to validate our theoretical analysis. The experiments simulate noisy quantum circuits that reflect possible real scenarios in quantum verification or calibration. Notably, our algorithm outperforms the previous work without prediction in most settings.
Funding: Supported by the Jiangsu Provincial Key Research and Development Program (No. BE20210132), the Zhejiang Provincial Key Research and Development Program (No. 2021C01040), and the team of S-SET.
Abstract: Low-Earth-Orbit satellite constellation networks (LEO-SCN) can provide low-cost, large-scale, flexible-coverage wireless communication services. LEO-SCN are characterized by high dynamics and large topological sizes. Protocol development and application testing for LEO-SCN are challenging to carry out in a natural environment, so simulation platforms are a more effective means of technology demonstration. Currently available simulators have a single function and limited simulation scale; a full-featured simulator is still lacking. In this paper, we apply the parallel discrete-event simulation technique to LEO-SCN to support large-scale simulation of complex systems at the packet level. To address the problem that single-process programs cannot cope with complex simulations containing numerous entities, we propose a parallel mechanism and the synchronization algorithms LP-NM and LP-YAWNS. In the experiments, we use ns-3 to verify the speedup and efficiency of these algorithms. The results show that the proposed mechanism can provide parallel simulation engine support for LEO-SCN.
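Although LP-NM and LP-YAWNS themselves are not spelled out in the abstract, the classic YAWNS-style conservative synchronization they build on is easy to sketch: in each round, every logical process (LP) may safely execute all events earlier than the global lower bound on future timestamps (minimum next event time plus lookahead), since no LP can send a message arriving before that bound. A toy single-round version, with per-LP event queues reduced to timestamps:

```python
import heapq

def yawns_step(queues, lookahead):
    # One synchronization round of a YAWNS-style conservative PDES engine.
    # queues: one min-heap of pending event timestamps per logical process.
    nexts = [q[0] for q in queues if q]
    if not nexts:
        return []
    # LBTS: no future message can arrive before this bound.
    lbts = min(nexts) + lookahead
    safe = []
    for q in queues:
        while q and q[0] < lbts:
            safe.append(heapq.heappop(q))
    return sorted(safe)
```

With queues [1, 5] and [2, 9] and a lookahead of 3, the bound is 4, so only the events at times 1 and 2 are processed this round; the rest wait for the next window.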
Funding: Supported by grants from the National Key Research and Development Program of China (Nos. 2024YFA1108302 and 2021YFA1101400), the Strategic Priority Research Program of CAS (No. XDA 0460205), the Open Project of the Key Laboratory of Organ Regeneration and Intelligent Manufacturing (No. 2024KF31), and the Basic Frontier Science Research Program of CAS (No. ZDBS-LY-SM024).
Abstract: The high failure rates in clinical drug development based on animal models highlight the urgent need for more representative human models in biomedical research. In response to this demand, organoids and organ chips have been integrated for greater physiological relevance and dynamic, controlled experimental conditions. This innovative platform, the organoids-on-a-chip technology, shows great promise in disease modeling, drug discovery, and personalized medicine, attracting interest from researchers, clinicians, regulatory authorities, and industry stakeholders. This review traces the evolution from organoids to organoids-on-a-chip, driven by the necessity for advanced biological models. We summarize the applications of organoids-on-a-chip in simulating physiological and pathological phenotypes and in therapeutic evaluation. This section highlights how integrating technologies from organ chips, such as microfluidic systems, mechanical stimulation, and sensor integration, optimizes organoid cell types, spatial structure, and physiological functions, thereby expanding their biomedical applications. We conclude by addressing the current challenges in the development of organoids-on-a-chip and offering insights into its prospects. The advancement of organoids-on-a-chip is poised to enhance fidelity, standardization, and scalability. Furthermore, the integration of cutting-edge technologies and interdisciplinary collaboration will be crucial for its progression.
Abstract: A novel quantum search algorithm tailored for continuous optimization and spectral problems was recently proposed by a research team from the University of Electronic Science and Technology of China, broadening the frontiers of quantum computation and enriching its application landscape. Quantum computing has traditionally excelled at discrete search challenges, but many important applications, from large-scale optimization to advanced physics simulations, require searching through continuous domains. These continuous search problems involve uncountably infinite solution spaces and bring computational complexities far beyond those of conventional discrete settings. The work, titled "Fixed-Point Quantum Continuous Search Algorithm with Optimal Query Complexity", takes on the core challenge of performing search tasks in domains that may be uncountably infinite, offering theoretical and practical insights into achieving quantum speedups in such settings [1].
Funding: Partially supported by the National Natural Science Foundation of China (Nos. U22A2028, 61925208, 62102399, 62302478, 62302483, 62222214, 62372436, 62302482 and 62302480), the CAS Project for Young Scientists in Basic Research, China (No. YSBR-029), the Youth Innovation Promotion Association CAS, and the Xplore Prize, China.
Abstract: In the field of natural language processing, the rapid development of large language models (LLMs) has attracted increasing attention. LLMs have shown a high level of creativity in various tasks, but the methods for assessing such creativity are inadequate. Assessing LLM creativity needs to account for differences from humans, requiring measurement along multiple dimensions while balancing accuracy and efficiency. This paper aims to establish an efficient framework for assessing the level of creativity in LLMs. By adapting the modified Torrance Tests of Creative Thinking, the research evaluates the creative performance of various LLMs across 7 tasks, emphasizing 4 criteria: fluency, flexibility, originality, and elaboration. In this context, we develop a comprehensive dataset of 700 test questions and an LLM-based evaluation method. This study also presents a novel analysis of LLMs' responses to diverse prompts and role-play situations. We found that the creativity of LLMs falls short primarily in originality, while excelling in elaboration. The prompts and role-play settings of the model also significantly influence creativity, and the experimental results further indicate that collaboration among multiple LLMs can enhance originality. Notably, our findings reveal a consensus between human evaluations and LLMs regarding the personality traits that influence creativity. These findings underscore the significant impact of LLM design on creativity and bridge artificial intelligence and human creativity, offering insights into LLMs' creativity and potential applications.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 62325210 and 62272441).
Abstract: Submodular maximization is a significant area of interest in combinatorial optimization, with various real-world applications. In recent years, streaming algorithms for submodular maximization have gained attention, allowing real-time processing of large data sets by examining each piece of data only once. However, most state-of-the-art algorithms are only applicable to monotone submodular maximization, and significant gaps remain between the approximation ratios for monotone and non-monotone objective functions. In this paper, we propose a streaming algorithm framework for non-monotone submodular maximization and use this framework to design deterministic streaming algorithms for the d-knapsack constraint and the knapsack constraint. Our 1-pass streaming algorithm for the d-knapsack constraint has a 1/(4(d+1)) − ε approximation ratio, using O(B log B/ε) memory and O(log B/ε) query time per element, where B = min(n, b) is the maximum number of elements that the knapsack can store. As a special case of the d-knapsack constraint, we obtain a 1-pass streaming algorithm with a 1/8 − ε approximation ratio for the knapsack constraint. To our knowledge, there is currently no streaming algorithm for this constraint when the objective function is non-monotone, even when d = 1. In addition, we propose a multi-pass streaming algorithm with a 1/6 − ε approximation that stores O(B) elements.
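The flavor of such 1-pass algorithms can be conveyed with a generic density-threshold rule: keep an arriving element if its marginal gain per unit cost clears a threshold τ and it fits the remaining budget. This sketch illustrates only the streaming pattern; the paper's framework, threshold choice, and non-monotone guarantees are more involved:

```python
def stream_knapsack(stream, cost, gain, budget, tau):
    # One pass over the stream: accept element e if its marginal density
    # gain(S, e)/cost(e) is at least tau and the budget still has room.
    S, used, value = [], 0, 0.0
    for e in stream:
        g, c = gain(S, e), cost(e)
        if used + c <= budget and g >= tau * c:
            S.append(e)
            used += c
            value += g
    return S, value

def coverage_gain(S, e):
    # Marginal gain of a (monotone) set-coverage objective, for illustration.
    covered = set().union(*S) if S else set()
    return len(covered | e) - len(covered)

S, value = stream_knapsack(
    [{1, 2}, {2}, {3, 4, 5}, {1}],
    cost=lambda e: 1, gain=coverage_gain, budget=2, tau=2)
```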
Funding: Funded by the Science and Technology Innovation Key R&D Program of Chongqing (No. CSTB-2022TIAD-STX0008), the Natural Science Foundation of China (Nos. 62402473 and 62271465), and the Suzhou Basic Research Program (No. SYG202338).
Abstract: With the rapid development of artificial intelligence, computational pathology has been seamlessly integrated into the entire clinical workflow, which encompasses diagnosis, treatment, prognosis, and biomarker discovery. This integration has significantly enhanced clinical accuracy and efficiency while reducing the workload for clinicians. Traditionally, research in this field has depended on the collection and labeling of large datasets for specific tasks, followed by the development of task-specific computational pathology models. However, this approach is labor-intensive and does not scale efficiently for open-set identification or rare diseases. Given the diversity of clinical tasks, training individual models from scratch to address the whole spectrum of clinical tasks in the pathology workflow is impractical, which highlights the urgent need to transition from task-specific models to foundation models (FMs). In recent years, pathological FMs have proliferated. These FMs can be classified into three categories, namely pathology image FMs, pathology image-text FMs, and pathology image-gene FMs, each with distinct functionalities and application scenarios. This review provides an overview of the latest research advancements in pathological FMs, with a particular emphasis on their applications in oncology. The key challenges and opportunities presented by pathological FMs in precision oncology are also explored.
Funding: Supported by the Youth Talent Lifting Project (Grant No. 2020-JCJQ-QT-030), the National Natural Science Foundation of China (Grant Nos. 11905294 and 12274464), the China Postdoctoral Science Foundation, and the Open Research Fund from the State Key Laboratory of High Performance Computing of China (Grant No. 201901-01); the National Natural Science Foundation of China (Grant Nos. 11805279, 12074117, 61833010, and 12061131011); the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB28000000); the National Natural Science Foundation of China (Grant Nos. 61832003, 61872334, and 61801459); the National Natural Science Foundation of China (Grant No. 12005015); the National Natural Science Foundation of China (Grant Nos. 11974205 and 11774197); the National Key Research and Development Program of China (Grant No. 2017YFA0303700); and the Key Research and Development Program of Guangdong Province (Grant No. 2018B030325002).
Abstract: Quantum computing is a game-changing technology for global academia, research centers, and industries including computational science, mathematics, finance, pharmaceuticals, materials science, chemistry, and cryptography. Although it has seen a major boost in the last decade, we are still a long way from reaching the maturity of a full-fledged quantum computer. That said, we will be in the noisy intermediate-scale quantum (NISQ) era for a long time, working on quantum computing systems of dozens or even thousands of qubits. An outstanding challenge, then, is to come up with an application that can reliably carry out a nontrivial task of interest on near-term quantum devices with non-negligible quantum noise. To address this challenge, several near-term quantum computing techniques, including variational quantum algorithms, error mitigation, quantum circuit compilation, and benchmarking protocols, have been proposed to characterize and mitigate errors and to implement algorithms with a certain resistance to noise, so as to enhance the capabilities of near-term quantum devices and explore the boundaries of their ability to realize useful applications. Besides, the development of near-term quantum devices is inseparable from efficient classical simulation, which plays a vital role in quantum algorithm design and verification, error-tolerant verification, and other applications. This review provides a thorough introduction to these near-term quantum computing techniques, reports on their progress, and finally discusses their future prospects, which we hope will motivate researchers to undertake additional studies in this field.
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 41930110, 61872272 and 61640221).
Abstract: 1. Introduction. The lifetime of wireless sensor networks (WSNs) is restricted by the limited energy of battery-powered sensor devices, making lifetime extension a critical problem in real applications [1]. Several aspects have been examined in previous works to extend the lifetime of WSNs, such as deployment position, network routing strategy, and sensing range adjustment. Given that there are often many redundant sensors, a practical way to extend the lifetime of a WSN is to partition the sensors into subsets, each of which can cover all the targets [2]. The sets are then activated one by one, extending the lifetime of the WSN to k times the battery lifetime of a single sensor. The problem of finding the maximal k is abstracted as the set k-cover problem.
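The partition-and-rotate idea above can be sketched with a greedy heuristic: repeatedly build a cover by picking the unused sensor that covers the most still-uncovered targets, and stop when no further complete cover can be formed. This is an illustrative heuristic for the (NP-hard) set k-cover problem, not an optimal algorithm:

```python
def extract_covers(sensors, targets):
    # Greedily partition sensors into disjoint covers. Each round builds
    # one cover; sensors used in a cover are removed from the pool, and
    # the number of covers found approximates the achievable k.
    unused = dict(sensors)          # sensor name -> set of covered targets
    covers = []
    while True:
        uncovered, cover, pool = set(targets), [], dict(unused)
        while uncovered:
            best = max(pool, key=lambda s: len(pool[s] & uncovered),
                       default=None)
            if best is None or not (pool[best] & uncovered):
                return covers       # cannot complete another cover
            cover.append(best)
            uncovered -= pool.pop(best)
        for s in cover:
            del unused[s]
        covers.append(cover)
```

Four sensors watching three targets can yield two disjoint covers, doubling the network lifetime relative to keeping every sensor always on.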
Funding: Supported in part by the Youth Innovation Promotion Association, Chinese Academy of Sciences; in part by the Beijing Municipal Science & Technology Program under grant Z221100007722014; and in part by the Zhejiang Key Research Program under grant 2021C01040.
Abstract: Dear Editor, Short-range wireless communications have been widely used in our daily life, but the pursuit of a better communication experience never stops, leading to more stringent requirements from emerging applications.^(1) For example, remote control applications such as telesurgery require a delay of less than 1 ms,^(2) and industrial closed-loop control applications such as automatic assembly lines have a reliability requirement of at least 99.999%.
Funding: This research was supported by the National Key R&D Program of China (2022YFB3103900), the National Natural Science Foundation of China (62032010, 62202462), and the Strategic Priority Research Program of the CAS (XDC02030200).
Abstract: Mutation-based greybox fuzzing has been one of the most prevalent techniques for security vulnerability discovery, and a great deal of research work has been proposed to improve both its efficiency and effectiveness. Mutation-based greybox fuzzing generates input cases by mutating the input seed, i.e., applying a sequence of mutation operators to randomly selected mutation positions of the seed. However, existing research focuses on scheduling mutation operators, leaving the scheduling of mutation positions as an overlooked aspect of fuzzing efficiency. This paper proposes a novel greybox fuzzing method, PosFuzz, that statistically schedules mutation positions based on their historical performance. PosFuzz makes use of the concept of an effective position distribution to represent the semantics of the input and to guide the mutations. PosFuzz first utilizes Good-Turing frequency estimation to calculate an effective position distribution for each mutation operator. It then leverages two sampling methods in different mutating stages to select positions from the distribution. We have implemented PosFuzz on top of AFL, AFLFast, and MOPT, called Pos-AFL, Pos-AFLFast, and Pos-MOPT respectively, and evaluated them on the UNIFUZZ benchmark (20 widely used open-source programs) and the LAVA-M dataset. The results show that, under the same testing time budget, Pos-AFL, Pos-AFLFast, and Pos-MOPT outperform their counterparts in code coverage and vulnerability discovery ability. Compared with AFL, AFLFast, and MOPT, PosFuzz achieves 21% more edge coverage and finds 133% more paths on average. It also triggers 275% more unique bugs on average.
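The Good-Turing estimator that PosFuzz builds on rests on a simple rule: the total probability mass of outcomes never yet observed is estimated as N1/N, the fraction of observations that occurred exactly once. A minimal sketch of that estimator (the full effective-position distribution in PosFuzz involves more machinery than this):

```python
from collections import Counter

def good_turing_unseen_mass(observations):
    # Good-Turing estimate of the probability mass of unseen outcomes:
    # P0 = N1 / N, where N1 is the number of outcomes seen exactly once
    # and N is the total number of observations.
    counts = Counter(observations)
    n = len(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n if n else 1.0
```

Intuitively, if many positions have paid off only once so far, the fuzzer should still reserve substantial probability for positions it has never tried.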
Funding: Supported by the National Key R&D Program of China (grant no. 2022YFA1004300); the National Natural Science Foundation of China (grant no. 12122103); the National Key Research and Development Project of China (grant no. 2022YFA1004302); the National Natural Science Foundation of China (grant nos. 92270001 and 12288101); the National Institutes of Health (grant no. GM107485 to D.M.Y.); the National Science Foundation (grant no. 2209718 to D.M.Y.); the Natural Science Foundation of Zhejiang Province (grant no. 2022XHSJJ006); the National Natural Science Foundation of China (grant nos. 22222303 and 22173032); the National Key R&D Program of China (grant nos. 2021YFA0718900 and 2022YFA1403000); the National Natural Science Foundation of China (grant nos. 12034009 and 91961204); the National Science Fund for Distinguished Young Scholars (grant no. 22225302); the Laboratory of AI for Electrochemistry (AI4EC) and IKKEM (grant nos. RD2023100101 and RD2022070501); and the National Natural Science Foundation of China (grant nos. 12122401, 12074007, and 12135002). Lastly, the computational resource was supported by the Bohrium Cloud Platform at DP Technology and the Tan Kah Kee Supercomputing Center (IKKEM).
Abstract: The rapid advancements in artificial intelligence (AI) are catalyzing transformative changes in atomic modeling, simulation, and design. AI-driven potential energy models have demonstrated the capability to conduct large-scale, long-duration simulations with the accuracy of ab initio electronic structure methods. However, the model generation process remains a bottleneck for large-scale applications. We propose a shift towards a model-centric ecosystem, wherein a large atomic model (LAM), pretrained across multiple disciplines, can be efficiently fine-tuned and distilled for various downstream tasks, thereby establishing a new framework for molecular modeling. In this study, we introduce the DPA-2 architecture as a prototype for LAMs. Pre-trained on a diverse array of chemical and materials systems using a multi-task approach, DPA-2 demonstrates superior generalization capabilities across multiple downstream tasks compared with traditional single-task pre-training and fine-tuning methodologies. Our approach sets the stage for the development and broad application of LAMs in molecular and materials simulation research.
Funding: Supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS) under Grant No. XDC05030200, the National Key Research and Development Program of China under Grant No. 2022YFB4500403, the National Natural Science Foundation of China under Grant Nos. 62090022 and 62172388, the Youth Innovation Promotion Association of the Chinese Academy of Sciences under Grant No. 2020105, and Innovation Grant No. E261100 from the Institute of Computing Technology, Chinese Academy of Sciences.
Abstract: Agile hardware development methodology has been widely adopted over the past decade. Despite the research progress, the industry still doubts its applicability, especially for the functional verification of complicated processor chips. Functional verification commonly employs a simulation-based method of co-simulating the design under test with a reference model and checking the consistency of their outcomes given the same input stimuli. We observe limited collaboration and information exchange through the design and verification processes, leading to dramatic inefficiencies when applying the conventional functional verification workflow to agile development. In this paper, we propose workflow integration, with collaborative task delegation and dynamic information exchange as the design principles, to effectively address the challenges of functional verification under the agile development model. Based on workflow integration, we enhance the functional verification workflows with a series of novel methodologies and toolchains. The diff-rule based agile verification methodology (DRAV) reduces the overhead of building reference models with runtime execution information from designs under test. We present the RISC-V implementation of DRAV, DiffTest, which adopts information probes to extract internal design behaviors for co-simulation and debugging. It further integrates two plugins, namely XFUZZ for effective test generation guided by design coverage metrics, and LightSSS for efficient fault analysis triggered by co-simulation mismatches. We present the integrated workflows for agile hardware development and demonstrate their effectiveness in designing and verifying RISC-V processors, with 33 functional bugs found in NutShell. We also illustrate the efficiency of the proposed toolchains with a case study on a functional bug in the L2 cache of XiangShan.
基金Strategic Priority Research Program of Chinese Academy of Sciences,under Grant XDA18000000.
Abstract: Two optimization technologies, namely bypass and carry-control optimization, were demonstrated for enhancing the performance of a bit-slice Arithmetic Logic Unit (ALU) in 2n-bit Rapid Single-Flux-Quantum (RSFQ) microprocessors. These technologies not only shorten the calculation time but also resolve data hazards. The proposed bypass technology is applicable to any 2n-bit ALU, whether bit-serial, bit-slice, or bit-parallel. The high-performance bit-slice ALU was implemented using the 6 kA/cm^(2) Nb/AlOx/Nb junction fabrication process of the Superconducting Electronics Facility at the Shanghai Institute of Microsystem and Information Technology. It consists of 1693 Josephson junctions in an area of 2.46 × 0.81 mm^(2). All ALU operations of the MIPS32 instruction set are implemented, including two extended instructions, i.e., addition with carry (ADDC) and subtraction with borrow (SUBB). All ALU operations were successfully verified in SFQ testing based on OCTOPUX, and the measured DC bias current margin reaches 86%-104%. The ALU achieves a 100% utilization rate, regardless of carry/borrow read-after-write dependences between instructions.