In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
Funding: the Natural Science Foundation of Henan Province (No. 232300421097); the Program for Science & Technology Innovation Talents in Universities of Henan Province (Nos. 23HASTIT019, 24HASTIT038); the China Postdoctoral Science Foundation (Nos. 2023T160596, 2023M733251); the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11); the Song Shan Laboratory Foundation (No. YYJC022022003).
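The Dinkelbach step above is the pivotal transformation. Below is a minimal sketch of the general method for a fractional program max f(x)/g(x) with g(x) > 0: solve the parametric problem max f(x) − λg(x), update λ with the ratio achieved at the current iterate, and stop when the parametric optimum reaches zero. The toy objective, bounds, and tolerance are illustrative assumptions, not the paper's SCE formulation.

```python
# Minimal Dinkelbach iteration for max f(x)/g(x) with g(x) > 0.
import numpy as np
from scipy.optimize import minimize

def dinkelbach(f, g, x0, bounds, tol=1e-8, max_iter=50):
    x, lam = np.asarray(x0, float), 0.0
    for _ in range(max_iter):
        # Inner problem: maximize f(x) - lam * g(x) (minimize the negative).
        res = minimize(lambda x: -(f(x) - lam * g(x)), x, bounds=bounds)
        x = res.x
        val = f(x) - lam * g(x)
        lam = f(x) / g(x)          # update the ratio estimate
        if abs(val) < tol:         # F(lam) ~ 0  =>  lam is the optimal ratio
            break
    return x, lam

# Toy example: maximize (1 + log(1 + x)) / (0.5 + x) over x in [0, 10].
f = lambda x: 1.0 + np.log1p(x[0])
g = lambda x: 0.5 + x[0]
x_opt, ratio = dinkelbach(f, g, [1.0], [(0.0, 10.0)])
print(x_opt, ratio)
```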
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, making it a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11974154 and 12304278); the Taishan Scholars Special Funding for Construction Projects (Grant No. tstp20230622); the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022MA004, ZR2023QA127, and ZR2024QA121); the Special Foundation of Yantai for Leading Talents above Provincial Level.
Huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed during the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material element, for both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve the computational efficiency, (2) overcome the convergence difficulty that AHT-based NTO may encounter, especially for 3D problems, and (3) successfully cope with high-resolution 3D complex NTO problems on a personal computer.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11902085 and 11832009); the Science and Technology Association Young Scientific and Technological Talents Support Project of Guangzhou City (Grant No. SKX20210304); the Natural Science Foundation of Guangdong Province (Grant No. 2021Al515010320).
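For readers unfamiliar with the SIMP framework named above, a minimal sketch of its material interpolation follows: element stiffness scales with density raised to a penalization power, which drives intermediate densities toward 0 or 1. The modulus values and exponent are conventional illustrative choices, not the paper's settings.

```python
# SIMP material interpolation: stiffness ~ density^p penalizes gray elements.
import numpy as np

def simp_young_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """Interpolated Young's modulus for element densities rho in [0, 1]."""
    rho = np.clip(rho, 0.0, 1.0)
    return Emin + rho**p * (E0 - Emin)

rho = np.array([0.0, 0.3, 0.5, 1.0])
print(simp_young_modulus(rho))  # intermediate densities contribute little stiffness
```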
As a connection between the process and the circuit design, device models are in great demand for emerging devices such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is presented, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented in the HSPICE circuit simulator in Verilog-A for practical application.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60876027); the National Science Foundation for Distinguished Young Scholars of China (Grant No. 60925015); the National Basic Research Program of China (Grant No. 2011CBA00600); the Fundamental Research Project of Shenzhen Science & Technology Foundation, China (Grant No. JC200903160353A).
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
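As one concrete instance of hypergraph computation, the sketch below implements the common HGNN-style hypergraph convolution X' = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Θ in plain numpy; in practice the DHG library mentioned above provides such layers. The toy incidence matrix and feature sizes are assumptions.

```python
# One HGNN-style hypergraph convolution layer in plain numpy.
import numpy as np

def hypergraph_conv(X, H, Theta, w=None):
    """X: (n, c) node features; H: (n, m) incidence matrix; Theta: (c, c')."""
    w = np.ones(H.shape[1]) if w is None else w
    Dv = (H * w).sum(axis=1)                     # weighted node degrees
    De = H.sum(axis=0)                           # hyperedge degrees
    norm = np.diag(Dv ** -0.5)
    A = norm @ H @ np.diag(w) @ np.diag(1.0 / De) @ H.T @ norm
    return A @ X @ Theta

# Toy hypergraph: 6 nodes, 3 hyperedges (columns), each joining 2-3 nodes.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
rng = np.random.default_rng(0)
X, Theta = rng.standard_normal((6, 4)), rng.standard_normal((4, 2))
print(hypergraph_conv(X, H, Theta).shape)        # -> (6, 2)
```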
Contact detection is the most time-consuming stage in 3D discontinuous deformation analysis (3D-DDA) computation. Improving the efficiency of 3D-DDA is beneficial for its application in large-scale computing. In this study, aiming at the continuous-discontinuous simulation of 3D-DDA, a highly efficient contact detection strategy is proposed. First, the global direct search (GDS) method is integrated into the 3D-DDA framework to address intricate contact scenarios. Subsequently, all geometric elements, including blocks, faces, edges, and vertices, are divided into searchable and unsearchable parts. Contacts between unsearchable geometric elements are directly inherited, while only searchable geometric elements are involved in contact detection. This strategy significantly reduces the number of geometric elements involved in contact detection, thereby markedly enhancing the computational efficiency. Several examples demonstrate the accuracy and efficiency of the improved 3D-DDA method. Rock pillars with different mesh sizes are simulated under self-weight; the deformation and stress are consistent with the analytical results, and the smaller the mesh size, the higher the accuracy. The maximum speedup ratio is 38.46 for this case. Furthermore, the Brazilian splitting test on discs with different flaws is conducted. The results show that the failure pattern of the samples is consistent with the results obtained by other methods and experiments, and the maximum speedup ratio is 266.73. Finally, a large-scale impact test is performed, and approximately 3.2 times higher efficiency is obtained. The proposed contact detection strategy significantly improves efficiency when the rock has not completely failed, which makes it especially suitable for continuous-discontinuous simulation.
Funding: financially supported by the National Key R&D Program of China (Grant No. 2023YFC3081200) and the National Natural Science Foundation of China (Grant Nos. U21A20159 and 52179117).
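A minimal sketch of the inherit-versus-search idea may clarify the strategy: contacts whose two elements are both unsearchable are carried over from the previous step, and only pairs involving a searchable element are re-detected. The data layout and the flagging criterion are illustrative assumptions, not the paper's implementation.

```python
# Inherit contacts between unsearchable elements; re-detect the rest.
def update_contacts(elements, prev_contacts, detect_pair):
    """elements: dicts with 'id' and a 'searchable' flag; detect_pair -> contact or None."""
    unsearchable = {e["id"] for e in elements if not e["searchable"]}
    # Contacts between two unsearchable elements are inherited directly.
    contacts = [c for c in prev_contacts
                if c[0] in unsearchable and c[1] in unsearchable]
    # Only pairs involving at least one searchable element are re-detected.
    for i, a in enumerate(elements):
        for b in elements[i + 1:]:
            if a["id"] in unsearchable and b["id"] in unsearchable:
                continue
            hit = detect_pair(a, b)
            if hit is not None:
                contacts.append(hit)
    return contacts

# Tiny demo with a fake geometric test: ids 0-1 static, 2 moving.
elements = [{"id": 0, "searchable": False},
            {"id": 1, "searchable": False},
            {"id": 2, "searchable": True}]
detect = lambda a, b: (a["id"], b["id"])            # pretend every candidate pair touches
print(update_contacts(elements, [(0, 1)], detect))  # (0,1) inherited; (0,2), (1,2) detected
```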
In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. The convergence properties and computational efficiency of these new methods are analyzed and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
Funding: supported by the National Natural Science Foundation of China (12061048) and the NSF of Jiangxi Province (20232BAB201026, 20232BAB201018).
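The single-inversion idea can be sketched as follows: factor the Jacobian once per iteration and reuse the factors for several substeps, so extra order costs only extra function evaluations and triangular solves. This two-substep frozen-Jacobian variant is illustrative, not the paper's specific scheme.

```python
# Frozen-Jacobian multi-step iteration: one LU factorization per iteration.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_jacobian_step(F, J, x):
    lu = lu_factor(J(x))              # the only matrix factorization this iteration
    y = x - lu_solve(lu, F(x))        # Newton substep
    z = y - lu_solve(lu, F(y))        # extra substep reusing the same factors
    return z

# Toy system: F(x) = (x0^2 + x1 - 2, x0 + x1^2 - 2), root at (1, 1).
F = lambda x: np.array([x[0]**2 + x[1] - 2, x[0] + x[1]**2 - 2])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
x = np.array([2.0, 0.5])
for _ in range(6):
    x = frozen_jacobian_step(F, J, x)
print(x)  # -> approximately [1, 1]
```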
Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of the Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002]. DOI: 10.1016/j.jai.2022.100002.
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, offering the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
As data becomes increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power performance. We prove the equivalence between testing for independence expressed on the original and the projected covariates, bridging the gap between measures of testing independence in Euclidean spaces and Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite sample properties of the proposed method under a variety of scenarios.
Funding: supported by the Grant of the National Science Foundation of China (11971433); the Zhejiang Gongshang University "Digital+" Disciplinary Construction Management Project (SZJ2022B004); the Institute for International People-to-People Exchange in Artificial Intelligence and Advanced Manufacturing (CCIPERGZN202439); the Development Fund for Zhejiang College of Shanghai University of Finance and Economics (2023FZJJ15).
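A minimal sketch of the projection-based test: discretized curves are projected onto random directions, the sample distance covariance of the scalar projections forms the statistic, each projection's p-value is calibrated by permutation, and the p-values are combined in the spirit of the false discovery rate method. The projection law, permutation count, and combination rule follow common practice and are not necessarily the paper's exact choices.

```python
# Random-projection distance covariance test with permutation calibration.
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance of 1-D samples x, y."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

def projection_test(X, Y, n_proj=10, n_perm=99, seed=0):
    """X, Y: (n, p) discretized curves. Returns min of BH-adjusted p-values."""
    rng = np.random.default_rng(seed)
    n, pvals = len(X), []
    for _ in range(n_proj):
        u = rng.standard_normal(X.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(Y.shape[1]); v /= np.linalg.norm(v)
        x, y = X @ u, Y @ v
        stat = dcov2(x, y)
        null = [dcov2(x, y[rng.permutation(n)]) for _ in range(n_perm)]
        pvals.append((1 + sum(s >= stat for s in null)) / (n_perm + 1))
    p = np.sort(pvals)                     # Benjamini-Hochberg-style combination
    return np.min(p * len(p) / (np.arange(len(p)) + 1))

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
Z = rng.standard_normal((100, 1))
X = Z * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal((100, 50))
Y = Z * np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal((100, 50))  # dependent via Z
print(projection_test(X, Y))   # small value -> evidence against independence
```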
Improving the computational efficiency of multi-physics simulation and constructing a real-time online simulation method is an important way to realise the virtual-real fusion of the entities and data of power equipment with a digital twin. In this paper, a data-driven fast calculation method for the temperature field of the resin impregnated paper (RIP) bushing used on the converter transformer valve side is proposed, which combines data dimensionality reduction technology with a surrogate model. After applying the finite element algorithm to obtain the temperature field distribution of the RIP bushing under different operating conditions as the input dataset, the proper orthogonal decomposition (POD) algorithm is adopted to reduce the order and obtain a low-dimensional projection of the temperature data. On this basis, the surrogate model is used to construct the mapping relationship between the sensor monitoring data and the low-dimensional projection, so that fast calculation and reconstruction of the temperature field distribution can be achieved. The results show that this method can effectively and quickly calculate the overall temperature field distribution of the RIP bushing. The maximum relative error and the average relative error are less than 4.5% and 0.25%, respectively. The calculation speed is at the millisecond level, meeting the needs of the digitalisation of power equipment.
Funding: supported by the China Postdoctoral Science Foundation (Grant 2024M753544) and the Science and Technology Project of CSG (Grant GDKJXM2022106).
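A minimal sketch of the POD-plus-surrogate pipeline: an SVD of snapshot data yields a low-dimensional basis, a surrogate maps a few sensor readings to the POD coefficients, and the full field is reconstructed from those coefficients. The synthetic snapshots, rank, sensor count, and the ridge-regression surrogate are stand-ins for the paper's finite element data and surrogate model.

```python
# POD via SVD + a surrogate from sensor readings to POD coefficients.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_nodes, n_snap, r = 2000, 200, 5
T = rng.standard_normal((n_snap, r)) @ rng.standard_normal((r, n_nodes))  # rank-r snapshots

T_mean = T.mean(axis=0)
U, S, Vt = np.linalg.svd(T - T_mean, full_matrices=False)
basis = Vt[:r].T                                # (n_nodes, r) spatial POD modes
coeffs = (T - T_mean) @ basis                   # low-dimensional projections

sensors = rng.choice(n_nodes, size=8, replace=False)     # monitored locations
surrogate = Ridge(alpha=1e-6).fit(T[:150, sensors], coeffs[:150])

T_hat = surrogate.predict(T[150:, sensors]) @ basis.T + T_mean
print("max abs reconstruction error:", np.abs(T_hat - T[150:]).max())
```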
Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead of authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse-mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
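The point multiplication that the scheme above works to avoid is the dominant cost in ECC protocols. For orientation, here is a textbook double-and-add on a toy short-Weierstrass curve over a small prime field; it is not the paper's mapping scheme and is not production-safe.

```python
# Double-and-add scalar multiplication on y^2 = x^3 + ax + b over F_p.
P_MOD, A, B = 97, 2, 3                      # tiny illustrative curve over F_97

def ec_add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                         # point at infinity
    if p == q:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return x3, (m * (x1 - x3) - y1) % P_MOD

def scalar_mult(k, p):
    """Double-and-add: O(log k) group operations."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, p)
        p = ec_add(p, p)
        k >>= 1
    return acc

G = (3, 6)                                  # on the curve: 3^3 + 2*3 + 3 = 36 = 6^2 mod 97
print(scalar_mult(5, G))
```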
The growing demand for deployable phased-array antennas in space applications requires innovative solutions that optimize the folded configurations and reduce the computational complexity. Existing methods face limitations due to the low efficiency of traditional algorithms and the lack of effective constraint strategies, resulting in excessive solution spaces. This study proposes forward Shannon entropy wave function collapse (FSE-WFC), a novel method for designing panel configurations of one-dimensional deployable phased-array antennas using the wave function collapse algorithm. It addresses two key challenges: the excessive number of panel layout options and high computational costs. First, the method analyzes the relationship between the panel connection positions and the folded form to impose constraints on the panel combinations. It then calculates the information entropy of the potential configurations to identify low-entropy solutions, thereby narrowing the solution space. Finally, boundary constraints and interference checks are applied to refine the results. This approach significantly reduces the calculation time while improving the folding state and envelope volume of the antenna. The results show that the FSE-WFC algorithm reduces the envelope area by 18.3% for a 350 mm high satellite and 9.0% for a 600 mm high satellite, while satisfying the connectivity constraints. As the first application of the wave function collapse algorithm to antenna folding design, this study introduces an information entropy-based constraint generation method that provides an efficient solution for deployable antenna optimization.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52105035, 62203094); Special Central Funds for Guiding Local Scientific and Technological Development (Grant No. 236Z1801G); the Higher Education Youth Top Talent Project of Hebei Province of China (Grant No. BJK2024042); the Natural Science Foundation of Hebei Province of China (Grant Nos. E2021203109, F2023501021); the Graduate Student Innovation Capability Training and Support Project of Hebei Province (Grant No. CXZZBS2024053).
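A minimal sketch of the entropy-guided collapse loop at the heart of such methods, on a one-dimensional strip of panel slots: each slot keeps a set of candidate panel types, the lowest-entropy undecided slot is collapsed first, and pairwise compatibility constraints are propagated. The panel types and compatibility table are toy assumptions, not the paper's hinge and connection rules.

```python
# Entropy-guided wave function collapse on a 1-D strip of panel slots.
import math, random

TYPES = ["flat", "hinge_up", "hinge_down"]
COMPAT = {("flat", "hinge_up"), ("hinge_up", "hinge_down"),
          ("hinge_down", "flat"), ("flat", "flat")}   # allowed left->right pairs

def propagate(cells):
    """Prune candidates until every adjacent pair is compatible (arc consistency)."""
    changed = True
    while changed:
        changed = False
        for j in range(len(cells) - 1):
            r_ok = {r for r in cells[j + 1] if any((l, r) in COMPAT for l in cells[j])}
            l_ok = {l for l in cells[j] if any((l, r) in COMPAT for r in r_ok)}
            if (r_ok, l_ok) != (cells[j + 1], cells[j]):
                cells[j + 1], cells[j], changed = r_ok, l_ok, True

def collapse_strip(n, seed=0):
    rng, cells = random.Random(seed), [set(TYPES) for _ in range(n)]
    propagate(cells)
    while any(len(c) > 1 for c in cells):
        # collapse the undecided slot with the lowest Shannon entropy first
        i = min((k for k, c in enumerate(cells) if len(c) > 1),
                key=lambda k: math.log(len(cells[k])))
        cells[i] = {rng.choice(sorted(cells[i]))}
        propagate(cells)
    return [next(iter(c)) for c in cells]

print(collapse_strip(6))   # a panel sequence satisfying all pair constraints
```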
The local time-stepping (LTS) algorithm is an adaptive method that adjusts the time step by selecting suitable intervals for different regions, based on the spatial scale of each cell and the water depth and flow velocity between cells. The method can be optimized by restricting the local time step increments to powers of two of the global time step, allowing the optimal time step to be approached throughout the grid. To verify the acceleration and accuracy of LTS in storm surge simulations, we developed a model to simulate astronomical storm surges along the southern coast of China. This model employs the shallow water equations as governing equations, numerical discretization using the finite volume method, and fluxes calculated by the Roe solver. By comparing the simulation results of the traditional global time-stepping algorithm with those of the LTS algorithm, we find that the latter fits the measured data better. Taking the calculation results of Typhoon Sally in 1996 as an example, we show that compared with the traditional global time-stepping algorithm, the LTS algorithm reduces computation time by 2.05 h and increases computational efficiency by 2.64 times while maintaining good accuracy.
Funding: National Natural Science Foundation of China (No. 52071306); the Natural Science Foundation of Shandong Province (No. ZR2019MEE050); the Natural Science Foundation of Zhejiang Province (No. LZ22E090003).
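A minimal sketch of the level-assignment step in local time stepping: each cell's CFL-admissible step is rounded down to dt_min · 2^m, and neighboring cells are limited to one level apart so fluxes can be synchronized. The mesh spacing, depths, and the sqrt(gh) wave speed are toy values.

```python
# Assign power-of-two local time-step levels from a CFL-type bound.
import numpy as np

def lts_levels(dx, wave_speed, cfl=0.9, max_level=4):
    """Assign each cell a step dt_min * 2**m from its CFL-admissible step."""
    dt_cell = cfl * dx / wave_speed
    dt_min = dt_cell.min()
    m = np.clip(np.floor(np.log2(dt_cell / dt_min)).astype(int), 0, max_level)
    changed = True
    while changed:                      # limit neighbors to one level apart
        changed = False
        for i in range(len(m)):
            cap = min(m[i - 1] + 1 if i > 0 else max_level,
                      m[i + 1] + 1 if i < len(m) - 1 else max_level)
            if m[i] > cap:
                m[i], changed = cap, True
    return dt_min, m

dx = np.array([10.0, 10.0, 50.0, 200.0, 200.0])   # fine near shore, coarse offshore
h = np.array([2.0, 2.0, 5.0, 20.0, 20.0])         # water depth per cell
dt_min, m = lts_levels(dx, np.sqrt(9.81 * h))     # sqrt(g h) gravity-wave speed
print(dt_min, m, dt_min * 2.0 ** m)               # e.g. levels [0 0 1 2 2]
```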
Accurate brain tumor classification in medical imaging requires real-time processing and efficient computation, making hardware acceleration essential. Field Programmable Gate Arrays (FPGAs) offer parallelism and reconfigurability, making them well suited for such tasks. In this study, we propose a hardware-accelerated Convolutional Neural Network (CNN) for brain cancer classification, implemented on the PYNQ-Z2 FPGA. Our approach optimizes the first Conv2D layer using different numerical representations: 8-bit fixed-point (INT8), 16-bit fixed-point (FP16), and 32-bit fixed-point (FP32), while the remaining layers run on an ARM Cortex-A9 processor. Experimental results demonstrate that FPGA acceleration significantly outperforms the CPU (Central Processing Unit) based approach and emphasize the critical importance of selecting the appropriate numerical representation for hardware acceleration in medical imaging. On the PYNQ-Z2 FPGA, INT8 achieves a 16.8% reduction in latency and 22.2% power savings compared to FP32, making it ideal for real-time and energy-constrained applications. FP16 offers a strong balance, delivering only a 0.1% drop in accuracy compared to FP32 (94.1% vs. 94.2%) while improving latency by 5% and reducing power consumption by 11.1%. Compared to prior works, the proposed FPGA-based CNN model achieves the highest classification accuracy (94.2%) with a throughput of up to 1.562 FPS, outperforming GPU-based and traditional CPU methods in both accuracy and hardware efficiency. These findings demonstrate the effectiveness of FPGA-based AI acceleration for real-time, power-efficient, and high-performance brain tumor classification, showcasing its practical potential in next-generation medical imaging systems.
Funding: supported by Northern Border University Researchers Supporting Project number (NBU-FFR-2025-432-03), Northern Border University, Arar, Saudi Arabia.
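The numeric-representation tradeoff can be illustrated in software: the sketch below quantizes convolution weights to signed fixed-point grids of different widths and reports the worst-case error. The Q-format splits are assumptions; the paper's FPGA pipeline is not reproduced.

```python
# Symmetric fixed-point quantization of conv weights at different bit widths.
import numpy as np

def quantize(w, bits, frac_bits):
    """Round to a signed fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 16)) * 0.1        # toy Conv2D kernel weights
for bits, frac in [(8, 6), (16, 12)]:            # assumed Q-format splits
    err = np.abs(quantize(w, bits, frac) - w).max()
    print(f"{bits}-bit fixed point: max abs weight error {err:.2e}")
```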
With the accelerated growth of the Internet of Things (IoT), real-time data processing on edge devices is increasingly important for reducing overhead and enhancing security by keeping sensitive data local. Since these devices often handle personal information under limited resources, cryptographic algorithms must be executed efficiently. Their computational characteristics strongly affect system performance, making it necessary to analyze their resource impact and predict usage under diverse configurations. In this paper, we analyze the phase-level resource usage of AES variants, ChaCha20, ECC, and RSA on an edge device and develop a prediction model. We apply these algorithms under varying parallelism levels and execution strategies across the key generation, encryption, and decryption phases. Based on the analysis, we train a unified Random Forest model using execution context and temporal features, achieving R² values of up to 0.994 for power and 0.988 for temperature. Furthermore, the model maintains practical predictive performance even for cryptographic algorithms not included during training, demonstrating its ability to generalize across distinct computational characteristics. Our proposed approach reveals how execution characteristics and resource usage interact, supporting proactive resource planning and efficient deployment of cryptographic workloads on edge devices. As our approach is grounded in phase-level computational characteristics rather than in any single algorithm, it provides generalizable insights that can be extended to a broader range of cryptographic algorithms that exhibit comparable phase-level execution patterns, and to heterogeneous edge architectures.
Funding: supported in part by the National Research Foundation of Korea (NRF) (No. RS-2025-00554650) and by the Chung-Ang University research grant in 2024.
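A minimal sketch of the unified prediction setup: one Random Forest maps execution-context features (phase, parallelism level, elapsed time) to power and temperature jointly. The feature set and the synthetic data generator are illustrative assumptions, not the paper's measured dataset.

```python
# A unified Random Forest predicting power and temperature from context features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
phase = rng.integers(0, 3, n)          # 0=keygen, 1=encrypt, 2=decrypt
threads = rng.integers(1, 5, n)        # parallelism level
t = rng.random(n) * 10                 # seconds since phase start
X = np.column_stack([phase, threads, t])
power = 1.5 + 0.8 * threads + 0.3 * phase + 0.05 * t + rng.normal(0, 0.1, n)
temp = 40 + 2.0 * threads + 0.5 * phase + 0.4 * t + rng.normal(0, 0.5, n)
y = np.column_stack([power, temp])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("R^2 (power and temperature jointly):", model.score(Xte, yte))
```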
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), which is based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, proper load, middle load, and heavy load. Under TESA, VMs on a lightly or heavily loaded host are migrated to another host with proper load, while VMs on a properly or middling loaded host are kept in place. Then, based on TESA, five VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions for future energy management are put forward. Simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
Funding: Project (61272148) supported by the National Natural Science Foundation of China; Project (20120162110061) supported by the Doctoral Programs of the Ministry of Education of China; Project (CX2014B066) supported by the Hunan Provincial Innovation Foundation for Postgraduate, China; Project (2014zzts044) supported by the Fundamental Research Funds for the Central Universities, China.
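A minimal sketch of the three-threshold classification together with one reading of the MIMT policy: from a heavy host, move the fewest VMs (largest first) needed to bring utilization below the upper threshold; a light host is vacated entirely. The threshold values are illustrative assumptions.

```python
# Three-threshold host classification and a minimization-of-migrations pick.
T_LIGHT, T_PROPER, T_MIDDLE = 0.2, 0.5, 0.8    # the three thresholds (assumed)

def classify(u):
    if u < T_LIGHT:  return "light"
    if u < T_PROPER: return "proper"
    if u < T_MIDDLE: return "middle"
    return "heavy"

def mimt_select(vms, capacity):
    """Pick the fewest VMs (largest first) whose removal drops a heavy host
    below the upper threshold; a light host is vacated entirely."""
    u = sum(vms) / capacity
    if classify(u) in ("proper", "middle"):
        return []                               # keep the host as is
    if classify(u) == "light":
        return list(vms)                        # vacate so the host can sleep
    picked = []
    for v in sorted(vms, reverse=True):         # fewest migrations: biggest first
        if sum(vms) - sum(picked) <= T_MIDDLE * capacity:
            break
        picked.append(v)
    return picked

print(mimt_select([2.0, 1.5, 0.6, 0.4], capacity=5.0))   # heavy (0.9) -> migrate [2.0]
```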
The main ideas in the development of the solvent extraction mixer settler have focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of the liquid, thus greatly improving mixing. Inserting an appropriate number of baffles clearly improves the extent of liquid mixing; however, excessive baffling interrupts mixing and lengthens the mixing time. Computational fluid dynamics (CFD) provides a tool for obtaining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with increasing baffle number and decreasing impeller speed.
Based on the Neumann series and the epsilon-algorithm, an efficient computation of the dynamic responses of systems with arbitrary time-varying characteristics is investigated. By avoiding the calculation of the inverses of the equivalent stiffness matrices in each time step, the proposed method reduces the computational effort compared with the full analysis of the Newmark method. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness and a truss structure with arbitrarily time-varying lumped mass. The results show that the proposed method yields good approximations compared with the responses obtained by the full analysis of the Newmark method.
Funding: supported by the Foundation of the Science and Technology of Jilin Province (20070541); 985-Automotive Engineering of Jilin University; the Innovation Fund for 985 Engineering of Jilin University (20080104).
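The core trick can be sketched directly: solve (K0 + ΔK)x = f using only a factorization of the constant K0, via the Neumann series x = Σ_k (−K0^{-1}ΔK)^k K0^{-1} f. The epsilon-algorithm acceleration is omitted here; plain partial sums are shown, and the matrices are toy values chosen so the series converges.

```python
# Neumann-series solve of (K0 + dK) x = f reusing one factorization of K0.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def neumann_solve(K0_lu, dK, f, terms=20, tol=1e-12):
    x = term = lu_solve(K0_lu, f)            # k = 0 term: K0^{-1} f
    for _ in range(terms):
        term = -lu_solve(K0_lu, dK @ term)   # next series term, no new inverse
        x = x + term
        if np.linalg.norm(term) < tol * np.linalg.norm(x):
            break
    return x

rng = np.random.default_rng(0)
K0 = np.diag(np.full(6, 10.0)) + rng.standard_normal((6, 6)) * 0.1
K0_lu = lu_factor(K0)                        # factor once for all time steps
dK = rng.standard_normal((6, 6)) * 0.5       # small time-varying perturbation
f = rng.standard_normal(6)
x = neumann_solve(K0_lu, dK, f)
print(np.linalg.norm((K0 + dK) @ x - f))     # -> ~0 residual
```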
The finite element (FE) method is a powerful tool that has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method, and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5%, even with a large time step and large time delay.
Funding: National Natural Science Foundation of China under Grant Nos. 51639006 and 51725901.
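For reference, a minimal central-difference step with a sparse stiffness matrix and diagonal (lumped) mass and damping, the case where CDM is noted above to be fastest: the left-hand operator is diagonal, so no linear solve is needed. The toy chain-of-springs model and parameters are assumptions.

```python
# Explicit central-difference step with sparse K and diagonal M, C.
import numpy as np
import scipy.sparse as sp

def cdm_step(u, u_prev, f, m, c, K, dt):
    """One CDM step; m, c are diagonal entries, K is a sparse matrix."""
    lhs = m / dt**2 + c / (2 * dt)                       # diagonal => no solve
    rhs = f - K @ u + (2 * m / dt**2) * u - (m / dt**2 - c / (2 * dt)) * u_prev
    return rhs / lhs

n, dt, k = 5, 0.001, 1e4
K = sp.diags([np.full(n - 1, -k), np.full(n, 2 * k), np.full(n - 1, -k)],
             offsets=[-1, 0, 1], format="csr")           # chain of springs
m, c = np.full(n, 1.0), np.full(n, 0.5)
u, u_prev = np.zeros(n), np.zeros(n)
f = np.zeros(n); f[-1] = 1.0                             # constant tip load
for _ in range(1000):                                    # dt well below stability limit
    u, u_prev = cdm_step(u, u_prev, f, m, c, K, dt), u
print(u)                                                  # oscillates about K^{-1} f
```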