In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on the mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
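The Dinkelbach method referenced above turns a fractional objective into a sequence of subtractive subproblems. The sketch below is a minimal, generic illustration of that iteration on a hypothetical scalar rate/energy ratio solved by grid search; it is not the paper's joint optimization, whose subproblems are handled with successive convex approximation.

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) over a finite candidate set via Dinkelbach iterations.

    Each iteration solves the subtractive subproblem max_x f(x) - lam*g(x)
    (here by brute force over `candidates`; the paper uses SCA instead)."""
    lam = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - lam * g(candidates)
        x_star = candidates[np.argmax(vals)]
        F = f(x_star) - lam * g(x_star)       # optimal subtractive value
        if abs(F) < tol:                      # F(lam*) = 0 at the optimal ratio
            break
        lam = f(x_star) / g(x_star)           # update the ratio parameter
    return x_star, lam

# Toy "efficiency" ratio: concave rate over affine energy cost (hypothetical).
f = lambda p: np.log2(1.0 + 4.0 * p)          # achievable rate
g = lambda p: 0.5 + p                         # consumed energy
p_grid = np.linspace(1e-3, 10.0, 10001)
p_opt, sce = dinkelbach(f, g, p_grid)
print(f"optimal power ~ {p_opt:.3f}, efficiency ~ {sce:.4f}")
```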
Huge calculation burden and difficulty in convergence are the two central conundrums of nonlinear topology optimization (NTO). To this end, a multi-resolution nonlinear topology optimization (MR-NTO) method is proposed based on the multi-resolution design strategy (MRDS) and the additive hyperelasticity technique (AHT), taking into account both geometric nonlinearity and material nonlinearity. The MR-NTO strategy is established in the framework of the solid isotropic material with penalization (SIMP) method, while the Neo-Hookean hyperelastic material model characterizes the material nonlinearity. A coarse analysis grid is employed for the finite element (FE) calculation, and a fine material grid is applied to describe the material configuration. To alleviate the convergence problem and reduce the complexity of the sensitivity calculation, the software ANSYS coupled with AHT is utilized to perform the nonlinear FE calculation. A strategy for redistributing strain energy is proposed for the sensitivity analysis, i.e., transforming the strain energy of the analysis element into that of the material elements, covering both Neo-Hookean and second-order Yeoh materials. Numerical examples highlight three distinct advantages of the proposed method: it can (1) significantly improve the computational efficiency, (2) make up for the shortcoming that NTO based on AHT may have difficulty in converging, especially for 3D problems, and (3) successfully cope with high-resolution 3D complex NTO problems on a personal computer.
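To make the coarse-analysis-grid / fine-material-grid idea concrete, the sketch below assumes a regular 2D multi-resolution layout in which every coarse analysis element contains an s-by-s block of fine material elements, aggregates the fine SIMP densities for analysis, and redistributes each analysis element's strain energy back to its material elements in proportion to penalized density. The grids, densities, energies, and the proportional rule are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical setup: a regular 2D grid where each coarse analysis element
# contains an s x s block of fine material elements (multi-resolution design).
s, nx, ny, penal = 4, 8, 6, 3.0
rho_fine = np.random.default_rng(0).uniform(0.2, 1.0, (ny * s, nx * s))  # design densities

# Coarse-element stiffness scaling in SIMP uses an aggregated density.
blocks = rho_fine.reshape(ny, s, nx, s)
rho_coarse = blocks.mean(axis=(1, 3))            # one density per analysis element

# Pretend FE strain energies of the coarse analysis elements (would come from ANSYS).
E_coarse = np.random.default_rng(1).uniform(0.5, 2.0, (ny, nx))

# Redistribute each analysis element's strain energy to its fine material elements,
# here proportionally to the penalized fine densities (illustrative rule only).
w = blocks ** penal
w = w / w.sum(axis=(1, 3), keepdims=True)
E_fine = (w * E_coarse[:, None, :, None]).reshape(ny * s, nx * s)

assert np.isclose(E_fine.sum(), E_coarse.sum())  # energy is conserved by the mapping
print(rho_coarse.shape, E_fine.shape)
```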
As a connection between the process and the circuit design, the device model is greatly desired for emerging devices, such as the double-gate MOSFET. Time efficiency is one of the most important requirements for device modeling. In this paper, an improvement to the computational efficiency of the drain current model for double-gate MOSFETs is extended, and different calculation methods are compared and discussed. The results show that the calculation speed of the improved model is substantially enhanced. A two-dimensional device simulation is performed to verify the improved model. Furthermore, the model is implemented into the HSPICE circuit simulator in Verilog-A for practical application.
The four-decade quest for synthesizing ambient-stable polymeric nitrogen, a promising high-energy-density material, remains an unsolved challenge in materials science. We develop a multi-stage computational strategy employing density functional tight-binding-based rapid screening combined with density functional theory refinement and global structure searching, effectively bridging computational efficiency with quantum accuracy. This integrated approach identifies four novel polymeric nitrogen phases (Fddd, P3221, I4m2, and P6522) that are thermodynamically stable at ambient pressure. Remarkably, the helical P6522 configuration demonstrates exceptional thermal resilience up to 1500 K, representing a predicted polymeric nitrogen structure that maintains stability under both atmospheric pressure and high-temperature extremes. Our methodology establishes a paradigm-shifting framework for the accelerated discovery of metastable energetic materials, resolving critical bottlenecks in theoretical predictions while providing experimentally actionable targets for polymeric nitrogen synthesis.
Distributed learning is a well-established method for estimation tasks over extensively distributed datasets. However, non-randomly stored data can introduce bias into local parameter estimates, leading to significant performance degradation in classical distributed algorithms. In this paper, the authors propose a novel Distributed Quasi-Newton Pilot (DQNP) method for distributed learning with non-randomly distributed data. The proposed approach accommodates both randomly and non-randomly distributed data settings and imposes no constraints on the uniformity of local sample sizes. Additionally, it avoids the need to transfer the Hessian matrix or compute its inverse, thereby greatly reducing computational and communication complexity. The authors theoretically demonstrate that the resulting estimator achieves statistical efficiency under mild conditions. Extensive numerical experiments on synthetic and real-world data validate the theoretical findings and illustrate the effectiveness of the proposed method.
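As a rough illustration of gradient-only distributed quasi-Newton estimation, where no Hessian is ever shipped or inverted explicitly, the sketch below runs a central BFGS update driven by gradients aggregated from unevenly sized, non-randomly split least-squares shards. The data, the simple averaging, and the plain BFGS recursion are assumptions for illustration; they are not the authors' DQNP construction or its pilot-based correction.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])

# Hypothetical non-random split: each worker's covariates live in a different region,
# and shard sizes are deliberately unequal.
def make_shard(center, n):
    X = rng.normal(center, 0.5, (n, 3))
    y = X @ theta_true + rng.normal(0.0, 0.1, n)
    return X, y

shards = [make_shard(-0.5, 150), make_shard(0.0, 400), make_shard(0.8, 80)]
N = sum(len(y) for _, y in shards)

def global_grad(theta):
    """Workers return only local gradients; no Hessian is communicated or inverted."""
    return sum(X.T @ (X @ theta - y) for X, y in shards) / N

theta, H, I = np.zeros(3), np.eye(3), np.eye(3)   # H approximates the inverse Hessian
g = global_grad(theta)
for _ in range(50):
    theta_new = theta - H @ g                     # quasi-Newton step at the center
    g_new = global_grad(theta_new)
    if np.linalg.norm(g_new) < 1e-10:
        theta = theta_new
        break
    s, yv = theta_new - theta, g_new - g
    rho = 1.0 / (yv @ s)
    H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) + rho * np.outer(s, s)
    theta, g = theta_new, g_new

print("pooled-gradient quasi-Newton estimate:", np.round(theta, 3))
```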
Detecting fake news in multimodal and multilingual social media environments is challenging due to inherent noise, inter-modal imbalance, computational bottlenecks, and semantic ambiguity. To address these issues, we propose SparseMoE-MFN, a novel unified framework that integrates sparse attention with a sparse-activated Mixture-of-Experts (MoE) architecture. This framework aims to enhance the efficiency, inferential depth, and interpretability of multimodal fake news detection. SparseMoE-MFN leverages LLaVA-v1.6-Mistral-7B-HF for efficient visual encoding and Qwen/Qwen2-7B for text processing. The sparse attention module adaptively filters irrelevant tokens and focuses on key regions, reducing computational costs and noise. The sparse MoE module dynamically routes inputs to specialized experts (visual, language, cross-modal alignment) based on content heterogeneity. This expert specialization design boosts computational efficiency and semantic adaptability, enabling precise processing of complex content and improving performance on ambiguous categories. Evaluated on the large-scale, multilingual MR2 dataset, SparseMoE-MFN achieves state-of-the-art performance. It obtains an accuracy of 86.7% and a macro-averaged F1 score of 0.859, outperforming strong baselines like MiniGPT-4 by 3.4% and 3.2%, respectively. Notably, it shows significant advantages in the "unverified" category. Furthermore, SparseMoE-MFN demonstrates superior computational efficiency, with an average inference latency of 89.1 ms and 95.4 GFLOPs, substantially lower than existing models. Ablation studies and visualization analyses confirm the effectiveness of both the sparse attention and sparse MoE components in improving accuracy, generalization, and efficiency.
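The sparse-activation idea behind the MoE module can be illustrated with a generic top-k gating layer: each token is scored against all experts but only its k best experts are evaluated. The dimensions, the tanh experts, and the routing weights below are hypothetical; this is not the SparseMoE-MFN architecture itself.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_moe_layer(tokens, gate_W, expert_Ws, k=2):
    """Route each token to its top-k experts and mix their outputs.

    tokens: (n, d); gate_W: (d, n_experts); expert_Ws: list of (d, d) matrices.
    Only k experts are activated per token, which is what makes the layer sparse."""
    logits = tokens @ gate_W                      # (n, n_experts) routing scores
    topk = np.argsort(logits, axis=1)[:, -k:]     # indices of the k best experts
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        sel = topk[i]
        weights = softmax(logits[i, sel])         # renormalize over selected experts
        out[i] = sum(w * np.tanh(tok @ expert_Ws[e]) for w, e in zip(weights, sel))
    return out

rng = np.random.default_rng(0)
d, n_experts = 16, 4                              # e.g., visual / language / alignment experts
tokens = rng.normal(size=(8, d))
gate_W = rng.normal(size=(d, n_experts))
expert_Ws = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
print(sparse_moe_layer(tokens, gate_W, expert_Ws).shape)   # (8, 16)
```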
The forward model of optical fiber strain induced by fractures, together with the associated model resolution matrix, is used to demonstrate the interpretability of fracture parameters once the fracture intersects the fiber. A regularized inversion framework for fracture parameters is established to evaluate the influence of measured data quality on the accuracy of iterative regularized inversion. An interpretation approach for both fracture width and height is proposed, and synthetic forward data with measurement error and field examples are employed to validate the accuracy of the simultaneous inversion of fracture width and height. The results indicate that, after the fracture contacts the fiber, the strain response is strongly sensitive only to the fracture parameters at the intersection location, whereas the interpretability of parameters at other locations remains limited. The iterative regularized inversion method effectively suppresses the impact of measurement error and exhibits high computational efficiency, showing clear advantages for inversion applications. When incorporating the first-order regularization with a Neumann boundary constraint on the tip width, the inverted fracture-width distribution becomes highly sensitive to fracture height; thus, combined with a bisection strategy, simultaneous inversion of fracture width and height can be achieved. Examination using the model resolution matrix, noisy synthetic data, and field data confirms that the iterative regularized inversion model for fracture width and height provides high interpretive accuracy and can be applied to the calculation and analysis of fracture width, fracture height, net pressure and other parameters.
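The two quantities the abstract relies on, a first-order regularized least-squares inversion and its model resolution matrix, can be sketched for a generic linear forward problem as below. The smoothing-kernel forward operator, noise level, and regularization weight are hypothetical stand-ins, not the paper's fiber-strain physics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60                                            # fracture-width samples along depth (hypothetical)
x = np.linspace(0.0, 1.0, n)
m_true = np.exp(-((x - 0.5) / 0.08) ** 2)         # "true" width profile

# Hypothetical linear forward operator: smoothing kernel mapping width to fiber strain.
G = np.exp(-((x[:, None] - x[None, :]) / 0.05) ** 2)
G /= G.sum(axis=1, keepdims=True)
d_obs = G @ m_true + rng.normal(0.0, 0.01, n)     # synthetic data with measurement error

# First-order (smoothness) regularization, in the spirit of the abstract's first-order scheme.
L = np.diff(np.eye(n), axis=0)                    # (n-1, n) finite-difference operator
lam = 1e-2
A = G.T @ G + lam * (L.T @ L)
m_hat = np.linalg.solve(A, G.T @ d_obs)           # regularized least-squares estimate

# Model resolution matrix: how the estimate blurs the true model (m_hat ~ R @ m_true).
R = np.linalg.solve(A, G.T @ G)
print("diag(R) near the intersection:", np.round(np.diag(R)[n // 2 - 2:n // 2 + 3], 3))
```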
The integrated nested Laplace approximation (INLA) algorithm provides a computationally efficient approach for approximate Bayesian inference, overcoming the limitations of traditional Markov chain Monte Carlo (MCMC) methods. This paper reviews the INLA algorithm and provides a systematic review of six key books that explore the theoretical foundations, practical implementations, and diverse applications of INLA. These six books cover spatial and spatio-temporal modelling, general Bayesian inference, SPDE-based spatial analysis, geospatial health data, regression modelling, and dynamic time series. In addition, these books highlight the versatility of the INLA method in handling complex models while maintaining high computational efficiency. This paper begins with an introduction to the INLA method and algorithm, followed by a systematic review of six key publications in the field.
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), which is based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, hosts in data centers are divided into four classes according to load: hosts with light load, proper load, middle load, and heavy load. According to TESA, VMs on a lightly loaded host or a heavily loaded host are migrated to another host with proper load, while VMs on a properly loaded host or a middling loaded host are kept in place. Then, based on TESA, five kinds of VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions on future energy management are put forward. The simulation results indicate that, compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency of data centers.
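A minimal sketch of the thresholding idea follows: hosts are classified by utilization against three thresholds, and an MIMT-style rule picks the fewest VMs whose migration brings an overloaded host back under the upper threshold. The threshold values, workloads, and the largest-first selection rule are illustrative assumptions rather than the paper's exact policy.

```python
# Three-threshold host classification plus a minimal MIMT-style selection rule.
T_LOW, T_MID, T_HIGH = 0.2, 0.6, 0.8              # three utilization thresholds (hypothetical)

def classify(util):
    if util < T_LOW:
        return "light"
    if util < T_MID:
        return "proper"
    if util < T_HIGH:
        return "middle"
    return "heavy"

def select_vms_min_migrations(vm_utils, upper=T_HIGH):
    """Pick the fewest VMs (largest first) whose removal drops host load below `upper`."""
    load = sum(vm_utils)
    chosen = []
    for vm in sorted(vm_utils, reverse=True):
        if load < upper:
            break
        chosen.append(vm)
        load -= vm
    return chosen

hosts = {"h1": [0.05, 0.04], "h2": [0.3, 0.25], "h3": [0.5, 0.45]}  # VM CPU shares per host
for name, vms in hosts.items():
    util = sum(vms)
    cls = classify(util)
    if cls in ("light", "heavy"):                 # TESA migrates away from light and heavy hosts
        print(name, cls, "-> migrate", select_vms_min_migrations(vms) if cls == "heavy" else vms)
    else:
        print(name, cls, "-> keep")
```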
The main ideas in the development of the solvent extraction mixer settler have focused on achieving clean phase separation, minimizing the loss of reagents, and decreasing the surface area of the settlers. The role of baffles in a mechanically agitated vessel is to ensure even distribution, reduce settler turbulence, promote the stability of the power drawn by the impeller, and prevent swirling and vortexing of liquid, thus greatly improving the mixing of the liquid. The insertion of an appropriate number of baffles clearly improves the extent of liquid mixing. However, excessive baffling would interrupt liquid mixing and lengthen the mixing time. Computational fluid dynamics (CFD) provides a tool for determining detailed information on fluid flow (hydrodynamics), which is necessary for modeling subprocesses in the mixer settler. A total of 54 final CFD runs were carried out, representing different combinations of variables such as the number of baffles, density, and impeller speed. The CFD data show that the amount of separation increases with increasing baffle number and decreasing impeller speed.
Based on the Neumann series and the epsilon-algorithm, an efficient computation of the dynamic responses of systems with arbitrary time-varying characteristics is investigated. By avoiding the calculation of the inverses of the equivalent stiffness matrices in each time step, the computational effort of the proposed method is reduced compared with a full Newmark analysis. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness properties and a truss structure with arbitrary time-varying lumped mass. It is shown that good approximate results can be obtained by the proposed method compared with the responses obtained by the full Newmark analysis.
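The core trick, reusing one factorization of the constant stiffness part while a truncated Neumann series absorbs the time-varying increment, can be sketched for a generic perturbed linear solve as below. The matrices, the perturbation size, and the number of series terms are hypothetical, and the epsilon-algorithm acceleration mentioned in the abstract is omitted.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 50
K0 = (np.diag(np.full(n, 4.0))
      + np.diag(np.full(n - 1, -1.0), 1)
      + np.diag(np.full(n - 1, -1.0), -1))        # constant stiffness part
dK = 0.05 * rng.standard_normal((n, n))           # small time-varying perturbation
dK = 0.5 * (dK + dK.T)
f = rng.standard_normal(n)

lu = lu_factor(K0)                                # factor the constant part once

def neumann_solve(dK, f, terms=8):
    """Approximate (K0 + dK)^-1 f as sum_k (-K0^-1 dK)^k K0^-1 f, reusing K0's factorization."""
    term = lu_solve(lu, f)
    x = term.copy()
    for _ in range(terms - 1):
        term = -lu_solve(lu, dK @ term)
        x += term
    return x

x_series = neumann_solve(dK, f)
x_exact = np.linalg.solve(K0 + dK, f)
print("relative error:", np.linalg.norm(x_series - x_exact) / np.linalg.norm(x_exact))
```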
Finite element (FE) is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
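To show why CDM is cheap when mass and damping are diagonal, the sketch below advances a sparse chain model with explicit central-difference steps: the only per-step work is a sparse matrix-vector product and elementwise divisions, with no factorization. The model size, properties, load, and step count are hypothetical.

```python
import numpy as np
import scipy.sparse as sp

# Minimal central-difference (CDM) stepping for M x'' + C x' + K x = f(t) with a
# sparse stiffness matrix; M and C are kept diagonal, the case where CDM is cheapest.
n, dt, steps = 200, 1e-3, 2000
m = np.ones(n)                                    # diagonal (lumped) mass
c = 0.02 * np.ones(n)                             # diagonal damping
main = 2.0e3 * np.ones(n)
off = -1.0e3 * np.ones(n - 1)
K = sp.diags([off, main, off], [-1, 0, 1], format="csr")   # sparse stiffness (chain model)

f = np.zeros(n); f[-1] = 1.0                      # constant tip load (hypothetical)
a = m / dt**2 + c / (2.0 * dt)                    # diagonal effective mass, inverted elementwise
b = m / dt**2 - c / (2.0 * dt)

x_prev = np.zeros(n)
x = np.zeros(n)
for _ in range(steps):
    rhs = f - K @ x + (2.0 * m / dt**2) * x - b * x_prev
    x_prev, x = x, rhs / a                        # no matrix factorization needed
print("tip displacement after", steps, "steps:", x[-1])
```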
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations; however, it is incapable of modeling high-order correlations among different objects in systems, and thus the graph structure cannot fully convey the intricate correlations among objects. Confronted with the aforementioned two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
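A hyperedge connects any number of vertices at once, which is the high-order correlation a pairwise graph edge cannot express. The sketch below builds a small incidence matrix and applies one standard HGNN-style hypergraph convolution step; the toy hypergraph and random features are assumptions, and the snippet does not use the DHG library's own API.

```python
import numpy as np

# A small hypergraph: 5 vertices, 3 hyperedges, each hyperedge joining more than
# two vertices, which is what a plain pairwise edge cannot represent.
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)            # incidence matrix (vertex x hyperedge)
w = np.array([1.0, 1.0, 1.0])                     # hyperedge weights

Dv = (H * w).sum(axis=1)                          # vertex degrees
De = H.sum(axis=0)                                # hyperedge degrees

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                       # vertex features
Theta = rng.normal(size=(4, 4)) / 2.0             # learnable transform (random here)

# One hypergraph convolution step (HGNN-style propagation over hyperedges).
Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
De_inv = np.diag(1.0 / De)
X_new = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta
print(X_new.shape)                                # (5, 4): smoothed, transformed features
```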
In this paper, the complexity and performance of Auxiliary Vector (AV) based reduced-rank filtering are addressed. The AV filters presented in previous papers have the general form of the sum of the signature vector of the desired signal and a set of weighted AVs, and can be classified into three categories according to the orthogonality of their AVs and the optimality of the weight coefficients of the AVs. The AV filter with orthogonal AVs and optimal weight coefficients has the best performance, but requires considerable computational complexity and suffers from numerically unstable operation. In order to reduce its computational load while keeping the superior performance, several low-complexity algorithms are proposed to efficiently calculate the AVs and their weight coefficients. The diagonal loading technique is also introduced to solve the numerical instability problem without increasing complexity. The performance of the three types of AV filters is also compared through their application to Direct Sequence Code Division Multiple Access (DS-CDMA) systems for interference suppression.
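Diagonal loading, mentioned above as the stabilization device, simply adds a small multiple of the identity to the sample covariance before it is inverted. The sketch below applies it to a generic minimum-variance filter built from a few training snapshots; the signatures, interference, and loading values are hypothetical, and this is not the paper's AV recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
N, snapshots = 16, 40                             # filter length and training snapshots
s = np.exp(1j * np.pi * 0.2 * np.arange(N)) / np.sqrt(N)      # desired signature vector
interf = np.exp(1j * np.pi * 0.6 * np.arange(N)) / np.sqrt(N)

# Received training data: desired signal + strong interference + noise.
X = (s[:, None] * rng.standard_normal(snapshots)
     + 5.0 * interf[:, None] * rng.standard_normal(snapshots)
     + 0.1 * (rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots))))

R = X @ X.conj().T / snapshots                    # sample covariance (may be poorly conditioned)

def mvdr_weights(R, s, loading=0.0):
    """MVDR-type filter w = R_l^-1 s / (s^H R_l^-1 s) with diagonal loading R_l = R + loading*I."""
    Rl = R + loading * np.eye(len(s))
    Ri_s = np.linalg.solve(Rl, s)
    return Ri_s / (s.conj() @ Ri_s)

for delta in (0.0, 1e-2, 1e-1):
    w = mvdr_weights(R, s, delta)
    print(f"loading={delta:5.2f}  ||w|| = {np.linalg.norm(w):.3f}  "
          f"cond(R+dI) = {np.linalg.cond(R + delta * np.eye(N)):.1e}")
```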
Contact detection is the most time-consuming stage in 3D discontinuous deformation analysis (3D-DDA) computation. Improving the efficiency of 3D-DDA is beneficial for its application in large-scale computing. In this study, aiming at the continuous-discontinuous simulation of 3D-DDA, a highly efficient contact detection strategy is proposed. Firstly, the global direct search (GDS) method is integrated into the 3D-DDA framework to address intricate contact scenarios. Subsequently, all geometric elements, including blocks, faces, edges, and vertices, are divided into searchable and unsearchable parts. Contacts between unsearchable geometric elements are directly inherited, while only searchable geometric elements are involved in contact detection. This strategy significantly reduces the number of geometric elements involved in contact detection, thereby markedly enhancing the computational efficiency. Several examples are adopted to demonstrate the accuracy and efficiency of the improved 3D-DDA method. Rock pillars with different mesh sizes are simulated under self-weight. The deformation and stress are consistent with the analytical results, and the smaller the mesh size, the higher the accuracy. The maximum speedup ratio is 38.46 for this case. Furthermore, the Brazilian splitting test on discs with different flaws is conducted. The results show that the failure pattern of the samples is consistent with the results obtained by other methods and experiments, and the maximum speedup ratio is 266.73. Finally, a large-scale impact test is performed, and approximately 3.2 times enhanced efficiency is obtained. The proposed contact detection strategy significantly improves efficiency when the rock has not completely failed, which makes it more suitable for continuous-discontinuous simulation.
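The searchable/unsearchable split can be illustrated with a generic broad-phase pass over bounding boxes: pairs in which neither element is flagged searchable reuse the contacts inherited from the previous step and are skipped entirely. The boxes, the 30% searchable fraction, and the axis-aligned overlap test are illustrative assumptions, not the GDS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical blocks as axis-aligned bounding boxes (min_xyz, max_xyz), plus a flag
# saying whether the block moved enough to need a fresh contact search this step.
n = 200
centers = rng.uniform(0.0, 20.0, (n, 3))
half = rng.uniform(0.2, 0.5, (n, 1))
boxes = np.concatenate([centers - half, centers + half], axis=1)
searchable = rng.random(n) < 0.3                  # ~30% of blocks actually moved

def aabb_overlap(a, b):
    return np.all(a[3:] >= b[:3]) and np.all(b[3:] >= a[:3])

previous_contacts = set()                          # contacts inherited from the last step

def detect(boxes, searchable, previous_contacts):
    """Re-search only pairs with at least one searchable block; inherit the rest."""
    contacts = {p for p in previous_contacts
                if not (searchable[p[0]] or searchable[p[1]])}   # directly inherited
    checked = 0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if not (searchable[i] or searchable[j]):
                continue                          # skipped pair: this is where the savings come from
            checked += 1
            if aabb_overlap(boxes[i], boxes[j]):
                contacts.add((i, j))
    return contacts, checked

contacts, checked = detect(boxes, searchable, previous_contacts)
full = n * (n - 1) // 2
print(f"pairs checked: {checked} of {full} ({100 * checked / full:.0f}%), contacts: {len(contacts)}")
```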
In this paper, a new technique is introduced to construct higher-order iterative methods for solving nonlinear systems. The order of convergence of some iterative methods can be improved by three at the cost of introducing only one additional evaluation of the function in each step. Furthermore, some new efficient methods with a higher order of convergence are obtained by using only a single matrix inversion in each iteration. Analyses of the convergence properties and computational efficiency of these new methods are carried out and verified on several numerical problems. By comparison, the new schemes are more efficient than the corresponding existing ones, particularly for large problem sizes.
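The "single matrix inversion per iteration" idea is in the spirit of frozen-Jacobian multi-step methods: factor the Jacobian once per outer step and reuse that factorization for additional corrector substeps that each cost only a function evaluation. The sketch below shows the classical two-substep variant on a small hypothetical system; the paper's construction and its exact order gain differ.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, np.exp(x) + y - 1.0])

def J(v):
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [np.exp(x), 1.0]])

def multistep_newton(x0, substeps=2, tol=1e-12, max_iter=20):
    """Newton-type iteration that factorizes the Jacobian once per outer step and
    reuses it for extra corrector substeps (each costs only a function evaluation)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        lu = lu_factor(J(x))                      # the single matrix factorization per iteration
        y = x.copy()
        for _ in range(substeps):
            y = y - lu_solve(lu, F(y))            # frozen-Jacobian corrector
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

root = multistep_newton([1.0, -1.5])
print(root, "residual:", np.linalg.norm(F(root)))
```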
Declaration of Competing Interest statements were not included in the published version of the following articles that appeared in previous issues of Journal of Automation and Intelligence. The appropriate Declaration of Competing Interest statements, provided by the Authors, are included below. 1. "A survey on computationally efficient neural architecture search" [Journal of Automation and Intelligence, 1 (2022) 100002]. 10.1016/j.jai.2022.100002.
Feature selection (FS) plays a crucial role in medical imaging by reducing dimensionality, improving computational efficiency, and enhancing diagnostic accuracy. Traditional FS techniques, including filter, wrapper, and embedded methods, have been widely used but often struggle with high-dimensional and heterogeneous medical imaging data. Deep learning-based FS methods, particularly Convolutional Neural Networks (CNNs) and autoencoders, have demonstrated superior performance but lack interpretability. Hybrid approaches that combine classical and deep learning techniques have emerged as a promising solution, offering improved accuracy and explainability. Furthermore, integrating multi-modal imaging data (e.g., Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and Ultrasound (US)) poses additional challenges in FS, necessitating advanced feature fusion strategies. Multi-modal feature fusion combines information from different imaging modalities to improve diagnostic accuracy. Recently, quantum computing has gained attention as a revolutionary approach for FS, providing the potential to handle high-dimensional medical data more efficiently. This systematic literature review comprehensively examines classical, Deep Learning (DL), hybrid, and quantum-based FS techniques in medical imaging. Key outcomes include a structured taxonomy of FS methods, a critical evaluation of their performance across modalities, and identification of core challenges such as computational burden, interpretability, and ethical considerations. Future research directions, such as explainable AI (XAI), federated learning, and quantum-enhanced FS, are also emphasized to bridge the current gaps. This review provides actionable insights for developing scalable, interpretable, and clinically applicable FS methods in the evolving landscape of medical imaging.
As data becomes increasingly complex, measuring dependence among variables is of great interest. However, most existing measures of dependence are limited to the Euclidean setting and cannot effectively characterize complex relationships. In this paper, we propose a novel method for constructing independence tests for random elements in Hilbert spaces, which includes functional data as a special case. Our approach uses the distance covariance of random projections to build a test statistic that is computationally efficient and exhibits strong power performance. We prove the equivalence between testing for independence expressed on the original and the projected covariates, bridging the gap between measures of testing independence in Euclidean spaces and Hilbert spaces. Implementation of the test involves calibration by permutation and combining several p-values from different projections using the false discovery rate method. Simulation studies and real data examples illustrate the finite sample properties of the proposed method under a variety of scenarios.
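The two ingredients named above, sample distance covariance and permutation calibration, can be sketched generically as follows; the data-generating processes are hypothetical, and the step that combines p-values across several random projections with the false discovery rate method is omitted.

```python
import numpy as np

def dcov2(x, y):
    """Squared sample distance covariance between samples x (n, p) and y (n, q)."""
    def centered(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()
    return (centered(x) * centered(y)).mean()

def perm_pvalue(x, y, n_perm=500, seed=0):
    """Permutation calibration: break the pairing of (x, y) to sample the null."""
    rng = np.random.default_rng(seed)
    stat = dcov2(x, y)
    null = [dcov2(x, y[rng.permutation(len(y))]) for _ in range(n_perm)]
    return (1 + sum(s >= stat for s in null)) / (1 + n_perm)

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=(n, 3))                        # stand-in for projected covariates
y_dep = np.sin(x[:, :1]) + 0.1 * rng.normal(size=(n, 1))   # nonlinearly dependent on x
y_ind = rng.normal(size=(n, 1))                    # independent of x
print("dependent pair   p =", perm_pvalue(x, y_dep))
print("independent pair p =", perm_pvalue(x, y_ind))
```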
Due to their resource constraints, Internet of Things (IoT) devices require authentication mechanisms that are both secure and efficient. Elliptic curve cryptography (ECC) meets these needs by providing strong security with shorter key lengths, which significantly reduces the computational overhead required for authentication algorithms. This paper introduces a novel ECC-based IoT authentication system utilizing our previously proposed efficient mapping and reverse mapping operations on elliptic curves over prime fields. By reducing reliance on costly point multiplication, the proposed algorithm significantly improves execution time, storage requirements, and communication cost across varying security levels. The proposed authentication protocol demonstrates superior performance when benchmarked against relevant ECC-based schemes, achieving reductions of up to 35.83% in communication overhead, 62.51% in device-side storage consumption, and 71.96% in computational cost. The security robustness of the scheme is substantiated through formal analysis using the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool and Burrows-Abadi-Needham (BAN) logic, complemented by a comprehensive informal analysis that confirms its resilience against various attack models, including impersonation, replay, and man-in-the-middle attacks. Empirical evaluation under simulated conditions demonstrates notable gains in efficiency and security. While these results indicate the protocol's strong potential for scalable IoT deployments, further validation on real-world embedded platforms is required to confirm its applicability and robustness at scale.
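For orientation, the sketch below implements textbook elliptic-curve point addition and double-and-add scalar multiplication over a tiny prime field, the costly operation the proposed scheme tries to minimize, and closes with a Diffie-Hellman-style consistency check. The curve parameters and private scalars are toy values for illustration only; the paper's mapping and reverse-mapping operations and its authentication protocol are not reproduced here.

```python
# Toy elliptic-curve arithmetic over a small prime field. Real deployments use
# standardized curves with ~256-bit primes; this tiny curve is for illustration.
p, a, b = 97, 2, 3                                 # curve y^2 = x^3 + 2x + 3 over F_97
G = (3, 6)                                         # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
O = None                                           # point at infinity

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """Double-and-add: the point multiplication whose cost ECC protocols try to minimize."""
    R = O
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

# Minimal Diffie-Hellman-style check: both sides derive the same shared point.
d_alice, d_bob = 13, 29                            # hypothetical private scalars
shared1 = scalar_mult(d_alice, scalar_mult(d_bob, G))
shared2 = scalar_mult(d_bob, scalar_mult(d_alice, G))
print(shared1, shared1 == shared2)
```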