In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming the barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Effectively handling imbalanced datasets remains a fundamental challenge in computational modeling and machine learning, particularly when class overlap significantly deteriorates classification performance. Traditional oversampling methods often generate synthetic samples without considering density variations, leading to redundant or misleading instances that exacerbate class overlap in high-density regions. To address these limitations, we propose Wasserstein Generative Adversarial Network Variational Density Estimation (WGAN-VDE), a computationally efficient, density-aware adversarial resampling framework that enhances minority-class representation while strategically reducing class overlap. The originality of WGAN-VDE lies in its density-aware sample refinement, which ensures that synthetic samples are positioned in underrepresented regions, thereby improving class distinctiveness. By applying structured feature representation, targeted sample generation, and density-based selection mechanisms, the proposed framework generates well-separated and diverse synthetic samples, improving class separability and reducing redundancy. Experimental evaluation on 20 benchmark datasets demonstrates that this approach outperforms 11 state-of-the-art rebalancing techniques, achieving superior results on F1-score, Accuracy, G-Mean, and AUC metrics. These results establish the proposed method as an effective and robust computational approach, suitable for diverse engineering and scientific applications involving imbalanced data classification and computational modeling.
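The density-based selection step this abstract describes can be illustrated with a minimal sketch: rank synthetic candidates by a crude local-density estimate and keep those lying in sparse regions. The radius-based density proxy, the `keep_ratio` heuristic, and all function names are assumptions for illustration, not the authors' implementation.

```python
import math

def local_density(point, population, radius):
    """Count population points within `radius` of `point` (crude density proxy)."""
    return sum(1 for q in population if math.dist(point, q) <= radius)

def select_low_density(candidates, population, radius, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of candidates with the lowest local density."""
    ranked = sorted(candidates, key=lambda p: local_density(p, population, radius))
    k = max(1, int(len(ranked) * keep_ratio))
    return ranked[:k]

# Toy example: a dense cluster near the origin plus one sparse-region candidate.
population = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
candidates = [(0.05, 0.05), (2.0, 2.0)]
kept = select_low_density(candidates, population, radius=0.5, keep_ratio=0.5)
# the candidate in the sparse region is retained; the high-density one is dropped
```

A kernel density estimate could replace the radius count, but the selection logic (prefer underrepresented regions) stays the same.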
The rapid evolution of wireless technologies and the advent of 6G networks present new challenges and opportunities for Internet of Things (IoT) applications, particularly in terms of ultra-reliable, secure, and energy-efficient communication. This study explores the integration of Reconfigurable Intelligent Surfaces (RIS) into IoT networks to enhance communication performance. Unlike traditional passive reflector-based approaches, RIS is leveraged as an active optimization tool to improve both backscatter and direct communication modes, addressing critical IoT challenges such as energy efficiency, limited communication range, and double-fading effects in backscatter communication. We propose a novel computational framework that combines RIS functionality with Physical Layer Security (PLS) mechanisms, optimized through the Deep Deterministic Policy Gradient (DDPG) algorithm. This framework adaptively adjusts RIS configurations and transmitter beamforming to mitigate key challenges, including imperfect channel state information (CSI) and hardware limitations such as quantized RIS phase shifts. By optimizing both RIS settings and beamforming in real time, our approach outperforms traditional methods by significantly increasing secrecy rates, improving spectral efficiency, and enhancing energy efficiency. Notably, the framework adapts more effectively to the dynamic nature of wireless channels than conventional optimization techniques, providing scalable solutions for large-scale RIS deployments. Our results demonstrate substantial improvements in communication performance, setting a new benchmark for secure, efficient, and scalable 6G communication. This work offers valuable insights for the future of IoT networks, with a focus on computational optimization, high spectral efficiency, and energy-aware operations.
Accurate evaluation of electron correlations is essential for the reliable quantitative description of electronic structures in strongly correlated systems, including bond-dissociating molecules, polyradicals, large conjugated molecules, and transition metal complexes. To provide a user-friendly tool for studying such challenging systems, our team developed Kylin 1.0 [J. Comput. Chem. 44, 1316 (2023)], an ab initio quantum chemistry program designed for efficient density matrix renormalization group (DMRG) and post-DMRG methods, enabling high-accuracy calculations with large active spaces. We have now further advanced the software with the release of Kylin 1.3, featuring optimized DMRG algorithms and an improved tensor contraction scheme in the diagonalization step. Benchmark calculations on the Mn_(4)CaO_(5) cluster demonstrate a remarkable speed-up of up to 16 times over Kylin 1.0. Moreover, a more user-friendly and efficient algorithm [J. Chem. Theory Comput. 17, 3414 (2021)] for sampling configurations from the DMRG wavefunction is implemented as well. Additionally, we have implemented a spin-adapted version of the externally contracted multi-reference configuration interaction (EC-MRCI) method [J. Phys. Chem. A 128, 958 (2024)], further enhancing the program's efficiency and accuracy for electron correlation calculations.
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit power of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the original problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solutions. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
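The Dinkelbach step described above, replacing a fractional objective f(x)/g(x) with a sequence of subtractive problems max f(x) - lam*g(x), can be sketched on a toy problem. The grid enumeration below stands in for the paper's successive-convex-approximation inner solver, and the toy objective is an assumption for illustration.

```python
def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) (with g > 0) by Dinkelbach's parametric method."""
    lam = 0.0
    x = candidates[0]
    for _ in range(max_iter):
        # Subtractive subproblem, solved here by enumeration for clarity.
        x = max(candidates, key=lambda c: f(c) - lam * g(c))
        if abs(f(x) - lam * g(x)) < tol:  # F(lam) = 0 at the optimal ratio
            break
        lam = f(x) / g(x)                 # update the ratio parameter
    return x, lam

# Toy fractional objective: maximize (1 + x) / (1 + x^2) over a coarse grid.
grid = [i / 10 for i in range(31)]
x_opt, ratio = dinkelbach(lambda x: 1 + x, lambda x: 1 + x * x, grid)
# converges to the grid point nearest sqrt(2) - 1
```

The method converges because F(lam) = max_x [f(x) - lam*g(x)] is decreasing in lam and vanishes exactly at the optimal ratio.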
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the “algorithmization” of “counterfactuals”. However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiment (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the “rider race”.
The emphasis on the simplification of cognitive and motor tasks in recent results of morphological computation has made possible the construction of appropriate “mimetic bodies” able to render the accompanying computations simpler, according to a general appeal to the “simplexity” of animal embodied cognition. A new activity of what we can call “distributed computation” holds the promise of originating a new generation of robots with better adaptability and a restricted number of required control parameters. The framework of distributed computation helps us see them in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Powered by the advanced information industry and intelligent technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS), and human factors have become crucial in the operation of complex social systems. Traditional mechanical analysis and social simulations alone are powerless for analyzing complex social systems. Against this backdrop, computational experiments have emerged as a new method for the quantitative analysis of complex social systems, combining social simulation (e.g., ABM), complexity science, and domain knowledge. However, in applying computational experiments, the construction of the experiment system not only involves a large number of artificial society models but also a large amount of data and knowledge. As a result, how to integrate various data, models, and knowledge into a running experiment system has become a key challenge. This paper proposes an integrated design framework for the computational experiment system, composed of four parts: generation of the digital subject, generation of the digital object, design of the operation engine, and construction of the experiment system. Finally, this paper outlines a typical case study of coal mine emergency management to verify the validity of the proposed framework.
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, this paper studies a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers. A user benefit function, EoU (Experience of Users), is then proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face slow convergence and poor stability. Thus, a combined offloading algorithm, a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and the adaptive genetic algorithm (AGA). Experimental results show that the Gini coefficient and the adaptive heuristic operators accelerate the convergence speed, and the proposed algorithm performs better in terms of convergence while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
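The role of the Gini coefficient in an adaptive GA like GCAGA can be sketched as follows. The coefficient itself is the standard inequality measure over population fitness; the adaptation rule shown (shrinking the mutation rate as fitness inequality grows) is an illustrative assumption, not the paper's exact operator.

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 = equal, near 1 = unequal."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # Standard closed form over sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1) / n

def adaptive_mutation_rate(fitness, base=0.1):
    """Illustrative rule: mutate less once fitness spread is high (exploit),
    more when the population is uniform (explore)."""
    return base * (1.0 - gini(fitness))
```

For a population of four individuals with fitness [0, 0, 0, 1], the Gini coefficient reaches its four-element maximum of 0.75, so the rule cuts the mutation rate to a quarter of its base value.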
Due to recent developments in communications technology, cognitive computation has been used in smart healthcare techniques that can combine massive medical data, artificial intelligence, federated learning, bio-inspired computation, and the Internet of Medical Things. It has helped in knowledge sharing and scaling between patients, doctors, and clinics for the effective treatment of patients. Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results. Since the subject's speech can be remotely recorded and submitted for further examination, it offers a quick, economical, dependable, and noninvasive prospective alternative detection approach. However, the two main requirements here are higher accuracy and lower computational complexity, and in many cases these two requirements do not correlate with each other. This problem is taken up in this paper to develop a low-computational-complexity neural network with higher accuracy. A cascaded perceptual functional link artificial neural network (PFLANN) is used to capture the nonlinearity in the data for better classification performance with low computational complexity. The proposed model is tested on multiple respiratory diseases, and the analysis of various performance metrics demonstrates the superior performance of the proposed model in terms of both accuracy and complexity.
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, with no support for multiple retries in subsequent time slots. This causes TDs to miss potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in a subsequent time slot. The framework continuously learns optimized decision-making experience to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the TDs' total energy consumption and task execution delay and improves the task offloading rate and system resource utilization compared to the benchmark method.
Computational Intelligence (CI) systems represent a pivotal intersection of cutting-edge technologies and complex engineering challenges aimed at solving real-world problems. This comprehensive body of work delves into the realm of CI, which is designed to tackle intricate and multifaceted engineering problems through advanced computational techniques. The history of CI systems is a fascinating journey that spans several decades and has its roots in the development of artificial intelligence and machine learning techniques. Through a wide array of practical examples and case studies, this special issue bridges the gap between theoretical concepts and practical implementation, shedding light on how CI systems can optimize processes, design solutions, and inform decisions in complex engineering landscapes. This compilation stands as an essential resource for both novice learners and seasoned practitioners, offering a holistic perspective on the potential of CI in reshaping the future of engineering problem-solving.
Phishing attacks present a serious threat to enterprise systems, requiring advanced detection techniques to protect sensitive data. This study introduces a phishing email detection framework that combines Bidirectional Encoder Representations from Transformers (BERT) for feature extraction with a convolutional neural network (CNN) for classification, specifically designed for enterprise information systems. BERT's linguistic capabilities are used to extract key features from email content, which are then processed by a CNN model optimized for phishing detection. Achieving an accuracy of 97.5%, the proposed model demonstrates strong proficiency in identifying phishing emails. This approach represents a significant advancement in applying deep learning to cybersecurity, setting a new benchmark for email security by effectively addressing the increasing complexity of phishing attacks.
Rationality is a fundamental concept in economics. Most researchers will accept that human beings are not fully rational. Herbert Simon suggested that we are "bounded rational". However, it is very difficult to quantify "bounded rationality", and therefore it is difficult to pinpoint its impact on all those economic theories that depend on the assumption of full rationality. Ariel Rubinstein proposed to model bounded rationality by explicitly specifying the decision makers' decision-making procedures. This paper takes a computational point of view on Rubinstein's approach. From a computational point of view, decision procedures can be encoded in algorithms and heuristics. We argue that, everything else being equal, the effective rationality of an agent is determined by its computational power - we refer to this as the computational intelligence determines effective rationality (CIDER) theory. This is not an attempt to propose a unifying definition of bounded rationality; it is merely a proposal of a computational point of view on bounded rationality. This way of interpreting bounded rationality enables us to (computationally) reason about economic systems when the full rationality assumption is relaxed.
Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. From the key difficulties of large-scale MOPs, the scalability analysis is discussed, focusing on the performance of existing MOEAs and the challenges induced by the increase in the number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced respectively: divide-and-conquer-based, dimensionality-reduction-based, and enhanced search-based approaches. Several future research directions are also discussed.
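The divide-and-conquer class surveyed above typically decomposes the decision vector into groups that are optimized in turn while the rest stay fixed (cooperative-coevolution style). A minimal single-objective sketch of that decomposition, with an assumed greedy group update rather than any specific published MOEA, might look like this:

```python
import random

def random_grouping(n_vars, n_groups, seed=0):
    """Partition variable indices 0..n_vars-1 into disjoint random groups."""
    rng = random.Random(seed)
    indices = list(range(n_vars))
    rng.shuffle(indices)
    return [indices[i::n_groups] for i in range(n_groups)]

def optimize_groupwise(f, x, groups, step=0.1, sweeps=30):
    """Greedy per-group search: shift one group at a time, keep improvements."""
    x = list(x)
    for _ in range(sweeps):
        for group in groups:
            for delta in (step, -step):
                trial = list(x)
                for i in group:
                    trial[i] += delta
                if f(trial) < f(x):  # accept only improving group moves
                    x = trial
    return x

# Toy run on the sphere function with 8 variables split into 4 groups.
sphere = lambda v: sum(t * t for t in v)
groups = random_grouping(8, 4)
result = optimize_groupwise(sphere, [1.0] * 8, groups)
```

Real large-scale MOEAs replace the greedy update with a full multi-objective optimizer per group and often learn variable interactions before grouping, but the decomposition structure is the same.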
Our living environments are being gradually occupied by an abundant number of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper will show how computational intelligence (CI), an emerging domain of AI, could be employed and embedded in our living spaces to help such environments be more energy efficient, intelligent, adaptive and convenient to the users.
The cloud radio access network (C-RAN) and fog computing have recently been proposed to tackle dramatically increasing traffic demands and to provide better quality of service (QoS) to user equipment (UE). Considering the better computation capability of the cloud RAN (10 times larger than that of the fog RAN) and the lower transmission delay of fog computing, we propose a joint resource allocation and coordinated computation offloading algorithm for the fog RAN (F-RAN), which takes advantage of both C-RAN and fog computing. Specifically, the F-RAN splits a computation task into a fog computing part and a cloud computing part. Under the constraints of maximum transmission delay tolerance and fronthaul and backhaul capacity limits, we minimize the energy cost and obtain the optimal computational resource allocation for multiple UE, the transmission power allocation of each UE, and the task splitting factor. Numerical results are presented with comparison to existing methods.
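The splitting idea above, a fraction of a task's cycles in the fog and the remainder in the cloud, can be sketched with a toy completion-time model. The parallel-execution assumption, the fixed backhaul delay, the grid search, and all numbers are illustrative simplifications, not the paper's energy-minimization formulation.

```python
def completion_time(cycles, split, f_fog, f_cloud, backhaul_delay):
    """Fog and cloud parts run in parallel; the cloud part pays a link delay."""
    t_fog = split * cycles / f_fog
    t_cloud = (1 - split) * cycles / f_cloud + backhaul_delay
    return max(t_fog, t_cloud)

def best_split(cycles, f_fog, f_cloud, backhaul_delay, points=101):
    """Grid-search the splitting factor that minimizes completion time."""
    grid = [i / (points - 1) for i in range(points)]
    return min(grid, key=lambda s: completion_time(
        cycles, s, f_fog, f_cloud, backhaul_delay))

# Cloud assumed 10x the fog's compute capability, as the abstract notes.
s = best_split(cycles=100, f_fog=1.0, f_cloud=10.0, backhaul_delay=5.0)
```

The optimum balances the two branches: pushing more work to the fast cloud helps until the backhaul delay dominates, which is exactly the C-RAN/fog trade-off the abstract exploits.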
Noise generated by civil transport aircraft during take-off and approach-to-land phases of operation is an environmental problem. This article first reviews the aircraft noise problem. The review is followed by a description and assessment of a number of sound propagation methods suitable for applications with a background mean flow field pertinent to aircraft noise. Of the three main areas of the noise problem, i.e., generation, propagation, and radiation, propagation provides a vital link between near-field noise generation and far-field radiation. Its accurate assessment ensures the overall validity of a prediction model. Of the various classes of propagation equations, linearised Euler equations are often cast in either the time domain or the frequency domain. The equations are often solved numerically by computational aeroacoustics techniques, but are subject to the onset of Kelvin-Helmholtz (K-H) instability modes, which may ruin the solutions. Other forms of linearised equations, e.g., acoustic perturbation equations, have been proposed, with differing degrees of success.
Ca^2+ dysregulation is an early event observed in Alzheimer's disease (AD) patients, preceding the presence of its clinical symptoms. Dysregulation of neuronal Ca^2+ causes synaptic loss and neuronal death, eventually leading to memory impairment and cognitive decline. Treatments targeting Ca^2+ signaling pathways are potential therapeutic strategies against AD. The complicated interactions make it challenging and expensive to study the underlying mechanisms of how Ca^2+ signaling contributes to the pathogenesis of AD. Computational modeling offers new opportunities to study the signaling pathway and test proposed mechanisms. In this mini-review, we present some computational approaches that have been used to study the Ca^2+ dysregulation of AD by simulating Ca^2+ signaling at various levels. We also point out future directions for computational modeling in studying Ca^2+ dysregulation in AD.
The pressure loss of a cross-flow perforated muffler has been computed with a procedure of physical modeling, simulation, and data processing. Three-dimensional computational fluid dynamics (CFD) has been used to investigate the relations of porosity, flow velocity, and hole diameter with the pressure loss. Accordingly, some preliminary results have been obtained: pressure loss increases with decreasing porosity in a nearly hyperbolic trend; rising input flow velocity makes the pressure loss increase in a parabolic trend; and the diameter of the holes has little effect on the pressure loss of the muffler. Moreover, the holes on the perforated pipes make the air flow gently and evenly, which decreases the air impact on the wall and pipes in the muffler. A practical perforated muffler is used to illustrate the availability of this method for pressure loss computation, and the comparison shows that the computation results obtained with the CFD method have reference value for muffler design.
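The reported trends, roughly hyperbolic in porosity and parabolic in flow velocity, are consistent with a simple dynamic-pressure scaling. The functional form and the loss constant below are assumptions chosen only to reproduce those trends, not the paper's fitted CFD results.

```python
def pressure_loss(porosity, velocity, rho=1.2, k=1.0):
    """Toy model: dp = k * (0.5 * rho * v^2) / porosity (SI units assumed)."""
    return k * 0.5 * rho * velocity ** 2 / porosity

# Halving the porosity doubles the loss (hyperbolic trend);
# doubling the velocity quadruples it (parabolic trend).
```

Hole diameter does not appear in this sketch at all, matching the abstract's finding that it affects the pressure loss little.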
Funding: supported by the National Key Research and Development Program of China (No. 2021YFB2900504).
Funding: supported by the Ongoing Research Funding Program (ORF-2025-488), King Saud University, Riyadh, Saudi Arabia.
Funding: Funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. G-1436-611-225.
Abstract: The rapid evolution of wireless technologies and the advent of 6G networks present new challenges and opportunities for Internet of Things (IoT) applications, particularly in terms of ultra-reliable, secure, and energy-efficient communication. This study explores the integration of Reconfigurable Intelligent Surfaces (RIS) into IoT networks to enhance communication performance. Unlike traditional passive reflector-based approaches, RIS is leveraged as an active optimization tool to improve both backscatter and direct communication modes, addressing critical IoT challenges such as energy efficiency, limited communication range, and double-fading effects in backscatter communication. We propose a novel computational framework that combines RIS functionality with Physical Layer Security (PLS) mechanisms, optimized through the Deep Deterministic Policy Gradient (DDPG) algorithm. This framework adaptively tunes RIS configurations and transmitter beamforming to mitigate key challenges, including imperfect channel state information (CSI) and hardware limitations such as quantized RIS phase shifts. By optimizing both RIS settings and beamforming in real time, our approach outperforms traditional methods by significantly increasing secrecy rates, improving spectral efficiency, and enhancing energy efficiency. Notably, this framework adapts more effectively to the dynamic nature of wireless channels than conventional optimization techniques, providing scalable solutions for large-scale RIS deployments. Our results demonstrate substantial improvements in communication performance, setting a new benchmark for secure, efficient, and scalable 6G communication. This work offers valuable insights for the future of IoT networks, with a focus on computational optimization, high spectral efficiency, and energy-aware operations.
Funding: Supported by the Shandong Provincial Natural Science Foundation (ZR2024ZD30) and the National Natural Science Foundation of China (Nos. 22325302 and 22403100).
Abstract: Accurate evaluation of electron correlations is essential for the reliable quantitative description of electronic structures in strongly correlated systems, including bond-dissociating molecules, polyradicals, large conjugated molecules, and transition metal complexes. To provide a user-friendly tool for studying such challenging systems, our team developed Kylin 1.0 [J. Comput. Chem. 44, 1316 (2023)], an ab initio quantum chemistry program designed for efficient density matrix renormalization group (DMRG) and post-DMRG methods, enabling high-accuracy calculations with large active spaces. We have now further advanced the software with the release of Kylin 1.3, featuring optimized DMRG algorithms and an improved tensor contraction scheme in the diagonalization step. Benchmark calculations on the Mn_(4)CaO_(5) cluster demonstrate a remarkable speed-up of up to 16 times over Kylin 1.0. Moreover, a more user-friendly and efficient algorithm [J. Chem. Theory Comput. 17, 3414 (2021)] for sampling configurations from the DMRG wavefunction is implemented as well. Additionally, we have implemented a spin-adapted version of the externally contracted multi-reference configuration interaction (EC-MRCI) method [J. Phys. Chem. A 128, 958 (2024)], further enhancing the program's efficiency and accuracy for electron correlation calculations.
Funding: Supported by the Natural Science Foundation of Henan Province (No. 232300421097), the Program for Science & Technology Innovation Talents in Universities of Henan Province (Nos. 23HASTIT019 and 24HASTIT038), the China Postdoctoral Science Foundation (Nos. 2023T160596 and 2023M733251), the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11), and the Song Shan Laboratory Foundation (No. YYJC022022003).
Abstract: In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate of the mMIMO system under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Since the formulated problem is difficult to solve directly, we first transform the fractional objective function into a subtractive form via the Dinkelbach method. Next, the problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solution. Finally, simulations are conducted to show that the proposed schemes outperform the other schemes.
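The Dinkelbach transformation mentioned above replaces a fractional objective f(x)/g(x) with the parametric subtractive problem max f(x) - λ·g(x) and updates λ until the subproblem's optimal value reaches zero. A minimal sketch on a toy scalar ratio (the objective here is made up for illustration, not the paper's SCE model):

```python
import numpy as np

def dinkelbach_max_ratio(f, g, xs, tol=1e-9, max_iter=100):
    """Maximize f(x)/g(x) (with g > 0) over candidate points xs by repeatedly
    solving the subtractive subproblem max_x f(x) - lam * g(x)."""
    lam = 0.0
    for _ in range(max_iter):
        vals = f(xs) - lam * g(xs)
        x_star = xs[np.argmax(vals)]
        if vals.max() < tol:      # F(lam) ~ 0: lam equals the optimal ratio
            break
        lam = f(x_star) / g(x_star)   # Dinkelbach update
    return x_star, lam

# Toy fractional objective: maximize (1 + x) / (1 + x^2) on [0, 3]
f = lambda x: 1.0 + x
g = lambda x: 1.0 + x ** 2
xs = np.linspace(0.0, 3.0, 10001)
x_star, ratio = dinkelbach_max_ratio(f, g, xs)
print(round(float(x_star), 3), round(float(ratio), 3))  # 0.414 1.207
```

In the paper the inner subproblem is itself non-convex and is handled with successive convex approximation; here a grid search stands in for that inner solver.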
Funding: Supported by the National Key Research and Development Program of China (2021YFF0900800), the National Natural Science Foundation of China (61972276, 62206116, 62032016), the New Liberal Arts Reform and Practice Project of the National Ministry of Education (2021170002), the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems (20210101), and the Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Students (C1-2022-010).
Abstract: Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of the modeling of an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
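The interpretative module selects a factorial experimental design over the influencing factors identified by the descriptive module. A minimal sketch of generating a full-factorial design; the factor names and levels below are hypothetical stand-ins for a crowd-sourcing platform model, not the paper's actual factors:

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels as one experiment run."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Hypothetical influencing factors of an artificial crowd-sourcing society
factors = {
    "dispatch_rule": ["nearest", "random"],
    "rider_count": [100, 200, 300],
    "bonus_scheme": ["flat", "tiered"],
}
design = full_factorial(factors)
print(len(design))  # 12 (= 2 * 3 * 2 experiment runs)
```

Fractional-factorial designs would subsample this grid when the number of factors makes full enumeration too expensive.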
Abstract: The emphasis on the simplification of cognitive and motor tasks in recent results on morphological computation has made possible the construction of appropriate "mimetic bodies" able to render the accompanying computations simpler, in line with a general appeal to the "simplexity" of animal embodied cognition. A new activity of what we can call "distributed computation" holds the promise of originating a new generation of robots with better adaptability and a restricted number of required control parameters. The framework of distributed computation helps us see them in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Funding: Supported in part by the National Key Research and Development Program of China (2021YFF0900800), the National Natural Science Foundation of China (61972276, 62206116, 62032016), the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems (20210101), the New Liberal Arts Reform and Practice Project of the National Ministry of Education (2021170002), and the Tianjin University Talent Innovation Reward Program for Literature & Science Graduate Students (C1-2022-010).
Abstract: Powered by the advanced information industry and intelligent technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS), and human factors have become crucial in the operation of complex social systems. Traditional mechanical analysis and social simulation alone are inadequate for analyzing complex social systems. Against this backdrop, computational experiments have emerged as a new method for the quantitative analysis of complex social systems, combining social simulation (e.g., ABM), complexity science, and domain knowledge. However, when applying computational experiments, the construction of the experiment system must not only accommodate a large number of artificial society models but also integrate a large amount of data and knowledge. As a result, how to integrate various data, models, and knowledge into a running experiment system has become a key challenge. This paper proposes an integrated design framework for a computational experiment system, composed of four parts: generation of the digital subject, generation of the digital object, design of the operation engine, and construction of the experiment system. Finally, the paper presents a typical case study of coal mine emergency management to verify the validity of the proposed framework.
Abstract: To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, this paper studies a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers. A user benefit function, EoU (Experience of Users), is then proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face slow convergence and poor stability. Thus, a combined offloading algorithm, i.e., a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and the adaptive genetic algorithm (AGA). Experimental results show that the Gini coefficient and the adaptive heuristic operators accelerate the convergence speed, and the proposed algorithm converges better while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
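One plausible way to use a Gini coefficient inside an adaptive GA, sketched under assumptions (the mapping from Gini to mutation rate below is illustrative, not the published GCAGA rule): measure the inequality of population fitnesses, and mutate more aggressively when fitness diversity has collapsed:

```python
def gini(values):
    """Gini coefficient of non-negative values: 0 = all equal, -> 1 = concentrated."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

def adaptive_mutation_rate(fitnesses, p_min=0.01, p_max=0.2):
    """Low fitness inequality (population converged) -> mutate more to escape
    stagnation; high inequality (still diverse) -> mutate less."""
    g = gini(fitnesses)
    return p_max - (p_max - p_min) * g

print(gini([1.0, 1.0, 1.0, 1.0]))  # 0.0: identical fitnesses, converged population
```

The Gini coefficient is a convenient scalar diversity signal because it is scale-invariant, unlike the raw fitness variance.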
Abstract: Due to recent developments in communications technology, cognitive computation has been used in smart healthcare techniques that combine massive medical data, artificial intelligence, federated learning, bio-inspired computation, and the Internet of Medical Things. It has helped in knowledge sharing and scaling between patients, doctors, and clinics for the effective treatment of patients. Speech-based respiratory disease detection and monitoring are crucial in this direction and have shown several promising results. Since the subject's speech can be remotely recorded and submitted for further examination, it offers a quick, economical, dependable, and noninvasive alternative detection approach. However, the two main requirements here are higher accuracy and lower computational complexity, and in many cases these two requirements conflict with each other. This problem is taken up in this paper by developing a neural network with low computational complexity and high accuracy. A cascaded perceptual functional link artificial neural network (PFLANN) is used to capture the nonlinearity in the data for better classification performance with low computational complexity. The proposed model is tested on multiple respiratory diseases, and the analysis of various performance metrics demonstrates the superior performance of the proposed model in terms of both accuracy and complexity.
Abstract: The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on task-offloading optimization when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, with no support for multiple retries in subsequent time slots, so TDs miss potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experience to increase the success rate of task offloading based on deep learning technology. Simulation results show that, compared with the benchmark method, TSODF reduces the total energy consumption and delay of task execution on TDs and improves the task offloading rate and system resource utilization.
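The two-stage accept/hold/reject logic can be sketched as follows. This is a simplified reading of the framework: the acceptance-probability predictor (an LSTM in the paper) is stubbed out as a plain argument, and the threshold value is an assumption:

```python
def decide(request_load, free_capacity, p_accept_next, hold_threshold=0.6):
    """Stage 1: accept immediately if MEC resources suffice.
    Stage 2: otherwise hold the request for retry in the next slot only if the
    predicted acceptance probability is high enough; else reject outright."""
    if request_load <= free_capacity:
        return "accept"
    if p_accept_next >= hold_threshold:
        return "hold"
    return "reject"

print(decide(4, 10, 0.1))  # accept: resources are free now
print(decide(8, 2, 0.8))   # hold: exhausted, but release is likely next slot
print(decide(8, 2, 0.3))   # reject: exhausted and release is unlikely
```

Held requests would also be subject to the framework's dynamic eviction when their predicted prospects deteriorate over later slots.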
Abstract: Computational Intelligence (CI) systems represent a pivotal intersection of cutting-edge technologies and complex engineering challenges aimed at solving real-world problems. This comprehensive body of work delves into the realm of CI, which is designed to tackle intricate and multifaceted engineering problems through advanced computational techniques. The history of CI systems is a fascinating journey that spans several decades and has its roots in the development of artificial intelligence and machine learning techniques. Through a wide array of practical examples and case studies, this special issue bridges the gap between theoretical concepts and practical implementation, shedding light on how CI systems can optimize processes, design solutions, and inform decisions in complex engineering landscapes. This compilation stands as an essential resource for both novice learners and seasoned practitioners, offering a holistic perspective on the potential of CI in reshaping the future of engineering problem-solving.
Funding: Supported by a grant from Hong Kong Metropolitan University (RD/2023/2.3), by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2024R343), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, and by the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia, through project No. NBU-FFR-2024-1092-09.
Abstract: Phishing attacks present a serious threat to enterprise systems, requiring advanced detection techniques to protect sensitive data. This study introduces a phishing email detection framework that combines Bidirectional Encoder Representations from Transformers (BERT) for feature extraction with a Convolutional Neural Network (CNN) for classification, specifically designed for enterprise information systems. BERT's linguistic capabilities are used to extract key features from email content, which are then processed by a CNN model optimized for phishing detection. Achieving an accuracy of 97.5%, the proposed model demonstrates strong proficiency in identifying phishing emails. This approach represents a significant advancement in applying deep learning to cybersecurity, setting a new benchmark for email security by effectively addressing the increasing complexity of phishing attacks.
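The CNN stage of such a pipeline can be sketched in plain NumPy. Everything below is an illustrative stand-in, not the paper's architecture: random arrays replace the BERT token embeddings, and the filter count, widths, and classifier head are assumed shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_classify(embeddings, filters, w_out, b_out):
    """1D convolution over token positions, ReLU, global max-pool, then a
    logistic output giving a phishing probability."""
    seq_len, _ = embeddings.shape
    n_f, width, _ = filters.shape
    feats = np.empty(n_f)
    for f in range(n_f):
        acts = [np.sum(embeddings[t:t + width] * filters[f])
                for t in range(seq_len - width + 1)]
        feats[f] = max(0.0, max(acts))        # ReLU + global max-pool
    z = feats @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-z))           # sigmoid: phishing probability

emb = rng.normal(size=(128, 32))              # stand-in for BERT token features
filt = rng.normal(size=(8, 3, 32)) * 0.1      # 8 filters spanning 3 tokens
w, b = rng.normal(size=8) * 0.1, 0.0
p = conv1d_classify(emb, filt, w, b)
print(0.0 < p < 1.0)  # True
```

In practice the filters and output weights would be trained end-to-end on labeled phishing/legitimate emails, with real BERT outputs as input.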
Abstract: Rationality is a fundamental concept in economics. Most researchers will accept that human beings are not fully rational. Herbert Simon suggested that we are "boundedly rational". However, it is very difficult to quantify "bounded rationality", and therefore it is difficult to pinpoint its impact on all those economic theories that depend on the assumption of full rationality. Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision makers' decision-making procedures. This paper takes a computational point of view on Rubinstein's approach. From a computational point of view, decision procedures can be encoded in algorithms and heuristics. We argue that, everything else being equal, the effective rationality of an agent is determined by its computational power; we refer to this as the computational intelligence determines effective rationality (CIDER) theory. This is not an attempt to propose a unifying definition of bounded rationality; it is merely a proposal of a computational point of view on bounded rationality. This way of interpreting bounded rationality enables us to (computationally) reason about economic systems when the full rationality assumption is relaxed.
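The CIDER idea can be made concrete with a toy decision procedure of my own construction (not from the paper): two agents face the same options and utility function, but one has a smaller computational budget and can evaluate fewer options, so its effective rationality is lower:

```python
def choose(options, utility, budget):
    """An agent evaluates at most `budget` options (its computational power)
    and picks the best it has examined. A full budget recovers full rationality."""
    examined = options[:budget]
    return max(examined, key=utility)

options = [3, 9, 1, 7, 20, 2]
utility = lambda x: x                 # toy utility: bigger is better

print(choose(options, utility, budget=3))            # 9: bounded agent
print(choose(options, utility, budget=len(options))) # 20: fully rational agent
```

Under this reading, "bounded rationality" is not a vague behavioral label but a parameter of the decision algorithm, which is what makes it amenable to computational analysis.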
Funding: Supported by the Natural Science Foundation of China (Nos. 61672478 and 61806090), the National Key Research and Development Program of China (No. 2017YFB1003102), the Guangdong Provincial Key Laboratory (No. 2020B121201001), the Shenzhen Peacock Plan (No. KQTD2016112514355531), the Guangdong-Hong Kong-Macao Greater Bay Area Center for Brain Science and Brain-inspired Intelligence Fund (No. 2019028), the Fellowship of the China Postdoctoral Science Foundation (No. 2020M671900), and the National Leading Youth Talent Support Program of China.
Abstract: Large-scale multi-objective optimization problems (MOPs), which involve a large number of decision variables, have emerged from many real-world applications. While evolutionary algorithms (EAs) have been widely acknowledged as a mainstream method for MOPs, most research progress and successful applications of EAs have been restricted to MOPs with small-scale decision variables. More recently, it has been reported that traditional multi-objective EAs (MOEAs) suffer severe deterioration as the number of decision variables increases. As a result, and motivated by the emergence of real-world large-scale MOPs, the investigation of MOEAs in this respect has attracted much more attention in the past decade. This paper reviews the progress of evolutionary computation for large-scale multi-objective optimization from two angles. Starting from the key difficulties of large-scale MOPs, a scalability analysis is presented, focusing on the performance of existing MOEAs and the challenges induced by the increase in the number of decision variables. From the perspective of methodology, large-scale MOEAs are categorized into three classes and introduced in turn: divide-and-conquer-based, dimensionality-reduction-based, and enhanced-search-based approaches. Several future research directions are also discussed.
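A common building block of the divide-and-conquer class is variable grouping: the large decision vector is partitioned into disjoint groups that are optimized as smaller subproblems. A minimal sketch of random grouping (one of several grouping strategies in this literature; the sizes are arbitrary):

```python
import random

def random_grouping(n_vars, n_groups, seed=0):
    """Partition variable indices 0..n_vars-1 into n_groups disjoint groups,
    so each subproblem optimizes only a small slice of the decision vector."""
    idx = list(range(n_vars))
    random.Random(seed).shuffle(idx)
    return [idx[g::n_groups] for g in range(n_groups)]

groups = random_grouping(n_vars=1000, n_groups=10)
print(len(groups), len(groups[0]))  # 10 100
```

More sophisticated schemes replace the random shuffle with interaction analysis, grouping variables that are detected to be non-separable into the same subproblem.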
Abstract: Our living environments are gradually being occupied by an abundance of digital objects that have networking and computing capabilities. After these devices are plugged into a network, they initially advertise their presence and capabilities in the form of services so that they can be discovered and, if desired, exploited by the user or other networked devices. With the increasing number of these devices attached to the network, the complexity of configuring and controlling them increases, which may lead to major processing and communication overhead. Hence, the devices are no longer expected to just act as primitive stand-alone appliances that only provide the facilities and services they are designed for, but also to offer complex services that emerge from unique combinations of devices. This creates the necessity for these devices to be equipped with some sort of intelligence and self-awareness to enable them to be self-configuring and self-programming. However, with this "smart evolution", the cognitive load to configure and control such spaces becomes immense. One way to relieve this load is by employing artificial intelligence (AI) techniques to create an intelligent "presence" where the system will be able to recognize the users and autonomously program the environment to be energy efficient and responsive to the user's needs and behaviours. These AI mechanisms should be embedded in the user's environments and should operate in a non-intrusive manner. This paper shows how computational intelligence (CI), an emerging domain of AI, can be employed and embedded in our living spaces to help such environments become more energy efficient, intelligent, adaptive, and convenient for their users.
Funding: Supported in part by the National Natural Science Foundation of China (No. 61372070), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2015JM6324), the Ningbo Natural Science Foundation (2015A610117), the Hong Kong, Macao and Taiwan Science & Technology Cooperation Program of China (No. 2015DFT10160), EU FP7 Project MONICA (No. PIRSES-GA-2011-295222), and the 111 Project (No. B08038).
Abstract: The cloud radio access network (C-RAN) and fog computing have recently been proposed to tackle dramatically increasing traffic demands and to provide better quality of service (QoS) to user equipment (UE). Considering the better computation capability of the cloud RAN (10 times that of the fog RAN) and the lower transmission delay of fog computing, we propose a joint resource allocation and coordinated computation offloading algorithm for the fog RAN (F-RAN), which takes advantage of both C-RAN and fog computing. Specifically, the F-RAN splits a computation task into a fog computing part and a cloud computing part. Under the constraints of maximum transmission delay tolerance and fronthaul and backhaul capacity limits, we minimize the energy cost and obtain the optimal computational resource allocation for multiple UEs, the transmission power allocation of each UE, and the task splitting factor. Numerical results are presented along with comparisons against existing methods.
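The role of the splitting factor can be illustrated with a deliberately simplified toy: a brute-force search over the fog share α that minimizes energy under a delay bound. The linear per-bit energy/delay constants and the parallel-execution delay model below are assumptions for illustration only, not the paper's system model:

```python
def best_split(task_bits, e_fog, e_cloud, t_fog, t_cloud, t_max, steps=1000):
    """Search the splitting factor a in [0, 1]: a*task_bits go to the fog,
    the rest to the cloud. Returns the feasible (a, energy) with least energy."""
    best = None
    for i in range(steps + 1):
        a = i / steps
        energy = task_bits * (a * e_fog + (1 - a) * e_cloud)
        # The two parts run in parallel, so delay is the slower of the two
        delay = task_bits * max(a * t_fog, (1 - a) * t_cloud)
        if delay <= t_max and (best is None or energy < best[1]):
            best = (a, energy)
    return best

# Toy numbers: cloud is cheaper per bit but slower to reach
alpha, energy = best_split(task_bits=1e6, e_fog=2e-9, e_cloud=1e-9,
                           t_fog=1e-9, t_cloud=5e-9, t_max=4e-3)
```

With these constants the delay bound forces at least ~20% of the task onto the fast fog tier, and the search settles on the smallest such share because the cloud is energy-cheaper; the paper solves the analogous trade-off analytically with convex optimization.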
Abstract: Noise generated by civil transport aircraft during the take-off and approach-to-land phases of operation is an environmental problem. This article first reviews the aircraft noise problem. The review is followed by a description and assessment of a number of sound propagation methods suitable for applications with a background mean flow field pertinent to aircraft noise. Of the three main areas of the noise problem, i.e., generation, propagation, and radiation, propagation provides a vital link between near-field noise generation and far-field radiation. Its accurate assessment ensures the overall validity of a prediction model. Among the various classes of propagation equations, linearised Euler equations are often cast in either the time domain or the frequency domain. The equations are often solved numerically by computational aeroacoustics techniques, but are subject to the onset of Kelvin-Helmholtz (K-H) instability modes, which may ruin the solutions. Other forms of linearised equations, e.g., acoustic perturbation equations, have been proposed, with differing degrees of success.
Abstract: Ca^2+ dysregulation is an early event observed in Alzheimer's disease (AD) patients, preceding the presence of its clinical symptoms. Dysregulation of neuronal Ca^2+ causes synaptic loss and neuronal death, eventually leading to memory impairments and cognitive decline. Treatments targeting Ca^2+ signaling pathways are potential therapeutic strategies against AD. The complicated interactions make it challenging and expensive to study the underlying mechanisms of how Ca^2+ signaling contributes to the pathogenesis of AD. Computational modeling offers new opportunities to study these signaling pathways and test proposed mechanisms. In this mini-review, we present some computational approaches that have been used to study the Ca^2+ dysregulation of AD by simulating Ca^2+ signaling at various levels. We also point out future directions for computational modeling in studying Ca^2+ dysregulation in AD.
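A flavor of what "simulating Ca^2+ signaling" means at the simplest (whole-compartment) level: a toy one-variable balance between ER leak into the cytosol and a saturable pump removing Ca^2+, integrated with forward Euler. All parameter values here are made up for illustration and are not drawn from the AD modeling literature:

```python
import numpy as np

def simulate_ca(c0=0.1, c_er=10.0, v_leak=0.002, v_pump=0.9, k_pump=0.2,
                dt=0.01, t_end=50.0):
    """Toy cytosolic Ca2+ balance (arbitrary units):
    dc/dt = v_leak * (c_er - c) - v_pump * c^2 / (k_pump^2 + c^2),
    i.e., ER leak in, Hill-type pump out, integrated by forward Euler."""
    c = c0
    trace = []
    for _ in range(int(t_end / dt)):
        dc = v_leak * (c_er - c) - v_pump * c ** 2 / (k_pump ** 2 + c ** 2)
        c += dt * dc
        trace.append(c)
    return np.array(trace)

trace = simulate_ca()
print(round(float(trace[-1]), 2))  # relaxes to a low resting level, ~0.03
```

Models used in the actual literature add IP3 receptor gating, buffers, and stochastic channel behavior on top of this kind of balance equation; disease variants then perturb specific fluxes to reproduce the dysregulation.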
Abstract: The pressure loss of a cross-flow perforated muffler has been computed with a procedure of physical modeling, simulation, and data processing. Three-dimensional computational fluid dynamics (CFD) has been used to investigate the relations of porosity, flow velocity, and hole diameter with the pressure loss. Accordingly, some preliminary results have been obtained: the pressure loss increases with decreasing porosity in a nearly hyperbolic trend; rising input flow velocity increases the pressure loss in a parabolic trend; and the diameter of the holes has little effect on the pressure loss of the muffler. Moreover, the holes in the perforated pipes make the air flow gently and evenly, which decreases the impact of the air on the walls and pipes in the muffler. A practical perforated muffler is used to illustrate the applicability of this method for pressure-loss computation, and the comparison shows that the CFD computation results have reference value for muffler design.
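The two reported trends (hyperbolic in porosity, parabolic in velocity) can be packaged into a simple illustrative correlation. The functional form and both coefficients below are placeholders that a real design study would fit to its own CFD runs, not values from this work:

```python
def pressure_loss(porosity, velocity, a=120.0, b=0.9):
    """Illustrative correlation matching the reported trends:
    hyperbolic in porosity (a / porosity), parabolic in velocity (b * v^2).
    Coefficients a, b are made-up placeholders, to be fitted from CFD data."""
    return a / porosity + b * velocity ** 2

print(round(pressure_loss(0.1, 10.0), 1))  # 1290.0 (toy units)
print(round(pressure_loss(0.2, 10.0), 1))  # 690.0: doubling porosity cuts the loss
```

Such a fitted surrogate is useful precisely because each CFD run is expensive: once calibrated, it lets a designer sweep porosity and flow velocity cheaply.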