The aim of this article is to explore potential directions for the development of artificial intelligence (AI). It points out that, while current AI can handle the statistical properties of complex systems, it has difficulty effectively processing and fully representing their spatiotemporal complexity patterns. The article also discusses a potential path of AI development in the engineering domain. Based on the existing understanding of the principles of multilevel complexity, this article suggests that consistency among the logical structures of datasets, AI models, model-building software, and hardware will be an important AI development direction and is worthy of careful consideration.
In recent years, large-scale artificial intelligence (AI) models have become a focal point in technology, attracting widespread attention and acclaim. Notable examples include Google's BERT and OpenAI's GPT, which have scaled their parameter sizes to hundreds of billions or even tens of trillions. This growth has been accompanied by a significant increase in the amount of training data, substantially improving the capabilities and performance of these models. Unlike previous reviews, this paper provides a comprehensive discussion of the algorithmic principles of large-scale AI models and their industrial applications from multiple perspectives. We first outline the evolutionary history of these models, highlighting milestone algorithms while exploring their underlying principles and core technologies. We then evaluate the challenges and limitations of large-scale AI models, including computational resource requirements, model parameter inflation, data privacy concerns, and specific issues related to multi-modal AI models, such as reliance on text-image pairs, inconsistencies in understanding and generation capabilities, and the lack of true "multi-modality". Various industrial applications of these models are also presented. Finally, we discuss future trends, predicting further expansion of model scale and the development of cross-modal fusion. This study provides valuable insights to inform and inspire future research and practice.
Robotic computing systems play an important role in enabling intelligent robotic tasks through intelligent algorithms and supporting hardware. In recent years, the evolution of robotic algorithms has traced a roadmap from traditional robotics to hierarchical and end-to-end models. This algorithmic advancement poses a critical challenge in achieving balanced system-wide performance. Therefore, algorithm-hardware co-design has emerged as the primary methodology, which analyzes algorithm behaviors on hardware to identify common computational properties. These properties can motivate both algorithm optimization to reduce computational complexity and hardware innovation, from architecture to circuit, for high performance and high energy efficiency. We then review recent works on robotic and embodied AI algorithms and computing hardware to demonstrate this algorithm-hardware co-design methodology. Finally, we discuss future research opportunities by answering two questions: (1) how to adapt computing platforms to the rapid evolution of embodied AI algorithms, and (2) how to transform the potential of emerging hardware innovations into end-to-end inference improvements.
In recent years, the rapid advancement of artificial intelligence (AI) has fostered deep integration between large AI models and robotic technology. Robots such as robotic dogs capable of carrying heavy loads on mountainous terrain or performing waste disposal tasks, and humanoid robots that can execute high-precision component installations, have gradually reached the public eye, raising expectations for embodied intelligent robots.
By comparing price plans offered by several retail energy firms, end users with smart meters and controllers may optimize their energy cost portfolios, a possibility created by the growth of deregulated retail power markets. To help smart grid end users reduce electricity payments and usage dissatisfaction, this article proposes a reinforcement learning based decision system to aid electricity price plan selection. The decision problem is modeled as an enhanced state-based Markov decision process (MDP) without transition probabilities. A kernel-approximation-integrated batch Q-learning approach is used to solve it. Several adjustments to the sampling and data representation are made to increase computational and prediction performance. Using a continuous high-dimensional state space, the proposed approach can uncover the underlying characteristics of time-varying pricing schemes. Without any prior knowledge of the market environment, the best decision-making policy can be learned via case studies that use data from actual historical price plans. Experiments show that the proposed decision approach can reduce cost and energy usage dissatisfaction by using user data to build an accurate prediction strategy. This research also examines how smart city energy planners rely on precise load forecasts. It presents a hybrid method that extracts associated characteristics to improve accuracy in residential power consumption forecasts using machine learning (ML). Forecast precision is measured with loss functions such as the RMSE. In response to the growing interest in explainable artificial intelligence (XAI), this research presents a methodology for estimating smart home energy usage. Using SHapley Additive exPlanations (SHAP), this strategy makes it easy for consumers to comprehend their energy use trends. To predict future energy use, the study employs gradient boosting in conjunction with long short-term memory neural networks.
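The SHAP-style attributions described above can be illustrated with an exact Shapley computation on a toy model. The "energy use" model, feature values, and all-zeros baseline below are illustrative assumptions, not the paper's actual gradient-boosting/LSTM pipeline:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x. 'Absent' features
    are replaced by a baseline value, a common SHAP-style convention."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical model over (temperature, occupancy, hour-of-day) features.
model = lambda z: 2.0 * z[0] + 5.0 * z[1] + 0.5 * z[2]
phi = shapley_values(model, x=[20.0, 3.0, 18.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # attributions sum to f(x) - f(baseline)
```

For a linear model the attributions reduce to weight times deviation from baseline, which makes the exact enumeration easy to check; real SHAP implementations approximate this sum because it is exponential in the number of features.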
This study explores the determinants of impact on ecology in Northern Tanzania. By examining key socio-economic, institutional, and structural factors influencing engagement, the study provides insights for strengthening agribusiness networks and improving livelihoods. Data were collected from 215 farmers and 320 traders through a multistage sampling procedure. A Heckman sample selection model was used in the data analysis; the findings showed that the key factors influencing farmers' decisions on ecology were gender and years of formal education (p<0.1), and access to finance and off-farm income (p<0.05). The degree of farmers' participation in social groups was influenced by age, household size, off-farm income, and business network (p<0.05); number of years in formal education and access to finance (p<0.01); and distance to the market (p<0.1). The decision of traders to impact on ecology was significantly influenced by age and trading experience (p<0.1). Meanwhile, the degree of their involvement in social groups was strongly affected by gender, formal education, and trust (p<0.01), as well as by access to finance and business networks (p<0.05). The study concluded that natural ecology is influenced by socio-economic and structural factors, but trust among group members determines the degree of participation. The study recommends that strategies to improve agribusiness networks should address the underlying causes of impact on ecology and strengthen available social groups to improve the performance of farmers and traders.
Prompt engineering, the art of crafting effective prompts for artificial intelligence (AI) models, has emerged as a pivotal factor in determining the quality and usefulness of AI-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks that were once thought to be an exclusively human domain. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model's capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model's training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
From the globally popular video game Black Myth: Wukong, which has garnered a dedicated player base around the world, to DeepSeek, an artificial intelligence (AI) model developed at an impressively low cost that rivals U.S. company OpenAI's ChatGPT, to the perfectly synchronized robotic ensemble performing with precision at this year's China Central Television Spring Festival Gala, a Chinese New Year's Eve extravaganza that aired on January 28, these big tech breakthroughs have risen to prominence one after another, generating massive buzz.
Artificial intelligence (AI) models promise to improve the accuracy of wireless positioning systems, particularly in indoor environments, where the unpredictable radio propagation channel is a great challenge. Although great efforts have been made to explore the effectiveness of different AI models, it is still an open problem whether these models, trained with data collected from all base stations (BSs), can work when some BSs are unavailable. In this paper, we make the first effort to enhance the generalization ability of an AI wireless positioning model to adapt to scenarios where only some BSs work. In particular, a Siamese Network based Wireless Positioning Model (SNWPM) is proposed to predict the location of mobile user equipment from channel state information (CSI) collected from 5G BSs. Furthermore, a Feature Aware Attention Module (FAAM) is introduced to reinforce the capability of feature extraction from CSI data. Experiments are conducted on the 2022 Wireless Communication AI Competition (WAIC) dataset. The proposed SNWPM achieves decimeter-level positioning accuracy even if the data of some BSs are unavailable. Compared with other AI models, the proposed SNWPM can reduce the positioning error by nearly 50% to more than 60% while using fewer parameters and lower computational resources.
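The Siamese idea, two inputs passed through one shared encoder and compared by distance, can be illustrated with a minimal sketch. The toy encoder weights and CSI vectors below are hypothetical; the paper's SNWPM and FAAM are far richer models:

```python
import math

def embed(x, w):
    """Shared encoder: both branches of a Siamese network apply the
    SAME weights w (here a toy linear map with tanh nonlinearity)."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

def contrastive_loss(a, b, same, margin=1.0):
    """Pull matching pairs together; push non-matching pairs at least
    `margin` apart in embedding space."""
    d = math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return d ** 2 if same else max(0.0, margin - d) ** 2

w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]]              # hypothetical shared weights
csi_a, csi_b = [1.0, 0.2, -0.5], [0.9, 0.25, -0.45]   # two similar CSI snapshots
ea, eb = embed(csi_a, w), embed(csi_b, w)
loss_same = contrastive_loss(ea, eb, same=True)
print(loss_same)  # small, since the snapshots are close
```

The weight sharing is the point: because one encoder serves both branches, a base station missing from one input degrades the distance gracefully rather than invalidating a station-specific model.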
In a prior practice and policy article published in Healthcare Science, we introduced the deployed application of an artificial intelligence (AI) model to predict longer-term inpatient readmissions to guide community care interventions for patients with complex conditions in the context of Singapore's Hospital to Home (H2H) program, which has been operating since 2017. In this follow-on practice and policy article, we further elaborate on Singapore's H2H program and care model, and its supporting AI model for multiple readmission prediction, in the following ways: (1) by providing updates on the AI and supporting information systems; (2) by reporting on customer engagement and related service delivery outcomes, including staff-related time savings and patient benefits in terms of bed days saved; (3) by sharing lessons learned with respect to (i) analytics challenges encountered due to the high degree of heterogeneity and resulting variability of the data set associated with the population of program participants, (ii) balancing competing needs for simpler and stable predictive models versus continuing to further enhance models and add yet more predictive variables, and (iii) the complications of continuing to make model changes when the AI part of the system is highly interlinked with supporting clinical information systems; (4) by highlighting how this H2H effort supported broader COVID-19 response efforts across Singapore's public healthcare system; and finally (5) by commenting on how the experiences and related capabilities acquired from running this H2H program, its community care model, and its supporting AI prediction model are expected to contribute to the next wave of Singapore's public healthcare efforts from 2023 onwards. For the convenience of the reader, some content that introduces the H2H program and the multiple readmissions AI prediction model from the prior Healthcare Science publication is repeated at the beginning of this article.
As modern communication technology advances apace, the identification of digital communication signals plays an important role in cognitive radio networks and in communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, a view that has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models, which should receive more attention in future research.
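A minimal illustration of how such adversarial perturbations arise is the Fast Gradient Sign Method (FGSM) applied to a toy logistic classifier. The weights and sample below are hypothetical stand-ins; the paper attacks a full end-to-end modulation classifier:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method: step each input component in the
    sign of the loss gradient. For cross-entropy loss on the logit
    w.x + b, the gradient w.r.t. x_i is (p - y) * w_i."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [1.5, -2.0, 0.7], 0.1    # toy 'trained' classifier (hypothetical)
x, y = [0.4, -0.3, 0.9], 1      # a sample the model classifies correctly
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(p_clean, p_adv)  # confidence in the true class drops after the attack
```

Even this three-weight example shows the mechanism: a bounded, sign-only perturbation aligned with the loss gradient flips the decision, which is exactly why small crafted distortions of a received waveform can defeat a modulation classifier.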
The proliferation of digital payment methods facilitated by various online platforms and applications has led to a surge in financial fraud, particularly in credit card transactions. Advanced technologies such as machine learning have been widely employed to enhance the early detection and prevention of losses arising from potentially fraudulent activities. However, a prevalent approach in the existing literature involves the use of extensive data sampling and feature selection algorithms as a precursor to subsequent investigations. While sampling techniques can significantly reduce computational time, the resulting dataset relies on generated data and on the accuracy of the pre-processing machine learning models employed. Such datasets often lack true representativeness of real-world data, potentially introducing secondary issues that affect the precision of the results. For instance, under-sampling may result in the loss of critical information, while over-sampling can lead to overfitting of machine learning models. In this paper, we propose a classification study of credit card fraud using fundamental machine learning models, without applying any sampling techniques, on all the features present in the original dataset. The results indicate that the Support Vector Machine (SVM) consistently achieves classification performance exceeding 90% across various evaluation metrics. This finding serves as a valuable reference for future research, encouraging comparative studies on the original dataset without reliance on sampling techniques. Furthermore, we explore hybrid machine learning techniques, such as ensemble learning built on SVM, K-Nearest Neighbor (KNN), and decision trees, highlighting their potential advancements in the field. The study demonstrates that the proposed machine learning models yield promising results, suggesting that pre-processing the dataset with sampling algorithms or additional machine learning techniques may not always be necessary. This research contributes to the field of credit card fraud detection by emphasizing the potential of employing machine learning models directly on original datasets, thereby simplifying the workflow and potentially improving the accuracy and efficiency of fraud detection systems.
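One reason sampling is so common in this domain is extreme class imbalance, which makes plain accuracy uninformative. The sketch below, with made-up labels, computes the class-wise metrics a "performance exceeding 90% across various evaluation metrics" claim would refer to:

```python
def fraud_metrics(y_true, y_pred):
    """Precision, recall, and F1 for the fraud (positive) class.
    On heavily imbalanced credit-card data, accuracy alone is
    misleading, so class-wise metrics are reported instead."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 1 fraud in 10 transactions: a classifier predicting all-zero scores
# 90% accuracy yet zero recall on the class that actually matters.
y_true = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
p, r, f1 = fraud_metrics(y_true, [0] * 10)
print(p, r, f1)  # 0.0 0.0 0.0
```

This is why evaluating an unsampled original dataset requires metrics like these rather than accuracy.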
Predicting the progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) is a critical challenge for enabling early intervention and improving patient outcomes. While longitudinal multi-modal neuroimaging data holds immense potential for capturing the spatio-temporal dynamics of disease progression, its effective analysis is hampered by significant challenges: temporal heterogeneity (irregularly sampled scans), multi-modal misalignment, and the propensity of deep learning models to learn spurious, non-causal correlations. We propose CASCADE-Net, a novel end-to-end pipeline for robust and interpretable MCI-to-AD progression prediction. Our architecture introduces a Dynamic Temporal Alignment Module that employs a Neural Ordinary Differential Equation (Neural ODE) to model the continuous, underlying progression of pathology from irregularly sampled scans, effectively mapping heterogeneous patient data to a unified latent timeline. This aligned, noise-reduced spatio-temporal data is then processed by a predictive model featuring a novel Causal Spatial Attention mechanism. This mechanism not only identifies the critical brain regions and their evolution predictive of conversion but also incorporates a counterfactual constraint during training. This constraint ensures the learned features are causally linked to AD pathology by encouraging invariance to non-causal, confounder-based changes. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that CASCADE-Net significantly outperforms state-of-the-art sequential models in prognostic accuracy. Furthermore, our model provides highly interpretable, causally grounded attention maps, offering valuable insights into the disease progression process and fostering greater clinical trust.
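The core of a Neural ODE is that latent state between observations is obtained by numerically integrating a learned vector field, which is what lets irregularly sampled scans share one timeline. A minimal sketch with fixed-step Euler integration, where a hypothetical linear decay stands in for the learned dynamics:

```python
def euler_integrate(f, z0, t0, t1, steps=100):
    """Integrate dz/dt = f(z) from t0 to t1 with fixed-step Euler.
    A Neural ODE replaces f with a trained network; here f is a toy
    linear decay standing in for latent 'pathology' dynamics."""
    z = list(z0)
    h = (t1 - t0) / steps
    for _ in range(steps):
        dz = f(z)
        z = [zi + h * di for zi, di in zip(z, dz)]
    return z

decay = lambda z: [-0.5 * zi for zi in z]   # hypothetical dynamics

# Irregularly sampled 'scans' at t = 0.0, 0.7, 1.9 are mapped onto one
# latent timeline by integrating from each observation to the next.
z = [1.0]
for t_prev, t_next in [(0.0, 0.7), (0.7, 1.9)]:
    z = euler_integrate(decay, z, t_prev, t_next)
print(z[0])  # ≈ exp(-0.5 * 1.9), i.e. about 0.39
```

Because the integrator accepts any pair of timestamps, a 0.7-year gap and a 1.2-year gap are handled by the same mechanism; production Neural ODE libraries use adaptive solvers rather than fixed-step Euler.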
Generating carbon credits in rural and wetland lagoon environments is important for their economic and social survival. There are many methodologies to study and certify carbon sinks, such as ISO 14064, VCS VERRA, UNI-BNEUTRAL, GOLD STANDARD, and others. Many methods devised before 2018 are obsolete, since research has developed greatly in recent years. The methods all differ, but they share continuous and real monitoring of the environment to ensure a true CCS (Carbon Capture and Storage) action. In the absence of monitoring, the method uses a system of provision of carbon credits called a "buffer". This system allows maintaining a credit-generating activity even in the presence of important anomalies due to adverse weather events. This research shows the complex analytic web of the different sensors in a continuous environmental monitoring system via GSM (Global System for Mobile Communications) and IoT (Internet of Things). By 2011, a monitoring network had been installed in the wetland environments of Northern Italy's Venetian Lagoon (a UNESCO heritage site) and used to understand and validate the CCS action. The ThingSpeak cloud platform is used to collect data and to send alerts to the user if the biological sink reverses to emission. The large dataset obtained was used to prepare an AI (Artificial Intelligence) model, "CCS wetland forecast", in Google Colab. This model can fit the trend, avoiding direct, spot chemical field analysis, and demonstrates the real efficacy of the chosen model. This network is now incorporated in the Italian national method UNI PdR 99:2021 BNeutral for the generation of carbon credits.
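The sink-to-emission alert described above can be sketched as a rolling-mean threshold check. The flux values, sign convention (negative = CO2 uptake, positive = emission), and window size below are assumptions for illustration, not the monitoring network's actual rule:

```python
def sink_alert(co2_flux_series, threshold=0.0, window=3):
    """Flag when a wetland carbon sink reverses to a net emitter:
    alert if the rolling mean of CO2 flux (negative = uptake,
    positive = emission; sign convention is an assumption) stays
    above `threshold` for `window` consecutive readings."""
    alerts = []
    for i in range(window - 1, len(co2_flux_series)):
        recent = co2_flux_series[i - window + 1 : i + 1]
        if sum(recent) / window > threshold:
            alerts.append(i)
    return alerts

# Hourly flux readings: uptake turns into sustained emission.
flux = [-2.1, -1.8, -0.5, 0.4, 1.2, 1.5]
print(sink_alert(flux))  # indices where the rolling mean turns positive
```

Averaging over a window rather than alerting on single readings is what keeps transient sensor noise or a brief weather event from triggering a false reversal alarm.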
Advanced machine learning (ML) algorithms have outperformed traditional approaches in various forecasting applications, especially electricity price forecasting (EPF). However, the prediction accuracy of ML models drops substantially if the input data is not similar to the data seen by the model during training. This is often observed in EPF problems when market dynamics change owing to a rise in fuel prices, an increase in renewable penetration, a change in operational policies, etc. While the dip in model accuracy for unseen data is a cause for concern, what is more challenging is not knowing when the ML model will respond in such a manner. Such uncertainty makes power market participants, like bidding agents and retailers, vulnerable to substantial financial loss caused by the prediction errors of EPF models. Therefore, it becomes essential to identify whether or not a model prediction at a given instance is trustworthy. In this light, this paper proposes a trust algorithm for EPF users based on explainable artificial intelligence techniques. The suggested algorithm generates trust scores that reflect the model's prediction quality for each new input. These scores are formulated in two stages: in the first stage, a coarse version of the score is formed using correlations of local and global explanations, and in the second stage, the score is fine-tuned by the Shapley additive explanations values of different features. Such score-based explanations are more straightforward than feature-based visual explanations for EPF users like asset managers and traders. Datasets from Italy's and ERCOT's electricity markets validate the efficacy of the proposed algorithm. Results show that the algorithm has more than 85% accuracy in identifying good predictions when the data distribution is similar to the training dataset. In the case of a distribution shift, the algorithm shows the same accuracy level in identifying bad predictions.
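The first-stage idea, scoring trust by how well a prediction's local explanation correlates with the model's global explanation, can be sketched directly. The explanation vectors and the [0, 1] rescaling below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def coarse_trust_score(local_expl, global_expl):
    """Stage-one trust score: agreement between the per-prediction
    (local) feature attributions and the model-wide (global) ones,
    mapped from [-1, 1] correlation onto [0, 1]."""
    return (pearson(local_expl, global_expl) + 1.0) / 2.0

global_expl = [0.9, 0.5, 0.1, 0.3]   # hypothetical mean |SHAP| per feature
in_dist = [0.85, 0.55, 0.12, 0.28]   # local explanation, familiar input
shifted = [0.1, 0.2, 0.9, 0.6]       # local explanation after a market shift
print(coarse_trust_score(in_dist, global_expl),
      coarse_trust_score(shifted, global_expl))
```

The intuition is that under a distribution shift the model leans on features in an unfamiliar pattern, so the local attribution profile diverges from the global one and the trust score collapses, flagging the prediction before its error is observable.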
Quality education is one of the primary objectives of any nation-building strategy and is one of the seventeen Sustainable Development Goals (SDGs) of the United Nations. To provide quality education, delivering top-quality content is not enough; understanding the learners' emotions during the learning process is equally important. However, most research in this area uses general data accessed from Twitter or other publicly available databases. These databases are generally not an ideal representation of the actual learning process or the learners' sentiments about it. This research collected real data from learners, mainly undergraduate university students from different regions and cultures. By analyzing the emotions of the students, appropriate steps can be suggested to improve the quality of the education they receive. To understand learning emotions, the XLNet technique is used. We investigated the transfer learning method to adapt an efficient model for learners' sentiment detection and classification based on real data. An experiment on the collected data shows that the proposed approach outperforms aspect-enhanced sentiment analysis and topic sentiment analysis in the online learning community.
Recent breakthrough achievements such as the launch of DeepSeek's revolutionary AI models and the collection of samples from the far side of the moon are indicators of just how far China has developed in science and technology.
The rapid advancement of artificial intelligence technologies, particularly in recent years, has led to the emergence of several large-parameter artificial intelligence weather forecast models. These models represent a significant breakthrough, overcoming limitations of traditional numerical weather prediction models and indicating the emergence of potentially profound tools for atmosphere-ocean forecasting. This study explores the evolution of these advanced artificial intelligence forecast models and, based on the identified commonalities, proposes the "Three Large Rules" for large weather forecast models: a large number of parameters, a large number of predictands, and large potential applications. We discuss the capacity of artificial intelligence to revolutionize numerical weather prediction, briefly outlining the underlying reasons for the significant improvement in weather forecasting. While acknowledging the high accuracy, computational efficiency, and ease of deployment of large artificial intelligence forecast models, we also emphasize the irreplaceable value of traditional numerical forecasts and explore the challenges in the future development of large-scale artificial intelligence atmosphere-ocean forecast models. We believe that the optimal future of atmosphere-ocean weather forecasting lies in achieving a seamless integration of artificial intelligence and traditional numerical models. Such a synthesis is anticipated to offer a more advanced and reliable approach for improved atmosphere-ocean forecasts. Finally, we illustrate how forecasters can leverage large weather forecast models through an example: building an artificial intelligence model for global ocean wave forecasting.
The cyber physician will scan you now: how AI models are enhancing diagnostics and treatment in hospitals. After studying an MRI scan on February 13 at Beijing Children's Hospital (BCH), 13 top pediatricians were surprised when the country's first AI pediatrician came to a conclusion identical to theirs in the case of an 8-year-old boy who had been having seizures.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62406207 and 62476224); the Project of Basic Scientific Research of Central Universities of China (No. J2023-026); the project of the Science and Technology Department of Sichuan Province (No. 25QNJJ5597); the Science and Technology Project of the Tibet Autonomous Region (No. XZ202401ZY0016); and the Project of the Sichuan Province Engineering Technology Research Center of General Aircraft Maintenance (No. GAMRC2023YB06).
Funding: Supported in part by NSFC under Grant 62422407; in part by RGC under Grant 26204424; and in part by ACCESS (AI Chip Center for Emerging Smart Systems), sponsored by the InnoHK initiative of the Innovation and Technology Commission of the Hong Kong Special Administrative Region Government.
Abstract: Robotic computing systems play an important role in enabling intelligent robotic tasks through intelligent algorithms and supporting hardware. In recent years, the evolution of robotic algorithms has traced a roadmap from traditional robotics to hierarchical and end-to-end models. This algorithmic advancement poses a critical challenge in achieving balanced system-wide performance. Algorithm-hardware co-design has therefore emerged as the primary methodology: it analyzes algorithm behaviors on hardware to identify common computational properties, which can motivate both algorithm optimization to reduce computational complexity and hardware innovation, from architecture to circuit, for high performance and high energy efficiency. We then review recent works on robotic and embodied AI algorithms and computing hardware to demonstrate this algorithm-hardware co-design methodology. Finally, we discuss future research opportunities by answering two questions: (1) how to adapt computing platforms to the rapid evolution of embodied AI algorithms, and (2) how to transform the potential of emerging hardware innovations into end-to-end inference improvements.
Abstract: In recent years, the rapid advancement of artificial intelligence (AI) has fostered deep integration between large AI models and robotic technology. Robots such as robotic dogs capable of carrying heavy loads over mountainous terrain or performing waste-disposal tasks, and humanoid robots that can execute high-precision component installations, have gradually entered the public eye, raising expectations for embodied intelligent robots.
Abstract: Owing to the growth of deregulated retail power markets, end users with smart meters and controllers may optimize their energy cost portfolios by comparing price plans offered by several retail energy firms. To help smart-grid end users reduce electricity payments and usage dissatisfaction, this article proposes a reinforcement-learning-based decision system to aid with electricity price plan selection. An enhanced state-based Markov decision process (MDP) without transition probabilities models the decision problem, and a batch Q-learning approach integrated with kernel approximation is used to solve it. Several adjustments to the sampling and data representation are made to increase computational and prediction performance. Using a continuous high-dimensional state space, the proposed approach can uncover the underlying characteristics of time-varying pricing schemes. Without any prior knowledge of the market environment, the best decision-making policy can be learned via case studies that use data from actual historical price plans. Experiments show that the proposed decision approach can reduce cost and energy-usage dissatisfaction by using user data to build an accurate prediction strategy. This research also examines how smart-city energy planners rely on precise load forecasts. It presents a hybrid method that extracts associated characteristics to improve accuracy in residential power-consumption forecasts using machine learning (ML); forecast precision is measured with loss functions such as the RMSE. In response to the growing interest in explainable artificial intelligence (XAI), the study presents a methodology for estimating smart-home energy usage that, using Shapley Additive Explanations (SHAP), makes it easy for consumers to comprehend their energy-use trends. To predict future energy use, the study employs gradient boosting in conjunction with long short-term memory neural networks.
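The sample-based, transition-probability-free Q-learning idea described above can be sketched as follows. This is a minimal tabular illustration under assumed settings: the discretized states, the three hypothetical price-plan actions, and the learning parameters are all illustrative, not the paper's actual kernel-approximated formulation.

```python
import random

# Hypothetical setup: states are discretized usage levels, actions are price plans.
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA = 0.1, 0.95  # learning rate and discount factor (assumed values)

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def batch_q_update(Q, batch):
    """One sweep of batch Q-learning over stored (s, a, r, s') transitions.
    No transition probabilities are needed: the update is purely sample-based."""
    for s, a, r, s_next in batch:
        target = r + GAMMA * max(Q[s_next])
        Q[s][a] += ALPHA * (target - Q[s][a])
    return Q

# Synthetic transitions standing in for historical price-plan data.
random.seed(0)
batch = [(random.randrange(N_STATES), random.randrange(N_ACTIONS),
          random.uniform(-1, 1), random.randrange(N_STATES))
         for _ in range(200)]

for _ in range(50):  # repeated sweeps until the value table stabilizes
    batch_q_update(Q, batch)

best_plan = max(range(N_ACTIONS), key=lambda a: Q[0][a])
print("Best plan in state 0:", best_plan)
```

The paper's kernel approximation replaces this lookup table with a function approximator over a continuous state space; the batch update principle is the same.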
Funding: Financed as part of the project "Development of a methodology for instrumental base formation for analysis and modeling of the spatial socio-economic development of systems based on internal reserves in the context of digitalization" (FSEG-2023-0008).
Abstract: This study explores the determinants of ecological impact in Northern Tanzania. By examining key socio-economic, institutional, and structural factors influencing engagement, the study provides insights into strengthening agribusiness networks and improving livelihoods. Data were collected from 215 farmers and 320 traders through a multistage sampling procedure. A Heckman sample selection model was used for data analysis; the findings showed that the key factors influencing farmers' decisions on ecology were gender and years of formal education at p<0.1, and access to finance and off-farm income at p<0.05. The degree of farmers' participation in social groups was influenced by age, household size, off-farm income, and business network at p<0.05; number of years of formal education and access to finance at p<0.01; and distance to the market at p<0.1. Traders' decisions on ecological impact were significantly influenced by age and trading experience at p<0.1, while the degree of their involvement in social groups was strongly affected by gender, formal education, and trust at p<0.01, as well as by access to finance and business networks at p<0.05. The study concluded that natural ecology is influenced by socio-economic and structural factors, but trust among group members determines the degree of participation. The study recommends that strategies to improve agribusiness networks should address the underlying causes of ecological impact and strengthen available social groups to improve the performance of farmers and traders.
Abstract: Prompt engineering, the art of crafting effective prompts for artificial intelligence models, has emerged as a pivotal factor in determining the quality and usefulness of AI (Artificial Intelligence)-generated outputs. This practice involves strategically designing and structuring prompts to guide AI models toward desired outcomes, ensuring that they generate relevant, informative, and accurate responses. The significance of prompt engineering cannot be overstated. Well-crafted prompts can significantly enhance the capabilities of AI models, enabling them to perform tasks that were once thought to be an exclusively human domain. By providing clear and concise instructions, prompts can guide AI models to generate creative text, translate languages, write different kinds of creative content, and answer questions in an informative way. Moreover, prompt engineering can help mitigate biases and ensure that AI models produce outputs that are fair, equitable, and inclusive. However, prompt engineering is not without its challenges. Crafting effective prompts requires a deep understanding of both the AI model's capabilities and the specific task at hand. Additionally, the quality of the prompts can be influenced by factors such as the model's training data [1] and the complexity of the task. As AI models continue to evolve, prompt engineering will likely become even more critical in unlocking their full potential.
Abstract: From the globally popular video game Black Myth: Wukong, which has garnered a dedicated player base around the world, to DeepSeek, an artificial intelligence (AI) model developed at an impressively low cost that rivals U.S. company OpenAI's ChatGPT, and the perfectly synchronized robotic ensemble performing with precision at this year's China Central Television Spring Festival Gala, a Chinese New Year's Eve extravaganza that aired on January 28, these big tech breakthroughs have risen to prominence one after another, generating massive buzz.
Funding: Supported by the National Natural Science Foundation of China (No. 62076251); sponsored by the IMT-2020 (5G) Promotion Group 5G+AI Work Group; jointly sponsored by the China Academy of Information and Communications Technology, Guangdong OPPO Mobile Telecommunications Corp., Ltd., vivo Mobile Communication Co., Ltd., and Huawei Technologies Co., Ltd.
Abstract: Artificial intelligence (AI) models promise to improve the accuracy of wireless positioning systems, particularly in indoor environments where unpredictable radio propagation channels pose a great challenge. Although great efforts have been made to explore the effectiveness of different AI models, it remains an open problem whether these models, trained with data collected from all base stations (BSs), still work when some BSs are unavailable. In this paper, we make the first effort to enhance the generalization ability of an AI wireless positioning model to the scenario where only some BSs work. In particular, a Siamese Network based Wireless Positioning Model (SNWPM) is proposed to predict the location of mobile user equipment from channel state information (CSI) collected from 5G BSs. Furthermore, a Feature Aware Attention Module (FAAM) is introduced to reinforce the capability of feature extraction from CSI data. Experiments are conducted on the 2022 Wireless Communication AI Competition (WAIC) dataset. The proposed SNWPM achieves decimeter-level positioning accuracy even if the data of some BSs are unavailable. Compared with other AI models, the proposed SNWPM reduces the positioning error by nearly 50% to more than 60% while using fewer parameters and lower computational resources.
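The core Siamese idea referenced above is that two inputs pass through the same shared-weight encoder and are compared in embedding space. The toy sketch below illustrates only that principle; the single-layer encoder, the dimensions, and the synthetic "CSI" vectors are assumptions for illustration, not the SNWPM design.

```python
import math
import random

random.seed(1)
DIM_IN, DIM_OUT = 8, 4  # toy CSI feature size and embedding size (assumed)
# Shared encoder weights: both branches of the Siamese pair use the same matrix.
W = [[random.gauss(0, 0.5) for _ in range(DIM_IN)] for _ in range(DIM_OUT)]

def encode(x):
    """Shared-weight branch: a single linear layer with tanh non-linearity."""
    return [math.tanh(sum(w_i * x_i for w_i, x_i in zip(row, x))) for row in W]

def siamese_distance(x1, x2):
    """Euclidean distance between the embeddings of the two branches."""
    e1, e2 = encode(x1), encode(x2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))

csi_a = [random.random() for _ in range(DIM_IN)]
csi_b = [v + 0.01 for v in csi_a]                  # nearly identical measurement
csi_c = [random.random() for _ in range(DIM_IN)]   # unrelated measurement

print(siamese_distance(csi_a, csi_b))  # small: similar inputs stay close
print(siamese_distance(csi_a, csi_c))  # typically larger for unrelated inputs
```

In a trained system the encoder would be a deep network learned so that embeddings of CSI from the same location cluster together regardless of which BSs contributed data.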
Abstract: In a prior practice and policy article published in Healthcare Science, we introduced the deployed application of an artificial intelligence (AI) model to predict longer-term inpatient readmissions to guide community care interventions for patients with complex conditions, in the context of Singapore's Hospital to Home (H2H) program that has been operating since 2017. In this follow-on practice and policy article, we further elaborate on Singapore's H2H program and care model, and its supporting AI model for multiple readmission prediction, in the following ways: (1) by providing updates on the AI and supporting information systems; (2) by reporting on customer engagement and related service delivery outcomes, including staff-related time savings and patient benefits in terms of bed days saved; (3) by sharing lessons learned with respect to (i) analytics challenges encountered due to the high degree of heterogeneity and resulting variability of the data set associated with the population of program participants, (ii) balancing competing needs for simpler and stable predictive models versus continuing to enhance models and add yet more predictive variables, and (iii) the complications of continuing to make model changes when the AI part of the system is highly interlinked with supporting clinical information systems; (4) by highlighting how this H2H effort supported broader COVID-19 response efforts across Singapore's public healthcare system; and finally (5) by commenting on how the experiences and related capabilities acquired from running this H2H program, its community care model, and its supporting AI prediction model are expected to contribute to the next wave of Singapore's public healthcare efforts from 2023 onwards. For the convenience of the reader, some content introducing the H2H program and the multiple-readmissions AI prediction model that appeared in the prior Healthcare Science publication is repeated at the beginning of this article.
Funding: Supported by the National Natural Science Foundation of China (61771154) and the Fundamental Research Funds for the Central Universities (3072022CF0601); also supported by the Key Laboratory of Advanced Marine Communication and Information Technology, Ministry of Industry and Information Technology, Harbin Engineering University, Harbin, China.
Abstract: As modern communication technology advances apace, digital communication signal identification plays an important role in cognitive radio networks and communication monitoring and management systems. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data dependence and inexplicability of AI models and the openness of electromagnetic space, physical-layer digital communication signal identification models are threatened by adversarial attacks. Adversarial examples pose a common threat to AI models: well-designed, slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signal identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on an end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration and verification system is developed to show that adversarial attacks are a real threat to digital communication signal identification models, which should receive more attention in future research.
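The kind of well-designed perturbation the abstract warns about can be illustrated with a fast-gradient-sign-style attack on a toy linear classifier. Everything here is an illustrative assumption rather than the paper's experimental setup: the weights, the input, and the exaggerated step size eps are chosen only so the decision flip is visible by hand.

```python
# Toy linear "modulation classifier": score = w . x; sign(score) is the class.
w = [0.5, -1.0, 0.8, 0.3]
x = [1.0, -0.5, 1.2, 0.4]  # a clean input sample

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, eps):
    """FGSM-style step: shift each component against the current decision.
    For a linear score, the gradient with respect to x is simply w, so the
    sign of each weight gives the direction of the perturbation."""
    direction = 1.0 if score(w, x) > 0 else -1.0
    return [xi - direction * eps * (1.0 if wi > 0 else -1.0)
            for wi, xi in zip(w, x)]

adv = fgsm_perturb(w, x, eps=1.0)  # eps deliberately large for illustration
print("clean score:", score(w, x), "adversarial score:", score(w, adv))
```

On real models the perturbation is kept imperceptibly small; the mechanism of following the gradient sign toward the decision boundary is the same.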
Abstract: The proliferation of digital payment methods facilitated by various online platforms and applications has led to a surge in financial fraud, particularly in credit card transactions. Advanced technologies such as machine learning have been widely employed to enhance the early detection and prevention of losses arising from potentially fraudulent activities. However, a prevalent approach in the existing literature involves the use of extensive data sampling and feature selection algorithms as a precursor to subsequent investigations. While sampling techniques can significantly reduce computational time, the resulting dataset relies on generated data and on the accuracy of the pre-processing machine learning models employed. Such datasets often lack true representativeness of real-world data, potentially introducing secondary issues that affect the precision of the results. For instance, under-sampling may result in the loss of critical information, while over-sampling can lead to overfitting of machine learning models. In this paper, we propose a classification study of credit card fraud using fundamental machine learning models without the application of any sampling techniques, on all the features present in the original dataset. The results indicate that the Support Vector Machine (SVM) consistently achieves classification performance exceeding 90% across various evaluation metrics. This finding serves as a valuable reference for future research, encouraging comparative studies on the original dataset without reliance on sampling techniques. Furthermore, we explore hybrid machine learning techniques, such as ensemble learning constructed from SVM, K-Nearest Neighbor (KNN), and decision tree classifiers, highlighting their potential advancements in the field. The study demonstrates that the proposed machine learning models yield promising results, suggesting that pre-processing the dataset with sampling algorithms or additional machine learning techniques may not always be necessary. This research contributes to the field of credit card fraud detection by emphasizing the potential of employing machine learning models directly on original datasets, thereby simplifying the workflow and potentially improving the accuracy and efficiency of fraud detection systems.
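The no-sampling approach advocated above can be sketched with a minimal linear SVM trained by hinge-loss sub-gradient descent directly on an imbalanced dataset, with no resampling step. The synthetic data, hyperparameters, and two-feature setup below are illustrative assumptions, not the paper's actual pipeline.

```python
import random

random.seed(2)
# Synthetic imbalanced data: ~95% legitimate (-1), ~5% fraud (+1), 2 features.
data = []
for _ in range(190):
    data.append(([random.gauss(0, 1), random.gauss(0, 1)], -1))
for _ in range(10):
    data.append(([random.gauss(3, 1), random.gauss(3, 1)], +1))

w, b = [0.0, 0.0], 0.0
LR, LAM = 0.01, 0.001  # learning rate and L2 regularization (assumed)

for epoch in range(200):  # sub-gradient descent on the regularized hinge loss
    for x, y in data:
        margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
        if margin < 1:  # point inside the margin: hinge term is active
            w = [wi - LR * (LAM * wi - y * xi) for wi, xi in zip(w, x)]
            b += LR * y
        else:           # outside the margin: only the regularizer shrinks w
            w = [wi - LR * LAM * wi for wi in w]

correct = sum(1 for x, y in data
              if (sum(wi * xi for wi, xi in zip(w, x)) + b) * y > 0)
print("training accuracy:", correct / len(data))
```

Note that with heavy class imbalance, accuracy alone is misleading (always predicting "legitimate" already scores 95% here), which is why the abstract's claim spans multiple evaluation metrics.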
Abstract: Predicting the progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) is a critical challenge for enabling early intervention and improving patient outcomes. While longitudinal multi-modal neuroimaging data holds immense potential for capturing the spatio-temporal dynamics of disease progression, its effective analysis is hampered by significant challenges: temporal heterogeneity (irregularly sampled scans), multi-modal misalignment, and the propensity of deep learning models to learn spurious, non-causal correlations. We propose CASCADE-Net, a novel end-to-end pipeline for robust and interpretable MCI-to-AD progression prediction. Our architecture introduces a Dynamic Temporal Alignment Module that employs a Neural Ordinary Differential Equation (Neural ODE) to model the continuous underlying progression of pathology from irregularly sampled scans, effectively mapping heterogeneous patient data to a unified latent timeline. This aligned, noise-reduced spatio-temporal data is then processed by a predictive model featuring a novel Causal Spatial Attention mechanism. This mechanism not only identifies the critical brain regions, and their evolution, that are predictive of conversion, but also incorporates a counterfactual constraint during training. This constraint ensures that the learned features are causally linked to AD pathology by encouraging invariance to non-causal, confounder-based changes. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that CASCADE-Net significantly outperforms state-of-the-art sequential models in prognostic accuracy. Furthermore, our model provides highly interpretable, causally grounded attention maps, offering valuable insights into the disease progression process and fostering greater clinical trust.
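The Neural-ODE idea of mapping irregularly sampled scans onto a continuous latent timeline can be illustrated with a plain Euler integrator: because the latent state is defined by a differential equation, any visit time is reachable, however unevenly the visits are spaced. The dynamics function and visit times below are illustrative stand-ins, not the learned CASCADE-Net components.

```python
def latent_dynamics(z):
    """Stand-in for a learned dz/dt network: simple saturating progression
    toward z = 1 (an assumed 'fully progressed' latent pathology state)."""
    return 0.5 * (1.0 - z)

def integrate_to(z0, t0, t1, step=0.01):
    """Euler integration of the latent state from time t0 to t1.
    A fixed number of uniform sub-steps covers the interval exactly."""
    n = max(1, int(round((t1 - t0) / step)))
    h = (t1 - t0) / n
    z = z0
    for _ in range(n):
        z += h * latent_dynamics(z)
    return z

# Irregularly sampled visits (years since baseline): one ODE aligns them all.
visit_times = [0.0, 0.7, 1.9, 3.2]
z = 0.1  # initial latent pathology state (assumed)
trajectory = [z]
for t0, t1 in zip(visit_times, visit_times[1:]):
    z = integrate_to(z, t0, t1)
    trajectory.append(z)
print(trajectory)
```

In the actual architecture, `latent_dynamics` would be a neural network trained end-to-end, and the latent state would be a high-dimensional embedding of the imaging data rather than a scalar.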
Abstract: Generating carbon credits in rural and wetland lagoon environments is important for their economic and social survival. There are many methodologies for studying and certifying the carbon sink, such as ISO 14064, VCS VERRA, UNI-BNEUTRAL, GOLD STANDARD, and others. Many methods developed before 2018 are now obsolete, since research has advanced greatly in recent years. The methods all differ, but they share continuous, real monitoring of the environment to ensure a true CCS (Carbon Capture and Storage) action. In the absence of monitoring, the method uses a system of carbon-credit provisioning called a "buffer". This system allows a credit-generating activity to be maintained even in the presence of important anomalies due to adverse weather events. This research presents the complex analytic web of different sensors in a continuous environmental monitoring system via GSM (Global System for Mobile) communication and the IoT (Internet of Things). By 2011, a monitoring network had been installed in the wetland environments of the Venetian Lagoon in Northern Italy (a UNESCO heritage site) and used to understand and validate the CCS action. The ThingSpeak cloud platform is used to collect data and to send an alert to the user if the biological sink reverses to emission. The large dataset obtained was used to prepare an AI (Artificial Intelligence) model, "CCS wetland forecast", with Google Colab. This model can fit the trend, avoiding direct spot chemical field analyses, and demonstrates the real efficacy of the chosen model. This network now implements the Italian national method UNI PdR 99:2021 BNeutral for the generation of carbon credits.
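The alert logic described above, warning when the biological sink reverses to emission, amounts to a sign check on recent flux measurements. The sketch below is a minimal illustration; the windowing choice and the sign convention (negative flux = net capture) are assumptions, not the actual ThingSpeak channel configuration.

```python
def check_sink_reversal(readings, window=3):
    """Flag an alert when the mean CO2 flux over the last `window` readings
    turns positive (net emission) instead of negative (net capture)."""
    if len(readings) < window:
        return False
    recent = readings[-window:]
    return sum(recent) / window > 0.0

# Hypothetical hourly CO2 flux readings (negative = capture, positive = emission).
flux = [-2.1, -1.8, -2.4, -0.3, 0.9, 1.4]
print(check_sink_reversal(flux))  # recent mean is positive: raise the alert
```

Averaging over a short window rather than testing single readings keeps one noisy sensor sample from triggering a spurious alert.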
Abstract: Advanced machine learning (ML) algorithms have outperformed traditional approaches in various forecasting applications, especially electricity price forecasting (EPF). However, the prediction accuracy of ML drops substantially if the input data are not similar to those seen by the model during training. This is often observed in EPF problems when market dynamics change owing to a rise in fuel prices, an increase in renewable penetration, a change in operational policies, etc. While the dip in model accuracy for unseen data is a cause for concern, what is more challenging is not knowing when the ML model will respond in such a manner. Such uncertainty makes power market participants, like bidding agents and retailers, vulnerable to substantial financial losses caused by the prediction errors of EPF models. It therefore becomes essential to identify whether or not the model's prediction at a given instance is trustworthy. In this light, this paper proposes a trust algorithm for EPF users based on explainable artificial intelligence techniques. The suggested algorithm generates trust scores that reflect the model's prediction quality for each new input. These scores are formulated in two stages: in the first stage, a coarse version of the score is formed using correlations of local and global explanations, and in the second stage, the score is fine-tuned further by the Shapley additive explanations values of different features. Such score-based explanations are more straightforward than feature-based visual explanations for EPF users like asset managers and traders. Datasets from Italy's and ERCOT's electricity markets validate the efficacy of the proposed algorithm. Results show that the algorithm has more than 85% accuracy in identifying good predictions when the data distribution is similar to the training dataset. In the case of distribution shift, the algorithm shows the same accuracy level in identifying bad predictions.
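The first stage of the trust score, correlating a prediction's local explanation with the model's global explanation, can be sketched with a plain Pearson correlation mapped to a [0, 1] score. The attribution vectors below are illustrative stand-ins, not actual SHAP outputs, and the exact mapping is an assumption rather than the paper's formulation.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length attribution vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def coarse_trust_score(local_expl, global_expl):
    """Stage-1 trust: map the local-global explanation correlation
    from [-1, 1] to a [0, 1] score (assumed mapping)."""
    return (pearson(local_expl, global_expl) + 1.0) / 2.0

global_importance = [0.40, 0.25, 0.20, 0.10, 0.05]  # assumed global profile
typical_local = [0.38, 0.27, 0.18, 0.12, 0.05]      # agrees with global
unusual_local = [0.05, 0.10, 0.15, 0.30, 0.40]      # disagrees: low trust

print(coarse_trust_score(typical_local, global_importance))
print(coarse_trust_score(unusual_local, global_importance))
```

The intuition matches the abstract: when a prediction's local feature attributions deviate sharply from the model's usual global behavior, the input likely sits outside the training distribution and the prediction deserves less trust.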
Funding: The authors extend their appreciation to the Deanship of Scientific Research at Saudi Electronic University for funding this research work through project number (8141).
Abstract: Quality education is one of the primary objectives of any nation-building strategy and is one of the seventeen Sustainable Development Goals (SDGs) of the United Nations. To provide quality education, delivering top-quality content is not enough; understanding the learners' emotions during the learning process is equally important. However, most research in this area uses general data accessed from Twitter or other publicly available databases. These databases are generally not an ideal representation of the actual learning process or of the learners' sentiments about it. This research has collected real data from learners, mainly undergraduate university students from different regions and cultures. By analyzing the emotions of the students, appropriate steps can be suggested to improve the quality of the education they receive. To understand learning emotions, the XLNet technique is used. We investigate a transfer learning method to adopt an efficient model for learners' sentiment detection and classification based on real data. An experiment on the collected data shows that the proposed approach outperforms aspect-enhanced sentiment analysis and topic sentiment analysis in the online learning community.
Abstract: Recent breakthrough achievements, such as the launch of DeepSeek's revolutionary AI models and the collection of samples from the far side of the moon, are indicators of just how far China has developed in science and technology.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2020YFA0608000) and the National Natural Science Foundation of China (Grant No. 42030605).
Abstract: The rapid advancement of artificial intelligence technologies, particularly in recent years, has led to the emergence of several large-parameter artificial intelligence weather forecast models. These models represent a significant breakthrough, overcoming the limitations of traditional numerical weather prediction models and indicating the emergence of potentially profound tools for atmosphere-ocean forecasts. This study explores the evolution of these advanced artificial intelligence forecast models and, based on the identified commonalities, proposes the "Three Large Rules" for large weather forecast models: a large number of parameters, a large number of predictands, and large potential applications. We discuss the capacity of artificial intelligence to revolutionize numerical weather prediction, briefly outlining the underlying reasons for the significant improvement in weather forecasting. While acknowledging the high accuracy, computational efficiency, and ease of deployment of large artificial intelligence forecast models, we also emphasize the irreplaceable value of traditional numerical forecasts and explore the challenges in the future development of large-scale artificial intelligence atmosphere-ocean forecast models. We believe that the optimal future of atmosphere-ocean weather forecasting lies in achieving a seamless integration of artificial intelligence and traditional numerical models. Such a synthesis is anticipated to offer a more advanced and reliable approach for improved atmosphere-ocean forecasts. Finally, we illustrate how forecasters can leverage large weather forecast models through an example: building an artificial intelligence model for global ocean wave forecasting.
Abstract: The cyber physician will scan you now: how AI models are enhancing diagnostics and treatment in hospitals. After studying an MRI test on February 13 at Beijing Children's Hospital (BCH), 13 top pediatricians were surprised when the country's first AI pediatrician came to an identical conclusion to theirs in the case of an 8-year-old boy who had been having seizures.