In the very beginning, the Computer Laboratory of the University of Cambridge was founded to provide computing services for different disciplines across the university. As computer science developed into a discipline in its own right, boundaries necessarily arose between it and other disciplines, in a way that is now often detrimental to progress. It is therefore necessary to reinvigorate the relationship between computer science and other academic disciplines and to celebrate exploration and creativity in research. To do this, the structures of the academic department must act as supporting scaffolding rather than as barriers. Examples are given of the efforts being made at the University of Cambridge to approach this problem.
The need for information systems in organizations and economic units is increasing, as many business processes generate large amounts of data that must be processed into information useful to multiple users. New management accounting systems are expected to meet the financial, accounting, and managerial needs of institutions and individuals while ensuring the accuracy, speed, and confidentiality of the information for which the system is designed. This paper describes a computerized system that predicts the budget for the coming year from past budgets using time series analysis, keeping forecast errors to a minimum, and that controls the budget during the year: it monitors expenditure, compares actual figures with the plan, calculates deviations, and measures performance through a number of budget-related indicators, such as the capital intensity ratio, the growth rate, and the profitability ratio, giving a clear indication of whether these ratios are good or not. The system has a positive impact on information systems practice through its ability to perform complex calculations and process paperwork faster than before, and it is highly flexible, accommodating any adjustments required to help the relevant parties control financial matters and take appropriate decisions.
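The abstract does not specify which time-series method the system uses; as a minimal sketch, simple exponential smoothing over past annual budgets illustrates the forecast-and-deviation workflow (the figures and the smoothing factor below are hypothetical):

```python
def smooth_forecast(budgets, alpha=0.5):
    """Forecast next year's budget from past budgets via simple
    exponential smoothing. Illustrative only; the paper's exact
    time-series model is not specified in the abstract."""
    level = budgets[0]
    for b in budgets[1:]:
        level = alpha * b + (1 - alpha) * level
    return level

def deviation(actual, planned):
    """Deviation of actual spending from the planned budget, as a ratio."""
    return (actual - planned) / planned

past = [100_000, 110_000, 121_000, 133_000]   # hypothetical past budgets
plan = smooth_forecast(past)
print(round(plan, 2))                          # forecast for the new year
print(round(deviation(140_000, plan), 4))      # mid-year control check
```

A real deployment would fit the smoothing factor (or a fuller model such as Holt-Winters) to historical error rather than fixing it by hand.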
The importance of prerequisites for education has recently become a promising research direction. This work proposes a statistical model for measuring dependencies between knowledge units in learning resources. Instructors are expected to present knowledge units in a semantically well-organized manner to facilitate students' understanding of the material. The proposed model reveals how the inner concepts of a knowledge unit depend on each other and on concepts outside the knowledge unit. To help capture the complexity of the inner concepts themselves, WordNet is included as an external knowledge base in this model. The goal is a model that enables instructors to evaluate whether a learning regime has hidden relationships that might hinder students' ability to understand the material. An evaluation on three textbooks shows that the proposed model succeeds in discovering hidden relationships among knowledge units in learning resources and in exposing the knowledge gaps in some knowledge units.
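As a rough sketch of one way such dependencies could be estimated (the paper's actual statistical model, and its use of WordNet, are more elaborate), an asymmetric conditional co-occurrence measure over teaching units looks like this:

```python
def dependency(units, a, b):
    """Asymmetric dependency of concept `a` on concept `b`:
    P(a appears | b appears), estimated from co-occurrence in
    knowledge units. Illustrative stand-in for the paper's model;
    the example concepts below are hypothetical."""
    with_b = [u for u in units if b in u]
    if not with_b:
        return 0.0
    return sum(1 for u in with_b if a in u) / len(with_b)

units = [
    {"recursion", "stack"},
    {"stack", "queue"},
    {"recursion", "tree", "stack"},
    {"queue", "tree"},
]
print(dependency(units, "recursion", "stack"))  # P(recursion | stack)
print(dependency(units, "stack", "recursion"))  # the reverse direction
```

The asymmetry is the point: a high P(a|b) with a low P(b|a) suggests b presupposes a, i.e., a hidden prerequisite relationship.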
The number of students demanding computer science (CS) education is rising rapidly, and while faculty sizes are also growing, the traditional pipeline (a CS major, a CS master's, and then a move to industry or a Ph.D. program) is simply not scalable. To address this problem, the Department of Computing at the University of Illinois has introduced a multidisciplinary approach to computing: a scalable, collaborative way to meet the tremendous demand for computer science education. The key component of the approach is the blended major, also referred to as "CS+X", where CS denotes computer science and X denotes a non-computing field. These CS+X blended degrees enable win-win partnerships among multiple subject areas, distributing the educational responsibilities while growing the entire university. To meet demand from non-CS majors, a graduate certificate program is offered in addition to the traditional minor program. To accommodate the large number of students, scalable teaching tools, such as automatic graders, have also been developed.
Computer science (CS) is the discipline that studies the scientific and practical approach to computation and its applications. As we enter the Internet era, computers and the Internet have become intimate parts of our daily life. Due to the field's rapid development and wide range of applications, more CS graduates are needed by industry around the world. In the USA, the situation is even more severe due to the rapid expansion of several big IT-related companies such as Microsoft, Google, Facebook, Amazon, and IBM. Hence, how to effectively train a large number of …
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies; data collected from these "things" is processed with intelligent approaches, such as artificial intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses on either data science or the IIoT independently, rather than exploring their integration. To address this gap, this article provides a comprehensive survey of the advances in, and the integration of, data science with IIoT systems, classifying existing IoT-based data science techniques and summarizing their characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are presented in the context of data science for the IoT. In addition, the study identifies current opportunities to advance data science and the development of the IoT market. The gaps and challenges in integrating data science and the IoT are presented comprehensively, followed by a future outlook and possible solutions.
At the panel session of the 3rd Global Forum on the Development of Computer Science, attendees had an opportunity to deliberate recent issues affecting computer science departments as a result of the field's recent growth. Six heads of university computer science departments participated in the discussion, including the moderator, Professor Andrew Yao. The first issue was how universities are managing the growing number of applicants in addition to swelling class sizes. Several approaches were suggested, including increasing faculty hiring, implementing scalable teaching tools, and working more closely with other departments through degree programs that integrate computer science with other fields. The second issue concerned the position and role of computer science within the broader sciences. Participants generally agreed that all fields are increasingly relying on computer science techniques, and that effectively disseminating these techniques to others is a key to unlocking broader scientific progress.
Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from data silos, a lack of real-time visibility, fraudulent activity, and inefficient tracking and traceability. Blockchain's decentralized, immutable ledger offers a solid foundation for addressing these issues: it enables trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into computer-modeling-based blockchain frameworks for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of this process, we review the research on blockchain-based supply chain models, smart contracts, and Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions with smart contracts, strengthens security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability limits still work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
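The "immutable records" property rests on hash-linking: each block stores the hash of its predecessor, so altering any past record breaks verification. A minimal sketch of that mechanism (not the paper's framework, and far simpler than a real blockchain, which adds consensus and digital signatures; the supply-chain records below are hypothetical):

```python
import hashlib
import json

def block_hash(record, prev):
    """Hash a block's contents; sorted JSON keeps the hash order-stable."""
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, record):
    """Append a supply-chain record, linking it to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain):
    """Re-hash every block and check the links; any tampering breaks it."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["record"], b["prev"]):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"sku": "A1", "event": "shipped"})
append_block(chain, {"sku": "A1", "event": "received"})
print(verify(chain))                       # True
chain[0]["record"]["event"] = "lost"       # tamper with history
print(verify(chain))                       # False after tampering
```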
Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires as well as their internal state. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word-level analysis helps the speaker's demands to be understood. Voice emotion recognition has therefore become an essential component of modern HCC networks, although integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past; with the development of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data has become increasingly common. This research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the proposed SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise and extract features from the data, and SHO optimizes the LSTM network's parameters for effective emotion recognition. The framework was implemented and tested in Python. In the evaluation phase, numerous metrics are used to assess the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM's results are contrasted with those of previously conducted research; based on these comparative assessments, the proposed approach outperforms current approaches to vocal emotion recognition.
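The reported F1-score, precision, recall, and accuracy all derive from confusion-matrix counts; a small helper makes the relationships explicit (the counts below are illustrative, not those of the RAVDESS experiments):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix
    counts: the metrics the SHO-LSTM evaluation reports. The counts
    passed in below are made up for illustration."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = classification_metrics(tp=95, fp=5, fn=4, tn=96)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 2))
```

For multiclass emotion labels these are typically computed per class and then macro- or weighted-averaged.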
Cyber-Physical Systems (CPS) integrate computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulation and predictive analytics. Together, CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DTs enrich CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DTs in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions for developing comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the era of digital transformation.
This research investigates the application of digital images in military contexts, using analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. The collected images are then processed using histogram gradients and a flexible threshold value that can be adjusted for specific situations. Overlap in collective picture characteristics can thus be reduced by substituting grey-scale photographs with colorized factors. The proposed method provides more robust feature representations by imposing a limiting factor that reduces overall scattering values, achieved by visualizing a graphical function. Moreover, to derive valuable insights from a series of photographs, both separation and inversion processes are conducted, with comparison results analyzed across four different scenarios. The comparative analysis shows that the proposed method reduces time and space complexity to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
A significant number and range of the challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (people, mostly) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents and the outcomes of those actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science, combined with agent-based modeling, promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
The integration of machine learning (ML) technology with Internet of Things (IoT) systems is producing essential changes in healthcare operations. Healthcare personnel can track patients around the clock using healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML can support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. It considers the benefits and drawbacks of several machine learning techniques for H-IoT applications, and examines fundamental problems such as data security and cyberthreats, as well as the high processing demands these systems face. The paper also discusses the advantages of the underlying technologies, including machine learning, deep learning, and the Internet of Things, together with the significant difficulties that arise when integrating them into healthcare forecasting.
Background: Stomach cancer (SC) is one of the most lethal malignancies worldwide due to late-stage diagnosis and limited treatment. Omics datasets (transcriptomic, epigenomic, proteomic, etc.) generated by high-throughput sequencing technology have become prominent in biomedical research, revealing molecular aspects of cancer diagnosis and therapy. Despite advances in sequencing technology, the high dimensionality of multi-omics data makes it challenging to interpret. Methods: In this study, we introduce RankXLAN, an explainable ensemble-based multi-omics framework that integrates feature selection (FS), ensemble learning, bioinformatics, and in-silico validation for robust biomarker detection, identification of potential drug-repurposing candidates, and classification of SC. To enhance the interpretability of the model, we incorporated explainable artificial intelligence (SHapley Additive exPlanations analysis), and we report accuracy, precision, F1-score, recall, cross-validation, specificity, positive and negative likelihood ratio (LR+ and LR-), and Youden index results. Results: The experimental results showed that the top four FS algorithms achieved improved results when applied to the ensemble learning classification model. The proposed ensemble model produced an area under the curve (AUC) of 0.994 for gene expression, 0.97 for methylation, and 0.96 for miRNA expression data. By integrating bioinformatics and machine learning approaches on the transcriptomic and epigenomic multi-omics datasets, we identified potential marker genes, namely UBE2D2, HPCAL4, IGHA1, DPT, and FN3K. In-silico molecular docking revealed a strong binding affinity between ANKRD13C and the FDA-approved drug Everolimus (binding affinity -10.1 kcal/mol), identifying ANKRD13C as a potential therapeutic drug-repurposing target for SC. Conclusion: The proposed RankXLAN framework outperforms existing frameworks for serum biomarker identification, therapeutic target identification, and SC classification with multi-omics datasets.
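The likelihood ratios and Youden index reported above are simple functions of sensitivity (recall) and specificity; a small sketch with illustrative inputs (not the paper's measured values):

```python
def diagnostic_summary(sensitivity, specificity):
    """Positive/negative likelihood ratios and Youden index from
    sensitivity and specificity: three of the metrics the RankXLAN
    evaluation reports. Inputs below are made up for illustration."""
    lr_pos = sensitivity / (1 - specificity)   # LR+: higher is better
    lr_neg = (1 - sensitivity) / specificity   # LR-: lower is better
    youden = sensitivity + specificity - 1     # J in [0, 1]
    return lr_pos, lr_neg, youden

lr_p, lr_n, j = diagnostic_summary(sensitivity=0.96, specificity=0.94)
print(round(lr_p, 1), round(lr_n, 3), round(j, 2))
```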
This work aims to implement expert and collaborative group recommendation services through an analysis of expertise and network relations in NTIS. First, an expertise database was constructed by indexing national R&D information in Korea (human resources, projects, and outcomes), extracting keywords, and applying an expertise calculation algorithm. Weight values were selected in consideration of the characteristics of national R&D information, and expertise points were then calculated by applying these weights. In addition, joint research and collaborative relations were presented as a knowledge map through network analysis of the national R&D information.
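A minimal sketch of the weighted expertise-point idea, with hypothetical information sources and weights (the paper's actual weights are derived from the characteristics of the national R&D information, not fixed by hand):

```python
def expertise_score(keyword_counts, weights):
    """Weighted expertise points for one researcher and one keyword:
    each occurrence of the keyword in a source type (projects, papers,
    patents, ...) is multiplied by that source's weight and summed.
    Sources and weights here are hypothetical."""
    return sum(weights.get(src, 0.0) * n for src, n in keyword_counts.items())

counts = {"project": 4, "paper": 6, "patent": 1}     # keyword occurrences
weights = {"project": 1.0, "paper": 0.8, "patent": 1.5}
print(expertise_score(counts, weights))
```

Ranking researchers by this score per keyword yields the expert recommendation; the collaboration map then comes from network analysis over co-participation links.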
Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished with computer vision (CV) approaches. Weeds play a major role in limiting crop productivity, and blanket chemical herbicide spraying increases waste and pollutes the farmland's natural environment. Since properly distinguishing weeds from crops helps reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique aims to properly discriminate between crop plants and weeds. It involves two processes, namely multiscale Faster R-CNN based object detection and optimal extreme learning machine (ELM) based weed classification, with the parameters of the ELM model optimally adjusted by the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique on a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
In this work, we propose a new, fully automated system for multiclass skin lesion localization and classification using deep learning. The main challenge is addressing the imbalanced classes found in the HAM10000, ISBI2018, and ISBI2019 datasets. Initially, we take a pretrained deep neural network model, DarkNet-19, and fine-tune the parameters of its third convolutional layer to generate image gradients. All the visualized images are fused using a high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The resulting image is further enhanced with a log-opening based activation function to generate a localized binary image. Next, two pretrained deep models, DarkNet-53 and NASNet-Mobile, are fine-tuned on the selected datasets and trained via transfer learning, with the generated localized lesion images as input. The extracted features are then fused using a parallel maximum entropy correlation (PMEC) technique. To avoid overfitting and to select the most discriminative feature information, we implement a hybrid optimization algorithm called entropy-kurtosis controlled whale optimization (EKWO). The selected features are finally passed to a softmax classifier for the final classification. Three datasets are used in the experiments, HAM10000, ISBI2018, and ISBI2019, on which the system achieves accuracies of 95.8%, 97.1%, and 85.35%, respectively.
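The entropy-kurtosis idea behind EKWO can be sketched as a per-feature score; note this is an assumed, simplified criterion for illustration only: the paper couples the scoring with whale optimization, and its exact formula is not given in the abstract.

```python
import numpy as np

def entropy(x, bins=16):
    """Shannon entropy (bits) of a feature's value histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def excess_kurtosis(x):
    """Excess kurtosis (Fisher definition) of a feature."""
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def select_features(X, k):
    """Rank features by an assumed entropy-minus-|kurtosis| score and
    keep the top k. Hypothetical stand-in for the EKWO selection step."""
    scores = [entropy(X[:, j]) - abs(excess_kurtosis(X[:, j]))
              for j in range(X.shape[1])]
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # stand-in feature matrix
print(select_features(X, 3))
```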
A recent work has shown that an ion trap quantum processor can speed up the decision making of a reinforcement learning agent; its quantum advantage is observed when the external environment changes and the agent needs to relearn. One characteristic of that quantum hardware discovered in the study is that it tends to overestimate the values used to determine the actions the agent will take. IBM's five-qubit superconducting quantum processor is a popular quantum platform. The aims of our study are twofold. First, we want to identify the hardware characteristics of IBM's 5Q quantum computer when running this learning agent, compared with the ion trap processor. Second, through careful analysis, we observe that the quantum circuit employed on the ion trap processor for this agent can be simplified. When tested on IBM's 5Q quantum processor, our simplified circuit demonstrates enhanced performance over the original circuit on one of the hard learning tasks investigated in the previous work. We also use IBM's quantum simulator when a good baseline is needed for comparing performance. As more and more quantum hardware devices move out of the laboratory and become generally available for public use, our work emphasizes that the features and constraints of the quantum hardware can take a toll on the performance of quantum algorithms.
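Whether a simplified circuit is equivalent to the original can be checked by comparing the states each produces. A toy two-qubit state-vector sketch (the actual five-qubit circuits and hardware noise are not reproduced here; the cancelling X⊗X pair is a made-up example of a removable subcircuit):

```python
import numpy as np

# Single-qubit gates and a two-qubit CNOT as matrices.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def run(gates, n=2):
    """Apply full-register unitaries in order to |0...0> and return
    the final state vector."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for U in gates:
        state = U @ state
    return state

# A toy 'original' circuit with a redundant X⊗X pair, and its
# shortened equivalent: fewer gates means less decoherence on hardware.
orig = [np.kron(H, I2), CNOT, np.kron(X, X), np.kron(X, X)]
simp = [np.kron(H, I2), CNOT]
print(np.allclose(run(orig), run(simp)))  # True: the circuits are equivalent
```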
In modern computer games, "bots" (intelligent, realistic agents) play a prominent role in a game's popularity in the market. Typically, bots are modeled using finite-state machines and then programmed via simple conditional statements hard-coded into the bot's logic. Because such bots become quite predictable to an experienced player, the player may lose interest in the game. We propose the use of a game-theoretic learning rule called fictitious play to improve the behavior of these computer game bots, making them less predictable and the game, therefore, more enjoyable.
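Fictitious play has each player best-respond to the empirical frequency of the opponent's past actions. A minimal self-play sketch in matching pennies (a stand-in game, not one from the paper) shows play approaching the mixed equilibrium, which is what makes a bot driven by this rule harder to predict than a hard-coded script:

```python
def best_response(payoff, opp_counts):
    """Action maximizing expected payoff against the opponent's
    empirical action frequencies: the core of fictitious play."""
    total = sum(opp_counts) or 1
    beliefs = [c / total for c in opp_counts]
    values = [sum(payoff[a][b] * beliefs[b] for b in range(2))
              for a in range(2)]
    return max(range(2), key=values.__getitem__)

# Matching pennies: the row player wins on a match, column on a mismatch.
# Each matrix is indexed [own action][opponent action].
ROW = [[1, -1], [-1, 1]]
COL = [[-1, 1], [1, -1]]

row_counts, col_counts = [1, 0], [0, 1]   # seed beliefs
for _ in range(5000):
    a = best_response(ROW, col_counts)
    b = best_response(COL, row_counts)
    row_counts[a] += 1
    col_counts[b] += 1

freq = row_counts[0] / sum(row_counts)
print(round(freq, 2))   # empirical play approaches the 50/50 mixed equilibrium
```

In a zero-sum game like this, fictitious play's empirical frequencies are known to converge to equilibrium; a bot playing near a mixed equilibrium has no exploitable fixed pattern.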
Networks serve a significant function in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) has become an essential information protection strategy that tracks the state of the software and hardware operating on the network. Notwithstanding years of development, current intrusion detection systems still face difficulties in enhancing detection precision, reducing growing false alarm levels, and identifying suspicious activities. To address these issues, many researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches, since machine learning models can accurately distinguish regular from irregular traffic with impressive efficiency. Artificial intelligence, and machine learning methods in particular, can be used to develop an intelligent intrusion detection framework. To achieve this objective, this article proposes an intrusion detection system based on a deep extreme learning machine (DELM), which first assesses the security features that contribute most to detection and then constructs an adaptive intrusion detection system focused on those important features. We then assessed the viability of our proposed DELM-based intrusion detection system by conducting dataset evaluations and measuring performance factors to validate system reliability. The experimental results illustrate that the proposed framework outperforms traditional algorithms. Indeed, the framework is not only of scientific interest but also of practical importance.
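A (shallow) extreme learning machine trains its output weights in closed form after a random hidden projection, which is what makes ELM-family models fast; the paper's deep variant (DELM) stacks more layers than this. A toy sketch on stand-in data, not an IDS implementation:

```python
import numpy as np

def train_elm(X, y, hidden=32, seed=0):
    """Extreme learning machine: a random, untrained hidden layer,
    with output weights solved in closed form by least squares.
    Generic sketch; the paper's DELM is a deeper variant."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                     # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# XOR-style toy data standing in for traffic features / attack labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
model = train_elm(X, y)
print(np.round(predict(model, X)))  # recovers the labels on this toy set
```

With enough random hidden units the least-squares step fits the training targets exactly, so no gradient descent is needed: the speed that motivates ELM-based IDS work.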
文摘In the very beginning,the Computer Laboratory of the University of Cambridge was founded to provide computing service for different disciplines across the university.As computer science developed as a discipline in its own right,boundaries necessarily arose between it and other disciplines,in a way that is now often detrimental to progress.Therefore,it is necessary to reinvigorate the relationship between computer science and other academic disciplines and celebrate exploration and creativity in research.To do this,the structures of the academic department have to act as supporting scaffolding rather than barriers.Some examples are given that show the efforts being made at the University of Cambridge to approach this problem.
文摘The need for information systems in organizations and economic units increases as there is a great deal of data that arise from doing many of the processes in order to be addressed to provide information that can bring interest to multi-users, the new and distinctive management accounting systems which meet in a manner easily all the needs of institutions and individuals from financial business, accounting and management, which take into account the accuracy, speed and confidentiality of the information for which the system is designed. The paper aims to describe a computerized system that is able to predict the budget for the new year based on past budgets by using time series analysis, which gives results with errors to a minimum and controls the budget during the year, through the ability to control exchange, compared to the scheme with the investigator and calculating the deviation, measurement of performance ratio and the expense of a number of indicators relating to budgets, such as the rate of condensation of capital, the growth rate and profitability ratio and gives a clear indication whether these ratios are good or not. There is a positive impact on information systems through this system for its ability to accomplish complex calculations and process paperwork, which is faster than it was previously and there is also a high flexibility, where the system can do any adjustments required in helping relevant parties to control the financial matters of the decision-making appropriate action thereon.
文摘The importance of prerequisites for education has recently become a promising research direction.This work proposes a statistical model for measuring dependencies in learning resources between knowledge units.Instructors are expected to present knowledge units in a semantically well-organized manner to facilitate students’understanding of the material.The proposed model reveals how inner concepts of a knowledge unit are dependent on each other and on concepts not in the knowledge unit.To help understand the complexity of the inner concepts themselves,WordNet is included as an external knowledge base in thismodel.The goal is to develop a model that will enable instructors to evaluate whether or not a learning regime has hidden relationships which might hinder students’ability to understand the material.The evaluation,employing three textbooks,shows that the proposed model succeeds in discovering hidden relationships among knowledge units in learning resources and in exposing the knowledge gaps in some knowledge units.
Abstract: The number of students demanding computer science (CS) education is rapidly rising, and while faculty sizes are also growing, the traditional pipeline consisting of a CS major, a CS master's, and then a move to industry or a Ph.D. program is simply not scalable. To address this problem, the Department of Computing at the University of Illinois has introduced a multidisciplinary approach to computing: a scalable, collaborative way to capitalize on the tremendous demand for computer science education. The key component of the approach is the blended major, also referred to as "CS+X", where CS denotes computer science and X denotes a non-computing field. These CS+X blended degrees enable win-win partnerships among multiple subject areas, distributing the educational responsibilities while growing the entire university. To meet the demand from non-CS majors, a graduate certificate program is offered as another pathway in addition to the traditional minor program. To accommodate the large number of students, scalable teaching tools, such as automatic graders, have also been developed.
Abstract: Computer science (CS) is a discipline that studies the scientific and practical approach to computation and its applications. As we enter the Internet era, computers and the Internet have become intimate parts of our daily life. Owing to its rapid development and wide applications, more CS graduates are needed in industries around the world. In the USA, this situation is even more severe due to the rapid expansion of several big IT-related companies such as Microsoft, Google, Facebook, Amazon, and IBM. Hence, how to effectively train a large number of…
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62371181; in part by the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029; by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2B5B02087169); and under the framework of the international cooperation program managed by the National Research Foundation of Korea (2022K2A9A1A01098051).
Abstract: The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies, collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or the IIoT, rather than exploring their integration. To address this gap, this article provides a comprehensive survey on the advances and integration of data science with the IIoT, classifying the existing IoT-based data science techniques and presenting a summary of their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The gaps and challenges faced in integrating data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Abstract: At the panel session of the 3rd Global Forum on the Development of Computer Science, attendees had an opportunity to deliberate recent issues affecting computer science departments as a result of the recent growth of the field. Six heads of university computer science departments participated in the discussions, including the moderator, Professor Andrew Yao. The first issue was how universities are managing the growing number of applicants in addition to swelling class sizes. Several approaches were suggested, including increasing faculty hiring, implementing scalable teaching tools, and working more closely with other departments through degree programs that integrate computer science with other fields. The second issue concerned the position and role of computer science within broader science. Participants generally agreed that all fields increasingly rely on computer science techniques, and that effectively disseminating these techniques to others is a key to unlocking broader scientific progress.
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2025R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Blockchain Technology (BT) has emerged as a transformative solution for improving the efficacy, security, and transparency of supply chain intelligence. Traditional Supply Chain Management (SCM) systems frequently suffer from problems such as data silos, a lack of real-time visibility, fraudulent activities, and inefficiencies in tracking and traceability. Blockchain's decentralized and immutable ledger offers a solid foundation for dealing with these issues; it facilitates trust, security, and real-time data sharing among all parties involved. Through an examination of critical technologies, methodologies, and applications, this paper delves deeply into a computer-modeling-based blockchain framework for supply chain intelligence. The effect of BT on SCM is evaluated by reviewing current research and practical applications in the field. As part of this process, we reviewed research on blockchain-based supply chain models, smart contracts, and Decentralized Applications (DApps), and how they connect to other cutting-edge innovations such as Artificial Intelligence (AI) and the Internet of Things (IoT). To quantify blockchain's performance, the study introduces analytical models for efficiency improvement, security enhancement, and scalability, enabling computational assessment and simulation of supply chain scenarios. These models provide a structured approach to predicting system performance under varying parameters. According to the results, BT increases efficiency by automating transactions with smart contracts, increases security through cryptographic techniques, and improves supply chain transparency by providing immutable records. Regulatory concerns, interoperability challenges, and scalability issues all work against broad adoption. To fully automate and intelligently integrate blockchain with AI and the IoT, additional research is needed to address blockchain's current limitations and realize its potential for supply chain intelligence.
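The immutability property this abstract leans on can be illustrated with a toy hash-chained ledger: each record commits to the hash of the previous one, so altering any earlier entry invalidates every later hash. This is a minimal sketch, not a real blockchain network; the shipment records are hypothetical.

```python
# Illustrative sketch of an immutable, hash-chained supply chain ledger.
# Each block stores its record, the previous block's hash, and its own hash;
# tampering with any record breaks verification of the whole chain.
import hashlib
import json

def block_hash(record, prev_hash):
    # Deterministic serialization so the hash only depends on content.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev, "hash": block_hash(record, prev)})

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

chain = []
append(chain, {"shipment": "A-17", "stage": "factory"})    # hypothetical records
append(chain, {"shipment": "A-17", "stage": "warehouse"})
print(verify(chain))                   # True: untampered chain
chain[0]["record"]["stage"] = "forged"
print(verify(chain))                   # False: tampering breaks every later hash
```

A production blockchain adds consensus, signatures, and distribution across parties; the hash chain is only the immutability core.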
Funding: The author, Dr. Arshiya S. Ansari, extends her appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number R-2025-1538.
Abstract: Voice, motion, and mimicry are naturalistic control modalities that have replaced text- or display-driven control in human-computer communication (HCC). The voice in particular carries a great deal of information, revealing details about the speaker's goals and desires as well as their internal condition. Certain vocal characteristics reveal the speaker's mood, intention, and motivation, while word analysis helps the speaker's demands be understood. Voice emotion recognition has thus become an essential component of modern HCC networks, although integrating findings from the various disciplines involved in identifying vocal emotions remains challenging. Many sound analysis techniques were developed in the past; with the advance of artificial intelligence (AI), and especially deep learning (DL) technology, research incorporating real data is becoming increasingly common. This research presents a novel selfish herd optimization-tuned long short-term memory (SHO-LSTM) strategy to identify vocal emotions in human communication. The public RAVDESS dataset is used to train the suggested SHO-LSTM technique. Wiener filter (WF) and Mel-frequency cepstral coefficient (MFCC) techniques are used, respectively, to remove noise from and extract features from the data. LSTM and SHO are applied to the extracted data to optimize the LSTM network's parameters for effective emotion recognition. The proposed framework was implemented in Python. In the assessment phase, numerous metrics are used to evaluate the proposed model's detection capability, such as F1-score (95%), precision (95%), recall (96%), and accuracy (97%). The SHO-LSTM's outcomes are contrasted with those of previously conducted research; based on these comparative assessments, our suggested approach outperforms current approaches to vocal emotion recognition.
Abstract: Cyber-Physical Systems (CPS) integrate computational and physical elements, revolutionizing industries by enabling real-time monitoring, control, and optimization. A complementary technology, the Digital Twin (DT), acts as a virtual replica of physical assets or processes, facilitating better decision making through simulations and predictive analytics. CPS and DT underpin the evolution of Industry 4.0 by bridging the physical and digital domains. This survey explores their synergy, highlighting how DT enriches CPS with dynamic modeling, real-time data integration, and advanced simulation capabilities. The layered architecture of DTs within CPS is examined, showcasing the enabling technologies and tools vital for seamless integration. The study addresses key challenges in CPS modeling, such as concurrency and communication, and underscores the importance of DT in overcoming these obstacles. Applications in various sectors are analyzed, including smart manufacturing, healthcare, and urban planning, emphasizing the transformative potential of CPS-DT integration. In addition, the review identifies gaps in existing methodologies and proposes future research directions for developing comprehensive, scalable, and secure CPS-DT systems. By synthesizing insights from the current literature and presenting a taxonomy of CPS and DT, this survey serves as a foundational reference for academics and practitioners. The findings stress the need for unified frameworks that align CPS and DT with emerging technologies, fostering innovation and efficiency in the era of digital transformation.
Funding: Financially supported by the Ongoing Research Funding Program (ORF-2025-846), King Saud University, Riyadh, Saudi Arabia.
Abstract: This research investigates the application of digital images in military contexts by utilizing analytical equations to augment human visual capabilities. A comparable filter is used to improve the visual quality of the photographs by reducing truncations in the existing images. Furthermore, the collected images are processed using histogram gradients and a flexible threshold value that can be adjusted in specific situations. It is thus possible to reduce the occurrence of overlap in collective picture characteristics by substituting grey-scale photos with colorized factors. The proposed method offers more robust feature representations by imposing a limiting factor, visualized as a graphical function, to reduce overall scattering values. Moreover, to derive valuable insights from a series of photos, both separation and inversion processes are conducted, analyzing comparison results across four different scenarios. The comparative analysis shows that the proposed method reduces time and space complexity to 1 s and 3%, respectively, whereas the existing strategy exhibits higher complexities of 3 s and 9.1%.
Funding: The National Science Foundation funded this research under the Dynamics of Coupled Natural and Human Systems program (Grants No. DEB-1212183 and BCS-1826839), with support from San Diego State University and Auburn University.
Abstract: A significant number and range of challenges besetting sustainability can be traced to the actions and interactions of multiple autonomous agents (mostly people) and the entities they create (e.g., institutions, policies, social networks) in the corresponding social-environmental systems (SES). To address these challenges, we need to understand the decisions made and actions taken by agents, and the outcomes of those actions, including the feedbacks on the corresponding agents and environment. The science of complex adaptive systems (CAS science) has significant potential to handle such challenges. We address the advantages of CAS science for sustainability by identifying the key elements and challenges in sustainability science, the generic features of CAS, and the key advances and challenges in modeling CAS. Artificial intelligence and data science, combined with agent-based modeling, promise to improve understanding of agents' behaviors, detect SES structures, and formulate SES mechanisms.
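A minimal agent-based sketch of the SES feedback loop described above (illustrative only, with hypothetical parameters): agents harvest a shared resource, scarcity curbs their harvest, and the resource regrows each step.

```python
# Illustrative agent-based sketch of a social-environmental feedback loop:
# agents harvest a shared resource that regrows each step, and scarcity of
# the resource curbs how much each agent actually takes. All parameters are
# hypothetical, chosen only to show the feedback, not calibrated to any SES.
import random

def simulate(n_agents=10, steps=50, stock=100.0, regrowth=0.05, seed=1):
    rng = random.Random(seed)
    rates = [rng.uniform(0.5, 1.5) for _ in range(n_agents)]  # per-agent demand
    for _ in range(steps):
        scarcity = min(1.0, stock / 100.0)   # feedback: low stock curbs harvest
        harvest = sum(r * scarcity for r in rates)
        stock = max(0.0, stock - harvest)    # agents act on the environment
        stock += regrowth * stock            # environment responds (regrowth)
    return stock

print(round(simulate(), 2))
```

Real SES models track heterogeneous decision rules, networks, and spatial structure; this sketch only shows the minimal agent-environment coupling.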
Abstract: The integration of machine learning (ML) technology with Internet of Things (IoT) systems is producing essential changes in healthcare operations. Healthcare personnel can track patients around the clock thanks to healthcare IoT (H-IoT) technology, which also provides proactive statistical findings and precise medical diagnoses that enhance healthcare performance. This study examines how ML can support IoT-based healthcare systems, namely in the areas of prognostic systems, disease detection, patient tracking, and healthcare operations control. It looks at the benefits and drawbacks of several machine learning techniques for H-IoT applications, and examines the fundamental problems these systems face, such as data security, cyberthreats, and high processing demands. Alongside this, the paper discusses the advantages of all the technologies involved, including machine learning, deep learning, and the Internet of Things, as well as the significant difficulties that arise when integrating them into healthcare forecasting.
基金the Deanship of Research and Graduate Studies at King Khalid University,KSA,for funding this work through the Large Research Project under grant number RGP2/164/46.
Abstract: Background: Stomach cancer (SC) is one of the most lethal malignancies worldwide due to late-stage diagnosis and limited treatment. Omics datasets (transcriptomic, epigenomic, proteomic, etc.) generated by high-throughput sequencing technology have become prominent in biomedical research, revealing molecular aspects of cancer diagnosis and therapy. Despite advances in sequencing technology, the high dimensionality of multi-omics data makes it challenging to interpret. Methods: In this study, we introduce RankXLAN, an explainable ensemble-based multi-omics framework that integrates feature selection (FS), ensemble learning, bioinformatics, and in-silico validation for robust biomarker detection, identification of potential therapeutic drug-repurposing candidates, and classification of SC. To enhance the interpretability of the model, we incorporated explainable artificial intelligence (SHapley Additive exPlanations analysis), along with accuracy, precision, F1-score, recall, cross-validation, specificity, likelihood ratio (LR)+, LR−, and Youden index results. Results: The experimental results showed that the top four FS algorithms achieved improved results when applied to the ensemble learning classification model. The proposed ensemble model produced an area under the curve (AUC) score of 0.994 for gene expression, 0.97 for methylation, and 0.96 for miRNA expression data. By integrating bioinformatics and an ML approach over the transcriptomic and epigenomic multi-omics dataset, we identified potential marker genes, namely UBE2D2, HPCAL4, IGHA1, DPT, and FN3K. In-silico molecular docking revealed a strong binding affinity between ANKRD13C and the FDA-approved drug Everolimus (binding affinity −10.1 kcal/mol), identifying ANKRD13C as a potential therapeutic drug-repurposing target for SC. Conclusion: The proposed framework RankXLAN outperforms other existing frameworks for serum biomarker identification, therapeutic target identification, and SC classification with multi-omics datasets.
Funding: Project (N-12-NM-LU01-C01) supported by the Construction of NTIS (National Science & Technology Information Service) Program funded by the National Science & Technology Commission (NSTC), Korea.
Abstract: This work implements expert and collaborative group recommendation services through an analysis of expertise and network relations in NTIS. First, an expertise database was constructed by extracting keywords after indexing national R&D information in Korea (human resources, projects, and outcomes) and applying an expertise calculation algorithm. Weight values were selected in consideration of the characteristics of national R&D information, and expertise scores were then calculated by applying them. In addition, joint research and collaborative relations were presented as a knowledge map through network analysis of the national R&D information.
Abstract: Presently, precision agriculture tasks such as plant disease detection, crop yield prediction, species recognition, weed detection, and irrigation can be accomplished by the use of computer vision (CV) approaches. Weeds play a vital role in influencing crop productivity, while full-coverage chemical herbicide spraying wastes herbicide and pollutes the farmland's natural environment. Since properly distinguishing weeds from crops helps to reduce herbicide usage and improve productivity, this study presents a novel computer vision and deep learning based weed detection and classification (CVDL-WDC) model for precision agriculture. The proposed CVDL-WDC technique intends to properly discriminate the plants from the weeds. It involves two processes, namely multiscale Faster RCNN based object detection and optimal extreme learning machine (ELM) based weed classification. The parameters of the ELM model are optimally adjusted by the use of the farmland fertility optimization (FFO) algorithm. A comprehensive simulation analysis of the CVDL-WDC technique against a benchmark dataset reported enhanced outcomes over recent approaches in terms of several measures.
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: In this work, we propose a new, fully automated system for multiclass skin lesion localization and classification using deep learning. The main challenge is to address the problem of imbalanced data classes found in the HAM10000, ISBI2018, and ISBI2019 datasets. Initially, we consider a pretrained deep neural network model, DarkNet-19, and fine-tune the parameters of the third convolutional layer to generate the image gradients. All the visualized images are fused using a high-frequency approach along with a multilayered feed-forward neural network (HFaFFNN). The resultant image is further enhanced by employing a log-opening based activation function to generate a localized binary image. Later, two pretrained deep models, DarkNet-53 and NasNet-mobile, are employed and fine-tuned on the selected datasets. Transfer learning is then used to train both models, where the input feed is the generated localized lesion images. In the subsequent step, the extracted features are fused using the parallel max entropy correlation (PMEC) technique. To avoid overfitting and to select the most discriminant feature information, we implement a hybrid optimization algorithm called the entropy-kurtosis controlled whale optimization (EKWO) algorithm. The selected features are finally passed to the softmax classifier for the final classification. Three datasets are used for the experimental process, namely HAM10000, ISBI2018, and ISBI2019, achieving accuracies of 95.8%, 97.1%, and 85.35%, respectively.
Abstract: A recent work has shown that an ion trap quantum processor can speed up the decision making of a reinforcement learning agent. Its quantum advantage is observed when the external environment changes and the agent needs to relearn. One characteristic of this quantum hardware system discovered in that study is that it tends to overestimate the values used to determine the actions the agent will take. IBM's five-qubit superconducting quantum processor is a popular quantum platform. The aims of our study are twofold. First, we want to identify the hardware characteristics of IBM's 5Q quantum computer when running this learning agent, compared with the ion trap processor. Second, through careful analysis, we observe that the quantum circuit employed in the ion trap processor for this agent can be simplified. When tested on IBM's 5Q quantum processor, our simplified circuit demonstrates enhanced performance over the original circuit on one of the hard learning tasks investigated in the previous work. We also use IBM's quantum simulator when a good baseline is needed for comparing performance. As more and more quantum hardware devices move out of the laboratory and become generally available to the public, our work emphasizes that the features and constraints of quantum hardware can take a toll on the performance of quantum algorithms.
Abstract: In modern computer games, "bots" (intelligent, realistic agents) play a prominent role in a game's popularity in the market. Typically, bots are modeled using finite-state machines and programmed via simple conditional statements hard-coded into the bot's logic. Because such bots have become quite predictable to an experienced player, a player might lose interest in the game. We propose the use of a game-theoretic learning rule called fictitious play to improve the behavior of these computer game bots, making them less predictable and hence the game more enjoyable.
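Fictitious play itself is standard: each player repeatedly best-responds to the opponent's empirical action frequencies. A minimal sketch for the two-action zero-sum game matching pennies (illustrative, not the paper's bot code) shows the empirical play drifting toward the 50/50 mixed equilibrium, which is exactly what makes such a bot hard to predict.

```python
# Illustrative sketch of fictitious play in matching pennies: each round,
# every player best-responds to the opponent's empirical action counts.
# By Robinson's theorem, empirical frequencies in a zero-sum game converge
# to the mixed equilibrium (50/50 here), so play becomes unpredictable.

PAYOFF = [[1, -1],   # row player's payoff: +1 on a match, -1 on a mismatch
          [-1, 1]]
# Column player's payoff matrix, indexed [col_action][row_action].
COL_PAYOFF = [[-PAYOFF[r][c] for r in (0, 1)] for c in (0, 1)]

def best_response(opp_counts, payoffs):
    """Pick the action maximizing expected payoff vs. the empirical mix."""
    expected = [sum(payoffs[a][o] * opp_counts[o] for o in (0, 1))
                for a in (0, 1)]
    return max((0, 1), key=lambda a: expected[a])

def fictitious_play(rounds=10_000):
    row_counts = [1, 0]  # arbitrary initial belief about the row player
    col_counts = [0, 1]  # arbitrary initial belief about the column player
    for _ in range(rounds):
        r = best_response(col_counts, PAYOFF)      # row best-responds
        c = best_response(row_counts, COL_PAYOFF)  # column best-responds
        row_counts[r] += 1
        col_counts[c] += 1
    return row_counts[0] / sum(row_counts)  # row's empirical "heads" rate

print(round(fictitious_play(), 2))  # drifts toward the 0.5 equilibrium rate
```

A game bot would use the same loop online, tracking the human player's action counts and best-responding to them instead of to a fixed script.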
Funding: Supported by the Data and Artificial Intelligence Scientific Chair at Umm AlQura University.
Abstract: Networks play a significant role in everyday life, and cybersecurity has therefore developed into a critical field of study. The intrusion detection system (IDS) has become an essential information protection strategy that tracks the state of the software and hardware operating on a network. Despite years of development, current intrusion detection systems still face difficulties in improving detection precision, reducing false alarm rates, and identifying suspicious activities. To address these issues, many researchers have concentrated on designing intrusion detection systems that rely on machine learning approaches, since machine learning models can distinguish regular from irregular traffic accurately and efficiently. Artificial intelligence, and machine learning methods in particular, can thus be used to develop an intelligent intrusion detection framework. To achieve this objective, we propose an intrusion detection system based on a deep extreme learning machine (DELM), which first assesses security features to determine their prominence and then constructs an adaptive intrusion detection system focused on the important features. We then evaluated the viability of the suggested DELM-based intrusion detection system by conducting dataset assessments and evaluating performance factors to validate its reliability. The experimental results illustrate that the suggested framework outclasses traditional algorithms; indeed, it is of functional importance as well as scientific interest.