This paper describes the implementation of an Information Systems (IS) capstone project management course that is a requirement for graduating seniors in an undergraduate Computer Information Systems (CIS) program at a regional university. The description provides a model for the culmination of students' academic training in an IS curriculum that is part of a Bachelor of Business Administration (BBA) program in an accredited college of business. The course requires an application of technical and business skills, as well as systems development and project management skills, while students work on an actual IS project for an external sponsoring organization. The rationale for implementing this type of course includes the benefits it provides to the students, the project sponsors, and the IS department offering the course. Feedback from the course is used as an integral part of the CIS curriculum assessment process used for accreditation purposes.
This paper aims to design and implement an automatic heart disease diagnosis system using MATLAB. The Cleveland heart disease data set was used as the main database for training and testing the developed system. Two systems were developed to train and test the Cleveland data set. The first system is based on the Multilayer Perceptron (MLP) structure of the Artificial Neural Network (ANN), whereas the second system is based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) approach. Each system has two main modules, namely training and testing, where 80% and 20% of the Cleveland data set were randomly selected for training and testing purposes, respectively. Each system also has an additional case-based module, where the user inputs values for the 13 attributes specified by the Cleveland data set in order to test whether heart disease is present or absent in that particular patient. In addition, the effects of different values of important parameters were investigated in the ANN-based and Neuro-Fuzzy-based systems in order to select the parameters that yield the highest performance. The experimental work shows that the Neuro-Fuzzy system outperforms the ANN system on the training data set, where the accuracies were 100% and 90.74%, respectively. However, on the testing data set, the ANN system outperforms the Neuro-Fuzzy system, where the best accuracies were 87.04% and 75.93%, respectively.
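The 80%/20% split-and-evaluate protocol described above can be sketched in a few lines. This is a minimal illustration, not the paper's MATLAB implementation: the data here are a synthetic stand-in for the Cleveland set (13 attributes, binary label), and the one-hidden-layer network trained with plain gradient descent is only a simple MLP analogue.

```python
import numpy as np

# Hypothetical stand-in for the Cleveland data: 13 attributes per patient and
# a binary label (disease present/absent). The real data set must be loaded
# separately; this synthetic sample only illustrates the 80%/20% protocol.
rng = np.random.default_rng(0)
X = rng.normal(size=(270, 13))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Random 80% / 20% train/test split, as described in the abstract.
idx = rng.permutation(len(X))
cut = int(0.8 * len(X))
train, test = idx[:cut], idx[cut:]

# A minimal one-hidden-layer MLP trained with gradient descent.
W1 = rng.normal(scale=0.5, size=(13, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1));  b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(500):
    h = np.tanh(X[train] @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()           # predicted probabilities
    g = (p - y[train])[:, None] / len(train)   # cross-entropy output gradient
    W2 -= h.T @ g;  b2 -= g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)             # backprop through tanh
    W1 -= X[train].T @ gh;  b1 -= gh.sum(0)

def accuracy(split):
    p = sigmoid(np.tanh(X[split] @ W1 + b1) @ W2 + b2).ravel()
    return float(np.mean((p > 0.5) == y[split]))

train_acc, test_acc = accuracy(train), accuracy(test)
```

As in the abstract, training and testing accuracies are reported separately, since a model that fits the training split perfectly may still generalize worse.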
The rapid development of information and communication technology has blazed a trail in our learning, work, and lives. This study was conducted to ascertain the computer and internet literacy level of medical faculties' students. 171 first-year medical students from 4 different medical colleges of the University of Jordan participated in the study. A semi-structured questionnaire was used to collect the data, and the data analysis was done using SPSS, Version 17. The results indicated that most medical students have average or advanced knowledge of the basic use of the computer and the internet. Google was found to be the most commonly used search engine. The study also found that ICT (Information and Communication Technology) can be a useful tool in medical education, but the lack of time, internet connectivity, and resources is still a serious constraint.
The development of multimedia and digital imaging has led to a high quantity of data being required to represent modern imagery. This requires large disk space for storage and long transmission times over computer networks, both of which are relatively expensive. These factors establish the need for image compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image, yielding a compact representation and thereby reducing the image storage and transmission time requirements. The key idea is to remove the redundancy present within an image in order to reduce its size without affecting its essential information. This paper is concerned with lossless image compression. The proposed approach combines a number of existing techniques and works as follows: first, the well-known Lempel-Ziv-Welch (LZW) algorithm is applied to the image at hand. The output of the first step is forwarded to the second step, where the Bose, Chaudhuri and Hocquenghem (BCH) error detection and correction algorithm is used. To improve the compression ratio, the proposed approach applies the BCH algorithm repeatedly until "inflation" is detected. The experimental results show that the proposed algorithm achieves an excellent compression ratio without losing data when compared to standard compression algorithms.
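The LZW stage of the pipeline above is dictionary-based and fully lossless: repeated byte sequences are replaced by codes for previously seen strings. A minimal encoder (bytes in, code list out; the BCH stage is not shown) can be sketched as:

```python
def lzw_compress(data: bytes) -> list:
    """Minimal LZW encoder: emits a list of dictionary codes."""
    # Start with every single byte in the dictionary (codes 0..255).
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # extend the current match
        else:
            out.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = next_code  # learn the new string
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

# Classic example: 24 input bytes compress to 16 codes.
codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
```

Because the decoder can rebuild the same dictionary from the code stream alone, no information is lost, which is the property the paper's lossless pipeline depends on.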
As a chemical constituent, nicotine content has an important influence on the quality of tobacco leaves. Rapid and nondestructive quantitative analysis of nicotine is an important task in the tobacco industry. Near-infrared (NIR) spectroscopy has been widely used as an effective chemical composition analysis technique. In this paper, we propose a one-dimensional fully convolutional network (1D-FCN) model to quantitatively analyze the nicotine content of tobacco leaves using NIR spectroscopy data in a cloud environment. The 1D-FCN model uses one-dimensional convolution layers to directly extract complex features from sequential spectroscopy data. It consists of five convolutional layers and two fully connected layers, with the max-pooling layer replaced by a convolutional layer to avoid information loss. Cloud computing techniques are used to handle the increasing demand for large-scale data analysis and to implement data sharing and access. Experimental results show that the proposed 1D-FCN model can effectively extract the complex characteristics inside the spectrum and predict the nicotine content of tobacco leaves more accurately than other approaches. This research provides a deep learning foundation for the quantitative analysis of NIR spectral data in the tobacco industry.
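The core operation of such a network, a one-dimensional convolution sliding filters along a spectrum, can be sketched as follows. This is an illustrative single layer, not the paper's five-layer 1D-FCN; the toy spectrum, filter count, kernel width, and stride are all assumptions.

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """'Valid' 1-D convolution: x is (length,), kernels is (n_filters, k).
    Returns (n_filters, out_length) feature maps with ReLU activation.
    A strided convolution like this can stand in for pooling without
    discarding values the way max-pooling does."""
    n_filters, k = kernels.shape
    out_len = (len(x) - k) // stride + 1
    out = np.empty((n_filters, out_len))
    for i in range(out_len):
        window = x[i * stride : i * stride + k]
        out[:, i] = kernels @ window      # one dot product per filter
    return np.maximum(out, 0.0)           # ReLU

# A toy stand-in for an NIR spectrum (real spectra have hundreds of bands).
spectrum = np.sin(np.linspace(0, 6, 64))
kernels = np.random.default_rng(0).normal(size=(5, 7))
features = conv1d(spectrum, kernels, stride=2)
```

Stacking several such layers, as the 1D-FCN does, lets later filters respond to progressively wider spectral regions.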
Class diagrams and use case models are system models used to analyze, design, and model object-oriented systems. In this era of agile computing, service-oriented architecture has become increasingly popular for achieving efficient and agile business solutions that can accommodate the changes demanded by the business world. This paper proposes a methodology to identify services from a set of class diagrams and use case models in order to generate a service-oriented model. An extensive evaluation of the generated services has shown that they conform to the principles of Service-Oriented Architecture (SOA), and that the methodology provides a straightforward way to reuse the valuable business logic that resides within legacy applications when migrating to SOA-based systems.
With the continuous development of medical informatics and digital diagnosis, the classification of tuberculosis (TB) cases from computed tomography (CT) images of the lung based on deep learning is an important aid in clinical diagnosis and treatment. Due to its potential application in medical image classification, this task has received extensive research attention. Existing neural network techniques still struggle with extracting the global contextual information of images and with the network complexity required to achieve image classification. To address these issues, this paper proposes a lightweight medical image classification network that combines a Transformer with a convolutional neural network (CNN) for the classification of TB cases from lung CT. The method fuses a CNN module and a Transformer module, exploiting the advantages of both to accomplish a more accurate classification task. On the one hand, the CNN branch supplements the Transformer branch with basic local feature information at the low levels; on the other hand, at the middle and high levels of the model, the CNN branch provides the Transformer architecture with different local and global feature information, enhancing the model's ability to obtain feature information and improving the accuracy of image classification. A shortcut is used in each module of the network to counter the poor results caused by gradient divergence and to optimize the effectiveness of TB classification. The proposed lightweight model addresses the problem of long training times in TB classification of lung CT and improves classification speed. The proposed method was validated on a CT image data set provided by the First Hospital of Lanzhou University. The experimental results show that the proposed lightweight classification network for TB based on CT medical images of lungs can fully extract the feature information of the input images and obtain high-accuracy classification results.
Predicting the value of one or more variables from the values of other variables is a very important process in engineering experiments that involve large amounts of data which are difficult to obtain through direct measurement. Regression is one of the most important types of supervised machine learning, in which labeled data are used to build a prediction model; regression can be classified into three categories: linear, polynomial, and logistic. In this paper, different methods are implemented to solve the linear regression problem, where there is a linear relationship between the target and the predicted output. The various linear regression methods are analyzed using the Mean Square Error (MSE) calculated between the target values and the predicted outputs. A large set of regression samples is used to construct training datasets of selected sizes. A detailed comparison is performed between three methods, namely least-squares fit, the Feed-Forward Artificial Neural Network (FFANN), and the Cascade Feed-Forward Artificial Neural Network (CFFANN), and recommendations are given. The proposed method was tested on random data samples, and the results were compared with those of the most common method, linear multiple regression. It should be noted that the procedures for building and testing the neural network remain constant even if another data sample is used.
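The least-squares baseline and the MSE criterion used for comparison can be sketched directly. This is a generic illustration on synthetic data, not the paper's experiment; the coefficients, noise level, and sample size are assumptions.

```python
import numpy as np

# Synthetic linear data: y = 3*x1 - 2*x2 + 1 + small noise (a hypothetical
# sample; the paper draws its own random regression samples).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + 0.01 * rng.normal(size=200)

# Least-squares fit: append a bias column and solve min ||A c - y||^2.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Mean Square Error between target values and predicted outputs.
mse = float(np.mean((A @ coef - y) ** 2))
```

The same MSE computation applies unchanged to the FFANN and CFFANN predictions, which is what makes the three methods directly comparable.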
The timetabling problem is among the most difficult operational tasks and is an important step in raising industrial productivity, capability, and capacity. Such tasks are usually tackled with metaheuristic techniques that provide an intelligent way of suggesting solutions or supporting decision-making. Swarm intelligence techniques, including Particle Swarm Optimization (PSO), have proved to be effective examples. Recent experiments have shown that the PSO algorithm is reliable for timetabling in many applications such as educational and personnel timetabling, machine scheduling, etc. Finding an optimal solution is extremely challenging, but a sub-optimal solution using heuristics or metaheuristics can be guaranteed. This paper seeks to enhance the PSO algorithm for efficient timetabling, aiming to generate a feasible timetable within a reasonable time. The enhanced version is a hybrid dynamic adaptive PSO algorithm, tested on the round-robin tournament benchmark ITC2021, which is dedicated to sports timetabling. The competition includes several soft and hard constraints that must be satisfied in order to build a feasible or sub-optimal timetable, and consists of three categories of complexity, namely early, test, and middle instances. Results show that the proposed dynamic adaptive PSO obtained feasible timetables for almost all of the instances, where feasibility is measured by reducing the violation of hard constraints to zero. The performance of the dynamic adaptive PSO is evaluated by the computational time consumed to produce a feasible timetable, as well as by consistency and robustness. The dynamic adaptive PSO showed robust and consistent performance in producing a diversity of timetables in a reasonable computational time.
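The standard PSO loop underlying such work can be sketched as follows. This is the generic algorithm with fixed inertia and acceleration coefficients, not the paper's hybrid dynamic adaptive variant, and it minimizes a toy objective rather than a constrained timetabling cost; the bounds and coefficient values are illustrative assumptions.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal PSO minimising f over the box [-5, 5]^dim using the
    standard inertia / cognitive / social velocity update."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()       # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, -5, 5)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(f(g))

# Toy objective: the sphere function, minimised at the origin.
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```

In a timetabling setting, `f` would instead be a penalty function counting hard-constraint violations, and feasibility corresponds to driving that count to zero.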
Image recognition is widely used in different application areas such as shape recognition, gesture recognition, and eye recognition. In this research, we introduce image recognition using efficient invariant moments and Principal Component Analysis (PCA) for gray and color images, using different numbers of invariant moments. We use twelve moments for each gray image and Hu's seven moments for color images, reducing the dimensionality of the problem to 6 PCA components for gray images and 5 for color images, and hence reducing the recognition time, which is our main objective. PCA is derived from the Karhunen-Loeve transformation. Given an N-dimensional vector representation of each image, PCA finds a K-dimensional subspace whose basis vectors correspond to the maximum-variance directions in the original image space. This new subspace is normally of lower dimension (K ≪ N). Three known datasets are used: the Flower dataset, the Africans dataset, and the Shapes dataset, all of which have been used by many researchers.
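The dimensionality-reduction step described above can be sketched via eigendecomposition of the covariance matrix. This illustrates the generic PCA projection only; the random 12-column feature matrix stands in for actual moment features, which the paper computes from images.

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X onto the k principal components, i.e. the
    maximum-variance directions of the centered data."""
    Xc = X - X.mean(axis=0)                       # center the features
    cov = np.cov(Xc, rowvar=False)                # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :k]          # top-k directions
    return Xc @ components

# Hypothetical stand-in for a 12-moment feature matrix (one row per image).
rng = np.random.default_rng(2)
features = rng.normal(size=(100, 12))
reduced = pca_project(features, k=6)
```

Matching is then done in the 6-dimensional (gray) or 5-dimensional (color) subspace instead of the full moment space, which is where the recognition-time saving comes from.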
As a necessary step in modern drug development, finding a drug compound that can selectively bind to a specific protein is highly challenging and costly. Exploring drug-target interaction strength in terms of drug-target affinity (DTA) is an emerging and effective research approach for drug development. However, modeling drug-target interactions with deep learning is challenging, and few studies provide interpretable analysis of their models. This paper proposes a DTA prediction method (mutual transformer-drug target affinity, MT-DTA) with interactive learning and an autoencoder mechanism. MT-DTA builds a variational autoencoder system with a cascade structure of attention models and convolutional neural networks. It not only enhances the ability to capture the characteristic information of a single molecular sequence but also establishes the characteristic expression relationship for each substructure within that sequence. On this basis, a molecular information interaction module is constructed, which adds information interaction paths between molecular sequence pairs and complements the expression of correlations between molecular substructures. The performance of the proposed model was verified on two public benchmark datasets, KIBA and Davis, and the results confirm that the proposed model structure is effective in predicting DTA. Additionally, attention transformer models with different configurations can improve the feature expression of drug/protein molecules. The model predicts interaction strengths more accurately than state-of-the-art baselines. In addition, the diversity of drug/protein molecules can be expressed better than with existing methods such as SeqGAN and Co-VAE, generating more effective new drugs. The DTA value prediction module fuses the drug-target pair interaction information to output the predicted DTA value. This paper also theoretically proves that the proposed method maximises the evidence lower bound for the joint distribution of the DTA prediction model, which enhances the consistency of the probability distribution between actual and predicted values. The source code of the proposed method is available at https://github.com/Lamouryz/Code/tree/main/MT-DTA.
Internet usage has grown rapidly during the last decade in almost every country in the world, and in Jordan specifically; today millions of individuals are connected to the Internet, and the Internet has become the backbone of the information economy. It is used for social, commercial, political, and personal interactions. This study aims to investigate the attitudes of students at The University of Jordan towards using ICT (Information and Communication Technology). A semi-structured questionnaire was used to collect data on students' attitudes concerning the amount of Internet usage, the reasons for using the Internet, and how the Internet has impacted students' lives. The data analysis was done using SPSS, version 17. 536 students from different faculties (medical, humanities, and scientific) of the University of Jordan participated in the study. The results indicated that most students accessed the Internet before they attended university, that there is a positive attitude towards the Internet, and that they used it mainly for social websites, chatting, and information gathering. The slow speed of the Internet connection and the lack of ICT adoption in course syllabi are some of the constraints facing the students.
The emergence and popularity of blockchain, distributed ledger technology, distributed computing, and network security and trust techniques are significantly changing the operation and management of computing and communication systems, as these techniques have the potential to disrupt any domain involving coordination among autonomous resources without trusted third parties. Their applications include finance and payments (e.g., Facebook Libra), but also networks (e.g., power grids or telecom networks), computing (e.g., brokering of edge resources), the Internet of Things (e.g., supply chains or Industry 4.0), and service platforms (e.g., identity management). The market capitalization, investor appetite, and institutional coverage of cryptocurrency (as well as Bitcoin and blockchain) have all jumped exponentially, and the total market capitalization of the cryptocurrency market has increased significantly in the past three years. The applications of blockchain exhibit a variety of complicated problems and new requirements, which bring more open issues and challenges for artificial intelligence (AI) and related research.
Person re-identification has been a hot research issue in the field of computer vision. In recent years, with the maturing of the theory, a large number of excellent methods have been proposed. However, large-scale data sets and huge networks make training a time-consuming process, and the parameters and values generated during training also occupy substantial computer resources. We therefore apply a distributed cloud computing method to the person re-identification task. Using distributed data storage, pedestrian data sets and parameters are stored in cloud nodes. To speed up operational efficiency and increase fault tolerance, we add a data redundancy mechanism that copies and stores data blocks to different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete the training by model parallelism. Comparing and analyzing the accuracy and operation speed of the distributed model on the video-based dataset MARS shows that our distributed model has a faster training speed.
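The data-distribution idea of mapping blocks onto storage nodes around a hash ring can be sketched with a consistent-hash ring. This is a generic sketch, not the paper's hash loop optimization algorithm; the node names, virtual-node count, and block identifiers are illustrative assumptions.

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring assigning data blocks to storage nodes.
    Virtual nodes spread each physical node around the ring for balance."""
    def __init__(self, nodes, vnodes=50):
        self.ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, block_id):
        # First ring position at or after the block's hash (wrapping around).
        idx = bisect.bisect(self.keys, self._hash(block_id)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
assignments = {b: ring.node_for(b) for b in ("block-1", "block-2", "block-3")}
```

A scheme like this keeps block placement deterministic, and redundancy can be added by also storing each block on the next distinct node clockwise on the ring.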
Predicting stock prices has long been a widely studied topic across numerous disciplines. This study has three goals: first, to analyze how social media sentiment influences stock price predictions; second, to compare the effects of social media sentiment on stock price predictions before and during the pandemic; and third, to investigate the impact of the pandemic on stock prices across three major sectors: airlines, hotels, and restaurants. The research leverages three distinct types of data (stock prices, COVID-19 data, and social media data) to develop three separate feature sets for analyzing the impact of various factors on RNN-based stock prediction. The process begins with loading the relevant datasets and initializing a sequential model. The model is then built by adding an input layer, followed by an LSTM layer and one or more dense layers. After the model is compiled and trained, it is evaluated, and the results are visualized to assess the outcomes.
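Before a series can feed an LSTM layer, it must be sliced into fixed-length lookback windows with next-step targets. The sketch below shows only that preprocessing step on a hypothetical price series (the Keras-style model building described above is omitted, and the lookback length is an assumption):

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a price series into (samples, lookback) input windows and
    next-step targets, the usual preprocessing before an LSTM layer."""
    X = np.array([series[i : i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., np.newaxis], y   # add feature axis: (samples, lookback, 1)

# Hypothetical closing-price series; real inputs would be the stock,
# COVID-19, and sentiment feature sets described above.
prices = np.linspace(100.0, 120.0, 50)
X, y = make_windows(prices, lookback=10)
```

Extra feature sets (case counts, sentiment scores) extend the last axis from 1 to the number of features, leaving the windowing logic unchanged.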
Businesses have been using social media to promote products and services and to increase sales. This paper studies the impact of Facebook on real estate sales. First, we examine how realtors' activities on Facebook business pages are associated with real estate sales. Then, we include time lags in the analysis, because a time lag can be expected between activities on Facebook and a resulting real estate transaction. For the collected datasets, the results suggest that: (1) the total numbers of Facebook likes, links, and stories are positively associated with real estate sales; (2) the average sentiment score of Facebook posts is negatively associated with real estate sales; and (3) the influence of activities on Facebook has a time-lag effect on real estate sales. These findings can be used by real estate stakeholders to promote and potentially forecast sales.
As one of the most promising machine learning frameworks to emerge in recent years, federated learning (FL) has received much attention. The main idea of centralized FL is to train a global model by aggregating local model parameters while keeping users' private data local. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, in which a malicious server collects local model gradients and uses them to recover the private data stored on the client. In this paper, we propose a decentralized federated learning against attacks (DEFEAT) framework and use it to defend against the gradient attack. The decentralized structure adopted in this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, participating clients only need to communicate with their single-hop neighbors to learn the global model, and the model accuracy and communication cost during training are well balanced. Through a series of experiments and detailed case studies on real datasets, we evaluate the excellent model performance of DEFEAT and its privacy-preservation capability against gradient attacks.
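The parameter-aggregation step at the heart of FL is a size-weighted average of client models (FedAvg-style). The sketch below shows that generic step, not DEFEAT's peer-to-peer protocol; the client weights and dataset sizes are toy values.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Size-weighted averaging of client model parameters (FedAvg-style).
    In centralized FL a server runs this over all clients; in a
    decentralized scheme each client runs it over its single-hop
    neighbours' models instead."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients, each holding a 4-parameter model.
weights = [np.array([1.0, 2.0, 3.0, 4.0]),
           np.array([3.0, 2.0, 1.0, 0.0]),
           np.array([2.0, 2.0, 2.0, 2.0])]
sizes = [100, 100, 200]
global_model = federated_average(weights, sizes)
```

Only model parameters cross the network in this step; the raw training data never leaves a client, which is the privacy property gradient attacks try to undermine.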
文摘This paper describes the implementation of an Information Systems (IS) capstone project management course that is a requirement for graduating seniors in an undergraduate Computer Information Systems (CIS) program at a regional university. The description provides a model which includes the culmination of students' academic training in an IS curriculum which is part of a Bachelor of Business Administration (BBA) program in an accredited college of business. The course requires an application of technical and business skills, as well as systems development and project management skills--while students are working on an actual IS project for an external sponsoring organization. Rationale for implementing this type of course includes the benefits it provides to the students, the project sponsors, and the IS department providing the course. Feedback from the course is used as integral part of the C1S curriculum assessment process used for accreditation purposes.
文摘This paper aims to design and implement an automatic heart disease diagnosis system using?MATLAB. The Cleveland data set for heart diseases was used as the main database for training and testing the developed system. In order to train and test the Cleveland data set, two systems were developed. The first system is based on the Multilayer Perceptron (MLP) structure on the Artificial Neural Network (ANN), whereas the second system is based on the Adaptive Neuro-Fuzzy Inference Systems (ANFIS) approach. Each system has two main modules, namely, training and testing,?where 80% and 20% of the Cleveland data set were randomly selected for training and testing?purposes respectively. Each system also has an additional module known as case-based module,?where the user has to input values for 13 required attributes as specified by the Cleveland data set,?in order to test the status of the patient whether heart disease is present or absent from that particular patient. In addition, the effects of different values for important parameters were investigated in the ANN-based and Neuro-Fuzzy-based systems in order to select the best parameters that obtain the highest performance. Based on the experimental work, it is clear that the Neuro-Fuzzy system outperforms the ANN system using the training data set, where the accuracy for each system was 100% and 90.74%, respectively. However, using the testing data set, it is clear that the ANN system outperforms the Neuro-Fuzzy system, where the best accuracy for each system was 87.04% and 75.93%, respectively.
文摘The rapid development of information communication technology blazed a trail in our learning, work, and lives. This study was conducted to ascertain the computer and internet literacy level of medical faculties’ students. 171 first-year medical students from 4 different medical colleges of the University of Jordan participated in the study. A semi-structured questionnaire was used to collect the data and the data analysis was done by using SPSS, Version 17. The results indicated that most medical students have average 5 or advance knowledge on the basic use of computer and internet. Google was found to be the most commonly used search engine. Also the study found that ICT (Information and Communication Technology) can be a useful tool in medical education but the lack of time, internet connectivity and resources is still a serious constraint.
文摘The development of multimedia and digital imaging has led to high quantity of data required to represent modern imagery. This requires large disk space for storage, and long time for transmission over computer networks, and these two are relatively expensive. These factors prove the need for images compression. Image compression addresses the problem of reducing the amount of space required to represent a digital image yielding a compact representation of an image, and thereby reducing the image storage/transmission time requirements. The key idea here is to remove redundancy of data presented within an image to reduce its size without affecting the essential information of it. We are concerned with lossless image compression in this paper. Our proposed approach is a mix of a number of already existing techniques. Our approach works as follows: first, we apply the well-known Lempel-Ziv-Welch (LZW) algorithm on the image in hand. What comes out of the first step is forward to the second step where the Bose, Chaudhuri and Hocquenghem (BCH) error correction and detected algorithm is used. To improve the compression ratio, the proposed approach applies the BCH algorithms repeatedly until “inflation” is detected. The experimental results show that the proposed algorithm could achieve an excellent compression ratio without losing data when compared to the standard compression algorithms.
文摘As one chemical composition,nicotine content has an important influence on the quality of tobacco leaves.Rapid and nondestructive quantitative analysis of nicotine is an important task in the tobacco industry.Near-infrared(NIR)spectroscopy as an effective chemical composition analysis technique has been widely used.In this paper,we propose a one-dimensional fully convolutional network(1D-FCN)model to quantitatively analyze the nicotine composition of tobacco leaves using NIR spectroscopy data in a cloud environment.This 1D-FCN model uses one-dimensional convolution layers to directly extract the complex features from sequential spectroscopy data.It consists of five convolutional layers and two full connection layers with the max-pooling layer replaced by a convolutional layer to avoid information loss.Cloud computing techniques are used to solve the increasing requests of large-size data analysis and implement data sharing and accessing.Experimental results show that the proposed 1D-FCN model can effectively extract the complex characteristics inside the spectrum and more accurately predict the nicotine volumes in tobacco leaves than other approaches.This research provides a deep learning foundation for quantitative analysis of NIR spectral data in the tobacco industry.
Abstract: Class diagrams and use case models are system models used to analyze, design, and model object-oriented systems. In this era of agile computing, service-oriented architecture has become increasingly popular for achieving efficient and agile business solutions that can accommodate the changes demanded by the business world. This paper proposes a methodology to identify services from a set of class diagrams and use case models in order to generate a service-oriented model. An extensive evaluation of the generated services has shown that they conform to the principles of Service-Oriented Architecture (SOA), and that the methodology provides a straightforward way to reuse the valuable business logic residing within legacy applications when migrating to SOA-based systems.
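As a rough illustration of service identification (not the paper's actual methodology), one simple heuristic is to cluster use cases that touch the same classes into coarse-grained service candidates. The use-case and class names below are hypothetical:

```python
# Hypothetical inputs: each use case names the classes that realize it.
use_case_classes = {
    "PlaceOrder":   {"Order", "Customer", "Inventory"},
    "TrackOrder":   {"Order", "Shipment"},
    "ManageStock":  {"Inventory", "Supplier"},
    "RegisterUser": {"Account", "Profile"},
}

def candidate_services(use_case_classes):
    """Greedy grouping: use cases sharing business entities join one service."""
    services = []
    for uc, classes in use_case_classes.items():
        for svc in services:
            if svc["classes"] & classes:       # shared classes -> same candidate
                svc["use_cases"].add(uc)
                svc["classes"] |= classes
                break
        else:
            services.append({"use_cases": {uc}, "classes": set(classes)})
    return services

svcs = candidate_services(use_case_classes)
```

Here the three order/stock use cases collapse into one candidate service because they share the Order and Inventory classes, while RegisterUser forms its own service.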
Abstract: With the continuous development of medical informatics and digital diagnosis, the classification of tuberculosis (TB) cases from computed tomography (CT) images of the lung based on deep learning is an important aid in clinical diagnosis and treatment. Because of its potential application in medical image classification, this task has received extensive research attention. Existing neural network techniques still face challenges in extracting global contextual information from images and in managing network complexity for image classification. To address these issues, this paper proposes a lightweight medical image classification network that combines a Transformer with a convolutional neural network (CNN) for classifying TB cases from lung CT. The method consists mainly of a fusion of a CNN module and a Transformer module, exploiting the advantages of both to accomplish a more accurate classification task. On the one hand, the CNN branch supplements the Transformer branch with basic local feature information at the low levels; on the other hand, at the middle and high levels of the model, the CNN branch also provides the Transformer architecture with different local and global feature information, enhancing the model's ability to capture features and improving the accuracy of image classification. A shortcut connection is used in each module of the network to mitigate gradient divergence and to optimize the effectiveness of TB classification. The proposed lightweight model addresses the problem of long training times in TB classification from lung CT and improves classification speed. The method was validated on a CT image data set provided by the First Hospital of Lanzhou University. The experimental results show that the proposed lightweight classification network can fully extract the feature information of the input images and obtain high-accuracy classification results.
Abstract: Predicting the value of one or more variables from the values of other variables is a very important process in engineering experiments that involve large amounts of data that are difficult to obtain through direct measurement. Regression is one of the most important types of supervised machine learning, in which labeled data are used to build a prediction model; regression can be classified into three categories: linear, polynomial, and logistic. In this paper, different methods are implemented to solve the linear regression problem, where there is a linear relationship between the target and the predicted output. The various methods are analyzed using the Mean Square Error (MSE) computed between the target values and the predicted outputs. A large set of regression samples is used to construct training datasets of selected sizes. A detailed comparison is performed among three methods: least-squares fitting, a Feed-Forward Artificial Neural Network (FFANN), and a Cascade Feed-Forward Artificial Neural Network (CFFANN), and recommendations are given. The proposed method was tested on random data samples, and the results were compared with those of the most common method, linear multiple regression. It should be noted that the procedures for building and testing the neural networks remain the same even if another data sample is used.
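The least-squares baseline in the comparison above has a simple closed form for a single predictor. A minimal sketch, with the MSE used as the comparison metric:

```python
def least_squares_line(xs, ys):
    """Closed-form least-squares fit y = a*x + b minimizing the MSE."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def mse(xs, ys, a, b):
    """Mean squared error between targets and the fitted line."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy data lying exactly on y = 2x + 1, so the fit should recover a=2, b=1.
a, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
err = mse([0, 1, 2, 3], [1, 3, 5, 7], a, b)
```

The neural network variants in the paper minimize the same MSE objective, but iteratively via training rather than in closed form.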
Funding: Supported by the Deanship of Scientific Research at Imam Abdulrahman Bin Faisal University under Project Number 2019-383-ASCS.
Abstract: Timetabling is among the most difficult operational tasks and is an important step in raising industrial productivity, capability, and capacity. Such tasks are usually tackled with metaheuristic techniques that provide an intelligent way of suggesting solutions or supporting decision-making. Swarm intelligence techniques, including Particle Swarm Optimization (PSO), have proved to be effective examples. Recent experiments have shown that the PSO algorithm is reliable for timetabling in many applications, such as educational and personnel timetabling and machine scheduling. Finding an optimal solution is extremely challenging, but heuristics and metaheuristics can reliably produce a sub-optimal one. This paper seeks to enhance the PSO algorithm for efficient timetabling, aiming to generate a feasible timetable within a reasonable time. The enhanced version is a hybrid dynamic adaptive PSO algorithm, tested on ITC2021, a competition dedicated to sports timetabling of round-robin tournaments. The competition includes several soft and hard constraints that must be satisfied to build a feasible or sub-optimal timetable, and consists of three categories of complexity: early, test, and middle instances. Results show that the proposed dynamic adaptive PSO obtained feasible timetables for almost all of the instances, where feasibility is measured by reducing the violation of hard constraints to zero. The performance of the dynamic adaptive PSO is evaluated by the computational time needed to produce a feasible timetable, as well as by its consistency and robustness. The dynamic adaptive PSO showed robust and consistent performance in producing a diversity of timetables within a reasonable computational time.
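A minimal PSO loop captures the core of the approach, though the dynamic adaptive and hybrid elements of the paper's algorithm are not reproduced here. The sphere function stands in for a real timetabling cost that would count hard-constraint violations:

```python
import random

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: particles are pulled toward their personal and the global best."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in cost: sum of squares (a timetabling cost would count constraint violations).
best, best_cost = pso(lambda p: sum(x * x for x in p), dim=3)
```

In the timetabling setting, a particle encodes a candidate timetable and the cost weighs hard-constraint violations heavily, so a zero-violation particle corresponds to a feasible timetable.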
Abstract: Image recognition is widely used in different application areas such as shape recognition, gesture recognition, and eye recognition. In this research, we introduce image recognition using efficient invariant moments and Principal Component Analysis (PCA) for gray and color images, with different numbers of invariant moments. We use twelve moments for each gray image and Hu's seven moments for color images, and PCA is then employed to reduce the dimensionality of the problem to 6 principal components for gray images and 5 for color images, thereby reducing the recognition time, which is our main objective. PCA is derived from the Karhunen-Loève transform. Given an N-dimensional vector representation of each image, PCA finds a K-dimensional subspace whose basis vectors correspond to the maximum-variance directions in the original image space; this new subspace is normally of lower dimension (K < N). Three well-known datasets are used: the Flower dataset, the Africans dataset, and the Shapes dataset, all of which have been used by many researchers.
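Hu's seven invariants are built from central moments, which are computed about the image centroid and are therefore translation invariant. A minimal sketch on a toy intensity grid (the cross shape below is illustrative):

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D intensity grid, taken about the centroid."""
    m00 = sum(sum(row) for row in img)
    m10 = sum(x * v for row in img for x, v in enumerate(row))
    m01 = sum(y * v for y, row in enumerate(img) for v in row)
    cx, cy = m10 / m00, m01 / m00            # intensity centroid
    return sum((x - cx) ** p * (y - cy) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

shape = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
# The same cross translated inside a larger grid.
shifted = [[0, 0, 0, 0],
           [0, 0, 1, 0],
           [0, 1, 1, 1],
           [0, 0, 1, 0]]
```

Because both grids yield identical central moments, features built from them recognize a shape regardless of where it sits in the image; Hu's invariants further normalize these moments to add scale and rotation invariance.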
Funding: Supported by the Cooperation Project Between Undergraduate Universities in Chongqing and Institutions Affiliated to the Chinese Academy of Sciences (No. HZ2021018), the National Natural Science Foundation of China (Grant 62276037), and the Special Key Project of Chongqing Technology Innovation and Application Development (CSTB2022TIAD-KPX0039).
Abstract: As a necessary step in modern drug development, finding a drug compound that can selectively bind to a specific protein is highly challenging and costly. Exploring drug-target interaction strength in terms of drug-target affinity (DTA) is an emerging and effective research approach for drug development. However, it is difficult to model drug-target interactions with deep learning, and few studies provide interpretable analyses of their models. This paper proposes a DTA prediction method (mutual transformer-drug target affinity, MT-DTA) with interactive learning and an autoencoder mechanism. The proposed MT-DTA builds a variational autoencoder system with a cascade structure of attention models and convolutional neural networks. It not only enhances the ability to capture the characteristic information of a single molecular sequence but also establishes the characteristic expression relationship for each substructure within that sequence. On this basis, a molecular information interaction module is constructed, which adds information interaction paths between molecular sequence pairs and complements the expression of correlations between molecular substructures. The performance of the proposed model was verified on two public benchmark datasets, KIBA and Davis, and the results confirm that the proposed model structure is effective in predicting DTA. Additionally, attention transformer models with different configurations can improve the feature expression of drug and protein molecules. The model predicts interaction strengths more accurately than state-of-the-art baselines, and the diversity of drug and protein molecules is expressed better than by existing methods such as SeqGAN and Co-VAE, allowing more effective new drugs to be generated. The DTA value prediction module fuses the drug-target pair interaction information to output the predicted DTA value. Furthermore, this paper theoretically proves that the proposed method maximizes the evidence lower bound of the joint distribution of the DTA prediction model, which enhances the consistency of the probability distributions between actual and predicted values. The source code of the proposed method is available at https://github.com/Lamouryz/Code/tree/main/MT‐DTA.
Abstract: Internet usage has grown rapidly during the last decade in almost every country in the world, and in Jordan specifically; today, millions of individuals are connected, and the Internet has become the backbone of the information economy. It is used for social, commercial, political, and personal interactions. This study investigates the attitudes of students at the University of Jordan towards using ICT (Information and Communication Technology). A semi-structured questionnaire was used to collect data on students' attitudes concerning the amount of Internet usage, the reasons for using the Internet, and how the Internet has impacted students' lives. The data analysis was performed using SPSS, version 17. A total of 536 students from different faculties (medical, humanities, and scientific) of the University of Jordan participated in the study. The results indicate that most students accessed the Internet before they attended university, that there is a positive attitude towards the Internet, and that students use it mainly for social websites, chatting, and information gathering. The slow speed of Internet connections and the lack of ICT adoption in course syllabi are some of the constraints facing the students.
Abstract: The emergence and popularity of blockchain, distributed ledger technology, distributed computing, and network security and trust techniques are significantly changing the operation and management of computing and communication systems, as these techniques have the potential to disrupt any domain involving coordination among autonomous resources without trusted third parties. These techniques and their applications include finance and payments (e.g., Facebook Libra), but also networks (e.g., power grids or telecom networks), computing (e.g., brokering of edge resources), the Internet of Things (e.g., supply chains or Industry 4.0), and service platforms (e.g., identity management). The market capitalization, investor appetite, and institutional coverage of cryptocurrency (as well as Bitcoin and blockchain) have all jumped exponentially, and the total market capitalization of the cryptocurrency market has increased significantly in the past three years. The applications of blockchain exhibit a variety of complicated problems and new requirements, which bring more open issues and challenges for artificial intelligence (AI) and related research.
Funding: Supported by the Common Key Technology Innovation Special of Key Industries of the Chongqing Science and Technology Commission under Grant No. cstc2017zdcy-zdyfX0067.
Abstract: Person re-identification has been a hot research issue in the field of computer vision, and in recent years, as the theory has matured, a large number of excellent methods have been proposed. However, large-scale data sets and huge networks make training a time-consuming process, and the parameters and their values generated during training also take up considerable computing resources. We therefore apply a distributed cloud computing method to the person re-identification task. Using distributed data storage, pedestrian data sets and parameters are stored on cloud nodes. To speed up operational efficiency and increase fault tolerance, we add a data redundancy mechanism that copies and stores data blocks on different nodes, and we propose a hash loop optimization algorithm to optimize the data distribution process. Moreover, we assign different layers of the re-identification network to different nodes to complete the training through model parallelism. By comparing and analyzing the accuracy and speed of the distributed model on the video-based dataset MARS, the results show that our distributed model trains faster.
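The data distribution step can be illustrated with a generic consistent-hash ring that places redundant replicas on successor nodes; this is a standard technique and an assumption here, not necessarily the paper's hash loop algorithm. Node and block names are hypothetical:

```python
import hashlib
from bisect import bisect

def _h(key: str) -> int:
    """Stable hash of a string key onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring: blocks map to nodes; replicas go to successor nodes."""
    def __init__(self, nodes, vnodes=50):
        # Virtual nodes smooth out the load across physical nodes.
        self.ring = sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def place(self, block: str, replicas=2):
        """Return the distinct nodes holding the block and its redundant copies."""
        i = bisect(self.keys, _h(block)) % len(self.ring)
        owners = []
        while len(owners) < replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in owners:
                owners.append(node)
            i += 1
        return owners

ring = HashRing(["node-a", "node-b", "node-c"])
owners = ring.place("block-17", replicas=2)
```

Because only keys adjacent to a failed or added node move, this placement keeps redistribution cheap while the redundancy mechanism keeps every block on multiple nodes.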
Abstract: Predicting stock prices has long been a widely studied topic across numerous disciplines. The goals of this study are, first, to analyze how social media sentiment influences stock price predictions; second, to compare the effects of social media sentiment on stock price predictions before and during the pandemic; and third, to investigate the impact of the pandemic on stock prices across three major sectors: airlines, hotels, and restaurants. This research leverages three distinct types of data (stock prices, COVID-19 data, and social media data) to develop three separate feature sets for analyzing the impact of various factors on RNN-based stock prediction. The process begins by loading the relevant datasets and initializing a sequential model. The model is then built by adding an input layer, followed by an LSTM layer and one or more dense layers. After the model is compiled and trained, it is evaluated, and the results are visualized to assess the outcomes.
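Before the sequential model is built, the price series must be reshaped into supervised (window, next value) pairs for the LSTM. A minimal sketch of that preparation step, with illustrative prices; in practice the sentiment and COVID-19 features would be appended to each timestep:

```python
def make_windows(series, lookback):
    """Turn a price series into (window, next-value) pairs for RNN training."""
    xs, ys = [], []
    for i in range(len(series) - lookback):
        xs.append(series[i:i + lookback])   # inputs: the last `lookback` prices
        ys.append(series[i + lookback])     # target: the next price
    return xs, ys

prices = [101.2, 102.5, 101.8, 103.1, 104.0, 103.6, 105.2]  # illustrative closes
x_train, y_train = make_windows(prices, lookback=3)
```

Each `x_train` row then feeds the input layer, the LSTM layer consumes the timestep dimension, and the final dense layer regresses the next price.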
Abstract: Businesses have been using social media to promote products and services and to increase sales. This paper studies the impact of Facebook on real estate sales. First, we examine how realtors' activities on Facebook business pages are associated with real estate sales. Then, we include time lags in the analysis, because a time lag can be expected between activities on Facebook and a resulting real estate transaction. For the collected datasets, the results suggest that: (1) the total numbers of Facebook likes, links, and stories are positively associated with real estate sales; (2) the average sentiment score of Facebook posts is negatively associated with real estate sales; and (3) the influence of Facebook activities on real estate sales has a time lag effect. These findings can be used by real estate stakeholders to promote and potentially forecast sales.
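The time-lag analysis can be illustrated by correlating the activity series against a lag-shifted sales series; the weekly granularity and the use of Pearson correlation here are assumptions for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(activity, sales, lag):
    """Correlate activity at week t with sales at week t + lag."""
    return pearson(activity[:len(activity) - lag] if lag else activity, sales[lag:])

activity = [1, 3, 2, 5, 4, 6, 8]   # weekly Facebook activity counts (illustrative)
sales = [0, 1, 3, 2, 5, 4, 6]      # sales mirror activity one week later
r0 = lagged_correlation(activity, sales, 0)
r1 = lagged_correlation(activity, sales, 1)   # correlation peaks at lag 1
```

Scanning the correlation over several candidate lags, as sketched here for lags 0 and 1, is one simple way to expose the time-lag effect reported above.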
Funding: Partially supported by the U.S. National Science Foundation (1912753, 2011845).
Abstract: As one of the most promising machine learning frameworks to emerge in recent years, federated learning (FL) has received a great deal of attention. The main idea of centralized FL is to train a global model by aggregating local model parameters while keeping users' private data local. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, in which a malicious server collects local model gradients and uses them to recover the private data stored on the client. In this paper, we propose a DEcentralized FEderated learning against ATtacks (DEFEAT) framework and use it to defend against the gradient attack. The decentralized structure adopted in this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, participating clients only need to communicate with their single-hop neighbors to learn the global model, and the model accuracy and communication cost during training are well balanced. Through a series of experiments and detailed case studies on real datasets, we evaluate the excellent model performance of DEFEAT and its ability to preserve privacy against gradient attacks.
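The neighbor-only communication pattern can be sketched as a gossip-averaging round: each client averages its parameter vector with those of its single-hop neighbors, and repeated rounds converge to the global average without any central server. The ring topology and toy parameters below are illustrative, not the paper's exact protocol:

```python
def gossip_round(models, neighbors):
    """One decentralized round: each client averages its parameters
    with those of its single-hop neighbors (no central server)."""
    return {
        client: [sum(vals) / (len(neighbors[client]) + 1)
                 for vals in zip(params, *(models[n] for n in neighbors[client]))]
        for client, params in models.items()
    }

# Ring of four clients: each talks only to its two single-hop neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
models = {0: [4.0, 1.0], 1: [0.0, 3.0], 2: [2.0, 1.0], 3: [2.0, 3.0]}
for _ in range(30):
    models = gossip_round(models, neighbors)
# All clients approach the global average [2.0, 2.0].
```

Because raw gradients are never shipped to a single aggregator, a malicious server has no central vantage point from which to mount the gradient attack described above.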