Visual question answering (VQA) is a multimodal task, involving a deep understanding of the image scene and the question's meaning and capturing the relevant correlations between both modalities to infer the appropriate answer. In this paper, we propose a VQA system intended to answer yes/no questions about real-world images, in Arabic. To support a robust VQA system, we work in two directions: (1) using deep neural networks to semantically represent the given image and question in a fine-grained manner, namely ResNet-152 and Gated Recurrent Units (GRU); (2) studying the role of the utilized multimodal bilinear pooling fusion technique in the trade-off between the model complexity and the overall model performance. Some fusion techniques could significantly increase the model complexity, which seriously limits their applicability for VQA models. So far, there is no evidence of how efficient these multimodal bilinear pooling fusion techniques are for VQA systems dedicated to yes/no questions. Hence, a comparative analysis is conducted between eight bilinear pooling fusion techniques, in terms of their ability to reduce the model complexity and improve the model performance in this case of VQA systems. Experiments indicate that these multimodal bilinear pooling fusion techniques have improved the VQA model's performance, reaching a best performance of 89.25%. Further, experiments have proven that the number of answers in the developed VQA system is a critical factor that affects the effectiveness of these multimodal bilinear pooling techniques in achieving their main objective of reducing the model complexity. The Multimodal Local Perception Bilinear Pooling (MLPB) technique has shown the best balance between the model complexity and its performance for VQA systems designed to answer yes/no questions.
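To illustrate why low-rank bilinear pooling tames model complexity, the sketch below contrasts the parameter count of a full bilinear interaction with an MLB-style Hadamard fusion of projected features. The dimensions and the single logistic yes/no head are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not from the paper):
d_img, d_q, d_joint = 2048, 512, 256  # ResNet-152 feature, GRU state, joint dim

v = rng.standard_normal(d_img)   # image feature (e.g., pooled CNN output)
q = rng.standard_normal(d_q)     # question feature (e.g., final GRU hidden state)

# Full bilinear pooling would need a d_img x d_q x d_joint tensor
# (~268M parameters here); low-rank pooling factorizes it into two
# projection matrices and a Hadamard (element-wise) product:
U = rng.standard_normal((d_img, d_joint)) * 0.01
V = rng.standard_normal((d_q, d_joint)) * 0.01

z = np.tanh(U.T @ v) * np.tanh(V.T @ q)   # fused joint representation

# A yes/no head is then a single logistic unit on z:
w = rng.standard_normal(d_joint) * 0.1
p_yes = 1.0 / (1.0 + np.exp(-(w @ z)))

full_params = d_img * d_q * d_joint
lowrank_params = d_img * d_joint + d_q * d_joint
print(z.shape, full_params // lowrank_params)
```

The low-rank factorization here uses roughly 400x fewer parameters than the full bilinear tensor, which is the complexity/performance trade-off the comparison in the abstract is probing.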
Recently, multicore systems have used Dynamic Voltage/Frequency Scaling (DV/FS) technology to allow cores to operate at different voltages and/or frequencies than other cores, to save power and enhance performance. In this paper, an effective and reliable hybrid model to reduce the energy and makespan in multicore systems is proposed. The proposed hybrid model enhances and integrates the greedy approach with dynamic programming to achieve optimal voltage/frequency (Vmin/F) levels. Then, the allocation process is applied based on the available workloads. The hybrid model consists of three stages: the first stage gets the optimum safe voltage, the second stage sets the level of energy efficiency, and the third is the allocation stage. Experimental results on various benchmarks show that the proposed model can generate optimal solutions to save energy while minimizing the makespan penalty. Comparisons with other competitive algorithms show that the proposed model provides on average a 48% improvement in energy saving and achieves an 18% reduction in computation time while ensuring a high degree of system reliability.
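A minimal sketch of how dynamic programming can trade voltage/frequency levels against a makespan budget, assuming the textbook energy model E ≈ C·V²·cycles (with C = 1). The tasks, levels, and deadline below are invented numbers, not the paper's benchmarks or its actual hybrid algorithm.

```python
from functools import lru_cache

# Each task needs `cycles` cycles; each (V, f) level gives
#   time = cycles / f   and   energy ~ V^2 * cycles  (taking C = 1).
tasks = [4, 6]                                  # cycles per task (toy values)
levels = {"high": (1.0, 2), "low": (0.8, 1)}    # name: (voltage, frequency)
DEADLINE = 7                                    # makespan budget (time units)

@lru_cache(maxsize=None)
def best(i, budget):
    """Minimum energy for tasks i.. within `budget` time units."""
    if i == len(tasks):
        return 0.0, ()
    best_e, best_plan = float("inf"), None
    for name, (v, f) in levels.items():
        t = tasks[i] / f
        if t <= budget:                         # level is feasible for the slack
            e_rest, plan = best(i + 1, budget - t)
            e = v * v * tasks[i] + e_rest
            if e < best_e:
                best_e, best_plan = e, (name,) + plan
    return best_e, best_plan

energy, plan = best(0, DEADLINE)
print(energy, plan)
```

With these numbers the DP lowers the voltage on the first task and keeps the second at full speed, spending exactly the makespan slack to save energy.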
Diagnoses of heart diseases can be done effectively on long-term recordings of ECG signals that preserve the signals' morphologies. In these cases, the volume of the ECG data produced by the monitoring systems grows significantly. To make mobile healthcare possible, the need for efficient ECG signal compression algorithms to store and/or transmit the signal efficiently has been rising exponentially. Currently, the ECG signal is acquired at the Nyquist rate or higher, thus introducing redundancies between adjacent heartbeats due to its quasi-periodic structure. Existing compression methods remove these redundancies to achieve compression and facilitate transmission of the patient's imperative information. Based on the fact that these signals can be approximated by a linear combination of a few coefficients taken from different bases, an alternative new compression scheme based on Compressive Sensing (CS) has been proposed. CS provides a new approach concerned with signal compression and recovery by exploiting the fact that the ECG signal can be reconstructed by acquiring a relatively small number of samples in the "sparse" domains through well-developed optimization procedures. In this paper, a single-lead ECG compression method has been proposed based on improving the signal sparsity through the extraction of the signal's significant features. The proposed method starts with a preprocessing stage that detects the peaks and periods of the Q, R and S waves of each beat. Then, the QRS-complex for each signal beat is estimated. The estimated QRS-complexes are subtracted from the original ECG signal, and the resulting error signal is compressed using the CS technique. Throughout this process, DWT sparsifying dictionaries have been adopted. The performance of the proposed algorithm, in terms of the reconstructed signal quality and compression ratio, is evaluated by adopting a DWT spatial domain basis applied to ECG records extracted from the MIT-BIH Arrhythmia Database. The results indicate that an average compression ratio of 11:1 with PRD1 = 1.2% is obtained. Moreover, the quality of the retrieved signal is guaranteed, and the compression ratio achieved is an improvement over those obtained by previously reported algorithms. Simulation results suggest that CS should be considered as an acceptable methodology for ECG compression.
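The CS encode/decode loop described above can be sketched with a random sensing matrix and Orthogonal Matching Pursuit as a stand-in recovery solver. The paper uses DWT dictionaries on the QRS-subtracted residual; the synthetic sparse "error signal", the sizes, and the solver below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: a k-sparse "error signal" of length n, sensed with m << n
# random linear measurements.
n, m, k = 128, 40, 3
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k) + 2.0

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # compressed measurements

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        idx.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(err)
```

With 40 measurements for a 3-sparse length-128 signal, the greedy solver recovers the signal to numerical precision, which is the "small number of samples" property the abstract relies on.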
Sentiment analysis attracts the attention of Egyptian decision-makers in the education sector. It offers a viable method to assess education quality services based on students' feedback and provides an understanding of their needs. As machine learning techniques offer automated strategies to process big data derived from social media and other digital channels, this research uses a dataset of tweet sentiments to assess a few machine learning techniques. After dataset preprocessing to remove symbols, the necessary stemming and lemmatization are performed for feature extraction. This is followed by several machine learning techniques and a proposed Long Short-Term Memory (LSTM) classifier optimized by the Salp Swarm Algorithm (SSA), and the corresponding performance is measured. Then, the validity and accuracy of commonly used classifiers, such as the Support Vector Machine, Logistic Regression classifier, and Naive Bayes classifier, were reviewed. Moreover, the LSTM based on the SSA classification model was compared with Support Vector Machine (SVM), Logistic Regression (LR), and Naive Bayes (NB). Finally, as the LSTM based on SSA achieved the highest accuracy, it was applied to predict the sentiments of students' feedback and evaluate their association with the course outcome evaluations for education quality purposes.
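The symbol-removal and stemming step can be sketched as follows. The regex cleaning and the naive suffix stripper are simplified stand-ins for the proper stemming/lemmatization tools such a study would use, and the tweet is invented.

```python
import re

# Naive suffix list, illustration only (real pipelines use proper stemmers).
SUFFIXES = ("ing", "ly", "ed", "es", "s")

def clean(text):
    """Lowercase, strip symbols/digits/emoji, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def stem(token):
    """Drop the first matching suffix if enough stem remains."""
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token

tweet = "Loved the course!!! Lectures were amazing :) #learning"
tokens = [stem(t) for t in clean(tweet).split()]
print(tokens)
```

The cleaned, stemmed tokens are what a downstream classifier (LSTM, SVM, etc.) would consume as features.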
Routing is a key function in Wireless Sensor Networks (WSNs) since it facilitates data transfer to base stations. Routing attacks have the potential to destroy and degrade the functionality of WSNs. A trustworthy routing system is essential for routing security and WSN efficiency. Numerous methods have been implemented to build trust between routing nodes, including the use of cryptographic methods and centralized routing. Nonetheless, the majority of routing techniques are unworkable in practice due to the difficulty of properly identifying untrusted routing node activities. At the moment, there is no effective way to avoid malicious node attacks. As a consequence of these concerns, this paper proposes a trusted routing technique that combines blockchain infrastructure, deep neural networks, and Markov Decision Processes (MDPs) to improve the security and efficiency of WSN routing. To authenticate the transmission process, the suggested methodology makes use of a Proof of Authority (PoA) mechanism inside the blockchain network. The validation group required for proofing is chosen using a deep learning approach that prioritizes each node's characteristics. MDPs are then utilized to determine the suitable next hop as a forwarding node capable of securely transmitting messages. According to testing data, our routing system outperforms current routing algorithms in a 50% malicious node routing scenario.
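The MDP next-hop step can be illustrated with value iteration on a tiny, made-up topology where forwarding to a node earns its trust score as the reward. The topology, trust values, and discount factor are assumptions for illustration, not the paper's model.

```python
# Tiny deterministic MDP: sensor A can forward via B or C; both reach
# the base station BS. Forwarding to a node earns its trust score.
gamma = 0.9
transitions = {             # transitions[state][action] = next_state
    "A": {"B": "B", "C": "C"},
    "B": {"BS": "BS"},
    "C": {"BS": "BS"},
}
trust = {"B": 0.9, "C": 0.2, "BS": 1.0}   # invented trust/reward values

def value_iteration(iters=50):
    """Bellman backups until the values settle (BS is terminal, V=0)."""
    V = {s: 0.0 for s in ["A", "B", "C", "BS"]}
    for _ in range(iters):
        for s, acts in transitions.items():
            V[s] = max(trust[nxt] + gamma * V[nxt] for nxt in acts.values())
    return V

V = value_iteration()
best_hop = max(transitions["A"], key=lambda a: trust[a] + gamma * V[a])
print(best_hop, V["A"])
```

The greedy policy on the converged values picks the trusted neighbor B even though both routes eventually deliver, which is the behavior the abstract attributes to the MDP stage.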
This article introduces a new medical Internet of Things (IoT) framework for an intelligent fall detection system for senior people, based on our proposed deep forest model. The cascade multi-layer structure of the deep forest classifier allows generating new features at each level with minimal hyperparameters compared to deep neural networks. Moreover, the optimal number of deep forest layers is automatically estimated based on an early stopping criterion on the validation accuracy value at each generated layer. The suggested forest classifier was successfully tested and evaluated using the public SmartFall dataset, which is acquired from a three-axis accelerometer in a smartwatch. It includes 92,781 training samples and 91,025 testing samples with two labeled classes, namely non-fall and fall. Classification results of our deep forest classifier demonstrated a superior performance, with a best accuracy score of 98.0%, compared to three machine learning models, i.e., K-nearest neighbors, decision trees and traditional random forest, and two deep learning models, namely dense neural networks and convolutional neural networks. By considering security and privacy aspects in future work, our proposed medical IoT framework for fall detection of elderly people is valid for real-time healthcare application deployment.
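The layer-growing rule with early stopping on validation accuracy can be sketched as below; the per-layer accuracies are simulated numbers, not SmartFall results.

```python
# Sketch of a cascade's early-stopping rule: keep adding layers while
# validation accuracy improves; stop after `patience` stalled layers.
def grow_cascade(val_acc_per_layer, patience=1):
    best_acc, best_depth, stalls = 0.0, 0, 0
    for depth, acc in enumerate(val_acc_per_layer, start=1):
        if acc > best_acc:
            best_acc, best_depth, stalls = acc, depth, 0
        else:
            stalls += 1
            if stalls >= patience:
                break          # accuracy stopped improving: freeze depth
    return best_depth, best_acc

# Simulated validation accuracies as layers are added (invented numbers):
depth, acc = grow_cascade([0.91, 0.95, 0.97, 0.969, 0.968])
print(depth, acc)
```

The cascade settles on three layers here: the fourth layer fails to beat the running best, so growth halts, which is how the depth can be estimated automatically rather than tuned by hand.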
Strabismus is a medical condition defined as a lack of coordination between the eyes. When strabismus is detected at an early age, the chances of curing it are higher. The methods used to detect strabismus and measure its degree of deviation are complex and time-consuming, and they always require the presence of a physician. In this paper, we present a method of detecting strabismus and measuring its degree of deviation using videos of the patient's eye region under a cover test. Our method involves extracting features from a set of training videos (training corpora) and using them to build a classifier. A decision tree (ID3) is built using labeled cases from actual strabismus diagnoses. Patterns are extracted from the corresponding videos of patients, and an association between the extracted features and actual diagnoses is established. Matching rules from the correlation plot are used to predict diagnoses for future patients. The classifier was tested using a set of testing videos (testing corpora). The results showed 95.9% accuracy; the remaining 4.1% were light cases that could not be detected correctly from the videos, half of them false positives and the other half false negatives.
The status of the social and human sciences as genuine sciences on a par with the natural sciences has widely been held in doubt, and the subject-oriented approach (SOA) to knowledge also shows the traditional scientific view to be misled. It shows that it is mandatory to dismiss the idea that personal knowledge is a representation of a common world created by some God, and also the mistake of taking the seductive noun/verb structure as a given. We need a new methodological paradigm of science, an approach that avoids the pitfalls of dualism and realism, and must take the effort to couch its thinking in a re-interpretation of natural language. This line of reasoning paves the way for the SOA, a new epistemology that takes the individual knower and his feelings as the coherent point of departure. The traits of a new foundation are sketched, and to that end a bootstrap model is proposed that departs from early man's first experience. In doing so, we can, in a subject-oriented manner, bring man's living experience and his priverse (or private universe) under the collective umbrella of a consensual science. This approach promises to provide a sound theory of everything, or rather a theory of every thin/kin/g, which in one step removes the cleft between the natural and social sciences.
In digital signal processing (DSP), Nyquist-rate sampling completely describes a signal by exploiting its bandlimitedness. Compressed Sensing (CS), also known as compressive sampling, is a DSP technique for efficiently acquiring and reconstructing a signal completely from a reduced number of measurements, by exploiting its compressibility. The measurements are not point samples but more general linear functions of the signal. CS can capture and represent sparse signals at a rate significantly lower than ordinarily used in Shannon's sampling theorem. It is interesting to notice that most signals in reality are sparse, especially when they are represented in some domain (such as the wavelet domain) where many coefficients are close to or equal to zero. A signal is called K-sparse if it can be exactly represented by a basis Ψ and a set of coefficients α, where only K coefficients are nonzero. A signal is called approximately K-sparse if it can be represented up to a certain accuracy using K non-zero coefficients. As an example, a K-sparse signal is the class of signals that are the sum of K sinusoids chosen from the N harmonics of the observed time interval. Taking the DFT of any such signal would render only K non-zero values. An example of approximately sparse signals is when the coefficients, sorted by magnitude, decrease following a power law. In this case the sparse approximation constructed by choosing the K largest coefficients is guaranteed to have an approximation error that decreases with the same power law as the coefficients. The main limitation of CS-based systems is that they employ iterative algorithms to recover the signal. These algorithms are slow, and a hardware solution has become crucial for higher performance and speed. This technique enables fewer data samples than traditionally required when capturing a signal with relatively high bandwidth but a low information rate. As a main feature of CS, efficient algorithms such as ℓ1-minimization can be used for recovery. This paper gives a survey of both theoretical and numerical aspects of the compressive sensing technique and its applications. The theory of CS has many potential applications in signal processing, wireless communication, cognitive radio and medical imaging.
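The K-sinusoid example above can be checked directly: building the signal from K complex exponentials drawn from the N harmonics (so each contributes a single DFT bin), the DFT has exactly K nonzero values.

```python
import numpy as np

# K-sparse-in-frequency example: sum of K complex exponentials chosen
# from the N harmonics of the observation window.
N, K = 64, 4
rng = np.random.default_rng(2)
freqs = rng.choice(N, size=K, replace=False)   # K distinct harmonics

n = np.arange(N)
x = sum(np.exp(2j * np.pi * f * n / N) for f in freqs)

X = np.fft.fft(x)                      # DFT of the length-N signal
nonzero = int(np.sum(np.abs(X) > 1e-6))
print(nonzero)
```

Each exponential lands on one DFT bin exactly, so the spectrum has K nonzero entries out of N; real-valued sinusoids would instead occupy conjugate bin pairs, which is why complex exponentials are used for the cleanest illustration.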
Detection of brain tumors in MRI images is the first step in brain cancer diagnosis. The accuracy of the diagnosis depends highly on the expertise of radiologists. Therefore, automated diagnosis of brain cancer from MRI is receiving a large amount of attention. Also, MRI tumor detection is usually followed by a biopsy (an invasive procedure), which is a medical procedure for brain tumor classification. It is of high importance to devise automated methods to aid radiologists in brain cancer tumor diagnosis without resorting to invasive procedures. The convolutional neural network (CNN) is deemed to be one of the best machine learning algorithms for achieving high-accuracy results in tumor identification and classification. In this paper, a CNN-based technique for brain tumor classification has been developed. The proposed CNN can distinguish between normal (no-cancer), astrocytoma tumors, gliomatosis cerebri tumors, and glioblastoma tumors. The implemented CNN was tested on MRI images that underwent a motion-correction procedure. The CNN was evaluated using two performance measurement procedures. The first is a k-fold cross-validation testing method, in which we tested the dataset using k = 8, 10, 12, and 14. The best accuracy for this procedure was 96.26%, when k = 10. To overcome the over-fitting problem that could occur in the k-fold testing method, we used a hold-out testing method as a second evaluation procedure. The results of this procedure succeeded in attaining 97.8% accuracy, with a specificity of 99.2% and a sensitivity of 97.32%. With this high accuracy, the developed CNN architecture could be considered an effective automated diagnosis method for the classification of brain tumors from MRI images.
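The k-fold procedure can be sketched with a majority-class dummy model standing in for the CNN; the labels are toy data, and only the splitting and score-averaging mechanics match the text.

```python
import numpy as np

def k_fold_indices(n, k, seed=0):
    """Shuffle indices once, then split them into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

labels = np.array([0] * 70 + [1] * 30)   # imbalanced toy labels
folds = k_fold_indices(len(labels), k=10)

accs = []
for i, test_idx in enumerate(folds):
    # Train on everything outside fold i, test on fold i.
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    majority = np.bincount(labels[train_idx]).argmax()   # dummy "model"
    accs.append(np.mean(labels[test_idx] == majority))

print(np.mean(accs))
```

Every sample is tested exactly once across the k rounds, and the reported figure is the mean of the per-fold accuracies, here 0.70, the base rate of the majority class, since the dummy model always predicts it.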
Robust clustering methods are aimed at avoiding unsatisfactory results caused by the presence of a certain amount of outlying observations in the input data of many practical applications, such as biological sequence analysis or gene expression analysis. This paper presents a fuzzy clustering algorithm based on average link and the possibilistic clustering paradigm, termed AVLINK. It minimizes the average dissimilarity between pairs of patterns within the same cluster, and at the same time the size of a cluster is maximized by computing the zeros of the derivative of the proposed objective function. AVLINK, along with the proposed initialization procedure, shows a high outlier-rejection capability, as it makes the outliers' memberships very low; furthermore, it does not require the number of clusters to be known in advance, and it can discover clusters of non-convex shape. The effectiveness and robustness of the proposed algorithms have been demonstrated on different types of protein data sets.
The continuous development of cyberattacks is threatening digital transformation endeavors worldwide and leads to wide losses for various organizations. These dangers have proven that signature-based approaches are insufficient to prevent emerging and polymorphic attacks. Therefore, this paper proposes Robust Malicious Executable Detection (RMED) using a host-based machine learning classifier to discover malicious Portable Executable (PE) files in hosts using Windows operating systems, by collecting PE headers and applying machine learning mechanisms to detect unknown infected files. The authors have collected a novel reliable dataset containing 116,031 benign files and 179,071 malware samples from diverse sources to ensure the efficiency of the RMED approach. The most effective PE headers that can highly differentiate between benign and malware files were selected to train the model on 15 PE features, to speed up the classification process and achieve real-time detection of malicious executables. The evaluation results showed that RMED succeeded in shrinking the classification time to 91 milliseconds per file while reaching an accuracy of 98.42% with a false positive rate of 1.58%. In conclusion, this paper contributes to the field of cybersecurity by presenting a comprehensive framework that leverages Artificial Intelligence (AI) methods to proactively detect and prevent cyber-attacks.
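A heavily hedged sketch of the idea of classifying on header-derived features: a hand-made decision stump over two hypothetical PE-header fields. RMED selects 15 real header features and trains an actual model; everything below (field choice, values, rule) is invented for illustration only.

```python
# Hypothetical header-feature table: (number_of_sections, size_of_code_kb,
# label) with label 1 = malware. All values are made up.
samples = [
    (3, 12, 0), (4, 40, 0), (9, 2, 1),
    (8, 1, 1), (5, 30, 0), (10, 3, 1),
]

def stump(n_sections, size_kb):
    """Toy rule: many sections plus a tiny code section looks
    packed/suspicious. A trained model learns such splits from data."""
    return 1 if n_sections >= 8 and size_kb < 10 else 0

preds = [stump(n, s) for n, s, _ in samples]
print(preds)
```

On this invented table the stump matches every label; a real pipeline replaces the hand rule with a classifier fit on the selected header fields.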
In this investigation, we have shown that the combination of deep learning, including natural language processing, and conformal prediction results in highly predictive and efficient temporal test set sentiment estimates for 12 categories of Amazon product reviews, using either in-category predictions (i.e., the model and the test set are from the same review category) or cross-category predictions (i.e., using a model of another review category for predicting the test set). The similar results from in- and cross-category predictions indicate a high degree of generalizability across product review categories. The investigation also shows that the combination of deep learning and conformal prediction gracefully handles class imbalances without explicit class-balancing measures.
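A split-conformal sketch of how prediction sets are calibrated from nonconformity scores. The calibration scores and the new example's class probabilities below are simulated, not outputs of the Amazon-review models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Calibration step: the model's probability for the TRUE class of each
# calibration example; nonconformity score = 1 - p_true (simulated here).
p_true = rng.uniform(0.6, 1.0, size=200)
scores = 1.0 - p_true

alpha = 0.1                                    # target 90% coverage
n = len(scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
qhat = np.quantile(scores, q_level, method="higher")

# Prediction set for a new review: keep every label whose nonconformity
# (1 - p_label) stays within the calibrated threshold.
p_new = {"pos": 0.70, "neg": 0.25, "neutral": 0.05}
pred_set = {lab for lab, p in p_new.items() if 1.0 - p <= qhat}
print(pred_set)
```

When the model is confident, the set collapses to a single label; under class imbalance or low confidence the set simply grows, which is the "graceful handling" the abstract refers to, since coverage is guaranteed by the calibration rather than by rebalancing.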
This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in R^n. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen, and a hyperplane is constructed such that it separates the chosen n points at a margin and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is based on mapping the physical space to another, larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
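The quadratic-to-linear mapping idea can be shown in one dimension: data separable only by a threshold on x² becomes linearly separable after an illustrative lift x → (x, x²). The points, labels, and threshold below are assumptions, not the paper's construction.

```python
# Toy 1-D data: label 1 iff |x| is large, which no single threshold on
# x itself can capture, but a quadratic boundary can.
points = [(-3.0, 1), (-0.5, 0), (0.2, 0), (2.5, 1)]   # (x, label)

# Lift each point into the larger space (x, x^2):
lifted = [((x, x * x), y) for x, y in points]

def classify(feat, threshold=1.0):
    """In the lifted space the hyperplane x2 = threshold separates
    the classes, i.e. the problem is now linearly separable."""
    return 1 if feat[1] > threshold else 0

preds = [classify(f) for f, _ in lifted]
print(preds)
```

The same trick generalizes: a quadratically separable problem in R^n becomes linearly separable after mapping into the space of monomials up to degree two, which is the extension the abstract describes.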
Along with the rapid development of economies and the enhancement of industrialization, the power demand keeps rising and frequently creates a mismatch between demand and supply in electricity. This provides miscellaneous energy buy-back programs with great opportunities. Such programs, when activated, offer a certain amount of financial compensation to participants for reducing their energy consumption during peak time. They aim at encouraging participants to shift their electricity usage from peak to non-peak time, and thereby release the demand pressure during peak time. This paper considers a periodic-review joint pricing and inventory decision model under an energy buy-back program over finite planning horizons, in which the compensation levels, setup cost and additive random demand function are incorporated. The objective is to maximize a manufacturer's expected total profit. By using Veinott's conditions, it is shown that the manufacturer's optimal decision is a state-dependent (s, S, P) policy under a peak market condition, or partly an (s, S, A, P) policy under the normal market condition.
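The inventory side of an (s, S, P) policy can be sketched as a plain (s, S) rule: whenever inventory drops below the reorder point s, order up to the base-stock level S. The numbers below are illustrative, and the per-period pricing decision P is omitted.

```python
# Minimal (s, S) replenishment sketch over a few review periods.
s, S = 4, 10          # reorder point and order-up-to level (toy values)
inventory = 10
orders = []

for demand in [3, 5, 2, 6]:        # deterministic toy demand stream
    inventory -= demand            # serve this period's demand
    if inventory < s:              # below reorder point: replenish
        orders.append(S - inventory)   # order quantity to reach S
        inventory = S
    else:
        orders.append(0)           # setup cost avoided this period

print(orders, inventory)
```

Because each order incurs a setup cost, the policy deliberately orders in batches (up to S) rather than topping up every period; the full (s, S, P) policy additionally chooses the price P each period, and the thresholds become state dependent.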
Funding: This research work was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through Project Number IFP2021-043.
Abstract: This article introduces a new medical Internet of Things (IoT) framework for an intelligent fall detection system for senior people, based on our proposed deep forest model. The cascade multi-layer structure of the deep forest classifier allows new features to be generated at each level with minimal hyperparameters, compared to deep neural networks. Moreover, the optimal number of deep forest layers is automatically estimated based on an early-stopping criterion on the validation accuracy at each generated layer. The suggested forest classifier was successfully tested and evaluated using the public SmartFall dataset, which is acquired from the three-axis accelerometer of a smartwatch. It includes 92,781 training samples and 91,025 testing samples with two labeled classes, namely non-fall and fall. The classification results of our deep forest classifier demonstrated superior performance, with a best accuracy score of 98.0%, compared to three machine learning models (K-nearest neighbors, decision trees, and traditional random forest) and two deep learning models (dense neural networks and convolutional neural networks). By considering security and privacy aspects in future work, our proposed medical IoT framework for fall detection of elderly people is valid for real-time healthcare application deployment.
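The cascade-with-early-stopping idea can be sketched in a few lines: each layer is a pair of forests whose class-probability vectors are appended to the input of the next layer, and growth stops once validation accuracy no longer improves. The synthetic data, layer width, and forest sizes are illustrative assumptions, not the SmartFall setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)        # synthetic two-class task
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.33, random_state=0)

def grow_cascade(X_tr, y_tr, X_va, y_va, max_layers=8):
    """Add cascade layers while validation accuracy improves (early stopping)."""
    best_acc, layers = -1.0, []
    A_tr, A_va = X_tr, X_va                          # augmented features per layer
    for _ in range(max_layers):
        layer = [RandomForestClassifier(50, random_state=0).fit(A_tr, y_tr),
                 ExtraTreesClassifier(50, random_state=0).fit(A_tr, y_tr)]
        P_tr = np.hstack([m.predict_proba(A_tr) for m in layer])
        P_va = np.hstack([m.predict_proba(A_va) for m in layer])
        acc = np.mean(np.argmax(P_va[:, :2] + P_va[:, 2:], axis=1) == y_va)
        if acc <= best_acc:                          # stop when no improvement
            break
        best_acc = acc
        layers.append(layer)
        A_tr = np.hstack([X_tr, P_tr])               # pass class vectors forward
        A_va = np.hstack([X_va, P_va])
    return layers, best_acc

layers, val_acc = grow_cascade(X_tr, y_tr, X_va, y_va)
```

The returned depth is thus data-driven rather than a tuned hyperparameter, which is the property the abstract highlights.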
Funding: This work was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Funding Program (Grant No. FRP-1440-32).
Abstract: Strabismus is a medical condition defined as a lack of coordination between the eyes. When strabismus is detected at an early age, the chances of curing it are higher. The methods used to detect strabismus and measure its degree of deviation are complex and time-consuming, and they always require the presence of a physician. In this paper, we present a method for detecting strabismus and measuring its degree of deviation using videos of the patient's eye region under a cover test. Our method involves extracting features from a set of training videos (training corpora) and using them to build a classifier. A decision tree (ID3) is built using labeled cases from actual strabismus diagnoses. Patterns are extracted from the corresponding videos of patients, and an association between the extracted features and actual diagnoses is established. Matching rules from the correlation plot are used to predict diagnoses for future patients. The classifier was tested using a set of testing videos (testing corpora). The results showed 95.9% accuracy; the remaining 4.1% were light cases that could not be detected correctly from the videos, half of them false positives and the other half false negatives.
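ID3 chooses splits by information gain, i.e., the entropy reduction a feature produces. The toy cover-test feature and labels below are fabricated for illustration, not the paper's data.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction obtained by splitting `labels` on `feature` values."""
    n = len(labels)
    split = {}
    for f, y in zip(feature, labels):
        split.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# hypothetical cover-test feature: does the eye re-fixate when uncovered?
refixation = ["yes", "yes", "no", "no", "yes", "no"]
diagnosis  = ["strabismus", "strabismus", "normal", "normal", "strabismus", "normal"]
gain = information_gain(refixation, diagnosis)
```

ID3 recursively picks the feature with the highest gain at each node; here the fabricated feature predicts the label perfectly, so the gain equals the full label entropy (1 bit).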
Abstract: The status of the social and human sciences as genuine sciences on a par with the natural sciences has widely been held in doubt, and the subject-oriented approach (SOA) to knowledge also shows the traditional scientific view to be misguided. It shows that it is mandatory to dismiss the idea that personal knowledge is a representation of a common world created by some God, and also the mistake of taking the seductive noun/verb structure for granted. We need a new methodological paradigm of science, an approach that avoids the pitfalls of dualism and realism, and we must take the effort to couch its thinking in a re-interpretation of natural language. This line of reasoning paves the way for the SOA, a new epistemology that takes the individual knower and his feelings as the coherent point of departure. The traits of a new foundation are sketched, and to that end a bootstrap model is proposed that departs from early man's first experience. In doing so, we can, in a subject-oriented manner, bring man's living experience and his priverse (or private universe) under the collective umbrella of a consensual science. This approach promises to provide a sound theory of everything, or rather a theory of every thin/kin/g, which in one step removes the cleft between the natural and social sciences.
Abstract: In digital signal processing (DSP), Nyquist-rate sampling completely describes a signal by exploiting its bandlimitedness. Compressed Sensing (CS), also known as compressive sampling, is a DSP technique for efficiently acquiring and reconstructing a signal from a reduced number of measurements, by exploiting its compressibility. The measurements are not point samples but more general linear functions of the signal. CS can capture and represent sparse signals at a rate significantly lower than that ordinarily required by Shannon's sampling theorem. It is interesting to notice that most signals in reality are sparse, especially when they are represented in some domain (such as the wavelet domain) where many coefficients are close to or equal to zero. A signal is called K-sparse if it can be exactly represented by a basis Ψ and a set of coefficients θ, where only K coefficients are nonzero. A signal is called approximately K-sparse if it can be represented up to a certain accuracy using K non-zero coefficients. As an example, a K-sparse signal is the class of signals that are the sum of K sinusoids chosen from the N harmonics of the observed time interval. Taking the DFT of any such signal would render only K non-zero values. An example of approximately sparse signals is when the coefficients, sorted by magnitude, decrease following a power law. In this case the sparse approximation constructed by choosing the K largest coefficients is guaranteed to have an approximation error that decreases with the same power law as the coefficients. The main limitation of CS-based systems is that they employ iterative algorithms to recover the signal. These algorithms are slow, and hardware solutions have become crucial for higher performance and speed. This technique enables fewer data samples than traditionally required when capturing a signal with relatively high bandwidth but a low information rate.
As a main feature of CS, efficient algorithms such as ℓ1-minimization can be used for recovery. This paper gives a survey of both the theoretical and numerical aspects of the compressive sensing technique and its applications. The theory of CS has many potential applications in signal processing, wireless communication, cognitive radio, and medical imaging.
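One of the iterative recovery algorithms the survey alludes to is orthogonal matching pursuit (OMP), which greedily selects the sensing-matrix column most correlated with the residual. The dimensions and Gaussian sensing matrix below are illustrative choices; in this regime recovery of the K-sparse vector is typically exact.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m, K = 64, 32, 3
x = np.zeros(N)
support_true = rng.choice(N, size=K, replace=False)
x[support_true] = rng.uniform(1.0, 3.0, size=K) * rng.choice([-1, 1], size=K)
A = rng.normal(size=(m, N)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # m < N compressive measurements

def omp(A, y, K):
    """Greedy K-step recovery: pick the most correlated atom, re-fit, repeat."""
    residual, support = y.copy(), []
    for _ in range(K):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef            # project y off the support
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, K)
```

Each iteration solves a small least-squares problem on the current support, which is why the residual stays orthogonal to all previously selected atoms.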
Funding: This research work was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through Project Number PNU-DRI-RI-20-029.
Abstract: Detection of brain tumors in MRI images is the first step in brain cancer diagnosis. The accuracy of the diagnosis depends highly on the expertise of radiologists. Therefore, automated diagnosis of brain cancer from MRI is receiving a large amount of attention. Also, MRI tumor detection is usually followed by a biopsy (an invasive procedure), which is a medical procedure for brain tumor classification. It is of high importance to devise automated methods to aid radiologists in brain tumor diagnosis without resorting to invasive procedures. The convolutional neural network (CNN) is deemed one of the best machine learning algorithms for achieving high-accuracy results in tumor identification and classification. In this paper, a CNN-based technique for brain tumor classification has been developed. The proposed CNN can distinguish between normal (no cancer), astrocytoma tumors, gliomatosis cerebri tumors, and glioblastoma tumors. The implemented CNN was tested on MRI images that underwent a motion-correction procedure. The CNN was evaluated using two performance measurement procedures. The first is a k-fold cross-validation testing method, in which we tested the dataset using k = 8, 10, 12, and 14. The best accuracy for this procedure was 96.26% when k = 10. To overcome the over-fitting problem that could occur in the k-fold testing method, we used a hold-out testing method as a second evaluation procedure. This procedure attained 97.8% accuracy, with a specificity of 99.2% and a sensitivity of 97.32%. With this high accuracy, the developed CNN architecture can be considered an effective automated diagnosis method for the classification of brain tumors from MRI images.
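The k-fold protocol described above amounts to partitioning the sample indices into k near-equal folds, training on k-1 of them, and validating on the held-out fold, averaging accuracy over the k rotations. A plain index-splitting sketch (the dataset size 103 is an arbitrary example, not the paper's):

```python
def k_fold_indices(n_samples, k):
    """Partition range(n_samples) into k contiguous, near-equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# each sample lands in exactly one validation fold; sizes differ by at most 1
folds = k_fold_indices(103, 10)
```

For each of the k rounds, one fold serves as the validation set and the rest as training data; the reported figure (e.g., 96.26% at k = 10) is the mean over the rounds.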
Abstract: Robust clustering methods aim at avoiding the unsatisfactory results caused by the presence of a certain amount of outlying observations in the input data of many practical applications, such as biological sequence analysis or gene expression analysis. This paper presents a fuzzy clustering algorithm based on average linkage and the possibilistic clustering paradigm, termed AVLINK. It minimizes the average dissimilarity between pairs of patterns within the same cluster while maximizing the size of a cluster, by computing the zeros of the derivative of the proposed objective function. AVLINK, along with the proposed initialization procedure, shows a high outlier-rejection capability, as it assigns outliers very low membership values; furthermore, it does not require the number of clusters to be known in advance, and it can discover clusters of non-convex shape. The effectiveness and robustness of the proposed algorithm have been demonstrated on different types of protein datasets.
Abstract: The continuous development of cyberattacks is threatening digital transformation endeavors worldwide and leads to wide losses for various organizations. These dangers have proven that signature-based approaches are insufficient to prevent emerging and polymorphic attacks. Therefore, this paper proposes Robust Malicious Executable Detection (RMED), a host-based machine learning classifier that discovers malicious Portable Executable (PE) files on hosts running Windows operating systems by collecting PE headers and applying machine learning mechanisms to detect unknown infected files. The authors have collected a novel, reliable dataset containing 116,031 benign files and 179,071 malware samples from diverse sources to ensure the efficiency of the RMED approach. The most effective PE headers for differentiating between benign and malware files were selected to train the model on 15 PE features, in order to speed up the classification process and achieve real-time detection of malicious executables. The evaluation results showed that RMED succeeded in shrinking the classification time to 91 milliseconds per file while reaching an accuracy of 98.42% with a false positive rate of 1.58%. In conclusion, this paper contributes to the field of cybersecurity by presenting a comprehensive framework that leverages Artificial Intelligence (AI) methods to proactively detect and prevent cyberattacks.
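The "PE header features" the classifier consumes come straight out of the file's fixed layout. A stdlib-only sketch of pulling two COFF header fields, following the published PE/COFF format; the in-memory sample is a fabricated minimal header, not a real executable, and the two fields shown are just examples of header-level features.

```python
import struct

def coff_header_fields(data: bytes):
    """Extract Machine and NumberOfSections from a PE image's COFF header."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ magic)")
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # offset of PE signature
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    machine, n_sections = struct.unpack_from("<HH", data, e_lfanew + 4)
    return {"Machine": machine, "NumberOfSections": n_sections}

# fabricated 0x40-byte DOS stub + PE signature + start of a COFF header
sample = bytearray(0x40 + 24)
sample[:2] = b"MZ"
struct.pack_into("<I", sample, 0x3C, 0x40)               # e_lfanew -> 0x40
sample[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<HH", sample, 0x44, 0x8664, 5)         # x86-64, 5 sections

fields = coff_header_fields(bytes(sample))
```

In a pipeline like RMED's, a vector of such header fields per file would feed the downstream classifier; parsing headers alone is what keeps per-file classification fast.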
Abstract: In this investigation, we have shown that the combination of deep learning, including natural language processing, and conformal prediction results in highly predictive and efficient temporal test-set sentiment estimates for 12 categories of Amazon product reviews, using either in-category predictions (i.e., the model and the test set are from the same review category) or cross-category predictions (i.e., using a model from another review category to predict the test set). The similar results from in- and cross-category predictions indicate a high degree of generalizability across product review categories. The investigation also shows that the combination of deep learning and conformal prediction gracefully handles class imbalances without explicit class-balancing measures.
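The conformal step can be sketched with split-conformal classification: calibrate a nonconformity-score threshold on held-out data, then emit prediction sets with coverage about 1 - alpha. The simulated probabilities below stand in for a deep model's softmax outputs; the class count, alpha, and score definition are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 3, 0.1

# simulated predicted probabilities and true labels for the calibration set
probs_cal = rng.dirichlet(np.ones(n_classes) * 2, size=n_cal)
y_cal = np.array([rng.choice(n_classes, p=p) for p in probs_cal])

# nonconformity score: 1 - probability assigned to the true class
scores = 1.0 - probs_cal[np.arange(n_cal), y_cal]
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal     # finite-sample correction
qhat = np.quantile(scores, min(q_level, 1.0))

def prediction_set(probs):
    """All classes whose nonconformity score is within the calibrated threshold."""
    return [c for c in range(n_classes) if 1.0 - probs[c] <= qhat]

test_probs = np.array([0.85, 0.10, 0.05])
pred_set = prediction_set(test_probs)
```

Because the threshold is a per-example quantile of scores rather than a class-frequency estimate, the procedure is largely insensitive to class imbalance, which is the property the abstract emphasizes.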
Abstract: This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in R^n. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen, and a hyperplane is constructed such that it separates the chosen n points at a margin and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solve quadratically separable classification problems. The basic idea is based on mapping the physical space to a larger one where the problem becomes linearly separable. Numerical illustrations show that a few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
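To make the "iterate until a separating hyperplane is found" idea concrete, here is the classic perceptron, a simpler relative of the paper's algorithm (which instead picks n points per iteration and fits a margin hyperplane). The synthetic data are filtered to have a margin so that convergence is guaranteed; none of this is the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
proj = X[:, 0] + 2 * X[:, 1]
keep = np.abs(proj) > 0.5            # enforce a margin: convergence is guaranteed
X, y = X[keep], np.where(proj[keep] > 0, 1, -1)

def perceptron(X, y, max_epochs=1000):
    """Iteratively nudge a hyperplane toward each misclassified point."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias coordinate
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified or on the plane
                w += yi * xi                    # move hyperplane toward xi
                errors += 1
        if errors == 0:                         # all points strictly separated
            break
    return w

w = perceptron(X, y)
Xb = np.hstack([X, np.ones((len(X), 1))])
preds = np.sign(Xb @ w)
```

For nonlinearly separable data, the quadratic extension described in the abstract corresponds to running such an iteration after mapping the inputs into a higher-dimensional feature space.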
Funding: Partially supported by the Young Faculty Research Fund of Beijing Foreign Studies University (2015JT005), YETP (YETP0851), the National Natural Science Foundation of China (71371032), the Key Project of Beijing Foreign Studies University Research Programs (2011XG003), the Humanities and Social Science Research Project of the Ministry of Education (13YJA630125), and the Fundamental Research Funds for the Central Universities.
Abstract: Along with the rapid development of the economy and the advance of industrialization, power demand keeps rising and frequently creates a mismatch between electricity demand and supply. This provides miscellaneous energy buy-back programs with great opportunities. Such programs, when activated, offer a certain amount of financial compensation to participants for reducing their energy consumption during peak time. They aim at encouraging participants to shift their electricity usage from peak to non-peak time, and thereby relieve the demand pressure during peak time. This paper considers a periodic-review joint pricing and inventory decision model under an energy buy-back program over a finite planning horizon, in which the compensation levels, setup cost, and an additive random demand function are incorporated. The objective is to maximize a manufacturer's expected total profit. By using Veinott's conditions, it is shown that the manufacturer's optimal decision is a state-dependent (s, S, P) policy under a peak market condition, or partly an (s, S, A, P) policy under the normal market condition.