Although digital changes in power systems have added more ways to monitor and control them, these changes have also led to new cyber-attack risks, mainly from False Data Injection (FDI) attacks. When such an attack succeeds, sensors and operations are compromised, which can lead to major problems, disruptions, failures, and blackouts. In response to this challenge, this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory (Bi-LSTM) networks and employs explainable Artificial Intelligence (AI) methods. Not only does the suggested architecture detect potential attacks with high accuracy, but it also makes its decisions transparent, enabling operators to take appropriate action. The method developed here utilizes model-free, interpretable tools to identify essential input elements, thereby making predictions more understandable and usable. Detection performance is further enhanced by correcting class imbalance with Synthetic Minority Over-sampling Technique (SMOTE)-based data balancing. Detailed experiments on benchmark power system data confirm that the model functions correctly. Experimental results showed that Bi-LSTM + Explainable AI (XAI) achieved an average accuracy of 94%, surpassing XGBoost (89%) and Bagging (84%), while ensuring explainability and a high level of robustness across various operating scenarios. An ablation study shows that bidirectional recurrent modeling and ReLU activation help improve generalization and model predictability. Additionally, examining model decisions through LIME enables us to identify which features are crucial for making smart grid operational decisions in real time. The research offers a practical and flexible approach for detecting FDI attacks, improving the security of cyber-physical systems, and facilitating the deployment of AI in energy infrastructure.
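As a rough illustration of the pipeline this abstract describes, the sketch below pairs SMOTE-based class balancing with a small Bidirectional LSTM detector. The layer sizes, window shape, and synthetic data are assumptions for illustration only, not the paper's configuration.

```python
# Minimal sketch: SMOTE balancing followed by a Bi-LSTM detector (not the paper's exact model).
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras
from tensorflow.keras import layers

def build_bi_lstm(timesteps, n_features):
    # Bidirectional LSTM with a ReLU dense head, as the abstract describes at a high level.
    model = keras.Sequential([
        layers.Input(shape=(timesteps, n_features)),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # 1 = FDI attack, 0 = normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# X: (samples, timesteps, features) measurement windows; y: binary labels (placeholder data).
X = np.random.rand(500, 10, 8)
y = np.random.randint(0, 2, 500)

# SMOTE works on 2-D inputs, so flatten the windows, resample, then restore the shape.
flat = X.reshape(len(X), -1)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(flat, y)
X_bal = X_bal.reshape(-1, 10, 8)

model = build_bi_lstm(10, 8)
model.fit(X_bal, y_bal, epochs=3, batch_size=32, verbose=0)
```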
Globally, skin cancer is a prevalent form of malignancy, and its early and accurate diagnosis is critical for patient survival. Clinical evaluation of skin lesions is essential, but several challenges, such as long waiting times and subjective interpretations, make this task difficult. The recent advancement of deep learning in healthcare has shown much success in diagnosing and classifying skin cancer and has assisted dermatologists in clinics. Deep learning improves the speed and precision of skin cancer diagnosis, leading to earlier prediction and treatment. In this work, we proposed a novel deep architecture for skin cancer classification in innovative healthcare. The proposed framework performs data augmentation as a first step to resolve the imbalance issue in the selected dataset. The architecture is based on two customized Convolutional Neural Network (CNN) models with small depth and filter sizes. In the first model, four residual blocks are added in a squeezed fashion with a small filter size. In the second model, five residual blocks are added with smaller depth and more useful weight information of the lesion region. Hyperparameters, including the learning rate, are selected through Bayesian Optimization. After training the proposed models, deep features are extracted and fused using a novel information entropy-controlled Euclidean Distance technique. The final features are passed to the classifiers, and classification results are obtained. The trained model is also interpreted through LIME-based localization on the HAM10000 dataset. The experimental process is performed on two dermoscopic datasets, HAM10000 and ISIC2019, on which we obtained improved accuracies of 90.8% and 99.3%, respectively. The architecture also returned 91.6% for cancer localization. In conclusion, the proposed architecture is compared with several pre-trained and state-of-the-art (SOTA) techniques and shows improved performance.
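For readers unfamiliar with the building block this abstract mentions, the sketch below shows a generic residual block with a small (3x3) filter size in Keras. It is an illustrative stand-in, not the paper's customized architecture; the channel counts are assumptions.

```python
# Illustrative residual block with a small (3x3) filter size, in the spirit of the abstract.
from tensorflow.keras import layers

def small_residual_block(x, filters=32):
    shortcut = x
    y = layers.Conv2D(filters, (3, 3), padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, (3, 3), padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Match the channel count on the skip path, if needed, before the addition.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, (1, 1), padding="same")(shortcut)
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)
```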
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
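Since HCL Net is described as inspired by ResNet50V2, the sketch below shows a plain ResNet50V2 transfer-learning baseline for the three classes named in the abstract (normal, honeycombing, GGO). It is a starting-point sketch under those assumptions, not HCL Net itself, and the head layers and input size are illustrative.

```python
# Plain ResNet50V2 transfer-learning baseline for the three classes in the abstract.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                     input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for an initial training stage

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # normal, honeycombing, GGO
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```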
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
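A minimal stacked-ensemble sketch matching the base learners the abstract names (XGBoost, SVM, RF) is shown below. The hyperparameters and the logistic-regression meta-learner are assumptions, not taken from the paper.

```python
# Stacked ensemble with XGBoost, SVM, and Random Forest base learners (illustrative settings).
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier

stack = StackingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
        ("svm", SVC(kernel="rbf", probability=True)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
# X_train, y_train would be the SMOTE-balanced, HHO-selected features:
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```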
With the increasing growth of online news, fake electronic news detection has become one of the most important paradigms of modern research. Traditional electronic news detection techniques are generally challenged by contextual understanding, sequential dependencies, and/or data imbalance, which makes distinguishing between genuine and fabricated news a challenging task. To address this problem, we propose a novel hybrid architecture, T5-SA-LSTM, which synergistically integrates the T5 Transformer for semantically rich contextual embedding with a Self-Attention-enhanced (SA) Long Short-Term Memory (LSTM). The LSTM is trained using the Adam optimizer, which provides faster and more stable convergence compared to Stochastic Gradient Descent (SGD) and Root Mean Square Propagation (RMSProp). The WELFake and FakeNewsPrediction datasets are used, which consist of labeled news articles containing fake and real news samples. Tokenization and the Synthetic Minority Over-sampling Technique (SMOTE) are used for data preprocessing to ensure linguistic normalization and address class imbalance. The incorporation of the Self-Attention (SA) mechanism enables the model to highlight critical words and phrases, thereby enhancing predictive accuracy. The proposed model is evaluated using accuracy, precision, recall (sensitivity), and F1-score as performance metrics. The model achieved 99% accuracy on the WELFake dataset and 96.5% accuracy on the FakeNewsPrediction dataset, outperforming competitive schemes such as T5-SA-LSTM (RMSProp), T5-SA-LSTM (SGD), and other models.
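The sketch below shows only the embedding stage of such a hybrid: a T5 encoder from the Hugging Face transformers library producing contextual embeddings that a downstream LSTM head consumes. The "t5-small" checkpoint, the plain LSTM standing in for the SA-LSTM, and all sizes are assumptions for illustration, not the paper's configuration.

```python
# T5 contextual embeddings feeding a simple LSTM head (illustrative stand-in for T5-SA-LSTM).
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

texts = ["Breaking: scientists confirm water is wet.", "Official statement released today."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state   # (batch, seq_len, hidden_dim)

# A plain LSTM head standing in for the self-attention-enhanced LSTM classifier.
lstm = torch.nn.LSTM(input_size=hidden.size(-1), hidden_size=128, batch_first=True)
_, (h_n, _) = lstm(hidden)
logits = torch.nn.Linear(128, 2)(h_n[-1])         # real vs. fake
```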
As urban landscapes evolve and vehicular volumes soar, traditional traffic monitoring systems struggle to scale, often failing under the complexities of dense, dynamic, and occluded environments. This paper introduces a novel, unified deep learning framework for vehicle detection, tracking, counting, and classification in aerial imagery, designed explicitly for the demands of modern smart city infrastructure. Our approach begins with adaptive histogram equalization to optimize aerial image clarity, followed by a cutting-edge scene parsing technique using Mask2Former, enabling robust segmentation even in visually congested settings. Vehicle detection leverages the latest YOLOv11 architecture, delivering superior accuracy in aerial contexts by addressing occlusion, scale variance, and fine-grained object differentiation. We incorporate the highly efficient ByteTrack algorithm for tracking, enabling seamless identity preservation across frames. Vehicle counting is achieved through an unsupervised DBSCAN-based method, ensuring adaptability to varying traffic densities. We further introduce a hybrid feature extraction module combining Convolutional Neural Networks (CNNs) with Zernike Moments, capturing both deep semantic and geometric signatures of vehicles. The final classification is powered by NASNet, a neural architecture search-optimized model, ensuring high accuracy across diverse vehicle types and orientations. Extensive evaluations on the VAID benchmark dataset demonstrate the system's outstanding performance, achieving 96% detection, 94% tracking, and 96.4% classification accuracy. On the UAVDT dataset, the system attains 95% detection, 93% tracking, and 95% classification accuracy, confirming its robustness across diverse aerial traffic scenarios. These results establish new benchmarks in aerial traffic analysis and validate the framework's scalability, making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.
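To make the counting step concrete, the sketch below clusters detection centroids with DBSCAN so that nearby or duplicate detections are counted once. The eps and min_samples values are placeholders that would need tuning to image resolution and traffic density; they are not the paper's settings.

```python
# DBSCAN-based vehicle counting from detection centroids (illustrative parameters).
import numpy as np
from sklearn.cluster import DBSCAN

def count_vehicles(centroids_px, eps=25.0, min_samples=1):
    """centroids_px: (N, 2) array of detected bounding-box centers in pixels."""
    if len(centroids_px) == 0:
        return 0
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centroids_px)
    # Each cluster of nearby/duplicate detections is counted once; -1 marks noise points.
    return len(set(labels) - {-1})

detections = np.array([[100, 120], [102, 118], [400, 300], [640, 80]])
print(count_vehicles(detections))  # -> 3 (the first two detections merge into one vehicle)
```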
In the past decade, online Peer-to-Peer (P2P) lending platforms have transformed the lending industry, which has been historically dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces for this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending, we discuss how information asymmetry is reduced in the big data era. Rooted in the lending business, we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via the sliding mode control (SMC) approach. A sequence of variables obeying the Bernoulli distribution is employed to model the randomly occurring communication delays, which may be different for different state variables. A discrete switching function that differs from those in the existing literature is first proposed. Then, expressed as the feasibility of a linear matrix inequality (LMI) with an equality constraint, sufficient conditions are derived to ensure the globally mean-square asymptotic stability of the system dynamics on the sliding surface. A discrete-time SMC controller is then synthesized to guarantee the discrete-time sliding mode reaching condition with the specified sliding surface. Finally, a simulation example is given to show the effectiveness of the proposed method.
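The delay model used here can be illustrated numerically: for each state variable, an independent Bernoulli variable decides whether the controller sees the current sample or a delayed one. The probabilities, delay length, and dimensions in the sketch below are illustrative only and not taken from the paper.

```python
# Illustration of randomly occurring, state-wise communication delays governed by Bernoulli variables.
import numpy as np

rng = np.random.default_rng(0)
n_states, delay, steps = 3, 2, 6
p_delay = np.array([0.3, 0.5, 0.1])          # per-state probability of receiving a delayed sample

x_hist = rng.standard_normal((steps + delay, n_states))  # a pretend state trajectory

for k in range(delay, steps + delay):
    bern = rng.binomial(1, p_delay)           # 1 -> delayed, 0 -> current, per state variable
    measured = np.where(bern == 1, x_hist[k - delay], x_hist[k])
    print(k - delay, bern, np.round(measured, 3))
```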
Mobile edge computing (MEC) provides effective cloud services and functionality at the edge device to improve the quality of service (QoS) of end users by offloading high-computation tasks. Currently, the introduction of deep learning (DL) and hardware technologies paves the way for detecting the current traffic status, data offloading, and cyberattacks in MEC. This study introduces an artificial intelligence with metaheuristic based data offloading technique for Secure MEC (AIMDO-SMEC) systems. The proposed AIMDO-SMEC technique incorporates an effective traffic prediction module using Siamese Neural Networks (SNN) to determine the traffic status in the MEC system. Also, an adaptive sampling cross entropy (ASCE) technique is utilized for data offloading in MEC systems. Moreover, the modified salp swarm algorithm (MSSA) with the extreme gradient boosting (XGBoost) technique is implemented to identify and classify cyberattacks in MEC systems. For examining the enhanced outcomes of the AIMDO-SMEC technique, a comprehensive experimental analysis is carried out, and the results demonstrate the enhanced outcomes of the AIMDO-SMEC technique with a minimal completion time of tasks (CTT) of 0.680.
Internet of Things (IoT) devices work mainly over wireless mediums, requiring Intrusion Detection System (IDS) solutions that leverage 802.11 header information for intrusion detection. Wireless-specific traffic features with high information gain are primarily found in the data link layer, rather than in the application layer as in wired networks. This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods, IDS techniques, IDS placement strategies, and traffic data analysis techniques. The paper's main finding highlights the lack of available network traces for training modern machine-learning models against IoT-specific intrusions. Specifically, the Knowledge Discovery in Databases (KDD) Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes, and several guidelines are proposed to future-proof traffic capture methods in the wireless network (WN). The paper starts with a review of various intrusion detection techniques, data collection methods, and placement methods. Its main goal is to study the design challenges of deploying an intrusion detection system in a wireless environment, which is not as straightforward as in a wired network due to architectural complexities. The paper therefore reviews traditional wired intrusion detection deployment methods, discusses how these techniques could be adopted in the wireless environment, and highlights the associated design challenges. The main wireless environments of interest are Wireless Sensor Networks (WSN), Mobile Ad Hoc Networks (MANET), and IoT, as these are the future trends and many attacks have targeted these networks, making it crucial to design an IDS specifically for wireless networks.
Background: We examine the signaling effect of borrowers' social media behavior, especially self-disclosure behavior, on the default probability of money borrowers on a peer-to-peer (P2P) lending site. Method: We use a unique dataset that combines loan data from a large P2P lending site with the borrowers' social media presence data from a popular social media site. Results: Through a natural experiment enabled by an instrumental variable, we identify two forms of social media information that act as signals of borrowers' creditworthiness: (1) borrowers' choice to self-disclose their social media account to the P2P lending site, and (2) borrowers' social media behavior, such as their social network scope and social media engagement. Conclusion: This study offers new insights for screening borrowers in P2P lending and a novel usage of social media information.
Cyber-physical systems (CPS) are increasingly commonplace, with applications in energy, health, transportation, and many other sectors. One of the major requirements in CPS is that the interaction between the cyber world and the man-made physical world (exchanging and sharing data and information with other physical objects and systems) must be safe, especially in bi-directional communications. In particular, there is a need to suitably address security and/or privacy concerns in this human-in-the-loop CPS ecosystem. However, existing centralized architecture models in CPS, and also in the more general IoT systems, have a number of associated limitations in terms of single point of failure, data privacy, security, robustness, etc. Such limitations reinforce the importance of designing reliable, secure, and privacy-preserving distributed solutions and other novel approaches, such as those based on blockchain technology due to its features (e.g., decentralization, transparency, and immutability of data). This is the focus of this special issue.
The main objective of this research is to provide a solution for online exam systems by using face recognition to authenticate learners for attending an online exam. More importantly, the system continuously (at short time intervals) checks the learner's identity during the whole exam period to ensure that the learner who started the exam is the same one who continues until the end, and to prevent possible cheating such as looking at an adjacent PC or reading from an external paper. The system issues an early warning to learners if it notices suspicious behavior. The proposed system has been presented to eight e-learning instructors and experts, in addition to 32 students, to gather feedback and to study the impact and benefit of such a system in an e-learning environment.
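One possible way to implement the periodic identity check is shown below using the open-source face_recognition library; this is not the paper's implementation, and the tolerance value and the decision to treat a missing face as suspicious are illustrative assumptions.

```python
# Periodic identity check against an enrolled reference image (illustrative, not the paper's code).
import face_recognition

def enrolled_encoding(reference_image_path):
    image = face_recognition.load_image_file(reference_image_path)
    encodings = face_recognition.face_encodings(image)
    return encodings[0] if encodings else None

def same_learner(reference_encoding, webcam_frame_rgb, tolerance=0.5):
    """webcam_frame_rgb: an RGB numpy array captured from the webcam at each check interval."""
    encodings = face_recognition.face_encodings(webcam_frame_rgb)
    if not encodings:
        return False  # no face visible -> treat as suspicious and issue a warning
    return face_recognition.compare_faces([reference_encoding], encodings[0],
                                          tolerance=tolerance)[0]
```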
Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the concept of individual devices' privacy has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that needs to be refined before processing, i.e., transmitting. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine raw data values captured by the respective member device before transmission while preserving its privacy through the differential privacy technique in IoT infrastructures. A sliding window, i.e., δi-based dynamic programming methodology, is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine it prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body. In contrast, the end module, i.e., the server in this case, can extract important information with approximately the maximum level of accuracy. Moreover, refined data is processed by adding appropriate noise through the Laplace mechanism to make it useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both the device's privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
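A minimal sketch of the two device-side steps the abstract describes is given below: a sliding-window check that replaces outlier readings, followed by Laplace-mechanism perturbation before transmission. The window size, the 3-sigma rule, and the sensitivity and epsilon values are assumptions for illustration, not the paper's parameters.

```python
# Device-side refinement + Laplace-mechanism perturbation (illustrative parameters).
import numpy as np

def refine_reading(window, new_value, k=3.0):
    """Replace new_value with the window median if it deviates more than k sigma."""
    mu, sigma = np.mean(window), np.std(window) + 1e-9
    return float(np.median(window)) if abs(new_value - mu) > k * sigma else new_value

def laplace_perturb(value, sensitivity=1.0, epsilon=0.5, rng=np.random.default_rng()):
    """Standard Laplace mechanism: noise scale = sensitivity / epsilon."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

window = [72.0, 74.0, 73.0, 71.0, 75.0]   # e.g., recent heart-rate samples on the wearable
raw = 190.0                               # an implausible spike flagged as an outlier
clean = refine_reading(window, raw)
print(laplace_perturb(clean))             # value actually transmitted to the server
```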
The spread of social media has increased contacts among members of communities on the Internet. Members of these communities often use account names instead of real names. When they meet in the real world, they will find it useful to have a tool that enables them to associate the faces in front of them with the account names they know. This paper proposes a method that enables a person to identify the account name of the person ("target") in front of him/her using a smartphone. The attendees of a meeting exchange their identifiers (i.e., the account names) and GPS information using smartphones. When the user points his/her smartphone towards a target, the target's identifier is displayed near the target's head on the camera screen using AR (augmented reality). The position where the identifier is displayed is calculated from the differences in longitude and latitude between the user and the target and the azimuth direction of the target from the user. The target is identified based on this information, the face detection coordinates, and the distance between the two. The proposed method has been implemented using Android terminals, and identification accuracy has been examined through experiments.
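The geometric step this method relies on can be sketched directly: computing the azimuth (bearing) and distance from the user's GPS position to the target's determines where the AR label is drawn. The formulas below are the standard great-circle ones; the coordinates are made up for the example.

```python
# Bearing (azimuth) and distance between two GPS positions (standard great-circle formulas).
import math

def bearing_and_distance(lat1, lon1, lat2, lon2, radius_m=6371000.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Initial bearing from point 1 to point 2, in degrees clockwise from north.
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    # Haversine distance in meters.
    a = math.sin((phi2 - phi1) / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    distance = 2 * radius_m * math.asin(math.sqrt(a))
    return bearing, distance

print(bearing_and_distance(35.6586, 139.7454, 35.6591, 139.7460))  # user -> target
```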
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high performance computing systems (HPCs) to achieve exascale by applying parallelism at multiple levels. Combining different programming paradigms, such as Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models MPI, OpenMP, and OpenACC, a testing technology to detect runtime errors, such as deadlocks and race conditions, which can arise from this integration, has not been developed. Therefore, this paper begins with a definition and explanation of runtime errors that result from integrating the three programming models and that compilers cannot detect. For the first time, this paper presents a classification of operational errors that can result from the integration of the three models. This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. This hybrid technique combines static and dynamic testing, given that some errors can be detected using static techniques, whereas others can be detected only dynamically. The hybrid technique can detect more errors because it combines two distinct technologies. The proposed static technique detects a wide range of error types in less time, whereas the portion of potential errors that may or may not occur depending on the operating environment is left to the dynamic technique, which completes the validation.
To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although the OCHA has made great strides to improve its information management and increase the availability of accurate, real-time data for disaster and humanitarian response teams, significant gaps persist. There are inefficiencies in the emergency management of data at every stage of its lifecycle: collection, processing, analysis, distribution, storage, and retrieval. Disaster risk reduction and disaster risk management are the two main tenets of the United Nations' worldwide plan for disaster management. Information systems are crucial because of the roles they play in capturing, processing, and transmitting data, yet the management of information is seldom discussed in published works. The goal of this study is to employ qualitative research methods to provide insight by facilitating an expanded comprehension of relevant contexts, phenomena, and individual experiences. Humanitarian workers and OCHA staffers will take part in the research, with study subjects chosen using a random selection procedure. Online surveys with both closed- and open-ended questions will be used to compile the data. UN OCHA offers a structure for the handling of information via which all humanitarian actors may contribute to the overall response. This research will enable the UN OCHA to better gather, process, analyze, disseminate, store, and retrieve data in the event of a catastrophe or humanitarian crisis.
The increasing quantity of sensitive and personal data being gathered by data controllers has raised the security needs in the cloud environment. Cloud computing (CC) is used for storing as well as processing data. Therefore, security becomes important as the CC handles massive quantities of outsourced, unprotected sensitive data for public access. This study introduces a novel chaotic chimp optimization with machine learning enabled information security (CCOML-IS) technique for the cloud environment. The proposed CCOML-IS technique aims to accomplish maximum security in the CC environment by identifying intrusions or anomalies in the network. The CCOML-IS technique first normalizes the networking data by the use of data conversion and min-max normalization. Next, it derives a feature selection technique using the chaotic chimp optimization algorithm (CCOA). In addition, a kernel ridge regression (KRR) classifier is used for the detection of security issues in the network. The design of the CCOA technique assists in choosing optimal features and thereby boosts classification performance. A wide set of experiments was carried out on benchmark datasets, and the results are assessed under several measures. The comparison study reported the enhanced outcomes of the CCOML-IS technique over recent approaches in terms of several measures.
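A sketch of the preprocessing and KRR detection stages is given below. scikit-learn's KernelRidge is a regressor, so binary labels are fit as targets and the output is thresholded at 0.5; that is one common way to use KRR for detection and not necessarily the paper's exact setup, and the placeholder data and kernel settings are assumptions.

```python
# Min-max normalization followed by kernel ridge regression used as a detector (illustrative).
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))                 # network-traffic features (placeholder)
y = rng.integers(0, 2, size=300)               # 1 = intrusion/anomaly, 0 = normal

X_scaled = MinMaxScaler().fit_transform(X)     # min-max normalization, as in the abstract

krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_scaled, y)
y_pred = (krr.predict(X_scaled) >= 0.5).astype(int)
print("training detection rate:", (y_pred == y).mean())
```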
A system's usability is one of the critical attributes of its quality. Medical practitioners usually encounter usability difficulties while using a health information system (HIS), as with other systems. Different usability factors are expected to influence a system's usability; error prevention, patient safety, and privacy are vital factors and should not be ignored while developing a health information system. This study is based on a comprehensive analysis of published academic and industrial literature to provide the current status of health information systems' usability. It also identifies different usability factors such as privacy, errors, design, and efficiency. These factors are assessed and then further examined through a questionnaire to study their priorities from medical practitioners' point of view in Saudi Arabia. The statistical analysis shows that privacy and errors are more critical than the other usability factors. The study results further revealed that availability and response time are the main challenges faced by medical practitioners when using the HIS, whereas flexibility and customizability were claimed to ease its use. In addition, a number of statistical correlations were established. Overall, the study findings appear helpful to designers and implementers in considering these factors for successful implementation of HIS.
The cyberspace has simultaneously presented opportunities and challenges alike for personal data security and privacy, as well as for the process of research and learning. Moreover, information such as academic data, research data, personal data, proprietary knowledge, complex equipment designs, and blueprints for yet-to-be-patented products has become extremely susceptible to cybersecurity attacks. This research investigates the factors that may influence the perceived ease of use of cybersecurity, the influence of perceived ease of use on the attitude towards using cybersecurity, the influence of attitude towards using cybersecurity on the actual use of cybersecurity, and the influences of job position on perceived ease of use of cybersecurity, on the attitude towards using cybersecurity, and on the actual use of cybersecurity. A model was constructed to investigate eight hypotheses related to these relationships, and an online questionnaire was used to collect data. Results showed that the influences in hypotheses 1 to 7 were significant, whereas hypothesis 8 turned out to be insignificant: no influence was found between job position and the actual use of cybersecurity.
基金the Deanship of Scientific Research and Libraries in Princess Nourah bint Abdulrahman University for funding this research work through the Research Group project,Grant No.(RG-1445-0064).
文摘Although digital changes in power systems have added more ways to monitor and control them,these changes have also led to new cyber-attack risks,mainly from False Data Injection(FDI)attacks.If this happens,the sensors and operations are compromised,which can lead to big problems,disruptions,failures and blackouts.In response to this challenge,this paper presents a reliable and innovative detection framework that leverages Bidirectional Long Short-Term Memory(Bi-LSTM)networks and employs explanatory methods from Artificial Intelligence(AI).Not only does the suggested architecture detect potential fraud with high accuracy,but it also makes its decisions transparent,enabling operators to take appropriate action.Themethod developed here utilizesmodel-free,interpretable tools to identify essential input elements,thereby making predictions more understandable and usable.Enhancing detection performance is made possible by correcting class imbalance using Synthetic Minority Over-sampling Technique(SMOTE)-based data balancing.Benchmark power system data confirms that the model functions correctly through detailed experiments.Experimental results showed that Bi-LSTM+Explainable AI(XAI)achieved an average accuracy of 94%,surpassing XGBoost(89%)and Bagging(84%),while ensuring explainability and a high level of robustness across various operating scenarios.By conducting an ablation study,we find that bidirectional recursive modeling and ReLU activation help improve generalization and model predictability.Additionally,examining model decisions through LIME enables us to identify which features are crucial for making smart grid operational decisions in real time.The research offers a practical and flexible approach for detecting FDI attacks,improving the security of cyber-physical systems,and facilitating the deployment of AI in energy infrastructure.
基金supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(*MSIT)(No.2018R1A5A7059549)supported through Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R508)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia。
文摘Globally,skin cancer is a prevalent form of malignancy,and its early and accurate diagnosis is critical for patient survival.Clinical evaluation of skin lesions is essential,but several challenges,such as long waiting times and subjective interpretations,make this task difficult.The recent advancement of deep learning in healthcare has shownmuch success in diagnosing and classifying skin cancer and has assisted dermatologists in clinics.Deep learning improves the speed and precision of skin cancer diagnosis,leading to earlier prediction and treatment.In this work,we proposed a novel deep architecture for skin cancer classification in innovative healthcare.The proposed framework performed data augmentation at the first step to resolve the imbalance issue in the selected dataset.The proposed architecture is based on two customized,innovative Convolutional neural network(CNN)models based on small depth and filter sizes.In the first model,four residual blocks are added in a squeezed fashion with a small filter size.In the second model,five residual blocks are added with smaller depth and more useful weight information of the lesion region.To make models more useful,we selected the hyperparameters through Bayesian Optimization,in which the learning rate is selected.After training the proposed models,deep features are extracted and fused using a novel information entropy-controlled Euclidean Distance technique.The final features are passed on to the classifiers,and classification results are obtained.Also,the proposed trained model is interpreted through LIME-based localization on the HAM10000 dataset.The experimental process of the proposed architecture is performed on two dermoscopic datasets,HAM10000 and ISIC2019.We obtained an improved accuracy of 90.8%and 99.3%on these datasets,respectively.Also,the proposed architecture returned 91.6%for the cancer localization.In conclusion,the proposed architecture accuracy is compared with several pre-trained and state-of-the-art(SOTA)techniques and shows improved performance.
文摘Honeycombing Lung(HCL)is a chronic lung condition marked by advanced fibrosis,resulting in enlarged air spaces with thick fibrotic walls,which are visible on Computed Tomography(CT)scans.Differentiating between normal lung tissue,honeycombing lungs,and Ground Glass Opacity(GGO)in CT images is often challenging for radiologists and may lead to misinterpretations.Although earlier studies have proposed models to detect and classify HCL,many faced limitations such as high computational demands,lower accuracy,and difficulty distinguishing between HCL and GGO.CT images are highly effective for lung classification due to their high resolution,3D visualization,and sensitivity to tissue density variations.This study introduces Honeycombing Lungs Network(HCL Net),a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches.HCL Net incorporates additional residual blocks,refined preprocessing techniques,and selective parameter tuning to improve classification performance.The dataset,sourced from the University Malaya Medical Centre(UMMC)and verified by expert radiologists,consists of CT images of normal,honeycombing,and GGO lungs.Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%.It also recorded strong performance in other metrics,achieving 93%precision,100%sensitivity,89%specificity,and an AUC-ROC score of 97%.Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net.The model significantly reduces misclassification,particularly between honeycombing and GGO lungs,enhancing diagnostic precision and reliability in lung image analysis.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R195)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘With the increasing growth of online news,fake electronic news detection has become one of the most important paradigms of modern research.Traditional electronic news detection techniques are generally based on contextual understanding,sequential dependencies,and/or data imbalance.This makes distinction between genuine and fabricated news a challenging task.To address this problem,we propose a novel hybrid architecture,T5-SA-LSTM,which synergistically integrates the T5 Transformer for semantically rich contextual embedding with the Self-Attentionenhanced(SA)Long Short-Term Memory(LSTM).The LSTM is trained using the Adam optimizer,which provides faster and more stable convergence compared to the Stochastic Gradient Descend(SGD)and Root Mean Square Propagation(RMSProp).The WELFake and FakeNewsPrediction datasets are used,which consist of labeled news articles having fake and real news samples.Tokenization and Synthetic Minority Over-sampling Technique(SMOTE)methods are used for data preprocessing to ensure linguistic normalization and class imbalance.The incorporation of the Self-Attention(SA)mechanism enables the model to highlight critical words and phrases,thereby enhancing predictive accuracy.The proposed model is evaluated using accuracy,precision,recall(sensitivity),and F1-score as performance metrics.The model achieved 99%accuracy on the WELFake dataset and 96.5%accuracy on the FakeNewsPrediction dataset.It outperformed the competitive schemes such as T5-SA-LSTM(RMSProp),T5-SA-LSTM(SGD)and some other models.
基金funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB BremenThe authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Group Project under grant number(RGP2/367/46)+1 种基金This research is supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘As urban landscapes evolve and vehicular volumes soar,traditional traffic monitoring systems struggle to scale,often failing under the complexities of dense,dynamic,and occluded environments.This paper introduces a novel,unified deep learning framework for vehicle detection,tracking,counting,and classification in aerial imagery designed explicitly for modern smart city infrastructure demands.Our approach begins with adaptive histogram equalization to optimize aerial image clarity,followed by a cutting-edge scene parsing technique using Mask2Former,enabling robust segmentation even in visually congested settings.Vehicle detection leverages the latest YOLOv11 architecture,delivering superior accuracy in aerial contexts by addressing occlusion,scale variance,and fine-grained object differentiation.We incorporate the highly efficient ByteTrack algorithm for tracking,enabling seamless identity preservation across frames.Vehicle counting is achieved through an unsupervised DBSCAN-based method,ensuring adaptability to varying traffic densities.We further introduce a hybrid feature extraction module combining Convolutional Neural Networks(CNNs)with Zernike Moments,capturing both deep semantic and geometric signatures of vehicles.The final classification is powered by NASNet,a neural architecture search-optimized model,ensuring high accuracy across diverse vehicle types and orientations.Extensive evaluations of the VAID benchmark dataset demonstrate the system’s outstanding performance,achieving 96%detection,94%tracking,and 96.4%classification accuracy.On the UAVDT dataset,the system attains 95%detection,93%tracking,and 95%classification accuracy,confirming its robustness across diverse aerial traffic scenarios.These results establish new benchmarks in aerial traffic analysis and validate the framework’s scalability,making it a powerful and adaptable solution for next-generation intelligent transportation systems and urban surveillance.
文摘In the past decade,online Peer-to-Peer(P2P)lending platforms have transformed the lending industry,which has been historically dominated by commercial banks.Information technology breakthroughs such as big data-based financial technologies(Fintech)have been identified as important disruptive driving forces for this paradigm shift.In this paper,we take an information economics perspective to investigate how big data affects the transformation of the lending industry.By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending,we discuss how information asymmetry is reduced in the big data era.Rooted in the lending business,we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
基金supported by the Engineering and Physical Sciences Research Council(EPSRC)of the UK(No.GR/S27658/01)the Royal Society of the UK and the Alexander von Humboldt Foundation of Germany
文摘This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via sliding mode conception (SMC). A sequence of variables obeying Bernoulli distribution are employed to model the randomly occurring communication delays which could be different for different state variables. A discrete switching function that is different from those in the existing literature is first proposed. Then, expressed as the feasibility of a linear matrix inequality (LMI) with an equality constraint, sufficient conditions are derived in order to ensure the globally mean-square asymptotic stability of the system dynamics on the sliding surface. A discrete-time SMC controller is then synthesized to guarantee the discrete-time sliding mode reaching condition with the specified sliding surface. Finally, a simulation example is given to show the effectiveness of the proposed method.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number(RGP 2/209/42)Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2022R77),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Mobile edge computing(MEC)provides effective cloud services and functionality at the edge device,to improve the quality of service(QoS)of end users by offloading the high computation tasks.Currently,the introduction of deep learning(DL)and hardware technologies paves amethod in detecting the current traffic status,data offloading,and cyberattacks in MEC.This study introduces an artificial intelligence with metaheuristic based data offloading technique for Secure MEC(AIMDO-SMEC)systems.The proposed AIMDO-SMEC technique incorporates an effective traffic prediction module using Siamese Neural Networks(SNN)to determine the traffic status in the MEC system.Also,an adaptive sampling cross entropy(ASCE)technique is utilized for data offloading in MEC systems.Moreover,the modified salp swarm algorithm(MSSA)with extreme gradient boosting(XGBoost)technique was implemented to identification and classification of cyberattack that exist in the MEC systems.For examining the enhanced outcomes of the AIMDO-SMEC technique,a comprehensive experimental analysis is carried out and the results demonstrated the enhanced outcomes of the AIMDOSMEC technique with the minimal completion time of tasks(CTT)of 0.680.
基金The authors acknowledge Jouf University,Saudi Arabia for his funding support.
文摘Internet of Things(IoT)devices work mainly in wireless mediums;requiring different Intrusion Detection System(IDS)kind of solutions to leverage 802.11 header information for intrusion detection.Wireless-specific traffic features with high information gain are primarily found in data link layers rather than application layers in wired networks.This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods,IDS techniques,IDS placement strategies,and traffic data analysis techniques.This paper’s main finding highlights the lack of available network traces for training modern machine-learning models against IoT specific intrusions.Specifically,the Knowledge Discovery in Databases(KDD)Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes and proposed several guidelines to future-proof following traffic capture methods in the wireless network(WN).The paper starts with a review of various intrusion detection techniques,data collection methods and placement methods.The main goal of this paper is to study the design challenges of deploying intrusion detection system in a wireless environment.Intrusion detection system deployment in a wireless environment is not as straightforward as in the wired network environment due to the architectural complexities.So this paper reviews the traditional wired intrusion detection deployment methods and discusses how these techniques could be adopted into the wireless environment and also highlights the design challenges in the wireless environment.The main wireless environments to look into would be Wireless Sensor Networks(WSN),Mobile Ad Hoc Networks(MANET)and IoT as this are the future trends and a lot of attacks have been targeted into these networks.So it is very crucial to design an IDS specifically to target on the wireless networks.
基金Juan Feng would like to acknowledge GRF(General Research Fund)9042133City U SRG grant 7004566Bin Gu would like to acknowledge National Natural Science Foundation of China[Grant 71328102].
文摘Background:We examine the signaling effect of borrowers’social media behavior,especially self-disclosure behavior,on the default probability of money borrowers on a peer-to-peer(P2P)lending site.Method:We use a unique dataset that combines loan data from a large P2P lending site with the borrower’s social media presence data from a popular social media site.Results:Through a natural experiment enabled by an instrument variable,we identify two forms of social media information that act as signals of borrowers’creditworthiness:(1)borrowers’choice to self-disclose their social media account to the P2P lending site,and(2)borrowers’social media behavior,such as their social network scope and social media engagement.Conclusion:This study offers new insights for screening borrowers in P2P lending and a novel usage of social media information.
文摘Cyber-physical systems(CPS)are increasingly commonplace,with applications in energy,health,transportation,and many other sectors.One of the major requirements in CPS is that the interaction between cyber-world and man-made physical world(exchanging and sharing of data and information with other physical objects and systems)must be safe,especially in bi-directional communications.In particular,there is a need to suitably address security and/or privacy concerns in this human-in-the-loop CPS ecosystem.However,existing centralized architecture models in CPS,and also the more general IoT systems,have a number of associated limitations,in terms of single point of failure,data privacy,security,robustness,etc.Such limitations reinforce the importance of designing reliable,secure and privacy-preserving distributed solutions and other novel approaches,such as those based on blockchain technology due to its features(e.g.,decentralization,transparency and immutability of data).This is the focus of this special issue.
文摘The main objective of this research is to provide a solution for online exam systems by using face recognition to authenticate learners for attending an online exam. More importantly, the system continuously (with short time intervals), checks for learner identity during the whole exam period to ensure that the learner who started the exam is the same one who continued until the end and prevent the possibility of cheating by looking at adjacent PC or reading from an external paper. The system will issue an early warning to the learners if suspicious behavior has been noticed by the system. The proposed system has been presented to eight e-learning instructors and experts in addition to 32 students to gather feedback and to study the impact and the benefit of such system in e-learning environment.
基金Ministry of Higher Education of Malaysia under the Research GrantLRGS/1/2019/UKM-UKM/5/2 and Princess Nourah bint Abdulrahman University for financing this researcher through Supporting Project Number(PNURSP2024R235),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Due to the overwhelming characteristics of the Internet of Things(IoT)and its adoption in approximately every aspect of our lives,the concept of individual devices’privacy has gained prominent attention from both customers,i.e.,people,and industries as wearable devices collect sensitive information about patients(both admitted and outdoor)in smart healthcare infrastructures.In addition to privacy,outliers or noise are among the crucial issues,which are directly correlated with IoT infrastructures,as most member devices are resource-limited and could generate or transmit false data that is required to be refined before processing,i.e.,transmitting.Therefore,the development of privacy-preserving information fusion techniques is highly encouraged,especially those designed for smart IoT-enabled domains.In this paper,we are going to present an effective hybrid approach that can refine raw data values captured by the respectivemember device before transmission while preserving its privacy through the utilization of the differential privacy technique in IoT infrastructures.Sliding window,i.e.,δi based dynamic programming methodology,is implemented at the device level to ensure precise and accurate detection of outliers or noisy data,and refine it prior to activation of the respective transmission activity.Additionally,an appropriate privacy budget has been selected,which is enough to ensure the privacy of every individualmodule,i.e.,a wearable device such as a smartwatch attached to the patient’s body.In contrast,the end module,i.e.,the server in this case,can extract important information with approximately the maximum level of accuracy.Moreover,refined data has been processed by adding an appropriate nose through the Laplace mechanism to make it useless or meaningless for the adversary modules in the IoT.The proposed hybrid approach is trusted from both the device’s privacy and the integrity of the transmitted information perspectives.Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet ofMedical Things,where both device privacy and information integrity are important.Finally,the proposed hybrid approach is proven against well-known intruder attacks,especially those related to the privacy of the respective device in IoT infrastructures.
Abstract: The spread of social media has increased contact between members of communities on the Internet. Members of these communities often use account names instead of real names. When they meet in the real world, it is useful to have a tool that enables them to associate the faces in front of them with the account names they know. This paper proposes a method that enables a person to identify the account name of the person (the "target") in front of him/her using a smartphone. The attendees of a meeting exchange their identifiers (i.e., account names) and GPS information using smartphones. When the user points his/her smartphone towards a target, the target's identifier is displayed near the target's head on the camera screen using augmented reality (AR). The position where the identifier is displayed is calculated from the differences in longitude and latitude between the user and the target and the azimuth direction of the target from the user. The target is identified based on this information, the face detection coordinates, and the distance between the two. The proposed method has been implemented on Android terminals, and its identification accuracy has been examined through experiments.
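The geometric step mentioned above can be illustrated with the standard great-circle formulas; the following sketch (function names are illustrative, not the authors' code) computes the bearing and distance from the user to the target and applies a simple field-of-view test:

```python
# Sketch of the geometric step: bearing (azimuth) and distance from the user
# to a target, computed from their GPS coordinates. Formulas are the standard
# great-circle ones; the field-of-view value is an illustrative assumption.
import math

EARTH_RADIUS_M = 6_371_000

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_view(device_azimuth_deg, target_bearing_deg, fov_deg=60):
    """Draw the label only if the target's bearing falls inside the camera's FOV."""
    diff = (target_bearing_deg - device_azimuth_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2
```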
Funding: King Abdulaziz University, Deanship of Scientific Research, Grant Number KEP-PHD-20-611-42.
Abstract: Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) systems to achieve exascale performance by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. When multiple models are integrated, the probability of runtime errors increases, and detecting them becomes difficult, especially in the absence of testing techniques that can identify these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models MPI, OpenMP, and OpenACC, no testing technology has been developed to detect the runtime errors, such as deadlocks and race conditions, that can arise from this integration. Therefore, this paper begins with a definition and explanation of the runtime errors resulting from integrating the three programming models that compilers cannot detect. For the first time, this paper presents a classification of the operational errors that can result from integrating the three models. It also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the triple programming models MPI, OpenMP, and OpenACC. This hybrid technique combines static and dynamic analysis, given that some errors can be detected statically, whereas others can only be detected dynamically. The hybrid technique can detect more errors because it combines two distinct technologies. The proposed static technique detects a wide range of error types in less time, whereas the potential errors that may or may not occur depending on the operating environment are left to the dynamic technique, which completes the validation.
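As a toy illustration of the static phase only (a regex heuristic, not the authors' analyzer), the following sketch scans C++ source text for OpenMP parallel loops that update a variable not covered by a reduction, private, or atomic clause, which is one of the race-condition patterns such a tool must flag:

```python
# Toy static check in the spirit of the hybrid technique: flag writes inside
# `#pragma omp parallel for` loops that are not protected by a reduction,
# private/firstprivate clause, or an atomic/critical pragma. This regex
# heuristic is purely illustrative and far weaker than a real analyzer.
import re

OMP_FOR = re.compile(r"#pragma\s+omp\s+parallel\s+for(?P<clauses>[^\n]*)")
ASSIGN = re.compile(r"\b(\w+)\s*(?:\+=|-=|\*=|=(?!=))")
PROTECTED = re.compile(r"(?:reduction\([^:]+:|private\(|firstprivate\()\s*(\w+)")

def potential_races(cxx_source):
    """Return (line_number, variable) pairs that look like unprotected writes."""
    warnings = []
    lines = cxx_source.splitlines()
    for i, line in enumerate(lines):
        m = OMP_FOR.search(line)
        if not m:
            continue
        protected = set(PROTECTED.findall(m.group("clauses")))
        # Look a few lines past the loop header for assignments to shared names.
        for j in range(i + 2, min(i + 7, len(lines))):
            if "omp atomic" in lines[j - 1] or "omp critical" in lines[j - 1]:
                continue
            for var in ASSIGN.findall(lines[j]):
                if var not in protected:
                    warnings.append((j + 1, var))
    return warnings

sample = """
#pragma omp parallel for
for (int i = 0; i < n; ++i) {
    sum += a[i];   // unprotected shared update
}
"""
print(potential_races(sample))  # flags the write to `sum`
```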
Abstract: To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although OCHA has made great strides to improve its information management and increase the availability of accurate, real-time data for disaster and humanitarian response teams, significant gaps persist. There are inefficiencies in the emergency management of data at every stage of its lifecycle: collection, processing, analysis, distribution, storage, and retrieval. Disaster risk reduction and disaster risk management are the two main tenets of the United Nations' worldwide plan for disaster management. Information systems are crucial because of the roles they play in capturing, processing, and transmitting data, yet the management of information is seldom discussed in the published literature. The goal of this study is to employ qualitative research methods to provide insight by facilitating an expanded comprehension of relevant contexts, phenomena, and individual experiences. Humanitarian workers and OCHA staff will take part in the research, with subjects chosen using a random selection procedure. Online surveys with both closed- and open-ended questions will be used to compile the data. UN OCHA offers a structure for handling information through which all humanitarian actors may contribute to the overall response. This research will enable UN OCHA to better gather, process, analyze, disseminate, store, and retrieve data in the event of a catastrophe or humanitarian crisis.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number (RGP 2/49/42), and to the Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R237), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The increasing quantity of sensitive and personal data gathered by data controllers has raised the security requirements of the cloud environment. Cloud computing (CC) is used for storing as well as processing data, so security becomes important as CC handles massive quantities of outsourced, unprotected sensitive data for public access. This study introduces a novel chaotic chimp optimization with machine learning enabled information security (CCOML-IS) technique for the cloud environment. The proposed CCOML-IS technique aims to accomplish maximum security in the CC environment by identifying intrusions or anomalies in the network. The CCOML-IS technique first normalizes the networking data using data conversion and min-max normalization. Next, it derives a feature selection technique using the chaotic chimp optimization algorithm (CCOA). In addition, a kernel ridge regression (KRR) classifier is used to detect security issues in the network. The design of the CCOA technique assists in choosing optimal features and thereby boosts the classification performance. A wide set of experiments was carried out on benchmark datasets, and the results were assessed under several measures. The comparative study reported the enhanced outcomes of the CCOML-IS technique over recent approaches in terms of several measures.
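A minimal sketch of the normalization and classification stages named above is shown below, using scikit-learn's MinMaxScaler and KernelRidge (a regressor thresholded at 0.5 to act as a binary classifier); the chaotic chimp feature selection is replaced by a placeholder mask purely for illustration:

```python
# Sketch of the preprocessing and classification stages: min-max normalization
# followed by kernel ridge regression used as a binary anomaly detector.
# The CCOA feature-selection step is replaced by a trivial keep-all mask,
# and the kernel hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.preprocessing import MinMaxScaler

def train_detector(X_train, y_train, selected_mask=None):
    """Normalize features (numpy arrays), apply a feature mask, and fit KRR."""
    if selected_mask is None:
        selected_mask = np.ones(X_train.shape[1], dtype=bool)  # placeholder for CCOA
    scaler = MinMaxScaler()
    X_scaled = scaler.fit_transform(X_train[:, selected_mask])
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
    model.fit(X_scaled, y_train)
    return scaler, model, selected_mask

def predict_anomaly(scaler, model, selected_mask, X):
    """Return 1 for predicted intrusion/anomaly, 0 for normal traffic."""
    scores = model.predict(scaler.transform(X[:, selected_mask]))
    return (scores >= 0.5).astype(int)
```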
Abstract: Usability is one of the critical attributes of any system's quality. Like users of other systems, medical practitioners often encounter usability difficulties while using a health information system (HIS). Different usability factors are expected to influence a system's usability; error prevention, patient safety, and privacy are vital among them and should not be ignored while developing a health information system. This study is based on a comprehensive analysis of published academic and industrial literature to establish the current status of health information systems' usability. It also identifies different usability factors, such as privacy, errors, design, and efficiency. These factors are then assessed and further examined through a questionnaire to study their priorities from medical practitioners' point of view in Saudi Arabia. The statistical analysis shows that privacy and errors are more critical than the other usability factors. The results further reveal that availability and response time are the main challenges faced by medical practitioners when using the HIS, whereas flexibility and customizability were reported to ease its use. In addition, a number of statistical correlations were established. Overall, the findings should help designers and implementers consider these factors for the successful implementation of an HIS.
Abstract: Cyberspace has presented opportunities and challenges alike for personal data security and privacy, as well as for the process of research and learning. Moreover, information such as academic data, research data, personal data, proprietary knowledge, complex equipment designs, and blueprints for yet-to-be-patented products has become extremely susceptible to cybersecurity attacks. This research investigates factors that may influence the perceived ease of use of cybersecurity, the influence of perceived ease of use on the attitude towards using cybersecurity, the influence of that attitude on the actual use of cybersecurity, and the influence of job position on perceived ease of use, attitude towards use, and actual use of cybersecurity. A model was constructed to investigate eight related hypotheses, and an online questionnaire was used to collect data. The results showed that the influences in hypotheses 1 to 7 were significant, whereas hypothesis 8 turned out to be insignificant: no influence was found between job position and the actual use of cybersecurity.