Journal Articles
41 articles found
1. A Systematic Literature Review of Machine Learning and Deep Learning Approaches for Spectral Image Classification in Agricultural Applications Using Aerial Photography (cited by 2)
Authors: Usman Khan, Muhammad Khalid Khan, Muhammad Ayub Latif, Muhammad Naveed, Muhammad Mansoor Alam, Salman A. Khan, Mazliham Mohd Su’ud. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 3, pp. 2967-3000 (34 pages)
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Keywords: Machine learning; deep learning; unmanned aerial vehicles; multi-spectral images; image recognition; object detection; hyperspectral images; aerial photography
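The survey above reports support vector machines and deep networks as the dominant classifiers for spectral pixels. As a minimal illustration of the pixelwise classification task only (not any surveyed method), the sketch below labels synthetic 50-band "spectra" with a nearest-centroid rule; all class shapes and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hyperspectral" pixels: 3 crop classes, 50 spectral bands each.
# Each class has a characteristic mean spectrum plus noise (all invented).
n_bands, n_per_class = 50, 100
means = [np.linspace(0.2, 0.8, n_bands),
         np.linspace(0.8, 0.2, n_bands),
         0.5 + 0.3 * np.sin(np.linspace(0, 3.14, n_bands))]
X = np.vstack([m + 0.05 * rng.standard_normal((n_per_class, n_bands)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

# Nearest-centroid classification: assign each pixel to the class whose
# mean training spectrum is closest in Euclidean distance.
centroids = np.stack([X[y == c].mean(axis=0) for c in range(3)])
pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

A real SVM or CNN pipeline would replace the centroid rule, but the band-vector-in, class-label-out shape of the problem is the same.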
2. Enhancing Software Cost Estimation Using Feature Selection and Machine Learning Techniques
Authors: Fizza Mansoor, Muhammad Affan Alim, Muhammad Taha Jilani, Muhammad Monsoor Alam, Mazliham Mohd Su’ud. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 12, pp. 4603-4624 (22 pages)
Software cost estimation is a crucial aspect of software project management, significantly impacting productivity and planning. This research investigates the impact of various feature selection techniques on software cost estimation accuracy using the CoCoMo NASA dataset, which comprises data from 93 unique software projects with 24 attributes. By applying multiple machine learning algorithms alongside three feature selection methods, this study aims to reduce data redundancy and enhance model accuracy. Our findings reveal that the principal component analysis (PCA)-based feature selection technique achieved the highest performance, underscoring the importance of optimal feature selection in improving software cost estimation accuracy. The proposed method is demonstrated to outperform existing methods, achieving the highest precision, accuracy, and recall rates.
Keywords: Machine learning; software cost estimation; PCA; hyperparameter; feature selection
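A sketch of the PCA step the abstract credits: reduce the attribute space by SVD, then fit a least-squares cost model on the retained components. The data here is random noise standing in for the 93-project, 24-attribute COCOMO NASA set; the paper's actual learners and metrics are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a COCOMO-style dataset: 93 projects, 24 attributes
# (real attributes are cost drivers; everything here is invented).
n, d = 93, 24
X = rng.standard_normal((n, d))
effort = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# PCA by SVD of the centered data; keep the top-k principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T                      # projected features, shape (n, k)
explained = (S[:k] ** 2).sum() / (S ** 2).sum()

# Fit a least-squares cost model on the reduced features.
A = np.c_[Z, np.ones(n)]
w, *_ = np.linalg.lstsq(A, effort, rcond=None)
rmse = np.sqrt(np.mean((A @ w - effort) ** 2))
print(f"explained variance (top {k}): {explained:.2f}, train RMSE: {rmse:.2f}")
```

Choosing k is the usual trade-off: fewer components mean less redundancy but also discarded signal, which is why the paper evaluates several feature-selection methods side by side.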
3. AI-Driven Learning Management Systems: Modern Developments, Challenges and Future Trends during the Age of ChatGPT
Authors: Sameer Qazi, Muhammad Bilal Kadri, Muhammad Naveed, Bilal A. Khawaja, Sohaib Zia Khan, Muhammad Mansoor Alam, Mazliham Mohd Su’ud. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 8, pp. 3289-3314 (26 pages)
COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus. Foremost among the affected sectors were schools, colleges, and universities, and entire nations' education systems shifted to online education during this time. Supporting education in an online mode exposed many shortcomings of Learning Management Systems (LMSs), which spawned research into Artificial Intelligence (AI)-based tools that are being developed by the research community to improve the effectiveness of LMSs. This paper presents a detailed survey of the different enhancements to LMSs, led by key advances in AI, to enhance the real-time and non-real-time user experience. The AI-based enhancements proposed for LMSs start from the Application and Presentation layers, in the form of flipped-classroom models for an efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources, including AI-based chatbots. Session-layer enhancements are also required, such as AI-based online proctoring and user authentication using biometrics. These extend to the Transport layer to support real-time, rate-adaptive encrypted video transmission for user security/privacy and the satisfactory working of AI algorithms. Support is also needed from the Network layer for IP-based geolocation features, the Virtual Private Network (VPN) feature, and Software-Defined Networks (SDN) for optimum Quality of Service (QoS). Finally, the non-real-time user experience is enhanced by other AI-based enhancements such as plagiarism detection algorithms and data analytics.
Keywords: Learning management systems; chatbots; ChatGPT; online education; Internet of Things (IoT); artificial intelligence (AI); convolutional neural networks; natural language processing
4. A Survey of Lung Nodules Detection and Classification from CT Scan Images
Authors: Salman Ahmed, Fazli Subhan, Mazliham Mohd Su’ud, Muhammad Mansoor Alam, Adil Waheed. 《Computer Systems Science & Engineering》, 2024, No. 6, pp. 1483-1511 (29 pages)
In the contemporary era, the death rate due to lung cancer is increasing. However, technology is continuously enhancing the quality of well-being. To improve the survival rate, radiologists rely on Computed Tomography (CT) scans for early detection and diagnosis of lung nodules. This paper presents a detailed, systematic review of several identification and categorization techniques for lung nodules. The analysis explores the challenges, advancements, and future directions of computer-aided diagnosis (CAD) systems for detecting and classifying lung nodules employing deep learning (DL) algorithms. The findings also highlight the usefulness of DL networks, especially convolutional neural networks (CNNs), in elevating sensitivity, accuracy, and specificity, as well as overcoming false positives in the initial stages of lung cancer detection. The paper further presents the integral nodule classification stage, stressing the importance of differentiating between benign and malignant nodules for initial cancer diagnosis. Moreover, it presents a comprehensive analysis of multiple techniques and studies for nodule classification, highlighting the evolution of methodologies from conventional machine learning (ML) classifiers to transfer learning and integrated CNNs. While acknowledging the strides made by CAD systems, the review also addresses persistent challenges.
Keywords: Lung nodules; computed tomography scans; lung cancer; deep learning
5. Impact of Coronavirus Pandemic Crisis on Technologies and Cloud Computing Applications
Authors: Ziyad R. Alashhab, Mohammed Anbar, Manmeet Mahinderjit Singh, Yu-Beng Leau, Zaher Ali Al-Sai, Sami Abu Alhayja’a. 《Journal of Electronic Science and Technology》 (CAS, CSCD), 2021, No. 1, pp. 25-40 (16 pages)
In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to efficiently accomplish their tasks. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it provides fast-paced, rapidly deployable application services for maintaining data. Despite the increase in the use of CC applications, there are ongoing research challenges in the CCE domain concerning data, guaranteeing security, and the availability of CC applications. This paper, to the best of our knowledge, is the first to thoroughly explain the impact of the COVID-19 pandemic on the CCE. Additionally, it highlights the security risks of working from home during the COVID-19 pandemic.
Keywords: Big data privacy; cloud computing (CC) applications; COVID-19; digital transformation; security challenge; work from home
6. A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (cited by 1)
Authors: Seema Rani, Anju, Anupma Sangwan, Krishna Kumar, Kashif Nisar, Tariq Rahim Soomro, Ag. Asri Ag. Ibrahim, Manoj Gupta, Laxmi Chand, Sadiq Ali Khan. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 6, pp. 5697-5715 (19 pages)
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). Research in this area now focuses on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It is also essential for data tagging: sensed data is of little use until the position where it was sensed is confirmed. This article's major goal is to review and analyze underwater node localization to solve the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes, with a detailed subdivision of each. Further, these localization schemes are compared from different perspectives, and a detailed analysis of the schemes in terms of certain performance metrics is discussed. At the end, the paper addresses several future directions for potential research on improving localization in UWSNs.
Keywords: Underwater wireless sensor networks; localization schemes; node localization; ranging algorithms; estimation based; prediction based
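Many of the ranging-based schemes this review surveys reduce to estimating a node's position from measured distances to anchor nodes. A minimal least-squares trilateration sketch, with invented 2-D anchor positions and noise-free ranges:

```python
import numpy as np

# Range-based localization: estimate an underwater node's 2-D position
# from distances to anchors at known positions (values invented).
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 70.0])
d = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free range measurements

# Linearize the range equations by subtracting the first anchor's equation:
#   2*(a_i - a_0) . x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2
A = 2 * (anchors[1:] - anchors[0])
b = (anchors[1:] ** 2).sum(1) - (anchors[0] ** 2).sum() - d[1:] ** 2 + d[0] ** 2
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est)
```

With noisy ranges the same least-squares solve still applies; the residual then reflects the ranging error that distributed schemes try to bound.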
7. Application of Fourth Industrial Revolution Technologies to Marine Aquaculture for Future Food: Imperatives, Challenges and Prospects (cited by 1)
Authors: Sitti Raehanah M. Shaleh, Rossita Shapawi, Abentin Estim, Ching Fui Fui, Ag. Asri Ag. Ibrahim, Audrey Daning Tuzan, Lim Leong Seng, Chen Cheng Ann, Alter Jimat, Burhan Japar, Saleem Mustafa. 《Sustainable Marine Structures》, 2021, No. 1, pp. 22-31 (10 pages)
This study was undertaken to examine the options and feasibility of deploying new technologies for transforming the aquaculture sector with the objective of increasing production efficiency. Selection of technologies to obtain the expected outcome should, obviously, be consistent with the criteria of sustainable development. A range of technologies is being suggested for driving change in aquaculture to enhance its contribution to food security. It is necessary to highlight the complexity of issues for a systems approach that can shape the course of development of aquaculture so that it can live up to the expected fish demand by 2030, in addition to the current quantity of 82.1 million tons. Some of the Fourth Industrial Revolution (IR4.0) technologies suggested to achieve this target envisage the use of real-time monitoring, integration of a constant stream of data from connected production systems, and intelligent automation in controls. This requires the application of mobile devices, the Internet of Things (IoT), smart sensors, artificial intelligence (AI), big data analytics, and robotics, as well as augmented, virtual, and mixed reality. AI is receiving more attention for many reasons. Its use in aquaculture can happen in many ways, for example, in detecting and mitigating stress on captive fish, which is considered critical for the success of aquaculture. While technology intensification in aquaculture holds great potential, there are constraints in deploying IR4.0 tools. Possible solutions and practical options, especially with respect to future food choices, are highlighted in this paper.
Keywords: Food security; Aquaculture 4.0; digitalization; imitation seafood; sustainable solutions
8. SNR and RSSI Based an Optimized Machine Learning Based Indoor Localization Approach: Multistory Round Building Scenario over LoRa Network (cited by 1)
Authors: Muhammad Ayoub Kamal, Muhammad Mansoor Alam, Aznida Abu Bakar Sajak, Mazliham Mohd Su’ud. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 8, pp. 1927-1945 (19 pages)
In situations where the precise position of a machine is unknown, localization becomes crucial. This research focuses on improving position prediction accuracy over a long-range (LoRa) network using an optimized machine learning (ML) based technique. To increase the prediction accuracy of the reference point position on data collected using the fingerprinting method over LoRa technology, this study proposes an optimized ML-based algorithm. Received signal strength indicator (RSSI) data from sensors at different positions was first gathered through an experiment over the LoRa network in a multistory round-layout building. The noise factor is also taken into account, and the signal-to-noise ratio (SNR) value is recorded for every RSSI measurement. The study examines reference point accuracy with a modified KNN method (MKNN), created to predict the position of the reference point more precisely. The findings show that MKNN outperformed the other algorithms in terms of accuracy and complexity.
Keywords: Indoor localization; MKNN; LoRa; machine learning; classification; RSSI; SNR; localization
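The abstract does not spell out MKNN's exact modification, so this sketch shows only the plain KNN fingerprinting baseline it builds on: stored RSSI vectors vote for the zone of a new measurement. All fingerprint values are invented; a real LoRa survey would also carry the SNR per reading.

```python
import math
from collections import Counter

# Fingerprint database: ((rssi_gw1, rssi_gw2, rssi_gw3) in dBm, zone label).
fingerprints = [
    ((-60, -75, -90), "floor1"), ((-62, -74, -88), "floor1"),
    ((-80, -60, -85), "floor2"), ((-82, -58, -84), "floor2"),
    ((-90, -85, -55), "floor3"), ((-88, -86, -57), "floor3"),
]

def knn_predict(sample, k=3):
    """Classify an RSSI vector by majority vote among the k nearest fingerprints."""
    dists = sorted(
        (math.dist(sample, feats), zone) for feats, zone in fingerprints
    )
    votes = Counter(zone for _, zone in dists[:k])
    return votes.most_common(1)[0][0]

print(knn_predict((-61, -76, -89)))  # closest to the floor1 fingerprints
```

Typical modifications (distance-weighted votes, per-feature scaling, SNR-based filtering) slot into `knn_predict` without changing this overall shape.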
9. Testing and Analysis of VoIPv6 (Voice over Internet Protocol V6) Performance Using FreeBSD
Authors: Asaad A. Abusin, M. D. Jahangir Alam, Junaidi Abdullah. 《International Journal of Communications, Network and System Sciences》, 2012, No. 5, pp. 298-302 (5 pages)
This study focuses on testing, quality measurement, and analysis of VoIPv6 performance. Client and server code was developed using FreeBSD. This is a step toward analyzing VoIPv6 architectures in the current Internet so that it can cope with IPv6 traffic transmission requirements in general, and voice traffic specifically, which is currently attracting the efforts of research bodies. These tests were conducted at the application level, without looking into the network level. VoIPv6 performance tests were conducted over both tunneled and native IPv6, aiming for better end-to-end VoIPv6 performance. The results were obtained for different codecs at different bit rates in kilobits per second, and indicate the better performance of G.711 compared with the rest of the tested codecs.
Keywords: VoIPv6 (Voice over Internet Protocol V6); VoIPv6 performance testing; VoIPv6 performance analysis; VoIPv6 quality testing at the application level
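Codec comparisons like the one above hinge on per-call bandwidth, which can be computed from the standard header sizes (IPv6 40 B, UDP 8 B, RTP 12 B per RFC 3550). The sketch below assumes the common 20 ms packetization interval; the paper's own test parameters are not stated in the abstract.

```python
# Per-call bandwidth for a codec carried over RTP/UDP/IPv6.
IPV6, UDP, RTP = 40, 8, 12   # standard header sizes in bytes

def voip_bandwidth_kbps(codec_rate_kbps, frame_ms):
    """IP-layer bandwidth of one voice stream: payload per packet plus
    RTP/UDP/IPv6 headers, times the packet rate."""
    payload_bytes = codec_rate_kbps * 1000 / 8 * frame_ms / 1000
    packets_per_s = 1000 / frame_ms
    return (payload_bytes + IPV6 + UDP + RTP) * 8 * packets_per_s / 1000

# G.711 at 64 kbps with 20 ms packetization -> 160 B payload, 50 packets/s.
print(f"G.711 over IPv6: {voip_bandwidth_kbps(64, 20):.0f} kbps")  # 88 kbps
print(f"G.729 over IPv6: {voip_bandwidth_kbps(8, 20):.0f} kbps")   # 32 kbps
```

The fixed 60-byte header overhead is why low-bitrate codecs pay proportionally more per packet: G.729's 8 kbps payload becomes 32 kbps on the wire, a 4x inflation, versus 1.375x for G.711.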
10. Modelling and Performance Analysis of Visible Light Communication System in Industrial Implementations
Authors: Mohammed S. M. Gismalla, Asrul I. Azmi, Mohd R. Salim, Farabi Iqbal, Mohammad F. L. Abdullah, Mosab Hamdan, Muzaffar Hamzah, Abu Sahmah M. Supa’at. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 11, pp. 2189-2204 (16 pages)
Visible light communication (VLC) has a paramount role in industrial implementations, especially for better energy efficiency, high-speed data rates, and low susceptibility to interference. However, since studies on VLC for industrial implementations are scarce, areas concerning illumination optimisation and communication performance demand further investigation. As such, this paper presents a new model of light fixture distribution for a warehouse to provide acceptable illumination and communication performance. The proposed model was evaluated for various semi-angles at half power (SAAHP) and different height levels in terms of several parameters, including received power, signal-to-noise ratio (SNR), and bit error rate (BER). The results revealed improvement in received power and SNR at a 30 Mbps data rate. Various modulations were studied to improve link quality, whereby better average BER values of 5.55×10^(−15) and 1.06×10^(−10) were achieved with 4-PAM and 8-PPM, respectively. The simulation outcomes are indeed viable for the practical warehouse model.
Keywords: Visible light communication (VLC); industrial applications; warehouse model; light fixtures; bit error rate (BER)
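BER figures like those above follow from the link SNR. As a textbook reference point (not the paper's simulation), the sketch below evaluates the standard OOK-in-AWGN relation BER = Q(sqrt(SNR)):

```python
import math

def q_func(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_ook(snr_db):
    """OOK-NRZ bit error rate in AWGN: BER = Q(sqrt(SNR))."""
    snr = 10 ** (snr_db / 10)
    return q_func(math.sqrt(snr))

for snr_db in (10, 13.6, 20):
    print(f"SNR {snr_db:5.1f} dB -> BER {ber_ook(snr_db):.2e}")
```

An SNR of roughly 13.6 dB is the classic threshold for BER near 1e-6 with OOK; higher-order schemes such as 4-PAM trade SNR margin for data rate, while PPM trades bandwidth for sensitivity.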
11. 3D Modelling, Simulation and Prediction of Facial Wrinkles
Authors: Sokyna Alqatawneh, Ali Mehdi, Thamer Al Rawashdeh. 《通讯和计算机(中英文版)》 (Journal of Communication and Computer), 2014, No. 4, pp. 365-370 (6 pages)
Keywords: Facial wrinkles; 3D modelling; prediction; NURBS curves; simulation; 3D systems; police departments; researchers
12. Mobile Technology and Dissemination of Information in the Kenyan Insurance Industry
Authors: Gladys Nyawira Maina, Thomas Ogoro Ombati, Oboko Obwocha Robert. 《Intelligent Information Management》, 2022, No. 4, pp. 105-118 (14 pages)
It is estimated that only 15 percent of Kenyans have made plans for retirement, and many people fall into poverty once they retire. A 2018 survey by the Unclaimed Property Asset Register found that insurance companies hold 25 percent of unclaimed funds, with 10 percent belonging to pensioners. This was attributed to a lack of effective information flow between insurance companies and their customers, and also between various departments in the insurance companies. Further, there were numerous cases of loss of documents and files, and certain files were untraceable in the departments. This paper investigates ways in which mobile technology influences the dissemination of information for processing pension claims in the insurance industry. An improvement in the dissemination of information for processing pension claims can play a key role in increasing the percentage of Kenyans making plans for retirement. The study deployed a descriptive study design. The target population was 561 pensioners in Jubilee Insurance and 8 heads of departments: pensions business, finance, legal services, internal audit, operations, information and communication technology, actuary, and strategy and business development. The sample size was obtained using the Krejcie and Morgan formula for determining sample size. Because of their small number, the heads of departments were not sampled. Through systematic sampling, a sample of 288 pensioners was selected from the list of pensioners in Jubilee Insurance. The findings led to the conclusion that the mobile application has a positive and significant association with the dissemination of information for pension claims processing in Jubilee Insurance. It was further revealed that text messages have a positive and significant influence on the dissemination of information. Concerning unstructured supplementary service data (USSD), it was concluded that it has a positive and significant influence on the dissemination of information. The study findings also revealed that voice calls have a positive and significant influence on the dissemination of information for pension claims processing in Jubilee Insurance.
Keywords: Mobile technology; information; pension claims processing
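The sampling step above uses the Krejcie and Morgan (1970) formula. A sketch of that calculation, with the commonly used defaults (chi-square 3.841 for 95% confidence, P = 0.5, margin d = 0.05; these defaults are assumptions, as the abstract does not state the parameters used):

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Required sample size s for a population of N (Krejcie & Morgan, 1970):
    s = chi2*N*P*(1-P) / (d^2*(N-1) + chi2*P*(1-P)), rounded up."""
    return math.ceil(chi2 * N * P * (1 - P) / (d ** 2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(561))    # sample size for a population of 561
print(krejcie_morgan(1000))   # matches the published table's value of 278
```
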
13. Data-Oriented Operating System for Big Data and Cloud
Authors: Selwyn Darryl Kessler, Kok-Why Ng, Su-Cheng Haw. 《Intelligent Automation & Soft Computing》, 2024, No. 4, pp. 633-647 (15 pages)
An Operating System (OS) is a critical piece of software that manages a computer's hardware and resources, acting as the intermediary between the computer and the user. Existing OSs are not designed for Big Data and Cloud Computing, resulting in data processing and management inefficiency. This paper proposes a simplified and improved kernel on an x86 system designed for Big Data and Cloud Computing purposes. The proposed design exploits the benefits of improved Input/Output (I/O) performance. The performance engineering applies data-oriented design to traditional data management, improving data processing speed by reducing memory access overheads. The OS incorporates a data-oriented design to "modernize" various Data Science and management aspects. The resulting OS contains a basic input/output system (BIOS) bootloader that boots into Intel 32-bit protected mode, a text display terminal, 4 GB paging memory, a 4096-byte heap block size, a Hard Disk Drive (HDD) Advanced Technology Attachment (ATA) I/O driver, and more. There are also I/O scheduling algorithm prototypes that demonstrate how a simple sweeping algorithm is superior to more conventionally known I/O scheduling algorithms. A MapReduce prototype is implemented using the Message Passing Interface (MPI) for big data purposes. An attempt was also made to optimize binary search using modern performance engineering and data-oriented design.
Keywords: Operating system; big data; cloud computing; MapReduce; data-oriented
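The abstract's claim that a simple sweeping algorithm beats conventional I/O schedulers can be illustrated with a SCAN/elevator-style sketch on the classic textbook request trace (head at cylinder 53); the paper's in-kernel implementation is of course not shown here.

```python
def sweep_schedule(start, requests):
    """Elevator/SCAN-style ordering: service requests at or above the head
    position in ascending order, then the remainder in descending order."""
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    return up + down

def total_movement(start, order):
    """Total head travel (in cylinders) to service requests in the given order."""
    moves, pos = 0, start
    for r in order:
        moves += abs(r - pos)
        pos = r
    return moves

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
fcfs = total_movement(53, reqs)                       # first-come-first-served
scan = total_movement(53, sweep_schedule(53, reqs))   # sweeping order
print(f"FCFS head movement: {fcfs}, sweep: {scan}")   # 640 vs 299
```

Sweeping wins because it converts random seeks into one monotone pass in each direction, which is exactly the access pattern spinning disks (and, to a lesser degree, prefetching caches) reward.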
14. ERBM: A Machine Learning-Driven Rule-Based Model for Intrusion Detection in IoT Environments
Authors: Arshad Mehmmod, Komal Batool, Ahthsham Sajid, Muhammad Mansoor Alam, Mazliham Mohd Su’ud, Inam Ullah Khan. 《Computers, Materials & Continua》, 2025, No. 6, pp. 5155-5179 (25 pages)
Traditional rule-based Intrusion Detection Systems (IDS) are commonly employed owing to their simple design and ability to detect known threats. Nevertheless, with the dynamic network traffic and new degrees of threat present in IoT environments, these systems do not perform well and have elevated false-positive rates, consequently decreasing detection accuracy. In this study, we try to overcome these restrictions by employing fuzzy logic and machine learning to develop an Enhanced Rule-Based Model (ERBM) that classifies packets better and identifies intrusions. The ERBM improves data preprocessing and feature selection by utilizing fuzzy logic: three membership functions classify each network traffic feature as low, medium, or high to remain situationally aware of the environment. These fuzzy sets produce adaptive detection rules by reducing data uncertainty. For further classification, machine learning classifiers such as Decision Tree (DT), Random Forest (RF), and Neural Networks (NN) learn complex attack patterns and make the detection process more precise. A thorough performance evaluation using different metrics, including accuracy, precision, recall, F1 score, detection rate, and false-positive rate, verifies the superiority of ERBM over classical IDS. In extensive experiments, the ERBM achieves a remarkable detection rate of 99% with considerably fewer false positives than conventional models. By integrating uncertainty reasoning via fuzzy logic with an adaptable component via machine learning, the ERBM system provides a unique, scalable, data-driven approach to IoT intrusion detection. This research presents a major enhancement to rule-based IDS, improving detection accuracy against evolving IoT threats.
Keywords: Rule based; intrusions; IoT; fuzzy prediction
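A sketch of the low/medium/high fuzzification step the abstract describes. The paper's actual membership shapes and feature ranges are not given, so the triangular functions and the [0, 1] normalization below are assumptions:

```python
def triangular(x, a, b, c):
    """Triangular membership: rises linearly from a to the peak at b,
    then falls linearly to c; zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(value, lo=0.0, hi=1.0):
    """Map a normalized traffic feature to low/medium/high membership degrees."""
    mid = (lo + hi) / 2
    return {
        "low": triangular(value, lo - 1e-9, lo, mid),
        "medium": triangular(value, lo, mid, hi),
        "high": triangular(value, mid, hi, hi + 1e-9),
    }

m = fuzzify(0.7)   # e.g. a normalized packet rate
label = max(m, key=m.get)
print(m, "->", label)
```

A value of 0.7 belongs partly to "medium" (0.6) and partly to "high" (0.4) rather than falling on one side of a hard threshold, which is what lets downstream rules degrade gracefully near class boundaries.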
15. Automated Controller Placement for Software-Defined Networks to Resist DDoS Attacks (cited by 4)
Authors: Muhammad Reazul Haque, Saw Chin Tan, Zulfadzli Yusoff, Kashif Nisar, Lee Ching Kwang, Rizaludin Kaspin, Bhawani Shankar Chowdhry, Rajkumar Buyya, Satya Prasad Majumder, Manoj Gupta, Shuaib Memon. 《Computers, Materials & Continua》 (SCIE, EI), 2021, No. 9, pp. 3147-3165 (19 pages)
In software-defined networks (SDNs), controller placement is a critical factor in the design and planning for the future Internet of Things (IoT), telecommunication, and satellite communication systems. Existing research has concentrated largely on factors such as reliability, latency, controller capacity, propagation delay, and energy consumption. However, SDNs are vulnerable to distributed denial of service (DDoS) attacks that interfere with legitimate use of the network. The ever-increasing frequency of DDoS attacks has made it necessary to consider them in network design, especially in critical applications such as military, health care, and financial services networks requiring high availability. We propose a mathematical model for planning the deployment of SDN smart backup controllers (SBCs) to preserve service in the presence of DDoS attacks. Given a number of input parameters, our model has two distinct capabilities. First, it determines the optimal number of primary controllers to place at specific locations or nodes under normal operating conditions. Second, it recommends an optimal number of smart backup controllers for use with different levels of DDoS attacks. The goal of the model is to improve resistance to DDoS attacks while optimizing the overall cost based on the parameters. Our simulated results demonstrate that the model is useful in planning for SDN reliability in the presence of DDoS attacks while managing the overall cost.
Keywords: SDN; automated controller placement; SBC; ILP; DDoS attack
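The paper formulates controller placement as an ILP. As a toy stand-in, the sketch below brute-forces only the latency-driven placement subproblem on an invented 5-node topology (no DDoS or backup-controller modeling): pick controller nodes so the worst switch-to-nearest-controller delay is minimized.

```python
from itertools import combinations

# Toy topology as symmetric link delays in ms (all values invented).
INF = float("inf")
edges = {(0, 1): 2, (1, 2): 3, (2, 3): 2, (3, 4): 4, (0, 4): 10, (1, 3): 6}
n = 5
d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
for (i, j), w in edges.items():
    d[i][j] = d[j][i] = w
# Floyd-Warshall: all-pairs shortest propagation delays.
for k in range(n):
    for i in range(n):
        for j in range(n):
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

def best_placement(num_controllers):
    """Exhaustively choose controller nodes minimizing the worst-case
    switch-to-nearest-controller delay."""
    return min(
        combinations(range(n), num_controllers),
        key=lambda C: max(min(d[s][c] for c in C) for s in range(n)),
    )

placement = best_placement(2)
worst = max(min(d[s][c] for c in placement) for s in range(n))
print(placement, "worst-case delay:", worst, "ms")
```

Brute force is exponential in the number of nodes, which is precisely why the paper resorts to an ILP solver once cost, capacity, and attack-level constraints are added.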
16. Deep Learning Based Classification of Wrist Cracks from X-ray Imaging (cited by 2)
Authors: Jahangir Jabbar, Muzammil Hussain, Hassaan Malik, Abdullah Gani, Ali Haider Khan, Muhammad Shiraz. 《Computers, Materials & Continua》 (SCIE, EI), 2022, No. 10, pp. 1827-1844 (18 pages)
Wrist cracks are the most common sort of cracks, with an excessive occurrence rate. For the routine detection of wrist cracks, conventional radiography (X-ray medical imaging) is used, but crack depiction periodically presents issues. Wrist cracks often appear in the bones of the human wrist due to accidental injuries such as slipping. Indeed, many hospitals lack experienced clinicians to diagnose wrist cracks. Therefore, an automated system is required to reduce the burden on clinicians and identify cracks. In this study, we have designed a novel residual network-based convolutional neural network (CNN) for wrist crack detection. For the classification of wrist-crack medical imaging, the diagnostic accuracy of the RN-21CNN model is compared with four well-known transfer learning (TL) models, Inception V3, VGG16, ResNet-50, and VGG19, to assist the medical imaging technologist in identifying cracks that occur due to wrist fractures. The RN-21CNN model achieved an accuracy of 0.97, which is much better than its competitor approaches. The results reveal that a correctly generalized computer-aided recognition system, precisely designed to assist clinicians, would limit the number of incorrect diagnoses and also save a lot of time.
Keywords: Wrist cracks; fracture; deep learning; X-rays; CNN
17. Classification of Electrocardiogram Signals for Arrhythmia Detection Using Convolutional Neural Network (cited by 1)
Authors: Muhammad Aleem Raza, Muhammad Anwar, Kashif Nisar, Ag. Asri Ag. Ibrahim, Usman Ahmed Raza, Sadiq Ali Khan, Fahad Ahmad. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 12, pp. 3817-3834 (18 pages)
With the help of computer-aided diagnostic systems, cardiovascular diseases can be identified in a timely manner to minimize the mortality rate of patients suffering from cardiac disease. However, the early diagnosis of cardiac arrhythmia is one of the most challenging tasks, and the manual analysis of electrocardiogram (ECG) data with the help of the Holter monitor is demanding. Currently, the Convolutional Neural Network (CNN) is receiving considerable attention from researchers for automatically identifying ECG signals. This paper proposes a 9-layer CNN model to classify ECG signals into five primary categories according to the standards of the American National Standards Institute (ANSI) and the Association for the Advancement of Medical Instrumentation (AAMI). The Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia dataset is used for the experiment. The proposed model outperformed previous models in terms of accuracy and achieved a sensitivity of 99.0% and a positive predictivity of 99.2% in the detection of Ventricular Ectopic Beats (VEB). Moreover, it also achieved a sensitivity of 99.0% and a positive predictivity of 99.2% for the detection of Supraventricular Ectopic Beats (SVEB). The overall accuracy of the proposed model is 99.68%.
Keywords: arrhythmia, ECG signal, deep learning, convolutional neural network, PhysioNet, MIT-BIH arrhythmia database
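The sensitivity and positive-predictivity figures quoted in the abstract follow from the standard confusion-matrix definitions. A small sketch is below; the beat counts are hypothetical, chosen only to yield numbers of the same magnitude as those reported:

```python
def sensitivity(tp, fn):
    # fraction of true beats of a class that the model detects: TP / (TP + FN)
    return tp / (tp + fn)

def positive_predictivity(tp, fp):
    # fraction of the model's detections that are correct: TP / (TP + FP)
    return tp / (tp + fp)

# hypothetical VEB confusion-matrix counts, not from the paper
tp, fp, fn = 990, 8, 10
se = sensitivity(tp, fn)             # 990 / 1000 = 0.99
ppv = positive_predictivity(tp, fp)  # 990 / 998  ≈ 0.992
```

Reporting both metrics matters for arrhythmia detection: sensitivity penalizes missed ectopic beats, while positive predictivity penalizes false alarms.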
Optimizing Optical Attocells Positioning of Indoor Visible Light Communication System (Cited: 1)
18
Authors: Mohammed S. M. Gismalla, Asrul I. Azmi, Mohd R. Salim, Farabi Iqbal, Mohammad F. L. Abdullah, Mosab Hamdan, Muzaffar Hamzah, Abu Sahmah M. Supa’at. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 3607-3625 (19 pages)
Visible light communication (VLC), a prominent emerging solution that complements radio frequency (RF) technology, exhibits the potential to meet the demands of fifth-generation (5G) and beyond technologies. The random movement of mobile terminals in an indoor environment is a challenge for VLC systems. The model of optical attocells plays a critical role in the uniform distribution and quality of communication links in terms of received power and signal-to-noise ratio (SNR). As such, the positions of the optical attocells were optimized in this study with a developed try-and-error (TE) algorithm, and the optimized attocells were examined and compared with previous models. This approach increased the minimum received power from −1.29 to −0.225 dBm and enhanced the SNR performance by 2.06 dB. The bit error rate (BER) was reduced to 4.42×10⁻⁸ and 6.63×10⁻¹⁴ by utilizing the OOK-NRZ and BPSK modulation techniques, respectively. The optimized attocell positions also displayed a more uniform distribution, as both the received power and SNR distributions improved by 0.45 and 0.026, respectively. As the results of the proposed model are optimal, it is suitable for standard office and room applications.
Keywords: visible light communication (VLC), optical attocell, received power, signal-to-noise ratio (SNR), bit error rate (BER), coefficient of variation (CV)
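Received power at a given point under a VLC attocell is typically computed from the Lambertian line-of-sight channel gain. The sketch below uses the generic textbook model; the parameter values (LED semi-angle, detector area, transmit power) are hypothetical and not the paper's configuration:

```python
import math

def lambertian_order(semi_angle_deg):
    # Lambertian emission order m from the LED's half-power semi-angle
    return -math.log(2) / math.log(math.cos(math.radians(semi_angle_deg)))

def received_power_dbm(pt_w, d_m, irr_deg, inc_deg,
                       area_m2=1e-4, semi_angle_deg=60.0):
    # line-of-sight channel gain of a Lambertian source:
    # H = (m+1) A / (2 pi d^2) * cos^m(irradiance angle) * cos(incidence angle)
    m = lambertian_order(semi_angle_deg)
    h = ((m + 1) * area_m2 / (2 * math.pi * d_m ** 2)
         * math.cos(math.radians(irr_deg)) ** m
         * math.cos(math.radians(inc_deg)))
    return 10 * math.log10(pt_w * h / 1e-3)  # convert W to dBm

# receiver directly below a 1 W LED at 2 m (both angles zero)
p = received_power_dbm(1.0, 2.0, 0.0, 0.0)
```

Sweeping such a function over a room grid is how the received-power and SNR uniformity of candidate attocell layouts can be compared.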
Efficient Resource Allocation Algorithm in Uplink OFDM-Based Cognitive Radio Networks (Cited: 1)
19
Authors: Omar Abdulghafoor, Musbah Shaat, Ibraheem Shayea, Ahmad Hamood, Abdelzahir Abdelmaboud, Ashraf Osman Ibrahim, Fadhil Mukhlif, Herish Badal, Norafida Ithnin, Ali Khadim Lwas. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 3045-3064 (20 pages)
The computational complexity of resource allocation processes in cognitive radio networks (CRNs) is a major issue to be managed. Furthermore, the complexity of the optimal algorithm for resource allocation in CRNs makes it unsuitable for real-world applications, where both cognitive users (CRs) and primary users (PUs) exist in the same geographical area. Hence, this work offers a price-based power allocation algorithm to reduce computational complexity in uplink scenarios while limiting the interference to PUs to an allowable threshold. Compared to other frameworks proposed in the literature, this paper proposes a two-step approach to reduce the complexity of the proposed mathematical model. In the first step, the subcarriers are assigned to the users of the CRN; in the second stage, a cost function that includes a pricing scheme provides a better power control algorithm with improved reliability. The main contributions of this paper are to lessen the complexity of the proposed algorithm and to offer flexibility in controlling the interference produced to the users of the primary networks, which is achieved by including a pricing function in the proposed cost function. Finally, the performance of the proposed power and subcarrier allocation algorithm is confirmed for orthogonal frequency-division multiplexing (OFDM). Simulation results show that the proposed algorithm outperforms other algorithms, with a lower complexity of O(NM) + O(N log(N)).
Keywords: cognitive radio, resource allocation, OFDM, pricing
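The effect of a pricing term on power control can be illustrated with a toy best-response iteration: each user maximizes ln(1 + SINR) minus a linear price on its transmit power, which has a closed-form maximizer. The gains, noise level, and price below are hypothetical, and this is a generic pricing-game sketch rather than the paper's exact algorithm:

```python
def best_response(p, g, noise, price):
    # maximizer of ln(1 + g[i][i]*p_i / I_i) - price * p_i  is
    # p_i = max(0, 1/price - I_i / g[i][i]),  where I_i is interference + noise
    n = len(p)
    updated = []
    for i in range(n):
        interference = noise + sum(g[j][i] * p[j] for j in range(n) if j != i)
        updated.append(max(0.0, 1.0 / price - interference / g[i][i]))
    return updated

g = [[1.0, 0.1], [0.1, 1.0]]  # g[i][j]: hypothetical gain from tx i to rx j
p = [0.0, 0.0]
for _ in range(20):           # iterate best responses until powers settle
    p_next = best_response(p, g, noise=0.01, price=2.0)
    if max(abs(a - b) for a, b in zip(p_next, p)) < 1e-9:
        break
    p = p_next
# p converges to the symmetric fixed point 0.49 / 1.1 ≈ 0.4455
```

Raising the price drives the equilibrium powers down, which is the lever such schemes use to keep the aggregate interference at the primary users below a threshold.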
Efficient Power Control for UAV Based on Trajectory and Game Theory
20
Authors: Fadhil Mukhlif, Ashraf Osman Ibrahim, Norafida Ithnin, Roobaea Alroobaea, Majed Alsafyani. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 5589-5606 (18 pages)
Because network space is becoming more limited, the implementation of ultra-dense networks (UDNs) has the potential to enhance not only network coverage but also network throughput. Unmanned Aerial Vehicle (UAV) communications have recently garnered much attention because they are extremely versatile and can be applied to a wide variety of contexts and purposes. In this article, a cognitive UAV is proposed as a solution for the wireless nodes of Internet of Things (IoT) ground terminals. In the IoT system, the UAV is utilised not only to determine how resources should be distributed but also to provide power to the wireless nodes. The quality of service (QoS) offered by a cognitive node is interpreted as a price-based utility function and formulated as a non-cooperative game in order to maximise each customer's net utility function. An energy-efficient non-cooperative game-theoretic power allocation with a pricing strategy, abbreviated EE-NGPAP, is implemented in this study with two trajectories, spiral and sigmoidal, to facilitate effective power management in IoT wireless nodes. It is also demonstrated, theoretically and through simulations, that the Nash equilibrium exists and is unique. Simulations show that the proposed energy harvesting approach significantly reduces the average transmitted power, in line with the objectives of 5G networks. To converge to the Nash equilibrium (NE), the proposed method needs only about 4 iterations, which makes it easier to use in real-world settings, where conditions are not always the same.
Keywords: UAV, spiral & sigmoid trajectory, drones, IoT, game theory, energy efficiency, 6G
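The spiral trajectory along which such a UAV sweeps over the ground nodes can be sketched as an Archimedean spiral sampled at constant altitude. All parameters below (number of turns, maximum radius, altitude) are hypothetical; the paper's exact trajectory definition may differ:

```python
import math

def spiral_waypoints(n, turns=3.0, r_max=50.0, altitude=100.0):
    # Archimedean spiral from the centre outwards, sampled at n waypoints
    pts = []
    for k in range(n):
        t = k / (n - 1)                  # progress along the path, 0..1
        theta = 2 * math.pi * turns * t  # accumulated angle
        r = r_max * t                    # radius grows linearly with progress
        pts.append((r * math.cos(theta), r * math.sin(theta), altitude))
    return pts

path = spiral_waypoints(50)
# starts at the centre (0, 0, 100) and ends on the 50 m circle
```

A spiral of this kind trades path length for coverage: the UAV passes close to nodes at all radii, which is what makes per-waypoint power allocation games like EE-NGPAP meaningful along the route.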