In mobile computing environments, most IoT devices connected to networks experience variable error rates and possess limited bandwidth. The conventional method of retransmitting information lost during transmission, commonly used in data transmission protocols, increases transmission delay and consumes excessive bandwidth. To overcome this issue, forward error correction techniques, e.g., Random Linear Network Coding (RLNC), can be used in data transmission. The primary challenge in RLNC-based methodologies is that they sustain a constant coding ratio throughout data transmission, leading to notable bandwidth usage and transmission delay under dynamic network conditions. Therefore, this study proposes a new block-based RLNC strategy known as Adjustable RLNC (ARLNC), which dynamically adjusts the coding ratio and transmission window at runtime based on the network error rate estimated from receiver feedback. The calculations in this approach are performed over a Galois field of order 256. Furthermore, we assessed ARLNC's performance under various error models, such as Gilbert-Elliott, exponential, and constant error rates, and compared it with standard RLNC. The results show that dynamically adjusting the coding ratio and transmission window size based on network conditions significantly enhances network throughput and reduces total transmission delay in most scenarios. In contrast to the conventional RLNC method with a fixed coding ratio, the presented approach achieves a 73% decrease in transmission delay and a fourfold increase in throughput. However, in dynamic computational environments, ARLNC generally incurs higher computational costs than standard RLNC, but it excels in high-performance networks.
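The abstract gives no implementation, but the core encoding step of block-based RLNC over GF(256) can be sketched as follows. This is a minimal illustration, not the authors' ARLNC: the reduction polynomial (0x11B, as used in AES) and the packet layout are assumptions, since the paper only states that the field has order 256.

```python
import random

# GF(2^8) multiply, reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf256_mul(a, b):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def rlnc_encode(packets, n_coded, rng=None):
    """Emit n_coded random linear combinations of equal-length source packets.
    A receiver that collects len(packets) combinations with linearly
    independent coefficient vectors can decode by Gaussian elimination."""
    rng = rng or random.Random(0)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randrange(256) for _ in packets]
        payload = [0] * len(packets[0])
        for c, pkt in zip(coeffs, packets):
            for i, byte in enumerate(pkt):
                payload[i] ^= gf256_mul(c, byte)  # GF(256) scale-and-add
        coded.append((coeffs, payload))
    return coded
```

ARLNC's contribution would then amount to varying `n_coded` (the coding ratio) and the block window according to the feedback-estimated error rate, rather than keeping them fixed.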
Despite the widespread use of decision trees (DT) across various applications, their performance tends to suffer on imbalanced datasets, where certain classes significantly outnumber others. Cost-sensitive learning is one strategy for this problem, and several cost-sensitive DT algorithms have been proposed to date. However, existing algorithms are heuristic: they greedily select either a better splitting point or feature node, leading to local optima at tree nodes and ignoring the cost of the whole tree. In addition, determining the costs is difficult and often requires domain expertise. This study proposes a DT for imbalanced data, called Swarm-based Cost-sensitive DT (SCDT), using the cost-sensitive learning strategy and an enhanced swarm-based algorithm. The DT is encoded using a hybrid individual representation, and a hybrid artificial bee colony approach is designed to optimize rules, considering the specified costs in an F-measure-based fitness function. Experimental results on benchmark datasets show that SCDT achieved the highest performance on most datasets compared with state-of-the-art DT algorithms. Moreover, SCDT also excels in other critical performance metrics, such as recall, precision, F1-score, and AUC, with notable average values of 83%, 87.3%, 85%, and 80.7%, respectively.
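An F-measure-based fitness of the kind that guides such a search can be written in a few lines. The paper's exact fitness may additionally weight the specified misclassification costs, so this is a hedged sketch of the plain F-measure only; the function name is illustrative.

```python
def f_measure_fitness(tp, fp, fn, beta=1.0):
    """F-beta score from confusion counts; beta > 1 weights recall higher,
    which favors the minority class on imbalanced data."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

A swarm individual (an encoded tree or rule set) would be scored by classifying a validation set, counting tp/fp/fn for the minority class, and maximizing this value.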
Prediction of an individual's age is possible using the pattern of DNA methylation, which changes with age. In this paper, an age prediction approach for multivariate regression problems using DNA methylation data is developed. A convolutional neural network (CNN)-based model optimised by a genetic algorithm (GA) is addressed. The paper contributes to age prediction as a regression problem by using a union of two CNNs that exchange knowledge between them. This restarts the training process from a potentially higher-quality point in different iterations and consequently yields potentially better results at each iteration. The proposed method, called cooperative deep neural network (Co-DeepNet), is tested on two types of age prediction problems. Sixteen datasets containing 1899 healthy blood samples and nine datasets containing 2395 diseased blood samples are employed to examine the method's efficiency. For the healthy data, the mean absolute deviation (MAD) is 1.49 and 3.61 years for training and testing data, respectively. The diseased blood data show MAD results of 3.81 and 5.43 years for training and testing data, respectively. The results of Co-DeepNet are compared with six methods proposed in previous studies and with a single CNN, using four prediction accuracy measurements (R^(2), MAD, MSE, and RMSE). The effectiveness of Co-DeepNet and the superiority of its results are demonstrated through statistical analysis.
In the present study, the thermal, mechanical, and biological properties of xAg/Ti-30Ta (x = 0, 0.41, 0.82, and 2.48 at%) shape memory alloys (SMAs) were investigated. The study was conducted using optical and scanning electron microscopy (SEM), X-ray diffractometry (XRD), compression tests, and shape memory testing. The xAg/Ti-Ta was made using a powder metallurgy technique and a microwave-sintering process. The results revealed that the addition of Ag has a significant effect on pore size and shape: the smallest pore size of 11 μm was found with the addition of 0.41 at% Ag, along with a relative density of 72%. The fracture stress and strain increased with the addition of Ag, reaching their maximum values around 0.41 at% Ag; this composition therefore showed the highest stress and strain at fracture. Moreover, 0.82 Ag/Ti-Ta shows better corrosion resistance and biocompatibility than the other compositions, exhibiting almost the same behaviour as pure Ti and Ti-6Al-4V alloys, and can be recommended for its promising potential in biomaterial applications.
Hybrid metaheuristic algorithms play a prominent role in improving algorithms' searchability by combining each algorithm's advantages and minimizing any substantial shortcomings. The Quantum-based Avian Navigation Optimizer Algorithm (QANA) is a recent metaheuristic algorithm inspired by the navigation behavior of migratory birds. Different experimental results show that QANA is a competitive and applicable algorithm in different optimization fields. However, it suffers from shortcomings such as low solution quality and premature convergence when tackling some complex problems. Therefore, instead of proposing a new algorithm to solve these weaknesses, we use the advantages of the bonobo optimizer to improve the global search capability and mitigate the premature convergence of the original QANA. The effectiveness of the proposed Hybrid Quantum-based Avian Navigation Optimizer Algorithm (HQANA) is assessed on 29 test functions of the CEC 2018 benchmark test suite with different dimensions (30, 50, and 100). The results are statistically investigated by the Friedman test and compared with the results of eight well-known optimization algorithms: PSO, KH, GWO, WOA, CSA, HOA, BO, and QANA. Ultimately, five constrained engineering optimization problems from the latest test suite, CEC 2020, are used to assess the applicability of HQANA to complex real-world engineering optimization problems. The experimental and statistical findings show that the proposed HQANA algorithm is superior to the comparative algorithms.
We investigate the existence and nonexistence of positive solutions for a system of nonlinear Riemann-Liouville fractional differential equations with coupled integral boundary conditions which contain some positive constants.
The development of bilingual computer science curricula at HYIT is introduced. The key success factors of bilingual education are proposed, and the improvement of bilingual teachers' abilities, textbook construction, and the evaluation of bilingual education are discussed in detail.
The Farmland Fertility Algorithm (FFA) is a recent nature-inspired metaheuristic algorithm for solving optimization problems. Nevertheless, FFA has some drawbacks: slow convergence and an imbalance between diversification (exploration) and intensification (exploitation). An adaptive mechanism in an algorithm can achieve a proper balance between exploration and exploitation, and the literature shows that chaotic maps can be incorporated into metaheuristic algorithms to eliminate such drawbacks. Therefore, in this paper, twelve chaotic maps are embedded into FFA to find the best number of prospectors and increase the exploitation of the most promising solutions. Furthermore, a Quasi-Oppositional-Based Learning (QOBL) mechanism enhances the exploration speed and convergence rate; we name the resulting algorithm CQFFA. These improvements target the weaknesses of FFA, which can fall into local optima on some complex problems and lacks sufficient capability in its intensification component. The results obtained show that the proposed CQFFA model is significantly improved. It is applied to twenty-three widely used test functions and compared with similar state-of-the-art algorithms statistically and visually, and it is also evaluated on six real-world engineering problems. The experimental results show that the CQFFA algorithm outperforms its competitor algorithms.
This research paper describes the design and implementation of the Consultative Committee for Space Data Systems (CCSDS) standards [1] for the Space Data Link Layer Protocol (SDLP). The primary focus is the telecommand (TC) part of the standard. The standard was implemented as DLL functions using the C++ programming language. The second objective of this paper was to use the DLL functions with the OMNeT++ simulation environment to create a simulator in order to analyze the mean end-to-end packet delay, the maximum achievable application-layer throughput for a given fixed link capacity, and the normalized protocol overhead, defined as the total number of bytes transmitted on the link in a given period of time (e.g., per second) divided by the number of bytes of application data received at the application-layer model data sink. In addition, the DLL was also integrated with the Ground Support Equipment Operating System (GSEOS), a software system for space instruments and small spacecraft especially suited for low-budget missions. The SDLP is designed for rapid test-system design and high flexibility in meeting changing telemetry and command requirements. GSEOS can be seamlessly moved from EM/FM development (bench testing) to flight operations. It features the Python programming language as a configuration/scripting tool and can easily be extended to accommodate custom hardware interfaces. This paper also shows the results of the simulations and their analysis.
This article analyzes a set of traffic-filtering rules, which form a multi-dimensional structure where each dimension is a networking field or an action field, and measures the cost of traffic-filtering rules in computer networks, making it possible to distinguish between the definition of a rule and the inspection of packet fields. Furthermore, the article considers a hierarchical model for optimizing traffic filtering that reduces the overhead of traffic-filtering rules while preserving the semantic integrity of the original rule set. The hierarchical structure of the design and optimization of traffic filtering is investigated, and a hierarchical approach is developed to reduce the set of traffic-filtering rules. An optimal-solution algorithm and a random filter-search algorithm are analyzed, which find the shortest path through a set of traffic-filtering rules. Finally, the article evaluates the effectiveness of the traffic-filtering acceleration process proposed by HAOTF.
For the series manufacture of pressure sensors, a stage of technological testing is performed to determine the manufacturing accuracy of the sensors. The technological test plan involves testing the sensors at certain fixed temperature and pressure points given in a table. From the test results, the coefficients of the mathematical model of each sensor's transformation function are determined, along with the conformance of the manufactured sensors to the claimed accuracy class. The cost of pressure sensors largely depends on the cost of this step, which is determined by the complexity of the transformation-function model used. Contemporary work on the choice of transformation functions for smart pressure sensors is analyzed, and a new indicator of the complexity of a sensor's transformation-function model is proposed. The use of the complexity indicator is described in detail, and an example is given. The article sets and solves the task of reducing the cost of testing commercially available sensors by reducing the number of temperature points without compromising the measurement accuracy of the sensors.
The Whale Optimization Algorithm (WOA) is a swarm intelligence metaheuristic inspired by the bubble-net hunting tactic of humpback whales. In spite of its popularity due to its simplicity, ease of implementation, and limited number of parameters, WOA's search strategy can adversely affect convergence and the equilibrium between exploration and exploitation on complex problems. To address this limitation, we propose a new algorithm called the Multi-trial Vector-based Whale Optimization Algorithm (MTV-WOA), which incorporates a Balancing Strategy-based Trial-vector Producer (BS_TVP), a Local Strategy-based Trial-vector Producer (LS_TVP), and a Global Strategy-based Trial-vector Producer (GS_TVP) to address real-world optimization problems of varied degrees of difficulty. MTV-WOA has the potential to enhance exploitation and exploration, reduce the probability of becoming stranded in local optima, and preserve the equilibrium between exploration and exploitation. To evaluate the proposed algorithm's performance, it is compared to eight metaheuristic algorithms on the CEC 2018 test functions, as well as to well-established, recent, and WOA-variant algorithms. The experimental results demonstrate that MTV-WOA surpasses the comparative algorithms in solution accuracy and convergence rate. Additionally, we conducted the Friedman test to assess the results statistically and observed that MTV-WOA significantly outperforms the comparative algorithms. Finally, we solved five engineering design problems to demonstrate the practicality of MTV-WOA; the results indicate that it can efficiently address the complexities of engineering challenges and provide solutions superior to those of the other algorithms.
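For context, the baseline WOA search strategy that MTV-WOA replaces with three trial-vector producers can be sketched as a single position update. This is the canonical WOA mechanic (encircling, random search, and spiral bubble-net moves), not the authors' MTV-WOA; parameter names follow common descriptions of WOA.

```python
import math
import random

def woa_step(positions, best, a, rng=None, b=1.0):
    """One iteration of the canonical WOA update. 'a' decays from 2 to 0
    over iterations, shifting the swarm from exploration to exploitation."""
    rng = rng or random.Random(0)
    new_positions = []
    for x in positions:
        A = 2 * a * rng.random() - a          # step scale, shrinks with 'a'
        C = 2 * rng.random()
        if rng.random() < 0.5:
            # encircle the best whale if |A| < 1, otherwise explore a random one
            target = best if abs(A) < 1 else rng.choice(positions)
            nx = [t - A * abs(C * t - xi) for t, xi in zip(target, x)]
        else:
            # logarithmic spiral around the best solution (bubble-net move)
            l = rng.uniform(-1, 1)
            nx = [abs(t - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + t
                  for t, xi in zip(best, x)]
        new_positions.append(nx)
    return new_positions
```

Because every whale moves relative to a single strategy, the swarm can stall near a local optimum; MTV-WOA's BS_TVP/LS_TVP/GS_TVP producers diversify exactly this step.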
Software-Defined Networking (SDN) represents a significant paradigm shift in network architecture, separating network logic from the underlying forwarding devices to enhance flexibility and centralize deployment. Concurrently, the Internet of Things (IoT) connects numerous devices to the Internet, enabling autonomous interactions with minimal human intervention. However, implementing and managing an SDN-IoT system is inherently complex, particularly for those with limited resources, as the dynamic and distributed nature of IoT infrastructures creates security and privacy challenges during SDN integration. The findings of this study underscore the primary security and privacy challenges across the application, control, and data planes. A comprehensive review evaluates the root causes of these challenges and the defense techniques employed in prior works to establish sufficient secrecy and privacy protection. Recent investigations have explored cutting-edge methods, such as leveraging blockchain for transaction recording to enhance security and privacy, along with applying machine learning and deep learning approaches to identify and mitigate the impacts of Denial of Service (DoS) and Distributed DoS (DDoS) attacks. Moreover, the analysis indicates that encryption and hashing techniques are prevalent in the data plane, whereas access control and certificate authorization are prominently considered in the control plane, and authentication is commonly employed within the application plane. Additionally, this paper outlines future directions, offering insights into potential strategies and technological advancements aimed at fostering a more secure and privacy-conscious SDN-based IoT ecosystem.
In this paper, the definition and properties of the logistic map are considered, along with its orbit and bifurcation diagrams, Lyapunov exponent, and histogram. In order to expand the chaotic region of the logistic map and make it suitable for cryptography, two modified versions are proposed. The First Modification of the Logistic map (FML) uses vertical symmetry and a transformation to the right; the Second Modification of the Logistic map (SML) uses vertical and horizontal symmetry and a transformation to the right. FML is less sensitive to the initial condition than the others, while the SML map is more sensitive. The total chaotic range of SML is the largest: the chaotic range of the SML map is fivefold that of the logistic map, while the histograms of the logistic map and the SML map are identical. This property gives more key space for cryptographic purposes.
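The chaotic range referred to above is conventionally identified by a positive Lyapunov exponent. The exact FML/SML transformations are only described qualitatively here, so the sketch below covers the base logistic map x -> r*x*(1-x); the same estimator would apply to the modified maps.

```python
import math

def logistic(r, x):
    return r * x * (1 - x)

def lyapunov_logistic(r, x0=0.3, n=2000, burn=200):
    """Estimate the Lyapunov exponent by averaging log|f'(x)| = log|r*(1-2x)|
    along the orbit; a positive value indicates chaos."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = logistic(r, x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard against log(0)
        x = logistic(r, x)
    return acc / n
```

At r = 4 the exponent is known analytically to be ln 2 ≈ 0.693, which makes a convenient sanity check; for r in the periodic regime the estimate is negative.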
Accurate estimation of the Retransmission TimeOut (RTO) in Content-Centric Networking (CCN) is crucial for efficient rate control in end nodes and effective interface ranking in intermediate routers. Toward this end, the Jacobson algorithm, an Exponentially Weighted Moving Average (EWMA) over the Round Trip Times (RTTs) of previous packets, is a promising scheme. Assigning the lower bound of the RTO, determining how quickly the EWMA adapts to changes, and setting the multiplier of the RTT variance have the greatest impact on the accuracy of this estimator, and several evaluations have been performed to set them in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. However, the performance of this estimator in CCN has not been explored yet, despite CCN having significant architectural differences from TCP/IP networks. In this study, two new metrics for assessing the performance of RTO estimators in CCN are defined, and the performance of the Jacobson algorithm in CCN is evaluated by varying the minimum RTO, the EWMA parameters, and the multiplier of the RTT variance against different content popularity distribution gains. The obtained results are used to reconsider the Jacobson algorithm for accurate RTO estimation in CCN. Comparing the performance of the reconsidered Jacobson estimator with existing solutions shows that it can estimate the RTO simply and more accurately, without any additional information or computational overhead.
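The knobs the study varies correspond directly to the constants of the classic Jacobson estimator, standardized for TCP in RFC 6298. A minimal sketch with the conventional TCP defaults (alpha = 1/8, beta = 1/4, K = 4); the class name is illustrative:

```python
class JacobsonRtoEstimator:
    """EWMA of the RTT (SRTT) and its deviation (RTTVAR), per RFC 6298:
    RTO = max(min_rto, SRTT + K * RTTVAR). alpha, beta, K, and min_rto
    are exactly the parameters re-tuned for CCN in the study."""

    def __init__(self, alpha=1 / 8, beta=1 / 4, k=4.0, min_rto=1.0):
        self.alpha, self.beta, self.k, self.min_rto = alpha, beta, k, min_rto
        self.srtt = None

    def update(self, rtt):
        if self.srtt is None:
            # first sample initializes both averages
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar \
                + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return max(self.min_rto, self.srtt + self.k * self.rttvar)
```

Lowering `min_rto` makes the estimator more aggressive; raising `beta` makes it adapt to RTT changes faster at the cost of jitter, which is the trade-off the evaluation explores.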
The efficiency and performance of a Distributed Database Management System (DDBMS) is mainly determined by its design and by the network communication cost between sites. Fragmentation and distribution of data are the major design issues of a DDBMS. In this paper, we propose a new approach that integrates both fragmentation and data allocation in one strategy, based on a high-performance clustering technique and transaction-processing cost functions. This approach efficiently and effectively achieves the objectives of data fragmentation, data allocation, and network-site clustering. It splits the data relations into pairwise disjoint fragments and determines whether each fragment should be allocated to a network site, allocating only where the benefit outweighs the cost according to the clustering technique. To show the performance of the proposed approach, we performed experimental studies on a real database application under different network connectivities. The obtained results achieved minimum total data transaction costs between sites, reduced the amount of redundant data accessed between these sites, and improved overall DDBMS performance.
The capability of a system to fulfill its mission promptly in the presence of attacks, failures, or accidents is one of the qualitative definitions of survivability. In this paper, we propose a model for survivability quantification which is applicable to networks carrying complex traffic flows. Complex network traffic is considered as general multi-rate, heterogeneous traffic, where the individual bandwidth demands may aggregate in complex, nonlinear ways. Blocking probability is the chosen measure for survivability analysis. We study an arbitrary topology and some other known topologies for the network. Independent and dependent failure scenarios as well as deterministic and random traffic models are investigated. Finally, we provide survivability evaluation results for different network configurations. The results show that by using about 50% of the link capacity in networks with a relatively high number of links, the blocking probability remains near zero in the case of a limited number of failures.
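For a single-rate link, the blocking probability used here as the survivability measure reduces to the classical Erlang-B formula. The paper's multi-rate, heterogeneous model is more general, so the recursion below is only an illustrative baseline, not the paper's method.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the numerically stable recursion:
    B(0) = 1, B(n) = a*B(n-1) / (n + a*B(n-1)), with offered load a in Erlangs."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

A failure scenario can then be evaluated by removing capacity (reducing `servers`) on the affected links and recomputing the blocking probability of the surviving routes.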
Biometric-based authentication systems have attracted more attention than traditional authentication techniques such as passwords in the last two decades. Multiple biometrics, such as the fingerprint, palm, iris, palm vein, and finger vein, have been introduced. One of the challenges in biometrics is physical injury, and the finger vein is among the biometrics least exposed to physical damage. Numerous methods have been proposed for authentication using this biometric, but they suffer from weaknesses such as high computational complexity and low identification rates. This paper presents a novel method of identity identification based on the scattering wavelet, which extracts image features with Gabor wavelet filters in a structure similar to a convolutional neural network. What distinguishes this algorithm from other popular feature extraction methods, such as deep learning, filter-based, and statistical methods, is its very high accuracy in differentiating images that are similar but belong to different classes: even when the image is subject to serious damage such as noise or changes in angle or pixel location, this descriptor still generates feature vectors in a way that minimizes classifier error, which improves classification and authentication. The proposed method has been evaluated using two databases, Finger Vein USM (FV-USM) and Homologous Multimodal biometrics Traits (SDUMLA-HMT). In addition to having reasonable computational complexity, it recorded excellent identification rates under noise, rotation, and transmission challenges: at best, a 98.2% identification rate for the SDUMLA-HMT database and a 96.1% identification rate for the FV-USM database.
In current mobile IPv6 (MIPv6) systems for System Architecture Evolution (SAE) networks, such as the 4th generation (4G) mobile network, data delivery is performed based on a centralized mobility anchor between the Evolved Node B (eNB) and the Serving Gateway (S-GW), and also between the S-GW and the Packet Data Network Gateway (P-GW). However, the existing network has many obstacles, including suboptimal data routing, injection of unwanted data traffic into the mobile core network, and the required capital expenditure. To handle these challenges, we describe a flat mobile core network scheme, denoted F-EPC, based on the SAE mobile network. In the proposed scheme, the P-GW and S-GW gateways are combined into one node named the Cellular Gateway (C-GW). Further, we propose distributing and increasing the number of C-GWs in the mobile core network, with the Mobility Management Entity (MME) functioning as a centralized mobility anchor and allocating the IP address for the User Entity (UE). The results of a simulation analysis show that the proposed scheme provides superior performance compared with the current 4G architecture in terms of total transmission delay, handover delay, and the initial attach procedure.
The analysis of real social, biological, and technological networks has attracted a lot of attention as technological advances have given us a wealth of empirical data. For analysis and investigation, time-varying graphs are used to understand relationships, contact durations, and repeated occurrences of contact, which remain underexplored in intermittently connected networks. By extending the same concepts to intermittent networks, the efficiency of routing protocols can be improved. This paper discusses a temporal characterization algorithm; such characterization can help in accurately understanding dynamic behaviors and taking appropriate routing decisions. The present research therefore explores the possibilities of using time-varying network analysis to design an adaptive routing protocol based on a temporal distance metric. The adaptive routing protocol is implemented using the ONE simulator and is compared with Epidemic and PRoPHET in terms of delivery ratio, overhead, and the number of dropped messages. The results reveal that adaptive routing performs better than Epidemic and PRoPHET on both real and synthetic datasets.
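A temporal distance metric of the kind such a protocol can rank next hops by is the earliest-arrival time over a time-stamped contact list. The sketch below is illustrative (the function and contact format are not from the paper): unlike a static shortest path, a hop may only use a contact whose timestamp is at or after the message's current arrival time.

```python
def temporal_distance(contacts, src, dst, t_start=0):
    """Earliest arrival time from src to dst over contacts [(u, v, t), ...].
    Contacts are treated as bidirectional and processed in time order, so a
    path 1->2 at t=1 followed by 0->1 at t=2 cannot carry a message from 0."""
    arrival = {src: t_start}
    for u, v, t in sorted(contacts, key=lambda c: c[2]):   # time order
        for a, b in ((u, v), (v, u)):
            # relax only if 'a' already holds the message by time t
            if arrival.get(a, float("inf")) <= t and t < arrival.get(b, float("inf")):
                arrival[b] = t
    return arrival.get(dst, float("inf"))
```

An adaptive protocol can recompute this over a sliding window of recent contacts and forward copies to neighbours whose temporal distance to the destination is smallest.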
Funding: Supported by the Universiti Kebangsaan Malaysia (DIP-2016-024).
Abstract: Prediction of an individual's age is possible using the changing pattern of DNA methylation with age. In this paper, an age prediction approach that solves multivariate regression problems using DNA methylation data is developed. A convolutional neural network (CNN)-based model optimised by a genetic algorithm (GA) is addressed. This paper contributes to enhancing age prediction as a regression problem by using a union of two CNNs and exchanging knowledge between them. This restarts the training process from a possibly higher-quality point in different iterations and consequently potentially yields better results at each iteration. The proposed method, called the cooperative deep neural network (Co-DeepNet), is tested on two types of age prediction problems. Sixteen datasets containing 1899 healthy blood samples and nine datasets containing 2395 diseased blood samples are employed to examine the method's efficiency. As a result, the mean absolute deviation (MAD) is 1.49 and 3.61 years for training and testing data, respectively, when the healthy data are tested. The diseased blood data show MAD results of 3.81 and 5.43 years for training and testing data, respectively. The results of the Co-DeepNet are compared with six other methods proposed in previous studies and with a single CNN, using four prediction accuracy measurements (R^2, MAD, MSE and RMSE). The effectiveness of the Co-DeepNet and the superiority of its results are proved through statistical analysis.
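The mean absolute deviation (MAD) reported above is the mean absolute error between predicted and actual ages; a minimal sketch (the function name and sample values are illustrative):

```python
def mad(predicted, actual):
    # Mean absolute deviation between predicted and chronological ages,
    # in the same units as the inputs (years).
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

print(mad([25.0, 31.5, 40.0], [24.0, 33.0, 40.5]))  # 1.0
```

Unlike MSE and RMSE, MAD penalizes all errors linearly, so a MAD of 1.49 years means predictions miss the true age by about a year and a half on average.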
Funding: Project (Q.J130000.2524.12H60) supported by the Ministry of Higher Education of Malaysia and Universiti Teknologi Malaysia.
Abstract: In the present study, the thermal, mechanical, and biological properties of xAg/Ti-30Ta (x = 0, 0.41, 0.82 and 2.48 at.%) shape memory alloys (SMAs) were investigated. The study was conducted using optical and scanning electron microscopy (SEM), X-ray diffractometry (XRD), compression tests, and shape memory testing. The xAg/Ti-Ta was made using a powder metallurgy technique and a microwave-sintering process. The results revealed that the addition of Ag has a significant effect on pore size and shape: the smallest pore size of 11 μm was found with the addition of 0.41 at.% Ag, along with a relative density of 72%. The fracture stress and strain increased with the addition of Ag, reaching their maximum values around 0.41 at.% Ag; this composition therefore showed the highest stress and strain at fracture. Moreover, 0.82 at.% Ag/Ti-Ta shows better corrosion resistance and biocompatibility than the other compositions, exhibiting almost the same behaviour as pure Ti and Ti-6Al-4V alloys, which makes it a promising candidate for biomaterial applications.
Abstract: Hybrid metaheuristic algorithms play a prominent role in improving search capability by combining each algorithm's advantages and minimizing substantial shortcomings. The Quantum-based Avian Navigation Optimizer Algorithm (QANA) is a recent metaheuristic algorithm inspired by the navigation behavior of migratory birds. Various experimental results show that QANA is a competitive and applicable algorithm in different optimization fields. However, it suffers from shortcomings such as low solution quality and premature convergence when tackling some complex problems. Therefore, instead of proposing a new algorithm to address these weaknesses, we use the advantages of the bonobo optimizer to improve the global search capability and mitigate the premature convergence of the original QANA. The effectiveness of the proposed Hybrid Quantum-based Avian Navigation Optimizer Algorithm (HQANA) is assessed on 29 test functions of the CEC 2018 benchmark test suite with dimensions 30, 50, and 100. The results are statistically investigated with the Friedman test and compared with the results of eight well-known optimization algorithms: PSO, KH, GWO, WOA, CSA, HOA, BO, and QANA. Finally, five constrained engineering optimization problems from the latest test suite, CEC 2020, are used to assess the applicability of HQANA to complex real-world engineering optimization problems. The experimental and statistical findings show that the proposed HQANA algorithm is superior to the comparative algorithms.
Abstract: We investigate the existence and nonexistence of positive solutions for a system of nonlinear Riemann-Liouville fractional differential equations with coupled integral boundary conditions that contain some positive constants.
Abstract: The development of bilingual computer science curricula at HYIT is introduced. Key success factors of bilingual education are proposed, and improving the abilities of bilingual teachers, textbook construction, and the evaluation of bilingual education are discussed in detail.
Abstract: The Farmland Fertility Algorithm (FFA) is a recent nature-inspired metaheuristic algorithm for solving optimization problems. Nevertheless, FFA has some drawbacks: slow convergence and an imbalance between diversification (exploration) and intensification (exploitation). An adaptive mechanism can give an algorithm a proper balance between exploration and exploitation, and the literature shows that chaotic maps can be incorporated into metaheuristic algorithms to eliminate these drawbacks. Therefore, in this paper, twelve chaotic maps are embedded into FFA to find the best number of prospectors and thereby increase the exploitation of the most promising solutions. Furthermore, a Quasi-Oppositional-Based Learning (QOBL) mechanism enhances the exploration speed and convergence rate, yielding the CQFFA algorithm. These improvements target the weaknesses of FFA, which can fall into local optima on some complex problems and lacks sufficient capability in its intensification component. The results show that the proposed CQFFA model is significantly improved. It is applied to twenty-three widely used test functions and compared with similar state-of-the-art algorithms statistically and visually. The CQFFA algorithm is also evaluated on six real-world engineering problems. The experimental results show that CQFFA outperforms its competitor algorithms.
Abstract: This research paper describes the design and implementation of the Consultative Committee for Space Data Systems (CCSDS) standards [1] for the Space Data Link Layer Protocol (SDLP). The primary focus is the telecommand (TC) part of the standard. The standard was implemented as DLL functions using the C++ programming language. The second objective of this paper was to use the DLL functions with the OMNeT++ simulation environment to create a simulator for analyzing the mean end-to-end packet delay, the maximum achievable application-layer throughput for a given fixed link capacity, and the normalized protocol overhead, defined as the total number of bytes transmitted on the link in a given period of time (e.g., per second) divided by the number of bytes of application data received at the application-layer model data sink. In addition, the DLL was integrated with the Ground Support Equipment Operating System (GSEOS), a software system for space instruments and small spacecraft especially suited for low-budget missions. The SDLP is designed for rapid test-system design and high flexibility for changing telemetry and command requirements. GSEOS can be seamlessly moved from EM/FM development (bench testing) to flight operations. It features the Python programming language as a configuration/scripting tool and can easily be extended to accommodate custom hardware interfaces. This paper also presents the simulation results and their analysis.
Abstract: This article analyzes a set of traffic-filtering rules represented as a multi-dimensional structure, where each dimension is a networking field or an action field, and measures the cost of traffic-filtering rules on computer networks, distinguishing between the definition of the rules and the checking of packet fields. Furthermore, the article considers a hierarchical model for optimizing traffic filtering, which reduces the overhead of filtering rules while preserving the semantic integrity of the original rule set. The hierarchical structure for the design and optimization of traffic filtering is researched, and a hierarchical approach to optimizing traffic filtering by reducing the rule set is developed. Algorithms for finding optimal solutions and for random filter search are analyzed, allowing the shortest path through a set of filtering rules to be found. Moreover, the article presents an evaluation of the effectiveness of the filtering-acceleration process proposed by HAOTF.
Abstract: For series manufacture of pressure sensors, a stage of technological testing is performed to determine the manufacturing accuracy of the sensors. The technological test plan involves testing the sensors at certain fixed temperature and pressure points given in a table. From the test results, we determine the coefficients of the mathematical model of the sensor transformation function and the conformance of the manufactured sensors to the claimed accuracy class. The cost of pressure sensors largely depends on the cost of this step and is determined by the complexity of the transformation-function model used. Contemporary work on the choice of transformation functions for smart pressure sensors is analyzed, and a new indicator of the model complexity of a sensor transformation function is proposed. Features of using the complexity indicator are shown in detail, with an example. The article sets and solves the task of reducing the cost of tests for commercially available sensors by reducing the number of temperature points, without compromising the measurement accuracy of the sensor.
Abstract: The Whale Optimization Algorithm (WOA) is a swarm intelligence metaheuristic inspired by the bubble-net hunting tactic of humpback whales. In spite of its popularity due to its simplicity, ease of implementation, and limited number of parameters, WOA's search strategy can adversely affect convergence and the equilibrium between exploration and exploitation on complex problems. To address this limitation, we propose a new algorithm called the Multi-trial Vector-based Whale Optimization Algorithm (MTV-WOA), which incorporates a Balancing Strategy-based Trial-vector Producer (BS_TVP), a Local Strategy-based Trial-vector Producer (LS_TVP), and a Global Strategy-based Trial-vector Producer (GS_TVP) to address real-world optimization problems of varying difficulty. MTV-WOA has the potential to enhance exploitation and exploration, reduce the probability of becoming stranded in local optima, and preserve the equilibrium between exploration and exploitation. To evaluate the proposed algorithm's performance, it is compared with eight metaheuristic algorithms on the CEC 2018 test functions. Moreover, MTV-WOA is compared with well-established, recent, and WOA-variant algorithms. The experimental results demonstrate that MTV-WOA surpasses the comparative algorithms in solution accuracy and convergence rate. Additionally, we conducted the Friedman test to assess the results statistically and observed that MTV-WOA significantly outperforms the comparative algorithms. Finally, we solved five engineering design problems to demonstrate the practicality of MTV-WOA. The results indicate that MTV-WOA can efficiently address the complexities of engineering challenges and provide solutions superior to those of other algorithms.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 62341208) and the Natural Science Foundation of Zhejiang Province (Grant Nos. LY23F020006 and LR23F020001). It has also been supported by Islamic Azad University (Grant No. 133713281361).
Abstract: Software-Defined Networking (SDN) represents a significant paradigm shift in network architecture, separating network logic from the underlying forwarding devices to enhance flexibility and centralize deployment. Concurrently, the Internet of Things (IoT) connects numerous devices to the Internet, enabling autonomous interactions with minimal human intervention. However, implementing and managing an SDN-IoT system is inherently complex, particularly for those with limited resources, as the dynamic and distributed nature of IoT infrastructures creates security and privacy challenges during SDN integration. The findings of this study underscore the primary security and privacy challenges across the application, control, and data planes. A comprehensive review evaluates the root causes of these challenges and the defense techniques employed in prior work to establish sufficient secrecy and privacy protection. Recent investigations have explored cutting-edge methods, such as leveraging blockchain for transaction recording to enhance security and privacy, along with applying machine learning and deep learning approaches to identify and mitigate the impacts of Denial of Service (DoS) and Distributed DoS (DDoS) attacks. Moreover, the analysis indicates that encryption and hashing techniques are prevalent in the data plane, whereas access control and certificate authorization are prominent in the control plane, and authentication is commonly employed within the application plane. Additionally, this paper outlines future directions, offering insights into potential strategies and technological advancements aimed at fostering a more secure and privacy-conscious SDN-based IoT ecosystem.
Abstract: In this paper, the definition and properties of the logistic map are considered, along with its orbit and bifurcation diagrams, Lyapunov exponent, and histogram. In order to expand the chaotic region of the logistic map and make it suitable for cryptography, two modified versions of the logistic map are proposed. The First Modification of the Logistic map (FML) uses vertical symmetry and a transformation to the right. The Second Modification of the Logistic map (SML) uses vertical and horizontal symmetry and a transformation to the right. FML is less sensitive to initial conditions than the other maps, while the SML map is more sensitive. The total chaotic range of SML is larger than that of the others, and the histograms of the logistic map and the SML map are identical. The chaotic range of the SML map is fivefold that of the logistic map, a property that provides more key space for cryptographic purposes.
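For reference, the unmodified logistic map iterates x_{n+1} = r·x_n·(1 − x_n); a minimal sketch of its orbit follows (the FML/SML modifications themselves are not reproduced here, and the seed values are illustrative):

```python
def logistic_orbit(x0, r=4.0, n=50):
    # Iterate the logistic map x -> r*x*(1-x) n times from seed x0.
    # At r=4 the map is fully chaotic on [0, 1].
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

# Two nearby seeds: in the chaotic regime their orbits decorrelate
# rapidly -- the sensitivity to initial conditions that makes such
# maps attractive as building blocks for cryptographic keystreams.
print(logistic_orbit(0.123456789), logistic_orbit(0.123456790))
```

The paper's point is that plain logistic chaos only occupies part of the parameter range r ∈ [0, 4]; the symmetry transformations in FML/SML are designed to widen that chaotic range, and hence the usable key space.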
Abstract: Accurately estimating the Retransmission TimeOut (RTO) in Content-Centric Networking (CCN) is crucial for efficient rate control in end nodes and effective interface ranking in intermediate routers. Toward this end, the Jacobson algorithm, an Exponentially Weighted Moving Average (EWMA) over the Round Trip Times (RTTs) of previous packets, is a promising scheme. Assigning the lower bound of the RTO, determining how quickly the EWMA adapts to changes, and setting the multiplier of the RTT variance have the greatest impact on the accuracy of this estimator, and several evaluations have been performed to set them in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. However, the performance of this estimator in CCN has not yet been explored, despite CCN's significant architectural differences from TCP/IP networks. In this study, two new metrics for assessing the performance of RTO estimators in CCN are defined and the performance of the Jacobson algorithm in CCN is evaluated. This evaluation is performed by varying the minimum RTO, the EWMA parameters, and the multiplier of the RTT variance against different content popularity distribution gains. The obtained results are used to reconsider the Jacobson algorithm for accurately estimating the RTO in CCN. Comparing the performance of the reconsidered Jacobson estimator with existing solutions shows that it estimates the RTO simply and more accurately, without any additional information or computational overhead.
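The Jacobson estimator referenced above, as standardized for TCP in RFC 6298, maintains a smoothed RTT and an RTT deviation via EWMAs; a minimal sketch with the conventional TCP parameters α = 1/8, β = 1/4, and a variance multiplier of 4 (the CCN-tuned values this study arrives at are not reproduced here):

```python
class RtoEstimator:
    def __init__(self, alpha=1/8, beta=1/4, k=4, min_rto=1.0):
        self.alpha, self.beta, self.k, self.min_rto = alpha, beta, k, min_rto
        self.srtt = None    # smoothed RTT
        self.rttvar = None  # smoothed RTT deviation

    def sample(self, rtt):
        # Update the EWMAs with a new RTT measurement (RFC 6298 rules)
        # and return the resulting RTO, clamped to the lower bound.
        if self.srtt is None:
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar \
                          + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return max(self.min_rto, self.srtt + self.k * self.rttvar)

est = RtoEstimator(min_rto=0.2)
print(est.sample(0.1))  # first sample seeds srtt = rtt, rttvar = rtt/2
```

The three knobs the abstract names map directly onto `min_rto`, the pair (`alpha`, `beta`), and `k` in this sketch, which is why their settings dominate the estimator's accuracy.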
Abstract: The efficiency and performance of a Distributed Database Management System (DDBMS) is mainly determined by its design and by the network communication cost between sites. Fragmentation and distribution of data are the major design issues of a DDBMS. In this paper, we propose a new approach that integrates both fragmentation and data allocation in one strategy, based on a high-performance clustering technique and transaction processing cost functions. This approach efficiently and effectively achieves the objectives of data fragmentation, data allocation, and network-site clustering. It splits the data relations into pairwise disjoint fragments and determines whether each fragment should be allocated to a network site, allocating only where the benefit outweighs the cost according to the clustering technique. To show the performance of the proposed approach, we performed experimental studies on a real database application under different network connectivities. The obtained results achieve minimum total data transaction costs between sites, reduce the amount of redundant data accessed between these sites, and improve overall DDBMS performance.
Abstract: The capability of a system to fulfill its mission promptly in the presence of attacks, failures, or accidents is one of the qualitative definitions of survivability. In this paper, we propose a model for survivability quantification that is suitable for networks carrying complex traffic flows. Complex network traffic is considered as general multi-rate, heterogeneous traffic, where the individual bandwidth demands may aggregate in complex, nonlinear ways. Blocking probability is the chosen measure for survivability analysis. We study an arbitrary topology and several other known topologies for the network. Independent and dependent failure scenarios, as well as deterministic and random traffic models, are investigated. Finally, we provide survivability evaluation results for different network configurations. The results show that by using about 50% of the link capacity in networks with a relatively high number of links, the blocking probability remains near zero when the number of failures is limited.
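As a point of reference for the blocking-probability measure, the simplest special case is a single link with C circuits and single-rate Poisson traffic of load a Erlangs, where blocking follows the classic Erlang-B formula (the paper's multi-rate, nonlinearly aggregating model is more general; this sketch covers only that textbook case):

```python
def erlang_b(load, servers):
    # Stable iterative Erlang-B recursion:
    #   B(0) = 1;  B(n) = a*B(n-1) / (n + a*B(n-1))
    # Avoids the factorials of the closed-form expression.
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

# Blocking probability for 10 Erlangs offered to 15 circuits.
print(erlang_b(load=10.0, servers=15))
```

Adding circuits (capacity headroom) drives blocking toward zero, which is the intuition behind the paper's observation that operating links at about 50% capacity keeps blocking near zero under a limited number of failures.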
Funding: This research is supported by the Artificial Intelligence & Data Analytics Lab (AIDA), CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia.
Abstract: Biometric-based authentication systems have attracted more attention than traditional authentication techniques such as passwords over the last two decades. Multiple biometrics, such as fingerprint, palm, iris, palm vein, and finger vein, have been introduced. One of the challenges in biometrics is physical injury; the finger vein is among the biometrics least exposed to physical damage. Numerous methods have been proposed for authentication with this biometric, but they suffer from weaknesses such as high computational complexity and low identification rates. This paper presents a novel method of identity identification based on the scattering wavelet, which extracts image features with Gabor wavelet filters in a structure similar to convolutional neural networks. What distinguishes this algorithm from other popular feature extraction methods, such as deep learning methods, filter-based methods, and statistical methods, is its very high skill and accuracy in differentiating images that are similar but belong to different classes; even when the image is subject to serious damage such as noise, angle changes, or pixel-location shifts, this descriptor still generates feature vectors in a way that minimizes classifier error. This improves classification and authentication. The proposed method has been evaluated using two databases: Finger Vein USM (FV-USM) and Homologous Multimodal biometrics Traits (SDUMLA-HMT). In addition to having reasonable computational complexity, it records excellent identification rates under noise, rotation, and transmission challenges. At best, it achieves a 98.2% identification rate on the SDUMLA-HMT database and a 96.1% identification rate on the FV-USM database.
Abstract: In current mobile IPv6 (MIPv6) systems for System Architecture Evolution (SAE) networks, such as 4th-generation (4G) mobile networks, data delivery is performed through a centralized mobility network anchor between the Evolved Node B (eNB) and the Serving Gateway (S-GW), and also between the S-GW and the Packet Data Network Gateway (P-GW). However, the existing network has many obstacles, including suboptimal data routing, injection of unwanted data traffic into the mobile core network, and the requirement of capital expenditure. To handle these challenges, we describe a flat mobile core network scheme, denoted F-EPC, for SAE mobile networks. In the proposed scheme, the P-GW and S-GW gateways are combined into one node named the Cellular Gateway (C-GW). Further, we propose to distribute and increase the number of C-GWs in the mobile core network, with the Mobility Management Entity (MME) acting as the centralized mobility anchor and allocating IP addresses for the User Entity (UE). The results of a simulation analysis show that the proposed scheme provides superior performance compared with the current 4G architecture in terms of total transmission delay, handover delay, and the initial attach procedure.
Abstract: The analysis of real social, biological, and technological networks has attracted a lot of attention as technological advances have given us a wealth of empirical data. Time-varying graphs are used in such analysis to understand relationships, contact durations, and repeated occurrences of contact, yet they remain underexplored in intermittently connected networks. By extending the same concept to intermittent networks, the efficiency of routing protocols can be improved. This paper discusses a temporal characterizing algorithm; such characterization can help in accurately understanding dynamic behaviors and taking appropriate routing decisions. The present research therefore explores different possibilities of utilizing these time-varying network analyses and designs an adaptive routing protocol using a temporal distance metric. The adaptive routing protocol is implemented using the ONE simulator and is compared with Epidemic and PRoPHET in terms of delivery ratio, overhead, and the number of dropped messages. The results reveal that adaptive routing performs better than Epidemic and PRoPHET on both real and synthetic datasets.