Spatial heterogeneity refers to the variation in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and a CNN. The model is fine-tuned using cross-validation for hyperparameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates Global Moran’s I for examining global spatial autocorrelation and Local Moran’s I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of traditional machine learning models. Results indicate superior performance, with an R-squared of 0.90, outperforming RF (0.84) and CNN (0.74). This study contributes to a detailed understanding of spatial variation in data by considering the geographic information (Longitude & Latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicate that the hybrid model yielded lower errors, deviating by 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran’s I index of the residuals was observed to be 0.10. This study underscores that the hybrid model was able to predict house prices correctly both in clusters and in dispersed areas.
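The global Moran’s I reported above can be computed directly from model residuals and a spatial weight matrix. A minimal sketch, not the paper’s code; the residuals and binary adjacency weights below are illustrative assumptions:

```python
import numpy as np

def global_morans_i(values, weights):
    """Global Moran's I for a 1-D array of residuals.

    `weights` is an n x n spatial weight matrix (here a hypothetical
    binary adjacency matrix; real studies often row-standardize it).
    """
    n = len(values)
    z = values - values.mean()          # deviations from the mean
    numerator = n * (z @ weights @ z)   # spatially weighted cross-products
    denominator = weights.sum() * (z @ z)
    return numerator / denominator

# Toy example: 4 locations on a chain; similar residuals sit next to
# each other, so I should come out positive (spatial clustering).
residuals = np.array([1.0, 0.8, -0.9, -1.1])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
i_stat = global_morans_i(residuals, W)
```

A value near +1 indicates clustered residuals, near −1 dispersion, and near 0 spatial randomness, which is how the paper’s reported 0.10 would be read.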
The majority of spatial data reveal some degree of spatial dependence. The term “spatial dependence” refers to the tendency of phenomena to be more similar when they occur close together than when they occur far apart in space. This property is ignored in machine learning (ML) for spatial application domains, and most classical machine learning algorithms are inappropriate unless modified in some way to account for it. In this study, we propose an approach aimed at improving an ML model’s ability to detect this dependence without incorporating any spatial features in the learning process. To detect the dependence while also improving performance, a hybrid model (HM) based on two representative algorithms was used. In addition, cross-validation was used to stabilize the model. Furthermore, global Moran’s I and local Moran’s I were used to capture the spatial dependence in the residuals. The results show that the HM achieved significantly better performance, with an R² of 99.91%, compared with RBFNN and RF, which reached 74.22% and 82.26%, respectively. With lower errors, the HM achieved an average test error of 0.033% and a positive global Moran’s I of 0.12. We conclude that as the R² value increases, the models become weaker at capturing the dependence.
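The local Moran’s I used above assigns a statistic to each location, flagging local clusters in the residuals. A sketch of Anselin’s LISA form, with the same kind of toy chain-adjacency data as an assumed example:

```python
import numpy as np

def local_morans_i(values, weights):
    """Local Moran's I (LISA): I_i = (z_i / m2) * sum_j w_ij * z_j,
    with m2 = (1/n) * sum_k z_k^2. Positive I_i marks location i as
    part of a high-high or low-low cluster of residuals."""
    z = values - values.mean()
    m2 = (z @ z) / len(values)
    return (z / m2) * (weights @ z)

# Toy residuals on a 4-node chain with binary adjacency (illustrative only).
residuals = np.array([1.0, 0.8, -0.9, -1.1])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
lisa = local_morans_i(residuals, W)
```

Here every location has neighbors with like-signed residuals, so all four local statistics are positive, the pattern a model that failed to absorb the dependence would leave behind.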
In networking, one major difficulty that nodes suffer from is the need for their addresses to be generated and verified without relying on a third party or publicly authorized servers. To resolve this issue, the use of self-certifying addresses has become a highly popular and standardized method, of which Cryptographically Generated Addresses (CGA) are a prime example. CGA was primarily designed to deter the theft of IPv6 addresses by binding the generated address to a public key to prove address ownership. Even though the CGA technique is highly effective, it is still subject to several security vulnerabilities, in addition to certain limitations in its performance. In this study, the authors present an intensive systematic review of the literature to explore the technical specifications of CGA, its challenges, and existing proposals to enhance the protocol. Given that CGA generation is a time-consuming process, this limitation has hampered the application of CGA in mobile environments where nodes have limited energy and storage. Fulfilling the Hash2 condition in CGA is the heaviest and most time-consuming part of SEND. To improve the performance of CGA, we replaced the Secure Hash Algorithm (SHA-1) with the Message Digest (MD5) hash function. Furthermore, this study also analyzes the possible methods through which a CGA could be attacked. In this analysis, Denial-of-Service (DoS) attacks were identified as the main method of attack on the CGA verification process, compromising and threatening the privacy of CGA. Therefore, we propose some modifications to the standard CGA verification algorithm to mitigate DoS attacks and to make CGA more security conscious.
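The Hash2 cost that dominates CGA generation grows as 2^(16·Sec), since the generator must brute-force a modifier until the digest has 16·Sec leading zero bits. A simplified sketch of that loop, parameterized by hash function so SHA-1 and MD5 can be swapped; the input layout abbreviates the full RFC 3972 encoding:

```python
import hashlib

def find_modifier(public_key: bytes, sec: int, hash_name: str = "sha1") -> int:
    """Brute-force a CGA-style modifier until the digest's leftmost
    16*sec bits are zero (a simplified Hash2 loop: modifier, then
    9 zeroed bytes standing in for subnet prefix + collision count,
    then the encoded public key)."""
    zero_bits = 16 * sec
    modifier = 0
    while True:
        data = modifier.to_bytes(16, "big") + bytes(9) + public_key
        digest = hashlib.new(hash_name, data).digest()
        # Keep only the top `zero_bits` bits and test them against zero.
        if int.from_bytes(digest, "big") >> (8 * len(digest) - zero_bits) == 0:
            return modifier
        modifier += 1

# Sec = 1 already requires ~2**16 hash evaluations on average, which is
# why each Sec increment is so costly on energy-constrained mobile nodes.
m = find_modifier(b"example public key", 1)
```

Swapping `hash_name` to `"md5"` runs the same search with MD5’s cheaper compression function, which is the performance trade the study evaluates (at a known cost in collision resistance).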
The COVID-19 pandemic has caused higher education institutions around the world to close campus-based activities and move to online delivery. The aim of this paper is to present the case of Global College of Engineering and Technology (GCET) and how its practices, including teaching, student/staff support, assessments, and exam policies, were affected. The paper investigates the mediating role of the no-detriment policy’s impact on students’ results, along with the challenges faced by the institution, and offers recommendations and suggestions. The investigation concludes that the strategies adopted for online delivery, student support, assessments, and exam policies helped students cope effectively with the teaching and learning challenges posed by the COVID-19 pandemic without affecting their academic results. The study shows that 99% of students were able to maintain the same or a better level of performance during the first COVID-19 semester. One percent of students showed a slight decrease in their performance (about 1%–2%) with respect to their overall marks pre-COVID-19. The no-detriment policy helped that 1% of students maintain their overall performance at its pre-pandemic level. Finally, the paper provides a list of challenges and suggestions for the smooth conduct of online education.
Feature Selection (FS) is an important preprocessing step in data mining, used to remove redundant or unrelated features from high-dimensional data. Most optimization algorithms for FS problems are not well balanced in their search. A hybrid algorithm called the nonlinear binary grasshopper whale optimization algorithm (NL-BGWOA) is proposed in this paper to address this problem. In the proposed method, a new position-updating strategy combining the position changes of the whale and grasshopper populations is introduced, which improves the diversity of the search in the target domain. Ten distinct high-dimensional UCI datasets, the multi-modal Parkinson's speech datasets, and the COVID-19 symptom dataset are used to validate the proposed method. It is demonstrated that the proposed NL-BGWOA performs well across most of the high-dimensional datasets, achieving an accuracy of up to 0.9895. Furthermore, the experimental results on the medical datasets also demonstrate the advantages of the proposed method on practical FS problems, with best values of 0.913 for accuracy, 5.7 for feature-subset size, and 0.0873 for fitness. The results reveal that the proposed NL-BGWOA has comprehensive superiority in solving the FS problem for high-dimensional data.
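Binary wrapper metaheuristics of this kind typically score a candidate feature mask with a weighted sum of classification error and subset size, and map a continuous whale/grasshopper position to a binary mask with a transfer function. A generic sketch; the α weight and the sigmoid transfer are common choices in this literature, not necessarily the paper’s exact formulation:

```python
import numpy as np

def fs_fitness(mask, error_rate, alpha=0.99):
    """Lower-is-better fitness: weighted sum of the wrapped classifier's
    error rate and the fraction of features selected (alpha is an
    assumed weight, not taken from the paper)."""
    return alpha * error_rate + (1 - alpha) * (mask.sum() / mask.size)

def binarize(position, rng):
    """Sigmoid transfer function mapping a continuous search-agent
    position vector to a binary feature mask."""
    prob = 1.0 / (1.0 + np.exp(-position))
    return (rng.random(position.shape) < prob).astype(int)

# Strongly positive coordinates are almost surely selected,
# strongly negative ones almost surely dropped.
mask = binarize(np.array([10.0, -10.0]), np.random.default_rng(0))
```

The two fitness terms are exactly the accuracy-versus-subset-size trade-off reported in the abstract’s best values (0.913 accuracy, 5.7 features, 0.0873 fitness).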
This paper focuses on investigating immunological principles in the design of a multi-agent security architecture for intrusion detection and response in mobile ad hoc networks. In this approach, immunity-based agents monitor the situation in the network. These agents can take appropriate actions according to the underlying security policies. Specifically, their activities are coordinated in a hierarchical fashion while sensing, communicating, deciding, and generating responses. Such an agent can learn and adapt to its environment dynamically and can detect both known and unknown intrusions. The proposed intrusion detection architecture is designed to be flexible, extensible, and adaptable, and can perform real-time monitoring. This paper provides the conceptual view and a general framework of the proposed system. Finally, the architecture is illustrated by an example showing that it can prevent attacks efficiently.
The forecasting research literature has grown greatly in recent years as a result of advances in information technology. Financial time-series tasks have made substantial use of machine learning and deep neural networks, but building a prediction model from scratch takes time and computational resources. Transfer learning is growing popular for tackling these constraints on training time and computational resources in several disciplines. This study proposes a hybrid base model for financial time-series prediction, called RNN-LSTM, employing a recurrent neural network (RNN) and long short-term memory (LSTM). We used random search to fine-tune the hyperparameters, compared our proposed model to the RNN and LSTM base models, and evaluated them using the RMSE, MAE, and MAPE metrics. When forecasting the Forex currency pairs GBP/USD, USD/ZAR, and AUD/NZD, our proposed base model for transfer learning outperforms the RNN and LSTM base models, with root mean squared errors of 0.007656, 0.165250, and 0.001730, respectively.
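The three evaluation metrics named above are straightforward to compute from a forecast and its ground truth; a minimal sketch with illustrative data:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large deviations quadratically.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the residuals.
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    # Mean absolute percentage error (undefined if y_true contains zeros,
    # so it suits exchange rates, which are strictly positive).
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

y = np.array([1.0, 2.0, 4.0])   # toy ground truth
p = np.array([1.0, 2.0, 3.0])   # toy forecast
```

RMSE is the metric the abstract reports per currency pair; MAE and MAPE give scale-linear and scale-free views of the same errors.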
In this paper, a new method, named L-tree match, is presented for extracting data from complex data sources. First, based on the data extraction logic presented in this work, a new data extraction model is constructed in which model components are structurally correlated via a generalized template. Second, a database-populating mechanism is built, along with some object-manipulating operations needed for flexible database design, to support data extraction from huge text streams. Third, top-down and bottom-up strategies are combined to design a new extraction algorithm that can extract data from data sources with optional, unordered, nested, and/or noisy components. Lastly, this method is applied to extract accurate data from biological documents amounting to 100 GB for the first online integrated biological data warehouse of China.
We present the first efficient sound and complete algorithm (AOMSSQ) for optimizing multiple subspace skyline queries simultaneously. We first identify three performance problems of the naïve approach (SUBSKY), which can be used to process an arbitrary single-subspace skyline query. We then propose a cell-dominance computation algorithm (CDCA) to efficiently overcome the drawbacks of SUBSKY; in particular, a novel pruning technique is used in CDCA to dramatically decrease query time. Finally, based on the CDCA algorithm and the sharing mechanism between subspaces, we present and discuss the AOMSSQ algorithm and prove it sound and complete. We also present detailed theoretical analyses and extensive experiments demonstrating that our algorithms are both efficient and effective.
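A skyline query returns the points not dominated by any other point; SUBSKY-style and CDCA-style methods differ in how aggressively they prune, but the dominance test at the core is the same. A naïve sketch, assuming the usual minimization convention:

```python
def dominates(p, q):
    """p dominates q (minimization) when p is no worse than q in every
    dimension and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep each point not dominated by any other.
    Real algorithms (SUBSKY, CDCA) avoid most of these comparisons."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Restricting the tuples to a subset of their dimensions before calling `skyline` yields a subspace skyline, which is the per-query primitive AOMSSQ shares work across.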
In this paper, for the primes p such that 3 is a divisor of p − 1, we prove a result that reduces the computation of the linear complexity of a sequence over GF(p^m) (for any positive integer m) with period 3n (where n and p^m − 1 are coprime) to the computation of the linear complexities of three sequences with period n. Combined with known algorithms such as the generalized Games-Chan algorithm, the Berlekamp-Massey algorithm, and the Xiao-Wei-Lam-Imamura algorithm, we can determine the linear complexity of any sequence over GF(p^m) with period 3n (n and p^m − 1 coprime) more efficiently.
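The Berlekamp-Massey algorithm cited above computes linear complexity, the length of the shortest LFSR generating a sequence, directly. As an illustration, here is a binary (GF(2)) version rather than the GF(p^m) setting the paper targets:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the length of the shortest
    LFSR that generates `bits` (a list of 0/1 values)."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1         # current LFSR length, last length-change index
    for i in range(n):
        # Discrepancy: next output of the current LFSR vs. the sequence.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

The alternating sequence 1,0,1,0 satisfies s_t = s_{t-2}, so its linear complexity is 2; the all-zero sequence has complexity 0.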
Cluster analysis is a process of classifying the data in a specified data set. In this field, much attention is paid to high-efficiency clustering algorithms. In this paper, the features of current partition-based and hierarchy-based algorithms are reviewed, and a new hierarchy-based algorithm, PHC, is proposed by combining the advantages of both, using cohesion and closeness to amalgamate clusters. Compared with similar algorithms, the performance of PHC is improved and the quality of clustering is guaranteed; both features are demonstrated by the theoretical and experimental analyses in the paper.
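One agglomeration step of the kind a hierarchy-based algorithm performs can be sketched as follows, with minimum pairwise distance standing in for the closeness measure; this is an assumption for illustration, as PHC’s actual cohesion and closeness definitions are more elaborate:

```python
def closeness(c1, c2):
    """Minimum pairwise distance between two 1-D clusters (a simple
    stand-in for PHC's closeness measure)."""
    return min(abs(a - b) for a in c1 for b in c2)

def merge_closest(clusters):
    """One agglomeration step: merge the pair of clusters with the
    smallest closeness and return the new cluster list."""
    i, j = min(((i, j) for i in range(len(clusters))
                for j in range(i + 1, len(clusters))),
               key=lambda ij: closeness(clusters[ij[0]], clusters[ij[1]]))
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

result = merge_closest([[1], [2], [10]])  # [1] and [2] are closest
```

Repeating the step until a stopping criterion holds yields the full hierarchy; the choice of merge criterion is exactly where cohesion/closeness designs like PHC’s differ.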
Software automated testing is one of the critical research subjects in the field of computer applications. In this paper, a novel architecture called the automated testing system (ATS) is proposed. Based on J2EE-related techniques, including the MVC design pattern and the Struts framework, ATS can theoretically support any black-box testing business, with the relevant APIs programmed beforehand in the Tcl scripting language. Moreover, as the core of ATS is built in Java, it can work in different environments without being recompiled. The efficiency of the new system is validated by numerous applications in the communications industry, and the results also show the effectiveness and flexibility of the approach.
This study introduces an innovative approach to optimizing cloud computing job distribution using the Improved Dynamic Johnson Sequencing Algorithm (DJS). Emphasizing on-demand resource sharing, typical of Cloud Service Providers (CSPs), the research focuses on minimizing job completion delays through efficient task allocation. Utilizing Johnson’s rule from operations research, the study addresses the challenge of resource availability after task completion. It advocates queuing models with multiple servers and finite capacity to improve job scheduling, subsequently reducing wait times and queue lengths. The Dynamic Johnson Sequencing Algorithm and the M/M/c/K queuing model are applied to optimize task sequences, and their efficacy is shown through comparative analysis. The research evaluates the impact of makespan calculation on data file transfer times and assesses vital performance indicators, ultimately positioning the proposed technique as superior to existing approaches and offering a robust framework for enhanced task scheduling and resource allocation in cloud computing.
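Johnson’s rule, the basis of the DJS algorithm above, orders jobs for a two-machine flow shop to minimize makespan: jobs whose first-machine time does not exceed their second-machine time go first in ascending first-machine time, the rest go last in descending second-machine time. A minimal sketch with illustrative job data:

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop.

    `jobs` is a list of (name, t1, t2) processing times. The returned
    order of names minimizes makespan over the two machines.
    """
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return [j[0] for j in front + back]

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    times = {name: (t1, t2) for name, t1, t2 in jobs}
    m1 = m2 = 0
    for name in order:
        t1, t2 = times[name]
        m1 += t1                 # machine 1 processes jobs back to back
        m2 = max(m2, m1) + t2    # machine 2 waits for machine 1's output
    return m2

jobs = [("A", 3, 2), ("B", 1, 4), ("C", 5, 6)]  # illustrative tasks
```

In a cloud setting the two “machines” map to successive stages of a job (e.g., data transfer then computation), which is how the study adapts the rule to CSP task allocation.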
In this paper, ARMiner, a data mining tool based on association rules, is introduced. Beginning with the system architecture, its characteristics and functions are discussed in detail, including data transfer, concept hierarchy generalization, mining rules with negative items, and re-development of the system. An example of the tool's application is also shown. Finally, some issues for future research are presented.
Symmetry of the world trade network provides a novel perspective for understanding the world-wide trading system. However, symmetry in the world trade network (WTN) has rarely been studied so far. In this paper, the authors systematically explore the symmetry in the WTN. The authors construct the WTN for 2005 and explore the size and structure of its automorphism group, through which they find that the WTN is symmetric, and in particular locally symmetric to a certain degree. Furthermore, the authors work out the symmetric motifs of the WTN and investigate their structure and function, concluding that local symmetry has a great effect on the stability of the WTN and that continuous symmetry-breaking generates the complexity and diversity of the trade network. Finally, utilizing the local symmetry of the network, the authors work out the quotient of the WTN, which is the structural skeleton dominating its stability and evolution.
Fault-tolerant systems have found wide application in military, industrial, and commercial areas. Most of these systems are constructed using multiple-modular redundancy or error control coding techniques. They need some fault-tolerance-specific components (such as a voter, switcher, encoder, or decoder) to implement error-detecting or error-correcting functions. However, the problem of error detection, location, or correction for the fault-tolerance-specific components themselves has not been solved properly so far, so the dependability of the whole fault-tolerant system is greatly affected. This paper presents a theory of robust fault-masking digital circuits for characterizing fault-tolerant systems with the ability of concurrent error location, and a new scheme for dual-modular redundant systems with a partially robust fault-masking property. A basic robust fault-masking circuit is composed of a basic functional circuit and an error-locating corrector. Such a circuit has the ability of both concurrent error correction and concurrent error location. According to this circuit model, for a partially robust fault-masking dual-modular redundant system, two redundant modules based on alternating-complementary logic constitute the basic functional circuit, and an error-correction-specific circuit called the alternating-complementary corrector is used as the error-locating corrector. The performance (such as hardware complexity and time delay) of the scheme is analyzed.
Funding: supported by the Dana Impak Perdana fund, no. UKM DIP-2018-040, and the Fundamental Research Grant Scheme, fund no. FRGS/1/2018/TK04/UKM/02/7, under author R. Hassan.
Funding: supported by Global College of Engineering and Technology (GCET).
Funding: supported by the Natural Science Foundation of Liaoning Province under Grant 2021-MS-272 and the Educational Committee project of Liaoning Province under Grant LJKQZ2021088.
Funding: supported by the National High Technology Development 863 Program of China (No. 2003AA148010) and the Key Technologies R&D Program of China (No. 2002DA103A03-07).
Funding: supported by the NSF of USA under Grant No. IIS-0308001, the National Natural Science Foundation of China under Grant No. 60303008, and the National Grand Fundamental Research 973 Program of China under Grant No. 2005CB321905.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 60542006, 60433050 & 10225106).
文摘In this paper, for the the primes p such that 3 is a divisor of p - 1, we prove a result which reduces the computation of the linear complexity of a sequence over GF(pm)(any positive integer m) with the period 3n (n and pm - 1 are coprime) to the computation of the linear complexities of three sequences with the period n. Combined with some known algorithms such as generalized Games-Chan algorithm, Berlekamp-Massey algorithm and Xiao-Wei-Lam-lmamura algorithm, we can determine the linear complexity of any sequence over GF(pm) with the period 3n (n and pm - 1 are coprime) more efficiently.
Abstract: Cluster analysis is a process to classify data in a specified data set. In this field, much attention is paid to high-efficiency clustering algorithms. In this paper, the features of current partition-based and hierarchy-based algorithms are reviewed, and a new hierarchy-based algorithm, PHC, is proposed by combining the advantages of both, using cohesion and closeness to amalgamate clusters. Compared with similar algorithms, the performance of PHC is improved and the quality of clustering is guaranteed; both properties are demonstrated by the theoretical and experimental analyses in the paper.
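The abstract does not define PHC's cohesion and closeness measures, so as a hedged illustration of hierarchy-based merging in general, here is a generic single-linkage agglomerative sketch where "closeness" is simply the minimum inter-cluster distance (all names are ours, not from the paper):

```python
def agglomerative(points, k):
    """Naive single-linkage agglomerative clustering down to k clusters.
    Repeatedly merges the two closest clusters; O(n^3)-ish, for illustration only."""
    clusters = [[p] for p in points]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def closeness(c1, c2):
        # single linkage: distance between the nearest pair of members
        return min(dist(a, b) for a in c1 for b in c2)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: closeness(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters
```

PHC's contribution lies in choosing better amalgamation criteria (cohesion plus closeness) than the single measure used here.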
Abstract: Software automated testing is one of the critical research subjects in the field of computer application. In this paper, a novel architecture called the automated testing system (ATS) is proposed. Based on J2EE techniques including the MVC design pattern and the Struts framework, ATS can in principle support any black-box testing business, with the relevant APIs programmed beforehand in the Tcl scripting language. Moreover, as the core of ATS is built in Java, it can work in different environments without being recompiled. The efficiency of the new system is validated by numerous applications in the communication industry, and the results also show the effectiveness and flexibility of the approach.
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2023R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: This study introduces an innovative approach to optimize cloud computing job distribution using the Improved Dynamic Johnson Sequencing Algorithm (DJS). Emphasizing on-demand resource sharing, typical of Cloud Service Providers (CSPs), the research focuses on minimizing job completion delays through efficient task allocation. Utilizing Johnson's rule from operations research, the study addresses the challenge of resource availability after task completion. It advocates queuing models with multiple servers and finite capacity to improve job scheduling models, subsequently reducing wait times and queue lengths. The Dynamic Johnson Sequencing Algorithm and the M/M/c/K queuing model are applied to optimize task sequences, and their efficacy is shown through comparative analysis. The research evaluates the impact of makespan calculation on data file transfer times and assesses vital performance indicators, ultimately positioning the proposed technique as superior to existing approaches and offering a robust framework for enhanced task scheduling and resource allocation in cloud computing.
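The classical Johnson's rule the study builds on schedules jobs across two sequential machines (or stages) to minimize makespan: jobs shorter on the first machine go early, jobs shorter on the second go late. A minimal sketch (names and the three-job example are ours; the paper's dynamic variant and M/M/c/K model are not reproduced here):

```python
def johnson_sequence(jobs):
    """Johnson's rule for two-machine flow-shop scheduling.
    jobs: list of (name, time_on_m1, time_on_m2).
    Returns the job order minimizing makespan."""
    front, back = [], []
    for job in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if job[1] <= job[2]:
            front.append(job)       # shorter on machine 1: schedule early
        else:
            back.insert(0, job)     # shorter on machine 2: schedule late
    return front + back

def makespan(order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for _, a, b in order:
        t1 += a                     # machine 1 runs back-to-back
        t2 = max(t2, t1) + b        # machine 2 waits for m1 output
    return t2
```

For jobs A(3,2), B(1,4), C(5,1), the rule yields the order B, A, C with makespan 10.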
Abstract: In this paper, ARMiner, a data mining tool based on association rules, is introduced. Beginning with the system architecture, its characteristics and functions are discussed in detail, including data transfer, concept hierarchy generalization, mining rules with negative items, and the re-development of the system. An example of the tool's application is also shown. Finally, some issues for future research are presented.
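Association rules of the kind ARMiner mines are judged by support (how often antecedent and consequent co-occur) and confidence (co-occurrence frequency divided by antecedent frequency). A toy sketch restricted to single-item antecedents and consequents (our naming; ARMiner itself handles richer rules, including negative items):

```python
from itertools import combinations

def mine_rules(transactions, min_support, min_conf):
    """Tiny association-rule miner: emits (antecedent, consequent,
    support, confidence) for one-item -> one-item rules."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if set(itemset) <= set(t)) / n

    rules = []
    for a, b in combinations(items, 2):
        for x, y in ((a, b), (b, a)):
            s = support((x, y))
            if s >= min_support:
                conf = s / support((x,))
                if conf >= min_conf:
                    rules.append((x, y, s, conf))
    return rules
```

On a four-basket grocery example, the rule butter -> bread reaches confidence 1.0 because every basket containing butter also contains bread.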
Funding: Supported by the National Natural Science Foundation of China under Grant No. 70371070, the Shanghai Leading Academic Discipline Project under Grant No. S30504, and the Key Project for Fundamental Research of STCSM under Grant No. 06JC14057.
Abstract: Symmetry of the world trade network provides a novel perspective for understanding the world-wide trading system, yet symmetry in the world trade network (WTN) has rarely been studied so far. In this paper, the authors systematically explore the symmetry in the WTN. The authors construct the WTN for 2005 and explore the size and structure of its automorphism group, finding that the WTN is symmetric, and in particular locally symmetric to a certain degree. Furthermore, the authors work out the symmetric motifs of the WTN and investigate their structure and function, concluding that local symmetry has a great effect on the stability of the WTN and that continuous symmetry-breaking generates the complexity and diversity of the trade network. Finally, utilizing the local symmetry of the network, the authors work out the quotient of the WTN, the structural skeleton dominating its stability and evolution.
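The automorphism group the abstract analyzes is the set of node relabelings that leave the edge set unchanged. For a network the size of the WTN one would use specialized tools (e.g. nauty), but the concept can be sketched by brute force on a tiny undirected graph (our naming; the WTN itself is weighted and directed, which this sketch ignores):

```python
from itertools import permutations

def automorphisms(n, edges):
    """Brute-force automorphism group of a small undirected graph on
    nodes 0..n-1: all node permutations mapping the edge set to itself."""
    es = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(range(n)):
        if all(frozenset((perm[u], perm[v])) in es for u, v in es):
            autos.append(perm)
    return autos
```

A path on three nodes has only the identity and the end-swapping reflection (group size 2), while a triangle admits all 6 permutations; a larger automorphism group means more structural symmetry, the quantity the paper measures for the WTN.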
Abstract: Fault-tolerant systems have found wide application in military, industrial, and commercial areas. Most of these systems are constructed using multiple-modular redundancy or error control coding techniques. They need fault-tolerance-specific components (such as a voter, switcher, encoder, or decoder) to implement error-detecting or error-correcting functions. However, the problem of error detection, location, or correction for the fault-tolerance-specific components themselves has not been solved properly so far, so the dependability of a whole fault-tolerant system is greatly affected. This paper presents a theory of robust fault-masking digital circuits for characterizing fault-tolerant systems with the ability of concurrent error location, and a new scheme for dual-modular redundant systems with a partially robust fault-masking property. A basic robust fault-masking circuit is composed of a basic functional circuit and an error-locating corrector. Such a circuit has the ability not only of concurrent error correction but also of concurrent error location. Under this circuit model, in a partially robust fault-masking dual-modular redundant system, two redundant modules based on alternating-complementary logic constitute the basic functional circuit, and an error-correction-specific circuit called the alternating-complementary corrector serves as the error-locating corrector. The performance (hardware complexity, time delay) of the scheme is analyzed.
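The paper's dual-modular alternating-complementary scheme is more elaborate than can be reproduced from the abstract, but the masking-plus-location idea it generalizes is easy to illustrate with the standard triple-modular-redundancy voter (a deliberately simpler stand-in, with our naming): the majority function masks a single faulty module, and comparing each output against the voted result locates it.

```python
def tmr_voter(a, b, c):
    """Bitwise majority vote over three redundant module outputs:
    a single faulty module is masked in every bit position."""
    return (a & b) | (a & c) | (b & c)

def locate_error(a, b, c):
    """Concurrent error location: index (0, 1, 2) of the module
    disagreeing with the majority, or None if all agree."""
    v = tmr_voter(a, b, c)
    for i, out in enumerate((a, b, c)):
        if out != v:
            return i
    return None
```

The paper's point is that the voter itself is a single point of failure in such schemes; its robust fault-masking circuits extend error correction and location to the corrector component as well.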