Abstract: In this paper, a strong limit theorem on the gambling strategy for binary Bernoulli sequences, i.e., the irregularity theorem, is extended to random selection for dependent m-valued random variables, using a new method: differentiability on a net. Furthermore, by allowing the selection function to take values in the finite interval [-M, M], the concept of random selection is generalized.
Abstract: Moran and Wright–Fisher processes are probably the best-known models for studying the evolution of a population under various environmental effects. Our object of study is the Simpson index, which measures the level of diversity of the population, one of the key parameters for ecologists who study, for example, forest dynamics. Following ecological motivations, we consider here the case where there are several species whose fitness and immigration parameters are random processes (and thus evolve in time). The Simpson index is difficult to evaluate when the population is large, except in the neutral (no selection) case, because it has no closed formula. Our approach relies on the large-population limit in the "weak" selection case, and thus gives a procedure that enables us to approximate, with a controlled rate, the expectation of the Simpson index at a fixed time. We also study the long-time behavior (invariant measure and speed of convergence towards equilibrium) of the Wright–Fisher process in a simplified setting, which gives a full picture of the approximation of the expectation of the Simpson index.
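For a finite population the Simpson index has a simple closed form: if species i has proportion p_i, the index is D = Σ p_i², the probability that two individuals drawn at random belong to the same species. A minimal sketch (illustrative only, not the authors' estimator for the random-environment case):

```python
# Sketch: Simpson index D = sum_i p_i^2 for a finite population of labeled
# individuals. Higher D means lower diversity.
from collections import Counter

def simpson_index(population):
    """Compute the Simpson index from a list of species labels."""
    n = len(population)
    counts = Counter(population)
    return sum((c / n) ** 2 for c in counts.values())

# Hypothetical forest sample: proportions 0.5, 0.3, 0.2
pop = ["oak"] * 50 + ["pine"] * 30 + ["birch"] * 20
d = simpson_index(pop)  # 0.5**2 + 0.3**2 + 0.2**2 = 0.38
```

The difficulty addressed in the abstract is that when fitness and immigration are themselves random processes, this quantity must be averaged over population dynamics, which has no closed formula.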
Abstract: As the incidence of breast cancer has increased significantly in recent years, expert systems and machine learning techniques for this problem have attracted great attention from many scholars. This study aims at diagnosing and prognosticating breast cancer with a machine learning method based on a random forest classifier and a feature selection technique. By weighting features, keeping the useful ones and removing the redundant ones from the datasets, the method solves the diagnosis problem by classifying the Wisconsin Breast Cancer Diagnosis Dataset and the prognosis problem by classifying the Wisconsin Breast Cancer Prognostic Dataset. On these datasets we obtained a classification accuracy of 100% in the best case and of around 99.8% on average, which is very promising compared with previously reported results. Although this result is for the Wisconsin Breast Cancer Datasets, it suggests that the method can also be applied with confidence to other breast cancer diagnosis problems.
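The basic pipeline of random-forest classification on the Wisconsin Diagnostic dataset can be sketched with scikit-learn, which bundles a copy of that dataset. This is a baseline illustration only, not the authors' feature-weighting and selection pipeline, and the accuracy it reaches is not the paper's reported figure:

```python
# Sketch: baseline random-forest diagnosis on the Wisconsin Diagnostic
# Breast Cancer dataset (scikit-learn's bundled copy). The paper's method
# additionally weights and prunes features before classification.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```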
Abstract: We consider the problem of variable selection for single-index random effects models with longitudinal data. An automatic variable selection procedure is developed using smooth-thresholding. The proposed method shares some of the desired features of existing variable selection methods: the resulting estimator enjoys the oracle property, and the procedure avoids a convex optimization problem and is flexible and easy to implement. Moreover, we use the penalized weighted deviance criterion for a data-driven choice of the tuning parameters. Simulation studies are carried out to assess the performance of our method, and a real dataset is analyzed for further illustration.
Abstract: A novel frequency selective surface (FSS) for reducing the radar cross section (RCS) is proposed in this paper. The FSS is based on a random distribution method, so it can be called a random surface. Stacked patches serving as periodic elements are employed for RCS reduction; previous work has demonstrated the efficiency of using microstrip patches, especially for reflectarrays. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked-patch random surface with dimensions of 260 mm × 260 mm is simulated, fabricated, and measured to demonstrate the validity of the proposed design. For normal incidence, an 8-dB RCS reduction is achieved in both simulation and measurement over 8–13 GHz. Oblique incidence at 30° is also investigated, for which a 7-dB RCS reduction is obtained over 8–14 GHz.
Abstract: An intrusion detection system collects and analyzes information from different areas within a computer or a network to identify possible security threats, including threats from both outside and inside the organization. It deals with large amounts of data containing various irrelevant and redundant features, which results in increased processing time and a low detection rate. Therefore, feature selection should be treated as an indispensable pre-processing step to significantly improve overall system performance when mining huge datasets. In this context, this paper focuses on a two-step feature selection approach based on Random Forest. The first step selects the features with the higher variable importance scores and guides the initialization of the search process for the second step, which outputs the final feature subset for classification and interpretation. The effectiveness of the algorithm is demonstrated on the KDD'99 intrusion detection datasets, which are based on the DARPA 98 dataset and provide labeled data for researchers working in the field of intrusion detection. An important deficiency of the KDD'99 dataset is its huge number of redundant records, as observed earlier. We therefore derived a dataset, RRE-KDD, by eliminating redundant records from the KDD'99 train and test datasets, so that the classifiers and the feature selection method are not biased towards the more frequent records. RRE-KDD consists of the KDD99Train+ and KDD99Test+ datasets, used for training and testing, respectively. The experimental results show that the proposed Random Forest based approach can select the most important and relevant features for classification, which not only reduces the number of input features and the processing time but also increases the classification accuracy.
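The first step described above, ranking features by random-forest variable importance and keeping the higher-scoring ones, can be sketched as follows. Synthetic data stands in for KDD'99, and the mean-importance threshold is an illustrative assumption, not the paper's rule:

```python
# Sketch: random-forest variable-importance feature selection (step one of
# the two-step approach). The keep-above-mean threshold is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an intrusion dataset: 20 features, 5 informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
imp = rf.feature_importances_            # one score per feature, sums to 1
keep = np.where(imp > imp.mean())[0]     # indices of high-importance features
X_reduced = X[:, keep]                   # reduced input for the second step
```

In the paper, the subset produced here seeds a second search that returns the final feature set for classification.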
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2007AA01Z422) and the Natural Foundation of the Anhui Provincial Education Department (No. 2006KJ041B, KJ2007B073).
Abstract: Combining the characteristics of peer-to-peer (P2P) networks and grids, a super-peer selection algorithm, SSABC, is presented for distributed networks that merge P2P and grid. The algorithm computes node capacities using resource properties provided by a grid monitoring and discovery system, such as available bandwidth, free CPU and idle memory, as well as the number of current connections and the online time. When a new node joins the network and the super-peers are all saturated, a new super-peer is selected, from among the new node and the already-joined nodes, as the one with the highest capacity. Theoretical analyses and simulation experiments show that super-peers selected by capacity achieve higher query success rates and shorter average hop counts than super-peers selected randomly, and that they can also balance the network load when all super-peers are saturated. The conclusion remains valid when the total number of nodes changes, which shows that SSABC is feasible and stable.
Abstract: The quality of Maternal, Neonatal and Child Health (MNCH) care is an important aspect of ensuring healthy outcomes and the survival of mothers and children. To maintain the quality of the health services provided, organizations and other stakeholders in maternal and child health recommend regular quality measurement. Quality indicators are the key components of the quality measurement process. However, the literature shows neither an indicator selection process nor a set of quality indicators for quality measurement that is universally accepted. This lack results in the establishment of a new quality indicator selection process and a new set of quality indicators whenever the need for quality measurement arises, adding extra steps that complicate the quality measurement process. This study therefore aims to establish a set of quality indicators from the broad set of quality indicators recommended by the World Health Organization (WHO). The study deployed a machine learning technique, specifically a random forest classifier, to select important indicators for quality measurement. Twenty-nine indicators were identified as important features, and among those, eight indicators, namely maternal mortality ratio, stillbirth rate, delivery at a health facility, deliveries assisted by skilled attendants, proportion of breech deliveries, normal delivery rate, born-before-arrival rate and antenatal care visit coverage, were identified as the most important indicators for quality measurement.
Abstract: To address the problem of trunk detection for inter-row navigation in complex orchard environments, a hierarchical trunk detection method for central-leader fruit trees based on multi-beam LiDAR (Light Detection and Ranging) is proposed. A 16-beam VLP-16 LiDAR collects orchard point-cloud data around the vehicle, and trunks are detected hierarchically in two steps, object segmentation and trunk detection, removing non-trunk objects and improving trunk detection accuracy. First, an annular region of interest (ROI) is set and a ground-fitting algorithm removes ground points, eliminating the connectivity between orchard target point clouds. Second, a rectangular ROI is set and the density-based spatial clustering of applications with noise (DBSCAN) algorithm clusters the non-ground points in the xOy plane; the DBSCAN hyperparameters are set according to the LiDAR measurement resolution and the orchard target parameters, segmenting the non-ground points into a number of target clusters. Then, geometric and intensity features of the target clusters are extracted at both global and local scales; these features describe the differences between trunks and other orchard objects. Finally, a trained trunk detector fuses the features, classifies the target clusters into trunk and non-trunk classes, and outputs the trunk clusters. The trunk detection step uses the random forest (RF) algorithm for offline feature selection and fusion: with trunk and non-trunk training samples, feature importance is evaluated by the change in the Gini index (GI), 22 highly discriminative features are selected from the initial features, and these features are fused to generate the trunk detector. The experimental scene was a standardized walnut orchard; 1317 frames of point-cloud data were collected, from which 12213 target clusters were segmented, with non-trunk objects such as canopies, weeds, support poles, fences, earth slopes, farm tools and pedestrians accounting for 58.04%. The target clusters were randomly divided into training and test sets at a frame ratio of 1:4. On the test set, the trunk detection precision was 99.16%, the recall 99.21%, and the F1 score 99.19%, with an average per-frame time of 85.25 ms for trunk-level detection. The proposed method can detect trunks quickly and accurately in complex orchard scenes, meeting the accuracy and real-time requirements of trunk detection for inter-row orchard navigation.
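The DBSCAN segmentation step, clustering non-ground points in the horizontal plane so that each dense blob (e.g. a trunk cross-section) becomes one target cluster, can be sketched with scikit-learn. The synthetic points and the eps/min_samples values below are illustrative, not the paper's LiDAR-derived hyperparameters:

```python
# Sketch: DBSCAN clustering of (x, y) point-cloud coordinates, as in the
# object-segmentation step. eps and min_samples are illustrative values.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
trunk_a = rng.normal([0.0, 0.0], 0.05, size=(40, 2))  # dense blob (a "trunk")
trunk_b = rng.normal([3.0, 0.0], 0.05, size=(40, 2))  # second dense blob
stray = rng.uniform(-5, 5, size=(10, 2))              # sparse stray returns
pts = np.vstack([trunk_a, trunk_b, stray])

labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(pts)
# DBSCAN labels noise points -1; cluster ids are 0, 1, ...
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
```

Each resulting cluster would then be passed to the feature-extraction and random-forest classification stages described above.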
Abstract: Random pixel selection is one of the image steganography methods that has achieved significant success in enhancing the robustness of hidden data. This property makes it difficult for steganalysts' powerful data extraction tools to detect the hidden data, and it ensures high-quality stego image generation. However, using a seed key to generate non-repeated sequential numbers takes a long time because it requires specific mathematical equations. In addition, these numbers may cluster in certain ranges. Data hidden in such clustered pixels reduces the image quality, which steganalysis tools can detect. Therefore, this paper proposes a data structure that safeguards the steganographic model data and maintains the quality of the stego image. The paper employs the Adelson-Velsky and Landis (AVL) tree data structure to implement the random pixel selection technique for data concealment. The AVL tree provides several advantages for image steganography. Firstly, it ensures balanced tree structures, which leads to efficient data retrieval and insertion operations. Secondly, the self-balancing nature of AVL trees minimizes clustering by maintaining an even distribution of pixels, thereby preserving the stego image quality. The data structure employs the pixel indicator technique for Red, Green, and Blue (RGB) channel extraction, with the green channel serving as the foundation for building a balanced binary tree. First, the sender identifies the colored cover image and the secret data. The sender uses the two least significant bits (2-LSB) of the RGB channels to conceal the data's size and associated information. The next step is to create a balanced binary tree based on the green channel. Using the channel pixel indicator on the LSB of the green channel, bits are concealed in the 2-LSB of the red or blue channel. The first four levels of the data structure tree mask the data size, while subsequent levels conceal the remaining digits of the secret data. After embedding the bits in the binary tree level by level, the model restores the AVL tree to create the stego image. Finally, the receiver obtains this stego image through the public channel, enabling secret data recovery without stego or crypto keys. This method ensures that the stego image appears unsuspicious to potential attackers: without the extraction algorithm, a third party cannot extract the original secret information from an intercepted stego image. Experimental results showed high levels of imperceptibility and security.
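The 2-LSB bit-plane operation at the core of the embedding step can be sketched in a few lines. This illustrates only how two bits replace the low bits of one 0–255 channel value; the AVL-tree pixel ordering and the channel-indicator logic of the full scheme are omitted:

```python
# Sketch: 2-LSB embedding and extraction on one 8-bit colour channel value.
# Only the bit-plane step of the scheme; pixel selection is not shown.
def embed_2lsb(channel_value: int, bits: int) -> int:
    """Replace the two least significant bits of a 0-255 value with `bits` (0-3)."""
    return (channel_value & 0b11111100) | (bits & 0b11)

def extract_2lsb(channel_value: int) -> int:
    """Recover the two embedded bits."""
    return channel_value & 0b11

red = 173                      # 0b10101101
stego = embed_2lsb(red, 0b10)  # low bits become 10 -> 0b10101110 = 174
```

Because only the two lowest bits change, the channel value moves by at most 3, which is why 2-LSB embedding is hard to perceive visually.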
Abstract: [Objective] Landslide disasters occur frequently in Leiyang City, posing a serious threat to people's lives, property and ecological security. The aim is to improve the accuracy of landslide susceptibility assessment. [Methods] Taking Leiyang City, Hunan Province as the study area, an IV-RF model coupling the information value model (IV) and the random forest model (RF) was constructed, and a spatially constrained sampling strategy was introduced to optimize the selection of negative samples for the susceptibility assessment. The three models were compared using ROC curves and AUC values, and a comprehensive performance index was proposed for overall model evaluation. [Results] 1) The coupled IV-RF model outperformed the single models, with AUC = 0.952 and a comprehensive performance index (Accuracy + F1 + MCC) of 2.593. Landslide points were densely distributed in the very-high and high susceptibility zones and extremely sparse in the very-low and low zones, verifying the model's high spatial prediction accuracy. 2) The engineering geological rock group factor is one of the most important evaluation factors affecting landslide development in the study area. [Conclusion] The coupled IV-RF model combines the quantitative data interpretation of IV with the nonlinear recognition capability of RF, effectively improving recognition accuracy; the results can provide a scientific basis for landslide risk prevention and control, soil and water conservation, and territorial spatial planning in the study area.
Funding: Supported by a Scientific Research Project of Selçuk University.
Abstract: The gravitational search algorithm (GSA) is a population-based heuristic optimization technique proposed for solving continuous optimization problems. The GSA tries to obtain an optimal or near-optimal solution to an optimization problem by using the interaction of all agents, or masses, in the population. This paper proposes and analyzes fitness-proportional (roulette-wheel), tournament, rank-based and random selection mechanisms for choosing the agents that act as masses in the GSA. The proposed methods are applied to 23 numerical benchmark functions, and the results obtained are compared with the basic GSA. Experimental results show that the proposed methods are better than the basic GSA in terms of solution quality.
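Of the selection mechanisms listed above, roulette-wheel selection is the simplest to illustrate: each agent is chosen with probability proportional to its fitness. A minimal sketch follows; for a minimisation problem the fitnesses would first be inverted or normalised, which is skipped here, and the integration into the GSA mass update is not shown:

```python
# Sketch: fitness-proportional (roulette-wheel) selection of one agent.
# Assumes non-negative fitnesses for a maximisation problem.
import random

def roulette_select(fitnesses, rng):
    """Return the index of one agent, chosen with probability fitness/total."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)   # a point on the "wheel"
    cum = 0.0
    for i, f in enumerate(fitnesses):
        cum += f
        if pick <= cum:
            return i
    return len(fitnesses) - 1      # guard against floating-point round-off

rng = random.Random(0)
fit = [1.0, 3.0, 6.0]              # agent 2 should be picked ~60% of the time
picks = [roulette_select(fit, rng) for _ in range(3000)]
```

Tournament, rank-based and random selection differ only in how this choice is made; the comparison in the paper swaps this step while keeping the rest of the GSA fixed.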