Nitrogen (N) enrichment has resulted in widespread alteration of grassland ecosystem processes and functions, mainly through disturbance of soil enzyme activities. However, we lack a comprehensive understanding of how N deposition affects the specific key soil enzymes that mediate plant-soil feedback in grasslands. Here, with a meta-analysis of 1446 cases from field observations in China, we show that N deposition differentially affects soil enzymes associated with soil biochemical processes. Specifically, N addition significantly increased C-, N-, and P-acquiring hydrolase activities by 8.73%, 7.67%, and 8.69%, respectively, related to an increase in microbial-specific enzyme secretion. Increased relative N availability and soil acidification were two potential mechanisms accounting for the changes in soil enzyme activities under N enrichment. Mixed N addition combining NH₄NO₃ and urea showed a greater stimulatory effect on soil enzyme activities. However, high-rate and long-term N addition tended to weaken the positive responses of soil C-, N-, and P-acquiring hydrolase activities to N enrichment. Spatially, increased mean annual precipitation and temperature primarily promoted the positive effects of N enrichment on N- and P-acquiring hydrolase activities, and the stimulation of C- and N-acquiring hydrolase activities by N enrichment intensified with soil depth. Finally, multimodel inference showed that grassland type was the most important regulator of the responses of microbial C-, N-, and P-acquiring hydrolase activities to N enrichment. This meta-analysis provides comprehensive insight into the key role of N enrichment in shaping soil enzyme activities in grassland ecosystems.
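The percent changes reported in this abstract are the kind of effect sizes that meta-analyses commonly compute from the log response ratio, lnRR = ln(treatment mean / control mean), back-transformed to a percent change. A minimal sketch of that arithmetic, using hypothetical values (not the study's data):

```python
import math

def ln_response_ratio(treatment_mean: float, control_mean: float) -> float:
    """Log response ratio (lnRR), a standard meta-analysis effect size."""
    return math.log(treatment_mean / control_mean)

def percent_change(lnrr: float) -> float:
    """Back-transform a (mean) lnRR to a percent change relative to control."""
    return (math.exp(lnrr) - 1.0) * 100.0

# Hypothetical example: enzyme activity of 1.0873 under N addition vs. 1.0 in controls
lnrr = ln_response_ratio(1.0873, 1.0)
print(round(percent_change(lnrr), 2))  # 8.73, i.e. an 8.73% increase
```

In a full analysis each case's lnRR would be weighted (e.g. by inverse variance) before averaging; the back-transform above is only the final reporting step.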
As the density of wireless networks increases globally, the vulnerability of overlapping dense wireless communications to interference from hidden nodes and denial-of-service (DoS) attacks is becoming more apparent. There is a gap in research on detecting and responding to attacks on Medium Access Control (MAC) mechanisms themselves, which can lead to service outages between nodes. Classifying exploitation and deceptive jamming attacks on control mechanisms is particularly challenging due to their resemblance to normal heavy communication patterns. Accordingly, this paper proposes a machine learning-based selective attack mitigation model that detects DoS attacks on wireless networks by monitoring packet log data. Based on the type of attack detected, it applies an effective corresponding mitigation technique to restore performance to nodes whose availability has been compromised. Experimental results reveal that the accuracy of the proposed model is 14% higher than that of a baseline anomaly detection model. Further, the appropriate mitigation techniques selected by the proposed system based on the attack type improve average throughput by more than 440% compared to the case without a response.
Fish behaviour affects the performance of selection devices in fishing gears. Traditionally, fish behaviour in relation to selection devices is assessed by direct observation. However, this approach has limitations, and the observations are not explicitly incorporated into selectivity models. Further, underwater observation and quantification of fish behaviour can be challenging. In this study we outline and apply an indirect method to explicitly incorporate and quantify fish behaviour in trawl selectivity analysis. We use a set of structural models, based on modelling the actual processes believed to determine the size selection of the device, to discern which behaviours most likely explain the selectivity process. By bootstrapping, we assess how confident we can be in the choice of a specific structural model and in discerning the associated behavioural aspects. We collected size selectivity data in the Barents Sea demersal trawl fishery targeting gadoids, where the use of a sorting grid is compulsory. Using our modelling approach, we obtained a deeper understanding of which behavioural processes most likely affect size selection in the sorting grids tested. Our approach can be applied to other fishing gears to understand and quantify fish behaviour in relation to size selectivity.
We introduce a new generalization of the exponentiated power Lindley distribution, called the exponentiated power Lindley power series (EPLPS) distribution. The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The distribution exhibits decreasing, increasing, unimodal and bathtub-shaped hazard rate functions, depending on its parameters. Several properties of the EPLPS distribution are investigated. Moreover, we discuss maximum likelihood estimation and provide formulas for the elements of the Fisher information matrix. Finally, applications to three real data sets show the flexibility and potential of the EPLPS distribution.
Deep neural network (DNN) models have achieved remarkable performance across diverse tasks, leading to widespread commercial adoption. However, training high-accuracy models demands extensive data, substantial computational resources, and significant time investment, making them valuable assets vulnerable to unauthorized exploitation. To address this issue, this paper proposes an intellectual property (IP) protection framework for DNN models based on feature-layer selection and hyper-chaotic mapping. First, a sensitivity-based importance evaluation algorithm identifies the key feature layers for encryption, effectively protecting the core components of the model. Next, the L1 regularization criterion is applied to further select high-weight features that significantly impact the model's performance, ensuring that the encryption process minimizes performance loss. Finally, a dual-layer encryption mechanism is designed, introducing perturbations into the weight values and utilizing hyper-chaotic mapping to disrupt channel information, further enhancing the model's security. Experimental results demonstrate that encrypting only a small subset of parameters effectively reduces model accuracy to random-guessing levels while ensuring full recoverability. The scheme exhibits strong robustness against model pruning and fine-tuning attacks and maintains consistent performance across multiple datasets, providing an efficient and practical solution for authorization-based DNN IP protection.
The literature on multi-attribute optimization for renewable energy source (RES) placement in deregulated power markets is extensive and methodologically diverse. This study focuses on the publications most directly relevant to the research problem at hand. Similarly, while the body of work on the optimal location and sizing of renewable energy generators (REGs) in balanced distribution systems is substantial, only the most pertinent sources are cited, aligning closely with the study's objective function. A comprehensive literature review reveals several key research areas: RES integration, RES-related optimization techniques, strategic placement of wind and solar generation, and RES promotion in deregulated power markets, particularly within transmission systems. Furthermore, the optimal location and sizing of REGs in both balanced and unbalanced distribution systems have been extensively studied. RESs demonstrate significant potential for standalone applications in remote areas lacking conventional transmission and distribution infrastructure. This paper therefore provides a comprehensive review of current modeling and optimization approaches for the location and sizing of RESs in distribution systems, covering both balanced and unbalanced networks, and additionally examines the optimal positioning, sizing, and performance of hybrid and standalone renewable energy systems.
The performance of six statistical approaches that can be used to select the best model for describing the growth of individual fish was analyzed using simulated and real length-at-age data. The six approaches are the coefficient of determination (R²), the adjusted coefficient of determination (adj.-R²), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc) and the Bayesian information criterion (BIC). The simulated data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters of each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R² and RMSE have the same properties and perform worst. Sample size affects the performance of adj.-R², AIC, AICc and BIC. Adj.-R² does better in small samples than in large samples. AIC is not suitable for small samples and tends to select more complex models as the sample size becomes large. AICc and BIC perform best in small and large samples, respectively. Use of AICc or BIC is recommended for selecting a fish growth model, according to the size of the length-at-age data.
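For least-squares fits under the i.i.d. Gaussian (additive) error assumption used above, the three information criteria reduce to simple functions of the residual sum of squares (RSS), the sample size n and the parameter count k. A minimal sketch of one common formulation (additive constants dropped, since they cancel when comparing models on the same data):

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """AIC for a least-squares fit with Gaussian errors (constants dropped)."""
    return n * math.log(rss / n) + 2 * k

def aicc(rss: float, n: int, k: int) -> float:
    """Small-sample corrected AIC; the correction term vanishes as n grows."""
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(rss: float, n: int, k: int) -> float:
    """BIC; its ln(n) penalty exceeds AIC's once n is larger than about 7."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical comparison: growth model A (3 params) vs. B (5 params), n = 30 fish
for name, rss, k in [("A", 120.0, 3), ("B", 100.0, 5)]:
    print(name, round(aic(rss, 30, k), 2), round(aicc(rss, 30, k), 2), round(bic(rss, 30, k), 2))
```

The model with the lowest criterion value is preferred; as the abstract notes, AICc and BIC differ mainly in how harshly they penalize the extra parameters of the more complex model at a given n.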
Comparative fishing experiments were carried out in 2010 using tube traps with five hole diameters (8, 15, 18, 20 and 22 mm) to establish the size selectivity of escape holes for white-spotted conger. Selectivity and split parameters of the SELECT model were calculated using the estimated-split and equal-split models. From likelihood ratio tests and AIC (Akaike's information criterion) values, the estimated-split model was selected as the best-fit model. The size selectivity of escape holes in the tube traps was expressed as a logistic curve, similar to mesh selectivity. The 50% selection length of white-spotted conger in the estimated-split model was 28.26, 33.35, 39.31 and 47.30 cm for escape-hole diameters of 15, 18, 20 and 22 mm, respectively. The optimum escape-hole size is discussed with respect to management of the white-spotted conger fishery. The results indicate that tube traps with escape holes 18 mm in diameter would benefit this fishery.
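The logistic selection curve mentioned above has the standard form r(L) = 1 / (1 + exp(-(a + bL))), with the 50% selection length L50 = -a/b. A small sketch with hypothetical parameters chosen so that L50 matches the reported 28.26 cm for 15 mm holes (the study's actual fitted a and b are not given in the abstract):

```python
import math

def retention(length_cm: float, a: float, b: float) -> float:
    """Logistic size-selection curve r(L) = 1 / (1 + exp(-(a + b*L)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * length_cm)))

def l50(a: float, b: float) -> float:
    """Length at 50% retention, where a + b*L = 0."""
    return -a / b

a, b = -11.304, 0.4  # hypothetical values giving L50 = 28.26 cm
print(round(l50(a, b), 2))                # 28.26
print(round(retention(l50(a, b), a, b), 2))  # 0.5 by construction
```

In SELECT-style analyses these parameters are estimated by maximum likelihood from catch-comparison counts rather than set by hand; the sketch only illustrates how L50 is read off the fitted curve.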
Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates PGA from the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust, since it possesses the optimal balance between data-fitting capability and sensitivity to noise. A database of strong ground motion records in the Tangshan region of China was obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
Rodents have been widely used in the production of cerebral ischemia models. However, therapies proven successful in experimental rodent stroke models have often failed to be effective when tested clinically. Therefore, nonhuman primates have been recommended as the ideal alternative, owing to their similarities to the human cerebrovascular system, brain metabolism, grey-to-white matter ratio and even their rich behavioral repertoire. The present review is a thorough summary of ten methods for establishing nonhuman primate models of focal cerebral ischemia: electrocoagulation, endothelin-1-induced occlusion, microvascular clip occlusion, autologous blood clot embolization, balloon inflation, microcatheter embolization, coil embolization, surgical suture embolization, suture, and photochemical induction. This review addresses the advantages and disadvantages of each method, as well as precautions for each model, and compares nonhuman primates with rodents, different species of nonhuman primates, and different modeling methods. Finally, it discusses the various factors that need to be considered when modeling and the methods of evaluation after modeling. These points are critical for understanding the respective strengths and weaknesses of the models and underlie the selection of the optimum one.
Modern engineering design optimization often relies on computer simulations to evaluate candidate designs, a setup which results in expensive black-box optimization problems. Such problems introduce unique challenges, which have motivated the application of metamodel-assisted computational intelligence algorithms to solve them. Such algorithms combine a computational intelligence optimizer, which employs a population of candidate solutions, with a metamodel, which is a computationally cheaper approximation of the expensive computer simulation. However, although a variety of metamodels and optimizers have been proposed, the optimal types to employ are problem-dependent. Therefore, prescribing a priori the type of metamodel and optimizer to be used may degrade the algorithm's effectiveness. Addressing this issue, this study proposes a new computational intelligence algorithm which autonomously adapts the types of metamodel and optimizer during the search by selecting the most suitable types out of a family of candidates at each stage. Performance analysis using a set of test functions demonstrates the effectiveness of the proposed algorithm and highlights the merit of the proposed adaptation approach.
Evaluation of numerical earthquake forecasting models needs to consider two issues of equal importance: the application scenario of the simulation, and the complexity of the model. Criteria for evaluation-based model selection face some interesting problems in need of discussion.
Time-series-based forecasting is essential to determine how past events affect future events. This paper compares the forecasting accuracy of different time-series models for oil prices. Three types of univariate models are discussed: the exponential smoothing (ES), Holt-Winters (HW) and autoregressive integrated moving average (ARIMA) models. To determine the best model, six different strategies were applied as selection criteria to quantify the models' prediction accuracy. This comparison should help policy makers and industry marketing strategists select the best forecasting method for the oil market. The three models were compared by applying them to the time series of regular oil prices for West Texas Intermediate (WTI) crude. The comparison indicated that the HW model performed better than the ES model for prediction with a 95% confidence interval. However, the ARIMA(2, 1, 2) model yielded the best results, leading us to conclude that this sophisticated and robust model outperformed the other simple yet flexible models for the oil market.
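The simplest of the three model families above, simple exponential smoothing, updates a level by s_t = α·x_t + (1-α)·s_{t-1} and uses the last level as the one-step-ahead forecast. A self-contained sketch with hypothetical price data (not the WTI series used in the paper; HW and ARIMA add trend/seasonal and differencing/AR/MA terms on top of this idea):

```python
def exponential_smoothing(series: list[float], alpha: float) -> list[float]:
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = series[0]          # initialize the level at the first observation
    fitted = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        fitted.append(s)
    return fitted

def one_step_forecast(series: list[float], alpha: float) -> float:
    """The one-step-ahead ES forecast is simply the last smoothed level."""
    return exponential_smoothing(series, alpha)[-1]

# Hypothetical weekly oil prices (USD/barrel)
prices = [70.0, 72.0, 71.0, 73.0, 74.0]
print(one_step_forecast(prices, 0.5))  # 73.0
```

In practice α would itself be chosen by minimizing a prediction-error criterion on a holdout, which is exactly the kind of selection strategy the paper compares.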
The increase in greenhouse gases has pushed scientists to substitute alternative fuels for fossil fuels. Nowadays, converting biomass into liquid via Fischer-Tropsch synthesis is a major route to alternative fuels (gasoline, diesel, etc.), and the selectivity of the Fischer-Tropsch hydrocarbon product (green fuel) is an important issue. In this study, experimental data were obtained for three factors (temperature, H2/CO ratio and pressure) in a fixed-bed micro reactor. The reactor conditions were T = 543-618 K, P = 3-10 bar, H2/CO = 1-2 and space velocity = 4500 l/h. Model predictions for methane (CH4), ethane (C2H6), ethylene (C2H4) and CO conversion were compared with the experimental data, and the effective parameters and the interactions between them were investigated in the model. The H2/CO ratio, pressure and the pressure-H2/CO interaction gave the best results in the ethane selectivity model, while the temperature-H2/CO interaction did so in the methane and ethylene selectivity models and for CO conversion. To determine the optimal conditions for light hydrocarbons, ANOVA and response surface methodology (RSM) were employed. Finally, product optimization was performed and conclusions were drawn.
Widely used deep neural networks currently face limitations in achieving optimal performance for purchase intention prediction due to constraints on data volume and hyperparameter selection. To address this issue, this paper proposes a novel Deep Adaptive Evolutionary Ensemble (DAEE) model, based on the deep forest algorithm and further integrating evolutionary ensemble learning methods. This model introduces model diversity into the cascade layer, allowing it to adaptively adjust its structure to accommodate complex and evolving purchasing behavior patterns. Moreover, this paper optimizes the methods of obtaining feature vectors, enhancement vectors, and prediction results within the deep forest algorithm to improve the model's predictive accuracy. Results demonstrate that the improved deep forest model not only possesses higher robustness but also shows an increase of 5.02% in AUC value compared to the baseline model. Furthermore, its training runtime is 6 times faster than that of deep models, and compared to other improved models, its accuracy is enhanced by 0.9%.
Soybean frogeye leaf spot (FLS) is a global disease affecting soybean yield, especially in the soybean growing area of Heilongjiang Province. In order to realize genomic selection breeding for FLS resistance in soybean, least absolute shrinkage and selection operator (LASSO) regression and stepwise regression were combined, and a genomic selection model was established linking 40 002 SNP markers covering the soybean genome to the relative lesion area of soybean FLS. As a result, 68 molecular markers controlling soybean FLS were detected accurately, and the phenotypic contribution rate of these markers reached 82.45%. The model established in this study can be used directly to evaluate soybean resistance to FLS and to select excellent offspring. This research method can also provide ideas and methods for disease-resistance breeding in other plants.
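The marker-screening step that LASSO contributes here rests on its soft-thresholding property: small least-squares effects are shrunk exactly to zero and the corresponding markers drop out of the model. A minimal sketch of that operator with hypothetical SNP effects (for standardized, orthonormal predictors the LASSO solution is exactly the soft-thresholded OLS coefficient; real genomic data require the full coordinate-descent fit):

```python
def soft_threshold(z: float, lam: float) -> float:
    """LASSO soft-thresholding: shrink z toward zero by lam, zeroing small effects."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical per-marker OLS effect estimates (names are illustrative only)
ols_effects = {"snp_1": 0.9, "snp_2": 0.15, "snp_3": -0.6}
lam = 0.3  # penalty level; in practice chosen by cross-validation
selected = {m: soft_threshold(b, lam) for m, b in ols_effects.items()}
print(selected)  # snp_2 is shrunk to exactly 0.0 and thus deselected
```

This is why LASSO is natural for reducing 40 002 candidate markers to a short list, which stepwise regression can then refine.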
Government credibility is an important asset of contemporary national governance, an important criterion for evaluating government legitimacy, and a key factor in measuring the effectiveness of government governance. In recent years, research on government credibility has mostly focused on exploring theories and mechanisms, with little empirical work on the topic. This article applies variable selection models from the field of social statistics to the issue of government credibility, in order to conduct empirical research on government credibility and explore its core influencing factors from a statistical perspective. Specifically, this article uses four regression-analysis-based methods and three random-forest-based methods to study the influencing factors of government credibility in various provinces of China, and compares the performance of these seven variable selection methods along different dimensions. The results show that there are certain differences in simplicity, accuracy, and variable importance ranking among the different variable selection methods, which take on different importance in the study of government credibility. This study provides a methodological reference for variable selection models in social science research, and also offers a multidimensional comparative perspective for analyzing the influencing factors of government credibility.
This study explores the application of Bayesian econometrics in policy evaluation through theoretical analysis. The research first reviews the theoretical foundations of Bayesian methods, including the concepts of Bayesian inference, prior distributions, and posterior distributions. Through systematic analysis, the study constructs a theoretical framework for applying Bayesian methods in policy evaluation. The research finds that Bayesian methods have multiple theoretical advantages in policy evaluation. Based on parameter uncertainty theory, Bayesian methods can better handle uncertainty in model parameters and provide more comprehensive estimates of policy effects; from the perspective of model selection theory, Bayesian model averaging can reduce model selection bias and enhance the robustness of evaluation results; according to causal inference theory, Bayesian causal inference methods provide new approaches for evaluating the causal effects of policies. The study also points out the complexities of applying Bayesian methods in policy evaluation, such as the selection of prior information and computational complexity. To address these complexities, the study proposes hybrid methods combining frequentist approaches and offers suggestions for developing computationally efficient algorithms. The research also discusses theoretical comparisons between Bayesian methods and other policy evaluation techniques, providing directions for future research.
Traditional methods for selecting models in experimental data analysis are susceptible to researcher bias, hindering the exploration of alternative explanations and potentially leading to overfitting. The Finite Information Quantity (FIQ) approach offers a novel solution by acknowledging the inherent limitations in the information processing capacity of physical systems. This framework facilitates the development of objective criteria for model selection (comparative uncertainty) and paves the way for a more comprehensive understanding of phenomena by exploring diverse explanations. This work presents a detailed comparison of the FIQ approach with ten established model selection methods, highlighting the advantages and limitations of each. We demonstrate the potential of FIQ to enhance the objectivity and robustness of scientific inquiry through three practical examples: selecting appropriate models for measuring fundamental constants, sound velocity, and underwater electrical discharges. Further research is warranted to explore the full applicability of FIQ across various scientific disciplines.
An improved Gaussian mixture model (GMM)-based clustering method is proposed for the difficult case where the true distribution of the data differs from the assumed GMM. First, an improved model selection criterion, the completed likelihood minimum message length criterion, is derived. It can measure both the goodness-of-fit of the candidate GMM to the data and the goodness-of-partition of the data. Second, by using the proposed criterion as the clustering objective function, an improved expectation-maximization (EM) algorithm is developed, which can avoid poor local optima compared to the standard EM algorithm for estimating the model parameters. The experimental results demonstrate that the proposed method can rectify the over-fitting tendency of representative GMM-based clustering approaches and can robustly provide more accurate clustering results.
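The EM algorithm that this work modifies alternates an E-step (computing each point's responsibility under each component) with an M-step (re-estimating weights, means and variances from those responsibilities). A minimal sketch for a two-component 1-D mixture on hypothetical data; this is the standard EM baseline, not the paper's completed-likelihood MML variant:

```python
import math

def normal_pdf(x: float, mu: float, var: float) -> float:
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_step(data, weights, means, variances):
    """One EM iteration for a two-component 1-D Gaussian mixture."""
    # E-step: responsibility of each component for each point
    resp = []
    for x in data:
        p = [w * normal_pdf(x, m, v) for w, m, v in zip(weights, means, variances)]
        s = sum(p)
        resp.append([pi / s for pi in p])
    # M-step: weighted re-estimation of the parameters
    n = len(data)
    new_w, new_m, new_v = [], [], []
    for k in range(2):
        nk = sum(r[k] for r in resp)
        mk = sum(r[k] * x for r, x in zip(resp, data)) / nk
        vk = sum(r[k] * (x - mk) ** 2 for r, x in zip(resp, data)) / nk
        new_w.append(nk / n)
        new_m.append(mk)
        new_v.append(max(vk, 1e-6))  # variance floor to avoid degenerate components
    return new_w, new_m, new_v

data = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]   # two well-separated hypothetical clusters
w, m, v = [0.5, 0.5], [0.0, 5.0], [1.0, 1.0]
for _ in range(20):
    w, m, v = em_step(data, w, m, v)
print([round(x, 2) for x in m])  # means converge near the two cluster centres
```

Because each such run may stop at a poor local optimum, criterion-driven variants like the one proposed here change the objective that these iterations optimize rather than the iteration scheme itself.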
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA28110300); the National Natural Science Foundation of China (No. U23A2004); the Natural Science Foundation of Jilin Province, China (No. YDZJ202201ZYTS522); the Science and Technology Cooperation Program between Jilin Province and the Chinese Academy of Sciences (No. 2023SYHZ0053); the Innovation Team Program of the Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences (No. 2023CXTD02); and the European Commission under Marie Skłodowska-Curie (No. 101034371).
Funding: supported by the Ministry of Trade, Industry and Energy (MOTIE) under Training Industrial Security Specialist for High-Tech Industry (RS-2024-00415520), supervised by the Korea Institute for Advancement of Technology (KIAT), and by the Ministry of Science and ICT (MSIT) under the ICT Challenge and Advanced Network of HRD (ICAN) Program (No. IITP-2022-RS-2022-00156310), supervised by the Institute of Information & Communication Technology Planning & Evaluation (IITP).
Funding: Part of the project FHF 901633 "Development of selectivity systems for gadoid trawls".
Abstract: Fish behaviour affects the performance of selection devices in fishing gears. Traditionally, fish behaviour in relation to selection devices is assessed by direct observation. However, this approach has limitations, and the observations are not explicitly incorporated in the selectivity models. Further, underwater observation and quantification of fish behaviour can be challenging. In this study we outline and use an indirect method to explicitly incorporate and quantify fish behaviour in trawl selectivity analysis. We use a set of structural models, based on modelling the actual processes believed to determine the size selection of the device, to discern which behaviours are most likely to explain the selectivity process. By bootstrapping we assess how confident we can be in the choice of a specific structural model and in discerning the associated behavioural aspects. We collected size-selectivity data in the Barents Sea demersal trawl fishery targeting gadoids, where the use of a sorting grid is compulsory. Using our modelling approach, we obtained a deeper understanding of which behavioural processes most likely affect size selectivity in the sorting grids tested. Our approach can be applied to other fishing gears to understand and quantify fish behaviour in relation to size selectivity.
Abstract: We introduce a new generalization of the exponentiated power Lindley distribution, called the exponentiated power Lindley power series (EPLPS) distribution. The new distribution arises in a latent complementary risks scenario, in which the lifetime associated with a particular risk is not observable; rather, we observe only the maximum lifetime value among all risks. The distribution exhibits decreasing, increasing, unimodal, and bathtub-shaped hazard rate functions, depending on its parameters. Several properties of the EPLPS distribution are investigated. Moreover, we discuss maximum likelihood estimation and provide formulas for the elements of the Fisher information matrix. Finally, applications to three real data sets show the flexibility and potential of the EPLPS distribution.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 62172280, in part by the Key Scientific Research Projects of Colleges and Universities in Henan Province, China under Grant No. 23A520006, and in part by the Henan Provincial Science and Technology Research Project under Grant No. 222102210199.
Abstract: Deep neural network (DNN) models have achieved remarkable performance across diverse tasks, leading to widespread commercial adoption. However, training high-accuracy models demands extensive data, substantial computational resources, and significant time investment, making them valuable assets vulnerable to unauthorized exploitation. To address this issue, this paper proposes an intellectual property (IP) protection framework for DNN models based on feature-layer selection and hyper-chaotic mapping. First, a sensitivity-based importance evaluation algorithm is used to identify the key feature layers for encryption, effectively protecting the core components of the model. Next, the L1 regularization criterion is applied to further select high-weight features that significantly impact the model's performance, ensuring that the encryption process minimizes performance loss. Finally, a dual-layer encryption mechanism is designed, introducing perturbations into the weight values and utilizing hyper-chaotic mapping to disrupt channel information, further enhancing the model's security. Experimental results demonstrate that encrypting only a small subset of parameters effectively reduces model accuracy to random-guessing levels while ensuring full recoverability. The scheme exhibits strong robustness against model pruning and fine-tuning attacks and maintains consistent performance across multiple datasets, providing an efficient and practical solution for authorization-based DNN IP protection.
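The paper's hyper-chaotic mapping is higher-dimensional; as a loose illustration of the underlying idea of key-driven, fully reversible weight perturbation, a one-dimensional logistic map can stand in. The seed and map parameter below are arbitrary choices for this sketch, not values from the paper.

```python
def logistic_map_stream(x0, r, n):
    # simple chaotic stream; the paper uses a hyper-chaotic map,
    # this 1-D logistic map only illustrates the principle
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def perturb_weights(weights, key=0.7031, r=3.99):
    # reversible additive perturbation keyed by the chaotic seed;
    # without the key, the stream (and thus the weights) cannot be recovered
    stream = logistic_map_stream(key, r, len(weights))
    return [w + s for w, s in zip(weights, stream)]

def recover_weights(enc, key=0.7031, r=3.99):
    # regenerating the same stream from the key undoes the perturbation
    stream = logistic_map_stream(key, r, len(enc))
    return [w - s for w, s in zip(enc, stream)]
```

An authorized user holding the key recovers the original weights exactly (up to floating-point rounding); anyone else sees weights scrambled enough to drive accuracy toward random guessing.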
Abstract: The literature on multi-attribute optimization for renewable energy source (RES) placement in deregulated power markets is extensive and methodologically diverse. This study focuses on the publications most relevant to the research problem at hand. Similarly, while the body of work on optimal location and sizing of renewable energy generators (REGs) in balanced distribution systems is substantial, only the most pertinent sources are cited, aligning closely with the study's objective function. A comprehensive literature review reveals several key research areas: RES integration, RES-related optimization techniques, strategic placement of wind and solar generation, and RES promotion in deregulated power markets, particularly within transmission systems. Furthermore, the optimal location and sizing of REGs in both balanced and unbalanced distribution systems have been extensively studied. RESs demonstrate significant potential for standalone applications in remote areas lacking conventional transmission and distribution infrastructure. This paper provides a comprehensive review of current modeling and optimization approaches for the location and sizing of RESs in distribution systems, focusing on both balanced and unbalanced networks, and also examines the optimal positioning, sizing, and performance of hybrid and standalone renewable energy systems.
Funding: Supported by the High Technology Research and Development Program of China (863 Program, No. 2006AA100301).
Abstract: The performance of six statistical approaches that can be used to select the best model for describing the growth of individual fish was analyzed using simulated and real length-at-age data. The six approaches are the coefficient of determination (R2), the adjusted coefficient of determination (adj.-R2), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc), and the Bayesian information criterion (BIC). The simulated data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters of each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R2 and RMSE have the same properties and perform worst. Sample size affects the performance of adj.-R2, AIC, AICc, and BIC. Adj.-R2 does better in small samples than in large samples. AIC is not suitable for small samples and tends to select more complex models as the sample size grows. AICc and BIC perform best in the small- and large-sample cases, respectively. Use of AICc or BIC is therefore recommended for selecting a fish growth model, according to the size of the length-at-age data set.
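The information criteria compared in the abstract are simple functions of the maximized log-likelihood. A minimal sketch follows; the model names and log-likelihood values are hypothetical, chosen only to illustrate how the criteria trade goodness-of-fit against parameter count.

```python
import math

def aic(logL, k):
    # Akaike's information criterion: 2k - 2 ln(L)
    return 2 * k - 2 * logL

def aicc(logL, k, n):
    # AIC with the small-sample bias correction term
    return aic(logL, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(logL, k, n):
    # Bayesian information criterion: k ln(n) - 2 ln(L)
    return k * math.log(n) - 2 * logL

# hypothetical maximized log-likelihoods for two candidate growth models
candidates = {"von Bertalanffy (3 params)": (-120.4, 3),
              "Richards (4 params)": (-119.8, 4)}
n = 25  # number of length-at-age observations (small sample, so use AICc)
scores = {name: aicc(logL, k, n) for name, (logL, k) in candidates.items()}
best = min(scores, key=scores.get)  # lowest AICc wins
```

Note how the extra Richards parameter buys only a small log-likelihood gain, so the small-sample penalty in AICc favors the simpler model, exactly the behavior the abstract recommends exploiting.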
Funding: Supported by the National Key Technology Research and Development Program of China (No. 2006BAD09A05).
Abstract: Comparative fishing experiments were carried out in 2010 using tube traps with five escape-hole diameters (8, 15, 18, 20, and 22 mm) to establish the size selectivity of escape holes for white-spotted conger. Selectivity and split parameters of the SELECT model were calculated using the estimated-split and equal-split models. Based on likelihood ratio tests and AIC (Akaike's Information Criterion) values, the estimated-split model was selected as the best fit. The size selectivity of the escape holes in the tube traps was expressed as a logistic curve, similar to mesh selectivity. The 50% selection length of white-spotted conger under the estimated-split model was 28.26, 33.35, 39.31, and 47.30 cm for escape-hole diameters of 15, 18, 20, and 22 mm, respectively. The optimum escape-hole size is discussed with respect to management of the white-spotted conger fishery. The results indicate that tube traps with 18 mm escape holes would benefit this fishery.
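The logistic selection curve mentioned in the abstract is commonly parameterized by the 50% selection length (L50) and the selection range (SR = L75 − L25). A minimal sketch, using the reported L50 for the 18 mm hole; the SR value is assumed for illustration only, as the abstract does not report it.

```python
import math

def logistic_selectivity(length, l50, sr):
    # probability that a fish of the given length is retained;
    # l50 = 50% selection length, sr = selection range (L75 - L25)
    return 1.0 / (1.0 + math.exp(-2.0 * math.log(3.0) * (length - l50) / sr))

# reported L50 for the 18 mm escape hole; SR = 6 cm is an assumption
l50, sr = 33.35, 6.0
p_small = logistic_selectivity(25.0, l50, sr)  # small conger mostly escape
p_large = logistic_selectivity(45.0, l50, sr)  # large conger mostly retained
```

The 2·ln(3)/SR scaling guarantees that retention is exactly 25% and 75% at L50 ∓ SR/2, which is why SR is the conventional second parameter alongside L50.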
Funding: Supported by the Research Committee of the University of Macao under Research Grant No. MYRG081(Y1-L2)-FST13-YKV and by the Science and Technology Development Fund of the Macao SAR government under Grant No. 012/2013/A1.
Abstract: Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates PGA from the moment magnitude, the site-to-fault distance, and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust, since it possesses the optimal balance between data-fitting capability and sensitivity to noise. A database of strong ground motion records in the Tangshan region of China was obtained from the China Earthquake Data Center for the analysis, and the optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
Funding: Supported by the National Natural Science Foundation of China, No. 81000852 and 81301677; the AHA Award, No. 17POST32530004; the Supporting Project of Science & Technology of Sichuan Province of China, No. 2012SZ0140; and the Research Foundation of Zhejiang Province of China, No. 201022896.
Abstract: Rodents have been widely used to produce cerebral ischemia models. However, therapies proven successful in experimental rodent stroke models have often failed to be effective when tested clinically. Therefore, nonhuman primates have been recommended as ideal alternatives, owing to their similarities to the human cerebrovascular system, brain metabolism, and grey-to-white matter ratio, and even their rich behavioral repertoire. The present review is a thorough summary of ten methods for establishing nonhuman primate models of focal cerebral ischemia: electrocoagulation, endothelin-1-induced occlusion, microvascular clip occlusion, autologous blood clot embolization, balloon inflation, microcatheter embolization, coil embolization, surgical suture embolization, suture, and photochemical induction. This review addresses the advantages, disadvantages, and precautions of each method, and compares nonhuman primates with rodents, different species of nonhuman primates, and different modeling methods. Finally, it discusses the factors that need to be considered when modeling and the methods of evaluation after modeling. These points are critical for understanding each model's strengths and weaknesses and underlie the selection of the optimum model.
Abstract: Modern engineering design optimization often relies on computer simulations to evaluate candidate designs, a setup which results in expensive black-box optimization problems. Such problems introduce unique challenges, which have motivated the application of metamodel-assisted computational intelligence algorithms. Such algorithms combine a computational intelligence optimizer, which employs a population of candidate solutions, with a metamodel, a computationally cheaper approximation of the expensive computer simulation. However, although a variety of metamodels and optimizers have been proposed, the optimal types to employ are problem dependent, so prescribing the type of metamodel and optimizer a priori may degrade effectiveness. Motivated by this issue, this study proposes a new computational intelligence algorithm which autonomously adapts the types of metamodel and optimizer during the search by selecting the most suitable types from a family of candidates at each stage. Performance analysis on a set of test functions demonstrates the effectiveness of the proposed algorithm and highlights the merit of the proposed adaptation approach.
Funding: Supported by the National Natural Science Foundation of China (NSFC, Grant No. U2039207).
Abstract: Evaluation of numerical earthquake forecasting models needs to consider two issues of equal importance: the application scenario of the simulation and the complexity of the model. Evaluation-based model selection criteria face some interesting problems in need of discussion.
Abstract: Time-series-based forecasting is essential for determining how past events affect future events. This paper compares the prediction accuracy of different time-series models for oil prices. Three types of univariate models are discussed: the exponential smoothing (ES), Holt-Winters (HW), and autoregressive integrated moving average (ARIMA) models. To determine the best model, six different strategies were applied as selection criteria to quantify these models' prediction accuracies. This comparison should help policy makers and industry marketing strategists select the best forecasting method for the oil market. The three models were compared by applying them to the time series of regular oil prices for West Texas Intermediate (WTI) crude. The comparison indicated that the HW model performed better than the ES model for prediction with a 95% confidence interval. However, the ARIMA(2, 1, 2) model yielded the best results, leading us to conclude that this sophisticated and robust model outperformed the simpler yet flexible models in the oil market.
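Of the three model families compared, simple exponential smoothing is the easiest to sketch in a few lines. A minimal pure-Python version follows; the price series is invented for illustration and is not WTI data.

```python
def simple_exp_smooth(series, alpha):
    """One-step-ahead simple exponential smoothing forecasts.
    alpha in (0, 1] controls how heavily recent observations are weighted."""
    level = series[0]          # initialize the level at the first observation
    forecasts = [level]
    for y in series[1:]:
        # new level = weighted blend of the latest observation and old level
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts           # forecasts[t] is the prediction for series[t+1]

# hypothetical monthly oil prices (USD/barrel), for illustration only
prices = [78.2, 79.5, 81.1, 80.4, 82.9, 84.0]
fc = simple_exp_smooth(prices, alpha=0.3)
next_month = fc[-1]            # forecast for the month after the series ends
```

HW extends this recursion with trend and seasonal components, and ARIMA replaces it with a regression on lagged values and lagged errors after differencing, which is why those models can capture structure that ES misses.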
Abstract: The increase in greenhouse gases has led scientists to substitute alternative fuels for fossil fuels. Converting biomass to liquid by Fischer-Tropsch synthesis is now a major route to alternative fuels (gasoline, diesel, etc.), and the selectivity of the Fischer-Tropsch hydrocarbon product (green fuel) is an important issue. In this study, experimental data were obtained for three factors, temperature, H2/CO ratio, and pressure, in a fixed-bed micro-reactor under the conditions T = 543-618 K, P = 3-10 bar, H2/CO = 1-2, and space velocity = 4500 1/h. The modeled products, methane (CH4), ethane (C2H6), ethylene (C2H4), and CO conversion, were compared with the experimental data, and the effective parameters and the interactions between them were investigated in the model. The H2/CO ratio, pressure, and their interaction in the ethane selectivity model, and CO conversion with the interaction between temperature and H2/CO ratio in the methane and ethylene selectivity models, gave the best results. To determine the optimal conditions for light hydrocarbons, ANOVA and response surface methodology (RSM) were employed. Finally, product optimization was performed and conclusions were drawn.
Funding: Supported by the Ningxia Key R&D Program (Key) Project (2023BDE02001); the Ningxia Key R&D Program (Talent Introduction Special) Project (2022YCZX0013); the North Minzu University 2022 School-Level Research Platform "Digital Agriculture Empowering Ningxia Rural Revitalization Innovation Team", Project Number 2022PT_S10; the Yinchuan City School-Enterprise Joint Innovation Project (2022XQZD009); and the "Innovation Team for Imaging and Intelligent Information Processing" of the National Ethnic Affairs Commission.
Abstract: Widely used deep neural networks currently face limitations in achieving optimal performance for purchase-intention prediction due to constraints on data volume and hyperparameter selection. To address this issue, this paper proposes a novel Deep Adaptive Evolutionary Ensemble (DAEE) model based on the deep forest algorithm, further integrating evolutionary ensemble learning methods. The model introduces diversity into the cascade layer, allowing it to adaptively adjust its structure to accommodate complex and evolving purchasing behavior patterns. Moreover, the paper optimizes the methods for obtaining feature vectors, enhancement vectors, and prediction results within the deep forest algorithm to enhance predictive accuracy. Results demonstrate that the improved deep forest model is more robust and increases the AUC value by 5.02% over the baseline model. Furthermore, its training runtime is 6 times faster than that of deep models, and compared with other improved models, its accuracy is enhanced by 0.9%.
Funding: Supported by the National Key Research and Development Program of China (2021YFD1201103-01-05).
Abstract: Soybean frogeye leaf spot (FLS) is a global disease affecting soybean yield, especially in the soybean-growing areas of Heilongjiang Province. To enable genomic selection breeding for FLS resistance in soybean, least absolute shrinkage and selection operator (LASSO) regression and stepwise regression were combined, and a genomic selection model was established relating 40,002 SNP markers covering the soybean genome to the relative lesion area of soybean FLS. As a result, 68 molecular markers controlling soybean FLS were detected accurately, and the phenotypic contribution rate of these markers reached 82.45%. The model established in this study can be used directly to evaluate soybean FLS resistance and to select superior offspring. This method may also provide ideas and approaches for disease-resistance breeding in other plants.
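The LASSO machinery the abstract invokes reduces to a soft-thresholding operator inside a cyclic coordinate-descent loop, which is easy to sketch. The tiny single-feature data set below is illustrative only, nothing like the 40,002-marker panel used in the study.

```python
def soft_threshold(z, gamma):
    # soft-thresholding operator: shrinks z toward zero by gamma,
    # zeroing it out entirely when |z| <= gamma (this is what drops markers)
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Minimal LASSO via cyclic coordinate descent.
    X: list of feature columns, y: response, lam: penalty strength."""
    n, p = len(y), len(X)
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j's contribution removed
            r = [y[i] - sum(beta[k] * X[k][i] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[j][i] * r[i] for i in range(n)) / n
            denom = sum(x * x for x in X[j]) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta
```

With lam = 0 this collapses to ordinary least squares; raising lam zeroes out weak coefficients, which is how a marker panel gets pruned to a small set of strong predictors before a stepwise refinement pass.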
Abstract: Government credibility is an important asset of contemporary national governance, an important criterion for evaluating government legitimacy, and a key factor in measuring the effectiveness of governance. In recent years, research on government credibility has mostly focused on theories and mechanisms, with little empirical work on the topic. This article applies variable selection models from social statistics to the issue of government credibility, in order to study it empirically and explore its core influencing factors from a statistical perspective. Specifically, we use four regression-based methods and three random-forest-based methods to study the influencing factors of government credibility in the provinces of China, and compare the performance of these seven variable selection methods along several dimensions. The results show that the methods differ in simplicity, accuracy, and variable-importance ranking, and thus carry different weight in the study of government credibility. This study provides a methodological reference for variable selection models in social science research and offers a multidimensional comparative perspective for analyzing the influencing factors of government credibility.
文摘This study explores the application of Bayesian econometrics in policy evaluation through theoretical analysis. The research first reviews the theoretical foundations of Bayesian methods, including the concepts of Bayesian inference, prior distributions, and posterior distributions. Through systematic analysis, the study constructs a theoretical framework for applying Bayesian methods in policy evaluation. The research finds that Bayesian methods have multiple theoretical advantages in policy evaluation: Based on parameter uncertainty theory, Bayesian methods can better handle uncertainty in model parameters and provide more comprehensive estimates of policy effects;from the perspective of model selection theory, Bayesian model averaging can reduce model selection bias and enhance the robustness of evaluation results;according to causal inference theory, Bayesian causal inference methods provide new approaches for evaluating policy causal effects. The study also points out the complexities of applying Bayesian methods in policy evaluation, such as the selection of prior information and computational complexity. To address these complexities, the study proposes hybrid methods combining frequentist approaches and suggestions for developing computationally efficient algorithms. The research also discusses theoretical comparisons between Bayesian methods and other policy evaluation techniques, providing directions for future research.
文摘Traditional methods for selecting models in experimental data analysis are susceptible to researcher bias, hindering exploration of alternative explanations and potentially leading to overfitting. The Finite Information Quantity (FIQ) approach offers a novel solution by acknowledging the inherent limitations in information processing capacity of physical systems. This framework facilitates the development of objective criteria for model selection (comparative uncertainty) and paves the way for a more comprehensive understanding of phenomena through exploring diverse explanations. This work presents a detailed comparison of the FIQ approach with ten established model selection methods, highlighting the advantages and limitations of each. We demonstrate the potential of FIQ to enhance the objectivity and robustness of scientific inquiry through three practical examples: selecting appropriate models for measuring fundamental constants, sound velocity, and underwater electrical discharges. Further research is warranted to explore the full applicability of FIQ across various scientific disciplines.
Funding: Supported by the National Natural Science Foundation of China (No. 61105048, 60972165); the Doctoral Fund of the Ministry of Education of China (No. 20110092120034); the Natural Science Foundation of Jiangsu Province (No. BK2010240); the Technology Foundation for Selected Overseas Chinese Scholars, Ministry of Human Resources and Social Security of China (No. 6722000008); and the Open Fund of the Jiangsu Province Key Laboratory for Remote Measuring and Control (No. YCCK201005).
Abstract: An improved Gaussian mixture model (GMM)-based clustering method is proposed for the difficult case where the true distribution of the data departs from the assumed GMM. First, an improved model selection criterion, the completed-likelihood minimum message length criterion, is derived. It measures both the goodness-of-fit of the candidate GMM to the data and the goodness-of-partition of the data. Second, using the proposed criterion as the clustering objective function, an improved expectation-maximization (EM) algorithm is developed that avoids the poor local optima the standard EM algorithm can reach when estimating the model parameters. The experimental results demonstrate that the proposed method rectifies the over-fitting tendency of representative GMM-based clustering approaches and robustly provides more accurate clustering results.
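For reference, the standard EM baseline the abstract improves upon can be sketched for a two-component one-dimensional mixture. This is the textbook EM recursion, not the completed-likelihood minimum-message-length variant proposed in the paper.

```python
import math

def em_gmm_1d(data, n_iter=50):
    """Two-component 1-D Gaussian mixture fitted by standard EM."""
    # crude initialization from the data range
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return pi, mu, var

# two well-separated clusters; EM should recover means near 0 and 10
data = [-0.3, 0.1, 0.2, -0.1, 9.8, 10.1, 10.3, 9.9]
pi, mu, var = em_gmm_1d(data)
```

Each iteration monotonically increases the data likelihood, but from a poor start it can stall at a local optimum, which is the failure mode the improved criterion and algorithm in the paper target.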