Liposomes serve as critical carriers for drugs and vaccines,with their biological effects influenced by their size.The microfluidic method,renowned for its precise control,reproducibility,and scalability,has been wide...Liposomes serve as critical carriers for drugs and vaccines,with their biological effects influenced by their size.The microfluidic method,renowned for its precise control,reproducibility,and scalability,has been widely employed for liposome preparation.Although some studies have explored factors affecting liposomal size in microfluidic processes,most focus on small-sized liposomes,predominantly through experimental data analysis.However,the production of larger liposomes,which are equally significant,remains underexplored.In this work,we thoroughly investigate multiple variables influencing liposome size during microfluidic preparation and develop a machine learning(ML)model capable of accurately predicting liposomal size.Experimental validation was conducted using a staggered herringbone micromixer(SHM)chip.Our findings reveal that most investigated variables significantly influence liposomal size,often interrelating in complex ways.We evaluated the predictive performance of several widely-used ML algorithms,including ensemble methods,through cross-validation(CV)for both lipo-some size and polydispersity index(PDI).A standalone dataset was experimentally validated to assess the accuracy of the ML predictions,with results indicating that ensemble algorithms provided the most reliable predictions.Specifically,gradient boosting was selected for size prediction,while random forest was employed for PDI prediction.We successfully produced uniform large(600 nm)and small(100 nm)liposomes using the optimised experimental conditions derived from the ML models.In conclusion,this study presents a robust methodology that enables precise control over liposome size distribution,of-fering valuable insights for medicinal research applications.展开更多
The presence of aluminum(Al^(3+))and fluoride(F^(−))ions in the environment can be harmful to ecosystems and human health,highlighting the need for accurate and efficient monitoring.In this paper,an innovative approac...The presence of aluminum(Al^(3+))and fluoride(F^(−))ions in the environment can be harmful to ecosystems and human health,highlighting the need for accurate and efficient monitoring.In this paper,an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of aluminum(Al^(3+))and fluoride(F^(−))ions in aqueous solutions.The proposed method involves the synthesis of sulfur-functionalized carbon dots(C-dots)as fluorescence probes,with fluorescence enhancement upon interaction with Al^(3+)ions,achieving a detection limit of 4.2 nmol/L.Subsequently,in the presence of F^(−)ions,fluorescence is quenched,with a detection limit of 47.6 nmol/L.The fingerprints of fluorescence images are extracted using a cross-platform computer vision library in Python,followed by data preprocessing.Subsequently,the fingerprint data is subjected to cluster analysis using the K-means model from machine learning,and the average Silhouette Coefficient indicates excellent model performance.Finally,a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions.The results demonstrate that the developed model excels in terms of accuracy and sensitivity.This groundbreaking model not only showcases exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment,making it a valuable tool for safeguarding our ecosystems and public health.展开更多
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions.However,accurately predicting their undrained bearing capacity in layered so...Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions.However,accurately predicting their undrained bearing capacity in layered soils remains a complex challenge.This study presents a novel application of five ensemble machine(ML)algorithms-random forest(RF),gradient boosting machine(GBM),extreme gradient boosting(XGBoost),adaptive boosting(AdaBoost),and categorical boosting(CatBoost)-to predict the undrained bearing capacity factor(Nc)of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis(FELA).The input dataset consists of 1188 numerical simulations using the Tresca failure criterion,varying in geometrical and soil parameters.The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies.The ML models were trained on 70% of the dataset and tested on the remaining 30%.Their performance was evaluated using six statistical metrics:coefficient of determination(R²),mean absolute error(MAE),root mean squared error(RMSE),index of scatter(IOS),RMSE-to-standard deviation ratio(RSR),and variance explained factor(VAF).The results indicate that all the models achieved high accuracy,with R²values exceeding 97.6%and RMSE values below 0.02.Among them,AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets,demonstrating superior generalizability and robustness.The proposed ML framework offers an efficient,accurate,and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils.This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.展开更多
Recent studies have shown that synergistic precipitation of continuous precipitates(CPs)and discontinuous precipitates(DPs)is a promising method to simultaneously improve the strength and electrical conductivity of Cu...Recent studies have shown that synergistic precipitation of continuous precipitates(CPs)and discontinuous precipitates(DPs)is a promising method to simultaneously improve the strength and electrical conductivity of Cu-Ni-Si alloy.However,the complex relationship between precipitates and two-stage aging process presents a significant challenge for the optimization of process parameters.In this study,machine learning models were established based on orthogonal experiment to mine the relationship between two-stage aging parameters and properties of Cu-5.3Ni-1.3Si-0.12Nb alloy with preferred formation of DPs.Two-stage aging parameters of 400℃/75 min+400℃/30 min were then obtained by multi-objective optimization combined with an experimental iteration strategy,resulting in a tensile strength of 875 MPa and a conductivity of 41.43%IACS,respectively.Such an excellent comprehensive performance of the alloy is attributed to the combined precipitation of DPs and CPs(with a total volume fraction of 5.4%and a volume ratio of CPs to DPs of 6.7).This study could provide a new approach and insight for improving the comprehensive properties of the Cu-Ni-Si alloys.展开更多
To better understand the migration behavior of plastic fragments in the environment,development of rapid non-destructive methods for in-situ identification and characterization of plastic fragments is necessary.Howeve...To better understand the migration behavior of plastic fragments in the environment,development of rapid non-destructive methods for in-situ identification and characterization of plastic fragments is necessary.However,most of the studies had focused only on colored plastic fragments,ignoring colorless plastic fragments and the effects of different environmental media(backgrounds),thus underestimating their abundance.To address this issue,the present study used near-infrared spectroscopy to compare the identification of colored and colorless plastic fragments based on partial least squares-discriminant analysis(PLS-DA),extreme gradient boost,support vector machine and random forest classifier.The effects of polymer color,type,thickness,and background on the plastic fragments classification were evaluated.PLS-DA presented the best and most stable outcome,with higher robustness and lower misclassification rate.All models frequently misinterpreted colorless plastic fragments and its background when the fragment thickness was less than 0.1mm.A two-stage modeling method,which first distinguishes the plastic types and then identifies colorless plastic fragments that had been misclassified as background,was proposed.The method presented an accuracy higher than 99%in different backgrounds.In summary,this study developed a novel method for rapid and synchronous identification of colored and colorless plastic fragments under complex environmental backgrounds.展开更多
Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide.Accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research probl...Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide.Accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research problem.Previous studies relied on statistical regression models that failed to capture the complex nonlinear relationships between carbon emissions and characteristic variables.In this study,we propose a machine learning algorithm for carbon emissions,a Bayesian optimized XGboost regression model,using multi-year energy carbon emission data and nighttime lights(NTL)remote sensing data from Shaanxi Province,China.Our results demonstrate that the XGboost algorithm outperforms linear regression and four other machine learning models,with an R^(2)of 0.906 and RMSE of 5.687.We observe an annual increase in carbon emissions,with high-emission counties primarily concentrated in northern and central Shaanxi Province,displaying a shift from discrete,sporadic points to contiguous,extended spatial distribution.Spatial autocorrelation clustering reveals predominantly high-high and low-low clustering patterns,with economically developed counties showing high-emission clustering and economically relatively backward counties displaying low-emission clustering.Our findings show that the use of NTL data and the XGboost algorithm can estimate and predict carbon emissionsmore accurately and provide a complementary reference for satellite remote sensing image data to serve carbon emission monitoring and assessment.This research provides an important theoretical basis for formulating practical carbon emission reduction policies and contributes to the development of techniques for accurate carbon emission estimation using remote sensing data.展开更多
Open networks and heterogeneous services in the Internet of Vehicles(IoV)can lead to security and privacy challenges.One key requirement for such systems is the preservation of user privacy,ensuring a seamless experie...Open networks and heterogeneous services in the Internet of Vehicles(IoV)can lead to security and privacy challenges.One key requirement for such systems is the preservation of user privacy,ensuring a seamless experience in driving,navigation,and communication.These privacy needs are influenced by various factors,such as data collected at different intervals,trip durations,and user interactions.To address this,the paper proposes a Support Vector Machine(SVM)model designed to process large amounts of aggregated data and recommend privacy preserving measures.The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure.It aims to minimize privacy risks while ensuring service continuity and sustainability.The SVMmodel helps validate the system’s reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations.The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.展开更多
Superconducting radio-frequency(SRF)cavities are the core components of SRF linear accelerators,making their stable operation considerably important.However,the operational experience from different accelerator labora...Superconducting radio-frequency(SRF)cavities are the core components of SRF linear accelerators,making their stable operation considerably important.However,the operational experience from different accelerator laboratories has revealed that SRF faults are the leading cause of short machine downtime trips.When a cavity fault occurs,system experts analyze the time-series data recorded by low-level RF systems and identify the fault type.However,this requires expertise and intuition,posing a major challenge for control-room operators.Here,we propose an expert feature-based machine learning model for automating SRF cavity fault recognition.The main challenge in converting the"expert reasoning"process for SRF faults into a"model inference"process lies in feature extraction,which is attributed to the associated multidimensional and complex time-series waveforms.Existing autoregression-based feature-extraction methods require the signal to be stable and autocorrelated,resulting in difficulty in capturing the abrupt features that exist in several SRF failure patterns.To address these issues,we introduce expertise into the classification model through reasonable feature engineering.We demonstrate the feasibility of this method using the SRF cavity of the China accelerator facility for superheavy elements(CAFE2).Although specific faults in SRF cavities may vary across different accelerators,similarities exist in the RF signals.Therefore,this study provides valuable guidance for fault analysis of the entire SRF community.展开更多
The design of casting gating system directly determines the solidification sequence,defect severity,and overall quality of the casting.A novel machine learning strategy was developed to design the counter pressure cas...The design of casting gating system directly determines the solidification sequence,defect severity,and overall quality of the casting.A novel machine learning strategy was developed to design the counter pressure casting gating system of a large thin-walled cabin casting.A high-quality dataset was established through orthogonal experiments combined with design criteria for the gating system.Spearman’s correlation analysis was used to select high-quality features.The gating system dimensions were predicted using a gated recurrent unit(GRU)recurrent neural network and an elastic network model.Using EasyCast and ProCAST casting software,a comparative analysis of the flow field,temperature field,and solidification field can be conducted to demonstrate the achievement of steady filling and top-down sequential solidification.Compared to the empirical formula method,this method eliminates trial-and-error iterations,reduces porosity,reduces casting defect volume from 11.23 cubic centimeters to 2.23 cubic centimeters,eliminates internal casting defects through the incorporation of an internally cooled iron,fulfilling the goal of intelligent gating system design.展开更多
In order to study the characteristics of pure fly ash-based geopolymer concrete(PFGC)conveniently,we used a machine learning method that can quantify the perception of characteristics to predict its compressive streng...In order to study the characteristics of pure fly ash-based geopolymer concrete(PFGC)conveniently,we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength.In this study,505 groups of data were collected,and a new database of compressive strength of PFGC was constructed.In order to establish an accurate prediction model of compressive strength,five different types of machine learning networks were used for comparative analysis.The five machine learning models all showed good compressive strength prediction performance on PFGC.Among them,R2,MSE,RMSE and MAE of decision tree model(DT)are 0.99,1.58,1.25,and 0.25,respectively.While R2,MSE,RMSE and MAE of random forest model(RF)are 0.97,5.17,2.27 and 1.38,respectively.The two models have high prediction accuracy and outstanding generalization ability.In order to enhance the interpretability of model decision-making,we used importance ranking to obtain the perception of machine learning model to 13 variables.These 13 variables include chemical composition of fly ash(SiO_(2)/Al_(2)O_(3),Si/Al),the ratio of alkaline liquid to the binder,curing temperature,curing durations inside oven,fly ash dosage,fine aggregate dosage,coarse aggregate dosage,extra water dosage and sodium hydroxide dosage.Curing temperature,specimen ages and curing durations inside oven have the greatest influence on the prediction results,indicating that curing conditions have more prominent influence on the compressive strength of PFGC than ordinary Portland cement concrete.The importance of curing conditions of PFGC even exceeds that of the concrete mix proportion,due to the low reactivity of pure fly ash.展开更多
In traditional trial-and-error method,enhancing the Young's modulus of magnesium alloys while maintaining a favorable ductility has consistently been a challenge.It is a need to explore more efficient and expedite...In traditional trial-and-error method,enhancing the Young's modulus of magnesium alloys while maintaining a favorable ductility has consistently been a challenge.It is a need to explore more efficient and expedited methods to design magnesium alloys with high modulus and ductility.In this study,machine learning(ML)and assisted microstructure control methods are used to design high modulus magnesium alloys.Six key features that influence stiffness and ductility have been extracted in this ML model based on abundant data from literature sources.As a result,predictive models for Young's modulus and elongation are established,with errors<2.4%and 4.5%through XGBoost machine learning model,respectively.Within the given range of six features,the magnesium alloys can be fabricated with the Young's modulus exceeding 50 GPa and an elongation surpassing 6%.As a validation,Mg-Al-Y alloys were experimentally prepared to meet the criteria of six features,achieving Young's modulus of 51.5 GPa,and the elongation of 7%.Moreover,the SHapley Additive exPlanation(SHAP)is introduced to boost the model interpretability.This indicates that balancing the volume fraction of reinforcement,the most important feature,is key to achieve Mg-Al-Y alloys with high Young's modulus and favorable elongation through the two models.Enhancing reinforcement dispersion and reducing the size of reinforcement and grain can further improve the elongation of high-stiffness Mg alloy.展开更多
The graded density impactor(GDI)dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons.The accuracy and timeliness of GDI structural design are key to ...The graded density impactor(GDI)dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons.The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading.In this study,we have,for the first time,combined one-dimensional fluid computational software with machine learning methods.We first elucidated the mechanisms by which GDI structures control stress and strain rates.Subsequently,we constructed a machine learning model to create a structure-property response surface.The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates.In contrast,the impedance distribution index and target thickness have less significant effects on stress regulation,although there is a matching relationship between target thickness and interlayer thickness.Compared with traditional design methods,the machine learning approach offers a10^(4)—10^(5)times increase in efficiency and the potential to achieve a global optimum,holding promise for guiding the design of GDI.展开更多
Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify sp...Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.展开更多
Joint roughness coefficient(JRC)is the most commonly used parameter for quantifying surface roughness of rock discontinuities in practice.The system composed of multiple roughness statistical parameters to measure JRC...Joint roughness coefficient(JRC)is the most commonly used parameter for quantifying surface roughness of rock discontinuities in practice.The system composed of multiple roughness statistical parameters to measure JRC is a nonlinear system with a lot of overlapping information.In this paper,a dataset of eight roughness statistical parameters covering 112 digital joints is established.Then,the principal component analysis method is introduced to extract the significant information,which solves the information overlap problem of roughness characterization.Based on the two principal components of extracted features,the white shark optimizer algorithm was introduced to optimize the extreme gradient boosting model,and a new machine learning(ML)prediction model was established.The prediction accuracy of the new model and the other 17 models was measured using statistical metrics.The results show that the prediction result of the new model is more consistent with the real JRC value,with higher recognition accuracy and generalization ability.展开更多
Gas hydrate(GH)is an unconventional resource estimated at 1000-120,000 trillion m^(3)worldwide.Research on GH is ongoing to determine its geological and flow characteristics for commercial produc-tion.After two large-...Gas hydrate(GH)is an unconventional resource estimated at 1000-120,000 trillion m^(3)worldwide.Research on GH is ongoing to determine its geological and flow characteristics for commercial produc-tion.After two large-scale drilling expeditions to study the GH-bearing zone in the Ulleung Basin,the mineral composition of 488 sediment samples was analyzed using X-ray diffraction(XRD).Because the analysis is costly and dependent on experts,a machine learning model was developed to predict the mineral composition using XRD intensity profiles as input data.However,the model’s performance was limited because of improper preprocessing of the intensity profile.Because preprocessing was applied to each feature,the intensity trend was not preserved even though this factor is the most important when analyzing mineral composition.In this study,the profile was preprocessed for each sample using min-max scaling because relative intensity is critical for mineral analysis.For 49 test data among the 488 data,the convolutional neural network(CNN)model improved the average absolute error and coefficient of determination by 41%and 46%,respectively,than those of CNN model with feature-based pre-processing.This study confirms that combining preprocessing for each sample with CNN is the most efficient approach for analyzing XRD data.The developed model can be used for the compositional analysis of sediment samples from the Ulleung Basin and the Korea Plateau.In addition,the overall procedure can be applied to any XRD data of sediments worldwide.展开更多
Every second, a large volume of useful data is created in social media about the various kind of online purchases and in another forms of reviews. Particularly, purchased products review data is enormously growing in ...Every second, a large volume of useful data is created in social media about the various kind of online purchases and in another forms of reviews. Particularly, purchased products review data is enormously growing in different database repositories every day. Most of the review data are useful to new customers for theier further purchases as well as existing companies to view customers feedback about various products. Data Mining and Machine Leaning techniques are familiar to analyse such kind of data to visualise and know the potential use of the purchased items through online. The customers are making quality of products through their sentiments about the purchased items from different online companies. In this research work, it is analysed sentiments of Headphone review data, which is collected from online repositories. For the analysis of Headphone review data, some of the Machine Learning techniques like Support Vector Machines, Naive Bayes, Decision Trees and Random Forest Algorithms and a Hybrid method are applied to find the quality via the customers’ sentiments. The accuracy and performance of the taken algorithms are also analysed based on the three types of sentiments such as positive, negative and neutral.展开更多
Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines(BPM).In this study,the parallel BPM scheduling problem with machine eligibility in fabric dyeing ...Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines(BPM).In this study,the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered,and an adaptive cooperated shuffled frog-leaping algorithm(ACSFLA)is proposed to minimize makespan and total tardiness simultaneously.ACSFLA determines the search times for each memeplex based on its quality,with more searches in high-quality memeplexes.An adaptive cooperated and diversified search mechanism is applied,dynamically adjusting search strategies for each memeplex based on their dominance relationships and quality.During the cooperated search,ACSFLA uses a segmented and dynamic targeted search approach,while in non-cooperated scenarios,the search focuses on local search around superior solutions to improve efficiency.Furthermore,ACSFLA employs adaptive population division and partial population shuffling strategies.Through these strategies,memeplexes with low evolutionary potential are selected for reconstruction in the next generation,while thosewithhighevolutionarypotential are retained to continue their evolution.Toevaluate the performance of ACSFLA,comparative experiments were conducted using ACSFLA,SFLA,ASFLA,MOABC,and NSGA-CC in 90 instances.The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases,highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.展开更多
The rapid growth of machine learning(ML)across fields has intensified the challenge of selecting the right algorithm for specific tasks,known as the Algorithm Selection Problem(ASP).Traditional trial-and-error methods...The rapid growth of machine learning(ML)across fields has intensified the challenge of selecting the right algorithm for specific tasks,known as the Algorithm Selection Problem(ASP).Traditional trial-and-error methods have become impractical due to their resource demands.Automated Machine Learning(AutoML)systems automate this process,but often neglect the group structures and sparsity in meta-features,leading to inefficiencies in algorithm recommendations for classification tasks.This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso(MSGL)to address these limitations.Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups.The Fast Iterative Shrinkage-Thresholding Algorithm(FISTA)with adaptive restart efficiently solves the non-smooth optimization problem.Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches,achieving 77.18%classification accuracy,86.07%recommendation accuracy and 88.83%normalized discounted cumulative gain.展开更多
Vibration cutting has emerged as a promising method for creating surface functional microstructures.However,achieving precise tool setting is a time-consuming process that significantly impacts process efficiency.This...Vibration cutting has emerged as a promising method for creating surface functional microstructures.However,achieving precise tool setting is a time-consuming process that significantly impacts process efficiency.This study proposes an intelligent approach for tool setting in vibration cutting using machine vision and hearing,divided into two steps.In the first step,machine vision is employed to achieve rough precision in tool setting within tens of micrometers.Subsequently,in the second step,machine hearing utilizes sound pickup to capture vibration audio signals,enabling fine tool adjustment within 1μm precision.The relationship between the spectral intensity of vibration audio and cutting depth is analyzed to establish criteria for tool–workpiece contact.Finally,the efficacy of this approach is validated on an ultra-precision platform,demonstrating that the automated tool-setting process takes no more than 74 s.The total cost of the vision and hearing sensors is less than$1500.展开更多
As a complicated optimization problem,parallel batch processing machines scheduling problem(PBPMSP)exists in many real-life manufacturing industries such as textiles and semiconductors.Machine eligibility means that a...As a complicated optimization problem,parallel batch processing machines scheduling problem(PBPMSP)exists in many real-life manufacturing industries such as textiles and semiconductors.Machine eligibility means that at least one machine is not eligible for at least one job.PBPMSP and scheduling problems with machine eligibility are frequently considered;however,PBPMSP with machine eligibility is seldom explored.This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition(CSFLA)to minimize makespan.In CSFLA,the initial population is produced in a heuristic and random way,and the competitive search of memeplexes comprises two phases.Competition between any two memeplexes is done in the first phase,then iteration times are adjusted based on competition,and search strategies are adjusted adaptively based on the evolution quality of memeplexes in the second phase.An adaptive population shuffling is given.Computational experiments are conducted on 100 instances.The computational results showed that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.展开更多
基金supported by the National Key Research and Development Plan of the Ministry of Science and Technology,China(Grant No.:2022YFE0125300)the National Natural Science Foundation of China(Grant No:81690262)+2 种基金the National Science and Technology Major Project,China(Grant No.:2017ZX09201004-021)the Open Project of National facility for Translational Medicine(Shanghai),China(Grant No.:TMSK-2021-104)Shanghai Jiao Tong University STAR Grant,China(Grant Nos.:YG2022ZD024 and YG2022QN111).
文摘Liposomes serve as critical carriers for drugs and vaccines,with their biological effects influenced by their size.The microfluidic method,renowned for its precise control,reproducibility,and scalability,has been widely employed for liposome preparation.Although some studies have explored factors affecting liposomal size in microfluidic processes,most focus on small-sized liposomes,predominantly through experimental data analysis.However,the production of larger liposomes,which are equally significant,remains underexplored.In this work,we thoroughly investigate multiple variables influencing liposome size during microfluidic preparation and develop a machine learning(ML)model capable of accurately predicting liposomal size.Experimental validation was conducted using a staggered herringbone micromixer(SHM)chip.Our findings reveal that most investigated variables significantly influence liposomal size,often interrelating in complex ways.We evaluated the predictive performance of several widely-used ML algorithms,including ensemble methods,through cross-validation(CV)for both lipo-some size and polydispersity index(PDI).A standalone dataset was experimentally validated to assess the accuracy of the ML predictions,with results indicating that ensemble algorithms provided the most reliable predictions.Specifically,gradient boosting was selected for size prediction,while random forest was employed for PDI prediction.We successfully produced uniform large(600 nm)and small(100 nm)liposomes using the optimised experimental conditions derived from the ML models.In conclusion,this study presents a robust methodology that enables precise control over liposome size distribution,of-fering valuable insights for medicinal research applications.
基金supported by the National Natural Science Foundation of China(No.U21A20290)Guangdong Basic and Applied Basic Research Foundation(No.2022A1515011656)+2 种基金the Projects of Talents Recruitment of GDUPT(No.2023rcyj1003)the 2022“Sail Plan”Project of Maoming Green Chemical Industry Research Institute(No.MMGCIRI2022YFJH-Y-024)Maoming Science and Technology Project(No.2023382).
文摘The presence of aluminum(Al^(3+))and fluoride(F^(−))ions in the environment can be harmful to ecosystems and human health,highlighting the need for accurate and efficient monitoring.In this paper,an innovative approach is presented that leverages the power of machine learning to enhance the accuracy and efficiency of fluorescence-based detection for sequential quantitative analysis of aluminum(Al^(3+))and fluoride(F^(−))ions in aqueous solutions.The proposed method involves the synthesis of sulfur-functionalized carbon dots(C-dots)as fluorescence probes,with fluorescence enhancement upon interaction with Al^(3+)ions,achieving a detection limit of 4.2 nmol/L.Subsequently,in the presence of F^(−)ions,fluorescence is quenched,with a detection limit of 47.6 nmol/L.The fingerprints of fluorescence images are extracted using a cross-platform computer vision library in Python,followed by data preprocessing.Subsequently,the fingerprint data is subjected to cluster analysis using the K-means model from machine learning,and the average Silhouette Coefficient indicates excellent model performance.Finally,a regression analysis based on the principal component analysis method is employed to achieve more precise quantitative analysis of aluminum and fluoride ions.The results demonstrate that the developed model excels in terms of accuracy and sensitivity.This groundbreaking model not only showcases exceptional performance but also addresses the urgent need for effective environmental monitoring and risk assessment,making it a valuable tool for safeguarding our ecosystems and public health.
文摘Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability in diverse soil conditions.However,accurately predicting their undrained bearing capacity in layered soils remains a complex challenge.This study presents a novel application of five ensemble machine(ML)algorithms-random forest(RF),gradient boosting machine(GBM),extreme gradient boosting(XGBoost),adaptive boosting(AdaBoost),and categorical boosting(CatBoost)-to predict the undrained bearing capacity factor(Nc)of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis(FELA).The input dataset consists of 1188 numerical simulations using the Tresca failure criterion,varying in geometrical and soil parameters.The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies.The ML models were trained on 70% of the dataset and tested on the remaining 30%.Their performance was evaluated using six statistical metrics:coefficient of determination(R²),mean absolute error(MAE),root mean squared error(RMSE),index of scatter(IOS),RMSE-to-standard deviation ratio(RSR),and variance explained factor(VAF).The results indicate that all the models achieved high accuracy,with R²values exceeding 97.6%and RMSE values below 0.02.Among them,AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets,demonstrating superior generalizability and robustness.The proposed ML framework offers an efficient,accurate,and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils.This approach can aid in reducing computational costs while improving reliability in the early stages of foundation design.
基金financially supported by the National Key Research and Development Program of China(No.2023YFB3812601)the National Natural Science Foundation of China(Nos.51925401,92066205 and 92266301)the Young Elite Scientists Sponsorship Program by CAST(No.2022QNRC001).
文摘Recent studies have shown that synergistic precipitation of continuous precipitates(CPs)and discontinuous precipitates(DPs)is a promising method to simultaneously improve the strength and electrical conductivity of Cu-Ni-Si alloy.However,the complex relationship between precipitates and two-stage aging process presents a significant challenge for the optimization of process parameters.In this study,machine learning models were established based on orthogonal experiment to mine the relationship between two-stage aging parameters and properties of Cu-5.3Ni-1.3Si-0.12Nb alloy with preferred formation of DPs.Two-stage aging parameters of 400℃/75 min+400℃/30 min were then obtained by multi-objective optimization combined with an experimental iteration strategy,resulting in a tensile strength of 875 MPa and a conductivity of 41.43%IACS,respectively.Such an excellent comprehensive performance of the alloy is attributed to the combined precipitation of DPs and CPs(with a total volume fraction of 5.4%and a volume ratio of CPs to DPs of 6.7).This study could provide a new approach and insight for improving the comprehensive properties of the Cu-Ni-Si alloys.
基金supported by the National Natural Science Foundation of China(No.22276139)the Shanghai’s Municipal State-owned Assets Supervision and Administration Commission(No.2022028).
文摘To better understand the migration behavior of plastic fragments in the environment,development of rapid non-destructive methods for in-situ identification and characterization of plastic fragments is necessary.However,most of the studies had focused only on colored plastic fragments,ignoring colorless plastic fragments and the effects of different environmental media(backgrounds),thus underestimating their abundance.To address this issue,the present study used near-infrared spectroscopy to compare the identification of colored and colorless plastic fragments based on partial least squares-discriminant analysis(PLS-DA),extreme gradient boost,support vector machine and random forest classifier.The effects of polymer color,type,thickness,and background on the plastic fragments classification were evaluated.PLS-DA presented the best and most stable outcome,with higher robustness and lower misclassification rate.All models frequently misinterpreted colorless plastic fragments and its background when the fragment thickness was less than 0.1mm.A two-stage modeling method,which first distinguishes the plastic types and then identifies colorless plastic fragments that had been misclassified as background,was proposed.The method presented an accuracy higher than 99%in different backgrounds.In summary,this study developed a novel method for rapid and synchronous identification of colored and colorless plastic fragments under complex environmental backgrounds.
基金supported by the Key Research and Development Program in Shaanxi Province,China(No.2022ZDLSF07-05)the Fundamental Research Funds for the Central Universities,CHD(No.300102352901)。
文摘Carbon emissions resulting from energy consumption have become a pressing issue for governments worldwide.Accurate estimation of carbon emissions using satellite remote sensing data has become a crucial research problem.Previous studies relied on statistical regression models that failed to capture the complex nonlinear relationships between carbon emissions and characteristic variables.In this study,we propose a machine learning algorithm for carbon emissions,a Bayesian optimized XGboost regression model,using multi-year energy carbon emission data and nighttime lights(NTL)remote sensing data from Shaanxi Province,China.Our results demonstrate that the XGboost algorithm outperforms linear regression and four other machine learning models,with an R^(2)of 0.906 and RMSE of 5.687.We observe an annual increase in carbon emissions,with high-emission counties primarily concentrated in northern and central Shaanxi Province,displaying a shift from discrete,sporadic points to contiguous,extended spatial distribution.Spatial autocorrelation clustering reveals predominantly high-high and low-low clustering patterns,with economically developed counties showing high-emission clustering and economically relatively backward counties displaying low-emission clustering.Our findings show that the use of NTL data and the XGboost algorithm can estimate and predict carbon emissionsmore accurately and provide a complementary reference for satellite remote sensing image data to serve carbon emission monitoring and assessment.This research provides an important theoretical basis for formulating practical carbon emission reduction policies and contributes to the development of techniques for accurate carbon emission estimation using remote sensing data.
基金supported by the Deanship of Graduate Studies and Scientific Research at University of Bisha for funding this research through the promising program under grant number(UB-Promising-33-1445).
文摘Open networks and heterogeneous services in the Internet of Vehicles(IoV)can lead to security and privacy challenges.One key requirement for such systems is the preservation of user privacy,ensuring a seamless experience in driving,navigation,and communication.These privacy needs are influenced by various factors,such as data collected at different intervals,trip durations,and user interactions.To address this,the paper proposes a Support Vector Machine(SVM)model designed to process large amounts of aggregated data and recommend privacy preserving measures.The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure.It aims to minimize privacy risks while ensuring service continuity and sustainability.The SVMmodel helps validate the system’s reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations.The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
基金supported by the studies of intelligent LLRF control algorithms for superconducting RF cavities(No.E129851YR0)the National Natural Science Foundation of China(No.U22A20261)Applications of Artificial Intelligence in the Stability Study of Superconducting Linear Accelerators(No.E429851YR0)。
文摘Superconducting radio-frequency(SRF)cavities are the core components of SRF linear accelerators,making their stable operation considerably important.However,the operational experience from different accelerator laboratories has revealed that SRF faults are the leading cause of short machine downtime trips.When a cavity fault occurs,system experts analyze the time-series data recorded by low-level RF systems and identify the fault type.However,this requires expertise and intuition,posing a major challenge for control-room operators.Here,we propose an expert feature-based machine learning model for automating SRF cavity fault recognition.The main challenge in converting the"expert reasoning"process for SRF faults into a"model inference"process lies in feature extraction,which is attributed to the associated multidimensional and complex time-series waveforms.Existing autoregression-based feature-extraction methods require the signal to be stable and autocorrelated,resulting in difficulty in capturing the abrupt features that exist in several SRF failure patterns.To address these issues,we introduce expertise into the classification model through reasonable feature engineering.We demonstrate the feasibility of this method using the SRF cavity of the China accelerator facility for superheavy elements(CAFE2).Although specific faults in SRF cavities may vary across different accelerators,similarities exist in the RF signals.Therefore,this study provides valuable guidance for fault analysis of the entire SRF community.
基金supported by the National Natural Science Foundation of China(Nos.52074246,52275390,52375394)the National Defense Basic Scientific Research Program of China(No.JCKY2020408B002)the Key R&D Program of Shanxi Province(No.202102050201011).
文摘The design of casting gating system directly determines the solidification sequence,defect severity,and overall quality of the casting.A novel machine learning strategy was developed to design the counter pressure casting gating system of a large thin-walled cabin casting.A high-quality dataset was established through orthogonal experiments combined with design criteria for the gating system.Spearman’s correlation analysis was used to select high-quality features.The gating system dimensions were predicted using a gated recurrent unit(GRU)recurrent neural network and an elastic network model.Using EasyCast and ProCAST casting software,a comparative analysis of the flow field,temperature field,and solidification field can be conducted to demonstrate the achievement of steady filling and top-down sequential solidification.Compared to the empirical formula method,this method eliminates trial-and-error iterations,reduces porosity,reduces casting defect volume from 11.23 cubic centimeters to 2.23 cubic centimeters,eliminates internal casting defects through the incorporation of an internally cooled iron,fulfilling the goal of intelligent gating system design.
基金Funded by the Natural Science Foundation of China(No.52109168)。
文摘In order to study the characteristics of pure fly ash-based geopolymer concrete(PFGC)conveniently,we used a machine learning method that can quantify the perception of characteristics to predict its compressive strength.In this study,505 groups of data were collected,and a new database of compressive strength of PFGC was constructed.In order to establish an accurate prediction model of compressive strength,five different types of machine learning networks were used for comparative analysis.The five machine learning models all showed good compressive strength prediction performance on PFGC.Among them,R2,MSE,RMSE and MAE of decision tree model(DT)are 0.99,1.58,1.25,and 0.25,respectively.While R2,MSE,RMSE and MAE of random forest model(RF)are 0.97,5.17,2.27 and 1.38,respectively.The two models have high prediction accuracy and outstanding generalization ability.In order to enhance the interpretability of model decision-making,we used importance ranking to obtain the perception of machine learning model to 13 variables.These 13 variables include chemical composition of fly ash(SiO_(2)/Al_(2)O_(3),Si/Al),the ratio of alkaline liquid to the binder,curing temperature,curing durations inside oven,fly ash dosage,fine aggregate dosage,coarse aggregate dosage,extra water dosage and sodium hydroxide dosage.Curing temperature,specimen ages and curing durations inside oven have the greatest influence on the prediction results,indicating that curing conditions have more prominent influence on the compressive strength of PFGC than ordinary Portland cement concrete.The importance of curing conditions of PFGC even exceeds that of the concrete mix proportion,due to the low reactivity of pure fly ash.
基金financially supported by the National Natural Science Foundation of China(Nos.U2241231 and 52071206).
文摘In traditional trial-and-error method,enhancing the Young's modulus of magnesium alloys while maintaining a favorable ductility has consistently been a challenge.It is a need to explore more efficient and expedited methods to design magnesium alloys with high modulus and ductility.In this study,machine learning(ML)and assisted microstructure control methods are used to design high modulus magnesium alloys.Six key features that influence stiffness and ductility have been extracted in this ML model based on abundant data from literature sources.As a result,predictive models for Young's modulus and elongation are established,with errors<2.4%and 4.5%through XGBoost machine learning model,respectively.Within the given range of six features,the magnesium alloys can be fabricated with the Young's modulus exceeding 50 GPa and an elongation surpassing 6%.As a validation,Mg-Al-Y alloys were experimentally prepared to meet the criteria of six features,achieving Young's modulus of 51.5 GPa,and the elongation of 7%.Moreover,the SHapley Additive exPlanation(SHAP)is introduced to boost the model interpretability.This indicates that balancing the volume fraction of reinforcement,the most important feature,is key to achieve Mg-Al-Y alloys with high Young's modulus and favorable elongation through the two models.Enhancing reinforcement dispersion and reducing the size of reinforcement and grain can further improve the elongation of high-stiffness Mg alloy.
基金supported by the Guangdong Major Project of Basic and Applied Basic Research(Grant No.2021B0301030001)the National Key Research and Development Program of China(Grant No.2021YFB3802300)the Foundation of National Key Laboratory of Shock Wave and Detonation Physics(Grant No.JCKYS2022212004)。
文摘The graded density impactor(GDI)dynamic loading technique is crucial for acquiring the dynamic physical property parameters of materials used in weapons.The accuracy and timeliness of GDI structural design are key to achieving controllable stress-strain rate loading.In this study,we have,for the first time,combined one-dimensional fluid computational software with machine learning methods.We first elucidated the mechanisms by which GDI structures control stress and strain rates.Subsequently,we constructed a machine learning model to create a structure-property response surface.The results show that altering the loading velocity and interlayer thickness has a pronounced regulatory effect on stress and strain rates.In contrast,the impedance distribution index and target thickness have less significant effects on stress regulation,although there is a matching relationship between target thickness and interlayer thickness.Compared with traditional design methods,the machine learning approach offers a10^(4)—10^(5)times increase in efficiency and the potential to achieve a global optimum,holding promise for guiding the design of GDI.
基金the Deanship of Scientifc Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/421/45supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2024/R/1446)+1 种基金supported by theResearchers Supporting Project Number(UM-DSR-IG-2023-07)Almaarefa University,Riyadh,Saudi Arabia.supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408).
文摘Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.
基金funding from the National Natural Science Foundation of China (Grant No.42277175)the pilot project of cooperation between the Ministry of Natural Resources and Hunan Province“Research and demonstration of key technologies for comprehensive remote sensing identification of geological hazards in typical regions of Hunan Province” (Grant No.2023ZRBSHZ056)the National Key Research and Development Program of China-2023 Key Special Project (Grant No.2023YFC2907400).
文摘Joint roughness coefficient(JRC)is the most commonly used parameter for quantifying surface roughness of rock discontinuities in practice.The system composed of multiple roughness statistical parameters to measure JRC is a nonlinear system with a lot of overlapping information.In this paper,a dataset of eight roughness statistical parameters covering 112 digital joints is established.Then,the principal component analysis method is introduced to extract the significant information,which solves the information overlap problem of roughness characterization.Based on the two principal components of extracted features,the white shark optimizer algorithm was introduced to optimize the extreme gradient boosting model,and a new machine learning(ML)prediction model was established.The prediction accuracy of the new model and the other 17 models was measured using statistical metrics.The results show that the prediction result of the new model is more consistent with the real JRC value,with higher recognition accuracy and generalization ability.
基金supported by the Gas Hydrate R&D Organization and the Korea Institute of Geoscience and Mineral Resources(KIGAM)(GP2021-010)supported by the National Research Foundation of Korea(NRF)grant funded by the Korean government(MSIT)(No.2021R1C1C1004460)Korea Institute of Energy Technology Evaluation and Planning(KETEP)grant funded by the Korean government(MOTIE)(20214000000500,Training Program of CCUS for Green Growth).
文摘Gas hydrate(GH)is an unconventional resource estimated at 1000-120,000 trillion m^(3)worldwide.Research on GH is ongoing to determine its geological and flow characteristics for commercial produc-tion.After two large-scale drilling expeditions to study the GH-bearing zone in the Ulleung Basin,the mineral composition of 488 sediment samples was analyzed using X-ray diffraction(XRD).Because the analysis is costly and dependent on experts,a machine learning model was developed to predict the mineral composition using XRD intensity profiles as input data.However,the model’s performance was limited because of improper preprocessing of the intensity profile.Because preprocessing was applied to each feature,the intensity trend was not preserved even though this factor is the most important when analyzing mineral composition.In this study,the profile was preprocessed for each sample using min-max scaling because relative intensity is critical for mineral analysis.For 49 test data among the 488 data,the convolutional neural network(CNN)model improved the average absolute error and coefficient of determination by 41%and 46%,respectively,than those of CNN model with feature-based pre-processing.This study confirms that combining preprocessing for each sample with CNN is the most efficient approach for analyzing XRD data.The developed model can be used for the compositional analysis of sediment samples from the Ulleung Basin and the Korea Plateau.In addition,the overall procedure can be applied to any XRD data of sediments worldwide.
文摘Every second, a large volume of useful data is created in social media about the various kind of online purchases and in another forms of reviews. Particularly, purchased products review data is enormously growing in different database repositories every day. Most of the review data are useful to new customers for theier further purchases as well as existing companies to view customers feedback about various products. Data Mining and Machine Leaning techniques are familiar to analyse such kind of data to visualise and know the potential use of the purchased items through online. The customers are making quality of products through their sentiments about the purchased items from different online companies. In this research work, it is analysed sentiments of Headphone review data, which is collected from online repositories. For the analysis of Headphone review data, some of the Machine Learning techniques like Support Vector Machines, Naive Bayes, Decision Trees and Random Forest Algorithms and a Hybrid method are applied to find the quality via the customers’ sentiments. The accuracy and performance of the taken algorithms are also analysed based on the three types of sentiments such as positive, negative and neutral.
文摘Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines(BPM).In this study,the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered,and an adaptive cooperated shuffled frog-leaping algorithm(ACSFLA)is proposed to minimize makespan and total tardiness simultaneously.ACSFLA determines the search times for each memeplex based on its quality,with more searches in high-quality memeplexes.An adaptive cooperated and diversified search mechanism is applied,dynamically adjusting search strategies for each memeplex based on their dominance relationships and quality.During the cooperated search,ACSFLA uses a segmented and dynamic targeted search approach,while in non-cooperated scenarios,the search focuses on local search around superior solutions to improve efficiency.Furthermore,ACSFLA employs adaptive population division and partial population shuffling strategies.Through these strategies,memeplexes with low evolutionary potential are selected for reconstruction in the next generation,while thosewithhighevolutionarypotential are retained to continue their evolution.Toevaluate the performance of ACSFLA,comparative experiments were conducted using ACSFLA,SFLA,ASFLA,MOABC,and NSGA-CC in 90 instances.The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases,highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
文摘The rapid growth of machine learning(ML)across fields has intensified the challenge of selecting the right algorithm for specific tasks,known as the Algorithm Selection Problem(ASP).Traditional trial-and-error methods have become impractical due to their resource demands.Automated Machine Learning(AutoML)systems automate this process,but often neglect the group structures and sparsity in meta-features,leading to inefficiencies in algorithm recommendations for classification tasks.This paper proposes a meta-learning approach using Multivariate Sparse Group Lasso(MSGL)to address these limitations.Our method models both within-group and across-group sparsity among meta-features to manage high-dimensional data and reduce multicollinearity across eight meta-feature groups.The Fast Iterative Shrinkage-Thresholding Algorithm(FISTA)with adaptive restart efficiently solves the non-smooth optimization problem.Empirical validation on 145 classification datasets with 17 classification algorithms shows that our meta-learning method outperforms four state-of-the-art approaches,achieving 77.18%classification accuracy,86.07%recommendation accuracy and 88.83%normalized discounted cumulative gain.
基金the financial support for this research provided by the National Natural Science Foundation of China(Grant Nos.52275470,124115301,and 52105458)the Natural Science Foundation of Beijing(Grant No.3222009).
文摘Vibration cutting has emerged as a promising method for creating surface functional microstructures.However,achieving precise tool setting is a time-consuming process that significantly impacts process efficiency.This study proposes an intelligent approach for tool setting in vibration cutting using machine vision and hearing,divided into two steps.In the first step,machine vision is employed to achieve rough precision in tool setting within tens of micrometers.Subsequently,in the second step,machine hearing utilizes sound pickup to capture vibration audio signals,enabling fine tool adjustment within 1μm precision.The relationship between the spectral intensity of vibration audio and cutting depth is analyzed to establish criteria for tool–workpiece contact.Finally,the efficacy of this approach is validated on an ultra-precision platform,demonstrating that the automated tool-setting process takes no more than 74 s.The total cost of the vision and hearing sensors is less than$1500.
基金supported by the National Natural Science Foundation of China(Grant Number 61573264).
文摘As a complicated optimization problem,parallel batch processing machines scheduling problem(PBPMSP)exists in many real-life manufacturing industries such as textiles and semiconductors.Machine eligibility means that at least one machine is not eligible for at least one job.PBPMSP and scheduling problems with machine eligibility are frequently considered;however,PBPMSP with machine eligibility is seldom explored.This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition(CSFLA)to minimize makespan.In CSFLA,the initial population is produced in a heuristic and random way,and the competitive search of memeplexes comprises two phases.Competition between any two memeplexes is done in the first phase,then iteration times are adjusted based on competition,and search strategies are adjusted adaptively based on the evolution quality of memeplexes in the second phase.An adaptive population shuffling is given.Computational experiments are conducted on 100 instances.The computational results showed that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.