Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61673389, 61273202 and 61134008.
Abstract: We propose and discuss a novel concept of robust set stabilization by permissible controls; this concept is helpful when dealing with both a priori information of model parameters and different permissible controls, including quantum measurements. Both controllability and stabilization can be regarded as special cases of the novel concept. An instance is presented for a kind of uncertain open quantum system to further justify this generalized concept. It is underlined that a new type of hybrid control based on periodically perturbed projective measurements can serve as the permissible control of uncertain open quantum systems when perturbed projective measurements are available. Sufficient conditions are given for the robust set stabilization of uncertain open quantum systems by the hybrid control, and the design of the hybrid control reduces to selecting the period of measurements.
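For orientation, set stabilization is usually formalized as driving the system state into a target set; the following is a hedged sketch in standard notation (the state rho, target set S, distance d, and uncertainty set D are illustrative assumptions, not the paper's definitions):

\[
\lim_{t \to \infty} d\big(\rho(t;\Delta),\, \mathcal{S}\big) = 0,
\qquad
d(\rho,\mathcal{S}) := \inf_{\sigma \in \mathcal{S}} \lVert \rho - \sigma \rVert,
\quad \forall\, \rho(0),\ \forall\, \Delta \in \mathcal{D},
\]

i.e., the controlled trajectory approaches the target set for every admissible initial state and every model uncertainty. Taking the target set to be a single state recovers ordinary stabilization, which is how the abstract frames controllability and stabilization as special cases.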
Abstract: Many international brands have a phenomenal Chinese name which, paradoxically, comes from a rather prosaic original name. The reason may lie in the fact that they need an outstanding translation of their names in order to succeed in international marketing. Hence the translation of brand names is an important part of advertising, and a good translation is expected to bridge differences of culture, language, spending habits, thinking patterns, etc.
Funding: Supported by the National Natural Science Foundation of China (60673139, 60473073, 60573090).
Abstract: Deep Web sources contain a large amount of high-quality, query-related structured data. One of the challenges in the Deep Web is extracting the result schemas of Deep Web sources. To address this challenge, this paper describes a novel approach that extracts both the result data and the result schema of a Web database. The approach first models the query interface of a Deep Web source and fills it in with a specific query instance. The result pages of the Deep Web source are then formatted as trees in order to retrieve the subtrees that contain elements of the query instance. Next, the result schema of the Deep Web source is extracted by matching the subtrees' nodes with the query instance, where a two-phase schema extraction method is adopted to obtain a more accurate result schema. Finally, experiments on real Deep Web sources show the utility of our approach, which achieves high precision and recall.
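To make the matching step concrete, here is a minimal, illustrative sketch of instance-based extraction: locate the deepest subtree of a parsed result page that covers every value of the submitted query instance, then label matching leaves with the instance's attribute names. The (text, children) tree encoding and the example data are assumptions for illustration, not the paper's algorithm.

def leaf_texts(node):
    text, children = node
    if not children:
        return [text]
    return [t for c in children for t in leaf_texts(c)]

def smallest_covering_subtree(node, values):
    """Deepest subtree whose leaves cover every query-instance value."""
    _, children = node
    for child in children:
        hit = smallest_covering_subtree(child, values)
        if hit is not None:
            return hit
    return node if values <= set(leaf_texts(node)) else None

def label_schema(record, instance):
    """Map leaf positions to attribute names by matching instance values."""
    by_value = {v: k for k, v in instance.items()}
    return [(by_value.get(t), t) for t in leaf_texts(record)]

if __name__ == "__main__":
    instance = {"title": "Deep Web Mining", "author": "J. Smith"}
    page = ("html", [
        ("nav", [("About", [])]),
        ("result", [("Deep Web Mining", []), ("J. Smith", []), ("2006", [])]),
    ])
    record = smallest_covering_subtree(page, set(instance.values()))
    print(label_schema(record, instance))
    # [('title', 'Deep Web Mining'), ('author', 'J. Smith'), (None, '2006')]

Leaves that match no instance value (the year above) are the candidates that the two-phase step would label from additional probe queries.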
Abstract: Search-based software engineering has mainly dealt with automated test data generation by metaheuristic search techniques. In the same spirit, we try to generate test data (i.e., problem instances) that exhibit the worst case of an algorithm using such techniques. In this paper, in terms of non-functional testing, we re-define the worst case of several algorithms. Using genetic algorithms (GAs), we illustrate the search strategies corresponding to each type of instance. We adopt three problems as examples: the sorting problem, the 0/1 knapsack problem (0/1KP), and the travelling salesperson problem (TSP). For algorithms solving these problems, we were able to find worst-case instances successfully; the success of the results is established by a statistical approach and by comparison with random testing. The examples provide informative guidelines for using genetic algorithms to generate worst-case instances, where the worst case is defined in terms of algorithm performance.
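As a hedged illustration of this strategy for the sorting example (population size, operators, and the first-pivot quicksort fitness are assumptions, not the paper's exact setup), a GA can evolve inputs that maximize an algorithm's work:

import random

def quicksort_comparisons(a):
    """Count comparisons made by a naive first-pivot quicksort."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def crossover(p, q):
    """Order crossover: keep a slice of p, fill the rest in q's order."""
    i, j = sorted(random.sample(range(len(p)), 2))
    keep = p[i:j]
    fill = [x for x in q if x not in keep]
    return fill[:i] + keep + fill[i:]

def mutate(p, rate=0.2):
    p = p[:]
    if random.random() < rate:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p

def evolve(n=32, pop_size=40, generations=200):
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=quicksort_comparisons, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = max(pop, key=quicksort_comparisons)
    return best, quicksort_comparisons(best)

if __name__ == "__main__":
    best, cost = evolve()
    print(cost, "comparisons; the known worst case is n(n-1)/2 =", 32 * 31 // 2)

For first-pivot quicksort the known worst-case inputs are (reverse-)sorted permutations, so the GA's best fitness can be checked against the analytic bound.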
Funding: Supported by the National Natural Science Foundation of China (60373066, 60425206, 90412003), the National Grand Fundamental Research 973 Program of China (2002CB312000), and the National Research Foundation for the Doctoral Program of Higher Education of China (20020286004).
Abstract: This paper proposes a checking method based on mutual instances and discusses three key problems in the method: how to deal with mistakes in the mutual instances, and how to deal with too many or too few mutual instances. It provides checking based on weighted mutual instances to allow fault tolerance, gives a way to partition large-scale sets of mutual instances, and proposes a process that greatly reduces the manual annotation work needed to obtain more mutual instances. Intension annotation, which improves the checking method, is also discussed. The method is practical and effective for checking subsumption relations between concept queries in different ontologies based on mutual instances.
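A minimal sketch of the weighted, fault-tolerant checking idea (the threshold rule and the toy data are assumptions, not the paper's method): concept A is judged to subsume concept B if almost all of B's weighted mutual instances also belong to A.

def subsumes(instances_a, instances_b, weights, tolerance=0.05):
    """A subsumes B if nearly all weighted instances of B also fall in A."""
    total = sum(weights[i] for i in instances_b)
    inside = sum(weights[i] for i in instances_b if i in instances_a)
    return total > 0 and inside / total >= 1.0 - tolerance

a = {"i1", "i2", "i3", "i4"}
b = {"i2", "i3", "i5"}
w = {"i1": 1.0, "i2": 2.0, "i3": 1.0, "i4": 1.0, "i5": 0.1}
print(subsumes(a, b, w))   # True: only low-weight i5 falls outside A

The tolerance parameter is what absorbs annotation mistakes: a few misclassified low-weight instances no longer flip the subsumption verdict.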
Funding: This work has been supported in part by the Austrian Research Promotion Agency (FFG) under the APOLLO and Kärnten Fog projects.
Abstract: HTTP Adaptive Streaming (HAS) of video content has become an integral part of the Internet and accounts for most of today's network traffic. Video compression technology plays a vital role in efficiently utilizing network channels, but encoding videos into multiple representations with well-chosen encoding parameters is a significant challenge. Video encoding is a computationally intensive and time-consuming operation that requires high-performance resources provided by on-premise infrastructures or public clouds. Public clouds, such as Amazon Elastic Compute Cloud (EC2), provide hundreds of computing instances optimized for different purposes and client budgets. Thus, there is a need for algorithms and methods that optimize computing instance selection for specific tasks such as video encoding and transcoding. Additionally, the encoding speed directly depends on the selected encoding parameters and the complexity characteristics of the video content. In this paper, we first benchmark the video encoding performance of Amazon EC2 spot instances using multiple x264 codec encoding parameters and video sequences of varying complexity. We then propose a novel fast approach to optimize the selection of Amazon EC2 spot instances and minimize video encoding costs. Furthermore, we evaluate how the optimized selection of EC2 spot instances affects the encoding cost. The results show that our approach, on average, can reduce encoding costs by at least 15.8% and up to 47.8% compared to a random selection of EC2 spot instances.
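A minimal sketch of the underlying selection problem (all instance names, prices, and speeds below are made-up assumptions, not AWS prices or the paper's benchmarks): given a benchmarked encoding speed per instance type, pick the spot instance that meets the deadline at the lowest cost.

BENCH = {
    # instance type: (spot price in $/hour, benchmarked encoding speed in fps)
    "c5.xlarge":  (0.068, 95.0),
    "c5.2xlarge": (0.136, 180.0),
    "m5.xlarge":  (0.077, 80.0),
}

def best_instance(frames, deadline_s):
    """Cheapest instance whose benchmarked speed finishes before the deadline."""
    candidates = []
    for name, (price_h, fps) in BENCH.items():
        seconds = frames / fps
        if seconds <= deadline_s:
            candidates.append((price_h * seconds / 3600.0, name))
    return min(candidates) if candidates else None

cost, name = best_instance(frames=30 * 600, deadline_s=180)
print(f"{name}: ${cost:.4f} for a 10-minute 30 fps video")

In practice the benchmark table would be keyed by encoding parameters and content complexity as well, which is what makes exhaustive benchmarking expensive and a fast selection approach worthwhile.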
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 62172065 and 62072060) and the Natural Science Foundation of Chongqing (No. cstc2020jcyj-msxmX0137).
Abstract: When deploying workflows in cloud environments, the use of Spot Instances (SIs) is attractive because they are much cheaper than on-demand instances. However, SIs are volatile and may be revoked at any time, which results in a more challenging scheduling problem involving execution interruption and hence defeats conventional cloud workflow scheduling techniques. Although some scheduling methods for SIs have been proposed, most of them are no longer applicable to the latest SIs, which have evolved by eliminating bidding and simplifying the pricing model. This study focuses on how to minimize the execution cost under a deadline constraint when deploying a workflow on volatile SIs in cloud environments. Based on Monte Carlo simulation and list scheduling, a stochastic scheduling method called MCLS is devised to optimize a utility function introduced for this problem. Within the Monte Carlo simulation framework, MCLS employs sampled task execution times to build solutions via deadline distribution and list scheduling, and then returns the most robust solution from all the candidates using a specific evaluation mechanism and selection criteria. Experimental results show that the performance of MCLS is more competitive compared with traditional algorithms.
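A hedged sketch of the Monte Carlo evaluation idea behind a method like MCLS (prices, revocation rates, and the toy chain workflow are illustrative assumptions, not the paper's model): sample revocations, estimate each candidate assignment's cost and deadline-hit rate, and keep the cheapest sufficiently robust candidate.

import itertools, random

TASKS = [60.0, 90.0, 45.0]          # task runtimes in seconds (chain workflow)
DEADLINE = 260.0
SPOT_PRICE, OD_PRICE = 0.03, 0.10   # $/task-minute, made-up numbers
REVOKE_P, RESTART_PENALTY = 0.15, 40.0

def simulate(assign):
    """One sampled run: a revoked spot task restarts once with a delay."""
    time = cost = 0.0
    for runtime, on_spot in zip(TASKS, assign):
        t = runtime
        if on_spot and random.random() < REVOKE_P:
            t += RESTART_PENALTY
        time += t
        cost += (SPOT_PRICE if on_spot else OD_PRICE) * t / 60.0
    return time, cost

def choose(samples=3000, min_hit_rate=0.95):
    """Cheapest spot/on-demand assignment meeting the deadline robustly."""
    best = None
    for assign in itertools.product([False, True], repeat=len(TASKS)):
        runs = [simulate(assign) for _ in range(samples)]
        hit = sum(t <= DEADLINE for t, _ in runs) / samples
        cost = sum(c for _, c in runs) / samples
        if hit >= min_hit_rate and (best is None or cost < best[0]):
            best = (cost, hit, assign)
    return best

print(choose())   # e.g. two tasks on spot, one on-demand, is cheap yet robust

MCLS itself searches schedules via deadline distribution and list scheduling rather than brute-force enumeration, but the sample-evaluate-select loop above is the same robustness principle in miniature.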
Funding: Supported by the Chongqing Medical Scientific Research Project (Joint Project of the Chongqing Health Commission and Science and Technology Bureau), No. 2023MSXM060.
Abstract: BACKGROUND: The accurate prediction of lymph node metastasis (LNM) is crucial for managing locally advanced (T3/T4) colorectal cancer (CRC). However, both traditional histopathology and standard slide-level deep learning often fail to capture the sparse and diagnostically critical features of metastatic potential. AIM: To develop and validate a case-level multiple-instance learning (MIL) framework that mimics a pathologist's comprehensive review and improves T3/T4 CRC LNM prediction. METHODS: The whole-slide images of 130 patients with T3/T4 CRC were retrospectively collected. A case-level MIL framework utilising the CONCH v1.5 and UNI2-h deep learning models was trained on features from all haematoxylin and eosin-stained primary tumour slides for each patient. These pathological features were subsequently integrated with clinical data, and model performance was evaluated using the area under the curve (AUC). RESULTS: The case-level framework demonstrated superior LNM prediction over slide-level training, with the CONCH v1.5 model achieving a mean AUC (±SD) of 0.899±0.033 vs 0.814±0.083, respectively. Integrating pathology features with clinical data further enhanced performance, yielding a top model with a mean AUC of 0.904±0.047, in sharp contrast to a clinical-only model (mean AUC 0.584±0.084). Crucially, a pathologist's review confirmed that the model-identified high-attention regions correspond to known high-risk histopathological features. CONCLUSION: A case-level MIL framework provides a superior approach for predicting LNM in advanced CRC. This method shows promise for risk stratification and therapy decisions but requires further validation.
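A minimal attention-based MIL head (the common formulation of Ilse et al., 2018) illustrates what case-level aggregation means: patch features from every slide of one patient form a single bag. Dimensions and the two-class head are illustrative assumptions, not the paper's exact model.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=768, hidden=256, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                        # bag: (n_patches, feat_dim)
        a = torch.softmax(self.attn(bag), dim=0)   # attention weight per patch
        case_feat = (a * bag).sum(dim=0)           # weighted case embedding
        return self.head(case_feat), a.squeeze(-1)

# One case-level bag: patch features pooled across all slides of a patient,
# e.g. embeddings from a pathology foundation model.
bag = torch.randn(1200, 768)
logits, attention = AttentionMIL()(bag)
print(logits.shape, attention.shape)   # torch.Size([2]) torch.Size([1200])

The per-patch attention vector is also what makes the pathologist's review in the RESULTS possible: high-attention patches can be mapped back onto the slides for inspection.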
Funding: This work was supported by the National Key Research and Development Program of China under Grant No. 2018YFB1404501.
Abstract: Infrastructure-as-a-Service (IaaS) cloud platforms offer resources with diverse buying options. Users can run an instance on the on-demand market, which is stable but expensive, or on the spot market with a significant discount. However, users have to carefully weigh the low cost of spot instances against their poor availability: spot instances are revoked whenever a revocation event occurs. Thus, an important problem that an IaaS user now faces is how to use spot instances in a cost-effective and low-risk way. Based on a replication-based fault tolerance mechanism, we propose an online termination algorithm that optimizes the cost of using spot instances while ensuring operational stability. We prove that in most cases the cost of our proposed online algorithm will not exceed twice the minimum cost of the optimal offline algorithm that knows the exact future a priori. Through a large number of experiments, we verify that our algorithm has a competitive ratio of no more than 2 in most cases, and in the remaining cases it still reaches the guaranteed competitive ratio.
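As a hedged restatement of the guarantee in standard competitive-analysis notation (the symbols are assumed, not the paper's):

\[
\mathrm{cost}_{\mathrm{ALG}}(\sigma) \;\le\; c \cdot \mathrm{cost}_{\mathrm{OPT}}(\sigma)
\quad \text{for every input sequence } \sigma,
\]

where OPT is the optimal offline algorithm that sees all future revocation events in advance. The paper's claim corresponds to c = 2 in most cases, the same bound as the classic deterministic ski-rental strategy for rent-or-buy decisions.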
Funding: Supported by the Genetic Analysis of Important Quality and Traits of Ginseng and Basic Research on Molecular Design Breeding project (Grant No. U21A20405).
Abstract: Panax ginseng (2n = 48) represents a quintessential resource in traditional Chinese medicine, renowned for its outstanding medicinal and economic benefits (Choi, 2008). However, the late start in analyzing the ginseng genome and the poorly developed genetic transformation system still impede the study of ginseng gene function and the application of molecular breeding. Transient transformation has the advantages of high efficiency, low cost, and a short cycle, while laying the foundation for stable genetic transformation (Chen et al., 2021). In the plant transformation process, the cell wall prevents exogenous DNA or protein entry, significantly reducing the efficiency of transformation. Protoplasts, as exposed cells wrapped only by the plasma membrane, are more likely to absorb exogenous DNA, RNA, and protein. Transgenic systems based on protoplasts have been established in several species and applied in many fields, such as gene function research (Gou et al., 2020), gene editing (Yang et al., 2023), and physiological or molecular mechanism research (Aoyagi, 2011). For instance, Oryza sativa protoplasts were employed to screen genes involved in rice defense signaling pathways through fluorescent reporter systems, with BiFC employed to verify protein-protein interactions (He et al., 2016). Another study transformed Cannabis sativa L. protoplasts with plasmids carrying GFP and RFP genes, evaluated the efficiency under different transformation conditions by flow cytometry, and verified the induction of the synthetic DR5 promoter by IAA using the constructed system (Beard et al., 2021).
Funding: Supported by the Science and Technology Research Youth Project of the Chongqing Municipal Education Commission (No. KJQN202301104), the Cooperative Project between Universities in Chongqing and Affiliated Institutes of the Chinese Academy of Sciences (No. HZ2021011), the Chongqing Municipal Science and Technology Commission Technology Innovation and Application Development Special Project (No. 2022TIAD-KPX0040), and the Action Plan for Quality Development of Chongqing University of Technology Graduate Education (Grant No. gzlcx20242014).
Abstract: Instance segmentation is crucial in various domains, such as autonomous driving and robotics. However, there is scope for improving the detection speed of instance-segmentation algorithms on edge devices; it is essential to enhance detection speed while maintaining high accuracy. In this study, we propose you only look once-layer fusion (YOLO-LF), a lightweight instance segmentation method specifically designed to optimize the speed of instance segmentation for autonomous driving applications. Based on the You Only Look Once version 8 nano (YOLOv8n) framework, we introduce a lightweight convolutional module and design a lightweight layer aggregation module called Reparameterization convolution and Partial convolution Efficient Layer Aggregation Networks (RPELAN). This module effectively reduces the impact of the redundant information generated by traditional convolutional stacking on network size and detection speed while enhancing the capability to process feature information. We experimentally verified that our generalized one-stage detection network lightweighting method based on Grouped Spatial Convolution (GSConv) enhances detection speed while maintaining accuracy across various state-of-the-art (SOTA) networks. Our experiments on the publicly available Cityscapes dataset demonstrated that YOLO-LF maintained the same accuracy as YOLOv8n (mAP@0.5 of 37.9%), while the model size decreased by 14.3% from 3.259M to 2.804M and the Frames Per Second (FPS) increased by 14.48% from 57.47 to 65.79 compared with YOLOv8n, thereby demonstrating its potential for real-time instance segmentation on edge devices.
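A hedged sketch of a GSConv-style block (after Li et al.'s "Slim-neck by GSConv" design): half the output channels come from a dense convolution and half from a cheap depthwise convolution, followed by a channel shuffle so the two halves mix. Details here are illustrative, not YOLO-LF's exact module.

import torch
import torch.nn as nn

class GSConv(nn.Module):
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(      # depthwise: one filter per channel
            nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x):
        a = self.dense(x)
        b = self.cheap(a)
        y = torch.cat([a, b], dim=1)
        # channel shuffle: interleave the dense and depthwise channels
        n, c, h, w = y.shape
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.randn(1, 64, 80, 80)
print(GSConv(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80])

The saving comes from the depthwise half, which costs roughly 1/c_out of a dense convolution's multiply-adds while the shuffle restores cross-channel information flow.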
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 31470714 and 61701105).
Abstract: Tree trunk instance segmentation is crucial for under-canopy unmanned aerial vehicles (UAVs) to autonomously extract standing tree stem attributes. Using cameras as sensors makes these UAVs compact and lightweight, facilitating safe and flexible navigation in dense forests. However, their limited onboard computational power makes real-time, image-based tree trunk segmentation challenging, emphasizing the urgent need for lightweight and efficient segmentation models. In this study, we present RT-Trunk, a model specifically designed for real-time tree trunk instance segmentation in complex forest environments. To ensure real-time performance, we selected SparseInst as the base framework. We incorporated ConvNeXt-T as the backbone to enhance feature extraction for tree trunks, thereby improving segmentation accuracy. We further integrated the lightweight convolutional block attention module (CBAM), enabling the model to focus on tree trunk features while suppressing irrelevant information, which leads to additional gains in segmentation accuracy. To enable RT-Trunk to operate effectively in diverse complex forest environments, we constructed a comprehensive dataset for training and testing by combining self-collected data with multiple public datasets covering different locations, seasons, weather conditions, tree species, and levels of forest clutter. Compared with other tree trunk segmentation methods, RT-Trunk achieved an average precision of 91.4% and the fastest inference speed of 32.9 frames per second. Overall, the proposed RT-Trunk provides superior trunk segmentation performance that balances speed and accuracy, making it a promising solution for supporting under-canopy UAVs in the autonomous extraction of standing tree stem attributes. The code for this work is available at https://github.com/NEFU CVRG/RT Trunk.
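For reference, a sketch of the convolutional block attention module (CBAM, Woo et al., 2018) that RT-Trunk integrates: channel attention from pooled descriptors, then spatial attention from a 7x7 convolution over pooled maps. This follows the published CBAM design, not RT-Trunk's own code.

import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False)

    def forward(self, x):
        # channel attention: shared MLP over avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention: 7x7 conv over channelwise avg- and max-maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

x = torch.randn(1, 256, 40, 40)
print(CBAM(256)(x).shape)   # torch.Size([1, 256, 40, 40])

The module is nearly parameter-free relative to the backbone, which is why it can be added to a real-time model without hurting inference speed much.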
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62062003) and the Natural Science Foundation of Ningxia (Grant No. 2023AAC03293).
Abstract: The instance segmentation of impacted teeth in oral panoramic X-ray images is an active research topic. However, due to the complex structure, low contrast, and complex background of teeth in panoramic X-ray images, the instance segmentation task is technically challenging. In particular, the contrast between impacted teeth and periodontal tissues such as the gingiva, periodontal membrane, and alveolar bone is low, resulting in fuzzy boundaries of impacted teeth. A model based on Teeth YOLACT is proposed to provide a more efficient and accurate solution for the segmentation of impacted teeth in oral panoramic X-ray films. Firstly, a Multi-scale Res-Transformer Module (MRTM) is designed. In this module, depthwise separable convolutions with different receptive fields are used to enhance the sensitivity of the model to lesion size, and a Vision Transformer is integrated to improve the model's ability to perceive global features. Secondly, the Context Interaction-awareness Module (CIaM) is designed to fuse deep and shallow features. The deep semantic features guide the shallow spatial features; the shallow spatial features are then embedded into the deep semantic features, and a cross-weighted attention mechanism aggregates the deep and shallow features efficiently, yielding richer context information. Thirdly, the Edge-preserving Perception Module (E2PM) is designed to enhance tooth edge features. A first-order differential operator is used to obtain the tooth edge weights, improving the perception of tooth edge features, and the shallow spatial features are fused via linear mapping, weight concatenation, and matrix multiplication to preserve tooth edge information. Finally, comparison and ablation experiments are conducted on oral panoramic X-ray image datasets. The results show that the APdet, APseg, ARdet, ARseg, mAPdet, and mAPseg indicators of the proposed model are 89.9%, 91.9%, 77.4%, 77.6%, 72.8%, and 73.5%, respectively. This study further verifies the application potential of combining multi-scale feature extraction, multi-scale feature fusion, and edge perception enhancement in medical image segmentation, and provides a valuable reference for future related research.
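A hedged sketch of deriving an edge-weight map with a first-order differential operator (here Sobel), the kind of cue E2PM uses to emphasize tooth boundaries; this illustrates the operator, not the paper's exact module.

import torch
import torch.nn.functional as F

SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
SOBEL_Y = SOBEL_X.t()

def edge_weight(feat):
    """Per-pixel gradient magnitude of a single-channel feature map."""
    kx = SOBEL_X.view(1, 1, 3, 3)
    ky = SOBEL_Y.view(1, 1, 3, 3)
    gx = F.conv2d(feat, kx, padding=1)
    gy = F.conv2d(feat, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
    return mag / (mag.amax() + 1e-6)        # normalize weights to [0, 1]

feat = torch.zeros(1, 1, 32, 32)
feat[..., :, 16:] = 1.0                     # a vertical intensity step
w = edge_weight(feat)
print(w.shape, float(w[..., 16, 16]))       # the step column gets a high weight

In an edge-aware head, such a weight map can multiply shallow features or the loss so that boundary pixels receive more emphasis than homogeneous regions.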
Abstract: Broad-spectrum effect, derived from the pharmaceutical term, refers to the effectiveness of a drug against many microorganisms, pathogenic factors, or diseases. For instance, broad-spectrum antibiotics are antibiotics that work on many types of bacteria.
Abstract: Water infiltration into soil is an important process in the hydrologic cycle; however, its measurement is difficult, time-consuming, and costly. Empirical and physical models have been developed to predict cumulative infiltration (CI) but are often inaccurate. In this study, several novel standalone machine learning algorithms (M5Prime (M5P), decision stump (DS), and sequential minimal optimization (SMO)) and hybrid algorithms based on additive regression (AR) (i.e., AR-M5P, AR-DS, and AR-SMO) and the weighted instance handler wrapper (WIHW) (i.e., WIHW-M5P, WIHW-DS, and WIHW-SMO) were developed for CI prediction. The Soil Conservation Service (SCS) model developed by the United States Department of Agriculture (USDA), one of the most popular empirical models for predicting CI, was considered as a benchmark. Overall, 154 measurements of CI and of the explanatory/input variables were taken from 16 sites in a semi-arid region of Iran (Illam and Lorestan provinces). Six input variable combinations were considered based on Pearson correlations between candidate model inputs (time of measuring and soil bulk density, moisture content, and sand, clay, and silt percentages) and CI. The dataset was divided into two subgroups at random: 70% of the data were used for model building (training dataset) and the remaining 30% were used for model validation (testing dataset). The various models were evaluated using different graphical approaches (bar charts, scatter plots, violin plots, and Taylor diagrams) and quantitative measures (root mean square error (RMSE), mean absolute error (MAE), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS)). Time of measuring had the highest correlation with CI in the study area. The best input combinations differed between algorithms. The results showed that all hybrid algorithms enhanced CI prediction accuracy compared to the standalone models. The AR-M5P model provided the most accurate CI predictions (RMSE = 0.75 cm, MAE = 0.59 cm, NSE = 0.98), while the SCS model had the lowest performance (RMSE = 4.77 cm, MAE = 2.64 cm, NSE = 0.23). The differences in RMSE between the best model (AR-M5P) and the second-best (WIHW-M5P) and worst (SCS) models were 40% and 84%, respectively.
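For reference, the four quantitative measures as they are commonly defined in hydrology (a sketch; the example values below are made up, not the study's data):

import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 is no better than the mean."""
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation (hydrology convention)."""
    return float(100 * np.sum(obs - sim) / np.sum(obs))

obs = np.array([3.1, 5.4, 7.9, 10.2, 12.8])   # cumulative infiltration, cm
sim = np.array([2.9, 5.8, 7.5, 10.9, 12.1])
print(rmse(obs, sim), mae(obs, sim), nse(obs, sim), pbias(obs, sim))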
Abstract: The minimum vertex cover problem (MVCP) is a well-known combinatorial optimization problem in graph theory. The MVCP is NP-complete and has exponentially growing complexity with respect to the size of the graph. No algorithm exists to date that can exactly solve the problem in deterministic polynomial time. However, several algorithms have been proposed that solve the problem approximately in a short polynomial time. Such algorithms are useful for large graphs, for which an exact solution of the MVCP is impossible with current computational resources. The MVCP has a wide range of applications in fields like bioinformatics, biochemistry, circuit design, electrical engineering, data aggregation, networking, Internet traffic monitoring, pattern recognition, marketing, and franchising. This work aims to solve the MVCP approximately by a novel graph decomposition approach. The decomposition of the graph yields a subgraph that contains edges shared by triangular edge structures. This subgraph is covered to yield a subgraph that forms one or more Hamiltonian cycles or paths. In order to reduce the complexity of the algorithm, a new reduction strategy is also proposed; it can be used with any algorithm for the MVCP. Based on the graph decomposition and the reduction strategy, two algorithms are formulated to approximately solve the MVCP. These algorithms are tested on well-known standard benchmark graphs. The key features of the results are a good approximation error ratio and improvements in the best vertex cover values for a few graphs.
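The paper's decomposition and reduction strategy are its own; as a reference point, the sketch below shows two standard building blocks such strategies relate to: a pendant-vertex reduction (the neighbor of a degree-1 vertex is always safe to take) followed by the classic maximal-matching 2-approximation.

def vertex_cover(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    cover = set()
    # reduction: repeatedly take the neighbor of any degree-1 vertex
    changed = True
    while changed:
        changed = False
        for u in [x for x, ns in adj.items() if len(ns) == 1]:
            (v,) = adj[u]
            cover.add(v)
            for w in list(adj.pop(v)):       # remove v and its edges
                adj[w].discard(v)
                if not adj[w]:
                    del adj[w]
            adj.pop(u, None)
            changed = True
            break
    # maximal matching on the remainder: both endpoints join the cover,
    # which guarantees a cover at most twice the optimum
    remaining = [(u, v) for u, v in edges
                 if u not in cover and v not in cover]
    for u, v in remaining:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (4, 5)]
print(sorted(vertex_cover(edges)))   # [2, 4] covers every edge (optimal here)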
Abstract: This paper analyzes the resolution complexity of a random CSP model named model RBmix, whose instances are composed of constraints of different lengths. For model RBmix, the existence of phase transitions has been established and the threshold points have been located exactly. By encoding the random instances into CNF formulas, it is proved that almost all instances of model RBmix have no tree-like resolution proofs of less than exponential size. Thus model RBmix can generate abundant hard instances near the threshold. This result is of great significance for algorithm testing and complexity analysis of NP-complete problems.
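A hedged restatement of the lower bound in standard notation (only the shape of the claim; the instance distribution and the constants in the exponent are as in the paper):

\[
\Pr_{I \sim \mathrm{RB}_{\mathrm{mix}}}\!\left[\ \text{every tree-like resolution refutation of } \mathrm{CNF}(I)\ \text{has size at least } 2^{\Omega(n)}\ \right] \to 1
\quad (n \to \infty),
\]

where CNF(I) is the clause encoding of a random instance I with n variables; that is, the exponential lower bound holds with high probability at the threshold, which is what makes the generated instances reliably hard for resolution-based solvers.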