Abstract: In this paper, a single-machine scheduling model with a given common due date is considered. The processing time of a job is a linearly decreasing function of its starting time. The objective function is to minimize the total weighted earliness award and tardiness penalty, and our aim is to find an optimal schedule that minimizes this objective. As the problem is NP-hard, some properties and polynomially solvable cases of the problem are given, and a dynamic programming algorithm for the general case of the problem is provided.
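As a rough illustration of the model described in this abstract, the sketch below evaluates one schedule under an assumed linear time-dependent processing-time form p_j(t) = a_j - b_j*t and an assumed weighted earliness/tardiness objective around a common due date; the variable names and the exact objective form are illustrative assumptions, not the paper's dynamic programming algorithm.

```python
# Minimal sketch (assumptions: p_j(t) = a[j] - b[j] * t, weighted earliness/
# tardiness cost around a common due date d; not the paper's algorithm).

def evaluate_schedule(sequence, a, b, alpha, beta, d):
    """Return (completion times, weighted earliness/tardiness cost)."""
    t = 0.0
    completions, cost = [], 0.0
    for j in sequence:
        p = a[j] - b[j] * t            # actual processing time at start time t
        t += p                         # completion time of job j
        completions.append(t)
        if t <= d:
            cost += alpha[j] * (d - t)     # earliness term
        else:
            cost += beta[j] * (t - d)      # tardiness term
    return completions, cost

if __name__ == "__main__":
    a = [5.0, 4.0, 6.0]; b = [0.1, 0.05, 0.2]
    alpha = [1.0, 2.0, 1.0]; beta = [3.0, 1.0, 2.0]
    print(evaluate_schedule([1, 0, 2], a, b, alpha, beta, d=8.0))
```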
Funding: Supported by the National Natural Science Foundation of China (NNSFC) (grant No. 60274043) and by the National High-tech Research & Development Project (863) (grant No. 2002AA412610).
Abstract: In this paper, by considering the fuzzy nature of the data in real-life problems, single-machine scheduling problems with fuzzy processing times and multiple objectives are formulated, and an efficient genetic algorithm suitable for solving these problems is proposed. As an illustrative numerical example, twenty jobs processed on a machine are considered. The feasibility and effectiveness of the proposed method are demonstrated in simulation.
Abstract: In this paper, single-machine scheduling problems with variable processing times are raised. The criteria considered include minimizing the schedule length of all jobs, the flow time, and the number of tardy jobs. The complexity of the problem is determined.
Funding: Supported by the National Postdoctoral Science Foundation of P. R. China (9902).
Abstract: In this paper, single-machine scheduling problems with variable processing times are raised. The criteria considered include minimizing the schedule length of all jobs, the flow time, and the number of tardy jobs. The complexity of the problem is determined.
Abstract: Most papers in scheduling research have treated individual job processing times as fixed parameters. However, in many practical situations, a manager may control processing times by reallocating resources. In this paper, the authors consider a machine scheduling problem with controllable processing times. In the first part of the paper, a special case in which the processing times and compression costs are uniform among jobs is discussed, and theoretical results are derived that aid in developing an O(n^2) algorithm to solve the problem optimally. In the second part, the authors generalize the discussion to the general case, and an effective heuristic for the general problem is presented.
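The following sketch illustrates the controllable-processing-time idea this abstract refers to: each job has a normal time that can be compressed at a linear cost. The variable names and the scheduling cost used (total completion time plus compression cost) are assumptions for illustration, not the paper's O(n^2) algorithm.

```python
# Minimal sketch of compressible processing times (assumed names/cost form).

def schedule_cost(sequence, p_bar, x, c):
    """Total completion time plus total compression cost for one sequence."""
    t, total_flow = 0.0, 0.0
    for j in sequence:
        t += p_bar[j] - x[j]                 # compressed processing time
        total_flow += t
    compression_cost = sum(c[j] * x[j] for j in sequence)
    return total_flow + compression_cost

if __name__ == "__main__":
    p_bar = [6.0, 4.0, 5.0]                  # normal processing times
    u = [2.0, 1.0, 3.0]                      # maximum compressions
    c = [1.5, 1.5, 1.5]                      # uniform compression costs (special case)
    x = [1.0, 0.5, 2.0]                      # a feasible compression, x[j] <= u[j]
    print(schedule_cost([1, 2, 0], p_bar, x, c))
```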
Funding: Japan International Cooperation Agency (JICA) via Malaysia-Japan Linkage Research Grant 2024.
Abstract: Although the image dehazing problem has received considerable attention over recent years, the existing models often prioritise performance at the expense of complexity, making them unsuitable for real-world applications, which require algorithms to be deployed on resource-constrained devices. To address this challenge, we propose WaveLiteDehaze-Network (WLD-Net), an end-to-end dehazing model that delivers performance comparable to complex models while operating in real time and using significantly fewer parameters. This approach capitalises on the insight that haze predominantly affects low-frequency information. By exclusively processing the image in the frequency domain using the discrete wavelet transform (DWT), we segregate the image into high and low frequencies and process them separately. This allows us to preserve high-frequency details and recover low-frequency components affected by haze, distinguishing our method from existing approaches that use spatial-domain processing as the backbone, with DWT serving as an auxiliary component. DWT is applied at multiple levels for better information retention while also accelerating computation by downsampling feature maps. Subsequently, a learning-based fusion mechanism reintegrates the processed frequencies to reconstruct the dehazed image. Experiments show that WLD-Net outperforms other low-parameter models on real-world hazy images and rivals much larger models, achieving the highest PSNR and SSIM scores on the O-Haze dataset. Qualitatively, the proposed method demonstrates its effectiveness in handling a diverse range of haze types, delivering visually pleasing results and robust performance, while also generalising well across different scenarios. With only 0.385 million parameters (more than 100 times smaller than comparable dehazing methods), WLD-Net processes 1024×1024 images in just 0.045 s, highlighting its applicability across various real-world scenarios. The code is available at https://github.com/AliMurtaza29/WLD-Net.
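A minimal sketch of the frequency-splitting idea behind this approach, using PyWavelets: one level of 2-D DWT separates an image into a low-frequency band (where haze mostly lives) and high-frequency detail bands at half resolution, which a network could then process in separate branches. This is only an illustration of the DWT decomposition, not the WLD-Net architecture.

```python
# Minimal DWT split/merge sketch (illustration only, not WLD-Net).
import numpy as np
import pywt

img = np.random.rand(256, 256)                      # stand-in for a hazy grayscale image

# One-level 2-D DWT: approximation (low) plus horizontal/vertical/diagonal details.
low, (horiz, vert, diag) = pywt.dwt2(img, "haar")   # each band is 128 x 128

# In a dehazing network the bands would pass through learned branches; here we
# simply recombine them unchanged to show the lossless round trip.
restored = pywt.idwt2((low, (horiz, vert, diag)), "haar")
print(np.allclose(img, restored))
```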
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51075337, 50705076, 50705077) and the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2009JQ9002).
Abstract: The job-shop scheduling problem with discretely controllable processing times (JSP-DCPT) is modeled based on the disjunctive graph, and a formulation of JSP-DCPT is presented. A three-step decomposition approach is proposed so that JSP-DCPT can be handled by solving a job-shop scheduling problem (JSP) and a series of discrete time-cost tradeoff problems. To simplify the decomposition approach, the time-cost phase plane is introduced to describe the tradeoffs of the discrete time-cost tradeoff problem, and an extreme mode-based set dominant theory is elaborated so that an upper bound can be determined to cut the discrete time-cost tradeoff problems generated by the proposed decomposition approach. An extreme mode-based set dominant decomposition algorithm (EMSDDA) is then proposed. Experimental simulations for the instance JSPDCPT_FT10, which is designed based on the JSP benchmark FT10, demonstrate the effectiveness of the proposed theory and the decomposition approach.
Abstract: Two-stage hybrid flow shop scheduling has been extensively considered in single-factory settings. However, the distributed two-stage hybrid flow shop scheduling problem (DTHFSP) with fuzzy processing time is seldom investigated in multiple factories. Furthermore, the integration of reinforcement learning and metaheuristics is seldom applied to solve DTHFSP. In the current study, DTHFSP with fuzzy processing time was investigated, and a novel Q-learning-based teaching-learning-based optimization (QTLBO) was constructed to minimize makespan. Several teachers were recruited in the algorithm, and the teacher phase, learner phase, teacher's self-learning phase, and learner's self-learning phase were designed. The Q-learning algorithm was implemented with 9 states, 4 actions defined as combinations of the above phases, a reward, and an adaptive action selection, which were applied to dynamically adjust the algorithm structure. A number of experiments were conducted. The computational results demonstrate that the new strategies of QTLBO are effective; furthermore, it presents promising results on the considered DTHFSP.
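The sketch below shows the kind of tabular Q-learning controller described in the abstract (9 states, 4 actions selecting which search phase to run next). The state encoding, reward, and epsilon-greedy selection here are illustrative assumptions; the paper uses its own reward and an adaptive action selection.

```python
# Minimal tabular Q-learning sketch (assumed state/reward definitions).
import random

N_STATES, N_ACTIONS = 9, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def select_action(state):
    """Epsilon-greedy selection; the paper uses an adaptive scheme instead."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def q_update(state, action, reward, next_state):
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# Skeleton loop: each action would trigger one of the teacher / learner /
# self-learning phases and return an improvement-based reward.
state = 0
for _ in range(100):
    action = select_action(state)
    reward = random.random()              # placeholder for makespan improvement
    next_state = random.randrange(N_STATES)
    q_update(state, action, reward, next_state)
    state = next_state
```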
Abstract: Due date quotation and scheduling are important tools to match demand with production capacity in the MTO (make-to-order) environment. We consider an order scheduling problem faced by a manufacturing firm operating in an MTO environment, where the firm needs to quote a common due date for the customers and simultaneously control the processing times of customer orders (by allocating extra resources to process the orders) so as to complete the orders before a given deadline. The objective is to minimize the total costs of earliness, tardiness, due date assignment, and extra resource consumption. We show the problem is NP-hard, even if the cost weights for controlling the order processing times are identical. We identify several polynomially solvable cases of the problem, and develop a branch and bound algorithm and three Tabu search algorithms to solve the general problem. We then conduct computational experiments to evaluate the performance of the three Tabu search algorithms and show that they are generally effective in terms of solution quality.
Abstract: In this paper, single-machine scheduling problems with variable processing times are discussed according to published instances in management engineering. The processing time of a job is the product of a "coefficient" of the job on position i and a "normal" processing time of the job. The criterion considered is to minimize the schedule length of all jobs. A lemma is proposed and proved. When there is no deadline constraint, the problem is solvable in polynomial time. Using 3-PARTITION, it is proved that if the problem is deadline-constrained, it is strongly NP-hard. Finally, a conjecture that remains to be proved is proposed.
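To make the positional model above concrete, the sketch below computes the schedule length when the actual processing time of the job placed in position i is coeff[i] times its normal processing time; the variable names and sample data are assumptions.

```python
# Minimal sketch of position-dependent processing times (assumed names/data).

def makespan(sequence, normal, coeff):
    """Schedule length when job sequence[i] occupies position i (0-based)."""
    return sum(coeff[i] * normal[j] for i, j in enumerate(sequence))

if __name__ == "__main__":
    normal = [3.0, 5.0, 2.0]              # "normal" processing times
    coeff = [1.0, 1.2, 1.5]               # positional coefficients
    # With increasing coefficients, placing longer jobs earlier reduces makespan.
    print(makespan([1, 0, 2], normal, coeff))   # 5*1.0 + 3*1.2 + 2*1.5 = 11.6
    print(makespan([2, 0, 1], normal, coeff))   # 2*1.0 + 3*1.2 + 5*1.5 = 13.1
```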
Funding: The Science and Technology Research Project Foundation of Shanxi Province, China (201803D421049).
Abstract: Background: Dry specimen transport has shown equivalence to traditional liquid transport using a novel high-risk Human papillomavirus (hrHPV) assay. Considering that dry transport might help overcome obstacles during cervical cancer screening in low- and middle-resource settings, this study was designed to evaluate different processing times of dry specimen transport using the same isothermal amplification hrHPV assay. Methods: 564 women between the ages of 30 and 55 were recruited from a colposcopy clinic. For each patient, two endocervical samples were collected and placed into empty collection tubes by a physician. Samples were stored at room temperature until analyzed for hrHPV using the AmpFire assay at two time points: 2 days and 2 weeks. 511 of the 564 participants with positive hrHPV were provided a colposcopy exam and quadrant biopsy. Results: A total of 1128 endocervical samples from 564 patients were tested by the AmpFire assay. Good agreement was found between the two time periods (Kappa ± standard error = 0.67 ± 0.04). Sensitivity (2 days/2 weeks) for CIN2+ was 95.28% (95% CI: 92.14%–98.42%) vs 90.57% (CI: 86.65%–94.49%), and specificity (2 days/2 weeks) was 22.47% (CI: 19.33%–25.61%) vs 28.15% (CI: 24.23%–32.07%), respectively. The difference in AmpFire HPV detection sensitivity for CIN2+ between the two time periods was not significant (P = 0.227), while the difference in specificity for CIN2+ was significant (P = 0.001). The difference in Ct values, 29.23 (CI: 28.15–30.31) and 29.27 (CI: 28.19–30.35), between the two time points was not significant (P = 0.164). Conclusion: Processing of dry brush specimens can be delayed up to 2 weeks using the AmpFire assay platform, which supports cervical cancer prevention programs in low-to-middle-income countries (LMICs).
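For reference, the agreement and accuracy measures reported above can be computed from 2x2 tables as in the sketch below; the counts used here are made up for illustration and are not the study's data.

```python
# Minimal sketch of Cohen's kappa and sensitivity/specificity (made-up counts).

def cohens_kappa(a, b, c, d):
    """Kappa for a 2x2 agreement table [[a, b], [c, d]] (rows: test 1, cols: test 2)."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    print(cohens_kappa(300, 60, 50, 154))                     # 2-day vs 2-week agreement
    print(sensitivity_specificity(tp=101, fn=5, tn=90, fp=315))
```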
Abstract: 1) Seismological Bureau of Sichuan Province, Chengdu 610041, China; 2) Center for Analysis and Prediction, State Seismological Bureau, Beijing 100036, China; 3) Observation Center for Prediction of Earthquakes and Volcanic Eruptions, Faculty of Sciences, Tohoku University, Sendai 98077, Japan.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number RGP 2/158/43, and to the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2022R161, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Big health data collection and storage for further analysis is a challenging task because the data is large and has many features. Several cloud-based IoT health providers have been described in the literature previously. Furthermore, there are a number of issues related to the time consumed and the overall network performance when it comes to big data. In existing methods, lower-performing optimization algorithms were used for optimizing the data. In the proposed method, the Chaotic Cuckoo Optimization algorithm is used for feature selection, and a Convolutional Support Vector Machine (CSVM) is used. The research presents a method for analyzing healthcare information for use in future prediction. The major goal is to handle a variety of data while improving efficiency and minimizing processing time. The suggested method employs a hybrid approach that is divided into two stages. In the first stage, it reduces the features by using the Chaotic Cuckoo Optimization algorithm with Levy flight, opposition-based learning, and a distributor operator. In the second stage, CSVM is used, which combines the benefits of a convolutional neural network (CNN) and an SVM; the CSVM modifies the CNN's convolution product to learn hidden patterns deep inside data sources. For improved economic flexibility, greater protection, greater analytics with confidentiality, and lower operating cost, the suggested approach is built on fog computing. Overall, the experimental results show that the suggested method can minimize the number of features in the datasets, enhances the accuracy by 82%, and decreases the processing time.
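The Levy flight component mentioned above can be illustrated with a generic step generator of the kind used in cuckoo-search-style optimizers (Mantegna's algorithm); this is not the paper's Chaotic Cuckoo Optimization, and the beta value and step scale are assumptions.

```python
# Minimal Levy-flight step sketch via Mantegna's algorithm (generic illustration).
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw one Levy-distributed step of dimension `dim`."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate solution (e.g., feature weights) with a small Levy step.
candidate = np.random.rand(10)
new_candidate = candidate + 0.01 * levy_step(10)
print(new_candidate)
```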
Abstract: In this paper, a fabrication scheduling problem concerning the production of components at a single manufacturing facility was studied, in which the manufactured components are subsequently assembled into a finite number of end products. Each product was assumed to comprise a component common to all jobs and a component unique to itself. Common operations were processed in batches, and each batch required a setup time. A product is completed when both of its operations have been processed and are available. The optimality criterion considered was the minimization of weighted flow time. For this scheduling problem, the optimal schedules were described in a weighted shortest processing time first (WSPT) order, and two algorithms were constructed corresponding to batch availability and item availability, respectively.
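The WSPT rule referenced above is easy to state in code: for total weighted flow time on a single machine, jobs are sequenced in non-increasing order of weight over processing time. The sketch below shows only this basic rule; the batching and setup aspects of the paper's problem are not modeled.

```python
# Minimal WSPT sketch (batching/setups omitted; sample data is illustrative).

def wspt_order(jobs):
    """jobs: list of (name, processing_time, weight) tuples."""
    return sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)

def total_weighted_flow_time(sequence):
    t, total = 0.0, 0.0
    for _, p, w in sequence:
        t += p
        total += w * t
    return total

if __name__ == "__main__":
    jobs = [("A", 4.0, 1.0), ("B", 2.0, 3.0), ("C", 5.0, 2.0)]
    seq = wspt_order(jobs)
    print([name for name, _, _ in seq], total_weighted_flow_time(seq))
```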
Funding: The Ontario Ministry of Agriculture, Food and Rural Affairs, Canada, which supported this project by providing updated soil information on Ontario and Middlesex County; also supported by the Natural Science and Engineering Research Council of Canada (No. RGPIN-2014-4100).
Abstract: Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach disaggregation and harmonization of soil map units through resampled classification trees (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach pure polygon disaggregation (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), with two sample sizes (83100 and 19420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare the disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. The MLA C5.0 (Kappa index = 0.58–0.63) showed better performance than RF (Kappa index = 0.53–0.54) based on the larger sample size, and PPD with C5.0 based on the larger sample size was the best-performing (Kappa index = 0.63) approach. Based on the smaller sample size, both cLHS (Kappa index = 0.41–0.48) and SRS (Kappa index = 0.40–0.47) produced similar accuracy results. The disaggregation approach PPD exhibited lower processing capacity and time demands (1.62–5.93 h) while yielding maps with lower uncertainty as compared to DSMART (2.75–194.2 h). For CSMs predominantly composed of pure polygons, utilizing PPD for soil series disaggregation is a more efficient and rational choice. However, DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
Abstract: A real-time algorithm is presented here to recognize and analyze 8-channel simultaneous electrocardiograph (ECG) signals. The algorithm first transforms the 8-channel simultaneous ECG into three orthogonal vectors and the spatial velocity, then forms the spatial velocity sample and uses this sample to recognize each beat. The algorithm computes the averaged parameters using the averaged spatial velocity and the averaged ECG, and the current parameters using the current beat period and the current QRS width. The algorithm can precisely recognize the P, QRS, and T onsets and ends of simultaneous 12-lead ECG, as well as some arrhythmias such as premature ventricular beats, ventricular escape beats, R-on-T, bigeminy, and trigeminy. The algorithm software works well on a real 8-channel ECG system and meets the design requirements.
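The spatial velocity mentioned above is commonly computed from three orthogonal leads as the magnitude of their time derivatives, which sharpens QRS onsets and ends for beat detection. The sketch below shows this computation on synthetic signals; the sampling rate and waveforms are assumptions for illustration only.

```python
# Minimal spatial-velocity sketch on synthetic orthogonal leads (illustration only).
import numpy as np

fs = 500.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 1.2 * t)               # stand-ins for orthogonal leads X, Y, Z
y = 0.5 * np.sin(2 * np.pi * 1.2 * t + 0.3)
z = 0.2 * np.sin(2 * np.pi * 1.2 * t + 0.7)

dx, dy, dz = (np.gradient(s, 1.0 / fs) for s in (x, y, z))
spatial_velocity = np.sqrt(dx**2 + dy**2 + dz**2)

print(spatial_velocity.max())                 # peaks align with rapid deflections (QRS)
```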
Abstract: Ground conditions and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for the probabilistic estimation of ground conditions and construction time and costs is proposed, which integrates a ground prediction approach based on a Markov process with time and cost variance analysis based on Monte Carlo (MC) simulation. The former provides a probabilistic description of ground classification along the tunnel alignment according to the geological information revealed by the geological profile and boreholes. The latter provides a probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground conditions and the construction time and costs are estimated in a probabilistic way. For most items, in order to estimate the data needed for this methodology, a number of questionnaires were distributed among tunneling experts, and the mean values of the responses are applied. These results help both the owners and the contractors to be aware of the risk that they should carry before construction, and are useful for both tendering and bidding.
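The two ingredients described above can be illustrated with a small Monte Carlo sketch: a Markov chain over ground classes along the alignment, and per-class excavation time drawn from expert-style triangular distributions. All transition probabilities, class parameters, and section counts below are made-up assumptions, not the Hamro tunnel data.

```python
# Minimal Markov-chain + Monte Carlo sketch (all numbers are illustrative).
import random

TRANSITION = {                      # P(next class | current class)
    "good": {"good": 0.7, "fair": 0.25, "poor": 0.05},
    "fair": {"good": 0.2, "fair": 0.6, "poor": 0.2},
    "poor": {"good": 0.1, "fair": 0.4, "poor": 0.5},
}
DAYS_PER_SECTION = {                # (min, mode, max) days per tunnel section
    "good": (1.0, 1.5, 2.5),
    "fair": (2.0, 3.0, 5.0),
    "poor": (4.0, 6.0, 10.0),
}

def simulate_total_time(n_sections=100, start="good"):
    state, total = start, 0.0
    for _ in range(n_sections):
        classes, probs = zip(*TRANSITION[state].items())
        state = random.choices(classes, probs)[0]
        lo, mode, hi = DAYS_PER_SECTION[state]
        total += random.triangular(lo, hi, mode)
    return total

runs = sorted(simulate_total_time() for _ in range(2000))
print("P10 / P50 / P90 days:", runs[200], runs[1000], runs[1800])
```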
Abstract: Some properties of Super-Brownian motion have been approached by Dawson & Hochberg [1], Iscoe [2] & [3], Konno & Shiga [4], and so on. In this paper, we limit our attention to the occupation time processes of the Super-Brownian motion, and try to give an intuitive proof of their absolute continuity with respect to the Lebesgue measure on R^d (d ≤ 3) when the initial measure of the Super-Brownian motion is absolutely continuous.
Funding: National Science Foundation of China (60274032), SRFDP (20030248040), SRSP (04QMH1405).
Abstract: Due to the widespread application of the PID controller in industrial control systems, it is desirable to know the complete set of all stabilizing PID controllers for a given plant before controller design and tuning. In this paper, the stabilization problems of the classical proportional-integral-derivative (PID) controller and the single-parameter PID controller (containing only one adjustable parameter) for integral processes with time delay are investigated, respectively. The complete set of stabilizing parameters of the classical PID controller is determined using a version of the Hermite-Biehler Theorem applicable to quasipolynomials. Since the stabilization problem of the single-parameter PID controller cannot be treated by the Hermite-Biehler Theorem, a simple method called the dual-locus diagram is employed to derive the stabilizing range of the single-parameter PID controller. These results provide insight into the tuning of PID controllers.
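As a crude numeric illustration only (not the Hermite-Biehler analysis used in the paper), one can check whether a particular PID gain set stabilizes an integral process with delay, G(s) = K*exp(-L*s)/s, by replacing the delay with a first-order Pade approximation and inspecting the roots of the resulting characteristic polynomial. The gains, K, and L below are arbitrary illustrative values, and the Pade-based check is only indicative.

```python
# Crude stability check via a first-order Pade approximation (illustration only).
import numpy as np

def stable_with_pade(kp, ki, kd, K=1.0, L=1.0):
    # exp(-L*s) ~= (1 - L*s/2) / (1 + L*s/2); the characteristic equation becomes
    # s^2 * (1 + L*s/2) + K * (1 - L*s/2) * (kd*s^2 + kp*s + ki) = 0
    p1 = np.array([L / 2, 1.0, 0.0, 0.0])                # s^2 * (1 + L*s/2)
    p2 = K * np.polymul([-L / 2, 1.0], [kd, kp, ki])     # delayed PID part
    char_poly = np.polyadd(p1, p2)
    return bool(np.all(np.roots(char_poly).real < 0))

print(stable_with_pade(kp=0.4, ki=0.05, kd=0.1))
print(stable_with_pade(kp=5.0, ki=3.0, kd=0.0))
```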