Funding: Japan International Cooperation Agency (JICA) via Malaysia-Japan Linkage Research Grant 2024.
Abstract: Although the image dehazing problem has received considerable attention over recent years, the existing models often prioritise performance at the expense of complexity, making them unsuitable for real-world applications, which require algorithms to be deployed on resource-constrained devices. To address this challenge, we propose WaveLiteDehaze-Network (WLD-Net), an end-to-end dehazing model that delivers performance comparable to complex models while operating in real time and using significantly fewer parameters. This approach capitalises on the insight that haze predominantly affects low-frequency information. By exclusively processing the image in the frequency domain using the discrete wavelet transform (DWT), we segregate the image into high and low frequencies and process them separately. This allows us to preserve high-frequency details and recover low-frequency components affected by haze, distinguishing our method from existing approaches that use spatial domain processing as the backbone, with DWT serving as an auxiliary component. DWT is applied at multiple levels for better information retention while also accelerating computation by downsampling feature maps. Subsequently, a learning-based fusion mechanism reintegrates the processed frequencies to reconstruct the dehazed image. Experiments show that WLD-Net outperforms other low-parameter models on real-world hazy images and rivals much larger models, achieving the highest PSNR and SSIM scores on the O-Haze dataset. Qualitatively, the proposed method demonstrates its effectiveness in handling a diverse range of haze types, delivering visually pleasing results and robust performance, while also generalising well across different scenarios. With only 0.385 million parameters (more than 100 times smaller than comparable dehazing methods), WLD-Net processes 1024×1024 images in just 0.045 s, highlighting its applicability across various real-world scenarios. The code is available at https://github.com/AliMurtaza29/WLD-Net.
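The frequency split described in this abstract (a DWT separating a hazy image into a low-frequency band, where haze mostly lives, and high-frequency detail bands) can be illustrated with a few lines of PyWavelets. This is a minimal sketch only: WLD-Net's learned subnetworks, multi-level decomposition, and fusion are not reproduced, and the restoration of the low-frequency band is left as an identity placeholder.

```python
# Illustrative sketch only: a single-level 2-D DWT split of a hazy image into a
# low-frequency band and high-frequency detail bands, followed by reconstruction.
import numpy as np
import pywt

def dwt_split_and_merge(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Decompose: cA holds low frequencies, (cH, cV, cD) hold high-frequency details.
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)

    # Placeholder for the learned low-frequency restoration (assumption: in the
    # real model this is a small network; here the band is left unchanged).
    cA_restored = cA

    # High-frequency details are kept as-is to preserve edges and texture.
    return pywt.idwt2((cA_restored, (cH, cV, cD)), wavelet)

# Usage: a random grayscale "image" stands in for a real hazy frame.
hazy = np.random.rand(256, 256).astype(np.float32)
dehazed = dwt_split_and_merge(hazy)
print(dehazed.shape)  # (256, 256)
```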
Affiliations: 1) Seismological Bureau of Sichuan Province, Chengdu 610041, China; 2) Center for Analysis and Prediction, State Seismological Bureau, Beijing 100036, China; 3) Observation Center for Prediction of Earthquakes and Volcanic Eruptions, Faculty of Sciences, Tohoku University, Sendai 98077, Japan.
Abstract: A simple way of interpolation for real-time processing is presented. For passive localization, the time delay between two signals can be determined by the peak of their cross-correlation. It is more efficient to estimate a cross-correlation function by the inverse FFT of the cross-spectral density. The original sampling rate is usually kept very low to reduce computation, so the sampling rate of the cross-correlation computed in this way is too low for a satisfactory estimate, and an interpolation procedure is therefore needed. The interpolation by a zero-augmented spectrum is concise, fast and accurate. The results of computer simulation and real underwater signal processing are given in the paper.
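As an illustration of the zero-augmented-spectrum idea, the sketch below estimates a time delay from the peak of a cross-correlation obtained as the inverse FFT of the cross-spectrum, interpolated by zero-padding that spectrum before the inverse transform. The signal length and upsampling factor are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: sub-sample delay estimation via a zero-augmented cross-spectrum.
import numpy as np

def delay_by_zero_padded_spectrum(x, y, upsample=8):
    n = len(x)
    # Cross-spectral density via real FFTs (y relative to x).
    s = np.conj(np.fft.rfft(x)) * np.fft.rfft(y)
    # Zero-augmented spectrum: irfft pads the half-spectrum with zeros up to the
    # requested length, which interpolates the circular cross-correlation.
    r = np.fft.irfft(s, n=upsample * n)
    lag = np.argmax(r)
    # Map circular lags beyond half the record to negative delays.
    if lag > upsample * n // 2:
        lag -= upsample * n
    return lag / upsample  # delay of y relative to x, in original samples

# Usage: y is x delayed by 3 samples (an integer delay, for clarity).
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.roll(x, 3)
print(delay_by_zero_padded_spectrum(x, y))  # approximately 3.0
```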
Abstract: In this paper, a single-machine scheduling model with a given common due date is considered. The processing time of a job is a linear decreasing function of its starting time. The objective function is to minimize the total weighted earliness award and tardiness penalty, and our aim is to find an optimal schedule that minimizes this objective. As the problem is NP-hard, some properties and polynomial-time solvable cases of the problem are given. A dynamic programming algorithm for the general case of the problem is provided.
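To make the model concrete, the sketch below evaluates one candidate sequence under assumed details: a processing time that decreases linearly in the start time, p_j(t) = a_j - b*t, and a cost that combines weighted earliness and weighted tardiness around a common due date d. The exact functional form and whether earliness enters as a reward or a penalty are not specified in the abstract, so a symmetric penalty is assumed here.

```python
# A small sketch, under assumed model details, of evaluating one schedule.
def weighted_earliness_tardiness(sequence, a, b, w_early, w_tardy, d):
    t, cost = 0.0, 0.0
    for j in sequence:
        p = a[j] - b * t          # actual processing time given start time t
        t += p                    # completion time of job j
        cost += w_early[j] * max(d - t, 0) + w_tardy[j] * max(t - d, 0)
    return cost

# Usage with three illustrative jobs and due date d = 10.
a = [6.0, 5.0, 4.0]
print(weighted_earliness_tardiness([0, 1, 2], a, b=0.1,
                                   w_early=[1, 1, 1], w_tardy=[2, 2, 2], d=10.0))
```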
Funding: Supported by the National Natural Science Foundation of China (NNSFC) (Grant No. 60274043) and by the National High-tech Research & Development Project (863) (Grant No. 2002AA412610).
Abstract: In this paper, by considering the fuzzy nature of the data in real-life problems, single-machine scheduling problems with fuzzy processing times and multiple objectives are formulated, and an efficient genetic algorithm suitable for solving these problems is proposed. As an illustrative numerical example, the processing of twenty jobs on a machine is considered. The feasibility and effectiveness of the proposed method are demonstrated in simulation.
Funding: Supported by the National Postdoctoral Science Foundation of P.R. China (9902).
Abstract: In this paper, single-machine scheduling problems with variable processing times are raised. The criteria of the problems considered are minimizing the schedule length of all jobs, the flow time, the number of tardy jobs, and so on. The complexity of the problem is determined.
Abstract: In this paper, single-machine scheduling problems with variable processing times are discussed according to published instances in management engineering. The processing time of a job is the product of a "coefficient" associated with position i and a "normal" processing time of the job. The criterion considered is to minimize the schedule length of all jobs. A lemma is proposed and proved. With no deadline constraint, the problem admits a polynomial-time algorithm. It is proved, by a reduction from 3-partition, that if the problem is deadline-constrained, it is strongly NP-hard. Finally, a conjecture that remains to be proved is proposed.
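The following sketch illustrates why the unconstrained case is easy, under the assumption (not necessarily the paper's lemma) that the schedule length is a sum of position-coefficient times normal-time products: by the rearrangement inequality, the sum is minimized by pairing the largest positional coefficients with the smallest normal processing times.

```python
# Illustrative sketch (not the paper's proof): p(position i, job j) = c[i] * t[j],
# so the schedule length sum(c[i] * t[pi(i)]) is minimized by opposite-order pairing.
def min_schedule_length(coeffs, normal_times):
    pairs = zip(sorted(coeffs, reverse=True), sorted(normal_times))
    return sum(c * t for c, t in pairs)

# Usage with illustrative data: 1.2*2 + 1.0*3 + 0.8*5 = 9.4.
print(min_schedule_length([1.2, 1.0, 0.8], [3.0, 5.0, 2.0]))
```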
Abstract: Most papers in scheduling research have treated individual job processing times as fixed parameters. However, in many practical situations, a manager may control processing times by reallocating resources. In this paper, the authors consider a machine scheduling problem with controllable processing times. In the first part of the paper, a special case where the processing times and compression costs are uniform among jobs is discussed; theoretical results are derived that aid in developing an O(n^2) algorithm to solve the problem optimally. In the second part, the discussion is generalized and an effective heuristic for the general problem is presented.
Funding: The Ontario Ministry of Agriculture, Food and Rural Affairs, Canada, which supported this project by providing updated soil information on Ontario and Middlesex County; also supported by the Natural Sciences and Engineering Research Council of Canada (No. RGPIN-2014-4100).
Abstract: Conventional soil maps (CSMs) often have multiple soil types within a single polygon, which hinders the ability of machine learning to accurately predict soils. Soil disaggregation approaches are commonly used to improve the spatial and attribute precision of CSMs. The approach "disaggregation and harmonization of soil map units through resampled classification trees" (DSMART) is popular but computationally intensive, as it generates and assigns synthetic samples to soil series based on the areal coverage information of CSMs. Alternatively, the disaggregation approach "pure polygon disaggregation" (PPD) assigns soil series based solely on the proportions of soil series in pure polygons in CSMs. This study compared these two disaggregation approaches by applying them to a CSM of Middlesex County, Ontario, Canada. Four different sampling methods were used: two sampling designs, simple random sampling (SRS) and conditional Latin hypercube sampling (cLHS), with two sample sizes (83,100 and 19,420 samples per sampling plan), both based on an area-weighted approach. Two machine learning algorithms (MLAs), the C5.0 decision tree (C5.0) and random forest (RF), were applied to the disaggregation approaches to compare the disaggregation accuracy. The accuracy assessment utilized a set of 500 validation points obtained from the Middlesex County soil survey report. The MLA C5.0 (Kappa index = 0.58–0.63) showed better performance than RF (Kappa index = 0.53–0.54) based on the larger sample size, and PPD with C5.0 based on the larger sample size was the best-performing (Kappa index = 0.63) approach. Based on the smaller sample size, both cLHS (Kappa index = 0.41–0.48) and SRS (Kappa index = 0.40–0.47) produced similar accuracy results. The disaggregation approach PPD exhibited lower processing capacity and time demands (1.62–5.93 h) while yielding maps with lower uncertainty as compared to DSMART (2.75–194.2 h). For CSMs predominantly composed of pure polygons, utilizing PPD for soil series disaggregation is a more efficient and rational choice. However, DSMART is the preferable approach for disaggregating soil series that lack pure polygon representations in the CSMs.
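The schematic sketch below shows the general shape of the workflow the abstract describes, not the DSMART or PPD implementations themselves: synthetic training labels are drawn in proportion to the areal share of each soil series within a map unit, and one of the two compared MLAs (random forest) is fitted on environmental covariates. Series names, proportions, covariates and sample counts are illustrative assumptions.

```python
# Schematic only: area-weighted label sampling followed by a random-forest fit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Assumed map-unit composition: three soil series with areal proportions.
series = np.array(["SeriesA", "SeriesB", "SeriesC"])
proportions = np.array([0.6, 0.3, 0.1])

# Area-weighted sampling of 1000 synthetic labels for one map unit.
labels = rng.choice(series, size=1000, p=proportions)

# Illustrative covariates (e.g. elevation, slope, wetness index) at those points.
covariates = rng.normal(size=(1000, 3))

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(covariates, labels)
print(model.predict(covariates[:5]))
```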
Abstract: A real-time algorithm is presented here to recognize and analyze 8-channel simultaneous electrocardiograph (ECG) signals. The algorithm first transforms the 8-channel simultaneous ECG into three orthogonal vectors and a spatial velocity, then forms the spatial velocity sample and uses this sample to recognize each beat. The algorithm computes averaged parameters using the averaged spatial velocity and the averaged ECG, and current parameters using the current beat period and the current QRS width. The algorithm can precisely recognize the P, QRS and T onsets and ends of simultaneous 12-lead ECG, as well as some arrhythmias such as premature ventricular beats, ventricular escape beats, R-on-T, bigeminy and trigeminy. The algorithm software works well on a real 8-channel ECG system and meets its design requirements.
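For readers unfamiliar with the spatial velocity signal, the sketch below computes it in the form commonly used in vectorcardiography: the magnitude of the time derivative of the three orthogonal leads. How the paper derives the orthogonal leads from the 8 recorded channels, and its exact detection thresholds, are not reproduced; the threshold here is a crude illustration.

```python
# Minimal sketch of a spatial-velocity signal from three orthogonal leads (X, Y, Z).
import numpy as np

def spatial_velocity(x, y, z):
    dx, dy, dz = np.diff(x), np.diff(y), np.diff(z)
    return np.sqrt(dx**2 + dy**2 + dz**2)

# Usage: synthetic leads; peaks of the spatial velocity mark candidate QRS complexes.
t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t)
y = np.cos(2 * np.pi * 5 * t)
z = 0.1 * t
sv = spatial_velocity(x, y, z)
candidates = np.where(sv > 0.8 * sv.max())[0]  # crude threshold, illustration only
print(len(sv), candidates[:5])
```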
Abstract: Some properties of Super-Brownian motion have been approached by Dawson & Hochberg [1], Iscoe [2] & [3], Konno & Shiga [4], and so on. In this paper, we limit our attention to the occupation time processes of the Super-Brownian motion, and try to give an intuitive proof of their absolute continuity with respect to the Lebesgue measure on R^d (d ≤ 3) when the initial measure of the Super-Brownian motion is absolutely continuous.
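For orientation, the occupation time process of a measure-valued process is usually defined as the time integral below; the notation is generic and not necessarily identical to the paper's.

```latex
% Generic definition of the occupation time process of a measure-valued process X_t.
\[
  Y_t(A) \;=\; \int_0^t X_s(A)\,\mathrm{d}s ,
  \qquad A \in \mathcal{B}(\mathbb{R}^d),\quad d \le 3 .
\]
```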
Abstract: Ground conditions and construction (excavation and support) time and costs are the key factors in decision-making during the planning and design phases of a tunnel project. An innovative methodology for probabilistic estimation of ground conditions and construction time and costs is proposed, which integrates a ground prediction approach based on a Markov process with time and cost variance analysis based on Monte-Carlo (MC) simulation. The former provides a probabilistic description of the ground classification along the tunnel alignment according to the geological information revealed by the geological profile and boreholes. The latter provides a probabilistic description of the expected construction time and costs for each operation according to survey feedback from experts. An engineering application to the Hamro tunnel is then presented to demonstrate how the ground conditions and the construction time and costs are estimated in a probabilistic way. For most items, in order to estimate the data needed for this methodology, a number of questionnaires were distributed among tunnelling experts and the mean values of the responses were applied. These results help both owners and contractors become aware of the risk they carry before construction, and are useful for both tendering and bidding.
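A toy sketch of the two ingredients named above: a Markov chain over ground classes along the alignment, and Monte-Carlo sampling of per-segment advance time and cost. The transition probabilities, unit times/costs, segment counts and distributions are illustrative assumptions, not values from the Hamro tunnel study.

```python
# Toy Markov-chain + Monte-Carlo sketch for tunnel time/cost distributions.
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.8, 0.2, 0.0],      # transition matrix over ground classes I-III
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.7]])
time_per_seg = [(1.0, 1.5), (1.5, 2.5), (2.5, 4.0)]   # (low, high) days per segment
cost_per_seg = [(10., 15.), (15., 25.), (25., 40.)]   # (low, high) cost units

def simulate_once(n_segments=100):
    state, total_t, total_c = 0, 0.0, 0.0
    for _ in range(n_segments):
        state = rng.choice(3, p=P[state])
        total_t += rng.uniform(*time_per_seg[state])
        total_c += rng.uniform(*cost_per_seg[state])
    return total_t, total_c

samples = np.array([simulate_once() for _ in range(2000)])
print("mean time:", samples[:, 0].mean(),
      "90th percentile cost:", np.percentile(samples[:, 1], 90))
```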
Funding: Supported by the union project of Peking University Third Hospital & Chinese Academy of Sciences (Grant No. 7490-04, Grant No. KJZD-EW-TZ-L03), the Sichuan Youth Science & Technology Foundation (Grant No. 13QNJJ0034), the West Light Foundation of the Chinese Academy of Sciences, the National Major Scientific Equipment Program (Grant No. 2012YQ120080), and the National Science Foundation of China (Grant No. 6118082).
Abstract: A multi-GPU system designed for high-speed, real-time signal processing of optical coherence tomography (OCT) is described herein. For OCT data sampled in linear wavenumbers, the maximum processing rates reached 2.95 MHz for 1024-OCT and 1.96 MHz for 2048-OCT. Data sampled at linear wavelengths were re-sampled using a time-domain interpolation method and a zero-padding interpolation method to improve image quality. The maximum processing rates for 1024-OCT reached 2.16 MHz for the time-domain method and 1.26 MHz for the zero-padding method. The maximum processing rates for 2048-OCT reached 1.58 MHz and 0.68 MHz, respectively. This method is capable of high-speed, real-time processing for OCT systems.
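The CPU sketch below illustrates the resampling step the abstract refers to: an A-line acquired on a grid that is linear in wavelength is interpolated onto a grid linear in wavenumber before the FFT that forms the depth profile. The GPU pipeline and the zero-padding variant are not reproduced, and the spectral range, reflector depth and sample count are illustrative assumptions.

```python
# CPU sketch of linear-wavelength to linear-wavenumber resampling for spectral OCT.
import numpy as np

n = 1024
wavelength = np.linspace(800e-9, 880e-9, n)       # linear-in-wavelength sampling
k_nonuniform = 2 * np.pi / wavelength             # corresponding (non-uniform) wavenumbers
k_uniform = np.linspace(k_nonuniform.min(), k_nonuniform.max(), n)

z = 200e-6                                        # single reflector 200 um deep (toy value)
fringe = np.cos(2 * z * k_nonuniform)             # spectral interference fringe

# Time-domain (linear) interpolation onto the uniform-k grid; np.interp needs an
# increasing abscissa, so the arrays are reversed (k decreases as wavelength grows).
fringe_k = np.interp(k_uniform, k_nonuniform[::-1], fringe[::-1])

depth_profile = np.abs(np.fft.rfft(fringe_k))
print(depth_profile.argmax())                     # bin of the reconstructed reflector
```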
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/158/43), and to the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Collecting and storing big health data for further analysis is a challenging task because such data are large and have many features. Several cloud-based IoT health providers have been described in the literature previously. Furthermore, there are a number of issues related to the time consumed and the overall network performance when it comes to big data. In existing methods, less effective optimization algorithms were used for optimizing the data. In the proposed method, the Chaotic Cuckoo Optimization algorithm is used for feature selection, and a Convolutional Support Vector Machine (CSVM) is used. The research presents a method for analyzing healthcare information for use in future prediction. The major goal is to handle a variety of data while improving efficiency and minimizing processing time. The suggested method employs a hybrid approach that is divided into two stages. In the first stage, it reduces the features by using the Chaotic Cuckoo Optimization algorithm with Levy flight, opposition-based learning, and a distributor operator. In the second stage, CSVM is used, which combines the benefits of a convolutional neural network (CNN) and an SVM; the CSVM modifies CNN's convolution product to learn hidden patterns deep inside the data sources. For improved economic flexibility, greater protection, richer analytics with confidentiality, and lower operating cost, the suggested approach is built on fog computing. Overall, the experimental results show that the suggested method can minimize the number of features in the datasets, enhances the accuracy by 82%, and decreases the processing time.
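One building block named in this abstract, the Levy-flight step used to perturb candidate solutions in cuckoo-search variants, is sketched below using Mantegna's algorithm. The chaotic map, opposition-based learning, the distributor operator and the CSVM classifier are not shown, and beta = 1.5 is an illustrative default.

```python
# Sketch of Levy-flight step generation (Mantegna's algorithm) for cuckoo-search-style updates.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    # Mantegna's method: step = u / |v|^(1/beta), u ~ N(0, sigma_u^2), v ~ N(0, 1).
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Usage: one heavy-tailed step in a 5-dimensional search space.
print(levy_step(5))
```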
Abstract: In this paper, a fabrication scheduling problem concerning the production of components at a single manufacturing facility was studied, in which the manufactured components are subsequently assembled into a finite number of end products. Each product was assumed to comprise a component common to all jobs and a component unique to itself. Common operations were processed in batches and each batch required a setup time. A product is completed when both of its operations have been processed and are available. The optimality criterion considered was the minimization of weighted flow time. For this scheduling problem, the optimal schedules were described in a weighted shortest processing time first (WSPT) order, and two algorithms were constructed corresponding to batch availability and item availability, respectively.
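The WSPT rule named above is easy to state in code: jobs are sequenced in non-decreasing order of the ratio of processing time to weight. The batching of common operations and the setup times from the paper are not modelled in this minimal sketch.

```python
# Minimal sketch of the WSPT (weighted shortest processing time first) rule.
def wspt_order(processing_times, weights):
    jobs = range(len(processing_times))
    return sorted(jobs, key=lambda j: processing_times[j] / weights[j])

# Usage with illustrative data: job 3 has the smallest p/w ratio, so it goes first.
print(wspt_order([4.0, 2.0, 6.0, 3.0], [2.0, 1.0, 3.0, 3.0]))  # [3, 0, 1, 2]
```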
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60972106, the China Postdoctoral Science Foundation under Grant No. 2014M561053, the Humanity and Social Science Foundation of the Ministry of Education of China under Grant No. 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No. E2016202341.
Abstract: The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on the Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning.
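The heavily simplified sketch below conveys only the divide-and-conquer idea: a delay-embedded series is split into modalities (here by k-means, purely as a stand-in for the learned gating) and a separate Gaussian process is fitted per modality, returning a predictive mean and standard deviation. The paper's GPM model, its gating, and SHC-EM learning with pseudo inputs are not reproduced.

```python
# Simplified divide-and-conquer stand-in: k-means partition + one GP expert per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy series and a 3-lag delay embedding: predict s[t] from (s[t-3], s[t-2], s[t-1]).
s = np.sin(np.linspace(0, 30, 600)) + 0.05 * rng.standard_normal(600)
X = np.stack([s[i:i + 3] for i in range(len(s) - 3)])
y = s[3:]

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
experts = {c: GaussianProcessRegressor(kernel=RBF(), alpha=1e-3)
               .fit(X[km.labels_ == c], y[km.labels_ == c])
           for c in range(3)}

# Predict the next value with the expert of the query's modality (nearest centroid).
query = X[-1:]
mean, std = experts[int(km.predict(query)[0])].predict(query, return_std=True)
print(mean, std)  # point prediction and its uncertainty
```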
Funding: National Science Foundation of China (60274032), SRFDP (20030248040), SRSP (04QMH1405).
Abstract: Due to the widespread application of the PID controller in industrial control systems, it is desirable to know the complete set of all the stabilizing PID controllers for a given plant before controller design and tuning. In this paper, the stabilization problems of the classical proportional-integral-derivative (PID) controller and the single-parameter PID controller (containing only one adjustable parameter) for integral processes with time delay are investigated, respectively. The complete set of stabilizing parameters of the classical PID controller is determined using a version of the Hermite-Biehler Theorem applicable to quasipolynomials. Since the stabilization problem of the single-parameter PID controller cannot be treated by the Hermite-Biehler Theorem, a simple method called the dual-locus diagram is employed to derive the stabilizing range of the single-parameter PID controller. These results provide insight into the tuning of PID controllers.
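The simulation sketch below shows the kind of plant the abstract is about: a PID loop around an integrating process with time delay, dy/dt = K*u(t - L), discretised with a simple Euler step. It only checks one candidate gain set numerically; the gains, process gain and delay are illustrative assumptions and are not taken from the stabilizing sets derived in the paper.

```python
# Euler simulation of a PID loop around an integrating process with input delay.
import numpy as np

K, L, dt, T = 1.0, 0.5, 0.01, 20.0            # process gain, delay, step, horizon
kp, ki, kd = 0.8, 0.05, 0.3                   # candidate PID parameters (assumed)

n = int(T / dt)
delay_steps = int(L / dt)
u_hist = np.zeros(n + delay_steps)            # control history used to realise the delay
y, integ, prev_e = 0.0, 0.0, 0.0
out = np.zeros(n)

for i in range(n):
    e = 1.0 - y                               # unit set-point
    integ += e * dt
    deriv = (e - prev_e) / dt
    prev_e = e
    u_hist[i + delay_steps] = kp * e + ki * integ + kd * deriv
    y += dt * K * u_hist[i]                   # delayed control drives the integrator
    out[i] = y

print("final value:", out[-1], "overshoot:", out.max() - 1.0)
```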
Funding: Supported by the National Natural Science Foundation of China (No. 61205106).
Abstract: Considering the influence of more random atmospheric turbulence, worse pointing errors and highly dynamic links on the transmission performance of mobile multiple-input multiple-output (MIMO) free-space optics (FSO) communication systems, this paper establishes a channel model for the mobile platform. Based on the combination of Alamouti space-time coding and time-hopping ultra-wideband (TH-UWB) communications, a novel repetition space-time coding (RSTC) method for mobile 2×2 free-space optical communications with pulse position modulation (PPM) is developed. In particular, two decoding methods, equal gain combining (EGC) maximum likelihood detection (MLD) and correlation matrix detection (CMD), are derived. When a quasi-static fading and weak-turbulence channel model is considered, simulation results show that, whether the channel state information (CSI) is known or not, the coded system demonstrates significantly better symbol error rate (SER) performance than the uncoded one. In other words, transmit diversity can be achieved while conveying the information only through the time delays of the modulated signals transmitted from different antennas. CMD has almost the same signal-combining effect as maximal ratio combining (MRC). However, when the channel correlation increases, the SER performance of the coded 2×2 system degrades significantly.
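The toy sketch below conveys the flavour of repetition-coded PPM with equal-gain combining over a 2×2 intensity channel: both transmit apertures send the same PPM symbol, each receive aperture detects the summed intensity, and the receiver adds the branches and picks the slot with the largest combined energy. The channel gains, noise level and modulation order are illustrative; fading statistics, pointing errors, TH-UWB framing and the Alamouti-style variant from the paper are not modelled.

```python
# Toy repetition-coded 4-PPM over a 2x2 intensity channel with equal-gain combining.
import numpy as np

rng = np.random.default_rng(7)
M = 4                                     # 4-PPM: one pulse in one of four slots
h = rng.uniform(0.5, 1.5, size=(2, 2))    # intensity gains, tx apertures x rx apertures
sigma = 0.2                               # receiver noise standard deviation (assumed)

errors, n_sym = 0, 2000
for _ in range(n_sym):
    sym = rng.integers(M)
    x = np.zeros(M)
    x[sym] = 1.0                          # PPM waveform expressed as slot energies
    # Each rx aperture sees the sum of both tx apertures (repetition coding) plus noise.
    r = np.array([(h[0, j] + h[1, j]) * x + rng.normal(0, sigma, M) for j in range(2)])
    decision = np.argmax(r.sum(axis=0))   # EGC: sum the branches, pick the max-energy slot
    errors += decision != sym
print("SER:", errors / n_sym)
```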