In parallel-batching machine scheduling, all jobs in a batch start and complete at the same time, and the processing time of a batch is the maximum processing time of any job in it. For the unbounded parallel-batching machine scheduling problem of minimizing the maximum lateness, denoted 1|p-batch|L_max, a dynamic programming algorithm with time complexity O(n^2) is well known in the literature. This algorithm was later improved to run in O(n log n) time. In this note, we present another O(n log n) algorithm that simplifies the data structures and implementation details.
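As background, the classical O(n^2) dynamic program mentioned above can be sketched as follows (a minimal illustration, not the note's O(n log n) method). It relies on a known structural property: sorting jobs in nondecreasing processing time (SPT), an optimal schedule batches consecutive jobs in this order, so F(j), the minimum maximum lateness of a schedule for the job suffix starting at j, satisfies a simple recurrence over the end of the first batch.

```python
# Classical O(n^2) DP sketch for 1|p-batch, b=unbounded|Lmax.
# Jobs are (processing_time, due_date) pairs; in an optimal schedule,
# batches are consecutive runs of jobs in SPT order, and a batch's length
# is the processing time of its last (longest) job.

def lmax_unbounded_pbatch(jobs):
    jobs = sorted(jobs)                       # SPT order
    n = len(jobs)
    NEG = float("-inf")
    # F[j] = min Lmax over schedules of jobs j..n-1 started at time 0
    F = [NEG] * (n + 1)                       # F[n]: empty suffix
    for j in range(n - 1, -1, -1):
        best = float("inf")
        dmin = float("inf")
        for k in range(j + 1, n + 1):
            # first batch of the suffix is jobs j..k-1, length p_{k-1}
            p_batch = jobs[k - 1][0]
            dmin = min(dmin, jobs[k - 1][1])  # earliest due date in batch
            lmax_first = p_batch - dmin       # worst lateness in the batch
            rest = p_batch + F[k]             # remaining suffix is shifted
            best = min(best, max(lmax_first, rest))
        F[j] = best
    return F[0]
```

For example, with jobs (p, d) = (1, 3) and (2, 1), putting both in one batch completes them at time 2 for a maximum lateness of 1, which beats any two-batch schedule.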
A batch is a subset of jobs that must be processed jointly, in either serial or parallel form. For single-machine batching problems with the total completion time objective, the algorithmic aspects have been extensively studied in the literature. This paper presents the optimal batching structures of these problems for the batching scheme in which all jobs are placed in exactly N batches, where N is an arbitrary fixed batch number with 1 < N < n.
The scheduling problem on a single batching machine with family jobs is considered. The batching machine can process a group of jobs simultaneously as a batch, and jobs in the same batch complete at the same time. The batch size is assumed to be unbounded, and jobs belonging to different families cannot be processed in the same batch. The objective is to minimize the maximum lateness. For the problem with a fixed number m of families and n jobs, a polynomial-time algorithm based on dynamic programming with time complexity O(n(n/m+1)^m) is presented.
This research assessed the environmental impact of cement silo emissions on existing concrete batching facilities in M35-Mussafah, Abu Dhabi, United Arab Emirates. The assessment used an air quality dispersion model (AERMOD) to predict the ambient concentration of Portland cement particulate matter smaller than 10 microns (PM<sub>10</sub>) emitted to the atmosphere during loading and unloading activities from 176 silos located in 25 concrete batching facilities. AERMOD was applied to simulate and describe the dispersion of PM<sub>10</sub> released from the cement silos into the air, with simulations carried out for both controlled and uncontrolled cement silo scenarios. Results showed an incremental negative impact on air quality and public health from uncontrolled silo emissions, with the uncontrolled PM<sub>10</sub> emission sources estimated to contribute 528,958.32 kg/year to air pollution. The modeling comparison between the controlled and uncontrolled silos shows that the highest annual average concentration from controlled cement silos is 0.065 μg/m<sup>3</sup> and the highest daily value is 0.6 μg/m<sup>3</sup>; both values are negligible and will not lead to a significant air quality impact anywhere in the study domain. In contrast, the uncontrolled cement silos' highest annual average concentration is 328.08 μg/m<sup>3</sup> and their highest daily average is 1250.09 μg/m<sup>3</sup>, which might cause significant air quality impacts and health effects on the public and workers. The short-term and long-term average PM<sub>10</sub> concentrations predicted at the receptors by the dispersion model are discussed for both scenarios and compared with local and international air quality standards and guidelines.
This research study quantifies the PM<sub>10</sub> emission rates (g/s) from cement silos in 25 concrete batching facilities for both controlled and uncontrolled scenarios by applying the USEPA AP-42 guidelines' step-by-step approach. The study evaluates the potential environmental impact of fugitive cement dust emissions from 176 cement silos located in 25 concrete batching facilities in the M35 Mussafah industrial area of Abu Dhabi, UAE. Emission factors are crucial for quantifying the PM<sub>10</sub> emission rates (g/s) that support the development of source-specific emission estimates for area-wide inventories, the identification of major pollution sources, and the screening of sources for compliance monitoring and air dispersion modeling. The required data, covering production, raw material usage, energy consumption, and process-related details, was collected through field visits, surveys, and interviews with facility representatives so that emission rates could be calculated accurately. Statistical analysis was conducted on cement consumption and emission rates for the controlled and uncontrolled sources of the targeted facilities. The data show that average cement consumption among the facilities is approximately 88,160 MT/yr, with wide variation depending on facility size and production rate. Emission rates from controlled sources average 4.752 × 10<sup>-4</sup> g/s, while rates from uncontrolled sources average 0.6716 g/s. The analysis shows a statistically significant relationship (p < 0.05) and a perfect positive correlation (r = 1) between cement consumption and emission rates, indicating that emission rates increase with cement consumption. Furthermore, the emission rates from the controlled and uncontrolled scenarios were compared; the data showed a significant difference between the two, highlighting the effectiveness of control measures in reducing PM<sub>10</sub> emissions. The study's findings provide insight into the impact of cement silo emissions on air quality and the importance of implementing control measures in concrete batching facilities, and the comparative analysis supports the development of pollution control strategies in the ready-mix industry.
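The AP-42 approach above boils down to multiplying an activity level (cement throughput) by an emission factor and converting units. The sketch below illustrates that calculation; the emission-factor value is a hypothetical placeholder, not one of the study's factors.

```python
# Illustrative AP-42-style emission-rate estimate: annual PM10 mass from a
# throughput and an emission factor, converted to an average rate in g/s.
# The emission factor below (kg PM10 per metric ton of cement) is a
# made-up placeholder, not a value from the study.

SECONDS_PER_YEAR = 365 * 24 * 3600

def pm10_rate_g_per_s(cement_mt_per_yr, ef_kg_per_mt):
    """Average PM10 emission rate (g/s) = throughput x emission factor."""
    kg_per_year = cement_mt_per_yr * ef_kg_per_mt
    return kg_per_year * 1000.0 / SECONDS_PER_YEAR

# Example with the study's average consumption and a hypothetical factor:
rate = pm10_rate_g_per_s(88_160, ef_kg_per_mt=0.05)
```

Since the rate is a constant multiple of consumption under a fixed emission factor, the perfect correlation (r = 1) between consumption and emission rate reported above is exactly what this model predicts.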
The volume of instant delivery has grown significantly in recent years. Given the involvement of numerous heterogeneous stakeholders, instant delivery operations are inherently characterized by dynamics and uncertainty. This study introduces two order dispatching strategies, task buffering and dynamic batching, as potential solutions to these challenges. The task buffering strategy optimizes the timing of order assignment to couriers, thereby mitigating demand uncertainty. The dynamic batching strategy alleviates delivery pressure by assigning orders to couriers based on their residual capacity and extra delivery distances. To model the instant delivery problem and evaluate the performance of the order dispatching strategies, an Adaptive Agent-Based Order Dispatching (ABOD) approach is developed, which combines agent-based modelling, deep reinforcement learning, and the Kuhn-Munkres algorithm. ABOD effectively captures the system's uncertainty and heterogeneity, facilitates stakeholder learning in novel scenarios, and enables adaptive task buffering and dynamic batching decisions. Its efficacy is verified through both synthetic and real-world case studies. Experimental results demonstrate that the ABOD approach can increase customer satisfaction by up to 275.42% while reducing delivery distance by 11.38% compared to baseline policies. Additionally, ABOD adaptively adjusts buffering times to maintain high customer satisfaction across various demand scenarios. The approach thus offers valuable support to logistics providers in making informed order dispatching decisions in instant delivery operations.
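The Kuhn-Munkres (Hungarian) algorithm mentioned above solves the courier-to-order assignment problem: given a cost matrix (here, think extra delivery distances), find the one-to-one assignment of minimum total cost. The sketch below illustrates that objective by brute force over permutations; the cost values are hypothetical, and a real implementation would use Kuhn-Munkres for its O(n^3) running time.

```python
# Minimum-cost courier-to-order assignment: the objective the Kuhn-Munkres
# algorithm solves in O(n^3). This sketch brute-forces all permutations for
# clarity, which is only viable for tiny instances.
from itertools import permutations

def best_assignment(cost):
    """cost[i][j] = cost of giving order j to courier i; returns (total, map)."""
    n = len(cost)
    best_total, best_map = float("inf"), None
    for perm in permutations(range(n)):          # perm[i] = order for courier i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_map = total, perm
    return best_total, best_map

# Hypothetical extra delivery distances (km), 3 couriers x 3 orders:
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
total, assign = best_assignment(cost)
```

Here the optimum sends courier 0 to order 1, courier 1 to order 0, and courier 2 to order 2.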
In this paper we study the problem of scheduling a batching machine with non-identical job sizes. The jobs arrive simultaneously and have unit processing times, and the goal is to minimize the total completion time. Having shown that the problem is NP-hard, we put forward three approximation algorithms with worst-case ratios of 4, 2, and 3/2, respectively.
This paper addresses the scheduling problem involving batch processing machines, also known as parallel batching in the literature. The presented mixed integer programming formulation provides an elegant model for the problem under study; furthermore, it enables the solution of problem instances beyond the capability of the exact methods developed so far. To alleviate the computational burden, the authors propose MIP-based heuristic approaches that balance solution quality and computing time.
Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data then become unregulated: in the event of data loss, cloud storage providers might conceal the fact that data has been compromised in order to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud therefore remains a pressing issue. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We identify that their scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhance the auditing process using masking techniques and design new algorithms to strengthen security. We also provide formal proofs of the security of the signature algorithm and the auditing scheme. Our results show that the scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that it is not only secure and efficient but also supports batch auditing of cloud data: when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
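To see why batch auditing is cheaper than per-user auditing, consider the following toy sketch. It is emphatically not the paper's cryptographic scheme: real auditing schemes combine homomorphic authenticators so the aggregate can be checked against a short proof without the data itself, and they randomize the combination so paired corruptions cannot cancel out. The toy version merely shows the structural idea: one aggregated comparison replaces many individual ones.

```python
# Toy illustration of batch verification (NOT the paper's scheme): instead
# of comparing each record's tag individually, the verifier combines all
# tags and all recomputed tags and compares once.
import hashlib

def tag(record: bytes) -> int:
    """Integrity tag for one record (a plain hash in this toy version)."""
    return int.from_bytes(hashlib.sha256(record).digest(), "big")

def verify_individually(records, tags):
    """One comparison per record."""
    return all(tag(r) == t for r, t in zip(records, tags))

def verify_batch(records, tags):
    """Single aggregated comparison (XOR-combine both sides)."""
    agg = 0
    for r, t in zip(records, tags):
        agg ^= tag(r) ^ t
    return agg == 0

records = [b"ehr-block-1", b"ehr-block-2", b"ehr-block-3"]
tags = [tag(r) for r in records]
```

Note that plain XOR aggregation can be fooled by carefully paired corruptions; production schemes weight each term with a fresh random coefficient, which is part of what makes the real constructions subtle.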
On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in industrial AI, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise CNN training accuracy through approximations or require substantial computational resources, limiting practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across the forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical on-device CNN training with significantly reduced computational and power requirements.
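The "statistical parameters" the abstract refers to are the per-channel mini-batch mean and variance at the heart of standard BN. As a reference for what the accelerator computes in hardware, here is the textbook forward pass for a single channel, in pure Python for clarity:

```python
# Reference batch-normalization forward pass over one channel of a
# mini-batch: normalize by the mini-batch mean and (biased) variance,
# then scale and shift by the learned parameters gamma and beta.
import math

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n   # biased variance, as in training
    x_hat = [(v - mean) / math.sqrt(var + eps) for v in x]
    return [gamma * v + beta for v in x_hat]

y = batch_norm_forward([1.0, 2.0, 3.0, 4.0])
# with gamma=1, beta=0 the outputs have ~zero mean and ~unit variance
```

The backward pass additionally needs gradients of the loss with respect to mean and variance across the same mini-batch, which is why buffering and reuse of these statistics (the paper's interleaved buffering and resource sharing) matter so much in hardware.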
The flexible satellite batch production line is a complex discrete production system spanning multiple disciplines with mixed serial-parallel tasks. As the source of the production line process, the warehousing system faces urgent needs such as an uncertain production scale and rapid iteration and optimization of business processes. The requirements and architecture of complex discrete warehousing systems such as flexible satellite batch production lines are therefore studied. The physical system of intelligent equipment is abstracted into a digital model to form the underlying module, and a digital fusion framework of "business domain + middleware platform + intelligent equipment information model" is constructed. The granularity of microservice splitting is calculated from the dynamic correlation between user access instances and database table structures, and the platform's general warehousing functions are partitioned to support module customization, addition, and configuration. An open discrete warehousing system based on microservices is designed, and the software is developed on the SpringCloud framework. This architecture decouples business logic from physical hardware, enhances the maintainability and scalability of the system, and greatly improves its adaptability to different complex discrete warehousing business scenarios.
As a complicated optimization problem, the parallel batch processing machine scheduling problem (PBPMSP) arises in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. PBPMSP and scheduling problems with machine eligibility are each frequently considered; however, PBPMSP with machine eligibility is seldom explored. This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced partly heuristically and partly randomly, and the competitive search of memeplexes comprises two phases: competition between pairs of memeplexes is performed in the first phase, then iteration counts are adjusted based on the competition, and search strategies are adapted to the evolution quality of each memeplex in the second phase. An adaptive population shuffling is also given. Computational experiments on 100 instances show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
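The shuffled frog-leaping algorithm underlying CSFLA partitions a population of solutions ("frogs") into memeplexes, repeatedly improves the worst frog of each memeplex by leaping it toward the memeplex best, then shuffles the population back together and repartitions. The following is a minimal continuous-optimization sketch of that basic loop, not the paper's CSFLA (which adds competition between memeplexes and adaptive strategy selection on top); all parameter values are illustrative.

```python
# Minimal shuffled frog-leaping algorithm (SFLA) sketch on a 1-D test
# function: partition into memeplexes -> leap each memeplex's worst frog
# toward its best -> shuffle -> repeat.
import random

def sfla(f, n_frogs=15, n_memeplexes=3, iters=30, leaps=5,
         lo=-10.0, hi=10.0, seed=0):
    rng = random.Random(seed)
    frogs = [rng.uniform(lo, hi) for _ in range(n_frogs)]
    for _ in range(iters):
        frogs.sort(key=f)                          # best frogs first
        # deal frogs into memeplexes round-robin, as in standard SFLA
        memes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for m in memes:
            for _ in range(leaps):
                best, worst = m[0], m[-1]
                cand = worst + rng.random() * (best - worst)
                if f(cand) < f(worst):
                    m[-1] = cand                   # accept improving leap
                else:
                    m[-1] = rng.uniform(lo, hi)    # failed leap: random reset
                m.sort(key=f)
        frogs = [x for m in memes for x in m]      # shuffle back together
    return min(frogs, key=f)

best = sfla(lambda x: (x - 3.0) ** 2)              # minimum at x = 3
```

For a scheduling problem like PBPMSP, the frogs would encode job-to-machine-and-batch assignments and the leap operator would be a discrete move, but the partition/leap/shuffle skeleton is the same.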
Dividing-wall batch distillation with a middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy savings, and flexible operation. However, temperature control of the DWBDM process is challenging: the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft sensor with fuzzy control. A dynamic model of DWBDM was developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft sensor. For the dynamic process, neural networks with memory, such as RNN, LSTM, and GRU, are used to handle the time-series data. The results of a case example show that the new scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of the neural network's prediction error, indicating a highly feasible and effective control approach for DWBDM that could be extended to other dynamic processes.
Fabric dyeing is a critical production process in the clothing industry and relies heavily on batch processing machines (BPMs). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the number of searches for each memeplex based on its quality, searching high-quality memeplexes more often. An adaptive cooperated and diversified search mechanism dynamically adjusts the search strategy of each memeplex based on dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies: memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained and continue to evolve. To evaluate the performance of ACSFLA, comparative experiments were conducted with ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
The first step in the resource utilization of Chinese medicine residues (CMRs) is dehydration pretreatment, which produces high concentrations of organic wastewater and leads to environmental pollution. To address the failure of anaerobic systems through acidification under shock loading, a combined process of a microaerobic expanded granular sludge bed (EGSB) and a moving bed sequencing batch reactor (MBSBR) is proposed in this study. Microaeration facilitated hydrolysis, improved the removal of nitrogen and phosphorus pollutants, maintained a low concentration of volatile fatty acids (VFAs), and enhanced system stability. In addition, microaeration promoted microbial richness and diversity, enriching three phyla associated with hydrolytic acidification: Bacteroidota, Synergistota, and Firmicutes. The aeration intensity in the MBSBR was also optimized: elevated levels of dissolved oxygen (DO) altered the biofilm structure, suppressed denitrifying bacterial activity, led to nitrate accumulation, and hindered simultaneous nitrification and denitrification (SND), whereas maintaining a DO concentration of 2 mg/L enhanced nitrogen and phosphorus removal while conserving energy. The combined process achieved removal efficiencies of 98.25%, 90.49%, and 98.55% for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP), respectively, and the typical pollutants liquiritin (LQ) and glycyrrhizic acid (GA) were completely degraded. This study presents an innovative approach to the treatment of high-concentration organic wastewater and provides a reliable solution for pollution control in the resource utilization of CMRs.
Simultaneous nitrification and denitrification (SND) is considered an attractive alternative to traditional biological nitrogen removal technology, and knowing the effects of heavy metals on the SND process is essential for engineering practice. In this study, the responses of SND performance to Zn(Ⅱ) exposure were investigated in a biofilm reactor. The results indicated that Zn(Ⅱ) at low concentrations (≤2 mg·L^(-1)) had negligible effects on nitrogen and COD removal in the SND process compared to operation without Zn(Ⅱ), while ammonium and COD removal was strongly inhibited as the Zn(Ⅱ) concentration increased to 5 or 10 mg·L^(-1). Large amounts of extracellular polymeric substances (EPS), especially proteins (PN), were secreted to protect the microorganisms from increasing Zn(Ⅱ) damage. High-throughput sequencing analysis indicated that Zn(Ⅱ) exposure significantly reduced microbial diversity and changed the structure of the microbial community. RDA analysis further confirmed that the Azoarcus-Thauera cluster was the dominant genus in response to low Zn(Ⅱ) exposure of 1 to 2 mg·L^(-1), while the genera Klebsiella and Enterobacter showed adaptability to elevated Zn(Ⅱ). According to PICRUSt, the abundance of key genes encoding ammonia monooxygenase (EC 1.14.99.39) was markedly reduced after Zn(Ⅱ) exposure, suggesting that Zn(Ⅱ) influenced nitrification more than denitrification, leading to a decrease in the ammonium removal of the SND system. This study provides a theoretical foundation for understanding the influence of Zn(Ⅱ) on the SND process in biofilm systems, which should be a source of great concern.
The exponential growth of the Internet of Things (IoT) has revolutionized domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35% while achieving over 99.2% accuracy in detecting data corruption. C-BIVM dynamically adapts integrity checks to real-time conditions, cutting redundant operations by more than 30%, and blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces the computation time of integrity checks by up to 40% compared to traditional bilinear-pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM as a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
This paper studies the batch sizing scheduling problem with earliness and tardiness penalties, which is closely related to a two-level supply chain problem. In the problem there are K customer orders, each consisting of unit-length jobs and having a due date. The jobs are processed on a common machine and then delivered to their customers in batches, where the size of each batch has upper and lower bounds and each batch may incur a fixed setup cost, which can also be viewed as a fixed delivery cost. The goal is to find a schedule minimizing the sum of the earliness and tardiness costs and the setup costs incurred by creating new batches. The authors first present structural properties of optimal schedules for the single-order problem under an additional assumption (a): the jobs are processed consecutively from time zero. Based on these properties, they give a polynomial-time algorithm for the single-order problem with Assumption (a), and then dynamic programming algorithms for some special cases of the multiple-order problem with Assumption (a). Finally, they present structural properties of optimal schedules for the single-order problem without Assumption (a) and give a polynomial-time algorithm for it.
To improve productivity and resource utilization and to reduce production cost in flexible job shops, this paper designs an improved two-layer optimization algorithm for the dual-resource scheduling optimization problem of a flexible job shop with workpiece batching. First, a mathematical model is established to minimize the maximum completion time. Second, an improved two-layer optimization algorithm is designed: the outer layer uses an improved Particle Swarm Optimization (PSO) to solve the workpiece batching problem, and the inner layer uses an improved Genetic Algorithm (GA) to solve the dual-resource scheduling problem. A rescheduling method is then designed to handle task disturbances, represented by machine failures, that occur during workshop production. Finally, the superiority and effectiveness of the improved two-layer algorithm are verified on two typical cases; the results show that it increases average productivity by 7.44% compared to the ordinary two-layer optimization algorithm. By varying the number of Automated Guided Vehicles (AGVs) and analyzing the impact on the production cycle of the whole order, the paper uses two indicators, the decrease rate of the maximum completion time and the average AGV load time, to obtain the optimal number of AGVs, saving production cost while ensuring production efficiency. This research ties the solved problem to the real production process, improving productivity and reducing production cost for the flexible job shop, and provides new ideas for subsequent research.
Funding (parallel-batching L_max note): Supported by NSFC (11571323, 11201121), NSFSTDOHN (162300410221), and NSFEDOHN (2013GGJS-079).
Funding (optimal batching structures paper): Supported by the NSF of Henan Province (082300410070).
Funding (family-jobs batching paper): Supported by the National Natural Science Foundation of China (No. 70832002) and the Graduate Student Innovation Fund of Fudan University, China.
Abstract: This research quantifies the PM<sub>10</sub> emission rates (g/s) from cement silos in 25 concrete batching facilities for both controlled and uncontrolled scenarios by applying the step-by-step approach of the USEPA AP-42 guidelines. The study focuses on evaluating the potential environmental impact of fugitive cement dust emissions from 176 cement silos located in 25 concrete batching facilities in the M35 Mussafah industrial area of Abu Dhabi, UAE. Emission factors are crucial for quantifying the PM<sub>10</sub> emission rates (g/s): they support the development of source-specific emission estimates for area-wide inventories, identify major sources of pollution, and provide screening data for compliance monitoring and air dispersion modeling. The required data, covering production, raw material usage, energy consumption, and process-related details, were collected through various methods, including field visits, surveys, and interviews with facility representatives, so that emission rates could be calculated accurately. Statistical analysis was conducted on cement consumption and emission rates for controlled and uncontrolled sources at the targeted facilities. The data show that average cement consumption among the facilities is approximately 88,160 MT/yr, with wide variation depending on facility size and production rate. Emission rates from controlled sources average 4.752 × 10<sup>-4</sup> g/s, while those from uncontrolled sources average 0.6716 g/s. The analysis shows a statistically significant relationship (p < 0.05) and a perfect positive correlation (r = 1) between cement consumption and emission rates, indicating that as cement consumption increases, emission rates tend to increase as well. Furthermore, comparing the emission rates between the controlled and uncontrolled scenarios, the data showed a significant difference, highlighting the effectiveness of control measures in reducing PM<sub>10</sub> emissions. The study's findings provide insights into the impact of cement silo emissions on air quality and the importance of implementing control measures in concrete batching facilities. The comparative analysis contributes to understanding emission sources and supports the development of pollution control strategies in the ready-mix industry.
Funding: This work was supported in part by the National Natural Science Foundation of China [72101188], the Shanghai Municipal Science and Technology Major Project [2021SHZDZX0100], and the Fundamental Research Funds for the Central Universities.
Abstract: The volume of instant delivery has witnessed significant growth in recent years. Given the involvement of numerous heterogeneous stakeholders, instant delivery operations are inherently characterized by dynamics and uncertainties. This study introduces two order dispatching strategies, namely task buffering and dynamic batching, as potential solutions to address these challenges. The task buffering strategy aims to optimize the assignment timing of orders to couriers, thereby mitigating demand uncertainties. The dynamic batching strategy, on the other hand, focuses on alleviating delivery pressure by assigning orders to couriers based on their residual capacity and extra delivery distances. To model the instant delivery problem and evaluate the performance of order dispatching strategies, an Adaptive Agent-Based Order Dispatching (ABOD) approach is developed, which combines agent-based modelling, deep reinforcement learning, and the Kuhn-Munkres algorithm. The ABOD effectively captures the system's uncertainties and heterogeneity, facilitating stakeholder learning in novel scenarios and enabling adaptive task buffering and dynamic batching decision-making. The efficacy of the ABOD approach is verified through both synthetic and real-world case studies. Experimental results demonstrate that implementing the ABOD approach can lead to a significant increase in customer satisfaction, up to 275.42%, while simultaneously reducing the delivery distance by 11.38% compared to baseline policies. Additionally, the ABOD approach adaptively adjusts buffering times to maintain high levels of customer satisfaction across various demand scenarios. As a result, this approach offers valuable support to logistics providers in making informed order dispatching decisions in instant delivery operations.
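For intuition, the Kuhn-Munkres step solves a minimum-cost one-to-one courier-order assignment. The brute-force sketch below enumerates all permutations of a tiny, made-up cost matrix (costs standing in for extra delivery distances) purely to show the objective being optimized; a real implementation would use the polynomial-time Kuhn-Munkres algorithm rather than enumeration:

```python
# Illustrative only: the Kuhn-Munkres algorithm solves the courier-order
# assignment in polynomial time; this brute-force version checks all
# permutations on a tiny instance just to show the objective being solved.
# Costs (extra delivery distance per courier-order pair) are made up.
from itertools import permutations

def best_assignment(cost):
    """cost[i][j]: extra distance if courier i takes order j (square matrix)."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_perm, best_cost

cost = [[4, 2, 8],
        [4, 3, 7],
        [3, 1, 6]]
print(best_assignment(cost))  # → ((0, 2, 1), 12)
```

Here courier 0 takes order 0, courier 1 takes order 2, and courier 2 takes order 1, for a total extra distance of 12.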
Abstract: In this paper we study the problem of scheduling a batching machine with non-identical job sizes. The jobs arrive simultaneously and have unit processing times. The goal is to minimize the total completion time. Having shown that the problem is NP-hard, we put forward three approximation algorithms with worst-case ratios 4, 2, and 3/2, respectively.
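As a point of reference, one natural baseline in this setting is a first-fit-decreasing packing of jobs into capacity-bounded batches. This is an illustrative heuristic under assumed data (capacity and sizes are made up), not necessarily one of the paper's three approximation algorithms:

```python
# A simple first-fit-decreasing batching heuristic for unit-time jobs with
# non-identical sizes on a capacity-B batching machine. This is an
# illustrative baseline, not necessarily one of the paper's three algorithms.

def greedy_batches(sizes, capacity):
    """Pack jobs (largest first) into batches whose total size is <= capacity."""
    batches = []
    for s in sorted(sizes, reverse=True):
        for b in batches:            # first batch with room takes the job
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:                        # no batch fits: open a new one
            batches.append([s])
    return batches

def total_completion_time(batches):
    # Unit processing time per batch: the t-th batch completes at time t+1,
    # and every job in it has completion time t+1.
    return sum((t + 1) * len(b) for t, b in enumerate(batches))

batches = greedy_batches([5, 3, 3, 2, 2, 1], capacity=8)
print(batches, total_completion_time(batches))  # → [[5, 3], [3, 2, 2, 1]] 10
```

Two batches complete at times 1 and 2, giving a total completion time of 1·2 + 2·4 = 10 for the six jobs.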
Abstract: This paper addresses the scheduling problem involving batch processing machines, which is also known as parallel batching in the literature. The presented mixed-integer programming formulation provides an elegant model for the problem under study. Furthermore, it enables solutions to problem instances beyond the capability of exact methods developed so far. In order to alleviate the computational burden, the authors propose MIP-based heuristic approaches which balance solution quality and computing time.
基金supported by National Natural Science Foundation of China(No.62172436)Additionally,it is supported by Natural Science Foundation of Shaanxi Province(No.2023-JC-YB-584)Engineering University of PAP’s Funding for Scientific Research Innovation Team and Key Researcher(No.KYGG202011).
Abstract: Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data then falls outside their control. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised in order to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud therefore remains a pressing issue. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We show that this scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhance the auditing process using masking techniques and design new algorithms to strengthen security. We also provide formal proofs of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that our scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that our scheme is not only secure and efficient but also supports batch auditing of cloud data. Specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
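The amortization behind batch auditing can be illustrated with plain hashes: combine per-block digests into one aggregate and perform a single comparison instead of one check per block. This is a conceptual sketch only; the paper's scheme relies on masked homomorphic authenticators, not the hash construction below:

```python
# Conceptual sketch of batch auditing: instead of checking each stored
# block's digest individually, the auditor combines per-block digests into
# one aggregate and performs a single comparison. This illustrates the
# amortization idea only; the paper's scheme uses masking techniques and
# homomorphic signatures, not plain hashes.
import hashlib

def digest(block: bytes) -> bytes:
    return hashlib.sha256(block).digest()

def batch_audit(blocks, stored_digests):
    """One aggregate comparison over all blocks at once."""
    agg_actual = hashlib.sha256(b"".join(digest(b) for b in blocks)).digest()
    agg_stored = hashlib.sha256(b"".join(stored_digests)).digest()
    return agg_actual == agg_stored

blocks = [b"ehr-record-1", b"ehr-record-2", b"ehr-record-3"]
stored = [digest(b) for b in blocks]
print(batch_audit(blocks, stored))                       # all blocks intact
print(batch_audit([b"tampered"] + blocks[1:], stored))   # corruption detected
```

A single aggregate check tells the auditor *whether* any block is corrupted; locating *which* one would then require a finer-grained (e.g., binary-search) audit.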
基金supported by the National Research Foundation of Korea(NRF)grant for RLRC funded by the Korea government(MSIT)(No.2022R1A5A8026986,RLRC)supported by Institute of Information&Communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(No.2020-0-01304,Development of Self-Learnable Mobile Recursive Neural Network Processor Technology)+3 种基金supported by the MSIT(Ministry of Science and ICT),Republic of Korea,under the Grand Information Technology Research Center support program(IITP-2024-2020-0-01462,Grand-ICT)supervised by the IITP(Institute for Information&Communications Technology Planning&Evaluation)supported by the Korea Technology and Information Promotion Agency for SMEs(TIPA)supported by the Korean government(Ministry of SMEs and Startups)’s Smart Manufacturing Innovation R&D(RS-2024-00434259).
Abstract: On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that our proposed BN hardware design enables efficient, high-accuracy, and power-saving on-device training for modern CNNs, and demonstrate that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical on-device CNN training with significantly reduced computational and power requirements.
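The mini-batch statistics and gradients that such an accelerator must compute are the standard BN forward and backward formulas. A plain-Python, single-channel sketch follows (the input values, γ, β, and ε are illustrative; this mirrors the math, not the chip's fixed-point datapath):

```python
# Minimal sketch of batch-normalization forward and backward passes over a
# mini-batch for one channel, in plain Python. These are the standard BN
# formulas; input values, gamma, beta, and eps are illustrative.
import math

def bn_forward(x, gamma, beta, eps=1e-5):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    xhat = [(v - mean) / math.sqrt(var + eps) for v in x]  # normalize
    y = [gamma * h + beta for h in xhat]                   # scale and shift
    return y, xhat, var

def bn_backward(dy, xhat, gamma, var, eps=1e-5):
    n = len(dy)
    dgamma = sum(d * h for d, h in zip(dy, xhat))
    dbeta = sum(dy)
    # Standard BN input-gradient formula.
    inv_std = 1.0 / math.sqrt(var + eps)
    dx = [gamma * inv_std * (d - dbeta / n - h * dgamma / n)
          for d, h in zip(dy, xhat)]
    return dx, dgamma, dbeta

y, xhat, var = bn_forward([1.0, 2.0, 3.0, 4.0], gamma=2.0, beta=0.5)
dx, dgamma, dbeta = bn_backward([0.1, -0.2, 0.3, 0.0], xhat, 2.0, var)
print([round(v, 3) for v in y])  # → [-2.183, -0.394, 1.394, 3.183]
```

Note the two identities the backward pass preserves: the normalized outputs have mean β, and the input gradients sum to zero, both of which make useful sanity checks for a hardware datapath.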
Abstract: The flexible satellite batch production line is a complex discrete production system spanning multiple disciplines with mixed serial-parallel tasks. As the source of the satellite batch production line process, the warehousing system faces urgent needs such as an uncertain production scale and the rapid iteration and optimization of business processes. Therefore, the requirements and architecture of complex discrete warehousing systems such as those of flexible satellite batch production lines are studied. The physical system of intelligent equipment is abstracted as a digital model to form the underlying module, and a digital fusion framework of "business domain + middleware platform + intelligent equipment information model" is constructed. The granularity of microservice splitting is calculated based on the dynamic correlation between user access instances and database table structures. The general warehousing functions of the platform are divided to achieve module customization, addition, and configuration. An open discrete warehousing system based on microservices is designed, and its software architecture is implemented on the Spring Cloud framework. This architecture decouples business logic from physical hardware, enhances the maintainability and scalability of the system, and greatly improves the system's adaptability to different complex discrete warehousing business scenarios.
基金supported by the National Natural Science Foundation of China(Grant Number 61573264).
Abstract: As a complicated optimization problem, the parallel batch processing machine scheduling problem (PBPMSP) exists in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. PBPMSP and scheduling problems with machine eligibility are frequently considered separately; however, PBPMSP with machine eligibility is seldom explored. This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced in a heuristic and random way, and the competitive search of memeplexes comprises two phases. Competition between any two memeplexes occurs in the first phase; then iteration times are adjusted based on competition, and search strategies are adjusted adaptively based on the evolution quality of memeplexes in the second phase. An adaptive population shuffling is given. Computational experiments are conducted on 100 instances. The computational results show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
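CSFLA builds on the basic shuffled frog-leaping cycle of ranking, memeplex partition, local leaps, and shuffling. The skeleton below minimizes a toy one-dimensional function to show that cycle; the competition and adaptive phases described in the abstract are not reproduced, and all parameter values are illustrative:

```python
# Skeleton of a basic shuffled frog-leaping algorithm (SFLA) minimizing a toy
# 1-D function. It shows the memeplex partition / local leap / shuffle cycle
# that CSFLA builds on; the competition between memeplexes and the adaptive
# phases described in the abstract are not reproduced here.
import random

def sfla(fitness, lo, hi, frogs=15, memeplexes=3, rounds=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(frogs)]
    for _ in range(rounds):
        pop.sort(key=fitness)                       # shuffle: rank globally
        best_global = pop[0]
        # Partition into memeplexes by rank (round-robin).
        plexes = [pop[i::memeplexes] for i in range(memeplexes)]
        for plex in plexes:
            worst = max(plex, key=fitness)
            best_local = min(plex, key=fitness)
            # Leap the worst frog toward the local best; fall back to global.
            cand = worst + rng.random() * (best_local - worst)
            if fitness(cand) >= fitness(worst):
                cand = worst + rng.random() * (best_global - worst)
            if fitness(cand) < fitness(worst):      # accept only improvements
                plex[plex.index(worst)] = cand
        pop = [f for plex in plexes for f in plex]  # merge for next shuffle
    return min(pop, key=fitness)

best = sfla(lambda x: (x - 3.0) ** 2, lo=-10, hi=10)
print(round(best, 2))
```

Because only the worst frog of each memeplex is ever replaced, and only by a strictly better candidate, the best solution found never degrades across rounds.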
基金supported by Beijing Natural Science Foundation(2222037)the Special Educating Project of the Talent for Carbon Peak and Carbon Neutrality of University of Chinese Academy of Sciences(Innovation of talent cultivation model for“dual carbon”in chemical engineering industry,E3E56501A2).
Abstract: Dividing-wall batch distillation with a middle vessel (DWBDM) is a new type of batch distillation column, with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging, since the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural-network soft sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural-network soft sensor. For the dynamic process, neural networks with memory functions, such as RNN, LSTM, and GRU, are used to handle the time-series data. The results from a case example show that the new control scheme can achieve good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
Abstract: Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines (BPM). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the search times for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting search strategies for each memeplex based on their dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies. Through these strategies, memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted using ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
基金funding from the National Key R&D Program of China(No.2019YFC1906600)the National Natural Science Foundation of China(No.52200049)+3 种基金the China Postdoctoral Science Foundation(No.2022TQ0089)the Heilongjiang Province Postdoctoral Science Foundation(No.LBH-Z22181)the State Key Laboratory of Urban Water Resource and Environment(Harbin Institute of Technology)(No.2023DX06)the Fundamental Research Funds for the Central Universities。
Abstract: The initial step in the resource utilization of Chinese medicine residues (CMRs) involves dehydration pretreatment, which produces high-concentration organic wastewater and leads to environmental pollution. Meanwhile, to address the issue of anaerobic systems failing due to acidification under shock loading, a combined process of a microaerobic expanded granular sludge bed (EGSB) and a moving bed sequencing batch reactor (MBSBR) was proposed in this study. Microaeration facilitated hydrolysis, improved the removal of nitrogen and phosphorus pollutants, maintained a low concentration of volatile fatty acids (VFAs), and enhanced system stability. In addition, microaeration promoted microbial richness and diversity, enriching three phyla associated with hydrolytic acidification: Bacteroidota, Synergistota, and Firmicutes. Furthermore, the aeration intensity in the MBSBR was optimized. Elevated levels of dissolved oxygen (DO) impacted biofilm structure, suppressed denitrifying bacteria activity, led to nitrate accumulation, and hindered simultaneous nitrification and denitrification (SND). Maintaining a DO concentration of 2 mg/L enhanced the removal of nitrogen and phosphorus while conserving energy. The combined process achieved removal efficiencies of 98.25%, 90.49%, and 98.55% for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP), respectively. The typical pollutants liquiritin (LQ) and glycyrrhizic acid (GA) were completely degraded. This study presents an innovative approach for the treatment of high-concentration organic wastewater and provides a reliable solution for pollution control in the utilization of CMR resources.
基金supported by the Shanxi Province Science Foundation for Youths(20210302124348 and 202103021223099)the Basic Research Project for the ShanxiZheda Institute of Advanced Materials and Chemical Engineering(2021SX-AT004)the National Natural Science Foundation of China(51778397).
Abstract: Simultaneous nitrification and denitrification (SND) is considered an attractive alternative to traditional biological nitrogen removal technology. Knowing the effects of heavy metals on the SND process is essential for engineering applications. In this study, the responses of SND performance to Zn(Ⅱ) exposure were investigated in a biofilm reactor. The results indicated that Zn(Ⅱ) at low concentrations (≤2 mg·L^(-1)) had negligible effects on the removal of nitrogen and COD in the SND process compared to that without Zn(Ⅱ), while the removal of ammonium and COD was strongly inhibited as the concentration of Zn(Ⅱ) increased to 5 or 10 mg·L^(-1). Large amounts of extracellular polymeric substances (EPS), especially protein (PN), were secreted to protect microorganisms from increasing Zn(Ⅱ) damage. High-throughput sequencing analysis indicated that Zn(Ⅱ) exposure could significantly reduce microbial diversity and change the structure of the microbial community. The RDA analysis further confirmed that the Azoarcus-Thauera cluster was the dominant genus in response to low Zn(Ⅱ) exposure from 1 to 2 mg·L^(-1), while the genera Klebsiella and Enterobacter showed adaptability to the presence of elevated Zn(Ⅱ). According to PICRUSt, the abundance of key genes encoding ammonia monooxygenase (EC:1.14.99.39) was markedly reduced after exposure to Zn(Ⅱ), suggesting that the influence of Zn(Ⅱ) on nitrification was greater than that on denitrification, leading to a decrease in ammonium removal in the SND system. This study provides a theoretical foundation for understanding the influence of Zn(Ⅱ) on the SND process in a biofilm system, which should be a source of great concern.
基金supported by King Saud University,Riyadh,Saudi Arabia,through Researchers Supporting Project number RSP2025R498.
Abstract: The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35%, while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by minimizing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40% compared to traditional bilinear-pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
基金National Nature Science Foundation of China under Grant Nos.11471210and 11171207the Natural Science Foundation of Ningbo City under Grant No.2015A610168the Natural Science Foundation of Zhejiang Province of China under Grant No.LQl3A010010
Abstract: This paper studies the batch sizing scheduling problem with earliness and tardiness penalties, which is closely related to a two-level supply chain problem. In the problem, there are K customer orders, where each customer order, consisting of unit-length jobs, has a due date. The jobs are processed on a common machine and then delivered to their customers in batches, where the size of each batch has upper and lower bounds and each batch may incur a fixed setup cost, which can also be viewed as a fixed delivery cost. The goal is to find a schedule that minimizes the sum of the earliness and tardiness costs and the setup costs incurred by creating new batches. The authors first present some structural properties of optimal schedules for the single-order problem under an additional assumption (a): the jobs are processed consecutively from time zero. Based on these properties, the authors give a polynomial-time algorithm for the single-order problem with Assumption (a). They then give dynamic programming algorithms for some special cases of the multiple-order problem with Assumption (a). Finally, the authors present structural properties of optimal schedules for the single-order problem without Assumption (a) and give a polynomial-time algorithm for it.
Abstract: To improve productivity and resource utilization and to reduce the production cost of flexible job shops, this paper designs an improved two-layer optimization algorithm for the dual-resource scheduling optimization problem of a flexible job shop considering workpiece batching. Firstly, a mathematical model is established to minimize the maximum completion time. Secondly, an improved two-layer optimization algorithm is designed: the outer layer uses an improved PSO (Particle Swarm Optimization) to solve the workpiece batching problem, and the inner layer uses an improved GA (Genetic Algorithm) to solve the dual-resource scheduling problem. Then, a rescheduling method is designed to handle task disturbances, represented by machine failures, occurring in the workshop production process. Finally, the superiority and effectiveness of the improved two-layer optimization algorithm are verified on two typical cases. The case results show that the improved two-layer optimization algorithm increases average productivity by 7.44% compared to the ordinary two-layer optimization algorithm. By varying the number of AGVs (Automated Guided Vehicles) and analyzing the impact on the production cycle of the whole order, this paper uses two indicators, the decrease rate of the maximum completion time and the average AGV load time, to obtain the optimal number of AGVs, which saves production cost while ensuring production efficiency. This research connects the solved problem with the real production process, improving productivity and reducing the production cost of the flexible job shop, and provides new ideas for subsequent research.