Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users’ health information, are typically stored on cloud servers. However, users’ sensitive data then falls outside their direct control. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised in order to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud therefore remains a pressing issue that urgently needs to be addressed. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We show that this scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhance the auditing process using masking techniques and design new algorithms to strengthen security. We also provide formal proofs of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that our scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that our scheme is not only secure and efficient but also supports batch auditing of cloud data. Specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
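The batch-auditing and masking ideas above can be illustrated with a toy challenge-response sketch: the server aggregates many challenged blocks into one masked proof, and the auditor checks a single equation instead of one per block. Everything here (the HMAC-based tags, the prime modulus, the function names) is an illustrative stand-in under simplifying assumptions, not the paper's actual construction.

```python
import hashlib
import hmac
import secrets

P = (1 << 127) - 1  # public prime modulus (a Mersenne prime, chosen for the toy)

def block_tag(key: bytes, index: int, block: bytes) -> int:
    # Tag binds the block's value to its index under the owner's key.
    h = int.from_bytes(hmac.new(key, index.to_bytes(8, "big"),
                                hashlib.sha256).digest(), "big")
    return (h + int.from_bytes(block, "big")) % P

def prove(blocks, tags, challenge):
    # Server side: aggregate the challenged blocks and tags with the
    # challenge coefficients, then add the same random mask r to both
    # aggregates so the raw data sum is never revealed to the auditor
    # (the masking idea).
    r = secrets.randbelow(P)
    mu = (sum(v * int.from_bytes(blocks[i], "big") for i, v in challenge) + r) % P
    sigma = (sum(v * tags[i] for i, v in challenge) + r) % P
    return mu, sigma

def verify(key: bytes, challenge, mu: int, sigma: int) -> bool:
    # Auditor side: one equation checks every challenged block at once,
    # because sigma - mu cancels both the data sum and the mask.
    expected = sum(v * int.from_bytes(hmac.new(key, i.to_bytes(8, "big"),
                                               hashlib.sha256).digest(), "big")
                   for i, v in challenge) % P
    return (sigma - mu) % P == expected
```

A single corrupted block makes the aggregated equation fail, which is what lets one check stand in for thousands of per-block checks.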
On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical, high-accuracy, and power-saving on-device CNN training for modern CNNs with significantly reduced computational and power requirements.
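For reference, the statistics and gradients such an accelerator must compute are compact formulas; a minimal per-feature software sketch follows, with the zero-skipping idea applied to the backward reduction sums (a plain-Python illustration of standard BN math, not the paper's hardware design).

```python
import math

def bn_forward(x, gamma, beta, eps=1e-5):
    # x: values of one feature across the mini-batch.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv_std = 1.0 / math.sqrt(var + eps)
    x_hat = [(v - mean) * inv_std for v in x]          # normalized input
    y = [gamma * xh + beta for xh in x_hat]            # scale and shift
    return y, (x_hat, inv_std)

def bn_backward(dy, gamma, cache):
    # Standard BN input gradient:
    #   dx = gamma * inv_std * (dy - mean(dy) - x_hat * mean(dy * x_hat))
    # The two reduction sums only visit positions with a non-zero
    # upstream gradient (the zero-skipping technique).
    x_hat, inv_std = cache
    n = len(dy)
    nz = [i for i, g in enumerate(dy) if g != 0.0]     # zero-skipping
    s1 = sum(dy[i] for i in nz) / n                    # mean of dy
    s2 = sum(dy[i] * x_hat[i] for i in nz) / n         # mean of dy * x_hat
    dx = [gamma * inv_std * (dy[i] - s1 - x_hat[i] * s2) for i in range(n)]
    dgamma = sum(dy[i] * x_hat[i] for i in nz)
    dbeta = sum(dy[i] for i in nz)
    return dx, dgamma, dbeta
```

Skipping zero entries changes nothing mathematically, since those terms contribute nothing to the sums, which is exactly why it is safe to drop them in hardware.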
The flexible satellite batch production line is a complex discrete production system spanning multiple cross-disciplinary fields and mixed serial-parallel tasks. As the source of the satellite batch production line process, the warehousing system faces urgent needs such as an uncertain production scale and the rapid iteration and optimization of business processes. Therefore, the requirements and architecture of complex discrete warehousing systems such as flexible satellite batch production lines are studied. The physical system of intelligent equipment is abstracted as a digital model to form the underlying module, and a digital fusion framework of “business domain + middleware platform + intelligent equipment information model” is constructed. The granularity of microservice splitting is calculated based on the dynamic correlation between user access instances and database table structures. The general warehousing functions of the platform are divided to achieve module customization, addition, and configuration, and an open discrete warehousing system based on microservices is designed. The software architecture is implemented on the Spring Cloud framework to develop complex discrete warehousing systems. This architecture decouples business logic from physical hardware, enhances the maintainability and scalability of the system, and greatly improves the system’s adaptability to different complex discrete warehousing business scenarios.
As a complicated optimization problem, the parallel batch processing machines scheduling problem (PBPMSP) exists in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. The PBPMSP and scheduling problems with machine eligibility are each frequently considered; however, the PBPMSP with machine eligibility is seldom explored. This study investigates the PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced in a heuristic and random way, and the competitive search of memeplexes comprises two phases. In the first phase, competition is conducted between any two memeplexes; in the second phase, iteration times are adjusted based on the competition, and search strategies are adjusted adaptively based on the evolution quality of the memeplexes. An adaptive population shuffling strategy is also given. Computational experiments are conducted on 100 instances. The computational results show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
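The memeplex construction underlying any shuffled frog-leaping variant can be sketched in a few lines: frogs are sorted best-first and dealt round-robin so each memeplex receives a spread of solution quality. This is the generic textbook SFLA rule, shown here as a minimal sketch, not the paper's specific CSFLA initialization.

```python
def build_memeplexes(population, fitness, m):
    """Split a population into m memeplexes, SFLA-style.

    Frogs are sorted best-first (here: lower fitness is better) and
    dealt round-robin (frog 0 -> memeplex 0, frog 1 -> memeplex 1, ...)
    so every memeplex mixes good and bad solutions.
    """
    ranked = sorted(population, key=fitness)
    memeplexes = [[] for _ in range(m)]
    for rank, frog in enumerate(ranked):
        memeplexes[rank % m].append(frog)
    return memeplexes
```

Shuffling the population and rebuilding the memeplexes each generation is what lets information migrate between memeplexes; CSFLA's contribution is in how the memeplexes then compete and search.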
Dividing wall batch distillation with a middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging, since the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory functions, such as RNN, LSTM, and GRU, are used to handle time-series data. The results from a case example show that the new control scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
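A fuzzy temperature loop of the kind described above reduces to membership functions, a small rule table, and defuzzification. The sketch below is a minimal single-input controller with triangular memberships and centroid defuzzification; the error ranges, the three rules, and the reboiler-duty output are illustrative assumptions, not the paper's tuned controller.

```python
def tri(x, a, b, c):
    # Triangular membership function: peaks at b, zero outside (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_duty_change(error):
    """Map temperature error (setpoint - measured, in K) to a change in
    reboiler duty (fraction of nominal). Three illustrative rules:
    negative error -> cut duty, near zero -> hold, positive -> add duty."""
    rules = [
        (tri(error, -10.0, -5.0, 0.0), -0.05),  # too hot: cut duty
        (tri(error,  -5.0,  0.0, 5.0),  0.00),  # on target: hold
        (tri(error,   0.0,  5.0, 10.0), +0.05), # too cold: add duty
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0  # centroid defuzzification
```

In the paper's scheme the measured input would itself be replaced by the soft-sensor's prediction, with the fuzzy logic smoothing out the prediction error.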
Fabric dyeing is a critical production process in the clothing industry and relies heavily on batch processing machines (BPMs). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the number of searches for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting the search strategies for each memeplex based on their dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies. Through these strategies, memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted with ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
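The dominance relationships that drive ACSFLA's memeplex comparisons rest on standard Pareto dominance over the two minimized objectives. A minimal generic sketch (not ACSFLA-specific):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b for minimization,
    e.g. a = (makespan, total_tardiness): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    # Keep only the non-dominated objective vectors.
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```

In a bi-objective scheduler, the quality of a memeplex can then be scored by how many of its members survive into the combined Pareto front.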
The initial step in the resource utilization of Chinese medicine residues (CMRs) is dehydration pretreatment, which produces high concentrations of organic wastewater and leads to environmental pollution. Meanwhile, to address the issue of anaerobic systems failing due to acidification under shock loading, a combined process of a microaerobic expanded granular sludge bed (EGSB) and a moving bed sequencing batch reactor (MBSBR) was proposed in this study. Microaeration facilitated hydrolysis, improved the removal of nitrogen and phosphorus pollutants, maintained a low concentration of volatile fatty acids (VFAs), and enhanced system stability. In addition, microaeration promoted microbial richness and diversity, enriching three phyla associated with hydrolytic acidification: Bacteroidota, Synergistota, and Firmicutes. Furthermore, the aeration intensity in the MBSBR was optimized. Elevated levels of dissolved oxygen (DO) impacted the biofilm structure, suppressed denitrifying bacteria activity, led to nitrate accumulation, and hindered simultaneous nitrification and denitrification (SND). Maintaining a DO concentration of 2 mg/L enhanced the removal of nitrogen and phosphorus while conserving energy. The combined process achieved removal efficiencies of 98.25%, 90.49%, and 98.55% for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP), respectively. The typical pollutants liquiritin (LQ) and glycyrrhizic acid (GA) were completely degraded. This study presents an innovative approach for the treatment of high-concentration organic wastewater and provides a reliable solution for pollution control in the resource utilization of CMRs.
Simultaneous nitrification and denitrification (SND) is considered an attractive alternative to traditional biological nitrogen removal technology. Knowing the effects of heavy metals on the SND process is essential for engineering practice. In this study, the responses of SND performance to Zn(Ⅱ) exposure were investigated in a biofilm reactor. The results indicated that Zn(Ⅱ) at low concentrations (≤2 mg·L^(-1)) had negligible effects on the removal of nitrogen and COD in the SND process compared to operation without Zn(Ⅱ), while the removal of ammonium and COD was strongly inhibited when the Zn(Ⅱ) concentration increased to 5 or 10 mg·L^(-1). Large amounts of extracellular polymeric substances (EPS), especially protein (PN), were secreted to protect the microorganisms from increasing Zn(Ⅱ) damage. High-throughput sequencing analysis indicated that Zn(Ⅱ) exposure could significantly reduce microbial diversity and change the structure of the microbial community. RDA analysis further confirmed that the Azoarcus-Thauera cluster was the dominant genus in response to low Zn(Ⅱ) exposure from 1 to 2 mg·L^(-1), while the genera Klebsiella and Enterobacter showed adaptability to elevated Zn(Ⅱ). According to PICRUSt, the abundance of key genes encoding ammonia monooxygenase (EC:1.14.99.39) was markedly reduced after exposure to Zn(Ⅱ), suggesting that the influence of Zn(Ⅱ) on nitrification was greater than that on denitrification, leading to a decrease in the ammonium removal of the SND system. This study provides a theoretical foundation for understanding the influence of Zn(Ⅱ) on the SND process in a biofilm system, which should be a source of great concern.
The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35%, while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by reducing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces computation time for integrity checks by up to 40% compared to traditional bilinear pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
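Algebraic signatures are cheap precisely because of a linearity property: the signature of a combination of blocks equals the same combination of the block signatures, so one comparison can audit a whole batch. A toy version over the integers modulo a prime illustrates the property; the modulus and evaluation point are illustrative, and real algebraic signatures are typically defined over GF(2^w), which is not reproduced here.

```python
P = 2**61 - 1       # public prime modulus (illustrative)
ALPHA = 1_000_003   # public evaluation point (illustrative)

def alg_sig(block):
    """Toy algebraic signature: evaluate the block's symbols as the
    coefficients of a polynomial at ALPHA, modulo P (Horner's rule)."""
    s = 0
    for sym in block:       # block: sequence of integers, e.g. bytes
        s = (s * ALPHA + sym) % P
    return s

def combine(a, b):
    # Element-wise sum of two equal-length blocks, taken in the field.
    return [(x + y) % P for x, y in zip(a, b)]
```

Because `alg_sig` is linear, sig(a + b) = sig(a) + sig(b) mod P, a verifier holding only the per-block signatures can check an aggregated response in one step, which is the mechanism behind the batch-auditing savings claimed above.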
The maturity of 5G technology has enabled crowd-sensing services to collect multimedia data over wireless networks, which has promoted the application of crowd-sensing services in different fields but also brings more privacy and security challenges, the most common of which is privacy leakage. As a privacy protection technology combining data integrity checking with identity anonymity, the ring signature is widely used in the field of privacy protection. However, introducing signature technology leads to additional signature verification overhead, and in crowd-sensing scenarios the existing signature schemes are inefficient at multi-signature verification. Therefore, it is necessary to design an efficient multi-signature verification scheme while ensuring security. In this paper, a batch-verifiable signature scheme is proposed for the crowd-sensing setting, which enables the sensing platform to verify multiple uploaded signatures efficiently, thereby overcoming the defects of traditional signature schemes in multi-signature verification. In our proposal, a method for linking homologous data is also presented, which is valuable for incentive mechanisms and data analysis. Simulation results show that the proposed scheme performs well in terms of security and efficiency in crowd-sensing applications with large numbers of users and data.
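The speed-up in any batch-verifiable scheme comes from replacing n independent verification equations with one aggregated equation. A toy Schnorr-style illustration in a tiny prime-order group makes this concrete; the group parameters are demonstration-sized and this is a generic sketch, not the paper's ring-signature construction.

```python
import hashlib
import secrets

# Tiny demo group: p = 2q + 1 with q prime; G generates the order-q subgroup.
P, Q, G = 467, 233, 4

def h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1       # secret key
    return x, pow(G, x, P)                 # (x, public key y)

def sign(x, msg: bytes):
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = h(r.to_bytes(2, "big") + msg)
    s = (k + e * x) % Q
    return r, s

def verify_one(y, msg, sig):
    r, s = sig
    e = h(r.to_bytes(2, "big") + msg)
    return pow(G, s, P) == (r * pow(y, e, P)) % P

def verify_batch(items):
    """items: list of (y, msg, (r, s)). One exponentiation of G checks
    all signatures at once; random weights w stop a forger from
    cancelling an error in one signature against another."""
    lhs_exp, rhs = 0, 1
    for y, msg, (r, s) in items:
        w = secrets.randbelow(Q - 1) + 1
        e = h(r.to_bytes(2, "big") + msg)
        lhs_exp = (lhs_exp + w * s) % Q
        rhs = (rhs * pow((r * pow(y, e, P)) % P, w, P)) % P
    return pow(G, lhs_exp, P) == rhs
```

For n valid signatures, g^s_i = r_i · y_i^e_i holds for each i, so the single equation g^(Σ w_i s_i) = Π (r_i · y_i^e_i)^w_i holds for the whole batch, trading n base exponentiations for one.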
For the goals of security and privacy preservation, we propose a blind batch encryption- and public ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and allows privacy information to be preserved. Data owners can tightly manage their data with efficient revocation and only grant one-time adaptive access for the fulfillment of the requester. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide a security analysis in the context of four typical attacks.
Neural networks are often viewed as pure ‘black box’ models, lacking the interpretability and extrapolation capabilities of pure mechanistic models. This work proposes a new approach that, with the help of neural networks, improves the conformity of a first-principles model to the actual plant. The final result is still a first-principles model rather than a hybrid model, which preserves the high interpretability of first-principles models. This work more faithfully simulates an industrial batch distillation that separates four components: water, ethylene glycol, diethylene glycol, and triethylene glycol. GRU (gated recurrent unit) and LSTM (long short-term memory) networks were used to obtain empirical parameters of the mechanistic model that are difficult to measure directly. These were used to improve the empirical processes in the mechanistic model, thereby correcting unreasonable model assumptions and achieving better predictability for batch distillation. The proposed method was verified using a case study from an industrial plant, and the results show its advantage in improving model predictions and its potential to extend to other similar systems.
Cloud service providers generally co-locate online services and batch jobs on the same computer cluster, where resources can be pooled to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality of service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction of QPSM consists of two parts: the prediction of an online service’s QoS anomalies based on XGBoost, and the prediction of the completion time of an offline batch job based on random forest. Online-service QoS anomaly prediction is used to evaluate the influence of the batch job mix on online service performance, and batch job completion time prediction is used to reduce the total waiting time of batch jobs. When the same number of batch jobs are scheduled in experiments using typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM improves CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. The experiments show that the QPSM scheduling strategy proposed in this study can effectively guarantee the quality of online services and further improve cluster resource utilization.
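The way the two predictors cooperate can be sketched as a simple admission-plus-ordering policy: batch jobs predicted to disturb online QoS are deferred, and admitted jobs run shortest-predicted-runtime-first, which minimizes total waiting time on a single machine. The two predictor callables below are stubs standing in for the trained XGBoost and random-forest models; the threshold and job fields are illustrative assumptions, not QPSM's actual policy.

```python
def schedule(batch_jobs, predict_anomaly, predict_runtime, threshold=0.5):
    """Admit only batch jobs whose predicted co-location QoS-anomaly
    probability is below threshold, then order the admitted jobs
    shortest-predicted-runtime-first to cut total waiting time."""
    admitted = [j for j in batch_jobs if predict_anomaly(j) < threshold]
    deferred = [j for j in batch_jobs if j not in admitted]
    admitted.sort(key=predict_runtime)
    return admitted, deferred
```

Deferred jobs would be retried in a later scheduling round, once the predicted interference with the co-located online services has dropped.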
Brain signal analysis from electroencephalogram (EEG) recordings is the gold standard for diagnosing various neural disorders, especially epileptic seizures. Seizure signals are highly chaotic compared to normal brain signals and can thus be identified from EEG recordings. In the current seizure detection and classification landscape, most models focus primarily on binary classification, distinguishing between seizure and non-seizure states. While effective for basic detection, these models fail to address the nuanced stages of seizures and the intervals between them. Accurate identification of pre-seizure and interictal stages, and of the timing between seizures, is crucial for an effective seizure alert system. This granularity is essential for improving patient-specific interventions and developing proactive seizure management strategies. This study addresses this gap by proposing a novel AI-based approach for seizure stage classification using a Deep Convolutional Neural Network (DCNN). The developed model goes beyond traditional binary classification by categorizing EEG recordings into three distinct classes, thus providing a more detailed analysis of seizure stages. To enhance the model’s performance, we optimized the DCNN using two advanced techniques: the Stochastic Gradient Algorithm (SGA) and the evolutionary Genetic Algorithm (GA). These optimization strategies are designed to fine-tune the model’s accuracy and robustness. Moreover, k-fold cross-validation ensures the model’s reliability and generalizability across different data sets. Trained and validated on the Bonn EEG data sets, the proposed optimized DCNN model achieved a test accuracy of 93.2%, demonstrating its ability to accurately classify EEG signals. In summary, the key advancement of the present research lies in addressing the limitations of existing models by providing a more detailed seizure classification system, thus potentially enhancing the effectiveness of real-time seizure prediction and management systems in clinical settings. With its inherent classification performance, the proposed approach represents a significant step forward in improving patient outcomes through advanced AI techniques.
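The k-fold cross-validation used to establish the model's generalizability partitions the recordings into k folds, training on k-1 folds and testing on the held-out one, rotating through all folds. A minimal index-level sketch (generic, independent of the DCNN itself):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation
    over n samples, after one seeded shuffle."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]      # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds if f is not test for j in f]
        yield train, test
```

Every sample appears in exactly one test fold, so the k per-fold accuracies average into an estimate that does not depend on one lucky train/test split.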
The security of Federated Learning (FL) and Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating training samples; such attacks are therefore called causative availability indiscriminate attacks. Facing the problem that existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poison that can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
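The batch hierarchical idea can be sketched as a two-level filter: a cheap batch-level screen runs first, and the per-sample detector runs only inside batches the screen flags, so clean batches pay almost nothing. The two detector callables below are illustrative stubs standing in for the paper's trained complexity-feature model.

```python
def hierarchical_detect(batches, batch_suspicious, sample_is_poison):
    """Two-level sanitization: a cheap batch-level screen first, then a
    per-sample detector only inside flagged batches. Returns the kept
    samples and the removed ones."""
    clean, removed = [], []
    for batch in batches:
        if not batch_suspicious(batch):
            clean.extend(batch)              # cheap path: accept whole batch
            continue
        for sample in batch:                 # expensive path: per-sample check
            (removed if sample_is_poison(sample) else clean).append(sample)
    return clean, removed
```

If a fraction p of batches is flagged, the expensive detector touches only about p of the dataset, which is what makes pre-training sanitization fast enough for real-time FL rounds.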
To improve the productivity and resource utilization of flexible job shops and reduce their production cost, this paper designs an improved two-layer optimization algorithm for the dual-resource scheduling optimization problem of the flexible job shop considering workpiece batching. First, a mathematical model is established to minimize the maximum completion time. Second, an improved two-layer optimization algorithm is designed: the outer layer uses an improved PSO (Particle Swarm Optimization) to solve the workpiece batching problem, and the inner layer uses an improved GA (Genetic Algorithm) to solve the dual-resource scheduling problem. Then, a rescheduling method is designed to handle the task disturbances, represented by machine failures, that occur in the workshop production process. Finally, the superiority and effectiveness of the improved two-layer optimization algorithm are verified on two typical cases. The case results show that the improved two-layer optimization algorithm increases average productivity by 7.44% compared to the ordinary two-layer optimization algorithm. By setting different numbers of AGVs (Automated Guided Vehicles) and analyzing the impact on the production cycle of the whole order, this paper uses two indicators, the decrease rate of the maximum completion time and the average AGV load time, to obtain the optimal number of AGVs, which saves production cost while ensuring production efficiency. This research ties the solved problem to the real production process, improving the productivity and reducing the production cost of the flexible job shop, and provides new ideas for subsequent research.
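The outer-layer PSO rests on the standard velocity and position update, v = w·v + c1·r1·(pbest - x) + c2·r2·(gbest - x). A plain PSO sketch on a continuous objective shows the mechanics; in the paper's outer layer the positions would encode workpiece batching decisions, and the coefficients here are common textbook defaults rather than the paper's tuned values.

```python
import random

def pso_minimize(f, dim=2, swarm=12, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain particle swarm optimization for a continuous objective f."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the two-layer scheme, evaluating f for one particle means running the inner-layer GA on that particle's batching decision and returning the resulting maximum completion time.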
Funding: Supported by the National Natural Science Foundation of China (No. 62172436), the Natural Science Foundation of Shaanxi Province (No. 2023-JC-YB-584), and the Engineering University of PAP’s Funding for Scientific Research Innovation Team and Key Researcher (No. KYGG202011).
Funding: Supported by the National Research Foundation of Korea (NRF) grant for RLRC funded by the Korea government (MSIT) (No. 2022R1A5A8026986, RLRC); by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01304, Development of Self-Learnable Mobile Recursive Neural Network Processor Technology); by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, Grand-ICT) supervised by the IITP; by the Korea Technology and Information Promotion Agency for SMEs (TIPA); and by the Korean government (Ministry of SMEs and Startups) Smart Manufacturing Innovation R&D program (RS-2024-00434259).
Abstract: On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a 2.4 GHz Intel Central Processing Unit (CPU) software normalization implementation, while maintaining accuracy (0.51% mean Average Precision (mAP) drop at floating-point 32 bits (FP32), 1.35% at brain floating-point 16 bits (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that our proposed BN hardware design enables efficient, high-accuracy, and power-saving on-device training for modern CNNs. Our results demonstrate that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical on-device CNN training with significantly reduced computational and power requirements.
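The BN math this accelerator implements, and the zero-skipping idea for parameter gradients, can be sketched in plain Python (an illustrative model of standard batch normalization, not the paper's hardware design; all function names here are hypothetical):

```python
# Illustrative model of the BN computation the accelerator performs in
# hardware (pure-Python sketch; names are hypothetical, not the paper's RTL).
def bn_forward(x, gamma, beta, eps=1e-5):
    """Normalize one channel of a mini-batch x, then scale and shift."""
    n = len(x)
    mean = sum(x) / n                                 # statistical parameters
    var = sum((v - mean) ** 2 for v in x) / n         # computed per mini-batch
    x_hat = [(v - mean) / (var + eps) ** 0.5 for v in x]
    return [gamma * h + beta for h in x_hat], mean, var

def bn_backward_dgamma_dbeta(dy, x_hat):
    """Parameter gradients; zero-skipping (technique 3) omits terms whose
    upstream gradient is exactly zero instead of multiplying by them."""
    dgamma = sum(d * h for d, h in zip(dy, x_hat) if d != 0.0)
    dbeta = sum(d for d in dy if d != 0.0)
    return dgamma, dbeta
```

The per-mini-batch mean and variance reductions in `bn_forward` are the statistics whose hardware cost motivates the resource-sharing and buffering techniques above.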
Abstract: The flexible satellite batch production line is a complex discrete production system spanning multiple cross-disciplinary fields and mixing serial and parallel tasks. As the source of the satellite batch production line process, the warehousing system faces urgent needs such as an uncertain production scale and the rapid iteration and optimization of business processes. Therefore, the requirements and architecture of complex discrete warehousing systems such as flexible satellite batch production lines are studied. The physical system of intelligent equipment is abstracted as a digital model to form the underlying module, and a digital fusion framework of "business domain + middleware platform + intelligent equipment information model" is constructed. The granularity of microservice splitting is calculated based on the dynamic correlation between user access instances and database table structures. The general warehousing functions of the platform are divided to achieve module customization, addition, and configuration, and an open discrete warehousing system based on microservices is designed. The software architecture and system are developed on the Spring Cloud framework. This architecture decouples business logic from physical hardware, enhances the maintainability and scalability of the system, and greatly improves the system's adaptability to different complex discrete warehousing business scenarios.
Funding: supported by the National Natural Science Foundation of China (Grant Number 61573264).
Abstract: As a complicated optimization problem, the parallel batch processing machines scheduling problem (PBPMSP) exists in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. PBPMSP and scheduling problems with machine eligibility are frequently considered separately; however, PBPMSP with machine eligibility is seldom explored. This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced in a heuristic and random way, and the competitive search of memeplexes comprises two phases: competition between any two memeplexes is carried out in the first phase; then iteration times are adjusted based on competition, and search strategies are adapted based on the evolution quality of memeplexes in the second phase. An adaptive population shuffling is also given. Computational experiments are conducted on 100 instances. The results show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
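The shuffled frog-leaping skeleton that CSFLA builds on (rank the population by fitness, then deal solutions into memeplexes round-robin before local search) can be illustrated in a few lines of Python. This is a generic SFLA sketch with hypothetical names, not the paper's competitive variant:

```python
# Generic SFLA population partitioning (illustrative, not the paper's CSFLA):
# sort by fitness (ascending = better for makespan minimization), then deal
# solutions into m memeplexes round-robin so each memeplex gets a spread of
# good and poor solutions to evolve independently before reshuffling.
def partition_memeplexes(population, fitness, m):
    ranked = sorted(population, key=fitness)   # best (lowest makespan) first
    return [ranked[i::m] for i in range(m)]
```

After each round of within-memeplex search, standard SFLA merges the memeplexes and re-partitions; CSFLA's contribution is deciding, via competition, how much search each memeplex receives between shuffles.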
Funding: supported by Beijing Natural Science Foundation (2222037) and the Special Educating Project of the Talent for Carbon Peak and Carbon Neutrality of the University of Chinese Academy of Sciences (Innovation of talent cultivation model for "dual carbon" in the chemical engineering industry, E3E56501A2).
Abstract: Dividing wall batch distillation with middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging, since the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory, such as RNN, LSTM, and GRU, are used to handle the time-series data. The results from a case example show that the new scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
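A minimal fuzzy controller of the kind used in such temperature schemes (triangular membership functions over the control error, defuzzified by a weighted average of output singletons) looks like this. The rule base and all numeric values below are hypothetical placeholders, not the paper's tuned controller:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_adjust(temp_error):
    """Map a temperature error to a control action in [-1, 1].
    Hypothetical 3-rule base: negative error -> decrease (-1),
    on-target -> hold (0), positive error -> increase (+1)."""
    mu = {
        -1.0: tri(temp_error, -2.0, -1.0, 0.0),
         0.0: tri(temp_error, -1.0,  0.0, 1.0),
         1.0: tri(temp_error,  0.0,  1.0, 2.0),
    }
    total = sum(mu.values())
    # Weighted average of singleton outputs (a common defuzzification choice).
    return sum(out * m for out, m in mu.items()) / total if total else 0.0
```

In the paper's scheme, the soft-sensor supplies the composition estimate from which such an error signal is formed; the fuzzy layer then smooths out the sensor's prediction error.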
Abstract: Fabric dyeing is a critical production process in the clothing industry and relies heavily on batch processing machines (BPM). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the search times for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting the search strategies for each memeplex based on their dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies. Through these strategies, memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted with ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
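The dominance relation underlying the bi-objective comparisons (makespan, total tardiness) is standard Pareto dominance for minimization, stated in one function; this is textbook material, not a detail specific to ACSFLA:

```python
def dominates(a, b):
    """Pareto dominance for minimization of objective vectors such as
    (makespan, total tardiness): a dominates b if a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

ACSFLA applies such dominance relationships between memeplex representatives to decide which memeplexes cooperate and which fall back to local search.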
Funding: funding from the National Key R&D Program of China (No. 2019YFC1906600); the National Natural Science Foundation of China (No. 52200049); the China Postdoctoral Science Foundation (No. 2022TQ0089); the Heilongjiang Province Postdoctoral Science Foundation (No. LBH-Z22181); the State Key Laboratory of Urban Water Resource and Environment (Harbin Institute of Technology) (No. 2023DX06); and the Fundamental Research Funds for the Central Universities.
Abstract: The initial step in the resource utilization of Chinese medicine residues (CMRs) involves dehydration pretreatment, which produces high concentrations of organic wastewater and leads to environmental pollution. Meanwhile, to address the issue of anaerobic systems failing due to acidification under shock loading, a combined process of a microaerobic expanded granular sludge bed (EGSB) and a moving bed sequencing batch reactor (MBSBR) was proposed in this study. Microaeration facilitated hydrolysis, improved the removal of nitrogen and phosphorus pollutants, maintained a low concentration of volatile fatty acids (VFAs), and enhanced system stability. In addition, microaeration promoted microbial richness and diversity, enriching three phyla associated with hydrolytic acidification: Bacteroidota, Synergistota, and Firmicutes. Furthermore, the aeration intensity in the MBSBR was optimized. Elevated levels of dissolved oxygen (DO) impacted biofilm structure, suppressed denitrifying bacteria activity, led to nitrate accumulation, and hindered simultaneous nitrification and denitrification (SND). Maintaining a DO concentration of 2 mg/L enhanced the removal of nitrogen and phosphorus while conserving energy. The combined process achieved removal efficiencies of 98.25%, 90.49%, and 98.55% for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP), respectively. The typical pollutants liquiritin (LQ) and glycyrrhizic acid (GA) were completely degraded. This study presents an innovative approach to the treatment of high-concentration organic wastewater and provides a reliable solution for pollution control in the resource utilization of CMRs.
Funding: supported by the Shanxi Province Science Foundation for Youths (20210302124348 and 202103021223099); the Basic Research Project for the Shanxi-Zheda Institute of Advanced Materials and Chemical Engineering (2021SX-AT004); and the National Natural Science Foundation of China (51778397).
Abstract: Simultaneous nitrification and denitrification (SND) is considered an attractive alternative to traditional biological nitrogen removal technology. Knowing the effects of heavy metals on the SND process is essential for engineering. In this study, the responses of SND performance to Zn(Ⅱ) exposure were investigated in a biofilm reactor. The results indicated that Zn(Ⅱ) at low concentration (≤2 mg·L^(-1)) had negligible effects on the removal of nitrogen and COD in the SND process compared to operation without Zn(Ⅱ), while the removal of ammonium and COD was strongly inhibited as the Zn(Ⅱ) concentration increased to 5 or 10 mg·L^(-1). Large amounts of extracellular polymeric substances (EPS), especially protein (PN), were secreted to protect microorganisms from the increasing Zn(Ⅱ) damage. High-throughput sequencing analysis indicated that Zn(Ⅱ) exposure could significantly reduce microbial diversity and change the structure of the microbial community. RDA analysis further confirmed that the Azoarcus-Thauera cluster was the dominant genus in response to low Zn(Ⅱ) exposure from 1 to 2 mg·L^(-1), while the genera Klebsiella and Enterobacter showed adaptability to elevated Zn(Ⅱ). According to PICRUSt, the abundance of key genes encoding ammonia monooxygenase (EC:1.14.99.39) was markedly reduced after exposure to Zn(Ⅱ), suggesting that the influence of Zn(Ⅱ) on nitrification was greater than that on denitrification, leading to a decrease in the ammonium removal of the SND system. This study provides a theoretical foundation for understanding the influence of Zn(Ⅱ) on the SND process in a biofilm system, which should be a source of great concern.
Funding: supported by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project number RSP2025R498.
Abstract: The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35% while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by reducing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces the computation time for integrity checks by up to 40% compared to traditional bilinear pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
Funding: supported by the National Natural Science Foundation of China under Grant No. 61972360 and the Shandong Provincial Natural Science Foundation of China under Grant Nos. ZR2020MF148 and ZR2020QF108.
Abstract: The maturity of 5G technology has enabled crowd-sensing services to collect multimedia data over wireless networks, promoting the application of crowd-sensing services in different fields, but it also brings more privacy and security challenges, the most common of which is privacy leakage. As a privacy protection technology combining data integrity checking with identity anonymity, the ring signature is widely used in the field of privacy protection. However, introducing signature technology incurs additional signature verification overhead. In crowd-sensing scenarios, existing signature schemes are inefficient at multi-signature verification, so it is necessary to design an efficient multi-signature verification scheme while ensuring security. In this paper, a batch-verifiable signature scheme is proposed for the crowd-sensing setting, which allows the sensing platform to verify multiple uploaded signatures efficiently, thereby overcoming the shortcomings of traditional signature schemes in multi-signature verification. In our proposal, a method for linking homologous data is also presented, which is valuable for incentive mechanisms and data analysis. Simulation results show that the proposed scheme performs well in terms of security and efficiency in crowd-sensing applications with large numbers of users and data.
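Why batch verification saves work can be seen with a toy RSA-style example (textbook parameters, purely illustrative and unrelated to the paper's ring-signature construction): such signatures are multiplicatively homomorphic, so n of them can be checked with one exponentiation on a product instead of n separate exponentiations. Production batch verifiers additionally weight each signature with a random coefficient to block mix-and-match forgeries, which this sketch omits:

```python
# Toy textbook-RSA batch verification (illustration only; insecure parameters).
p, q = 61, 53
N, e, d = p * q, 17, 2753      # e*d = 1 mod phi(N) = 3120

def H(m):
    """Stand-in for a hash function; NOT cryptographic."""
    return m % N

def sign(m):
    return pow(H(m), d, N)

def batch_verify(msgs, sigs):
    """Check all signatures with a single modular exponentiation:
    (prod sigs)^e == prod H(m)  (mod N), by multiplicative homomorphism."""
    lhs = 1
    for s in sigs:
        lhs = lhs * s % N
    rhs = 1
    for m in msgs:
        rhs = rhs * H(m) % N
    return pow(lhs, e, N) == rhs
```

The single `pow(lhs, e, N)` call replaces one exponentiation per signature, which is the source of the efficiency gain that batch-verifiable schemes exploit at scale.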
Funding: partially supported by the National Natural Science Foundation of China under Grant No. 62372245; the Foundation of the Yunnan Key Laboratory of Blockchain Application Technology under Grant 202105AG070005; the Foundation of the State Key Laboratory of Public Big Data; and the Foundation of the Key Laboratory of Computational Science and Application of Hainan Province under Grant JSKX202202.
Abstract: For the goals of security and privacy preservation, we propose a blind-batch-encryption- and public-ledger-based data sharing protocol that allows the integrity of sensitive data to be audited by a public ledger and allows privacy information to be preserved. Data owners can tightly manage their data with efficient revocation and grant only one-time, adaptive access to fulfill a request. We prove that our protocol is semantically secure, blind, and secure against oblivious requesters and malicious file keepers. We also provide a security analysis in the context of four typical attacks.
Funding: supported by Beijing Natural Science Foundation (2222037) and the Fundamental Research Funds for the Central Universities.
Abstract: Neural networks are often viewed as pure ‘black box’ models, lacking the interpretability and extrapolation capabilities of pure mechanistic models. This work proposes a new approach that, with the help of neural networks, improves the conformity of a first-principles model to the actual plant. The final result is still a first-principles model rather than a hybrid model, which preserves the high interpretability of first-principles models. This work better simulates an industrial batch distillation that separates four components: water, ethylene glycol, diethylene glycol, and triethylene glycol. GRU (gated recurrent unit) and LSTM (long short-term memory) networks were used to obtain empirical parameters of the mechanistic model that are difficult to measure directly. These were used to improve the empirical sub-processes in the mechanistic model, thus correcting unreasonable model assumptions and achieving better predictability for batch distillation. The proposed method was verified using a case study from an industrial plant, and the results show its advancement in improving model predictions and its potential to extend to other, similar systems.
Funding: supported by the National Natural Science Foundation of China (No. 61972118) and the Key R&D Program of Zhejiang Province (No. 2023C01028).
Abstract: Cloud service providers generally co-locate online services and batch jobs on the same computer cluster, where resources can be pooled in order to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality-of-service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction in QPSM consists of two parts: prediction of an online service's QoS anomaly based on XGBoost, and prediction of the completion time of an offline batch job based on random forest. Online-service QoS anomaly prediction is used to evaluate the influence of the batch-job mix on online-service performance, and batch-job completion time prediction is used to reduce the total waiting time of batch jobs. When the same number of batch jobs are scheduled in experiments using typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM can improve CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. Experiments show that the QPSM scheduling strategy proposed in this study can effectively guarantee the quality of online services and further improve cluster resource utilization.
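The scheduling decision made once the two predictors have run can be sketched as follows. This is an assumed simplification: the actual model feeds XGBoost and random-forest predictions into its scheduler, which dictionaries and a threshold stand in for here, and the names are hypothetical:

```python
def schedule_batch_jobs(jobs, predicted_runtime, qos_risk, risk_cap=0.5):
    """Sketch of a QPSM-style decision: admit a batch job only if its
    predicted QoS-anomaly risk for co-located online services is
    acceptable, then order admitted jobs by predicted completion time
    (shortest-processing-time first), which minimizes total waiting time."""
    admitted = [j for j in jobs if qos_risk[j] <= risk_cap]
    return sorted(admitted, key=lambda j: predicted_runtime[j])
```

Ordering by predicted completion time is the classical SPT rule; its optimality for total waiting time on a single resource is why the completion-time predictor directly reduces batch-job queueing.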
Funding: funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Brain signal analysis from electroencephalogram (EEG) recordings is the gold standard for diagnosing various neural disorders, especially epileptic seizure. Seizure signals are highly chaotic compared to normal brain signals and thus can be identified from EEG recordings. In the current seizure detection and classification landscape, most models focus primarily on binary classification, distinguishing between seizure and non-seizure states. While effective for basic detection, these models fail to address the nuanced stages of seizures and the intervals between them. Accurate identification of pre-seizure and interictal stages, and of the timing between seizures, is crucial for an effective seizure alert system. This granularity is essential for improving patient-specific interventions and developing proactive seizure management strategies. This study addresses this gap by proposing a novel AI-based approach to seizure stage classification using a Deep Convolutional Neural Network (DCNN). The developed model goes beyond traditional binary classification by categorizing EEG recordings into three distinct classes, thus providing a more detailed analysis of seizure stages. To enhance the model's performance, we optimized the DCNN using two advanced techniques: the Stochastic Gradient Algorithm (SGA) and the evolutionary Genetic Algorithm (GA). These optimization strategies are designed to fine-tune the model's accuracy and robustness. Moreover, k-fold cross-validation ensures the model's reliability and generalizability across different data sets. Trained and validated on the Bonn EEG data sets, the proposed optimized DCNN model achieved a test accuracy of 93.2%, demonstrating its ability to accurately classify EEG signals. In summary, the key advancement of the present research lies in addressing the limitations of existing models by providing a more detailed seizure classification system, thus potentially enhancing the effectiveness of real-time seizure prediction and management systems in clinical settings. With its inherent classification performance, the proposed approach represents a significant step forward in improving patient outcomes through advanced AI techniques.
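The k-fold protocol used for validation simply partitions the sample indices so that every sample is held out exactly once; a generic sketch (unrelated to the specific Bonn data splits) is:

```python
def kfold_indices(n, k):
    """Yield (train, val) index lists for k-fold cross-validation;
    every one of the n samples appears in exactly one validation fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for val in folds:
        held = set(val)
        train = [j for j in range(n) if j not in held]
        yield train, val
```

Averaging the model's score over the k held-out folds is what gives the reliability estimate the abstract refers to; for EEG data, splits are usually also stratified by class and kept subject-disjoint.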
Funding: supported in part by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (Grant No. 2022C03174); the National Natural Science Foundation of China (No. 92067103); the Key Research and Development Program of Shaanxi, China (No. 2021ZDLGY06-02); the Natural Science Foundation of Shaanxi Province (No. 2019ZDLGY12-02); the Shaanxi Innovation Team Project (No. 2018TD-007); the Xi'an Science and Technology Innovation Plan (No. 201809168CX9JC10); the Fundamental Research Funds for the Central Universities (No. YJS2212); and the National 111 Program of China (B16037).
Abstract: The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of a model by contaminating its training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious processes and heavy computation, we propose a new supervised batch detection method for poisoned data that can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps enhance accuracy, and we use data-complexity features to train a detection model, which is then applied in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison and can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
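The batch hierarchical idea (screen coarse batches cheaply, then pay for per-sample scoring only inside flagged batches) can be sketched as below. Both scoring functions are hypothetical stand-ins; the paper trains its detector on data-complexity features:

```python
def hierarchical_detect(samples, batch_score, sample_score, thresh,
                        batch_size=8):
    """Two-level screening: a cheap batch-level score decides whether a
    batch warrants fine-grained inspection, and only flagged batches
    incur per-sample scoring (hypothetical scoring functions)."""
    flagged = []
    for i in range(0, len(samples), batch_size):
        batch = samples[i:i + batch_size]
        if batch_score(batch) > thresh:              # coarse, cheap test
            flagged += [s for s in batch if sample_score(s) > thresh]
    return flagged
```

When poison is rare, most batches fail the coarse test and are skipped entirely, which is what makes the hierarchy fast enough to run before every round of local training.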
Abstract: To improve productivity and resource utilization and to reduce the production cost of flexible job shops, this paper designs an improved two-layer optimization algorithm for the dual-resource scheduling optimization problem of a flexible job shop that considers workpiece batching. First, a mathematical model is established to minimize the maximum completion time. Second, an improved two-layer optimization algorithm is designed: the outer layer uses an improved PSO (Particle Swarm Optimization) to solve the workpiece batching problem, and the inner layer uses an improved GA (Genetic Algorithm) to solve the dual-resource scheduling problem. A rescheduling method is then designed to handle task disturbances, represented by machine failures, that occur during workshop production. Finally, the superiority and effectiveness of the improved two-layer optimization algorithm are verified with two typical cases. The case results show that the improved two-layer optimization algorithm increases average productivity by 7.44% compared to an ordinary two-layer optimization algorithm. By setting different numbers of AGVs (Automated Guided Vehicles) and analyzing the impact on the production cycle of the whole order, this paper uses two indicators, the maximum-completion-time decreasing rate and the average AGV load time, to obtain the optimal number of AGVs, which saves production cost while ensuring production efficiency. This research connects the solved problem with the real production process, improving productivity and reducing the production cost of the flexible job shop, and provides new ideas for subsequent research.
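How workpiece batching interacts with the maximum-completion-time objective can be seen in a small sketch, with a greedy longest-processing-time assignment standing in for the paper's PSO+GA layers; all names and the batching rule are hypothetical simplifications:

```python
def makespan_after_batching(job_sizes, batch_cap, machines):
    """Split each workpiece job into batches of at most batch_cap units
    (the outer-layer decision), then assign batches, largest first, to
    the least-loaded machine (a greedy stand-in for the inner-layer GA).
    Returns the resulting makespan (maximum machine load)."""
    batches = []
    for size in job_sizes:
        while size > 0:
            batches.append(min(size, batch_cap))
            size -= batch_cap
    loads = [0] * machines
    for b in sorted(batches, reverse=True):
        loads[loads.index(min(loads))] += b          # least-loaded machine
    return max(loads)
```

Smaller batch caps create more, finer-grained batches that pack machines more evenly but add more transfers, which is exactly the trade-off the two-layer algorithm searches over.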