Funding: Supported by the National Natural Science Foundation of China (No. 62172436), the Natural Science Foundation of Shaanxi Province (No. 2023-JC-YB-584), and the Engineering University of PAP's Funding for Scientific Research Innovation Team and Key Researcher (No. KYGG202011).
Abstract: Cloud storage, a core component of cloud computing, plays a vital role in the storage and management of data. Electronic Health Records (EHRs), which document users' health information, are typically stored on cloud servers. However, users' sensitive data then become unregulated. In the event of data loss, cloud storage providers might conceal the fact that data has been compromised to protect their reputation and mitigate losses. Ensuring the integrity of data stored in the cloud remains a pressing issue that urgently needs to be addressed. In this paper, we propose a data auditing scheme for cloud-based EHRs that incorporates recoverability and batch auditing, alongside a thorough security and performance evaluation. Our scheme builds upon the indistinguishability-based privacy-preserving auditing approach proposed by Zhou et al. We show that their scheme is insecure and vulnerable to forgery attacks on data storage proofs. To address these vulnerabilities, we enhance the auditing process using masking techniques and design new algorithms to strengthen security. We also provide formal proofs of the security of the signature algorithm and the auditing scheme. Furthermore, our results show that the scheme effectively protects user privacy and is resilient against malicious attacks. Experimental results indicate that the scheme is not only secure and efficient but also supports batch auditing of cloud data; specifically, when auditing 10,000 users, batch auditing reduces computational overhead by 101 s compared to normal auditing.
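The batch-auditing saving reported above comes from aggregating many per-user checks into a single verification. As background only (this is not the paper's scheme; the tag construction and all names below are hypothetical toys), a sketch with multiplicatively homomorphic tags shows the idea: the verifier replaces n independent checks with one check on an aggregated tag.

```python
# Toy illustration of batch auditing with homomorphic tags (hypothetical
# construction, NOT the paper's scheme): tag(m) = g^m mod p, so the product
# of tags equals the tag of the summed blocks, and one modular
# exponentiation on the verifier's side replaces n of them.
P = 2**61 - 1   # toy prime modulus (assumption)
G = 3           # toy base (assumption)

def tag(m: int) -> int:
    """Per-block homomorphic tag t = G^m mod P."""
    return pow(G, m, P)

def audit_individually(blocks, tags):
    """Naive auditing: one verification per block."""
    return all(tag(m) == t for m, t in zip(blocks, tags))

def audit_batch(blocks, tags):
    """Batch auditing: fold all tags together, then do a single check."""
    agg_tag = 1
    for t in tags:
        agg_tag = (agg_tag * t) % P
    return pow(G, sum(blocks), P) == agg_tag
```

A tampered block changes the summed exponent, so the single aggregate check still catches corruption while the verifier's exponentiation count drops from n to 1.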
Funding: Supported by the National Research Foundation of Korea (NRF) grant for RLRC funded by the Korea government (MSIT) (No. 2022R1A5A8026986, RLRC); by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01304, Development of Self-Learnable Mobile Recursive Neural Network Processor Technology); by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, Grand-ICT) supervised by the IITP; by the Korea Technology and Information Promotion Agency for SMEs (TIPA); and by the Korean government (Ministry of SMEs and Startups) Smart Manufacturing Innovation R&D program (RS-2024-00434259).
Abstract: On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a software normalization implementation on a 2.4 GHz Intel Central Processing Unit (CPU), while maintaining accuracy (0.51% mean Average Precision (mAP) drop at 32-bit floating point (FP32), 1.35% at 16-bit brain floating point (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results demonstrate that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling practical on-device CNN training with significantly reduced computational and power requirements.
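For reference, the mini-batch statistics and gradients that make BN hardware-intensive are the standard ones below. This NumPy sketch is our own illustration of textbook BN (not the accelerator's implementation); note that zero entries in the upstream gradient contribute nothing to the three reduction sums in the backward pass, which is the property a zero-skipping unit exploits.

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    """Standard BN over a mini-batch (rows = samples, cols = features):
    normalize each feature by its batch mean/variance, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta, (xhat, var, eps)

def bn_backward(dy, gamma, cache):
    """Analytic BN gradients in the usual compact form. Zero-valued dy
    entries drop out of all three sums, so skipping them loses nothing."""
    xhat, var, eps = cache
    n = dy.shape[0]
    dgamma = (dy * xhat).sum(axis=0)
    dbeta = dy.sum(axis=0)
    dx = gamma / (n * np.sqrt(var + eps)) * (n * dy - dbeta - xhat * dgamma)
    return dx, dgamma, dbeta
```

The backward pass needs the whole mini-batch's statistics before any per-element gradient can be produced, which is why naive implementations incur repeated DRAM round trips and why the interleaved buffering above matters.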
Abstract: The flexible satellite batch production line is a complex discrete production system spanning multiple disciplines and mixing serial and parallel tasks. As the source of the satellite batch production line process, the warehousing system faces urgent needs such as uncertain production scale and rapid iteration and optimization of business processes. Therefore, the requirements and architecture of complex discrete warehousing systems such as flexible satellite batch production lines are studied. The physical system of intelligent equipment is abstracted as a digital model to form the underlying module, and a digital fusion framework of "business domain + middleware platform + intelligent equipment information model" is constructed. The granularity of microservice splitting is calculated based on the dynamic correlation between user access instances and database table structures. The general warehousing functions of the platform are partitioned to support module customization, addition, and configuration, and an open discrete warehousing system based on microservices is designed. The software architecture is designed and developed on the Spring Cloud framework. This architecture decouples business logic from physical hardware, enhances the maintainability and scalability of the system, and greatly improves its adaptability to different complex discrete warehousing business scenarios.
Funding: Supported by the National Natural Science Foundation of China (Grant Number 61573264).
Abstract: As a complicated optimization problem, the parallel batch processing machines scheduling problem (PBPMSP) exists in many real-life manufacturing industries such as textiles and semiconductors. Machine eligibility means that at least one machine is not eligible for at least one job. PBPMSP and scheduling problems with machine eligibility are each frequently considered; however, PBPMSP with machine eligibility is seldom explored. This study investigates PBPMSP with machine eligibility in fabric dyeing and presents a novel shuffled frog-leaping algorithm with competition (CSFLA) to minimize makespan. In CSFLA, the initial population is produced in a partly heuristic, partly random way, and the competitive search of memeplexes comprises two phases: in the first phase, competition is carried out between every pair of memeplexes; in the second phase, iteration times are adjusted based on the competition, and search strategies are adapted based on the evolution quality of the memeplexes. An adaptive population shuffling strategy is also given. Computational experiments are conducted on 100 instances. The computational results show that the new strategies of CSFLA are effective and that CSFLA has promising advantages in solving the considered PBPMSP.
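For readers unfamiliar with the SFLA family, the memeplexes that CSFLA makes compete are built by the standard shuffled frog-leaping partition: sort the population by fitness and deal frogs out round-robin, so every memeplex receives a spread of solution qualities. A minimal sketch of that generic partition (SFLA background, not CSFLA's actual code):

```python
def partition_into_memeplexes(population, fitness, m):
    """Standard SFLA-style partition: rank frogs by fitness (here makespan,
    lower is better), then assign the frog of rank r to memeplex r mod m.
    Memeplex 0 thus gets the best frog, and all memeplexes get a mix of
    good and poor solutions to evolve independently before shuffling."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    memeplexes = [[] for _ in range(m)]
    for rank, idx in enumerate(order):
        memeplexes[rank % m].append(population[idx])
    return memeplexes
```

After each round of within-memeplex search, SFLA variants re-pool and re-partition the frogs ("shuffling"); the adaptive shuffling in CSFLA modifies when and how this re-pooling happens.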
Funding: Supported by the Beijing Natural Science Foundation (2222037) and the Special Educating Project of the Talent for Carbon Peak and Carbon Neutrality of the University of Chinese Academy of Sciences (Innovation of talent cultivation model for "dual carbon" in the chemical engineering industry, E3E56501A2).
Abstract: Dividing wall batch distillation with middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging: the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory functions, such as RNN, LSTM, and GRU, are used to handle the time-series data. The results from a case example show that the new scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
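As a concrete reminder of what a "memory-function" network computes at each time step, here is a single GRU cell step in NumPy, a sketch of the standard GRU equations (not the paper's soft-sensor; weight names are illustrative). The update gate z and reset gate r decide how much past state to carry forward, which is what lets such a soft-sensor track time-series process data.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (standard formulation, bias terms omitted):
    the new hidden state blends the old state h with a candidate state,
    gated by z (how much to update) and r (how much history to reuse)."""
    z = sigmoid(Wz @ x + Uz @ h)             # update gate
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand
```

Unrolling this step over a window of past tray temperatures and flows yields the time-series-to-composition mapping that the soft-sensor role requires; LSTM differs only in its gate structure.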
Abstract: Fabric dyeing is a critical production process in the clothing industry and heavily relies on batch processing machines (BPM). In this study, the parallel BPM scheduling problem with machine eligibility in fabric dyeing is considered, and an adaptive cooperated shuffled frog-leaping algorithm (ACSFLA) is proposed to minimize makespan and total tardiness simultaneously. ACSFLA determines the number of searches for each memeplex based on its quality, with more searches in high-quality memeplexes. An adaptive cooperated and diversified search mechanism is applied, dynamically adjusting search strategies for each memeplex based on their dominance relationships and quality. During the cooperated search, ACSFLA uses a segmented and dynamic targeted search approach, while in non-cooperated scenarios the search focuses on local search around superior solutions to improve efficiency. Furthermore, ACSFLA employs adaptive population division and partial population shuffling strategies: memeplexes with low evolutionary potential are selected for reconstruction in the next generation, while those with high evolutionary potential are retained to continue their evolution. To evaluate the performance of ACSFLA, comparative experiments were conducted using ACSFLA, SFLA, ASFLA, MOABC, and NSGA-CC on 90 instances. The computational results reveal that ACSFLA outperforms the other algorithms in 78 of the 90 test cases, highlighting its advantages in solving the parallel BPM scheduling problem with machine eligibility.
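The dominance relationships used above to compare memeplexes in the two-objective setting are standard Pareto dominance over (makespan, total tardiness). A minimal sketch of the textbook definition (not ACSFLA's implementation):

```python
def dominates(a, b):
    """Pareto dominance for minimization: solution a dominates b if a is
    no worse than b in every objective and strictly better in at least one.
    Each solution is an objective tuple, e.g. (makespan, total_tardiness)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

Multi-objective frog-leaping variants typically use this relation both to rank solutions within a memeplex and to decide which memeplexes count as high quality; solutions that no other solution dominates form the Pareto front reported to the decision maker.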
Funding: Supported by the National Key R&D Program of China (No. 2019YFC1906600), the National Natural Science Foundation of China (No. 52200049), the China Postdoctoral Science Foundation (No. 2022TQ0089), the Heilongjiang Province Postdoctoral Science Foundation (No. LBH-Z22181), the State Key Laboratory of Urban Water Resource and Environment (Harbin Institute of Technology) (No. 2023DX06), and the Fundamental Research Funds for the Central Universities.
Abstract: The initial step in the resource utilization of Chinese medicine residues (CMRs) involves dehydration pretreatment, which produces high concentrations of organic wastewater and leads to environmental pollution. To address the issue of anaerobic systems failing due to acidification under shock loading, a combined process of a microaerobic expanded granular sludge bed (EGSB) and a moving bed sequencing batch reactor (MBSBR) was proposed in this study. Microaeration facilitated hydrolysis, improved the removal of nitrogen and phosphorus pollutants, maintained a low concentration of volatile fatty acids (VFAs), and enhanced system stability. In addition, microaeration promoted microbial richness and diversity, enriching three phyla associated with hydrolytic acidification: Bacteroidota, Synergistota, and Firmicutes. Furthermore, the aeration intensity in the MBSBR was optimized. Elevated levels of dissolved oxygen (DO) impacted biofilm structure, suppressed denitrifying bacteria activity, led to nitrate accumulation, and hindered simultaneous nitrification and denitrification (SND). Maintaining a DO concentration of 2 mg/L enhanced the removal of nitrogen and phosphorus while conserving energy. The combined process achieved removal efficiencies of 98.25%, 90.49%, and 98.55% for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP), respectively. The typical pollutants liquiritin (LQ) and glycyrrhizic acid (GA) were completely degraded. This study presents an innovative approach to the treatment of high-concentration organic wastewater and provides a reliable solution for pollution control in the resource utilization of CMRs.
Funding: Supported by the Shanxi Province Science Foundation for Youths (20210302124348 and 202103021223099), the Basic Research Project of the Shanxi-Zheda Institute of Advanced Materials and Chemical Engineering (2021SX-AT004), and the National Natural Science Foundation of China (51778397).
Abstract: Simultaneous nitrification and denitrification (SND) is considered an attractive alternative to traditional biological nitrogen removal technology. Knowing the effects of heavy metals on the SND process is essential for engineering. In this study, the responses of SND performance to Zn(Ⅱ) exposure were investigated in a biofilm reactor. The results indicated that Zn(Ⅱ) at low concentrations (≤2 mg·L^(-1)) had negligible effects on the removal of nitrogen and COD in the SND process compared to operation without Zn(Ⅱ), while the removal of ammonium and COD was strongly inhibited as the Zn(Ⅱ) concentration increased to 5 or 10 mg·L^(-1). Large amounts of extracellular polymeric substances (EPS), especially protein (PN), were secreted to protect microorganisms from increasing Zn(Ⅱ) damage. High-throughput sequencing analysis indicated that Zn(Ⅱ) exposure could significantly reduce microbial diversity and change the structure of the microbial community. RDA analysis further confirmed that the Azoarcus-Thauera cluster was the dominant genus in response to low Zn(Ⅱ) exposure from 1 to 2 mg·L^(-1), while the genera Klebsiella and Enterobacter showed adaptability to elevated Zn(Ⅱ). According to PICRUSt, the abundance of key genes encoding ammonia monooxygenase (EC 1.14.99.39) was markedly reduced after exposure to Zn(Ⅱ), suggesting that the influence of Zn(Ⅱ) on nitrification was greater than that on denitrification, leading to a decrease in the ammonium removal of the SND system. This study provides a theoretical foundation for understanding the influence of Zn(Ⅱ) on the SND process in a biofilm system, which should be a source of great concern.
Funding: Supported by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project number RSP2025R498.
Abstract: The exponential growth of the Internet of Things (IoT) has revolutionized various domains such as healthcare, smart cities, and agriculture, generating vast volumes of data that require secure processing and storage in cloud environments. However, reliance on cloud infrastructure raises critical security challenges, particularly regarding data integrity. While existing cryptographic methods provide robust integrity verification, they impose significant computational and energy overheads on resource-constrained IoT devices, limiting their applicability in large-scale, real-time scenarios. To address these challenges, we propose the Cognitive-Based Integrity Verification Model (C-BIVM), which leverages Belief-Desire-Intention (BDI) cognitive intelligence and algebraic signatures to enable lightweight, efficient, and scalable data integrity verification. The model incorporates batch auditing, reducing resource consumption in large-scale IoT environments by approximately 35% while achieving an accuracy of over 99.2% in detecting data corruption. C-BIVM dynamically adapts integrity checks based on real-time conditions, optimizing resource utilization by minimizing redundant operations by more than 30%. Furthermore, blind verification techniques safeguard sensitive IoT data, ensuring privacy compliance by preventing unauthorized access during integrity checks. Extensive experimental evaluations demonstrate that C-BIVM reduces the computation time of integrity checks by up to 40% compared to traditional bilinear pairing-based methods, making it particularly suitable for IoT-driven applications in smart cities, healthcare, and beyond. These results underscore the effectiveness of C-BIVM in delivering a secure, scalable, and resource-efficient solution tailored to the evolving needs of IoT ecosystems.
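The algebraic signatures mentioned above are short field sums whose key property is linearity, which is what makes them cheaper than pairing-based tags for constrained devices: the signature of a combination of blocks can be computed from the signatures alone. A toy sketch over a prime field (our illustration, not C-BIVM's construction; real algebraic-signature schemes typically work in GF(2^w), and the constants here are assumptions):

```python
# Toy algebraic signature sig(d) = sum_i d_i * ALPHA^i mod P.
# Its linearity, sig(b1) + sig(b2) = sig(b1 + b2), lets a verifier check an
# aggregated proof over many blocks without retrieving each block.
P = 2**31 - 1   # toy prime field modulus (assumption)
ALPHA = 7       # toy evaluation point (assumption)

def alg_sig(block):
    """Signature of a block of field elements: a Horner-style weighted sum."""
    s, a = 0, 1
    for d in block:
        s = (s + d * a) % P
        a = (a * ALPHA) % P
    return s
```

Because the map is linear, a server can return the elementwise combination of challenged blocks and the device only compares two short field elements, which is where the reported reduction in per-check computation comes from.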
Funding: Project (51174232) supported by the National Natural Science Foundation of China.
Abstract: Based on the difference in the tendency to polymerize between tungsten and molybdenum, a new separation method using D309 resin was proposed. Batch tests indicate that the optimum pH value and contact time for the separation are 7.0 and 4 h, respectively, and that the maximum separation factor of W and Mo is 9.29. The experimental results show that the isothermal adsorption of tungsten and molybdenum follows the Langmuir and Freundlich models, respectively, and that the adsorption kinetics for tungsten is controlled by intra-particle diffusion. With a solution containing 70 g/L WO3 and 28.97 g/L Mo, an effluent with a mass ratio of Mo to WO3 of 76 and an eluate with a mass ratio of WO3 to Mo of 53.33 were obtained in a column test.
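For reference, the two isotherm models the batch data were fitted to are the Langmuir model, q = q_max·K·C/(1 + K·C), which saturates at a monolayer capacity q_max, and the Freundlich model, q = K_F·C^(1/n), an empirical power law with no plateau. A small sketch with illustrative parameters (the paper's fitted constants are not reproduced here):

```python
def langmuir(c, q_max, k):
    """Langmuir isotherm: adsorbed amount q rises with equilibrium
    concentration c and levels off at the monolayer capacity q_max."""
    return q_max * k * c / (1.0 + k * c)

def freundlich(c, k_f, n):
    """Freundlich isotherm: empirical power law q = k_f * c^(1/n),
    which keeps increasing with c instead of saturating."""
    return k_f * c ** (1.0 / n)
```

The qualitative difference matters for the separation: a saturating (Langmuir-type) uptake of tungstate polymers on the resin versus a non-saturating (Freundlich-type) uptake of molybdate is consistent with the large W/Mo separation factors reported above.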