Membrane distillation (MD) is an advanced membrane separation process that employs hydrophobic microporous membranes to separate non-volatile solutes from the feed solution, driven by vapor pressure gradients generated by a thermal difference. This technology offers strong desalination capabilities and efficiently harnesses low-grade thermal energy sources, including geothermal and waste heat, making it a cost-effective solution to freshwater scarcity. Nevertheless, hydrophobic membranes are prone to contamination by surfactants, inorganic salts, and other substances in feed solutions. To address this, low-surface-energy composite nano-inorganic materials composed of carbon nanotubes and silica were synthesized and modified via organosilicon chemistry. A superhydrophobic surface exhibiting a water contact angle of 157.96° was successfully fabricated from these nanomaterials on a poly(vinylidene fluoride) (PVDF) membrane with micro-nano structures via a one-step spray-coating method. Compared with the unmodified PVDF membrane, the superhydrophobic membrane demonstrated superior resistance to common scaling agents such as CaCl_(2), Mg(OH)_(2), CaCO_(3), and CaSO_(4), while maintaining a stable permeate flux (13.4 kg·m^(-2)·h^(-1)) during MD tests. Additionally, the modified membrane exhibited enhanced wetting resistance when treating feed solutions containing sodium dodecyl sulfate (SDS), significantly extending its operational lifespan. Owing to this outstanding performance, the superhydrophobic membrane is expected to promote the practical application of MD technology in the treatment of complex wastewater and efficient seawater desalination.
Magnesium (Mg) alloys are widely used lightweight structural materials for automobiles and help reduce carbon emissions. However, their use increases the production of Mg alloy scrap, which is recycled at a much lower rate than aluminum, and its greater complexity poses challenges to existing recycling processes. Although vacuum distillation can be used to recycle Mg alloy scrap, the process requires optimization to maximize metal recirculation, yet no thermodynamic analysis of it has been reported. In this study, the feasibility and controllability of separating inclusions and 23 metal impurities were evaluated, and their distribution and removal limits were quantified. Thermodynamic analyses and experimental results showed that inclusions and impurity metals with separation coefficient lgβ_(i) ≤ -5, including Cu, Fe, Co, and Ni at below 0.001 ppm, could be removed from the matrix. All Zn entered the recycled Mg, while impurities with -5 < lgβ_(i) < -1, such as Li, Ca, and Mn, severely affected the purity of the recycled Mg during the later stage of distillation. Therefore, an optimization strategy for vacuum distillation recycling is proposed: lower temperatures and higher system pressures for Zn separation in the early stage, and either early termination of the recovery process or a continuous supply of raw melt in the later stage to prevent contamination during recycling. The alloying elements Al and Zn in Mg alloy scrap can be further recovered and purified by vacuum distillation when economically feasible, to maximize the recycling of metal resources.
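The separability criterion above can be sketched numerically. Assuming the usual definition of the separation coefficient in vacuum distillation, β_i = γ_i·p_i*/p_Mg* (the activity coefficients and vapor pressures below are illustrative placeholders, not values from the paper):

```python
import math

def lg_beta(gamma_i: float, p_i: float, p_mg: float) -> float:
    """lg(beta_i) = lg(gamma_i * p_i / p_Mg).

    gamma_i: activity coefficient of impurity i in the Mg melt.
    p_i, p_mg: saturated vapor pressures (Pa) of pure i and pure Mg
    at the distillation temperature. lg(beta_i) < 0 means the impurity
    is less volatile than Mg and stays in the residue as Mg distills.
    """
    return math.log10(gamma_i * p_i / p_mg)

# Hypothetical values around 900 K: an Fe-like impurity is far less
# volatile than Mg; a Zn-like impurity is more volatile and follows Mg.
print(lg_beta(gamma_i=1.0, p_i=1e-4, p_mg=1.6e3))   # deeply negative -> removable
print(lg_beta(gamma_i=1.0, p_i=1.1e4, p_mg=1.6e3))  # positive -> enters the distillate
```

With lg β_i ≤ -5, the impurity partitions overwhelmingly into the residue, matching the removability threshold stated in the abstract.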
High-purity indium finds extensive application in the aerospace, electronics, medical, energy, and national defense sectors. Its purity and impurity contents significantly influence its performance in these applications. Here, high-purity indium was prepared by combining zone refining with vacuum distillation. Results show that the average removal efficiency of the impurity Sb approaches 95%, that of Sn and Bi exceeds 95%, and that of Si, Fe, Ni, and Pb exceeds 85%. Ultimately, the Sn and Sb contents are reduced to 2.0 and 4.1 μg/kg, respectively, and most other impurities, including Fe, Ni, Pb, and Bi, are reduced to levels below the instrumental detection limit. The average impurity removal efficiency is 90.9%, and the indium purity reaches 7N9.
Nervonic acid (NA) is a long-chain monounsaturated fatty acid with significant potential for neural fiber repair. In this study, a mixed fatty acid methyl ester was synthesized as the raw material through saponification of Acer truncatum Bunge seed oil. Based on differences in the boiling points and relative volatilities of the components, a four-stage vacuum batch distillation process was employed to enrich the nervonic acid methyl ester (NAME). The effects of distillation process parameters on enrichment efficiency were investigated, including distillation temperature, operating pressure, and reflux ratio. Under optimal conditions, the purity of NAME reached 91.20% with a corresponding yield of 48.91%. To further increase the purity, a low-temperature crystallization step was adopted, raising the final purity of NAME to 97.56%. The four-stage batch distillation was simulated using Aspen Plus software, and a continuous distillation process was further simulated to establish a theoretical basis for future industrial-scale production. The experimental and simulation results demonstrate that the integrated vacuum distillation and low-temperature crystallization process exhibits remarkable separation performance, providing robust guidance for the production of high-purity NA.
Acetone-butanol-ethanol (ABE) fermentation is a primary strategy for producing bio-based n-butanol from abundant renewable biomass. In the typical ABE production chain, distillation is an essential unit for high-purity ABE production but has long been criticized as energy-inefficient owing to the extremely low solvent concentrations received from the upstream fermentation system. Over the past decades, efforts have been dedicated to developing eco-efficient ABE distillation processes aimed at reducing both energy costs and capital investment. This review provides a comprehensive overview of ABE distillation systems, from the physico-chemical properties of the feed and its thermodynamics to process construction and applications. Recent trends in distillation sequence construction that fit the rapidly developing upstream in situ product recovery (ISPR) systems are emphasized. Furthermore, toward a more efficient ABE distillation system, the review broadly surveys intensification strategies for ABE distillation. Along with a systematic introduction of key examples, future directions for ABE distillation techniques are discussed, toward sustainable, low-carbon-emission biorefineries.
Membrane distillation (MD) has gained extensive attention for treating highly saline wastewater. However, membrane scaling during the MD process has hindered the rapid development of this technology. Current approaches to mitigating scaling in membrane distillation focus primarily on achieving enhanced hydrophobicity, and even superhydrophobicity, by using fluorinated fibrous membranes or introducing perfluorosilane modification. Considering the environmental hazards posed by fluorinated compounds, it is highly desirable to develop non-fluorinated membranes with enhanced anti-scaling properties for effective membrane distillation. In this study, we present a non-fluorinated liquid-like MD membrane with exceptional anti-scaling performance. This membrane was facilely fabricated by grafting linear polydimethylsiloxane (LPDMS) onto a hydrophilic polyether sulfone (PES) membrane pre-coated with intermediate layers of polydopamine and silica (denoted LPDMS-PES). Remarkably, in continuous MD tests, LPDMS-PES exhibited drastically improved scaling resistance compared with its perfluorinated counterpart, a 1H,1H,2H,2H-perfluorooctyltrichlorosilane-modified PES membrane (PFOS-PES), in both heterogeneous nucleation-dominated and crystal deposition-dominated scaling processes, despite the latter having a smaller surface energy. LPDMS-PES reduced crystal accumulation by approximately 85% for NaCl and 73% for CaSO_(4) in the heterogeneous nucleation-dominated scaling process compared with PFOS-PES. Additionally, in the crystal deposition-dominated scaling process, LPDMS-PES exhibited a reduction of about 70% in scale accumulation. These results explicitly evidence the great potential of the liquid-like membrane to minimize scaling in membrane distillation by inhibiting both scale nucleation and adhesion onto the membrane. We believe the findings of this study have important implications for the design of high-performance MD membranes, particularly in the quest for environmentally sustainable alternatives to perfluorinated materials.
The operational state of distillation columns significantly impacts product quality and production efficiency. However, because of the complex operation and diverse influencing factors, ensuring the safe and efficient operation of distillation columns is paramount. This research combines passive acoustic monitoring with artificial intelligence techniques and proposes a technology based on a residual network (ResNet) that transforms the acoustic signals emitted by three distillation columns under different operating states. The acoustic signals, initially in one-dimensional waveform format, were converted into a two-dimensional Mel-frequency cepstral coefficient (MFCC) spectrogram database using the fast Fourier transform. This database was then used to train a ResNet to identify the operational states of the distillation columns. Through this approach, various faults, including flooding, entrainment, and dry tray, were diagnosed with an accuracy of 98.91%. Moreover, an intermediate transitional state between normal operation and fault was identified and accurately recognized by the proposed method; on transitional-state acoustic signals, the ResNet achieved an accuracy of 97.85%, enabling early warnings before faults occur and enhancing the safety of chemical production processes. The approach presents a powerful tool for the monitoring and diagnosis of chemical equipment, particularly distillation columns, ensuring their safety and efficiency.
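The waveform-to-spectrogram step can be sketched from first principles. This minimal NumPy version (frame sizes and filterbank parameters are illustrative, not the paper's) frames the signal, takes the FFT power spectrum, applies a mel filterbank, and finishes with a DCT to yield MFCC frames:

```python
import numpy as np

def mfcc_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Convert a 1-D waveform into a 2-D MFCC 'image' (frames x coefficients)."""
    # 1. Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i*hop : i*hop + n_fft] * window for i in range(n_frames)])
    # 2. Power spectrum via FFT.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # 3. Triangular mel filterbank.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fb.T + 1e-10)
    # 4. DCT-II along the mel axis keeps the first n_mfcc cepstral coefficients.
    k = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * k + 1)) / (2 * n_mels))
    return log_mel @ dct.T

# A 1-second synthetic 440 Hz tone stands in for real column acoustics.
t = np.linspace(0, 1, 16000, endpoint=False)
feats = mfcc_spectrogram(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (61, 13) -- ready to stack into a 2-D input for a ResNet
```

Stacking such frames over time produces the two-dimensional input the abstract describes feeding to the ResNet classifier.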
The production of high-purity propylene glycol monomethyl ether acetate (PMA) through the transesterification of propylene glycol monomethyl ether (PM) and methyl acetate (MeOAc) is traditionally catalyzed by sodium methoxide. However, the practical application of this method is significantly hindered by the inherent limitations of sodium methoxide, such as its high sensitivity to moisture and propensity for solid precipitation, which impede its effective use in continuous processes. This work proposes a continuous catalytic distillation (CD) process utilizing Amberlyst 15 cation exchange resin as the catalyst. A comprehensive series of reaction kinetic and CD experiments was conducted to evaluate the performance of the proposed process. The results demonstrate that under the optimal operating conditions, namely an ester-to-ether molar ratio of 6:1, a reflux ratio of 5:1, a total feed rate of 0.92 g·min^(-1), and an evaporation rate of 266.47 m^(3)·m^(-2)·h^(-1), the conversion of PM reaches 99.95% and the PMA yield is 97.31%. Based on these findings, a process flowsheet for a continuous CD process tailored to the production of electronic-grade PMA is presented. This design incorporates light- and heavy-ends removal steps to ensure PMA with a purity of 99.99%. Additionally, the process utilizes pressure-swing distillation to recover MeOAc, thereby enhancing the overall efficiency and sustainability of the production process. The proposed continuous CD process offers a highly efficient, cost-effective, and environmentally sustainable solution for the production of electronic-grade PMA.
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization. Although Bengali is the sixth most spoken language globally, NLP research in this area remains limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, achieving improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, for mBERT, a Distil+Prune+Quant combination achieved a 3.87× speedup and a 4× compression ratio, reducing parameters from 178 M to 46 M and memory size from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making such models suitable for real-world applications in resource-limited environments.
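The KD component can be illustrated with the standard Hinton-style soft-target objective. This is a generic sketch of temperature-scaled distillation (the temperature, mixing weight, and toy logits below are illustrative, not the paper's settings):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """KD loss: alpha * soft-target KL divergence + (1 - alpha) * hard cross-entropy.

    The T**2 factor keeps soft-target gradients on the same scale as the
    hard-label term when T > 1.
    """
    p_t = softmax(teacher_logits, T)   # softened teacher distribution
    p_s = softmax(student_logits, T)   # softened student distribution
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * hard))

# Toy batch: 2 examples, 3 emotion classes.
teacher = np.array([[4.0, 1.0, 0.0], [0.5, 3.0, 0.2]])
student = np.array([[3.5, 1.2, 0.1], [0.4, 2.8, 0.3]])
loss = distillation_loss(student, teacher, labels=np.array([0, 1]))
print(round(loss, 4))
```

The student is trained to minimize this loss, so it tracks the teacher's full output distribution rather than only the hard labels.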
Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues related to cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on both the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into "super nodes", thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to match optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.
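The super-node idea can be sketched in a few lines. This toy version (not the paper's algorithm, which also handles features and gradients) treats nodes with exactly equal neighbor sets as structurally equivalent and collapses each group to one representative:

```python
from collections import defaultdict

def merge_symmetric_nodes(edges):
    """Merge structurally equivalent nodes (identical neighbor sets) into super nodes.

    Nodes whose neighborhoods are exactly equal are interchangeable, so one
    representative can stand in for the whole group when reducing the graph.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Group nodes by their neighbor set (excluding themselves).
    groups = defaultdict(list)
    for node, nbrs in adj.items():
        groups[frozenset(nbrs - {node})].append(node)
    # Map every node to its group's representative "super node".
    rep = {n: min(members) for members in groups.values() for n in members}
    reduced = {(rep[u], rep[v]) for u, v in edges if rep[u] != rep[v]}
    return rep, reduced

# A star graph: leaves 1..4 all share the neighbor set {0}, so they collapse.
rep, reduced = merge_symmetric_nodes([(0, 1), (0, 2), (0, 3), (0, 4)])
print(rep)      # leaves all map to super node 1
print(reduced)  # {(0, 1)} -- a single representative edge remains
```

In the full method, the merged graph's node features are then optimized by gradient descent to match the original features, as described above.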
Defect detection based on computer vision is a critical component in ensuring the quality of industrial products. However, existing detection methods encounter several challenges in practical applications, including the scarcity of labeled samples, the limited adaptability of pre-trained models, and data heterogeneity in distributed environments. To address these issues, this research proposes an unsupervised defect detection method, FLAME (Federated Learning with Adaptive Multi-Model Embeddings). The method comprises three stages: (1) Feature learning stage: this work proposes FADE (Feature-Adaptive Domain-Specific Embeddings), a framework that employs Gaussian noise injection to simulate defective patterns and implements a feature discriminator for defect detection, thereby enhancing the pre-trained model's representation of industrial imagery. (2) Knowledge distillation co-training stage: a multi-model feature knowledge distillation mechanism is introduced. Through feature-level knowledge transfer between the global model and historical local models, the current local model is guided to learn better feature representations from the global model. This approach prevents local models from converging to local optima and mitigates the performance degradation caused by data heterogeneity. (3) Model parameter aggregation stage: participating clients utilize weighted-averaging aggregation to synthesize an updated global model, facilitating efficient knowledge consolidation. Experimental results demonstrate that FADE improves the average image-level area under the receiver operating characteristic curve (AUROC) by 7.34% compared with methods directly utilizing pre-trained models. In federated learning environments, FLAME's multi-model feature knowledge distillation mechanism outperforms the classic FedAvg algorithm by 2.34% in average image-level AUROC, while exhibiting superior convergence properties.
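The weighted-averaging aggregation in stage (3) follows the classic FedAvg rule: each client's parameters are weighted by its share of the total training samples. A minimal sketch (client names and sizes are hypothetical):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted-averaging aggregation over client parameter dictionaries.

    Each parameter tensor is averaged with weight n_k / sum(n), where n_k
    is the number of training samples held by client k.
    """
    total = sum(client_sizes)
    agg = {}
    for name in client_weights[0]:
        agg[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return agg

# Two hypothetical clients with a single parameter tensor each.
clients = [{"w": np.array([1.0, 2.0])}, {"w": np.array([3.0, 4.0])}]
global_model = fedavg(clients, client_sizes=[100, 300])
print(global_model["w"])  # [2.5 3.5]
```

The second client holds 3× the data, so the aggregated parameters sit three-quarters of the way toward its values.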
The Heterogeneous Capacitated Vehicle Routing Problem (HCVRP), which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost, poses an NP-hard challenge in combinatorial optimization. Recently, reinforcement learning approaches such as 2D Array Pointer Networks (2D-Ptr) have demonstrated remarkable speed in decision-making by modeling multiple agents' concurrent choices as a sequence of consecutive actions. However, these learning-based models often struggle with generalization, meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining. Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model, we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation (MTKD). We initially train 12 unique 2D-Ptr models under various settings to serve as teachers. We then randomly sample a teacher model and a batch of problem instances, focusing on those instances where the chosen teacher performed best. The teacher solves these instances, generating high-reward action sequences that guide knowledge transfer to the student model. We conduct rigorous evaluations across four distinct datasets, each comprising four HCVRP instances of varying scales. Our empirical findings underscore the proposed method's superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD approaches often focused solely on distilling from the logits layer or the intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between foreground and background also affects model performance. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding-box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model. At the same time, we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student model. For output-layer distillation, we divide the traditional distillation targets into target-class and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation ability, making its feature distribution closer to that of the teacher model and thus demonstrating superior localization and classification capabilities in object detection tasks. Experimental results demonstrate that our proposed methodology outperforms conventional distillation techniques in terms of object detection performance.
To improve the reconstruction accuracy of magnetic resonance imaging (MRI), an accurate natural-image compressed sensing (CS) reconstruction network is proposed that combines the advantages of model-based and deep learning-based CS-MRI methods. In theory, it is possible to enhance the geometric texture details lost in linear reconstruction. First, the optimization problem is decomposed into two sub-problems: linear approximation and geometric compensation. The linear approximation problem is handled by a data-consistency module. Since this processing loses texture details, a neural network layer that explicitly combines image and frequency feature representations is proposed, named the butterfly dilated geometric distillation network. The network introduces the idea of the butterfly operation, skillfully integrating features of the image and frequency domains and avoiding the loss of texture detail that occurs when features are extracted in a single domain. Finally, a channel feature fusion module is designed by combining a channel attention mechanism with dilated convolution. The channel attention makes the final output feature map focus on the more important parts, improving feature representation ability, while the dilated convolution enlarges the receptive field, yielding denser image feature data. Experimental results show that the peak signal-to-noise ratio of the network is 5.43 dB, 5.24 dB, and 3.89 dB higher than those of the ISTA-Net+, FISTA, and DGDN networks, respectively, on the brain dataset with a Cartesian sampling mask at a CS ratio of 10%.
Within the realm of multimodal neural machine translation (MNMT), seamlessly integrating textual data with corresponding image data to enhance translation accuracy has become a pressing issue. Discrepancies between textual content and associated images can introduce visual noise, potentially diverting the model's focus away from the textual data and thus degrading the translation's overall effectiveness. To solve this visual noise problem, we propose an innovative KDNR-MNMT model. The model combines knowledge distillation with an anti-noise interaction mechanism, making full use of synthesized graphic knowledge and local image interaction masks to extract more effective visual features. Meanwhile, the KDNR-MNMT model adopts a multimodal adaptive gating fusion strategy to enhance the constructive interaction of different modal information. By integrating a perceptual attention mechanism that uses cross-modal interaction cues within the Transformer framework, our approach notably enhances the quality of machine translation outputs. To confirm the model's performance, we carried out extensive testing and assessment on the widely used Multi30K dataset. The experiments show substantial improvements in the model's BLEU and METEOR scores, with respective increases of 0.78 and 0.99 points over prevailing methods. This result affirms the potency of our strategy for mitigating visual interference and marks a meaningful advance in the multimodal NMT domain.
Filter pruning effectively compresses a neural network by reducing both its parameters and its computational cost. Existing pruning methods typically rely on pre-designed pruning criteria to measure filter importance and remove filters deemed unimportant. However, different layers of a neural network exhibit different filter distributions, making it inappropriate to apply the same pruning criterion to all layers. Some approaches apply different criteria from a set of pre-defined pruning rules to different layers, but the limited rule space makes it difficult to cover all layers, and manually designing criteria for every layer is costly and hard to generalize to other networks. To solve this problem, we present a novel neural network pruning method based on a Criterion Learner and Attention Distillation (CLAD). Specifically, CLAD develops a differentiable criterion learner that is integrated into each layer of the network. The learner automatically learns an appropriate pruning criterion from the filter parameters of each layer, eliminating the need for manual design. Furthermore, the criterion learner is trained end-to-end by gradient optimization to achieve efficient pruning. In addition, attention distillation, which fully utilizes the knowledge of the unpruned network to guide the optimization of the learner and improve the pruned network's performance, is introduced into the learner optimization process. Experiments conducted on various datasets and networks demonstrate the effectiveness of the proposed method. Notably, CLAD reduces the FLOPs of ResNet-110 by about 53% on the CIFAR-10 dataset while simultaneously improving the network's accuracy by 0.05%. Moreover, it reduces the FLOPs of ResNet-50 by about 46% on the ImageNet-1K dataset while maintaining a top-1 accuracy of 75.45%.
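For contrast with CLAD's learned, per-layer criterion, the fixed hand-designed criterion it replaces can be sketched. This is the classic L1-norm baseline (a generic illustration, not CLAD itself):

```python
import numpy as np

def prune_filters_l1(conv_weight, keep_ratio=0.5):
    """Rank filters of one conv layer by L1 norm and keep the strongest ones.

    conv_weight: array of shape (out_channels, in_channels, kH, kW).
    This fixed criterion is applied identically to every layer -- exactly
    the limitation that a learned, layer-specific criterion addresses.
    """
    scores = np.abs(conv_weight).sum(axis=(1, 2, 3))   # one score per filter
    n_keep = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # indices of kept filters
    return conv_weight[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))          # toy layer: 8 filters
pruned, kept = prune_filters_l1(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
print(kept)
```

Because the score is a fixed function of the weights, it cannot adapt to layers whose filter distributions differ, which motivates learning the criterion instead.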
Under low-illumination conditions, the quality of image signals deteriorates significantly, typically characterized by a peak signal-to-noise ratio (PSNR) below 10 dB, which severely limits the usability of the images. Supervised methods, which use paired high/low-light images as training sets, can raise the PSNR to around 20 dB, significantly improving image quality; however, such data is challenging to obtain. In recent years, unsupervised low-light image enhancement (LIE) methods based on the Retinex framework have been proposed, but they generally lag behind supervised methods by 5-10 dB in performance. In this paper, we introduce the Denoising-Distilled Retinex (DDR) method, an unsupervised approach that integrates denoising priors into a Retinex-based training framework. By explicitly incorporating denoising, the DDR method effectively addresses the challenges of noise and artifacts in low-light images, thereby enhancing the performance of the Retinex framework. The model achieved a PSNR of 19.82 dB on the LOL dataset, which is comparable to the performance of supervised methods. Furthermore, by applying knowledge distillation, the DDR method is optimized for real-time processing of low-light images, achieving a processing speed of 199.7 fps without additional computational cost. Our rigorous testing on public datasets further substantiates the DDR method's state-of-the-art performance in both image quality and processing speed. Nevertheless, there is still room for improvement in robustness across color spaces and under highly resource-constrained conditions; future research will focus on enhancing the model's generalizability and adaptability to address these challenges.
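The Retinex framework underlying such methods factors an image I into reflectance R and illumination L with I = R ⊙ L. A minimal non-learned sketch, using the common max-channel prior as the illumination estimate (real Retinex-based LIE networks learn this decomposition instead):

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Split an RGB image into illumination L and reflectance R with I = R * L."""
    L = image.max(axis=-1, keepdims=True)   # (H, W, 1) max-channel illumination
    R = image / (L + eps)                   # reflectance, roughly in [0, 1]
    return R, L

def enhance(image, gamma=0.45):
    """Brighten by gamma-correcting the illumination only, then recompose."""
    R, L = retinex_decompose(image)
    return np.clip(R * (L ** gamma), 0.0, 1.0)

# A synthetic dark image: pixel values in [0, 0.2].
rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.2, size=(4, 4, 3))
bright = enhance(dark)
print(dark.mean() < bright.mean())  # True -- illumination is lifted
```

Because reflectance is left untouched, noise hiding in the dark illumination is amplified along with the signal, which is precisely the problem that integrating denoising priors into the framework targets.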
Due to the necessity for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitation of the problem lies in the training per...Due to the necessity for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitation of the problem lies in the training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when deploying the model. By utilizing the knowledge distillation techniques, the article strives to overcome the above challenges with the inheritance of the advantages of both the teacher model and the student model. More precisely, the ResNet152-PSP-Net model’s characteristics are utilized to train the ResNet18-PSP-Net model. Pyramid pooling blocks are utilized to decode multi-scale feature maps, creating a complete semantic map inference. The student model not only preserves the strong segmentation performance from the teacher model but also improves the inference speed of the prediction results. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model also shows remarkable improvement in processing speed when compared with light-weight models such as MobileNetV2 and EfficientNet based on latency and throughput parameters. The proposed KD-SegNet model obtains an accuracy of 96.3% and a mIoU (mean Intersection over Union) of 77%, outperforming the performance of existing models by more than 15% on the same training dataset. The suggested method has an average training time that is only 0.51 times less than same field models, while still achieving comparable segmentation performance. Hence, the semantic segmentation frames are collected, forming the motion trajectory for the system in the environment. 
Overall, this architecture shows great promise for the development of knowledge-based systems for MR’s navigation.
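The teacher-student transfer described in this entry rests on soft-target distillation. A minimal pure-Python sketch of the standard temperature-scaled distillation loss is given below; the logit values and class count are hypothetical, and this is the generic Hinton-style loss, not the paper's exact training objective:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened class probabilities,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)   # teacher soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Per-pixel class logits at one position of a semantic map (hypothetical):
teacher = [4.0, 1.0, 0.2]   # e.g. from the large teacher network
student = [3.5, 1.2, 0.4]   # e.g. from the compact student network
loss = distillation_loss(teacher, student)
```

The loss is zero when the student matches the teacher exactly and grows as the softened distributions diverge, which is what drives the student toward the teacher's behavior during training.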
Dividing wall batch distillation with middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging, since the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory, such as RNN, LSTM, and GRU, are used to handle the time-series data. The results from a case example show that the new control scheme can achieve good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
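The fuzzy-control element of such a scheme can be illustrated with a generic Mamdani-style controller: triangular membership functions over the temperature error fire simple rules, and a centroid step defuzzifies them into a heat-duty adjustment. The membership ranges and consequent values below are hypothetical, not the paper's tuned controller:

```python
def tri(x, a, b, c):
    """Triangular membership function that peaks at b and is zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_duty_adjustment(temp_error):
    """Map a temperature error (setpoint minus measurement, K) to a relative
    reboiler-duty change (%) via three rules and centroid defuzzification."""
    # Rule firing strengths for negative / near-zero / positive error:
    mu = {
        "decrease": tri(temp_error, -10.0, -5.0, 0.0),
        "hold":     tri(temp_error, -5.0, 0.0, 5.0),
        "increase": tri(temp_error, 0.0, 5.0, 10.0),
    }
    # Singleton rule consequents (relative duty change, %):
    out = {"decrease": -10.0, "hold": 0.0, "increase": 10.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 0.0
```

An error of zero yields no adjustment, a +5 K error yields the full +10% change, and intermediate errors interpolate smoothly, which is the property that lets fuzzy control absorb small prediction errors from the soft-sensor.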
The oversupply of diesel in China necessitates efficient separation of polycyclic aromatic hydrocarbons from fluidized catalytic cracking diesel for value-added utilization. However, purification is hindered by alkane and monocyclic aromatic interference. In this work, we propose a solvent-screening strategy for extractive distillation based on molecular polarity and interaction energy analysis. Quantum chemical calculations identified ethylene glycol (aromatic solubility) and N,N-dimethylformamide (alkane selectivity) as optimal solvents, with weak hydrogen bonds (e.g., O-H…π, C-H…π) governing aromatic interactions. Two process designs were developed: (1) solvent extraction followed by primary extractive distillation (purity >95.0% (mass)) and (2) direct two-stage extractive distillation (purity >92.0% (mass)). This work provides a flexible framework for polycyclic aromatic hydrocarbon separation tailored to market demands while elucidating solvent-solute interactions at the molecular level.
Funding: financially supported by the National Natural Science Foundation of China (No. 22308085), the Science and Technology Plan Project of Shijiazhuang (Nos. 241130547A and 241791337A), and the Natural Science Foundation of Hebei Province (No. B2020208083).
Abstract: Membrane distillation (MD) is an advanced membrane separation process that employs hydrophobic microporous membranes to separate non-volatile solutes from the feed solution, driven by vapor pressure gradients generated through a thermal difference. This technology offers strong desalination capabilities and efficiently harnesses low-grade thermal energy sources, including geothermal and waste heat, making it a cost-effective solution for freshwater scarcity. Nevertheless, hydrophobic membranes are prone to contamination by surfactants, inorganic salts, and other substances in feed solutions. To address this, low-surface-energy composite nano-inorganic materials composed of carbon nanotubes and silica were synthesized and modified via organosilicon chemistry. A superhydrophobic surface exhibiting a water contact angle of 157.96° was successfully fabricated from these nanomaterials on a poly(vinylidene fluoride) (PVDF) membrane surface with micro-nano structures via a one-step spray-coating method. Compared to the unmodified PVDF membrane, the superhydrophobic membrane demonstrated superior resistance to common scaling agents such as CaCl_(2), Mg(OH)_(2), CaCO_(3), and CaSO_(4), while maintaining a stable permeate flux (13.4 kg·m^(-2)·h^(-1)) during MD tests. Additionally, the modified membrane exhibited enhanced wetting resistance when treating feed solutions containing sodium dodecyl sulfate (SDS), significantly extending the operational lifespan of the membrane. Due to its outstanding performance, this superhydrophobic membrane is expected to promote the practical application of MD technology in the treatment of complex wastewater and efficient seawater desalination.
Abstract: Magnesium (Mg) alloys are widely used lightweight structural materials for automobiles and help reduce carbon emissions. However, their use increases the production of Mg alloy scrap, which is recycled at a much lower rate than aluminum, and its greater complexity poses challenges to existing recycling processes. Although vacuum distillation can be used to recycle Mg alloy scrap, this requires optimizing and maximizing metal recirculation, and there has been no thermodynamic analysis of this process. In this study, the feasibility and controllability of separating inclusions and 23 metal impurities were evaluated, and their distribution and removal limits were quantified. Thermodynamic analyses and experimental results showed that inclusions and impurity metals with a separation coefficient lgβ_(i) ≤ -5, including Cu, Fe, Co, and Ni below 0.001 ppm, could be removed from the matrix. All Zn entered the recycled Mg, while impurities with -5 < lgβ_(i) < -1, such as Li, Ca, and Mn, severely affected the purity of the recycled Mg during the later stage of distillation. Therefore, an optimization strategy for vacuum distillation recycling is proposed: lower temperatures and higher system pressures for Zn separation in the early stage, and, in the later stage, early termination of the recovery process or a continuous supply of raw melt to prevent contamination during recycling. The alloying elements Al and Zn in Mg alloy scrap can be further recovered and purified by vacuum distillation when economically feasible, to maximize the recycling of metal resources.
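The abstract's separation-coefficient thresholds amount to a simple classification rule for each impurity. The sketch below encodes those thresholds for illustration only; the lgβ values assigned to the elements are hypothetical placeholders, not the study's measured data:

```python
def classify_impurity(lg_beta):
    """Classify an impurity by its separation coefficient lgβ, following the
    thresholds quoted in the abstract (illustrative only):
    lgβ ≤ -5 removable; -5 < lgβ < -1 contaminates late-stage distillate."""
    if lg_beta <= -5:
        return "removable"              # e.g. Cu, Fe, Co, Ni
    if lg_beta < -1:
        return "late-stage risk"        # e.g. Li, Ca, Mn
    return "reports to recycled Mg"     # e.g. Zn

# Hypothetical lgβ values, used only to exercise the rule:
impurities = {"Cu": -8.2, "Li": -3.0, "Zn": 0.5}
labels = {el: classify_impurity(v) for el, v in impurities.items()}
```

Under this rule, the early-stage/late-stage operating strategy follows directly: elements in the middle band are the ones whose carry-over must be limited by terminating recovery early or continuously feeding raw melt.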
Funding: National Key Research and Development Program of China (2023YFC2907904); National Natural Science Foundation of China (52374364).
Abstract: High-purity indium finds extensive application in the aerospace, electronics, medical, energy, and national defense sectors. Its purity and impurity contents significantly influence its performance in these applications. High-purity indium was prepared by combining zone refining with vacuum distillation. Results show that the average removal efficiency of the impurity Sb can approach 95%, the removal efficiency of the impurities Sn and Bi can reach over 95%, and the removal efficiency of Si, Fe, Ni, and Pb can reach over 85%. Ultimately, the amounts of the Sn and Sb impurities are reduced to 2.0 and 4.1 μg/kg, respectively, and those of most impurities, including Fe, Ni, Pb, and Bi, are reduced to levels below the instrumental detection limit. The average impurity removal efficiency is 90.9%, and the indium purity reaches 7N9.
Funding: supported by the National Natural Science Foundation of China (22125802, 22108150, 22338001).
Abstract: Nervonic acid (NA) is a long-chain monounsaturated fatty acid with significant potential for neural fiber repair. In this study, a mixed fatty acid methyl ester was synthesized as the raw material through saponification of Acer truncatum Bunge seed oil. Based on the differences in boiling points and relative volatilities of the various components, a four-stage vacuum batch distillation process was employed to enrich the nervonic acid methyl ester (NAME). The effects of the distillation process parameters on enrichment efficiency were investigated, including distillation temperature, operating pressure, and reflux ratio. The purity of NAME reached 91.20% under optimal conditions, with a corresponding yield of 48.91%. To further increase the purity, a low-temperature crystallization process was adopted, yielding a final NAME purity of 97.56%. A simulation of the above four-stage batch distillation was conducted using Aspen Plus software, and a continuous distillation process was further simulated to establish a theoretical basis for future industrial-scale production. The results of the experiments and simulations demonstrate that the integrated process of vacuum distillation and low-temperature crystallization exhibits remarkable separation performance, providing robust guidance for the production of high-purity NA.
Funding: funded by the National Natural Science Foundation of China (22078018) and the Natural Science Foundation of Beijing (2222016).
Abstract: Acetone-butanol-ethanol (ABE) fermentation is a primary strategy for producing bio-based n-butanol from abundant renewable biomass. In the typical ABE production chain, distillation is an essential unit for high-purity ABE production, but it has long been criticized as energy-inefficient owing to the extremely low solvent concentration received from the upstream fermentation system. Over the past decades, efforts have been dedicated to developing eco-efficient ABE distillation processes aimed at reducing both energy costs and capital investments. In this review, a comprehensive overview of ABE distillation systems is provided, from the physico-chemical properties of the feed and thermodynamics to process constructions and applications. Recent trends in distillation sequence construction that fit the rapidly developed upstream in situ product recovery (ISPR) systems are emphasized. Furthermore, towards developing a more efficient ABE distillation system, the review takes a broad view of intensification strategies for ABE distillation. Along with a systematic introduction of the key examples, future directions for the development of ABE distillation techniques are discussed, towards sustainable and low-carbon-emission biorefineries.
Funding: supported by the National Natural Science Foundation of China (Nos. 22072185, 12072381), the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515110221), and the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (No. 23yxqntd002).
Abstract: Membrane distillation (MD) has gained extensive attention for treating highly saline wastewater. However, membrane scaling during the MD process has hindered the rapid development of this technology. Current approaches to mitigate scaling in membrane distillation focus primarily on achieving enhanced hydrophobicity, and even superhydrophobicity, by utilizing fluorinated fibrous membranes or introducing perfluorosilane modification. Considering the environmental hazards posed by fluorinated compounds, it is highly desirable to develop non-fluorinated membranes with enhanced anti-scaling properties for effective membrane distillation. In this study, we present a non-fluorinated liquid-like MD membrane with exceptional anti-scaling performance. This membrane was facilely fabricated by grafting linear polydimethylsiloxane (LPDMS) onto a hydrophilic polyether sulfone (PES) membrane pre-coated with intermediate layers of polydopamine and silica (denoted as LPDMS-PES). Remarkably, LPDMS-PES manifested drastically improved scaling resistance in continuous MD tests compared with its perfluorinated counterpart, i.e., a 1H,1H,2H,2H-perfluorooctyltrichlorosilane-modified PES membrane (PFOS-PES), in both heterogeneous nucleation-dominated and crystal deposition-dominated scaling processes, despite the latter having a smaller surface energy. LPDMS-PES demonstrated a reduction in crystal accumulation of approximately 85% for NaCl and 73% for CaSO_(4) in the heterogeneous nucleation-dominated scaling process compared to PFOS-PES. Additionally, in the crystal deposition-dominated scaling process, LPDMS-PES exhibited a reduction of about 70% in scale accumulation. These results explicitly evidence the great potential of the liquid-like membrane to minimize scaling in membrane distillation by inhibiting both scale nucleation and adhesion onto the membrane. We believe the findings of this study have important implications for the design of high-performance MD membranes, particularly in the quest for environmentally sustainable alternatives to perfluorinated materials.
Funding: the National Natural Science Foundation of China (22308079), the Natural Science Foundation of Hebei Province, China (B2022202008, B2023202025), and the Science and Technology Project of Hebei Education Department, China (BJK2022037).
Abstract: The operational state of distillation columns significantly impacts product quality and production efficiency. However, due to the complex operation and diverse influencing factors, ensuring the safe and efficient operation of distillation columns is paramount. This research combines passive acoustic monitoring with artificial intelligence techniques and proposes a technology based on a residual network (ResNet) that transforms the acoustic signals emitted by three distillation columns under different operating states. The acoustic signals, initially in one-dimensional waveform format, were converted into a database of two-dimensional Mel-frequency cepstral coefficient spectrograms using the fast Fourier transform. This database was then employed to train a ResNet to identify the operational states of the distillation columns. Through this approach, the operational states of the distillation columns were monitored, and various faults, including flooding, entrainment, and dry tray, were diagnosed with an accuracy of 98.91%. Moreover, an intermediate transitional state between normal operation and fault was identified and accurately recognized by the proposed method. Under the transitional state, the acoustic signals achieved an accuracy of 97.85% on the ResNet, which enables early warnings before faults occur, enhancing the safety of chemical production processes. The approach presents a powerful tool for the monitoring and diagnosis of chemical equipment, particularly distillation columns, ensuring their safety and efficiency.
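The waveform-to-spectrogram step rests on the fast Fourier transform the abstract names. A self-contained sketch of that first stage is shown below: a pure-Python radix-2 Cooley-Tukey FFT plus a framing helper that turns a waveform into per-frame magnitude spectra. Real pipelines would use an FFT library and add Mel filtering and the cepstral step; the frame length here is an arbitrary illustrative choice:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def frame_spectra(signal, frame_len=8):
    """Split a waveform into non-overlapping frames and return the magnitude
    spectrum of each frame -- the first step toward a spectrogram."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [[abs(v) for v in fft(f)] for f in frames]
```

Stacking the per-frame spectra over time gives the two-dimensional time-frequency image that a ResNet can then classify into operating states.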
Funding: supported by the National Natural Science Foundation of China (22378065, 22278077, and 22278076) and the Key Program of the Natural Science Foundation of Fujian Province of China (2022J02019).
Abstract: The production of high-purity propylene glycol monomethyl ether acetate (PMA) through the transesterification of propylene glycol monomethyl ether (PM) and methyl acetate (MeOAc) is traditionally catalyzed by sodium methoxide. However, the practical application of this method is significantly hindered by the inherent limitations of sodium methoxide, such as its high sensitivity to moisture and propensity for solid precipitation, which impede its effective use in continuous processes. This work proposes a continuous catalytic distillation (CD) process utilizing Amberlyst 15 cation exchange resin as the catalyst. A comprehensive series of reaction kinetic and CD experiments was conducted to evaluate the performance of the proposed process. The results demonstrate that under the optimal operating conditions, namely an ester-to-ether molar ratio of 6:1, a reflux ratio of 5:1, a total feed rate of 0.92 g·min^(-1), and an evaporation rate of 266.47 m^(3)·m^(-2)·h^(-1), the conversion of PM reaches 99.95% and the PMA yield is 97.31%. Based on these findings, a process flowsheet for a continuous CD process tailored for the production of electronic-grade PMA is presented. This design incorporates light- and heavy-ends removal steps to ensure the production of PMA with a purity of 99.99%. Additionally, the process utilizes pressure-swing distillation to recover MeOAc, thereby enhancing the overall efficiency and sustainability of the production process. The proposed continuous CD process offers a highly efficient, cost-effective, and environmentally sustainable solution for the production of electronic-grade PMA.
Abstract: The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced the parameters from 178 M to 46 M, while the memory size decreased from 711 to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
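Of the three compression techniques combined here, quantization is the simplest to illustrate in isolation. The sketch below shows generic affine (asymmetric) 8-bit quantization of a float weight list, which yields the 4× per-weight size reduction (float32 to int8) that contributes to compression ratios like the one reported; the weight values are hypothetical and this is not the paper's exact scheme:

```python
def quantize_int8(weights):
    """Affine 8-bit quantization: map [min, max] of the weights onto the
    integers 0..255 with a scale and zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

w = [-0.8, -0.1, 0.0, 0.4, 1.2]       # hypothetical weight values
q, s, zp = quantize_int8(w)
w_hat = dequantize(q, s, zp)          # each value recovered to within ~one step
```

Each weight now occupies one byte instead of four, at the cost of a reconstruction error bounded by roughly one quantization step.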
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62176217), the Program from Sichuan Provincial Science and Technology, China (Grant No. 2018RZ0081), and the Fundamental Research Funds of China West Normal University (Grant No. 17E063).
Abstract: Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues related to cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on both the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into "super nodes", thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to match optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.
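The symmetry step described here, merging nodes with equivalent neighborhood structures into super nodes, can be sketched directly: group nodes whose neighbor sets are identical. This is a simplified, structural-equivalence version of the idea, on a toy adjacency dictionary, not the paper's full algorithm:

```python
from collections import defaultdict

def merge_symmetric_nodes(adjacency):
    """Group nodes whose neighbor sets are identical (structurally
    equivalent); each group becomes one 'super node' in the reduced graph."""
    groups = defaultdict(list)
    for node, neighbors in adjacency.items():
        groups[frozenset(neighbors)].append(node)
    return [sorted(members) for members in groups.values()]

# A star graph: the three leaves all share the single neighbor 0,
# so they collapse into one super node.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
supernodes = merge_symmetric_nodes(adj)
```

On the star graph the reduction shrinks four nodes to two super nodes, which is exactly the kind of redundancy elimination that cuts parameter optimization cost during distillation.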
Funding: supported in part by the National Natural Science Foundation of China under Grants 32171909, 52205254, and 32301704, the Guangdong Basic and Applied Basic Research Foundation under Grants 2023A1515011255 and 2024A1515010199, the Scientific Research Projects of Universities in Guangdong Province under Grants 2024ZDZX1042 and 2024ZDZX3057, and the Ji-Hua Laboratory Open Project under Grant X220931UZ230.
Abstract: Defect detection based on computer vision is a critical component in ensuring the quality of industrial products. However, existing detection methods encounter several challenges in practical applications, including the scarcity of labeled samples, the limited adaptability of pre-trained models, and data heterogeneity in distributed environments. To address these issues, this research proposes an unsupervised defect detection method, FLAME (Federated Learning with Adaptive Multi-Model Embeddings). The method comprises three stages: (1) Feature learning stage: this work proposes FADE (Feature-Adaptive Domain-Specific Embeddings), a framework that employs Gaussian noise injection to simulate defective patterns and implements a feature discriminator for defect detection, thereby enhancing the pre-trained model's representation of industrial imagery. (2) Knowledge distillation co-training stage: a multi-model feature knowledge distillation mechanism is introduced. Through feature-level knowledge transfer between the global model and historical local models, the current local model is guided to learn better feature representations from the global model. This approach prevents local models from converging to local optima and mitigates the performance degradation caused by data heterogeneity. (3) Model parameter aggregation stage: participating clients use weighted-averaging aggregation to synthesize an updated global model, facilitating efficient knowledge consolidation. Experimental results demonstrate that FADE improves the average image-level Area Under the Receiver Operating Characteristic Curve (AUROC) by 7.34% compared with methods that directly use pre-trained models. In federated learning environments, FLAME's multi-model feature knowledge distillation mechanism outperforms the classic FedAvg algorithm by 2.34% in average image-level AUROC, while exhibiting superior convergence properties.
Funding: in part by the National Science Foundation of China under Grant No. 62276238, in part by the National Science Foundation for Distinguished Young Scholars of China under Grant No. 62325602, and in part by the Natural Science Foundation of Henan, China under Grant No. 232300421095.
Abstract: The Heterogeneous Capacitated Vehicle Routing Problem (HCVRP), which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost, poses an NP-hard challenge in combinatorial optimization. Recently, reinforcement learning approaches such as 2D Array Pointer Networks (2D-Ptr) have demonstrated remarkable speed in decision-making by modeling multiple agents' concurrent choices as a sequence of consecutive actions. However, these learning-based models often struggle with generalization, meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining. Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model, we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation (MTKD). We initially train 12 unique 2D-Ptr models under various settings to serve as teacher models. Subsequently, we randomly sample a teacher model and a batch of problem instances, focusing on those where the chosen teacher performed best. This teacher model then solves these instances, generating high-reward action sequences to guide knowledge transfer to the student model. We conduct rigorous evaluations across four distinct datasets, each comprising four HCVRP instances of varying scales. Our empirical findings underscore the proposed method's superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
Funding: funded by the National Natural Science Foundation of China (61603245).
Abstract: Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD work often focused solely on distilling from the logits layer or the intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between foreground and background also affects the performance of the model. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding-box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model. At the same time, we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student model. In terms of output-layer feature distillation, we divide the traditional distillation targets into target-class objects and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation ability, making its feature distribution closer to that of the teacher model and thus demonstrating superior localization and classification capabilities in object detection tasks. Experimental results demonstrate that our proposed methodology outperforms conventional distillation techniques in terms of object detection performance.
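The target / non-target split mentioned for the output layer can be sketched as a decomposition of a class distribution, in the spirit of decoupled knowledge distillation: the target-class probability is separated out, and the remaining classes are renormalized into their own distribution so the two parts can be distilled with separate weights. The logit values below are hypothetical:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    e = [math.exp(z - m) for z in logits]
    s = sum(e)
    return [v / s for v in e]

def split_target_nontarget(logits, target):
    """Split a class distribution into (a) the target-class probability and
    (b) a renormalized distribution over the non-target classes."""
    p = softmax(logits)
    p_target = p[target]
    rest = [pi for i, pi in enumerate(p) if i != target]
    denom = sum(rest)
    return p_target, [r / denom for r in rest]

# Hypothetical detector logits for one box, ground-truth class index 0:
p_t, p_rest = split_target_nontarget([2.0, 0.5, 0.1], target=0)
```

Distilling the two parts separately lets the non-target "dark knowledge" receive its own weight instead of being suppressed whenever the teacher is confident in the target class.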
Funding: the National Natural Science Foundation of China (No. 61962032).
Abstract: In order to improve the reconstruction accuracy of magnetic resonance imaging (MRI), an accurate natural-image compressed sensing (CS) reconstruction network is proposed, which combines the advantages of model-based and deep learning-based CS-MRI methods. In theory, enhancing geometric texture details in linear reconstruction is possible. First, the optimization problem is decomposed into two sub-problems: linear approximation and geometric compensation. The linear approximation problem is handled by the data consistency module. Since this processing loses texture details, a neural network layer that explicitly combines image and frequency feature representations is proposed, named the butterfly dilated geometric distillation network. The network introduces the idea of the butterfly operation, skillfully integrating features from the image and frequency domains and avoiding the loss of texture details that occurs when extracting features in a single domain. Finally, a channel feature fusion module is designed by combining a channel attention mechanism with dilated convolution. The channel attention makes the final output feature map focus on the more important parts, improving the feature representation ability, while the dilated convolution enlarges the receptive field, thereby capturing denser image feature data. The experimental results show that the peak signal-to-noise ratio of the network is 5.43 dB, 5.24 dB, and 3.89 dB higher than that of the ISTA-Net+, FISTA, and DGDN networks, respectively, on the brain dataset with a Cartesian sampling mask at a CS ratio of 10%.
Funding: supported by the Henan Provincial Science and Technology Research Projects (232102211017, 232102211006, 232102210044, 242102211020, and 242102211007) and the Zhengzhou University of Light Industry Science and Technology Innovation Team Program Project (23XNKJTD0205).
Abstract: Within the realm of multimodal neural machine translation (MNMT), seamlessly integrating textual data with corresponding image data to enhance translation accuracy has become a pressing issue. We observed that discrepancies between textual content and associated images can introduce visual noise, potentially diverting the model's focus away from the textual data and thus affecting overall translation quality. To solve this visual-noise problem, we propose an innovative KDNR-MNMT model. The model combines knowledge distillation with an anti-noise interaction mechanism, making full use of synthesized graphic knowledge and local image interaction masks to extract more effective visual features. Meanwhile, the KDNR-MNMT model adopts a multimodal adaptive gating fusion strategy to enhance the constructive interaction of information from different modalities. By integrating a perceptual attention mechanism that uses cross-modal interaction cues within the Transformer framework, our approach notably enhances the quality of machine translation outputs. To confirm the model's performance, we carried out extensive testing and assessment on the widely used Multi30K dataset. The outcomes of our experiments show substantial enhancements in our model's BLEU and METEOR scores, with respective increases of 0.78 and 0.99 points over prevailing methods. This accomplishment affirms the potency of our strategy for mitigating visual interference and marks a meaningful advancement within the multimodal NMT domain, further propelling the evolution of this scholarly pursuit.
Funding: supported in part by the National Natural Science Foundation of China under Grants 62073085, 61973330, and 62350055, in part by the Shenzhen Science and Technology Program, China under Grant JCYJ20230807093513027, and in part by the Fundamental Research Funds for the Central Universities, China under Grant 1243300008.
Abstract: Filter pruning effectively compresses a neural network by reducing both its parameters and its computational cost. Existing pruning methods typically rely on pre-designed pruning criteria to measure filter importance and remove those filters deemed unimportant. However, different layers of a neural network exhibit different filter distributions, making it inappropriate to apply the same pruning criterion to all layers. Some approaches apply different criteria from a set of pre-defined pruning rules to different layers, but the limited rule space makes it difficult to cover all layers, and manually designing criteria for every layer is costly and hard to generalize to other networks. To solve this problem, we present a novel neural network pruning method based on a Criterion Learner and Attention Distillation (CLAD). Specifically, CLAD develops a differentiable criterion learner, which is integrated into each layer of the network. The learner can automatically learn an appropriate pruning criterion from the filter parameters of each layer, eliminating the need for manual design. Furthermore, the criterion learner is trained end-to-end by gradient optimization to achieve efficient pruning. In addition, attention distillation, which fully utilizes the knowledge of the unpruned network to guide the optimization of the learner and improve the pruned network's performance, is introduced into the learner optimization process. Experiments conducted on various datasets and networks demonstrate the effectiveness of the proposed method. Notably, CLAD reduces the FLOPs of ResNet-110 by about 53% on the CIFAR-10 dataset while simultaneously improving the network's accuracy by 0.05%. Moreover, it reduces the FLOPs of ResNet-50 by about 46% on the ImageNet-1K dataset while maintaining a top-1 accuracy of 75.45%.
Funding: this work has been made possible by the support of the Guangxi Natural Science Foundation (Grant No. 2024GXNSFAA010484) and the National Natural Science Foundation of China (No. 62466013).
Abstract: Under low-illumination conditions, the quality of image signals deteriorates significantly, typically characterized by a peak signal-to-noise ratio (PSNR) below 10 dB, which severely limits the usability of the images. Supervised methods, which utilize paired high/low-light images as training sets, can raise the PSNR to around 20 dB, significantly improving image quality; however, such data is challenging to obtain. In recent years, unsupervised low-light image enhancement (LIE) methods based on the Retinex framework have been proposed, but they generally lag behind supervised methods by 5–10 dB in performance. In this paper, we introduce the Denoising-Distilled Retinex (DDR) method, an unsupervised approach that integrates denoising priors into a Retinex-based training framework. By explicitly incorporating denoising, the DDR method effectively addresses the challenges of noise and artifacts in low-light images, thereby enhancing the performance of the Retinex framework. The model achieved a PSNR of 19.82 dB on the LOL dataset, which is comparable to the performance of supervised methods. Furthermore, by applying knowledge distillation, the DDR method optimizes the model for real-time processing of low-light images, achieving a processing speed of 199.7 fps without incurring additional computational costs. While the DDR method has demonstrated superior performance in terms of image quality and processing speed, there is still room for improvement in robustness across different color spaces and under highly resource-constrained conditions. Future research will focus on enhancing the model's generalizability and adaptability to address these challenges. Our rigorous testing on public datasets further substantiates the DDR method's state-of-the-art performance in both image quality and processing speed.
Fund: Funded by Hanoi University of Science and Technology (HUST) under project number T2023-PC-008.
Abstract: Due to the need for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental difficulties lie in training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when deploying the model. By utilizing knowledge distillation techniques, this article strives to overcome these challenges by inheriting the advantages of both the teacher model and the student model. More precisely, the characteristics of the ResNet152-PSP-Net model are used to train the ResNet18-PSP-Net model. Pyramid pooling blocks are used to decode multi-scale feature maps, producing a complete semantic map inference. The student model not only preserves the strong segmentation performance of the teacher model but also improves the inference speed of the prediction results. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model also shows remarkable improvement in processing speed compared with lightweight models such as MobileNetV2 and EfficientNet, as measured by latency and throughput. The proposed KD-SegNet model obtains an accuracy of 96.3% and an mIoU (mean Intersection over Union) of 77%, outperforming existing models by more than 15% on the same training dataset. The suggested method has an average training time that is only 0.51 times that of comparable models, while still achieving similar segmentation performance. The resulting semantic segmentation frames are then used to form the motion trajectory for the system in the environment. Overall, this architecture shows great promise for the development of knowledge-based systems for MR navigation.
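The teacher-student transfer described above is commonly implemented with soft-label (Hinton-style) distillation. The sketch below, in NumPy, is an illustrative version of such a loss; the paper's exact loss formulation is not specified here, and the temperature and weighting are assumptions:

```python
import numpy as np

def softmax(z: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / t)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, t=4.0, alpha=0.5):
    """alpha * soft cross-entropy to the teacher (scaled by t^2, per Hinton et al.)
    plus (1 - alpha) * hard cross-entropy to the ground-truth labels."""
    p_t = softmax(teacher_logits, t)              # softened teacher distribution
    log_p_s = np.log(softmax(student_logits, t))  # softened student log-probs
    soft = -(p_t * log_p_s).sum(axis=-1).mean() * t * t
    hard_log_p = np.log(softmax(student_logits))
    hard = -hard_log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard

# A student matching the teacher scores lower loss than an inverted one.
teacher = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
labels = np.array([0, 1])
print(distillation_loss(teacher, teacher, labels)
      < distillation_loss(-teacher, teacher, labels))  # True
```

For semantic segmentation, the same loss is applied per pixel, with logits of shape (pixels, classes).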
Fund: Supported by the Beijing Natural Science Foundation (2222037) and the Special Educating Project of the Talent for Carbon Peak and Carbon Neutrality of the University of Chinese Academy of Sciences (innovation of the talent cultivation model for "dual carbon" in the chemical engineering industry, E3E56501A2).
Abstract: Dividing wall batch distillation with a middle vessel (DWBDM) is a new type of batch distillation column, with the outstanding advantages of low capital cost, energy savings, and flexible operation. However, temperature control of the DWBDM process is challenging because the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory, such as RNN, LSTM, and GRU, are used to handle time-series data. The results from a case example show that the new control scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of prediction error from the neural network, indicating that it is a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
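As a concrete illustration of the fuzzy-control half of the scheme, the sketch below implements a minimal Mamdani-style single-input rule base for a temperature loop, defuzzified by a weighted average. The membership limits, rule table, and duty scaling are hypothetical, not the tuned controller from the paper:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_heat_adjustment(temp_error: float) -> float:
    """Map temperature error (setpoint - measurement, K) to a relative
    reboiler-duty change. Rule base and limits are illustrative only."""
    rules = [  # (membership degree, consequent duty change)
        (tri(temp_error, -10.0, -5.0, 0.0), -0.10),  # too hot  -> cut duty
        (tri(temp_error,  -5.0,  0.0, 5.0),  0.00),  # on target -> hold
        (tri(temp_error,   0.0,  5.0, 10.0), 0.10),  # too cold -> add duty
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0  # no rule fires -> no action

print(round(fuzzy_heat_adjustment(2.5), 3))  # 0.05
```

In the paper's scheme, the input to such a controller would be the temperature predicted by the neural network soft-sensor rather than a direct measurement, which is why robustness to prediction error matters.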
Fund: Supported by the National Natural Science Foundation of China (22021004).
Abstract: The oversupply of diesel in China necessitates the efficient separation of polycyclic aromatic hydrocarbons from fluid catalytic cracking diesel for value-added utilization. However, purification is hindered by interference from alkanes and monocyclic aromatics. In this work, we propose a solvent-screening strategy for extractive distillation based on molecular polarity and interaction-energy analysis. Quantum chemical calculations identified ethylene glycol (for aromatic solubility) and N,N-dimethylformamide (for alkane selectivity) as optimal solvents, with weak hydrogen bonds (e.g., O-H…π, C-H…π) governing the aromatic interactions. Two process designs were developed: (1) solvent extraction followed by primary extractive distillation (purity >95.0% (mass)) and (2) direct two-stage extractive distillation (purity >92.0% (mass)). This work provides a flexible framework for polycyclic aromatic hydrocarbon separation tailored to market demands while elucidating solvent-solute interactions at the molecular level.
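A standard way to quantify what "alkane selectivity" buys in extractive distillation is the infinite-dilution selectivity and the solvent-enhanced relative volatility. The sketch below uses hypothetical activity coefficients for illustration; the paper's computed values are not reproduced here:

```python
def selectivity(gamma_inf_alkane: float, gamma_inf_aromatic: float) -> float:
    """Infinite-dilution selectivity S = gamma_alkane / gamma_aromatic in the
    solvent; S > 1 means the solvent retains aromatics while alkanes
    vaporize preferentially."""
    return gamma_inf_alkane / gamma_inf_aromatic

def alpha_effective(alpha_ideal: float, s: float) -> float:
    """Solvent-enhanced relative volatility: alpha_eff = alpha_ideal * S.
    A larger alpha_eff means fewer stages / less reflux for the same split."""
    return alpha_ideal * s

# Hypothetical numbers (not from the paper): a polar solvent that strongly
# rejects alkanes turns a near-unity volatility into an easy separation.
s = selectivity(gamma_inf_alkane=9.0, gamma_inf_aromatic=1.6)
print(round(alpha_effective(1.1, s), 2))  # 6.19
```

This is the screening logic behind ranking candidate solvents: the quantum chemical interaction energies feed estimates of the activity-coefficient ratio, and the solvent with the larger selectivity makes the distillation tractable.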