Journal Articles
17,484 articles found
1. Fabrication of Superhydrophobic Membrane via One-step Spraying Strategy Utilizing Organosilicon Chemistry and Its Performance in Membrane Distillation
Authors: Tian-Tian Li, Zheng Xu, Yu-Jing Zhang, Ming-Han Su, Shun-Da Liu, Shao-Fei Zhang. Chinese Journal of Polymer Science, 2026, No. 1, pp. 223-233, I0016 (12 pages)
Membrane distillation (MD) is an advanced membrane separation process that employs hydrophobic microporous membranes to separate non-volatile solutes from the feed solution, driven by vapor pressure gradients generated by a thermal difference. This technology offers strong desalination capabilities and efficiently harnesses low-grade thermal energy sources, including geothermal and waste heat, making it a cost-effective solution for freshwater scarcity. Nevertheless, hydrophobic membranes are prone to contamination by surfactants, inorganic salts, and other substances in feed solutions. To address this, low-surface-energy composite nano-inorganic materials composed of carbon nanotubes and silica were synthesized and modified via organosilicon chemistry. A superhydrophobic surface exhibiting a water contact angle of 157.96° was successfully fabricated from these nanomaterials on a poly(vinylidene fluoride) (PVDF) membrane surface with micro-nano structures via a one-step spray-coating method. Compared to the unmodified PVDF membrane, the superhydrophobic membrane demonstrated superior resistance to common scaling agents such as CaCl_(2), Mg(OH)_(2), CaCO_(3), and CaSO_(4), while maintaining a stable permeate flux (13.4 kg·m^(-2)·h^(-1)) during MD tests. Additionally, the modified membrane exhibited enhanced wetting resistance when treating feed solutions containing sodium dodecyl sulfate (SDS), significantly extending the membrane's operational lifespan. Owing to its outstanding performance, this superhydrophobic membrane is expected to promote the practical application of MD technology in the treatment of complex wastewater and efficient seawater desalination.
Keywords: membrane distillation; superhydrophobicity; anti-scaling; micro-nano structure
2. Thermodynamic and experimental evaluation of the sustainable recycling of magnesium alloy scrap by vacuum distillation based on vapor-liquid equilibrium (Cited by 1)
Authors: Lipeng Wang, Dong Liang, Yang Tian, Jianxue Chai, Rui Li, Shuji Wu, Bin Yang, Baoqiang Xu, Yong Deng. Journal of Magnesium and Alloys, 2025, No. 1, pp. 283-295 (13 pages)
Magnesium (Mg) alloys are widely used lightweight structural materials for automobiles and help reduce carbon emissions. However, their use increases the production of Mg alloy scrap, which is recycled at a much lower rate than aluminum, and its greater complexity poses challenges to existing recycling processes. Although vacuum distillation can be used to recycle Mg alloy scrap, this requires optimizing and maximizing metal recirculation, yet there has been no thermodynamic analysis of this process. In this study, the feasibility and controllability of separating inclusions and 23 metal impurities were evaluated, and their distribution and removal limits were quantified. Thermodynamic analyses and experimental results showed that inclusions and impurity metals with separation coefficient lgβ_(i) ≤ -5, including Cu, Fe, Co, and Ni below 0.001 ppm, could be removed from the matrix. All Zn entered the recycled Mg, while impurities with -5 < lgβ_(i) < -1, such as Li, Ca, and Mn, severely affected the purity of the recycled Mg during the later stage of distillation. Therefore, an optimization strategy for vacuum distillation recycling is proposed: lower temperatures and higher system pressures for Zn separation in the early stage, and either early termination of the recovery process or a continuous supply of raw melt in the later stage to prevent contamination during recycling. The alloying elements Al and Zn in Mg alloy scrap can be further recovered and purified by vacuum distillation when economically feasible, to maximize the recycling of metal resources.
Keywords: magnesium alloy; scrap recycling; thermodynamic analysis; impurity removal; vacuum distillation
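The separation criterion quoted above (cleanly removable when lgβ_i ≤ -5, problematic in the later distillation stage when -5 < lgβ_i < -1) lends itself to a small classification sketch. The lgβ values below are illustrative placeholders, not data from the paper:

```python
# Classify impurities in Mg-alloy scrap by separation coefficient lg(beta_i),
# following the thresholds quoted in the abstract:
#   lg(beta_i) <= -5      -> cleanly removed from the Mg matrix
#   -5 < lg(beta_i) < -1  -> partially separated; contaminates the late stage
#   otherwise             -> stays with the recycled Mg (e.g. Zn)
def classify_impurity(lg_beta):
    if lg_beta <= -5:
        return "removable"
    if -5 < lg_beta < -1:
        return "problematic"
    return "retained"

# Hypothetical lg(beta) values, for illustration only.
impurities = {"Cu": -8.2, "Fe": -9.1, "Mn": -3.0, "Li": -2.4, "Zn": 0.5}
groups = {name: classify_impurity(v) for name, v in impurities.items()}
print(groups)
```

The grouping is what drives the staged strategy in the abstract: "retained" elements are handled early, "problematic" ones motivate early termination of recovery.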
3. Preparation of 7N High-Purity Indium by Vacuum Distillation-Zone Refining Combination
Authors: Tian Qinghua, Hu Zhixiang, He Zhiqiang, Guo Xueyi, Zhu Liu, Xu Zhipeng. Rare Metal Materials and Engineering (PKU Core), 2025, No. 8, pp. 1947-1955 (9 pages)
High-purity indium finds extensive application in the aerospace, electronics, medical, energy, and national defense sectors. Its purity and impurity contents significantly influence its performance in these applications. High-purity indium was prepared by combining zone refining with vacuum distillation. Results show that the average removal efficiency of the impurity Sb can approach 95%, the removal efficiency of the impurities Sn and Bi can exceed 95%, and the removal efficiency of Si, Fe, Ni, and Pb can exceed 85%. Ultimately, the Sn and Sb impurity contents are reduced to 2.0 and 4.1 μg/kg, respectively, and most other impurities, including Fe, Ni, Pb, and Bi, are reduced to levels below the instrumental detection limit. The average impurity removal efficiency is 90.9%, and the indium purity reaches 7N9.
Keywords: indium; high purity; vacuum distillation; zone refining
4. Enrichment and purification of nervonic acid from Acer truncatum seed oil by combining vacuum distillation and low-temperature crystallization: Experiments and process modeling
Authors: Yingxi Gao, Tong Wei, Jie Wang, Yuming Tu, Zhiyong Zhou, Chencan Du, Zhongqi Ren. Chinese Journal of Chemical Engineering, 2025, No. 6, pp. 116-124 (9 pages)
Nervonic acid (NA) is a long-chain monounsaturated fatty acid with significant potential for neural fiber repair. In this study, a mixed fatty acid methyl ester was synthesized as the raw material through saponification of Acer truncatum Bunge seed oil. Based on the differences in boiling points and relative volatilities of the various components, a four-stage vacuum batch distillation process was employed to enrich the nervonic acid methyl ester (NAME). The effect of distillation process parameters on enrichment efficiency was investigated, including distillation temperature, operating pressure, and reflux ratio. The purity of NAME reached 91.20% under optimal conditions, with a corresponding yield of 48.91%. To further increase the purity, a low-temperature crystallization process was adopted, giving a final NAME purity of 97.56%. The four-stage batch distillation was simulated using Aspen Plus software, and a continuous distillation process was further simulated to establish a theoretical basis for future industrial-scale production. The experimental and simulation results demonstrate that the integrated vacuum distillation and low-temperature crystallization process exhibits remarkable separation performance, providing robust guidance for the production of high-purity NA.
Keywords: nervonic acid; purification; distillation; crystallization; Aspen simulation; vacuum
5. Current advances in distillation processes for fermentative acetone-butanol-ethanol purification
Authors: Xuedan Hou, Pengfei Zhao, Xiaohui Lin, Yunxing Gao, Huidong Chen, Di Cai, Peiyong Qin. Chinese Journal of Chemical Engineering, 2025, No. 3, pp. 91-108 (18 pages)
Acetone-butanol-ethanol (ABE) fermentation is a primary strategy for producing bio-based n-butanol from abundant renewable biomass. In the typical ABE production chain, distillation is an essential unit for high-purity ABE production, but it has long been criticized as energy-inefficient owing to the extremely low solvent concentrations received from the upstream fermentation system. Over the past decades, efforts have been dedicated to developing eco-efficient ABE distillation processes aimed at reducing both energy costs and capital investment. In this review, a comprehensive overview of ABE distillation systems is provided, from the physico-chemical properties of the feed and thermodynamics to process configurations and applications. Recent trends in distillation sequence construction that fit with rapidly developed upstream in situ product recovery (ISPR) systems are emphasized. Furthermore, towards a more efficient ABE distillation system, the review broadly surveys intensification strategies for ABE distillation. Along with a systematic introduction of the key examples, future directions for the development of ABE distillation techniques are discussed with a view towards sustainable, low-carbon-emission biorefineries.
Keywords: n-butanol; ABE fermentation; in situ product recovery; distillation; process intensification
6. A non-fluorinated liquid-like membrane with excellent anti-scaling performance for membrane distillation
Authors: Jianwen Zhao, Shuai Wang, Shanshan Zhao, Liwei Chen, Fangang Meng, Xuelin Tian. Chinese Chemical Letters, 2025, No. 1, pp. 521-528 (8 pages)
Membrane distillation (MD) has gained extensive attention for treating highly saline wastewater. However, membrane scaling during the MD process has hindered the rapid development of this technology. Current approaches to mitigate scaling in membrane distillation focus primarily on achieving enhanced hydrophobicity, or even superhydrophobicity, by utilizing fluorinated fibrous membranes or introducing perfluorosilane modification. Considering the environmental hazards posed by fluorinated compounds, it is highly desirable to develop non-fluorinated membranes with enhanced anti-scaling properties for effective membrane distillation. In this study, we present a non-fluorinated liquid-like MD membrane with exceptional anti-scaling performance. This membrane was facilely fabricated by grafting linear polydimethylsiloxane (LPDMS) onto a hydrophilic polyether sulfone (PES) membrane pre-coated with intermediate layers of polydopamine and silica (denoted as LPDMS-PES). Remarkably, LPDMS-PES manifested drastically improved scaling resistance in continuous MD tests compared with its perfluorinated counterpart, a 1H,1H,2H,2H-perfluorooctyltrichlorosilane-modified PES membrane (PFOS-PES), in both heterogeneous nucleation-dominated and crystal deposition-dominated scaling processes, despite the latter having a smaller surface energy. LPDMS-PES demonstrated a reduction in crystal accumulation of approximately 85% for NaCl and 73% for CaSO_(4) in the heterogeneous nucleation-dominated scaling process compared to PFOS-PES. Additionally, in the crystal deposition-dominated scaling process, LPDMS-PES exhibited a reduction of about 70% in scale accumulation. These results explicitly evidence the great potential of the liquid-like membrane to minimize scaling in membrane distillation by inhibiting both scale nucleation and adhesion onto the membrane. We believe the findings of this study have important implications for the design of high-performance MD membranes, particularly in the quest for environmentally sustainable alternatives to perfluorinated materials.
Keywords: surface modification; liquid-like membranes; anti-scaling; non-fluorinated; direct contact membrane distillation
7. State surveillance and fault diagnosis of distillation columns using residual network-based passive acoustic monitoring
Authors: Haotian Zheng, Zhixi Zhang, Guangyan Wang, Yatao Wang, Jun Liang, Weiyi Su, Yuqi Hu, Xiong Yu, Chunli Li, Honghai Wang. Chinese Journal of Chemical Engineering, 2025, No. 1, pp. 248-258 (11 pages)
The operational state of distillation columns significantly impacts product quality and production efficiency. However, given their complex operation and the diverse influencing factors, ensuring the safe and efficient operation of distillation columns is paramount. This research combines passive acoustic monitoring with artificial intelligence, proposing a residual network (ResNet)-based technique that transforms the acoustic signals emitted by three distillation columns under different operating states. The acoustic signals, initially one-dimensional waveforms, were converted into a database of two-dimensional Mel-frequency cepstral coefficient spectrograms using the fast Fourier transform. This database was then used to train a ResNet to identify the operational states of the distillation columns. Through this approach, various faults, including flooding, entrainment, and dry tray, were diagnosed with an accuracy of 98.91%. Moreover, an intermediate transitional state between normal operation and fault was identified and accurately recognized by the proposed method. Under the transitional state, the acoustic signals achieved an accuracy of 97.85% on the ResNet, which enables early warnings before faults occur, enhancing the safety of chemical production processes. The approach presents a powerful tool for the monitoring and diagnosis of chemical equipment, particularly distillation columns, ensuring safety and efficiency.
Keywords: distillation column; acoustic signal; neural network
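The front end described above (1-D waveform → framed FFT → 2-D time-frequency image) can be sketched without any audio library. This toy version computes a plain magnitude spectrogram in stdlib Python; the Mel filter bank and cepstral steps of a full MFCC pipeline are omitted, and the frame/hop sizes are illustrative:

```python
import cmath
import math

def spectrogram(signal, frame_len=32, hop=16):
    """Magnitude spectrogram: one row per frame, one column per frequency bin."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Hann window to reduce spectral leakage
        windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * i / (frame_len - 1)))
                    for i, x in enumerate(frame)]
        # Naive DFT, keeping only the non-negative frequency bins
        bins = []
        for k in range(frame_len // 2 + 1):
            s = sum(x * cmath.exp(-2j * math.pi * k * i / frame_len)
                    for i, x in enumerate(windowed))
            bins.append(abs(s))
        frames.append(bins)
    return frames

# Test tone: exactly 4 cycles per 32-sample frame, so energy lands in bin 4
tone = [math.sin(2 * math.pi * 4 * i / 32) for i in range(128)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(len(spec), len(spec[0]), peak_bin)  # frames, bins per frame, dominant bin
```

Stacking such frames over time yields the 2-D image that a CNN such as ResNet can classify, which is the essence of the monitoring pipeline in this entry.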
8. Kinetic and process analysis of continuous catalytic distillation for high-purity propylene glycol monomethyl ether acetate production
Authors: Qinglian Wang, Dingbang Zhao, Huaifang Li, Xin Gao, Weifeng Shen, Chen Yang, Changshen Ye, Ting Qiu. Chinese Journal of Chemical Engineering, 2025, No. 10, pp. 200-210 (11 pages)
The production of high-purity propylene glycol monomethyl ether acetate (PMA) through the transesterification of propylene glycol monomethyl ether (PM) and methyl acetate (MeOAc) is traditionally catalyzed by sodium methoxide. However, the practical application of this method is significantly hindered by the inherent limitations of sodium methoxide, such as its high sensitivity to moisture and propensity for solid precipitation, which impede its effective use in continuous processes. This work proposes a continuous catalytic distillation (CD) process utilizing Amberlyst 15 cation exchange resin as the catalyst. A comprehensive series of reaction kinetics and CD experiments was conducted to evaluate the performance of the proposed process. The results demonstrate that under the optimal operating conditions, namely an ester-to-ether molar ratio of 6:1, a reflux ratio of 5:1, a total feed rate of 0.92 g·min^(-1), and an evaporation rate of 266.47 m^(3)·m^(-2)·h^(-1), the conversion rate of PM reaches 99.95% and the PMA yield is 97.31%. Based on these findings, a process flowsheet for a continuous CD process tailored to the production of electronic-grade PMA is presented. This design incorporates light- and heavy-ends removal steps to ensure the production of PMA with a purity of 99.99%. Additionally, the process utilizes pressure-swing distillation to recover MeOAc, thereby enhancing the overall efficiency and sustainability of the production process. The proposed continuous CD process offers a highly efficient, cost-effective, and environmentally sustainable solution for the production of electronic-grade PMA.
Keywords: propylene glycol monomethyl ether acetate; reactive distillation; catalyst; kinetics; electronic grade
9. Optimizing BERT for Bengali Emotion Classification: Evaluating Knowledge Distillation, Pruning, and Quantization
Authors: Md Hasibur Rahman, Mohammed Arif Uddin, Zinnat Fowzia Ria, Rashedur M. Rahman. Computer Modeling in Engineering & Sciences, 2025, No. 2, pp. 1637-1666 (30 pages)
The rapid growth of digital data necessitates advanced natural language processing (NLP) models like BERT (Bidirectional Encoder Representations from Transformers), known for its superior performance in text classification. However, BERT's size and computational demands limit its practicality, especially in resource-constrained settings. This research compresses the BERT base model for Bengali emotion classification through knowledge distillation (KD), pruning, and quantization techniques. Despite Bengali being the sixth most spoken language globally, NLP research in this area is limited. Our approach addresses this gap by creating an efficient BERT-based model for Bengali text. We explored 20 combinations of KD, quantization, and pruning, resulting in improved speedup, fewer parameters, and reduced memory size. Our best results demonstrate significant improvements in both speed and efficiency. For instance, in the case of mBERT, we achieved a 3.87× speedup and a 4× compression ratio with a Distil+Prune+Quant combination that reduced parameters from 178 M to 46 M, while the memory size decreased from 711 MB to 178 MB. These results offer scalable solutions for NLP tasks in various languages and advance the field of model compression, making these models suitable for real-world applications in resource-limited environments.
Keywords: Bengali NLP; black-box distillation; emotion classification; model compression; post-training quantization; unstructured pruning
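Knowledge distillation, as used in entries like this one, rests on one core loss: the student matches the teacher's temperature-softened class probabilities. A minimal dependency-free sketch of that generic loss (Hinton-style KD, not this paper's exact recipe; the logits and temperature below are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits, temperature)   # soft targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

teacher = [4.0, 1.0, -2.0]                       # confident teacher logits
aligned = kd_loss([3.9, 1.1, -1.8], teacher)     # student close to teacher
misaligned = kd_loss([-2.0, 1.0, 4.0], teacher)  # student reversed
print(aligned < misaligned)  # closer logits -> smaller distillation loss
```

In practice this term is mixed with the ordinary cross-entropy on hard labels; pruning and quantization are then applied on top of the distilled student.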
10. Graph distillation with network symmetry
Authors: Feng Lin, Jia-Lin He. Chinese Physics B, 2025, No. 4, pp. 262-271 (10 pages)
Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues related to cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on both the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into "super nodes", thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to match optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.
Keywords: graph neural networks; graph distillation; network symmetry; super nodes; feature optimization
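The "super node" merge described above — collapsing nodes with equivalent neighborhood structures — can be sketched directly. This toy version groups structurally equivalent nodes by their neighbor sets, a simplification of the paper's symmetry detection; the example graph is made up:

```python
from collections import defaultdict

def merge_symmetric_nodes(adjacency):
    """Group nodes whose neighborhoods (excluding themselves) are identical,
    and collapse each group into one 'super node'."""
    signature = defaultdict(list)
    for node, neighbors in adjacency.items():
        key = frozenset(n for n in neighbors if n != node)
        signature[key].append(node)
    # Map every node to a canonical representative of its symmetry class
    representative = {}
    for group in signature.values():
        rep = min(group)
        for node in group:
            representative[node] = rep
    # Rebuild the reduced adjacency over super nodes (self-loops dropped)
    reduced = defaultdict(set)
    for node, neighbors in adjacency.items():
        for n in neighbors:
            if representative[node] != representative[n]:
                reduced[representative[node]].add(representative[n])
    return representative, dict(reduced)

# A star graph: leaves 1..4 all share the same neighborhood {0}
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
rep, reduced = merge_symmetric_nodes(star)
print(rep, reduced)
```

The four leaves collapse into one super node, shrinking a 5-node graph to 2 super nodes while preserving the hub-leaf structure that a GNN would learn from.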
11. An Optimized Unsupervised Defect Detection Approach via Federated Learning and Adaptive Embeddings Knowledge Distillation
Authors: Jinhai Wang, Junwei Xue, Hongyan Zhang, Hui Xiao, Huiling Wei, Mingyou Chen, Jiang Liao, Lufeng Luo. Computers, Materials & Continua, 2025, No. 7, pp. 1839-1861 (23 pages)
Defect detection based on computer vision is a critical component in ensuring the quality of industrial products. However, existing detection methods encounter several challenges in practical applications, including the scarcity of labeled samples, the limited adaptability of pre-trained models, and data heterogeneity in distributed environments. To address these issues, this research proposes an unsupervised defect detection method, FLAME (Federated Learning with Adaptive Multi-Model Embeddings). The method comprises three stages. (1) Feature learning stage: this work proposes FADE (Feature-Adaptive Domain-Specific Embeddings), a framework that employs Gaussian noise injection to simulate defective patterns and implements a feature discriminator for defect detection, thereby enhancing the pre-trained model's representation of industrial imagery. (2) Knowledge distillation co-training stage: a multi-model feature knowledge distillation mechanism is introduced. Through feature-level knowledge transfer between the global model and historical local models, the current local model is guided to learn better feature representations from the global model. This approach prevents local models from converging to local optima and mitigates the performance degradation caused by data heterogeneity. (3) Model parameter aggregation stage: participating clients use weighted-averaging aggregation to synthesize an updated global model, facilitating efficient knowledge consolidation. Experimental results demonstrate that FADE improves the average image-level area under the receiver operating characteristic curve (AUROC) by 7.34% compared to methods that directly use pre-trained models. In federated learning environments, FLAME's multi-model feature knowledge distillation mechanism outperforms the classic FedAvg algorithm by 2.34% in average image-level AUROC, while exhibiting superior convergence properties.
Keywords: federated learning; defect detection; knowledge distillation; unsupervised learning
12. Enhancing the generalization capability of 2D array pointer networks through multiple teacher-forcing knowledge distillation
Authors: Qidong Liu, Xin Shen, Chaoyue Liu, Dong Chen, Xin Zhou, Mingliang Xu. Journal of Automation and Intelligence, 2025, No. 1, pp. 29-38 (10 pages)
The Heterogeneous Capacitated Vehicle Routing Problem (HCVRP), which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost, is an NP-hard challenge in combinatorial optimization. Recently, reinforcement learning approaches such as 2D Array Pointer Networks (2D-Ptr) have demonstrated remarkable speed in decision-making by modeling multiple agents' concurrent choices as a sequence of consecutive actions. However, these learning-based models often struggle with generalization, meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining. Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model, we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation (MTKD). We initially train 12 distinct 2D-Ptr models under various settings to serve as teacher models. Subsequently, we randomly sample a teacher model and a batch of problem instances, focusing on those where the chosen teacher performed best. This teacher model then solves these instances, generating high-reward action sequences to guide knowledge transfer to the student model. We conduct rigorous evaluations across four distinct datasets, each comprising four HCVRP instances of varying scales. Our empirical findings underscore the proposed method's superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
Keywords: vehicle routing problem; multi-teacher knowledge distillation; teacher-forcing; pointer network
13. An Improved Knowledge Distillation Algorithm and Its Application to Object Detection
Authors: Min Yao, Guofeng Liu, Yaozu Zhang, Guangjie Hu. Computers, Materials & Continua, 2025, No. 5, pp. 2189-2205 (17 pages)
Knowledge distillation (KD) is an emerging model compression technique for learning compact object detector models. Previous KD often focused solely on distilling from the logits layer or the intermediate feature layers, which may limit the comprehensive learning of the student network. Additionally, the imbalance between foreground and background also affects the performance of the model. To address these issues, this paper employs feature-based distillation to enhance the detection performance of the bounding-box localization part, and logit-based distillation to improve the detection performance of the category prediction part. Specifically, for intermediate-layer feature distillation, we introduce feature resampling to reduce the risk of the student model merely imitating the teacher model. At the same time, we incorporate a Spatial Attention Mechanism (SAM) to highlight the foreground features learned by the student model. For output-layer distillation, we divide the traditional distillation targets into target-class objects and non-target-class objects, aiming to improve overall distillation performance. Furthermore, we introduce a one-to-many matching distillation strategy based on a Feature Alignment Module (FAM), which further enhances the student model's feature representation ability, making its feature distribution closer to that of the teacher model and thus demonstrating superior localization and classification capabilities in object detection tasks. Experimental results demonstrate that our proposed methodology outperforms conventional distillation techniques in object detection performance.
Keywords: deep learning; model compression; knowledge distillation; object detection
14. Magnetic Resonance Imaging Reconstruction Based on Butterfly Dilated Geometric Distillation
Authors: DUO Lin, XU Boyu, REN Yong, YANG Xin. Journal of Shanghai Jiaotong University (Science), 2025, No. 3, pp. 590-599 (10 pages)
In order to improve the reconstruction accuracy of magnetic resonance imaging (MRI), an accurate natural image compressed sensing (CS) reconstruction network is proposed, which combines the advantages of model-based and deep learning-based CS-MRI methods. In theory, enhancing geometric texture details in linear reconstruction is possible. First, the optimization problem is decomposed into two subproblems: linear approximation and geometric compensation. The image linear approximation problem is handled by the data consistency module. Since this processing loses texture details, a neural network layer that explicitly combines image and frequency feature representations is proposed, named the butterfly dilated geometric distillation network. The network introduces the idea of the butterfly operation, skillfully integrating features of the image domain and the frequency domain, and avoids the loss of texture details that occurs when extracting features in a single domain. Finally, a channel feature fusion module is designed by combining a channel attention mechanism with dilated convolution. Channel attention makes the final output feature map focus on the more important parts, improving the feature representation ability, while dilated convolution enlarges the receptive field, thereby obtaining denser image feature data. The experimental results show that the peak signal-to-noise ratio of the network is 5.43 dB, 5.24 dB, and 3.89 dB higher than that of the ISTA-Net+, FISTA, and DGDN networks, respectively, on the brain dataset with a Cartesian sampling mask at a CS ratio of 10%.
Keywords: butterfly geometric distillation; dilated convolution; channel attention; image reconstruction
15. Multimodal Neural Machine Translation Based on Knowledge Distillation and Anti-Noise Interaction
Authors: Erlin Tian, Zengchao Zhu, Fangmei Liu, Zuhe Li. Computers, Materials & Continua, 2025, No. 5, pp. 2305-2322 (18 pages)
Within the realm of multimodal neural machine translation (MNMT), seamlessly integrating textual data with corresponding image data to enhance translation accuracy has become a pressing issue. Discrepancies between textual content and associated images can introduce visual noise, potentially diverting the model's focus away from the textual data and thus reducing overall translation effectiveness. To solve this visual noise problem, we propose an innovative KDNR-MNMT model. The model combines knowledge distillation with an anti-noise interaction mechanism, making full use of synthesized graphic knowledge and local image interaction masks to extract more effective visual features. Meanwhile, the KDNR-MNMT model adopts a multimodal adaptive gating fusion strategy to enhance the constructive interaction of different modal information. By integrating a perceptual attention mechanism that uses cross-modal interaction cues within the Transformer framework, our approach notably enhances the quality of machine translation outputs. To confirm the model's performance, we carried out extensive testing and assessment on the widely used Multi30K dataset. The experimental outcomes show substantial improvements in our model's BLEU and METEOR scores, with respective increases of 0.78 and 0.99 points over prevailing methods. This accomplishment affirms the potency of our strategy for mitigating visual interference and marks a notable advancement in the multimodal NMT domain, further propelling the evolution of this line of research.
Keywords: knowledge distillation; anti-noise interaction; mask occlusion; gated fusion
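The "adaptive gating fusion" idea above — letting a learned gate decide, per feature, how much visual signal to admit — reduces to a sigmoid-gated convex combination of the two modality vectors. A minimal sketch with made-up vectors and a toy gate (not the paper's architecture): the gate here reacts to modality disagreement, so a noisy image feature is suppressed in favor of the text feature.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(text_feat, image_feat, gate_weights, gate_bias):
    """fused_i = g_i * text_i + (1 - g_i) * image_i.
    Toy gate: large text/image disagreement drives g_i toward 1,
    so the fusion falls back on the textual feature (visual noise rejected)."""
    fused = []
    for t, v, w, b in zip(text_feat, image_feat, gate_weights, gate_bias):
        g = sigmoid(w * abs(t - v) + b)
        fused.append(g * t + (1 - g) * v)
    return fused

text = [0.8, -0.2, 0.5]
image = [0.7, 1.9, 0.4]   # second element: strong visual noise
out = gated_fusion(text, image,
                   gate_weights=[1.0, 5.0, 1.0],  # illustrative weights
                   gate_bias=[0.0, 0.0, 0.0])
print(out)
```

With the noisy second element, the gate saturates near 1 and the fused value tracks the text feature (-0.2) rather than the outlier image feature (1.9).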
16. CLAD: Criterion learner and attention distillation for automated CNN pruning
Authors: Zheng Li, Jiaxin Li, Shaojie Liu, Bo Zhao, Derong Liu. Journal of Automation and Intelligence, 2025, No. 4, pp. 254-265 (12 pages)
Filter pruning effectively compresses a neural network by reducing both its parameters and its computational cost. Existing pruning methods typically rely on pre-designed pruning criteria to measure filter importance and remove the filters deemed unimportant. However, different layers of a neural network exhibit varying filter distributions, making it inappropriate to apply the same pruning criterion to all layers. Some approaches apply different criteria from a set of pre-defined pruning rules to different layers, but the limited rule space makes it difficult to cover all layers, and manually designing criteria for every layer is costly and hard to generalize to other networks. To solve this problem, we present a novel neural network pruning method based on a Criterion Learner and Attention Distillation (CLAD). Specifically, CLAD develops a differentiable criterion learner that is integrated into each layer of the network. The learner automatically learns the appropriate pruning criterion according to the filter parameters of each layer, eliminating the need for manual design. The criterion learner is trained end-to-end by gradient optimization to achieve efficient pruning. In addition, attention distillation, which fully utilizes the knowledge of the unpruned network to guide the optimization of the learner and improve the pruned network's performance, is introduced into the learner optimization process. Experiments conducted on various datasets and networks demonstrate the effectiveness of the proposed method. Notably, CLAD reduces the FLOPs of ResNet-110 by about 53% on the CIFAR-10 dataset while simultaneously improving the network's accuracy by 0.05%. Moreover, it reduces the FLOPs of ResNet-50 by about 46% on the ImageNet-1K dataset while maintaining a top-1 accuracy of 75.45%.
Keywords: Neural network pruning; Model compression; Knowledge distillation; Feature attention; Polar regularization
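The layer-wise criterion-learning idea can be sketched as a learnable blend of classic importance criteria. The numpy mock-up below is illustrative only, not the paper's CLAD implementation: the three criteria (L1 norm, L2 norm, and distance-to-mean as a cheap stand-in for the geometric-median criterion), the softmax logits `alpha`, and the pruning ratio are all assumptions.

```python
import numpy as np

def criterion_scores(filters):
    """Three classic per-filter importance criteria.

    filters: (n_filters, k) array of flattened filter weights.
    Returns an (n_filters, 3) matrix of [L1, L2, distance-to-mean],
    each column min-max normalized so learned weights are comparable.
    """
    l1 = np.abs(filters).sum(axis=1)
    l2 = np.sqrt((filters ** 2).sum(axis=1))
    gm = np.linalg.norm(filters - filters.mean(axis=0), axis=1)
    scores = np.stack([l1, l2, gm], axis=1)
    rng = scores.max(axis=0) - scores.min(axis=0) + 1e-12
    return (scores - scores.min(axis=0)) / rng

def learned_importance(filters, alpha):
    """Blend the criteria with softmax weights (the 'criterion learner')."""
    w = np.exp(alpha) / np.exp(alpha).sum()
    return criterion_scores(filters) @ w

def prune_mask(filters, alpha, ratio):
    """Keep the top (1 - ratio) fraction of filters by learned importance."""
    imp = learned_importance(filters, alpha)
    n_keep = max(1, int(round(len(imp) * (1.0 - ratio))))
    keep = np.argsort(imp)[::-1][:n_keep]
    mask = np.zeros(len(imp), dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
layer = rng.normal(size=(16, 27))   # 16 filters of a hypothetical 3x3x3 conv layer
alpha = np.array([0.2, 0.5, 0.3])   # per-layer learnable logits (assumed values)
mask = prune_mask(layer, alpha, ratio=0.5)
print(mask.sum())  # 8 of 16 filters kept
```

In the actual method, `alpha` would be a per-layer parameter updated by gradient descent alongside the network, rather than a fixed vector.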
Unsupervised Low-Light Image Enhancement Based on Explicit Denoising and Knowledge Distillation
17
Authors: Wenkai Zhang, Hao Zhang, Xianming Liu, Xiaoyu Guo, Xinzhe Wang, Shuiwang Li. Computers, Materials &amp; Continua, 2025, Issue 2, pp. 2537-2554 (18 pages)
Under low-illumination conditions, the quality of image signals deteriorates significantly, typically characterized by a peak signal-to-noise ratio (PSNR) below 10 dB, which severely limits the usability of the images. Supervised methods, which use paired low-light/normal-light images as training sets, can raise the PSNR to around 20 dB, significantly improving image quality; however, such paired data is challenging to obtain. In recent years, unsupervised low-light image enhancement (LIE) methods based on the Retinex framework have been proposed, but they generally lag behind supervised methods by 5-10 dB in performance. In this paper, we introduce the Denoising-Distilled Retinex (DDR) method, an unsupervised approach that integrates denoising priors into a Retinex-based training framework. By explicitly incorporating denoising, the DDR method effectively addresses the noise and artifacts in low-light images, thereby enhancing the performance of the Retinex framework. The model achieved a PSNR of 19.82 dB on the LOL dataset, comparable to the performance of supervised methods. Furthermore, by applying knowledge distillation, the DDR method is optimized for real-time processing of low-light images, achieving a processing speed of 199.7 fps without incurring additional computational cost. While the DDR method demonstrates superior performance in both image quality and processing speed, there is still room for improvement in robustness across different color spaces and under highly resource-constrained conditions; future research will focus on enhancing the model's generalizability and adaptability to address these challenges. Our rigorous testing on public datasets further substantiates the DDR method's state-of-the-art performance in both image quality and processing speed.
Keywords: Deep learning; low-light image enhancement; real-time processing; knowledge distillation
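The Retinex decomposition that DDR builds on can be illustrated in a few lines. The sketch below is a generic single-scale Retinex: it estimates illumination with a box blur, divides it out to obtain reflectance (I = R × L), and recombines with a gamma-brightened illumination. The blur radius and gamma are arbitrary choices, and none of this reflects DDR's actual network or its denoising prior.

```python
import numpy as np

def box_blur(img, r):
    """Naive box blur: mean over a (2r+1) x (2r+1) window (edge-padded)."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def retinex_enhance(img, r=2, gamma=0.4, eps=1e-6):
    """Single-scale Retinex: decompose I = R * L, then recombine the
    reflectance R with a gamma-brightened illumination L (gamma < 1)."""
    illum = np.clip(box_blur(img, r), eps, 1.0)
    reflectance = img / illum            # R = I / L
    enhanced = reflectance * illum ** gamma
    return np.clip(enhanced, 0.0, 1.0)

rng = np.random.default_rng(1)
clean = rng.uniform(0.3, 0.9, size=(32, 32))
dark = clean * 0.15                      # simulated severe under-exposure
bright = retinex_enhance(dark)
```

Unsupervised Retinex methods learn the decomposition and the illumination adjustment rather than using fixed filters as here; the sketch only shows where denoising would have to act (on the noisy reflectance estimate).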
KD-SegNet: Efficient Semantic Segmentation Network with Knowledge Distillation Based on Monocular Camera
18
Authors: Thai-Viet Dang, Nhu-Nghia Bui, Phan Xuan Tan. Computers, Materials &amp; Continua, 2025, Issue 2, pp. 2001-2026 (26 pages)
Due to the necessity for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitations lie in training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when the model is deployed. By utilizing knowledge distillation techniques, this article strives to overcome these challenges by inheriting the advantages of both the teacher model and the student model. More precisely, the ResNet152-PSP-Net model's characteristics are used to train the ResNet18-PSP-Net model. Pyramid pooling blocks decode multi-scale feature maps, producing a complete semantic map inference. The student model not only preserves the strong segmentation performance of the teacher model but also improves the inference speed of the predictions. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model shows remarkable improvement in processing speed, measured by latency and throughput, compared with lightweight models such as MobileNetV2 and EfficientNet. The proposed KD-SegNet model obtains an accuracy of 96.3% and a mIoU (mean Intersection over Union) of 77%, outperforming existing models by more than 15% on the same training dataset. The suggested method has an average training time of only about 0.51 times that of models in the same field, while still achieving comparable segmentation performance. The semantic segmentation frames are then collected, forming the motion trajectory for the system in its environment. Overall, this architecture shows great promise for the development of knowledge-based systems for MR navigation.
Keywords: Mobile robot navigation; semantic segmentation; knowledge distillation; pyramid scene parsing; fully convolutional networks
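The teacher-student transfer can be illustrated with the classic temperature-scaled logit distillation loss. This is a generic sketch, not KD-SegNet's exact objective (a segmentation distillation would apply such a loss per pixel and may also match intermediate features); the temperature `t` and mixing weight `alpha` are assumed values.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, t=4.0, alpha=0.7):
    """alpha * T^2 * KL(teacher || student) + (1 - alpha) * CE(student, labels).

    The T^2 factor keeps the soft-target gradient magnitude comparable
    across temperatures.
    """
    p_t = softmax(teacher_logits, t)
    p_s = softmax(student_logits, t)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    probs = softmax(student_logits)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * t ** 2 * kl + (1 - alpha) * ce

# Toy per-sample logits (hypothetical, 3 classes):
teacher = np.array([[6.0, 1.0, 0.0], [0.5, 5.0, 1.0]])
good_student = teacher.copy()                           # matches the teacher
bad_student = np.array([[0.0, 6.0, 1.0], [5.0, 0.5, 1.0]])  # disagrees
labels = np.array([0, 1])
```

A student that mimics the teacher's soft outputs incurs a much lower loss than one that contradicts them, which is the signal that lets the compact ResNet18 backbone absorb the ResNet152 teacher's knowledge.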
Dynamic temperature control of dividing wall batch distillation with middle vessel based on neural network soft-sensor and fuzzy control
19
Authors: Xiaoyu Zhou, Erwei Song, Mingmei Wang, Erqiang Wang. Chinese Journal of Chemical Engineering, 2025, Issue 3, pp. 200-211 (12 pages)
Dividing wall batch distillation with middle vessel (DWBDM) is a new type of batch distillation column with the outstanding advantages of low capital cost, energy saving, and flexible operation. However, temperature control of the DWBDM process is challenging, since the process is inherently dynamic and highly nonlinear, which makes it difficult to give the controller a reasonable set value or an optimal temperature profile for a temperature control scheme. To overcome this obstacle, this study proposes a new strategy for developing a temperature control scheme for DWBDM that combines a neural network soft-sensor with fuzzy control. A dynamic model of DWBDM was first developed and numerically solved in Python, with three control schemes: composition control by PID and by fuzzy control, respectively, and temperature control by fuzzy control with a neural network soft-sensor. For the dynamic process, neural networks with memory, such as RNN, LSTM, and GRU, are used to handle the time-series data. Results from a case example show that the new scheme achieves good temperature control of DWBDM with the same or even better product purities than traditional PID or fuzzy control, and that fuzzy control can reduce the effect of the neural network's prediction error, indicating a highly feasible and effective control approach for DWBDM that could even be extended to other dynamic processes.
Keywords: Dividing wall batch distillation column; Middle vessel; Temperature control; Neural network soft-sensor; Fuzzy control
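The fuzzy-control idea can be illustrated with a minimal single-input rule base: three fuzzy sets on the temperature error (Negative, Zero, Positive) mapped to reboiler-duty adjustments and defuzzified by centroid. The membership ranges and output centroids below are hypothetical, not taken from the paper's controller, which also uses the soft-sensor's temperature estimate as its input.

```python
import numpy as np

def fuzzy_heat_adjust(temp_error):
    """Map a tray-temperature error (K) to a reboiler-duty adjustment (kW).

    Three shoulder/triangular fuzzy sets on the error, centroid
    defuzzification over the rule consequents:
      error Negative -> cut heat, Zero -> hold, Positive -> add heat.
    """
    e = float(np.clip(temp_error, -5.0, 5.0))   # universe of discourse [-5, 5] K
    mu_neg = max(0.0, -e / 5.0)                 # degree of "error is negative"
    mu_zero = max(0.0, 1.0 - abs(e) / 5.0)      # degree of "error is ~zero"
    mu_pos = max(0.0, e / 5.0)                  # degree of "error is positive"
    mu = np.array([mu_neg, mu_zero, mu_pos])
    centroids = np.array([-2.0, 0.0, 2.0])      # output-set centroids (kW, assumed)
    return float(mu @ centroids / mu.sum())     # centroid defuzzification
```

The resulting control surface is smooth and monotone in the error, which is what gives fuzzy control its tolerance to soft-sensor prediction error: a small error in the estimated temperature only shifts the memberships slightly.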
Separation of polycyclic aromatic hydrocarbons by solvent screening assisted extractive distillation from FCC diesel
20
Authors: Jun Li, Wanting Yu, Jinsen Gao. Chinese Journal of Chemical Engineering, 2025, Issue 11, pp. 89-102 (14 pages)
The oversupply of diesel in China necessitates efficient separation of polycyclic aromatic hydrocarbons from fluidized catalytic cracking (FCC) diesel for value-added utilization. However, purification is hindered by interference from alkanes and monocyclic aromatics. In this work, we propose a solvent-screening strategy for extractive distillation based on molecular polarity and interaction energy analysis. Quantum chemical calculations identified ethylene glycol (for aromatic solubility) and N,N-dimethylformamide (for alkane selectivity) as optimal solvents, with weak hydrogen bonds (e.g., O-H…π, C-H…π) governing the aromatic interactions. Two process designs were developed: (1) solvent extraction followed by primary extractive distillation (purity &gt;95.0% (mass)) and (2) direct two-stage extractive distillation (purity &gt;92.0% (mass)). This work provides a flexible framework for polycyclic aromatic hydrocarbon separation tailored to market demands while elucidating solvent-solute interactions at the molecular level.
Keywords: Polycyclic aromatic hydrocarbons; Extractive distillation; Solvent screening; Quantum chemical calculation
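A common screening metric behind such solvent comparisons is the infinite-dilution selectivity S = γ∞(alkane)/γ∞(aromatic): the larger S, the more strongly the solvent retains the aromatic relative to the alkane, favoring it as an entrainer for extractive distillation. The sketch below ranks a few solvents by this metric; the activity-coefficient values are hypothetical placeholders for illustration, not data from the paper (which screens solvents by quantum-chemical interaction energies).

```python
def selectivity(gamma_alkane, gamma_aromatic):
    """Infinite-dilution selectivity S = gamma_inf(alkane) / gamma_inf(aromatic).
    S > 1 means the solvent preferentially holds the aromatic."""
    return gamma_alkane / gamma_aromatic

# Hypothetical (gamma_inf_alkane, gamma_inf_aromatic) pairs, illustrative only:
solvents = {
    "ethylene glycol": (25.0, 4.0),
    "N,N-dimethylformamide": (9.0, 1.5),
    "sulfolane": (18.0, 3.5),
}

ranking = sorted(solvents, key=lambda s: selectivity(*solvents[s]), reverse=True)
print(ranking[0])  # the solvent with the highest selectivity under these numbers
```

In a real screening, the γ∞ values would come from experiment or a thermodynamic model (e.g., a UNIFAC-type estimate) rather than assumed constants.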