Predicting player performance in sports is a critical challenge with significant implications for team success, fan engagement, and financial outcomes. Although statistical methodologies such as sabermetrics have been widely used in Major League Baseball (MLB), the dynamic nature of sports makes accurate performance prediction a difficult task. Enhanced forecasts can provide immense value to team managers by aiding strategic player contract and acquisition decisions. This study addresses the challenge by employing the temporal fusion transformer (TFT), a state-of-the-art deep learning model for complex sequential data, to predict pitchers’ earned run average (ERA), a key metric in baseball performance analysis. The TFT model is evaluated against recurrent neural network-based approaches and existing projection systems. In the experiments, the TFT-based model consistently outperformed its counterparts, demonstrating superior accuracy in pitcher performance prediction. By leveraging the advanced capabilities of the TFT, this study contributes to more precise player evaluations and improved strategic planning in baseball.
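ERA, the quantity the TFT is trained to forecast, is itself a fixed formula: earned runs allowed, scaled to a nine-inning game. A minimal sketch of that standard definition:

```python
def era(earned_runs: float, innings_pitched: float) -> float:
    """Earned run average: earned runs allowed per nine innings pitched."""
    return 9.0 * earned_runs / innings_pitched

# A pitcher allowing 20 earned runs over 60 innings:
print(era(20, 60))  # → 3.0
```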
Thyroid nodules, a common disorder of the endocrine system, require accurate segmentation in ultrasound images for effective diagnosis and treatment. However, achieving precise segmentation remains a challenge due to various factors, including scattering noise, low contrast, and limited resolution in ultrasound images. Although existing segmentation models have made progress, they still suffer from several limitations, such as high error rates, low generalizability, overfitting, and limited feature learning capability. To address these challenges, this paper proposes a Multi-level Relation Transformer-based U-Net (MLRT-UNet) to improve thyroid nodule segmentation. The MLRT-UNet leverages a novel Relation Transformer, which processes images at multiple scales, overcoming the limitations of traditional encoding methods. This transformer effectively integrates both local and global features through self-attention and cross-attention units, capturing intricate relationships within the data. The approach also introduces a Co-operative Transformer Fusion (CTF) module to combine multi-scale features from different encoding layers, enhancing the model’s ability to capture complex patterns in the data. Furthermore, the Relation Transformer block enhances long-distance dependencies during the decoding process, improving segmentation accuracy. Experimental results show that the MLRT-UNet achieves high segmentation accuracy, reaching 98.2% on the Digital Database Thyroid Image (DDT) dataset, 97.8% on the Thyroid Nodule 3493 (TG3K) dataset, and 98.2% on the Thyroid Nodule 3K (TN3K) dataset. These findings demonstrate that the proposed method significantly enhances the accuracy of thyroid nodule segmentation, addressing the limitations of existing models.
Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either individual image. Methods Image fusion technology was applied to biological cell image processing. It can match the images and improve their confidence and spatial resolution. Using two algorithms, a double-thresholds algorithm and a wavelet-transform-based denoising algorithm, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion The position of the fluorescence and the structure of the cell can be displayed in the composite image. The signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are not only useful for investigating fluorescence and transmission images, but also suitable for observing two or more fluorescent label probes in a single cell.
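The abstract's wavelet-based denoising step is not specified in detail; a common core of such denoising is soft thresholding of the detail coefficients, sketched here with NumPy (the threshold value `t` is an assumed free parameter, not taken from the paper):

```python
import numpy as np

def soft_threshold(coeffs: np.ndarray, t: float) -> np.ndarray:
    """Shrink detail coefficients toward zero by t; small (noisy)
    coefficients are zeroed, large ones survive with reduced magnitude."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5])
out = soft_threshold(c, 1.0)  # -3 → -2, small values → 0, 1.5 → 0.5
```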
Because of cloudy and rainy weather in south China, optical remote sensing images often cannot be obtained easily. Using regional trial results in Baoying, Jiangsu Province, this paper explores the fusion model and effect of ENVISAT/SAR and HJ-1A multispectral remote sensing images. Based on the ARSIS strategy, using the wavelet transform and the Interaction between the Band Structure Model (IBSM), the study performed wavelet decomposition of the ENVISAT SAR and HJ-1A CCD images, reconstructed the low- and high-frequency coefficients, and obtained fusion images through the inverse wavelet transform. Because low- and high-frequency images have different characteristics in different areas, adaptive fusion rules that enhance the integration process were adopted, and the results were compared with PCA transformation, IHS transformation, and other traditional methods through subjective and corresponding quantitative evaluation. Furthermore, the study extracted band and NDVI values around the fusion area with GPS samples to analyze and explain the fusion effect. The results showed that the spectral distortion of the wavelet fusion, IHS transform, and PCA transform images was 0.1016, 0.3261, and 1.2772, respectively, and the entropy was 14.7015, 11.8993, and 13.2293, respectively; the wavelet fusion entropy is the highest. The wavelet method maintained good spectral capability and visual effects while improving the spatial resolution, and its information interpretation effect was much better than that of the other two methods.
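One of the reported quality measures, entropy, can be computed directly from an image's grey-level histogram. A sketch of Shannon entropy in bits (the paper's exact spectral-distortion formula is not reproduced here, so only entropy is shown):

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (bits) of an image's grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0*log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# An image using all 256 grey levels equally often has maximal entropy, 8 bits:
flat = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(image_entropy(flat))  # → 8.0
```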
A novel fusion method for multispectral and panchromatic images based on the nonsubsampled contourlet transform (NSCT) and non-negative matrix factorization (NMF) is presented, the aim of which is to preserve both spectral and spatial information simultaneously in the fused image. NMF is a matrix factorization method that can extract local features by choosing a suitable dimension for the feature subspace. First, the multispectral image was represented in the intensity hue saturation (IHS) system. Then the I component and the panchromatic image were decomposed by the NSCT. Next, NMF was used to learn the features of both the multispectral and panchromatic images’ low-frequency subbands, and the selection principle for the other coefficients was the absolute maximum criterion. Finally, the new coefficients were reconstructed to obtain the fused image. Experiments were carried out and the results compared with those of other methods, showing that the new method performs better in improving spatial resolution and preserving feature information than the other existing related methods.
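The IHS representation underlies a classic component-substitution shortcut for pansharpening: swap the intensity component for the panchromatic image. This sketch uses the band mean as a simple stand-in for the I component; it illustrates the IHS idea only and is not the paper's NSCT/NMF method:

```python
import numpy as np

def ihs_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Component-substitution pansharpening sketch: replace the intensity
    component (here approximated by the band mean) with the panchromatic
    image by adding the difference back to every spectral band."""
    intensity = ms.mean(axis=-1, keepdims=True)   # (H, W, 1)
    return ms + (pan[..., None] - intensity)      # broadcast over bands

ms = np.full((4, 4, 3), 0.4)
pan = np.full((4, 4), 0.4)
# When the pan image equals the intensity component, the MS image is unchanged:
print(np.allclose(ihs_pansharpen(ms, pan), ms))  # → True
```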
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared with the related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
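The two headline metrics, Dice and Jaccard, have standard definitions over binary masks and can be sketched directly:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index (intersection over union): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([1, 1, 1, 0], dtype=bool)
gt   = np.array([1, 1, 0, 0], dtype=bool)
print(dice(pred, gt), round(jaccard(pred, gt), 3))  # → 0.8 0.667
```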
Accurate estimation of land surface solar irradiation is critical for effective solar energy utilization and solar photovoltaic planning. Although traditional machine learning methods have been demonstrated to estimate solar irradiation effectively, they face challenges in modeling over large regions, lack the ability to model the spatial diversity and temporal dynamics of solar irradiation, and provide limited interpretability. To address these limitations, this study proposes a geospatial artificial intelligence framework augmented by the Temporal Fusion Transformer for hourly estimation of land surface solar irradiation. In a case study in Australia, the results demonstrate superior performance, with a coefficient of determination of 0.90, a mean absolute error of 0.25 kWh/m^(2), and a root mean square error of 0.63 kWh/m^(2), representing improvements of 21.62–66.67%, 78.37–85.98%, and 62.81–73.25%, respectively, over benchmark methods including Support Vector Regression, Random Forest, Gradient Boosting Machine, AdaBoost, Long Short-Term Memory, Temporal Convolutional Network, ConvLSTM, Transformer, and Graph Neural Network. Furthermore, interpretability results indicate that among the temporal variables, observed solar irradiation and clear-sky solar irradiation contribute most significantly to the model’s performance. These results show that the framework enhances the accuracy and interpretability of solar irradiation estimation over large areas, providing valuable insights for future studies and supporting decision-making for developing the renewable energy industry.
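The three reported scores follow standard definitions; a small sketch (the input arrays below are made up for illustration, not the paper's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Coefficient of determination (R^2), MAE, and RMSE."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return r2, mae, rmse

# Perfect prediction gives R^2 = 1 and zero errors:
r2, mae, rmse = regression_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 4.0])
print(r2, mae, rmse)  # → 1.0 0.0 0.0
```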
Building integrated energy systems (BIESs) account for a significant proportion of global energy consumption and are therefore pivotal for enhancing energy efficiency. Two key barriers that reduce BIES operational efficiency lie mainly in the uncertainty of renewable generation and the operational non-convexity of combined heat and power (CHP) units. To this end, this paper proposes a soft actor-critic (SAC) algorithm to solve the BIES scheduling problem, which overcomes the model non-convexity and shows advantages in robustness and generalization. The paper also adopts a temporal fusion transformer (TFT) to enhance the optimal solution of the SAC algorithm by forecasting renewable generation and energy demand. The TFT can effectively capture complex temporal patterns and dependencies that span multiple steps. Furthermore, its forecasting results are interpretable owing to the employment of a self-attention layer, which supports more trustworthy decision-making in the SAC algorithm. The proposed hybrid data-driven approach integrating the TFT and SAC algorithms, i.e., the TFT-SAC approach, is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost and computational time compared with benchmark approaches. The generalization performance of the scheduling policy, as well as a sensitivity analysis, is examined in the case studies.
A novel image fusion algorithm based on the bandelet transform is proposed. The bandelet transform can take advantage of the geometric regularity of image structure and efficiently represent sharp image transitions such as edges in image fusion. For reconstructing the fused image, the maximum rule is used to select the source images’ geometric flow and bandelet coefficients. Experimental results indicate that the bandelet-based fusion algorithm represents edge and detail information well and outperforms the wavelet-based and Laplacian pyramid-based fusion algorithms, especially when abundant texture and edges are contained in the source images.
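The maximum rule mentioned for coefficient selection reduces to a per-coefficient magnitude comparison; sketched generically here (the paper applies it to geometric flow and bandelet coefficients, which are not reproduced in this sketch):

```python
import numpy as np

def max_abs_select(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Maximum rule: for each position, keep whichever source coefficient
    has the larger magnitude (larger magnitude ~ stronger edge/detail)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

a = np.array([3.0, -0.1, 0.5])
b = np.array([-1.0, 2.0, 0.5])
fused = max_abs_select(a, b)  # picks 3.0 from a, 2.0 from b, 0.5 on a tie
```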
Life evaluation for newly developed lithium-ion batteries is often constrained by the time-intensive and costly nature of battery testing. This is particularly true in the aerospace industry, where the limited availability of comprehensive data significantly hampers life evaluations. Data collected from batteries under diverse operating conditions and cell mechanisms provide valuable insights for constructing degradation models. Nevertheless, the nonlinearity of battery degradation across operating conditions, combined with data distribution discrepancies among different cell mechanisms, presents significant challenges in developing degradation models for newly designed batteries. In this study, a stress-informed transfer learning methodology is proposed to accelerate the life evaluation process. First, a stochastic model is employed to capture the nonlinear dynamics inherent in battery degradation under diverse operating conditions. Model migration is implemented to adapt the stochastic models to unique degradation trends, ensuring precision under varying stresses. Second, a Transformer-based model is developed to accommodate variations in data distributions across different cell mechanisms. Domain-adaptive fine-tuning with a specified loss function is then incorporated to address the challenge of limited target degradation features. Finally, a hybrid model is devised by integrating these foundational components, realizing accelerated life evaluation through the utilization of multi-modal data. Experimental results demonstrate that the proposed methodology achieves improvements of 63.40% in MAE and 58.55% in RMSE with 30% of the training data length compared with mainstream benchmark methods. This highlights the method’s potential as an early-stage screening and assessment tool for newly developed space lithium-ion batteries, complementing conventional cycle life evaluation protocols with accelerated evaluations from limited degradation data.
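The reported percentage improvements follow the usual relative-reduction formula for error metrics; a sketch with hypothetical baseline and proposed MAE values (not the paper's actual numbers):

```python
def relative_improvement(baseline: float, proposed: float) -> float:
    """Percentage reduction of an error metric relative to a baseline."""
    return 100.0 * (baseline - proposed) / baseline

# A hypothetical method cutting MAE from 0.50 to 0.183 improves on the
# baseline by 63.4%:
print(round(relative_improvement(0.50, 0.183), 1))  # → 63.4
```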
Increased penetration of renewables for power generation has negatively impacted the dynamics of conventional fossil fuel-based power plants. Power plants operating on the base load are forced to cycle to adjust to fluctuating power demands. This results in inefficient operation of coal power plants, which leads to higher operating losses. To overcome the operational challenges associated with cycling and to develop optimal process control, this work analyzes a set of models for predicting power generation. Moreover, power generation is intrinsically affected by the state of the power plant components, so our model development also incorporates additional power plant process variables while forecasting power generation. We present and compare multiple state-of-the-art data-driven forecasting methods for power generation to determine the most adequate and accurate model. We also develop an interpretable attention-based transformer model to explain the importance of process variables during training and forecasting. The trained deep neural network (DNN) LSTM model has good accuracy in predicting gross power generation under various prediction horizons with and without cycling events and outperforms the other models for long-term forecasting. The DNN memory-based models show significant superiority over other state-of-the-art machine learning models for short-, medium-, and long-range predictions. The transformer-based model with attention enhances the selection of historical data for multi-horizon forecasting and also allows the significance of internal power plant components for power generation to be interpreted. These newly gained insights can be used by operating engineers to anticipate and monitor the health of power plant equipment during high cycling periods.
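Multi-horizon forecasting models such as the LSTM and transformer described above are typically trained on sliding windows of past process variables; a generic sketch of that windowing step (the window lengths are illustrative, not the paper's settings):

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int, horizon: int):
    """Build (input, target) pairs for multi-horizon forecasting: each input
    is `lookback` past steps, each target the next `horizon` steps."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(y)

X, y = make_windows(np.arange(10.0), lookback=4, horizon=2)
print(X.shape, y.shape)  # → (5, 4) (5, 2)
```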
Funding: supported by the SKKU Global Research Platform Research Fund, Sungkyunkwan University, 2024-2025 (pitcher performance prediction study).
Funding: supported by the National Natural Science Foundation of China (41171336) and the Jiangsu Province Agricultural Science and Technology Innovation Fund (CX12-3054) (ENVISAT/SAR and HJ-1A image fusion study).
Funding: supported by the National Natural Science Foundation of China (60872065) (NSCT and NMF image fusion study).
Funding: supported by the Scientific Research Deanship at University of Ha’il, Saudi Arabia, through project number RG-23137 (H&N tumor segmentation study).
Funding: substantially funded by the General Research Fund (Grant Nos. 15603923 and 15603920), the Collaborative Research Fund (Grant No. C5062-21GF), and the Young Collaborative Research Fund (Grant No. C6003-22Y) from the Research Grants Council, Hong Kong, China, with additional funding support (Grant Nos. BBG2 and CD81) from the Research Institute for Sustainable Urban Development and the Research Institute of Land and Space, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China (solar irradiation estimation study).
Funding: supported by the Navigation Science Foundation (No. 05F07001) and the National Natural Science Foundation of China (No. 60472081) (bandelet image fusion study).
基金supported by the National Natural Science Foundation of China under Grant No 62201177,No 62371168Natural Science Foundation of Heilongjiang Province under Grant No YQ2023F006.
Abstract: Life evaluation for newly developed lithium-ion batteries is often constrained by the time-intensive and costly nature of battery testing. This is particularly true in the aerospace industry, where limited availability of comprehensive data significantly hampers life evaluations. Data collected from batteries under diverse operating conditions and cell mechanisms provides valuable insights for constructing degradation models. Nevertheless, the nonlinearity of battery degradation across operating conditions, combined with data distribution discrepancies among different cell mechanisms, presents significant challenges in developing degradation models for newly designed batteries. In this study, a stress-informed transfer learning methodology is proposed to accelerate the life evaluation process. First, a stochastic model is employed to capture the nonlinear dynamics inherent in battery degradation under diverse operating conditions. Model migration is implemented to adapt the stochastic models to unique degradation trends, ensuring precision under varying stresses. Second, a Transformer-based model is developed to accommodate variations in data distributions across different cell mechanisms. Domain-adaptive fine-tuning with a specified loss function is then incorporated to address the challenge of limited target degradation features. Finally, a hybrid model is devised by integrating these foundational components, enabling accelerated life evaluation from multi-modal data. Experimental results demonstrate that the proposed methodology achieves improvements of 63.40% in MAE and 58.55% in RMSE with 30% of the training data length compared with mainstream benchmark methods. This highlights the method's potential as an early-stage screening and assessment tool for newly developed space lithium-ion batteries, complementing conventional cycle life evaluation protocols with accelerated evaluations from limited degradation data.
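For clarity, the reported "improvements of 63.40% in MAE and 58.55% in RMSE" are percentage reductions of the error metrics relative to a benchmark. The sketch below shows how such figures are computed; the data and predictions are illustrative, not the paper's.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def relative_improvement(baseline_err, proposed_err):
    """Percentage reduction in error versus the baseline, the form in which
    the 63.40% MAE and 58.55% RMSE figures are reported."""
    return 100.0 * (baseline_err - proposed_err) / baseline_err

# Hypothetical capacity trajectory and two models' predictions
y = np.array([1.00, 0.90, 0.80, 0.70])
pred_base = np.array([1.20, 0.70, 1.00, 0.50])   # benchmark model
pred_new = np.array([1.05, 0.85, 0.85, 0.68])    # proposed model

print(round(relative_improvement(mae(y, pred_base), mae(y, pred_new)), 2))
```

A positive value means the proposed model's error is lower than the baseline's by that percentage.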
Abstract: Increased penetration of renewables for power generation has negatively impacted the dynamics of conventional fossil fuel-based power plants. Power plants operating on the base load are forced to cycle in order to adjust to fluctuating power demand. This results in inefficient operation of coal power plants, which leads to higher operating losses. To overcome this operational challenge associated with cycling and to develop optimal process control, this work analyzes a set of models for predicting power generation. Moreover, power generation is intrinsically affected by the state of the power plant components, so our model development also incorporates additional power plant process variables when forecasting power generation. We present and compare multiple state-of-the-art data-driven forecasting methods for power generation to determine the most adequate and accurate model. We also develop an interpretable attention-based transformer model to explain the importance of process variables during training and forecasting. The trained deep neural network (DNN) LSTM model has good accuracy in predicting gross power generation under various prediction horizons with and without cycling events, and outperforms the other models for long-term forecasting. The DNN memory-based models show significant superiority over other state-of-the-art machine learning models for short-, medium-, and long-range predictions. The transformer-based model with attention enhances the selection of historical data for multi-horizon forecasting and also allows operators to interpret the significance of internal power plant components on power generation. These newly gained insights can be used by operation engineers to anticipate and monitor the health of power plant equipment during high-cycling periods.
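The interpretability claimed for the attention-based model comes from the attention weights themselves: each historical time step receives a normalized weight indicating how much it contributes to the forecast. A minimal scaled dot-product attention sketch follows; the shapes and values are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def attention_weights(query, keys):
    """Scaled dot-product attention weights over historical time steps.
    The weight on each step indicates how strongly it influences the
    forecast, which is the basis of the model's interpretability."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

# Hypothetical 2-dimensional encodings of three historical steps
query = np.array([0.5, 1.0])             # current forecasting context
keys = np.array([[0.1, 0.2],             # older, low-load step
                 [0.4, 0.9],             # recent high-cycling step
                 [0.2, 0.3]])

w = attention_weights(query, keys)
print(w.round(3))  # weights sum to 1; the largest marks the most influential step
```

Inspecting which steps (or, with variable-wise attention, which process variables) receive the largest weights is what lets engineers attribute a forecast to specific plant conditions.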