This paper focuses on the numerical solution of a tumor growth model under a data-driven approach. Based on the inherent laws of the data and reasonable assumptions, an ordinary differential equation model for tumor growth is established. Nonlinear fitting is employed to obtain optimal parameter estimates for the mathematical model, and the numerical solution is computed in MATLAB. The simulation results agree well with the clinical data, which verifies the rationality and feasibility of the model.
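As an illustration of the nonlinear-fitting step described above, the sketch below fits the growth rate of a logistic ODE, dV/dt = rV(1 - V/K), to synthetic volume measurements by grid search. The logistic form, all parameter values, and the grid-search fitting are assumptions for illustration; the paper's actual model and MATLAB routine are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: model form and constants are hypothetical.
def logistic(t, r, K, V0):
    """Closed-form solution of dV/dt = r*V*(1 - V/K) with V(0) = V0."""
    return K / (1.0 + (K / V0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 30.0, 16)          # measurement times (days, hypothetical)
true_r, K, V0 = 0.35, 100.0, 5.0        # "unknown" rate, carrying capacity, initial volume
rng = np.random.default_rng(0)
# synthetic "clinical" data: true curve with 2% multiplicative noise
data = logistic(t, true_r, K, V0) * (1.0 + 0.02 * rng.standard_normal(t.size))

# nonlinear fitting reduced to a 1-D grid search over the growth rate r
candidates = np.linspace(0.1, 0.6, 501)
sse = [np.sum((logistic(t, r, K, V0) - data) ** 2) for r in candidates]
r_hat = float(candidates[int(np.argmin(sse))])
```

In practice the closed-form logistic solution stands in for a numerical ODE solve, and a least-squares optimizer over all parameters would replace the one-dimensional grid search.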
Permanent magnet synchronous motors (PMSMs) are widely used in alternating-current servo systems because they provide high efficiency, high power density, and a wide speed regulation range, and servo systems are placing ever higher demands on control performance. The model predictive control (MPC) algorithm is emerging as a potential high-performance motor control algorithm due to its ability to handle multiple-input, multiple-output variables and imposed constraints. For MPC in the PMSM control process, nonlinear disturbances caused by changes in electromagnetic parameters or load disturbances may lead to a mismatch between the nominal model and the controlled object, which causes prediction errors and thus affects the dynamic stability of the control system. This paper proposes a data-driven MPC strategy in which historical data over an appropriate range are utilized to eliminate the impact of parameter mismatch and further improve control performance. The stability of the proposed algorithm is proved, and simulations demonstrate its feasibility. Its superiority over the classical MPC strategy is also verified.
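A minimal sketch of the data-driven idea, under strong simplifying assumptions that are not the paper's algorithm: a scalar noiseless motor model, a one-step prediction horizon, and a deadbeat-style command. Recent input/output history identifies the model by least squares, so the controller never relies on a possibly mismatched nominal model.

```python
import numpy as np

# Hypothetical scalar plant x[k+1] = a*x[k] + b*u[k]; a_true, b_true are
# unknown to the controller and only used to generate the history.
rng = np.random.default_rng(1)
a_true, b_true = 0.9, 0.5
x = [0.0]
u = rng.uniform(-1.0, 1.0, 50)          # persistently exciting input history
for k in range(50):
    x.append(a_true * x[k] + b_true * u[k])

# identify (a, b) from the historical data by least squares
X = np.column_stack([x[:-1], u])        # regressors [x[k], u[k]]
y = np.array(x[1:])                     # targets x[k+1]
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]

ref = 1.0                               # desired next state
u_cmd = (ref - a_hat * x[-1]) / b_hat   # one-step deadbeat-style command
```

With noiseless data the least-squares fit recovers the plant exactly, so applying `u_cmd` drives the true plant to the reference in one step; real MPC adds a longer horizon and constraint handling on top of this prediction.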
This paper aims to develop machine learning algorithms to classify electronic articles related to COVID-19 through information retrieval and topic modelling. The methodology of this study is categorized into three phases: the Text Classification Approach (TCA), the Proposed Algorithms Interpretation (PAI), and finally the Information Retrieval Approach (IRA). The TCA covers the text preprocessing pipeline that yields a clean corpus. The pre-trained Global Vectors for Word Representation (GloVe) model, FastText, Term Frequency-Inverse Document Frequency (TF-IDF), and Bag-of-Words (BOW) are used for feature extraction. The PAI applies a Bidirectional Long Short-Term Memory (Bi-LSTM) network and a Convolutional Neural Network (CNN) to classify COVID-19 news. The IRA presents the mathematical interpretation of Latent Dirichlet Allocation (LDA) for topic modelling in information retrieval (IR). In this study, 99% accuracy was obtained by performing K-fold cross-validation on Bi-LSTM with GloVe. A comparative analysis of deep learning and machine learning in terms of feature extraction and computational complexity is also performed, and text analyses identify the most influential aspects of each document. Bidirectional Encoder Representations from Transformers (BERT) was also evaluated as a deep learning mechanism in model training, but its results were not satisfactory. The proposed system can be adapted to real-time classification of COVID-19 news.
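The TF-IDF weighting named above can be computed from scratch in a few lines. This toy version (three hypothetical headlines, no smoothing) shows why a term confined to one document outweighs a term spread across the corpus; the paper's real pipeline builds on full feature extractors and Bi-LSTM/CNN classifiers, none of which are reproduced here.

```python
import math
from collections import Counter

# Hypothetical mini-corpus; the real study uses scraped COVID-19 articles.
docs = [
    "covid vaccine rollout news",
    "covid case numbers rise",
    "football match result news",
]

def tf_idf(docs):
    """Return one {term: tf*idf} dict per document (raw tf, log idf)."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({term: (c / len(toks)) * math.log(n / df[term])
                        for term, c in tf.items()})
    return vectors

vecs = tf_idf(docs)
# "football" appears in 1 of 3 docs, "covid" in 2 of 3, so "football"
# receives the larger weight in its document.
```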
Due to signal reflection and diffraction, site-specific unmodeled errors such as multipath effects and non-line-of-sight reception are significant error sources in Global Navigation Satellite Systems (GNSS), since they cannot be easily mitigated. However, how to characterize and model the internal mechanisms and external influences of these site-specific unmodeled errors remains to be investigated. We therefore propose a method for characterizing and modeling site-specific unmodeled errors under reflection and diffraction using a data-driven approach. Specifically, we first consider all the popular potential features that generate site-specific unmodeled errors. We then use random forest regression to comprehensively analyze the correlations between the site-specific unmodeled errors and the potential features, and finally characterize and model these errors. Two consecutive 7-day datasets, dominated by signal reflection and diffraction respectively, were collected. The results show significant differences in the correlations with the potential features, which are highly related to the application scenarios, observation types, and satellite types. Notably, the innovation vector often shows a strong correlation with the code site-specific unmodeled errors, while the phase site-specific unmodeled errors correlate strongly with elevation, azimuth, the number of visible satellites, and between-frequency differenced phase observations. In the reflection and diffraction environments, the sums of the correlations of the top six potential features reach approximately 88.5% and 87.7%, respectively, and these correlations are stable across observation types and satellite types. By integrating a transformer model with the random forest method, a high-precision unmodeled-error prediction model is established, demonstrating the necessity of including multiple features for accurate and efficient characterization and modeling of site-specific unmodeled errors.
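The paper ranks candidate features (elevation, azimuth, the innovation vector, and so on) with random forest regression. As a lightweight stand-in, the sketch below ranks synthetic features by absolute Pearson correlation with a simulated error series; the feature set, the error model, and all coefficients are hypothetical.

```python
import numpy as np

# Hypothetical features and a simulated code unmodeled-error series driven
# mostly by the innovation vector (mimicking the paper's qualitative finding).
rng = np.random.default_rng(2)
n = 500
features = {
    "elevation": rng.uniform(5.0, 90.0, n),     # degrees
    "azimuth": rng.uniform(0.0, 360.0, n),      # degrees
    "innovation": rng.standard_normal(n),       # filter innovation (arbitrary units)
}
error = (2.0 * features["innovation"]
         + 0.01 * features["elevation"]
         + 0.1 * rng.standard_normal(n))

# rank features by |Pearson correlation| with the error series
corr = {k: abs(np.corrcoef(v, error)[0, 1]) for k, v in features.items()}
ranking = sorted(corr, key=corr.get, reverse=True)
```

A random forest would additionally capture nonlinear and interaction effects, which is why the paper prefers it over plain correlation for the full feature study.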
Manufacturers are striving to achieve higher energy efficiency without compromising production performance and quality standards. Parallel-serial structures, commonly found in modern production systems, offer a unique balance of flexibility and efficiency by combining parallel processes with sequential workflows. However, their inherent complexity poses significant challenges, particularly in optimizing energy efficiency and ensuring consistent product quality. In data-driven manufacturing environments, it is not clear how to leverage production data to enhance the energy efficiency of production systems. This paper therefore studies a data-driven approach to improving energy efficiency in parallel-serial production lines with product quality issues. Firstly, the authors develop a data-driven performance analysis method to evaluate the effects of disruption events, such as energy-saving control actions, machine breakdowns, and product quality failures, on system throughput and energy consumption. Secondly, a periodic energy-saving control method is developed to enhance system energy efficiency using a nonlinear programming model. To reduce complexity and improve computational efficiency, the model is simplified by leveraging the intrinsic properties of parallel-serial production lines and solved using an adaptive genetic algorithm. Finally, the effectiveness of the proposed data-driven approach is validated through case studies, providing actionable insights into data-driven energy efficiency optimization in complex production systems.
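The adaptive genetic algorithm is only named above; as a hedged illustration of the underlying mechanics (selection, crossover, mutation, elitism), this sketch maximizes a toy one-variable objective. The objective, operators, and all constants are hypothetical, and the paper's adaptive operator tuning and simplified nonlinear program are not reproduced.

```python
import random

random.seed(5)

def fitness(x):
    return -(x - 3.0) ** 2              # toy objective with maximum at x = 3

# initial population of candidate control parameters (hypothetical range)
pop = [random.uniform(0.0, 10.0) for _ in range(20)]

for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # selection: keep the fitter half (elitism)
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        # arithmetic crossover plus Gaussian mutation
        children.append(0.5 * (a + b) + random.gauss(0.0, 0.1))
    pop = parents + children

best = max(pop, key=fitness)
```

Because the fitter half survives unchanged each generation, the best candidate never worsens and the population drifts toward the optimum at x = 3.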
Building integrated energy systems (BIESs) are pivotal for enhancing energy efficiency, as buildings account for a significant proportion of global energy consumption. Two key barriers that reduce BIES operational efficiency are the uncertainty of renewable generation and the operational non-convexity of combined heat and power (CHP) units. To this end, this paper proposes a soft actor-critic (SAC) algorithm to solve the scheduling problem of a BIES, which overcomes the model non-convexity and shows advantages in robustness and generalization. This paper also adopts a temporal fusion transformer (TFT) to enhance the optimal solution of the SAC algorithm by forecasting renewable generation and energy demand. The TFT can effectively capture complex temporal patterns and dependencies that span multiple steps. Furthermore, its forecasting results are interpretable due to the employment of a self-attention layer, which assists in more trustworthy decision-making in the SAC algorithm. The proposed hybrid data-driven approach integrating the TFT and SAC algorithms, i.e., the TFT-SAC approach, is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost and computational time compared with benchmark approaches. The generalization performance of the scheduling policy, as well as a sensitivity analysis, is examined in the case studies.
The prediction of the excitation band edge wavelength (EBEW) and peak emission wavelength (PEW) for Eu^(2+)-activated phosphors is intricate in practice, although a theoretical interpretation has been well established. A data-driven approach could be of great help for EBEW and PEW prediction. We collected 91 Eu^(2+)-activated phosphors whose host structures exhibit a single activator site and whose EBEW and PEW are available at the critical activator concentration. We extracted 29 descriptors (input features) that capture the elemental and structural traits of the phosphor hosts, and set up an integrated machine-learning (ML) platform consisting of 18 ML algorithms that allowed prediction of the EBEW and PEW as well as the DFT-calculated band gap (E_(g)). The acquired dataset of 91 phosphors was insufficient for a 29-input-feature problem, and the real-world data collected from the literature have a so-called dirty nature due to inaccurate, unstandardized experiments. Despite this unavoidable paucity of data and the dirty-data problems of real-world data-based ML implementation, we obtained acceptable holdout test results for PEW predictions, such as R^(2) > 0.6, MSE < 0.02, and test_R^(2)/training_R^(2) > 0.77 for four ML algorithms. The EBEW and E_(g) predictions returned slightly better test results than these PEW examples.
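The holdout criteria quoted above (R^(2) > 0.6, MSE < 0.02) are standard regression metrics; the sketch below computes them with plain NumPy on synthetic predictions. The data are random stand-ins, not the phosphor dataset.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

# hypothetical holdout set: normalized target values plus small prediction noise
rng = np.random.default_rng(3)
y_true = rng.uniform(0.0, 1.0, 30)
y_pred = y_true + 0.05 * rng.standard_normal(30)

acceptable = (r2_score(y_true, y_pred) > 0.6) and (mse(y_true, y_pred) < 0.02)
```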
To address the complex coupling between aerodynamic characteristics and guidance control for morphing flight missiles, this study proposes a data-driven approach to integrated adaptive morphing and guidance. Firstly, an aerodynamic surrogate model is constructed using a fully connected neural network (FCNN), mapping the configuration parameters to aerodynamic parameters. Secondly, an adaptive physical parameters optimization network (PPON) is developed to optimize aerodynamic characteristics based on predictions from the aerodynamic surrogate model. Thirdly, an integrated morphing and guidance model is derived by applying the proximal policy optimization (PPO) algorithm from deep reinforcement learning (DRL), embedded with the adaptive aerodynamic optimization model. Eventually, the proposed integrated approach is applied to the guidance task of a morphing cruise missile with variable-camber wings. Simulation results demonstrate that the integrated guidance model significantly enhances aerodynamic performance and generates more continuous guidance commands within approximately 4.3 s, outperforming the deep Q-network (DQN) algorithm under morphing flight conditions. Moreover, compared to the PPO- and DQN-based guidance laws without morphing flight conditions, the integrated model improves both guidance accuracy and terminal kinetic energy. Furthermore, the integrated guidance model, trained on stationary targets, remains effective against moving and maneuvering targets, showcasing its robust generalization capability.
This paper solves the problem of model-free dual-arm space robot maneuvering after non-cooperative target capture under high control quality requirements. The explicit system model is unavailable, and the maneuvering mission is disturbed by measurement noise and adversarial target behavior. To address these problems, a model-free Combined Adaptive-length Data-driven Predictive Controller (CADPC) is proposed. It consists of a separated subsystem identification method and a combined predictive control strategy. The subsystem identification method incorporates an adaptive data length, thereby reducing sensitivity to undetermined measurement noises and disturbances. Based on the subsystem identification, the combined predictive controller is established, reducing computational resources. The stability of the CADPC is rigorously proven using the input-to-state stability (ISS) theorem and the small-gain theorem. Simulations demonstrate that the CADPC effectively handles model-free space robot post-capture operation in the presence of significant disturbances, state measurement noise, and control input errors. It achieves improved steady-state accuracy, reduced steady-state control consumption, and minimized control input chattering.
BACKGROUND Due to the increasing rate of thyroid nodule diagnoses and the desire to avoid an unsightly cervical scar, remote thyroidectomies were developed and are increasingly performed. The transoral endoscopic thyroidectomy vestibular approach and the trans-areolar approach (TAA) are the two most commonly used remote approaches. No previous meta-analysis has compared postoperative infections and swallowing difficulties between the two procedures. AIM To compare postoperative infections and swallowing difficulties among patients undergoing lobectomy for unilateral thyroid carcinoma or benign thyroid nodules. METHODS We searched PubMed MEDLINE, Google Scholar, and the Cochrane Library from the date of the first published article up to August 2025. The terms used were transoral thyroidectomy vestibular approach, trans-areolar thyroidectomy, scarless thyroidectomy, remote thyroidectomy, infections, postoperative, inflammation, dysphagia, and swallowing difficulties. We identified 130 studies; of them, 30 full texts were screened and only six studies were included in the final meta-analysis. RESULTS Postoperative infections did not differ between the two approaches (odds ratio = 1.33, 95% confidence interval: 0.50-3.53; χ² = 1.92; P value for the overall effect = 0.57). Similarly, transient swallowing difficulty did not differ between the two forms of surgery (odds ratio = 0.91, 95% confidence interval: 0.35-2.40; χ² = 1.32; P value for the overall effect = 0.85). CONCLUSION No statistically significant differences were evident between the transoral endoscopic thyroidectomy vestibular approach and the trans-areolar approach regarding postoperative infection and transient swallowing difficulties. Further, longer randomized trials are needed. [Mirghani H. WJCC, https://www.wjgnet.com, January 6, 2026, Volume 14, Issue 1]
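The pooled effect measure used above can be illustrated with a worked example: the odds ratio and its 95% confidence interval from a single 2x2 table via the log-odds method. The counts below are hypothetical and not taken from the included studies.

```python
import math

# Hypothetical 2x2 table of postoperative infections
a, b = 4, 96    # approach 1: infections / no infections
c, d = 3, 97    # approach 2: infections / no infections

odds_ratio = (a * d) / (b * c)
# standard error of log(OR) by the Woolf (log-odds) method
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
# The interval spans 1, so this single table shows no significant
# difference, mirroring the meta-analytic conclusion above.
```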
This study integrates multiple sources of data (transaction data, policy texts, public opinion data) with visualization techniques (such as heat maps, time-series trend charts, and 3D building brochures) to construct an analysis framework for the Chengdu real estate market. By using an Adaptive Neuro-Fuzzy Inference System (ANFIS) prediction model, spatial GIS (Geographic Information System) analysis, and interactive dashboards, this study reveals market differentiation, policy impacts, and changes in demand structure, thereby providing decision support for the government, enterprises, and homebuyers.
To address the issue of instability or even imbalance in the orientation and attitude control of quadrotor unmanned aerial vehicles (QUAVs) under random disturbances, this paper proposes a distributed anti-disturbance data-driven event-triggered fusion control method, which achieves efficient fault diagnosis while suppressing random disturbances and mitigating communication conflicts within the QUAV swarm. First, the impact of random disturbances on the UAV swarm is analyzed, and a model for the orientation and attitude control of QUAVs under stochastic perturbations is established, with the disturbance gain threshold determined. Second, a fault diagnosis system based on a high-gain observer is designed, constructing a fault gain criterion by integrating orientation and attitude information from the QUAVs. Subsequently, a model-free dynamic linearization-based data modeling (MFDLDM) framework is developed using model-free adaptive control, which efficiently fits the nonlinear control model of the QUAV swarm while reducing temporal constraints on control data. On this basis, a distributed data-driven event-triggered controller based on a staggered communication mechanism is constructed, which consists of an equivalent QUAV controller and an event-triggered controller and is able to reduce communication conflicts while suppressing the influence of random interference. Finally, by incorporating random disturbances into the controller, comparative experiments and physical validations are conducted on QUAV platforms, fully demonstrating the strong adaptability and robustness of the proposed distributed event-triggered fault-tolerant control system.
Wetting deformation in earth-rockfill dams is a critical factor influencing dam safety. Although numerous mathematical models have been developed to describe this phenomenon, most of them rely on empirical formulations and lack prior knowledge of model parameters, which is essential for Bayesian parameter inversion to enhance accuracy and reduce uncertainty. This study introduces a data-driven approach to establishing prior knowledge for earth-rockfill dams. Driving factors are utilized to determine the potential range of model parameters, and settlement changes within this range are calculated. The results are iteratively compared with actual monitoring data until the calculated range encompasses the observed data, thereby providing prior knowledge of the model parameters. The proposed method is applied to the right-bank earth-rockfill dam of Danjiangkou. Employing a Gibbs sample size of 30,000, the proposed method effectively calibrates the prior knowledge of the wetting model parameters, achieving a root mean square error (RMSE) of 5.18 mm for the settlement predictions. By comparison, the use of non-informative priors with sample sizes of 30,000 and 50,000 results in significantly larger RMSE values of 11.97 mm and 16.07 mm, respectively. Furthermore, the computational efficiency of the proposed method is demonstrated by an inversion computation time of 902 s for 30,000 samples, notably shorter than the 1026 s and 1558 s required for non-informative priors with 30,000 and 50,000 samples, respectively. These findings underscore the superior performance of the proposed approach: it improves predictive accuracy while enabling optimal parameter identification with reduced computational effort, providing a robust and efficient framework for advancing dam safety assessments.
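The range-calibration idea described above (widen a parameter interval until the settlements it produces bracket the monitoring data) can be sketched with a toy one-parameter settlement model. The linear model, the synthetic monitoring series, and the step size are all hypothetical; the paper's wetting model and Gibbs-based inversion are not reproduced.

```python
import numpy as np

def settlement(param, t):
    """Toy stand-in for the wetting model: settlement grows linearly in time."""
    return param * t

t = np.arange(1, 11)                       # monitoring epochs (hypothetical)
observed = 0.8 * t + 0.05 * np.sin(t)      # synthetic monitoring series

# widen the parameter interval until its envelope brackets all observations
lo, hi = 0.75, 0.76                        # initial guess for the prior range
while not ((settlement(lo, t) <= observed).all()
           and (settlement(hi, t) >= observed).all()):
    lo -= 0.01
    hi += 0.01

prior_range = (lo, hi)                     # calibrated prior for the parameter
```

The loop's exit condition guarantees that settlements computed at either end of `prior_range` bracket the observed series, which is exactly the stopping rule the abstract describes.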
Storm-enhanced density (SED) and the tongue of ionization (TOI) are key ionospheric storm-time structures whose rapid evolution and fine-scale variability remain challenging to capture with conventional empirical high-latitude drivers. In this study, we examine the May 10–11, 2024, superstorm using the Thermosphere–Ionosphere–Electrodynamics General Circulation Model (TIEGCM) with observation-constrained high-latitude forcing. Auroral precipitation parameters (energy flux and mean energy) are assimilated from the Defense Meteorological Satellite Program (DMSP) Special Sensor Ultraviolet Spectrographic Imager (SSUSI) using a multi-resolution Gaussian process (lattice kriging) approach, whereas high-latitude convection potentials are derived by assimilating Super Dual Auroral Radar Network (SuperDARN) observations with the Thomas and Shepherd (2018) model (TS18). For comparison, an additional simulation is performed using empirical models for both convection and auroral forcing. The results show that during the main phase of the May 10 storm, the data-driven simulation provides a more realistic depiction of the SED source region than the empirical model run, capturing its rapid intensification more clearly and reproducing its spatial location and structural features with higher fidelity. These improvements lead to a more accurate representation of its poleward extension into the polar cap that develops into the TOI. Above the ionospheric F2 peak over the SED source region, SuperDARN-constrained potentials generate stronger and more localized E×B drifts that dominate plasma uplift and drive its transport into the polar cap, although neutral winds and downward ambipolar diffusion partially offset these effects. Below the F2 peak, neutral winds and photochemical processes play a major role in shaping the spatial extent and intensity of the SED and TOI. These results highlight the role of observation-constrained high-latitude drivers in representing ionosphere–thermosphere responses during extreme storms and suggest their relevance for improving physical interpretation and model performance.
This paper addresses the three-dimensional (3-D) approach-angle-constrained cooperative guidance problem for speed-varying missiles against maneuvering targets. First, the guidance problem is formulated in a relative reference frame and a virtual control input is selected. Then, the cooperative guidance law is designed on the basis of a prediction-correction framework. The time-to-go under the baseline command is estimated by an efficient prediction method with a realistic aerodynamic model, and a biased command is developed by utilizing the time-to-go predictions to synchronize the different missiles' impact times. The design of the biased command is decoupled into the individual design of its direction and magnitude. It is proved that the designed cooperative guidance law makes the time-to-go consensus error converge to zero before interception. Finally, the designed guidance law is validated through a series of numerical simulations.
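The consensus requirement above (the time-to-go estimates of all missiles converging before interception) can be illustrated with a toy averaging protocol. The discrete dynamics, gain, and initial estimates below are hypothetical and much simpler than the paper's biased guidance command.

```python
# Toy consensus sketch: each missile nudges its time-to-go estimate toward
# the group average, driving the consensus error geometrically to zero.
tgo = [30.0, 33.0, 28.5]              # initial time-to-go estimates (s, hypothetical)
gain = 0.5                            # consensus gain (hypothetical)

for _ in range(50):
    avg = sum(tgo) / len(tgo)
    tgo = [x + gain * (avg - x) for x in tgo]

consensus_error = max(tgo) - min(tgo)
# The group average is invariant under this update, so all estimates
# converge to the mean of the initial values.
```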
The Wufeng–Longmaxi Formation derives its name from the Upper Ordovician Wufeng Formation and the Lower Silurian Longmaxi Formation, found in sequence in the Sichuan Basin. This formation hosts rich shale gas reservoirs, and its shale gas enrichment patterns are examined in this study using data from 1197 shale samples collected from 14 wells. Eight parameters, five basic and three key, are assessed for each sample. The five basic parameters comprise burial depth and the contents of four mineral types (quartz, clay, carbonate, and other minerals); the three key parameters, representing shale gas enrichment, are total organic carbon (TOC) content, porosity, and gas content. SHapley Additive exPlanations (SHAP) analysis, which originated in game theory, is used here in an interpretable machine learning framework to address issues of heterogeneous data structure, noisy relationships, and multi-objective optimization. An evaluation of the rankings, contribution values, and change conditions of these parameters offers new quantitative insights into shale gas enrichment patterns. A quantitative analysis of the relationships between the datasets identifies the primary factors controlling the TOC, porosity, and gas content of shale gas reservoirs. The results show that TOC and porosity jointly influence gas content; mineral content has a significant impact on both TOC and porosity; and burial depth governs porosity, which in turn affects the conditions under which shale gas is preserved. Input parameter thresholds are also determined and provide a basis for establishing quantitative criteria to evaluate shale gas enrichment. The predictive accuracy of the model is significantly improved by the step-wise addition of two input parameters, TOC and porosity, separately and together. Thus, the game-theoretic method in big-data-driven analysis uses a combination of TOC and porosity to evaluate gas content with encouraging results, suggesting that these are the key parameters indicating source rock and reservoir properties.
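The game-theoretic attribution behind SHAP can be shown exactly on a tiny example: Shapley values for a toy "gas content" payoff over three inputs (TOC, porosity, depth). The payoff function below is hypothetical, chosen only so that TOC and porosity interact while depth contributes little, echoing the qualitative findings above.

```python
from itertools import combinations
from math import factorial

players = ("TOC", "porosity", "depth")

def value(coalition):
    """Toy payoff: TOC and porosity contribute and interact; depth adds little."""
    v = 0.0
    if "TOC" in coalition: v += 3.0
    if "porosity" in coalition: v += 2.0
    if "TOC" in coalition and "porosity" in coalition: v += 1.0
    if "depth" in coalition: v += 0.5
    return v

def shapley(player):
    """Exact Shapley value: weighted average marginal contribution."""
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (value(set(S) | {player}) - value(S))
    return total

phi = {p: shapley(p) for p in players}
# Efficiency property: the Shapley values sum to the grand-coalition payoff.
```

SHAP applies this same averaging over feature coalitions to a fitted model's predictions instead of a hand-written payoff, with tree-based shortcuts replacing the exponential enumeration.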
A Cu-1.9Ni-1.9Co-0.9Si (mass fraction, %) alloy with high strength and electrical conductivity was designed by the cluster formula approach. The microstructure evolution of the alloy during thermomechanical treatment was systematically investigated, and its strengthening mechanisms and electrical conductivity were discussed in detail. The optimal thermomechanical treatment process was as follows: solid solution → 80% cold rolling → aging (450 °C, 4 h) → 50% cold rolling → aging (400 °C, 4 h). The designed alloy achieved excellent comprehensive properties, with a microhardness of HV 260, a yield strength of 843 MPa, a tensile strength of 884 MPa, and an electrical conductivity of 42.6% IACS. Compared to direct aging, the multi-stage thermomechanical treatment produced refined grains, a high density of dislocations, and accelerated precipitation of (Ni,Co)_(2)Si precipitates. The high strength was mainly attributed to the combined effect of dislocation strengthening, work hardening, and sub-grain strengthening, while good electrical conductivity was maintained through the precipitation of a large number of nanoparticles.
The key challenge in the preparation of perovskite solar cells (PSCs) is to enhance the reproducibility of PSC manufacturing, particularly by better controlling multiple high-dimensional process parameters. This study proposes a machine learning (ML) approach to efficiently predict and analyze perovskite film fabrication processes. By evaluating five classic ML algorithms on 130 experimental data sets of blade-coating parameters, the Random Forest (RF) model was identified as the most effective, enabling rapid prediction of over 100,000 parameter sets in just 10 minutes, equivalent to 3 years of manual experimentation. The RF model demonstrated strong predictive accuracy, with an R^(2) close to 0.8. This approach led to the identification of optimal process parameter combinations, significantly improving the reproducibility of PSCs and reducing performance variance by approximately threefold, thereby advancing the development of scalable manufacturing processes.
Photoacoustic imaging of lipids is intrinsically constrained by the feeble nature of endogenous lipid signals, posing a persistent sensitivity challenge that demands innovative solutions. Although adopting high-efficiency excitation and detection elements may improve imaging sensitivity to a certain extent, such elements are inevitably subject to various limitations in practical applications, particularly during in vivo imaging and endoscopic imaging. In this study, we propose a multi-combinatorial approach to enhance the sensitivity of lipid photoacoustic imaging. The approach involves wavelet transform processing of one-dimensional A-line signals, gradient-based denoising of two-dimensional B-scan images, and finally, three-dimensional spatial weighted averaging of the data processed in the previous two steps. This method not only significantly improves the signal-to-noise ratio (SNR) in distinguishable feature regions of the image by around 10 dB, but also efficiently extracts weak signals with no distinct features in the original image. After processing with this method, images acquired under a single scan were compared with those obtained under multiple scans. The results showed highly consistent image features, with the structural similarity index increasing from 0.2 to 0.8, confirming the accuracy and reliability of the multi-combinatorial approach.
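One reason the averaging step above helps is that averaging N independent noisy signals improves SNR by roughly 10·log10(N) dB, so about 10 dB for N = 10. The sketch below checks this on a synthetic A-line signal; the signal shape, noise level, and scan count are hypothetical, and the paper's wavelet and gradient denoising stages are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 5 * t)          # hypothetical clean A-line signal

def snr_db(clean, noisy):
    """SNR in dB: signal power over residual-noise power."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

single = signal + 0.5 * rng.standard_normal(t.size)       # one noisy scan
scans = signal + 0.5 * rng.standard_normal((10, t.size))  # ten independent scans
averaged = scans.mean(axis=0)

gain_db = snr_db(signal, averaged) - snr_db(signal, single)
# gain_db lands near 10*log10(10) = 10 dB
```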
This work contributes to the theoretical foundation for pricing in data markets and offers practical insights for managing digital data exchanges in the era of big data. We propose a structured pricing model for data exchanges transitioning from quasi-public to market-oriented operations. To address the complex dynamics among data exchanges, suppliers, and consumers, the authors develop a three-stage Stackelberg game framework. In this model, the data exchange acts as a leader setting transaction commission rates, suppliers are intermediate leaders determining unit prices, and consumers are followers making purchasing decisions. Two pricing strategies are examined: the Independent Pricing Approach (IPA) and the novel Perfectly Competitive Pricing Approach (PCPA), which accounts for competition among data providers. Using backward induction, the study derives subgame-perfect equilibria and proves the existence and uniqueness of Stackelberg equilibria under both approaches. Extensive numerical simulations demonstrate that the PCPA enhances data consumers' utility, encourages supplier competition, increases transaction volume, and improves the overall profitability and sustainability of data exchanges. Social welfare analysis further confirms the PCPA's superiority in promoting efficient and fair data markets.
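The backward-induction logic above can be sketched on a two-level slice of the game: solve the follower's best response first, then optimize the leader's decision against it. The linear demand q(p) = max(0, 1 - p) and the grid search are hypothetical simplifications of the paper's three-stage model.

```python
def follower_demand(p):
    """Follower's best response: buy q(p) = max(0, 1 - p) at price p."""
    return max(0.0, 1.0 - p)

def leader_revenue(p):
    """Leader anticipates the follower's response when choosing the price."""
    return p * follower_demand(p)

# backward induction, stage 1 last: the leader's problem by grid search
prices = [i / 1000 for i in range(1001)]
p_star = max(prices, key=leader_revenue)
q_star = follower_demand(p_star)
# With p*(1 - p) the equilibrium price is 1/2 and quantity is 1/2.
```

In the paper's full model a commission-setting exchange sits above this pair, so the induction runs through three stages instead of two, but the solve-the-last-stage-first principle is the same.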
Funding: National Natural Science Foundation of China (Project No. 12371428); Projects of the Provincial College Students' Innovation and Training Program in 2024 (Project Nos. S202413023106, S202413023110).
Abstract: This paper focuses on the numerical solution of a tumor growth model under a data-driven approach. Based on the inherent laws of the data and reasonable assumptions, an ordinary differential equation model for tumor growth is established. Nonlinear fitting is employed to obtain the optimal parameter estimates for the mathematical model, and the numerical solution is carried out using MATLAB. By comparing the clinical data with the simulation results, good agreement is achieved, which verifies the rationality and feasibility of the model.
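The specific tumor growth equation is not given in the abstract; a minimal sketch of the numerical-solution step, assuming the commonly used logistic model dV/dt = r·V·(1 − V/K) with illustrative parameter values, is:

```python
# Sketch only: logistic growth is an assumed stand-in for the paper's ODE;
# r (growth rate), K (carrying capacity), and v0 are illustrative values
# that a nonlinear fit to clinical data would normally supply.

def simulate_logistic(v0, r, k, t_end, dt=0.01):
    """Integrate dV/dt = r*V*(1 - V/K) with the explicit Euler method."""
    v, t = v0, 0.0
    while t < t_end - 1e-12:
        v += dt * r * v * (1.0 - v / k)
        t += dt
    return v

# Growth saturates near the carrying capacity K for large t.
final = simulate_logistic(v0=0.1, r=0.5, k=10.0, t_end=40.0)
```

A production solver would use `scipy.integrate.solve_ivp` rather than hand-rolled Euler; the loop above is kept explicit to show the integration step.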
Abstract: The permanent magnet synchronous motor (PMSM) is widely used in alternating current servo systems, as it provides high efficiency, high power density, and a wide speed regulation range. The servo system is placing ever higher demands on its control performance. The model predictive control (MPC) algorithm is emerging as a potential high-performance motor control algorithm due to its capability of handling multiple-input and multiple-output variables and imposed constraints. For MPC used in the PMSM control process, a nonlinear disturbance caused by changes in electromagnetic parameters or load disturbances may lead to a mismatch between the nominal model and the controlled object, which causes prediction error and thus affects the dynamic stability of the control system. This paper proposes a data-driven MPC strategy in which historical data within an appropriate range are utilized to eliminate the impact of parameter mismatch and further improve the control performance. The stability of the proposed algorithm is proved, and simulations demonstrate its feasibility. Compared with the classical MPC strategy, the superiority of the algorithm has also been verified.
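The receding-horizon idea underlying MPC can be sketched on a toy scalar plant; this is not the paper's PMSM model or its data-driven compensation, just the re-optimize-at-every-step mechanism, with invented plant and cost parameters:

```python
# Minimal MPC sketch on x[k+1] = a*x[k] + b*u[k] with one-step quadratic
# cost (x')^2 + lam*u^2. The PMSM dynamics, horizon length, and input
# constraints of the paper are intentionally omitted.

def mpc_step(x, a, b, lam):
    """Closed-form minimizer of (a*x + b*u)^2 + lam*u^2 over u."""
    return -a * b * x / (b * b + lam)

def run(x0, a=1.2, b=1.0, lam=0.1, steps=20):
    x = x0
    for _ in range(steps):
        u = mpc_step(x, a, b, lam)   # re-solve the optimization each step
        x = a * x + b * u
    return x

# The open-loop plant is unstable (|a| > 1); the controller drives x to 0,
# since the closed loop is x[k+1] = (a*lam / (b*b + lam)) * x[k].
final_state = run(5.0)
```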
Abstract: This paper aims to develop machine learning algorithms to classify electronic articles related to this phenomenon by retrieving information and topic modelling. The methodology of this study is categorized into three phases: the Text Classification Approach (TCA), the Proposed Algorithms Interpretation (PAI), and finally, the Information Retrieval Approach (IRA). The TCA reflects the text preprocessing pipeline, yielding a clean corpus. The Global Vectors for Word Representation (GloVe) pre-trained model, FastText, Term Frequency-Inverse Document Frequency (TF-IDF), and Bag-of-Words (BOW) feature extraction methods have been interpreted in this research. The PAI applies the Bidirectional Long Short-Term Memory (Bi-LSTM) and Convolutional Neural Network (CNN) to classify the COVID-19 news. Again, the IRA explains the mathematical interpretation of Latent Dirichlet Allocation (LDA), used for topic modelling in Information Retrieval (IR). In this study, 99% accuracy was obtained by performing K-fold cross-validation on Bi-LSTM with GloVe. A comparative analysis between deep learning and machine learning based on feature extraction and computational complexity has been performed in this research. Furthermore, some text analyses and the most influential aspects of each document have been explored in this study. We have utilized Bidirectional Encoder Representations from Transformers (BERT) as a deep learning mechanism in our model training, but its result was not found satisfactory. However, the proposed system can be adapted to real-time news classification of COVID-19.
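Of the feature extractors listed, TF-IDF is simple enough to sketch with the standard library; note that real pipelines use a library vectorizer, and the un-smoothed convention here (idf = ln(N / df)) is only one of several common variants:

```python
# Stdlib-only TF-IDF sketch over pre-tokenized documents; the toy corpus
# below is invented for illustration.
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per tokenized document."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * math.log(n / df[t]) for t, c in tf.items()})
    return out

docs = [["covid", "news"], ["covid", "vaccine"], ["sports", "news"]]
weights = tf_idf(docs)
# "covid" appears in 2 of 3 docs (idf = ln(3/2)); "vaccine" in 1 of 3 (idf = ln 3).
```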
Funding: Funded by the National Natural Science Foundation of China (42374014, 42004014).
Abstract: Due to signal reflection and diffraction, site-specific unmodeled errors such as the multipath effect and non-line-of-sight reception are significant error sources in Global Navigation Satellite Systems, since they cannot be easily mitigated. However, how to characterize and model the internal mechanisms and external influences of these site-specific unmodeled errors is still to be investigated. Therefore, we propose a method for characterizing and modeling site-specific unmodeled errors under reflection and diffraction using a data-driven approach. Specifically, we first consider all the popular potential features that generate the site-specific unmodeled errors. We then use random forest regression to comprehensively analyze the correlations between the site-specific unmodeled errors and the potential features. We finally characterize and model the site-specific unmodeled errors. Two consecutive 7-day datasets dominated by signal reflection and diffraction were collected. The results show that there are significant differences in the correlations with potential features. They are highly related to the application scenarios, observation types, and satellite types. Notably, the innovation vector often shows a strong correlation with the code site-specific unmodeled errors. The phase site-specific unmodeled errors have high correlations with elevation, azimuth, the number of visible satellites, and between-frequency differenced phase observations. In the environments of reflection and diffraction, the sum of the correlations of the top six potential features can reach approximately 88.5% and 87.7%, respectively. Meanwhile, these correlations are stable across different observation types and satellite types. With the integration of a transformer model with the random forest method, a high-precision unmodeled error prediction model is established, demonstrating the necessity of including multiple features for accurate and efficient characterization and modeling of site-specific unmodeled errors.
Funding: Supported in part by the Major Project of the National Social Science Fund of China under Grant No. 23&ZD050; in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 72402031 and 71971052; in part by the Open Project Program of the State Key Laboratory of Massive Personalized Customization System and Technology under Grant No. H&C-MPC-2023-04-03; in part by the Fundamental Research Funds for the Central Universities under Grant No. N25ZJL015; and by the Joint Funds of the Natural Science Foundation of Liaoning under Grant No. 2023-BSBA-139.
Abstract: Manufacturers are striving to achieve higher energy efficiency without compromising production performance and quality standards. Parallel-serial structures, commonly found in modern production systems, offer a unique balance of flexibility and efficiency by combining parallel processes with sequential workflows. However, their inherent complexity poses significant challenges, particularly in optimizing energy efficiency and ensuring consistent product quality. In data-driven manufacturing environments, it is not clear how to leverage production data to enhance the energy efficiency of production systems. Therefore, this paper studies a data-driven approach to improving energy efficiency in parallel-serial production lines with product quality issues. Firstly, the authors developed a data-driven performance analysis method to evaluate the effects of disruption events, such as energy-saving control actions, machine breakdowns, and product quality failures, on system throughput and energy consumption. Secondly, a periodic energy-saving control method was developed to enhance system energy efficiency using a non-linear programming model. To reduce complexity and improve computational efficiency, the model was simplified by leveraging the intrinsic properties of parallel-serial production lines and solved using an adaptive genetic algorithm. Finally, the effectiveness of the proposed data-driven approach was validated through case studies, providing actionable insights into achieving data-driven energy efficiency optimization in complex production systems.
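A bare-bones genetic algorithm conveys how such a non-linear program can be solved heuristically; the adaptive operators of the paper are omitted, and the objective is a toy one (maximize −(x − 3)², optimum at x = 3):

```python
# Minimal GA sketch: truncation selection, arithmetic crossover, Gaussian
# mutation. Population size, mutation scale, and the objective are invented.
import random

def ga(fitness, lo, hi, pop_size=30, gens=60, mut=0.3, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                 # arithmetic crossover
            child += rng.gauss(0.0, mut)        # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children                # elitist: best survivors kept
    return max(pop, key=fitness)

best = ga(lambda x: -(x - 3.0) ** 2, lo=-10.0, hi=10.0)
```

Because the fittest parents survive unmutated each generation, the best fitness is non-decreasing and the search settles near the optimum.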
Abstract: Building integrated energy systems (BIESs), which account for a significant proportion of global energy consumption, are pivotal for enhancing energy efficiency. Two key barriers that reduce BIES operational efficiency lie in renewable generation uncertainty and the operational non-convexity of combined heat and power (CHP) units. To this end, this paper proposes a soft actor-critic (SAC) algorithm to solve the scheduling problem of a BIES, which overcomes the model non-convexity and shows advantages in robustness and generalization. This paper also adopts a temporal fusion transformer (TFT) to enhance the optimal solution of the SAC algorithm by forecasting renewable generation and energy demand. The TFT can effectively capture complex temporal patterns and dependencies that span multiple steps. Furthermore, its forecasting results are interpretable due to the employment of a self-attention layer, which assists in more trustworthy decision-making in the SAC algorithm. The proposed hybrid data-driven approach integrating the TFT and SAC algorithms, i.e., the TFT-SAC approach, is trained and tested on a real-world dataset to validate its superior performance in reducing energy cost and computational time compared with benchmark approaches. The generalization performance of the scheduling policy, as well as a sensitivity analysis, is examined in the case studies.
Funding: Supported by the Creative Materials Discovery Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT, and Future Planning (2015M3D1A1069705, 2021R1A2C1011642, and 2021R1A2C1009144), and partly by the Alchemist Project (20012196) and the Digital Manufacturing Platform (N0002598) funded by MOTIE, Korea.
Abstract: The prediction of the excitation band edge wavelength (EBEW) and peak emission wavelength (PEW) for Eu^(2+)-activated phosphors is intricate in practice, although a theoretical interpretation has been well established. A data-driven approach could be of great help for EBEW and PEW prediction. We collected 91 Eu^(2+)-activated phosphors whose host structures exhibit a single activator site and whose EBEW and PEW are available at the critical activator concentration. We extracted 29 descriptors (input features) that capture the elemental and structural traits of phosphor hosts, and set up an integrated machine-learning (ML) platform consisting of 18 ML algorithms that allowed prediction of the EBEW and PEW as well as the DFT-calculated band gap (E_g). The acquired dataset involving 91 phosphors was insufficient for the 29-input-feature problem, and real-world data collected from the literature have a so-called dirty nature due to inaccurate, unstandardized experiments. Despite an unavoidable paucity of data and the dirty-data problems of real-world data-based ML implementation, we obtained acceptable holdout test results for PEW predictions, such as R^(2) > 0.6, MSE < 0.02, and test_R^(2)/training_R^(2) > 0.77, for four ML algorithms. The EBEW and E_g predictions returned slightly better test results than these PEW examples.
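The holdout metrics quoted above (R², MSE) have standard definitions that can be computed directly; the predicted/true wavelength values in this stdlib-only sketch are made up for illustration:

```python
# MSE and coefficient of determination (R^2) from first principles.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """1 - SS_res/SS_tot; equals 1 for a perfect fit, can go negative."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [450.0, 520.0, 610.0, 470.0]   # invented "measured" PEWs, nm
y_pred = [455.0, 515.0, 600.0, 480.0]   # invented model predictions, nm
```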
Funding: Supported by the National Natural Science Foundation of China under Grant Number U2341215 and the China Postdoctoral Science Foundation under Grant Number 2024M764224.
Abstract: To address the complex coupling between aerodynamic characteristics and guidance control for morphing flight missiles, this study proposes a data-driven approach to integrated adaptive morphing and guidance. Firstly, an aerodynamic surrogate model is constructed using a fully connected neural network (FCNN), mapping the configuration parameters to aerodynamic parameters. Secondly, an adaptive physical parameters optimization network (PPON) is developed to optimize aerodynamic characteristics based on predictions from the aerodynamic surrogate model. Thirdly, an integrated morphing and guidance model is derived by applying the proximal policy optimization (PPO) algorithm from deep reinforcement learning (DRL), embedded with the adaptive aerodynamic optimization model. Eventually, the proposed integrated approach is applied to the guidance task of a morphing cruise missile with variable camber wings. Simulation results demonstrate that the integrated guidance model significantly enhances aerodynamic performance and generates more continuous guidance commands within approximately 4.3 s, outperforming the deep Q-network (DQN) algorithm under morphing flight conditions. Moreover, compared to the PPO- and DQN-based guidance laws without morphing flight conditions, the integrated model improves both guidance accuracy and terminal kinetic energy. Furthermore, the integrated guidance model, trained on stationary targets, remains effective for engaging moving and maneuvering targets, showcasing its robust generalization capability.
Funding: Supported by the National Natural Science Foundation of China (No. 12372045) and the National Key Research and Development Program of China (Nos. 2023YFC2205900, 2023YFC2205901).
Abstract: This paper solves the problem of model-free dual-arm space robot maneuvering after non-cooperative target capture under high control quality requirements. The explicit system model is unavailable, and the maneuvering mission is disturbed by measurement noise and the target's adversarial behavior. To address these problems, a model-free Combined Adaptive-length Data-driven Predictive Controller (CADPC) is proposed. It consists of a separated subsystem identification method and a combined predictive control strategy. The subsystem identification method employs an adaptive data length, thereby reducing sensitivity to undetermined measurement noises and disturbances. Based on the subsystem identification, the combined predictive controller is established, reducing computational resources. The stability of the CADPC is rigorously proven using the Input-to-State Stability (ISS) theorem and the small-gain theorem. Simulations demonstrate that the CADPC effectively handles model-free space robot post-capture operation in the presence of significant disturbances, state measurement noise, and control input errors. It achieves improved steady-state accuracy, reduced steady-state control consumption, and minimized control input chattering.
Abstract: BACKGROUND: Due to the increasing rate of thyroid nodule diagnosis and the desire to avoid an unsightly cervical scar, remote thyroidectomies were invented and are increasingly performed. The transoral endoscopic thyroidectomy vestibular approach and the trans-areolar approach (TAA) are the two most commonly used remote approaches. No previous meta-analysis has compared postoperative infections and swallowing difficulties between the two procedures. AIM: To compare these outcomes among patients undergoing lobectomy for unilateral thyroid carcinoma or a benign thyroid nodule. METHODS: We searched PubMed MEDLINE, Google Scholar, and the Cochrane Library from the date of the first published article up to August 2025. The terms used were transoral thyroidectomy vestibular approach, trans-areolar thyroidectomy, scarless thyroidectomy, remote thyroidectomy, infections, postoperative, inflammation, dysphagia, and swallowing difficulties. We identified 130 studies; of them, 30 full texts were screened and only six studies were included in the final meta-analysis. RESULTS: Postoperative infections were not different between the two approaches: odds ratio = 1.33, 95% confidence interval: 0.50-3.53, χ² = 1.92, and P-value for overall effect = 0.57. Similarly, transient swallowing difficulty was not different between the two forms of surgery: odds ratio = 0.91, 95% confidence interval: 0.35-2.40, χ² = 1.32, and P-value for overall effect = 0.85. CONCLUSION: No significant statistical differences were evident between the transoral endoscopic thyroidectomy vestibular approach and the trans-areolar approach regarding postoperative infection and transient swallowing difficulties. Further, longer randomized trials are needed. (Mirghani H. Infections and swallowing difficulty in scarless thyroidectomy. WJCC, https://www.wjgnet.com, January 6, 2026, Volume 14, Issue 1.)
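The pooled statistics above come from standard 2×2 odds-ratio machinery; this stdlib sketch computes a single study's odds ratio with a Woolf (log-scale) 95% confidence interval from hypothetical event counts, not the meta-analysis's actual data:

```python
# a, b = events / non-events in group 1; c, d = events / non-events in group 2.
# Counts below are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (a*d)/(b*c) with a Woolf confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 90, 8, 92)
# The interval straddles 1 here, i.e. no significant difference between arms.
```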
Funding: Chengdu City Philosophy and Social Sciences Research Center, "Artificial Intelligence + Urban Communication" Theory and Application Research Center project "Chengdu real estate vertical market public opinion data visualization research" (Project No. RZCC2025017).
Abstract: This study integrates multiple sources of data (transaction data, policy texts, public opinion data) with visualization techniques (such as heat maps, time-series trend charts, and 3D building brochures) to construct an analysis framework for the Chengdu real estate market. By using an Adaptive Neuro-Fuzzy Inference System (ANFIS) prediction model, spatial GIS (Geographic Information System) analysis, and interactive dashboards, this study reveals market differentiation, policy impacts, and changes in demand structure, thereby providing decision support for the government, enterprises, and homebuyers.
Funding: Supported in part by the National Natural Science Foundation of China, Grant/Award Number: 62003267; the Key Research and Development Program of Shaanxi Province, Grant/Award Number: 2023-GHZD-33; and the Open Project of the State Key Laboratory of Intelligent Game, Grant/Award Number: ZBKF-23-05.
Abstract: To address the issue of instability or even imbalance in the orientation and attitude control of quadrotor unmanned aerial vehicles (QUAVs) under random disturbances, this paper proposes a distributed anti-disturbance data-driven event-triggered fusion control method, which achieves efficient fault diagnosis while suppressing random disturbances and mitigating communication conflicts within the QUAV swarm. First, the impact of random disturbances on the UAV swarm is analyzed, and a model for orientation and attitude control of QUAVs under stochastic perturbations is established, with the disturbance gain threshold determined. Second, a fault diagnosis system based on a high-gain observer is designed, constructing a fault gain criterion by integrating orientation and attitude information from the QUAVs. Subsequently, a model-free dynamic linearization-based data modeling (MFDLDM) framework is developed using model-free adaptive control, which efficiently fits the nonlinear control model of the QUAV swarm while reducing temporal constraints on control data. On this basis, this paper constructs a distributed data-driven event-triggered controller based on a staggered communication mechanism, which consists of an equivalent QUAV controller and an event-triggered controller, and is able to reduce communication conflicts while suppressing the influence of random interference. Finally, by incorporating random disturbances into the controller, comparative experiments and physical validations are conducted on the QUAV platforms, fully demonstrating the strong adaptability and robustness of the proposed distributed event-triggered fault-tolerant control system.
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFC3209504), the Natural Science Foundation of Wuhan (Grant No. 2024040801020271), and the Fundamental Research Funds for Central Public Welfare Research Institutes (Grant No. CKSF2025718/YT).
Abstract: Wetting deformation in earth-rockfill dams is a critical factor influencing dam safety. Although numerous mathematical models have been developed to describe this phenomenon, most of them rely on empirical formulations and lack prior knowledge of model parameters, which is essential for Bayesian parameter inversion to enhance accuracy and reduce uncertainty. This study introduces a data-driven approach to establishing prior knowledge for earth-rockfill dams. Driving factors are utilized to determine the potential range of model parameters, and settlement changes within this range are calculated. The results are iteratively compared with actual monitoring data until the calculated range encompasses the observed data, thereby providing prior knowledge of the model parameters. The proposed method is applied to the right-bank earth-rockfill dam of Danjiangkou. Employing a Gibbs sample size of 30,000, the proposed method effectively calibrates the prior knowledge of the wetting model parameters, achieving a root mean square error (RMSE) of 5.18 mm for the settlement predictions. By comparison, the use of non-informative priors with sample sizes of 30,000 and 50,000 results in significantly larger RMSE values of 11.97 mm and 16.07 mm, respectively. Furthermore, the computational efficiency of the proposed method is demonstrated by an inversion computation time of 902 s for 30,000 samples, which is notably shorter than the 1026 s and 1558 s required for non-informative priors with 30,000 and 50,000 samples, respectively. These findings demonstrate that the proposed method not only improves predictive accuracy but also enhances computational efficiency, enabling optimal parameter identification with reduced computational effort. This approach provides a robust and efficient framework for advancing dam safety assessments.
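The Bayesian inversion above relies on MCMC sampling (Gibbs in the paper); a stdlib Metropolis sketch for a single parameter conveys the same posterior-sampling idea. The data, flat prior, and step size here are invented, and Metropolis is substituted for Gibbs purely for brevity:

```python
# One-parameter Metropolis sampler with a flat prior and Gaussian likelihood.
import math, random

def log_post(theta, data, sigma=1.0):
    """Log-posterior up to a constant: flat prior, Gaussian observations."""
    return -sum((x - theta) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(data, n=20000, step=0.5, seed=0):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n):
        prop = theta + rng.uniform(-step, step)
        # Accept with probability min(1, posterior ratio), on the log scale.
        if math.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
            theta = prop
        samples.append(theta)
    return samples

data = [4.8, 5.1, 5.3, 4.9, 5.2]          # invented "settlement" observations
post = metropolis(data)
post_mean = sum(post[5000:]) / len(post[5000:])   # discard burn-in
```

With a flat prior, the posterior mean should settle near the sample mean of the data (about 5.06 here).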
Funding: This work was supported by the Shandong Provincial Natural Science Foundation (Grant No. ZR2022JQ18), the National Natural Science Foundation of China (NNSFC) Youth Program (Grant No. 42304168), the National Key R&D Program of China (Grant No. 2022YFF0504400), and the NNSFC (Grant Nos. 42188101 and 42174210).
Abstract: Storm-enhanced density (SED) and the tongue of ionization (TOI) are key ionospheric storm-time structures whose rapid evolution and fine-scale variability remain challenging to capture with conventional empirical high-latitude drivers. In this study, we examine the May 10-11, 2024, superstorm using the Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIEGCM) with observation-constrained high-latitude forcing. Auroral precipitation parameters (energy flux and mean energy) are assimilated from the Defense Meteorological Satellite Program (DMSP) Special Sensor Ultraviolet Spectrographic Imager (SSUSI) using a multi-resolution Gaussian process (Lattice Kriging) approach, whereas high-latitude convection potentials are derived by assimilating Super Dual Auroral Radar Network (SuperDARN) observations with the Thomas and Shepherd (2018) model (TS18). For comparison, an additional simulation is performed using empirical models for both convection and auroral forcing. The results show that during the main phase of the May 10 storm, the data-driven simulation provides a more realistic depiction of the SED source region than the empirical model run, capturing its rapid intensification more clearly and reproducing its spatial location and structural features with higher fidelity. These improvements lead to a more accurate representation of its poleward extension into the polar cap that develops into the TOI. Above the ionospheric F2 peak over the SED source region, SuperDARN-constrained potentials generate stronger and more localized E×B drifts that dominate plasma uplift and drive its transport into the polar cap, although neutral winds and downward ambipolar diffusion partially offset these effects. Below the F2 peak, neutral winds and photochemical processes play a major role in shaping the spatial extent and intensity of the SED and TOI. These results highlight the role of observation-constrained high-latitude drivers in representing ionosphere-thermosphere responses during extreme storms and suggest their relevance for improving physical interpretation and model performance.
Funding: Supported by the Key R&D Program (Soft Science Project) of Shandong Province, China (No. 2020CXGC011502) and the National Natural Science Foundation of China (Nos. 62273043 and 62103049).
Abstract: This paper addresses the three-dimensional (3-D) approach-angle-constrained cooperative guidance problem for speed-varying missiles against maneuvering targets. First, the guidance problem is formulated in a relative reference frame and a virtual control input is selected. Then, the cooperative guidance law is designed on the basis of a prediction-correction framework. The time-to-go under the baseline command is estimated by an efficient prediction method with a realistic aerodynamic model, and a biased command is developed by utilizing the time-to-go predictions to synchronize different missiles' impact times. The design of the biased command is decoupled into the individual design of its direction and magnitude. It is proved that the designed cooperative guidance law can make the time-to-go consensus error converge to zero before interception. Finally, the designed guidance law is validated through a series of numerical simulations.
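At its core, the impact-time synchronization above drives the missiles' time-to-go estimates to a common value; this toy discrete-time average-consensus sketch (all-to-all communication, invented gain k) illustrates that consensus-error convergence, not the paper's guidance law itself:

```python
# Each agent moves its time-to-go estimate toward the group mean; the
# spread (consensus error) shrinks geometrically with factor (1 - k),
# while the mean is preserved at every step.

def consensus_step(t_go, k=0.2):
    mean = sum(t_go) / len(t_go)
    return [t + k * (mean - t) for t in t_go]

t_go = [12.0, 15.0, 9.0, 14.0]   # invented time-to-go estimates, seconds
for _ in range(50):
    t_go = consensus_step(t_go)
spread = max(t_go) - min(t_go)
```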
Funding: Funded by the Technical Development (Entrusted) Project of the Science and Technology Department of SINOPEC (Grant No. P23240-4) and the National Natural Science Foundation of China (Grant Nos. 42172165, 42272143, and 2025ZD1403901-05).
Abstract: The Wufeng-Longmaxi Formation derives its name from the Upper Ordovician Wufeng Formation and the Lower Silurian Longmaxi Formation, found in sequence in the Sichuan Basin. This formation hosts rich shale gas reservoirs, and its shale gas enrichment patterns are examined in this study using data from 1197 shale samples collected from 14 wells. Five basic and three key parameters, eight in all, are assessed for each sample. The five basic parameters include burial depth and the contents of four mineral types: quartz, clay, carbonate, and other minerals; the three key parameters, representing shale gas enrichment, are total organic carbon (TOC) content, porosity, and gas content. The SHapley Additive exPlanations (SHAP) analysis, which originated in game theory, is used here in an interpretable machine learning framework to address issues of heterogeneous data structure, noisy relationships, and multi-objective optimization. An evaluation of the ranking, contribution values, and conditions of change for these parameters offers new quantitative insights into shale gas enrichment patterns. A quantitative analysis of the relationships between datasets identifies the primary factors controlling the TOC, porosity, and gas content of shale gas reservoirs. The results show that TOC and porosity jointly influence gas content; mineral content has a significant impact on both TOC and porosity; and burial depth governs porosity which, in turn, affects the conditions under which shale gas is preserved. Input parameter thresholds are also determined and provide a basis for establishing quantitative criteria to evaluate shale gas enrichment. The predictive accuracy of the model used in this study is significantly improved by the step-wise addition of two input parameters, namely TOC and porosity, separately and together. Thus, the game-theory method in big-data-driven analysis uses a combination of TOC and porosity to evaluate gas content with encouraging results, suggesting that these are the key parameters that indicate source rock and reservoir properties.
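SHAP attributes a model's output to its inputs via Shapley values; for a toy three-feature model the exact definition can be evaluated by enumerating feature subsets. The additive "gas content" value function below is invented for illustration and is far simpler than the paper's ML model:

```python
# Exact Shapley values: weighted marginal contributions over all subsets.
import math
from itertools import combinations

def shapley(players, value):
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                total += w * (value(set(s) | {p}) - value(set(s)))
        phi[p] = total
    return phi

# Toy additive model: TOC contributes 3, porosity 2, depth 1.
contrib = {"TOC": 3.0, "porosity": 2.0, "depth": 1.0}
phi = shapley(list(contrib), lambda s: sum(contrib[f] for f in s))
```

For an additive value function each feature's Shapley value equals its own contribution, and the values sum to the full-coalition output (the efficiency property).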
Funding: The authors acknowledge financial support from the National Natural Science Foundation of China (No. U2202255), the Hunan Provincial Natural Science Foundation of China (No. 2024JJ2076), and the Key Research and Development Program of Ningbo, China (No. 2023Z092).
Abstract: A Cu-1.9Ni-1.9Co-0.9Si (mass fraction, %) alloy with high strength and electrical conductivity was designed by a cluster formula approach. The microstructure evolution of the alloy during thermomechanical treatment was systematically investigated, and the strengthening mechanism and electrical conductivity of the alloy were discussed in detail. The optimal thermomechanical treatment process was as follows: solid solution → 80% cold rolling → aging (450 ℃, 4 h) → 50% cold rolling → aging (400 ℃, 4 h). The designed alloy achieved excellent comprehensive properties, with a microhardness of HV 260, a yield strength of 843 MPa, a tensile strength of 884 MPa, and an electrical conductivity of 42.6% IACS. Compared to direct aging treatment, the designed alloy subjected to multi-stage thermomechanical treatment had refined grains, a high density of dislocations, and accelerated precipitation of (Ni,Co)_(2)Si precipitates. The high strength was mainly attributed to the combined effect of dislocation strengthening, work hardening, and sub-grain strengthening, while good electrical conductivity was maintained through the precipitation of a large number of nanoparticles.
Funding: Supported by the Key Research and Development Program of Hubei Province, China (Grant No. 2022BAA096) and the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR25A020002), with support from the Center for Materials Analysis and Characterization, the Material Characterization Lab, and the Nanofabrication Lab at Hubei University.
Abstract: The key challenge in the preparation of perovskite solar cells (PSCs) is to enhance the reproducibility of PSC manufacturing, particularly by better controlling multiple high-dimensional process parameters. This study proposes a machine learning (ML) approach to efficiently predict and analyze perovskite film fabrication processes. By evaluating five classic ML algorithms on 130 experimental data sets of blade-coating parameters, the random forest (RF) model was identified as the most effective, enabling rapid prediction of over 100,000 parameter sets in just 10 min, equivalent to 3 years of manual experimentation. The RF model demonstrated strong predictive accuracy, with an R^(2) close to 0.8. This approach led to the identification of optimal process parameter combinations, significantly improving the reproducibility of PSCs and reducing performance variance by approximately threefold, thereby advancing the development of scalable manufacturing processes.
Funding: Supported by the National Key Research and Development Program of China (2022YFC2402400), the National Natural Science Foundation of China (82027803, 62275062), the Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology (2020B121201010), the Shenzhen Science and Technology Innovation Committee (Grant JCYJ20220818101417039), the Shenzhen Key Laboratory for Molecular Imaging (ZDSY20130401165820357), the Shenzhen Medical Research Fund (D2404002), the Project of the Shandong Innovation and Startup Community of High-end Medical Apparatus and Instruments (2023-SGTTXM-002 and 2024-SGTTXM-005), the Shandong Province Technology Innovation Guidance Plan (Central Leading Local Science and Technology Development Fund) (YDZX2023115), the Taishan Scholar Special Funding Project of Shandong Province, and the Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai (ZL202402).
Abstract: The photoacoustic imaging of lipids is intrinsically constrained by the feeble nature of endogenous lipid signals, posing a persistent sensitivity challenge that demands innovative solutions. Although adopting high-efficiency excitation and detection elements may improve the imaging sensitivity to a certain extent, the application of such elements is inevitably subject to various limitations in practice, particularly during in vivo imaging and endoscopic imaging. In this study, we propose a multi-combinatorial approach to enhance the sensitivity of lipid photoacoustic imaging. The approach involves wavelet transform processing of one-dimensional A-line signals, gradient-based denoising of two-dimensional B-scan images, and finally, three-dimensional spatial weighted averaging of the data processed by the previous two steps. This method not only significantly improves the signal-to-noise ratio (SNR) in distinguished feature regions of the image by around 10 dB, but also efficiently extracts weak signals with no distinct features in the original image. After processing with this method, the images acquired under single scanning were compared with those obtained under multiple scanning. The results showed highly consistent image features, with the structural similarity index increasing from 0.2 to 0.8, confirming the accuracy and reliability of the multi-combinatorial approach.
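The ~10 dB SNR gain quoted above follows the usual definition SNR = 10·log10(P_signal / P_noise); this stdlib sketch (with synthetic traces, not the paper's data or its wavelet/gradient pipeline) shows how averaging N independent noisy traces raises SNR by roughly 10·log10(N):

```python
# Averaging 10 independent noisy copies of a known signal should recover
# close to 10*log10(10) = 10 dB of SNR; all numbers here are synthetic.
import math, random

def snr_db(signal, noisy):
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum((n - s) ** 2 for n, s in zip(noisy, signal)) / len(signal)
    return 10 * math.log10(p_sig / p_noise)

rng = random.Random(0)
signal = [math.sin(0.1 * i) for i in range(500)]                 # clean A-line
traces = [[s + rng.gauss(0, 0.5) for s in signal] for _ in range(10)]
avg = [sum(col) / len(col) for col in zip(*traces)]              # trace average
gain = snr_db(signal, avg) - snr_db(signal, traces[0])
```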
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12171158, 12371474, and 12571510) and the Fundamental Research Funds for the Central Universities (Grant No. 2025ECNU-WLJC006).
Abstract: This work contributes to the theoretical foundation for pricing in data markets and offers practical insights for managing digital data exchanges in the era of big data. We propose a structured pricing model for data exchanges transitioning from quasi-public to market-oriented operations. To address the complex dynamics among data exchanges, suppliers, and consumers, the authors develop a three-stage Stackelberg game framework. In this model, the data exchange acts as a leader setting transaction commission rates, suppliers are intermediate leaders determining unit prices, and consumers are followers making purchasing decisions. Two pricing strategies are examined: the Independent Pricing Approach (IPA) and the novel Perfectly Competitive Pricing Approach (PCPA), which accounts for competition among data providers. Using backward induction, the study derives subgame-perfect equilibria and proves the existence and uniqueness of Stackelberg equilibria under both approaches. Extensive numerical simulations demonstrate that the PCPA enhances data demander utility, encourages supplier competition, increases transaction volume, and improves the overall profitability and sustainability of data exchanges. A social welfare analysis further confirms the PCPA's superiority in promoting efficient and fair data markets.
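Backward induction can be illustrated with a stripped-down two-level version of such a game: consumers respond to a supplier's price with linear demand, the supplier prices against a fixed exchange commission, and the optimum is solved in closed form. All functional forms and parameter values below are illustrative, not the paper's model:

```python
# Backward induction, innermost stage first: given price p, demand is
# q = max(0, a - b*p); the supplier then maximizes (1 - tau)*p*q - cost*q.
# That profit is quadratic in p, so the best response is at the vertex.

def follower_demand(p, a=10.0, b=1.0):
    return max(0.0, a - b * p)

def supplier_best_price(cost, tau, a=10.0, b=1.0):
    """argmax_p (a - b*p) * ((1 - tau)*p - cost)  =>
    p* = (a/b + cost/(1 - tau)) / 2."""
    return (a / b + cost / (1.0 - tau)) / 2.0

p_star = supplier_best_price(cost=2.0, tau=0.1)   # tau: exchange commission
q_star = follower_demand(p_star)
```

Setting the derivative of (a − b·p)((1 − τ)p − c) to zero gives 2b(1 − τ)p = a(1 − τ) + bc, which rearranges to the closed form in the docstring; the exchange's outer stage would then choose τ against this best response.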