Model checking is an automated formal verification method for determining whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain properties of these systems remain underexplored. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities, yielding a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we illustrate the practical application of our approach through an example of a train control system.
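To make the fuzzy setting concrete, here is a minimal sketch (not the paper's actual algorithm) of a fuzzy Kripke structure with transition degrees in [0, 1] and the standard max-min evaluation of one temporal modality, [EX φ](s) = max over successors s' of min(R(s, s'), φ(s')). State names and degrees are illustrative.

```python
# Fuzzy transition relation R: each transition has a degree in [0, 1].
R = {
    ("s0", "s1"): 0.8,
    ("s0", "s2"): 0.4,
    ("s1", "s2"): 1.0,
}

# Fuzzy valuation of a formula phi at each state.
phi = {"s0": 0.2, "s1": 0.5, "s2": 0.9}

def fuzzy_ex(R, phi):
    """Degree to which 'EX phi' holds at each state (max-min semantics)."""
    out = {s: 0.0 for s in phi}
    for (s, t), degree in R.items():
        out[s] = max(out[s], min(degree, phi[t]))
    return out
```

Under these semantics, a verifier answers "to what degree" a property holds rather than a crisp yes/no, which is what makes quantitative specifications checkable.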
Environmental monitoring systems based on remote sensing technology have a wide monitoring range and long timeliness, which makes them widely used in the detection and management of pollution sources. However, haze weather conditions degrade image quality and reduce the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, a branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure the output of the deep learning model aligns with physical laws, we reconstruct the haze image using the prediction results from the three networks, and then apply the traditional loss function and a dark channel loss function between the reconstructed haze image and the original haze image. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset built using dark channel information from cloud maps. The experimental results show that the proposed network achieves better image dehazing on both synthetic and real remote sensing images with non-homogeneous haze distribution. This research provides a new approach to the problem of decreased accuracy in environmental monitoring systems under haze weather conditions and has strong practicability.
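The physical model behind the reconstruction step is the standard atmospheric scattering equation, I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the haze-free scene, t the transmission map, and A the atmospheric light. A minimal sketch (scalar airlight, nested-list images; the paper's actual networks predict these quantities) of re-synthesizing the hazy image and of the per-pixel dark channel:

```python
def reconstruct_haze(J, t, A):
    """Re-synthesize a hazy image from predicted scene J, transmission t,
    and a scalar atmospheric light A: I = J*t + A*(1 - t)."""
    return [[Jp * tp + A * (1.0 - tp) for Jp, tp in zip(Jrow, trow)]
            for Jrow, trow in zip(J, t)]

def dark_channel(img_rgb):
    """Per-pixel dark channel: minimum over the three color channels.
    (The local spatial minimum over a patch is omitted in this sketch.)"""
    return [[min(px) for px in row] for row in img_rgb]
```

Comparing the dark channel of the reconstructed image against that of the original is one way a dark-channel loss can tie the network's predictions back to the physical model.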
BACKGROUND To investigate the preoperative factors influencing textbook outcomes (TO) in intrahepatic cholangiocarcinoma (ICC) patients and evaluate the feasibility of an interpretable machine learning model for preoperative prediction of TO, we developed a machine learning model and used the SHapley Additive exPlanations (SHAP) technique to illustrate the prediction process. AIM To analyze the factors influencing textbook outcomes before surgery and to establish interpretable machine learning models for preoperative prediction. METHODS A total of 376 patients diagnosed with ICC were retrospectively collected from four major medical institutions in China, covering the period from 2011 to 2017. Logistic regression analysis was conducted to identify preoperative variables associated with achieving TO. Based on these variables, an eXtreme Gradient Boosting (XGBoost) machine learning prediction model was constructed using the XGBoost package. The SHAP algorithm (package: shapviz) was employed to visualize each variable's contribution to the model's predictions. Kaplan-Meier survival analysis was performed to compare the prognostic differences between the TO-achieving and non-TO-achieving groups. RESULTS Among the 376 patients, 287 were included in the training group and 89 in the validation group. Logistic regression identified the following preoperative variables influencing TO: Child-Pugh classification, Eastern Cooperative Oncology Group (ECOG) score, hepatitis B, and tumor size. The XGBoost prediction model demonstrated high accuracy in internal validation (AUC = 0.8825) and external validation (AUC = 0.8346). Survival analysis revealed that the disease-free survival rates for patients achieving TO at 1, 2, and 3 years were 64.2%, 56.8%, and 43.4%, respectively. CONCLUSION Child-Pugh classification, ECOG score, hepatitis B, and tumor size are preoperative predictors of TO. In both the training group and the validation group, the machine learning model was effective in predicting TO before surgery. The SHAP algorithm provided an intuitive visualization of the machine learning prediction process, enhancing its interpretability.
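SHAP attributes a prediction to features by approximating Shapley values. For a toy model with only a couple of features they can be enumerated exactly, which shows what the attribution means: each feature's value is its average marginal contribution over all orders in which features are revealed to the model. This is an illustrative sketch, not the TreeExplainer algorithm used with XGBoost.

```python
import math
from itertools import permutations

def shapley_values(model, x, baseline, features):
    """Exact Shapley values by enumerating all feature orderings.

    model takes a dict of feature values; features absent from the
    explanation take their baseline values.
    """
    contrib = {f: 0.0 for f in features}
    for order in permutations(features):
        current = dict(baseline)        # start from the baseline input
        prev = model(current)
        for f in order:
            current[f] = x[f]           # reveal feature f
            now = model(current)
            contrib[f] += now - prev
            prev = now
    n_fact = math.factorial(len(features))
    return {f: v / n_fact for f, v in contrib.items()}
```

A useful sanity property: the values always sum to model(x) − model(baseline), so the attribution fully accounts for the prediction.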
Artificial intelligence (AI) has emerged as a transformative technology for accelerating drug discovery and development in natural medicines research. Natural medicines, characterized by complex chemical compositions and multifaceted pharmacological mechanisms, are widely used to treat diverse diseases. However, their research and development face significant challenges, including component complexity, extraction difficulties, and efficacy validation. AI technology, particularly deep learning (DL) and machine learning (ML) approaches, enables efficient analysis of extensive datasets, facilitating drug screening, component analysis, and the elucidation of pharmacological mechanisms. AI also demonstrates considerable potential in virtual screening, compound optimization, and synthetic pathway design, thereby enhancing the bioavailability and safety profiles of natural medicines. Nevertheless, current applications encounter limitations regarding data quality, model interpretability, and ethical considerations. As AI technologies continue to evolve, natural medicines research and development will achieve greater efficiency and precision, advancing both personalized medicine and contemporary drug development.
The application of machine learning in alloy design is increasingly widespread, yet traditional models still struggle with limited datasets and complex nonlinear relationships. This work proposes an interpretable machine learning method based on data augmentation and reconstruction for discovering high-performance low-alloyed magnesium (Mg) alloys. The data augmentation technique expands the original dataset with Gaussian noise, while the data reconstruction method reorganizes and transforms the original data to extract more representative features, significantly improving the model's generalization ability and prediction accuracy: the coefficient of determination (R²) reaches 95.9% for the ultimate tensile strength (UTS) model and 95.3% for the elongation-to-failure (EL) model. A correlation coefficient assisted screening (CCAS) method is proposed to filter candidate low-alloyed target alloys. A new Mg-2.2Mn-0.4Zn-0.2Al-0.2Ca (MZAX2000, wt%) alloy is designed and extruded into bars under the given processing parameters, achieving room-temperature strength-ductility synergy with an excellent UTS of 395 MPa and a high EL of 17.9%. This performance is closely related to the hetero-structured characteristic of the as-extruded MZAX2000 alloy, which consists of coarse grains (16%), fine grains (75%), and fiber regions (9%). This work thus offers new insights into optimizing alloy compositions and processing parameters to obtain new strong and ductile low-alloyed Mg alloys.
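The Gaussian-noise augmentation step can be sketched in a few lines: each original composition/property sample spawns several perturbed copies, expanding a small dataset before model training. The noise scale, copy count, and seed here are illustrative assumptions, not values from the paper.

```python
import random

def augment(samples, copies=3, sigma=0.01, seed=0):
    """Expand a dataset by adding Gaussian-noise-perturbed copies.

    samples: list of feature vectors (lists of floats).
    Returns the originals followed by `copies` noisy variants of each.
    """
    rng = random.Random(seed)           # fixed seed for reproducibility
    out = list(samples)
    for x in samples:
        for _ in range(copies):
            out.append([v + rng.gauss(0.0, sigma) for v in x])
    return out
```

Fixing the seed keeps the augmented dataset reproducible across runs, which matters when comparing models trained on it.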
Topographic maps, as essential tools and sources of information for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution of, and differences among, various surface features. Currently, topographic maps are mainly stored in raster and vector formats. Extraction of the spatio-temporal knowledge in maps, such as spatial distribution patterns, feature relationships, and dynamic evolution, still relies primarily on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence technology, it has become possible to raise the level of automation in map knowledge interpretation. Therefore, the present study proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven intelligent techniques in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and features, thereby ensuring interpretation accuracy. Subsequently, deep learning techniques are employed to extract map features automatically, and spatio-temporal knowledge is constructed by combining formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method effectively achieves automatic interpretation of the spatio-temporal knowledge of geographic features in maps, with an accuracy exceeding 80%. The findings of the present study contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and utilization of cartographic information.
As batteries become increasingly essential for energy storage technologies, battery prognosis and diagnosis remain central to ensuring reliable operation and effective management, as well as to aiding in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and the limited availability of labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework for accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures the aging trend. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery, without requiring an offline training dataset, and demonstrates impressive generality and robustness across various operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me is endowed with a diagnostic ability that not only enhances prediction reliability but also provides critical insights into the design and optimization of next-generation batteries.
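The time-series-decomposition idea behind such models can be sketched simply: split a capacity sequence into a smooth trend (here a moving average) plus residual, estimate the recent degradation rate from the trend, and extrapolate to an end-of-life threshold. The window size, threshold, and linear extrapolation are illustrative assumptions, not TELL-Me's actual modules.

```python
def moving_average(xs, w=3):
    """Trailing moving average as a simple trend estimate."""
    return [sum(xs[max(0, i - w + 1): i + 1]) / len(xs[max(0, i - w + 1): i + 1])
            for i in range(len(xs))]

def cycles_to_eol(capacity, threshold=0.8, w=3):
    """Extrapolate remaining cycles until the trend crosses the threshold."""
    trend = moving_average(capacity, w)
    rate = trend[-1] - trend[-2]        # per-cycle degradation (negative)
    if rate >= 0:
        return None                     # no degradation trend detected
    return (threshold - trend[-1]) / rate
```

Working on the trend rather than the raw capacity suppresses measurement noise and capacity-regeneration spikes that would otherwise corrupt the rate estimate.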
Dear Editor, I am writing in response to Jamil's letter, "Interpretative Challenges of the 'Missing Perilymph' Sign in PLF Diagnosis." I concur with the author's emphasis on the necessity for cautious interpretation of low-signal areas as evidence of active perilymph leakage, requiring correlation with clinical findings, surgical confirmation, and longitudinal imaging changes.
Deep learning (DL) models have been widely used in the field of Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) and have achieved excellent performance. However, the black-box nature of DL models has drawn criticism, especially in SAR-ATR applications, which are closely associated with the national defense and security domain. To address these issues, a new interpretable recognition model, the Physics-Guided BagNet (PGBN), is proposed in this article. The model adopts an interpretable convolutional neural network framework and uses time-frequency analysis to extract physical scattering features from SAR images. Based on these physical scattering features, an unsupervised segmentation method is proposed to distinguish targets from the background in SAR images. Building on the segmentation result, a structure is designed that constrains the model's spatial attention to focus on the targets themselves rather than the background, thereby making the model's decisions more consistent with physical principles. In contrast to previous interpretable approaches, this model combines an interpretable structure with physical interpretability, further reducing the risk of recognition errors. Experiments on the MSTAR dataset verify that the PGBN model exhibits excellent interpretability and recognition performance, and comparative experiments with heatmaps indicate that the proposed physical feature guidance module constrains the model to focus on the target itself rather than the background.
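The core mechanism of constraining attention to the target can be illustrated with a toy version: derive a binary target/background mask (here by simple magnitude thresholding, a stand-in for the paper's scattering-feature-based segmentation) and suppress feature responses outside the target region. Threshold and arrays are illustrative.

```python
def segment(magnitude, thresh):
    """Toy target/background mask: 1 where pixel magnitude >= thresh."""
    return [[1 if v >= thresh else 0 for v in row] for row in magnitude]

def mask_features(features, mask):
    """Suppress feature-map responses outside the target region."""
    return [[f * m for f, m in zip(frow, mrow)]
            for frow, mrow in zip(features, mask)]
```

Zeroing background responses forces any downstream classifier to base its decision on the target pixels, which is the effect the heatmap comparisons in the paper check for.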
Based on 1,003 articles on empirical research into interpreting teaching published from 2002 to 2022 and retrieved from the China National Knowledge Infrastructure (CNKI), this paper identifies three main research methods, uncovering common problems in interpreting education along with practical teaching suggestions: (1) corpus-based studies collect numerous recordings to study typical mistakes made by interpreting learners, particularly pauses and self-repair, and suggest that interpreting teaching improve learners' ability to use language chunks and encourage students to interpret fluently; (2) questionnaire surveys help clarify the requirements for professional interpreters and how well interpreting teaching meets market demands; (3) teaching experiments, lasting one to two semesters, address issues such as outdated teaching materials and modes and show how both can integrate modern technology. However, empirical research still needs to build new corpora, including corpora of professional interpreters, and to address problems that have not been adequately discussed. This paper aims to help improve interpreting education in China and other countries and to clarify the tasks to be fulfilled in empirical research on interpreting education.
The potential toxicity of ionic liquids (ILs) affects their applications; how to control this toxicity is one of the key issues in their use. To understand the structure-toxicity relationship of ILs and promote their greener application, six different machine learning algorithms, including Bagging, Adaptive Boosting (AdaBoost), Gradient Boosting (GBoost), Stacking, Voting, and Categorical Boosting (CatBoost), are established to model the toxicity of ILs on four distinct datasets: the leukemia rat cell line IPC-81, acetylcholinesterase (AChE), Escherichia coli (E. coli), and Vibrio fischeri. Molecular descriptors obtained from the simplified molecular-input line-entry system (SMILES) are used to characterize the ILs. All models are assessed by the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R²). Additionally, an interpretation model based on SHapley Additive exPlanations (SHAP) is built to determine the positive and negative effects of each molecular feature on toxicity. With additional parameters and complexity, the CatBoost model outperforms the other models, making it the more reliable choice for IL toxicity prediction. The interpretation results indicate that the most significant positive features (SMR_VSA5, PEOE_VSA8, Kappa2, PEOE_VSA6, and EState_VSA1) increase the toxicity of ILs as their levels rise, while the most significant negative features (VSA_EState7, EState_VSA8, PEOE_VSA9, and FpDensityMorgan1) decrease the toxicity as their levels rise. An IL's toxicity also grows as its average molecular weight and number of pyridine rings increase, whereas its toxicity decreases as its number of hydrogen bond acceptors increases. These findings offer a theoretical foundation for the rapid screening and synthesis of environmentally benign ILs.
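The four evaluation metrics used above have standard formulations (assumed here; the paper does not spell them out), which a short reference implementation makes explicit:

```python
import math

def mse(y, p):
    """Mean square error."""
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def rmse(y, p):
    """Root mean square error."""
    return math.sqrt(mse(y, p))

def mae(y, p):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def r2(y, p):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, p))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot
```

Note that R² can be negative for a model worse than predicting the mean, which is why it complements rather than duplicates the error metrics.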
1 Introduction According to the World Health Organization, heart disease has been the leading cause of death worldwide for the past 20 years. Electrocardiography (ECG or EKG) records the electrophysiological activity of the heart over time, allowing accurate diagnoses by clinicians [1]. Despite the relative simplicity of ECG acquisition, its interpretation requires extensive training. Manual examination and re-examination of ECG paper records can be time-consuming, potentially delaying diagnosis. Machine learning, which uses algorithms to identify patterns within data and make predictive analyses, has come to play a significant role in interpreting ECGs [2].
Developing machine learning frameworks with predictive power, interpretability, and transferability is crucial, yet it faces challenges in the field of electrocatalysis. To achieve this, we employed rigorous feature engineering to establish a finely tuned gradient boosting regressor (GBR) model, which adeptly captures the physical complexity of the mapping from feature space to target variables. We demonstrated, via global and local explanations, that environmental electron effects and atomic number significantly govern the success of this mapping. The finely tuned GBR model exhibits exceptional robustness in predicting CO adsorption energies (average R² = 0.937, RMSE = 0.153 eV). Moreover, the model demonstrated remarkable transfer learning ability, showing excellent predictive power for OH, NO, and N₂ adsorption. Importantly, the GBR model exhibits exceptional predictive capability across an extensive search space, demonstrating profound adaptability and versatility. Our research framework significantly enhances the interpretability and transferability of machine learning in electrocatalysis, offering vital insights for further advancements.
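The idea behind a gradient boosting regressor is stagewise residual fitting: each stage trains a weak learner on the residuals of the ensemble so far. A minimal one-dimensional sketch with decision stumps and squared error (hyperparameters illustrative, not the paper's tuned model):

```python
def fit_stump(x, r):
    """Best single-split stump minimizing squared error on residuals r."""
    best = None
    for s in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= s]
        right = [ri for xi, ri in zip(x, r) if xi > s]
        if not left or not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((ri - (lv if xi <= s else rv)) ** 2 for xi, ri in zip(x, r))
        if best is None or err < best[0]:
            best = (err, s, lv, rv)
    _, s, lv, rv = best
    return lambda xi: lv if xi <= s else rv

def fit_gbr(x, y, n_stages=20, lr=0.5):
    """Stagewise boosting: each stump fits the current residuals."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_stages):
        r = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, r)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * st(xi) for st in stumps)
```

The learning rate shrinks each stage's contribution, trading more stages for better generalization, the same knob tuned in library implementations.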
Low-temperature hydrogenation of silicon tetrachloride (STC) is an essential step in polysilicon production. Adding CuCl to silicon powder is currently a common catalytic method, in which the silicon powder acts as both a reactant and a catalyst. However, the reaction mechanism and the structure-activity relationship of this process have not been fully elucidated. In this work, a comprehensive study of the reaction mechanism in the presence of Si and Cu₃Si was carried out using density functional theory (DFT) combined with experiments. The results indicate that the rate-determining step (RDS) in the presence of Si is the phase transition of the Si atom, whereas the RDS in the presence of Cu₃Si is the trichlorosilane (TCS) generation process. The activation barrier of the latter is smaller, highlighting that the interaction of Si with the bulk phase is the pivotal factor influencing catalytic activity. The feasibility of transition metal doping to facilitate this step was further investigated. The Si disengage energy (E_d) was used as a quantitative parameter to assess the catalytic activity of the catalysts, and the optimal descriptor was determined through interpretable machine learning. It was demonstrated that the d-band center and electron transfer play a crucial role in regulating the level of E_d. This work reveals the mechanism and structure-activity relationship of the low-temperature hydrogenation reaction of STC and provides a basis for the rational design of catalysts.
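The d-band center mentioned above is conventionally the first moment of the d-projected density of states, ε_d = ∫ε ρ(ε) dε / ∫ρ(ε) dε. A small numerical sketch using trapezoidal integration (the energy grid and DOS values are illustrative, not from the paper's DFT calculations):

```python
def trapz(ys, xs):
    """Trapezoidal integration of ys over grid xs."""
    return sum((ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i]) / 2.0
               for i in range(len(xs) - 1))

def d_band_center(energies, dos):
    """First moment of the d-projected DOS: the d-band center (eV)."""
    weighted = [e * d for e, d in zip(energies, dos)]
    return trapz(weighted, energies) / trapz(dos, energies)
```

A DOS symmetric about some energy has its d-band center exactly there, which makes the descriptor easy to sanity-check.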
In recent years, with the rapid development of software systems, the continuous expansion of software scale and the increasing complexity of systems have led to a growing number of software metrics. Defect prediction methods based on software metrics rely heavily on software metric data. However, redundant metric data is not conducive to efficient defect prediction, posing severe challenges to current software defect prediction tasks. To address these issues, this paper focuses on the rational clustering of software metric data. First, multiple software projects are evaluated to determine the preset number of clusters for software metrics, and various clustering methods are employed to cluster the metrics. Subsequently, a co-occurrence matrix is designed to comprehensively quantify the number of times metrics appear in the same category. Based on the combined results, the software metric data are divided into two semantic views containing different metrics, thereby exposing the semantic information behind the software metrics. On this basis, the paper also conducts an in-depth analysis of the impact of the different semantic views of metrics on defect prediction results, as well as the performance of various classification models under these views. Experiments show that the joint use of the two semantic views can significantly improve model performance in software defect prediction, providing a new understanding and approach at the semantic-view level for defect prediction research based on software metrics.
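The co-occurrence matrix described above can be sketched directly: given the cluster label each clustering run assigns to each metric, count how often each pair of metrics lands in the same cluster. Metric names and labels here are illustrative placeholders.

```python
from collections import defaultdict

def cooccurrence(runs):
    """Count pairwise same-cluster occurrences across clustering runs.

    runs: list of dicts mapping metric name -> cluster label,
    one dict per clustering method/run.
    """
    counts = defaultdict(int)
    metrics = sorted(runs[0])           # canonical metric order
    for labels in runs:
        for i, a in enumerate(metrics):
            for b in metrics[i + 1:]:
                if labels[a] == labels[b]:
                    counts[(a, b)] += 1
    return dict(counts)
```

Thresholding these pairwise counts is one natural way to split the metrics into stable groups such as the two semantic views used in the paper.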
In this study, 2016–2022 monitoring data from three ecological buoys in the Wenzhou coastal region of Zhejiang Province and a dataset from the European Centre for Medium-Range Weather Forecasts were examined to clarify the relationship between variations in ecological parameters during spring algal bloom incidents and the associated changes in temperature and wind fields. A long short-term memory (LSTM) recurrent neural network was employed, and a predictive model for spring algal blooms in this region was developed. The model integrated various inputs, including temperature, wind speed, and other pertinent variables, with chlorophyll concentration serving as the primary output indicator. Model training used chlorophyll concentration data, supplemented by reanalysis and forecast temperature and wind field data. The model demonstrated proficiency in forecasting next-day chlorophyll concentrations and assessing the likelihood of spring algal bloom occurrences using a defined chlorophyll concentration threshold. Historical validation from 2016 to 2019 corroborated the model's accuracy, with an 81.71% probability of correct prediction, which was further borne out by its precise prediction of two spring algal bloom incidents in late April and early May 2023. An interpretable machine-learning-based model for spring algal bloom prediction, capable of effective forecasting with limited data, was thus established through detailed analysis of the spring algal bloom mechanism and careful selection of input variables. The insights gained from this study offer valuable contributions to the development of early warning systems for spring algal blooms in the Wenzhou coastal area of Zhejiang Province.
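The warning step itself is simple once a next-day chlorophyll forecast exists: compare it against the bloom threshold. The sketch below uses a naive persistence-plus-trend forecast as a stand-in for the paper's LSTM output, and the threshold value is an illustrative assumption.

```python
def forecast_next(chl):
    """Naive next-day chlorophyll forecast: persistence plus latest trend.
    (A stand-in for the LSTM forecast described in the paper.)"""
    return chl[-1] + (chl[-1] - chl[-2])

def bloom_alert(chl, threshold=10.0):
    """Flag a spring-bloom warning when the forecast crosses the threshold."""
    return forecast_next(chl) >= threshold
```

Framing the output as a threshold crossing is what turns a continuous forecast into the binary early-warning signal that operators actually consume.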
Cultural Relics World, Issue 1, 2025: Research on Zodiac Snake Cultural Artifacts. With the successful inclusion of the "Spring Festival, a social practice of the Chinese people in celebration of the traditional new year" in UNESCO's Representative List of the Intangible Cultural Heritage of Humanity, the traditional Chinese festival with the zodiac as its symbol has an increasingly extensive influence around the world. The snake, one of the early totems of the Chinese nation, ranks sixth among the twelve zodiac signs, corresponding to "Si" among the twelve terrestrial branches. This issue launches the special topic "Research on Zodiac Snake Cultural Artifacts," compiling ten articles that explore the types, characteristics, origins, and evolution of snake-related cultural relics from multiple perspectives, interpreting their cultural connotations and the changes in the spiritual beliefs and ideas they reflect.
Despite significant progress in the Prognostics and Health Management (PHM) domain achieved by systems that learn patterns from data, machine learning (ML) still faces challenges of limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enriching the model with additional pattern information. In this paper, we review the latest developments in PHM encapsulated under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, covering scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap for their specific usage. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, drawing on inductive experience, physical models, and signal processing, and we analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure the trustworthy deployment of artificial intelligence in PHM.
The Husimi function (Q-function) of a quantum state is the distribution function of the density operator in the coherent-state representation. It is widely used in theoretical research, for instance in quantum optics. The Wehrl entropy is the Shannon entropy of the Husimi function and is nonzero even for pure states. This entropy has been extensively studied in mathematical physics, and recent research also suggests a significant connection between the Wehrl entropy and many-body quantum entanglement in spin systems. We investigate the statistical interpretation of the Husimi function and the Wehrl entropy, taking a system of N spin-1/2 particles as an example. Owing to the completeness of coherent states, the Husimi function and Wehrl entropy can be explained via positive operator-valued measure (POVM) theory, even though the coherent states do not form an orthonormal basis. Here, with the help of Bayes' theorem, we provide an alternative probabilistic interpretation of the Husimi function and the Wehrl entropy. This interpretation is based on direct measurements of the system and thus does not require the introduction of an ancillary system, as POVM theory does. Moreover, under this interpretation the classical correspondences of the Husimi function and the Wehrl entropy are simply the phase-space probability distribution function of N classical tops and its associated entropy, respectively. This explanation therefore contributes to a better understanding of the relationship among the Husimi function, the Wehrl entropy, and the classical-quantum correspondence. The generalization of this statistical interpretation to continuous-variable systems is also discussed.
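For the simplest case of a single spin-1/2, the Husimi function and Wehrl entropy can be computed numerically. The sketch below assumes one common convention (others rescale differently): spin coherent states |θ, φ⟩ = cos(θ/2)|↑⟩ + e^{iφ} sin(θ/2)|↓⟩, Husimi function Q(θ, φ) = |⟨θ, φ|ψ⟩|² / (2π), which integrates to 1 over the sphere, and Wehrl entropy S_W = −∫ Q ln Q dΩ.

```python
import cmath
import math

def husimi(psi, theta, phi):
    """Q(theta, phi) for a single spin-1/2 pure state psi = (c_up, c_down)."""
    up, down = psi
    amp = (math.cos(theta / 2) * up
           + cmath.exp(-1j * phi) * math.sin(theta / 2) * down)
    return abs(amp) ** 2 / (2 * math.pi)

def wehrl_entropy(psi, n=200):
    """Midpoint-rule integration over the sphere.
    Returns (normalization check, Wehrl entropy)."""
    total, s = 0.0, 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        d_omega_row = math.sin(theta) * (math.pi / n) * (2 * math.pi / n)
        for k in range(n):
            phi = (k + 0.5) * 2 * math.pi / n
            q = husimi(psi, theta, phi)
            total += q * d_omega_row
            s -= q * math.log(q) * d_omega_row
    return total, s
```

For the coherent state |↑⟩ this convention gives S_W = ln(2π) + 1/2 analytically, which the numerical integral reproduces and which is indeed nonzero despite the state being pure.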
"As a director, I hope that audiences will see a movie in cinemas instead of watching three-minute commentaries, about which I'm really speechless," internationally renowned film director Zhang Yimou said in a recent interview. Zhang believes that the ritual and immersive feeling of watching movies in a cinema cannot be replaced by watching movie commentaries online, and he called on audiences to go to the cinema and experience in person the audio-visual feast delivered by the big screen. Zhang's views on so-called "three-minute movies" have become a trending topic on social media. In this new genre, content creators on short-video platforms produce summarized versions of films interspersed with their own observations. The rise of these platforms has lowered the barriers to entry for aspiring movie content creators, giving rise not only to high volume and great diversity but also to large discrepancies in quality. Moreover, this content sometimes infringes the copyrights of the movies it features while misinterpreting their plots and themes.
Funding: This work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for determining whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertainty-aware verification of such systems remains underexplored. In uncertain environments, agents must make judicious decisions based on subjective knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic with epistemic modalities, yielding a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking. This enables verification of FCTLK formulas with the existing fuzzy model checking algorithm for FCTL, without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and illustrate the practical application of our approach with an example of a train control system.
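The reduction described above bottoms out in fuzzy model checking over a fuzzy Kripke structure. As a rough illustration only (the paper publishes no code), here is a minimal sketch of evaluating an existential next-step operator under Gödel (min/max) semantics; the function name, the toy structure, and the semantics choice are all assumptions:

```python
def eval_EX(phi_vals, trans):
    """Degree to which EX(phi) holds at each state: the max over successors
    of min(transition degree, degree of phi at the successor)."""
    return {
        s: max((min(d, phi_vals[t]) for (src, t), d in trans.items() if src == s),
               default=0.0)
        for s in phi_vals
    }

# Toy fuzzy Kripke structure: transition degrees lie in [0, 1].
trans = {("s0", "s1"): 0.8, ("s0", "s2"): 0.5, ("s1", "s1"): 1.0, ("s2", "s1"): 0.9}
phi = {"s0": 0.2, "s1": 0.7, "s2": 1.0}

ex_phi = eval_EX(phi, trans)
# At s0: max(min(0.8, 0.7), min(0.5, 1.0)) = 0.7
```

Nested temporal operators would be evaluated by iterating such one-step operators to a fixed point, exactly as in crisp CTL model checking.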
Funding: Supported by the National Natural Science Foundation of China (No. 51605054).
Abstract: Environmental monitoring systems based on remote sensing technology have a wide monitoring range and long timeliness, which makes them widely used in the detection and management of pollution sources. However, haze degrades image quality and reduces the precision of environmental monitoring systems. To address this problem, this research proposes a remote sensing image dehazing method based on the atmospheric scattering model and a dark channel prior constrained network. The method consists of a dehazing network, a dark channel information injection network (DCIIN), and a transmission map network. Within the dehazing network, a branch fusion module optimizes feature weights to enhance the dehazing effect. By leveraging dark channel information, the DCIIN enables high-quality estimation of the atmospheric veil. To ensure that the output of the deep learning model aligns with physical laws, we reconstruct the haze image from the prediction results of the three networks and then apply a traditional loss function and a dark channel loss function between the reconstructed and original haze images. This approach enhances interpretability and reliability while maintaining adherence to physical principles. Furthermore, the network is trained on a synthesized non-homogeneous haze remote sensing dataset built using dark channel information from cloud maps. Experimental results show that the proposed network achieves better dehazing on both synthetic and real remote sensing images with non-homogeneous haze distribution. This research offers a new approach to the reduced accuracy of environmental monitoring systems under haze conditions and is highly practical.
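The dark channel prior that constrains the network can be sketched independently of any deep model. Below is a minimal pure-Python illustration of the classical formulation t(x) = 1 − ω · dark(I/A); the patch size, ω = 0.95, and the toy image are illustrative defaults, not values from the paper:

```python
def dark_channel(img, patch=3):
    """img: H x W x 3 nested lists with values in [0, 1].
    The dark channel is the local minimum over an RGB patch around each pixel."""
    h, w = len(img), len(img[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(img[ii][jj][c]
                            for ii in range(max(0, i - r), min(h, i + r + 1))
                            for jj in range(max(0, j - r), min(w, j + r + 1))
                            for c in range(3))
    return out

def transmission(img, airlight, omega=0.95, patch=3):
    """t(x) = 1 - omega * dark_channel(I/A): haze-free regions keep t near 1."""
    norm = [[[img[i][j][c] / airlight[c] for c in range(3)]
             for j in range(len(img[0]))] for i in range(len(img))]
    return [[1.0 - omega * v for v in row] for row in dark_channel(norm, patch)]

hazy = [[[0.6, 0.7, 0.8]] * 4 for _ in range(4)]  # uniformly hazy toy image
t = transmission(hazy, airlight=[1.0, 1.0, 1.0])  # t = 1 - 0.95 * 0.6 everywhere
```

The paper's dark channel loss compares such statistics between the reconstructed and original haze images rather than using them directly for restoration.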
Funding: Supported by the National Key Research and Development Program (No. 2022YFC2407304), the Major Research Project for Middle-Aged and Young Scientists of Fujian Provincial Health Commission (No. 2021ZQNZD013), the National Natural Science Foundation of China (No. 62275050), the Fujian Province Science and Technology Innovation Joint Fund Project (No. 2019Y9108), and the Major Science and Technology Projects of Fujian Province (No. 2021YZ036017).
Abstract: BACKGROUND: To investigate the preoperative factors influencing textbook outcomes (TO) in intrahepatic cholangiocarcinoma (ICC) patients and to evaluate the feasibility of interpretable machine learning for preoperative TO prediction, we developed a machine learning model and used the SHapley Additive exPlanations (SHAP) technique to illustrate the prediction process. AIM: To analyze the factors influencing textbook outcomes before surgery and to establish interpretable machine learning models for preoperative prediction. METHODS: A total of 376 patients diagnosed with ICC were retrospectively collected from four major medical institutions in China, covering the period from 2011 to 2017. Logistic regression analysis was conducted to identify preoperative variables associated with achieving TO. Based on these variables, an eXtreme Gradient Boosting (XGBoost) machine learning prediction model was constructed using the XGBoost package. The SHAP algorithm (package: shapviz) was employed to visualize each variable's contribution to the model's predictions. Kaplan-Meier survival analysis was performed to compare prognosis between the TO-achieving and non-TO-achieving groups. RESULTS: Among the 376 patients, 287 were included in the training group and 89 in the validation group. Logistic regression identified the following preoperative variables influencing TO: Child-Pugh classification, Eastern Cooperative Oncology Group (ECOG) score, hepatitis B, and tumor size. The XGBoost prediction model demonstrated high accuracy in internal validation (AUC = 0.8825) and external validation (AUC = 0.8346). Survival analysis revealed that disease-free survival rates for patients achieving TO at 1, 2, and 3 years were 64.2%, 56.8%, and 43.4%, respectively. CONCLUSION: Child-Pugh classification, ECOG score, hepatitis B, and tumor size are preoperative predictors of TO. In both the training and validation groups, the machine learning model was effective in predicting TO before surgery, and the SHAP algorithm provided an intuitive visualization of the prediction process, enhancing its interpretability.
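The survival comparison in RESULTS rests on the Kaplan-Meier estimator, which is compact enough to sketch directly; the toy cohort below is hypothetical and unrelated to the study's data:

```python
def kaplan_meier(times, events):
    """times: follow-up (e.g. months); events: 1 = event occurred, 0 = censored.
    Returns (time, survival probability) at each time where an event occurs:
    S(t) is multiplied by (1 - deaths / number at risk) at each event time."""
    pairs = sorted(zip(times, events))
    n_at_risk, surv, curve, i = len(pairs), 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(1 for tt, e in pairs if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in pairs if tt == t)             # all leaving risk set
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        while i < len(pairs) and pairs[i][0] == t:
            i += 1
    return curve

# Toy cohort of 5 patients; the one at month 8 is censored.
curve = kaplan_meier([3, 5, 8, 12, 12], [1, 1, 0, 1, 1])
# -> survival drops to 0.8 at month 3, 0.6 at month 5, 0.0 at month 12
```

The study would apply this separately to the TO-achieving and non-TO-achieving groups and compare the resulting curves.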
Funding: Supported by the National Key Research and Development Program of China (No. 2020YFE0202200), the National Natural Science Foundation of China (Nos. 81903538, 82322073, 92253303), the Innovation Team and Talents Cultivation Program of the National Administration of Traditional Chinese Medicine (No. ZYYCXTD-D-202004), and the Science and Technology Commission of Shanghai Municipality (Nos. 22ZR1474200, 24JS2830200).
Abstract: Artificial intelligence (AI) has emerged as a transformative technology accelerating drug discovery and development in natural medicines research. Natural medicines, characterized by complex chemical compositions and multifaceted pharmacological mechanisms, are widely used to treat diverse diseases. However, their research and development face significant challenges, including component complexity, extraction difficulties, and efficacy validation. AI technology, particularly deep learning (DL) and machine learning (ML) approaches, enables efficient analysis of extensive datasets, facilitating drug screening, component analysis, and elucidation of pharmacological mechanisms. AI shows considerable potential in virtual screening, compound optimization, and synthetic pathway design, thereby improving the bioavailability and safety profiles of natural medicines. Nevertheless, current applications face limitations regarding data quality, model interpretability, and ethical considerations. As AI technologies continue to evolve, natural medicines research and development will achieve greater efficiency and precision, advancing both personalized medicine and contemporary drug development.
Funding: Funded by the National Natural Science Foundation of China (No. 52204407), the Natural Science Foundation of Jiangsu Province (No. BK20220595), the China Postdoctoral Science Foundation (No. 2022M723689), and the Industrial Collaborative Innovation Project of Shanghai (No. XTCX-KJ-2022-2-11).
Abstract: The application of machine learning in alloy design is increasingly widespread, yet traditional models still face challenges with limited datasets and complex nonlinear relationships. This work proposes an interpretable machine learning method based on data augmentation and reconstruction to discover high-performance low-alloyed magnesium (Mg) alloys. The data augmentation technique expands the original dataset with Gaussian noise, while the data reconstruction method reorganizes and transforms the original data to extract more representative features, significantly improving the model's generalization ability and prediction accuracy: a coefficient of determination (R²) of 95.9% for the ultimate tensile strength (UTS) model and 95.3% for the elongation-to-failure (EL) model. A correlation coefficient assisted screening (CCAS) method is proposed to filter low-alloyed target alloys. A new Mg-2.2Mn-0.4Zn-0.2Al-0.2Ca (MZAX2000, wt%) alloy is designed and extruded into bar at given processing parameters, achieving room-temperature strength-ductility synergy with an excellent UTS of 395 MPa and a high EL of 17.9%. This is closely related to the hetero-structured character of the as-extruded MZAX2000 alloy, which consists of coarse grains (16%), fine grains (75%), and fiber regions (9%). This work therefore offers new insights into optimizing alloy compositions and processing parameters for new strong and ductile low-alloyed Mg alloys.
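The Gaussian-noise augmentation step can be sketched in a few lines. The 1% relative noise scale and the toy composition/UTS pairs below are assumptions for illustration, not the paper's settings:

```python
import random

def augment(samples, copies=1, rel_sigma=0.01, seed=42):
    """samples: list of (feature_list, target). Noisy copies perturb each
    feature by Gaussian noise scaled to the feature's magnitude and share
    the original target, expanding a small regression dataset."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = list(samples)        # keep the originals
    for _ in range(copies):
        for feats, y in samples:
            noisy = [x + rng.gauss(0.0, rel_sigma * abs(x) if x else rel_sigma)
                     for x in feats]
            out.append((noisy, y))
    return out

# Toy alloy dataset: (composition in wt%, UTS in MPa) — hypothetical values.
data = [([2.2, 0.4, 0.2, 0.2], 395.0), ([1.8, 0.6, 0.1, 0.3], 360.0)]
augmented = augment(data, copies=2)  # 2 originals + 4 noisy copies
```

Keeping the target unchanged while jittering inputs encodes the assumption that the property surface is smooth at this noise scale, which is what lets a small dataset generalize better.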
Funding: Supported by the Deep-time Digital Earth (DDE) Big Science Program (No. GJ-C03-SGF-2025-004), the National Natural Science Foundation of China (No. 42394063), and the Sichuan Science and Technology Program (No. 2025ZNSFSC0325).
Abstract: Topographic maps, essential tools and information sources for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution of, and differences among, various surface features. Topographic maps are currently stored mainly in raster and vector formats. Extraction of spatio-temporal knowledge from maps (such as spatial distribution patterns, feature relationships, and dynamic evolution) still relies primarily on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence, it has become possible to automate map knowledge interpretation. The present study therefore proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven techniques in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and features, thereby ensuring interpretation accuracy. Deep learning techniques are then employed to extract map features automatically, and spatio-temporal knowledge is constructed by combining formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method automatically interprets spatio-temporal knowledge of geographic features in maps with an accuracy exceeding 80%. The findings contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and use of cartographic information.
Funding: Supported by the National Natural Science Foundation of China (Nos. 22379021 and 22479021).
Abstract: As batteries become increasingly essential for energy storage technologies, battery prognosis and diagnosis remain central to reliable operation and effective management, and aid in-depth investigation of degradation mechanisms. However, dynamic operating conditions, cell-to-cell inconsistencies, and limited labeled data pose significant challenges to accurate and robust prognosis and diagnosis. Herein, we introduce a time-series-decomposition-based ensembled lightweight learning model (TELL-Me), which employs a synergistic dual-module framework for accurate and reliable forecasting. The feature module formulates features with physical implications and sheds light on battery aging mechanisms, while the gradient module monitors capacity degradation rates and captures the aging trend. TELL-Me achieves high accuracy in end-of-life prediction using minimal historical data from a single battery, without requiring an offline training dataset, and demonstrates impressive generality and robustness across operating conditions and battery types. Additionally, by correlating feature contributions with degradation mechanisms across different datasets, TELL-Me gains a diagnostic ability that not only enhances prediction reliability but also provides critical insights for the design and optimization of next-generation batteries.
Abstract: Dear Editor, I am writing in response to Jamil's letter, "Interpretative Challenges of the 'Missing Perilymph' Sign in PLF Diagnosis." I concur with the author's emphasis on the necessity for cautious interpretation of low-signal areas as evidence of active perilymph leakage, requiring correlation with clinical findings, surgical confirmation, and longitudinal imaging changes.
Funding: Co-supported by the National Natural Science Foundation of China (No. 62001507), the Youth Talent Lifting Project of the China Association for Science and Technology (No. 2021-JCJQ-QT-018), the Program of the Youth Innovation Team of Shaanxi Universities, and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2023-JC-YB-491).
Abstract: Deep learning (DL) models have been widely used in synthetic aperture radar automatic target recognition (SAR-ATR) and have achieved excellent performance. However, the black-box nature of DL models has drawn criticism, especially in SAR-ATR, which is closely tied to the national defense and security domain. To address these issues, a new interpretable recognition model, Physics-Guided BagNet (PGBN), is proposed in this article. The model adopts an interpretable convolutional neural network framework and uses time-frequency analysis to extract physical scattering features in SAR images. Based on these features, an unsupervised segmentation method is proposed to distinguish targets from the background in SAR images. On the basis of the segmentation result, a structure is designed that constrains the model's spatial attention to focus on the targets themselves rather than the background, making the model's decisions more consistent with physical principles. In contrast to previous interpretability methods, this model combines interpretable structure with physical interpretability, further reducing the risk of misrecognition. Experiments on the MSTAR dataset verify that the PGBN model exhibits excellent interpretability and recognition performance, and comparative experiments with heatmaps indicate that the proposed physical feature guidance module constrains the model to focus on the target itself rather than the background.
Funding: Supported by the USST Construction Project of English-taught Courses for International Students in 2024, the Key Course Construction Project in Universities of Shanghai in 2024, and the USST Teaching Achievement Award (Postgraduate) Cultivation Project in 2024.
Abstract: Based on 1,003 articles on empirical research in interpreting teaching from 2002 to 2022, retrieved from the China National Knowledge Infrastructure, this paper identifies three main research methods, uncovering common problems in interpreting education and practical teaching suggestions: (1) Corpus-based studies collect large numbers of recordings to study typical mistakes made by interpreting learners, particularly pauses and self-repair, and suggest that interpreting teaching improve learners' ability to use language chunks and encourage students to interpret fluently; (2) Questionnaire surveys help clarify the requirements for professional interpreters and how interpreting teaching meets market demands; (3) Teaching experiments lasting one to two semesters address issues such as outdated teaching materials and modes, and show how both can integrate modern technology. However, empirical research still needs to build new corpora, including professional interpreters' corpora, and to address problems that have not been adequately discussed. This paper is helpful for improving interpreting education in China and other countries and for clarifying the tasks still to be fulfilled in empirical research on interpreting education.
Funding: Funded by the Research Platforms and Projects for Higher Education Institutions of the Department of Education of Guangdong Province in 2024 (No. 2024KTSCX256) and the 2023 Guangdong Province Higher Vocational Education Teaching Quality and Teaching Reform Project (No. 2023JG080).
Abstract: The potential toxicity of ionic liquids (ILs) limits their applications; how to control this toxicity is one of the key issues in their use. To understand the structure-toxicity relationship and promote greener application, six machine learning algorithms, including Bagging, Adaptive Boosting (AdaBoost), Gradient Boosting (GBoost), Stacking, Voting, and Categorical Boosting (CatBoost), are established to model the toxicity of ILs on four distinct datasets: the leukemia rat cell line IPC-81, acetylcholinesterase (AChE), Escherichia coli (E. coli), and Vibrio fischeri. Molecular descriptors obtained from the simplified molecular-input line-entry system (SMILES) are used to characterize the ILs. All models are assessed by mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and the coefficient of determination (R²). Additionally, an interpretation model based on SHapley Additive exPlanations (SHAP) is built to determine the positive and negative effect of each molecular feature on toxicity. With additional parameters and complexity, the CatBoost model outperforms the other models, making it the more reliable model for IL toxicity prediction. The interpretation results indicate that the most significant positive features (SMR_VSA5, PEOE_VSA8, Kappa2, PEOE_VSA6, and EState_VSA1) increase the toxicity of ILs as their values rise, while the most significant negative features (VSA_EState7, EState_VSA8, PEOE_VSA9, and FpDensityMorgan1) decrease toxicity as their values rise. An IL's toxicity also grows with its average molecular weight and number of pyridine rings, whereas it decreases as the number of hydrogen bond acceptors increases. These findings offer a theoretical foundation for rapid screening and synthesis of environmentally benign ILs.
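SHAP values efficiently approximate classical Shapley values. For intuition, the exact (exponential-time) computation can be written directly for a tiny model; the additive "toxicity" model, feature values, and baseline below are purely hypothetical:

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Average marginal contribution of each feature over all orderings,
    where 'absent' features are held at the baseline value."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = model(current)
        for i in order:          # reveal features one by one in this order
            current[i] = x[i]
            val = model(current)
            phi[i] += val - prev  # marginal contribution of feature i
            prev = val
    return [p / len(perms) for p in phi]

# Toy additive model: for additive models, Shapley recovers each term exactly.
model = lambda f: 2.0 * f[0] - 1.0 * f[1] + 0.5 * f[2]
phi = shapley_values(model, x=[1.0, 2.0, 4.0], baseline=[0.0, 0.0, 0.0])
# -> [2.0, -2.0, 2.0]; the values sum to model(x) - model(baseline)
```

Tree-based SHAP, as used with CatBoost-style models, computes the same quantity in polynomial time by exploiting the tree structure.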
Funding: Supported by the NSFC-FDCT Grant 62361166662, the National Key R&D Program of China (2023YFC3503400, 2022YFC3400400), the Innovative Research Group Project of Hunan Province (2024JJ1002), the Key R&D Program of Hunan Province (2023GK2004, 2023SK2059, 2023SK2060), the Top 10 Technical Key Project in Hunan Province (2023GK1010), the Key Technologies R&D Program of Guangdong Province (2023B1111030004 to FFH), and the Funds of the National Supercomputing Center in Changsha.
Abstract: 1 Introduction. According to the World Health Organization, heart disease has been the leading cause of death worldwide for the past 20 years. Electrocardiography (ECG or EKG) records the electrophysiological activity of the heart over time, allowing accurate diagnoses by clinicians [1]. Despite the relative simplicity of ECG acquisition, its interpretation requires extensive training. Manual examination and re-examination of ECG paper records can be time-consuming, potentially delaying diagnosis. Machine learning, which uses algorithms to identify patterns within data and make predictive analyses, has played a significant role in interpreting ECGs [2].
Funding: Supported by the Research Grants Council of Hong Kong (CityU 11305919 and 11308620), the NSFC/RGC Joint Research Scheme (N_CityU104/19), and the Hong Kong Research Grants Council Collaborative Research Funds C1002-21G, C1017-22G, and C6021-19E.
Abstract: Developing machine learning frameworks with predictive power, interpretability, and transferability is crucial yet challenging in electrocatalysis. To achieve this, we employed rigorous feature engineering to establish a finely tuned gradient boosting regressor (GBR) model, which adeptly captures the physical complexity in the mapping from feature space to target variables. Global and local explanations show that environmental electron effects and atomic number largely govern the success of this mapping. The finely tuned GBR model exhibits exceptional robustness in predicting CO adsorption energies (average R² = 0.937, RMSE = 0.153 eV). Moreover, the model demonstrates remarkable transfer learning ability, with excellent predictive power for OH, NO, and N₂ adsorption. Importantly, the GBR model remains predictive across an extensive search space, demonstrating profound adaptability and versatility. Our research framework significantly enhances the interpretability and transferability of machine learning in electrocatalysis, offering vital insights for further advances.
Funding: Supported by the Hubei Three Gorges Laboratory Open Innovation Fund Project (SC231002) and the project "CFD Simulation to Explore the Mass and Heat Transfer Laws of Thermal Decomposition of Mixed Salt Organic Compounds" (2021YFC 3201404).
Abstract: Low-temperature hydrogenation of silicon tetrachloride (STC) is an essential step in polysilicon production. Adding CuCl to silicon powder is a commonly used catalytic method, with the silicon powder acting as both reactant and catalyst. However, the reaction mechanism and structure-activity relationship of this process have not been fully elucidated. In this work, the reaction mechanism in the presence of Si and of Cu₃Si was studied comprehensively using density functional theory (DFT) combined with experiments. The results indicate that the rate-determining step (RDS) in the presence of Si is the phase transition of the Si atom, whereas the RDS in the presence of Cu₃Si is the TCS-generation step. The activation barrier of the latter is smaller, highlighting that the interaction of Si with the bulk phase is the pivotal factor influencing catalytic activity. The feasibility of transition metal doping to facilitate this step was further investigated. The Si disengage energy (E_d) was used as a quantitative parameter to assess the catalytic activity of the catalysts, and the optimal descriptor was determined through interpretable machine learning. The d-band center and electron transfer were shown to play crucial roles in regulating E_d. This work reveals the mechanism and structure-activity relationship of the low-temperature hydrogenation of STC and provides a basis for the rational design of catalysts.
Funding: Supported by the CCF-NSFOCUS 'Kunpeng' Research Fund (CCF-NSFOCUS2024012).
Abstract: In recent years, with the rapid development of software systems, the continuous expansion of software scale and the increasing complexity of systems have produced a growing number of software metrics. Defect prediction methods based on software metrics rely heavily on metric data; however, redundant metric data hinders efficient defect prediction, posing severe challenges to current software defect prediction tasks. To address these issues, this paper focuses on the rational clustering of software metric data. First, multiple software projects are evaluated to determine a preset number of clusters for the software metrics, and various clustering methods are employed to cluster the metric elements. A co-occurrence matrix is then designed to quantify how often metrics appear in the same category. Based on the combined results, the software metric data are divided into two semantic views containing different metrics, thereby analyzing the semantic information behind the software metrics. On this basis, the paper also analyzes in depth the impact of the different semantic views on defect prediction results, as well as the performance of various classification models under these views. Experiments show that jointly using the two semantic views significantly improves model performance in software defect prediction, providing a new understanding and approach at the semantic-view level for defect prediction research based on software metrics.
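The co-occurrence matrix described above can be sketched simply: count, across clustering runs, how often two metrics receive the same cluster label. The metric names and toy runs below are hypothetical:

```python
def co_occurrence(runs, metrics):
    """runs: list of {metric: cluster_label} dicts, one per clustering method.
    Returns an n x n matrix counting how often each metric pair shares a cluster."""
    idx = {m: i for i, m in enumerate(metrics)}
    n = len(metrics)
    mat = [[0] * n for _ in range(n)]
    for labels in runs:
        for a in metrics:
            for b in metrics:
                if labels[a] == labels[b]:
                    mat[idx[a]][idx[b]] += 1
    return mat

metrics = ["loc", "cyclomatic", "fan_in"]
runs = [
    {"loc": 0, "cyclomatic": 0, "fan_in": 1},  # e.g. one clustering method
    {"loc": 0, "cyclomatic": 0, "fan_in": 0},  # e.g. another method
]
mat = co_occurrence(runs, metrics)
# loc and cyclomatic co-occur in both runs; loc and fan_in in only one
```

Metrics with consistently high pairwise counts would then be grouped into the same semantic view.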
Funding: Supported by the Zhejiang Provincial Natural Science Foundation of China (No. LY21D060003), the Project of the State Key Laboratory of Satellite Ocean Environment Dynamics, Second Institute of Oceanography, MNR (No. SOEDZZ2103), the National Natural Science Foundation of China (No. 42076216), and the Open Research Fund of the Key Laboratory of Marine Ecological Monitoring and Restoration Technologies, MNR (No. MEMRT202210).
Abstract: Monitoring data from three ecological buoys in the coastal region of Wenzhou, Zhejiang Province, from 2016 to 2022, together with a European Centre for Medium-Range Weather Forecasts dataset, were examined to clarify the relationship between variations in ecological parameters during spring algal bloom events and the associated changes in temperature and wind fields. A long short-term memory recurrent neural network was employed, and a predictive model for spring algal blooms in this region was developed. The model integrated various inputs, including temperature, wind speed, and other pertinent variables, with chlorophyll concentration serving as the primary output indicator. Model training used chlorophyll concentration data supplemented by reanalysis and forecast temperature and wind field data. The model proved capable of forecasting next-day chlorophyll concentrations and assessing the likelihood of spring algal bloom occurrence against a defined chlorophyll concentration threshold. Historical validation over 2016-2019 corroborated the model's accuracy with an 81.71% probability of correct prediction, further confirmed by its correct prediction of two spring algal bloom events in late April and early May 2023. An interpretable machine-learning-based spring algal bloom prediction model that forecasts effectively with limited data was thus established through detailed analysis of the bloom mechanism and careful selection of input variables. The insights gained offer valuable contributions to early warning systems for spring algal blooms in the Wenzhou coastal area of Zhejiang Province.
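Only the final alerting step lends itself to a compact sketch; the paper's forecaster is an LSTM, not reproduced here. Below, a naive one-step exponential-smoothing forecast stands in for the network, and the bloom threshold is illustrative, not the study's calibrated value:

```python
def forecast_next(series, alpha=0.5):
    """Simple exponential smoothing: the smoothed level after the last
    observation serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def bloom_alert(series, threshold=10.0):
    """Flag a possible spring algal bloom when the next-day chlorophyll
    forecast crosses the threshold (threshold value is hypothetical)."""
    return forecast_next(series) >= threshold

chl = [4.0, 6.0, 9.0, 14.0]  # toy chlorophyll series, ug/L
alert = bloom_alert(chl)     # forecast 10.5 >= 10.0 -> alert
```

The study's pipeline replaces `forecast_next` with the trained LSTM driven by temperature and wind inputs; the thresholding logic is the same.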
Abstract: Cultural Relics World, Issue 1, 2025: Research on Zodiac Snake Cultural Artifacts. With the "Spring Festival, a Social Practice of the Chinese People Celebrating the Traditional New Year" successfully inscribed on UNESCO's Representative List of the Intangible Cultural Heritage of Humanity, the traditional Chinese festival symbolized by the zodiac has an increasingly extensive influence in the world. The snake, one of the early totems of the Chinese nation, ranks sixth among the twelve zodiac signs, corresponding to "Si" among the twelve terrestrial branches. This issue launches the special topic "Research on Zodiac Snake Cultural Artifacts," compiling ten articles that explore the types, characteristics, origins, and evolution of snake cultural relics from multiple perspectives, interpreting their cultural connotations and the changes in the spiritual beliefs and ideas they reflect.
Funding: Supported in part by the Science Center for Gas Turbine Project (Project No. P2022-DC-I-003-001) and the National Natural Science Foundation of China (Grant No. 52275130).
Abstract: Despite significant progress in the Prognostics and Health Management (PHM) domain from systems that learn patterns from data, machine learning (ML) still faces challenges of limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enriching the model with additional pattern information. In this paper, we review the latest developments in PHM encapsulated under the concept of Knowledge-Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, covering scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap to specific usage. Furthermore, we present several case studies illustrating specific implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing, and we analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFA1405300 (PZ)) and the Innovation Program for Quantum Science and Technology (Grant No. 2023ZD0300700).
Abstract: The Husimi function (Q-function) of a quantum state is the distribution function of the density operator in the coherent state representation. It is widely used in theoretical research, such as in quantum optics. The Wehrl entropy is the Shannon entropy of the Husimi function and is nonzero even for pure states; it has been extensively studied in mathematical physics. Recent research also suggests a significant connection between the Wehrl entropy and many-body quantum entanglement in spin systems. We investigate the statistical interpretation of the Husimi function and the Wehrl entropy, taking a system of N spin-1/2 particles as an example. Owing to the completeness of coherent states, the Husimi function and Wehrl entropy can be explained via positive operator-valued measure (POVM) theory, although the coherent states do not form an orthonormal basis. Here, with the help of Bayes' theorem, we provide an alternative probabilistic interpretation of the Husimi function and the Wehrl entropy. This interpretation is based on direct measurements of the system, and thus does not require the introduction of an ancillary system as in POVM theory. Moreover, under this interpretation the classical correspondences of the Husimi function and the Wehrl entropy are simply the phase-space probability distribution function of N classical tops and its associated entropy, respectively. This explanation therefore contributes to a better understanding of the relationship between the Husimi function, the Wehrl entropy, and classical-quantum correspondence. The generalization of this statistical interpretation to continuous-variable systems is also discussed.
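For reference, the standard single-mode (bosonic) definitions behind the quantities discussed above; the paper's spin-coherent-state versions replace |α⟩ and the phase-space measure accordingly:

```latex
% Husimi Q-function of a density operator \hat\rho in the coherent-state basis:
Q_{\hat\rho}(\alpha) = \frac{1}{\pi}\,\langle \alpha | \hat\rho | \alpha \rangle ,
\qquad
\int Q_{\hat\rho}(\alpha)\,\mathrm{d}^2\alpha = 1 ,
% normalization follows from the completeness relation of coherent states:
\frac{1}{\pi}\int |\alpha\rangle\langle\alpha|\,\mathrm{d}^2\alpha = \hat{1} .
% Wehrl entropy: the Shannon (differential) entropy of Q, nonzero even for pure states:
S_W = -\int Q_{\hat\rho}(\alpha)\,\ln Q_{\hat\rho}(\alpha)\,\mathrm{d}^2\alpha .
```

Because Q is a genuine probability density over phase space, S_W admits the classical-entropy reading the abstract describes.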
Abstract: "As a director, I hope that the audience will see a movie in cinemas instead of watching three-minute commentaries, about which I'm really speechless," internationally renowned film director Zhang Yimou said in a recent interview. Zhang said he believes the ritual and immersive feeling of watching movies in the cinema cannot be replaced by watching movie commentaries online. He called on audiences to go to the cinema and experience in person the audio-visual feast delivered by the big screen. Zhang's views on so-called "three-minute movies" have become a trending topic on social media. The genre, in which content creators on short-video platforms produce summarized versions of films, also includes the creators' own observations. The rise of these platforms has lowered barriers to entry for aspiring movie content creators, giving rise not only to high volume and great diversity but also to large discrepancies in quality. Moreover, this content sometimes infringes the copyrights of the movies it features, while also misinterpreting their plots and themes.