Topographic maps, as essential tools and sources of information for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution and differences of various surface features. Currently, topographic maps are mainly stored in raster and vector formats. Extraction of the spatio-temporal knowledge in the maps, such as spatial distribution patterns, feature relationships, and dynamic evolution, still relies primarily on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence technology, it is now possible to improve the automation of map knowledge interpretation. Therefore, the present study proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven intelligent techniques in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and map features, thereby ensuring interpretation accuracy. Subsequently, deep learning techniques are employed to extract map features automatically, and spatio-temporal knowledge is constructed by combining these features with the formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method effectively achieves automatic interpretation of the spatio-temporal knowledge of geographic features in maps, with an accuracy exceeding 80%. These findings contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and utilization of cartographic information.
The potential toxicity of ionic liquids (ILs) affects their applications; how to control this toxicity is one of the key issues in their use. To understand the toxicity-structure relationship of ILs and promote their greener application, six machine learning algorithms, including Bagging, Adaptive Boosting (AdaBoost), Gradient Boosting (GBoost), Stacking, Voting, and Categorical Boosting (CatBoost), are established to model the toxicity of ILs on four distinct datasets: the leukemia rat cell line IPC-81, acetylcholinesterase (AChE), Escherichia coli (E. coli), and Vibrio fischeri. Molecular descriptors obtained from the simplified molecular input line entry system (SMILES) are used to characterize the ILs. All models are assessed by the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and correlation coefficient (R²). Additionally, an interpretation model based on SHapley Additive exPlanations (SHAP) is built to determine the positive and negative effect of each molecular feature on toxicity. With additional parameters and complexity, the CatBoost model outperforms the other models, making it a more reliable model for predicting the toxicity of ILs. The interpretation results indicate that the most significant positive features, SMR_VSA5, PEOE_VSA8, Kappa2, PEOE_VSA6, and EState_VSA1, increase the toxicity of ILs as their levels rise, while the most significant negative features, VSA_EState7, EState_VSA8, PEOE_VSA9, and FpDensityMorgan1, decrease the toxicity as their levels rise. Furthermore, an IL's toxicity grows as its average molecular weight and number of pyridine rings increase, whereas it decreases as the number of hydrogen bond acceptors increases. These findings offer a theoretical foundation for the rapid screening and synthesis of environmentally benign ILs.
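The SHAP analysis described in this abstract attributes a model's prediction to individual molecular descriptors via Shapley values. As a minimal illustration of the underlying idea (not the authors' pipeline), exact Shapley values can be computed for a toy model by enumerating feature coalitions; the descriptor names and the additive toy model below are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley attribution: weighted average of each feature's
    marginal contribution over all coalitions of the other features."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(set(S) | {f}) - value_fn(set(S)))
        phi[f] = total
    return phi

# Toy "toxicity model": the value of a coalition is the prediction using
# only those descriptors (hypothetical additive form with one interaction).
def toy_model(S):
    v = 0.0
    if "mol_weight" in S:
        v += 2.0   # heavier ILs predicted more toxic
    if "h_bond_acceptors" in S:
        v -= 1.0   # more hydrogen bond acceptors predicted less toxic
    if "mol_weight" in S and "pyridine_rings" in S:
        v += 0.5   # interaction term
    return v

phi = shapley_values(["mol_weight", "h_bond_acceptors", "pyridine_rings"], toy_model)
```

By the efficiency property, the attributions sum to the full-model prediction minus the empty-coalition baseline; practical tools such as KernelSHAP approximate this enumeration, which is exponential in the number of features.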
Based on the Many Worlds Interpretation, I describe reality as a multilayer spacetime, where parallel layers play the role of alternative timelines. I link physics to ethics, arguing that one's moral choices shape one's course in the multiverse. I consider one's ethical decisions as decoherence events, leading to movement between alternative timelines, toward lighter (higher) or heavier (lower) realities. Sometimes in one's curvilinear path in spacetime, one can even experience falling toward lower layers, slipping through wormholes. This theory supports free will and the simulation hypothesis. With this background, I explore the idea that a new theory of gravity might open new possibilities to shape matter and change our worldview through the invention of new technology, transforming information into waves and then into solid matter, paving the way for a new Multiverse Aeon for humanity.
Accurate determination of rockhead is crucial for underground construction. Traditionally, borehole data are mainly used for this purpose. However, borehole drilling is costly, time-consuming, and sparsely distributed. Non-invasive geophysical methods, particularly those using passive seismic surface waves, have emerged as viable alternatives for geological profiling and rockhead detection. This study proposes three interpretation methods for rockhead determination using passive seismic surface wave data from Microtremor Array Measurement (MAM) and Horizontal-to-Vertical Spectral Ratio (HVSR) tests. These are: (1) the wavelength-normalized phase velocity (WN) method, in which a nonlinear relationship between rockhead depth and wavelength is established; (2) the statistically determined shear wave velocity (SD-Vs) method, in which the representative Vs value for rockhead is determined automatically using a statistical method; and (3) the empirical HVSR method, in which the rockhead is determined by interpreting resonant frequencies using a reliably calibrated empirical equation. These methods were implemented to determine rockhead depths at 28 locations across two distinct geological formations in Singapore, and the results were evaluated against borehole data. The WN method determines rockhead depths accurately and reliably with minimal absolute errors (average RMSE = 3.11 m), demonstrating robust performance across both geological formations. Its advantage lies in interpreting dispersion curves alone, without the need for an inversion process. The SD-Vs method is practical in engineering practice owing to its simplicity. The empirical HVSR method determines rockhead depths with moderate accuracy, benefiting from a reliably calibrated empirical equation.
The Husimi function (Q-function) of a quantum state is the distribution function of the density operator in the coherent state representation. It is widely used in theoretical research, such as in quantum optics. The Wehrl entropy is the Shannon entropy of the Husimi function, and is nonzero even for pure states. This entropy has been extensively studied in mathematical physics. Recent research also suggests a significant connection between the Wehrl entropy and many-body quantum entanglement in spin systems. We investigate the statistical interpretation of the Husimi function and the Wehrl entropy, taking a system of N spin-1/2 particles as an example. Owing to the completeness of coherent states, the Husimi function and Wehrl entropy can be explained via positive operator-valued measurement (POVM) theory, even though the coherent states do not form an orthonormal basis. Here, with the help of Bayes' theorem, we provide an alternative probabilistic interpretation of the Husimi function and the Wehrl entropy. This interpretation is based on direct measurements of the system, and thus does not require the introduction of an ancillary system as in POVM theory. Moreover, under this interpretation the classical correspondences of the Husimi function and the Wehrl entropy are simply the phase-space probability distribution function of N classical tops and its associated entropy, respectively. This explanation therefore contributes to a better understanding of the relationship between the Husimi function, the Wehrl entropy, and the classical-quantum correspondence. The generalization of this statistical interpretation to continuous-variable systems is also discussed.
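For reference, the single-mode bosonic definitions behind these quantities (standard textbook forms, not the authors' spin-coherent-state construction) are:

```latex
Q(\alpha) = \frac{1}{\pi}\,\langle \alpha | \hat{\rho} | \alpha \rangle,
\qquad
S_W = -\int \mathrm{d}^2\alpha \; Q(\alpha)\,\ln Q(\alpha).
```

Completeness of the coherent states guarantees that Q integrates to 1, so it behaves as a normalized phase-space density even though the coherent states are non-orthogonal.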
Dynamic stress adjustment in deep-buried, high-geostress hard rock tunnels frequently triggers catastrophic failures such as rockbursts and collapses. While a comprehensive understanding of this process is critical for evaluating surrounding rock stability, its dynamic evolution is often overlooked in engineering practice. This study systematically summarizes a novel classification framework for stress adjustment types, namely stabilizing (two-zoned), shallow failure (three-zoned), and deep failure (four-zoned), each characterized by distinct stress adjustment stages. A dynamic interpretation technology system is developed based on microseismic monitoring, integrating key microseismic parameters (energy index EI, apparent stress σa, microseismic activity S), spatial clustering of seismic source parameters, and microseismic paths. This approach enables precise identification of evolutionary stages, stress adjustment types, and failure precursors, thereby elucidating the intrinsic linkage between geomechanical processes (stress redistribution) and failure risks. The study establishes criteria and procedures for identifying stress adjustment types and their associated failure risks, which were successfully applied in the Grand Canyon Tunnel of the E-han Highway to detect 50 instances of disaster risk. The findings offer valuable insights into the evolution of stress adjustment and the pinpointing of disaster risks linked to hard rock in comparable high-geostress tunnels.
The Interpretation of Nursing Guidelines for Intravenous Thrombolysis in Acute Ischemic Stroke offers comprehensive recommendations across five key domains: hospital organizational management, patient condition monitoring, complication observation and management, positioning and mobility away from the bed, and quality assurance. These Guidelines encompass all phases of intravenous thrombolysis care for patients experiencing acute ischemic stroke. This article aims to elucidate the Guidelines by discussing their developmental background, the designation process, usage recommendations, and the interpretation of evolving perspectives, thereby providing valuable insights for clinical practice.
This study employs the theoretical framework of Nanosyntax to analyze the generative mechanism of the verbal ABAB reduplication pattern in Mandarin Chinese. The research characterizes ABAB reduplication as an inflectional operation involving functional projections of pluractionality and aspect. It distinguishes between event-internal and event-external pluralization, as well as inner and outer aspect, in verbal reduplication. Following the One-Function-One-Head Principle in Nanosyntax, the verbal ABAB form arises through the merging of categoryless roots that are categorized by little v, with the RED affix syncretizing multiple functional morphemes. This framework reduces the lexical burden and precisely represents the unique syntactic structure of Chinese verbal reduplication.
This study introduces a comprehensive and automated framework that leverages data-driven methodologies to address various challenges in shale gas development and production. Specifically, it harnesses the power of Automated Machine Learning (AutoML) to construct an ensemble model to predict the estimated ultimate recovery (EUR) of shale gas wells. To demystify the “black-box” nature of the ensemble model, KernelSHAP, a kernel-based approach to computing Shapley values, is utilized to elucidate the influential factors that affect shale gas production at both global and local scales. Furthermore, a bi-objective optimization algorithm, NSGA-II, is seamlessly incorporated to optimize hydraulic fracturing designs for production boost and cost control. This innovative framework addresses critical limitations often encountered in applying machine learning (ML) to shale gas production: the challenge of achieving sufficient model accuracy with limited samples, the multidisciplinary expertise required for developing robust ML models, and the need for interpretability in “black-box” models. Validation with field data from the Fuling shale gas field in the Sichuan Basin substantiates the framework's efficacy in enhancing the precision and applicability of data-driven techniques. The test accuracy of the ensemble ML model reached 83%, compared with a maximum of 72% for single ML models. The contribution of each geological and engineering factor to overall production was quantitatively evaluated. Fracturing design optimization raised EUR by 7%-34% under different production and cost tradeoff scenarios. These results empower domain experts to conduct more precise and objective data-driven analyses and optimizations for shale gas production with minimal expertise in data science.
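The bi-objective optimization step described here rests on Pareto dominance: a fracturing design dominates another if it is no worse on both objectives (EUR, cost) and strictly better on at least one. A minimal non-dominated filter, a sketch of the first step of NSGA-II's sorting with hypothetical (EUR, cost) tuples, might look like:

```python
def dominates(a, b):
    """True when design a dominates b, maximizing EUR (index 0)
    and minimizing cost (index 1)."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(designs):
    """Return the designs dominated by no other design (the first front)."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical (EUR, cost) pairs for candidate fracturing designs
designs = [(1.2, 30.0), (1.5, 45.0), (1.1, 50.0), (1.5, 40.0)]
front = pareto_front(designs)
```

The full NSGA-II additionally ranks successive fronts and uses crowding distance to keep the front well spread, which is how the different production/cost tradeoff scenarios in the abstract are obtained.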
The Agadem block is an area of major oil interest located in the large sedimentary basin of Termit, in the south-east of the Republic of Niger. Since the 1950s, this basin has been the subject of geological and geophysical research activities. However, despite the extensive research carried out, a geophysical contribution in terms of magnetic properties and their implications for the structure of the Agadem block is essential to improve existing knowledge. The present study aims to investigate the structural characteristics of the Agadem block associated with magnetic anomalies. To this end, after data preparation, several filtering techniques were applied to the aeromagnetic data to identify and map deep geological structures. The reduction-to-the-pole map shows long-wavelength negative anomalies in the southeast half of the block and short-wavelength positive anomalies in the northwest part, embedded in a large positive anomaly occupying the lower northern half of the block. The maps of the total horizontal derivative and tilt angle show lineaments globally distributed along the NW-SE direction, in accordance with the structural style of the study area. The resulting map highlights numerous lineaments that may be associated with faults hidden by the sedimentary cover. Euler deconvolution allowed us to locate magnetic sources and estimate their depths, which reach up to 4000 m. The compilation of the results allowed us to locate zones of high and low intensity, which correspond respectively to horsts and grabens, the major structures of the Agadem block.
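The Euler deconvolution step used for the depth estimates above rests on Euler's homogeneity equation, which, in its standard form (not specific to this survey), relates the measured field T, its gradients, the source position (x₀, y₀, z₀), a background level B, and a structural index N:

```latex
(x - x_0)\frac{\partial T}{\partial x}
+ (y - y_0)\frac{\partial T}{\partial y}
+ (z - z_0)\frac{\partial T}{\partial z}
= N\,(B - T)
```

Solving this overdetermined system in a moving window over the gridded anomaly yields source positions, and hence depth estimates such as the up-to-4000 m values reported in the abstract.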
To predict the endpoint carbon content and temperature in the basic oxygen furnace (BOF), the industrial parameters of BOF steelmaking are taken as input values. First, a series of preprocessing steps, including the Pauta criterion, hierarchical clustering, and principal component analysis, were performed on the original data. Second, the prediction results of classic machine learning models, namely ridge regression, support vector machine, gradient boosting regression (GBR), random forest regression, back-propagation (BP) neural network, and multi-layer perceptron (MLP), were compared before and after data preprocessing. An improved model was then established based on the improved sparrow algorithm and BP using tent chaotic mapping (CSSA-BP). The CSSA-BP model showed the best performance for endpoint carbon prediction among the seven models, with the lowest mean absolute error (MAE) and root mean square error (RMSE) values of 0.01124 and 0.01345 mass%, respectively. It also achieved the lowest MAE and RMSE values for endpoint temperature prediction, 8.9839 and 10.9321 °C, respectively. Furthermore, the CSSA-BP and GBR models have the smallest error fluctuation range in both endpoint carbon content and temperature predictions. Finally, to improve the interpretability of the model, SHapley Additive exPlanations (SHAP) was used to analyze the results.
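The MAE and RMSE used to rank the seven models above are the usual definitions; a small illustrative helper (not the authors' code, and the temperature values below are hypothetical) shows the computation:

```python
from math import sqrt

def mae(y_true, y_pred):
    """Mean absolute error: average of |actual - predicted|."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean square error: sqrt of the average squared residual."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Hypothetical endpoint-temperature samples (degrees C)
actual    = [1650.0, 1662.0, 1641.0, 1655.0]
predicted = [1648.0, 1660.0, 1650.0, 1651.0]
errors = (mae(actual, predicted), rmse(actual, predicted))
```

Because RMSE squares the residuals, it penalizes occasional large misses more heavily than MAE, which is why the two metrics together characterize both typical error and error spread.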
The “First Multidisciplinary Forum on COVID-19,” as one of the pivotal medical forums organized by China during the initial outbreak of the pandemic, garnered significant attention from numerous countries worldwide. Ensuring the seamless execution of the forum's translation demanded exceptionally high standards of its simultaneous interpreters. This paper, through the lens of the Translation as Adaptation and Selection theory within Eco-Translatology, conducts an analytical study of the live simultaneous interpretation at the First Multidisciplinary Forum on COVID-19. It examines the interpreters' adaptations and selections across multiple dimensions, namely linguistic, cultural, and communicative, with the aim of elucidating the guiding role and recommendations that Eco-Translatology can offer to simultaneous interpretation. Furthermore, it seeks to provide insights that may enhance the quality of interpreters' oral translations.
In recent years, deep learning has been widely applied in synthetic aperture radar (SAR) image processing. However, the collection of large-scale labeled SAR images is challenging and costly, and classification accuracy is often poor when only limited SAR images are available. To address this issue, we propose a novel framework for sparse SAR target classification under few-shot conditions, termed the transfer learning-based interpretable lightweight convolutional neural network (TL-IL-CNN). Additionally, we employ enhanced gradient-weighted class activation mapping (Grad-CAM) to mitigate the “black box” effect often associated with deep learning models and to explore the mechanisms by which a CNN classifies various sparse SAR targets. Initially, we apply a novel bidirectional iterative soft thresholding (BiIST) algorithm to generate sparse images of superior quality compared to those produced by traditional matched filtering (MF) techniques. Subsequently, we pretrain multiple shallow CNNs on a simulated SAR image dataset. Using the sparse SAR dataset as input to the CNNs, we assess the efficacy of transfer learning in sparse SAR target classification and propose the integration of TL-IL-CNN to further enhance classification accuracy. Finally, Grad-CAM is utilized to provide visual explanations for the predictions made by the classification framework. The experimental results on the MSTAR dataset reveal that the proposed TL-IL-CNN achieves nearly 90% classification accuracy with only 20% of the training data required under standard operating conditions (SOC), surpassing typical deep learning methods such as the vision Transformer (ViT) in the small-sample setting. Remarkably, it performs even better under extended operating conditions (EOC). Furthermore, the application of Grad-CAM elucidates the CNN's differentiation process among various sparse SAR targets. The experiments indicate that the model focuses on the target, and its attention to the background can differ among target classes. The study contributes to an enhanced understanding of the interpretability of such results and enables us to infer the classification outcomes for each category more accurately.
We regret that a data error occurred owing to the large number of samples tested. The correct data and figure are as follows. This correction has no impact on the remainder of the manuscript, the interpretation of the data, or the conclusions reached. The authors apologize for any inconvenience caused.
The establishment of a sound science and technology ethics governance system is an inevitable requirement for national modernization. Faced with the development of human gene technology and the disorder in research activities, the ethical standards and legal positioning of human gene research activities urgently need to be clarified. The human rights ethics view has value inclusiveness and value fundamentality, and includes three levels of connotation: the content dimension, the relationship dimension, and the obligation dimension. It should serve as the ethical standard for human gene research activities. Based on the provisions of China's Constitution, the human rights ethics view on human gene research, as a constitutional ethics view, can elucidate different levels of rights content, such as human dignity, life and health, and research freedom. It also addresses the weighing of conflicts between basic rights and the dual public and private nature of obligation subjects. Relying on the constitutional value embedding of the research ethics view to form ethical consensus, improving ethical review through framework legislation for human rights interests, and implementing ethical responsibility through the human rights-oriented interpretation of ethical legal norms are the three pathways to realizing the human rights ethics view on human gene research.
Ecological water supplement projects have been implemented in many coastal wetlands, influencing saltmarsh vegetation restoration by altering tidal creek development. To clarify the effectiveness of ecological water supplement on tidal creek development and saltmarsh vegetation restoration, a time series of high-resolution remote sensing images from 2000 to 2021 was used to extract and analyze tidal creeks and saltmarsh vegetation in the Diaokou Estuary Reserve. The results are summarized as follows: (1) All tidal creek indices except curvature exhibited a significant linear increase with time, while curvature initially decreased and then increased. The 5-year cumulative ecological water supplement volume was significantly positively correlated (R² > 0.7) with all tidal creek indices. (2) During the same period, most landscape pattern metrics of Phragmites australis increased, while those of Tamarix chinensis initially increased and then declined. Although the class area (CA) of Suaeda salsa increased with time, its perimeter-area fractal dimension (PAFRAC) and aggregation index (AI) fluctuated. (3) The CA of P. australis and S. salsa was significantly positively correlated with most tidal creek indices except curvature, while that of T. chinensis showed no significant correlation with any tidal creek index. Moreover, the AI and PAFRAC of P. australis were significantly positively correlated with the fractal dimension and the frequency/curvature of tidal creeks, respectively, while those of S. salsa and T. chinensis exhibited no significant correlation. In summary, the ecological water supplement project enhanced tidal creek development in coastal wetlands, promoting the restoration of P. australis and S. salsa, while having little impact on the restoration of T. chinensis.
BACKGROUND A recently developed method enables automated measurement of the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA) from weightbearing foot radiographs. This approach employs bone segmentation to identify anatomical landmarks and provides standardized angle measurements based on established guidelines. While effective for HVA and IMA, preoperative radiograph analysis remains complex and requires additional measurements, such as the hallux interphalangeal angle (IPA), which has received limited research attention. AIM To extend the previous method, which measured HVA and IMA, by incorporating the automatic measurement of IPA and evaluating its accuracy and clinical relevance. METHODS A preexisting database of manually labeled foot radiographs was used to train a U-Net neural network for segmenting bones and identifying the landmarks necessary for IPA measurement. Of the 265 radiographs in the dataset, 161 were selected for training and 20 for validation. The U-Net neural network achieved a high mean Sørensen-Dice index (>0.97). The remaining 84 radiographs were used to assess the reliability of automated IPA measurements against those taken manually by two orthopedic surgeons (OA and OB) using computer-based tools. Each measurement was repeated to assess intraobserver (OA1 and OA2) and interobserver (OA2 and OB) reliability. Agreement between the automated and manual methods was evaluated using the intraclass correlation coefficient (ICC), and Bland-Altman analysis identified systematic differences. The standard error of measurement (SEM) and Pearson correlation coefficients quantified precision and linearity, and measurement times were recorded to evaluate efficiency. RESULTS The artificial intelligence (AI)-based system demonstrated excellent reliability, with ICC(3,1) values of 0.92 (AI vs OA2) and 0.88 (AI vs OB), both statistically significant (P < 0.001). For manual measurements, ICC values were 0.95 (OA2 vs OA1) and 0.95 (OA2 vs OB), supporting both intraobserver and interobserver reliability. Bland-Altman analysis revealed minimal biases of 1.61° (AI vs OA2) and 2.54° (AI vs OB), with clinically acceptable limits of agreement. The AI system also showed high precision, as evidenced by low SEM values: 1.22° (OA2 vs OB), 1.77° (AI vs OA2), and 2.09° (AI vs OB). Furthermore, Pearson correlation coefficients confirmed strong linear relationships between automated and manual measurements, with r = 0.85 (AI vs OA2) and r = 0.90 (AI vs OB). The AI method significantly improved efficiency, completing all 84 measurements 8 times faster than the manual methods, reducing the time required from an average of 36 minutes to just 4.5 minutes. CONCLUSION The proposed AI-assisted IPA measurement method shows strong clinical potential, corresponding well with manual measurements. Integrating IPA with HVA and IMA assessments provides a comprehensive tool for automated forefoot deformity analysis, supporting hallux valgus severity classification and preoperative planning, while offering substantial time savings in high-volume clinical settings.
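The Bland-Altman figures reported in this abstract (a bias with limits of agreement) come from the pairwise differences between the two methods. A minimal computation, using hypothetical IPA angles rather than the study's data, looks like:

```python
from statistics import mean, stdev

def bland_altman(m1, m2):
    """Bias (mean difference) and 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical IPA angles (degrees): automated system vs a manual observer
ai_angles     = [12.0, 9.5, 15.0, 11.0, 13.5]
manual_angles = [10.5, 8.0, 13.0, 10.0, 12.0]
bias, (lo, hi) = bland_altman(ai_angles, manual_angles)
```

A bias near zero with narrow limits indicates the two methods agree; a consistent nonzero bias, as with the 1.61° and 2.54° offsets above, signals a systematic difference that may still be clinically acceptable if the limits are tight.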
Conventional borehole image log interpretation of linear fractures in volcanic rocks, represented as sinusoids on unwrapped cylinder projections, is relatively straightforward; however, interpreting non-linear rock structures and complex facies geometries can be more challenging. To characterize the diverse volcanic paleoenvironments related to the formation of the South American continent, this study presents a new methodology based on image logs, petrography, seismic data, and outcrop analogues. The methodology uses pseudo-borehole images generated from outcrop photographs of typical igneous rock features worldwide, simulating 2D unwrapped cylinder projections of a 31 cm (12.25 in) diameter well. These synthetic images and standard outcrop photographs were used to define morphological patterns of igneous structures and facies for comparison with wireline borehole image logs from subsurface volcanic and subvolcanic units, providing a “visual scale” for the geological evaluation of volcanic facies and significantly enhancing the efficiency and reliability of identifying complex geological structures. Our analysis focused on various scales of columnar jointing and pillow lava lobes, with additional examples including pahoehoe lava, ignimbrite, hyaloclastite, and various intrusive features in the Campos, Santos, and Parnaíba basins in Brazil. This approach increases confidence in the interpretation of subvolcanic, subaerial, and subaqueous deposits. The image log interpretation, combined with regional geological knowledge, has enabled paleoenvironmental insights into the rift magmatism related to the breakup of Gondwana, with associated implications for hydrocarbon exploration.
Artificial intelligence (AI)-augmented contrast-enhanced ultrasonography (CEUS) is emerging as a powerful tool in liver imaging, particularly in enhancing the accuracy of Liver Imaging Reporting and Data System (LI-RADS) classification. This review synthesized published data on the integration of machine learning and deep learning techniques into CEUS, revealing that AI algorithms can improve the detection and quantification of contrast enhancement patterns. Such improvements led to more consistent LI-RADS categorization, reduced interoperator variability, and enabled real-time analysis that streamlined workflow. The enhanced sensitivity of AI tools facilitated better differentiation between benign and malignant lesions, ultimately optimizing patient management. These advances suggest that AI-augmented CEUS could transform liver imaging by providing rapid, reliable, and objective assessments. However, the review also highlighted the need for further large-scale, multicenter studies to fully validate these findings and ensure the safe integration of AI into routine clinical practice. INTRODUCTION: International hepatology society guidelines have established contrast-enhanced computed tomography (CT) and contrast-enhanced magnetic resonance imaging (MRI) as the imaging modalities of choice for diagnosing hepatocellular carcinoma (HCC) lesions larger than 1 cm. MRI remains the gold standard for detecting small HCC nodules in cirrhotic livers due to its superior soft-tissue contrast and functional imaging capabilities. However, early or atypical presentations remain challenging for differential diagnosis, staging, and treatment planning. In these scenarios, contrast-enhanced ultrasonography (CEUS) is a valuable second-line tool, offering real-time, radiation-free evaluation and repeatability for follow-up. A recent meta-analysis of head-to-head studies reported comparable diagnostic performance between CEUS and CT/MRI, with pooled sensitivities and specificities of 0.67/0.88 for CEUS vs 0.60/0.98 for CT/MRI in non-HCC malignancies, and similar specificities for HCC diagnosis (0.70 for CEUS vs 0.59 for CT; 0.81 for CEUS vs 0.79 for MRI) [1]. Given the limitations of individual imaging modalities, hybrid techniques and multimodal approaches are gaining traction for improving lesion detection, especially in cases where standard methods fall short. Artificial intelligence (AI) has emerged as a powerful tool in medical imaging, enhancing diagnostic accuracy and reliability across platforms. In CEUS liver imaging, dynamic enhancement patterns often challenge consistent interpretation across observers; AI holds particular promise for standardizing assessments. The growing complexity of liver tumor evaluation has also driven interest in approaches that integrate serum biomarkers with advanced imaging. However, no single strategy currently meets all the diagnostic and prognostic requirements. Recent studies highlighted the potential of AI to bridge this gap by enabling precise image interpretation and facilitating the integration of heterogeneous clinical and imaging data [2]. Altogether, the convergence of CEUS with AI and radiomics offers a dynamic, quantitative, and potentially reproducible paradigm for liver lesion assessment, complementing traditional imaging methods. This review aimed to provide an overview of current advances in AI-driven CEUS for liver lesion assessment, with a particular focus on automated LI-RADS classification, radiomics-based models, and future clinical integration. While another recent systematic review [3] provided a comprehensive analysis of AI applications in CEUS, our approach offers a targeted perspective, emphasizing LI-RADS-centered scoring, automated lesion characterization, and clinical utility, particularly in the context of HCC diagnosis and management. In the methodological process of this narrative mini-review, the literature selection was primarily based on targeted PubMed searches. ChatGPT-4o (OpenAI) [4] was employed to assist in refining query parameters and identifying relevant, up-to-date peer-reviewed sources on CEUS-based AI applications.
Formation pore pressure is the foundation of well planning and is related to the safety and efficiency of drilling operations in oil and gas development. However, the traditional method for predicting formation pore pressure involves applying post-drilling measurement data from nearby wells to the target well, which may not accurately reflect the formation pore pressure of the target well. In this paper, a novel method for predicting formation pore pressure ahead of the drill bit, embedding petrophysical theory into machine learning based on seismic and logging-while-drilling (LWD) data, is proposed. Gated recurrent unit (GRU) and long short-term memory (LSTM) models were developed and validated using data from three wells in the Bohai Oilfield, and Shapley additive explanations (SHAP) were utilized to visualize and interpret the models proposed in this study, thereby providing valuable insights into the relative importance and impact of input features. The results show that among the eight models trained in this study, almost all model prediction errors converge to 0.05 g/cm^(3), with the largest root mean square error (RMSE) being 0.03072 and the smallest RMSE being 0.008964. Moreover, continuously updating the model with the increasing training data during drilling operations can further improve accuracy. Compared to other approaches, this study accurately and precisely depicts formation pore pressure, while SHAP analysis guides effective model refinement and feature engineering strategies. This work underscores the potential of integrating advanced machine learning techniques with domain-specific knowledge to enhance predictive accuracy for petroleum engineering applications.
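The paper's implementation is not reproduced here. As a hedged sketch of the data framing that sequence models such as GRU/LSTM typically require for ahead-of-bit prediction, the snippet below (function name and curve values are hypothetical, not from the paper) pairs a fixed-length window of an LWD-style curve with the value a few steps deeper:

```python
def make_windows(series, window, horizon=1):
    """Split a 1-D log curve into (input window, target) pairs.

    Each sample pairs `window` consecutive measurements with the value
    `horizon` steps deeper, mimicking prediction ahead of the drill bit.
    """
    samples = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]          # past measurements
        y = series[i + window + horizon - 1]  # value ahead of the bit
        samples.append((x, y))
    return samples

# Toy density-like curve (hypothetical values, g/cm^3)
curve = [2.10, 2.12, 2.15, 2.20, 2.26, 2.33, 2.41]
pairs = make_windows(curve, window=3, horizon=1)
```

As new measurements arrive while drilling, the window set grows, which is consistent with the paper's observation that continuously updating the model during operations improves accuracy.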
Funding: Deep-time Digital Earth (DDE) Big Science Program (No. GJ-C03-SGF-2025-004); National Natural Science Foundation of China (No. 42394063); Sichuan Science and Technology Program (No. 2025ZNSFSC0325).
Abstract: Topographic maps, as essential tools and sources of information for geographic research, contain precise spatial locations and rich map features, and they illustrate spatio-temporal information on the distribution and differences of various surface features. Currently, topographic maps are mainly stored in raster and vector formats. Extraction of the spatio-temporal knowledge in the maps, such as spatial distribution patterns, feature relationships, and dynamic evolution, still primarily relies on manual interpretation. However, manual interpretation is time-consuming and laborious, especially for large-scale, long-term map knowledge extraction and application. With the development of artificial intelligence technology, it is possible to improve the automation level of map knowledge interpretation. Therefore, the present study proposes an automatic interpretation method for raster topographic map knowledge based on deep learning. To address the limitations of current data-driven intelligent technology in learning map spatial relations and cognitive logic, we establish a formal description of map knowledge by mapping the relationship between map knowledge and features, thereby ensuring interpretation accuracy. Subsequently, deep learning techniques are employed to extract map features automatically, and the spatio-temporal knowledge is constructed by combining formal descriptions of geographic feature knowledge. Validation experiments demonstrate that the proposed method effectively achieves automatic interpretation of spatio-temporal knowledge of geographic features in maps, with an accuracy exceeding 80%. The findings of the present study contribute to machine understanding of spatio-temporal differences in map knowledge and advance the intelligent interpretation and utilization of cartographic information.
Funding: Research Platforms and Projects for Higher Education Institutions of the Department of Education of Guangdong Province in 2024 (2024KTSCX256); 2023 Guangdong Province Higher Vocational Education Teaching Quality and Teaching Reform Project (2023JG080).
Abstract: The potential toxicity of ionic liquids (ILs) affects their applications; how to control this toxicity is one of the key issues in their application. To understand the toxicity-structure relationship of ILs and promote their greener application, six different machine learning algorithms, including Bagging, Adaptive Boosting (AdaBoost), Gradient Boosting (GBoost), Stacking, Voting, and Categorical Boosting (CatBoost), are established to model the toxicity of ILs on four distinct datasets: the leukemia rat cell line IPC-81 (IPC-81), acetylcholinesterase (AChE), Escherichia coli (E. coli), and Vibrio fischeri. Molecular descriptors obtained from the simplified molecular input line entry system (SMILES) are used to characterize ILs. All models are assessed by the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and correlation coefficient (R^(2)). Additionally, an interpretation model based on SHapley Additive exPlanations (SHAP) is built to determine the positive and negative effects of each molecular feature on toxicity. With additional parameters and complexity, the CatBoost model outperforms the other models, making it a more reliable model for predicting the toxicity of ILs. The results of the model's interpretation indicate that the most significant positive features, SMR_VSA5, PEOE_VSA8, Kappa2, PEOE_VSA6, and EState_VSA1, can increase the toxicity of ILs as their levels rise, while the most significant negative features, VSA_EState7, EState_VSA8, PEOE_VSA9, and FpDensityMorgan1, can decrease the toxicity as their levels rise. Also, an IL's toxicity will grow as its average molecular weight and number of pyridine rings increase, whereas its toxicity will decrease as its hydrogen bond acceptors increase. This finding offers a theoretical foundation for rapid screening and synthesis of environmentally benign ILs.
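The MSE, RMSE, MAE, and R^(2) metrics named above are standard and can be computed without any ML library. The sketch below (toy values, not data from the paper) shows the definitions the abstract relies on:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, and R^2 for a set of predictions."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / n          # mean squared error
    mae = sum(abs(e) for e in errs) / n         # mean absolute error
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot                 # 1 - SSE / SStot
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae, "R2": r2}

m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Note that models ranked best by RMSE are not always best by MAE, which is why the study reports all four metrics side by side.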
Abstract: Based on the Many Worlds Interpretation, I describe reality as a multilayer spacetime, where parallel layers play the role of alternative timelines. I link physics to ethics, arguing that one's moral choices shape one's course in the multiverse. I consider one's ethical decisions as decoherence events, leading to movement between alternative timelines, lighter (higher) or heavier (lower) realities. Sometimes in one's curvilinear path in spacetime, one can even experience falling toward lower layers, slipping through wormholes. This theory supports free will and the simulation hypothesis. With this background, I explore the idea that a new theory of gravity might open new possibilities to shape matter and change our worldview through the invention of new technology, transforming information into waves and then into solid matter, paving the way for a new Multiverse Aeon for humanity.
Funding: Partially supported by the Singapore Ministry of National Development and the National Research Foundation, Prime Minister's Office, Singapore, under the Land and Liveability National Innovation Challenge (L2 NIC) Research Program (Grant No. L2NICCFP2-2015-1), and by the National Research Foundation (NRF) of Singapore under the Virtual Singapore program (Grant No. NRF2019VSG-GMS-001).
Abstract: Accurate determination of rockhead is crucial for underground construction. Traditionally, borehole data are mainly used for this purpose. However, borehole drilling is costly, time-consuming, and sparsely distributed. Non-invasive geophysical methods, particularly those using passive seismic surface waves, have emerged as viable alternatives for geological profiling and rockhead detection. This study proposes three interpretation methods for rockhead determination using passive seismic surface wave data from Microtremor Array Measurement (MAM) and Horizontal-to-Vertical Spectral Ratio (HVSR) tests. These are: (1) the Wavelength-Normalized phase velocity (WN) method, in which a nonlinear relationship between rockhead depth and wavelength is established; (2) the Statistically Determined shear wave velocity (SD-V_(s)) method, in which the representative V_(s) value for rockhead is automatically determined using a statistical method; and (3) the empirical HVSR method, in which the rockhead is determined by interpreting resonant frequencies using a reliably calibrated empirical equation. These methods were implemented to determine rockhead depths at 28 locations across two distinct geological formations in Singapore, and the results were evaluated using borehole data. The WN method can determine rockhead depths accurately and reliably with minimal absolute errors (average RMSE = 3.11 m), demonstrating robust performance across both geological formations. Its advantage lies in interpreting dispersion curves alone, without the need for an inversion process. The SD-V_(s) method is practical in engineering practice owing to its simplicity. The empirical HVSR method reasonably determines rockhead depths with moderate accuracy, benefiting from a reliably calibrated empirical equation.
Funding: Supported by the National Key Research and Development Program of China [Grant No. 2022YFA1405300 (PZ)] and the Innovation Program for Quantum Science and Technology (Grant No. 2023ZD0300700).
Abstract: The Husimi function (Q-function) of a quantum state is the distribution function of the density operator in the coherent state representation. It is widely used in theoretical research, such as in quantum optics. The Wehrl entropy is the Shannon entropy of the Husimi function and is nonzero even for pure states. This entropy has been extensively studied in mathematical physics. Recent research also suggests a significant connection between the Wehrl entropy and many-body quantum entanglement in spin systems. We investigate the statistical interpretation of the Husimi function and the Wehrl entropy, taking the system of N spin-1/2 particles as an example. Due to the completeness of coherent states, the Husimi function and Wehrl entropy can be explained via positive operator-valued measurement (POVM) theory, although the coherent states are not a set of orthonormal basis states. Here, with the help of Bayes' theorem, we provide an alternative probabilistic interpretation for the Husimi function and the Wehrl entropy. This interpretation is based on direct measurements of the system and thus does not require the introduction of an ancillary system as in POVM theory. Moreover, under this interpretation, the classical correspondences of the Husimi function and the Wehrl entropy are just the phase-space probability distribution function of N classical tops and its associated entropy, respectively. Therefore, this explanation contributes to a better understanding of the relationship between the Husimi function, the Wehrl entropy, and classical-quantum correspondence. The generalization of this statistical interpretation to continuous-variable systems is also discussed.
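For reference, the textbook single-bosonic-mode definitions read as follows (the spin case replaces coherent states |α⟩ with spin coherent states and adjusts the integration measure; the constant inside the logarithm is convention-dependent):

$$ Q_\rho(\alpha) \;=\; \frac{1}{\pi}\,\langle\alpha|\,\rho\,|\alpha\rangle, \qquad \int d^2\alpha \; Q_\rho(\alpha) \;=\; 1, $$

$$ S_W(\rho) \;=\; -\int d^2\alpha \; Q_\rho(\alpha)\,\ln Q_\rho(\alpha). $$

Because Q_ρ is a smoothed (non-negative, normalized) distribution, S_W behaves like a classical Shannon entropy and stays strictly positive even for pure states, which is the property the abstract highlights.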
Funding: Supported by the National Natural Science Foundation of China (Nos. 42177173, U23A20651, and 42130719) and the Outstanding Youth Science Fund Project of the Sichuan Provincial Natural Science Foundation (No. 2025NSFJQ0003).
Abstract: Dynamic stress adjustment in deep-buried, high-geostress hard rock tunnels frequently triggers catastrophic failures such as rockbursts and collapses. While a comprehensive understanding of this process is critical for evaluating surrounding rock stability, its dynamic evolution is often overlooked in engineering practice. This study systematically summarizes a novel classification framework for stress adjustment types, namely stabilizing (two-zoned), shallow failure (three-zoned), and deep failure (four-zoned), characterized by distinct stress adjustment stages. A dynamic interpretation technology system is developed based on microseismic monitoring, integrating key microseismic parameters (energy index EI, apparent stress σa, microseismic activity S), seismic source parameter space clustering, and microseismic paths. This approach enables precise identification of evolutionary stages, stress adjustment types, and failure precursors, thereby elucidating the intrinsic linkage between geomechanical processes (stress redistribution) and failure risks. The study establishes criteria and procedures for identifying stress adjustment types and their associated failure risks, which were successfully applied in the Grand Canyon Tunnel of the E-han Highway to detect 50 instances of disaster risk. The findings offer invaluable insights into understanding the evolution process of stress adjustment and pinpointing the disaster risks linked to hard rock in comparable high-geostress tunnels.
Abstract: The Interpretation of Nursing Guidelines for Intravenous Thrombolysis in Acute Ischemic Stroke offers comprehensive recommendations across five key domains: hospital organizational management, patient condition monitoring, complication observation and management, positioning and mobility away from the bed, and quality assurance. These Guidelines encompass all phases of intravenous thrombolysis care for patients experiencing acute ischemic stroke. This article aims to elucidate the Guidelines by discussing their developmental background, the designation process, usage recommendations, and the interpretation of evolving perspectives, thereby providing valuable insights for clinical practice.
Abstract: This study employs the theoretical framework of Nanosyntax to analyze the generative mechanism of the verbal ABAB reduplication pattern in Mandarin Chinese. The research characterizes ABAB reduplication as an inflectional operation involving functional projections of pluractionality and aspect. It distinguishes between event-internal and event-external pluralization, as well as inner and outer aspect, in verbal reduplication. Following the One-Function-One-Head Principle in Nanosyntax, the verbal ABAB form arises through the merging of categoryless roots that are categorized by little v, with the RED affix syncretizing multiple functional morphemes. This framework reduces the lexical burden and precisely represents the unique syntactic structure of Chinese verbal reduplication.
Funding: National Natural Science Foundation of China (42050104).
Abstract: This study introduces a comprehensive and automated framework that leverages data-driven methodologies to address various challenges in shale gas development and production. Specifically, it harnesses the power of Automated Machine Learning (AutoML) to construct an ensemble model to predict the estimated ultimate recovery (EUR) of shale gas wells. To demystify the "black-box" nature of the ensemble model, KernelSHAP, a kernel-based approach to computing Shapley values, is utilized to elucidate the influential factors that affect shale gas production at both global and local scales. Furthermore, a bi-objective optimization algorithm named NSGA-II is seamlessly incorporated to optimize hydraulic fracturing designs for production boost and cost control. This innovative framework addresses critical limitations often encountered in applying machine learning (ML) to shale gas production: the challenge of achieving sufficient model accuracy with limited samples, the multidisciplinary expertise required for developing robust ML models, and the need for interpretability in "black-box" models. Validation with field data from the Fuling shale gas field in the Sichuan Basin substantiates the framework's efficacy in enhancing the precision and applicability of data-driven techniques. The test accuracy of the ensemble ML model reached 83%, compared to a maximum of 72% for single ML models. The contribution of each geological and engineering factor to the overall production was quantitatively evaluated. Fracturing design optimization raised EUR by 7%-34% under different production and cost tradeoff scenarios. The results empower domain experts to conduct more precise and objective data-driven analyses and optimizations for shale gas production with minimal expertise in data science.
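The bi-objective trade-off that NSGA-II resolves rests on Pareto dominance. As a minimal sketch (the design names and EUR/cost numbers are hypothetical, not from the study), the snippet below filters fracturing designs down to the non-dominated front when EUR is maximized and cost is minimized:

```python
def dominates(a, b):
    """True if design a Pareto-dominates design b.

    Objectives: (EUR, cost); EUR is maximized, cost minimized,
    so we compare (-EUR, cost) component-wise.
    """
    ka = (-a[0], a[1])
    kb = (-b[0], b[1])
    return all(x <= y for x, y in zip(ka, kb)) and ka != kb

# Hypothetical designs: name -> (EUR, cost)
designs = {"A": (1.20, 3.0), "B": (1.10, 3.5), "C": (1.25, 4.0)}
front = [name for name, v in designs.items()
         if not any(dominates(w, v) for w in designs.values() if w != v)]
```

Here design B is dominated by A (lower EUR at higher cost), so only A and C survive; NSGA-II evolves a whole population toward such a front, which is how the study exposes different production/cost trade-off scenarios.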
Abstract: The Agadem block is an area of major oil interest located in the large sedimentary basin of Termit, in the southeast of the Republic of Niger. Since the 1950s, this basin has seen geological and geophysical research activities. However, despite the extensive research carried out, we believe that a geophysical contribution in terms of magnetic properties and their repercussions on the structure of the Agadem block, allowing the improvement of existing knowledge, is essential. The present study aims to investigate the structural characteristics of the Agadem block associated with magnetic anomalies. For this, after data shaping, several filtering techniques were applied to the aeromagnetic data to identify and map deep geological structures. The reduction-to-the-pole map shows large long-wavelength negative anomalies in the southeast half of the block and short-wavelength positive anomalies in the northwest part, embedded in a large positive anomaly occupying the lower northern half of the block. The maps of the total horizontal derivative and tilt angle show lineaments globally distributed along the NW-SE direction, in accordance with the structural style of the study area. The resulting map highlights numerous lineaments that may be associated with faults hidden by the sedimentary cover. The calculation of the Euler deconvolution allowed us to locate and estimate the depths of magnetic sources at variable depths of up to 4000 m. The compilation of the results obtained allowed us to locate zones of high and low intensities, which correspond respectively to horsts and grabens as major structures of the Agadem block.
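The Euler deconvolution mentioned above estimates source positions from the standard homogeneity relation for a potential field T:

$$ (x - x_0)\,\frac{\partial T}{\partial x} \;+\; (y - y_0)\,\frac{\partial T}{\partial y} \;+\; (z - z_0)\,\frac{\partial T}{\partial z} \;=\; N\,(B - T), $$

where (x_0, y_0, z_0) is the magnetic source location, B is the regional (background) field, and N is the structural index characterizing the source geometry. Solving this equation in a moving window over the gridded anomaly yields the source depths, here reported down to about 4000 m.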
Funding: Supported by the National Natural Science Foundation of China (Grant No. U1960202) and the Science and Technology Commission of Shanghai Municipality (No. 19DZ2270200).
Abstract: To predict the endpoint carbon content and temperature in a basic oxygen furnace (BOF), the industrial parameters of BOF steelmaking are taken as input values. Firstly, a series of preprocessing steps, such as the Pauta criterion, hierarchical clustering, and principal component analysis, were performed on the original data. Secondly, the prediction results of classic machine learning models, namely ridge regression, support vector machine, gradient boosting regression (GBR), random forest regression, back-propagation (BP) neural network, and multi-layer perceptron (MLP), were compared before and after data preprocessing. An improved model was established based on the sparrow search algorithm improved with tent chaotic mapping combined with BP (CSSA-BP). The CSSA-BP model showed the best performance for endpoint carbon prediction, with the lowest mean absolute error (MAE) and root mean square error (RMSE) values of 0.01124 and 0.01345 mass% among the seven models, respectively. It also obtained the lowest MAE and RMSE values of 8.9839 and 10.9321 °C for endpoint temperature prediction among the seven models, respectively. Furthermore, the CSSA-BP and GBR models have the smallest error fluctuation range in both endpoint carbon content and temperature predictions. Finally, in order to improve the interpretability of the model, SHapley Additive exPlanations (SHAP) was used to analyze the results.
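The tent chaotic map used in the CSSA-BP model is simple to state. The sketch below shows the map itself and how a chaotic sequence can be generated from it; how exactly the paper folds this into the sparrow search algorithm's population initialization is not specified here, so the `tent_sequence` usage is illustrative only:

```python
def tent_map(x, mu=2.0):
    """One tent-map iteration; for mu = 2 the orbit explores (0, 1) chaotically."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sequence(x0, n):
    """Generate a chaotic sequence, e.g. to spread initial candidates over [0, 1]."""
    seq, x = [], x0
    for _ in range(n):
        x = tent_map(x)
        seq.append(x)
    return seq

seq = tent_sequence(0.3, 4)  # e.g. 0.3 -> 0.6 -> 0.8 -> 0.4 -> 0.8
```

Chaotic initialization of this kind is commonly used to give metaheuristics a more uniform starting population than plain pseudo-random draws, which is the usual motivation for the "tent chaotic mapping" ingredient.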
Abstract: The "First Multidisciplinary Forum on COVID-19," as one of the pivotal medical forums organized by China during the initial outbreak of the pandemic, garnered significant attention from numerous countries worldwide. Ensuring the seamless execution of the forum's translation necessitated exceptionally high standards for simultaneous interpreters. This paper, through the lens of the Translation as Adaptation and Selection theory within Eco-Translatology, conducts an analytical study of the live simultaneous interpretation at the First Multidisciplinary Forum on COVID-19. It examines the interpreters' adaptations and selections across multiple dimensions, namely linguistic, cultural, and communicative, with the aim of elucidating the guiding role and recommendations that Eco-Translatology can offer to simultaneous interpretation. Furthermore, it seeks to provide insights that may enhance the quality of interpreters' oral translations.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62271248, 62401256), in part by the Natural Science Foundation of Jiangsu Province (Nos. BK20230090, BK20241384), and in part by the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of China (No. KLSMNR-K202303).
Abstract: In recent years, deep learning has been widely applied in synthetic aperture radar (SAR) image processing. However, the collection of large-scale labeled SAR images is challenging and costly, and classification accuracy is often poor when only limited SAR images are available. To address this issue, we propose a novel framework for sparse SAR target classification under few-shot cases, termed the transfer learning-based interpretable lightweight convolutional neural network (TL-IL-CNN). Additionally, we employ enhanced gradient-weighted class activation mapping (Grad-CAM) to mitigate the "black box" effect often associated with deep learning models and to explore the mechanisms by which a CNN classifies various sparse SAR targets. Initially, we apply a novel bidirectional iterative soft thresholding (BiIST) algorithm to generate sparse images of superior quality compared to those produced by traditional matched filtering (MF) techniques. Subsequently, we pretrain multiple shallow CNNs on a simulated SAR image dataset. Using the sparse SAR dataset as input for the CNNs, we assess the efficacy of transfer learning in sparse SAR target classification and suggest the integration of TL-IL-CNN to enhance the classification accuracy further. Finally, Grad-CAM is utilized to provide visual explanations for the predictions made by the classification framework. The experimental results on the MSTAR dataset reveal that the proposed TL-IL-CNN achieves nearly 90% classification accuracy with only 20% of the training data required under standard operating conditions (SOC), surpassing typical deep learning methods such as the vision Transformer (ViT) in the context of small samples. Remarkably, it even presents better performance under extended operating conditions (EOC). Furthermore, the application of Grad-CAM elucidates the CNN's differentiation process among various sparse SAR targets. The experiments indicate that the regions on which the model focuses, in both the target and the background, can differ among target classes. The study contributes to an enhanced understanding of the interpretability of such results and enables us to infer the classification outcomes for each category more accurately.
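The core Grad-CAM computation is compact: each convolutional feature map is weighted by the spatial mean of the class-score gradient flowing into it, the weighted maps are summed, and negative evidence is clipped. The sketch below (toy 2x2 feature maps, not the paper's enhanced variant) illustrates this basic recipe:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM heatmap for one image and one target class.

    activations, gradients: arrays of shape (K, H, W), where K is the
    number of feature maps in the chosen convolutional layer.
    """
    alphas = gradients.mean(axis=(1, 2))             # one weight per channel
    cam = np.tensordot(alphas, activations, axes=1)  # weighted sum over K
    return np.maximum(cam, 0.0)                      # ReLU keeps positive evidence

A = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
G = np.array([[[0.5, 0.5], [0.5, 0.5]],
              [[-1.0, -1.0], [-1.0, -1.0]]])
cam = grad_cam(A, G)
```

The resulting heatmap, upsampled to the input size, marks where the network looked for a given class, which is how the study inspects whether a CNN attends to the target or to background clutter.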
Abstract: We regret that a data error occurred due to the large number of samples tested. The correct data and figure should be as follows. This correction has no impact on the remainder of the manuscript, the interpretation of the data, or the conclusions reached. The authors would like to apologize for any inconvenience caused.
Funding: This paper is an interim result of "Constitutional Boundaries of the Application of Human Gene Editing Technology," a Youth Project of the National Social Science Fund of China (Project Approval Number 23CFX040), supported by the National Funded Programs for Postdoctoral Researchers (GZC20230937).
Abstract: The establishment of a sound science and technology ethics governance system is an inevitable requirement for national modernization. Faced with the development of human gene technology and the chaos in research activities, the ethical standards and legal positioning of human gene research activities urgently need to be clarified. The human rights ethics view has value inclusiveness and value fundamentality, and includes three levels of connotations: the content dimension, the relationship dimension, and the obligation dimension. It should serve as the ethical standard for human gene research activities. Based on the provisions of China's Constitution, the human rights ethics view on human gene research, as a constitutional ethics view, can elucidate different levels of rights content, such as human dignity, life and health, and research freedom. It also addresses the weighing of basic rights conflicts and the dual obligation subjects of a public and private nature. Relying on the constitutional value embedding of the research ethics view to form ethical consensus, improving ethical review through framework legislation for human rights interests, and implementing ethical responsibility through the human rights-oriented interpretation of ethical legal norms are the three pathways to realizing the human rights ethics view on human gene research.
Funding: National Key Research and Development Program of China, No. 2022YFC3204302; China National Administration of Coal Geology, No. ZMKJ-2021-ZX04.
Abstract: Ecological water supplement projects have been implemented in many coastal wetlands, influencing saltmarsh vegetation restoration by altering tidal creek development. To clarify the effectiveness of ecological water supplement on tidal creek development and saltmarsh vegetation restoration, time series of high-resolution remote sensing images from 2000 to 2021 were used to extract and analyze tidal creeks and saltmarsh vegetation in the Diaokou Estuary Reserve. The results are summarized as follows: (1) All tidal creek indices, except curvature, exhibited a significant linear increase with time, while curvature initially decreased and increased afterwards. The 5-year cumulative ecological water supplement volume was significantly positively correlated (R^(2) > 0.7) with all tidal creek indices. (2) During the same period, most landscape pattern metrics of Phragmites australis increased, while those of Tamarix chinensis initially increased and then declined. Although the class area (CA) of Suaeda salsa increased with time, its perimeter-area fractal dimension (PAFRAC) and aggregation index (AI) fluctuated. (3) The CA of P. australis and S. salsa was significantly positively correlated with most tidal creek indices except curvature, while that of T. chinensis showed no significant correlation with any tidal creek index. Moreover, the AI and PAFRAC of P. australis were significantly positively correlated with the fractal dimension and frequency/curvature of tidal creeks, respectively, while those of S. salsa and T. chinensis exhibited no significant correlation. In summary, the ecological water supplement project enhanced tidal creek development in coastal wetlands, promoting the restoration of P. australis and S. salsa, while having little impact on the restoration of T. chinensis.
Abstract: BACKGROUND: A recently developed method enables automated measurement of the hallux valgus angle (HVA) and the first intermetatarsal angle (IMA) from weightbearing foot radiographs. This approach employs bone segmentation to identify anatomical landmarks and provides standardized angle measurements based on established guidelines. While effective for HVA and IMA, preoperative radiograph analysis remains complex and requires additional measurements, such as the hallux interphalangeal angle (IPA), which has received limited research attention. AIM: To expand the previous method, which measured HVA and IMA, by incorporating the automatic measurement of IPA, and to evaluate its accuracy and clinical relevance. METHODS: A preexisting database of manually labeled foot radiographs was used to train a U-Net neural network for segmenting bones and identifying landmarks necessary for IPA measurement. Of the 265 radiographs in the dataset, 161 were selected for training and 20 for validation. The U-Net neural network achieved a high mean Sørensen-Dice index (>0.97). The remaining 84 radiographs were used to assess the reliability of automated IPA measurements against those taken manually by two orthopedic surgeons (O_A and O_B) using computer-based tools. Each measurement was repeated to assess intraobserver (O_A1 and O_A2) and interobserver (O_A2 and O_B) reliability. Agreement between automated and manual methods was evaluated using the intraclass correlation coefficient (ICC), and Bland-Altman analysis identified systematic differences. The standard error of measurement (SEM) and Pearson correlation coefficients quantified precision and linearity, and measurement times were recorded to evaluate efficiency. RESULTS: The artificial intelligence (AI)-based system demonstrated excellent reliability, with ICC3.1 values of 0.92 (AI vs O_A2) and 0.88 (AI vs O_B), both statistically significant (P < 0.001). For manual measurements, ICC values were 0.95 (O_A2 vs O_A1) and 0.95 (O_A2 vs O_B), supporting both intraobserver and interobserver reliability. Bland-Altman analysis revealed minimal biases of: (1) 1.61° (AI vs O_A2); and (2) 2.54° (AI vs O_B), with clinically acceptable limits of agreement. The AI system also showed high precision, as evidenced by low SEM values: (1) 1.22° (O_A2 vs O_B); (2) 1.77° (AI vs O_A2); and (3) 2.09° (AI vs O_B). Furthermore, Pearson correlation coefficients confirmed strong linear relationships between automated and manual measurements, with r = 0.85 (AI vs O_A2) and r = 0.90 (AI vs O_B). The AI method significantly improved efficiency, completing all 84 measurements 8 times faster than manual methods, reducing the time required from an average of 36 minutes to just 4.5 minutes. CONCLUSION: The proposed AI-assisted IPA measurement method shows strong clinical potential, corresponding closely with manual measurements. Integrating IPA with HVA and IMA assessments provides a comprehensive tool for automated forefoot deformity analysis, supporting hallux valgus severity classification and preoperative planning, while offering substantial time savings in high-volume clinical settings.
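The Bland-Altman bias and 95% limits of agreement reported above follow a simple recipe: take the paired differences, then report their mean and mean ± 1.96 standard deviations. A minimal sketch (the angle readings below are hypothetical, not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two raters' readings."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)           # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical IPA readings (degrees): AI system vs. one observer
ai = [12.0, 15.5, 9.0, 11.0]
ob = [11.0, 14.0, 8.5, 9.5]
bias, loa = bland_altman(ai, ob)
```

A small bias with narrow limits of agreement, as reported for AI vs. O_A2 and AI vs. O_B, indicates the automated readings can substitute for manual ones within clinically acceptable error.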
文摘Conventional borehole image log interpretation of linear fractures on volcanic rocks,represented as sinusoids on unwrapped cylinder projections,is relatively straight-forward,however,interpreting non-linear rock structures and complex facies geometries can be more challenging.To characterize diverse volcanic paleoenvironments related to the formation of the South American continent,this study presents a new methodology based on image logs,petrography,seismic data,and outcrop analogues.The presented methodology used pseudo-boreholes images generated from outcrop photographs with typical igneous rock features worldwide simulating 2D unwrapped cylinder projections of a 31 cm(12.25 in)diameter well.These synthetic images and standard outcrop photographs were used to define morphological patterns of igneous structures and facies for comparison with wireline borehole image logs from subsurface volcanic and subvolcanic units,providing a“visual scale”for geological evaluation of volcanic facies,significantly enhancing the identification efficiency and reliability of complex geological structures.Our analysis focused on various scales of columnar jointing and pillow lava lobes with additional examples including pahoehoe lava,ignimbrite,hyaloclastite,and various intrusive features in Campos,Santos,and Parnaíba basins in Brazil.This approach increases confidence in the interpretation of subvolcanic,subaerial,and subaqueous deposits.The image log interpretation combined with regional geological knowledge has enabled paleoenvironmental insights into the rift magmatism system related to the breakup of Gondwana with associated implications for hydrocarbon exploration.
Abstract: Artificial intelligence (AI)-augmented contrast-enhanced ultrasonography (CEUS) is emerging as a powerful tool in liver imaging, particularly in enhancing the accuracy of Liver Imaging Reporting and Data System (LI-RADS) classification. This review synthesized published data on the integration of machine learning and deep learning techniques into CEUS, revealing that AI algorithms can improve the detection and quantification of contrast enhancement patterns. Such improvements led to more consistent LI-RADS categorization, reduced interoperator variability, and enabled real-time analysis that streamlined workflow. The enhanced sensitivity of AI tools facilitated better differentiation between benign and malignant lesions, ultimately optimizing patient management. These advances suggest that AI-augmented CEUS could transform liver imaging by providing rapid, reliable, and objective assessments. However, the review also highlighted the need for further large-scale, multicenter studies to fully validate these findings and ensure the safe integration of AI into routine clinical practice. INTRODUCTION International hepatology society guidelines have established contrast-enhanced computed tomography (CT) and contrast-enhanced magnetic resonance imaging (MRI) as the imaging modalities of choice for diagnosing hepatocellular carcinoma (HCC) lesions larger than 1 cm. MRI remains the gold standard for detecting small HCC nodules in cirrhotic livers due to its superior soft-tissue contrast and functional imaging capabilities. However, early or atypical presentations remain challenging for differential diagnosis, staging, and treatment planning. In these scenarios, CEUS is a valuable second-line tool, offering real-time, radiation-free evaluation and repeatability for follow-up. A recent meta-analysis of head-to-head studies reported comparable diagnostic performance between CEUS and CT/MRI, with pooled sensitivities and specificities of 0.67/0.88 for CEUS vs 0.60/0.98 for CT/MRI in non-HCC malignancies, and similar specificities for HCC diagnosis (0.70 for CEUS vs 0.59 for CT; 0.81 for CEUS vs 0.79 for MRI) [1]. Given the limitations of individual imaging modalities, hybrid techniques and multimodal approaches are gaining traction for improving lesion detection, especially in cases where standard methods fall short. AI has emerged as a powerful tool in medical imaging, enhancing diagnostic accuracy and reliability across platforms. In CEUS liver imaging, dynamic enhancement patterns often challenge consistent interpretation across observers, so AI holds particular promise for standardizing assessments. The growing complexity of liver tumor evaluation has also driven interest in approaches that integrate serum biomarkers with advanced imaging. However, no single strategy currently meets all diagnostic and prognostic requirements. Recent studies have highlighted the potential of AI to bridge this gap by enabling precise image interpretation and facilitating the integration of heterogeneous clinical and imaging data [2]. Altogether, the convergence of CEUS with AI and radiomics offers a dynamic, quantitative, and potentially reproducible paradigm for liver lesion assessment, complementing traditional imaging methods. This review aimed to provide an overview of current advances in AI-driven CEUS for liver lesion assessment, with a particular focus on automated LI-RADS classification, radiomics-based models, and future clinical integration. While another recent systematic review [3] provided a comprehensive analysis of AI applications in CEUS, our approach offers a targeted perspective, emphasizing LI-RADS-centered scoring, automated lesion characterization, and clinical utility, particularly in the context of HCC diagnosis and management. In the methodological process of this narrative mini-review, the literature selection was primarily based on targeted PubMed searches. ChatGPT-4o (OpenAI) [4] was employed to assist in refining query parameters and identifying relevant, up-to-date peer-reviewed sources on CEUS-based AI applications.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52174012, 52394250, 52394255, 52234002, U22B20126, and 51804322).
Abstract: Formation pore pressure is the foundation of well planning and is related to the safety and efficiency of drilling operations in oil and gas development. However, the traditional method for predicting formation pore pressure applies post-drilling measurement data from nearby wells to the target well, which may not accurately reflect the formation pore pressure of the target well. In this paper, a novel method is proposed for predicting formation pore pressure ahead of the drill bit by embedding petrophysical theory into machine learning based on seismic and logging-while-drilling (LWD) data. Gated recurrent unit (GRU) and long short-term memory (LSTM) models were developed and validated using data from three wells in the Bohai Oilfield, and Shapley additive explanations (SHAP) were used to visualize and interpret the proposed models, providing valuable insights into the relative importance and impact of input features. The results show that among the eight models trained in this study, almost all prediction errors converge to within 0.05 g/cm^3, with the largest root mean square error (RMSE) being 0.03072 and the smallest 0.008964. Moreover, continuously updating the model with the training data accumulated during drilling operations can further improve accuracy. Compared with other approaches, this study depicts formation pore pressure accurately and precisely, while SHAP analysis guides effective model refinement and feature engineering strategies. This work underscores the potential of integrating advanced machine learning techniques with domain-specific knowledge to enhance predictive accuracy in petroleum engineering applications.
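The GRU model described above maps a depth-ordered sequence of LWD/seismic features to a pore pressure value at each step, scored by RMSE. The sketch below is a minimal stand-in assuming PyTorch; the layer sizes and the four input channels are illustrative assumptions, not the authors' architecture, and no petrophysical constraint is embedded here.

```python
import torch
import torch.nn as nn

class PorePressureGRU(nn.Module):
    """Minimal GRU regressor: feature sequence -> pore pressure per depth step.

    Hypothetical stand-in for the paper's model; n_features=4 assumes four
    logs (e.g. sonic, gamma ray, resistivity, seismic velocity).
    """
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, depth_steps, n_features)
        out, _ = self.gru(x)                 # hidden state at every depth step
        return self.head(out).squeeze(-1)    # one pressure estimate per step

def rmse(pred, target):
    """Root mean square error, the metric reported in the study."""
    return torch.sqrt(torch.mean((pred - target) ** 2))

model = PorePressureGRU()
x = torch.randn(2, 50, 4)                    # 2 wells, 50 depth steps, 4 logs
y_hat = model(x)
```

Because the GRU is recurrent over depth, the same trained model can be re-run (and incrementally retrained) as new LWD data arrive while drilling, which is how the abstract's "continuous updating" scheme would operate.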