This paper presents a new programming paradigm named Notification-Oriented Paradigm (NOP) and analyses the performance aspects of NOP programs by means of an experiment. NOP provides a new manner to conceive, structure, and execute software, which would allow better performance, causal-knowledge organization, and decoupling than standard solutions based upon the usual paradigms, essentially the Imperative Paradigm (IP) and the Declarative Paradigm (DP). In short, DP solutions are considered easier to use than IP solutions due to the concept of high-level programming, but they are considered slower in execution and less flexible in development. Nevertheless, both paradigms present similar drawbacks, such as redundant causal evaluation and strongly coupled entities, which decrease software performance and the feasibility of processing distribution. These problems exist due to an orientation to a monolithic inference mechanism based upon sequential evaluation by searching over passive computational entities. NOP proposes another way to structure software and make its inferences, based upon small, collaborative, and decoupled computational entities whose interaction happens through precise notifications. In this context, this paper presents a quantitative comparison between two equivalent implementations of a computer game simulator (a Pacman simulator), one developed according to the principles of the Object-Oriented Paradigm (OOP/IP) in C++ and the other developed according to the principles of NOP. The results obtained from the experiments demonstrate, however, considerably lower performance of the NOP implementation. This happens because NOP applications are still developed using a framework based on C++.
Moreover, the paper shows that optimizations in the NOP framework improve NOP program performance, thereby evidencing the necessity of developing a NOP language/compiler.
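The notification mechanism the abstract describes can be illustrated with a minimal sketch, written here in Python rather than the C++ framework the paper uses; the class names, the Pacman-flavored facts, and the rule are illustrative placeholders, not the actual NOP framework API. Attributes notify only the premises that depend on them, and a rule is re-checked only when a premise actually changes state, so there is no monolithic polling loop over passive entities:

```python
class Attribute:
    """A factual element that notifies dependent premises only when it changes."""
    def __init__(self, value):
        self._value = value
        self._premises = []

    def subscribe(self, premise):
        self._premises.append(premise)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        if new != self._value:          # notify only on an actual change
            self._value = new
            for p in self._premises:
                p.notify()

class Premise:
    """A condition re-evaluated only when its attribute notifies it."""
    def __init__(self, attribute, predicate):
        self.attribute, self.predicate = attribute, predicate
        self.state = predicate(attribute.value)
        self.rules = []
        attribute.subscribe(self)

    def notify(self):
        new_state = self.predicate(self.attribute.value)
        if new_state != self.state:     # propagate only on a state change
            self.state = new_state
            for r in self.rules:
                r.notify()

class Rule:
    """Fires its action when all of its premises hold; no search loop involved."""
    def __init__(self, premises, action):
        self.premises, self.action = premises, action
        for p in premises:
            p.rules.append(self)

    def notify(self):
        if all(p.state for p in self.premises):
            self.action()

# Illustrative Pacman-like facts and one rule reacting to notifications
fired = []
pellets = Attribute(10)
ghost_near = Attribute(False)
Rule([Premise(pellets, lambda v: v == 0),
      Premise(ghost_near, lambda v: v)],
     lambda: fired.append("rule fired"))
pellets.value = 0        # one premise flips, but the other is still false
ghost_near.value = True  # now all premises hold and the rule fires once
```

Note how setting an attribute to its current value triggers no evaluation at all, which is the redundancy NOP aims to eliminate.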
Incorporation of explainability features in decision-making web-based systems is considered a primary concern for enhancing accountability, transparency, and trust in the community. Multi-domain sentiment analysis is a significant web-based application where the explainability feature is essential for achieving user satisfaction. Conventional design methodologies such as object-oriented design methodology (OODM) have been proposed for web-based application development, facilitating code reuse, quantification, and security at the design level. However, OODM does not provide the feature of explainability in web-based decision-making systems. X-OODM modifies OODM with added explainable models to introduce the explainability feature for such systems. This research introduces an explainable model leveraging X-OODM for designing transparent applications for multi-domain sentiment analysis. The proposed design is evaluated using the design quality metrics defined for the evaluation of the X-OODM explainable model under user context. The design quality metrics of transferability, simulatability, informativeness, and decomposability were introduced one after another over time into the evaluation of the X-OODM user context. Auxiliary metrics of accessibility and algorithmic transparency were added to increase the degree of explainability of the design. The study results reveal that introducing such explainability parameters with X-OODM appreciably increases system transparency, trustworthiness, and user understanding. The experimental results validate the enhancement of decision-making for multi-domain sentiment analysis with explainability integrated at the design level. Future work can extend this study by applying the proposed X-OODM framework over different datasets and sentiment analysis applications to further scrutinize its effectiveness in real-world scenarios.
A new implementation (SCKE: Structured Communication Knowledge Entity) has been proposed towards combining the logic-oriented with the object-oriented paradigm of computing. It is intended to exploit the advantages of these two paradigms in a structured, natural, and efficient manner for large-scale knowledge processing. The SCKE model supports modularity and protection for the structured development of knowledge systems. It also introduces concepts that are typical of object-oriented systems into the logic-oriented paradigm, without losing its advantages as a declarative language. Various inheritance hierarchies are supported in the SCKE model, providing the semantic basis for various kinds of knowledge in AI systems. The M-entity/K-entity/Instance inheritance captures the relationship among the control, procedural, and factual knowledge in AI systems, and the super-entity/entity/instance inheritance reflects the concept of data abstraction in the knowledge of a particular domain. In addition, the SCKE model is not simply built on top of Prolog like other attempts to integrate the object-oriented into the logic-oriented paradigm. The SCKE model is a tightly coupled model of the logic- and the object-oriented paradigms, and its interpreter uniformly interprets the logic semantics and the object-oriented semantics.
Due to the small size, variety, and high degree of mixing of herbaceous vegetation, remote sensing-based identification of grassland types primarily focuses on extracting major grassland categories, lacking detailed depiction. This limitation significantly hampers the development of effective evaluation and fine supervision for the rational utilization of grassland resources. To address this issue, this study concentrates on the representative grassland of Zhenglan Banner in Inner Mongolia as the study area. It integrates the strengths of Sentinel-1 and Sentinel-2 active-passive synergistic observations and introduces innovative object-oriented techniques for grassland type classification, thereby enhancing the accuracy and refinement of grassland classification. The results demonstrate the following: (1) To meet the supervision requirements of grassland resources, we propose a grassland type classification system based on remote sensing and the vegetation-habitat classification method, specifically applicable to natural grasslands in northern China. (2) By utilizing the high-spatial-resolution Normalized Difference Vegetation Index (NDVI) synthesized through the Spatial and Temporal Non-Local Filter-based Fusion Model (STNLFFM), we are able to capture the NDVI time profiles of grassland types, accurately extract vegetation phenological information within the year, and further enhance the temporal resolution. (3) The integration of multi-seasonal spectral, polarization, and phenological characteristics significantly improves the classification accuracy of grassland types. The overall accuracy reaches 82.61%, with a kappa coefficient of 0.79. Compared to using only multi-seasonal spectral features, the accuracy and kappa coefficient have improved by 15.94% and 0.19, respectively. Notably, the accuracy improvement of the gently sloping steppe is the highest, exceeding 38%. (4) Sandy grassland is the most widespread in the study area, and the growing season of grassland vegetation mainly occurs from May to September. The sandy meadow exhibits a longer growing season compared with typical grassland and meadow, and the distinct differences in phenological characteristics contribute to the accurate identification of various grassland types.
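The overall accuracy and kappa coefficient reported above are both derived from a classification confusion matrix; a minimal sketch of the computation, using an illustrative 2x2 matrix rather than the study's actual class-by-class results:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    Rows are reference classes, columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Illustrative two-class confusion matrix, not the paper's data
acc, kappa = accuracy_and_kappa([[50, 10], [5, 35]])
```

Kappa discounts the agreement expected by chance, which is why it is routinely reported alongside overall accuracy for grassland-type maps with unbalanced class areas.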
As one of the main geographical elements in urban areas, buildings are closely related to the development of the city. Therefore, how to quickly and accurately extract building information from remote sensing images is of great significance for urban map updating, urban planning and construction, etc. Extracting building information around power facilities, especially from high-resolution images, has become one of the current hot topics in remote sensing technology research. This study made full use of the characteristics of GF-2 satellite remote sensing images, adopted an object-oriented classification method combined with multi-scale segmentation technology and the CART classification algorithm, and successfully extracted the buildings in the study area. The research results showed that the overall classification accuracy reached 89.5% and the Kappa coefficient was 0.86. Using the object-oriented CART classification algorithm for building extraction yielded results closer to actual ground objects and with higher accuracy. The extraction of buildings in the city contributes to urban development planning and provides decision support for management.
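The object-oriented CART step described above classifies image segments (rather than pixels) by their aggregate features; a minimal sketch using scikit-learn's decision tree, which implements CART. The per-segment features and training samples here are synthetic placeholders, not the GF-2 data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # sklearn trees implement CART

# Hypothetical per-segment features after multi-scale segmentation:
# [mean_red, mean_nir, ndvi]; labels: 0 = building, 1 = vegetation
X = np.array([[0.30, 0.25, -0.09],
              [0.28, 0.22, -0.12],
              [0.10, 0.60,  0.71],
              [0.12, 0.55,  0.64]])
y = np.array([0, 0, 1, 1])

cart = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
pred = cart.predict([[0.29, 0.24, -0.10]])  # a new building-like segment
```

In an object-oriented workflow each row would come from one segmentation object, so the tree's splits act on spectrally homogeneous regions instead of noisy individual pixels.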
Structural development defects essentially refer to code structures that violate object-oriented design principles. They make program maintenance challenging and deteriorate software quality over time. Various detection approaches, ranging from traditional heuristic algorithms to machine learning methods, are used to identify these defects. Ensemble learning methods have strengthened the detection of these defects. However, existing approaches do not simultaneously exploit the capability of pre-trained models to extract relevant features and the performance of neural networks for the classification task. Therefore, our goal has been to design a model that combines a pre-trained model, which extracts relevant features from code excerpts through transfer learning, with a bagging method whose base estimator is a dense neural network for defect classification. To achieve this, we composed multiple samples of the same size, with replacement, from the imbalanced MLCQ dataset. For all the samples, we used the CodeT5-small variant to extract features and trained a bagging method with the Roberta Classification Head neural network to classify defects based on these features. We then compared this model to RandomForest, one of the ensemble methods that yields good results. Our experiments showed that the number of base estimators to use for bagging depends on the defect to be detected. Next, we observed that it was not necessary to use a data balancing technique with our model when the imbalance rate was 23%. Finally, for Blob detection, RandomForest had a median MCC value of 0.36 compared to 0.12 for our method. However, our method was predominant in Long Method detection, with a median MCC value of 0.53 compared to 0.42 for RandomForest.
These results suggest that the performance of ensemble methods in detecting structural development defects depends on the specific defect.
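The two quantities at the core of the evaluation above, the bagging aggregation step and the Matthews correlation coefficient (MCC), can be sketched as follows. The vote matrix and labels are synthetic, not the MLCQ experiments:

```python
import numpy as np

def majority_vote(predictions):
    """Aggregate the 0/1 votes of several base estimators (bagging's final step)."""
    return (np.mean(predictions, axis=0) >= 0.5).astype(int)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels; robust to imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Three hypothetical base estimators voting on four code excerpts
votes = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 1, 1, 0]])
combined = majority_vote(votes)
```

MCC is preferred over accuracy here because, on an imbalanced defect dataset, a classifier that always predicts the majority class gets high accuracy but an MCC near zero.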
Modelling enterprises involves two essential tasks: data modelling for static properties and behaviour modelling for dynamic properties. Traditionally, data modelling and behaviour modelling are separated into different phases and different description forms, e.g. the former is modelled in entity-relationship diagrams (ERDs) and the latter in data flow diagrams (DFDs). This separation can result in an incorrect description of the relationships between data and behaviours, so that the enterprise model cannot reflect the actual conditions and demands of the enterprise. In this paper an object-oriented approach integrating data with behaviours in one model for Enterprise Management Information Systems (EMISs) is proposed. As an isomorphic mapping of enterprises, an object-oriented model can, in a natural form, exactly describe the dynamic and static properties of enterprises in an integrated model. Therefore it can easily be used by end-users (e.g. experts in accounting and financial reporting, and business managers) to specify their demands and communicate with the system analysts and designers. Based on the model, an EMIS can be prototyped quickly and then conveniently evolved, with the inheritance mechanism, into an adaptive application system according to the actual demands of the enterprise.
This study examines the advent of agent interaction (AIx) as a transformative paradigm in human-computer interaction (HCI), signifying a notable evolution beyond traditional graphical interfaces and touchscreen interactions. Within the context of large models, AIx is characterized by its innovative interaction patterns and a plethora of application scenarios that hold great potential. The study underscores the pivotal role of AIx in dictating the future trajectory of the large model industry by emphasizing the importance of its adoption and necessity from a user-centric perspective. The fundamental drivers of AIx include the introduction of novel capabilities, replication of capabilities (both anthropomorphic and superhuman), migration of capabilities, aggregation of intelligence, and multiplication of capabilities. These elements are essential for propelling innovation, expanding the frontiers of capability, and realizing the exponential superposition of capabilities, thereby mitigating labor redundancy and addressing a spectrum of human needs. Furthermore, this study provides an in-depth analysis of the structural components and operational mechanisms of agents supported by large models. Such advancements significantly enhance the capacity of agents to tackle complex problems and provide intelligent services, thereby facilitating more intuitive, adaptive, and personalized engagement between humans and machines. The study further delineates four principal categories of interaction patterns that encompass eight distinct modalities of interaction, corresponding to twenty-one specific scenarios, including applications in smart home systems, health assistance, and elderly care. This emphasizes the significance of the new paradigm in advancing HCI, fostering technological advancements, and redefining user experiences. However, the study also acknowledges the challenges and ethical considerations that accompany this paradigm shift, recognizing the need for a balanced approach to harness the full potential of AIx in modern society.
With the continuous improvement of the medical industry's requirements for the professional capabilities of nursing talents, traditional nursing teaching models can hardly meet the needs of complex nursing work in neurology. This paper focuses on nursing education for neurology nursing students and explores the construction of the “one-on-one” teaching model, aiming to achieve a paradigm shift in nursing education. By analyzing the current status of neurology nursing education, this paper identifies the problems in traditional teaching models. Combining the advantages of the “one-on-one” teaching model, it elaborates on the construction path of this model from aspects such as the selection and training of teaching instructors, the design of teaching content, the innovation of teaching methods, and the improvement of the teaching evaluation system. The research shows that the “one-on-one” teaching model can significantly enhance nursing students' mastery of professional knowledge, clinical operation skills, communication skills, and emergency response capabilities, as well as strengthen their professional identity and sense of responsibility. It provides an effective way to cultivate high-quality nursing talents who can meet the needs of neurology nursing work and promotes the innovative development of nursing education.
This paper explores the paradigm reconstruction of interpreting pedagogy driven by generative AI technology. With the breakthroughs of AI technologies such as ChatGPT in natural language processing, traditional interpreting education faces the dual challenges of technological substitution and pedagogical transformation. Based on Kuhn's paradigm theory, the study analyzes the limitations of three traditional interpreting teaching paradigms (language-centric, knowledge-based, and skill-acquisition-oriented) and proposes a novel “teacher-AI-learner” triadic collaborative paradigm. Through reconstructing teaching subjects, environments, and curriculum systems, the integration of real-time translation tools and intelligent terminology databases facilitates the transition from static skill training to dynamic human-machine collaboration. The research also highlights challenges in technological ethics and the pressures of curriculum design transformation, emphasizing the necessity of balancing technological empowerment with humanistic education.
The integration of artificial intelligence (AI) is fundamentally reshaping scientific research, giving rise to a new era of discovery and innovation. This paper explores this transformative shift, introducing the innovative concept of the “AI-Driven Research Ecosystem”, a dynamic and collaborative research environment. Within this ecosystem, we focus on the unification of human-AI collaboration models and emerging new paradigms of research thinking. We analyze the multifaceted roles of AI within the research lifecycle, spanning from passive tool to active assistant and autonomous participant, and categorize these interactions into distinct human-AI collaboration models. Furthermore, we examine how the pervasive involvement of AI necessitates an evolution in human research thinking, emphasizing the significant roles of critical, creative, and computational thinking. Through a review of existing literature and illustrative case studies, this paper provides a comprehensive overview of the AI-driven research ecosystem, highlighting its potential for transforming scientific research. Our findings advance the current understanding of AI's multiple roles in research and underscore its capacity to revolutionize both knowledge discovery and collaborative innovation, paving the way for a more integrated and impactful research paradigm.
Digital-intelligent technologies represent the advanced direction of new quality productive forces and are becoming a driving force for the digital transformation and high-quality development of the cultural industry. Empowered by new quality productive forces, the digital cultural industry has demonstrated diverse characteristics, including the innovation of cultural production subjects, the intelligentization of production tools, the digitization of production objects, the systematization of production methods, and the diversification of production factors. Leveraging technologies such as AIGC, virtual-physical integration, and DAOs based on Web 3.0, the digital cultural industry has established an innovative paradigm, fostering a new method of AIGC production in the digital cultural industry, a new business format of virtual-physical integration, and a new collaborative ecosystem characterized by co-creation, co-building, and co-governance. Meanwhile, the innovative paradigm of the digital cultural industry also faces a series of new challenges, such as the adaptability issues with AIGC algorithm models, creative bottlenecks, and content quality control problems. Additionally, there are obstacles like the immaturity of international development channels for new business formats, the lack of cultural connotations in creative products, and the lag of the digital-intelligent governance of the industry ecosystem behind digital practices. In light of this, there is an urgent need to establish an optimization mechanism for the high-quality development of the digital cultural industry driven by new quality productive forces. This includes optimizing the content production mechanism for AIGC-led high-quality innovation in the digital cultural industry; improving the leapfrog development mechanism for new digital cultural business formats through global-regional collaboration; and enhancing the accurate, high-quality governance mechanism for the digital cultural industry that is aligned with the goals of Chinese modernization.
Active inflammation in “inactive” progressive multiple sclerosis: Traditionally, the distinction between relapsing-remitting multiple sclerosis and progressive multiple sclerosis (PMS) has been framed as an inflammatory versus degenerative dichotomy. This was based on a broad misconception regarding essentially all neurodegenerative conditions, depicting the degenerative process as passive and immune-independent, occurring as a late byproduct of active inflammation in the central nervous system (CNS), which is (solely) systemically driven.
Head and neck cutaneous squamous cell carcinoma (HNCSCC) remains underexplored compared to oropharyngeal squamous cell carcinoma, particularly in relation to human papillomavirus (HPV) and molecular markers such as p16 and p53. While p16 is a well-established surrogate for HPV in oropharyngeal cancer, our review highlights its unreliable role in HNCSCC, where positivity is instead associated with recurrence and metastasis. Similarly, p53 illustrates a dual role (wild-type as a genomic safeguard, mutated as an oncogenic driver), complicating prognostication. Methodological considerations, including the limitations of immunohistochemistry for HPV detection, underscore the need for multi-method and molecular validation in future studies. Ultraviolet radiation is posited as a key modifier of p16 function, decoupling expression from tumor suppression. To contextualize these findings, we draw parallels to glioblastoma (GBM), where subclonal evolution, p53 dysfunction, and intratumoral heterogeneity drive relapse despite aggressive multimodal therapies. GBM exemplifies how bulk-level biomarker generalizations often obscure dynamic cellular ecosystems, reinforcing the necessity of single-cell and spatial approaches. Multi-omics integration, encompassing genome, transcriptome, proteome, and tumor microenvironment mapping, coupled with single-cell RNA sequencing and spatial transcriptomics, offers a path forward for resolving subclonal dynamics in both HNCSCC and GBM. These technologies provide the resolution needed to track tumor-immune-stromal co-evolution, identify therapy-resistant clones, and anticipate recurrence. We argue for an N-of-1, patient- and cell-centric paradigm that reframes biomarkers not as static surrogates but as dynamic readouts of cancer evolution across time and tissue contexts. Conceptually, we propose kinetic and microenvironmental frameworks (e.g., “load-and-lock” barriers; dormancy and immune-synapse stabilization) as hypothesis-generating avenues to stall clonal handoffs and improve outcome prediction. Together, these perspectives argue for revised biomarker frameworks in HNCSCC and ethically inclusive, mechanism-anchored studies that bridge discovery with individualized care. By bridging insights from HNCSCC with the lessons of GBM, this review underscores the need for ethically inclusive, mechanistically informed frameworks that integrate subclonal evolution, biomarker re-interpretation, and precision-personalized hybrid models. Such an approach will be essential for advancing from one-size-fits-all strategies to individualized lifetime cancer care.
The inspection of engine lubricating oil can give an indication of the internal condition of an engine. By means of Object-Oriented Programming (OOP), an expert system is developed in this paper to computerize the inspection. The traditional components of an expert system, such as the knowledge base, inference engine, and user interface, are reconstructed and integrated based on the Microsoft Foundation Class (MFC) library. To validate the expert system, an inspection example is given at the end of this paper.
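The knowledge-base/inference-engine split described above can be sketched minimally in Python rather than the MFC/C++ implementation the paper uses; the oil-analysis facts and rules below are illustrative placeholders, not the system's actual knowledge base:

```python
# Knowledge base: (set of premise facts, conclusion) pairs.
# Rule contents are invented for illustration only.
RULES = [
    ({"high_iron_content"}, "gear_or_cylinder_wear"),
    ({"high_copper_content"}, "bearing_wear"),
    ({"gear_or_cylinder_wear", "low_viscosity"}, "recommend_overhaul"),
]

def infer(facts, rules):
    """Forward-chaining inference engine: fire rules until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Observations from a hypothetical oil sample
result = infer({"high_iron_content", "low_viscosity"}, RULES)
```

Separating the rule set from the fixed inference loop is what lets domain experts extend the knowledge base without touching the engine, which is the point of the architecture the abstract describes.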
This paper uses three size metrics, all collectable during the design phase, to analyze the potentially confounding effect of class size on the associations between object-oriented (OO) metrics and maintainability. To draw as many general conclusions as possible, the confounding effect of class size is analyzed on 127 C++ systems and 113 Java systems. For each OO metric, the indirect effect, which represents the distortion of the association caused by class size, and its variance for individual systems are first computed. Then, a statistical meta-analysis technique is used to compute the average indirect effect over all the systems and to determine whether it is significantly different from zero. The experimental results show that the confounding effects of class size on the associations between OO metrics and maintainability generally exist, regardless of which size metric is used. Therefore, empirical studies validating OO metrics against maintainability should consider class size as a confounding variable.
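One common way to quantify the indirect effect described above is as the difference between the metric's association with maintainability before and after controlling for size; a minimal regression sketch under that assumption (the data are fabricated toy values, and the paper's actual estimator and meta-analysis are not reproduced here):

```python
import numpy as np

def indirect_effect(metric, size, maintainability):
    """Distortion of the metric-maintainability association attributable to size."""
    ones = np.ones_like(metric, dtype=float)
    # Total association: maintainability ~ metric
    b_total = np.linalg.lstsq(np.column_stack([ones, metric]),
                              maintainability, rcond=None)[0][1]
    # Direct association: maintainability ~ metric + size
    b_direct = np.linalg.lstsq(np.column_stack([ones, metric, size]),
                               maintainability, rcond=None)[0][1]
    return b_total - b_direct

metric = np.array([1.0, 2.0, 3.0, 4.0])
size = np.array([1.1, 1.9, 3.2, 3.8])   # correlated with the metric
maint = 3.0 * size                       # actually driven by size alone
ie = indirect_effect(metric, size, maint)
```

In this toy example the metric appears strongly associated with maintainability only because it tracks size; controlling for size removes the association, and the large indirect effect exposes the confounding.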
Expert systems (ESs) are being increasingly applied to the fault diagnosis of engines. Based on the idea of the ES template (EST), an object-oriented rule-type EST is studied with emphasis on the object-oriented knowledge representation, the heuristic inference engine with an improved depth-first search (DFS), and the graphical user interface. A diagnostic ES instance for debris on magnetic chip detectors (MCDs) is then created with the EST. On-site operation shows that the rule-type EST enhances the abilities of knowledge representation and heuristic inference, and opens a new way for the rapid construction and implementation of ESs.
A visual object-oriented software system for lane following on an intelligent highway system (IHS) is proposed. According to object-oriented theory, three typical user services are identified: self-check, transfer between human driving and automatic running, and abnormal information input from the sensors. In addition, the functions of real-time display, the information exchange interface, and the determination and operation interweaving in the three user services are separated into five object-oriented classes, which are organized in a visual development environment. Finally, experimental results prove the validity and reliability of the control application.
This paper presents an object-oriented NBO (node-block-object) data model for hypermedia systems. It takes advantage of the object-oriented method, encapsulating all multimedia information as well as link functions in one unit. It has successfully achieved cross links to offer much better flexibility, and two-way links to realize forward and backward searching in hypermedia system navigation. A conditional relation on links has also been realized, which is very helpful for time-sensitive multimedia information processing and multimedia object cooperation.
From the perspective of theoretical study, there are some flaws in the models of existing object-oriented programming languages. For example, C# does not support metaclasses, the primitive types of Java and C# are not objects, etc. This paper therefore designs a programming language, Shrek, which integrates many language features and constructs in a compact and consistent model. The Shrek language is a class-based, purely object-oriented language. It has a dynamic strong type system and adopts a single-inheritance mechanism with Mixin as its complement. It has a consistent class instantiation and inheritance structure, and the ability of intercessive structural computational reflection, which enables it to support safe metaclass programming. It also supports multi-thread programming and automatic garbage collection, and enhances its expressive power by adopting a native method mechanism. A prototype system of the Shrek language has been implemented, and the anticipated design goals were achieved.
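Metaclass programming, which the abstract cites as a gap in C#, has a familiar analogue in Python, where every class is itself an instance of a metaclass; a minimal sketch of the idea (this is ordinary Python and is unrelated to Shrek's actual syntax, which the abstract does not show):

```python
class Traced(type):
    """A metaclass that records the name of every class it creates."""
    registry = []

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        Traced.registry.append(name)  # intercede in class creation
        return cls

class Shape(metaclass=Traced):
    pass

class Circle(Shape):  # inherits the metaclass from Shape
    pass
```

Because class creation itself goes through the metaclass, cross-cutting policies (registration, validation, interface checks) can be enforced uniformly, which is the kind of consistent class/metaclass structure the Shrek design argues for.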
Funding: R.F. Banaszewski's M.Sc. thesis [10] was supported by the CAPES Foundation (Brazil); R.F. Banaszewski's Ph.D. thesis and A.F. Ronszcka's M.Sc. thesis are also under CAPES support.
Fund: Supported by the Deanship of Research and Graduate Studies at Ajman University under Projects 2024-IRG-ENiT-36 and 2024-IRG-ENIT-29.
Abstract: Incorporating explainability features into web-based decision-making systems is a primary concern for enhancing accountability, transparency, and trust in the community. Multi-domain sentiment analysis is a significant web-based application where the explainability feature is essential for achieving user satisfaction. Conventional design methodologies such as the object-oriented design methodology (OODM) have been proposed for web-based application development, facilitating code reuse, quantification, and security at the design level. However, OODM does not provide explainability in web-based decision-making systems. X-OODM modifies OODM with added explainable models to introduce explainability for such systems. This research introduces an explainable model leveraging X-OODM for designing transparent applications for multi-domain sentiment analysis. The proposed design is evaluated using the design quality metrics defined for the evaluation of the X-OODM explainable model under user context. The design quality metrics transferability, simulatability, informativeness, and decomposability were introduced successively into the evaluation of the X-OODM user context. Auxiliary metrics of accessibility and algorithmic transparency were added to increase the degree of explainability of the design. The study results reveal that introducing such explainability parameters with X-OODM appropriately increases system transparency, trustworthiness, and user understanding. The experimental results validate the enhancement of decision-making for multi-domain sentiment analysis with explainability integrated at the design level. Future work can extend this study by applying the proposed X-OODM framework to different datasets and sentiment analysis applications to further scrutinize its effectiveness in real-world scenarios.
Abstract: A new implementation (SCKE, Structured Communication Knowledge Entity) has been proposed towards combining the logic-oriented with the object-oriented paradigm of computing. It is intended to exploit the advantages of these two paradigms in a structured, natural, and efficient manner for large-scale knowledge processing. The SCKE model supports modularity and protection for the structured development of knowledge systems. It also introduces the concepts typical of object-oriented systems into the logic-oriented paradigm, without losing its advantages as a declarative language. Various inheritance hierarchies are supported in the SCKE model. They provide the semantic basis for various kinds of knowledge in AI systems. The M-entity/K-entity/Instance inheritance captures the relationship among the control, procedural, and factual knowledge in AI systems, and the super-entity/entity/instance inheritance reflects the concept of data abstraction in the knowledge of a particular domain. In addition, the SCKE model is not simply built on top of Prolog like other attempts to integrate the object-oriented into the logic-oriented paradigm. The SCKE model is a tightly coupled model of the logic- and object-oriented paradigms, and its interpreter uniformly interprets the logic semantics and the object-oriented semantics.
Fund: Supported by the National Natural Science Foundation of China [grant numbers 42001386, 42271407] within the ESA-MOST China Dragon 5 Cooperation (ID: 59313).
Abstract: Due to the small size, variety, and high degree of mixing of herbaceous vegetation, remote sensing-based identification of grassland types primarily focuses on extracting major grassland categories and lacks detailed depiction. This limitation significantly hampers the development of effective evaluation and fine supervision for the rational utilization of grassland resources. To address this issue, this study concentrates on the representative grassland of Zhenglan Banner in Inner Mongolia as the study area. It integrates the strengths of Sentinel-1 and Sentinel-2 active-passive synergistic observations and introduces object-oriented techniques for grassland type classification, thereby enhancing the accuracy and refinement of grassland classification. The results demonstrate the following: (1) To meet the supervision requirements of grassland resources, we propose a grassland type classification system based on remote sensing and the vegetation-habitat classification method, specifically applicable to natural grasslands in northern China. (2) By utilizing the high-spatial-resolution Normalized Difference Vegetation Index (NDVI) synthesized through the Spatial and Temporal Non-Local Filter-based Fusion Model (STNLFFM), we are able to capture the NDVI time profiles of grassland types, accurately extract vegetation phenological information within the year, and further enhance the temporal resolution. (3) The integration of multi-seasonal spectral, polarization, and phenological characteristics significantly improves the classification accuracy of grassland types. The overall accuracy reaches 82.61%, with a kappa coefficient of 0.79. Compared to using only multi-seasonal spectral features, the accuracy and kappa coefficient improve by 15.94% and 0.19, respectively. Notably, the accuracy improvement of the gently sloping steppe is the highest, exceeding 38%. (4) Sandy grassland is the most widespread in the study area, and the growing season of grassland vegetation mainly extends from May to September. The sandy meadow exhibits a longer growing season than typical grassland and meadow, and the distinct differences in phenological characteristics contribute to the accurate identification of the various grassland types.
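The NDVI used to build the time profiles above is the standard band ratio (NIR - Red) / (NIR + Red). The following is a plain NumPy sketch of that ratio on toy reflectance values, not of the STNLFFM fusion itself:

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    `eps` guards against division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Healthy grass reflects strongly in NIR and weakly in red, so NDVI is high;
# bare soil or water pushes NDVI toward zero or below.
grass = ndvi(0.50, 0.08)
soil = ndvi(0.25, 0.22)
```

Tracking this value per pixel across the season is what yields the phenological curves (green-up, peak, senescence) that the classification exploits.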
Fund: Supported by the South Grid Guangxi Power Grid Company Science and Technology Project "Research on Algorithm Models for Monitoring and Evaluating Typical Disaster Situations of Electric Power Equipment Based on Space-Ground Remote Sensing Imaging Technology" (GXKJXM20222160).
Abstract: As one of the main geographical elements in urban areas, buildings are closely related to the development of a city. Therefore, how to quickly and accurately extract building information from remote sensing images is of great significance for urban map updating, urban planning and construction, etc. Extracting building information around power facilities, especially from high-resolution images, has become a hot topic in remote sensing research. This study made full use of the characteristics of GF-2 satellite remote sensing images and adopted an object-oriented classification method, combining multi-scale segmentation with the CART classification algorithm, to successfully extract the buildings in the study area. The results show that the overall classification accuracy reached 89.5% and the kappa coefficient was 0.86. Using the object-oriented CART classification algorithm for building extraction produces results closer to the actual ground objects, with higher accuracy. The extraction of buildings in the city contributes to urban development planning and provides decision support for management.
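The overall accuracy and kappa coefficient reported above are standard statistics of a classification confusion matrix. A small sketch on a toy two-class matrix (not the study's data) shows how both are derived:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    Rows are reference classes, columns are predicted classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement (OA)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # agreement expected by chance
    return po, (po - pe) / (1 - pe)             # kappa corrects OA for chance

# Toy matrix: 50 buildings and 45 non-buildings correctly labelled,
# 2 + 3 confusions.
oa, kappa = overall_accuracy_and_kappa([[50, 2], [3, 45]])
```

Kappa is lower than OA whenever class proportions make chance agreement likely, which is why remote sensing studies report both.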
Abstract: Structural development defects essentially refer to code structures that violate object-oriented design principles. They make program maintenance challenging and deteriorate software quality over time. Various detection approaches, ranging from traditional heuristic algorithms to machine learning methods, are used to identify these defects. Ensemble learning methods have strengthened the detection of these defects. However, existing approaches do not simultaneously exploit the capabilities of extracting relevant features from pre-trained models and the performance of neural networks for the classification task. Therefore, our goal has been to design a model that combines a pre-trained model to extract relevant features from code excerpts through transfer learning and a bagging method with a base estimator, a dense neural network, for defect classification. To achieve this, we composed multiple samples of the same size, with replacement, from the imbalanced MLCQ dataset. For all the samples, we used the CodeT5-small variant to extract features and trained a bagging method with the neural network RobertaClassificationHead to classify defects based on these features. We then compared this model to RandomForest, one of the ensemble methods that yields good results. Our experiments showed that the number of base estimators to use for bagging depends on the defect to be detected. Next, we observed that it was not necessary to use a data balancing technique with our model when the imbalance rate was 23%. Finally, for Blob detection, RandomForest had a median MCC value of 0.36 compared to 0.12 for our method. However, our method was predominant in Long Method detection, with a median MCC value of 0.53 compared to 0.42 for RandomForest. These results suggest that the performance of ensemble methods in detecting structural development defects depends on the specific defect.
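The core mechanics the abstract describes, bootstrap samples of the same size drawn with replacement, one base estimator per sample, and a majority vote, can be sketched in a few lines. This is a hypothetical toy: a 1-D feature stands in for CodeT5 embeddings, and a threshold stump stands in for the RobertaClassificationHead base estimator.

```python
import random
from collections import Counter

def bootstrap_sample(X, y, rng):
    """Draw a sample of the same size, with replacement (the paper's setup)."""
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    return [X[i] for i in idx], [y[i] for i in idx]

def train_threshold_stump(X, y):
    """Toy base estimator: pick the threshold on a 1-D feature that
    minimizes training error (predict defect when feature >= threshold)."""
    best = (None, 1.0)
    for t in sorted(set(X)):
        err = sum((x >= t) != bool(label) for x, label in zip(X, y)) / len(X)
        if err < best[1]:
            best = (t, err)
    return best[0]

def bagging_predict(stumps, x):
    """Majority vote over the ensemble."""
    votes = Counter(x >= t for t in stumps)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
# Toy "code feature" with defect label 1 when the feature is large.
X = [0.1, 0.2, 0.3, 0.8, 0.9, 1.0]
y = [0, 0, 0, 1, 1, 1]
stumps = []
for _ in range(5):                    # number of base estimators
    Xs, ys = bootstrap_sample(X, y, rng)
    stumps.append(train_threshold_stump(Xs, ys))
```

The paper's observation that the best number of base estimators varies per defect corresponds here to tuning the `range(5)` ensemble size per task.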
Abstract: Modelling enterprises includes two essential tasks: data modelling for static properties and behaviour modelling for dynamic properties. Traditionally, data modelling and behaviour modelling are separated into different phases and different description forms, e.g. the former is modelled in entity-relationship diagrams (ERDs) and the latter in data flow diagrams (DFDs). The separation can result in an incorrect description of the relationships between data and behaviours, so that the enterprise model fails to reflect the actual conditions and demands of the enterprise. In this paper, an object-oriented approach integrating data with behaviours in a single model for Enterprise Management Information Systems (EMISs) is proposed. As an isomorphic mapping of enterprises, an object-oriented model can, in a natural form, exactly describe the dynamic and static properties of enterprises in one integrated model. Therefore it can easily be used by end-users (e.g. experts in accounting and financial reporting, and business managers) to specify their demands and communicate with system analysts and designers. Based on the model, an EMIS can be prototyped quickly and then conveniently evolved, with the inheritance mechanism, into an adaptive application system according to the actual demands of the enterprise.
Abstract: This study examines the advent of agent interaction (AIx) as a transformative paradigm in human-computer interaction (HCI), signifying a notable evolution beyond traditional graphical interfaces and touchscreen interactions. Within the context of large models, AIx is characterized by its innovative interaction patterns and a plethora of application scenarios that hold great potential. The study underscores the pivotal role of AIx in dictating the future trajectory of the large model industry by emphasizing the importance of its adoption and necessity from a user-centric perspective. The fundamental drivers of AIx include the introduction of novel capabilities, replication of capabilities (both anthropomorphic and superhuman), migration of capabilities, aggregation of intelligence, and multiplication of capabilities. These elements are essential for propelling innovation, expanding the frontiers of capability, and realizing the exponential superposition of capabilities, thereby mitigating labor redundancy and addressing a spectrum of human needs. Furthermore, this study provides an in-depth analysis of the structural components and operational mechanisms of agents supported by large models. Such advancements significantly enhance the capacity of agents to tackle complex problems and provide intelligent services, thereby facilitating more intuitive, adaptive, and personalized engagement between humans and machines. The study further delineates four principal categories of interaction patterns encompassing eight distinct modalities of interaction, corresponding to twenty-one specific scenarios, including applications in smart home systems, health assistance, and elderly care. This emphasizes the significance of the new paradigm in advancing HCI, fostering technological advancements, and redefining user experiences. However, it also acknowledges the challenges and ethical considerations that accompany this paradigm shift, recognizing the need for a balanced approach to harness the full potential of AIx in modern society.
Abstract: With the medical industry's continuously rising requirements for the professional capabilities of nursing talents, traditional teaching models can hardly meet the needs of complex nursing work in neurology. This paper focuses on nursing education for neurology nursing students and explores the construction of a "one-on-one" teaching model, aiming to achieve a paradigm shift in nursing education. By analyzing the current status of neurology nursing education, this paper identifies the problems in traditional teaching models. Drawing on the advantages of the "one-on-one" teaching model, it elaborates the construction path of this model in terms of the selection and training of teaching instructors, the design of teaching content, the innovation of teaching methods, and the improvement of the teaching evaluation system. The research shows that the "one-on-one" teaching model can significantly enhance nursing students' mastery of professional knowledge, clinical operation skills, communication skills, and emergency response capabilities, as well as strengthen their professional identity and sense of responsibility. It provides an effective way to cultivate high-quality nursing talents who can meet the needs of neurology nursing work and promotes the innovative development of nursing education.
Fund: Supported by the 2025 General Project of Humanities and Social Sciences Research in Henan Higher Education Institutions, "Research on the Dynamic Mechanisms and Paths of Innovative Development of Undergraduate Translation Programs Empowered by New Productive Forces" (Project No.: 2025-ZDJH-885); the 2024 College-Level Undergraduate Teaching Reform Project of the School of Foreign Languages, Henan University of Technology, "Research on Implementation Paths of New Models for Interpreter Training Based on AI Large Models" (Project No.: 2024YJWYJG06); the 2025 First-Class Undergraduate Program Construction Special Project of the School of Foreign Languages, Henan University of Technology, "Research on Development Paths for Innovative Development of Undergraduate Translation Programs Empowered by New Productive Forces" (Project No.: 2025WYZYJS30); and the 2025 Educational Reform Project of the School of International Education, Henan University of Technology, "A Study on the Language Competence Development Model for International Talents Based on the AI Large Model: Taking IELTS Reading and Writing Teaching Practice as an Example" (Project No.: GJXY202533).
Abstract: This paper explores the paradigm reconstruction of interpreting pedagogy driven by generative AI technology. With the breakthroughs of AI technologies such as ChatGPT in natural language processing, traditional interpreting education faces the dual challenges of technological substitution and pedagogical transformation. Based on Kuhn's paradigm theory, the study analyzes the limitations of three traditional interpreting teaching paradigms, the language-centric, knowledge-based, and skill-acquisition-oriented paradigms, and proposes a novel "teacher-AI-learner" triadic collaborative paradigm. By reconstructing teaching subjects, environments, and curriculum systems, the integration of real-time translation tools and intelligent terminology databases facilitates the transition from static skill training to dynamic human-machine collaboration. The research also highlights challenges in technological ethics and the pressures of curriculum design transformation, emphasizing the necessity of balancing technological empowerment with humanistic education.
Fund: Funded by the General Program of the National Natural Science Foundation of China, grant number 62277022.
Abstract: The integration of artificial intelligence (AI) is fundamentally reshaping scientific research, giving rise to a new era of discovery and innovation. This paper explores this transformative shift, introducing the innovative concept of the "AI-Driven Research Ecosystem", a dynamic and collaborative research environment. Within this ecosystem, we focus on the unification of human-AI collaboration models and emerging new research thinking paradigms. We analyze the multifaceted roles of AI within the research lifecycle, spanning from passive tool to active assistant and autonomous participant, and categorize these interactions into distinct human-AI collaboration models. Furthermore, we examine how the pervasive involvement of AI necessitates an evolution in human research thinking, emphasizing the significant roles of critical, creative, and computational thinking. Through a review of existing literature and illustrative case studies, this paper provides a comprehensive overview of the AI-driven research ecosystem, highlighting its potential for transforming scientific research. Our findings advance the current understanding of AI's multiple roles in research and underscore its capacity to revolutionize both knowledge discovery and collaborative innovation, paving the way for a more integrated and impactful research paradigm.
Fund: Funded by "Research on Policy Design and Implementation Paths for High-Quality Development of the Digital Cultural Industry" (23&ZD087), a major project of the National Social Science Foundation of China.
Abstract: Digital-intelligent technologies represent the advanced direction of new quality productive forces and are becoming a driving force for the digital transformation and high-quality development of the cultural industry. Empowered by new quality productive forces, the digital cultural industry has demonstrated diverse characteristics, including the innovation of cultural production subjects, the intelligentization of production tools, the digitization of production objects, the systematization of production methods, and the diversification of production factors. Leveraging technologies such as AIGC, virtual-physical integration, and DAOs based on Web 3.0, the digital cultural industry has established an innovative paradigm, fostering a new method of AIGC production, a new business format of virtual-physical integration, and a new collaborative ecosystem characterized by co-creation, co-building, and co-governance. Meanwhile, this innovative paradigm also faces a series of new challenges, such as adaptability issues with AIGC algorithm models, creative bottlenecks, and content quality control problems. Additionally, there are obstacles such as the immaturity of international development channels for new business formats, the lack of cultural connotation in creative products, and the lag of digital-intelligent governance of the industry ecosystem behind digital practice. In light of this, there is an urgent need to establish an optimization mechanism for the high-quality development of the digital cultural industry driven by new quality productive forces. This includes optimizing the content production mechanism for AIGC-led high-quality innovation in the digital cultural industry; improving the leapfrog development mechanism for new digital cultural business formats through global-regional collaboration; and enhancing the accurate, high-quality governance mechanism for the digital cultural industry that is aligned with the goals of Chinese modernization.
Abstract: Active inflammation in "inactive" progressive multiple sclerosis: Traditionally, the distinction between relapsing-remitting multiple sclerosis and progressive multiple sclerosis (PMS) has been framed as an inflammatory-versus-degenerative dichotomy. This was based on a broad misconception regarding essentially all neurodegenerative conditions, depicting the degenerative process as passive and immune-independent, occurring as a late byproduct of active inflammation in the central nervous system (CNS), which is (solely) systemically driven.
Abstract: Head and neck cutaneous squamous cell carcinoma (HNCSCC) remains underexplored compared to oropharyngeal squamous cell carcinoma, particularly in relation to human papillomavirus (HPV) and molecular markers such as p16 and p53. While p16 is a well-established surrogate for HPV in oropharyngeal cancer, our review highlights its unreliable role in HNCSCC, where positivity is instead associated with recurrence and metastasis. Similarly, p53 illustrates a dual role, wild-type as a genomic safeguard and mutated as an oncogenic driver, complicating prognostication. Methodological considerations, including the limitations of immunohistochemistry for HPV detection, underscore the need for multi-method and molecular validation in future studies. Ultraviolet radiation is posited as a key modifier of p16 function, decoupling expression from tumor suppression. To contextualize these findings, we draw parallels to glioblastoma (GBM), where subclonal evolution, p53 dysfunction, and intratumoral heterogeneity drive relapse despite aggressive multimodal therapies. GBM exemplifies how bulk-level biomarker generalizations often obscure dynamic cellular ecosystems, reinforcing the necessity of single-cell and spatial approaches. Multi-omics integration, encompassing genome, transcriptome, proteome, and tumor microenvironment mapping, coupled with single-cell RNA sequencing and spatial transcriptomics, offers a path forward for resolving subclonal dynamics in both HNCSCC and GBM. These technologies provide the resolution needed to track tumor-immune-stromal co-evolution, identify therapy-resistant clones, and anticipate recurrence. We argue for an N-of-1, patient- and cell-centric paradigm that reframes biomarkers not as static surrogates but as dynamic readouts of cancer evolution across time and tissue contexts. Conceptually, we propose kinetic and microenvironmental frameworks (e.g., "load-and-lock" barriers; dormancy and immune-synapse stabilization) as hypothesis-generating avenues to stall clonal handoffs and improve outcome prediction. Together, these perspectives argue for revised biomarker frameworks in HNCSCC and ethically inclusive, mechanism-anchored studies that bridge discovery with individualized care. By bridging insights from HNCSCC with the lessons of GBM, this review underscores the need for ethically inclusive, mechanistically informed frameworks that integrate subclonal evolution, biomarker re-interpretation, and precision-personalized hybrid models. Such an approach will be essential for advancing from one-size-fits-all strategies to individualized lifetime cancer care.
Abstract: The inspection of engine lubricating oil can give an indication of the internal condition of an engine. By means of object-oriented programming (OOP), an expert system is developed in this paper to computerize the inspection. The traditional components of an expert system, such as the knowledge base, inference engine, and user interface, are reconstructed and integrated based on the Microsoft Foundation Class (MFC) library. To test the expert system, an inspection example is given at the end of this paper.
Fund: The National Natural Science Foundation of China (Nos. 60425206, 60633010).
Abstract: This paper uses three size metrics, which are collectable during the design phase, to analyze the potentially confounding effect of class size on the associations between object-oriented (OO) metrics and maintainability. To draw conclusions as general as possible, the confounding effect of class size is analyzed on 127 C++ systems and 113 Java systems. For each OO metric, the indirect effect that represents the distortion of the association caused by class size, and its variance across individual systems, is first computed. Then, a statistical meta-analysis technique is used to compute the average indirect effect over all the systems and to determine whether it is significantly different from zero. The experimental results show that the confounding effects of class size on the associations between OO metrics and maintainability generally exist, regardless of which size metric is used. Therefore, empirical studies validating OO metrics against maintainability should consider class size as a confounding variable.
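One common way to quantify such an indirect (confounding) effect is the coefficient-difference method: the unadjusted metric-outcome association minus the metric's coefficient after adjusting for size. The sketch below illustrates that idea on synthetic data; it is an assumption-laden illustration, not the paper's actual decomposition or datasets.

```python
import numpy as np

def slope(x, y):
    """Least-squares slope of y on x (biased covariance over biased variance)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def indirect_effect(metric, size, outcome):
    """Distortion of the metric-outcome association attributable to size:
    unadjusted slope minus the metric coefficient after adjusting for size."""
    total = slope(metric, outcome)
    X = np.column_stack([np.ones(len(metric)), metric, size])
    direct = np.linalg.lstsq(X, outcome, rcond=None)[0][1]
    return total - direct

rng = np.random.default_rng(1)
size = rng.normal(0.0, 1.0, 200)                  # class size (standardized)
metric = 0.8 * size + rng.normal(0.0, 0.3, 200)   # OO metric tracks size
effort = size + rng.normal(0.0, 0.3, 200)         # maintenance effort driven by size
ie = indirect_effect(metric, size, effort)        # large: size confounds the link
```

Here the metric has no direct effect on effort by construction, yet its unadjusted slope is large, exactly the distortion pattern the paper tests for across systems.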
Abstract: Expert systems (ESs) are being increasingly applied to the fault diagnosis of engines. Based on the idea of an ES template (EST), an object-oriented rule-type EST is studied in depth, covering object-oriented knowledge representation, a heuristic inference engine with an improved depth-first search (DFS), and the graphical user interface. A diagnostic ES instance for debris on magnetic chip detectors (MCDs) is then created with the EST. On-site operation shows that the rule-type EST enhances the abilities of knowledge representation and heuristic inference, and opens a new way for the rapid construction and implementation of ESs.
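The kernel of a rule-type inference engine with depth-first search can be sketched very briefly. This is a generic backward-chaining illustration in Python (the rule names are invented for the MCD-debris setting; the paper's engine and its heuristics are not reproduced here):

```python
# Each goal maps to alternative rules; each rule is a list of antecedents
# that must all be proven for the goal to hold.
RULES = {
    "debris_alarm": [["mcd_signal", "engine_running"]],
    "mcd_signal": [["chip_detected"]],
}

def prove(goal, facts, depth=0, max_depth=10):
    """Depth-first backward chaining: a goal holds if it is a known fact,
    or if every antecedent of some rule for it can be proven recursively."""
    if depth > max_depth:          # crude guard against cycles / runaway search
        return False
    if goal in facts:
        return True
    for antecedents in RULES.get(goal, []):
        if all(prove(a, facts, depth + 1, max_depth) for a in antecedents):
            return True
    return False

facts = {"chip_detected", "engine_running"}
```

A heuristic variant, as the abstract suggests, would order the candidate rules and antecedents before descending, so the most promising branch is searched first.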
Abstract: Visual object-oriented software for lane following on an intelligent highway system (IHS) is proposed. According to object-oriented theory, three typical user services are identified: self-check, switching between human driving and automatic running, and abnormal information input from the sensors. In addition, the functions of real-time display, the information exchange interface, and the determination and operation interwoven in the three user services are separated into five object-oriented classes. Moreover, the five classes are organized in a visual development environment. Finally, experimental results prove the validity and reliability of the control application.
Abstract: This paper presents an object-oriented NBO (node-block-object) data model for hypermedia systems. It takes advantage of the object-oriented method and encapsulates all multimedia information, as well as link functions, in one unit. It has successfully achieved cross links to offer much better flexibility, and two-way links to realize forward and backward searching in hypermedia system navigation. A conditional relation on links has also been realized, which is very helpful for time-sensitive multimedia information processing and multimedia object cooperation.
Fund: The National Science Fund for Distinguished Young Scholars (No. 60425206); the National Natural Science Foundation of China (No. 60633010); the Natural Science Foundation of Jiangsu Province (No. BK2006094).
Abstract: From the perspective of theoretical study, there are some faults in the models of existing object-oriented programming languages. For example, C# does not support metaclasses, the primitive types of Java and C# are not objects, etc. This paper therefore designs a programming language, Shrek, which integrates many language features and constructions in a compact and consistent model. The Shrek language is a class-based, purely object-oriented language. It has a dynamic strong type system, and adopts a single-inheritance mechanism with Mixin as its complement. It has a consistent class instantiation and inheritance structure, and the ability of intercessive structural computational reflection, which enables it to support safe metaclass programming. It also supports multi-thread programming and automatic garbage collection, and enhances its expressive power by adopting a native method mechanism. The prototype system of the Shrek language has been implemented and the anticipated design goals achieved.
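The single-inheritance-plus-Mixin model the abstract describes can be illustrated in Python rather than Shrek (all class names below are invented): a class has exactly one "real" parent carrying state, while a mixin contributes behaviour only.

```python
class Persistable:
    """Mixin: contributes behaviour only, carries no state of its own."""
    def save(self):
        return f"saved {self.name}"   # relies on the host class providing .name

class Node:
    """The single base class in the inheritance chain; owns the state."""
    def __init__(self, name):
        self.name = name

class Document(Persistable, Node):
    """One real parent (Node) plus a mixin (Persistable), mirroring a
    single-inheritance mechanism with Mixin as its complement."""
    pass

doc = Document("spec")
result = doc.save()
```

This keeps the inheritance chain linear and unambiguous while still letting orthogonal capabilities (persistence, logging, etc.) be composed in, which is the design trade-off Shrek's model targets.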