Attributed graph clustering plays a vital role in uncovering hidden network structures, but it presents significant challenges. In recent years, various models have been proposed to identify meaningful clusters by integrating both structural and attribute-based information. However, these models often emphasize node proximities without adequately balancing the efficiency of clustering based on both structural and attribute data. Furthermore, they tend to neglect the critical fuzzy information inherent in attributed graph clusters. To address these issues, we introduce a new framework, Markov lumpability optimization, for efficient clustering of large-scale attributed graphs. Specifically, we define a lumped Markov chain on an attribute-augmented graph and introduce a new metric, Markov lumpability, to quantify the differences between the original and lumped Markov transition probability matrices. To minimize this measure, we propose a conjugate gradient projection-based approach that ensures the partitioning closely aligns with the intrinsic structure of fuzzy clusters through conditional optimization. Extensive experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed framework compared to existing clustering algorithms. This framework has many potential applications, including dynamic community analysis of social networks, user profiling in recommendation systems, functional module identification in biological molecular networks, and financial risk control, offering a new paradigm for mining complex patterns in high-dimensional attributed graph data.
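The lumpability idea can be sketched numerically: a partition is good when lumping the random-walk transition matrix and lifting it back changes the transition probabilities little. A minimal sketch under simplifying assumptions (the paper's exact metric and conjugate-gradient optimizer are not reproduced; `lumpability_error` and the uniform within-block weights are illustrative choices, based on the classical lumpability residual):

```python
import numpy as np

def lumpability_error(A, labels):
    """Illustrative lumpability measure: how far a partition is from making
    the random walk on adjacency matrix A exactly lumpable. This is the
    classical residual ||P H - H P_lumped||_F, not the paper's metric."""
    A = np.asarray(A, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    k = int(labels.max()) + 1
    H = np.eye(k)[labels]                     # n x k cluster-membership indicator
    W = H / H.sum(axis=0)                     # within-cluster averaging weights
    P_lumped = W.T @ P @ H                    # k x k lumped transition matrix
    # Exact lumpability means P @ H == H @ P_lumped (zero residual)
    return np.linalg.norm(P @ H - H @ P_lumped)
```

For two disjoint triangles, the correct two-block partition gives a zero residual, while a partition that mixes the triangles does not.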
Detecting overlapping communities in attributed networks remains a significant challenge due to the complexity of jointly modeling topological structure and node attributes, the unknown number of communities, and the need to capture nodes with multiple memberships. To address these issues, we propose a novel framework named density peaks clustering with neutrosophic C-means. First, we construct a consensus embedding by aligning structure-based and attribute-based representations using spectral decomposition and canonical correlation analysis. Then, an improved density peaks algorithm automatically estimates the number of communities and selects initial cluster centers based on a newly designed cluster strength metric. Finally, a neutrosophic C-means algorithm refines the community assignments, modeling uncertainty and overlap explicitly. Experimental results on synthetic and real-world networks demonstrate that the proposed method achieves superior performance in terms of detection accuracy, stability, and its ability to identify overlapping structures.
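The density-peaks step can be illustrated in a few lines: centers are points with both high local density and a large distance to any denser point. This is a Rodriguez-Laio-style sketch only; the paper's cluster-strength metric and its automatic estimation of the number of communities are not reproduced, so `dc` and `n_centers` are supplied by hand here:

```python
import numpy as np

def density_peaks_centers(X, dc, n_centers):
    """Select cluster centers by the density-peaks decision value
    gamma = rho * delta (local density times distance to nearest
    denser point); illustrative, not the paper's improved variant."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1            # cut-off-kernel local density
    delta = np.empty(len(X))
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        delta[i] = D[i].max() if denser.size == 0 else D[i, denser].min()
    gamma = rho * delta
    return np.argsort(gamma)[::-1][:n_centers]
```

On two well-separated star-shaped blobs, the two hub points are selected as centers.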
The global monsoon system, encompassing the Asian-Australian, African, and American monsoons, sustains two-thirds of the world's population by regulating water resources and agriculture. Monsoon anomalies pose severe risks, including floods and droughts. Recent research associated with the implementation of the Global Monsoons Model Intercomparison Project under the umbrella of CMIP6 has advanced our understanding of its historical variability and driving mechanisms. Observational data reveal a 20th-century shift: increased rainfall pre-1950s, followed by aridification and partial recovery post-1980s, driven by both internal variability (e.g., the Atlantic Multidecadal Oscillation) and external forcings (greenhouse gases, aerosols), while ENSO drives interannual variability through ocean-atmosphere interactions. Future projections under greenhouse forcing suggest long-term monsoon intensification, though regional disparities and model uncertainties persist. Models indicate robust trends but struggle to quantify extremes: thermodynamic effects (warming-induced moisture rise) uniformly boost heavy rainfall, while dynamical shifts (circulation changes) create spatial heterogeneity. Volcanic eruptions and proposed solar radiation modification (SRM) further complicate predictions: tropical eruptions suppress monsoons, whereas high-latitude events alter cross-equatorial flows, highlighting unresolved feedbacks. The emergent constraint approach is increasingly used to correct future projections and reduce uncertainty with respect to the global monsoons. Critical challenges remain. Model biases and sparse 20th-century observational data hinder accurate attribution. The interplay between natural variability and anthropogenic forcings, along with nonlinear extreme precipitation risks under warming, demands deeper mechanistic insights. Additionally, SRM's regional impacts and hemispheric monsoon interactions require systematic evaluation. Addressing these gaps necessitates enhanced observational networks, refined climate models, and interdisciplinary efforts to disentangle multiscale drivers, ultimately improving resilience strategies for monsoon-dependent regions.
Dear Editor, Severe fever with thrombocytopenia syndrome (SFTS) is an emerging tick-borne infectious disease caused by a novel bunyavirus called SFTS virus (SFTSV). It was initially identified in China in 2009 (Yu et al., 2011). Since then, the number of reported SFTS cases has rapidly increased in China, South Korea, and Japan (Li et al., 2018; Takahashi et al., 2014; Kim et al., 2018). Sporadic SFTS cases have also been identified in several other Asian countries, such as Vietnam, Pakistan, Myanmar, and Thailand (Takahashi et al., 2014; Tran et al., 2019; Li et al., 2021). This disease is recognized as a highly lethal viral hemorrhagic fever with a mortality rate ranging from 12% to 50% (Yu et al., 2011; Li et al., 2018; Takahashi et al., 2014; Kim et al., 2013). SFTS primarily spreads to humans through bites from ticks infected with SFTSV, with the Haemaphysalis longicornis tick acting as the predominant vector for SFTSV (Zhuang et al., 2018).
Purpose: Based on real-world academic data, this study aims to use network embedding technology to mine academic relationships and to investigate the effectiveness of the proposed embedding model on academic collaborator recommendation tasks. Design/methodology/approach: We propose an academic collaborator recommendation model based on attributed network embedding (ACR-ANE), which obtains an enhanced scholar embedding and takes full advantage of the topological structure of the network and multi-type scholar attributes. Non-local neighbors are defined for scholars to capture strong relationships among them. A deep auto-encoder is adopted to encode the academic collaboration network structure and scholar attributes into a low-dimensional representation space. Findings: 1. The proposed non-local neighbors can better describe the relationships among scholars in the real world than first-order neighbors. 2. It is important to consider the structure of the academic collaboration network and scholar attributes simultaneously when recommending collaborators for scholars. Research limitations: The designed method works for static networks, without taking account of network dynamics. Practical implications: The designed model embeds the academic collaboration network structure and scholarly attributes, and can be used to recommend potential collaborators to scholars. Originality/value: Experiments on two real-world scholarly datasets, Aminer and APS, show that our proposed method performs better than other baselines.
Homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of route network analysis, and the attribute information of airport nodes in the airport route network is not appropriately integrated into the airport network. To solve this problem, a multi-airport system homogeneity analysis method based on airport attributed network representation learning is proposed. Firstly, the route network of a multi-airport system with attribute information is constructed: if there are flights between two airports, an edge is added between them, and regional attribute information is added for each airport node. Secondly, the airport attributes and the airport network vector are each represented and then embedded into a unified airport representation vector space by the network representation learning method, yielding an airport vector that integrates the airport attributes and the airport network characteristics. By calculating the similarity of the airport vectors, the degree of homogeneity between airports and the homogeneity of the multi-airport system can be conveniently computed. Experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with other existing algorithms, the homogeneity analysis method based on attributed network representation learning yields results more consistent with the current situation of the Beijing-Tianjin-Hebei multi-airport system.
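The similarity-based homogeneity step can be sketched directly: once each airport has a representation vector, pairwise cosine similarity gives a degree of homogeneity, and an average over all pairs gives a system-level score. This is an illustrative assumption; the paper's exact similarity measure and aggregation are not reproduced:

```python
import numpy as np

def homogeneity(u, v):
    """Cosine similarity between two airport representation vectors,
    used here as an illustrative degree-of-homogeneity score."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def system_homogeneity(V):
    """Average pairwise similarity over all airport vectors in the system."""
    n = len(V)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(homogeneity(V[i], V[j]) for i, j in pairs) / len(pairs)
```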
Objective To estimate the lung cancer burden that may be attributable to ambient fine particulate matter (PM2.5) pollution in Guangzhou city, China, from 2005 to 2013. Methods The data regarding PM2.5 exposure were obtained from the 'Ambient air pollution exposure estimation for the Global Burden of Disease 2013' dataset at 0.1° × 0.1° spatial resolution. Disability-adjusted life years (DALYs) were estimated based on information on the mortality and incidence of lung cancer. Comparative risk analysis and an integrated exposure-response function were used to estimate the attributed disease burden. Results The population-weighted average concentration of PM2.5 increased by 34.6% between 1990 and 2013, from 38.37 μg/m³ to 51.31 μg/m³. Lung cancer DALYs in both men and women increased by 36.2% from 2005 to 2013. PM2.5-attributed lung cancer DALYs increased from 12105.0 (8181.0 for males and 3924.0 for females) in 2005 to 16489.3 (11291.7 for males and 5197.6 for females) in 2013. An average of 23.1% of the lung cancer burden was attributable to PM2.5 pollution in 2013. Conclusion PM2.5 has caused a serious but under-appreciated public health burden in Guangzhou, and the trend is deteriorating. Effective strategies are needed to tackle this major public health problem.
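The attribution arithmetic rests on a population attributable fraction (PAF): attributable DALYs are total DALYs times the PAF. As a simpler stand-in for the study's comparative risk analysis with an integrated exposure-response function (which is not reproduced here), Levin's classic PAF formula illustrates the mechanics:

```python
def paf_from_rr(rr, exposed_fraction=1.0):
    """Levin's population attributable fraction for relative risk `rr`
    and an exposed population fraction. A standard, simpler stand-in;
    the study used an integrated exposure-response function instead."""
    p = exposed_fraction
    return p * (rr - 1.0) / (p * (rr - 1.0) + 1.0)

def attributable_burden(total_dalys, paf):
    """Attributable DALYs = total DALYs x attributable fraction."""
    return total_dalys * paf
```

For example, a fully exposed population with relative risk 2 gives a PAF of 0.5.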
Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information offers simplicity and efficiency, which makes it well suited to preserving social network privacy. Therefore, we propose a method to preserve the topology privacy and node attribute privacy of attributed social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attributed social networks and compared with several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attributed social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
Software architectures shift the focus of developers from lines of code to coarser-grained architectural elements and their overall interconnection structure. There are, however, many features of distributed software that make its development quite different from traditional approaches. Furthermore, traditional centralized approaches with fixed interfaces cannot adapt to the flexible requirements of distributed software. In this paper, the attributed grammar (AG) is extended to capture the characteristics of distributed software, a distributed software architecture description language (DSADL) based on attributed grammar is introduced, and a model of an integrated environment for software architecture design is proposed. Practice demonstrates that DSADL can help programmers analyze and design distributed software effectively, so the efficiency of development can be improved greatly.
Parameter estimation of the attributed scattering center (ASC) model is significant for automatic target recognition (ATR). Sparse representation based parameter estimation methods have developed rapidly, and construction of the separable dictionary is a key issue for sparse representation technology. A compressive time-domain dictionary (TD) for the ASC model is presented. Two-dimensional frequency-domain responses of the ASC are produced and transformed into the time domain. These time-domain responses are then cut off, stacked into vectors, and amalgamated to form the TD. Compared with the traditional frequency-domain dictionary (FD), the TD is a matrix that is quite sparse and can markedly reduce the data size of the dictionary. Based on the basic TD construction method, we present four extended TD construction methods, which are available for different applications. In the experiments, the performance of the TD, including the basic model and the extended models, is first analyzed in comparison with the FD. Secondly, an example of parameter estimation from synthetic aperture radar (SAR) measurements of a target collected in an anechoic room is exhibited. Finally, a sparse image reconstruction example from two separated apertures is presented. Experimental results demonstrate the effectiveness and efficiency of the proposed TD.
Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterers' positions, and provide an effective approach to improving SAR image resolution. Based on the attributed scattering center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques, namely sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA); the parameter settings of the five SR algorithms were also discussed, along with their performance in different situations. Through comparison of the MSE and failure rate in each algorithm simulation, FBMP and SpaRSA are found suitable for dealing with problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it consistently achieves better performance in terms of failure rate at high SNR.
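The family of shrinkage-thresholding solvers compared above can be illustrated with the basic ISTA iteration for the l1-regularized recovery problem min_x 0.5||Ax - y||² + λ||x||₁; FISTA (one of the five compared algorithms) adds a momentum step on top of this. The sketch below is generic: neither the paper's dictionaries nor its parameter settings are reproduced:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Minimal ISTA: gradient step on the least-squares term followed by
    soft-thresholding (the proximal operator of lam * ||x||_1)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```

With an orthonormal A the fixed point is the soft-thresholded data, which makes the solver easy to sanity-check.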
Contrastive self-supervised representation learning on attributed graph networks with graph neural networks has attracted considerable research interest recently. However, there are still two challenges. First, most real-world systems comprise multiple relations, where entities are linked by different types of relations and each relation is a view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can serve as self-supervised signals, which are not fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM²S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, a contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS), with a graph convolutional network as the encoder to capture the intra-relation information with multi-scale structure-level and feature-level self-supervised signals, is introduced first. The structure-level information includes the edge structure and sub-graph structure, and the feature-level information represents the output of different graph convolutional layers. Second, according to the consensus assumption among inter-relations, the CoLM²S framework is proposed to jointly learn the various graph relations in an attributed multiplex graph network to achieve a global consensus node embedding. The proposed method can fully distil the graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods, outperforming existing competitive baselines.
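A common building block for such contrastive objectives is an InfoNCE-style loss between two views' node embeddings, where each node's embedding in one view is pulled toward its own embedding in the other view and pushed away from other nodes'. The sketch below is illustrative of that generic building block; the exact intra- and inter-relation losses of CoLM²S are not reproduced:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between node embeddings from two
    views; positives are the matching nodes on the diagonal."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # row-wise softmax
    return float(-np.log(np.diag(p)).mean())
```

Correctly aligned views yield a lower loss than views with mismatched node order, which is the signal the encoder is trained on.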
Synthetic aperture radar (SAR) acquires high-resolution images using the active microwave imaging method. SAR images are widely used in target recognition, classification, and surface analysis based on extracted features, and the attributed scattering center (ASC) model is able to describe the image features for these tasks. However, sidelobe effects reduce the accuracy and reliability of the estimated ASC model parameters. This paper incorporates SAR super-resolution into ASC extraction to improve its performance. Both filter bank and subspace methods are demonstrated as preprocessing to suppress the sidelobes. Based on the preprocessed data, a reinforcement-based ASC method is used to obtain the parameters. The experimental results show that the super-resolution method can reduce noise and suppress the sidelobe effect, which improves the accuracy of the estimated ASC model parameters.
Many types of real-world information systems, including social media and e-commerce platforms, can be modelled by means of attribute-rich, connected networks. The goal of anomaly detection in artificial intelligence is to identify instances that deviate significantly from the main distribution of data or that differ from known cases. Anomalous nodes in node-attributed networks can be identified with greater precision if both graph and node attributes are taken into account. Almost all of the studies in this area focus on supervised techniques for spotting outliers. While supervised algorithms for anomaly detection work well in theory, they cannot be applied to real-world applications owing to a lack of labelled data. Considering the possible data distribution, our model employs a dual variational autoencoder (VAE), while a generative adversarial network (GAN) ensures that the model is robust to adversarial training. The dual VAEs are used in another capacity: as a fake-node generator. Adversarial training is used to ensure that our latent codes have a Gaussian or uniform distribution. To provide a fair representation of the graph, the discriminator instructs the generator to generate latent variables with distributions that are more consistent with the actual distribution of the data. Once the model has been learned, the discriminator is used for anomaly detection via reconstruction loss, having been trained to distinguish between the normal and artificial distributions of data. First, using a dual VAE, our model simultaneously captures cross-modality interactions between topological structure and node characteristics and overcomes the problem of unlabeled anomalies, allowing us to better understand the network sparsity and nonlinearity.
Second, the proposed model considers the regularization of the latent codes while solving the issue of unregularized embedding techniques that can quickly lead to unsatisfactory representation. Finally, we use the discriminator reconstruction loss for anomaly detection as the discriminator is well-trained to separate the normal and generated data distributions because reconstruction-based loss does not include the adversarial component. Experiments conducted on attributed networks demonstrate the effectiveness of the proposed model and show that it greatly surpasses the previous methods. The area under the curve scores of our proposed model for the BlogCatalog, Flickr, and Enron datasets are 0.83680, 0.82020, and 0.71180, respectively, proving the effectiveness of the proposed model. The result of the proposed model on the Enron dataset is slightly worse than other models; we attribute this to the dataset's low dimensionality as the most probable explanation.
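The reconstruction-based scoring step is simple to state: nodes the trained model reconstructs poorly are flagged as anomalous. A minimal sketch, in which `reconstruct` is a hypothetical stand-in for the trained dual-VAE/GAN (the paper's architecture is not reproduced):

```python
import numpy as np

def anomaly_scores(X, reconstruct):
    """Per-node anomaly score = reconstruction error; higher means
    more anomalous. `reconstruct` maps the node matrix X to X-hat."""
    return np.linalg.norm(X - reconstruct(X), axis=1)
```

Even a toy "model" that reconstructs every node as the dataset mean ranks an obvious outlier highest.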
Knee osteoarthritis (KOA), characterized by heterogeneous arthritic manifestations and complex peripheral joint disorder, is one of the leading causes of disability worldwide; it has become a high burden due to its multifactorial nature and the lack of available disease-modifying treatments. The application of mesenchymal stem/stromal cells (MSCs) as therapeutic drugs has provided novel treatment options for diverse degenerative and chronic diseases, including KOA. However, the complexity and specificity of "live" cells have posed challenges for MSC-based drug development and the concomitant scale-up of preparation from laboratory to industrialization. For instance, despite considerable progress in ex vivo cell culture technology supporting drug conversion and clinical trials, significant challenges remain in obtaining regulatory approvals. Thus, there is an urgent need for the research and development of MSC drugs for KOA. In this review, we provide alternative solution strategies for the preparation of MSC drugs on the basis of the principle of quality by design, including designing the cell production processes, quality control, and clinical applications. In detail, we mainly focus on the quality-by-design method for MSC manufacturing in standard cell-culturing factories for the treatment of KOA, using the Quality Target Product Profile as a starting point to determine potential critical quality attributes and to establish relationships between critical material attributes and critical process parameters. Collectively, this review aims to support product performance and robust process design, and should help to narrow the gap between product design and compliant good manufacturing practice production.
Extreme ozone pollution events (EOPEs) are associated with synoptic weather patterns (SWPs) and pose severe health and ecological risks. However, a systematic investigation of the meteorological causes, transport pathways, and source contributions of historical EOPEs is still lacking. In this paper, the K-means clustering method is applied to identify six dominant SWPs during the warm season in the Yangtze River Delta (YRD) region from 2016 to 2022, providing an integrated analysis of the meteorological factors affecting ozone pollution in Hefei under different SWPs. Using the WRF-FLEXPART model, the transport pathways (TPPs) and geographical sources of the near-surface air masses in Hefei during EOPEs are investigated. The results reveal that Hefei experienced the highest ozone concentration (134.77 ± 42.82 μg/m³), exceedance frequency (46 days, 23.23%), and proportion of EOPEs (21 instances, 47.7%) under the control of the peripheral subsidence of typhoons (Type 5). Regional southeast winds were correlated with ozone pollution in Hefei. During EOPEs, a high boundary layer height, strong solar radiation, high temperature, low humidity, low cloud cover, and pronounced subsidence airflow occurred over Hefei and the broader YRD region. Among the TPPs and geographical sources of the near-surface air masses during historical EOPEs, the East-South (E_S) patterns exhibited the highest frequency (28 instances, 65.11%). The YRD was the main source of land-originating air masses under E_S patterns (50.28%), with Hefei, southern Anhui, southern Jiangsu, and northern Zhejiang being key contributors. These findings can help improve ozone pollution early warning and control mechanisms at urban and regional scales.
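The SWP identification step uses plain K-means: each day's meteorological field is flattened into a vector, and days are grouped by nearest cluster center. A minimal sketch (the paper's preprocessing and its choice of k = 6 are not reproduced; initialization with the first k rows is a simplification for determinism):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain K-means: alternate nearest-center assignment and
    center recomputation. Each row of X would be one day's
    flattened meteorological field."""
    centers = X[:k].astype(float).copy()       # simple deterministic init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # assign to nearest center
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated groups of samples, the iteration converges to one label per group within a few steps.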
Based on the analysis of surface geological surveys, exploratory wells, gravity-magnetic-electric data, and seismic data, and through mapping the sedimentary basin and its peripheral orogenic belts together, this paper systematically explores the boundary, distribution, geological structure, and tectonic attributes of the Ordos prototype basin through geological history. The results show that the Ordos block is bounded to the west by the Engorwusu Fault Zone, to the east by the Taihangshan Mountain Piedmont Fault Zone, to the north by the Solonker-Xilamuron Suture Zone, and to the south by the Shangnan-Danfeng Suture Zone. The Ordos Basin boundary was a plate tectonic boundary during the Middle Proterozoic to Paleozoic and an intra-continental deformation boundary in the Meso-Cenozoic. The basin survived as a marine cratonic basin covering the entire Ordos block during the Middle Proterozoic to Ordovician, a marine-continental transitional depression basin enclosed by an island arc uplift belt at the plate margin during the Carboniferous to Permian, a unified intra-continental lacustrine depression basin in the Triassic, and an intra-continental cratonic basin circled by a rift system in the Cenozoic. The basin's extent has been decreasing up to the present. The large, widespread prototype basin controlled an exploration area far beyond the present-day sedimentary basin boundary, with multiple target plays vertically. The Ordos Basin has the characteristics of a whole petroleum (or deposition) system. The Middle Proterozoic wide-rift system as a typical basin under the overlying Phanerozoic basin, together with the Cambrian-Ordovician passive margin basin and intra-cratonic depression in the deeper basin, will be important successions for oil and gas exploration in the coming years.
Outer space is humanity's most vital future frontier; it possesses unique geopolitical and strategic attributes with significant implications for national development and security. Over recent decades, the strategic value of outer space, spanning the political, economic, military, technological, and societal domains, has steadily grown, driving new competition among major powers for access to space resources and related rights. Within this rivalry, military capabilities in space have emerged as the decisive foundation of space competition. As a result, major powers have increasingly directed investments toward space-based defense programs and security infrastructures, recognizing that military superiority in space is the new strategic high ground in their rivalry.
Normal forms play a significant role in the theory of relational database normalization. The definitions of normal forms are established through the functional dependency (FD) relationship between a prime or nonprime attribute and a key. However, determining whether an attribute is a prime attribute is a nondeterministic polynomial-time complete (NP-complete) problem, making it intractable to determine whether a relation scheme is in a specific normal form. While the prime attribute problem is NP-complete in general, there are cases where identifying prime attributes is not challenging. In a relation scheme R(U, F), we partition U into four distinct subsets based on where the attributes of U appear in F: U1 (attributes appearing only on the left-hand side of FDs), U2 (attributes appearing only on the right-hand side of FDs), U3 (attributes appearing on both sides of FDs), and U4 (attributes not present in F). Next, we demonstrate the necessary and sufficient conditions for a key to be the unique key of a relation scheme. Subsequently, we illustrate the features of prime attributes in U3 and generalize the features of common prime attributes. The findings lay the groundwork for distinguishing between complex and simple cases of prime attribute identification, thereby deepening the understanding of this problem.
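The key and prime-attribute tests discussed above rest on the standard attribute-closure routine: X is a superkey of R(U, F) iff the closure X+ equals U, and an attribute is prime iff it belongs to some candidate key. The closure itself is computable in polynomial time; the NP-completeness lies in searching over candidate keys. A textbook sketch:

```python
def closure(attrs, fds):
    """Attribute closure X+ under a set of FDs, each given as a
    (lhs, rhs) pair of attribute sets: repeatedly fire any FD whose
    left-hand side is already contained in the result."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_superkey(attrs, U, fds):
    """X is a superkey of R(U, F) iff its closure is all of U."""
    return closure(attrs, fds) == set(U)
```

For R(ABCD) with A→B, B→C, AC→D, the closure of {A} is all of ABCD, so A is a key (and hence prime), while {B} is not.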
BACKGROUND Not all neuropsychiatric(NP)manifestations in patients with systemic lupus erythematosus(SLE)are secondary to lupus.The clarification of the cause of NP symptoms influences therapeutic strategies for SLE.AI...BACKGROUND Not all neuropsychiatric(NP)manifestations in patients with systemic lupus erythematosus(SLE)are secondary to lupus.The clarification of the cause of NP symptoms influences therapeutic strategies for SLE.AIM To understand the attribution of psychiatric manifestations in a cohort of Chinese patients with SLE.METHODS This retrospective single-center study analyzed 160 inpatient medical records.Clinical diagnosis,which is considered the gold standard,was used to divide the subjects into a psychiatric SLE(PSLE)group(G1)and a secondary psychiatric symptoms group(G2).Clinical features were compared between these two groups.The sensitivity and specificity of the Italian attribution model were explored.RESULTS A total of 171 psychiatric syndromes were recorded in 138 patients,including 87 cases of acute confusional state,40 cases of cognitive dysfunction,18 cases of psychosis,and 13 cases each of depressive disorder and mania or hypomania.A total of 141(82.5%)syndromes were attributed to SLE.In contrast to G2 patients,G1 patients had higher SLE Disease Activity Index-2000 scores(21 vs 12,P=0.001),a lower prevalence of anti-beta-2-glycoprotein 1 antibodies(8.6%vs 25.9%,P=0.036),and a higher prevalence of anti-ribosomal ribonucleoprotein particle(rRNP)antibodies(39.0%vs 22.2%,P=0.045).The Italian attribution model exhibited a sensitivity of 95.0%and a specificity of 70.0%when the threshold value was set at 7.CONCLUSION Patients with PSLE exhibited increased disease activity.There is a correlation between PSLE and anti-rRNP antibodies.The Italian model effectively assesses multiple psychiatric manifestations in Chinese SLE patients who present with NP symptoms.展开更多
Funding: Supported by the National Natural Science Foundation of China (Grant No. 72571150) and the Beijing Natural Science Foundation (Grant No. 9182015).
Abstract: Attributed graph clustering plays a vital role in uncovering hidden network structures, but it presents significant challenges. In recent years, various models have been proposed to identify meaningful clusters by integrating both structural and attribute-based information. However, these models often emphasize node proximities without adequately balancing the efficiency of clustering based on both structural and attribute data. Furthermore, they tend to neglect the critical fuzzy information inherent in attributed graph clusters. To address these issues, we introduce a new framework, Markov lumpability optimization, for efficient clustering of large-scale attributed graphs. Specifically, we define a lumped Markov chain on an attribute-augmented graph and introduce a new metric, Markov lumpability, to quantify the differences between the original and lumped Markov transition probability matrices. To minimize this measure, we propose a conjugate gradient projection-based approach that ensures the partitioning closely aligns with the intrinsic structure of fuzzy clusters through conditional optimization. Extensive experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed framework compared to existing clustering algorithms. This framework has many potential applications, including dynamic community analysis of social networks, user profiling in recommendation systems, functional module identification in biological molecular networks, and financial risk control, offering a new paradigm for mining complex patterns in high-dimensional attributed graph data.
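The central quantity above, the gap between a random-walk transition matrix and its lumped version, can be sketched numerically. This is an illustrative reconstruction, not the paper's implementation: the toy graph, the fixed partition `H`, the uniform weighting `pi`, and the Frobenius-norm error are all assumptions standing in for the exact Markov lumpability metric and the conjugate gradient projection optimizer.

```python
import numpy as np

# Lump a random-walk transition matrix over a 4-node graph into 2 clusters
# and measure a lumpability error (0 iff the chain is exactly lumpable).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

# membership matrix H: nodes {0,1} -> cluster 0, nodes {2,3} -> cluster 1
H = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)

pi = np.full(4, 0.25)                         # uniform node weighting (assumption)
D = np.diag(pi)
# lumped chain: weighted aggregation of P over the partition blocks
Pc = np.linalg.inv(H.T @ D @ H) @ (H.T @ D @ P @ H)
assert np.allclose(Pc.sum(axis=1), 1.0)       # lumped chain is still row-stochastic

lift = H @ Pc                                 # each node inherits its cluster's row
err = np.linalg.norm(P @ H - lift, "fro")     # lumpability error of this partition
print(round(err, 4))                          # → 0.6667
```

A clustering method in this spirit would search over partitions (or soft memberships) to drive `err` toward zero.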
Funding: Supported by the Natural Science Foundation of China (Grant No. 72571150).
Abstract: Detecting overlapping communities in attributed networks remains a significant challenge due to the complexity of jointly modeling topological structure and node attributes, the unknown number of communities, and the need to capture nodes with multiple memberships. To address these issues, we propose a novel framework named density peaks clustering with neutrosophic C-means. First, we construct a consensus embedding by aligning structure-based and attribute-based representations using spectral decomposition and canonical correlation analysis. Then, an improved density peaks algorithm automatically estimates the number of communities and selects initial cluster centers based on a newly designed cluster strength metric. Finally, a neutrosophic C-means algorithm refines the community assignments, modeling uncertainty and overlap explicitly. Experimental results on synthetic and real-world networks demonstrate that the proposed method achieves superior performance in terms of detection accuracy, stability, and its ability to identify overlapping structures.
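To make the center-selection stage concrete, here is a minimal density-peaks sketch in the classic Rodriguez-Laio form: local density times distance-to-a-denser-point picks the centers. The paper's improved variant with its cluster strength metric is not reproduced; the toy data, the cutoff-distance heuristic, and k = 2 are illustrative assumptions.

```python
import numpy as np

# Two well-separated Gaussian blobs; density peaks should pick one center
# from each blob automatically.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
dc = np.percentile(D[D > 0], 5)                  # cutoff distance heuristic
rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1     # Gaussian local density

delta = np.zeros(len(X))                          # distance to nearest denser point
for i in range(len(X)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()

gamma = rho * delta                               # decision value: centers score high
centers = np.argsort(gamma)[-2:]                  # pick k = 2 centers
print(sorted(int(X[c, 0] > 1.5) for c in centers))   # → [0, 1]: one per blob
```

In the paper's pipeline these centers would then seed the neutrosophic C-means refinement that models uncertain and overlapping memberships.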
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2020YFA0608904) and the International Partnership Program of the Chinese Academy of Sciences (Grant Nos. 060GJHZ2023079GC and 134111KYSB20160031); also supported by the Office of Science, U.S. Department of Energy (DOE) Biological and Environmental Research as part of the Regional and Global Model Analysis program area through the Water Cycle and Climate Extremes Modeling (WACCEM) scientific focus area, operated for DOE by Battelle Memorial Institute under contract DE-AC05-76RL01830.
Abstract: The global monsoon system, encompassing the Asian-Australian, African, and American monsoons, sustains two-thirds of the world's population by regulating water resources and agriculture. Monsoon anomalies pose severe risks, including floods and droughts. Recent research associated with the implementation of the Global Monsoons Model Intercomparison Project under the umbrella of CMIP6 has advanced our understanding of its historical variability and driving mechanisms. Observational data reveal a 20th-century shift: increased rainfall pre-1950s, followed by aridification and partial recovery post-1980s, driven by both internal variability (e.g., the Atlantic Multidecadal Oscillation) and external forcings (greenhouse gases, aerosols), while ENSO drives interannual variability through ocean-atmosphere interactions. Future projections under greenhouse forcing suggest long-term monsoon intensification, though regional disparities and model uncertainties persist. Models indicate robust trends but struggle to quantify extremes, where thermodynamic effects (warming-induced moisture rise) uniformly boost heavy rainfall, while dynamical shifts (circulation changes) create spatial heterogeneity. Volcanic eruptions and proposed solar radiation modification (SRM) further complicate predictions: tropical eruptions suppress monsoons, whereas high-latitude events alter cross-equatorial flows, highlighting unresolved feedbacks. The emergent constraint approach is increasingly used to correct future projections and reduce uncertainty with respect to the global monsoons. Critical challenges remain. Model biases and sparse 20th-century observational data hinder accurate attribution. The interplay between natural variability and anthropogenic forcings, along with nonlinear extreme precipitation risks under warming, demands deeper mechanistic insights. Additionally, SRM's regional impacts and hemispheric monsoon interactions require systematic evaluation. Addressing these gaps necessitates enhanced observational networks, refined climate models, and interdisciplinary efforts to disentangle multiscale drivers, ultimately improving resilience strategies for monsoon-dependent regions.
Funding: Funded by the National Natural Science Foundation of China (81825019 and 82330103); performed with approval from the Ethical Committee of Beijing Institute of Microbiology and Epidemiology (AF/SC-08/02.128).
Abstract: Dear Editor, Severe fever with thrombocytopenia syndrome (SFTS) is an emerging tick-borne infectious disease caused by a novel bunyavirus called SFTS virus (SFTSV). It was initially identified in China in 2009 (Yu et al., 2011). Since then, the number of reported SFTS cases has rapidly increased in China, South Korea, and Japan (Li et al., 2018; Takahashi et al., 2014; Kim et al., 2018). Sporadic SFTS cases have also been identified in several other Asian countries, such as Vietnam, Pakistan, Myanmar, and Thailand (Takahashi et al., 2014; Tran et al., 2019; Li et al., 2021). This disease is recognized as a highly lethal viral hemorrhagic fever with a mortality rate ranging from 12% to 50% (Yu et al., 2011; Li et al., 2018; Takahashi et al., 2014; Kim et al., 2013). SFTS primarily spreads to humans through bites from ticks infected with SFTSV, with the Haemaphysalis longicornis tick acting as the predominant vector for SFTSV (Zhuang et al., 2018).
Funding: Supported by the National Natural Science Foundation of China (No. 61603310) and the Fundamental Research Funds for the Central Universities (No. XDJK2018B019).
Abstract: Purpose: Based on real-world academic data, this study aims to use network embedding technology to mine academic relationships, and to investigate the effectiveness of the proposed embedding model on academic collaborator recommendation tasks. Design/methodology/approach: We propose an academic collaborator recommendation model based on attributed network embedding (ACR-ANE), which can obtain enhanced scholar embeddings and take full advantage of the topological structure of the network and multi-type scholar attributes. Non-local neighbors are defined for scholars to capture strong relationships among them. A deep auto-encoder is adopted to encode the academic collaboration network structure and scholar attributes into a low-dimensional representation space. Findings: 1. The proposed non-local neighbors can better describe the relationships among scholars in the real world than first-order neighbors. 2. It is important to simultaneously consider the structure of the academic collaboration network and scholar attributes when recommending collaborators for scholars. Research limitations: The designed method works for static networks, without taking account of network dynamics. Practical implications: The designed model embeds the academic collaboration network structure and scholarly attributes, and can be used to help recommend potential collaborators to scholars. Originality/value: Experiments on two real-world scholarly datasets, Aminer and APS, show that our proposed method performs better than other baselines.
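The shape of the pipeline (structure plus attributes in, low-dimensional scholar vectors out, then similarity-based recommendation) can be illustrated schematically. A truncated SVD below deliberately stands in for the deep auto-encoder, and the random graph, attribute matrix, and dimensions are made-up toy values, not ACR-ANE itself.

```python
import numpy as np

# Toy collaboration graph and scholar attributes (all values illustrative).
rng = np.random.default_rng(1)
n, d_attr, d_emb = 30, 8, 4
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                      # undirected collaboration graph
attrs = rng.random((n, d_attr))                  # multi-type scholar attributes

X = np.hstack([A, attrs])                        # structure + attributes per scholar
X = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :d_emb] * s[:d_emb]                     # n x d_emb scholar embeddings

# recommend collaborators for scholar 0 by cosine similarity in the embedding
z = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
scores = z @ z[0]
scores[0] = -np.inf                              # never recommend oneself
top = np.argsort(scores)[::-1][:3]               # top-3 candidate collaborators
print(Z.shape, len(top))                         # → (30, 4) 3
```

The real model replaces the linear SVD with a trained deep auto-encoder and the first-order adjacency rows with the paper's non-local neighborhoods.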
Funding: Supported by the Natural Science Foundation of Tianjin (No. 20JCQNJC00720) and the Fundamental Research Fund for the Central Universities (No. 3122021052).
Abstract: The homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of route network analysis, and the attribute information of airport nodes in the airport route network is not appropriately integrated into the airport network. To solve this problem, a multi-airport system homogeneity analysis method based on airport attributed network representation learning is proposed. Firstly, the route network of a multi-airport system with attribute information is constructed: if there are flights between airports, an edge is added between them, and regional attribute information is added for each airport node. Secondly, the airport attributes and the airport network are represented as vectors and embedded into a unified airport representation vector space by the network representation learning method, yielding an airport vector that integrates the airport attributes and the airport network characteristics. By calculating the similarity of the airport vectors, it is convenient to compute the degree of homogeneity between airports and the homogeneity of the multi-airport system. The experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with other existing algorithms, the homogeneity analysis method based on attributed network representation learning obtains results more consistent with the current situation of the Beijing-Tianjin-Hebei multi-airport system.
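The final similarity step is simple to show in isolation. Given learned airport vectors that fuse route structure and regional attributes, pairwise homogeneity can be scored by cosine similarity; the toy 4-d vectors below are made up, and taking the mean pairwise similarity as the system-level homogeneity is one plausible aggregation, not necessarily the paper's exact formula.

```python
import numpy as np

# Hypothetical learned airport vectors (illustrative values only).
vecs = {
    "PEK": np.array([0.9, 0.1, 0.4, 0.2]),
    "TSN": np.array([0.8, 0.2, 0.5, 0.1]),
    "SJW": np.array([0.2, 0.9, 0.1, 0.6]),
}

def cos(u, v):
    """Cosine similarity between two airport vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

names = list(vecs)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
sims = {(a, b): cos(vecs[a], vecs[b]) for a, b in pairs}

# airports with similar structure+attribute profiles score as more homogeneous
assert sims[("PEK", "TSN")] > sims[("PEK", "SJW")]
system_homogeneity = sum(sims.values()) / len(sims)   # mean pairwise similarity
print(round(system_homogeneity, 3))                    # → 0.595
```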
Funding: Supported by the Centre for Health Statistics Information, National Health and Family Planning Commission of the People's Republic of China.
Abstract: Objective To estimate the lung cancer burden that may be attributable to ambient fine particulate matter (PM2.5) pollution in Guangzhou city, China, from 2005 to 2013. Methods The data regarding PM2.5 exposure were obtained from the 'Ambient air pollution exposure estimation for the Global Burden of Disease 2013' dataset at 0.1°×0.1° spatial resolution. Disability-adjusted life years (DALYs) were estimated based on information on the mortality and incidence of lung cancer. Comparative risk analysis and an integrated exposure-response function were used to estimate the attributable disease burden. Results The population-weighted average concentration of PM2.5 increased by 34.6% between 1990 and 2013, from 38.37 μg/m^3 to 51.31 μg/m^3. Lung cancer DALYs in both men and women increased by 36.2% from 2005 to 2013. The PM2.5-attributable lung cancer DALYs increased from 12105.0 (8181.0 for males and 3924.0 for females) in 2005 to 16489.3 (11291.7 for males and 5197.6 for females) in 2013. An average of 23.1% of the lung cancer burden was attributable to PM2.5 pollution in 2013. Conclusion PM2.5 has caused a serious but under-appreciated public health burden in Guangzhou, and the trend is deteriorating. Effective strategies are needed to tackle this major public health problem.
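The comparative-risk step above has a standard closed form: attributable burden equals total burden times the population attributable fraction (PAF). The sketch below uses that textbook formula with an illustrative relative risk; as it happens, an RR of 1.30 yields a PAF of 23.1%, matching the fraction reported in the abstract, but the study's actual integrated exposure-response values are not reproduced here.

```python
# Population attributable fraction for a population with exposure prevalence p:
#   PAF = p(RR - 1) / (p(RR - 1) + 1)
# With p = 1 (everyone exposed to ambient PM2.5), this reduces to (RR - 1) / RR.
def paf(rr: float, exposed_fraction: float = 1.0) -> float:
    """Population attributable fraction (Levin's formula)."""
    excess = exposed_fraction * (rr - 1.0)
    return excess / (excess + 1.0)

rr_example = 1.30          # hypothetical RR at the observed PM2.5 level
print(round(paf(rr_example), 3))   # → 0.231, i.e. 23.1% of the burden
```

Multiplying this fraction by the total lung cancer DALYs then gives the attributable DALYs reported in the Results.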
Funding: Supported by the National Natural Science Foundation of China (Nos. 62006001, 62372001) and the Natural Science Foundation of Chongqing City (Grant No. CSTC2021JCYJ-MSXMX0002).
Abstract: Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information exhibits features such as simplicity and efficiency, making it very suitable for preserving social network privacy. Therefore, we propose a method to preserve the topology privacy and node attribute privacy of attributed social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attributed social networks and compared with several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attributed social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
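The negative-representation idea at the heart of the method can be shown with a toy negative survey: each participant reports one category they do NOT belong to, yet the aggregator can still reconstruct the true histogram. AttNetNRI's scheme for topology and attributes is more elaborate; this sketch shows only the core principle, with made-up categories and counts.

```python
import random

# c categories; each of n users holds exactly one true category.
random.seed(7)
c = 4
truth = [random.randrange(c) for _ in range(20000)]

# Negative survey: each user reports a uniformly chosen category they do NOT have.
reports = [random.choice([k for k in range(c) if k != t]) for t in truth]

# Unbiased reconstruction: E[#reports naming i] = (n - t_i) / (c - 1),
# so t_i is estimated by n - (c - 1) * (#reports naming i).
n = len(truth)
est = [n - (c - 1) * reports.count(k) for k in range(c)]
true_hist = [truth.count(k) for k in range(c)]
max_rel_err = max(abs(e - t) / n for e, t in zip(est, true_hist))
print(max_rel_err < 0.05)   # → True: histogram recovered, no one revealed their category
```

Privacy comes from the fact that any individual report is consistent with c − 1 possible true categories.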
Funding: Project (No. 2000K08-G12) supported by the Shaanxi Provincial Science and Technology Development Plan, China.
Abstract: Software architectures shift the focus of developers from lines-of-code to coarser-grained architectural elements and their overall interconnection structure. There are, however, many features of distributed software that make its development methods quite different from the traditional ways. Furthermore, the traditional centralized ways with fixed interfaces cannot adapt to the flexible requirements of distributed software. In this paper, the attributed grammar (AG) is extended to capture the characteristics of distributed software, a distributed software architecture description language (DSADL) based on attributed grammar is introduced, and then a model of an integrated environment for software architecture design is proposed. Practice demonstrates that DSADL can help programmers to analyze and design distributed software effectively, so the efficiency of development can be improved greatly.
Funding: Project (NCET-11-0866) supported by the Ministry of Education's New Century Excellent Talents Supporting Plan, China.
Abstract: Parameter estimation of the attributed scattering center (ASC) model is significant for automatic target recognition (ATR). Sparse representation based parameter estimation methods have developed rapidly. Construction of a separable dictionary is a key issue for sparse representation technology. A compressive time-domain dictionary (TD) for the ASC model is presented. Two-dimensional frequency-domain responses of the ASCs are produced and transformed into the time domain. Then these time-domain responses are cut off and stacked into vectors. These vectorized time-domain responses are amalgamated to form the TD. Compared with the traditional frequency-domain dictionary (FD), the TD is a matrix that is quite sparse and can markedly reduce the data size of the dictionary. Based on the basic TD construction method, we present four extended TD construction methods, which are available for different applications. In the experiments, the performance of the TD, including the basic model and the extended models, is first analyzed in comparison with the FD. Secondly, an example of parameter estimation from synthetic aperture radar (SAR) measurements of a target collected in an anechoic room is exhibited. Finally, a sparse image reconstruction example from two separate apertures is presented. Experimental results demonstrate the effectiveness and efficiency of the proposed TD.
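The construction described above (frequency response, transform to time domain, cut off, stack into columns) can be sketched as follows. Idealized point scatterers stand in for the full ASC model (no frequency- or aspect-dependence terms), and the band, candidate ranges, and truncation length are all assumptions.

```python
import numpy as np

# Basic time-domain dictionary (TD) construction for down-range point scatterers.
nf = 128
f = np.linspace(9e9, 11e9, nf)              # X-band frequency sweep (illustrative)
c0 = 3e8                                    # speed of light (m/s)
ranges = np.linspace(0.0, 2.0, 40)          # candidate down-range positions (m)

keep = 32                                   # truncation length after the IFFT
cols = []
for r in ranges:
    h_freq = np.exp(-1j * 4 * np.pi * f * r / c0)   # point-scatterer frequency response
    h_time = np.fft.ifft(h_freq)                    # transform to the time domain
    cols.append(h_time[:keep])                      # cut off and vectorize
TD = np.column_stack(cols)                          # compressive time-domain dictionary
print(TD.shape)                                     # → (32, 40)
```

The compression is visible in the shape: 32 retained time samples per atom instead of the 128 frequency samples a frequency-domain dictionary would store.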
Funding: Project (61171133) supported by the National Natural Science Foundation of China; Project (11JJ1010) supported by the Natural Science Fund for Distinguished Young Scholars of Hunan Province, China; Project (61101182) supported by the National Natural Science Foundation for Young Scientists of China.
Abstract: Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterers' positions, and provide an effective approach to improve SAR image resolution. Based on the attributed scattering center model, several experiments were performed with different practical considerations to evaluate the performance of five representative SR techniques, namely, sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA). The parameter settings of the five SR algorithms were discussed, and their performances in different situations were compared. Through the comparison of MSE and failure rate in each algorithm's simulation, FBMP and SpaRSA are found suitable for dealing with problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it always achieves better performance in terms of failure rate at high SNR.
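One of the five compared solvers, FISTA, is compact enough to sketch on a toy scattering-center problem: a few strong scatterers among many candidate positions. The dictionary, sparsity level, noise, and regularization weight below are illustrative, not the settings evaluated in the paper.

```python
import numpy as np

# Toy sparse problem: 3 strong scatterers among 100 candidate positions.
rng = np.random.default_rng(3)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)     # measurement dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [3.0, -2.0, 2.5]
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05                                       # l1 regularization weight
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n); z = x.copy(); t = 1.0
for _ in range(300):
    g = z - (A.T @ (A @ z - y)) / L              # gradient step on 0.5*||Ax - y||^2
    x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum
    x, t = x_new, t_new

# scatterer positions are read off the support of the recovered x
print(round(float(np.linalg.norm(A @ x - y)), 3))
```

The paper's failure-rate comparison amounts to running many such trials per algorithm and counting how often the true support is missed.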
Funding: Supported by the National Natural Science Foundation of China (NSFC) under grant number 61873274.
Abstract: Contrastive self-supervised representation learning on attributed graph networks with graph neural networks has attracted considerable research interest recently. However, there are still two challenges. First, most real-world systems comprise multiple relations, where entities are linked by different types of relations, and each relation is a view of the graph network. Second, the rich multi-scale information (structure-level and feature-level) of the graph network can be seen as self-supervised signals, which are not fully exploited. A novel contrastive self-supervised representation learning framework on attributed multiplex graph networks with multi-scale information (named CoLM^(2)S) is presented in this study. It mainly contains two components: intra-relation contrastive learning and inter-relation contrastive learning. Specifically, the contrastive self-supervised representation learning framework on attributed single-layer graph networks with multi-scale information (CoLMS), with a graph convolutional network as the encoder to capture the intra-relation information with multi-scale structure-level and feature-level self-supervised signals, is introduced first. The structure-level information includes the edge structure and sub-graph structure, and the feature-level information represents the outputs of different graph convolutional layers. Second, according to the consensus assumption among inter-relations, the CoLM^(2)S framework is proposed to jointly learn various graph relations in the attributed multiplex graph network to achieve a global consensus node embedding. The proposed method can fully distil the graph information. Extensive experiments on unsupervised node clustering and graph visualisation tasks demonstrate the effectiveness of our methods, and it outperforms existing competitive baselines.
Funding: Supported by the National Natural Science Foundation of China (No. 62201158).
Abstract: Synthetic aperture radar (SAR) is able to acquire high-resolution images using the active microwave imaging method. SAR images are widely used in target recognition, classification, and surface analysis, with extracted features. The attributed scattering center (ASC) model is able to describe the image features for these tasks. However, sidelobe effects reduce the accuracy and reliability of the estimated ASC model parameters. This paper incorporates SAR super-resolution into ASC extraction to improve its performance. Both filter bank and subspace methods are demonstrated as preprocessing to suppress the sidelobes. Based on the preprocessed data, a reinforcement-based ASC method is used to obtain the parameters. The experimental results show that the super-resolution method can reduce noise and suppress the sidelobe effect, which improves the accuracy of the estimated ASC model parameters.
Abstract: Many types of real-world information systems, including social media and e-commerce platforms, can be modelled by means of attribute-rich, connected networks. The goal of anomaly detection in artificial intelligence is to identify instances that deviate significantly from the main distribution of data or that differ from known cases. Anomalous nodes in node-attributed networks can be identified with greater precision if both graph and node attributes are taken into account. Almost all of the studies in this area focus on supervised techniques for spotting outliers. While supervised algorithms for anomaly detection work well in theory, they cannot be applied to real-world applications owing to a lack of labelled data. Considering the possible data distribution, our model employs a dual variational autoencoder (VAE), while a generative adversarial network (GAN) ensures that the model is robust to adversarial training. The dual VAEs are used in another capacity: as a fake-node generator. Adversarial training is used to ensure that our latent codes have a Gaussian or uniform distribution. To provide a fair representation of the graph, the discriminator instructs the generator to generate latent variables with distributions that are more consistent with the actual distribution of the data. Once the model has been learned, the discriminator is used for anomaly detection via reconstruction loss, which has been trained to distinguish between the normal and artificial distributions of data. First, using a dual VAE, our model simultaneously captures cross-modality interactions between topological structure and node characteristics and overcomes the problem of unlabeled anomalies, allowing us to better understand the network sparsity and nonlinearity. Second, the proposed model considers the regularization of the latent codes while solving the issue of unregularized embedding techniques that can quickly lead to unsatisfactory representation.
Finally, we use the discriminator reconstruction loss for anomaly detection, as the discriminator is well-trained to separate the normal and generated data distributions because the reconstruction-based loss does not include the adversarial component. Experiments conducted on attributed networks demonstrate the effectiveness of the proposed model and show that it greatly surpasses the previous methods. The area under the curve scores of our proposed model for the BlogCatalog, Flickr, and Enron datasets are 0.83680, 0.82020, and 0.71180, respectively, proving the effectiveness of the proposed model. The result of the proposed model on the Enron dataset is slightly worse than that of other models; we attribute this to the dataset’s low dimensionality as the most probable explanation.
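The scoring principle used above (points the model reconstructs poorly are anomalous) can be shown in isolation. A PCA reconstructor stands in for the trained dual-VAE/discriminator (a deliberate simplification), and the synthetic low-rank "normal" data replaces real attributed-network features.

```python
import numpy as np

# "Normal" data lies near a 3-dimensional subspace of R^10; anomalies do not.
rng = np.random.default_rng(5)
W = rng.standard_normal((3, 10))
normal = rng.standard_normal((200, 3)) @ W + 0.01 * rng.standard_normal((200, 10))
anomaly = rng.uniform(-3, 3, (5, 10))            # off-manifold points

mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
P = Vt[:3].T @ Vt[:3]                            # projector onto the top-3 subspace

def score(X):
    """Reconstruction loss per row: distance to the learned 'normal' subspace."""
    recon = (X - mu) @ P + mu
    return np.linalg.norm(X - recon, axis=1)

# every anomaly scores higher than every normal point
print(bool(score(anomaly).min() > score(normal).max()))   # → True
```

A threshold on this score then separates normal from anomalous nodes, which is exactly the role the trained discriminator's reconstruction loss plays in the paper.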
Funding: Supported by the Taishan Scholar Special Funding (No. tsqnz20240858); the Medical and Health Technology Project of Shandong Province (No. 202402050122); the Science and Technology Development Plan of Jinan Municipal Health Commission (No. 2024301008); the Clinical Medical Science and Technology Innovation Program of Jinan Science and Technology Bureau (No. 202430055); the Natural Science Foundation of Jiangxi Province (No. 20224BAB206077); the Gansu Provincial Hospital Intra-Hospital Research Fund Project (No. 22GSSYB-6); and the 2022 Master/Doctor/Postdoctoral Program of the National Health Commission Key Laboratory of Diagnosis and Therapy of Gastrointestinal Tumor (Nos. NHCDP2022004 and NHCDP2022008).
Abstract: Knee osteoarthritis (KOA), characterized by heterogeneous arthritic manifestations and complex peripheral joint disorder, is one of the leading causes of disability worldwide, and has become a high burden due to its multifactorial nature and the deficiency of available disease-modifying treatments. The application of mesenchymal stem/stromal cells (MSCs) as therapeutic drugs has provided novel treatment options for diverse degenerative and chronic diseases, including KOA. However, the complexity and specificity of "live" cells have posed challenges for MSC-based drug development and the concomitant scale-up of preparation from laboratory to industrialization. For instance, despite considerable progress in ex vivo cell culture technology supporting the robust development of drug conversion and clinical trials, significant challenges remain in obtaining regulatory approvals. Thus, there is an urgent need for the research and development of MSC drugs for KOA. In this review, we provide alternative solution strategies for the preparation of MSC drugs on the basis of the principle of quality by design, including designing the cell production processes, quality control, and clinical applications. In detail, we mainly focus on the quality-by-design method for MSC manufacturing in standard cell-culturing factories for the treatment of KOA, using the Quality Target Product Profile as a starting point to determine potential critical quality attributes and to establish relationships between critical material attributes and critical process parameters. Collectively, this review aims to meet product performance and robust process design, and should help to reduce the gap between compliant products and compliant good-manufacturing-practice production.
Funding: Supported by the National Natural Science Foundation of China (Nos. U19A2044, 42105132, 42030609, and 41975037) and the National Key Research and Development Program of China (No. 2022YFC3700303).
Abstract: Extreme ozone pollution events (EOPEs) are associated with synoptic weather patterns (SWPs) and pose severe health and ecological risks. However, a systematic investigation of the meteorological causes, transport pathways, and source contributions of historical EOPEs is still lacking. In this paper, the K-means clustering method is applied to identify six dominant SWPs during the warm season in the Yangtze River Delta (YRD) region from 2016 to 2022, providing an integrated analysis of the meteorological factors affecting ozone pollution in Hefei under different SWPs. Using the WRF-FLEXPART model, the transport pathways (TPPs) and geographical sources of the near-surface air masses in Hefei during EOPEs are investigated. The results reveal that Hefei experienced the highest ozone concentration (134.77±42.82 μg/m^(3)), exceedance frequency (46 days, 23.23%), and proportion of EOPEs (21 instances, 47.7%) under the control of the peripheral subsidence of typhoons (Type 5). Regional southeast winds correlated with the ozone pollution in Hefei. During EOPEs, a high boundary layer height, strong solar radiation, and high temperature; low humidity and cloud cover; and pronounced subsidence airflow occurred over Hefei and the broader YRD region. The East-South (E_S) patterns exhibited the highest frequency (28 instances, 65.11%). Regarding the TPPs and geographical sources of the near-surface air masses during historical EOPEs, the YRD was the main source of land-originating air masses under E_S patterns (50.28%), with Hefei, southern Anhui, southern Jiangsu, and northern Zhejiang being key contributors. These findings can help improve ozone pollution early-warning and control mechanisms at urban and regional scales.
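The SWP-identification step works by flattening each day's meteorological field into a vector and clustering the days. Here is a miniature version with Lloyd's K-means on two toy patterns; real reanalysis fields, the six-cluster setting, and the paper's preprocessing are all replaced by illustrative stand-ins.

```python
import numpy as np

# Two synthetic daily "pressure fields" (flattened 4x4 grids) per pattern.
rng = np.random.default_rng(2)
days_a = rng.normal(0.0, 0.1, (50, 16))      # pattern A days
days_b = rng.normal(1.0, 0.1, (50, 16))      # pattern B days
X = np.vstack([days_a, days_b])

k = 2
centers = X[[0, 50]].copy()                  # deterministic init: one day of each type
for _ in range(20):                          # Lloyd's iterations
    d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
    labels = d.argmin(axis=1)                # assign each day to nearest pattern
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

# each cluster label plays the role of one synoptic weather pattern
print(sorted(np.bincount(labels)))           # → [50, 50]
```

In the study, the same procedure over warm-season days (with k = 6) yields the dominant SWP for each day, which is then linked to the ozone statistics.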
Funding: Supported by the National Natural Science Foundation of China (42330810) and the Major Science and Technology Project of PetroChina Changqing Oilfield Company (ZDZX2021-01).
Abstract: Based on the analysis of surface geological survey, exploratory well, gravity-magnetic-electric, and seismic data, and through mapping the sedimentary basin and its peripheral orogenic belts together, this paper systematically explores the boundary, distribution, geological structure, and tectonic attributes of the Ordos prototype basin through geological history. The results show that the Ordos block is bounded to the west by the Engorwusu Fault Zone, to the east by the Taihangshan Mountain Piedmont Fault Zone, to the north by the Solonker-Xilamuron Suture Zone, and to the south by the Shangnan-Danfeng Suture Zone. The Ordos Basin boundary was the plate tectonic boundary from the Middle Proterozoic to the Paleozoic, and the intra-continental deformation boundary in the Meso-Cenozoic. The basin survived as a marine cratonic basin covering the entire Ordos block from the Middle Proterozoic to the Ordovician, a marine-continental transitional depression basin enclosed by an island arc uplift belt at the plate margin from the Carboniferous to the Permian, a unified intra-continental lacustrine depression basin in the Triassic, and an intra-continental cratonic basin circled by a rift system in the Cenozoic. The basin's extent has been shrinking up to the present. The large, widespread prototype basin controlled an exploration area far beyond the present-day sedimentary basin boundary, with multiple target plays vertically. The Ordos Basin has the characteristics of a whole petroleum (or deposition) system. The Middle Proterozoic wide-rift system, as a typical basin under the overlying Phanerozoic basin, together with the Cambrian-Ordovician passive margin basin and intra-cratonic depression in the deep part of the basin, will be important successions for oil and gas exploration in the coming years.
Abstract: Outer space is humanity’s most vital future frontier; it possesses unique geopolitical and strategic attributes with significant implications for national development and security. Over recent decades, the strategic value of outer space—spanning political, economic, military, technological, and societal domains—has steadily grown, driving new competition among major powers for access to space resources and related rights. Within this rivalry, military capabilities in space have emerged as the decisive foundation of space competition. As a result, major powers have increasingly directed investments toward space-based defense programs and security infrastructures, recognizing that military superiority in space is the new strategic high ground in their rivalry.
Abstract: Normal forms play a significant role in the theory of relational database normalization. The definitions of normal forms are established through the functional dependency (FD) relationship between a prime or nonprime attribute and a key. However, determining whether an attribute is a prime attribute is a nondeterministic polynomial-time complete (NP-complete) problem, making it intractable to determine whether a relation scheme is in a specific normal form. While the prime attribute problem is generally NP-complete, there are cases where identifying prime attributes is not challenging. In a relation scheme R(U,F), we partition U into four distinct subsets based on where the attributes in U appear in F: U_(1) (attributes only appearing on the left-hand side of FDs), U_(2) (attributes only appearing on the right-hand side of FDs), U_(3) (attributes appearing on both sides of FDs), and U_(4) (attributes not present in F). Next, we demonstrate the necessary and sufficient conditions for a key to be the unique key of a relation scheme. Subsequently, we illustrate the features of prime attributes in U_(3) and generalize the features of common prime attributes. The findings lay the groundwork for distinguishing between complex and simple cases in prime attribute identification, thereby deepening the understanding of this problem.
Funding: Supported by the STI2030-Major Projects (No. 2021ZD0202001); the National Natural Science Foundation of China (No. T2341003); and Capital Funds for Health Improvement and Research (No. CFH 2022-2-4012).
Abstract: BACKGROUND Not all neuropsychiatric (NP) manifestations in patients with systemic lupus erythematosus (SLE) are secondary to lupus. Clarifying the cause of NP symptoms influences therapeutic strategies for SLE. AIM To understand the attribution of psychiatric manifestations in a cohort of Chinese patients with SLE. METHODS This retrospective single-center study analyzed 160 inpatient medical records. Clinical diagnosis, which is considered the gold standard, was used to divide the subjects into a psychiatric SLE (PSLE) group (G1) and a secondary psychiatric symptoms group (G2). Clinical features were compared between these two groups. The sensitivity and specificity of the Italian attribution model were explored. RESULTS A total of 171 psychiatric syndromes were recorded in 138 patients, including 87 cases of acute confusional state, 40 cases of cognitive dysfunction, 18 cases of psychosis, and 13 cases each of depressive disorder and mania or hypomania. A total of 141 (82.5%) syndromes were attributed to SLE. In contrast to G2 patients, G1 patients had higher SLE Disease Activity Index-2000 scores (21 vs 12, P = 0.001), a lower prevalence of anti-beta-2-glycoprotein 1 antibodies (8.6% vs 25.9%, P = 0.036), and a higher prevalence of anti-ribosomal ribonucleoprotein particle (rRNP) antibodies (39.0% vs 22.2%, P = 0.045). The Italian attribution model exhibited a sensitivity of 95.0% and a specificity of 70.0% when the threshold value was set at 7. CONCLUSION Patients with PSLE exhibited increased disease activity. There is a correlation between PSLE and anti-rRNP antibodies. The Italian model effectively assesses multiple psychiatric manifestations in Chinese SLE patients who present with NP symptoms.