Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel perf...Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel performance-based fault detection and identification(FDI)strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang fuzzy inference system.To handle ambient condition changes,we use parameter correction to preprocess the raw measurement data,which reduces the FDI’s system complexity.Additionally,the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system.The data for designing,training,and testing the proposed FDI strategy are generated using a component-level turbofan engine model.The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression.A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases.The performance of the first-order TSK-based FDI system and robust FDI structure are evaluated through comprehensive simulation studies.Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection,isolation,and identification.The robust structure demonstrates a 2%-8%improvement in the success rate index under relatively large measurement bias conditions,thereby indicating excellent robustness.Accuracy against significant bias values and computation time are also evaluated,suggesting that the proposed robust structure has desirable online performance.This study proposes a novel FDI strategy that effectively addresses measurement uncertainties.展开更多
Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’...Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’s edge.However,the complexity of model conversion techniques,diverse inference mechanisms,and varied learning strategies make designing and deploying these models challenging.Additionally,deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML’s applications across various sectors.These factors underscore the necessity for a comprehensive literature review,as current reviews do not systematically encompass the most recent findings on these topics.Consequently,it provides a comprehensive overview of state-of-the-art techniques in model conversion,inference mechanisms,learning strategies within EdgeML,and deploying these models on resource-constrained edge devices using TinyML.It identifies 90 research articles published between 2018 and 2025,categorizing them into two main areas:(1)model conversion,inference,and learning strategies in EdgeML and(2)deploying TinyML models on resource-constrained hardware using specific software frameworks.In the first category,the synthesis of selected research articles compares and critically reviews various model conversion techniques,inference mechanisms,and learning strategies.In the second category,the synthesis identifies and elaborates on major development boards,software frameworks,sensors,and algorithms used in various applications across six major sectors.As a result,this article provides valuable insights for researchers,practitioners,and developers.It assists them in choosing suitable model conversion techniques,inference mechanisms,learning strategies,hardware development boards,software frameworks,sensors,and algorithms tailored to their specific needs and applications across various sectors.展开更多
Developing efficient neural network(NN)computing systems is crucial in the era of artificial intelligence(AI).Traditional von Neumann architectures have both the issues of"memory wall"and"power wall&quo...Developing efficient neural network(NN)computing systems is crucial in the era of artificial intelligence(AI).Traditional von Neumann architectures have both the issues of"memory wall"and"power wall",limiting the data transfer between memory and processing units[1,2].Compute-in-memory(CIM)technologies,particularly analogue CIM with memristor crossbars,are promising because of their high energy efficiency,computational parallelism,and integration density for NN computations[3].In practical applications,analogue CIM excels in tasks like speech recognition and image classification,revealing its unique advantages.For instance,it efficiently processes vast amounts of audio data in speech recognition,achieving high accuracy with minimal power consumption.In image classification,the high parallelism of analogue CIM significantly speeds up feature extraction and reduces processing time.With the boosting development of AI applications,the demands for computational accuracy and task complexity are rising continually.However,analogue CIM systems are limited in handling complex regression tasks with needs of precise floating-point(FP)calculations.They are primarily suited for the classification tasks with low data precision and a limited dynamic range[4].展开更多
Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken us...Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken using component failure rate data to predict PTC for a full stroke test and a partial stroke test.Given the subjective and uncertain aspects of the FMEDA approach,specifically the selection of component failure rates and the determination of the probability of detecting failure modes,a Fuzzy Inference System(FIS)was proposed to manage the data,addressing the inherent uncertainties.Fuzzy inference systems have been used previously for various FMEA type assessments,but this is the first time an FIS has been employed for use with FMEDA.ESDV PTC values were generated from both the standard FMEDA and the fuzzy-FMEDA approaches using data provided by FMEDA experts.This work demonstrates that fuzzy inference systems can address the subjectivity inherent in FMEDA data,enabling reliable estimates of ESDV proof test coverage for both full and partial stroke tests.This facilitates optimized maintenance planning while ensuring safety is not compromised.展开更多
Osteoporosis is a known risk factor for rotator cuff tears(RCTs),but the causal correlation and underlying mechanisms remain unclear.This study aims to evaluate the impact of osteoporosis on RCT risk and investigate t...Osteoporosis is a known risk factor for rotator cuff tears(RCTs),but the causal correlation and underlying mechanisms remain unclear.This study aims to evaluate the impact of osteoporosis on RCT risk and investigate their genetic associations.Using data from the UK Biobank(n=457871),cross-sectional analyses demonstrated that osteoporosis was significantly associated with an increased risk of RCTs(adjusted OR[95%CI]=1.38[1.25–1.52]).A longitudinal analysis of a subset of patients(n=268117)over 11 years revealed that osteoporosis increased the risk of RCTs(adjusted HR[95%CI]=1.56[1.29–1.87]),which is notably varied between sexes in sex-stratified analysis.Causal inference methods,including propensity score matching,inverse probability weighting,causal random forest and survival random forest models further confirmed the causal effect,both from cross-sectional and longitudinal perspectives.展开更多
In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes...In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes that the projectile dispersion of ammunition is a two-dimensional joint normal distribution,and proposes a new Bayesian inference method of ammunition hit probability based on normal-inverse Wishart distribution.Firstly,the conjugate joint prior distribution of the projectile dispersion characteristic parameters is determined to be a normal inverse Wishart distribution,and the hyperparameters in the prior distribution are estimated by simulation experimental data and historical measured data.Secondly,the field test data is integrated with the Bayesian formula to obtain the joint posterior distribution of the projectile dispersion characteristic parameters,and then the hit probability of the ammunition is estimated.Finally,compared with the binomial distribution method,the method in this paper can consider the dispersion information of ammunition projectiles,and the hit probability information is more fully utilized.The hit probability results are closer to the field shooting test samples.This method has strong applicability and is conducive to obtaining more accurate hit probability estimation results.展开更多
Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and so...Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on mainly classification tasks.展开更多
Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-len...Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-length fields,and the accurate identification of field boundaries has a great impact on the subsequent analysis and final performance.Hence,this paper proposes a new protocol segmentation method based on Information-theoretic statistical analysis for binary protocols by formulating the field segmentation of unsupervised binary protocols as a probabilistic inference problem and modeling its uncertainty.Specifically,we design four related constructions between entropy changes and protocol field segmentation,introduce random variables,and construct joint probability distributions with traffic sample observations.Probabilistic inference is then performed to identify the possible protocol segmentation points.Extensive trials on nine common public and industrial control protocols show that the proposed method yields higher-quality protocol segmentation results.展开更多
Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult f...Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult for the current ROP models to guarantee the prediction accuracy and the robustness of the models at the same time.To address the current issues,a new ROP prediction model was developed in this study,which considers ROP as a time series signal(ROP signal).The model is based on the time convolutional network(TCN)framework and integrates ensemble empirical modal decomposition(EEMD)and Bayesian network causal inference(BN),the model is named EEMD-BN-TCN.Within the proposed model,the EEMD decomposes the original ROP signal into multiple sets of sub-signals.The BN determines the causal relationship between the sub-signals and the key physical parameters(weight on bit and revolutions per minute)and carries out preliminary reconstruction of the sub-signals based on the causal relationship.The TCN predicts signals reconstructed by BN.When applying this model to an actual production well,the average absolute percentage error of the EEMD-BN-TCN prediction decreased from 18.4%with TCN to 9.2%.In addition,compared with other models,the EEMD-BN-TCN can improve the decomposition signal of ROP by regulating weight on bit and revolutions per minute,ultimately enhancing ROP.展开更多
Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology.Many standard phylogenetic methods(such as maximum-likelihood[ML])rely on explicit models of sequence evolution and thus o...Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology.Many standard phylogenetic methods(such as maximum-likelihood[ML])rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy.The on-rising deep learning(DL)techniques offer a powerful alternative.Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations.DL methods can autonomously uncover meaningful patterns from data,thereby bypassing potential biases introduced by predefined features(Franklin,2005;Murphy,2012).Recent efforts have aimed to apply deep neural networks(DNNs)to phylogenetics,with a growing number of applications in tree reconstruction(Suvorov et al.,2020;Zou et al.,2020;Nesterenko et al.,2022;Smith and Hahn,2023;Wang et al.,2023),substitution model selection(Abadi et al.,2020;Burgstaller-Muehlbacher et al.,2023),and diversification rate inference(Voznica et al.,2022;Lajaaiti et al.,2023;Lambert et al.,2023).In phylogenetic tree reconstruction,PhyDL(Zou et al.,2020)and Tree_learning(Suvorov et al.,2020)are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid(AA)and DNA sequences,respectively.展开更多
Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operatio...Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operations,owing to their malleability and versatility.However,the computation-intensive and latency-sensitive natures of CNNs present a formidable obstacle to their deployment on resource-constrained UAVs.Some early studies have explored a hybrid approach that dynamically switches between lightweight and complex models to balance accuracy and latency.However,they often overlook scenarios involving multiple concurrent CNN streams,where competition for resources between streams can substantially impact latency and overall system performance.In this paper,we first investigate the deployment of both lightweight and complex models for multiple CNN streams in UAV swarm.Specifically,we formulate an optimization problem to minimize the total latency across multiple CNN streams,under the constraints on UAV memory and the accuracy requirement of each stream.To address this problem,we propose an algorithm called Adaptive Model Switching of collaborative inference for MultiCNN streams(AMSM)to identify the inference strategy with a low latency.Simulation results demonstrate that the proposed AMSM algorithm consistently achieves the lowest latency while meeting the accuracy requirements compared to benchmark algorithms.展开更多
Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step...Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step-forward in causal inference is expected.To address these,we conducted a longitudinal study with three repeated measurements involving 201 participants in Beijing,among which 100 eligible participants were included for the present study.Twenty-three PFAS and four lipid indicators were assessed at each visit.We used linear mixed models and quantile g-computation models to investigate associations between PFAS and blood lipid levels.A latent class growth model described PFAS serum exposure patterns,and a generalized linear model demonstrated associations between these patterns and lipid variability.Our study found that PFDA was associated with increased TC(β=0.083,95%CI:0.011,0.155)and HDL-C(β=0.106,95%CI:0.034,0.178).The PFAS mixture also showed a positive relationship with TC(β=0.06,95%CI:0.02,0.10),with PFDA contributing most positively.Compared to the low trajectory group,the middle trajectory group for PFDA was associated with VIM of TC(β=0.756,95%CI:0.153,1.359).Furthermore,PFDA showed biological gradientswith lipid metabolism.This is the first repeated-measures study to identify the impact of PFAS serum exposure pattern on the lipid metabolism and the first to estimate the association between PFAS and blood lipid levels in middle-aged and elderly Chinese and reinforce the evidence of their causal relationship through epidemiological studies.展开更多
In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light ...In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light to provide heating,lighting,hot water,electricity and even cooling for homes,businesses,and industries.Therefore,ground-level solar radiation data is important for these applications.Thus,our work aims to use a mathematical modeling tool to predict solar irradiation.For this purpose,we are interested in the application of the Adaptive Neuro Fuzzy Inference System.Through this type of artificial neural system,10 models were developed,based on meteorological data such as the Day number(Nj),Ambient temperature(T),Relative Humidity(Hr),Wind speed(WS),Wind direction(WD),Declination(δ),Irradiation outside the atmosphere(Goh),Maximum temperature(Tmax),Minimum temperature(Tmin).These models have been tested by different static indicators to choose the most suitable one for the estimation of the daily global solar radiation.This study led us to choose the M8 model,which takes Nj,T,Hr,δ,Ws,Wd,G0,and S0 as input variables because it presents the best performance either in the learning phase(R^(2)=0.981,RMSE=0.107 kW/m^(2),MAE=0.089 kW/m2)or in the validation phase(R^(2)=0.979,RMSE=0.117 kW/m^(2),MAE=0.101 kW/m^(2)).展开更多
The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,direc...The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,directly predicting missing components via global characteristics extracted from incomplete inputs.However,this makes detail re-covery challenging,as global characteristics fail to provide complete missing component specifics.A new point cloud completion method named Point-PC is proposed.A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion.Concretely,a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format.The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain,enabling retrieval of analogous shapes from incomplete inputs.In addition,the authors employ backdoor adjustment to eliminate confounders,which are shape prior components sharing identical semantic structures with incomplete inputs.Experiments conducted on three datasets show that our method achieves superior performance compared to state-of-the-art approaches.The code for Point-PC can be accessed by https://github.com/bizbard/Point-PC.git.展开更多
A distributed bearing-only target tracking algorithm based on variational Bayesian inference(VBI)under random measurement anomalies is proposed for the problem of adverse effect of random measurement anomalies on the ...A distributed bearing-only target tracking algorithm based on variational Bayesian inference(VBI)under random measurement anomalies is proposed for the problem of adverse effect of random measurement anomalies on the state estimation accuracy of moving targets in bearing-only tracking scenarios.Firstly,the measurement information of each sensor is complemented by using triangulation under the distributed framework.Secondly,the Student-t distribution is selected to model the measurement likelihood probability density function,and the joint posteriori probability density function of the estimated variables is approximately decoupled by VBI.Finally,the estimation results of each local filter are sent to the fusion center and fed back to each local filter.The simulation results show that the proposed distributed bearing-only target tracking algorithm based on VBI in the presence of abnormal measurement noise comprehensively considers the influence of system nonlinearity and random anomaly of measurement noise,and has higher estimation accuracy and robustness than other existing algorithms in the above scenarios.展开更多
Fire detection has held stringent importance in computer vision for over half a century.The development of early fire detection strategies is pivotal to the realization of safe and smart cities,inhabitable in the futu...Fire detection has held stringent importance in computer vision for over half a century.The development of early fire detection strategies is pivotal to the realization of safe and smart cities,inhabitable in the future.However,the development of optimal fire and smoke detection models is hindered by limitations like publicly available datasets,lack of diversity,and class imbalance.In this work,we explore the possible ways forward to overcome these challenges posed by available datasets.We study the impact of a class-balanced dataset to improve the fire detection capability of state-of-the-art(SOTA)vision-based models and propose the use of generative models for data augmentation,as a future work direction.First,a comparative analysis of two prominent object detection architectures,You Only Look Once version 7(YOLOv7)and YOLOv8 has been carried out using a balanced dataset,where both models have been evaluated across various evaluation metrics including precision,recall,and mean Average Precision(mAP).The results are compared to other recent fire detection models,highlighting the superior performance and efficiency of the proposed YOLOv8 architecture as trained on our balanced dataset.Next,a fractal dimension analysis gives a deeper insight into the repetition of patterns in fire,and the effectiveness of the results has been demonstrated by a windowing-based inference approach.The proposed Slicing-Aided Hyper Inference(SAHI)improves the fire and smoke detection capability of YOLOv8 for real-life applications with a significantly improved mAP performance over a strict confidence threshold.YOLOv8 with SAHI inference gives a mAP:50-95 improvement of more than 25%compared to the base YOLOv8 model.The study also provides insights into future work direction by exploring the potential of generative models like deep convolutional generative adversarial network(DCGAN)and diffusion models like stable diffusion,for data augmentation.展开更多
Traditional global sensitivity analysis(GSA)neglects the epistemic uncertainties associated with the probabilistic characteristics(i.e.type of distribution type and its parameters)of input rock properties emanating du...Traditional global sensitivity analysis(GSA)neglects the epistemic uncertainties associated with the probabilistic characteristics(i.e.type of distribution type and its parameters)of input rock properties emanating due to the small size of datasets while mapping the relative importance of properties to the model response.This paper proposes an augmented Bayesian multi-model inference(BMMI)coupled with GSA methodology(BMMI-GSA)to address this issue by estimating the imprecision in the momentindependent sensitivity indices of rock structures arising from the small size of input data.The methodology employs BMMI to quantify the epistemic uncertainties associated with model type and parameters of input properties.The estimated uncertainties are propagated in estimating imprecision in moment-independent Borgonovo’s indices by employing a reweighting approach on candidate probabilistic models.The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India.The proposed methodology was superior to the conventional GSA(neglects all epistemic uncertainties)and Bayesian coupled GSA(B-GSA)(neglects model uncertainty)due to its capability to incorporate the uncertainties in both model type and parameters of properties.Imprecise Borgonovo’s indices estimated via proposed methodology provide the confidence intervals of the sensitivity indices instead of their fixed-point estimates,which makes the user more informed in the data collection efforts.Analyses performed with the varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with the increasing sample sizes.The accurate importance ranking of properties was only possible via samples of large sizes.Further,the impact of the prior knowledge in terms of prior ranges and distributions was significant;hence,any related assumption should be made carefully.展开更多
Modern industrial processes are typically characterized by large-scale and intricate internal relationships.Therefore,the distributed modeling process monitoring method is effective.A novel distributed monitoring sche...Modern industrial processes are typically characterized by large-scale and intricate internal relationships.Therefore,the distributed modeling process monitoring method is effective.A novel distributed monitoring scheme utilizing the Kantorovich distance-multiblock variational autoencoder(KD-MBVAE)is introduced.Firstly,given the high consistency of relevant variables within each sub-block during the change process,the variables exhibiting analogous statistical features are grouped into identical segments according to the optimal quality transfer theory.Subsequently,the variational autoencoder(VAE)model was separately established,and corresponding T^(2)statistics were calculated.To improve fault sensitivity further,a novel statistic,derived from Kantorovich distance,is introduced by analyzing model residuals from the perspective of probability distribution.The thresholds of both statistics were determined by kernel density estimation.Finally,monitoring results for both types of statistics within all blocks are amalgamated using Bayesian inference.Additionally,a novel approach for fault diagnosis is introduced.The feasibility and efficiency of the introduced scheme are verified through two cases.展开更多
The development of the Internet of Things(IoT)has brought great convenience to people.However,some information security problems such as privacy leakage are caused by communicating with risky users.It is a challenge t...The development of the Internet of Things(IoT)has brought great convenience to people.However,some information security problems such as privacy leakage are caused by communicating with risky users.It is a challenge to choose reliable users with which to interact in the IoT.Therefore,trust plays a crucial role in the IoT because trust may avoid some risks.Agents usually choose reliable users with high trust to maximize their own interests based on reinforcement learning.However,trust propagation is time-consuming,and trust changes with the interaction process in social networks.To track the dynamic changes in trust values,a dynamic trust inference algorithm named Dynamic Double DQN Trust(Dy-DDQNTrust)is proposed to predict the indirect trust values of two users without direct contact with each other.The proposed algorithm simulates the interactions among users by double DQN.Firstly,CurrentNet and TargetNet networks are used to select users for interaction.The users with high trust are chosen to interact in future iterations.Secondly,the trust value is updated dynamically until a reliable trust path is found according to the result of the interaction.Finally,the trust value between indirect users is inferred by aggregating the opinions from multiple users through a Modified Collaborative Filtering Averagebased Similarity(SMCFAvg)aggregation strategy.Experiments are carried out on the FilmTrust and the Epinions datasets.Compared with TidalTrust,MoleTrust,DDQNTrust,DyTrust and Dynamic Weighted Heuristic trust path Search algorithm(DWHS),our dynamic trust inference algorithm has higher prediction accuracy and better scalability.展开更多
文摘Robustness against measurement uncertainties is crucial for gas turbine engine diagnosis.While current research focuses mainly on measurement noise,measurement bias remains challenging.This study proposes a novel performance-based fault detection and identification(FDI)strategy for twin-shaft turbofan gas turbine engines and addresses these uncertainties through a first-order Takagi-Sugeno-Kang fuzzy inference system.To handle ambient condition changes,we use parameter correction to preprocess the raw measurement data,which reduces the FDI’s system complexity.Additionally,the power-level angle is set as a scheduling parameter to reduce the number of rules in the TSK-based FDI system.The data for designing,training,and testing the proposed FDI strategy are generated using a component-level turbofan engine model.The antecedent and consequent parameters of the TSK-based FDI system are optimized using the particle swarm optimization algorithm and ridge regression.A robust structure combining a specialized fuzzy inference system with the TSK-based FDI system is proposed to handle measurement biases.The performance of the first-order TSK-based FDI system and robust FDI structure are evaluated through comprehensive simulation studies.Comparative studies confirm the superior accuracy of the first-order TSK-based FDI system in fault detection,isolation,and identification.The robust structure demonstrates a 2%-8%improvement in the success rate index under relatively large measurement bias conditions,thereby indicating excellent robustness.Accuracy against significant bias values and computation time are also evaluated,suggesting that the proposed robust structure has desirable online performance.This study proposes a novel FDI strategy that effectively addresses measurement uncertainties.
文摘Edge Machine Learning(EdgeML)and Tiny Machine Learning(TinyML)are fast-growing fields that bring machine learning to resource-constrained devices,allowing real-time data processing and decision-making at the network’s edge.However,the complexity of model conversion techniques,diverse inference mechanisms,and varied learning strategies make designing and deploying these models challenging.Additionally,deploying TinyML models on resource-constrained hardware with specific software frameworks has broadened EdgeML’s applications across various sectors.These factors underscore the necessity for a comprehensive literature review,as current reviews do not systematically encompass the most recent findings on these topics.Consequently,it provides a comprehensive overview of state-of-the-art techniques in model conversion,inference mechanisms,learning strategies within EdgeML,and deploying these models on resource-constrained edge devices using TinyML.It identifies 90 research articles published between 2018 and 2025,categorizing them into two main areas:(1)model conversion,inference,and learning strategies in EdgeML and(2)deploying TinyML models on resource-constrained hardware using specific software frameworks.In the first category,the synthesis of selected research articles compares and critically reviews various model conversion techniques,inference mechanisms,and learning strategies.In the second category,the synthesis identifies and elaborates on major development boards,software frameworks,sensors,and algorithms used in various applications across six major sectors.As a result,this article provides valuable insights for researchers,practitioners,and developers.It assists them in choosing suitable model conversion techniques,inference mechanisms,learning strategies,hardware development boards,software frameworks,sensors,and algorithms tailored to their specific needs and applications across various sectors.
文摘Developing efficient neural network(NN)computing systems is crucial in the era of artificial intelligence(AI).Traditional von Neumann architectures have both the issues of"memory wall"and"power wall",limiting the data transfer between memory and processing units[1,2].Compute-in-memory(CIM)technologies,particularly analogue CIM with memristor crossbars,are promising because of their high energy efficiency,computational parallelism,and integration density for NN computations[3].In practical applications,analogue CIM excels in tasks like speech recognition and image classification,revealing its unique advantages.For instance,it efficiently processes vast amounts of audio data in speech recognition,achieving high accuracy with minimal power consumption.In image classification,the high parallelism of analogue CIM significantly speeds up feature extraction and reduces processing time.With the boosting development of AI applications,the demands for computational accuracy and task complexity are rising continually.However,analogue CIM systems are limited in handling complex regression tasks with needs of precise floating-point(FP)calculations.They are primarily suited for the classification tasks with low data precision and a limited dynamic range[4].
文摘Published proof test coverage(PTC)estimates for emergency shutdown valves(ESDVs)show only moderate agreement and are predominantly opinion-based.A Failure Modes,Effects,and Diagnostics Analysis(FMEDA)was undertaken using component failure rate data to predict PTC for a full stroke test and a partial stroke test.Given the subjective and uncertain aspects of the FMEDA approach,specifically the selection of component failure rates and the determination of the probability of detecting failure modes,a Fuzzy Inference System(FIS)was proposed to manage the data,addressing the inherent uncertainties.Fuzzy inference systems have been used previously for various FMEA type assessments,but this is the first time an FIS has been employed for use with FMEDA.ESDV PTC values were generated from both the standard FMEDA and the fuzzy-FMEDA approaches using data provided by FMEDA experts.This work demonstrates that fuzzy inference systems can address the subjectivity inherent in FMEDA data,enabling reliable estimates of ESDV proof test coverage for both full and partial stroke tests.This facilitates optimized maintenance planning while ensuring safety is not compromised.
基金the Scientific Research Innovation Capability Support Project for Young Faculty(ZYGXQNJSKYCXNLZCXM-H8)Fundamental Research Funds for the Central Universities(2024ZYGXZR077)+3 种基金Guangdong Basic and Applied Basic Research Foundation(2023B1515120006)Guangzhou Basic and Applied Basic Research Foundation(2024A04J5776)the Research Fund(2023QN10Y421)Guangzhou Talent Recruitment Team Program(2024D03J0004),all related to this study.
文摘Osteoporosis is a known risk factor for rotator cuff tears(RCTs),but the causal correlation and underlying mechanisms remain unclear.This study aims to evaluate the impact of osteoporosis on RCT risk and investigate their genetic associations.Using data from the UK Biobank(n=457871),cross-sectional analyses demonstrated that osteoporosis was significantly associated with an increased risk of RCTs(adjusted OR[95%CI]=1.38[1.25–1.52]).A longitudinal analysis of a subset of patients(n=268117)over 11 years revealed that osteoporosis increased the risk of RCTs(adjusted HR[95%CI]=1.56[1.29–1.87]),which is notably varied between sexes in sex-stratified analysis.Causal inference methods,including propensity score matching,inverse probability weighting,causal random forest and survival random forest models further confirmed the causal effect,both from cross-sectional and longitudinal perspectives.
基金supported by the National Natural Science Foundation of China(No.71501183).
文摘In order to solve the problems of high experimental cost of ammunition,lack of field test data,and the difficulty in applying the ammunition hit probability estimation method in classical statistics,this paper assumes that the projectile dispersion of ammunition is a two-dimensional joint normal distribution,and proposes a new Bayesian inference method of ammunition hit probability based on normal-inverse Wishart distribution.Firstly,the conjugate joint prior distribution of the projectile dispersion characteristic parameters is determined to be a normal inverse Wishart distribution,and the hyperparameters in the prior distribution are estimated by simulation experimental data and historical measured data.Secondly,the field test data is integrated with the Bayesian formula to obtain the joint posterior distribution of the projectile dispersion characteristic parameters,and then the hit probability of the ammunition is estimated.Finally,compared with the binomial distribution method,the method in this paper can consider the dispersion information of ammunition projectiles,and the hit probability information is more fully utilized.The hit probability results are closer to the field shooting test samples.This method has strong applicability and is conducive to obtaining more accurate hit probability estimation results.
基金supported by the National Natural Science Foundation of China(Nos.62176122 and 62061146002).
文摘Federated Graph Neural Networks (FedGNNs) have achieved significant success in representation learning for graph data, enabling collaborative training among multiple parties without sharing their raw graph data and solving the data isolation problem faced by centralized GNNs in data-sensitive scenarios. Despite the plethora of prior work on inference attacks against centralized GNNs, the vulnerability of FedGNNs to inference attacks has not yet been widely explored. It is still unclear whether the privacy leakage risks of centralized GNNs will also be introduced in FedGNNs. To bridge this gap, we present PIAFGNN, the first property inference attack (PIA) against FedGNNs. Compared with prior works on centralized GNNs, in PIAFGNN, the attacker can only obtain the global embedding gradient distributed by the central server. The attacker converts the task of stealing the target user’s local embeddings into a regression problem, using a regression model to generate the target graph node embeddings. By training shadow models and property classifiers, the attacker can infer the basic property information within the target graph that is of interest. Experiments on three benchmark graph datasets demonstrate that PIAFGNN achieves attack accuracy of over 70% in most cases, even approaching the attack accuracy of inference attacks against centralized GNNs in some instances, which is much higher than the attack accuracy of the random guessing method. Furthermore, we observe that common defense mechanisms cannot mitigate our attack without affecting the model’s performance on mainly classification tasks.
文摘Protocol Reverse Engineering(PRE)is of great practical importance in Internet security-related fields such as intrusion detection,vulnerability mining,and protocol fuzzing.For unknown binary protocols having fixed-length fields,and the accurate identification of field boundaries has a great impact on the subsequent analysis and final performance.Hence,this paper proposes a new protocol segmentation method based on Information-theoretic statistical analysis for binary protocols by formulating the field segmentation of unsupervised binary protocols as a probabilistic inference problem and modeling its uncertainty.Specifically,we design four related constructions between entropy changes and protocol field segmentation,introduce random variables,and construct joint probability distributions with traffic sample observations.Probabilistic inference is then performed to identify the possible protocol segmentation points.Extensive trials on nine common public and industrial control protocols show that the proposed method yields higher-quality protocol segmentation results.
基金the financial support by the National Natural Science Foundation of China(Grant No.U24B2029)the Key Projects of the National Natural Science Foundation of China(Grant No.52334001)+1 种基金the Strategic Cooperation Technology Projects of CNPC and CUPB(Grand No.ZLZX2020-02)the China University of Petroleum,Beijing(Grand No.ZX20230042)。
文摘Offshore drilling costs are high,and the downhole environment is even more complex.Improving the rate of penetration(ROP)can effectively shorten offshore drilling cycles and improve economic benefits.It is difficult for the current ROP models to guarantee the prediction accuracy and the robustness of the models at the same time.To address the current issues,a new ROP prediction model was developed in this study,which considers ROP as a time series signal(ROP signal).The model is based on the time convolutional network(TCN)framework and integrates ensemble empirical modal decomposition(EEMD)and Bayesian network causal inference(BN),the model is named EEMD-BN-TCN.Within the proposed model,the EEMD decomposes the original ROP signal into multiple sets of sub-signals.The BN determines the causal relationship between the sub-signals and the key physical parameters(weight on bit and revolutions per minute)and carries out preliminary reconstruction of the sub-signals based on the causal relationship.The TCN predicts signals reconstructed by BN.When applying this model to an actual production well,the average absolute percentage error of the EEMD-BN-TCN prediction decreased from 18.4%with TCN to 9.2%.In addition,compared with other models,the EEMD-BN-TCN can improve the decomposition signal of ROP by regulating weight on bit and revolutions per minute,ultimately enhancing ROP.
基金supported by the National Key R&D Program of China(2022YFD1401600)the National Science Foundation for Distinguished Young Scholars of Zhejang Province,China(LR23C140001)supported by the Key Area Research and Development Program of Guangdong Province,China(2018B020205003 and 2020B0202090001).
文摘Inferring phylogenetic trees from molecular sequences is a cornerstone of evolutionary biology.Many standard phylogenetic methods(such as maximum-likelihood[ML])rely on explicit models of sequence evolution and thus often suffer from model misspecification or inadequacy.The on-rising deep learning(DL)techniques offer a powerful alternative.Deep learning employs multi-layered artificial neural networks to progressively transform input data into more abstract and complex representations.DL methods can autonomously uncover meaningful patterns from data,thereby bypassing potential biases introduced by predefined features(Franklin,2005;Murphy,2012).Recent efforts have aimed to apply deep neural networks(DNNs)to phylogenetics,with a growing number of applications in tree reconstruction(Suvorov et al.,2020;Zou et al.,2020;Nesterenko et al.,2022;Smith and Hahn,2023;Wang et al.,2023),substitution model selection(Abadi et al.,2020;Burgstaller-Muehlbacher et al.,2023),and diversification rate inference(Voznica et al.,2022;Lajaaiti et al.,2023;Lambert et al.,2023).In phylogenetic tree reconstruction,PhyDL(Zou et al.,2020)and Tree_learning(Suvorov et al.,2020)are two notable DNN-based programs designed to infer unrooted quartet trees directly from alignments of four amino acid(AA)and DNA sequences,respectively.
基金supported by the National Natural Science Foundation of China(No.61931011)the Jiangsu Provincial Key Research and Development Program,China(No.BE2021013-4)the Fundamental Research Project in University Characteristic Disciplines,China(No.ILF240071A24)。
文摘Unmanned Aerial Vehicles(UAVs)coupled with deep learning such as Convolutional Neural Networks(CNNs)have been widely applied across numerous domains,including agriculture,smart city monitoring,and fire rescue operations,owing to their malleability and versatility.However,the computation-intensive and latency-sensitive natures of CNNs present a formidable obstacle to their deployment on resource-constrained UAVs.Some early studies have explored a hybrid approach that dynamically switches between lightweight and complex models to balance accuracy and latency.However,they often overlook scenarios involving multiple concurrent CNN streams,where competition for resources between streams can substantially impact latency and overall system performance.In this paper,we first investigate the deployment of both lightweight and complex models for multiple CNN streams in UAV swarm.Specifically,we formulate an optimization problem to minimize the total latency across multiple CNN streams,under the constraints on UAV memory and the accuracy requirement of each stream.To address this problem,we propose an algorithm called Adaptive Model Switching of collaborative inference for MultiCNN streams(AMSM)to identify the inference strategy with a low latency.Simulation results demonstrate that the proposed AMSM algorithm consistently achieves the lowest latency while meeting the accuracy requirements compared to benchmark algorithms.
基金supported by the National Natural Science Foundation of China(No.82404365)the Noncommunicable Chronic Diseases-National Science and Technology Major Project(No.2023ZD0513200)+7 种基金China Medical Board(No.15-230)China Postdoctoral Science Foundation(Nos.2023M730317and 2023T160066)the Fundamental Research Funds for the Central Universities(No.3332023042)the Open Project of Hebei Key Laboratory of Environment and Human Health(No.202301)the National Key Research and Development Program of China(No.2022YFC3703000)the Non-profit Central Research Institute Fund of Chinese Academy of Medical Sciences(No.2022-JKCS-11)the CAMS Innovation Fund for Medical Sciences(No.2022-I2M-JB-003)the Programs of the National Natural Science Foundation of China(No.21976050).
文摘Associations of per-and polyfluoroalkyl substances(PFAS)on lipid metabolism have been documented but research remains scarce regarding effect of PFAS on lipid variability.To deeply understand their relationship,a step-forward in causal inference is expected.To address these,we conducted a longitudinal study with three repeated measurements involving 201 participants in Beijing,among which 100 eligible participants were included for the present study.Twenty-three PFAS and four lipid indicators were assessed at each visit.We used linear mixed models and quantile g-computation models to investigate associations between PFAS and blood lipid levels.A latent class growth model described PFAS serum exposure patterns,and a generalized linear model demonstrated associations between these patterns and lipid variability.Our study found that PFDA was associated with increased TC(β=0.083,95%CI:0.011,0.155)and HDL-C(β=0.106,95%CI:0.034,0.178).The PFAS mixture also showed a positive relationship with TC(β=0.06,95%CI:0.02,0.10),with PFDA contributing most positively.Compared to the low trajectory group,the middle trajectory group for PFDA was associated with VIM of TC(β=0.756,95%CI:0.153,1.359).Furthermore,PFDA showed biological gradientswith lipid metabolism.This is the first repeated-measures study to identify the impact of PFAS serum exposure pattern on the lipid metabolism and the first to estimate the association between PFAS and blood lipid levels in middle-aged and elderly Chinese and reinforce the evidence of their causal relationship through epidemiological studies.
文摘In recent years,the world has seen an exponential increase in energy demand,prompting scientists to look for innovative ways to exploit the power sun’s power.Solar energy technologies use the sun’s energy and light to provide heating,lighting,hot water,electricity and even cooling for homes,businesses,and industries.Therefore,ground-level solar radiation data is important for these applications.Thus,our work aims to use a mathematical modeling tool to predict solar irradiation.For this purpose,we are interested in the application of the Adaptive Neuro Fuzzy Inference System.Through this type of artificial neural system,10 models were developed,based on meteorological data such as the Day number(Nj),Ambient temperature(T),Relative Humidity(Hr),Wind speed(WS),Wind direction(WD),Declination(δ),Irradiation outside the atmosphere(Goh),Maximum temperature(Tmax),Minimum temperature(Tmin).These models have been tested by different static indicators to choose the most suitable one for the estimation of the daily global solar radiation.This study led us to choose the M8 model,which takes Nj,T,Hr,δ,Ws,Wd,G0,and S0 as input variables because it presents the best performance either in the learning phase(R^(2)=0.981,RMSE=0.107 kW/m^(2),MAE=0.089 kW/m2)or in the validation phase(R^(2)=0.979,RMSE=0.117 kW/m^(2),MAE=0.101 kW/m^(2)).
基金National Key Research and Development Program of China,Grant/Award Number:2020YFB1711704。
文摘The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints.Numerous methods use a partial-to-complete framework,directly predicting missing components via global characteristics extracted from incomplete inputs.However,this makes detail re-covery challenging,as global characteristics fail to provide complete missing component specifics.A new point cloud completion method named Point-PC is proposed.A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion.Concretely,a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format.The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain,enabling retrieval of analogous shapes from incomplete inputs.In addition,the authors employ backdoor adjustment to eliminate confounders,which are shape prior components sharing identical semantic structures with incomplete inputs.Experiments conducted on three datasets show that our method achieves superior performance compared to state-of-the-art approaches.The code for Point-PC can be accessed by https://github.com/bizbard/Point-PC.git.
基金Supported by the Science and Technology Key Project of Science and Technology Department of Henan Province(No.252102211041)the Key Research and Development Projects of Henan Province(No.231111212500).
文摘A distributed bearing-only target tracking algorithm based on variational Bayesian inference(VBI)under random measurement anomalies is proposed for the problem of adverse effect of random measurement anomalies on the state estimation accuracy of moving targets in bearing-only tracking scenarios.Firstly,the measurement information of each sensor is complemented by using triangulation under the distributed framework.Secondly,the Student-t distribution is selected to model the measurement likelihood probability density function,and the joint posteriori probability density function of the estimated variables is approximately decoupled by VBI.Finally,the estimation results of each local filter are sent to the fusion center and fed back to each local filter.The simulation results show that the proposed distributed bearing-only target tracking algorithm based on VBI in the presence of abnormal measurement noise comprehensively considers the influence of system nonlinearity and random anomaly of measurement noise,and has higher estimation accuracy and robustness than other existing algorithms in the above scenarios.
基金supported by a grant from R&D Program Development of Rail-Specific Digital Resource Technology Based on an AI-Enabled Rail Support Platform,grant number PK2401C1,of the Korea Railroad Research Institute.
文摘Fire detection has held stringent importance in computer vision for over half a century.The development of early fire detection strategies is pivotal to the realization of safe and smart cities,inhabitable in the future.However,the development of optimal fire and smoke detection models is hindered by limitations like publicly available datasets,lack of diversity,and class imbalance.In this work,we explore the possible ways forward to overcome these challenges posed by available datasets.We study the impact of a class-balanced dataset to improve the fire detection capability of state-of-the-art(SOTA)vision-based models and propose the use of generative models for data augmentation,as a future work direction.First,a comparative analysis of two prominent object detection architectures,You Only Look Once version 7(YOLOv7)and YOLOv8 has been carried out using a balanced dataset,where both models have been evaluated across various evaluation metrics including precision,recall,and mean Average Precision(mAP).The results are compared to other recent fire detection models,highlighting the superior performance and efficiency of the proposed YOLOv8 architecture as trained on our balanced dataset.Next,a fractal dimension analysis gives a deeper insight into the repetition of patterns in fire,and the effectiveness of the results has been demonstrated by a windowing-based inference approach.The proposed Slicing-Aided Hyper Inference(SAHI)improves the fire and smoke detection capability of YOLOv8 for real-life applications with a significantly improved mAP performance over a strict confidence threshold.YOLOv8 with SAHI inference gives a mAP:50-95 improvement of more than 25%compared to the base YOLOv8 model.The study also provides insights into future work direction by exploring the potential of generative models like deep convolutional generative adversarial network(DCGAN)and diffusion models like stable diffusion,for data augmentation.
文摘Traditional global sensitivity analysis(GSA)neglects the epistemic uncertainties associated with the probabilistic characteristics(i.e.type of distribution type and its parameters)of input rock properties emanating due to the small size of datasets while mapping the relative importance of properties to the model response.This paper proposes an augmented Bayesian multi-model inference(BMMI)coupled with GSA methodology(BMMI-GSA)to address this issue by estimating the imprecision in the momentindependent sensitivity indices of rock structures arising from the small size of input data.The methodology employs BMMI to quantify the epistemic uncertainties associated with model type and parameters of input properties.The estimated uncertainties are propagated in estimating imprecision in moment-independent Borgonovo’s indices by employing a reweighting approach on candidate probabilistic models.The proposed methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India.The proposed methodology was superior to the conventional GSA(neglects all epistemic uncertainties)and Bayesian coupled GSA(B-GSA)(neglects model uncertainty)due to its capability to incorporate the uncertainties in both model type and parameters of properties.Imprecise Borgonovo’s indices estimated via proposed methodology provide the confidence intervals of the sensitivity indices instead of their fixed-point estimates,which makes the user more informed in the data collection efforts.Analyses performed with the varying sample sizes suggested that the uncertainties in sensitivity indices reduce significantly with the increasing sample sizes.The accurate importance ranking of properties was only possible via samples of large sizes.Further,the impact of the prior knowledge in terms of prior ranges and distributions was significant;hence,any related assumption should be made carefully.
基金support from the National Key Research&Development Program of China(2021YFC2101100)the National Natural Science Foundation of China(62322309,61973119).
文摘Modern industrial processes are typically characterized by large-scale and intricate internal relationships.Therefore,the distributed modeling process monitoring method is effective.A novel distributed monitoring scheme utilizing the Kantorovich distance-multiblock variational autoencoder(KD-MBVAE)is introduced.Firstly,given the high consistency of relevant variables within each sub-block during the change process,the variables exhibiting analogous statistical features are grouped into identical segments according to the optimal quality transfer theory.Subsequently,the variational autoencoder(VAE)model was separately established,and corresponding T^(2)statistics were calculated.To improve fault sensitivity further,a novel statistic,derived from Kantorovich distance,is introduced by analyzing model residuals from the perspective of probability distribution.The thresholds of both statistics were determined by kernel density estimation.Finally,monitoring results for both types of statistics within all blocks are amalgamated using Bayesian inference.Additionally,a novel approach for fault diagnosis is introduced.The feasibility and efficiency of the introduced scheme are verified through two cases.
基金supported by the National Natural Science Foundation of China(62072392)the National Natural Science Foundation of China(61972360)the Major Scientific and Technological Innovation Projects of Shandong Province(2019522Y020131).
文摘The development of the Internet of Things(IoT)has brought great convenience to people.However,some information security problems such as privacy leakage are caused by communicating with risky users.It is a challenge to choose reliable users with which to interact in the IoT.Therefore,trust plays a crucial role in the IoT because trust may avoid some risks.Agents usually choose reliable users with high trust to maximize their own interests based on reinforcement learning.However,trust propagation is time-consuming,and trust changes with the interaction process in social networks.To track the dynamic changes in trust values,a dynamic trust inference algorithm named Dynamic Double DQN Trust(Dy-DDQNTrust)is proposed to predict the indirect trust values of two users without direct contact with each other.The proposed algorithm simulates the interactions among users by double DQN.Firstly,CurrentNet and TargetNet networks are used to select users for interaction.The users with high trust are chosen to interact in future iterations.Secondly,the trust value is updated dynamically until a reliable trust path is found according to the result of the interaction.Finally,the trust value between indirect users is inferred by aggregating the opinions from multiple users through a Modified Collaborative Filtering Averagebased Similarity(SMCFAvg)aggregation strategy.Experiments are carried out on the FilmTrust and the Epinions datasets.Compared with TidalTrust,MoleTrust,DDQNTrust,DyTrust and Dynamic Weighted Heuristic trust path Search algorithm(DWHS),our dynamic trust inference algorithm has higher prediction accuracy and better scalability.