Actuator faults can be critical in turbofan engines as they can lead to stall, surge, loss of thrust, and failure of speed control. Thus, fault diagnosis of gas turbine actuators has attracted considerable attention from both academia and industry. However, the extensive literature on this topic focuses mainly on actuator fault detection and isolation and does not address identifying the severity of actuator faults. In addition, previous studies of actuator fault identification have not dealt with multiple concurrent faults in real time, especially when these are accompanied by sudden failures under dynamic conditions. This study develops component-level models for fault identification in four typical actuators used in high-bypass-ratio turbofan engines under both dynamic and steady-state conditions; these are then integrated with the engine performance model developed by the authors. The results present a novel method of quantifying actuator faults using dynamic effect compensation. The maximum error for each actuator is less than 0.06% and 0.07%, with average computational times of less than 0.0058 s and 0.0086 s, for steady-state and transient cases, respectively. These results confirm that the proposed method can accurately and efficiently identify concurrent actuator faults for an engine operating under either transient or steady-state conditions, even in the case of a sudden malfunction. The results also demonstrate the potential benefit to emergency response capabilities of introducing this method for monitoring the health of aero engines.
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
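The three-part loss described in the abstract above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function name, the exact form of the noise and variance terms, and the weights are ours, not the paper's.

```python
import numpy as np

def composite_imputation_loss(x_true, x_recon, miss_mask,
                              noise_weight=0.1, var_weight=0.01):
    """Illustrative composite loss: masked MSE on missing entries,
    a noise-robustness term on observed entries, and a variance
    penalty discouraging collapsed (near-constant) reconstructions.
    Weights are hypothetical placeholders."""
    # (i) guided, masked MSE: error counted only where values were missing
    masked_mse = np.sum(miss_mask * (x_true - x_recon) ** 2) / max(miss_mask.sum(), 1)
    # (ii) noise-aware term: reconstruction error on the observed entries
    obs_mask = 1.0 - miss_mask
    noise_term = np.sum(obs_mask * (x_true - x_recon) ** 2) / max(obs_mask.sum(), 1)
    # (iii) variance penalty: penalize per-feature variance collapse
    var_penalty = np.mean(np.maximum(0.0, x_true.var(axis=0) - x_recon.var(axis=0)))
    return masked_mse + noise_weight * noise_term + var_weight * var_penalty
```

In a training loop this scalar would be minimized by the autoencoder; a perfect reconstruction drives all three terms to zero.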
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized, model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
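The wrapper-style feature selection described above, where a metaheuristic optimizes a classification-driven fitness over binary feature masks, can be sketched as follows. As a hedged simplification, plain random search stands in for the HHO hawks and a nearest-centroid classifier stands in for the stacked ensemble; all names and data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Classification-driven fitness: accuracy of a nearest-centroid
    classifier on the selected feature subset (a stand-in for the
    wrapper objective that HHO would optimize)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean()

# toy data: feature 0 is informative, features 1-4 are pure noise
X = rng.normal(size=(200, 5))
y = (rng.random(200) < 0.5).astype(int)
X[:, 0] += 3 * y  # classes separable along feature 0 only

best_mask, best_fit = None, -1.0
for _ in range(200):  # random search in place of the HHO population update
    mask = rng.integers(0, 2, size=5)
    f = fitness(mask, X, y)
    if f > best_fit:
        best_mask, best_fit = mask, f
```

The search reliably keeps the informative feature in the winning mask, which is the behavior the fitness function is meant to reward.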
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used with multiple stego images provides good image quality but often results in low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
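For readers unfamiliar with PVO, the basic reversible embed/extract step on a block's maximum pixel can be sketched as follows. This follows the classic single-bit PVO idea (embed when the top prediction error equals 1, shift when it is larger); the paper's triple-stego, multi-bit scheme is considerably more elaborate, so treat this purely as an illustration.

```python
def pvo_embed_max(block, bit):
    """Simplified PVO embedding on the maximum of one sorted block.
    Returns (marked block, embedded?) and is exactly reversible."""
    b = list(block)
    e = b[-1] - b[-2]          # prediction error of the maximum
    if e == 1:                 # embeddable: expand the error by the bit
        b[-1] += bit
        return b, True
    if e >= 2:                 # not embeddable: shift to stay decodable
        b[-1] += 1
        return b, False
    return b, False            # e == 0: left unchanged

def pvo_extract_max(marked):
    """Inverse of pvo_embed_max: recover (original block, bit-or-None)."""
    b = list(marked)
    e = b[-1] - b[-2]
    if e == 1:
        return b, 0            # bit 0 embedded, pixel unchanged
    if e == 2:
        b[-1] -= 1
        return b, 1            # bit 1 embedded
    if e >= 3:
        b[-1] -= 1
        return b, None         # shifted block, no payload
    return b, None             # e == 0: untouched
```

The round trip always restores the cover block exactly, which is the defining property of RDH.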
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three deep learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, about 4.3% higher than that for unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall on the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, although the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
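The simplest of the sampling techniques such a comparison might include, naive random oversampling of minority classes, can be sketched as follows (interpolating methods like SMOTE generate synthetic points instead of duplicating; this function and its name are ours).

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Naive random oversampling: duplicate minority-class rows until
    every class matches the majority count. A baseline against which
    synthetic samplers such as SMOTE are typically compared."""
    rng = rng or np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        pick = rng.choice(idx, size=target, replace=True)
        Xs.append(X[pick])
        ys.append(y[pick])
    return np.vstack(Xs), np.concatenate(ys)
```

After resampling, every class appears with equal frequency in the training set, which is what drives the recall gains reported for minority attack classes.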
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculation under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
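The appeal of an attribute-graph model is that nodes and edges carry free-form attribute dictionaries, so new market entities or billing fields can be added without schema migrations. A minimal sketch under our own assumptions (class design, node names, and figures are all illustrative, not from the paper):

```python
class AttributeGraph:
    """Toy attribute graph: nodes and edges each carry an open-ended
    attribute dict, so heterogeneous market data needs no fixed schema."""

    def __init__(self):
        self.nodes = {}   # node_id -> attribute dict
        self.edges = {}   # (src, dst) -> attribute dict

    def add_node(self, node_id, **attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)

    def add_edge(self, src, dst, **attrs):
        self.edges.setdefault((src, dst), {}).update(attrs)

    def neighbors(self, node_id):
        return [d for (s, d) in self.edges if s == node_id]

# hypothetical bilateral contract between a generator and a retail user
g = AttributeGraph()
g.add_node("plant_A", kind="generator", capacity_mw=600)
g.add_node("user_B", kind="retail_user")
g.add_edge("plant_A", "user_B", contract="bilateral",
           volume_mwh=120.5, price=0.42)
```

A settlement operator would then walk such edges, e.g. computing `volume_mwh * price` per contract, and configurable rules would compose these operators into bills.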
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, a data-efficient training approach was introduced, with training portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
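One representative vector-based similarity feature of the kind such a hybrid scorer combines is cosine similarity between an essay vector and a reference-answer vector (text-based and embedding-based measures are analogous; this standalone function is our illustration, not the paper's code).

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors: 1.0 for
    parallel vectors, 0.0 for orthogonal ones. Used as one scoring
    feature alongside text- and embedding-based measures."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

In the hybrid pipeline, many such similarity scores become input features for the Random Forest regressor that predicts the final essay grade.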
Objective expertise evaluation of individuals, as a prerequisite for team formation, has been a long-term desideratum in large software development companies. With the rapid advancement of machine learning methods and the reliable data stored in project management tools, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise using metadata from task-tracking systems. We mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. We then automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
With the rapid growth of biomedical data, particularly multi-omics data spanning genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and new obstacles. The huge and diversified nature of these datasets cannot always be managed with traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions: combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and the understanding of complicated disorders.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to tailor therapies effectively.
We investigate null tests of cosmic accelerated expansion using the baryon acoustic oscillation (BAO) data measured by the Dark Energy Spectroscopic Instrument (DESI), reconstructing the dimensionless Hubble parameter E(z) from the DESI BAO Alcock-Paczynski (AP) data with a Gaussian process to perform the null test. We find strong evidence of accelerated expansion from the DESI BAO AP data. By reconstructing the deceleration parameter q(z) from the DESI BAO AP data, we find that accelerated expansion has persisted since z ≈ 0.7 at the 99.7% confidence level. Additionally, to provide insight into the Hubble tension problem, we propose combining the reconstructed E(z) with D_H/r_d data to derive the model-independent result r_d h = 99.8 ± 3.1 Mpc. This result is consistent with measurements from cosmic microwave background (CMB) anisotropies using the ΛCDM model. We also propose a model-independent method for reconstructing the comoving angular diameter distance D_M(z) from the distance modulus μ using SNe Ia data, combining this result with the DESI BAO D_M/r_d data to constrain the value of r_d. We find that the value of r_d derived from this model-independent method is smaller than that obtained from CMB measurements, with a significant discrepancy of at least 4.17σ. All conclusions drawn in this paper are independent of cosmological models and gravitational theories.
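The deceleration parameter follows from the reconstructed expansion rate via q(z) = (1+z) E'(z)/E(z) − 1. A quick numerical check against a fiducial flat ΛCDM model (the paper's reconstruction is model-independent; this E(z) and Ωm = 0.3 are our illustrative assumptions) shows q < 0 today with a sign change near z ≈ 0.7, consistent with the transition redshift quoted above.

```python
import math

def E(z, om=0.3):
    """Dimensionless Hubble parameter for flat LambdaCDM,
    E(z) = sqrt(Om (1+z)^3 + (1 - Om)); fiducial illustration only."""
    return math.sqrt(om * (1 + z) ** 3 + (1 - om))

def q(z, om=0.3, h=1e-5):
    """Deceleration parameter q(z) = (1+z) E'(z)/E(z) - 1,
    with E'(z) from a central finite difference."""
    dE = (E(z + h, om) - E(z - h, om)) / (2 * h)
    return (1 + z) * dE / E(z, om) - 1
```

For Ωm = 0.3 this gives q(0) = 3Ωm/2 − 1 = −0.55 and a zero crossing at (1+z)³ = 2(1−Ωm)/Ωm, i.e. z ≈ 0.67.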
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with the CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. The proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. Performance is compared from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
We observed the steady-state visually evoked potential (SSVEP) from a healthy subject using a compact quad-channel potassium spin-exchange relaxation-free (SERF) optically pumped magnetometer (OPM). To this end, 30 s of data were collected, and SSVEP-related magnetic responses with signal intensities ranging from 150 fT to 300 fT were observed on all four channels. The corresponding signal-to-noise ratio (SNR) was in the range of 3.5–5.5. We then used different channels to operate the sensor as a gradiometer. In the specific case of detecting SSVEP, we noticed that the short channel separation distance led to a strongly diminished gradiometer signal. Although not optimal for SSVEP detection, this set-up can prove highly useful for other magnetoencephalography (MEG) paradigms that require good noise cancellation. Considering its compactness, low cost, and good performance, the K-SERF sensor has great potential for biomagnetic field measurements and brain-computer interfaces (BCI).
The stable steady-state periodic responses of a belt-drive system with a one-way clutch are studied. For the first time, the dynamical system is investigated under dual excitations: the system is simultaneously excited by the firing pulsations of the engine and the harmonic motion of the foundation. Nonlinear discrete-continuous equations are derived to couple the transverse vibration of the belt spans with the rotations of the driving, driven, and accessory pulleys. The nonlinear dynamics is studied under equal and multiple relations between the frequency of the firing pulsations and the frequency of the foundation motion. The translating belt spans are modeled as axially moving strings, and a set of nonlinear piecewise ordinary differential equations is obtained by Galerkin truncation. Under various relations between the excitation frequencies, the time histories of the dynamical system are numerically simulated based on the time discretization method, and the stable steady-state periodic response curves are calculated based on frequency sweeps. The convergence of the Galerkin truncation is also examined. Numerical results demonstrate that the one-way clutch reduces the resonance amplitude of the rotations of the driven pulley and the accessory pulley. Moreover, numerical examples prove that the resonance areas of the belt spans are decreased by eliminating torque transmission in the opposite direction. With increasing amplitude of the foundation excitation, the damping effect of the one-way clutch is reduced. Furthermore, as the amplitude of the firing pulsations of the engine increases, jumping phenomena occur in the steady-state response curves of the belt-drive system both with and without a one-way clutch.
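For orientation, the axially moving string model and the Galerkin truncation mentioned above take the following standard form (a textbook sketch, not the paper's exact equations; symbols are our assumptions: transport speed c, tension T, linear density ρA, span length L, forcing f):

```latex
% Transverse vibration of an axially moving string:
\rho A\left(\frac{\partial^{2} w}{\partial t^{2}}
  + 2c\,\frac{\partial^{2} w}{\partial x\,\partial t}
  + c^{2}\frac{\partial^{2} w}{\partial x^{2}}\right)
  - T\,\frac{\partial^{2} w}{\partial x^{2}} = f(x,t).

% Galerkin truncation with sine trial functions satisfying the
% pinned-end conditions w(0,t) = w(L,t) = 0:
w(x,t) \approx \sum_{k=1}^{N} q_{k}(t)\,\sin\frac{k\pi x}{L}.
```

Projecting the string equation onto each trial function yields N coupled ordinary differential equations for the modal coordinates q_k(t); the piecewise character then comes from the one-way clutch, which transmits torque in only one direction.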
Current research on the flow ripple of axial piston pumps focuses mainly on the effect of part structure on the flow ripple, where the structures are usually designed and optimized at rated working conditions. However, the pump usually has to work over large-scale, time-variant working conditions. This paper therefore focuses on the flow ripple characteristics of the pump and its attainable test accuracy under varying steady-state and transient conditions across a wide range of operating parameters. First, a simulation model is constructed that takes the kinematics of the oil film within the friction pairs into account for higher accuracy. A test bed adopting the secondary source method is then built to verify the model. The simulation and test results show that the angular position of the piston at which the peak flow ripple is produced varies with pressure. The pulsating amplitude and pulsation rate of the flow ripple increase with rising pressure and with the variation rate of pressure. For the pump working at constant speed, the flow pulsation rate decreases dramatically with increasing speed when the speed is below 27.78% of the maximum speed, and subsequently shows a small decreasing tendency as the speed increases further. With a rising variation rate of speed, the pulsating amplitude and pulsation rate of the flow ripple increase. As the swash plate angle increases, the pulsating amplitude of the flow ripple increases while the flow pulsation rate decreases. Compared with the effect of pressure variation, the test accuracy of the flow ripple is more sensitive to speed variation: a test accuracy above 96.20% is attainable for a pulsating pressure amplitude deviating within ±6% of the mean pressure, whereas for a speed deviating within ±2% of the mean speed the attainable test accuracy is above 93.07%. The model constructed in this research provides a method to determine the flow ripple characteristics of the pump and its attainable test accuracy under large-scale, time-variant working conditions, together with a discussion of the variation of the flow ripple and its attainable test accuracy across wide operating ranges.
In this article, a steady-state mathematical model was developed and experimentally evaluated to investigate the effect of influent flow distribution and of the volume ratios of the anoxic and aerobic zones in each stage on the total nitrogen concentration of the effluent in the step-feed biological nitrogen removal process. Unlike previous modeling methods, this model can be used to calculate the removal rates of ammonia and nitrate in each stage and thereby predict the concentrations of ammonia, nitrate, and total nitrogen in the effluent. To verify the simulation results, pilot-scale experimental studies were carried out in a four-stage step-feed process. Good correlations were achieved between the measured data and the simulation results, which proved the validity of the developed model, and the sensitivity of the model predictions was analyzed. After verification, the step-feed process was operated optimally for five months using the model and the criteria developed for design and operation. During the pilot-scale experimental period, the effluent total nitrogen concentrations were all below 5 mg·L⁻¹, with more than 90% removal efficiency.
A novel steady-state optimization (SSO) of internal combustion engine (ICE) strategy is proposed to maximize the efficiency of the overall powertrain of hybrid electric vehicles, in which the ICE efficiency and the efficiencies of the electric motor (EM) and the energy storage device are all explicitly taken into account. In addition, a novel idle optimization of ICE strategy is implemented to obtain the optimal idle operating point of the ICE and the corresponding optimal parking-generation power of the EM from the viewpoint of the SSO strategy. Simulation results show that a potential fuel economy improvement is achieved relative to the conventional strategy that optimizes only the ICE efficiency, and that fuel consumption per voltage increment decreases substantially during parking charge under the novel idle optimization strategy.
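The core idea, picking the operating point that maximizes the *product* of component efficiencies rather than ICE efficiency alone, can be sketched in a few lines. The candidate points and all efficiency numbers below are made up for illustration.

```python
def best_operating_point(points):
    """Select the operating point maximizing overall powertrain
    efficiency (ICE x EM x storage), the essence of the SSO strategy,
    instead of maximizing ICE efficiency in isolation."""
    return max(points, key=lambda p: p["ice_eff"] * p["em_eff"] * p["storage_eff"])

# hypothetical candidates: A has the best ICE efficiency,
# but B wins once EM and storage losses are accounted for
candidates = [
    {"name": "A", "ice_eff": 0.38, "em_eff": 0.85, "storage_eff": 0.90},
    {"name": "B", "ice_eff": 0.36, "em_eff": 0.93, "storage_eff": 0.95},
]
```

Here point A would be chosen by an ICE-only criterion (0.38 > 0.36), yet point B yields the higher overall efficiency (0.318 vs. 0.291), which is exactly the gap the SSO strategy exploits.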
Gas-solid hydrodynamic steady-state operation is the operating basis of a chemical looping dual-reactor system. This study reports experimental results on the steady-state operation characteristics of gas-solid flow in a 15.5 m high dual circulating fluidized bed (CFB) cold test system. The effects of superficial gas velocity, static bed material height, and solid returning modes on the steady-state operation characteristics between the two CFBs were investigated. The results suggest that the solid distributions in the dual CFB test system were mainly determined by the superficial gas velocity, and that a larger solid inventory may help to improve the solid distributions. Besides, a cross-returning mode coupled with self-returning is favourable for steady-state running in the dual-reactor test system.
It is known that there is a discrepancy between field data and the results predicted from the previous equations derived by simplifying three-dimensional(3-D) flow into two-dimensions(2-D).This paper presents a ne...It is known that there is a discrepancy between field data and the results predicted from the previous equations derived by simplifying three-dimensional(3-D) flow into two-dimensions(2-D).This paper presents a new steady-state productivity equation for horizontal wells in bottom water drive gas reservoirs.Firstly,the fundamental solution to the 3-D steady-state Laplace equation is derived with the philosophy of source and the Green function for a horizontal well located at the center of the laterally infinite gas reservoir.Then,using the fundamental solution and the Simpson integral formula,the average pseudo-pressure equation and the steady-state productivity equation are achieved for the horizontal section.Two case-studies are given in the paper,the results calculated from the newly-derived formula are very close to the numerical simulation performed with the Canadian software CMG and the real production data,indicating that the new formula can be used to predict the steady-state productivity of such horizontal gas wells.展开更多
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52402520).
Abstract: Actuator faults can be critical in turbofan engines as they can lead to stall, surge, loss of thrust and failure of speed control. Thus, fault diagnosis of gas turbine actuators has attracted considerable attention from both academia and industry. However, the extensive literature that exists on this topic does not address identifying the severity of actuator faults and focuses mainly on actuator fault detection and isolation. In addition, previous studies of actuator fault identification have not dealt with multiple concurrent faults in real time, especially when these are accompanied by sudden failures under dynamic conditions. This study develops component-level models for fault identification in four typical actuators used in high-bypass ratio turbofan engines under both dynamic and steady-state conditions, and these are then integrated with the engine performance model developed by the authors. The research results reported here present a novel method of quantifying actuator faults using dynamic effect compensation. The maximum error for each actuator is less than 0.06% and 0.07%, with average computational times of less than 0.0058 s and 0.0086 s, for steady-state and transient cases respectively. These results confirm that the proposed method can accurately and efficiently identify concurrent actuator faults for an engine operating under either transient or steady-state conditions, even in the case of a sudden malfunction. The research results demonstrate the potential benefit to emergency response capabilities of introducing this method of monitoring the health of aero engines.
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
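The three-term loss described in this abstract can be sketched in a few lines. The plain-Python illustration below makes assumptions not stated in the abstract: the masked MSE is averaged over missing entries only, the noise-aware term penalizes reconstruction drift under small Gaussian input corruption, the variance penalty compares output variance to the input's, and the weights and the `recon_fn` interface are hypothetical placeholders, not the paper's settings.

```python
import random
import statistics

def composite_loss(x_true, miss_mask, recon_fn,
                   noise_sigma=0.1, lam_noise=0.01, lam_var=0.001, seed=0):
    """Sketch of a three-term imputation loss (illustrative weights).

    x_true    : list of ground-truth feature values
    miss_mask : 1 where the entry is treated as missing
    recon_fn  : maps an input vector to its reconstruction
    """
    rng = random.Random(seed)
    # zero-fill missing entries before reconstruction
    x_in = [0.0 if m else v for v, m in zip(x_true, miss_mask)]
    x_rec = recon_fn(x_in)
    # (i) guided masked MSE: error counted on the missing entries only
    miss = [(r - t) ** 2 for r, t, m in zip(x_rec, x_true, miss_mask) if m]
    masked_mse = sum(miss) / max(len(miss), 1)
    # (ii) noise-aware term: reconstruction should be stable when the
    # input is corrupted by small Gaussian noise
    noisy = [v + rng.gauss(0.0, noise_sigma) for v in x_in]
    noisy_rec = recon_fn(noisy)
    noise_term = statistics.fmean((a - b) ** 2
                                  for a, b in zip(noisy_rec, x_rec))
    # (iii) variance penalty: discourage collapsed (low-variance) outputs
    var_term = (statistics.pvariance(x_rec)
                - statistics.pvariance(x_true)) ** 2
    return masked_mse + lam_noise * noise_term + lam_var * var_term
```

With an identity `recon_fn` and the auxiliary weights zeroed, the loss reduces to the masked MSE on the missing entries, which makes the role of each term easy to verify in isolation.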
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting the attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
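SMOTE, used in the abstract above to rebalance the training data, synthesizes minority-class samples by interpolating each sampled point toward one of its k nearest minority neighbours. A minimal stdlib-only sketch of that idea follows; it is illustrative only, and production code would normally use `imblearn.over_sampling.SMOTE` instead.

```python
import math
import random

def smote_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples (SMOTE-style).

    X_min : list of minority-class feature vectors
    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority-class neighbours.
    """
    rng = random.Random(seed)
    n = len(X_min)
    synth = []
    for _ in range(n_new):
        i = rng.randrange(n)
        # k nearest minority neighbours of sample i (excluding itself)
        nbrs = sorted((j for j in range(n) if j != i),
                      key=lambda j: math.dist(X_min[i], X_min[j]))[:k]
        j = rng.choice(nbrs)
        gap = rng.random()  # interpolation factor in [0, 1)
        synth.append([a + gap * (b - a)
                      for a, b in zip(X_min[i], X_min[j])])
    return synth
```

Because every synthetic sample is a convex combination of two existing minority samples, the new points stay inside the minority class's bounding region rather than being drawn blindly from the feature space.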
Funding: Funded by the University of Transport and Communications (UTC) under grant number T2025-CN-004.
Abstract: Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capability. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
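For context, the classic one-bit PVO step on the block maximum looks as follows. This is a simplified illustration of the underlying mechanism only, not the paper's multi-bit, triple-stego scheme (which embeds up to 14 bits per block): the bit is carried by expanding the prediction error between the largest and second-largest pixel.

```python
def pvo_embed_max(block, bit):
    """Embed one bit into the largest pixel of a block (classic PVO)."""
    idx = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = idx[-1], idx[-2]
    e = block[i_max] - block[i_2nd]   # prediction error of the maximum
    out = list(block)
    if e == 1:                        # expandable: carries the bit
        out[i_max] += bit
    elif e > 1:                       # shifted: carries no data
        out[i_max] += 1
    return out                        # e == 0: left unchanged

def pvo_extract_max(block):
    """Inverse step: return (bit_or_None, original_block)."""
    idx = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = idx[-1], idx[-2]
    e = block[i_max] - block[i_2nd]
    out = list(block)
    if e == 1:                        # error 1 decodes as bit 0
        return 0, out
    if e >= 2:                        # undo the expansion or shift
        out[i_max] -= 1
        return (1, out) if e == 2 else (None, out)
    return None, out                  # e == 0: block was not modified
```

Reversibility holds because only the maximum is ever increased, so the sorted order of the block is preserved and the decoder can locate exactly the pixel that was modified.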
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of security monitoring technology based network behavior against encrypted cyber threats in ICT convergence environment).
Abstract: With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted devices to fully encrypted ones. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (Encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Funding: Funded by the Science and Technology Project of State Grid Corporation of China (5108-202355437A-3-2-ZN).
Abstract: The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
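The attribute-graph idea behind this framework can be illustrated with a toy structure in which nodes and edges carry free-form attribute dictionaries and a settlement query folds over the matching edges. The schema and the `settle` operator below are hypothetical simplifications invented for illustration; the paper's graph model and operator library are far richer.

```python
class AttrGraph:
    """Toy attribute graph: nodes and edges each carry a dict of attributes."""

    def __init__(self):
        self.nodes = {}   # node id -> attribute dict
        self.edges = []   # (src, dst, attribute dict) triples

    def add_node(self, nid, **attrs):
        self.nodes[nid] = attrs

    def add_edge(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

    def settle(self, market):
        """Example settlement operator: sum energy * price over all
        trade edges belonging to one market."""
        return sum(a["mwh"] * a["price"]
                   for _, _, a in self.edges if a.get("market") == market)

# hypothetical usage: one generator settling two markets
g = AttrGraph()
g.add_node("genco", kind="generator")
g.add_node("lse", kind="load-serving entity")
g.add_edge("genco", "lse", market="spot", mwh=100.0, price=0.40)
g.add_edge("genco", "lse", market="contract", mwh=50.0, price=0.35)
```

Because the attributes are schema-free dictionaries, new market types or retroactive fields can be attached to edges without migrating a rigid relational schema, which is the flexibility argument the abstract makes.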
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2024-02-01264).
Abstract: Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES by combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine learning approach, selecting top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data efficiency training approach was introduced, where training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, emphasizing an effective trade-off between performance and computational costs. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
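Two of the similarity families named in this abstract are easy to make concrete. The sketch below shows a text-based measure (Jaccard token overlap) and a vector-based measure (cosine over term counts) between a student essay and a reference answer; these are generic formulations chosen for illustration, not necessarily the exact variants the paper uses, and the embedding-based family (which needs a pretrained model) is omitted.

```python
import math
from collections import Counter

def jaccard(a_tokens, b_tokens):
    """Text-based similarity: overlap of the two token sets."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine_sim(a_tokens, b_tokens):
    """Vector-based similarity: cosine between term-count vectors."""
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

In a hybrid feature vector, each essay/reference pair contributes one value per measure, and a downstream regressor (RF in the paper's Experiment 2) learns how to weight them against the embedding-based features.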
Funding: Supported by the project "Romanian Hub for Artificial Intelligence-HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021-2027, MySMIS No. 334906.
Abstract: Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Abstract: With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data due to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across all omics data. Deep learning has been found to be effective in disease classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for collaboration across disciplines to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
Abstract: High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Funding: Supported by the Xuhui District Health Commission, No. SHXH202214.
Abstract: Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to effectively tailor therapies.
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2020YFC2201504) and the National Natural Science Foundation of China (Grant Nos. 12588101 and 12535002).
Abstract: We investigate the null tests of cosmic accelerated expansion using the baryon acoustic oscillation (BAO) data measured by the Dark Energy Spectroscopic Instrument (DESI), and reconstruct the dimensionless Hubble parameter E(z) from the DESI BAO Alcock-Paczynski (AP) data using a Gaussian process to perform the null test. We find strong evidence of accelerated expansion from the DESI BAO AP data. By reconstructing the deceleration parameter q(z) from the DESI BAO AP data, we find that accelerated expansion persisted until z ≈ 0.7 at the 99.7% confidence level. Additionally, to provide insights into the Hubble tension problem, we propose combining the reconstructed E(z) with D_H/r_d data to derive a model-independent result r_d h = 99.8 ± 3.1 Mpc. This result is consistent with measurements from cosmic microwave background (CMB) anisotropies using the ΛCDM model. We also propose a model-independent method for reconstructing the comoving angular diameter distance D_M(z) from the distance modulus μ using SNe Ia data, and combine this result with the DESI BAO data of D_M/r_d to constrain the value of r_d. We find that the value of r_d derived from this model-independent method is smaller than that obtained from CMB measurements, with a significant discrepancy of at least 4.17σ. All the conclusions drawn in this paper are independent of cosmological models and gravitational theories.
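The null-test quantity behind these statements is the deceleration parameter, q(z) = (1+z) E'(z)/E(z) - 1, with q < 0 signalling acceleration. As a self-contained numerical check (using flat ΛCDM with Ω_m = 0.3 for E(z), an assumption made here only for illustration, whereas the paper reconstructs E(z) model-independently with a Gaussian process), the transition from acceleration to deceleration indeed lands near z ≈ 0.7:

```python
import math

def E_lcdm(z, om=0.3):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for flat LambdaCDM."""
    return math.sqrt(om * (1 + z) ** 3 + 1 - om)

def q_of_z(z, E=E_lcdm, h=1e-5):
    """Deceleration parameter q(z) = (1+z) E'(z)/E(z) - 1,
    with E'(z) estimated by a central finite difference."""
    dE = (E(z + h) - E(z - h)) / (2 * h)
    return (1 + z) * dE / E(z) - 1
```

For Ω_m = 0.3 the closed form gives q(0) = (3/2)Ω_m - 1 = -0.55, and solving q(z) = 0 yields a transition redshift of about 0.67, consistent with the abstract's statement that acceleration persists until z ≈ 0.7.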
Funding: The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University (RIF/2021/05).
Abstract: Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, it is challenging due to the nature of small-scale and imbalanced PD datasets. This paper proposed a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using CNN and extend the conventional SVM to a DSVM for better classification performance in small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves the sensitivity and specificity by 4.05%-4.72% and 4.96%-5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves the sensitivity by 1.24%-57.4% and the specificity by 1.04%-163% and reduces biased detection towards the majority class. The ablation experiments confirm the effectiveness of individual components. Two future research directions have also been suggested.
Funding: Project supported by the National Key Research and Development Program of China (Grant Nos. 2016YFA0300600 and 2016YFA0301500), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB07030000 and XDBS32000000), the National Natural Science Foundation of China (Grant Nos. 11474347 and 31730039), and the Fund from the Ministry of Science and Technology of China (Grant No. 2015CB351701).
Abstract: We observed the steady-state visually evoked potential (SSVEP) from a healthy subject using a compact quad-channel potassium spin exchange relaxation-free (SERF) optically pumped magnetometer (OPM). To this end, 30 s of data were collected, and SSVEP-related magnetic responses with signal intensity ranging from 150 fT to 300 fT were observed for all four channels. The corresponding signal-to-noise ratio (SNR) was in the range of 3.5–5.5. We then used different channels to operate the sensor as a gradiometer. In the specific case of detecting SSVEP, we noticed that the short channel separation distance led to a strongly diminished gradiometer signal. Although not optimal for the case of SSVEP detection, this set-up can prove to be highly useful for other magnetoencephalography (MEG) paradigms that require good noise cancellation. Considering its compactness, low cost, and good performance, the K-SERF sensor has great potential for biomagnetic field measurements and brain-computer interfaces (BCI).
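A common way to quantify SSVEP SNR is the spectral amplitude at the stimulus frequency divided by the mean amplitude at neighbouring frequencies. The sketch below implements that definition with a single-frequency DFT bin; the abstract does not specify the estimator actually used, so this is an illustrative stand-in, demonstrated on a synthetic 10 Hz response in Gaussian noise.

```python
import cmath
import math
import random

def bin_amplitude(x, f, fs):
    """Amplitude of the single-frequency DFT component at f (Hz)."""
    n = len(x)
    acc = sum(v * cmath.exp(-2j * math.pi * f * k / fs)
              for k, v in enumerate(x))
    return 2 * abs(acc) / n

def ssvep_snr(x, f_stim, fs, neighbours=(-2, -1, 1, 2)):
    """Stimulus-bin amplitude over the mean amplitude of nearby bins."""
    sig = bin_amplitude(x, f_stim, fs)
    noise = sum(bin_amplitude(x, f_stim + d, fs)
                for d in neighbours) / len(neighbours)
    return sig / noise

# demo: 2 s of a unit-amplitude 10 Hz response buried in noise
rng = random.Random(1)
fs = 500
x = [math.sin(2 * math.pi * 10 * k / fs) + rng.gauss(0.0, 0.2)
     for k in range(2 * fs)]
```

With an integer number of stimulus cycles in the window, the 10 Hz bin recovers the response amplitude almost exactly while the neighbouring bins see only the noise floor, which is what makes this ratio a convenient SSVEP detection statistic.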
Funding: This project was supported by the State Key Program of the National Natural Science Foundation of China (Grant 11232009) and the National Natural Science Foundation of China (Grants 11372171 and 11422214).
Abstract: The stable steady-state periodic responses of a belt-drive system with a one-way clutch are studied. For the first time, the dynamical system is investigated under dual excitations. The system is simultaneously excited by the firing pulsations of the engine and the harmonic motion of the foundation. Nonlinear discrete-continuous equations are derived for coupling the transverse vibration of the belt spans and the rotations of the driving and driven pulleys and the accessory pulley. The nonlinear dynamics is studied under equal and multiple relations between the frequency of the firing pulsations and the frequency of the foundation motion. Furthermore, translating belt spans are modeled as axially moving strings. A set of nonlinear piecewise ordinary differential equations is achieved by using the Galerkin truncation. Under various relations between the excitation frequencies, the time histories of the dynamical system are numerically simulated based on the time discretization method. Furthermore, the stable steady-state periodic response curves are calculated based on the frequency sweep. Moreover, the convergence of the Galerkin truncation is examined. Numerical results demonstrate that the one-way clutch reduces the resonance amplitude of the rotations of the driven pulley and the accessory pulley. On the other hand, numerical examples prove that the resonance areas of the belt spans are decreased by eliminating the torque-transmitting in the opposite direction. With the increasing amplitude of the foundation excitation, the damping effect of the one-way clutch will be reduced. Furthermore, as the amplitude of the firing pulsations of the engine increases, the jumping phenomena in the steady-state response curves of the belt-drive system with or without a one-way clutch both occur.
Funding: Supported by the National Basic Research Program of China (973 Program, Grant No. 2014CB046403) and the National Key Technology R&D Program of the Twelfth Five-Year Plan of China (Grant No. 2013BAF07B01).
Abstract: Current research on the flow ripple of axial piston pumps mainly focuses on the effect of the structure of parts on the flow ripple. Therein, the structures of parts are usually designed and optimized at rated working conditions. However, the pump usually has to work in large-scale and time-variant working conditions. Therefore, this paper focuses on the flow ripple characteristics of the pump and the analysis of its test accuracy with respect to variant steady-state conditions and transient conditions over a wide range of operating parameters. First, a simulation model has been constructed, which takes the kinematics of the oil film within friction pairs into account for higher accuracy. Afterwards, a test bed which adopts the Secondary Source Method is built to verify the model. The simulation and test results show that the angular position of the piston corresponding to the position where the peak flow ripple is produced varies with the pressure. The pulsating amplitude and pulsation rate of the flow ripple increase with the rise of pressure and the variation rate of pressure. For the pump working at a constant speed, the flow pulsation rate decreases dramatically with increasing speed when the speed is less than 27.78% of the maximum speed, and subsequently presents a small decreasing tendency as the speed further increases. With the rise of the variation rate of speed, the pulsating amplitude and pulsation rate of the flow ripple increase. As the swash plate angle augments, the pulsating amplitude of the flow ripple increases, but the flow pulsation rate decreases. In contrast with the effect of the variation of pressure, the test accuracy of the flow ripple is more sensitive to the variation of speed. This makes a test accuracy above 96.20% available for a pulsating amplitude of pressure deviating within a range of ±6% from the mean pressure. However, with a variation of speed deviating within a range of ±2% from the mean speed, the attainable test accuracy of the flow ripple is above 93.07%. The model constructed in this research provides a method to determine the flow ripple characteristics of the pump and its attainable test accuracy under large-scale and time-variant working conditions. Meanwhile, a discussion of the variation of the flow ripple and its obtainable test accuracy when the pump works over wide operating ranges is given as well.
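The pulsation rate referred to throughout is conventionally defined as the peak-to-peak flow ripple divided by the mean flow. A purely kinematic toy model of an odd-numbered piston pump (ignoring the leakage, oil-film and compressibility effects that the paper's full model includes) reproduces the familiar small ripple of a 9-piston machine:

```python
import math

def ideal_flow(theta, n=9):
    """Summed kinematic delivery of an n-piston pump at shaft angle theta.

    Each piston delivers max(0, sin(...)): only the discharge half of
    its stroke contributes to the outlet flow.
    """
    return sum(max(0.0, math.sin(theta + 2 * math.pi * k / n))
               for k in range(n))

def pulsation_rate(flow):
    """Flow pulsation rate: (q_max - q_min) / q_mean."""
    q_mean = sum(flow) / len(flow)
    return (max(flow) - min(flow)) / q_mean

# sample one full shaft revolution at 0.1 degree resolution
flows = [ideal_flow(2 * math.pi * i / 3600) for i in range(3600)]
```

For an odd piston count the kinematic ripple is on the order of 1-2% of the mean flow (far smaller than for an even count), which is why 9-piston designs are common; the paper's point is that real pressure- and speed-dependent effects push the measured ripple well beyond this kinematic floor.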
Funding: Supported by the National Natural Science Foundation Key International Cooperation Project of China (No. 50521140075), the 863 Attached Financial Supporting Item of the Beijing Municipal Science and Technology Commission (No. Z0005186040421), and the Doctor Subject Special Financial Supporting Item of High College (No. 20060005002).
Abstract: In this article, a steady-state mathematical model was developed and experimentally evaluated to investigate the effect of influent flow distribution and the volume ratios of anoxic and aerobic zones in each stage on the total nitrogen concentration of the effluent in the step-feed biological nitrogen removal process. Unlike previous modeling methods, this model can be used to calculate the removal rates of ammonia and nitrate in each stage and thereby predict the concentrations of ammonia, nitrate, and total nitrogen in the effluent. To verify the simulation results, pilot-scale experimental studies were carried out in a four-stage step-feed process. Good correlations were achieved between the measured data and the simulation results, which proved the validity of the developed model. The sensitivity of the model predictions was analyzed. After verification of its validity, the step-feed process was optimally operated for five months using the model and the criteria developed for the design and operation. During the pilot-scale experimental period, the effluent total nitrogen concentrations were all below 5 mg·L⁻¹, with more than 90% removal efficiency.
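The stage-wise bookkeeping idea behind such a model can be caricatured in a few lines: each stage receives its fraction of the influent, the anoxic zone removes part of the incoming nitrate, and the aerobic zone nitrifies part of the ammonia present. The fixed efficiencies below are hypothetical placeholders for illustration only, not the calibrated stage kinetics of the paper's model, and dilution effects from mixing flows are deliberately ignored.

```python
def step_feed_effluent(inf_nh4, flow_split, nit_eff=0.95, denit_eff=0.90):
    """Toy stage-wise nitrogen bookkeeping for a step-feed process.

    inf_nh4    : influent ammonia load (mg/L-equivalent units)
    flow_split : fraction of influent fed to each stage (sums to 1)
    Each stage: the anoxic zone removes denit_eff of the incoming
    nitrate, then the aerobic zone nitrifies nit_eff of the ammonia.
    Returns effluent (ammonia, nitrate).
    """
    nh4 = no3 = 0.0
    for frac in flow_split:
        nh4 += inf_nh4 * frac        # step-fed ammonia for this stage
        no3 *= (1.0 - denit_eff)     # anoxic zone: denitrification
        nitrified = nit_eff * nh4    # aerobic zone: nitrification
        nh4 -= nitrified
        no3 += nitrified
    return nh4, no3
```

Even this caricature shows the characteristic behaviour the paper exploits: the last stage's nitrate leaves untreated, so the split ratios and the anoxic/aerobic volume balance control the effluent total nitrogen.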
Funding: National Hi-tech Research and Development Program of China (863 Program, Nos. 2002AA501700 and 2003AA501012).
Abstract: A novel steady-state optimization (SSO) of internal combustion engine (ICE) strategy is proposed to maximize the efficiency of the overall powertrain for hybrid electric vehicles, in which the ICE efficiency and the efficiencies of the electric motor (EM) and the energy storage device are all explicitly taken into account. In addition, a novel idle optimization of ICE strategy is implemented to obtain the optimal idle operating point of the ICE and the corresponding optimal parking generation power of the EM from the viewpoint of the novel SSO of ICE strategy. Simulation results show that a potential fuel economy improvement is achieved by the novel SSO of ICE strategy relative to the conventional one, which optimizes only the ICE efficiency, and that fuel consumption per voltage increment decreases considerably during the parking charge under the novel idle optimization of ICE strategy.
Abstract: Gas-solid hydrodynamic steady-state operation is the operating basis of a chemical looping dual-reactor system. This study reports experimental results on the steady-state operation characteristics of gas-solid flow in a 15.5 m high dual circulating fluidized bed (CFB) cold test system. The effects of superficial gas velocity, static bed material height, and solid returning modes on the steady-state operation characteristics between the two CFBs were investigated. Results suggest that the solid distributions in the dual CFB test system were mainly determined by the superficial gas velocity, and that a larger solid inventory may help to improve the solid distributions. Besides, the cross-returning mode coupled with self-returning is good for steady-state running in the dual-reactor test system.
Funding: Financial support from the Open Fund (PLN1003) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University) and the National Science and Technology Major Project in the 11th Five-Year Plan (Grant No. 2008ZX05054).
Abstract: It is known that there is a discrepancy between field data and the results predicted from previous equations derived by simplifying three-dimensional (3-D) flow into two dimensions (2-D). This paper presents a new steady-state productivity equation for horizontal wells in bottom water drive gas reservoirs. Firstly, the fundamental solution to the 3-D steady-state Laplace equation is derived, using the philosophy of the source and the Green function, for a horizontal well located at the center of a laterally infinite gas reservoir. Then, using the fundamental solution and the Simpson integral formula, the average pseudo-pressure equation and the steady-state productivity equation are obtained for the horizontal section. Two case studies are given in the paper; the results calculated from the newly derived formula are very close to the numerical simulation performed with the Canadian software CMG and to the real production data, indicating that the new formula can be used to predict the steady-state productivity of such horizontal gas wells.