Bayesian-optimized lithology identification has important basic geological research significance and engineering application value,and this paper proposes a Bayesian-optimized lithology identification method based on ...Bayesian-optimized lithology identification has important basic geological research significance and engineering application value,and this paper proposes a Bayesian-optimized lithology identification method based on machine learning of rock visible and near-infrared spectral data.First,the rock spectral data are preprocessed using Savitzky-Golay(SG)smoothing to remove the noise of the spectral data;then,the preprocessed rock spectral data are downscaled using Principal Component Analysis(PCA)to reduce the redundancy of the data,optimize the effective discriminative information,and obtain the rock spectral features;finally,a Bayesian-optimized lithology identification model is established based on rock spectral features,optimize the model hyperparameters using Bayesian optimization(BO)algorithm to avoid the combination of hyperparameters falling into the local optimal solution,and output the predicted type of rock,so as to realize the Bayesian-optimized lithology identification.In addition,this paper conducts comparative analysis on models based on Artificial Neural Network(ANN)/Random Forest(RF),dimensionality reduction/full band,and optimization algorithms.It uses the confusion matrix,accuracy,Precison(P),Recall(R)and F_(1)values(F_(1))as the evaluation indexes of model accuracy.The results indicate that the lithology identification model optimized by the BO-ANN after dimensionality reduction achieves an accuracy of up to 99.80%,up to 99.79%and up to 99.79%.Compared with the BO-RF model,it has higher identification accuracy and better stability for each type of rock identification.The experiments and reliability analysis show that the Bayesian-optimized lithology identification method proposed in this paper has good robustness and generalization performance,which is of great significance for realizing fast,accurate and Bayesian-optimized lithology identification in tunnel site.展开更多
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) has become a crucial resource in astronomical research,offering a vast amount of spectral data for stars,galaxies,and quasars.This paper presents ...The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) has become a crucial resource in astronomical research,offering a vast amount of spectral data for stars,galaxies,and quasars.This paper presents the data processing methods used by LAMOST,focusing on the classification and redshift measurement of large spectral data sets through template matching,as well as the creation of data products.Additionally,this paper details the construction of the Multiple Epoch Catalogs by integrating LAMOST spectral data with photometric data from Gaia and Pan-STARRS,and explains the creation of both low-and medium-resolution data products.展开更多
Astronomical spectroscopy is crucial for exploring the physical properties,chemical composition,and kinematic behavior of celestial objects.With continuous advancements in observational technology,astronomical spectro...Astronomical spectroscopy is crucial for exploring the physical properties,chemical composition,and kinematic behavior of celestial objects.With continuous advancements in observational technology,astronomical spectroscopy faces the dual challenges of rapidly expanding data volumes and relatively lagging data processing capabilities.In this context,the rise of artificial intelligence technologies offers an innovative solution to address these challenges.This paper analyzes the latest developments in the application of machine learning for astronomical spectral data mining and discusses future research directions in AI-based spectral studies.However,the application of machine learning technologies presents several challenges.The high complexity of models often comes with insufficient interpretability,complicating scientific understanding.Moreover,the large-scale computational demands place higher requirements on hardware resources,leading to a significant increase in computational costs.AI-based astronomical spectroscopy research should advance in the following key directions.First,develop efficient data augmentation techniques to enhance model generalization capabilities.Second,explore more interpretable model designs to ensure the reliability and transparency of scientific conclusions.Third,optimize computational efficiency and reduce the threshold for deep-learning applications through collaborative innovations in algorithms and hardware.Furthermore,promoting the integration of cross-band data processing is essential to achieve seamless integration and comprehensive analysis of multi-source data,providing richer,multidimensional information to uncover the mysteries of the universe.展开更多
Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution.Deep learning(DL),as one of the most promising t...Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution.Deep learning(DL),as one of the most promising technologies today,plays a crucial role in the effective assessment of water body health,which is essential for water resource management.This study models using both the original dataset and a dataset augmented with Generative Adversarial Networks(GAN).It integrates optimization algorithms(OA)with Convolutional Neural Networks(CNN)to propose a comprehensive water quality model evaluation method aiming at identifying the optimal models for different pollutants.Specifically,after preprocessing the spectral dataset,data augmentation was conducted to obtain two datasets.Then,six new models were developed on these datasets using particle swarm optimization(PSO),genetic algorithm(GA),and simulated annealing(SA)combined with CNN to simulate and forecast the concentrations of three water pollutants:Chemical Oxygen Demand(COD),Total Nitrogen(TN),and Total Phosphorus(TP).Finally,seven model evaluation methods,including uncertainty analysis,were used to evaluate the constructed models and select the optimal models for the three pollutants.The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations,while the GGACNN model excelled in TN concentration prediction.Compared to existing technologies,the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment,offering new insights and methods for water pollution prevention and control.展开更多
Missing data presents a crucial challenge in data analysis,especially in high-dimensional datasets,where missing data often leads to biased conclusions and degraded model performance.In this study,we present a novel a...Missing data presents a crucial challenge in data analysis,especially in high-dimensional datasets,where missing data often leads to biased conclusions and degraded model performance.In this study,we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision.The proposed loss combines(i)a guided,masked mean squared error focusing on missing entries;(ii)a noise-aware regularization term to improve resilience against data corruption;and(iii)a variance penalty to encourage expressive yet stable reconstructions.We evaluate the proposed model across four missingness mechanisms,such as Missing Completely at Random,Missing at Random,Missing Not at Random,and Missing Not at Random with quantile censorship,under systematically varied feature counts,sample sizes,and missingness ratios ranging from 5%to 60%.Four publicly available real-world datasets(Stroke Prediction,Pima Indians Diabetes,Cardiovascular Disease,and Framingham Heart Study)were used,and the obtained results show that our proposed model consistently outperforms baseline methods,including traditional and deep learning-based techniques.An ablation study reveals the additive value of each component in the loss function.Additionally,we assessed the downstream utility of imputed data through classification tasks,where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios.The model demonstrates strong scalability and robustness,improving performance with larger datasets and higher feature counts.These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations,making it a promising solution for robust data recovery in clinical applications.展开更多
Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness a...Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.展开更多
Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi...Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi-stego images provides good image quality but often results in low embedding capability.To address these challenges,this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image.The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order.Four secret bits are embedded into each block’s maximum pixel value,while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold.A similar embedding strategy is also applied to the minimum side of the block,including the second-smallest pixel value.This design enables each block to embed up to 14 bits of secret data.Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches,advancing the field of reversible steganography.展开更多
The increasing complexity of China’s electricity market creates substantial challenges for settlement automation,data consistency,and operational scalability.Existing provincial settlement systems are fragmented,lack...The increasing complexity of China’s electricity market creates substantial challenges for settlement automation,data consistency,and operational scalability.Existing provincial settlement systems are fragmented,lack a unified data structure,and depend heavily on manual intervention to process high-frequency and retroactive transactions.To address these limitations,a graph-based unified settlement framework is proposed to enhance automation,flexibility,and adaptability in electricity market settlements.A flexible attribute-graph model is employed to represent heterogeneousmulti-market data,enabling standardized integration,rapid querying,and seamless adaptation to evolving business requirements.An extensible operator library is designed to support configurable settlement rules,and a suite of modular tools—including dataset generation,formula configuration,billing templates,and task scheduling—facilitates end-to-end automated settlement processing.A robust refund-clearing mechanism is further incorporated,utilizing sandbox execution,data-version snapshots,dynamic lineage tracing,and real-time changecapture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions.Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach,demonstrating marked improvements in computational efficiency,system robustness,and automation.Moreover,enhanced settlement accuracy and high temporal granularity improve price-signal fidelity,promote cost-reflective tariffs,and incentivize energy-efficient and demand-responsive behavior among market participants.The method not only supports equitable and transparent market operations but also provides a generalizable,scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.展开更多
With the increasing emphasis on personal information protection,encryption through security protocols has emerged as a critical requirement in data transmission and reception processes.Nevertheless,IoT ecosystems comp...With the increasing emphasis on personal information protection,encryption through security protocols has emerged as a critical requirement in data transmission and reception processes.Nevertheless,IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices,spanning a range of devices from non-encrypted ones to fully encrypted ones.Given the limited visibility into payloads in this context,this study investigates AI-based attack detection methods that leverage encrypted traffic metadata,eliminating the need for decryption and minimizing system performance degradation—especially in light of these heterogeneous devices.Using the UNSW-NB15 and CICIoT-2023 dataset,encrypted and unencrypted traffic were categorized according to security protocol,and AI-based intrusion detection experiments were conducted for each traffic type based on metadata.To mitigate the problem of class imbalance,eight different data sampling techniques were applied.The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning(DL)models from various perspectives.The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic.In the UNSW-NB15 dataset,the f1-score of encrypted traffic was approximately 0.98,which is 4.3%higher than that of unencrypted traffic(approximately 0.94).In addition,analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower f1-score of roughly 0.43,indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance.Furthermore,when data sampling techniques were applied to encrypted traffic,the recall in the UNSWNB15(Encrypted)dataset improved by up to 23.0%,and in the CICIoT-2023(Encrypted)dataset by 20.26%,showing a similar level of improvement.Notably,in CICIoT-2023,f1-score and Receiver Operation Characteristic-Area Under the Curve(ROC-AUC)increased by 59.0%and 55.94%,respectively.These results suggest that data sampling can have a positive effect even in encrypted environments.However,the extent of the improvement may vary depending on data quality,model architecture,and sampling strategy.展开更多
Automated essay scoring(AES)systems have gained significant importance in educational settings,offering a scalable,efficient,and objective method for evaluating student essays.However,developing AES systems for Arabic...Automated essay scoring(AES)systems have gained significant importance in educational settings,offering a scalable,efficient,and objective method for evaluating student essays.However,developing AES systems for Arabic poses distinct challenges due to the language’s complex morphology,diglossia,and the scarcity of annotated datasets.This paper presents a hybrid approach to Arabic AES by combining text-based,vector-based,and embeddingbased similarity measures to improve essay scoring accuracy while minimizing the training data required.Using a large Arabic essay dataset categorized into thematic groups,the study conducted four experiments to evaluate the impact of feature selection,data size,and model performance.Experiment 1 established a baseline using a non-machine learning approach,selecting top-N correlated features to predict essay scores.The subsequent experiments employed 5-fold cross-validation.Experiment 2 showed that combining embedding-based,text-based,and vector-based features in a Random Forest(RF)model achieved an R2 of 88.92%and an accuracy of 83.3%within a 0.5-point tolerance.Experiment 3 further refined the feature selection process,demonstrating that 19 correlated features yielded optimal results,improving R2 to 88.95%.In Experiment 4,an optimal data efficiency training approach was introduced,where training data portions increased from 5%to 50%.The study found that using just 10%of the data achieved near-peak performance,with an R2 of 85.49%,emphasizing an effective trade-off between performance and computational costs.These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems,especially in low-resource environments,addressing linguistic challenges while ensuring efficient data usage.展开更多
Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods...Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods,based on reliable existing data stored in project management tools’datasets,automating this evaluation process becomes a natural step forward.In this context,our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems.For this,we mathematically formalize two categories of expertise:technology-specific expertise,which denotes the skills required for a particular technology,and general expertise,which encapsulates overall knowledge in the software industry.Afterward,we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers(BERT)-like transformers to handle the unique characteristics of project tool datasets effectively.Finally,our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives.The method was experimentally validated,yielding promising results.展开更多
The rapid growth of biomedical data,particularly multi-omics data including genomes,transcriptomics,proteomics,metabolomics,and epigenomics,medical research and clinical decision-making confront both new opportunities...The rapid growth of biomedical data,particularly multi-omics data including genomes,transcriptomics,proteomics,metabolomics,and epigenomics,medical research and clinical decision-making confront both new opportunities and obstacles.The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods.As a consequence,deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships.This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining.We demonstrate how autoencoders,variational autoencoders,multimodal models,attention mechanisms,transformers,and graph neural networks enable pattern analysis and recognition across all omics data.Deep learning has been found to be effective in illness classification,biomarker identification,gene network learning,and therapeutic efficacy prediction.We also consider critical problems like as data quality,model explainability,whether findings can be repeated,and computational power requirements.We now consider future elements of combining omics with clinical and imaging data,explainable AI,federated learning,and real-time diagnostics.Overall,this study emphasises the need of collaborating across disciplines to advance deep learning-based multi-omics research for precision medicine and comprehending complicated disorders.展开更多
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling,yet its clinical translation still depends on effective integration across diverse omics and data modalities.Emerging ...High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling,yet its clinical translation still depends on effective integration across diverse omics and data modalities.Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states,while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application.This paper outlines a concise roadmap for AI-driven,transcriptome-centered multi-omics integration in precision medicine(Figure 1).展开更多
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity.Multimodal artificial intelligence(AI)addresses this challenge by integrating diverse data sources-including ...Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity.Multimodal artificial intelligence(AI)addresses this challenge by integrating diverse data sources-including computed tomography(CT),magnetic resonance imaging(MRI),endoscopic imaging,and genomic profiles-to enable intelligent decision-making for individualized therapy.This approach leverages AI algorithms to fuse imaging,endoscopic,and omics data,facilitating comprehensive characterization of tumor biology,prediction of treatment response,and optimization of therapeutic strategies.By combining CT and MRI for structural assessment,endoscopic data for real-time visual inspection,and genomic information for molecular profiling,multimodal AI enhances the accuracy of patient stratification and treatment personalization.The clinical implementation of this technology demonstrates potential for improving patient outcomes,advancing precision oncology,and supporting individualized care in gastrointestinal cancers.Ultimately,multimodal AI serves as a transformative tool in oncology,bridging data integration with clinical application to effectively tailor therapies.展开更多
We investigate the null tests of cosmic accelerated expansion by using the baryon acoustic oscillation(BAO)data measured by the dark energy spectroscopic instrument(DESI)and reconstruct the dimensionless Hubble parame...We investigate the null tests of cosmic accelerated expansion by using the baryon acoustic oscillation(BAO)data measured by the dark energy spectroscopic instrument(DESI)and reconstruct the dimensionless Hubble parameter E(z)from the DESI BAO Alcock-Paczynski(AP)data using Gaussian process to perform the null test.We find strong evidence of accelerated expansion from the DESI BAO AP data.By reconstructing the deceleration parameter q(z) from the DESI BAO AP data,we find that accelerated expansion persisted until z■0.7 with a 99.7%confidence level.Additionally,to provide insights into the Hubble tension problem,we propose combining the reconstructed E(z) with D_(H)/r_(d) data to derive a model-independent result r_(d)h=99.8±3.1 Mpc.This result is consistent with measurements from cosmic microwave background(CMB)anisotropies using the ΛCDM model.We also propose a model-independent method for reconstructing the comoving angular diameter distance D_(M)(z) from the distance modulus μ,using SNe Ia data and combining this result with DESI BAO data of D_(M)/r_(d) to constrain the value of r_(d).We find that the value of r_(d),derived from this model-independent method,is smaller than that obtained from CMB measurements,with a significant discrepancy of at least 4.17σ.All the conclusions drawn in this paper are independent of cosmological models and gravitational theories.展开更多
Parkinson’s disease(PD)is a debilitating neurological disorder affecting over 10 million people worldwide.PD classification models using voice signals as input are common in the literature.It is believed that using d...Parkinson’s disease(PD)is a debilitating neurological disorder affecting over 10 million people worldwide.PD classification models using voice signals as input are common in the literature.It is believed that using deep learning algorithms further enhances performance;nevertheless,it is challenging due to the nature of small-scale and imbalanced PD datasets.This paper proposed a convolutional neural network-based deep support vector machine(CNN-DSVM)to automate the feature extraction process using CNN and extend the conventional SVM to a DSVM for better classification performance in small-scale PD datasets.A customized kernel function reduces the impact of biased classification towards the majority class(healthy candidates in our consideration).An improved generative adversarial network(IGAN)was designed to generate additional training data to enhance the model’s performance.For performance evaluation,the proposed algorithm achieves a sensitivity of 97.6%and a specificity of 97.3%.The performance comparison is evaluated from five perspectives,including comparisons with different data generation algorithms,feature extraction techniques,kernel functions,and existing works.Results reveal the effectiveness of the IGAN algorithm,which improves the sensitivity and specificity by 4.05%–4.72%and 4.96%–5.86%,respectively;and the effectiveness of the CNN-DSVM algorithm,which improves the sensitivity by 1.24%–57.4%and specificity by 1.04%–163%and reduces biased detection towards the majority class.The ablation experiments confirm the effectiveness of individual components.Two future research directions have also been suggested.展开更多
A new algorithm for clustering multiple data streams is proposed.The algorithm can effectively cluster data streams which show similar behavior with some unknown time delays.The algorithm uses the autoregressive (AR...A new algorithm for clustering multiple data streams is proposed.The algorithm can effectively cluster data streams which show similar behavior with some unknown time delays.The algorithm uses the autoregressive (AR) modeling technique to measure correlations between data streams.It exploits estimated frequencies spectra to extract the essential features of streams.Each stream is represented as the sum of spectral components and the correlation is measured component-wise.Each spectral component is described by four parameters,namely,amplitude,phase,damping rate and frequency.The ε-lag-correlation between two spectral components is calculated.The algorithm uses such information as similarity measures in clustering data streams.Based on a sliding window model,the algorithm can continuously report the most recent clustering results and adjust the number of clusters.Experiments on real and synthetic streams show that the proposed clustering method has a higher speed and clustering quality than other similar methods.展开更多
Passive seismic data contain large amounts of low-frequency information. To effectively extract and compensate active seismic data that lack low frequencies, we propose a multitaper spectral reconstruction method base...Passive seismic data contain large amounts of low-frequency information. To effectively extract and compensate active seismic data that lack low frequencies, we propose a multitaper spectral reconstruction method based on multiple sinusoidal tapers and derive equations for multisource and multitrace conditions. Compared to conventional cross correlation and deconvolution reconstruction methods, the proposed method can more accurately reconstruct the relative amplitude of recordings. Multidomain iterative denoising improves the SNR of retrieved data. By analyzing the spectral characteristics of passive data before and after reconstruction, we found that the data are expressed more clearly after reconstruction and denoising. To compensate for the low-frequency information in active data using passive seismic data, we match the power spectrum, supplement it, and then smooth it in the frequency domain. Finally, we use numerical simulation to verify the proposed method and conduct prestack depth migration using data after low-frequency compensation. The proposed power-matching method adds the losing low frequency information in the active seismic data using the low-frequency information of passive- source seismic data. The imaging of compensated data gives a more detailed information of deep structures.展开更多
Traditional two-dimensional(2D) complex resistivity forward modeling is based on Poisson's equation but spectral induced polarization(SIP) data are the coproducts of the induced polarization(IP) and the electro...Traditional two-dimensional(2D) complex resistivity forward modeling is based on Poisson's equation but spectral induced polarization(SIP) data are the coproducts of the induced polarization(IP) and the electromagnetic induction(EMI) effects.This is especially true under high frequencies,where the EMI effect can exceed the IP effect.2D inversion that only considers the IP effect reduces the reliability of the inversion data.In this paper,we derive differential equations using Maxwell's equations.With the introduction of the Cole-Cole model,we use the finite-element method to conduct2 D SIP forward modeling that considers the EMI and IP effects simultaneously.The data-space Occam method,in which different constraints to the model smoothness and parametric boundaries are introduced,is then used to simultaneously obtain the four parameters of the Cole-Cole model using multi-array electric field data.This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity.To improve the computational efficiency,message passing interface programming was used to accelerate the 2D SIP forward modeling and inversion.Synthetic datasets were tested using both serial and parallel algorithms,and the tests suggest that the proposed parallel algorithm is robust and efficient.展开更多
Two field experiments were conducted in Jiashan and Yuhang towns of Zhejiang Province, China, to study the feasibility of predicting N status of rice using canopy spectral reflectance. The canopy spectral reflectance ...Two field experiments were conducted in Jiashan and Yuhang towns of Zhejiang Province, China, to study the feasibility of predicting N status of rice using canopy spectral reflectance. The canopy spectral reflectance of rice grown with different levels of N inputs was determined at several important growth stages. Statistical analyses showed that as a result of the different levels of N supply, there were significant differences in the N concentrations of canopy leaves at different growth stages. Since spectral reflectance measurements showed that the N status of rice was related to reflectance in the visible and NIR (near-infrared) ranges, observations for rice in 1 nm bandwidths were then converted to bandwidths in the visible and NIR spectral regions with IKONOS (space imaging) bandwidths and vegetation indices being used to predict the N status of rice. The results indicated that canopy reflectance measurements converted to ratio vegetation index (RVI) and normalized difference vegetation index (NDVI) for simulated IKONOS bands provided a better prediction of rice N status than the reflectance measurements in the simulated IKONOS bands themselves. The precision of the developed regression models using RVI and NDVI proved to be very high with R2 ranging from 0.82 to 0.94, and when validated with experimental data from a different site, the results were satisfactory with R2 ranging from 0.55 to 0.70. Thus, the results showed that theoretically it should be possible to monitor N status using remotely sensed data.展开更多
基金support from the National Natural Science Foundation of China(Grant Nos:52379103 and 52279103)the Natural Science Foundation of Shandong Province(Grant No:ZR2023YQ049).
文摘Bayesian-optimized lithology identification has important basic geological research significance and engineering application value,and this paper proposes a Bayesian-optimized lithology identification method based on machine learning of rock visible and near-infrared spectral data.First,the rock spectral data are preprocessed using Savitzky-Golay(SG)smoothing to remove the noise of the spectral data;then,the preprocessed rock spectral data are downscaled using Principal Component Analysis(PCA)to reduce the redundancy of the data,optimize the effective discriminative information,and obtain the rock spectral features;finally,a Bayesian-optimized lithology identification model is established based on rock spectral features,optimize the model hyperparameters using Bayesian optimization(BO)algorithm to avoid the combination of hyperparameters falling into the local optimal solution,and output the predicted type of rock,so as to realize the Bayesian-optimized lithology identification.In addition,this paper conducts comparative analysis on models based on Artificial Neural Network(ANN)/Random Forest(RF),dimensionality reduction/full band,and optimization algorithms.It uses the confusion matrix,accuracy,Precison(P),Recall(R)and F_(1)values(F_(1))as the evaluation indexes of model accuracy.The results indicate that the lithology identification model optimized by the BO-ANN after dimensionality reduction achieves an accuracy of up to 99.80%,up to 99.79%and up to 99.79%.Compared with the BO-RF model,it has higher identification accuracy and better stability for each type of rock identification.The experiments and reliability analysis show that the Bayesian-optimized lithology identification method proposed in this paper has good robustness and generalization performance,which is of great significance for realizing fast,accurate and Bayesian-optimized lithology identification in tunnel site.
基金supported by the Young Data Scientist Program of the China National Astronomical Data Center。
文摘The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) has become a crucial resource in astronomical research,offering a vast amount of spectral data for stars,galaxies,and quasars.This paper presents the data processing methods used by LAMOST,focusing on the classification and redshift measurement of large spectral data sets through template matching,as well as the creation of data products.Additionally,this paper details the construction of the Multiple Epoch Catalogs by integrating LAMOST spectral data with photometric data from Gaia and Pan-STARRS,and explains the creation of both low-and medium-resolution data products.
基金supported by the National Key R&D Program of China(2021YFC2203502 and 2022YFF0711502)the National Natural Science Foundation of China(NSFC)(12173077)+4 种基金the Tianshan Talent Project of Xinjiang Uygur Autonomous Region(2022TSYCCX0095 and 2023TSYCCX0112)the Scientific Instrument Developing Project of the Chinese Academy of Sciences(PTYQ2022YZZD01)China National Astronomical Data Center(NADC)the Operation,Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments,budgeted from the Ministry of Finance of China(MOF)and administrated by the Chinese Academy of SciencesNatural Science Foundation of Xinjiang Uygur Autonomous Region(2022D01A360).
文摘Astronomical spectroscopy is crucial for exploring the physical properties,chemical composition,and kinematic behavior of celestial objects.With continuous advancements in observational technology,astronomical spectroscopy faces the dual challenges of rapidly expanding data volumes and relatively lagging data processing capabilities.In this context,the rise of artificial intelligence technologies offers an innovative solution to address these challenges.This paper analyzes the latest developments in the application of machine learning for astronomical spectral data mining and discusses future research directions in AI-based spectral studies.However,the application of machine learning technologies presents several challenges.The high complexity of models often comes with insufficient interpretability,complicating scientific understanding.Moreover,the large-scale computational demands place higher requirements on hardware resources,leading to a significant increase in computational costs.AI-based astronomical spectroscopy research should advance in the following key directions.First,develop efficient data augmentation techniques to enhance model generalization capabilities.Second,explore more interpretable model designs to ensure the reliability and transparency of scientific conclusions.Third,optimize computational efficiency and reduce the threshold for deep-learning applications through collaborative innovations in algorithms and hardware.Furthermore,promoting the integration of cross-band data processing is essential to achieve seamless integration and comprehensive analysis of multi-source data,providing richer,multidimensional information to uncover the mysteries of the universe.
基金Supported by Natural Science Basic Research Plan in Shaanxi Province of China(Program No.2022JM-396)the Strategic Priority Research Program of the Chinese Academy of Sciences,Grant No.XDA23040101+4 种基金Shaanxi Province Key Research and Development Projects(Program No.2023-YBSF-437)Xi'an Shiyou University Graduate Student Innovation Fund Program(Program No.YCX2412041)State Key Laboratory of Air Traffic Management System and Technology(SKLATM202001)Tianjin Education Commission Research Program Project(2020KJ028)Fundamental Research Funds for the Central Universities(3122019132)。
文摘Developing an accurate and efficient comprehensive water quality prediction model and its assessment method is crucial for the prevention and control of water pollution.Deep learning(DL),as one of the most promising technologies today,plays a crucial role in the effective assessment of water body health,which is essential for water resource management.This study models using both the original dataset and a dataset augmented with Generative Adversarial Networks(GAN).It integrates optimization algorithms(OA)with Convolutional Neural Networks(CNN)to propose a comprehensive water quality model evaluation method aiming at identifying the optimal models for different pollutants.Specifically,after preprocessing the spectral dataset,data augmentation was conducted to obtain two datasets.Then,six new models were developed on these datasets using particle swarm optimization(PSO),genetic algorithm(GA),and simulated annealing(SA)combined with CNN to simulate and forecast the concentrations of three water pollutants:Chemical Oxygen Demand(COD),Total Nitrogen(TN),and Total Phosphorus(TP).Finally,seven model evaluation methods,including uncertainty analysis,were used to evaluate the constructed models and select the optimal models for the three pollutants.The evaluation results indicate that the GPSCNN model performed best in predicting COD and TP concentrations,while the GGACNN model excelled in TN concentration prediction.Compared to existing technologies,the proposed models and evaluation methods provide a more comprehensive and rapid approach to water body prediction and assessment,offering new insights and methods for water pollution prevention and control.
文摘Missing data presents a crucial challenge in data analysis,especially in high-dimensional datasets,where missing data often leads to biased conclusions and degraded model performance.In this study,we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision.The proposed loss combines(i)a guided,masked mean squared error focusing on missing entries;(ii)a noise-aware regularization term to improve resilience against data corruption;and(iii)a variance penalty to encourage expressive yet stable reconstructions.We evaluate the proposed model across four missingness mechanisms,such as Missing Completely at Random,Missing at Random,Missing Not at Random,and Missing Not at Random with quantile censorship,under systematically varied feature counts,sample sizes,and missingness ratios ranging from 5%to 60%.Four publicly available real-world datasets(Stroke Prediction,Pima Indians Diabetes,Cardiovascular Disease,and Framingham Heart Study)were used,and the obtained results show that our proposed model consistently outperforms baseline methods,including traditional and deep learning-based techniques.An ablation study reveals the additive value of each component in the loss function.Additionally,we assessed the downstream utility of imputed data through classification tasks,where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios.The model demonstrates strong scalability and robustness,improving performance with larger datasets and higher feature counts.These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations,making it a promising solution for robust data recovery in clinical applications.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R104)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Modern intrusion detection systems(MIDS)face persistent challenges in coping with the rapid evolution of cyber threats,high-volume network traffic,and imbalanced datasets.Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively.This study introduces an advanced,explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets,which reflects real-world network behavior through a blend of normal and diverse attack classes.The methodology begins with sophisticated data preprocessing,incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions,ensuring standardized and model-ready inputs.Critical dimensionality reduction is achieved via the Harris Hawks Optimization(HHO)algorithm—a nature-inspired metaheuristic modeled on hawks’hunting strategies.HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance.Following feature selection,the SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types.The stacked architecture is then employed,combining the strengths of XGBoost,SVM,and RF as base learners.This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers.The model was evaluated using standard classification metrics:precision,recall,F1-score,and overall accuracy.The best overall performance was recorded with an accuracy of 99.44%for UNSW-NB15,demonstrating the model’s effectiveness.After balancing,the model demonstrated a clear improvement in detecting the attacks.We tested the model on four datasets to show the effectiveness of the proposed approach and performed the ablation study to check the effect of each parameter.Also,the proposed model is computationaly efficient.To support transparency and trust in decision-making,explainable AI(XAI)techniques are incorporated that provides both global and local insight into feature contributions,and offers intuitive visualizations for individual predictions.This makes it suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
基金funded by University of Transport and Communications(UTC)under grant number T2025-CN-004.
文摘Reversible data hiding(RDH)enables secret data embedding while preserving complete cover image recovery,making it crucial for applications requiring image integrity.The pixel value ordering(PVO)technique used in multi-stego images provides good image quality but often results in low embedding capability.To address these challenges,this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image.The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order.Four secret bits are embedded into each block’s maximum pixel value,while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold.A similar embedding strategy is also applied to the minimum side of the block,including the second-smallest pixel value.This design enables each block to embed up to 14 bits of secret data.Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches,advancing the field of reversible steganography.
基金funded by the Science and Technology Project of State Grid Corporation of China(5108-202355437A-3-2-ZN).
文摘The increasing complexity of China’s electricity market creates substantial challenges for settlement automation,data consistency,and operational scalability.Existing provincial settlement systems are fragmented,lack a unified data structure,and depend heavily on manual intervention to process high-frequency and retroactive transactions.To address these limitations,a graph-based unified settlement framework is proposed to enhance automation,flexibility,and adaptability in electricity market settlements.A flexible attribute-graph model is employed to represent heterogeneousmulti-market data,enabling standardized integration,rapid querying,and seamless adaptation to evolving business requirements.An extensible operator library is designed to support configurable settlement rules,and a suite of modular tools—including dataset generation,formula configuration,billing templates,and task scheduling—facilitates end-to-end automated settlement processing.A robust refund-clearing mechanism is further incorporated,utilizing sandbox execution,data-version snapshots,dynamic lineage tracing,and real-time changecapture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions.Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach,demonstrating marked improvements in computational efficiency,system robustness,and automation.Moreover,enhanced settlement accuracy and high temporal granularity improve price-signal fidelity,promote cost-reflective tariffs,and incentivize energy-efficient and demand-responsive behavior among market participants.The method not only supports equitable and transparent market operations but also provides a generalizable,scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)grant funded by the Korea government(MSIT)(No.RS-2023-00235509Development of security monitoring technology based network behavior against encrypted cyber threats in ICT convergence environment).
文摘With the increasing emphasis on personal information protection,encryption through security protocols has emerged as a critical requirement in data transmission and reception processes.Nevertheless,IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices,spanning a range of devices from non-encrypted ones to fully encrypted ones.Given the limited visibility into payloads in this context,this study investigates AI-based attack detection methods that leverage encrypted traffic metadata,eliminating the need for decryption and minimizing system performance degradation—especially in light of these heterogeneous devices.Using the UNSW-NB15 and CICIoT-2023 dataset,encrypted and unencrypted traffic were categorized according to security protocol,and AI-based intrusion detection experiments were conducted for each traffic type based on metadata.To mitigate the problem of class imbalance,eight different data sampling techniques were applied.The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three Deep Learning(DL)models from various perspectives.The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic.In the UNSW-NB15 dataset,the f1-score of encrypted traffic was approximately 0.98,which is 4.3%higher than that of unencrypted traffic(approximately 0.94).In addition,analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower f1-score of roughly 0.43,indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance.Furthermore,when data sampling techniques were applied to encrypted traffic,the recall in the UNSWNB15(Encrypted)dataset improved by up to 23.0%,and in the CICIoT-2023(Encrypted)dataset by 20.26%,showing a similar level of improvement.Notably,in CICIoT-2023,f1-score and Receiver Operation Characteristic-Area Under the Curve(ROC-AUC)increased by 59.0%and 55.94%,respectively.These results suggest that data sampling can have a positive effect even in encrypted environments.However,the extent of the improvement may vary depending on data quality,model architecture,and sampling strategy.
基金funded by Deanship of Graduate studies and Scientific Research at Jouf University under grant No.(DGSSR-2024-02-01264).
文摘Automated essay scoring(AES)systems have gained significant importance in educational settings,offering a scalable,efficient,and objective method for evaluating student essays.However,developing AES systems for Arabic poses distinct challenges due to the language’s complex morphology,diglossia,and the scarcity of annotated datasets.This paper presents a hybrid approach to Arabic AES by combining text-based,vector-based,and embeddingbased similarity measures to improve essay scoring accuracy while minimizing the training data required.Using a large Arabic essay dataset categorized into thematic groups,the study conducted four experiments to evaluate the impact of feature selection,data size,and model performance.Experiment 1 established a baseline using a non-machine learning approach,selecting top-N correlated features to predict essay scores.The subsequent experiments employed 5-fold cross-validation.Experiment 2 showed that combining embedding-based,text-based,and vector-based features in a Random Forest(RF)model achieved an R2 of 88.92%and an accuracy of 83.3%within a 0.5-point tolerance.Experiment 3 further refined the feature selection process,demonstrating that 19 correlated features yielded optimal results,improving R2 to 88.95%.In Experiment 4,an optimal data efficiency training approach was introduced,where training data portions increased from 5%to 50%.The study found that using just 10%of the data achieved near-peak performance,with an R2 of 85.49%,emphasizing an effective trade-off between performance and computational costs.These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems,especially in low-resource environments,addressing linguistic challenges while ensuring efficient data usage.
基金supported by the project“Romanian Hub for Artificial Intelligence-HRIA”,Smart Growth,Digitization and Financial Instruments Program,2021–2027,MySMIS No.334906.
文摘Objective expertise evaluation of individuals,as a prerequisite stage for team formation,has been a long-term desideratum in large software development companies.With the rapid advancements in machine learning methods,based on reliable existing data stored in project management tools’datasets,automating this evaluation process becomes a natural step forward.In this context,our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems.For this,we mathematically formalize two categories of expertise:technology-specific expertise,which denotes the skills required for a particular technology,and general expertise,which encapsulates overall knowledge in the software industry.Afterward,we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers(BERT)-like transformers to handle the unique characteristics of project tool datasets effectively.Finally,our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives.The method was experimentally validated,yielding promising results.
文摘The rapid growth of biomedical data,particularly multi-omics data including genomes,transcriptomics,proteomics,metabolomics,and epigenomics,medical research and clinical decision-making confront both new opportunities and obstacles.The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods.As a consequence,deep learning has emerged as a strong tool for analysing numerous omics data due to its ability to handle complex and non-linear relationships.This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining.We demonstrate how autoencoders,variational autoencoders,multimodal models,attention mechanisms,transformers,and graph neural networks enable pattern analysis and recognition across all omics data.Deep learning has been found to be effective in illness classification,biomarker identification,gene network learning,and therapeutic efficacy prediction.We also consider critical problems like as data quality,model explainability,whether findings can be repeated,and computational power requirements.We now consider future elements of combining omics with clinical and imaging data,explainable AI,federated learning,and real-time diagnostics.Overall,this study emphasises the need of collaborating across disciplines to advance deep learning-based multi-omics research for precision medicine and comprehending complicated disorders.
文摘High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling,yet its clinical translation still depends on effective integration across diverse omics and data modalities.Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states,while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application.This paper outlines a concise roadmap for AI-driven,transcriptome-centered multi-omics integration in precision medicine(Figure 1).
基金Supported by Xuhui District Health Commission,No.SHXH202214.
文摘Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity.Multimodal artificial intelligence(AI)addresses this challenge by integrating diverse data sources-including computed tomography(CT),magnetic resonance imaging(MRI),endoscopic imaging,and genomic profiles-to enable intelligent decision-making for individualized therapy.This approach leverages AI algorithms to fuse imaging,endoscopic,and omics data,facilitating comprehensive characterization of tumor biology,prediction of treatment response,and optimization of therapeutic strategies.By combining CT and MRI for structural assessment,endoscopic data for real-time visual inspection,and genomic information for molecular profiling,multimodal AI enhances the accuracy of patient stratification and treatment personalization.The clinical implementation of this technology demonstrates potential for improving patient outcomes,advancing precision oncology,and supporting individualized care in gastrointestinal cancers.Ultimately,multimodal AI serves as a transformative tool in oncology,bridging data integration with clinical application to effectively tailor therapies.
基金supported in part by the National Key Research and Development Program of China (Grant No.2020YFC2201504)the National Natural Science Foundation of China (Grant Nos.12588101 and 12535002)。
文摘We investigate the null tests of cosmic accelerated expansion by using the baryon acoustic oscillation(BAO)data measured by the dark energy spectroscopic instrument(DESI)and reconstruct the dimensionless Hubble parameter E(z)from the DESI BAO Alcock-Paczynski(AP)data using Gaussian process to perform the null test.We find strong evidence of accelerated expansion from the DESI BAO AP data.By reconstructing the deceleration parameter q(z) from the DESI BAO AP data,we find that accelerated expansion persisted until z■0.7 with a 99.7%confidence level.Additionally,to provide insights into the Hubble tension problem,we propose combining the reconstructed E(z) with D_(H)/r_(d) data to derive a model-independent result r_(d)h=99.8±3.1 Mpc.This result is consistent with measurements from cosmic microwave background(CMB)anisotropies using the ΛCDM model.We also propose a model-independent method for reconstructing the comoving angular diameter distance D_(M)(z) from the distance modulus μ,using SNe Ia data and combining this result with DESI BAO data of D_(M)/r_(d) to constrain the value of r_(d).We find that the value of r_(d),derived from this model-independent method,is smaller than that obtained from CMB measurements,with a significant discrepancy of at least 4.17σ.All the conclusions drawn in this paper are independent of cosmological models and gravitational theories.
基金The work described in this paper was fully supported by a grant from Hong Kong Metropolitan University(RIF/2021/05).
文摘Parkinson’s disease(PD)is a debilitating neurological disorder affecting over 10 million people worldwide.PD classification models using voice signals as input are common in the literature.It is believed that using deep learning algorithms further enhances performance;nevertheless,it is challenging due to the nature of small-scale and imbalanced PD datasets.This paper proposed a convolutional neural network-based deep support vector machine(CNN-DSVM)to automate the feature extraction process using CNN and extend the conventional SVM to a DSVM for better classification performance in small-scale PD datasets.A customized kernel function reduces the impact of biased classification towards the majority class(healthy candidates in our consideration).An improved generative adversarial network(IGAN)was designed to generate additional training data to enhance the model’s performance.For performance evaluation,the proposed algorithm achieves a sensitivity of 97.6%and a specificity of 97.3%.The performance comparison is evaluated from five perspectives,including comparisons with different data generation algorithms,feature extraction techniques,kernel functions,and existing works.Results reveal the effectiveness of the IGAN algorithm,which improves the sensitivity and specificity by 4.05%–4.72%and 4.96%–5.86%,respectively;and the effectiveness of the CNN-DSVM algorithm,which improves the sensitivity by 1.24%–57.4%and specificity by 1.04%–163%and reduces biased detection towards the majority class.The ablation experiments confirm the effectiveness of individual components.Two future research directions have also been suggested.
基金The National Natural Science Foundation of China(No.60673060)the Natural Science Foundation of Jiangsu Province(No.BK2005047)
文摘A new algorithm for clustering multiple data streams is proposed.The algorithm can effectively cluster data streams which show similar behavior with some unknown time delays.The algorithm uses the autoregressive (AR) modeling technique to measure correlations between data streams.It exploits estimated frequencies spectra to extract the essential features of streams.Each stream is represented as the sum of spectral components and the correlation is measured component-wise.Each spectral component is described by four parameters,namely,amplitude,phase,damping rate and frequency.The ε-lag-correlation between two spectral components is calculated.The algorithm uses such information as similarity measures in clustering data streams.Based on a sliding window model,the algorithm can continuously report the most recent clustering results and adjust the number of clusters.Experiments on real and synthetic streams show that the proposed clustering method has a higher speed and clustering quality than other similar methods.
基金sponsored by the Natural Science Foundation of China(No.41374115)National High Technology Research and Development Program of China(863 project)(No.2014AA06A605)
文摘Passive seismic data contain large amounts of low-frequency information. To effectively extract and compensate active seismic data that lack low frequencies, we propose a multitaper spectral reconstruction method based on multiple sinusoidal tapers and derive equations for multisource and multitrace conditions. Compared to conventional cross correlation and deconvolution reconstruction methods, the proposed method can more accurately reconstruct the relative amplitude of recordings. Multidomain iterative denoising improves the SNR of retrieved data. By analyzing the spectral characteristics of passive data before and after reconstruction, we found that the data are expressed more clearly after reconstruction and denoising. To compensate for the low-frequency information in active data using passive seismic data, we match the power spectrum, supplement it, and then smooth it in the frequency domain. Finally, we use numerical simulation to verify the proposed method and conduct prestack depth migration using data after low-frequency compensation. The proposed power-matching method adds the losing low frequency information in the active seismic data using the low-frequency information of passive- source seismic data. The imaging of compensated data gives a more detailed information of deep structures.
基金jointly sponsored by the National Natural Science Foundation of China(Grant No.41374078)the Geological Survey Projects of the Ministry of Land and Resources of China(Grant Nos.12120113086100 and 12120113101300)Beijing Higher Education Young Elite Teacher Project
文摘Traditional two-dimensional(2D) complex resistivity forward modeling is based on Poisson's equation but spectral induced polarization(SIP) data are the coproducts of the induced polarization(IP) and the electromagnetic induction(EMI) effects.This is especially true under high frequencies,where the EMI effect can exceed the IP effect.2D inversion that only considers the IP effect reduces the reliability of the inversion data.In this paper,we derive differential equations using Maxwell's equations.With the introduction of the Cole-Cole model,we use the finite-element method to conduct2 D SIP forward modeling that considers the EMI and IP effects simultaneously.The data-space Occam method,in which different constraints to the model smoothness and parametric boundaries are introduced,is then used to simultaneously obtain the four parameters of the Cole-Cole model using multi-array electric field data.This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity.To improve the computational efficiency,message passing interface programming was used to accelerate the 2D SIP forward modeling and inversion.Synthetic datasets were tested using both serial and parallel algorithms,and the tests suggest that the proposed parallel algorithm is robust and efficient.
基金Project supported by the National Natural Science Foundation of China (Nos. 30070444 and 40201021)the British Council (No. SHA/992/308)the Doctor Foundation of Qingdao University of Science and Technology.
文摘Two field experiments were conducted in Jiashan and Yuhang towns of Zhejiang Province, China, to study the feasibility of predicting N status of rice using canopy spectral reflectance. The canopy spectral reflectance of rice grown with different levels of N inputs was determined at several important growth stages. Statistical analyses showed that as a result of the different levels of N supply, there were significant differences in the N concentrations of canopy leaves at different growth stages. Since spectral reflectance measurements showed that the N status of rice was related to reflectance in the visible and NIR (near-infrared) ranges, observations for rice in 1 nm bandwidths were then converted to bandwidths in the visible and NIR spectral regions with IKONOS (space imaging) bandwidths and vegetation indices being used to predict the N status of rice. The results indicated that canopy reflectance measurements converted to ratio vegetation index (RVI) and normalized difference vegetation index (NDVI) for simulated IKONOS bands provided a better prediction of rice N status than the reflectance measurements in the simulated IKONOS bands themselves. The precision of the developed regression models using RVI and NDVI proved to be very high with R2 ranging from 0.82 to 0.94, and when validated with experimental data from a different site, the results were satisfactory with R2 ranging from 0.55 to 0.70. Thus, the results showed that theoretically it should be possible to monitor N status using remotely sensed data.