China's mainland has a sparse, uneven distribution of meteorological stations. The estimation accuracy of existing models for creating high-resolution surfaces of meteorological data is limited for air temperature and low for relative humidity and wind speed (few studies have been reported). This study compared a typical generalized additive model (GAM) and an autoencoder-based residual neural network (hereafter, residual network for short) in predicting three meteorological parameters, namely air temperature, relative humidity, and wind speed, using data from 824 monitoring stations across China's mainland in 2015. The performance of the two models was assessed using a 10-fold cross-validation procedure. The air temperature models employ basic variables such as latitude, longitude, elevation, and the day of the year. The relative humidity models additionally employ air temperature and ozone concentration as covariates, while the wind speed models use coarse-resolution wind speed reanalysis data as covariates, in addition to the fundamental variables. In our spatiotemporal models, spatial coordinates represent spatial variation, while the day-of-year index captures temporal variation. Compared with GAM, the residual network considerably improved prediction accuracy: on average, the cross-validation (CV) R2 of the three meteorological parameters rose by 0.21, the CV root-mean-square error (RMSE) fell by 37%, and the relative humidity model improved the most. The accuracy of the relative humidity models improved considerably once a monthly index was included, demonstrating that temporal variables at different scales are crucial for relative humidity models. We also discuss the benefits and drawbacks of using coarse-resolution reanalysis data and nearest-neighbor values as covariates. This study indicates that, compared with classic GAMs, the residual network model can considerably increase the accuracy of national high-spatial (1 km) and high-temporal (daily) resolution meteorological data. Our findings have implications for high-resolution, high-accuracy meteorological parameter mapping in China.
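For readers unfamiliar with the evaluation protocol, the sketch below shows a 10-fold cross-validation loop of the kind described, computing CV R2 and CV RMSE on synthetic station data; the random-forest stand-in for the residual network, the feature names, and the toy temperature formula are illustrative assumptions, not the paper's implementation.

```python
# Minimal 10-fold CV sketch on synthetic station records (assumed schema).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 824  # one record per station, purely synthetic
X = np.column_stack([
    rng.uniform(18, 53, n),    # latitude
    rng.uniform(73, 135, n),   # longitude
    rng.uniform(0, 5000, n),   # elevation (m)
    rng.integers(1, 366, n),   # day of year
])
y = 25 - 0.6 * X[:, 0] - 0.005 * X[:, 2] + rng.normal(0, 1, n)  # toy air temperature

r2s, rmses = [], []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=100).fit(X[train], y[train])
    pred = model.predict(X[test])
    r2s.append(r2_score(y[test], pred))
    rmses.append(mean_squared_error(y[test], pred) ** 0.5)

print(f"CV R2 = {np.mean(r2s):.3f}, CV RMSE = {np.mean(rmses):.3f}")
```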
In integrating geo-spatial datasets, layers are sometimes unable to perfectly overlay each other. In most cases, the cause of misalignment is cartographic variation in the objects forming features in the datasets. This could be due to actual changes on the ground, or to the collection or storage approaches used, leading to overlaps or gaps between features. In this paper, we present an alignment method that uses adjustment algorithms to update the geometry of features within a dataset, or across complementary adjacent datasets, so that they align to achieve perfect integration. The method identifies every unique spatial instance in the datasets and the spatial points that define its geometry; the differences are compared and used to compute the alignment parameters. This provides a uniform alignment of geo-spatial features that takes into account changes in the different datasets being integrated, without affecting their topology and attributes.
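As a minimal illustration of deriving alignment parameters from matched spatial points, the sketch below estimates a least-squares translation between corresponding vertices of one feature in two datasets; the actual method may solve for richer adjustment parameters, so this shows only the core compare-and-adjust idea.

```python
# Toy alignment: estimate a translation from matched vertex pairs and
# apply it to one dataset's geometry (values are made up).
import numpy as np

# matched (x, y) vertices of the same feature in dataset A and dataset B
pts_a = np.array([[10.0, 5.0], [12.0, 5.5], [14.0, 6.0]])
pts_b = np.array([[10.4, 4.7], [12.4, 5.2], [14.4, 5.7]])

shift = (pts_a - pts_b).mean(axis=0)   # least-squares alignment parameters
aligned_b = pts_b + shift              # update dataset B's geometry

print("shift:", shift)
print("aligned B:", aligned_b)         # now coincides with dataset A
```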
Through analyzing the principle of data sharing in database systems, this paper discusses the principle of and methods for integrating and sharing GIS data by means of a data engine, introduces a way to achieve high integration and sharing of GIS data on the basis of VCT in VC++, and provides a method for uniting VCT with an RDBMS in order to implement a spatial database with an object-oriented data model.
The paper presents a set of digital watermarking techniques by which copyright and user-rights messages are hidden in geo-spatial graphics data, as well as techniques for compressing and encrypting the watermarked geo-spatial graphics data. The technology aims at tracing and resisting illegal distribution and duplication of geo-spatial graphics data products, so as to effectively protect the data producer's rights and to facilitate the secure sharing of geo-spatial graphics data. So far, little research on digital watermarking has been carried out in the GIS field worldwide. This research is a novel exploration both in the security management of geo-spatial graphics data and in the applications of digital watermarking techniques. An application software package employing the proposed technology has been developed. A number of experimental tests on the 1:500,000 digital bathymetric chart of the South China Sea and the 1:10,000 digital topographic map of Jiangsu Province have been conducted to verify the feasibility of the proposed technology. Funding: Jiangsu Provincial Science and Technology Foundation of Surveying and Mapping (No. 200416).
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the obtained results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
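A hedged PyTorch sketch of such a three-term composite loss is given below; the exact form of each term and the weighting coefficients are assumptions for illustration, not the paper's implementation.

```python
# Composite imputation loss: masked MSE on missing entries, a
# noise-robustness term, and a variance penalty (all forms assumed).
import torch

def composite_loss(x_hat, x_true, miss_mask, x_hat_noisy,
                   w_noise=0.1, w_var=0.01):
    # (i) guided, masked MSE: penalize error only where values were missing
    mse = ((x_hat - x_true) ** 2 * miss_mask).sum() / miss_mask.sum().clamp(min=1)
    # (ii) noise-aware term: reconstructions from corrupted inputs should agree
    noise = ((x_hat - x_hat_noisy) ** 2).mean()
    # (iii) variance penalty: discourage collapsed (near-constant) outputs
    var_pen = (x_true.var(dim=0) - x_hat.var(dim=0)).abs().mean()
    return mse + w_noise * noise + w_var * var_pen

# toy usage with random tensors standing in for decoder outputs
x_true = torch.randn(32, 10)
mask = (torch.rand(32, 10) < 0.2).float()        # 1 where a value was missing
x_hat = torch.randn(32, 10, requires_grad=True)
loss = composite_loss(x_hat, x_true, mask, x_hat + 0.01 * torch.randn(32, 10))
loss.backward()
```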
Modern intrusion detection systems (IDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class intrusion detection using benchmark IDS datasets, including KDD99 and UNSW-NB15, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and Random Forest (RF) as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% on UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability. Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2025R104), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
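The balance-then-stack stage might look like the following sklearn/imblearn sketch; the HHO feature-selection step is omitted (any selected-feature matrix can be passed in), and the synthetic data and hyperparameters are placeholders rather than the study's configuration.

```python
# SMOTE oversampling followed by a stacked XGBoost/SVM/RF ensemble.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from xgboost import XGBClassifier

# imbalanced multi-class toy data standing in for HHO-selected features
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # augment minority classes

stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(eval_metric="mlogloss")),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_bal, y_bal)
print(stack.score(X, y))  # resubstitution score, for illustration only
```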
Reversible data hiding (RDH) enables secret data embedding while preserving complete recovery of the cover image, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego-image schemes provides good image quality but often yields low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography. Funding: University of Transport and Communications (UTC), grant No. T2025-CN-004.
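The sketch below shows only the core prediction-error step of classic PVO, embedding one bit via the gap between a block's largest and second-largest pixels; the proposed scheme extends this idea to multi-bit embedding on both block sides across three stego images, which is not reproduced here.

```python
# Classic PVO embedding on the maximum side of one block (toy values).
import numpy as np

def pvo_embed_max(block, bit):
    flat = block.flatten()
    order = np.argsort(flat, kind="stable")       # ascending pixel order
    i_max, i_2nd = order[-1], order[-2]
    err = int(flat[i_max]) - int(flat[i_2nd])     # prediction error >= 0
    if err == 1:                                  # embeddable: expand by the bit
        flat[i_max] += bit
    elif err > 1:                                 # shift by 1 to stay reversible
        flat[i_max] += 1
    # err == 0 is left unchanged, as in classic PVO
    return flat.reshape(block.shape)

block = np.array([[52, 55], [53, 54]], dtype=np.int64)
print(pvo_embed_max(block, 1))  # max 55 -> 56, carrying bit 1
```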
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments. Funding: Science and Technology Project of State Grid Corporation of China (No. 5108-202355437A-3-2-ZN).
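A toy attribute graph for settlement data, built with networkx, might look as follows; the node/edge schema and the settlement operator are guesses at the kind of heterogeneous multi-market structure described, not the framework's actual data model.

```python
# Attribute-graph sketch: participants and contracts as attributed nodes,
# roles as attributed edges, and a settlement "operator" over the graph.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("gen_A", kind="generator", province="Zhejiang")
g.add_node("retail_B", kind="retailer")
g.add_node("contract_1", kind="contract", price=0.42, market="monthly")
g.add_edge("gen_A", "contract_1", role="seller")
g.add_edge("retail_B", "contract_1", role="buyer")

def settle(graph, contract, volume_mwh):
    # bill a contract from metered volume using its stored price attribute
    return graph.nodes[contract]["price"] * volume_mwh

print(settle(g, "contract_1", 1200.0))  # -> 504.0
```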
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks in which outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted equipment. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted-traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three deep learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that for unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that dataset quality and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, recall on the UNSW-NB15 (Encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (Encrypted) dataset by 20.26%, a similar level of improvement. Notably, on CICIoT-2023, the F1-score and Receiver Operating Characteristic Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy. Funding: Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2023-00235509, Development of security monitoring technology based network behavior against encrypted cyber threats in ICT convergence environment).
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, in which the training data portion was increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R2 of 85.49%, emphasizing an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage. Funding: Deanship of Graduate Studies and Scientific Research at Jouf University, grant No. DGSSR-2024-02-01264.
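Two of the three similarity families could be computed as in the sketch below (vector-based TF-IDF cosine and text-based sequence ratio); the Arabic strings are placeholders, and the embedding-based similarity would come from a separate sentence encoder not shown here.

```python
# Two similarity features between a student essay and a theme reference.
from difflib import SequenceMatcher
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "النص المرجعي للموضوع"   # model answer for the theme (placeholder)
essay = "إجابة الطالب عن الموضوع"    # student essay (placeholder)

tfidf = TfidfVectorizer().fit_transform([reference, essay])
vec_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]       # vector-based
txt_sim = SequenceMatcher(None, reference, essay).ratio()   # text-based

# these scalars (plus embedding-based similarity) would feed the RF scorer
print([vec_sim, txt_sim])
```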
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-standing desideratum in large software development companies. With the rapid advancement of machine learning methods, and given the reliable data already stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project-tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results. Funding: the project "Romanian Hub for Artificial Intelligence - HRIA", Smart Growth, Digitization and Financial Instruments Program, 2021–2027, MySMIS No. 334906.
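As a hedged stand-in for the expertise-zone classifier, the snippet below tags a task description with candidate technology zones using an off-the-shelf zero-shot model; the paper instead fine-tunes a BERT-like transformer on project-tool metadata, and the task text and zone labels here are invented for illustration.

```python
# Zero-shot tagging of a task description with technology zones.
from transformers import pipeline

clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
task = "Fix N+1 query in the ORM layer and add caching for the REST endpoint"
zones = ["databases", "backend APIs", "frontend UI", "devops"]

result = clf(task, candidate_labels=zones)
print(result["labels"][0])  # highest-scoring expertise zone
```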
With the rapid growth of biomedical data, particularly multi-omics data including genomics, transcriptomics, proteomics, metabolomics, and epigenomics, medical research and clinical decision-making confront both new opportunities and obstacles. The huge and diversified nature of these datasets cannot always be managed using traditional data analysis methods. As a consequence, deep learning has emerged as a strong tool for analysing multi-omics data, owing to its ability to handle complex and non-linear relationships. This paper explores the fundamental concepts of deep learning and how they are used in multi-omics medical data mining. We demonstrate how autoencoders, variational autoencoders, multimodal models, attention mechanisms, transformers, and graph neural networks enable pattern analysis and recognition across omics data. Deep learning has been found to be effective in illness classification, biomarker identification, gene network learning, and therapeutic efficacy prediction. We also consider critical problems such as data quality, model explainability, reproducibility of findings, and computational power requirements. We then consider future directions for combining omics with clinical and imaging data, explainable AI, federated learning, and real-time diagnostics. Overall, this study emphasises the need for cross-disciplinary collaboration to advance deep learning-based multi-omics research for precision medicine and for understanding complicated disorders.
High-throughput transcriptomics has evolved from bulk RNA-seq to single-cell and spatial profiling, yet its clinical translation still depends on effective integration across diverse omics and data modalities. Emerging foundation models and multimodal learning frameworks are enabling scalable and transferable representations of cellular states, while advances in interpretability and real-world data integration are bridging the gap between discovery and clinical application. This paper outlines a concise roadmap for AI-driven, transcriptome-centered multi-omics integration in precision medicine (Figure 1).
Gastrointestinal tumors require personalized treatment strategies due to their heterogeneity and complexity. Multimodal artificial intelligence (AI) addresses this challenge by integrating diverse data sources, including computed tomography (CT), magnetic resonance imaging (MRI), endoscopic imaging, and genomic profiles, to enable intelligent decision-making for individualized therapy. This approach leverages AI algorithms to fuse imaging, endoscopic, and omics data, facilitating comprehensive characterization of tumor biology, prediction of treatment response, and optimization of therapeutic strategies. By combining CT and MRI for structural assessment, endoscopic data for real-time visual inspection, and genomic information for molecular profiling, multimodal AI enhances the accuracy of patient stratification and treatment personalization. The clinical implementation of this technology demonstrates potential for improving patient outcomes, advancing precision oncology, and supporting individualized care in gastrointestinal cancers. Ultimately, multimodal AI serves as a transformative tool in oncology, bridging data integration with clinical application to tailor therapies effectively. Funding: Xuhui District Health Commission, No. SHXH202214.
We investigate null tests of cosmic accelerated expansion using the baryon acoustic oscillation (BAO) data measured by the Dark Energy Spectroscopic Instrument (DESI), reconstructing the dimensionless Hubble parameter E(z) from the DESI BAO Alcock-Paczynski (AP) data with a Gaussian process to perform the null test. We find strong evidence of accelerated expansion from the DESI BAO AP data. By reconstructing the deceleration parameter q(z) from the DESI BAO AP data, we find that accelerated expansion persisted until z ≈ 0.7 at the 99.7% confidence level. Additionally, to provide insight into the Hubble tension problem, we propose combining the reconstructed E(z) with D_H/r_d data to derive a model-independent result, r_d h = 99.8 ± 3.1 Mpc. This result is consistent with measurements from cosmic microwave background (CMB) anisotropies using the ΛCDM model. We also propose a model-independent method for reconstructing the comoving angular diameter distance D_M(z) from the distance modulus μ, using SNe Ia data, and combine this result with the DESI BAO data on D_M/r_d to constrain the value of r_d. We find that the value of r_d derived from this model-independent method is smaller than that obtained from CMB measurements, with a significant discrepancy of at least 4.17σ. All the conclusions drawn in this paper are independent of cosmological models and gravitational theories. Funding: National Key Research and Development Program of China (Grant No. 2020YFC2201504); National Natural Science Foundation of China (Grant Nos. 12588101 and 12535002).
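The Gaussian-process reconstruction step can be sketched as follows on synthetic (z, E) points; the real analysis uses DESI BAO AP measurements with their full covariances, so the mock values, error level, and kernel choice here are assumptions.

```python
# GP reconstruction of E(z) from noisy mock points on a redshift grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

z = np.array([0.30, 0.51, 0.71, 0.93, 1.32, 1.49])   # mock effective redshifts
E_true = np.sqrt(0.31 * (1 + z) ** 3 + 0.69)         # flat-LCDM toy truth
E_obs = E_true + np.random.default_rng(1).normal(0, 0.03, z.size)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              alpha=0.03 ** 2, normalize_y=True)
gp.fit(z.reshape(-1, 1), E_obs)

z_grid = np.linspace(0.1, 1.5, 50).reshape(-1, 1)
E_rec, E_sig = gp.predict(z_grid, return_std=True)   # E(z) with 1-sigma band
```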
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the bias of classification towards the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison covers five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection towards the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested. Funding: Hong Kong Metropolitan University (RIF/2021/05).
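The CNN-as-feature-extractor plus custom-kernel SVM pattern might be sketched as below; the layer sizes, toy data, and the plain-RBF kernel are placeholders, not the paper's CNN-DSVM or its bias-reducing kernel.

```python
# CNN features from 1-D voice frames feed an SVM with a custom kernel hook.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

cnn = nn.Sequential(nn.Conv1d(1, 8, 5), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(4), nn.Flatten())  # -> 32-dim features

X_raw = torch.randn(60, 1, 128)                    # toy voice feature frames
y = np.r_[np.ones(45), np.zeros(15)]               # imbalanced: healthy majority
with torch.no_grad():
    F = cnn(X_raw).numpy()                         # automated feature extraction

def custom_kernel(A, B, gamma=0.05):
    # plain RBF here; a class-aware weighting could be injected at this hook
    # to soften bias toward the majority class
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

svm = SVC(kernel=custom_kernel).fit(F, y)
```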
In view of the rapid urban growth in recent China, this study takes Beijing as a case, proposes that urban sprawl can be measured in terms of spatial configuration, urban growth efficiency, and external impacts, and then develops a geo-spatial index system for measuring sprawl, comprising 13 indicators in total. To calculate these indices, data from different sources are selected, including land use maps, the former land use plan, land price and floor-area-ratio samples, digitized maps of the highways and city centers, and population and GDP statistics. Various GIS spatial analysis methods are used to spatialize these indices into 100 m × 100 m cells. In addition, an integrated urban sprawl index is calculated as a weighted sum of the 13 indices. The application results indicate that the geo-spatial index system can capture most of the typical features and internal differentiation of urban sprawl. Construction land in Beijing has kept growing rapidly, in large amounts, with low efficiency and a disordered spatial configuration, indicating a typical sprawling tendency. The following specific sprawl features are identified by the indicators: (1) a typical sprawling spatial configuration: obvious fragmentation and irregularity of the landscape due to unsuccessful enforcement of land use planning, and inadvisable patterns of discontinuous development, strip development, and leapfrog development; (2) low sprawl efficiency: low development density, low population density, and low economic output in newly developed areas; and (3) negative impacts on agriculture, the environment, and city life. According to the integrated sprawl index, the amount of sprawl in the northern part is larger than in the southern part, but the extent of sprawl shows the converse pattern; the most sprawl-affected areas include the margins of the near suburbs and the areas between highways. Four sprawling patterns are identified: random expansion at the urban fringe, strip development along or between highways, scattered development of industrial land, and leapfrog development of urban residential and industrial areas. Funding: National Natural Science Foundation of China (No. 40571056); Doctoral Thesis Sustentation Fund of the Beijing Science and Technology Committee (No. ZZ0608).
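The integrated-index construction (a weighted sum of normalized indicator rasters over grid cells) can be illustrated as follows; the weights and the three stand-in indicators are assumptions, whereas the study uses 13 indicators on 100 m cells.

```python
# Weighted sum of min-max-normalized indicator rasters into one index surface.
import numpy as np

rng = np.random.default_rng(0)
indicators = rng.random((3, 100, 100))     # 3 toy indicator rasters, 100x100 cells

def minmax(a):                             # rescale each indicator to [0, 1]
    return (a - a.min()) / (a.max() - a.min())

weights = np.array([0.4, 0.35, 0.25])      # assumed weights, summing to 1
stacked = np.stack([minmax(g) for g in indicators])
sprawl_index = np.tensordot(weights, stacked, axes=1)

print(sprawl_index.shape)                  # (100, 100) integrated index surface
```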
The process of sharing data has become easier than ever with the advancement of cloud computing and software tools. However, big challenges remain, such as efficient handling of big geospatial data, supporting and sharing crowd-sourced/citizen-science data, integration across semantic heterogeneity, and inclusion of agile processes for continuous improvement of geospatial technology. This paper discusses the new frontiers regarding these challenges and the related work performed by the Open Geospatial Consortium, the world's leading organization focused on developing open geospatial standards that "geo-enable" the Web, wireless and location-based services, and mainstream IT.
The enhancement of computing power, the maturity of learning algorithms, and the richness of application scenarios make Artificial Intelligence (AI) solutions increasingly attractive for solving Geo-spatial Information Science (GSIS) problems. These include image matching, image target detection, change detection, image retrieval, and the generation of data models of various types. This paper discusses the connection and synthesis between AI and GSIS in block adjustment, image search and discovery in big databases, automatic change detection, and detection of abnormalities, demonstrating that AI can be integrated with GSIS. Moreover, the concepts of the Earth Observation Brain and Smart Geo-spatial Service (SGSS) are introduced at the end, which are expected to promote the development of GSIS toward broader applications. Funding: National Key R&D Plan on Strategic International Scientific and Technological Innovation Cooperation Special Project (No. 2016YFE0202300); National Natural Science Foundation of China (Nos. 61671332, 41771452, 51708426, 41890820, 41771454); Natural Science Fund of Hubei Province (No. 2018CFA007); Independent Research Projects of Wuhan University (No. 2042018kf0250).
Tourism is a rapidly growing sector in Sri Lanka, where substantial investment is taking place. Even though the investment is very large, planning, development, and marketing are key components of success in tourism zone enhancement. The main objective of this study was to implement a geo-spatial information system for the development of tourism in Kandy District. Primary data collection methods, i.e., questionnaire surveys, interviews, focus group interviews, and observations, were employed. Google Maps with Google API standards, which are specially designed for developers and computer programmers, was used for implementation of the system. System requirements were identified by interviewing tourists and through observations made at tourist sites. Proximity analysis, spatial joins, and network analysis with the Google Directions application programming interface (API) and Google Places API were used to analyze the data. The study highlights potential tourist attractions, their accessibility, and other required details through a web output. The issues and challenges faced by travelers are mainly a lack of specific location information, public transport schedules, and reliable tourist-attraction information. The online geo-spatial information system created in this study provides a guide for tourists to find destination routes, service areas, and all necessary details about particular destinations.
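A request against the Google Directions API of the kind underlying such route guidance might look like the sketch below; "YOUR_API_KEY" is a placeholder, the origin/destination strings are illustrative, and only a couple of the documented response fields are read out.

```python
# Minimal Directions API call for a Kandy route (sketch; needs a valid key).
import requests

resp = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json",
    params={"origin": "Kandy, Sri Lanka",
            "destination": "Temple of the Tooth, Kandy",
            "key": "YOUR_API_KEY"},
    timeout=10,
)
leg = resp.json()["routes"][0]["legs"][0]
print(leg["distance"]["text"], leg["duration"]["text"])
```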