This paper proposes a useful web-based system for the management and sharing of electron probe micro-analysis (EPMA) data in geology. A new web-based architecture that integrates the management and sharing functions is developed and implemented. Earth scientists can use this system not only to manage their data, but also to easily communicate and share it with other researchers. Data query methods provide the core functionality of the proposed management and sharing modules. The modules in this system have been developed using cloud GIS technologies, which help achieve real-time spatial area retrieval on a map. The system has been tested by approximately 263 users at Jilin University and the Beijing SHRIMP Center. A survey was conducted among these users to assess the usability of the system's primary functions, and the assessment results are summarized and presented.
In this paper we propose a service-oriented architecture for spatial data integration (SOA-SDI) in the context of a large number of available spatial data sources that physically sit at different places, and develop web-based GIS systems based on SOA-SDI, allowing client applications to pull in, analyze, and present spatial data from those sources. The proposed architecture logically comprises four layers or components: a layer of multiple data provider services, a data integration layer, a layer of back-end services, and a front-end graphical user interface (GUI) for spatial data presentation. On the basis of the four-layered SOA-SDI framework, WebGIS applications can be quickly deployed, which shows that SOA-SDI has the potential to reduce software development effort and shorten the development period.
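The four-layer decomposition lends itself to a compact illustration. Below is a minimal, self-contained Python sketch of how a query could fan out through the layers; all class and method names are our own illustrative assumptions, not APIs from the paper.

```python
# Minimal sketch of the 4-layer SOA-SDI composition described above.
# All class and method names are illustrative assumptions, not the paper's API.

class DataProviderService:
    """Layer 1: wraps one remote spatial data source behind a uniform interface."""
    def __init__(self, name, features):
        self.name = name
        self._features = features          # stand-in for a remote WFS/WMS endpoint

    def query(self, bbox):
        # Return only features whose (x, y) falls inside the bounding box.
        xmin, ymin, xmax, ymax = bbox
        return [f for f in self._features
                if xmin <= f["x"] <= xmax and ymin <= f["y"] <= ymax]

class IntegrationLayer:
    """Layer 2: fans a query out to every provider and merges the results."""
    def __init__(self, providers):
        self.providers = providers

    def query_all(self, bbox):
        merged = []
        for p in self.providers:
            merged.extend(p.query(bbox))
        return merged

class BackendService:
    """Layer 3: adds server-side analysis on top of the integrated data."""
    def __init__(self, integration):
        self.integration = integration

    def feature_count(self, bbox):
        return len(self.integration.query_all(bbox))

# Layer 4 (the GUI) would call the backend over HTTP; here we just print.
roads = DataProviderService("roads", [{"x": 1.0, "y": 2.0}])
rivers = DataProviderService("rivers", [{"x": 3.0, "y": 4.0}, {"x": 9.0, "y": 9.0}])
backend = BackendService(IntegrationLayer([roads, rivers]))
print(backend.feature_count((0.0, 0.0, 5.0, 5.0)))  # -> 2
```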
Recently, the use of mobile communication devices such as smartphones and cellular phones in field data collection has been increasing, owing to the emergence of embedded Global Positioning System (GPS) receivers and Wi-Fi Internet access. Accurate, timely, and handy field data collection is required for disaster management and emergency quick response. In this article, we introduce a web-based GIS system that collects field data from personal mobile phones through a Post Office Protocol (POP3) mail server. The main objective of this work is to demonstrate to students a real-time field data collection method that uses their mobile phones to collect field data in a timely and handy manner, for either individual or group surveys in local- or global-scale research.
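To make the POP3 collection path concrete, here is a hedged Python sketch using only the standard library's poplib and email modules. The host, credentials, and the "lat,lon,comment" message body format are assumptions for illustration only.

```python
# Hedged sketch: pulling GPS-tagged field reports from a POP3 mailbox with
# Python's standard poplib/email modules. Host, credentials, and the
# "lat,lon,comment" body format are assumptions for illustration only.
import poplib, email, re

def fetch_field_reports(host, user, password):
    box = poplib.POP3_SSL(host)        # POP3 over SSL
    box.user(user)
    box.pass_(password)
    reports = []
    count, _ = box.stat()              # number of messages waiting
    for i in range(1, count + 1):
        _, lines, _ = box.retr(i)      # raw message as a list of byte lines
        msg = email.message_from_bytes(b"\r\n".join(lines))
        body = msg.get_payload(decode=True) or b""
        # Expect e.g. "35.6895,139.6917,flooded road near school"
        m = re.match(rb"([-\d.]+),([-\d.]+),(.*)", body.strip())
        if m:
            reports.append({"lat": float(m.group(1)),
                            "lon": float(m.group(2)),
                            "note": m.group(3).decode(errors="replace")})
    box.quit()
    return reports   # ready to plot as points on a web map
```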
With long-term marine surveys and research, and especially with the development of new marine environment monitoring technologies, prodigious amounts of complex marine environmental data are generated and continue to grow rapidly. These data are characterized by massive volume, widespread distribution, multiple sources, heterogeneity, high dimensionality, and dynamic structure and timing. The present study recommends an integrative visualization solution for these data, to enhance the visual display of data and data archives, and to develop joint use of the data distributed among different organizations or communities. This study also analyses web services technologies and defines the concept of the marine information grid, then focuses on spatiotemporal visualization and proposes a process-oriented spatiotemporal visualization method. We discuss how marine environmental data can be organized based on this method, and how the organized data are represented for use with web services and stored in a reusable fashion. In addition, we provide an original visualization architecture that is integrative and based on the explored technologies. Finally, we propose a prototype system for marine environmental data of the South China Sea, with visualizations of Argo floats, sea surface temperature fields, sea current fields, salinity, in-situ investigation data, and ocean stations. The integrative visualization architecture is illustrated in the prototype system, which highlights the process-oriented temporal visualization method and demonstrates the benefit of the architecture and the methods described in this study.
This paper is concerned with the development of product data management (PDM) systems based on web technologies (WPDM systems). As a tool to integrate information, a traditional PDM system offers companies many benefits, such as improved design productivity and better control over projects. With the maturing of web technologies, the advantages of a WPDM system are obvious; we show these advantages in detail in Part 3. The WPDM system is built on a three-tier application model, consisting of a back end, a middle layer, and a front end, to provide security and flexibility. The basic design of each layer is briefly introduced in Part 4. In the future, WPDM will be extended to integrate with other applications to provide a complete web-based engineering environment.
Remote sensing and web-based platforms have emerged as vital tools in the effective monitoring of mangrove ecosystems, which are crucial for coastal protection, biodiversity, and carbon sequestration. Utilizing satellite imagery and aerial data, remote sensing allows researchers to assess the health and extent of mangrove forests over large areas and time periods, providing insights into changes due to environmental stressors like climate change, urbanization, and deforestation. Coupled with web-based platforms, this technology facilitates real-time data sharing and collaborative research efforts among scientists, policymakers, and conservationists. Thus, there is a need to grow this research interest among experts working in this kind of ecosystem. The aim of this paper is to provide a comprehensive literature review of the role of remote sensing and web-based platforms in monitoring mangrove ecosystems. The paper uses a thematic approach to extract specific information for the discussion, which helped demonstrate the efficiency of digital monitoring for the environment. Web-based platforms and remote sensing represent a powerful tool for environmental monitoring, particularly in the context of forest ecosystems. They facilitate the accessibility of vital data, promote collaboration among stakeholders, support evidence-based policymaking, and engage communities in conservation efforts. As experts confront the urgent challenges posed by climate change and environmental degradation, leveraging technology through web-based platforms is essential for fostering a sustainable future for the forests of the world.
Large-scale deep-seated landslides pose a significant threat to human life and infrastructure. Therefore, closely monitoring these landslides is crucial for assessing and mitigating their associated risks. In this paper, the authors introduce the So Lo Mon framework, a comprehensive monitoring system developed for three large-scale landslides in the Autonomous Province of Bolzano, Italy. A web-based platform integrates various monitoring data (GNSS, topographic data, in-place inclinometers), providing a user-friendly interface for visualizing and analyzing the collected data. This facilitates the identification of trends and patterns in landslide behaviour, enabling the triggering of warnings and the implementation of appropriate mitigation measures. The So Lo Mon platform has proven to be an invaluable tool for managing the risks associated with large-scale landslides through non-structural measures and for driving the design of countermeasure works. It serves as a centralized data repository, offering visualization and analysis tools. This information empowers decision-makers to make informed choices regarding risk mitigation, ultimately ensuring the safety of communities and infrastructure.
Unbalanced distribution of energy consumption, caused by the concentration of facilities and population, upsets the natural energy equilibrium of a city and causes environmental problems such as tropical nights, the urban heat island phenomenon, and the worsening of global warming. Therefore, to secure the eco-friendliness and sustainability of a city, measures to alleviate the unequal distribution of urban energy consumption must be introduced from the city planning stage. The first step is to understand the current energy environment. The urban energy environment is affected by many factors beyond the clustering of buildings, so there is a limit to how fully it can be understood with simple statistical urban information management techniques alone. Research on methods of analyzing the urban energy environment through urban-scale simulation is underway, but there has been little discussion of the basic information databases needed for such simulations. This study presents a method that uses GIS (geographic information system) and a web-based environmental information database to improve simulation accuracy. First, the environmental information factors used in urban simulation were derived, and a web-based environmental information database for Daegu Metropolitan City, Korea, was built. Then, the urban energy environment was analyzed on a trial basis by linking the database with GIS.
Objective: To establish an interactive management model for community-oriented high-risk osteoporosis in conjunction with a rural community health service center. Materials and Methods: Toward multidimensional analysis of data, the system we developed combines the basic principles of data warehouse technology with the needs of community health services. This paper introduces the steps we took in constructing the data warehouse; the case presented here is that of a district community health management information system in Changshu, Jiangsu Province, China. For our data warehouse, we chose the MySQL 4.5 relational database, the Browser/Server (B/S) model, and the PHP hypertext preprocessor as the development tools. Results: The system allowed online analytical processing and next-stage work preparation, and provided a platform for data management, data query, online analysis, and related tasks for community health service centers, specialist osteoporosis outpatient clinics, and health administration sectors. Conclusion: The users of the remote management system and data warehouse can include community health service centers, osteoporosis departments of hospitals, and health administration departments. The system provides a reference for the policymaking of health administrators, residents' health information and intervention suggestions for general practitioners in community health service centers, and patients' follow-up information for osteoporosis specialists in general hospitals.
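To illustrate the kind of multidimensional roll-up such a warehouse supports, here is a self-contained Python sketch using the standard library's sqlite3 in place of the paper's MySQL; the table and column names are illustrative assumptions.

```python
# Self-contained sketch of an OLAP-style query over a tiny fact table,
# using stdlib sqlite3 in place of the paper's MySQL. Schema is assumed.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE screening (district TEXT, year INTEGER, sex TEXT, high_risk INTEGER);
INSERT INTO screening VALUES
  ('Changshu-A', 2012, 'F', 1), ('Changshu-A', 2012, 'M', 0),
  ('Changshu-B', 2012, 'F', 1), ('Changshu-B', 2013, 'F', 0);
""")

# Roll-up: high-risk rate by district and year, the kind of view the
# health administration sector would query online.
for row in con.execute("""
    SELECT district, year,
           AVG(high_risk) AS risk_rate,   -- share of screened residents at high risk
           COUNT(*)       AS screened
    FROM screening
    GROUP BY district, year
    ORDER BY district, year"""):
    print(row)
```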
Geographic Hypermedia (GH) is a rich and interactive map document with geo-tagged graphics, sound, and video elements. A Geographic Hypermedia System (GHS) is designed to manage, query, display, and explore GH resources. Recognizing emerging geo-tagged videos and measurable images as valuable geographic data resources, this paper aims to design a web-based GHS using web mapping, geoprocessing, video streaming, and XMLHTTP services. The concept, data model, system design, and implementation of this GHS are discussed in detail. Geo-tagged videos are modeled as temporal, spatial, and metadata entities such as the video clip, video path, and frame-based descriptions. Similarly, geo-tagged stereo video and derived data are modeled as interrelated entities: original video, rectified video, stereo video, video path, frame-based description, and measurable image (rectified and disparity image with baseline, interior, and exterior parameters). The entity data are organized into video files, GIS layers with linear referencing, and XML documents for web publishing. These data can be integrated into HTML pages or used in Rich Internet Applications (RIA) built with standard web technologies such as AJAX, ASP.NET, and RIA frameworks. An SOA-based GHS is designed using four types of web services: ArcGIS Server 9.3 web mapping and geoprocessing services, Flash FMS 3.0 video streaming services, and GeoRSS XMLHTTP services. GHS applications in road facility management and campus hypermapping indicate that the GH data models and technical solutions introduced in this paper are useful and flexible enough for wider deployment as a GHS.
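The entity model described above maps naturally onto simple data classes. The following Python sketch is our own rendering of it; the field names are assumptions distilled from the text, not the paper's schema.

```python
# Sketch of the geo-tagged video entities (clip, path, frame description)
# as Python dataclasses; field names are assumptions distilled from the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PathPoint:
    t: float      # seconds from clip start (temporal link)
    lon: float
    lat: float

@dataclass
class FrameDescription:
    t: float
    text: str     # metadata attached to one frame

@dataclass
class VideoClip:
    url: str                                                   # streaming source
    path: List[PathPoint] = field(default_factory=list)        # spatial entity
    frames: List[FrameDescription] = field(default_factory=list)

    def position_at(self, t: float) -> PathPoint:
        """Most recent recorded camera position at or before time t
        (a fuller version would interpolate along the path)."""
        pts = sorted(self.path, key=lambda p: p.t)
        before = [p for p in pts if p.t <= t]
        return before[-1] if before else pts[0]

clip = VideoClip("rtmp://example/road_survey",
                 path=[PathPoint(0, 114.30, 30.57), PathPoint(10, 114.31, 30.58)],
                 frames=[FrameDescription(5, "pothole, right lane")])
print(clip.position_at(7.0))
```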
Morphological (e.g. shape, size, and height) and functional (e.g. working, living, and shopping) information about buildings is highly needed for urban planning and management, as well as for other applications such as city-scale building energy use modeling. Due to the limited availability of socio-economic geospatial data, mapping building functions is more challenging than mapping building morphology, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm, OneClassSVM, was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e. hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. With the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities around the world. The data products generated by this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
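As an illustration of the second step, the sketch below trains scikit-learn's OneClassSVM on synthetic footprint features; the features, values, and thresholds are placeholders, not the study's actual inputs.

```python
# Hedged sketch of the residential-building step: a OneClassSVM trained on
# landscape features of known residential footprints, then applied to
# unlabeled footprints. Feature values here are synthetic placeholders.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns: footprint area (m^2), perimeter (m) -- stand-ins for the paper's
# landscape features derived from Microsoft building footprints.
residential = rng.normal([150, 55], [30, 8], size=(200, 2))   # training sample
unknown = np.vstack([rng.normal([150, 55], [30, 8], size=(5, 2)),
                     rng.normal([2500, 240], [300, 20], size=(5, 2))])  # large
                     # non-residential footprints

scaler = StandardScaler().fit(residential)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
clf.fit(scaler.transform(residential))

# +1 = looks residential, -1 = outlier (candidate non-residential building)
print(clf.predict(scaler.transform(unknown)))
```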
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms (Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship) under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that our model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce imputations that are not only numerically accurate but also semantically useful, making it a promising solution for robust data recovery in clinical applications.
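One plausible reading of the three-part composite loss can be written in a few lines of PyTorch. The weighting coefficients and the exact form of the noise-aware term below are our assumptions, not the paper's verified formulation.

```python
# Hedged sketch of the composite imputation loss: masked MSE on missing
# entries, a noise-aware regularizer, and a variance penalty. Weights and
# the exact noise-aware form are assumptions about the paper's formulation.
import torch

def composite_loss(x_true, x_recon, miss_mask, x_recon_noisy,
                   lam_noise=0.1, lam_var=0.01):
    # (i) guided, masked MSE: only positions that were missing contribute
    mse_missing = ((x_recon - x_true) ** 2 * miss_mask).sum() / miss_mask.sum()
    # (ii) noise-aware term: reconstruction from a corrupted input should
    #      stay close to the clean reconstruction
    noise_reg = ((x_recon_noisy - x_recon) ** 2).mean()
    # (iii) variance penalty: discourage collapsed, near-constant outputs
    var_pen = (x_recon.var(dim=0) - x_true.var(dim=0)).abs().mean()
    return mse_missing + lam_noise * noise_reg + lam_var * var_pen

x_true = torch.randn(32, 10)
mask = (torch.rand(32, 10) < 0.3).float()          # 1 where a value was missing
x_recon = x_true + 0.1 * torch.randn(32, 10)       # stand-in decoder output
x_recon_noisy = x_recon + 0.05 * torch.randn(32, 10)
print(composite_loss(x_true, x_recon, mask, x_recon_noisy))
```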
Chlorophyll-a (Chl-a) concentration in lakes can tell a lot about a lake's water quality and ecosystem. It is a measure of the amount of algae growing in a waterbody and can be used to monitor the waterbody's trophic condition. We studied Chl-a concentration before and after marine ranch construction in Zhelin Bay, Southern China, using the Normalized Difference Chlorophyll Index (NDCI) and a web-based tool (https://mapcoordinates.info/). We used 8-day composite MODIS image collections at 500 m resolution and randomly selected two stations, extracting the chlorophyll-a concentration values through the web-based tool. We recorded a slight increase in NDCI values at all stations after the construction of the marine ranch, which is a good indicator of marine organisms' reproduction and survival.
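NDCI is a simple band ratio, so the extraction step reduces to a one-line formula. The sketch below uses the index's usual red-edge/red form (Mishra & Mishra, 2012); which bands stand in for it on 500 m MODIS composites is an assumption here.

```python
# NDCI in its usual form: a normalized difference of red-edge and red
# reflectance. Band choice for 500 m MODIS composites is an assumption.
import numpy as np

def ndci(red_edge, red):
    """NDCI = (R_rededge - R_red) / (R_rededge + R_red)."""
    red_edge = np.asarray(red_edge, dtype=float)
    red = np.asarray(red, dtype=float)
    return (red_edge - red) / (red_edge + red)

# Two stations, before vs. after ranch construction (synthetic reflectances):
print(ndci([0.031, 0.034], [0.030, 0.030]))  # slightly higher -> more Chl-a
```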
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded on UNSW-NB15, with an accuracy of 99.44%, demonstrating the model's effectiveness. After balancing, the model showed a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach, performed an ablation study to check the effect of each parameter, and found the proposed model to be computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
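The balancing and stacking stages are standard enough to sketch. The example below uses imblearn's SMOTE and scikit-learn's StackingClassifier on synthetic data, substituting GradientBoostingClassifier for XGBoost to avoid an extra dependency and omitting the HHO feature-selection step.

```python
# Hedged sketch of the SMOTE + stacking stage on a synthetic imbalanced set.
# GradientBoostingClassifier stands in for XGBoost; HHO selection is omitted.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              GradientBoostingClassifier)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)      # imbalanced stand-in for IDS data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (attack) class on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier()),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_bal, y_bal)
print(classification_report(y_te, stack.predict(X_te)))
```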
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used with multiple stego images provides good image quality but often low embedding capacity. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks whose pixels are sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
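The scheme builds on the classic PVO idea of embedding in a block's maximum via its prediction error against the second-largest pixel. The sketch below shows that single-bit core; the paper's triple-stego, 14-bit-per-block extension layers more embedding positions on top of this principle.

```python
# Minimal sketch of the classic PVO core: one bit embedded in the block
# maximum via its prediction error against the second-largest pixel.

def pvo_embed_max(block, bit):
    """block: list of pixel values; embeds one bit in the max if possible."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i2, i1 = order[-2], order[-1]              # second-largest, largest
    e = block[i1] - block[i2]                  # prediction error >= 0
    out = block[:]
    if e == 1:                                 # expandable: carries the bit
        out[i1] += bit
        return out, True
    if e > 1:                                  # shifted to keep decodability
        out[i1] += 1
    return out, False                          # e == 0: left unchanged

def pvo_extract_max(block):
    """Returns (recovered_block, bit or None)."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i2, i1 = order[-2], order[-1]
    e = block[i1] - block[i2]
    out = block[:]
    if e in (1, 2):                            # carried a bit: 1 -> 0, 2 -> 1
        bit = e - 1
        out[i1] -= bit
        return out, bit
    if e > 2:                                  # undo the shift
        out[i1] -= 1
    return out, None

stego, ok = pvo_embed_max([52, 57, 59, 60], 1)
print(stego, ok)                               # [52, 57, 59, 61] True
print(pvo_extract_max(stego))                  # ([52, 57, 59, 60], 1)
```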
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning everything from non-encrypted to fully encrypted equipment. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed from various perspectives using two ensemble models and three Deep Learning (DL) models. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score for encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall on the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and on the CICIoT-2023 (encrypted) dataset by 20.26%, a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments, although the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES that combines text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the required training data. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, in which the training data portion increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R2 of 85.49%, emphasizing an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
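A minimal version of the hybrid feature idea fits in a short script: compute similarity features between each essay and a reference answer, then regress scores with a Random Forest. The toy data and the two features below are stand-ins for the paper's fuller feature set.

```python
# Hedged sketch of the hybrid feature idea: TF-IDF cosine similarity to a
# reference answer plus a length feature, fed to a Random Forest. The data
# and feature set are toy stand-ins for the paper's fuller pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.ensemble import RandomForestRegressor
import numpy as np

reference = "photosynthesis converts light energy into chemical energy"
essays = ["photosynthesis turns light into chemical energy in plants",
          "plants need water",
          "light energy becomes chemical energy through photosynthesis"]
scores = np.array([4.5, 1.0, 5.0])            # human-assigned grades

vec = TfidfVectorizer().fit([reference] + essays)
ref_v = vec.transform([reference])
sim_text = cosine_similarity(vec.transform(essays), ref_v)   # one feature

# Stack the similarity feature with a crude length feature (word count).
X = np.hstack([sim_text, np.char.count(essays, " ")[:, None] + 1])
model = RandomForestRegressor(random_state=0).fit(X, scores)
print(model.predict(X).round(2))
```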
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-standing desideratum in large software development companies. With the rapid advancement of machine learning methods, and based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise using metadata from task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge of the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on, using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models that use voice signals as input are common in the literature. Deep learning algorithms are believed to further enhance performance; nevertheless, this is challenging given the small-scale and imbalanced nature of PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) that automates feature extraction with a CNN and extends the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of classification bias toward the majority class (healthy candidates in our setting). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. In the performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison covers five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. The results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%–4.72% and 4.96%–5.86%, respectively, and of the CNN-DSVM algorithm, which improves sensitivity by 1.24%–57.4% and specificity by 1.04%–163% while reducing biased detection toward the majority class. Ablation experiments confirm the effectiveness of the individual components. Two future research directions are also suggested.
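The CNN-to-SVM hand-off can be sketched compactly. Below, a small 1-D CNN extracts features from toy voice windows and an SVM classifies them, with class weighting standing in for the paper's customized kernel; all shapes and values are illustrative assumptions.

```python
# Hedged sketch of the CNN-to-SVM hybrid: a small 1-D CNN extracts features
# from voice-signal windows; an SVM with class weighting (standing in for
# the paper's customized kernel) classifies them. Shapes are illustrative.
import torch
import torch.nn as nn
from sklearn.svm import SVC
import numpy as np

class FeatureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4))        # -> 8 channels x 4 = 32 features

    def forward(self, x):                   # x: (batch, 1, samples)
        return self.net(x).flatten(1)

torch.manual_seed(0)
signals = torch.randn(40, 1, 256)           # toy voice windows
labels = np.array([0] * 30 + [1] * 10)      # imbalanced: few PD cases

with torch.no_grad():
    feats = FeatureCNN()(signals).numpy()

# class_weight="balanced" counters the bias toward the healthy majority.
clf = SVC(kernel="rbf", class_weight="balanced").fit(feats, labels)
print((clf.predict(feats) == labels).mean())
```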