The creation of spectral signature reflectance data, which aids in the identification of crops, is important in determining the size and location of crop fields. Therefore, we developed a spectral signature reflectance profile for the vegetative stage of the green gram (Vigna radiata L.) over 5 years (2020, 2018, 2017, 2015, and 2013) for agroecological zones IV and V in Kenya. The years chosen were those for which satellite data were available for the vegetative stage of crop growth in the short rain season (October, November, December (OND)). We used Landsat 8 OLI satellite imagery in this study. Cropping pattern data for the study area were evaluated by calculating the Top of Atmosphere (TOA) reflectance. Geo-referencing of farms, along with field data collection, was undertaken to extract TOA reflectance for bands 2-7. We also carried out a spectral similarity assessment on the various cropping patterns. The spectral reflectance ranged from 0.07696 - 0.09632, 0.07466 - 0.09467, 0.0704047 - 0.12188, 0.19822 - 0.24387, 0.19269 - 0.26900, and 0.11354 - 0.20815 for bands 2, 3, 4, 5, 6, and 7, respectively, for green gram. The results showed dissimilarity among the various cropping patterns. The lowest dissimilarity index was 0.027 for the maize (Zea mays L.)-bean (Phaseolus vulgaris) versus the maize-pigeon pea (Cajanus cajan) cropping pattern, while the highest dissimilarity index was 0.443 for the maize-bean versus the maize-bean-cowpea cropping patterns. The high crop dissimilarities observed across cropping patterns through these spectral reflectance values confirm that the green gram was potentially identifiable. The results can be used in crop type identification in agroecological lower midland zones IV and V for mung bean management.
This study therefore suggests that the use of reflectance data in remote sensing of agricultural ecosystems would aid in planning, management, and crop allocation to different ecozones.
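The abstract reports pairwise dissimilarity indices between cropping patterns but does not state which metric was used. A minimal sketch of one plausible choice, the Euclidean distance between band-wise mean reflectance vectors; all profile values below are hypothetical, not the study's data:

```python
import math

def spectral_dissimilarity(profile_a, profile_b):
    """Euclidean distance between two band-wise mean reflectance vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)))

# Illustrative band means (bands 2-7) for two hypothetical cropping patterns.
maize_bean = [0.085, 0.084, 0.095, 0.220, 0.230, 0.160]
maize_pigeon_pea = [0.086, 0.085, 0.097, 0.222, 0.233, 0.163]

print(round(spectral_dissimilarity(maize_bean, maize_pigeon_pea), 4))
```

A small distance, as between these two hypothetical profiles, corresponds to spectrally similar (hard to separate) cropping patterns; large distances support crop-type discrimination.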
The effectiveness of machine learning algorithms and the limited reference data introduce uncertainty into sugarcane classification. To address these problems, our study classified sugarcane plantations at the field scale using multi-temporal and multi-sensor data together with a large number of ground truth points (>13,000) and compared the efficacy of ensemble and kernel classifier methods over 3 years (2021, 2022, and 2023) across Northeast Thailand. In the first step, land cover was generated from a random forest classifier, demonstrating excellent results for all years with an overall accuracy (OA) higher than 95%. In the second step, the discrimination of sugarcane from non-sugarcane classes in the agricultural category was conducted using four efficient machine learning algorithms (decision tree (DT), random forest (RF), support vector machine (SVM), and one-class SVM). The RF classifier gave the optimal results with over 90% accuracy. Our results aligned with provincial statistics from the Office of the Cane and Sugar Board, thereby highlighting the efficacy and reliability of the RF method in mapping sugarcane in small fields and cloudy regions. A temporal evolution analysis of sugarcane cultivation spanning the preceding 3 years revealed a significant increase in the productive area. Our findings provide crucial information for sustainable management practices.
Estimating wheat grain protein content by remote sensing is important for assessing wheat quality at maturity and making grain harvest and purchase policies. However, spatial variability of soil condition, temperature, and precipitation affects grain protein content, and these factors usually cannot be monitored accurately with remote sensing data from a single image. In this research, the relationships between wheat protein content at maturity and wheat agronomic parameters at different growing stages were analyzed, and multi-temporal Landsat TM images were used to estimate grain protein content by partial least squares regression. Experiment data were acquired in the suburbs of Beijing during a 2-yr experiment in the period from 2003 to 2004. The determination coefficient, average deviation of self-modeling, and deviation of cross-validation were employed to assess the estimation accuracy of wheat grain protein content; their values were 0.88, 1.30%, and 3.81% for 2003 and 0.72, 5.22%, and 12.36% for 2004, respectively. The research laid an agronomic foundation for grain protein content (GPC) estimation by multi-temporal remote sensing. The results showed that it is feasible to estimate the GPC of wheat from multi-temporal remote sensing data over large areas.
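Two of the accuracy measures named above are standard and easy to state exactly: the determination coefficient (R²) and an average relative deviation. A minimal sketch with hypothetical protein values, not the study's data:

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

def mean_relative_deviation(observed, predicted):
    """Average absolute deviation as a percentage of the observed value."""
    return 100 * sum(abs(o - p) / o for o, p in zip(observed, predicted)) / len(observed)

# Hypothetical grain protein contents (%): measured vs. model-estimated.
obs = [13.2, 14.1, 12.8, 15.0, 13.7]
pred = [13.0, 14.4, 12.9, 14.6, 13.9]
print(round(r_squared(obs, pred), 3), round(mean_relative_deviation(obs, pred), 2))
```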
Flood disasters can have a serious impact on people's production and lives, and can cause huge losses of life and property. Based on multi-source remote sensing data, this study established decision tree classification rules through multi-source and multi-temporal feature fusion, classified ground objects before the disaster, and extracted flood information in the disaster area from optical images acquired during the disaster, so as to achieve rapid acquisition of the disaster situation of each disaster-bearing object. In the case of Qianliang Lake, which suffered from flooding in 2020, the results show that decision tree classification algorithms based on multi-temporal features can effectively integrate multi-temporal and multi-spectral information to overcome the shortcomings of single-temporal image classification and achieve ground object classification.
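Decision-tree classification rules of the kind described here are typically simple per-pixel threshold tests on spectral indices from before and during the event. A toy sketch; the index choices and thresholds below are illustrative assumptions, not the rules derived in the study:

```python
def classify_pixel(ndvi_pre, ndwi_during):
    """Toy decision-tree rules: pre-disaster NDVI separates vegetation from
    other land cover, and a high during-disaster water index (NDWI) flags
    inundated pixels. Thresholds are hypothetical."""
    if ndwi_during > 0.3:
        return "flooded"
    if ndvi_pre > 0.4:
        return "vegetation"
    return "built-up/bare"

print(classify_pixel(ndvi_pre=0.55, ndwi_during=0.45))  # a vegetated pixel now under water
```

Fusing the pre-disaster class with the during-disaster flood mask is what lets the method report which disaster-bearing objects (cropland, buildings, etc.) were affected.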
Landslide susceptibility maps (LSMs) are a crucial tool for managing landslide hazards and identifying potential landslide areas. However, current LSMs rely primarily on static landslide-related factors with little variation over several decades, thereby overlooking the movement of slopes and failing to capture landslide dynamics. The long-term ground deformation map (GDM) derived from multi-temporal interferometric synthetic aperture radar (MT-InSAR) can effectively address these shortcomings. Fengjie County is an important area for geohazard management in the Three Gorges Reservoir Area (TGRA), China. Landslides in this area cause significant socio-economic loss due to geological, tectonic, climatic, and anthropogenic factors. This research aims to integrate random forest (RF) with MT-InSAR to generate a landslide dynamic susceptibility map (LDSM) for Fengjie County, enhancing the reliability of landslide risk management. First, the RF model was employed to generate a static LSM, whereas MT-InSAR was utilized to obtain the GDM of the study area from January 2020 to June 2023. The static LSM and the GDM were subsequently integrated using a dynamic weight matrix to derive the LDSM. Our analysis covered a temporal framework spanning three years, focusing on spatiotemporal changes in landslide susceptibility levels and the influence of climate factors. Compared with the static LSM, the LDSM can promptly identify moving landslide areas, reduce high-susceptibility areas, and achieve greater accuracy. Moreover, the spatiotemporal changes in landslide susceptibility are regulated by total annual rainfall, with wet years being more conducive to landslides than dry years. The proposed LDSM offers useful insights for the dynamic prevention and refined management of landslide hazards in the TGRA, significantly enhancing resilience in this region.
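The abstract does not reproduce the dynamic weight matrix used to fuse the static LSM with the GDM. As a rough illustration of the idea only, one can blend a static susceptibility score with a normalized deformation rate so that actively moving slopes score higher than statically similar but stable ones; the weighting and normalization below are assumptions:

```python
def dynamic_susceptibility(static_score, deformation_mm_yr, weight=0.5, rate_cap=50.0):
    """Blend a static susceptibility score (0-1) with a ground deformation
    rate normalized by rate_cap. Illustrative only; the study's dynamic
    weight matrix is not specified in the abstract."""
    deformation_norm = min(abs(deformation_mm_yr), rate_cap) / rate_cap
    return (1 - weight) * static_score + weight * deformation_norm

# An actively moving slope outranks a statically similar but stable slope.
print(dynamic_susceptibility(0.5, 40.0) > dynamic_susceptibility(0.5, 2.0))
```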
As much of the world's landmass is covered by vegetation, taking phenology into account when performing land cover classification may yield more accurate maps. The availability of the no-cost Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI dataset, which provides high-quality continuous time series data, represents a potentially significant source of land cover information, especially for detecting natural forest distribution. This study assesses the advantage of MODIS 250 m Normalized Difference Vegetation Index (NDVI) multi-temporal imagery for detecting dense vegetation cover distribution in Java, and then for identifying the remaining natural forest in Java within that distribution. The results successfully demonstrated the contribution of MODIS 250 m NDVI to detecting the natural forest distribution on Java Island. The approach described herein provided classification accuracy comparable to that of maps derived from higher resolution data and is a viable alternative for regional or national classifications.
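The NDVI underlying this analysis is the standard normalized difference of near-infrared and red reflectance, NDVI = (NIR - Red) / (NIR + Red); the reflectance values below are illustrative:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in NIR and absorbs red light,
# so forested pixels yield high NDVI (reflectances are illustrative).
print(round(ndvi(nir=0.45, red=0.05), 2))  # dense canopy
print(round(ndvi(nir=0.20, red=0.15), 2))  # sparse cover
```

Tracking how this value rises and falls through the year is what lets a multi-temporal classifier exploit phenology: evergreen natural forest keeps a persistently high NDVI while crops and deciduous cover oscillate.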
Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big-data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
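The contrast between the bridge-table and array-based designs can be sketched in miniature; the fact and dimension names below are hypothetical, not the paper's schema:

```python
# Bridge-table style: an explicit link table between earthquake facts
# and, e.g., affected-region dimension members.
facts = {1: {"magnitude": 6.1}, 2: {"magnitude": 4.8}}
bridge = [(1, "region_a"), (1, "region_b"), (2, "region_b")]

def regions_bridge(fact_id):
    """Fact-centric lookup via the bridge: scan/join the link table."""
    return [dim for f, dim in bridge if f == fact_id]

# Array-based style: each fact row carries its dimension keys inline,
# so a fact-centric lookup needs no join through a bridge table.
facts_array = {
    1: {"magnitude": 6.1, "regions": ["region_a", "region_b"]},
    2: {"magnitude": 4.8, "regions": ["region_b"]},
}

def regions_array(fact_id):
    return facts_array[fact_id]["regions"]

print(regions_bridge(1) == regions_array(1))
```

The trade-off the paper measures falls out directly: a dimension-centric query ("all facts touching region_b") must scan every fact's array in the second design, which is why the proposed hybrid schema keeps both representations.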
Inspired by recent significant agricultural yield losses in eastern China and the absence of an operational monitoring system, we developed a comprehensive drought monitoring model to better understand the impact of the individual key factors contributing to this issue. The resulting model, the 'Humidity-calibrated Drought Condition Index' (HcDCI), was applied for the years 2001 to 2019 in the form of a case study of Weihai County, Shandong Province, in East China. Design and development are based on a linear combination of the Vegetation Condition Index (VCI), the Temperature Condition Index (TCI), and the Rainfall Condition Index (RCI) using multi-source satellite data to create a basic Drought Condition Index (DCI). VCI and TCI were derived from MODIS (Moderate Resolution Imaging Spectroradiometer) data, while precipitation was taken from CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) data. For accuracy, the decisive coefficients were determined from the relative humidity of soils at a depth of 10-20 cm in particular areas, collected by an agrometeorological ground station. The correlation between DCI and soil humidity was optimized with factors of 0.53, 0.33, and 0.14 for VCI, TCI, and RCI, respectively. The model revealed light agricultural droughts from 2003 to 2013 and in 2018, while more severe droughts occurred in 2001 and 2002, 2014-2017, and 2019. The droughts were most severe in January, March, and December, and our findings coincide with historical records. The average temperature during 2012-2019 was 1°C higher than that during 2001-2011, and the average precipitation during 2014-2019 was 192.77 mm less than that during 2008-2013. The spatio-temporal accuracy of the HcDCI model was positively validated by correlation with agricultural crop yield quantities. The model thus demonstrates its capability to reveal drought periods in detail, its transferability to other regions, and its usefulness for taking future measures.
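The linear combination at the core of the DCI, with the optimized coefficients reported above (0.53, 0.33, 0.14), can be written directly; the sketch assumes each component index is pre-scaled to [0, 1], and the sample values are hypothetical:

```python
def drought_condition_index(vci, tci, rci):
    """Linear combination with the coefficients reported in the abstract
    (0.53 VCI + 0.33 TCI + 0.14 RCI), assuming each index is in [0, 1]."""
    return 0.53 * vci + 0.33 * tci + 0.14 * rci

# Low vegetation, thermal, and rainfall condition indices -> low DCI (drier).
print(round(drought_condition_index(vci=0.2, tci=0.3, rci=0.4), 3))
```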
Real-world studies (RWSs) have emerged as a transformative force in oncology research, complementing traditional randomized controlled trials (RCTs) by providing comprehensive insights into cancer care within routine clinical settings. This review examines the evolving landscape of RWSs in oncology, focusing on their implementation, methodological considerations, and impact on precision medicine. We systematically analyze how RWSs leverage diverse data sources, including electronic health records (EHRs), insurance claims, and patient registries, to generate evidence that bridges the gap between controlled clinical trials and real-world clinical practice. The review underscores the key contributions of RWSs, including capturing therapeutic outcomes in traditionally underrepresented populations, expanding drug indications, and evaluating long-term safety and effectiveness in routine clinical settings. While acknowledging significant challenges, including data quality variability and privacy concerns, we discuss how emerging technologies like artificial intelligence are helping to address these limitations. The integration of RWSs with traditional clinical research is revolutionizing the paradigm of precision oncology and enabling more personalized treatment approaches based on real-world evidence.
Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone, and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices. In some growing-season months, correlation coefficients differed significantly, in both magnitude and sign, from those based on actual data. The selection of different alternative climate datasets can lead to biased results in assessing forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. Therefore, the scientific and rational selection of alternative climate data is essential for dendroecological and climatological research.
Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast and deep tissue penetration capability, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging results in significant data volume, limiting the data storage, transmission, and processing efficiency of the system. Therefore, there is an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a photoacoustic computed tomography 3D data compression method and system based on a Wavelet-Transformer. The method is built on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding. It combines the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers, achieving high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment using an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values of reconstructed images reached 38.60 and 0.9583, respectively, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost, enhancing computational efficiency and image quality. These advantages support the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
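The PSNR figure quoted above follows the standard definition, 10·log10(peak² / MSE). A minimal sketch on a toy normalized signal (the data below are illustrative, not the paper's reconstructions):

```python
import math

def psnr(original, reconstructed, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length signals."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Toy normalized signal and a slightly lossy reconstruction.
orig = [0.0, 0.5, 1.0, 0.5, 0.0]
recon = [0.01, 0.49, 0.98, 0.52, 0.0]
print(round(psnr(orig, recon), 1))
```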
Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-based encryption scheme for multi-chain collaboration. This scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
With the popularization of new technologies, telephone fraud has become the main means of stealing money and personal identity information. Taking inspiration from the website authentication mechanism, we propose an end-to-end data modem scheme that transmits the caller's digital certificates through a voice channel for the recipient to verify the caller's identity. Encoding useful information through voice channels is very difficult without the assistance of telecommunications providers. For example, speech activity detection may quickly classify encoded signals as non-speech signals and reject the input waveforms. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them more easily transmitted and decoded in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We have validated the feasibility and performance of our system through expanded real-world evaluations, demonstrating that it outperforms existing advanced methods in terms of robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for achieving strong caller authentication and preventing telephone fraud at its root cause.
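Encoding 3 bits per symbol means mapping each of 8 bit patterns to a distinct chirp. The abstract names three varied properties (frequency, shape, phase); the concrete mapping and waveform parameters below are illustrative assumptions, not the paper's design:

```python
import math

def lfm_symbol(bits, f0=1000.0, bandwidth=500.0, rate=8000, duration=0.02):
    """Map a 3-bit string to a linear-frequency-modulation (chirp) symbol by
    varying sweep direction, sweep width ('shape'), and initial phase.
    Mapping and parameters are hypothetical, chosen only for illustration."""
    direction = 1 if bits[0] == "1" else -1               # up- vs. down-chirp
    sweep = bandwidth * (1.0 if bits[1] == "1" else 0.5)  # full vs. half sweep
    phase0 = math.pi if bits[2] == "1" else 0.0           # initial phase
    k = direction * sweep / duration                      # chirp rate (Hz/s)
    n = int(rate * duration)
    return [math.sin(phase0 + 2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / rate for i in range(n))]

symbol = lfm_symbol("101")
print(len(symbol))  # samples per symbol at the chosen rate and duration
```

Because chirps occupy speech-band frequencies with speech-like spectral movement, they are less likely to be rejected by voice activity detection than fixed tones, which is the property the abstract exploits.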
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms: Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of the imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
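The first loss component, a masked mean squared error over missing entries, is simple to state exactly; the values below are hypothetical and the sketch omits the noise-aware and variance terms:

```python
def masked_mse(original, reconstruction, mask):
    """Mean squared error computed only over the masked (missing) entries,
    mirroring the 'guided, masked MSE' term described in the abstract."""
    errs = [(o - r) ** 2 for o, r, m in zip(original, reconstruction, mask) if m]
    return sum(errs) / len(errs)

# mask[i] is True where the value was missing and has been imputed.
truth = [1.0, 2.0, 3.0, 4.0]
imputed = [1.0, 2.5, 3.0, 3.5]
missing = [False, True, False, True]
print(masked_mse(truth, imputed, missing))
```

Restricting the penalty to masked positions keeps the autoencoder from being rewarded for trivially copying the observed entries, which is the point of guiding the loss.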
Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed-Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code-rate adjustment to preserve soundness while lowering communication. This paper formalizes opening consistency and proves security with bounded error in the random oracle model, giving polylogarithmic verifier queries and no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
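The "reaches confidence" claim rests on the generic DAS argument: with a rate-1/2 erasure code, data that cannot be reconstructed implies at least half the codeword symbols are withheld, so each uniformly random sample hits a missing symbol with probability at least 1/2. A sketch of that generic bound, not ISTIRDA's exact soundness analysis:

```python
def confidence(samples, hidden_fraction=0.5):
    """Probability that at least one of `samples` uniformly random queries
    hits a withheld symbol when the adversary hides `hidden_fraction` of
    them. Generic DAS reasoning, not ISTIRDA's exact soundness bound."""
    return 1 - (1 - hidden_fraction) ** samples

# With a rate-1/2 code, unavailable data means >= half the symbols are
# withheld, so 20 samples already give roughly one-in-a-million failure odds.
print(confidence(20) > 0.999999)
```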
With the accelerating aging of China's population, demand for community elderly care services has shown diversified and personalized characteristics. However, problems such as an insufficient total of care service resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core advantages in data collection, analysis and mining, and accurate prediction, provides a new solution for the allocation of community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimal allocation strategies such as building a big data analysis platform and accurately grasping the elderly's care needs, striving to provide operable path references for the construction of community elderly care service systems, promoting the early realization of the elderly care service goal of 'adequate support and proper care for the elderly', and boosting the high-quality development of China's elderly care service industry.
Funding (sugarcane classification study): financially supported by Mahasarakham University.
Funding (wheat grain protein study): the National Natural Science Foundation of China (41171281, 40701120) and the Beijing Nova Program, China (2008B33).
Abstract: Flood disasters can seriously disrupt people's production and lives and cause huge losses of life and property. Based on multi-source remote sensing data, this study established decision tree classification rules through multi-source and multi-temporal feature fusion, classified ground objects before the disaster, and extracted flood information in the disaster area from optical images acquired during the disaster, so as to rapidly assess the situation of each disaster-bearing object. In the case of Qianliang Lake, which suffered from flooding in 2020, the results show that decision tree classification algorithms based on multi-temporal features can effectively integrate multi-temporal and multi-spectral information, overcome the shortcomings of single-temporal image classification, and achieve accurate ground object classification.
Funding: supported by the National Science Fund for Distinguished Young Scholars (Grant No. 42225702) and the Marie Skłodowska-Curie Actions (MSCA) UPGRADE (mUltiscale IoT equipPed lonG linear infRastructure resilience built and sustAinable DevelopmEnt) project, HORIZON-MSCA-2022-SE-01 (Grant No. 101131146).
Abstract: A landslide susceptibility map (LSM) is a crucial tool for managing landslide hazards and identifying potential landslide areas. However, current LSMs rely primarily on static landslide-related factors that vary little over several decades, thereby overlooking slope movement and failing to capture landslide dynamics. The long-term ground deformation map (GDM) derived from multi-temporal interferometric synthetic aperture radar (MT-InSAR) can effectively address these shortcomings. Fengjie County is an important area for geohazard management in the Three Gorges Reservoir Area (TGRA), China; landslides in this area cause significant socio-economic losses driven by geological, tectonic, climatic, and anthropogenic factors. This research integrates random forest (RF) with MT-InSAR to generate a landslide dynamic susceptibility map (LDSM) for Fengjie County, enhancing the reliability of landslide risk management. First, the RF model was employed to generate a static LSM, whereas MT-InSAR was utilized to obtain the GDM of the study area from January 2020 to June 2023. The static LSM and the GDM were subsequently integrated using a dynamic weight matrix to derive the LDSM. Our analysis covered a temporal framework spanning three years, focusing on spatiotemporal changes in landslide susceptibility levels and the influence of climatic factors. Compared with the static LSM, the LDSM can promptly identify moving landslide areas, reduce the extent of high-susceptibility areas, and achieve greater accuracy. Moreover, the spatiotemporal changes in landslide susceptibility are regulated by total annual rainfall, with wet years being more conducive to landslides than dry years. The proposed LDSM offers useful insights for the dynamic prevention and refined management of landslide hazards in the TGRA, significantly enhancing the resilience of this region.
Abstract: As much of the world's landmass is covered by vegetation, taking phenology into account when performing land cover classification may yield more accurate maps. The freely available Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI dataset, which provides high-quality continuous time series data, represents a potentially significant source of land cover information, especially for detecting natural forest distribution. This study assesses the advantage of MODIS 250 m Normalized Difference Vegetation Index (NDVI) multi-temporal imagery for mapping dense vegetation cover in Java and then for identifying the remaining natural forest within that cover. The results successfully demonstrate the contribution of MODIS 250 m NDVI to detecting natural forest distribution on Java Island. The approach described herein provided classification accuracy comparable to that of maps derived from higher-resolution data and is a viable alternative for regional or national classifications.
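The NDVI underlying this analysis is the standard normalized ratio of near-infrared to red reflectance, NDVI = (NIR − Red) / (NIR + Red). A minimal sketch with hypothetical reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel from surface
# reflectance. The reflectance values below are hypothetical examples,
# not taken from the MODIS product.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Dense natural forest typically shows high NDVI; bare soil stays low.
forest = ndvi(nir=0.45, red=0.05)   # -> 0.8
soil = ndvi(nir=0.25, red=0.18)     # -> ~0.16
print(round(forest, 2), round(soil, 2))
```

Tracking this index through a year of 250 m composites is what lets phenology separate evergreen natural forest from seasonal cover types.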
Abstract: Earthquakes are highly destructive spatio-temporal phenomena whose analysis is essential for disaster preparedness and risk mitigation. Modern seismological research produces vast volumes of heterogeneous data from seismic networks, satellite observations, and geospatial repositories, creating the need for scalable infrastructures capable of integrating and analyzing such data to support intelligent decision-making. Data warehousing technologies provide a robust foundation for this purpose; however, existing earthquake-oriented data warehouses remain limited, often relying on simplified schemas, domain-specific analytics, or cataloguing efforts. This paper presents the design and implementation of a spatio-temporal data warehouse for seismic activity. The framework integrates spatial and temporal dimensions in a unified schema and introduces a novel array-based approach for managing many-to-many relationships between facts and dimensions without intermediate bridge tables. A comparative evaluation against a conventional bridge-table schema demonstrates that the array-based design improves fact-centric query performance, while the bridge-table schema remains advantageous for dimension-centric queries. To reconcile these trade-offs, a hybrid schema is proposed that retains both representations, ensuring balanced efficiency across heterogeneous workloads. The proposed framework demonstrates how spatio-temporal data warehousing can address schema complexity, improve query performance, and support multidimensional visualization. In doing so, it provides a foundation for integrating seismic analysis into broader big data-driven intelligent decision systems for disaster resilience, risk mitigation, and emergency management.
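For context, the conventional bridge-table baseline that the array-based design is compared against can be sketched in a few lines of SQL. All table names, columns, and rows below are invented for illustration; the paper's array-based alternative would replace the bridge table with an array-valued column on the fact table:

```python
import sqlite3

# A minimal bridge-table schema for a many-to-many link between an
# earthquake fact table and an affected-region dimension. Names and
# rows are hypothetical; this illustrates only the baseline schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_event (event_id INTEGER PRIMARY KEY, magnitude REAL);
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE bridge_event_region (event_id INTEGER, region_id INTEGER);
""")
con.executemany("INSERT INTO fact_event VALUES (?, ?)",
                [(1, 6.4), (2, 5.1)])
con.executemany("INSERT INTO dim_region VALUES (?, ?)",
                [(10, "Region A"), (11, "Region B")])
con.executemany("INSERT INTO bridge_event_region VALUES (?, ?)",
                [(1, 10), (1, 11), (2, 11)])

# Dimension-centric query: finding all events touching Region B needs
# two joins through the bridge table.
rows = con.execute("""
SELECT f.event_id, f.magnitude
FROM fact_event f
JOIN bridge_event_region b ON f.event_id = b.event_id
JOIN dim_region d ON d.region_id = b.region_id
WHERE d.name = 'Region B'
ORDER BY f.event_id
""").fetchall()
print(rows)  # [(1, 6.4), (2, 5.1)]
```

The extra join per many-to-many relationship is exactly the overhead the array-based design avoids for fact-centric queries.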
Funding: Under the auspices of the Shenzhen Science and Technology Program (No. KQTD20180410161218820) and the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515012600).
Abstract: Inspired by recent significant agricultural yield losses in eastern China and the lack of an operational monitoring system, we developed a comprehensive drought monitoring model to better understand the impact of the individual key factors contributing to drought. The resulting model, the Humidity-calibrated Drought Condition Index (HcDCI), was applied to Weihai County, Shandong Province in East China, as a case study for the years 2001 to 2019. The design is based on a linear combination of the Vegetation Condition Index (VCI), the Temperature Condition Index (TCI), and the Rainfall Condition Index (RCI), using multi-source satellite data to create a basic Drought Condition Index (DCI). VCI and TCI were derived from MODIS (Moderate Resolution Imaging Spectroradiometer) data, while precipitation was taken from CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data). For accuracy, the combination coefficients were determined from the relative soil humidity at a depth of 10-20 cm, collected by an agrometeorological ground station. The correlation between DCI and soil humidity was optimized with factors of 0.53, 0.33, and 0.14 for VCI, TCI, and RCI, respectively. The model revealed light agricultural droughts from 2003 to 2013 and in 2018, while more severe droughts occurred in 2001 and 2002, 2014-2017, and 2019. The droughts were most severe in January, March, and December, and our findings coincide with historical records. The average temperature during 2012-2019 was 1℃ higher than during 2001-2011, and the average precipitation during 2014-2019 was 192.77 mm less than during 2008-2013. The spatio-temporal accuracy of the HcDCI model was positively validated by correlation with agricultural crop yields. The model thus demonstrates its capability to reveal drought periods in detail, its transferability to other regions, and its usefulness for planning future measures.
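The linear combination at the core of the model, with the coefficients reported in the abstract, can be written directly. The sample index values are hypothetical; only the 0.53/0.33/0.14 weights come from the text:

```python
# Basic Drought Condition Index as the linear combination reported in
# the abstract: DCI = 0.53*VCI + 0.33*TCI + 0.14*RCI.
# Component indices are assumed scaled 0 (driest) to 1 (wettest); the
# sample values below are hypothetical.
def dci(vci, tci, rci):
    return 0.53 * vci + 0.33 * tci + 0.14 * rci

print(round(dci(vci=0.30, tci=0.45, rci=0.20), 4))  # a fairly dry case
```

Because the weights sum to 1, the DCI stays on the same 0-1 scale as its components, which is what makes the soil-humidity calibration straightforward.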
Funding: supported by the Zhejiang Provincial Natural Science Foundation (No. ZCLY24H1601); the National Natural Science Foundation of China (No. 82403697); the Medical and Health Science and Technology Project of Zhejiang Province (No. 2025KY411); and the National Key R&D Program of China (No. 2022YFC2505100).
Abstract: Real-world studies (RWSs) have emerged as a transformative force in oncology research, complementing traditional randomized controlled trials (RCTs) by providing comprehensive insights into cancer care within routine clinical settings. This review examines the evolving landscape of RWSs in oncology, focusing on their implementation, methodological considerations, and impact on precision medicine. We systematically analyze how RWSs leverage diverse data sources, including electronic health records (EHRs), insurance claims, and patient registries, to generate evidence that bridges the gap between controlled clinical trials and real-world clinical practice. The review underscores the key contributions of RWSs, including capturing therapeutic outcomes in traditionally underrepresented populations, expanding drug indications, and evaluating long-term safety and effectiveness in routine clinical settings. While acknowledging significant challenges, including data quality variability and privacy concerns, we discuss how emerging technologies like artificial intelligence are helping to address these limitations. The integration of RWSs with traditional clinical research is revolutionizing the paradigm of precision oncology and enabling more personalized treatment approaches based on real-world evidence.
Funding: supported by the International Partnership Program of the Chinese Academy of Sciences (170GJHZ2023074GC); the National Natural Science Foundation of China (42425706 and 42488201); the National Key Research and Development Program of China (2024YFF0807902); the Beijing Natural Science Foundation (8242041); and the China Postdoctoral Science Foundation (2025M770353).
Abstract: Accurately assessing the relationship between tree growth and climatic factors is of great importance in dendrochronology. This study evaluated the consistency between alternative climate datasets (including station and gridded data) and actual climate data (fixed-point observations near the sampling sites) in northeastern China's warm temperate zone and analyzed differences in their correlations with the tree-ring width index. The results were: (1) Gridded temperature data, as well as precipitation and relative humidity data from the Huailai meteorological station, were more consistent with the actual climate data; in contrast, gridded soil moisture content data showed significant discrepancies. (2) Horizontal distance had a greater impact on the representativeness of actual climate conditions than vertical elevation differences. (3) Differences in consistency between alternative and actual climate data also affected their correlations with tree-ring width indices: in some growing-season months, correlation coefficients differed significantly, in both magnitude and sign, from those based on actual data. The selection of different alternative climate datasets can therefore bias assessments of forest responses to climate change, which is detrimental to the management of forest ecosystems in harsh environments. The scientific and rational selection of alternative climate data is thus essential for dendroecological and climatological research.
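The correlations referred to above are Pearson correlation coefficients between a climate series and the ring-width index. A self-contained sketch with two short hypothetical series:

```python
import math

# Pearson correlation between a climate series and a tree-ring width
# index, the statistic behind the consistency comparisons described
# above. Both series below are hypothetical.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ring_width = [0.9, 1.1, 1.0, 1.3, 0.8]        # standardized index
station_precip = [410, 520, 470, 600, 380]    # mm, hypothetical
print(round(pearson(ring_width, station_precip), 3))
```

A sign flip in this coefficient between two climate datasets, as point (3) describes, would invert the inferred growth-climate relationship entirely.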
Funding: supported by the National Key R&D Program of China [Grant No. 2023YFF0713600]; the National Natural Science Foundation of China [Grant No. 62275062]; the Project of Shandong Innovation and Startup Community of High-end Medical Apparatus and Instruments [Grant Nos. 2023-SGTTXM-002 and 2024-SGTTXM-005]; the Shandong Province Technology Innovation Guidance Plan (Central Leading Local Science and Technology Development Fund) [Grant No. YDZX2023115]; the Taishan Scholar Special Funding Project of Shandong Province; and the Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai [Grant No. ZL202402].
Abstract: Photoacoustic computed tomography is a novel imaging technique that combines high absorption contrast with deep tissue penetration, enabling comprehensive three-dimensional imaging of biological targets. However, the increasing demand for higher resolution and real-time imaging produces significant data volumes, limiting the system's data storage, transmission, and processing efficiency. There is therefore an urgent need for an effective method to compress the raw data without compromising image quality. This paper presents a 3D data compression method and system for photoacoustic computed tomography based on a Wavelet-Transformer. The method builds on a cooperative compression framework that integrates wavelet hard coding with deep learning-based soft decoding, combining the multiscale analysis capability of wavelet transforms with the global feature modeling advantage of Transformers to achieve high-quality data compression and reconstruction. Experimental results using k-Wave simulation suggest that the proposed compression system has advantages under extreme compression conditions, achieving a raw data compression ratio of up to 1:40. Furthermore, a three-dimensional data compression experiment on an in vivo mouse demonstrated that the maximum peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the reconstructed images reached 38.60 and 0.9583, effectively overcoming the detail loss and artifacts introduced by raw data compression. All the results suggest that the proposed system can significantly reduce storage requirements and hardware cost while enhancing computational efficiency and image quality. These advantages support the development of photoacoustic computed tomography toward higher efficiency, real-time performance, and intelligent functionality.
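The PSNR figure quoted above follows the standard definition PSNR = 10·log10(MAX² / MSE). A minimal sketch with hypothetical 8-bit pixel values:

```python
import math

# Peak signal-to-noise ratio (PSNR), the reconstruction-quality metric
# quoted in the abstract: PSNR = 10*log10(MAX^2 / MSE). The pixel
# arrays below are hypothetical, not the paper's images.
def psnr(reference, reconstructed, max_value=255.0):
    mse = sum((r - c) ** 2
              for r, c in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / mse)

ref = [52, 60, 71, 80, 95]
rec = [50, 62, 70, 83, 94]
print(round(psnr(ref, rec), 2))
```

On this scale, the reported 38.60 dB indicates reconstruction error far below typical visual-perception thresholds for 8-bit imagery.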
Abstract: Amid the increasing demand for data sharing, the need for flexible, secure, and auditable access control mechanisms has garnered significant attention in the academic community. However, blockchain-based ciphertext-policy attribute-based encryption (CP-ABE) schemes still face cumbersome ciphertext re-encryption and insufficient oversight when handling dynamic attribute changes and cross-chain collaboration. To address these issues, we propose a dynamic-permission attribute-based encryption scheme for multi-chain collaboration. The scheme incorporates a multi-authority architecture for distributed attribute management and integrates an attribute revocation and granting mechanism that eliminates the need for ciphertext re-encryption, effectively reducing both computational and communication overhead. It leverages the InterPlanetary File System (IPFS) for off-chain data storage and constructs a cross-chain regulatory framework, comprising a Hyperledger Fabric business chain and a FISCO BCOS regulatory chain, to record changes in decryption privileges and access behaviors in an auditable manner. Security analysis shows selective indistinguishability under chosen-plaintext attack (sIND-CPA) under the decisional q-Parallel Bilinear Diffie-Hellman Exponent (q-PBDHE) assumption. In the performance and experimental evaluations, we compared the proposed scheme with several advanced schemes. The results show that, while preserving security, the proposed scheme achieves higher encryption/decryption efficiency and lower storage overhead for ciphertexts and keys.
Abstract: With the popularization of new technologies, telephone fraud has become a primary means of stealing money and personal identity information. Taking inspiration from website authentication mechanisms, we propose an end-to-end data modem scheme that transmits the caller's digital certificates through the voice channel so that the recipient can verify the caller's identity. Encoding useful information in voice channels is very difficult without the assistance of telecommunications providers; for example, voice activity detection may quickly classify encoded signals as non-speech and reject the input waveform. To address this issue, we propose a novel modulation method based on linear frequency modulation that encodes 3 bits per symbol by varying its frequency, shape, and phase, alongside a lightweight MobileNetV3-Small-based demodulator for efficient and accurate signal decoding on resource-constrained devices. This method leverages the unique characteristics of linear frequency modulation signals, making them easier to transmit and decode in speech channels. To ensure reliable data delivery over unstable voice links, we further introduce a robust framing scheme with delimiter-based synchronization, a sample-level position remedying algorithm, and a feedback-driven retransmission mechanism. We validated the feasibility and performance of our system through extensive real-world evaluations, demonstrating that it outperforms existing advanced methods in robustness and data transfer rate. This technology establishes the foundational infrastructure for reliable certificate delivery over voice channels, which is crucial for strong caller authentication and for preventing telephone fraud at its root.
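A linear-frequency-modulated (chirp) symbol is a sinusoid whose instantaneous frequency sweeps linearly over the symbol duration. The sketch below loosely follows the 3-bits-per-symbol idea: one bit selects sweep direction, one a frequency band, one the initial phase. The bit-to-parameter mapping, bands, and rates are all invented for illustration, not the paper's actual design:

```python
import math

# Hypothetical LFM (chirp) symbol generator. Phase is the integral of
# the instantaneous frequency f(t) = f0 + k*t, giving
# phi(t) = phase0 + 2*pi*(f0*t + 0.5*k*t^2).
def lfm_symbol(bits, sample_rate=8000, duration=0.05):
    up = bits[0] == 1                                   # sweep direction bit
    f0, f1 = (500.0, 1500.0) if bits[1] else (1500.0, 2500.0)  # band bit
    phase0 = math.pi if bits[2] else 0.0                # initial-phase bit
    if not up:
        f0, f1 = f1, f0
    k = (f1 - f0) / duration                            # chirp rate, Hz/s
    n = int(sample_rate * duration)
    return [math.sin(phase0 + 2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / sample_rate for i in range(n))]

sym = lfm_symbol([1, 0, 1])
print(len(sym))  # 400 samples for a 50 ms symbol at 8 kHz
```

Chirps of this kind concentrate energy in the speech band while remaining distinguishable after lossy voice codecs, which is presumably why the authors favor LFM over plain tones.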
Abstract: Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where it often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that the proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component of the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area-under-the-curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
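The three-part composite loss can be sketched numerically. The masked-MSE term follows the abstract directly; the exact forms of the noise-aware term and the variance penalty, and all weighting factors, are assumptions made here for illustration only:

```python
# Sketch of a three-term composite imputation loss: (i) masked MSE on
# missing entries, (ii) a noise-aware term (assumed: closeness to the
# corrupted input on observed positions), (iii) a variance term
# (assumed: rewarding non-collapsed reconstructions). The term forms
# and weights lam_noise/lam_var are illustrative assumptions, not the
# paper's definitions.
def composite_loss(truth, recon, mask, noisy_input,
                   lam_noise=0.1, lam_var=0.01):
    missing = [i for i, m in enumerate(mask) if m == 0]  # mask 0 = missing
    # (i) guided, masked MSE: error on missing entries only
    masked_mse = sum((truth[i] - recon[i]) ** 2
                     for i in missing) / max(len(missing), 1)
    # (ii) noise-aware regularization over all entries
    noise_term = sum((recon[i] - noisy_input[i]) ** 2
                     for i in range(len(recon))) / len(recon)
    # (iii) variance term: discourage collapsed, constant reconstructions
    mean_r = sum(recon) / len(recon)
    var = sum((r - mean_r) ** 2 for r in recon) / len(recon)
    return masked_mse + lam_noise * noise_term - lam_var * var

truth = [1.0, 2.0, 3.0, 4.0]
recon = [1.1, 2.2, 2.9, 3.8]
mask = [1, 0, 1, 0]            # positions 1 and 3 were missing
noisy = [1.0, 0.0, 3.1, 0.0]   # missing entries zero-filled
print(round(composite_loss(truth, recon, mask, noisy), 4))
```

During training such a scalar would be minimized by gradient descent over the autoencoder's parameters; the ablation study mentioned above would correspond to zeroing one weight at a time.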
Funding: supported in part by the Research Fund of the Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education (EBME25-F-08).
Abstract: Lightweight nodes are crucial for blockchain scalability, but verifying the availability of complete block data puts significant strain on bandwidth and latency. Existing data availability sampling (DAS) schemes either require trusted setups or suffer from high communication overhead and low verification efficiency. This paper presents ISTIRDA, a DAS scheme that lets light clients certify availability by sampling small random codeword symbols. Built on ISTIR, an improved Reed-Solomon interactive oracle proof of proximity, ISTIRDA combines adaptive folding with dynamic code rate adjustment to preserve soundness while lowering communication. The paper formalizes opening consistency and proves security with bounded error in the random oracle model, achieving polylogarithmic verifier queries with no trusted setup. In a prototype compared with FRIDA under equal soundness, ISTIRDA reduces communication by 40.65% to 80%. For data larger than 16 MB, ISTIRDA verifies faster and the advantage widens; at 128 MB, proofs are about 60% smaller and verification time is roughly 25% shorter, while prover overhead remains modest. In peer-to-peer emulation under injected latency and loss, ISTIRDA reaches confidence more quickly and is less sensitive to packet loss and load. These results indicate that ISTIRDA is a scalable and provably secure DAS scheme suitable for high-throughput, large-block public blockchains, substantially easing bandwidth and latency pressure on lightweight nodes.
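Why a handful of random samples suffices for availability is standard DAS reasoning: with an erasure code of rate r, a block that cannot be reconstructed must be missing more than a (1 − r) fraction of codeword symbols, so each uniform sample catches the withholding with probability at least 1 − r. This back-of-the-envelope bound is generic, not ISTIRDA's exact soundness analysis:

```python
# Worst-case probability that a withholding adversary evades k uniform
# random samples, for an erasure code of rate `rate`: each sample
# misses a withheld symbol with probability at most `rate`.
def evasion_probability(rate, samples):
    return rate ** samples

# With a rate-1/2 Reed-Solomon code, 30 samples drive the evasion
# probability below one in a billion.
p = evasion_probability(rate=0.5, samples=30)
print(p < 1e-9)  # True
```

This exponential decay in the number of samples is what makes light-client verification cheap regardless of block size; ISTIRDA's dynamic rate adjustment trades this per-sample confidence against proof size.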
Abstract: With the accelerating aging of China's population, demand for community elderly care services has become diversified and personalized. However, problems such as an insufficient total supply of care resources, uneven distribution, and prominent supply-demand contradictions have seriously affected service quality. Big data technology, with core advantages in data collection, analysis and mining, and accurate prediction, provides a new solution for allocating community elderly care service resources. This paper systematically studies the application value of big data technology in the allocation of community elderly care service resources from three aspects: resource allocation efficiency, service accuracy, and management intelligence. Combined with practical needs, it proposes optimization strategies such as building a big data analysis platform to accurately grasp the elderly's care needs, aiming to provide operable path references for the construction of community elderly care service systems, to advance the goal of "adequate support and proper care for the elderly", and to boost the high-quality development of China's elderly care service industry.