Abstract: Peer-to-peer (P2P) networking is a distributed architecture that partitions tasks or data between peer nodes. In this paper, a Hypercube Sequential Matrix Partition (HS-MP) framework for efficient data sharing in P2P networks using a tokenizer method is proposed to resolve the problems of large P2P networks. The availability of data is first measured by the tokenizer using Dynamic Hypercube Organization, which efficiently coordinates and assists the peers in the P2P network, ensuring data availability at many locations. Each data item in a peer is then assigned a valid ID by the tokenizer using the Sequential Self-Organizing (SSO) ID generation model. This ensures data sharing with other nodes in a large P2P network within a minimum time interval, which is obtained through the proximity of data availability. To validate the HS-MP framework, its performance is evaluated using traffic traces collected from data sharing applications. Simulations conducted using Network Simulator-2 show that the proposed framework outperforms conventional streaming models. The performance of the proposed system is analyzed in terms of energy consumption, average latency, and average data availability rate with respect to the number of peer nodes, data size, amount of data shared, and execution time. The proposed method reduces energy consumption by 43.35% for transpose traffic, 35.29% for bitrev traffic, and 25% for bitcomp traffic patterns.
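The abstract does not spell out how the Dynamic Hypercube Organization arranges peers. Purely as an illustration of the structure such an overlay relies on, the following Python sketch (with assumed peer IDs, dimension, and routing rule, none of which come from the paper) shows how peers with binary IDs can find their hypercube neighbours and greedily route toward a target in at most d hops.

```python
# Illustrative sketch only: neighbour computation and greedy routing in a
# d-dimensional hypercube overlay. Peer IDs, the dimension d, and the routing
# step are assumptions for illustration; they are not taken from the HS-MP paper.

def hypercube_neighbours(peer_id: int, d: int) -> list[int]:
    """Return the d peers whose IDs differ from peer_id in exactly one bit."""
    return [peer_id ^ (1 << bit) for bit in range(d)]

def next_hop(current: int, target: int) -> int:
    """Greedy routing: flip the lowest-order bit in which current and target differ."""
    diff = current ^ target
    lowest = diff & -diff          # isolate the lowest set bit
    return current ^ lowest

if __name__ == "__main__":
    d = 4                          # 16-peer hypercube (assumed size)
    print(hypercube_neighbours(0b0101, d))   # [4, 7, 1, 13]
    hop = 0b0000
    while hop != 0b1011:           # route from peer 0 to peer 11
        hop = next_hop(hop, 0b1011)
        print(f"forward to {hop:04b}")
```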
Abstract: Six national-scale, or near national-scale, geochemical data sets for soils or stream sediments exist for the United States. The earliest of these, here termed the 'Shacklette' data set, was generated by a U.S. Geological Survey (USGS) project conducted from 1961 to 1975. This project used soil collected from a depth of about 20 cm as the sampling medium at 1323 sites throughout the conterminous U.S. The National Uranium Resource Evaluation Hydrogeochemical and Stream Sediment Reconnaissance (NURE-HSSR) Program of the U.S. Department of Energy was conducted from 1975 to 1984 and collected either stream sediments, lake sediments, or soils at more than 378,000 sites in both the conterminous U.S. and Alaska. The sampled area represented about 65% of the nation. The Natural Resources Conservation Service (NRCS), from 1978 to 1982, collected samples from multiple soil horizons at sites within the major crop-growing regions of the conterminous U.S. This data set contains analyses of more than 3000 samples. The National Geochemical Survey, a USGS project conducted from 1997 to 2009, used a subset of the NURE-HSSR archival samples as its starting point and then collected primarily stream sediments, with occasional soils, in the parts of the U.S. not covered by the NURE-HSSR Program. This data set contains chemical analyses for more than 70,000 samples. The USGS, in collaboration with the Mexican Geological Survey and the Geological Survey of Canada, initiated soil sampling for the North American Soil Geochemical Landscapes Project in 2007. Sampling of three horizons or depths at more than 4800 sites in the U.S. was completed in 2010, and chemical analyses are currently ongoing. The NRCS initiated a project in the 1990s to analyze the various soil horizons from selected pedons throughout the U.S. This data set currently contains data from more than 1400 sites. This paper (1) discusses each data set in terms of its purpose, sample collection protocols, and analytical methods; and (2) evaluates each data set in terms of its appropriateness as a national-scale geochemical database and its usefulness for national-scale geochemical mapping.
Abstract: The accuracy and repeatability of the laser interferometer measurement system (LIMS) are often limited by the mirror surface error that comes from the mirror surface shape and distortion. This paper describes a new method to calibrate the mirror map on an ultraprecise movement stage (UPMS) with nanopositioning and to make a real-time compensation for the mirror surface error by using mirror map data tables with a software algorithm. Based on the mirror map test model, the factors affecting the mirror map are analyzed through a geometric method on the UPMS with six degrees of freedom. Data processing methods including spline interpolation and spline offsets are used to process the raw sampling data to build the mirror map tables. Linear interpolation is adopted as the compensation method to make real-time corrections for stage mirror unflatness, and the correction formulas are presented. In this way, the measurement accuracy of the system is improved markedly from 40 nm to 5 nm.
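The paper's correction formulas are not reproduced in this abstract. As a rough sketch only, the following Python snippet (with made-up positions, unflatness values, and units) illustrates the general idea of looking up a pre-built mirror-map table and linearly interpolating between the two nearest calibration points to correct a raw interferometer reading in real time.

```python
# Illustrative sketch: real-time compensation of a position reading using a
# mirror-map table and linear interpolation. Table contents, units, and the
# sign convention are assumptions, not values from the paper.
import bisect

# Mirror-map table: stage position (mm) -> measured mirror unflatness (nm).
POSITIONS_MM = [0.0, 10.0, 20.0, 30.0, 40.0]
UNFLATNESS_NM = [0.0, 3.2, 5.1, 4.4, 2.0]

def mirror_error(pos_mm: float) -> float:
    """Linearly interpolate the mirror unflatness at an arbitrary stage position."""
    pos_mm = min(max(pos_mm, POSITIONS_MM[0]), POSITIONS_MM[-1])  # clamp to table range
    i = bisect.bisect_right(POSITIONS_MM, pos_mm)
    if i >= len(POSITIONS_MM):
        return UNFLATNESS_NM[-1]
    x0, x1 = POSITIONS_MM[i - 1], POSITIONS_MM[i]
    y0, y1 = UNFLATNESS_NM[i - 1], UNFLATNESS_NM[i]
    t = (pos_mm - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def compensate(raw_reading_nm: float, pos_mm: float) -> float:
    """Subtract the interpolated mirror error from the interferometer reading."""
    return raw_reading_nm - mirror_error(pos_mm)

print(compensate(1_000_000.0, 12.5))  # reading at 12.5 mm, corrected by ~3.7 nm
```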
Funding: Supported by the Key Research Base of Philosophy and Social Sciences in Sichuan Province - Modern Design and Culture Research Center (Grant No. MD24C003), the National Natural Science Foundation of China (Grant No. 52308084), the China Postdoctoral Science Foundation (Grant No. 2022M712877), and the Key R&D and Promotion Projects of Henan Province (Grant No. 222102110125).
Abstract: Due to the widespread use of navigational satellites, the ubiquity of mobile phones, and the rapid advancement of mobile communication technologies, high-precision mobile phone signaling data (HMPSD) holds exceptional promise for discerning fine-grained characteristics of residents' travel behaviors, owing to its superior spatial and temporal resolution. This study focuses on identifying the most consistent commuting patterns of residents in the Qiaoxi District of Shijiazhuang, China, over the course of a month, using these patterns as the basis for transport mode identification. Leveraging the high-precision geographical coordinates of individuals' workplaces and homes, along with actual commuting durations derived from the high-frequency positioning of HMPSD, and comparing these with the predicted commuting durations for four transport modes from a navigational map, we have developed a novel approach for identifying individual transport modes, incorporating time matching, frequency ranking, and speed threshold assessments. This approach swiftly and effectively identifies the commuting modes for each resident, namely driving, public transportation, walking, bicycling, and electric biking, along with their respective commuting distances and durations. Furthermore, to support urban planning and transportation management efforts, we aggregated individual commuting data, including flows, modes, distances, and durations, at a parcel level. This aggregation method effectively reveals favorable commuting characteristics within the central area of Qiaoxi District, highlights the commuting needs and irrational commuting conditions in peripheral parcels, and informs tailored strategies for adjusting planning layouts and optimizing facility configurations. This study facilitates an in-depth exploration of fine-grained travel patterns through integrated air-land transportation resources, providing new insights and methodologies for refined urban transportation planning and travel management through advanced data applications and identification methods.
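The full decision logic (time matching, frequency ranking, and speed thresholds) is not specified in the abstract. Purely as an illustration, the Python sketch below assumes hypothetical speed thresholds and navigation-map durations and shows one plausible way to pick the transport mode whose predicted commuting duration best matches the observed one, subject to a speed plausibility check; it is not the study's actual algorithm.

```python
# Illustrative sketch of mode identification by duration matching plus a speed
# threshold check. The thresholds, mode list, and matching rule are hypothetical
# assumptions; they are not the values used in the study.

# Plausible average-speed ranges per mode, km/h (assumed for illustration).
SPEED_RANGES = {
    "walking":        (2, 7),
    "bicycling":      (8, 20),
    "electric_bike":  (15, 30),
    "public_transit": (10, 35),
    "driving":        (15, 80),
}

def identify_mode(distance_km: float,
                  observed_minutes: float,
                  predicted_minutes: dict[str, float]) -> str:
    """Pick the mode whose map-predicted duration is closest to the observed
    duration, keeping only modes whose implied speed is plausible."""
    observed_speed = distance_km / (observed_minutes / 60.0)
    candidates = {
        mode: pred for mode, pred in predicted_minutes.items()
        if SPEED_RANGES[mode][0] <= observed_speed <= SPEED_RANGES[mode][1]
    }
    if not candidates:                      # fall back to all modes if none pass
        candidates = predicted_minutes
    return min(candidates, key=lambda m: abs(candidates[m] - observed_minutes))

# Hypothetical commute: 6.2 km covered in 24 minutes, with map-predicted durations.
print(identify_mode(6.2, 24.0,
                    {"driving": 15.0, "public_transit": 28.0,
                     "walking": 75.0, "bicycling": 26.0, "electric_bike": 20.0}))
# -> "bicycling": implied speed 15.5 km/h, predicted 26 min closest to observed 24 min
```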
Funding: Supported by the National Natural Science Foundation of China (41130530, 91325301, 41431177, 41571212, 41401237), the Project of "One-Three-Five" Strategic Planning & Frontier Sciences of the Institute of Soil Science, Chinese Academy of Sciences (ISSASIP1622), the Government Interest Related Program between the Canadian Space Agency and Agriculture and Agri-Food Canada (13MOA01002), and the Natural Science Research Program of Jiangsu Province (14KJA170001).
Abstract: Conventional soil maps generally contain one or more soil types within a single soil polygon. But their geographic locations within the polygon are not specified. This restricts current applications of the maps in site-specific agricultural management and environmental modelling. We examined the utility of legacy pedon data for disaggregating soil polygons and the effectiveness of similarity-based prediction for making use of the under- or over-sampled legacy pedon data for the disaggregation. The method consisted of three steps. First, environmental similarities between the pedon sites and each location were computed based on soil formative environmental factors. Second, according to soil types of the pedon sites, the similarities were aggregated to derive a similarity distribution for each soil type. Third, a hardening process was performed on the maps to allocate candidate soil types within the polygons. The study was conducted at the soil subgroup level in a semi-arid area situated in Manitoba, Canada. Based on 186 independent pedon sites, the evaluation of the disaggregated map of soil subgroups showed an overall accuracy of 67% and a Kappa statistic of 0.62. The map represented a better spatial pattern of soil subgroups in both detail and accuracy compared to a dominant soil subgroup map, which was commonly used in practice. Incorrect predictions mainly occurred in the agricultural plain area and the soil subgroups that are very similar in taxonomy, indicating that new environmental covariates need to be developed. We concluded that the combination of legacy pedon data with similarity-based prediction is an effective solution for soil polygon disaggregation.
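The three-step procedure lends itself to a compact illustration. The Python sketch below uses fabricated covariates, pedon sites, and a Gaussian similarity kernel, all of which are assumptions for illustration rather than the authors' actual model, to show how per-site similarities can be aggregated by soil type and then hardened to the most similar type at each location.

```python
# Illustrative sketch of similarity-based soil-polygon disaggregation.
# Covariates, the Gaussian similarity kernel, and the toy pedon data are
# assumptions for illustration only.
import numpy as np

def similarity(env_cell: np.ndarray, env_pedon: np.ndarray, scale: float = 1.0) -> float:
    """Environmental similarity between a map cell and a pedon site (0..1)."""
    return float(np.exp(-np.sum((env_cell - env_pedon) ** 2) / (2 * scale ** 2)))

def disaggregate(cell_env: np.ndarray, pedon_env: np.ndarray, pedon_types: list[str]) -> list[str]:
    """For each cell: aggregate similarities by soil type (max over that type's
    pedons), then harden to the type with the highest aggregated similarity."""
    hardened = []
    types = sorted(set(pedon_types))
    for env in cell_env:
        per_type = {
            t: max(similarity(env, p) for p, pt in zip(pedon_env, pedon_types) if pt == t)
            for t in types
        }
        hardened.append(max(per_type, key=per_type.get))
    return hardened

# Toy example: 2 covariates (e.g. elevation, wetness index), 3 pedon sites, 2 cells.
pedon_env = np.array([[0.2, 0.8], [0.6, 0.3], [0.7, 0.2]])
pedon_types = ["Gleysol", "Chernozem", "Chernozem"]
cells = np.array([[0.25, 0.75], [0.65, 0.25]])
print(disaggregate(cells, pedon_env, pedon_types))  # ['Gleysol', 'Chernozem']
```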
Abstract: Background: OneGeology is an initiative of Geological Survey Organisations (GSO) around the globe that dates back to Brighton, UK, in 2007. Since then OneGeology has been a leader in developing geological online map data using a new international standard, a geological exchange language known as 'GeoSciML'. Currently version 3.2 exists, which enables instant interoperability of the data. Increased use of this new language allows geological data to be shared and integrated across the planet with other organisations. In autumn 2013 OneGeology was transformed into a Consortium with a clearly defined governance structure, making its structure more official.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 62374055, 12327806), and in part by the Natural Science Foundation of Wuhan (Grant No. 2024040701010049).
Abstract: The in-memory computing (IMC) paradigm emerges as an effective solution to break the bottlenecks of the conventional von Neumann architecture. In the current work, an approximate multiplier in a spin-orbit torque magnetoresistive random access memory (SOT-MRAM) based true IMC (STIMC) architecture was presented, where computations were performed natively within the cell array instead of in peripheral circuits. Firstly, basic Boolean logic operations were realized by utilizing the feature of the unipolar SOT device. Two majority gate-based imprecise compressors and an ultra-efficient approximate multiplier were then built to reduce the energy and latency. An optimized data mapping strategy facilitating bit-serial operations with an extensive degree of parallelism was also adopted. Finally, the performance enhancements obtained by applying our approximate multiplier to image smoothing were demonstrated. Detailed simulation results show that the proposed 8×8 approximate multiplier could reduce the energy and latency by at least 74.2% and 44.4%, respectively, compared with existing designs. Moreover, the scheme could achieve improved peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM), ensuring high-quality image processing outcomes.
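The cell-level realisation of the majority-gate compressors cannot be reconstructed from the abstract alone. As a purely functional illustration, the Python sketch below models a 3-input majority gate and one possible majority-gate-based imprecise compressor; the specific approximation chosen here, and its error profile checked exhaustively in the code, is an assumption for illustration and not the design proposed in the paper.

```python
# Purely functional illustration: a 3-input majority gate and one possible
# majority-gate-based approximate compressor. This specific design and its
# error profile are illustrative assumptions, not the paper's compressor.
from itertools import product

def maj3(a: int, b: int, c: int) -> int:
    """3-input majority: 1 when at least two inputs are 1."""
    return (a & b) | (a & c) | (b & c)

def approx_compress(x1: int, x2: int, x3: int, x4: int) -> tuple[int, int]:
    """Compress four partial-product bits into (carry, sum).
    2*carry + sum approximates x1 + x2 + x3 + x4 with error at most 1."""
    carry = maj3(x1, x2, x3)   # majority gate replaces the exact carry logic
    s = x4                     # sum bit passed through (the approximation)
    return carry, s

if __name__ == "__main__":
    # Exhaustive error check over all 16 input patterns.
    errors = []
    for bits in product((0, 1), repeat=4):
        carry, s = approx_compress(*bits)
        errors.append((2 * carry + s) - sum(bits))
    print("error distribution:", {e: errors.count(e) for e in sorted(set(errors))})
    # -> {-1: 8, 0: 8}: half the patterns are exact, the rest are off by one.
```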
Funding: The National Key R&D Program of China (No. 2022YFB3904402) and the National Natural Science Foundation of China (No. 42474037).
Abstract: This study utilizes global ionospheric map data with high spatiotemporal resolution to analyze the phenomenon of noontime bite-outs in 2014 (a year of high solar activity) and 2020 (a year of low solar activity). By statistically examining the occurrence rate, intensity, and timing characteristics of noontime bite-outs at various grid points, this study explores the impact of solar activity on these phenomena and their spatiotemporal distribution. The results indicate that noontime bite-outs exhibit significant variations with solar activity, season, longitude, and latitude. During years of low solar activity, the occurrence rate of noontime bite-outs is higher and covers a broader area compared to years of high solar activity, while the duration tends to be shorter. The highest occurrence rate is observed in winter, with minimal variation across other seasons. This is primarily because the total electron content (TEC) is lower in the winter hemisphere than in the summer hemisphere, while the duration of noontime bite-outs is longer in summer. The intensity varies by region, with the radio intensity metric being more effective in mid- and high-latitudes, while the absolute intensity metric captures variations more effectively in mid- and low-latitude regions. During noontime bite-outs, the minimum TEC values across all regions typically occur around 13:00 local time. The duration of these patterns ranges from 2.5 to 6 h. The mechanism of noontime bite-outs in high-latitude regions differs significantly from that in mid-to-low-latitude regions.
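The statistical criteria used in the study are not spelled out in the abstract. The Python sketch below uses fabricated hourly TEC values and a simple hypothetical rule, a local minimum inside a noon window lying sufficiently below the surrounding daytime maxima, to illustrate how a noontime bite-out and its timing and depth might be flagged at a single grid point; the thresholds and window are assumptions, not the study's criteria.

```python
# Illustrative sketch: flag a noontime bite-out in a single grid point's TEC
# series. The detection rule, window, and sample values are assumptions made
# for illustration; they are not the criteria used in the study.
import numpy as np

def detect_bite_out(local_time_h: np.ndarray, tec: np.ndarray,
                    window=(10.0, 16.0), min_depth_tecu: float = 2.0):
    """Return (bite-out time, depth in TECU) if the daytime TEC shows a local
    minimum inside the noon window lying at least min_depth_tecu below the
    surrounding daytime maxima; otherwise return None."""
    mask = (local_time_h >= window[0]) & (local_time_h <= window[1])
    idx = np.where(mask)[0]
    i_min = idx[np.argmin(tec[idx])]
    pre_max = tec[:i_min].max()           # morning-side maximum
    post_max = tec[i_min + 1:].max()      # afternoon-side maximum
    depth = min(pre_max, post_max) - tec[i_min]
    if depth >= min_depth_tecu:
        return local_time_h[i_min], depth
    return None

# Hypothetical hourly TEC (TECU) from 08:00 to 18:00 local time.
lt = np.arange(8, 19, 1.0)
tec = np.array([18, 24, 30, 34, 31, 27, 33, 35, 30, 24, 19], dtype=float)
print(detect_bite_out(lt, tec))  # minimum near 13:00 LT, depth 7 TECU
```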