Funding: Financially supported by Shenzhen Key Fundamental Research Projects (Grant No. JCYJ20170306091556329).
Abstract: Named Data Networking (NDN) improves data delivery efficiency by caching contents in routers. To prevent corrupted and fake contents from being spread in the network, NDN routers should verify the digital signature of each published content. Since the verification scheme in NDN applies an asymmetric encryption algorithm to sign contents, the content verification overhead is too high to satisfy wire-speed packet forwarding. In this paper, we propose two schemes to improve the verification performance of NDN routers and prevent content poisoning. The first content verification scheme, called "user-assisted", leads to the best performance but can be bypassed if the clients and the content producer collude. The second scheme, named "Router-Cooperation", prevents the aforementioned collusion attack by making edge routers verify the contents independently, without the assistance of users, while the core routers no longer verify the contents. The Router-Cooperation verification scheme reduces the computational complexity of the cryptographic operations by replacing the asymmetric encryption algorithm with a symmetric one. The simulation results demonstrate that the Router-Cooperation scheme speeds up the original content verification scheme by 18.85 times with merely 80 bytes of extra transmission overhead.
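As a rough illustration of the symmetric-for-asymmetric substitution behind Router-Cooperation, the minimal Python sketch below assumes an HMAC-SHA256 tag and a pre-shared key between cooperating routers; the paper's actual key-distribution mechanism, tag format, and the layout of the 80-byte overhead are not given in the abstract.

```python
import hashlib
import hmac

# Hypothetical pre-shared key between cooperating routers (assumption;
# the abstract does not describe how keys are distributed).
EDGE_CORE_KEY = b"shared-secret-between-routers"

def edge_router_tag(content: bytes) -> bytes:
    """Edge router: verify the producer's asymmetric signature once
    (omitted here), then attach a cheap symmetric MAC for core routers."""
    return hmac.new(EDGE_CORE_KEY, content, hashlib.sha256).digest()

def core_router_accept(content: bytes, tag: bytes) -> bool:
    """Core router: recompute the MAC instead of verifying an RSA/ECDSA
    signature -- a far cheaper per-packet operation."""
    expected = hmac.new(EDGE_CORE_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

data = b"/videos/demo/segment/17 payload"
tag = edge_router_tag(data)
assert core_router_accept(data, tag)
```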
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01592) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2019R1F1A1058548).
Abstract: The growing collection of scientific data in various web repositories is referred to as Scientific Big Data, as it fulfills the four "V's" of Big Data: volume, variety, velocity, and veracity. This phenomenon has created new opportunities for startups; for instance, the extraction of pertinent research papers from enormous knowledge repositories using innovative methods has become an important task for researchers and entrepreneurs. Traditionally, the contents of the papers are compared to list the relevant papers from a repository. This conventional method results in a long list of papers that is often impossible to interpret productively. Therefore, the need for a novel approach that intelligently utilizes the available data is evident. Moreover, the primary element of the scientific knowledge base is the research article, which consists of logical sections such as the Abstract, Introduction, Related Work, Methodology, Results, and Conclusion. This study therefore utilizes these logical sections of research articles, because they hold significant potential for finding relevant papers. Comprehensive experiments were performed to determine the role of the logical-section-based term indexing method in improving the quality of results (i.e., retrieving relevant papers). To this end, we proposed, implemented, and evaluated a logical-section-based content comparison method against a standard method of indexing terms. The section-based approach outperformed the standard content-based approach in identifying relevant documents across all classified topics of computer science. Overall, the proposed approach extracted 14% more relevant results from the entire dataset. As the experimental results suggest that employing a finer content similarity technique improves the quality of results, the proposed approach has laid the foundation for knowledge-based startups.
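The section-based comparison can be pictured as one term index per logical section rather than a single whole-document index. The sketch below is a minimal illustration under assumed tokenization and scoring (per-section term overlap, averaged); it is not the paper's exact indexing or similarity method.

```python
from collections import Counter

SECTIONS = ["abstract", "introduction", "related_work",
            "methodology", "results", "conclusion"]

def index_by_section(paper: dict) -> dict:
    """Build one term index per logical section (illustrative
    whitespace tokenizer; missing sections index as empty)."""
    return {s: Counter(paper.get(s, "").lower().split()) for s in SECTIONS}

def section_similarity(a: dict, b: dict) -> float:
    """Average per-section term overlap, so a strong methodology match
    is not drowned out by unrelated sections."""
    scores = []
    for s in SECTIONS:
        inter = sum((a[s] & b[s]).values())
        union = sum((a[s] | b[s]).values())
        scores.append(inter / union if union else 0.0)
    return sum(scores) / len(SECTIONS)

q = index_by_section({"methodology": "term indexing with tf idf"})
d = index_by_section({"methodology": "section wise term indexing"})
print(section_similarity(q, d))
```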
Funding: Under the auspices of the National Natural Science Foundation of China (Nos. 41230751, 41101547) and the Scientific Research Foundation of the Graduate School of Nanjing University (No. 2012CL14).
Abstract: Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of mapping soil salt content. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data, Huan Jing-Hyper Spectral Imager (HJ-HSI), in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare flat area were distinguished by using the normalized difference vegetation index at the 705 nm band (NDVI705). The soil salt content of each area was predicted by different algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare flat area (NDVI705 < 0.2). The soil-adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R^2 = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
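The two-branch mapping workflow can be sketched as a per-pixel switch on NDVI705. In the illustration below, the NSSRI- and SAVI-based regression coefficients are placeholders, since neither the fitted coefficients nor the exact NSSRI formula appears in the abstract.

```python
import numpy as np

NDVI705_THRESHOLD = 0.2

def ndvi705(r750: np.ndarray, r705: np.ndarray) -> np.ndarray:
    """Red-edge NDVI used to separate vegetated from bare-flat pixels."""
    return (r750 - r705) / (r750 + r705)

def map_salt(r750, r705, nssri, savi,
             coef_bare=(1.0, 0.0), coef_veg=(1.0, 0.0)):
    """Apply the bare-flat (NSSRI-based) and vegetated (SAVI-based)
    regressions pixel by pixel. The (slope, intercept) pairs are
    hypothetical placeholders for the paper's fitted models."""
    veg = ndvi705(r750, r705) >= NDVI705_THRESHOLD
    return np.where(veg,
                    coef_veg[0] * savi + coef_veg[1],
                    coef_bare[0] * nssri + coef_bare[1])

# Toy 4x4 scene just to show the call pattern:
r = {k: np.random.rand(4, 4) for k in ("r750", "r705", "nssri", "savi")}
salt_map = map_salt(r["r750"], r["r705"], r["nssri"], r["savi"])
```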
Funding: This paper is supported by the National Natural Science Foundation of China (Nos. 40476030, 40576031) and the National Key Basic Research Special Foundation Project of China (No. G2000078501).
Abstract: On the basis of the relationship between the carbonate content and the stratal velocity and density, an exercise has been attempted using an artificial neural network on high-resolution seismic data for inversion of carbonate content, with limited well measurements as a control. The method was applied to the slope area of the northern South China Sea near ODP Sites 1146 and 1148, and the results are satisfactory. Before the inversion calculation, a stepwise regression method was applied to obtain the six properties related most closely to the carbonate content variations among the various properties of the seismic profiles across or near the wells. These are the average frequency, the integrated absolute amplitude, the dominant frequency, the reflection time, the derivative instantaneous amplitude, and the instantaneous frequency. The results, with carbonate content errors of mostly ±5% relative to those measured from sediment samples, show a relatively accurate picture of carbonate distribution along the slope profile. This method pioneers a new quantitative model for acquiring carbonate content variations directly from high-resolution seismic data. It will provide a new approach to obtaining substitutive high-resolution sediment data for earth-system studies related to basin evolution, especially in discussing the coupling between regional sedimentation and climate change.
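The inversion step amounts to regressing carbonate content on the six stepwise-selected seismic properties with a small feed-forward network. The sketch below uses synthetic stand-in data and scikit-learn's MLPRegressor; the paper's actual network architecture and training setup are not given in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The six seismic properties retained by the stepwise regression.
ATTRS = ["avg_freq", "int_abs_amplitude", "dominant_freq",
         "reflection_time", "deriv_inst_amplitude", "inst_freq"]

# X: property values sampled at the well locations; y: measured
# carbonate content (%). Synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(ATTRS)))
y = 50 + X @ rng.normal(size=len(ATTRS)) + rng.normal(scale=2, size=200)

# Small feed-forward net; the paper's architecture is an assumption here.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
# Applied trace-by-trace away from the wells to map the whole profile.
carbonate_along_profile = net.predict(X)
```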
Funding: This research was supported by a grant from the National Research Foundation of Korea provided by the government of the Republic of Korea (2019R1A2C1085686).
Abstract: Soyang Lake is the largest lake in the Republic of Korea, bordering Chuncheon, Yanggu, and Inje in Gangwon Province. It is widely used as an environmental resource for hydropower, flood control, and water supply. Therefore, we conducted a survey of the floodplain of Soyang Lake to analyze the sediments in the area. We used global positioning system (GPS) data and aerial photography to monitor sediment deposits in the Soyang Lake floodplain. Data from three GPS units were compared to determine the accuracy of the sampling location measurements. Sediment samples were collected at three sites: two in the eastern region of the floodplain and one in the western region. A total of eight samples were collected: three samples at 10 cm intervals to a depth of 30 cm from each of the eastern sites, and two samples at depths of 10 and 30 cm at the western site. The samples were analyzed for vertical and horizontal trends in particle size and moisture content. The sediment samples ranged from coarse to very coarse with a negative slope, which indicates eastward movement from the breach. The probability of a breach was indicated by the high water content on the eastern side of the floodplain, with the eastern sites showing a higher probability than the western site. The results of this study indicate that analyses of grain fineness, moisture content, sediment deposits, and sediment removal rates can be used to understand and predict the direction of breach movement and sediment distribution in Soyang Lake.
Abstract: We explore how an ontology may be used with a database to support reasoning about the "information content" of data, thereby revealing hidden information that would otherwise not be derivable using conventional database query languages. Our basic ideas rest on "ontology" and the notion of "information content". A public ontology, if available, would be the best choice for reliable domain knowledge. Enabling an ontology to work with a database involves, among other things, a mechanism whereby the two systems can form a coherent whole. This is achieved by means of the notion of the "information content inclusion relation", IIR for short. We present what an IIR is, how IIRs can be identified from both an ontology and a database, and how to reason about them.
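One way to picture reasoning over IIRs is as a transitive-closure computation over inclusion pairs drawn from the ontology and the database. The sketch below uses that deliberately simplified representation; the paper's formal definition of IIR is richer than shown here.

```python
from itertools import product

def iir_closure(pairs: set) -> set:
    """Reasoning sketch over information content inclusion relations:
    (a, b) reads 'the information content of a includes that of b'.
    Hidden inclusions follow by transitivity via a fixed-point loop."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

# Hypothetical example: the database says every Employee record carries
# a Dept; the ontology says a Dept belongs to a Division. The closure
# reveals the hidden Employee -> Division inclusion.
print(iir_closure({("Employee", "Dept"), ("Dept", "Division")}))
```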
Funding: Supported by the National Natural Science Foundation of China (No. 60673001) and the State Key Development Program of Basic Research of China (No. 2004CB318203).
Abstract: Based on variable-sized chunking, this paper proposes a content-aware chunking scheme, called CAC, that does not assume fully random file contents but considers the characteristics of the file types. CAC uses a candidate anchor histogram and file-type-specific knowledge to refine how anchors are determined when performing deduplication of file data, and it enforces the selected average chunk size. CAC yields more chunks being found, which in turn produces smaller average chunks and a better reduction in data. We present a detailed evaluation of CAC, and the experimental results show that this scheme can improve the compression ratio of chunking for file types whose bytes are not randomly distributed (from 11.3% to 16.7% depending on the dataset) and improve the write throughput on average by 9.7%.
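Content-defined chunking with a histogram-guided anchor choice can be sketched as follows. The rolling fingerprint and the way the candidate anchor histogram biases anchor selection are illustrative assumptions; CAC's actual fingerprint and file-type rules are not detailed in the abstract.

```python
AVG_CHUNK = 8 * 1024   # enforced average chunk size (illustrative)
MASK = AVG_CHUNK - 1   # anchor candidate when fingerprint & MASK == 0

def chunk(data: bytes, anchor_hist=None):
    """Variable-size chunking sketch: a cheap byte-wise fingerprint
    plays the role of a Rabin hash, and `anchor_hist` stands in for
    CAC's candidate anchor histogram, biasing which candidate
    positions are allowed to become anchors."""
    chunks, start, fp = [], 0, 0
    for i, b in enumerate(data):
        fp = ((fp << 1) ^ b) & 0xFFFFFFFF      # rolling-style update
        candidate = (fp & MASK) == 0
        favored = anchor_hist is None or anchor_hist.get(b, 0) > 0
        if candidate and favored:
            chunks.append(data[start:i + 1])   # cut a chunk at the anchor
            start, fp = i + 1, 0
    chunks.append(data[start:])                # trailing remainder
    return chunks

data = bytes(range(256)) * 64
print([len(c) for c in chunk(data)][:5])
```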
Abstract: Moisture in insulation materials will impair their thermal and acoustic performance, induce microbe growth, and cause equipment/material corrosion. Moisture content measurement is therefore vital to effective moisture control. This investigation proposes a simple, fast, and accurate method to measure the moisture content of insulation materials by matching the measured temperature rise. Since each moisture content corresponds to unique thermophysical properties, the measured temperature rise varies with moisture content. During the data analysis, all possible volumetric heat capacities and thermal conductivities are enumerated to match the measured temperature rise based on composite heat-conduction theory. The partial derivatives with respect to both volumetric heat capacity and thermal conductivity are then evaluated, so that these partial derivatives are guaranteed to equal zero at the optimal solution for the moisture content. Compared to the benchmark gravimetric method, the proposed method was found to have better accuracy while requiring a shorter test time.
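The enumeration step can be read as a brute-force grid search over (volumetric heat capacity, thermal conductivity) that minimizes the squared mismatch to the measured temperature rise; at the optimum, the partial derivatives with respect to both properties vanish. The sketch below assumes a user-supplied forward model and uses a toy one for demonstration; the grid ranges are assumptions, not values from the paper.

```python
import numpy as np

def match_properties(t, dT_measured, forward_model):
    """Enumerate candidate (C, k) pairs and keep the pair whose
    simulated temperature rise best matches the measurement; moisture
    content is then read off from C and k via the composite
    heat-conduction relations (not reproduced here)."""
    C_grid = np.linspace(0.5e6, 4.0e6, 80)   # J/(m^3*K), assumed range
    k_grid = np.linspace(0.02, 1.0, 80)      # W/(m*K),  assumed range
    best, best_err = None, np.inf
    for C in C_grid:
        for k in k_grid:
            err = np.sum((forward_model(t, C, k) - dT_measured) ** 2)
            if err < best_err:
                best, best_err = (C, k), err
    return best

# Toy forward model for demonstration (a real one would solve the
# composite heat-conduction problem for the probe/material geometry):
toy = lambda t, C, k: k * np.log1p(t) + 1e-6 * C * np.sqrt(t)
t = np.linspace(1, 60, 30)
measured = toy(t, 2.0e6, 0.3)
print(match_properties(t, measured, toy))    # -> approx (2.0e6, 0.3)
```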
Funding: Supported by the National Natural Science Foundation of China under Grants No. 60872018 and No. 60902015, and by the Major National Science and Technology Project No. 2011ZX03005-004-03.
Abstract: In this paper, we explore the network architecture and key technologies for content-centric networking (CCN), an emerging networking technology in the big-data era. We describe the structure and operation mechanism of the CCN node. Then we discuss mobility management, routing strategy, and caching policy in CCN. For better network performance, we propose a probability cache replacement policy that is based on content popularity. We also propose and evaluate a probability cache with an evicted copy-up decision policy.
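A popularity-based probability cache can be sketched as weighted random eviction in which less-popular contents are more likely victims. The class below is illustrative only; the paper's exact probability function and its evicted copy-up rule are not specified in the abstract.

```python
import random

class ProbabilityCache:
    """Sketch of popularity-weighted probabilistic eviction for a CCN
    node. Popularity is approximated by hit counts (an assumption)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # name -> data
        self.hits = {}    # name -> request count

    def get(self, name: str):
        if name in self.store:
            self.hits[name] = self.hits.get(name, 0) + 1
            return self.store[name]
        return None

    def put(self, name: str, data: bytes):
        if len(self.store) >= self.capacity:
            names = list(self.store)
            # Eviction probability inversely proportional to popularity.
            weights = [1.0 / (1 + self.hits.get(n, 0)) for n in names]
            victim = random.choices(names, weights)[0]
            del self.store[victim]   # a copy-up policy could instead push
                                     # the evicted copy one hop upstream
        self.store[name] = data
```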
Funding: Supported by the National Key Research & Development Projects of China (Grant No. 2022YFA1204100), the National Natural Science Foundation of China (Grant No. 62488201), the CAS Project for Young Scientists in Basic Research (YSBR-003), and the Innovation Program of Quantum Science and Technology (2021ZD0302700).
Abstract: As a typical in-memory computing hardware design, nonvolatile ternary content-addressable memories (TCAMs) enable logic operation and data storage for high throughput in parallel big-data processing. However, TCAM cells based on conventional silicon devices suffer from structural complexity and large-footprint limitations. Here, we demonstrate an ultrafast nonvolatile TCAM cell based on the MoTe2/hBN/multilayer graphene (MLG) van der Waals heterostructure using a top-gated partial floating-gate field-effect transistor (PFGFET) architecture. Based on its ambipolar transport properties, the carrier type in the source/drain and central channel regions of the MoTe2 channel can be efficiently tuned by the control gate and top gate, respectively, enabling reconfigurable operation of the device in either memory or FET mode. When working in the memory mode, it achieves an ultrafast 60 ns programming/erase speed with a current on-off ratio of ∼10^5, excellent retention capability, and robust endurance. When serving as a reconfigurable transistor, unipolar p-type and n-type FETs are obtained by applying ultrafast 60 ns control-gate voltage pulses with different polarities. The monolithic integration of memory and logic within a single device enables content-addressable memory (CAM) functionality. Finally, by integrating two PFGFETs in parallel, a TCAM cell with a high current ratio of ∼10^5 between the match and mismatch states is achieved without requiring additional peripheral circuitry. These results provide a promising route for the design of high-performance TCAM devices for future in-memory computing applications.
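For readers unfamiliar with TCAMs, the match semantics that such an array implements in hardware, in a single parallel cycle, can be modeled in software as ternary string matching over {0, 1, X}, as in this small sketch.

```python
def tcam_search(entries, key):
    """Software model of a TCAM lookup: each stored entry is a string
    over {'0', '1', 'X'}, where 'X' (don't care) matches either bit.
    Returns the indices of all matching entries; a real TCAM compares
    every entry against the key simultaneously."""
    def matches(entry: str) -> bool:
        return all(e in ("X", k) for e, k in zip(entry, key))
    return [i for i, e in enumerate(entries) if matches(e)]

rules = ["10X1", "0XX0", "111X"]
print(tcam_search(rules, "1011"))  # -> [0]
```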
Funding: Funded by the Office of the Vice-President for Research and Development of Cebu Technological University.
Abstract: This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. The fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of the framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. The analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, such as account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
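Stage (2) of the framework, content characterization by clustering, can be sketched with scikit-learn's k-means. TF-IDF vectors stand in for the LLM-derived post representations, and k mirrors the eight reported clusters; both choices are assumptions made for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for LLM-categorized group posts.
posts = ["please read the group rules before posting",
         "this post spreads vaccine misinformation",
         "reporting harassment in the comment thread"]

# Vectorize (TF-IDF here; the study derives features via an LLM) and
# cluster; k = 8 in the study, capped by the toy corpus size here.
X = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=min(8, len(posts)), n_init=10,
                random_state=0).fit_predict(X)
print(labels)  # cluster id per post, fed onward to the TODIM stage
```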
Funding: The Liaoning Provincial Social Science Planning Fund Project (L23CGL002).
Abstract: Rural tourism plays a crucial role in driving the sustainable development of rural economies. With the rise of the digital economy, user-generated content (UGC) videos on platforms such as TikTok have become a significant factor influencing consumer decision-making, creating new opportunities for the growth of rural tourism. Using the TikTok app as the research platform, this study examines the relationship between UGC short videos, tourists' intentions to engage in rural tourism, and their perception of destination image. Specifically, it explores the impact of UGC short videos on tourists' willingness to participate in rural tourism and the mediating role of destination image perception. The findings indicate that UGC short videos positively influence tourists' willingness to engage in rural tourism. Destination image perception mediates this relationship, shaping tourists' decisions through cognitive and emotional image perceptions. Based on these findings, this paper recommends that rural tourism destination managers enhance their promotional strategies and improve destination image perception through UGC short video content.
Abstract: In the big data of business services and transactions, it is impossible for the cyber system to provide complete information to both parties of a service, so some service providers exploit malicious services to gain more benefit. Trust management is an effective solution to deal with such malicious actions. This paper presents a trust computing model based on service recommendation in big data. The model takes into account the difference in recommendation trust between familiar nodes and stranger nodes. To ensure the accuracy of recommendation trust computing, the paper proposes a fine-granularity similarity computing method based on the similarity of the service concept domain ontology. The model is more accurate in computing the trust value of cyber service nodes and better prevents cheating and attacks by malicious service nodes. Experimental results illustrate that our model is effective.
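The recommendation-weighting idea can be sketched as a weighted average in which each recommender's opinion is scaled by familiarity and by an ontology-based service similarity. All names, weights, and functions below are illustrative assumptions, not the paper's exact model.

```python
def recommendation_trust(recommendations, familiarity, similarity):
    """Weighted recommendation trust sketch: familiar recommenders and
    ontologically similar services carry more weight than strangers
    recommending loosely related services."""
    num = den = 0.0
    for node, trust_value, service in recommendations:
        w = familiarity.get(node, 0.1) * similarity(service)
        num += w * trust_value
        den += w
    return num / den if den else 0.0

# Hypothetical example: alice is familiar, mallory is a stranger.
recs = [("alice", 0.9, "cloud-storage"), ("mallory", 0.2, "cloud-backup")]
familiarity = {"alice": 0.8}
sim = lambda service: 1.0 if "storage" in service else 0.6
print(recommendation_trust(recs, familiarity, sim))
```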
Funding: This research was funded by the National Natural Science Foundation of China (No. 61862046), the Inner Mongolia Natural Science Foundation of China under Grant No. 2018MS06024, the Research Project of Higher Education School of Inner Mongolia Autonomous Region under Grant NJZY18010, the Inner Mongolia Autonomous Region Science and Technology Achievements Transformation Project (No. CGZH2018124), and the CERNET Innovation Project under Grant No. NGII20180626.
Abstract: Named Data Networking (NDN) is one of the most promising future Internet architectures, and every router in NDN has the capacity to cache contents passing by. This greatly reduces network traffic and improves the speed of content distribution and retrieval. To make full use of the limited caching space in routers, designing an efficient cache replacement policy is an urgent challenge. However, the existing cache replacement policies consider only very few of the factors that affect cache performance. In this paper, we present a cache replacement policy based on multiple factors for NDN (CRPM), in which the content with the least cache value is evicted from the caching space. CRPM fully analyzes the multiple factors that affect caching performance, puts forward the corresponding calculation methods, and utilizes these factors to measure the cache value of contents. Furthermore, a new cache value function is constructed, which keeps high-value content stored in the router as long as possible, so as to ensure the efficient use of cache resources. The simulation results show that CRPM can effectively improve the cache hit ratio, enhance cache resource utilization, reduce energy consumption, and decrease the hit distance of content acquisition.
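In the spirit of CRPM, a multi-factor cache value can be sketched as a weighted combination of request frequency, recency, hop distance saved, and content size. The factors, formulas, and weights below are hypothetical; the paper's actual cache value function is not reproduced in the abstract.

```python
def cache_value(freq: float, recency: float, hops_saved: int,
                size_bytes: int, w=(0.4, 0.3, 0.2, 0.1)) -> float:
    """Illustrative multi-factor cache value: popular, recently
    requested, far-from-producer contents score high; large contents
    are penalized for the space they occupy."""
    return (w[0] * freq + w[1] * recency
            + w[2] * hops_saved - w[3] * size_bytes / 1024)

# On inserting into a full cache, evict the least-valuable entry:
stats = {"/a": (9.0, 0.8, 3, 4096), "/b": (1.0, 0.1, 1, 1024)}
victim = min(stats, key=lambda name: cache_value(*stats[name]))
print(victim)  # -> "/b"
```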
Abstract: Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts, anytime and anyplace, to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage ability (such as cache memory) of smart devices using the concept of Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge, especially when the internal memory of a node exceeds its limit: data with the highest degree of freshness may not be accommodated, and the entire scenario behaves like a traditional network. In such a case, data caching is not performed by intermediate nodes, so the highest degree of freshness cannot be guaranteed. With periodical updates sent from data producers, data consumers demand up-to-date information at the cost of the least energy. Consequently, there is a challenge in maintaining the tradeoff between freshness and energy consumption during Publisher-Subscriber interaction. In our work, we propose an architecture that overcomes the cache strategy issue with a Smart Caching Algorithm that improves memory management and data freshness. The smart caching strategy updates the data at precise intervals while taking garbage data into consideration. It is also observed from experiments that data redundancy can easily be avoided by ignoring/dropping data packets carrying information that is not of interest to other participating nodes in the network, ultimately optimizing the tradeoff between freshness and the energy required.
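The freshness-aware caching behavior can be sketched as a cache whose entries carry an expiry time and are garbage-collected once stale, so consumers never receive outdated data and fresh data is not displaced. The class and its fixed-capacity policy below are illustrative assumptions, not the paper's algorithm.

```python
import time

class FreshnessCache:
    """Sketch of freshness-period caching at an intermediate NDN node:
    each entry expires after its freshness period and is then treated
    as garbage, freeing space for up-to-date producer data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}   # name -> (data, expiry timestamp)

    def put(self, name: str, data: bytes, freshness_s: float):
        self.evict_stale()
        if len(self.store) < self.capacity:   # never displace fresh data
            self.store[name] = (data, time.time() + freshness_s)

    def get(self, name: str):
        item = self.store.get(name)
        if item and item[1] > time.time():
            return item[0]
        self.store.pop(name, None)            # stale -> garbage-collect
        return None

    def evict_stale(self):
        now = time.time()
        for n in [n for n, (_, exp) in self.store.items() if exp <= now]:
            del self.store[n]
```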