The blockchain trilemma—balancing decentralization, security, and scalability—remains a critical challenge in distributed ledger technology. Despite significant advancements, achieving all three attributes simultaneously continues to elude most blockchain systems, often forcing trade-offs that limit their real-world applicability. This review paper synthesizes current research efforts aimed at resolving the trilemma, focusing on innovative consensus mechanisms, sharding techniques, layer-2 protocols, and hybrid architectural models. We critically analyze recent breakthroughs, including Directed Acyclic Graph (DAG)-based structures, cross-chain interoperability frameworks, and zero-knowledge proof (ZKP) enhancements, which aim to reconcile scalability with robust security and decentralization. Furthermore, we evaluate the trade-offs inherent in these approaches, highlighting their practical implications for enterprise adoption, decentralized finance (DeFi), and Web3 ecosystems. By mapping the evolving landscape of solutions, this review identifies gaps in current methodologies and proposes future research directions, such as adaptive consensus algorithms and artificial intelligence-driven (AI-driven) governance models. Our analysis underscores that while no universal solution exists, interdisciplinary innovations are progressively narrowing the trilemma's constraints, paving the way for next-generation blockchain infrastructures.
Scalability remains a major challenge in building practical fault-tolerant quantum computers. Currently, the largest number of qubits achieved across leading quantum platforms ranges from hundreds to thousands. In atom arrays, scalability is primarily constrained by the capacity to generate large numbers of optical tweezers, and conventional techniques using acousto-optic deflectors or spatial light modulators struggle to produce arrays much beyond ∼10,000 tweezers. Moreover, these methods require additional microscope objectives to focus the light into micrometer-sized spots, which further complicates system integration and scalability. Here, we demonstrate the experimental generation of an optical tweezer array containing 280×280 spots using a metasurface, nearly an order of magnitude more than most existing systems. The metasurface leverages a large number of subwavelength phase-control pixels to engineer the wavefront of the incident light, enabling both large-scale tweezer generation and direct focusing into micron-scale spots without the need for a microscope. This result shifts the scalability bottleneck for atom arrays from the tweezer generation hardware to the available laser power. Furthermore, the array shows excellent intensity uniformity exceeding 90%, making it suitable for homogeneous single-atom loading and paving the way for trapping arrays of more than 10,000 atoms in the near future.
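As a hedged illustration of the uniformity figure quoted above: one common definition of array intensity uniformity is U = 1 - (I_max - I_min) / (I_max + I_min). The abstract does not state which formula the authors use, so this sketch is an assumption for illustration only.

```python
# Hypothetical uniformity metric for a tweezer spot array:
# U = 1 - (I_max - I_min) / (I_max + I_min), with U near 1 meaning
# nearly identical spot intensities. Not taken from the paper.

def intensity_uniformity(intensities):
    """Return a 0..1 uniformity score for a list of spot intensities."""
    i_max, i_min = max(intensities), min(intensities)
    return 1.0 - (i_max - i_min) / (i_max + i_min)

# A nearly homogeneous array: peak-to-peak spread of 10% around the mean.
spots = [1.00, 0.95, 1.05, 0.98, 1.02]
print(round(intensity_uniformity(spots), 3))  # → 0.95
```

Under this definition, the reported ">90% uniformity" corresponds to a peak-to-peak intensity spread of well under 20% of the mean across the 280×280 spots.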
A density-based partitioning strategy is proposed for large domain networks in order to deal with the scalability issue found in autonomic networks, considering as a scenario the autonomic Quality of Service (QoS) management context. The adopted approach focuses on obtaining dense network partitions having more paths for a given vertex set in the domain. It is demonstrated that dense partitions improve autonomic processing scalability, for instance by reducing routing process complexity. The solution seeks a significant trade-off between partition algorithm execution time and path selection quality in large domains. Simulation scenarios for path selection execution time are presented and discussed. The authors argue that autonomic networks may benefit from the proposed dense partition approach by achieving scalable, efficient, and near real-time support for autonomic management systems.
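The density notion at the heart of this abstract can be made concrete with the standard edge-density measure for an undirected subgraph, 2E / (V·(V-1)): denser partitions expose more alternative paths for QoS routing. This is an illustrative sketch, not the paper's partitioning algorithm.

```python
# Illustrative sketch (not the paper's algorithm): comparing the edge density
# of candidate partitions, where density = 2E / (V * (V - 1)) for an
# undirected subgraph with V vertices and E edges.

def partition_density(num_vertices, edges):
    """Edge density of an undirected subgraph given its vertex count and edge list."""
    if num_vertices < 2:
        return 0.0
    return 2 * len(edges) / (num_vertices * (num_vertices - 1))

# A 4-node partition with 5 of the 6 possible edges is much denser than a chain,
# so it offers more alternative paths between any two vertices.
dense = partition_density(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)])
chain = partition_density(4, [(0, 1), (1, 2), (2, 3)])
print(dense, chain)  # dense ≈ 0.833, chain = 0.5
```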
All-inorganic perovskite solar cells (PSCs) have the potential to pass the international stability standard IEC61215:2016 but cannot deliver high performance and stability due to poor interface contact. In this paper, Sn-doped TiO_(2) (Ti_(1-x)Sn_(x)O_(2)) ultrathin nanoparticles are prepared as the electron transport layer (ETL) by a solution process. The ultrathin Ti_(1-x)Sn_(x)O_(2) nanocrystals greatly improve interface contact due to facile film formation, good conductivity, and high work function. The all-inorganic inverted NiO_(x)/CsPbI_(2)Br/Ti_(1-x)Sn_(x)O_(2) p-i-n device shows a power conversion efficiency (PCE) of 14.0%. We tested the heat stability, light stability, and combined light-heat stability. After storage at 85 ℃ for 65 days, the inverted PSCs still retain 98% of their initial efficiency. Under continuous standard one-sun illumination for 600 h, there is no efficiency decay, and under continuous illumination at 85 ℃ for 200 h, the device still retains 85% of its initial efficiency. The 1.0 cm^(2) device of inverted structure shows a PCE of up to 11.2%. The ultrathin Ti_(1-x)Sn_(x)O_(2) is promising for improving scalability and stability and thus increasing the commercial prospects.
The locator/ID separation paradigm has been widely discussed as a way to resolve the serious scalability issue that today's Internet is facing. Much research has been carried out on this issue to alleviate the routing burden of the Default Free Zone (DFZ), improve traffic engineering capabilities, and support efficient mobility and multi-homing. However, in locator/ID split networks, a third party is needed to store the identifier-to-locator pairs. How to map identifiers onto locators in a scalable and secure way is a critical challenge. In this paper, we propose SS-MAP, a scalable and secure locator/ID mapping scheme for the future Internet. First, SS-MAP uses a near-optimal DHT to map identifiers onto locators, which is able to achieve near-maximal system performance with relatively reasonable maintenance overhead. Second, SS-MAP uses a decentralized admission control system to protect the DHT-based identifier-to-locator mapping from Sybil attacks, in which a malicious mapping server creates numerous fake identities (called Sybil identifiers) to control a large fraction of the mapping system. To the best of the authors' knowledge, this is the first work to discuss the Sybil attack problem in identifier-to-locator mapping mechanisms. We evaluate the performance of the proposed approach in terms of scalability and security. The analysis and simulation results show that the scheme is scalable for large networks and resistant to Sybil attacks.
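To make the identifier-to-locator mapping idea concrete, here is a minimal consistent-hashing sketch of how a DHT assigns each identifier to a mapping server. SS-MAP's actual near-optimal DHT and its Sybil admission control are more involved; the server names and the ring layout here are illustrative assumptions.

```python
# Minimal consistent-hash ring: each identifier is served by the first mapping
# server whose hash follows the identifier's hash on the ring. Illustrative of
# DHT-style identifier-to-locator lookup, not the SS-MAP design itself.
import bisect
import hashlib

def _h(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class MappingRing:
    """Ring of mapping servers; lookups are deterministic and load-spreading."""
    def __init__(self, servers):
        self._ring = sorted((_h(s), s) for s in servers)

    def server_for(self, identifier: str) -> str:
        keys = [h for h, _ in self._ring]
        i = bisect.bisect_right(keys, _h(identifier)) % len(self._ring)
        return self._ring[i][1]

ring = MappingRing(["map-a", "map-b", "map-c"])
print(ring.server_for("host-42"))  # deterministic choice among the three servers
```

A design note: with consistent hashing, adding or removing one mapping server only reassigns the identifiers in one arc of the ring, which is one reason DHTs keep maintenance overhead reasonable as the mapping system grows.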
The scalability of the tunnel-regenerated multi-active-region (TRMAR) structure has been investigated for application in light-emitting diodes (LEDs). The TRMAR structure was shown theoretically to have unique advantages over conventional single-active-layer structures in virtually every aspect, such as high quantum efficiency, high power, and low leakage. Our study showed that the TRMAR LED structure can obtain high output power under low current injection and high wall-plug efficiency compared with the conventional single-active-layer LED structure.
A Recommender System (RS) is a crucial part of several firms, particularly those involved in e-commerce. In a conventional RS, a user may only offer a single rating for an item, which is insufficient to perceive consumer preferences. Nowadays, businesses in industries like e-learning and tourism enable customers to rate a product using a variety of factors in order to comprehend customers' preferences. Meanwhile, the collaborative filtering (CF) algorithm utilizing an AutoEncoder (AE) has proven effective in identifying items of interest to users. However, the cost of these computations increases nonlinearly as the number of items and users grows. To overcome these issues, a novel expanded stacked autoencoder (ESAE) with Kernel Fuzzy C-Means Clustering (KFCM) technique is proposed with two phases. In the first, offline phase, the sparse multicriteria rating matrix is smoothed into a complete matrix by predicting the users' missing ratings with the ESAE approach, and users are clustered using the KFCM approach. In the subsequent online phase, the top-N recommendation prediction is made by the ESAE approach involving only the most similar users from multiple clusters. The ESAE_KFCM model thereby achieves a prediction accuracy of 98.2% in top-N recommendation with a minimized recommendation generation time. Experiments on the Yahoo! Movies (YM) movie dataset and the TripAdvisor (TA) travel dataset confirmed that the ESAE_KFCM model consistently outperforms conventional RS algorithms on a variety of assessment measures.
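The clustering step above builds on fuzzy c-means-style memberships. The sketch below shows the standard membership update with a plain Euclidean distance in one dimension rather than KFCM's kernelized distance; the fuzzifier `m` and the sample values are illustrative assumptions, not the paper's setup.

```python
# Sketch of the fuzzy membership step behind fuzzy c-means clustering:
# u_i = 1 / sum_k (d_i / d_k) ** (2 / (m - 1)), where d_i is the distance
# from the point to center i and m > 1 is the fuzzifier. KFCM replaces the
# plain distance with a kernel-induced one; this is the un-kernelized idea.

def fcm_memberships(point, centers, m=2.0):
    """Fuzzy memberships of one 1-D point w.r.t. each cluster center."""
    dists = [abs(point - c) for c in centers]
    if any(d == 0 for d in dists):            # point coincides with a center
        return [1.0 if d == 0 else 0.0 for d in dists]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dk) ** p for dk in dists) for di in dists]

u = fcm_memberships(2.0, centers=[1.0, 5.0])
print([round(x, 3) for x in u])  # → [0.9, 0.1]
```

Unlike hard clustering, every user keeps a graded membership in every cluster, which is what lets the online phase draw the most similar users from multiple clusters at once.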
In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems face the problems of limited throughput, poor scalability, and high latency. Due to the failure of consensus algorithms in managing nodes' identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, because of poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger, in which nodes are placed according to their ranking positions in the graph. It allows honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. By using a three-step strategy, this protocol ensures that the system is secured against double-spending attacks and allows for higher throughput and scalability. The first step involves the safe entry of nodes into the system by verifying their private and public keys. The next step involves building the advanced DAG ledger so nodes can start block production and verify transactions. In the third step, a ranking algorithm is developed to separate out the nodes created by attackers. After eliminating attacker nodes, the remaining nodes are ranked according to their performance in the system, and true nodes are arranged in blocks in topological order. As a result, the ADR protocol is suitable for applications in the Internet of Things (IoT). We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and network liveness while adding malicious nodes. Based on the simulation results, this research determined that transaction performance was significantly improved over blockchains like Internet of Things Applications (IOTA) and ByteBall.
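The final step above arranges the surviving nodes "in blocks in topological order". That ordering step can be sketched with plain Kahn's algorithm on a DAG; this is the generic technique, not the ADR protocol itself, and the node names and edges below are made up.

```python
# Hedged sketch: topologically ordering verified nodes in a DAG ledger using
# Kahn's algorithm, so every edge (u, v) places u before v in the output.
from collections import deque

def topological_order(nodes, edges):
    """Return the nodes in an order consistent with all DAG edges."""
    indeg = {n: 0 for n in nodes}
    succ = {n: [] for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("graph has a cycle; not a DAG")
    return order

edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(topological_order(["a", "b", "c", "d"], edges))
```

The cycle check doubles as a sanity check on the ledger: a valid DAG ledger must admit such an ordering, which is what makes "arranged in topological order" well-defined.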
The continuous increase of data transmission density in wireless mobile communications has posed a challenge to the system performance of Wireless Mesh Networks (WMNs). A well-known rule for wireless ad hoc networks is that the average node capacity decreases as the number of nodes increases, so it is hard to establish a large-scale wireless mesh network. Network scalability is therefore very important for enhancing the adaptive networking capability of the wireless mesh network. This article discusses key scalability technologies for Mesh Base Stations (BSs) and Mesh Mobile Stations (MSs), such as channel allocation, intelligent routing, multi-antenna techniques, node classification, Quality of Service (QoS) differentiation, and cooperative transmission.
This paper proposes an optimal solution for choosing the number of enhancement layers in the fine granularity scalability (FGS) scheme under the constraint of minimum transmission energy, in which FGS is combined with transmission energy control so that the FGS enhancement-layer transmission energy is minimized while the distortion requirement is guaranteed. By varying the bit-plane level and packet loss rate, the minimum transmission energy of the enhancement layer is obtained while the expected distortion constraint is satisfied.
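The selection problem described above is a constrained minimization: among candidate enhancement-layer configurations, pick the cheapest one whose expected distortion still meets the target. The toy sketch below shows only that selection pattern; the configuration names and the (energy, distortion) numbers are invented for illustration, not taken from the paper.

```python
# Toy constrained-minimization sketch: choose the enhancement-layer
# configuration with the lowest transmission energy subject to an expected
# distortion bound. All numbers below are made up for illustration.

def pick_config(candidates, max_distortion):
    """candidates: list of (name, energy, expected_distortion) tuples."""
    feasible = [c for c in candidates if c[2] <= max_distortion]
    return min(feasible, key=lambda c: c[1]) if feasible else None

configs = [
    ("2 bit-planes, low power",  1.0, 0.30),
    ("3 bit-planes, low power",  1.6, 0.18),
    ("3 bit-planes, high power", 2.4, 0.12),
]
print(pick_config(configs, max_distortion=0.20))  # cheapest feasible option
```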
To address the large-scale application requirements of belt-type networks, this paper completes the mathematical modeling and quantitative analysis of network scalability based on average path length, and derives a theorem for the scale scalability of belt-type networks. The theorem provides a calculation formula for the theoretical upper limit on the node scale of belt-type networks and a calculation formula for the theoretical upper limit on single-node load.
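As a hedged illustration of why average path length bounds the scale of such topologies: for a simple belt-like chain of n nodes, the average shortest-path length works out to (n + 1) / 3, which grows linearly with n. This is a textbook property of path graphs, not the paper's derivation or its load formula.

```python
# Brute-force check of the average hop distance over all node pairs of an
# n-node chain, against the closed form (n + 1) / 3. Illustrative only.

def avg_path_length_chain(n):
    """Average shortest-path length (in hops) over all pairs of an n-node chain."""
    total = sum(abs(i - j) for i in range(n) for j in range(i + 1, n))
    pairs = n * (n - 1) // 2
    return total / pairs

for n in (5, 20):
    assert abs(avg_path_length_chain(n) - (n + 1) / 3) < 1e-9
print(avg_path_length_chain(20))  # → 7.0
```

Because the average path grows linearly with node count, per-hop relaying load grows with it, which is exactly the kind of effect an upper-limit formula for belt-type networks has to capture.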
With ever-increasing applications of IoT, and due to the heterogeneous and bursty nature of these applications, scalability has become an important research issue in building cloud-based IoT/M2M systems. This research proposes a dynamic SDN-based network slicing mechanism to tackle the scalability problems caused by such heterogeneity and fluctuation of IoT application requirements. The proposed method can automatically create a network slice on-the-fly for each new type of IoT application and adjust the QoS characteristics of the slice dynamically according to the changing requirements of an IoT application. Validated with extensive experiments, the proposed mechanism demonstrates better platform scalability when compared to a static slicing system.
The key to large-scale parallel solutions of the deterministic particle transport problem is single-node computation performance. Hence, single-node computation is often parallelized on multi-core or many-core computer architectures. However, the number of on-chip cores grows quickly with the scale-down of feature size in semiconductor technology. In this paper, we present a scalability investigation of one-energy-group, time-independent, deterministic, discrete-ordinates neutron transport in 3D Cartesian geometry (Sweep3D) on Intel's Many Integrated Core (MIC) architecture, which currently provides up to 62 cores with four hardware threads per core and will offer up to 72 in the future. The OpenMP parallel programming model and vector intrinsic functions are used to exploit thread parallelism and vector parallelism for the discrete ordinates method, respectively. The results on a 57-core MIC coprocessor show that the implementation of Sweep3D on MIC has good performance scalability. In addition, an application of the Roofline model to assess the implementation and a performance comparison between the MIC and a Tesla K20C Graphics Processing Unit (GPU) are also reported.
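The Roofline assessment mentioned above follows a simple bound: attainable performance is the minimum of the machine's peak compute rate and its memory bandwidth times the kernel's arithmetic intensity. The sketch below shows that bound with placeholder hardware numbers, not measured MIC or K20C figures.

```python
# Roofline bound: attainable GFLOP/s = min(peak, bandwidth * intensity),
# where intensity is the kernel's FLOPs per byte of memory traffic.
# The peak/bandwidth values below are placeholders for illustration.

def roofline_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Low-intensity kernels sit on the bandwidth slope; high-intensity ones hit
# the flat compute roof.
print(roofline_gflops(1000.0, 150.0, 0.5))   # → 75.0 (bandwidth-limited)
print(roofline_gflops(1000.0, 150.0, 10.0))  # → 1000.0 (compute-limited)
```

Transport sweeps like Sweep3D tend to have low arithmetic intensity, which is why a Roofline analysis is informative: it tells you whether further tuning should target memory traffic or vectorized compute.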
With the emerging large volume and diverse heterogeneity of Internet of Things (IoT) applications, the one-size-fits-all design of the current 4G networks is no longer adequate to serve various types of IoT applications. Consequently, the concept of network slicing enabled by Network Function Virtualization (NFV) has been proposed for the upcoming 5G networks. 5G network slicing allows IoT applications with different QoS requirements to be served by different virtual networks. Moreover, these network slices are equipped with scalability that allows them to grow or shrink their instances of Virtual Network Functions (VNFs) when needed. However, current research only focuses on scalability within a single network slice, i.e., scalability at the VNF level only. Such a design will eventually reach the capacity limit of a single slice under stressful incoming traffic and cause the breakdown of an IoT system. Therefore, we propose a new IoT scalability architecture in this research to provide scalability at the network slice (NS) level, and design a testbed implementing the proposed architecture in order to verify its effectiveness. For evaluation, three systems are compared for their throughput, response time, and CPU utilization under three different types of IoT traffic: the single-slice scaling system, the multiple-slice scaling system, and a hybrid scaling system in which both single slicing and multiple slicing can be applied simultaneously. Due to the balanced trade-off between slice scalability and resource availability, the hybrid scaling system turns out to perform the best in terms of throughput and response time, with medium CPU utilization.
The explosive growth of the Internet and database applications has driven databases to be more scalable and available, and to support online scaling without interrupting service. To support more client queries without downtime and without degrading response time, more nodes have to be added while the database is running. This paper presents an overview of scalable and available databases that satisfy the above characteristics, and we propose a novel online scaling method. Our method improves on the existing online scaling method for faster response time and higher throughput. The proposed method reduces unnecessary network use; i.e., we decrease the amount of data copied by reusing backup data. Also, our online scaling operation can proceed in parallel by selecting adequate nodes as new nodes. Our performance study shows that our method results in a significant reduction in data copy time.
Funding: This work was funded by the Special Standardization Foundation of the Science and Technology Commission of Shanghai Municipality under Grant 07DZ05018 and the Natural Science Foundation of Shanghai Municipality under Grant 07ZR14104.
Abstract: The continuous increase of data transmission density in wireless mobile communications has posed a challenge to the system performance of Wireless Mesh Networks (WMNs). A rule for wireless ad hoc networks is that the average node capacity decreases as the number of nodes increases, so it is hard to establish a large-scale wireless mesh network. Network scalability is therefore very important for enhancing the adaptive networking capability of the wireless mesh network. This article discusses key scalability technologies for Mesh Base Stations (BSs) and Mesh Mobile Stations (MSs), such as channel allocation, intelligent routing, multi-antenna techniques, node classification, Quality of Service (QoS) differentiation, and cooperative transmission.
Abstract: This paper proposes an optimal solution for choosing the number of enhancement layers in the fine granularity scalability (FGS) scheme under the constraint of minimum transmission energy, in which FGS is combined with transmission energy control so that the FGS enhancement-layer transmission energy is minimized while the distortion constraint is guaranteed. By varying the bit-plane level and packet loss rate, the minimum transmission energy of the enhancement layer is obtained while the expected distortion requirement is satisfied.
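The layer-selection idea can be sketched as a simple feasibility scan. The `energy` and `distortion` tables below are invented illustrative figures, not values from the paper; the only property relied on is that sending more enhancement layers costs more energy and leaves less distortion.

```python
# hypothetical per-layer figures: energy[n] is the cost of transmitting n
# enhancement layers, distortion[n] the residual distortion after decoding them
energy     = [0.0, 1.0, 2.5, 4.5, 7.0]
distortion = [9.0, 5.0, 3.0, 2.0, 1.5]

def min_energy_layers(max_distortion):
    """Smallest layer count whose distortion meets the target; since energy grows
    monotonically with n, the first feasible n is also the energy-minimal one."""
    for n, d in enumerate(distortion):
        if d <= max_distortion:
            return n, energy[n]
    return None  # target unreachable with the available enhancement layers

best = min_energy_layers(3.0)
```

The full scheme additionally folds packet loss rate into the expected distortion, but the structure of the optimization is the same: minimize energy subject to a distortion bound.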
Abstract: To meet the large-scale application requirements of belt-type networks, this paper completes the mathematical modeling and quantitative analysis of network scalability based on average path length, and derives a theorem for the scale scalability of belt-type networks. The theorem provides a formula for the theoretical upper limit of the node scale of a belt-type network and a formula for the theoretical upper limit of single-node load.
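The paper's formulas are not reproduced in this abstract, but the underlying idea (bounding network scale by the average path length it induces) can be sketched numerically. The sketch below assumes, purely for illustration, that a belt-type network can be approximated by a ring topology; `l_max` is a hypothetical tolerable average path length.

```python
def ring_avg_path_length(n):
    """Exact mean shortest-path length over all unordered node pairs of an n-node ring."""
    total = sum(min(j - i, n - (j - i))
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def max_scale(l_max):
    """Largest ring size whose average path length stays within l_max (linear scan).
    A closed-form bound, as in the paper's theorem, would replace this scan."""
    n = 3
    while ring_avg_path_length(n + 1) <= l_max:
        n += 1
    return n
```

Because the average path length of a ring grows roughly like n/4, any cap on tolerable path length translates directly into an upper limit on node scale, which is the shape of result the theorem formalizes.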
Abstract: With ever-increasing applications of IoT, and due to the heterogeneous and bursty nature of these applications, scalability has become an important research issue in building cloud-based IoT/M2M systems. This research proposes a dynamic SDN-based network slicing mechanism to tackle the scalability problems caused by such heterogeneity and fluctuation of IoT application requirements. The proposed method can automatically create a network slice on the fly for each new type of IoT application and adjust the QoS characteristics of the slice dynamically according to the changing requirements of an IoT application. Validated with extensive experiments, the proposed mechanism demonstrates better platform scalability when compared to a static slicing system.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61402039, 61170083, 60970033, 61373032 and 91430218), the National High Technology Research and Development Program of China (No. 2012AA01A301), the China Postdoctoral Science Foundation (No. 2014M562570), and the National Key Basic Research Program of China (No. 61312701001).
Abstract: The key to large-scale parallel solutions of the deterministic particle transport problem is single-node computation performance. Hence, single-node computation is often parallelized on multi-core or many-core computer architectures. However, the number of on-chip cores grows quickly with the scale-down of feature size in semiconductor technology. In this paper, we present a scalability investigation of one-energy-group, time-independent deterministic discrete ordinates neutron transport in 3D Cartesian geometry (Sweep3D) on Intel's Many Integrated Core (MIC) architecture, which currently provides up to 62 cores with four hardware threads per core and will offer up to 72 cores in the future. The OpenMP parallel programming model and vector intrinsic functions are used to exploit thread parallelism and vector parallelism for the discrete ordinates method, respectively. The results on a 57-core MIC coprocessor show that the implementation of Sweep3D on MIC has good performance scalability. In addition, an assessment of the implementation using the Roofline model and a performance comparison between MIC and the Tesla K20c Graphics Processing Unit (GPU) are also reported.
Abstract: With the emerging large volume and diverse heterogeneity of Internet of Things (IoT) applications, the one-size-fits-all design of the current 4G networks is no longer adequate to serve various types of IoT applications. Consequently, the concept of network slicing enabled by Network Function Virtualization (NFV) has been proposed for the upcoming 5G networks. 5G network slicing allows IoT applications with different QoS requirements to be served by different virtual networks. Moreover, these network slices are equipped with scalability that allows them to grow or shrink their instances of Virtual Network Functions (VNFs) when needed. However, current research focuses only on scalability within a single network slice, i.e., scalability at the VNF level. Such a design will eventually reach the capacity limit of a single slice under stressful incoming traffic and cause the breakdown of an IoT system. Therefore, we propose a new IoT scalability architecture that provides scalability at the Network Slice (NS) level, and we design a testbed implementing the proposed architecture to verify its effectiveness. For evaluation, three systems are compared for throughput, response time, and CPU utilization under three different types of IoT traffic: the single-slice scaling system, the multiple-slice scaling system, and the hybrid scaling system, in which both single-slice and multiple-slice scaling can be applied simultaneously. Due to the balanced tradeoff between slice scalability and resource availability, the hybrid scaling system performs best in terms of throughput and response time with medium CPU utilization.
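The hybrid policy (scale VNF instances inside a slice first, then add a new slice when the cap is reached) can be sketched as follows. The per-VNF capacity and the per-slice instance cap are invented parameters, not figures from the testbed.

```python
def hybrid_scale(slices, load, vnf_capacity=100, max_vnfs_per_slice=4):
    """Hybrid scaling sketch. `slices` is a list of VNF-instance counts, one per
    slice. Grow the newest slice (VNF-level scaling) until it hits its cap,
    then instantiate a fresh slice (NS-level scaling)."""
    while sum(n * vnf_capacity for n in slices) < load:
        if slices and slices[-1] < max_vnfs_per_slice:
            slices[-1] += 1      # VNF-level scaling within the current slice
        else:
            slices.append(1)     # NS-level scaling: spin up a new slice
    return slices

plan = hybrid_scale([1], load=450)
```

This captures the tradeoff the evaluation highlights: VNF-level growth is cheap but bounded by a single slice's capacity, while NS-level growth removes that ceiling at the cost of extra slice overhead.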
Funding: This work is supported by the University IT Research Center Project.
Abstract: The explosive growth of the Internet and database applications has driven databases to be more scalable and available, and able to support online scaling without interrupting service. To support more client queries without downtime or degraded response time, more nodes have to be added while the database is running. This paper presents an overview of scalable and available databases that satisfy the above characteristics, and we propose a novel online scaling method. Our method improves on existing online scaling methods for faster response time and higher throughput. The proposed method reduces unnecessary network use, i.e., it decreases the amount of data copied by reusing backup data. Also, our online scaling operation can be processed in parallel by selecting adequate nodes as new nodes. Our performance study shows that our method yields a significant reduction in data copy time.
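The backup-reuse idea can be sketched as a small planning step: when choosing which machine becomes the new node, prefer one whose local backup replicas already cover the partitions it will serve, so only the uncovered partitions cross the network. The partition and candidate names below are hypothetical.

```python
def plan_copies(partitions, backups_on_candidate):
    """Split the joining node's assigned partitions into those that must be copied
    over the network and those already present locally as backups."""
    to_copy = [p for p in partitions if p not in backups_on_candidate]
    reused = [p for p in partitions if p in backups_on_candidate]
    return to_copy, reused

def pick_new_node(partitions, candidates):
    """Pick the candidate whose local backups cover the most assigned partitions."""
    return max(candidates,
               key=lambda c: len(set(partitions) & set(candidates[c])))

partitions = ["p1", "p2", "p3"]                   # partitions the new node will serve
candidates = {"n1": ["p1"], "n2": ["p1", "p2"]}   # backups each candidate already holds
best = pick_new_node(partitions, candidates)
to_copy, reused = plan_copies(partitions, candidates[best])
```

Here only one partition needs copying instead of three, which is exactly the reduction in data copy time the method targets; the paper additionally parallelizes the remaining copies across nodes.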