The blockchain trilemma—balancing decentralization, security, and scalability—remains a critical challenge in distributed ledger technology. Despite significant advancements, achieving all three attributes simultaneously continues to elude most blockchain systems, often forcing trade-offs that limit their real-world applicability. This review paper synthesizes current research efforts aimed at resolving the trilemma, focusing on innovative consensus mechanisms, sharding techniques, layer-2 protocols, and hybrid architectural models. We critically analyze recent breakthroughs, including Directed Acyclic Graph (DAG)-based structures, cross-chain interoperability frameworks, and zero-knowledge proof (ZKP) enhancements, which aim to reconcile scalability with robust security and decentralization. Furthermore, we evaluate the trade-offs inherent in these approaches, highlighting their practical implications for enterprise adoption, decentralized finance (DeFi), and Web3 ecosystems. By mapping the evolving landscape of solutions, this review identifies gaps in current methodologies and proposes future research directions, such as adaptive consensus algorithms and artificial intelligence-driven (AI-driven) governance models. Our analysis underscores that while no universal solution exists, interdisciplinary innovations are progressively narrowing the trilemma's constraints, paving the way for next-generation blockchain infrastructures.
A Recommender System (RS) is a crucial part of several firms, particularly those involved in e-commerce. In a conventional RS, a user may only offer a single rating for an item, which is insufficient to perceive consumer preferences. Nowadays, businesses in industries like e-learning and tourism enable customers to rate a product on a variety of factors to comprehend customers' preferences. On the other hand, the collaborative filtering (CF) algorithm utilizing an AutoEncoder (AE) is seen to be effective in identifying user-interested items. However, the cost of these computations increases nonlinearly as the number of items and users increases. To overcome these issues, a novel expanded stacked autoencoder (ESAE) with Kernel Fuzzy C-Means Clustering (KFCM) technique is proposed with two phases. In the first, offline phase, the sparse multicriteria rating matrix is smoothed into a complete matrix by predicting the users' intact ratings with the ESAE approach, and users are clustered using the KFCM approach. In the next, online phase, the top-N recommendation prediction is made by the ESAE approach involving only the most similar users from multiple clusters. Hence the ESAE_KFCM model achieves a prediction accuracy of 98.2% in top-N recommendation with minimized recommendation generation time. Experiments on the Yahoo! Movies (YM) movie dataset and the TripAdvisor (TA) travel dataset confirmed that the ESAE_KFCM model consistently outperforms conventional RS algorithms on a variety of assessment measures.
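The abstract does not give the KFCM formulation or its hyperparameters, but the offline clustering step can be illustrated with a minimal kernel fuzzy c-means sketch; the data, kernel width, and initialization below are invented for the example, not taken from the ESAE_KFCM pipeline.

```python
import math

def rbf(x, y, sigma=2.0):
    """RBF kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma ** 2))

def kfcm(points, c=2, m=2.0, iters=30, sigma=2.0):
    """Kernel Fuzzy C-Means sketch. Uses the kernel-induced distance
    d^2(x, v) = 2 - 2 K(x, v) (since K(x, x) = 1 for the RBF kernel)
    and returns fuzzy memberships u[i][k] of point k in cluster i."""
    n, dim = len(points), len(points[0])
    centers = [list(p) for p in points[:c]]  # deterministic init for the demo
    for _ in range(iters):
        d2 = [[max(2 - 2 * rbf(x, v, sigma), 1e-12) for x in points]
              for v in centers]
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(1/(m-1))
        u = [[1.0 / sum((d2[i][k] / d2[j][k]) ** (1 / (m - 1))
                        for j in range(c)) for k in range(n)]
             for i in range(c)]
        # kernel-weighted prototype update
        for i in range(c):
            w = [u[i][k] ** m * rbf(points[k], centers[i], sigma)
                 for k in range(n)]
            tot = sum(w) or 1e-12
            centers[i] = [sum(w[k] * points[k][d] for k in range(n)) / tot
                          for d in range(dim)]
    return u

# toy multicriteria rating vectors: two obvious user groups
pts = [(1.0, 1.0), (5.0, 5.0), (1.1, 0.9), (5.1, 4.9)]
u = kfcm(pts)
```

Because memberships are fuzzy rather than hard, a user near a cluster boundary contributes to several clusters, which is what lets the online phase pull the most similar users from multiple clusters.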
All-inorganic perovskite solar cells (PSCs) have the potential to pass the international stability standard IEC 61215:2016 but cannot deliver high performance and stability due to poor interface contact. In this paper, Sn-doped TiO_(2) (Ti_(1-x)Sn_(x)O_(2)) ultrathin nanoparticles are prepared for the electron transport layer (ETL) by a solution process. The ultrathin Ti_(1-x)Sn_(x)O_(2) nanocrystals greatly improve interface contact due to facile film formation, good conductivity and a high work function. The all-inorganic inverted NiOx/CsPbI_(2)Br/Ti_(1-x)Sn_(x)O_(2) p-i-n device shows a power conversion efficiency (PCE) of 14.0%. We tested the heat stability, light stability and combined light-heat stability. After storage at 85 ℃ for 65 days, the inverted PSC still retains 98% of its initial efficiency. Under continuous standard one-sun illumination for 600 h, there is no efficiency decay, and under continuous illumination at 85 ℃ for 200 h, the device still retains 85% of its initial efficiency. A 1.0 cm^(2) device of the inverted structure shows a PCE of up to 11.2%. The ultrathin Ti_(1-x)Sn_(x)O_(2) is promising for improving scalability and stability and thus increasing the commercial prospects.
The locator/ID separation paradigm has been widely discussed as a way to resolve the serious scalability issue that today's Internet is facing. Much research has been carried out on this issue to alleviate the routing burden of the Default Free Zone (DFZ), improve traffic engineering capabilities and support efficient mobility and multi-homing. However, in locator/ID split networks, a third party is needed to store the identifier-to-locator pairs. How to map identifiers onto locators in a scalable and secure way is a critical challenge. In this paper, we propose SS-MAP, a scalable and secure locator/ID mapping scheme for the future Internet. First, SS-MAP uses a near-optimal DHT to map identifiers onto locators, which is able to achieve maximal system performance with relatively reasonable maintenance overhead. Second, SS-MAP uses a decentralized admission control system to protect the DHT-based identifier-to-locator mapping from Sybil attacks, in which a malicious mapping server creates numerous fake identities (called Sybil identifiers) to control a large fraction of the mapping system. To the best of the authors' knowledge, this is the first work to discuss the Sybil attack problem in identifier-to-locator mapping mechanisms. We evaluate the performance of the proposed approach in terms of scalability and security. The analysis and simulation results show that the scheme is scalable for large networks and is resistant to Sybil attacks.
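The abstract does not specify which near-optimal DHT SS-MAP builds on, so as a rough illustration of DHT-style identifier-to-locator mapping, here is a minimal consistent-hashing ring; the server names and identifier format are invented for the example.

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Stable 64-bit hash used to place both servers and identifiers."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class MappingRing:
    """A minimal DHT-style ring: each mapping server owns the arc of
    identifier space up to its position, and lookups cost O(log n)."""
    def __init__(self, servers):
        self.ring = sorted((h(s), s) for s in servers)

    def locate(self, identifier: str) -> str:
        keys = [k for k, _ in self.ring]
        # first server clockwise from the identifier's hash position
        idx = bisect_right(keys, h(identifier)) % len(self.ring)
        return self.ring[idx][1]

ring = MappingRing(["map-server-a", "map-server-b", "map-server-c"])
locator_server = ring.locate("host:2001:db8::42")
# every node sharing the server list computes the same owner, with no
# central directory to query
```

Adding or removing one server only remaps the identifiers on its arc, which is the property that keeps maintenance overhead reasonable as the mapping system grows.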
The scalability of the tunnel-regenerated multi-active-region (TRMAR) structure has been investigated for application in light-emitting diodes (LEDs). The use of the TRMAR structure was proved theoretically to have unique advantages over conventional single-active-layer structures in virtually every aspect, such as high quantum efficiency, high power and low leakage. Our study showed that the TRMAR LED structure can obtain high output power under low current injection and high wall-plug efficiency compared with the conventional single-active-layer LED structure.
In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems face the problems of limited throughput, poor scalability, and high latency. Due to the failure of consensus algorithms in managing nodes' identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, because of poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger, where nodes are placed according to their ranking positions in the graph. It allows honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. By using a three-step strategy, this protocol ensures that the system is secured against double-spending attacks and allows for higher throughput and scalability. The first step involves the safe entry of nodes into the system by verifying their private and public keys. The next step involves developing an advanced DAG ledger so nodes can start block production and verify transactions. In the third step, a ranking algorithm is developed to separate the nodes created by attackers. After eliminating attacker nodes, the remaining nodes are ranked according to their performance in the system, and true nodes' blocks are arranged in topological order. As a result, the ADR protocol is suitable for applications in the Internet of Things (IoT). We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and liveness of the network while adding malicious nodes. Based on the simulation results, this research determined that transaction performance was significantly improved over blockchains like IOTA and ByteBall.
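The third step above, arranging ranked nodes' blocks in topological order, can be sketched with Kahn's algorithm; breaking ties by node rank is this sketch's assumption, not a detail given in the abstract.

```python
def topological_order(parents, rank):
    """parents[b] = blocks that b references in the DAG ledger.
    Returns a confirmation order in which every referenced block
    precedes its referrer; among ready blocks, higher (hypothetical)
    node rank goes first."""
    blocks = list(parents)
    children = {b: [] for b in blocks}
    indeg = {b: len(parents[b]) for b in blocks}
    for b in blocks:
        for p in parents[b]:
            children[p].append(b)
    ready = [b for b in blocks if indeg[b] == 0]
    order = []
    while ready:
        ready.sort(key=lambda b: -rank[b])
        b = ready.pop(0)           # highest-ranked ready block
        order.append(b)
        for c in children[b]:
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

# genesis <- {a, b}; c references both a and b
dag = {"genesis": [], "a": ["genesis"], "b": ["genesis"], "c": ["a", "b"]}
rank = {"genesis": 3, "a": 2, "b": 1, "c": 2}
print(topological_order(dag, rank))  # ['genesis', 'a', 'b', 'c']
```

Because `c` references both `a` and `b`, it cannot be finalized until both are, regardless of rank; rank only orders blocks whose references are already satisfied.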
The continuous increase of data transmission density in wireless mobile communications has posed a challenge to the system performance of Wireless Mesh Networks (WMNs). In wireless ad hoc networks, the average per-node capacity decreases as the number of nodes increases, so it is hard to establish a large-scale wireless mesh network. Network scalability is therefore very important for enhancing the adaptive networking capability of the wireless mesh network. This article discusses key scalability technologies for Mesh Base Stations (BSs) and Mesh Mobile Stations (MSs), such as channel allocation, intelligent routing, multi-antenna techniques, node classification, Quality of Service (QoS) differentiation and cooperative transmission.
This paper proposes an optimal solution for choosing the number of enhancement layers in the fine granularity scalability (FGS) scheme under the constraint of minimum transmission energy, in which FGS is combined with transmission energy control so that FGS enhancement-layer transmission energy is minimized while the distortion target is guaranteed. By varying the bit-plane level and packet loss rate, the minimum transmission energy of the enhancement layer is obtained while the expected distortion is satisfied.
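The paper's optimization model is not reproduced in the abstract, but the selection problem can be sketched as a brute-force search over layer counts: keep adding enhancement layers until the distortion target is met, and take the cheapest feasible choice. All energy and distortion numbers below are illustrative, not from the paper.

```python
def choose_layers(layer_energy, layer_gain, base_distortion, target):
    """Return (energy, n) for the smallest-energy prefix of enhancement
    layers whose cumulative distortion reduction meets the target.
    layer_energy[i] / layer_gain[i] are per-bit-plane transmission
    energy and distortion reduction (illustrative units)."""
    best = None  # (energy, number_of_layers)
    energy, distortion = 0.0, base_distortion
    for n in range(len(layer_energy) + 1):
        # feasible if the distortion constraint is already satisfied
        if distortion <= target and (best is None or energy < best[0]):
            best = (energy, n)
        if n < len(layer_energy):
            energy += layer_energy[n]
            distortion -= layer_gain[n]
    return best

# made-up bit-plane profile: diminishing gains, rising energy per layer
result = choose_layers([1.0, 1.5, 2.5], [40, 25, 10],
                       base_distortion=100, target=40)
print(result)  # (2.5, 2): two layers suffice, the third only wastes energy
```

Since each added layer costs positive energy, the first prefix that meets the distortion target is automatically the minimum-energy one; a real scheme would additionally fold the packet loss rate into the expected distortion.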
To meet the large-scale application requirements of belt-type networks, this paper completes the mathematical modeling and quantitative analysis of network scalability based on average path length, and derives a theorem for the scale scalability of belt-type networks. The theorem provides a calculation formula for the theoretical upper limit of the node scale of belt-type networks and a calculation formula for the theoretical upper limit of single-node load.
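The theorem itself is not reproduced in the abstract, but the central quantity, average path length, is easy to check on the simplest belt-like topology: an n-node chain, where the average hop count over all node pairs has the closed form (n + 1)/3.

```python
from itertools import combinations

def average_path_length(n):
    """Exact average hop count over all unordered node pairs of an
    n-node chain (a minimal stand-in for a belt-type topology):
    the distance between nodes i and j on a line is |i - j|."""
    pairs = list(combinations(range(n), 2))
    return sum(abs(i - j) for i, j in pairs) / len(pairs)

# closed form for a chain: sum of |i - j| over pairs is n(n-1)(n+1)/6,
# and dividing by the n(n-1)/2 pairs gives (n + 1) / 3
for n in (10, 50, 200):
    assert abs(average_path_length(n) - (n + 1) / 3) < 1e-9
```

Because the average path length grows linearly with n on a chain, any bound on acceptable end-to-end delay translates directly into an upper limit on node scale, which is the kind of result the theorem formalizes.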
With ever-increasing applications of IoT, and due to the heterogeneous and bursty nature of these applications, scalability has become an important research issue in building cloud-based IoT/M2M systems. This research proposes a dynamic SDN-based network slicing mechanism to tackle the scalability problems caused by such heterogeneity and fluctuation of IoT application requirements. The proposed method can automatically create a network slice on the fly for each new type of IoT application and adjust the QoS characteristics of the slice dynamically according to the changing requirements of an IoT application. Validated with extensive experiments, the proposed mechanism demonstrates better platform scalability when compared to a static slicing system.
A density-based partitioning strategy is proposed for large domain networks in order to deal with the scalability issue found in autonomic networks, considering, as a scenario, the autonomic Quality of Service (QoS) management context. The approach adopted focuses on obtaining dense network partitions that have more paths for a given vertex set in the domain. It is demonstrated that dense partitions improve autonomic processing scalability, for instance by reducing routing process complexity. The solution seeks a significant trade-off between partition algorithm execution time and path selection quality in large domains. Simulation scenarios for path selection execution time are presented and discussed. The authors argue that autonomic networks may benefit from the proposed dense partition approach by achieving scalable, efficient and near real-time support for autonomic management systems.
With the emerging large volume and diverse heterogeneity of Internet of Things (IoT) applications, the one-size-fits-all design of current 4G networks is no longer adequate to serve the various types of IoT applications. Consequently, the concept of network slicing enabled by Network Function Virtualization (NFV) has been proposed for the upcoming 5G networks. 5G network slicing allows IoT applications with different QoS requirements to be served by different virtual networks. Moreover, these network slices are equipped with scalability that allows them to grow or shrink their instances of Virtual Network Functions (VNFs) when needed. However, current research focuses only on scalability of a single network slice, i.e., scalability at the VNF level. Such a design will eventually reach the capacity limit of a single slice under stressful incoming traffic and cause the breakdown of an IoT system. Therefore, we propose a new IoT scalability architecture in this research to provide scalability at the network-slice (NS) level, and design a testbed implementing the proposed architecture to verify its effectiveness. For evaluation, three systems are compared for throughput, response time, and CPU utilization under three different types of IoT traffic: the single-slice scaling system, the multiple-slice scaling system, and the hybrid scaling system, in which both single-slice and multiple-slice scaling can be applied simultaneously. Due to the balanced trade-off between slice scalability and resource availability, the hybrid scaling system turns out to perform best in terms of throughput and response time with medium CPU utilization.
The photonic frequency-interleaving (PFI) technique has shown great potential for broadband signal acquisition, effectively overcoming the challenges of clock jitter and channel mismatch in the conventional time-interleaving paradigm. However, current comb-based PFI schemes have complex system architectures and face challenges in simultaneously achieving large bandwidth, dense channelization, and flexible reconfigurability, which impedes practical applications. In this work, we propose and demonstrate a broadband PFI scheme with high reconfigurability and scalability by exploiting multiple free-running lasers for dense spectral slicing with high crosstalk suppression. A dedicated system model is developed through a comprehensive analysis of the system non-idealities, and a cross-channel signal reconstruction algorithm is developed for distortion-free signal reconstruction, based on precise calibrations of intra- and inter-channel impairments. The system performance is validated through the reception of multi-format broadband signals, both digital and analog, with a detailed evaluation of signal reconstruction quality, achieving inter-channel phase differences of less than 2°. The reconfigurability and scalability of the scheme are demonstrated through a dual-band radar imaging experiment and a three-channel interleaving implementation with a maximum acquisition bandwidth of 4 GHz. To the best of our knowledge, this is the first demonstration of a practical radio-frequency (RF) application enabled by PFI. Our work provides an innovative solution for next-generation software-defined broadband RF receivers.
This paper presents a definition of the multi-dimensional scalability of the Internet architecture and puts forward a mathematical method to evaluate Internet scalability under a variety of constraints. The method is then employed to study the Internet scalability problem in terms of performance, scale and service scalability. Based on examples, theoretical analysis and experimental simulation are conducted to address the scalability issue. The results show that the proposed definition and evaluation method of multi-dimensional Internet scalability can effectively evaluate the scalability of the Internet in every aspect, thus providing rational suggestions and methods for the evaluation of next-generation Internet architectures.
The Internet of Things (IoT) ecosystem faces growing security challenges: it is projected to reach 76.88 billion devices by 2025 and a $1.4 trillion market value by 2027, operating in distributed networks with resource limitations and diverse system architectures. Conventional intrusion detection systems (IDS) face scalability and trust-related issues, while blockchain-based solutions are limited by low transaction throughput (Bitcoin: 7 TPS (Transactions Per Second), Ethereum: 15-30 TPS) and high latency. This research introduces MBID, a Multi-Tier Blockchain Intrusion Detection system with AI-enhanced detection that addresses these problems in large IoT networks. The MBID system uses a four-tier architecture comprising device, edge, fog, and cloud layers with blockchain implementations, Physics-Informed Neural Networks (PINNs) for edge-based anomaly detection, and a dual consensus mechanism that combines Honesty-based Distributed Proof-of-Authority (HDPoA) and Delegated Proof of Stake (DPoS). The system achieves scalability and efficiency through the combination of dynamic sharding and InterPlanetary File System (IPFS) integration. Experimental evaluations demonstrate exceptional performance, achieving a detection accuracy of 99.84%, an ultra-low false positive rate of 0.01% with a false negative rate of 0.15%, and a near-instantaneous edge detection latency of 0.40 ms. The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration, providing a clear, evidence-based path for horizontal scaling to support millions of devices. The proposed architecture represents a significant advancement in blockchain-based security for IoT networks, effectively balancing the trade-offs between scalability, security, and decentralization.
Managing sensitive data in dynamic and high-stakes environments, such as healthcare, requires access control frameworks that offer real-time adaptability, scalability, and regulatory compliance. BIG-ABAC introduces a transformative approach to Attribute-Based Access Control (ABAC) by integrating real-time policy evaluation and contextual adaptation. Unlike traditional ABAC systems that rely on static policies, BIG-ABAC dynamically updates policies in response to evolving rules and real-time contextual attributes, ensuring precise and efficient access control. Leveraging decision trees evaluated in real time, BIG-ABAC overcomes the limitations of conventional access control models, enabling seamless adaptation to complex, high-demand scenarios. The framework adheres to the NIST ABAC standard while incorporating modern distributed streaming technologies to enhance scalability and traceability. Its flexible policy enforcement mechanisms facilitate the implementation of regulatory requirements such as HIPAA and GDPR, allowing organizations to align access control policies with compliance needs dynamically. Performance evaluations demonstrate that BIG-ABAC processes 95% of access requests within 50 ms and updates policies dynamically with a latency of 30 ms, significantly outperforming traditional ABAC models. These results establish BIG-ABAC as a benchmark for adaptive, scalable, and context-aware access control, making it an ideal solution for dynamic, high-risk domains such as healthcare, smart cities, and Industrial IoT (IIoT).
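BIG-ABAC's policy schema is not given in the abstract; the sketch below only illustrates the general idea of evaluating an ABAC policy as a decision tree over request attributes. All attribute names and rules are invented for the example, and a dynamic system would rebuild or patch the tree as policies change.

```python
# A policy node is either a final decision ("permit"/"deny") or a test
# on one request attribute with labeled branches.
POLICY = {
    "attr": "role",
    "branches": {
        "doctor": {"attr": "context", "branches": {
            "emergency": "permit",
            "routine": {"attr": "department_match",
                        "branches": {True: "permit", False: "deny"}}}},
        "auditor": "permit",
    },
    "default": "deny",
}

def evaluate(node, request):
    """Walk the decision tree against real-time request attributes;
    any unmatched attribute value falls through to a deny-by-default."""
    if isinstance(node, str):
        return node
    value = request.get(node["attr"])
    nxt = node["branches"].get(value, node.get("default", "deny"))
    return evaluate(nxt, request)

req = {"role": "doctor", "context": "routine", "department_match": True}
print(evaluate(POLICY, req))  # permit
```

Evaluation cost is the tree depth rather than the number of rules, which is consistent with the kind of low per-request latency the abstract reports, though the actual figures depend on the full system.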
Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm called Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continually evaluates data center performance and adjusts replication strategies in real time to guarantee optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
Immutability is a crucial property for blockchain applications; however, it also leads to problems such as the inability to revise illegal data on the blockchain or delete private data. Although redactable blockchains enable on-chain modification, they suffer from inefficiency and excessive centralization, and the majority of redactable blockchain schemes ignore the difficult problems of traceability and consistency checking. In this paper, we present a Dynamically Redactable Blockchain based on a decentralized Chameleon hash (DRBC). Specifically, we propose an Identity-Based Decentralized Chameleon Hash (IDCH) and a Version-Based Transaction structure (VT) to realize the traceability of transaction modifications in a decentralized environment. We then propose an efficient block consistency check protocol based on a Bloom filter tree, which can realize the consistency check of transactions at extremely low time and space cost. Security analysis and experimental results demonstrate the reliability of DRBC and its significant advantages in a decentralized environment.
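The Bloom filter tree is not specified in detail in the abstract, but its building block is easy to sketch. A Bloom filter answers membership probabilistically with no false negatives, so a transaction whose probe bits are not all set is provably absent; the filter size and hash scheme below are arbitrary choices for the example.

```python
import hashlib

class BloomFilter:
    """Fixed-size Bloom filter: k hash probes per item, no false
    negatives, and a false-positive rate tunable via m and k. A tree
    of such filters can narrow a consistency check from the whole
    chain down to a single block before touching transaction data."""
    def __init__(self, m_bits=1024, k=4):
        self.m, self.k, self.bits = m_bits, k, 0

    def _probes(self, item: str):
        # k independent probe positions from salted SHA-256 digests
        for i in range(self.k):
            d = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, item: str):
        for p in self._probes(item):
            self.bits |= 1 << p

    def might_contain(self, item: str) -> bool:
        return all(self.bits >> p & 1 for p in self._probes(item))

bf = BloomFilter()
bf.add("tx:deadbeef")
assert bf.might_contain("tx:deadbeef")  # an added item is never missed
```

The space cost is a fixed m bits per filter regardless of transaction size, which is why a filter-based check can run at very low time and space cost compared with rehashing full transactions.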
Microservices have revolutionized traditional software architecture. While monolithic designs continue to be common, particularly in legacy applications, there is a growing trend towards the modularity, independent deployability, and flexibility offered by microservices, which is further enhanced by developments in cloud technology. This shift towards microservice architecture meets the modern business need for agility, facilitating rapid adaptability in a competitive landscape. Microservices offer an agile framework and, in many cases, can simplify the development process, though the implementation can vary and sometimes introduce complexities. Unlike monolithic systems, which can be cumbersome to modify, microservices enable quicker adjustments and faster deployment times, essential in today's dynamic environment. This article delves into the essence of microservices and explores their growing prominence in the software industry.
The long transaction latency and low throughput of blockchains are key challenges affecting the large-scale adoption of blockchain technology. Sharding is a primary solution that divides the blockchain network into multiple independent shards for parallel transaction processing. However, most existing random or modular schemes fail to consider the transactional relationships between accounts, which leads to a high proportion of cross-shard transactions, thereby increasing the communication overhead and transaction confirmation latency between shards. To solve this problem, this paper proposes a blockchain sharding algorithm based on account degree and frequency (DFSA). The algorithm takes into account both account degree and the weight relationships between accounts. The blockchain transaction network is modeled as an undirected weighted graph, and community detection algorithms are employed to analyze the correlations between accounts. Strongly correlated accounts are grouped into the same shard, and a multi-shard blockchain network is constructed. Additionally, to further reduce the number of cross-shard transactions, this paper designs a random redundancy strategy based on account correlation, which randomly selects strongly correlated accounts and stores them redundantly in another shard, so that originally cross-shard transactions can be verified and confirmed within the same shard. Simulation experiments demonstrate that DFSA outperforms the random sharding algorithm (RSA), modular sharding algorithm (MSA), and label propagation algorithm (LPA) in terms of cross-shard transaction proportion, latency, and throughput. Therefore, DFSA can effectively reduce the cross-shard transaction proportion and lower transaction confirmation latency.
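DFSA's exact community-detection step is not given in the abstract; the sketch below is a greedy stand-in that captures the core idea of placing strongly correlated accounts in one shard and then measuring the cross-shard transaction proportion. The transaction data and capacity rule are invented for the example.

```python
from collections import defaultdict

def shard_by_affinity(transactions, n_shards):
    """Greedy stand-in for DFSA's community step: visit accounts in
    descending weighted-degree order and place each in the shard
    holding the most transaction weight with it, capacity permitting."""
    weight = defaultdict(lambda: defaultdict(float))
    degree = defaultdict(float)
    for a, b, w in transactions:
        weight[a][b] += w
        weight[b][a] += w
        degree[a] += w
        degree[b] += w
    cap = -(-len(degree) // n_shards)  # ceil division keeps shards balanced
    assign, load = {}, [0] * n_shards
    for acct in sorted(degree, key=degree.get, reverse=True):
        # affinity of this account to each shard's already-placed members
        scores = [sum(w for nb, w in weight[acct].items()
                      if assign.get(nb) == s) for s in range(n_shards)]
        open_shards = [s for s in range(n_shards) if load[s] < cap]
        best = max(open_shards, key=lambda s: (scores[s], -load[s]))
        assign[acct] = best
        load[best] += 1
    return assign

def cross_shard_ratio(transactions, assign):
    """Fraction of transactions whose accounts sit in different shards."""
    cross = sum(1 for a, b, _ in transactions if assign[a] != assign[b])
    return cross / len(transactions)

txs = [("a", "b", 5), ("b", "c", 4), ("a", "c", 3),  # community 1
       ("x", "y", 5), ("y", "z", 4), ("x", "z", 3),  # community 2
       ("c", "x", 1)]                                 # weak bridge
assign = shard_by_affinity(txs, 2)
# only the weak bridge remains cross-shard: ratio 1/7
```

A random assignment of these six accounts to two shards would on average leave roughly half of the transactions cross-shard; affinity-aware placement leaves only the weak bridge, which is the effect the paper's redundancy strategy then targets.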
文摘The blockchain trilemma—balancing decentralization,security,and scalability—remains a critical challenge in distributed ledger technology.Despite significant advancements,achieving all three attributes simultaneously continues to elude most blockchain systems,often forcing trade-offs that limit their real-world applicability.This review paper synthesizes current research efforts aimed at resolving the trilemma,focusing on innovative consensus mechanisms,sharding techniques,layer-2 protocols,and hybrid architectural models.We critically analyze recent breakthroughs,including Directed Acyclic Graph(DAG)-based structures,cross-chain interoperability frameworks,and zero-knowledge proof(ZKP)enhancements,which aimto reconcile scalability with robust security and decentralization.Furthermore,we evaluate the trade-offs inherent in these approaches,highlighting their practical implications for enterprise adoption,decentralized finance(DeFi),and Web3 ecosystems.By mapping the evolving landscape of solutions,this review identifies gaps in currentmethodologies and proposes future research directions,such as adaptive consensus algorithms and artificial intelligence-driven(AI-driven)governance models.Our analysis underscores that while no universal solution exists,interdisciplinary innovations are progressively narrowing the trilemma’s constraints,paving the way for next-generation blockchain infrastructures.
文摘A Recommender System(RS)is a crucial part of several firms,particularly those involved in e-commerce.In conventional RS,a user may only offer a single rating for an item-that is insufficient to perceive consumer preferences.Nowadays,businesses in industries like e-learning and tourism enable customers to rate a product using a variety of factors to comprehend customers’preferences.On the other hand,the collaborative filtering(CF)algorithm utilizing AutoEncoder(AE)is seen to be effective in identifying user-interested items.However,the cost of these computations increases nonlinearly as the number of items and users increases.To triumph over the issues,a novel expanded stacked autoencoder(ESAE)with Kernel Fuzzy C-Means Clustering(KFCM)technique is proposed with two phases.In the first phase of offline,the sparse multicriteria rating matrix is smoothened to a complete matrix by predicting the users’intact rating by the ESAE approach and users are clustered using the KFCM approach.In the next phase of online,the top-N recommendation prediction is made by the ESAE approach involving only the most similar user from multiple clusters.Hence the ESAE_KFCM model upgrades the prediction accuracy of 98.2%in Top-N recommendation with a minimized recommendation generation time.An experimental check on the Yahoo!Movies(YM)movie dataset and TripAdvisor(TA)travel dataset confirmed that the ESAE_KFCM model constantly outperforms conventional RS algorithms on a variety of assessment measures.
基金in part supported by the Start-up funds from Central Organization Department and South China University of Technologyfunds from the National Natural Science Foundation of China (U2001217)+1 种基金the Guangdong Science and Technology Program (2020B121201003, 2019ZT08L075,2019QN01L118, 2021A1515012545)the Fundamental Research Fund for the Central Universities,SCUT(2020ZYGXZR095)。
文摘All-inorganic perovskite solar cells(PSCs) have potential to pass the stability international standard of IEC61215:2016 but cannot deliver high performance and stability due to the poor interface contact. In this paper, Sn-doped TiO_(2)(Ti_(1-x)Sn_(x)O_(2)) ultrathin nanoparticles are prepared for electron transport layer(ETL) by solution process. The ultrathin Ti_(1-x)Sn_(x)O_(2) nanocrystals have greatly improved interface contact due to the facile film formation, good conductivity and high work function. The all-inorganic inverted NiOx/CsPbI_(2)Br/Ti_(1-x)Sn_(x)O_(2)p-i-n device shows a power conversion efficiency(PCE) of 14.0%. We tested the heat stability, light stability and light-heat stability. After stored in 85℃ for 65 days, the inverted PSCs still retains 98% of initial efficiency. Under continuous standard one-sun illumination for 600 h,there is no efficiency decay, and under continuous illumination at 85℃ for 200 h, the device still retains 85% of initial efficiency. The 1.0 cm^(2) device of inverted structure shows a PCE of up to 11.2%. The ultrathin Ti_(1-x)Sn_(x)O_(2)is promising to improve the scalability and stability and thus increase the commercial prospect.
Funding: Supported in part by the National Key Basic Research Program of China (973 Program) under Grant Nos. 2007CB307101 and 2007CB307106, the National Key Technology R&D Program under Grant No. 2008BAH37B03, the Program of Introducing Talents of Discipline to Universities (111 Project) under Grant No. B08002, the National Natural Science Foundation of China under Grant No. 60833002, and the China Fundamental Research Funds for the Central Universities under Grant No. 2009YJS016.
Abstract: The locator/ID separation paradigm has been widely discussed as a way to resolve the serious scalability issue that today's Internet is facing. Much research has been carried out on this issue to alleviate the routing burden of the Default-Free Zone (DFZ), improve traffic-engineering capabilities, and support efficient mobility and multi-homing. However, in locator/ID split networks, a third party is needed to store the identifier-to-locator pairs, and mapping identifiers onto locators in a scalable and secure way is a truly critical challenge. In this paper, we propose SS-MAP, a scalable and secure locator/ID mapping scheme for the future Internet. First, SS-MAP uses a near-optimal DHT to map identifiers onto locators, approaching the maximal performance of the system with reasonable maintenance overhead. Second, SS-MAP uses a decentralized admission control system to protect the DHT-based identifier-to-locator mapping from Sybil attacks, in which a malicious mapping server creates numerous fake identities (called Sybil identifiers) to control a large fraction of the mapping system. To the best of the authors' knowledge, this is the first work to discuss the Sybil attack problem in identifier-to-locator mapping mechanisms. We evaluate the performance of the proposed approach in terms of scalability and security. The analysis and simulation results show that the scheme is scalable for large networks and resistant to Sybil attacks.
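To make the DHT side of such a scheme concrete, the sketch below maps identifiers onto locators with a toy consistent-hash ring; the paper's near-optimal DHT and its decentralized admission control are not reproduced, and all names here (`MappingDHT`, `publish`, `resolve`) are hypothetical.

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    """Deterministic position on the ring for servers and identifiers."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class MappingDHT:
    """Toy consistent-hash ring: each mapping server owns the arc of
    identifier hashes that precede its own hash (successor rule)."""
    def __init__(self, servers):
        self.ring = sorted((ring_hash(s), s) for s in servers)
        self.store = {}  # (responsible server, identifier) -> locator

    def _server_for(self, identifier):
        keys = [k for k, _ in self.ring]
        idx = bisect_right(keys, ring_hash(identifier)) % len(self.ring)
        return self.ring[idx][1]

    def publish(self, identifier, locator):
        self.store[(self._server_for(identifier), identifier)] = locator

    def resolve(self, identifier):
        return self.store.get((self._server_for(identifier), identifier))
```

Because publish and resolve route through the same hash function, any node can locate the responsible mapping server without a central directory, which is the property a Sybil attacker tries to subvert by flooding the ring with fake server identities.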
Funding: Project supported by the National Basic Research Program of China (Grant No. 2006CB604902), the National High Technology Development Program of China (Grant No. 2006AA03A121), the National Natural Science Foundation of China (Grant No. 60506012), the Beijing Natural Science Foundation (Grant No. KZ200510005003), the Fok Ying Tung Education Foundation (Grant No. 101062), the Excellent PhD Thesis Foundation (Grant No. 200542), and the Beijing New-Star Program of China (Grant No. 2005A11).
Abstract: The scalability of the tunnel-regenerated multi-active-region (TRMAR) structure has been investigated for application in light-emitting diodes (LEDs). The TRMAR structure was proved theoretically to have unique advantages over conventional single-active-layer structures in virtually every aspect, such as high quantum efficiency, high power, and low leakage. Our study showed that the TRMAR LED structure can obtain high output power under low current injection and high wall-plug efficiency compared with the conventional single-active-layer LED structure.
Abstract: In the past decade, blockchain has evolved as a promising solution for developing secure distributed ledgers and has gained massive attention. However, current blockchain systems face limited throughput, poor scalability, and high latency. Because consensus algorithms fail to manage nodes' identities, blockchain technology is considered inappropriate for many applications, e.g., in IoT environments, on account of poor scalability. This paper proposes a blockchain consensus mechanism called the Advanced DAG-based Ranking (ADR) protocol to improve blockchain scalability and throughput. The ADR protocol uses a directed acyclic graph ledger in which nodes are placed according to their ranking positions in the graph. It allows honest nodes to use the Directed Acyclic Graph (DAG) topology to write blocks and verify transactions instead of a chain of blocks. Through a three-step strategy, the protocol secures the system against double-spending attacks and allows for higher throughput and scalability. The first step is the safe entry of nodes into the system by verifying their private and public keys. The second step builds the advanced DAG ledger so nodes can start block production and verify transactions. In the third step, a ranking algorithm separates out the nodes created by attackers; after eliminating attacker nodes, the remaining nodes are ranked by their performance in the system, and true nodes are arranged in blocks in topological order. As a result, the ADR protocol is suitable for Internet of Things (IoT) applications. We evaluated ADR on EC2 clusters with more than 100 nodes and achieved better transaction throughput and network liveness while adding malicious nodes. Based on the simulation results, transaction performance was significantly improved over DAG-based blockchains such as IOTA and ByteBall.
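The final step, arranging verified blocks in topological order, can be sketched with Kahn's algorithm; the ranking and attacker-elimination logic is the paper's own contribution and is not reproduced here.

```python
from collections import deque

def topological_order(dag):
    """Kahn's algorithm: emit the blocks of a DAG ledger in an order in
    which every block appears after all blocks that point to it."""
    indeg = {v: 0 for v in dag}
    for v in dag:
        for w in dag[v]:
            indeg[w] += 1
    queue = deque(v for v in dag if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in dag[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    if len(order) != len(dag):
        raise ValueError("cycle detected: not a valid DAG ledger")
    return order
```

The cycle check doubles as a sanity check on the ledger: a valid DAG ledger always admits at least one such ordering, while any cycle (which a chain of conflicting double-spends would need) is rejected.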
Funding: This work was funded by the Special Standardization Foundation of the Science and Technology Commission of Shanghai Municipality under Grant 07DZ05018 and the Natural Science Foundation of Shanghai Municipality under Grant 07ZR14104.
Abstract: The continuous increase of data transmission density in wireless mobile communications has challenged the system performance of Wireless Mesh Networks (WMNs). A rule of thumb for wireless ad hoc networks is that average per-node capacity decreases as the number of nodes increases, so it is hard to establish a large-scale wireless mesh network. Network scalability is therefore very important for enhancing the adaptive networking capability of a wireless mesh network. This article discusses key scalability technologies for Mesh Base Stations (BSs) and Mesh Mobile Stations (MSs), such as channel allocation, intelligent routing, multi-antenna techniques, node classification, Quality of Service (QoS) differentiation, and cooperative transmission.
Abstract: This paper proposes an optimal method for choosing the number of enhancement layers in the fine granularity scalability (FGS) scheme under a minimum-transmission-energy constraint. FGS is combined with transmission energy control so that the FGS enhancement-layer transmission energy is minimized while the distortion guarantee is met. By varying the bit-plane level and packet loss rate, the minimum transmission energy of the enhancement layer is obtained while the expected distortion target is satisfied.
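Under the stated constraint, the selection reduces to a small constrained search. The sketch below enumerates (bit-plane level, packet-loss rate) pairs and keeps the minimum-energy feasible one; the `energy` and `distortion` model functions are hypothetical placeholders supplied by the caller, not the paper's rate-distortion models.

```python
def choose_layers(levels, loss_rates, energy, distortion, d_max):
    """Exhaustive search over (bit-plane level, packet-loss rate) pairs:
    return the minimum-energy pair whose expected distortion stays
    within d_max.  `energy(l, p)` and `distortion(l, p)` are
    caller-supplied model functions (illustrative placeholders)."""
    best = None
    for level in levels:
        for p in loss_rates:
            if distortion(level, p) <= d_max:      # feasibility check
                e = energy(level, p)
                if best is None or e < best[0]:    # keep cheapest feasible
                    best = (e, level, p)
    if best is None:
        raise ValueError("no feasible (level, loss-rate) pair")
    return best
```

With realistic models the search space is tiny (a handful of bit-plane levels times a few loss-rate operating points), so brute force is entirely adequate here.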
Abstract: For the large-scale application requirements of belt-type networks, this paper completes the mathematical modeling and quantitative analysis of network scalability based on average path length, and derives a theorem on the scale scalability of belt-type networks. The theorem provides a formula for the theoretical upper limit of the node scale of a belt-type network and a formula for the theoretical upper limit of single-node load.
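The quantity the theorem builds on, average path length, can be computed directly by breadth-first search over all node pairs; the ring topology used below is only an illustrative stand-in for a belt-type network, whose exact structure the abstract does not specify.

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over all ordered node pairs,
    via one BFS per source node (unweighted graph)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1      # exclude the source itself
    return total / pairs

def ring(n):
    """Simple ring used as a stand-in belt topology."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

For a ring, the average path length grows linearly in n, which is exactly the kind of growth a scale-scalability bound has to account for.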
Abstract: With the ever-increasing number of IoT applications, and due to their heterogeneous and bursty nature, scalability has become an important research issue in building cloud-based IoT/M2M systems. This research proposes a dynamic SDN-based network slicing mechanism to tackle the scalability problems caused by such heterogeneity and fluctuation of IoT application requirements. The proposed method can automatically create a network slice on the fly for each new type of IoT application and adjust the QoS characteristics of the slice dynamically according to the changing requirements of an IoT application. Validated with extensive experiments, the proposed mechanism demonstrates better platform scalability when compared to a static slicing system.
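A control-plane sketch of the on-the-fly behavior described above: a slice record is created the first time a new application type appears, and its QoS attributes are updated thereafter. The QoS field names used in the test are hypothetical, and actual SDN flow-rule installation is out of scope.

```python
class SliceManager:
    """Bookkeeping sketch of dynamic slice creation and QoS adjustment
    (the southbound SDN programming is intentionally omitted)."""
    def __init__(self):
        self.slices = {}  # application type -> current QoS dict

    def handle_flow(self, app_type, qos):
        if app_type not in self.slices:
            self.slices[app_type] = dict(qos)   # create slice on demand
        else:
            self.slices[app_type].update(qos)   # adjust QoS dynamically
        return self.slices[app_type]
```

The key point the mechanism exploits is that slice creation is idempotent per application type: repeated flows of the same type only retune the existing slice rather than multiplying slices.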
Abstract: A density-based partitioning strategy is proposed for large domain networks to deal with the scalability issue found in autonomic networks, taking the autonomic Quality of Service (QoS) management context as the scenario. The approach focuses on obtaining dense network partitions having more paths for a given vertex set in the domain. It is demonstrated that dense partitions improve the scalability of autonomic processing, for instance by reducing routing-process complexity. The solution seeks a significant trade-off between partition-algorithm execution time and path-selection quality in large domains. Simulation scenarios for path-selection execution time are presented and discussed. The authors argue that autonomic networks may benefit from the proposed dense-partition approach by achieving scalable, efficient, and near-real-time support for autonomic management systems.
Abstract: With the emerging large volume and diverse heterogeneity of Internet of Things (IoT) applications, the one-size-fits-all design of current 4G networks is no longer adequate to serve the various types of IoT applications. Consequently, the concept of network slicing, enabled by Network Function Virtualization (NFV), has been proposed for the upcoming 5G networks. 5G network slicing allows IoT applications with different QoS requirements to be served by different virtual networks. Moreover, these network slices are equipped with scalability that allows them to grow or shrink their instances of Virtual Network Functions (VNFs) when needed. However, current research focuses only on scalability within a single network slice, i.e., scalability at the VNF level. Such a design will eventually reach the capacity limit of a single slice under stressful incoming traffic and cause the breakdown of an IoT system. We therefore propose a new IoT scalability architecture that provides scalability at the network-slice (NS) level, and design a testbed implementing the proposed architecture to verify its effectiveness. For evaluation, three systems are compared for throughput, response time, and CPU utilization under three different types of IoT traffic: the single-slice scaling system, the multiple-slice scaling system, and a hybrid scaling system in which both can be applied simultaneously. Owing to its balanced trade-off between slice scalability and resource availability, the hybrid scaling system performs best in throughput and response time, with medium CPU utilization.
Funding: National Key Research and Development Program of China (2021YFB2800800), National Key Laboratory Program (E13D01012F), National Natural Science Foundation of China (62104232, 62327806, 61988102), Key Research Program of Frontier Sciences, CAS (ZDBS-LYJSC016), Guangdong Province Key Field R&D Program Project (2020B0101110002), Science and Technology Planning Project of Guangdong Province (2019B090909011), and Program of GBA Branch of AIRCAS (E0Z2D10600).
Abstract: The photonic frequency-interleaving (PFI) technique has shown great potential for broadband signal acquisition, effectively overcoming the clock-jitter and channel-mismatch challenges of the conventional time-interleaving paradigm. However, current comb-based PFI schemes have complex system architectures and struggle to achieve large bandwidth, dense channelization, and flexible reconfigurability simultaneously, which impedes practical applications. In this work, we propose and demonstrate a broadband PFI scheme with high reconfigurability and scalability that exploits multiple free-running lasers for dense spectral slicing with high crosstalk suppression. A dedicated system model is developed through a comprehensive analysis of the system non-idealities, and a cross-channel signal reconstruction algorithm is developed for distortion-free signal reconstruction, based on precise calibration of intra- and inter-channel impairments. The system performance is validated through the reception of multi-format broadband signals, both digital and analog, with a detailed evaluation of signal reconstruction quality, achieving inter-channel phase differences of less than 2°. The reconfigurability and scalability of the scheme are demonstrated through a dual-band radar imaging experiment and a three-channel interleaving implementation with a maximum acquisition bandwidth of 4 GHz. To the best of our knowledge, this is the first demonstration of a practical radio-frequency (RF) application enabled by PFI. Our work provides an innovative solution for next-generation software-defined broadband RF receivers.
Funding: Supported by the National Basic Research Program of China (973 Program) (Grant No. 2003CB314801), the National High-Tech Research & Development Program of China (863 Program) (Grant Nos. 2008AA01A326, 2006AA01Z205, 2006AA01Z209), and the National Natural Science Foundation of China (Grant No. 90704001).
Abstract: This paper presents a definition of the multi-dimensional scalability of the Internet architecture and puts forward a mathematical method to evaluate Internet scalability under a variety of constraints. The method is then employed to study the Internet scalability problem in terms of performance, scale, and service scalability. Based on examples, theoretical analysis and experimental simulation are conducted to address the scalability issue. The results show that the proposed definition and evaluation method of multi-dimensional Internet scalability can effectively evaluate the scalability of the Internet in every aspect, thus providing rational suggestions and methods for evaluating next-generation Internet architectures.
Funding: Supported in part by Multimedia University under the Research Fellow Grant MMUI/250008, in part by Telekom Research & Development Sdn Bhd under Grant RDTC/241149, and by the Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2025R140), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The Internet of Things (IoT) ecosystem faces growing security challenges: it is projected to reach 76.88 billion devices by 2025 and a $1.4 trillion market value by 2027, operating in distributed networks with resource limitations and diverse system architectures. Conventional intrusion detection systems (IDS) face scalability problems and trust-related issues, while blockchain-based solutions are limited by low transaction throughput (Bitcoin: 7 TPS (Transactions Per Second); Ethereum: 15-30 TPS) and high latency. This research introduces MBID, a Multi-Tier Blockchain Intrusion Detection system with AI-enhanced detection, which addresses these problems in very large IoT networks. The MBID system uses a four-tier architecture comprising device, edge, fog, and cloud layers with blockchain implementations, Physics-Informed Neural Networks (PINNs) for edge-based anomaly detection, and a dual consensus mechanism combining Honesty-based Distributed Proof-of-Authority (HDPoA) and Delegated Proof of Stake (DPoS). The system achieves scalability and efficiency through the combination of dynamic sharding and InterPlanetary File System (IPFS) integration. Experimental evaluations demonstrate exceptional performance, achieving a detection accuracy of 99.84%, an ultra-low false positive rate of 0.01% with a false negative rate of 0.15%, and a near-instantaneous edge detection latency of 0.40 ms. The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration, providing a clear, evidence-based path to scaling horizontally in support of millions of devices. The proposed architecture represents a significant advancement in blockchain-based security for IoT networks, effectively balancing the trade-offs between scalability, security, and decentralization.
Abstract: Managing sensitive data in dynamic, high-stakes environments such as healthcare requires access control frameworks that offer real-time adaptability, scalability, and regulatory compliance. BIG-ABAC introduces a transformative approach to Attribute-Based Access Control (ABAC) by integrating real-time policy evaluation and contextual adaptation. Unlike traditional ABAC systems that rely on static policies, BIG-ABAC dynamically updates policies in response to evolving rules and real-time contextual attributes, ensuring precise and efficient access control. Leveraging decision trees evaluated in real time, BIG-ABAC overcomes the limitations of conventional access control models, enabling seamless adaptation to complex, high-demand scenarios. The framework adheres to the NIST ABAC standard while incorporating modern distributed streaming technologies to enhance scalability and traceability. Its flexible policy enforcement mechanisms facilitate the implementation of regulatory requirements such as HIPAA and GDPR, allowing organizations to align access control policies with compliance needs dynamically. Performance evaluations demonstrate that BIG-ABAC processes 95% of access requests within 50 ms and updates policies dynamically with a latency of 30 ms, significantly outperforming traditional ABAC models. These results establish BIG-ABAC as a benchmark for adaptive, scalable, and context-aware access control, making it an ideal solution for dynamic, high-risk domains such as healthcare, smart cities, and the Industrial IoT (IIoT).
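A decision-tree-style ABAC policy walk can be sketched in a few lines: each internal node tests one attribute, and leaves carry the decision. The attribute names below (`role`, `ward`, `emergency`) and the policy shape are hypothetical examples, not BIG-ABAC's actual policy format.

```python
def evaluate(policy, attrs):
    """Walk a decision-tree-style ABAC policy.  Internal nodes are dicts
    with an 'attr' to test and 'branches' keyed by attribute value;
    '*' is the fallthrough branch; leaves are 'permit' or 'deny'."""
    node = policy
    while isinstance(node, dict):
        attr, branches = node["attr"], node["branches"]
        node = branches.get(attrs.get(attr), branches.get("*", "deny"))
    return node

# Hypothetical healthcare policy: doctors may access ICU records;
# nurses only in an emergency; everyone else is denied.
policy = {
    "attr": "role",
    "branches": {
        "doctor": {"attr": "ward",
                   "branches": {"icu": "permit", "*": "deny"}},
        "nurse": {"attr": "emergency",
                  "branches": {True: "permit", "*": "deny"}},
        "*": "deny",
    },
}
```

Because each request only walks one root-to-leaf path, evaluation cost is bounded by tree depth rather than policy size, which is what makes tight per-request latency targets plausible.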
Abstract: Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing opposing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm, Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach utilizes multi-objective optimization to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication decisions in real time to maintain optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
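One simple way to balance such opposing goals is to scalarize them. The sketch below scores candidate replica placements by a weighted sum of min-max-normalized latency, storage cost, and energy; this is an illustrative stand-in only, DMGO's gannet-optimization search is not reproduced, and the field names are hypothetical.

```python
def best_placement(candidates, weights):
    """Pick the candidate data center with the lowest weighted sum of
    min-max-normalized objectives (lower score is better)."""
    keys = ("latency_ms", "storage_cost", "energy_j")

    def norm(vals):
        lo, hi = min(vals), max(vals)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]

    # One normalized column per objective, then a weighted sum per candidate.
    cols = [norm([c[k] for c in candidates]) for k in keys]
    scores = [
        sum(w * col[i] for w, col in zip(weights, cols))
        for i in range(len(candidates))
    ]
    return candidates[scores.index(min(scores))]
```

Normalizing each objective first keeps the weights meaningful across incomparable units (milliseconds vs. joules); the weights themselves encode the operator's priorities.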
Funding: Supported in part by the National Key R&D Program of China under project 2022YFB2702901, the Guangxi Natural Science Foundation under grants 2024GXNSFDA010064 and 2024GXNSFAA010453, the National Natural Science Foundation of China under projects 62172119, 62362013, U21A20467 and 72192801, the Zhejiang Provincial Natural Science Foundation of China under grant LZ23F020012, the Innovation Project of GUET Graduate Education under grant 2023YCXS070, the Guangxi Young Teachers' Basic Ability Improvement Program under grant 2024KY0224, and Lion Rock Labs of Cyberspace Security under grant LRL24-1-C003. This paper is one of the research outcomes of the Xiong'an Autonomous and Controllable Blockchain Underlying Technology Platform Project (2020).
Abstract: Immutability is a crucial property for blockchain applications; however, it also leads to problems such as the inability to revise illegal data on the blockchain or delete private data. Although redactable blockchains enable on-chain modification, they suffer from inefficiency and excessive centralization, and the majority of redactable blockchain schemes ignore the difficult problems of traceability and consistency checking. In this paper, we present a Dynamically Redactable Blockchain based on a decentralized Chameleon hash (DRBC). Specifically, we propose an Identity-Based Decentralized Chameleon Hash (IDCH) and a Version-based Transaction structure (VT) to realize the traceability of transaction modifications in a decentralized environment. We then propose an efficient block consistency check protocol based on a Bloom filter tree, which realizes consistency checking of transactions at extremely low time and space cost. Security analysis and experimental results demonstrate the reliability of DRBC and its significant advantages in a decentralized environment.
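The consistency-check protocol rests on Bloom filters as its building block. A minimal filter is sketched below; the tree layering over these filters and the chameleon-hash construction are the paper's own contributions and are not reproduced.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over a single integer bitmask.  Membership
    tests may yield false positives but never false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k independent bit positions by salting the hash input.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))
```

The no-false-negatives guarantee is what makes the filter usable for consistency checks: if a transaction's digest is absent from the filter, it is definitively not in the summarized block set, and only positive hits need a deeper (and costlier) verification.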
Abstract: Microservices have revolutionized traditional software architecture. While monolithic designs remain common, particularly in legacy applications, there is a growing trend toward the modularity, independent deployability, and flexibility offered by microservices, further enhanced by developments in cloud technology. This shift toward microservice architecture meets the modern business need for agility, facilitating rapid adaptability in a competitive landscape. Microservices offer an agile framework and, in many cases, can simplify the development process, though implementations vary and sometimes introduce complexities of their own. Unlike monolithic systems, which can be cumbersome to modify, microservices enable quicker adjustments and faster deployment times, essential in today's dynamic environment. This article delves into the essence of microservices and explores their growing prominence in the software industry.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61802301), awarded to J. Li, and the Postgraduate Innovation Fund Project of Xi'an Shiyou University (Grant No. YCX2513159).
Abstract: The long transaction latency and low throughput of blockchains are key challenges affecting the large-scale adoption of blockchain technology. Sharding is a primary solution: it divides the blockchain network into multiple independent shards for parallel transaction processing. However, most existing random or modular schemes fail to consider the transactional relationships between accounts, which leads to a high proportion of cross-shard transactions, thereby increasing inter-shard communication overhead and transaction confirmation latency. To solve this problem, this paper proposes a blockchain sharding algorithm based on account degree and frequency (DFSA). The algorithm takes into account both account degree and the weight relationships between accounts. The blockchain transaction network is modeled as an undirected weighted graph, and community detection algorithms are employed to analyze the correlations between accounts. Strongly correlated accounts are grouped into the same shard, and a multi-shard blockchain network is constructed. Additionally, to further reduce the number of cross-shard transactions, this paper designs a random redundancy strategy based on account correlation, which randomly selects strongly correlated accounts and stores them redundantly in another shard, so that otherwise cross-shard transactions can be verified and confirmed within a single shard. Simulation experiments demonstrate that DFSA outperforms the random sharding algorithm (RSA), the modular sharding algorithm (MSA), and the label propagation algorithm (LPA) in terms of cross-shard transaction proportion, latency, and throughput. DFSA can therefore effectively reduce the cross-shard transaction proportion and lower transaction confirmation latency.
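The grouping idea can be illustrated with a greedy stand-in: process account pairs in descending transaction weight and co-locate correlated endpoints in the same shard. This is not the DFSA algorithm itself (which relies on community detection and a redundancy strategy); the assignment rule and names below are illustrative only.

```python
def shard_accounts(edges, n_shards):
    """Greedy stand-in for correlation-aware sharding: walk (a, b, weight)
    account pairs by descending weight, putting both endpoints in one
    shard whenever at least one of them is still unassigned."""
    shard_of, next_shard = {}, 0
    for a, b, _w in sorted(edges, key=lambda e: -e[2]):
        sa, sb = shard_of.get(a), shard_of.get(b)
        if sa is None and sb is None:
            shard_of[a] = shard_of[b] = next_shard % n_shards
            next_shard += 1
        elif sa is None:
            shard_of[a] = sb          # join the partner's shard
        elif sb is None:
            shard_of[b] = sa
    return shard_of

def cross_shard_ratio(edges, shard_of):
    """Fraction of transactions whose endpoints land in different shards."""
    cross = sum(1 for a, b, _ in edges if shard_of[a] != shard_of[b])
    return cross / len(edges)
```

Even this crude heuristic keeps heavy trading partners together, so only the low-weight tail of the transaction graph crosses shards, which is the same quantity DFSA is designed to minimize.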