Funding: Supported by the National Key Research and Development Program of China (Grant No. 2021YFB2501000).
Abstract: Predictive cruise control (PCC) is an intelligence-assisted control technology that can significantly improve the overall performance of a vehicle by using road and traffic information in advance. With the continuous development of cloud control platforms (CCPs) and telematics boxes (T-boxes), cloud-based predictive cruise control (CPCC) systems are considered an effective solution to the problems of map-update difficulties and insufficient computing power on the vehicle side. In this study, a vehicle-cloud hierarchical control architecture for PCC is designed based on a CCP and a T-box. This architecture utilizes waypoint structures for hierarchical and dynamic cooperative inter-triggering, enabling rolling optimization of the system and command parsing at the vehicle end. This approach significantly improves the anti-interference capability and resolution efficiency of the system. On the CCP side, a predictive fuel-saving speed-planning (PFSP) algorithm based on the waypoint structure is proposed that considers the throttle input, speed variations, and time efficiency. It features a forward optimization search without requiring weight adjustments, demonstrating robust applicability to various road conditions and to vehicles equipped with a constant cruise (CC) system. On the vehicle-side T-box, based on the reference control sequence and the global navigation satellite system position, the recommended speed is parsed and controlled using the acute-angle principle. Analysis of the differences between the PFSP algorithm and dynamic programming (DP) and model predictive control (MPC) algorithms under uphill and downhill conditions shows that the PFSP achieves good energy-saving performance compared with CC without exhibiting significant speed fluctuations, demonstrating strong adaptability to the CC system. Finally, by building an experimental platform and running field tests over a total of 2000 km, we verified the effectiveness and stability of the CPCC system and demonstrated the fuel-saving performance of the proposed PFSP algorithm. The results showed that the CPCC system equipped with the PFSP algorithm achieved an average fuel-saving rate of 2.05%-4.39% compared with CC.
Funding: This research was supported by the National Basic Research Program of China (2011CB302600), the National High Technology Research and Development Program of China (2012AA011201), the National Natural Science Foundation of China (Grant Nos. 61161160565, 90818028, 91118008, 60903043), and an NSFC/RGC Joint Research Scheme sponsored by the Research Grants Council of Hong Kong, China and the National Natural Science Foundation of China (Project JC201104220300A).
Abstract: The growing scale and complexity of component interactions in cloud computing systems pose great challenges for operators seeking to understand the characteristics of system performance. Profiling has long been proven an effective approach to performance analysis; however, existing approaches confront new challenges that emerge in cloud computing systems. First, the efficiency of profiling becomes a critical concern; second, service-oriented profiling should be considered to support separation-of-concerns performance analysis. To address these issues, in this paper we present P-Tracer, an online performance profiling tool specifically tailored for cloud computing systems. First, P-Tracer constructs a specific search engine that proactively processes performance logs and generates a dedicated index for fast queries; second, for each service, P-Tracer retrieves a statistical insight into performance characteristics from multiple dimensions and provides operators with a suite of web-based interfaces for querying the critical information. We evaluate P-Tracer in terms of tracing overhead, data-preprocessing scalability, and query efficiency. Three real-world case studies from the Alibaba cloud computing platform demonstrate that P-Tracer can help operators understand software behaviors and localize the primary causes of performance anomalies effectively and efficiently.
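The pre-built index for fast log queries described above can be illustrated with a minimal inverted index. The token-based scheme and the example log lines are assumptions for illustration, not P-Tracer's actual index design:

```python
from collections import defaultdict

def build_index(log_lines):
    """Toy inverted index: token -> set of line ids (hypothetical format)."""
    index = defaultdict(set)
    for line_id, line in enumerate(log_lines):
        for token in line.lower().split():
            index[token].add(line_id)
    return index

def query(index, *tokens):
    """Return ids of log lines containing all query tokens."""
    sets = [index.get(t.lower(), set()) for t in tokens]
    if not sets:
        return set()
    result = sets[0]
    for s in sets[1:]:
        result = result & s
    return result

logs = ["rpc latency high service=A",
        "disk io slow service=B",
        "rpc timeout service=A"]
idx = build_index(logs)
```

The key point mirrors the abstract: the index is built proactively, once, so each operator query is a cheap set intersection rather than a scan over raw logs.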
Funding: This work was supported by the National Natural Science Foundation of China (61871058) and the Key Special Project in Intergovernmental International Scientific and Technological Innovation Cooperation of the National Key Research and Development Program (2017YFE0118600).
Abstract: The rapid growth of Internet content, applications, and services requires more computing and storage capacity and higher bandwidth. Traditionally, Internet services are provided from the cloud (i.e., from far away) and consumed on increasingly smart devices. Edge computing and caching provide these services from nearby smart devices. Blending both approaches should combine the power of cloud services with the responsiveness of edge networks. This paper investigates how to intelligently use the caching and computing capabilities of edge nodes/cloudlets through artificial intelligence-based policies. We first analyze the scenarios of mobile edge networks with edge computing and caching abilities, then design a paradigm of virtualized edge network that includes an efficient way of isolating traffic flows in the physical network layer. We develop caching and communication resource virtualization in the virtual layer and formulate the dynamic resource allocation problem as a reinforcement learning model; with the proposed self-adaptive and self-learning management, more flexible, better-performing, and more secure network services can be obtained at lower cost. Simulation results and analyses show that serving cached contents from the proper edge nodes through a trained model is more efficient than requesting them from the cloud.
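As a hedged illustration of formulating edge caching as reinforcement learning, the toy below reduces the problem to tabular Q-learning over a single cache slot. The state/action encoding and the hit/miss reward are simplifying assumptions for illustration only, not the paper's virtualized-network model:

```python
import random

def train_cache_policy(requests, n_contents, episodes=200,
                       alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Toy tabular Q-learning for a one-slot edge cache.

    State: the content currently cached; action: the content to cache
    next; reward: 1 when the next request hits the cache, else 0
    (i.e., a cloud fetch). Illustrative assumption, not the authors'
    actual formulation.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_contents for _ in range(n_contents)]
    for _ in range(episodes):
        state = rng.randrange(n_contents)
        for req in requests:
            if rng.random() < eps:                       # explore
                action = rng.randrange(n_contents)
            else:                                        # exploit
                action = max(range(n_contents), key=lambda a: Q[state][a])
            reward = 1.0 if action == req else 0.0
            next_state = action
            # Standard Q-learning update
            Q[state][action] += alpha * (
                reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state
    return Q
```

With a skewed request trace, the learned greedy policy caches the popular content, matching the abstract's claim that a trained model beats always requesting from the cloud.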
Funding: This work was supported by the National Natural Science Foundation of China (61871058) and the Key Special Project in Intergovernmental International Scientific and Technological Innovation Cooperation of the National Key Research and Development Program (2017YFE0118600).
Abstract: Device-to-Device (D2D) communication is a promising technology that can reduce the burden on cellular networks while increasing network capacity. In this paper, we focus on channel resource allocation and power control to improve system resource utilization and network throughput. First, we treat each D2D pair as an independent agent. Each agent makes decisions based on the local channel state information it observes. A multi-agent Reinforcement Learning (RL) algorithm is proposed for our multi-user system. We assume that a D2D pair does not possess any information on the availability and quality of the resource block to be selected, so the problem is modeled as a stochastic non-cooperative game. Hence, each agent becomes a player, and the players make decisions together to achieve global optimization. Thereby, a multi-agent Q-learning algorithm based on game theory is established. Second, to accelerate the convergence of multi-agent Q-learning, we consider a power allocation strategy based on the Fuzzy C-means (FCM) algorithm. The strategy first groups the D2D users by FCM, treats each group as an agent, and then runs the multi-agent Q-learning algorithm to determine the power for each group of D2D users. The simulation results show that the multi-agent Q-learning algorithm can improve the throughput of the system. In particular, FCM can greatly speed up the convergence of the multi-agent Q-learning algorithm while improving system throughput.
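The FCM grouping step can be sketched as follows. The choice of user features (generic 2-D points here, standing in for, e.g., positions or channel gains) and the fuzzifier m = 2 are assumptions, not the paper's settings:

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns membership matrix U and centers.

    U[i, k] is the degree to which point i belongs to cluster k; each
    row of U sums to 1. Illustrative sketch of the grouping step only.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(points, dtype=float)              # (n, d)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)                # normalize rows
    for _ in range(iters):
        W = U ** m                                   # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-9)                      # avoid divide-by-zero
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```

Taking argmax over each row of U yields the hard grouping; each resulting group would then act as one Q-learning agent, as the abstract describes.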
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFB3803203).
Abstract: Ensuring the safety and performance of lithium-ion batteries (LIBs) is a significant challenge for electric vehicles. To tackle this issue, an innovative liquid-immersed battery thermal management system (LIBTMS) using bionic baffles with fish-like perforations is developed. The thermal-flow-electric coupling characteristics of LIBTMSs with different baffle structures (no baffle, conventional baffle, baffle with circular perforations, and baffle with fish-like perforations) are systematically investigated using experimental and numerical methods. The results indicate that the forced flow scheme exhibits better thermal management performance and voltage equalization than static flow. Moreover, the LIB temperatures and voltage deviations of different LIBTMSs increase quickly with increasing discharge rates. More importantly, the innovative LIBTMS exhibits the best thermoelectric performance, due to its excellent thermoelectric equilibrium behavior caused by high electrical and temperature consistency, as well as the best overall performance in balancing the pressure loss and heat transfer capacity of the system. Compared with other structures, the innovative LIBTMS using baffles with fish-like perforations exhibits maximum reductions of 10.1%, 15.2%, 25.8%, and 9.0% in LIB maximum temperature, maximum temperature difference, system pressure drop, and voltage deviation, respectively, under the same operating conditions. Furthermore, for the LIBTMS using baffles with fish-like perforations, the bottom-inlet and top-outlet configuration and coolant precooling are suggested to enhance cooling performance.
Abstract: The cloud operating system (cloud OS) is used for managing cloud resources so that they can be used effectively and efficiently. It is also the duty of the cloud OS to provide a convenient interface for users and applications. However, these two goals are often conflicting, because convenient abstraction usually requires more computing resources. Thus, the cloud OS has its own characteristics of resource management and task scheduling for supporting various kinds of cloud applications. The evolution of the cloud OS is in fact driven by these two often conflicting goals, and finding the right tradeoff between them makes each phase of the evolution happen. In this paper, we investigate the evolution of the cloud OS from three different aspects: enabling technology evolution, OS architecture evolution, and cloud ecosystem evolution. We show that finding the appropriate APIs (application programming interfaces) is critical for the next phase of cloud OS evolution. Convenient interfaces need to be provided without sacrificing efficiency when APIs are chosen. We present an API-driven cloud OS practice, showing the great capability of APIs for developing a better cloud OS and for helping build and run a healthy cloud ecosystem.
Funding: Supported by the National Key R&D Program of China (2021YFB3803200) and the National Natural Science Foundation of China (Grant No. U2241253).
Abstract: Efficient thermal management of lithium-ion batteries operating under extremely rapid charging-discharging is of widespread interest for avoiding temperature-induced battery degradation and thus enhancing lifespan. Herein, thermal management of a lithium-ion battery is performed via a liquid-cooling theoretical model that integrates a thermoelectric model of the battery pack with single-phase heat transfer. Aiming to alleviate battery temperature fluctuation by automatically manipulating the flow rate of the working fluid, a nominally model-free controller, i.e., a fuzzy logic controller, is designed. An optimized on-off controller based on pump-speed optimization is introduced as the comparative controller. Thermal control simulations are conducted under regular and extreme operating conditions, and the two controllers are applied to keep the battery temperature within proper intervals, which is conducive to enhancing the battery charge-discharge efficiency. The results indicate that, for any operating condition, the fuzzy logic controller excels in tracking accuracy of the battery temperature set point. Thanks to the established fuzzy sets and fuzzy behavioral rules, the battery temperature is maintained near the set point throughout, and the temperature fluctuation amplitude is greatly reduced, with a temperature control accuracy of ~0.2℃ (regular condition) and ~0.5℃ (extreme condition), compared with ~1.1℃ (regular condition) and ~1.6℃ (extreme condition) for the optimized on-off controller. In the extreme operating condition, the proposed optimized on-off controller manifests hysteresis in the temperature fluctuation, which is ascribed to the dead-band set for the feedback temperature. The simulation results cast new light on the utilization and development of model-free temperature controllers for the thermal management of lithium-ion batteries.
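A minimal sketch of how a fuzzy logic controller of this kind maps temperature error to a pump flow-rate command. The membership functions, breakpoints, and rule outputs below are illustrative assumptions, not the paper's tuned fuzzy sets and behavioral rules:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_flow_rate(temp_error):
    """Map temperature error (deg C above set point) to a flow command in [0, 1].

    Three hypothetical rules: error "low" -> low flow, "medium" ->
    medium flow, "high" -> full flow; weighted-average defuzzification.
    """
    low = tri(temp_error, -1.0, 0.0, 1.0)
    med = tri(temp_error, 0.0, 1.0, 2.0)
    high = tri(temp_error, 1.0, 2.0, 3.0)
    if temp_error >= 2.0:            # saturate beyond the last set
        high = 1.0
    strengths = [low, med, high]
    outputs = [0.1, 0.5, 1.0]        # flow command fired by each rule
    total = sum(strengths)
    if total == 0.0:
        return 0.0
    return sum(s * o for s, o in zip(strengths, outputs)) / total
```

Because neighboring sets overlap, the command varies smoothly with the error, which is what suppresses the temperature fluctuation amplitude relative to an on-off controller with a dead band.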
Funding: Supported by the Key Special Project in Intergovernmental International Scientific and Technological Innovation Cooperation of the National Key Research and Development Program of China (2017YFE0118600).
Abstract: In this paper, deep learning technology is utilized to solve the railway track recognition problem in intrusion detection. Railway track recognition can be viewed as a semantic segmentation task, which extends image processing to pixel-level prediction. The encoder-decoder DeepLabv3+ model is applied in this work due to its good performance on semantic segmentation tasks. Since images of the railway track collected from the video surveillance of the train cab are used as the experimental dataset, the following improvements are made to the model. The first aspect deals with over-fitting due to the limited amount of training data: data augmentation and transfer learning are applied to enrich the diversity of the data and enhance model robustness during training. Second, different gradient descent methods are compared to obtain the optimal optimizer for training the model parameters. The third problem relates to data sample imbalance: cross-entropy (CE) loss is replaced by focal loss (FL) to address the serious imbalance between positive and negative samples. The effectiveness of the improved DeepLabv3+ model with the above solutions is demonstrated by experimental results with different system parameters.
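The CE-to-FL substitution can be sketched for the binary case. The alpha and gamma defaults below follow the common values from the focal loss literature and are not necessarily this paper's settings:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class; y: label in {0, 1}.
    FL = -alpha_t * (1 - p_t)^gamma * log(p_t), with p_t = p if y == 1
    else 1 - p. Setting gamma = 0 and alpha = 1 recovers plain cross
    entropy; gamma > 0 down-weights easy, well-classified samples so
    the rare positive class dominates the gradient.
    """
    eps = 1e-12
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    p_t = min(max(p_t, eps), 1.0 - eps)   # clamp for numerical safety
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For a confidently correct pixel (p_t = 0.9), the modulating factor (1 - p_t)^2 = 0.01 shrinks its loss a hundredfold relative to CE, which is exactly how FL counteracts the flood of easy background pixels in track segmentation.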
Funding: Supported by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (2022C01042), the National Natural Science Foundation of China (Grant No. 92046011), the Center for Balance Architecture, Zhejiang University, and the Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies.
Abstract: The outbreak and spread of the COVID-19 pandemic have had a significant impact on the transportation system. Analyzing the pandemic's impact on the transportation system can, to a certain extent, reflect its impact on the social economy and can also help evaluate the effect of anti-pandemic policy implementation. In addition, the analysis results are expected to support policy optimization. Currently, most relevant studies analyze the pandemic's impact on the overall transportation system from a macro perspective, while few studies quantitatively analyze its impact on individual spatiotemporal travel behavior. Based on license plate recognition (LPR) data, this paper analyzes the spatiotemporal travel patterns of travelers in each stage of the pandemic, quantifies the changes in travelers' spatiotemporal behaviors, and analyzes the adjustment of travelers' behaviors under the influence of the pandemic. Three different behavior adjustment strategies are observed under the influence of the pandemic, and behavior adjustment is related to an individual's past travel habits. The paper quantitatively assesses the impact of the COVID-19 pandemic on individual travel behavior, and the proposed method can be used to quantitatively assess the impact of any long-term emergency on individual micro travel behavior.
Abstract: The authors regret that Eq. (5) in the paper was written incorrectly and should be revised as follows:

s_p(a_i, a_j) = [len(a_i) × Ratio(LCS(a_i, a_j), a_i) + len(a_j) × Ratio(LCS(a_i, a_j), a_j)] / [len(a_i) + len(a_j)]    (5)

The authors would like to apologise for any inconvenience caused.
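Assuming Ratio(x, a) = len(x) / len(a) (the erratum does not restate the definition, so this reading is an assumption), the revised Eq. (5) can be checked numerically; under that assumption the length-weighted form collapses to the familiar 2·|LCS| / (|a_i| + |a_j|):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of strings a and b (standard DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def s_p(ai, aj):
    """Revised Eq. (5), with the hypothetical Ratio(x, a) = len(x) / len(a)."""
    lcs = lcs_len(ai, aj)
    num = len(ai) * (lcs / len(ai)) + len(aj) * (lcs / len(aj))
    return num / (len(ai) + len(aj))
```

On identical strings the similarity is 1, and it decreases as the longest common subsequence shrinks relative to the combined length, consistent with a normalized similarity measure.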