Abstract: A three-dimensional cloud-scale model has been designed. The governing equations of the model comprise two groups: one group includes the compressible equations of motion, the continuity equation, the pressure equation and the thermodynamic equation, which are of Eulerian type, and the other consists of the cloud-precipitation microphysics equations, which are of Lagrangian type. Since sound waves influence the air motion quite differently from the way they influence temperature or hydrometeors, a time-splitting procedure is used in solving the governing equations. Both unstaggered and staggered meshes have been utilized. The integration schemes adopted are the Eulerian backward difference method for the unstaggered mesh and the semi-implicit method for the staggered mesh. Several modelling experiments have been conducted, and a reasonable three-dimensional picture of deep convection is obtained. With this model the horizontal and vertical vortex circulations are simulated. Furthermore, the effects of the horizontal vortex on the formation and development of the downdraft within the cloud have also been studied.
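The time-splitting idea in this abstract can be illustrated with a minimal one-dimensional sketch (an assumption for illustration, not the paper's model): fast acoustic terms are integrated with small substeps while the slowly varying tendency is evaluated only once per large step, in the forward-backward spirit of the Eulerian backward scheme.

```python
import numpy as np

def step_split(u, p, slow_tendency, c=340.0, dx=100.0, dT=1.0, n_sub=10):
    """Advance velocity u and pressure p by one large step dT,
    sub-stepping the acoustic terms n_sub times (illustrative sketch)."""
    dt = dT / n_sub
    du_slow = slow_tendency(u)              # slow terms: once per large step
    for _ in range(n_sub):
        # Forward-backward acoustic substep: update u with the current
        # pressure gradient, then update p from the *updated* velocity.
        u = u + dt * (du_slow - np.gradient(p, dx))
        p = p - dt * c**2 * np.gradient(u, dx)
    return u, p
```

The hypothetical `slow_tendency` stands in for advection, buoyancy and microphysical forcing, which the abstract says respond to sound waves far less than the pressure field does.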
Abstract: Big data is an emerging term in the storage industry, referring to data analytics on big storage, i.e., Cloud-scale storage. In Cloud-scale (or EB-scale) file systems, balancing request workloads across a metadata server cluster is critical for avoiding performance bottlenecks and improving quality of service. Many good approaches have been proposed for load balancing in distributed file systems. Some of them focus on global namespace balancing, making the metadata distribution across metadata servers as uniform as possible. However, they do not work well under skewed request distributions, which impair load balancing but simultaneously increase the effectiveness of caching and replication. In this paper, we propose Cloud Cache (C2), an adaptive and scalable load balancing scheme for metadata server clusters in EB-scale file systems. It combines an adaptive cache diffusion scheme with an adaptive replication scheme to cope with the request load balancing problem, and it can be integrated into existing distributed metadata management approaches to efficiently improve their load balancing performance. C2 runs as follows: 1) adaptive cache diffusion runs first: if a node is overloaded, load-shedding is used; otherwise, load-stealing is used; and 2) the adaptive replication scheme runs second: if one (or more) very popular metadata item causes a node to be overloaded, adaptive replication is used, in which the very popular item is replicated rather than split across several nodes by adaptive cache diffusion, because of its knapsack property. Performance evaluation in trace-driven simulations demonstrates the efficiency and scalability of C2.
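The two-stage decision procedure described in the abstract can be sketched as follows; the thresholds, the action names, and the per-item popularity representation are assumptions for illustration, not details from the paper.

```python
def c2_action(load, overload_threshold, item_popularity, hot_threshold):
    """Return the balancing actions a C2 node would take (sketch).

    Stage 1 (adaptive cache diffusion): an overloaded node sheds cached
    metadata to neighbors; an underloaded node steals load instead.
    Stage 2 (adaptive replication): a very popular item is replicated
    rather than diffused -- its knapsack property means splitting one
    hot item across nodes cannot reduce the load it generates.
    """
    actions = []
    if load > overload_threshold:
        actions.append("shed")
        hot = [i for i, p in enumerate(item_popularity) if p >= hot_threshold]
        if hot:
            actions.append(("replicate", hot))
    else:
        actions.append("steal")
    return actions
```

For example, a node at load 10 against a threshold of 5, holding one item whose popularity exceeds the hot threshold, would both shed load and replicate that item.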
Abstract: With the widespread use of computing resources and virtualized devices in computing-power networks, threshold-based reactive triggering of cloud-cluster elastic scaling suffers from scaling lag. To address this problem, a Transformer-based predictive elastic scaling method for cloud-cluster resources (Predictive Cloud Cluster Resource Elastic Scaling Method Based on Transformer, Cloudformer) is proposed. The method uses a series-decomposition module to split cloud-cluster data into a trend component and a seasonal component: the trend component is predicted by a dual-coefficient network that normalizes and de-normalizes the mean and variance predicted from the input space, while the seasonal component is predicted by a frequency-domain self-attention model incorporating the Fourier transform. During training, an exponential moving average model dynamically adjusts the error bound of the training loss. Experimental results show that, compared with five state-of-the-art predictive elastic scaling algorithms, the proposed method reduces the mean squared error of univariate and multivariate prediction across different prediction windows by 10.07% and 10.01%, respectively, while keeping model training and inference time low.
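The trend/seasonal split described above can be illustrated with a minimal moving-average decomposition; this is an assumption about what such a series-decomposition module computes, not Cloudformer's actual implementation.

```python
import numpy as np

def decompose(series, kernel=5):
    """Split a 1-D load series into trend (moving average) and
    seasonal (residual) components; kernel should be odd (sketch)."""
    pad = kernel // 2
    # Pad with edge values so the trend has the same length as the input.
    padded = np.concatenate([np.repeat(series[0], pad),
                             series,
                             np.repeat(series[-1], pad)])
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    seasonal = series - trend
    return trend, seasonal
```

By construction the two components sum back to the original series, so a model can predict each part with a specialized head, as the abstract describes, and add the predictions.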
Abstract: The demand for digitizing large-scale outdoor infrastructure continues to grow. Deep-learning-based automatic scanning to building information modeling (Scan2BIM) markedly improves modeling accuracy and construction speed through its strong feature-learning ability and automated pipeline, and plays a key role in reconstructing structurally complex outdoor scenes. This paper introduces the four core modules of Scan2BIM and related research progress. For the 3D point cloud acquisition module, the development of acquisition technology is summarized along two dimensions, acquisition devices and acquisition sources, with an emphasis on representative 3D point cloud datasets. Large-scale point cloud registration algorithms are divided into optimization-based and deep-learning-based categories according to their learning paradigm, and related work is compared across accuracy, computational efficiency, robustness and other dimensions. In the point cloud segmentation module, panoptic segmentation and instance segmentation algorithms are organized and summarized under unified evaluation metrics. For automated BIM modeling, the core BIM interoperability standards are outlined, and various geometric entity modeling and relationship modeling algorithms are classified and summarized. Finally, through in-depth analysis and forward-looking discussion, the paper points out that current large-scale outdoor scene modeling fails to effectively combine efficiency, accuracy, generalization and unification; future work will focus on multi-source data fusion modeling, joint optimization of accuracy and robustness, construction of end-to-end general Scan2BIM frameworks, and the application and exploration of large models.
Abstract: Multi-scale features are crucial for dense prediction tasks on point clouds. Current 3D point cloud processing methods mainly rely on encoder-decoder frameworks that extract and fuse multi-scale features through a backbone network. However, these methods usually adopt a late-fusion strategy, leading to insufficient feature integration. To address this problem, this paper proposes HRFN3D (High-resolution Feature Network for 3D Point Cloud), a high-resolution feature network designed for point cloud classification and segmentation. Through a novel relation-learning module, HRFN3D fuses features at an early stage, promoting interaction between low-resolution, high-semantic points and high-resolution, low-semantic points, so that high-resolution points retain high-level semantic information early on, which benefits subsequent feature learning. In the later stage, global feature vectors produced by different pooling strategies are concatenated with the original point features, preserving detail while strengthening the representativeness of the global features. Experimental results show that HRFN3D improves class mean IoU and instance mean IoU on the ShapeNet Part dataset by 2.2 and 0.9 percentage points, respectively, achieving the best instance mean IoU of 86.3%; on ModelNet40, it achieves the highest class mean accuracy of 91.5% with only 4.3M parameters. These results validate the effectiveness of HRFN3D in multi-scale feature processing.
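The late-stage fusion step described in this abstract (pooled global vectors concatenated back onto per-point features) can be sketched as follows; the function name and shapes are illustrative assumptions, not HRFN3D's actual layers.

```python
import numpy as np

def fuse_global(point_feats):
    """point_feats: (N, C) per-point features -> (N, 3C) fused features.

    Max- and mean-pool the per-point features into two global vectors,
    then tile their concatenation back onto every point so each point
    carries both local detail and global context.
    """
    g_max = point_feats.max(axis=0)          # (C,) global max-pool
    g_mean = point_feats.mean(axis=0)        # (C,) global mean-pool
    g = np.concatenate([g_max, g_mean])      # (2C,) combined global vector
    g_tiled = np.tile(g, (point_feats.shape[0], 1))
    return np.concatenate([point_feats, g_tiled], axis=1)
```

Combining complementary pooling statistics before concatenation is a common way to make the global vector more representative than either pooling alone, which matches the abstract's motivation.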