Abstract: With the rapid development of 5G, the Internet of Things (IoT), Artificial Intelligence (AI), and related technologies, demand for high-density, high-bandwidth, ultra-low-latency services has surged, and edge computing has emerged as a key paradigm for meeting these challenges. As the core physical carrier of edge computing, the edge data center's construction model and performance directly determine how well edge computing is deployed in practice. This paper analyzes in depth the significant problems, in terms of energy efficiency, space utilization, deployment speed, adaptability, and investment efficiency, of current edge data center construction models based on retrofitting traditional aggregation-node equipment rooms and communication base stations. It then proposes and elaborates the design concept, technical architecture, and implementation scheme of the CMCC-BLOCK (China Mobile edge computing power processing unit) integrated data center, which provides efficient, reliable, and agile infrastructure support for edge computing in the 5G era and future networks and has broad prospects for adoption.
Abstract: In this paper we study the problem of locating multiple facilities in convex sets with fuzzy parameters. The problem asks for the locations of new facilities within given convex sets such that the sum of weighted distances between the new facilities and the existing facilities is minimized. We present a linear programming model for this problem with block norms and then apply it to problems with fuzzy data. We also do this for the rectilinear and infinity norms as special cases of block norms.
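As a hedged illustration only (the symbols below are assumed for this sketch, not taken from the paper), the rectilinear-norm special case can be turned into a linear program by splitting each absolute value. With new facilities x_j in the plane restricted to polyhedral convex sets X_j, existing facilities a_i, and weights w_ij, a standard formulation in LaTeX is:

\begin{align*}
\min_{x,\,u}\quad & \sum_{i=1}^{m}\sum_{j=1}^{n} w_{ij}\,(u_{ij1}+u_{ij2})\\
\text{s.t.}\quad & u_{ijk} \ge x_{jk}-a_{ik},\qquad u_{ijk} \ge a_{ik}-x_{jk}, \qquad i=1,\dots,m,\; j=1,\dots,n,\; k=1,2,\\
& x_j \in X_j,\qquad j=1,\dots,n.
\end{align*}

At the optimum u_{ijk} = |x_{jk} - a_{ik}|, so the objective equals the weighted sum of rectilinear distances; the infinity norm is handled analogously by using a single variable u_{ij} bounded below by both coordinate differences.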
Abstract: Based on the across-fault measurement data along the northern edge of the Qinghai-Xizang block, we classify the deformation into different types and use these types to probe the nature of the various fault movements. The recent tectonic movement of the main structural belts and the seismicity in this area are then discussed. From this we conclude that across-fault measurement reflects not only the behavior of the nearby fault but also changes in the regional stress field; it is therefore not only a method for obtaining regional seismogenic information and making short-term predictions, but can also contribute, through the macro-scale characteristics of fractures, to large-scale space-time prediction of moderate and strong earthquakes.
Funding: Supported by the National Key R&D Program of China (2018YFC1503606, 2017YFC1500502) and the Earthquake Tracking Task (2019010215).
Abstract: Based on GPS velocities during 1999-2007, large-scale GPS baseline time series during 1999-2008, and cross-fault leveling data during 1985-2008, this paper analyzes and summarizes the movement, tectonic deformation, and strain accumulation of the Longmenshan fault and the surrounding area before the MS8.0 Wenchuan earthquake, as well as the possible physical mechanism late in the seismic cycle. The results indicate the following. GPS velocity profiles show that, before the earthquake, continuous deformation across the eastern Qinghai-Tibetan Plateau was distributed over a zone at least 500 km wide, while there was little deformation in the Sichuan Basin and the Longmenshan fault zone, implying that the eastern Qinghai-Tibetan Plateau continuously supplied energy accumulation to the locked Longmenshan fault zone. GPS strain rates show that east-west compressive deformation was larger northwest of the mid-northern segment of the fault zone, decreased gradually from the far field toward the fault, and was small within the fault zone itself; around the southwestern segment the east-west compression was significant, with a strain accumulation rate larger than that of the mid-northern segment. Fault-locking analysis indicates that nearly the whole Longmenshan fault was locked before the earthquake, except that the source region was only weakly locked and a 20-km-wide patch on the southwestern segment, between 12 km and 22.5 km depth, was creeping. Large-scale GPS baseline time series in the northeast direction in the North-South Seismic Belt generally became compressive from 2005, reflecting enhanced relative compressive deformation. The cross-fault leveling data show that both the annual vertical change rate and the accumulated deformation trend in the Longmenshan fault zone were small, indicating that vertical activity near the fault was very weak and the fault was tightly locked. From these GPS and cross-fault leveling observations before the Wenchuan earthquake, we consider that the Longmenshan fault was tightly locked from the surface to depth, and that horizontal and vertical deformation around the fault were weak at the relatively small scale of crustal deformation. The process of weak deformation may be slow, and the weakly deforming area may enlarge as a large earthquake approaches. The continuous, slow compressive deformation across the eastern Qinghai-Tibetan Plateau before the earthquake provided the dynamic support for strain accumulation in the Longmenshan fault zone at the relatively large scale of crustal deformation.
Abstract: In reversible data hiding, pixel value ordering is a recent research direction. Secret message bits are embedded in the maximum or minimum value among the pixels of a block. Pixel value ordering helps identify the embeddable pixels in a block but suffers from a low embedding payload, since many pixels in a block carry no bits at all. The scheme proposed in this paper resolves that problem by allowing every pixel to carry data bits. The method partitions the image pixels into blocks of size two. In each block, it first orders the two pixels and then computes their average, which is placed between them, extending the block from two to three pixels. After the embedding method of Weng et al. is applied, the average value is removed from the block, reducing it back to two pixels. These two remaining pixels are the stego pixels, which form a stego image. A piece of state information is produced during embedding to track whether the block's cover pixels were changed. After all blocks have been processed, the resulting binary stream of state information is converted to decimal values and assembled into a two-dimensional array. Treating this array as another image plane, Weng et al.'s method is applied again to embed further data, producing another stego image. Experimental validation shows that the proposed method performs better than previous work in this field.
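A minimal sketch of the block-expansion step described above, assuming 8-bit grayscale pixel values; the actual bit-embedding rule of Weng et al. is not reproduced here, and the function names are hypothetical:

def expand_block(p1, p2):
    # Order the two cover pixels and insert their integer average in the
    # middle, extending the block from two to three pixels (as described
    # in the abstract); the average itself carries no payload.
    lo, hi = sorted((int(p1), int(p2)))
    avg = (lo + hi) // 2
    return lo, avg, hi

def shrink_block(lo, avg, hi):
    # After the (omitted) Weng et al. embedding step has modified lo/hi,
    # the average is dropped so the block returns to size two; lo and hi
    # become the stego pixels.
    return lo, hi

# Example: cover block (61, 52) expands to (52, 56, 61) and shrinks back.
print(expand_block(61, 52))                 # (52, 56, 61)
print(shrink_block(*expand_block(61, 52)))  # (52, 61)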
Funding: Zhejiang Province "Pioneer" and "Leading Goose" R&D Program (2024C01058); Zhejiang Province "14th Five-Year Plan" second batch of provincial undergraduate teaching reform projects (JGBA2024014); Ministry of Education Industry-University Collaborative Education Project, January 2025 batch (2501270945); 2024 Zhejiang University undergraduate "AI empowerment" demonstration course construction project (24); Zhejiang University first batch of AI For Education empirical teaching research projects (202402).
Abstract: After setting out the quantum innovation potential of biosourced entities and outlining the inventive spectrum of adjacent technologies that can derive from them, this review highlights the following, with the support of Bigger Data approaches and a fairly large body of literature (more than 250 articles and 10,000 patents). It gives an overview of biosourced chemicals and materials, mainly biomonomers, biooligomers, and biopolymers; these are produced today in ways that reduce depletion of and dependency on fossil resources and yield environmentally friendlier goods in a leaner energy-consuming society. Realistic productivity is now achievable thanks to recent and particularly effective processes in which engineered microorganisms convert natural non-fossil feedstocks, at industrial scale, into fuels and useful high-value chemicals in good yield. These processes, detailed further in the review, integrate metabolic engineering involving 1) systems biology, 2) synthetic biology, and 3) evolutionary engineering. They enable acceptable production yield and productivity, meet the targeted chemical profiles, minimize the consumption of inputs, reduce the production of by-products, and further diminish overall operating costs. As is generally admitted, the properties of most naturally occurring biopolymers (e.g., starch, poly(lactic acid), PHAs) are often inferior to those of polymers derived from petroleum; blends and composites exhibiting improved properties are now successfully produced, and specific attention is paid to these aspects. Further evidence is then provided to support the important potential and role of products derived from biomass in general. The need to enter the era of Bigger Data, and to grow awareness of the multidimensional role and opportunity of biosourcing, serves as the conclusion and future prospects. Although it provides a large reference database, this review is largely initiatory; rather than mimicking previous classic reviews, it places them in a multiplying, synergistic perspective.
Abstract: Cloud computing allows scalability at lower cost for data analytics in a big data environment. This paradigm considers the dimensioning of resources to process different volumes of data while minimizing the response time of big data applications. This work proposes a performance and availability evaluation of big data environments in the private cloud, through a methodology and stochastic and combinatorial models, considering metrics such as execution time, processor utilization, memory utilization, and availability. The proposed methodology covers objective-setting activities and performance and availability modeling to evaluate the private cloud environment. A performance model based on stochastic Petri nets is adopted to evaluate the big data environment on the private cloud, and reliability block diagram models are adopted to evaluate its availability. Two case studies based on the CloudStack platform and a Hadoop cluster demonstrate the viability of the proposed methodology and models. Case Study 1 evaluated the performance metrics of the Hadoop cluster in the private cloud, considering different service offerings, workloads, and numbers of datasets; sentiment analysis of tweets from users with symptoms of depression was used to generate the analyzed datasets. Case Study 2 evaluated the availability of big data environments in the private cloud.
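As a hedged illustration of the reliability-block-diagram side of the evaluation (the MTBF/MTTR figures and the component breakdown below are invented placeholders, not measurements or the structure from the paper), steady-state availability can be composed from series and parallel blocks as follows:

def availability(mtbf_h, mttr_h):
    # Steady-state availability of a single component from its mean time
    # between failures and mean time to repair (both in hours).
    return mtbf_h / (mtbf_h + mttr_h)

def series(*blocks):
    # A series arrangement is up only if every block is up.
    result = 1.0
    for a in blocks:
        result *= a
    return result

def parallel(*blocks):
    # A redundant (parallel) arrangement fails only if every block fails.
    unavail = 1.0
    for a in blocks:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical private-cloud node: hardware, OS, cloud-platform agent and
# big data service in series; two such nodes in parallel for redundancy.
node = series(availability(8760, 8),
              availability(2880, 1),
              availability(2160, 1),
              availability(1440, 0.5))
print(f"single node:         {node:.5f}")
print(f"two redundant nodes: {parallel(node, node):.5f}")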