Cloud computing has become an essential technology for the management and processing of large datasets, offering scalability, high availability, and fault tolerance. However, optimizing data replication across multiple data centers poses a significant challenge, especially when balancing competing goals such as latency, storage costs, energy consumption, and network efficiency. This study introduces a novel dynamic optimization algorithm, Dynamic Multi-Objective Gannet Optimization (DMGO), designed to enhance data replication efficiency in cloud environments. Unlike traditional static replication systems, DMGO adapts dynamically to variations in network conditions, system demand, and resource availability. The approach uses multi-objective optimization techniques to efficiently balance data access latency, storage efficiency, and operational costs. DMGO continuously evaluates data center performance and adjusts replication strategies in real time to maintain optimal system efficiency. Experimental evaluations conducted in a simulated cloud environment demonstrate that DMGO significantly outperforms conventional static algorithms, achieving faster data access, lower storage overhead, reduced energy consumption, and improved scalability. The proposed methodology offers a robust and adaptable solution for modern cloud systems, ensuring efficient resource consumption while maintaining high performance.
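The abstract does not give DMGO's update rules, but its core evaluation step, scoring candidate data centers on several competing objectives and keeping the non-dominated ones as replication targets, can be sketched generically. All names, objective tuples, and the dominance-based selection below are illustrative assumptions, not the paper's actual gannet-inspired algorithm.

```python
def dominates(a, b):
    """True if candidate a is no worse than b on every objective and
    strictly better on at least one (all objectives are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """candidates: dict name -> (latency_ms, storage_cost, energy_kwh).
    Returns the non-dominated subset, i.e. the replication targets to keep."""
    front = {}
    for name, obj in candidates.items():
        if not any(dominates(other, obj) for other in candidates.values() if other != obj):
            front[name] = obj
    return front
```

For example, with `{"dc1": (10, 5, 3), "dc2": (20, 9, 8), "dc3": (8, 7, 2)}`, `dc2` is dominated by `dc1` on every objective and is dropped, while `dc1` and `dc3` trade off latency against storage cost and both survive.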
Based on normalized six-hourly black body temperature (TBB) data from three geostationary meteorological satellites, the leading modes of the mei-yu cloud system between 1998 and 2008 were extracted by the Empirical Orthogonal Function (EOF) method, and the transition processes from the first typical leading mode to other leading modes were discussed and compared. The analysis shows that, when the southern mode (EOF1) transforms to the northeastern mode (EOF3), in the mid-troposphere a low trough develops and moves southeastward over central and eastern China. The circulation pattern is characterized by two highs and one low in the lower troposphere. A belt of low pressure is sandwiched between the weak high over central and western China and the strong western North Pacific subtropical high (WNPSH). Cold air moves southward along the northerly flow behind the low and meets the warm, moist air between the WNPSH and the forepart of the low trough, which leads to continuous convection. At the same time, the central extent of the WNPSH increases while its ridge extends westward. In addition, transitions from the southern mode to the dual-centers mode and the tropical-low-influenced mode were found to be atypical, so no common features could be concluded. Furthermore, the choice of threshold value can affect the number of samples discussed.
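The EOF extraction referenced above amounts to finding the leading eigenvector of the spatial covariance matrix of the anomaly field. A minimal pure-Python sketch using power iteration is shown below; a production analysis would use an SVD on the full gridded TBB anomalies, and the tiny data layout here (time samples as rows of grid-point anomalies) is only illustrative.

```python
def leading_eof(data, iters=200):
    """data: list of time samples, each a list of grid-point anomalies.
    Returns (eof1, pcs): the leading spatial pattern and its time series."""
    n_t, n_x = len(data), len(data[0])
    # spatial covariance matrix (grid x grid)
    cov = [[sum(data[t][i] * data[t][j] for t in range(n_t)) / n_t
            for j in range(n_x)] for i in range(n_x)]
    # power iteration for the leading eigenvector
    v = [1.0] * n_x
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n_x)) for i in range(n_x)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # principal-component time series: projection of each sample onto EOF1
    pcs = [sum(data[t][i] * v[i] for i in range(n_x)) for t in range(n_t)]
    return v, pcs
```

The sign of an EOF pattern is arbitrary, which is why comparisons of modes (e.g. EOF1 versus EOF3) are made up to sign.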
Numerical simulation of meso-β-scale convective cloud systems associated with a PRE-STORM MCC case has been carried out using a 2-D version of the CSU Regional Atmospheric Modeling System (RAMS) nonhydrostatic model with parameterized microphysics. It is found that the predicted meso-γ-scale convective phenomena are basically unsteady under strong low-level shear, while the meso-β-scale convective system is maintained for 3 hours or more. The meso-β-scale cloud system exhibits the characteristics of a multi-celled convective storm in which the meso-γ-scale convective cells have lifetimes of about 30 min. The pressure perturbation shows a meso-low in the low levels after half an hour. As the cloud system evolves, the meso-low intensifies, extends to the upshear side, and covers the entire domain in the mid-lower levels with peak values of 5-8 hPa. The temperature perturbation shows a warm region in the middle levels throughout the simulation period. The meso-γ-scale warm cores, with peak values of 4-8 °C, are associated with strong convective cells. Cloud-top evaporation produces a stronger cold layer around the cloud-top levels. Simulation of microphysics shows that graupel is primarily concentrated in the strong convective cells, forming the main source of convective rainfall after one hour of simulation time. Aggregates are mainly located in the stratiform region and in decaying convective cells, which produce the stratiform rainfall. Riming of ice crystals is the predominant precipitation formation mechanism in the convection region, whereas aggregation of ice crystals is predominant in the stratiform region, consistent with observations. Sensitivity experiments on ice-phase microphysical processes show that the microphysical structure of the convective cloud system can be simulated better with the diagnosed aggregation collection efficiencies.
Mobile edge users (MEUs) collect data from sensor devices and report to cloud systems, which can facilitate numerous applications in sensor-cloud systems (SCS). However, because there is no effective way to access the ground truth to verify the quality of sensing devices' data or MEUs' reports, malicious sensing devices or MEUs may report false data and damage the platform. It is therefore critical to select sensing devices and MEUs that report truthful data. To tackle this challenge, a novel scheme that uses unmanned aerial vehicles (UAVs) to detect the truthfulness of sensing devices and MEUs (UAV-DT) is proposed to construct a clean data collection platform for SCS. In the UAV-DT scheme, the UAV delivers check codes to sensor devices and requires them to provide routes to a specified destination node. The UAV then flies along the path that enables maximal truth detection and collects the information of the sensing devices forwarding data packets to the cloud during this period. The information collected by the UAV is checked in two ways to verify credibility. The first check looks for abnormalities in the received and sent data packets of the sensing devices and assigns each device a degree of trust; the second compares the data packets submitted by the sensing devices to MEUs with the data packets submitted by the MEUs to the platform, verifying the credibility of the MEUs. Then, based on the verified trust values, an incentive mechanism is proposed to select credible MEUs for data collection, so as to create a clean data collection sensor-cloud network. Simulation results show that the proposed UAV-DT scheme identifies the trustworthiness of sensing devices and MEUs well. As a result, the proportion of clean data collected is greatly improved.
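The second check described above, comparing what a device handed to an MEU against what the MEU actually reported to the platform, can be sketched as a simple consistency ratio. The field names and the trust formula below are assumptions for illustration, not the paper's exact method.

```python
def meu_trust(device_packets, meu_reports):
    """Fraction of packets the device handed over that the MEU actually
    reported to the platform; packets are identified by id strings."""
    sent, reported = set(device_packets), set(meu_reports)
    if not sent:
        return 1.0  # nothing to verify, so no evidence of misbehavior
    consistent = len(sent & reported)
    return consistent / len(sent)

def is_credible(trust, threshold=0.9):
    """Hypothetical selection rule: only MEUs above the threshold collect data."""
    return trust >= threshold
```

An MEU that silently drops or forges packets lowers its ratio and is excluded from data collection, which is the mechanism by which the proportion of clean data rises.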
The intensity of competition, especially global competition, has driven many organizations to search for innovative ways to improve productivity and performance. This trend has led many firms to adopt approaches to implement cloud systems. Cloud systems have distinct characteristics that differentiate them from traditional internet services, owing to a number of significant innovation factors. For example, firms need improved access to high-speed internet as well as access to customer relationship management (CRM) and enterprise resource planning (ERP) capacities. Consequently, the interest of many firms in implementing cloud systems has increased. Although cloud systems are designed to facilitate knowledge transfer, there is currently no method to ensure that transferred knowledge is useful or relevant to a firm. This in turn means that firms need to ensure that the cloud system has the capability to screen knowledge for compliance against known knowledge characteristics. The use of cloud systems could thus deliver innovation knowledge both efficiently and effectively. This paper presents an approach for assessing the successful implementation of cloud systems and discusses the various success factors of cloud systems for global innovation.
This paper analyzes a data mining algorithm implementation and its application in a parallel cloud system based on C++. As the number of cloud computing platform developers grows, and as the number of Internet users supported by cloud platforms grows with it, the volume of system log data grows in proportion. At present, the message-passing model is the one most widely applied in cluster environments: the concurrently executing parts exchange information, coordinate their steps, and control execution by transmitting messages. For data mining applications in C++, the following point must first be recognized: parallel communication and serial communication are the two basic modes of communication. On this basis, this paper proposes a novel perspective on data mining algorithm implementation and its application in a parallel cloud system based on C++. Future research will focus on the code-level implementation.
Based on the National Centers for Environmental Prediction (NCEP) and Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) data and CloudSat products, the seasonal variations of the cloud properties, vertical occurrence frequency, and ice water content of clouds over southeastern China were investigated in this study. In the CloudSat data, a significant alternation between high and low cloud patterns was observed from winter to summer over southeastern China. It was found that the East Asian Summer Monsoon (EASM) circulation and its transport of moisture lead to a conditional instability, which benefits the local upward motion in summer and thereby results in an increased amount of high cloud. The deep convective cloud centers were found to coincide well with the northward march of the EASM, while cirrus lagged slightly behind the convection center and coincided well with the outflow and meridional wind divergence of the EASM. Analysis of the radiative heating rates revealed that both the plentiful summer moisture and higher clouds are effective in destabilizing the atmosphere. Moreover, clouds heat the mid-troposphere, and the cloud radiative heating is balanced by adiabatic cooling through upward motion, which induces meridional wind through the Sverdrup balance. The cloud-heating-forced circulation was observed to coincide well with the EASM circulation, serving as a positive feedback on the EASM circulation.
The spatial and temporal global distribution of deep clouds was analyzed using a four-year dataset (2007-10) based on observations from CloudSat and CALIPSO. Results showed that in the Northern Hemisphere, the number of deep cloud systems (DCS) reached a maximum in summer and a minimum in winter. Seasonal variations in the number of DCS varied zonally in the Southern Hemisphere. DCS occurred most frequently over central Africa, the northern parts of South America and Australia, and Tibet. The mean cloud-top height of deep cloud cores (TDCC) decreased toward high latitudes in all seasons. DCS with the highest TDCC and deepest cores occurred over the east and south Asian monsoon regions, west-central Africa, and northern South America. The width of DCS (WDCS) increased toward high latitudes in all seasons. In general, DCS were more developed in the horizontal than in the vertical direction over high latitudes, and vice versa over lower latitudes. Findings from this study show that different mechanisms are behind the development of DCS at different latitudes. Most DCS at low latitudes are deep convective clouds, which are highly developed in the vertical direction but cover a relatively small area in the horizontal direction; these DCS have the highest TDCC and smallest WDCS. DCS at midlatitudes are more likely to be caused by cyclones, so they have less vertical development than DCS at low latitudes. DCS at high latitudes are mainly generated by large frontal systems, so they have the largest WDCS and the smallest TDCC.
To improve the performance of multi-objective workflow scheduling in cloud systems, a multi-swarm multi-objective optimization algorithm (MSMOOA) is proposed to satisfy multiple conflicting objectives. Inspired by the division of a species into multiple swarms for different objectives and by information sharing among these swarms in nature, each physical machine in the data center is treated as a swarm that employs improved multi-objective particle swarm optimization to find non-dominated solutions for one objective in MSMOOA. The particles in each swarm are divided into two classes that adopt different strategies to evolve cooperatively: one class of particles can communicate with several swarms simultaneously to promote information sharing among swarms, while the other class can only exchange information with particles in the same swarm. Furthermore, to avoid the influence of elastically varying available resources, a manager server is adopted in the cloud data center to collect the available resources for scheduling. The proposed method is evaluated against other related approaches using hybrid and parallel workflow applications. The experimental results highlight the better performance of MSMOOA compared with the other algorithms.
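The two particle classes described above differ only in which "guide" position they see: a communicating particle follows the best position found across several swarms, while a local particle follows only its own swarm's best. A minimal skeleton of that update, using the standard PSO velocity rule, is sketched below; the constants and structure are illustrative and do not reproduce the exact MSMOOA.

```python
import random

def pso_step(pos, vel, pbest, guide, w=0.7, c1=1.4, c2=1.4):
    """One PSO update per dimension. `guide` is either the swarm's own best
    (local particle) or the best over several swarms (communicating particle)."""
    new_vel = [w * v
               + c1 * random.random() * (pb - x)   # pull toward personal best
               + c2 * random.random() * (g - x)    # pull toward the guide
               for x, v, pb, g in zip(pos, vel, pbest, guide)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

Swapping in a cross-swarm guide for some particles and a same-swarm guide for the rest is all that distinguishes the two cooperating classes in this sketch.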
Accurate prediction of server load is important for cloud systems to improve resource utilization, reduce energy consumption, and guarantee quality of service (QoS). This paper analyzes the features of cloud server load and the advantages and disadvantages of typical server load prediction algorithms, integrates the cloud model (CM) and the Markov chain (MC) into a new CM-MC algorithm, and then proposes a new server load prediction algorithm based on CM-MC for cloud systems. The algorithm uses the historical-data sample training method of the cloud model and applies Markov prediction theory to obtain the membership degree vector, from which the weighted sum of the predicted values is computed for the cloud model. Experiments show that the proposed algorithm achieves higher prediction accuracy than other typical server load prediction algorithms, especially when the data exhibits significant volatility. The proposed CM-MC-based server load prediction algorithm is well suited to cloud systems and can help reduce the energy consumption of cloud data centers.
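The Markov-chain half of the CM-MC idea can be sketched simply: discretize the historical load series into states, estimate a transition matrix from the observed state pairs, and predict the next load as the probability-weighted average of the state centers. The cloud-model (membership) half is omitted here, and the state boundaries and centers are illustrative assumptions.

```python
def to_state(load, bounds=(30, 70)):
    """Map a load percentage to a discrete state: 0=low, 1=medium, 2=high."""
    if load < bounds[0]:
        return 0
    if load < bounds[1]:
        return 1
    return 2

def predict_next_load(history, centers=(15.0, 50.0, 85.0)):
    """One-step prediction from the empirical transition matrix."""
    states = [to_state(x) for x in history]
    n = len(centers)
    counts = [[0] * n for _ in range(n)]
    for a, b in zip(states, states[1:]):      # count observed transitions
        counts[a][b] += 1
    row = counts[states[-1]]                   # transitions out of the current state
    total = sum(row)
    if total == 0:                             # unseen state: fall back to last value
        return history[-1]
    probs = [c / total for c in row]
    return sum(p * c for p, c in zip(probs, centers))
```

In the full CM-MC algorithm the hard `to_state` boundaries would be replaced by cloud-model membership degrees, softening the weighted sum.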
Recently, there has been a sudden shift from traditional office applications to collaborative cloud-based office suites such as Microsoft Office 365. Such cloud-based systems allow users to work together at once on the same document stored in a cloud server, enabling effective collaboration. However, security concerns in cloud collaboration remain unsolved. One major concern is the security of data stored on cloud servers, which stems from the fact that a document multiple users are working on together cannot be stored in encrypted form because of the dynamic nature of cloud collaboration. In this paper, we propose a novel mode of operation for AES, DL-ECB, with which we can modify, insert, and delete ciphertext based on changes in the plaintext. We can therefore use encrypted data in collaborative cloud-based platforms. To demonstrate that the DL-ECB mode preserves the confidentiality, integrity, and auditability of data used in collaborative cloud systems against adversaries, we implement and evaluate a prototype of the DL-ECB mode.
Data security is a major cloud computing issue due to the variety of user transactions in the system. The evolution of cryptography and cryptographic analysis are regarded as domains of current research. Deoxyribonucleic acid (DNA) cryptography uses DNA as a sensing platform, which is then manipulated using a variety of molecular methods. Many security mechanisms, including knowledge-based authentication, two-factor authentication, adaptive authentication, multi-factor authentication, and single-password authentication, have been deployed. These cryptographic techniques have been developed to ensure confidentiality, but most of them are based on complex mathematical calculations and equations. In the proposed approach, a novel and unique Hybrid Helix Scuttle-Deoxyribonucleic Acid (HHS-DNA) encryption algorithm is proposed, inspired by DNA cryptography and helix scuttle. The proposed HHS-DNA is a type of multifold binary version of DNA (MF-BDNA). The major role of this paper is to present a multifold HHS-DNA algorithm to encrypt cloud data, assuring more security with less complexity. Experiments show that it reduces encryption time and ciphertext size while improving throughput. Compared with previous techniques, there is a 45% improvement in throughput, a 37% reduction in encryption time, and a 54.67% reduction in ciphertext size. The experimental results and foregoing analysis show that this method offers good robustness, stability, and security.
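The basic building block of DNA cryptography referenced above is the encoding of binary data as nucleotide strands: each 2-bit pair maps to one base. The specific mapping table varies between schemes, and the paper's multifold HHS-DNA folding is not reproduced here; A=00, C=01, G=10, T=11 is one conventional choice, shown as a sketch.

```python
# One conventional 2-bit-to-nucleotide table (schemes differ on the assignment).
_ENC = {"00": "A", "01": "C", "10": "G", "11": "T"}
_DEC = {v: k for k, v in _ENC.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA strand, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(_ENC[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    """Decode a strand produced by to_dna back into bytes."""
    bits = "".join(_DEC[c] for c in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

For example, the byte `0x1b` (binary `00011011`) encodes as `ACGT`. Encryption schemes then permute or fold these strands; the encoding itself provides representation, not secrecy.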
Most user authentication mechanisms of cloud systems depend on the credentials approach, in which a user submits his/her identity through a username and password. Unfortunately, this approach has many security problems because personal data can be stolen or recognized by hackers. This paper aims to present a cloud-based biometric authentication model (CBioAM) for improving and securing cloud services. The research study presents the verification and identification processes of the proposed cloud-based biometric authentication system (CBioAS), where the biometric samples of users are saved in database servers and the authentication process is implemented without loss of the users' information. The paper presents the performance evaluation of the proposed model in terms of three main characteristics: accuracy, sensitivity, and specificity. The research study introduces a novel algorithm called "Bio_Authen_as_a_Service" for implementing and evaluating the proposed model. The proposed system performs the biometric authentication process securely and preserves the privacy of user information. The experimental results were highly promising for securing cloud services using the proposed model, showing a performance average of 93.94%, an accuracy average of 96.15%, a sensitivity average of 87.69%, and a specificity average of 97.99%.
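The three evaluation metrics reported above have standard definitions in terms of confusion-matrix counts from a verification experiment (genuine users accepted or rejected, impostors rejected or accepted). The sketch below computes them the standard way; the counts in the usage example are made up for illustration.

```python
def metrics(tp, tn, fp, fn):
    """Standard definitions:
    tp = genuine users correctly accepted, tn = impostors correctly rejected,
    fp = impostors wrongly accepted,       fn = genuine users wrongly rejected."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: genuine users accepted
    specificity = tn / (tn + fp)   # true negative rate: impostors rejected
    return accuracy, sensitivity, specificity
```

For instance, with 90 true accepts, 95 true rejects, 5 false accepts, and 10 false rejects, the metrics come out to 92.5% accuracy, 90% sensitivity, and 95% specificity.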
Numerous methods have been analyzed in detail to improve task scheduling and data security performance in the cloud environment. These methods schedule according to factors such as makespan, waiting time, cost, deadline, and popularity. However, they fall short of the highest achievable scheduling performance. Regarding data security, existing methods use various encryption schemes but introduce significant service interruption. This article sketches a practical Real-time Application-Centric TRS (Throughput-Resource utilization-Success) Scheduling with Data Security (RATRSDS) model that addresses all these issues in task scheduling and data security. The method identifies the required resources and their claim times from incoming service requests. For the list of resources offered as services, the method computes throughput support (Thrs) from the number of statements executed relative to the complete statements of the service; resource utilization support (Ruts) from the idle time in any duty cycle and the total servicing time; and success support (Sus) from the number of completions relative to the number of allocations. Using these support measures, the method estimates a TRS (Throughput-Resource utilization-Success) score for each resource, and services are ranked and scheduled according to this score. On the other side, based on the requirements of service requests, the method computes requirement support (RS), according to which a service is selected and allocated. Route security is enforced by choosing the route according to a Route Support Measure (RSM). Finally, data security is implemented with a service-based encryption technique. The RATRSDS scheme achieves higher performance in both data security and scheduling.
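The abstract names the three supports and their inputs but not the exact formulas, so the ratios and the equal-weight combination in the sketch below are assumptions made for illustration only.

```python
def trs_score(stmts_executed, stmts_total, idle_time, service_time,
              completions, allocations, weights=(1/3, 1/3, 1/3)):
    """Assumed forms of the three supports combined into one TRS score."""
    thrs = stmts_executed / stmts_total        # throughput support
    ruts = 1 - idle_time / service_time        # resource-utilization support
    sus = completions / allocations            # success support
    w1, w2, w3 = weights
    return w1 * thrs + w2 * ruts + w3 * sus

def rank_services(services):
    """services: dict name -> kwargs for trs_score; returns names best-first,
    i.e. the ranking step that drives scheduling in the model above."""
    return sorted(services, key=lambda s: trs_score(**services[s]), reverse=True)
```

A service that executes 90 of 100 statements, idles 10 of 100 time units, and completes 8 of 10 allocations would score (0.9 + 0.9 + 0.8) / 3 under these assumptions and outrank a service with uniformly weaker supports.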
Pipeline defect detection systems collect videos from the cameras of pipeline robots; however, these videos are usually analyzed offline or by humans to detect defects that pose potential security threats. Existing systems tend to reach their limits in terms of data access from anywhere, access security, and video processing in the cloud. A pipeline defect detection cloud system for automatic pipeline inspection therefore needs to be studied. In this paper, we deploy the framework of a cloud-based pipeline defect detection system, including a user management module, a pipeline robot control module, a system service module, and a defect detection module. In the system, we use a role encryption scheme for video collection, data uploading, and access security, and propose a hybrid information method for defect detection. The experimental results show that our approach yields a scalable and efficient defect detection cloud system.
The reliability and availability of cloud systems have become major concerns of service providers, brokers, and end-users. Therefore, studying fault-tolerance mechanisms in cloud computing attracts intense attention in industry and academia. Task-scheduling mechanisms can improve the fault-tolerance level of cloud systems. A task-scheduling mechanism distributes tasks to a group of instances to be executed. Much work has been undertaken in this direction to improve the overall outcome of cloud computing, such as improving service quality and reducing power consumption. However, little work on task scheduling has studied the problem of lost tasks from the broker's perspective. Task loss can happen due to virtual machine failures, server crashes, connection interruption, etc. The broker-based concept means that the backup task can be allocated by the broker on the same cloud service provider (CSP) or a different CSP to reduce costs, for example. This paper proposes a novel fault-tolerant mechanism that employs the primary-backup (PB) model of task scheduling to address this issue. The proposed mechanism minimizes the impact of failure events by reducing the number of lost tasks. The mechanism is further improved to shorten the makespan of submitted tasks in cloud systems. The experiments demonstrated that the proposed mechanism decreased the number of lost tasks by about 13%-15% compared with other mechanisms in the literature.
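The primary-backup (PB) model described above can be sketched as follows: every task is assigned a primary instance and a backup on a different provider, and the backup runs only when the primary's execution is reported lost. The round-robin assignment, data structures, and failure signal below are hypothetical simplifications of the paper's mechanism.

```python
def schedule_pb(tasks, providers):
    """Assign each task a (primary, backup) pair round-robin, with the backup
    always on a different CSP than the primary (requires >= 2 providers)."""
    plan = {}
    for i, task in enumerate(tasks):
        primary = providers[i % len(providers)]
        backup = providers[(i + 1) % len(providers)]
        plan[task] = (primary, backup)
    return plan

def execute(plan, failed_primaries):
    """Return which provider actually ran each task: the backup absorbs any
    task whose primary provider failed, so no task is lost."""
    return {t: (b if p in failed_primaries else p) for t, (p, b) in plan.items()}
```

In this toy model a single provider failure loses zero tasks, which is the effect the paper quantifies as a 13%-15% reduction in lost tasks relative to non-PB mechanisms.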
Cloud computing has become increasingly popular due to its capacity to perform computations without relying on local physical infrastructure, thereby revolutionizing computing processes. However, the rising energy consumption of cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby providing a comprehensive dataset for analysis. The data generated by the IoT is seamlessly integrated into the hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is trained using 80% of the available dataset and subsequently tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a mean squared error (MSE) of 5.33% and a mean absolute error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an impressive accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively, for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical role in driving these advancements by providing real-time insights into the operational aspects of data centers.
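The evaluation protocol quoted above, an 80/20 chronological split with MSE and MAE error metrics, is standard and can be sketched directly. The NGT model itself is not reproduced; any predictor would slot into this harness.

```python
def train_test_split(series, train_frac=0.8):
    """Chronological 80/20 split, as described in the evaluation above."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

def mse(actual, predicted):
    """Mean squared error over paired observations."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    """Mean absolute error over paired observations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
```

A time series is split chronologically (not shuffled) so the test portion simulates forecasting genuinely unseen future power prices.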
Using data from an airborne particle measurement system, weather radar, and a Ka-band millimeter-wave cloud radar, the physical structure characteristics of a typical stable stratiform cloud in Hebei Province on February 27, 2018 were analyzed. The results showed that the detected cloud system was a precipitating stratiform cloud in the later stage of development. The cloud layer developed stably, and its vertical structure was unevenly distributed. The concentration of small cloud particles in high-level clouds was low, fluctuated greatly in space, and presented a discontinuous distribution. The concentration of large cloud particles and precipitation particles was high, which was conducive to the growth of cloud droplets and the aggregation of ice crystals. The concentration of small cloud particles and the supercooled water content were high in the middle and low-level clouds. The precipitation cloud system had a significant layered structure, conforming to the "catalysis-supply" mechanism. From the upper layer to the lower layer, the cloud particle spectrum mainly took a single-peak or double-peak form, showing a generally monotonic decreasing trend. The spectral distribution of small cloud particles in the cloud was discontinuous, the high-value areas of the spectral concentration of large cloud particles and precipitation particles were concentrated in the upper part of the cloud layer, and the particle spectrum there was significantly broadened. There was an inversion zone at the bottom of the cloud layer, which was conducive to the continuous increase of particle concentration and the formation of large supercooled water droplets.
In this paper, we implement a content authoring and cloud system for cloud-based smart cloud learning. With the advent of smartphones and mobile devices such as tablets, the educational paradigm is changing. E-learning, which began with computer-aided instruction, has evolved through a variety of ICT-based forms of education; in recent years, the combination of smart learning, social learning, and cloud-based smart devices in e-learning has given rise to the term "smart cloud learning." In smart cloud learning, knowledge content can be uploaded by anyone, anywhere, at any time, and shared with other users. High-quality knowledge content is continuously available through a variety of smart media, without limitations of terminal, location, or time, and an open educational content platform is built under conditions that go beyond conventional e-learning. In this paper, we develop applications and websites that can provide authored content for smart cloud learning. In addition, we build a cloud for content management and a website through which content can be shared with other users. In future work, we intend to study ways to provide customized services through learner analysis based on big data technology.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 40975023), the Special Promotion Program for Meteorology (Grant Nos. GYHY201406011 and GYHY201106044), and the National High Technology Research and Development Project of China (Grant No. 2012AA120903).
Abstract: Based on normalized six-hourly black body temperature (TBB) data from three geostationary meteorological satellites, the leading modes of the mei-yu cloud system between 1998 and 2008 were extracted by the Empirical Orthogonal Function (EOF) method, and the transition processes from the first typical leading mode to the other leading modes were discussed and compared. The analysis shows that, when the southern mode (EOF1) transforms to the northeastern mode (EOF3), a low trough develops in the mid-troposphere and moves southeastward over central and eastern China. The circulation pattern is characterized by two highs and one low in the lower troposphere. A belt of low pressure is sandwiched between the weak high over central and western China and the strong western North Pacific subtropical high (WNPSH). Cold air moves southward along the northerly flow behind the low and meets the warm, moist air between the WNPSH and the forepart of the low trough, which leads to continuous convection. At the same time, the central extent of the WNPSH increases while its ridge extends westward. In addition, transitions from the southern mode to the dual-center mode and the tropical-low-influenced mode were found to be atypical, so no common features could be identified. Furthermore, the choice of threshold value can affect the number of samples discussed.
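The core of the EOF method is finding the leading eigenvector of the covariance matrix of the anomaly field. As a minimal sketch, a tiny two-point "field" with synthetic time series stands in for gridded TBB data; the power-iteration approach and the sample numbers are illustrative, not the paper's procedure:

```python
# Leading EOF of a 2-point field via power iteration on its 2x2 covariance matrix.
def covariance(series_a, series_b):
    n = len(series_a)
    ma, mb = sum(series_a) / n, sum(series_b) / n
    return sum((a - ma) * (b - mb) for a, b in zip(series_a, series_b)) / n

def leading_eof(x, y, iters=100):
    """Power iteration converges to the eigenvector with the largest eigenvalue,
    i.e. the spatial pattern explaining the most variance (EOF1)."""
    c11, c12, c22 = covariance(x, x), covariance(x, y), covariance(y, y)
    v = (1.0, 0.0)
    for _ in range(iters):
        w = (c11 * v[0] + c12 * v[1], c12 * v[0] + c22 * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Two grid points varying in phase -> EOF1 loads equally on both (~0.707, ~0.707).
x = [1.0, -1.0, 2.0, -2.0]
y = [1.1, -0.9, 2.1, -1.9]
eof1 = leading_eof(x, y)
print(eof1)
```

In practice EOFs of satellite fields are computed with an SVD over thousands of grid points, but the eigen-decomposition idea is the same.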
Abstract: Numerical simulation of meso-β-scale convective cloud systems associated with a PRE-STORM MCC case has been carried out using a 2-D version of the CSU Regional Atmospheric Modeling System (RAMS) nonhydrostatic model with parameterized microphysics. It is found that the predicted meso-γ-scale convective phenomena are basically unsteady under the situation of strong shear at low levels, while the meso-β-scale convective system is maintained for 3 hours or more. The meso-β-scale cloud system exhibits characteristics of a multi-celled convective storm in which the meso-γ-scale convective cells have lifetimes of about 30 min. The pressure perturbation develops a meso-low in the low levels after half an hour. As the cloud system evolves, the meso-low intensifies, extends to the upshear side, and covers the entire domain in the mid-to-lower levels with peak values of 5-8 hPa. The temperature perturbation shows a warm region in the middle levels throughout the entire simulation period. The meso-γ-scale warm cores, with peak values of 4-8 °C, are associated with strong convective cells. Evaporation at cloud top causes a stronger cold layer around the cloud-top levels. Simulation of the microphysics shows that graupel is primarily concentrated in the strong convective cells, forming the main source of convective rainfall after one hour of simulation time. Aggregates are mainly located in the stratiform region and in decaying convective cells, which produce the stratiform rainfall. Riming of ice crystals is the predominant precipitation-formation mechanism in the convective region, whereas aggregation of ice crystals is predominant in the stratiform region, which is consistent with observations. Sensitivity experiments of ice-phase microphysical processes show that the microphysical structures of the convective cloud system can be simulated better with the diagnosed aggregation collection efficiencies.
Funding: National Natural Science Foundation of China under Grant No. 62032020, Hunan Science and Technology Planning Project under Grant No. 2019RS3019, and the National Key Research and Development Program of China under Grant No. 2018YFB1003702.
Abstract: Mobile edge users (MEUs) collect data from sensor devices and report to cloud systems, which can facilitate numerous applications in sensor-cloud systems (SCS). However, because there is no effective way to access the ground truth to verify the quality of sensing devices' data or MEUs' reports, malicious sensing devices or MEUs may report false data and cause damage to the platform. It is therefore critical to select sensing devices and MEUs that report truthful data. To tackle this challenge, a novel scheme that uses unmanned aerial vehicles (UAVs) to detect the truthfulness of sensing devices and MEUs (UAV-DT) is proposed to construct a clean data collection platform for SCS. In the UAV-DT scheme, the UAV delivers check codes to sensor devices and requires them to provide routes to a specified destination node. Then, the UAV flies along the path that enables maximal truth detection and collects the information of the sensing devices forwarding data packets to the cloud during this period. The information collected by the UAV is checked in two aspects to verify the credibility of the sensor devices. The first is to check whether there is an abnormality in the received and sent data packets of the sensing devices, from which a degree of trust is evaluated; the second is to compare the data packets submitted by the sensing devices to MEUs with the data packets submitted by the MEUs to the platform to verify the credibility of the MEUs. Then, based on the verified trust values, an incentive mechanism is proposed to select credible MEUs for data collection, so as to create a clean data collection sensor-cloud network. The simulation results show that the proposed UAV-DT scheme can identify the trust of sensing devices and MEUs well. As a result, the proportion of clean data collected is greatly improved.
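The cross-check at the heart of UAV-DT, comparing what a sensing device claims to have forwarded against what the MEU reported to the platform, can be sketched as a simple consistency test. The field names and zero tolerance below are illustrative assumptions, not the paper's protocol:

```python
# Toy consistency check between a device's claimed forwarded packets and
# the MEU's report to the platform; a mismatch would lower the trust value.
def trust_check(device_report, meu_report, tol=0):
    """True if the two packet counts agree within the tolerance."""
    sent = device_report["packets_sent"]
    received = meu_report["packets_received"]
    return abs(sent - received) <= tol

device = {"packets_sent": 120}
meu_honest = {"packets_received": 120}
meu_faulty = {"packets_received": 95}   # dropped or fabricated packets
print(trust_check(device, meu_honest), trust_check(device, meu_faulty))
```

A full scheme would accumulate such checks into a continuous trust score and feed it to the incentive mechanism rather than making a single binary decision.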
Abstract: The intensity of competition, especially global competition, has driven many organizations to search for innovative ways to improve productivity and performance. This trend has led many firms to adopt approaches to implement cloud systems. Cloud systems have distinct characteristics that differentiate them from traditional internet services, owing to a number of significant innovation factors. For example, firms need improved access to high-speed internet as well as access to customer relationship management (CRM) and enterprise resource planning (ERP) capacities. This means that the interest of many firms in the implementation of cloud systems has increased. Although cloud systems are designed to facilitate knowledge transfer, there is currently no method to ensure that transferred knowledge is useful or relevant to a firm. This in turn means that firms need to ensure that the cloud system has the capability to screen knowledge for compliance against known knowledge characteristics. The use of cloud systems could result in efficient and effective delivery of innovation knowledge. This paper presents an approach for assessing the successful implementation of cloud systems, and discusses the various success factors of cloud systems for global innovation.
Abstract: This paper analyzes a data mining algorithm implementation and its application in a parallel cloud system based on C++. As the number of cloud computing platform developers grows, and the number of internet users supported by cloud computing platforms increases, the volume of system log data grows in proportion. At present, the message-passing model is widely used in cluster environments. In the message-passing model, concurrently executing parts exchange information, and coordinate and control execution, by passing messages. For data mining applications in C++, the following features should first be noted: parallel communication and serial communication are the two basic modes of general communication. On this basis, this paper proposes a novel perspective on the data mining algorithm implementation and its application in a parallel cloud system based on C++. Future research will focus on the code-level implementation.
Funding: Supported by the National Science Fund for Distinguished Young Scholars (41125017) and the National Natural Science Funds of China (41405103).
Abstract: Based on the National Centers for Environmental Prediction (NCEP) and Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) data and CloudSat products, the seasonal variations of the cloud properties, vertical occurrence frequency, and ice water content of clouds over southeastern China were investigated in this study. In the CloudSat data, a significant alternation between high and low cloud patterns was observed from winter to summer over southeastern China. It was found that the East Asian Summer Monsoon (EASM) circulation and its transport of moisture lead to conditional instability, which benefits local upward motion in summer and thereby results in an increased amount of high cloud. The deep convective cloud centers were found to coincide well with the northward march of the EASM, while cirrus lagged slightly behind the convection center and coincided well with the outflow and meridional wind divergence of the EASM. Analysis of the radiative heating rates revealed that both the plentiful summer moisture and higher clouds are effective in destabilizing the atmosphere. Moreover, clouds heat the mid-troposphere, and the cloud radiative heating is balanced by adiabatic cooling through upward motion, which drives meridional wind through the Sverdrup balance. The cloud-heating-forced circulation was observed to coincide well with the EASM circulation, exerting a positive effect on it.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 41375080), the National Program on Key Basic Research Project of China (Grant Nos. 2011CB403405 and 2013CB955804), and the US Department of Energy Atmospheric System Research Program (DESC0007171).
Abstract: The spatial and temporal global distribution of deep clouds was analyzed using a four-year dataset (2007-10) based on observations from CloudSat and CALIPSO. Results showed that in the Northern Hemisphere, the number of deep cloud systems (DCS) reached a maximum in summer and a minimum in winter. Seasonal variations in the number of DCS varied zonally in the Southern Hemisphere. DCS occurred most frequently over central Africa, the northern parts of South America and Australia, and Tibet. The mean cloud-top height of deep cloud cores (TDCC) decreased toward high latitudes in all seasons. DCS with the highest TDCC and deepest cores occurred over the East and South Asian monsoon regions, west-central Africa, and northern South America. The width of DCS (WDCS) increased toward high latitudes in all seasons. In general, DCS were more developed in the horizontal than in the vertical direction over high latitudes, and vice versa over lower latitudes. Findings from this study show that different mechanisms are behind the development of DCS at different latitudes. Most DCS at low latitudes are deep convective clouds, which are highly developed in the vertical direction but cover a relatively small area in the horizontal direction; these DCS have the highest TDCC and smallest WDCS. DCS at midlatitudes are more likely to be caused by cyclones, so they have less vertical development than DCS at low latitudes. DCS at high latitudes are mainly generated by large frontal systems, so they have the largest WDCS and the smallest TDCC.
Funding: Project (61473078) supported by the National Natural Science Foundation of China; Project (2015-2019) supported by the Program for Changjiang Scholars from the Ministry of Education, China; Project (16510711100) supported by the International Collaborative Project of the Shanghai Committee of Science and Technology, China; Project (KJ2017A418) supported by Anhui University Science Research, China.
Abstract: In order to improve the performance of multi-objective workflow scheduling in cloud systems, a multi-swarm multi-objective optimization algorithm (MSMOOA) is proposed to satisfy multiple conflicting objectives. Inspired by the division of a species into multiple swarms for different objectives and by information sharing among these swarms in nature, each physical machine in the data center is considered a swarm and employs improved multi-objective particle swarm optimization to find non-dominated solutions for one objective in MSMOOA. The particles in each swarm are divided into two classes that adopt different strategies to evolve cooperatively. One class of particles can communicate with several swarms simultaneously to promote information sharing among swarms, while the other class can only exchange information with particles located in the same swarm. Furthermore, in order to avoid the influence of elastically varying available resources, a manager server is adopted in the cloud data center to collect the available resources for scheduling. The proposed method is evaluated against other related approaches using hybrid and parallel workflow applications. The experimental results highlight the better performance of MSMOOA compared with the other algorithms.
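The building block each swarm relies on is the standard particle swarm update: a particle's velocity is pulled toward its personal best and the swarm's global best. The sketch below shows one such step with the usual textbook coefficients, which are assumptions rather than the paper's tuned values:

```python
import random

# One standard PSO velocity/position update step (textbook form):
# v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x' = x + v'
def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(0)):
    new_vel = [w * v
               + c1 * rng.random() * (pb - x)
               + c2 * rng.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

# A 2-D particle at the origin is attracted toward its bests.
pos, vel = [0.0, 0.0], [0.1, -0.1]
pbest, gbest = [1.0, 1.0], [2.0, 2.0]
pos, vel = pso_step(pos, vel, pbest, gbest)
print(pos)
```

MSMOOA layers multi-swarm cooperation and Pareto dominance on top of this update; the step itself is unchanged.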
Funding: Supported by the National Natural Science Foundation of China (61472192, 61772286), the National Key Research and Development Program of China (2018YFB1003700), the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776), and the "333" Project of Jiangsu Province (BRA2017228, BRA2017401).
Abstract: Accurate prediction of server load is important for cloud systems to improve resource utilization, reduce energy consumption, and guarantee quality of service (QoS). This paper analyzes the features of cloud server load and the advantages and disadvantages of typical server load prediction algorithms, integrates the cloud model (CM) and the Markov chain (MC) into a new CM-MC algorithm, and then proposes a new server load prediction algorithm based on CM-MC for cloud systems. The algorithm utilizes the historical-data sample training method of the cloud model and utilizes Markov prediction theory to obtain the membership degree vector, based on which the weighted sum of the predicted values of the cloud model is computed. The experiments show that the proposed prediction algorithm has higher prediction accuracy than other typical server load prediction algorithms, especially when the data show significant volatility. The proposed server load prediction algorithm based on CM-MC is suitable for cloud systems and can help reduce the energy consumption of cloud data centers.
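The Markov-chain half of such a predictor can be sketched in a few lines: discretize the load history into states, estimate a transition matrix from observed state pairs, and predict the most likely next state. The state names, thresholds, and sample history are illustrative assumptions; the paper's CM-MC additionally weights predictions with cloud-model membership degrees:

```python
# Markov-chain next-state load prediction over three discrete load states.
STATES = ["low", "mid", "high"]

def discretize(load, lo=0.4, hi=0.7):
    """Map a normalized load in [0, 1] to a state index."""
    return 0 if load < lo else (1 if load < hi else 2)

def transition_matrix(history):
    """Estimate P[i][j] = P(next state j | current state i) from counts."""
    counts = [[0] * 3 for _ in range(3)]
    states = [discretize(x) for x in history]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        total = sum(row)
        probs.append([c / total if total else 0.0 for c in row])
    return probs

def predict_next(history):
    """Most likely next state given the last observed state."""
    p = transition_matrix(history)
    row = p[discretize(history[-1])]
    return STATES[row.index(max(row))]

history = [0.2, 0.3, 0.5, 0.8, 0.75, 0.9, 0.5, 0.3, 0.45, 0.8]
print(predict_next(history))
```

A production predictor would use many more states and a longer history, but the estimate-then-propagate structure is the same.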
Funding: This work was supported in part by the Mid-Career Researcher and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) Future Planning under Grant NRF2020R1A2C2014336 and Grant NRF2021R1F1A105611511.
Abstract: Recently, there has been a sudden shift from traditional office applications to collaborative cloud-based office suites such as Microsoft Office 365. Such cloud-based systems allow users to work together at once on the same document stored in a cloud server, by which users can effectively collaborate with each other. However, there are unsolved security concerns in cloud collaboration. One of the major concerns is the security of data stored in cloud servers, which stems from the fact that data on which multiple users are working together cannot be stored in encrypted form because of the dynamic characteristics of cloud collaboration. In this paper, we propose a novel mode of operation for AES, DL-ECB, by which we can modify, insert, and delete ciphertext based on changes in the plaintext. Therefore, we can use encrypted data in collaborative cloud-based platforms. To demonstrate that the DL-ECB mode can preserve the confidentiality, integrity, and auditability of data used in collaborative cloud systems against adversaries, we implement and evaluate a prototype of the DL-ECB mode.
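The reason an ECB-style mode suits collaborative editing is that each plaintext block encrypts independently, so editing one block re-encrypts only that block. The sketch below demonstrates this property with a toy XOR cipher standing in for AES purely for illustration; DL-ECB itself is the paper's own design and is not reproduced here:

```python
# Block-independent encryption: only the edited block's ciphertext changes.
BLOCK = 4  # toy block size in bytes (AES uses 16)

def enc_block(block: bytes, key: bytes) -> bytes:
    """Toy per-block 'cipher': XOR with the key (NOT secure, demo only)."""
    return bytes(b ^ k for b, k in zip(block, key))

def encrypt(data: bytes, key: bytes):
    """Encrypt each block independently, ECB-style."""
    return [enc_block(data[i:i + BLOCK], key)
            for i in range(0, len(data), BLOCK)]

key = b"\x10\x20\x30\x40"
ct1 = encrypt(b"aaaabbbbcccc", key)
ct2 = encrypt(b"aaaaZZZZcccc", key)   # a collaborator edits only the middle block
changed = [i for i, (x, y) in enumerate(zip(ct1, ct2)) if x != y]
print(changed)  # only block 1 differs
```

In a chained mode such as CBC, the same edit would force re-encryption of every subsequent block, which is exactly what makes chained modes awkward for concurrent editing.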
Abstract: Data security is a major cloud computing issue due to the variety of user transactions in the system. Cryptography and cryptanalysis are active domains of current research. Deoxyribonucleic acid (DNA) cryptography makes use of DNA as a sensing platform, which is then manipulated using a variety of molecular methods. Many security mechanisms, including knowledge-based authentication, two-factor authentication, adaptive authentication, multi-factor authentication, and single-password authentication, have been deployed. These cryptographic techniques have been developed to ensure confidentiality, but most of them are based on complex mathematical calculations and equations. In the proposed approach, a novel Hybrid Helix Scuttle-Deoxyribonucleic Acid (HHS-DNA) encryption algorithm is proposed, inspired by DNA cryptography and the helix scuttle. The proposed HHS-DNA is a type of multifold binary version of DNA (MF-BDNA). The major contribution of this paper is a multifold HHS-DNA algorithm to encrypt cloud data, assuring more security with less complexity. The experiments carried out show that it reduces the encryption time and ciphertext size and improves throughput. Compared with previous techniques, there is a 45% improvement in throughput, a 37% faster encryption time, and a 54.67% reduction in ciphertext size. The experimental results and the foregoing analysis show that this method offers good robustness, stability, and security.
Funding: The authors received funding for this study from King Khalid University, Grant Number (GRP-35–40/2019).
Abstract: Most user authentication mechanisms of cloud systems depend on the credentials approach, in which a user submits his/her identity through a username and password. Unfortunately, this approach has many security problems because personal data can be stolen or recognized by hackers. This paper presents a cloud-based biometric authentication model (CBioAM) for improving and securing cloud services. The research study presents the verification and identification processes of the proposed cloud-based biometric authentication system (CBioAS), where the biometric samples of users are saved in database servers and the authentication process is implemented without loss of the users' information. The paper presents the performance evaluation of the proposed model in terms of three main characteristics: accuracy, sensitivity, and specificity. The research study introduces a novel algorithm called "Bio_Authen_as_a_Service" for implementing and evaluating the proposed model. The proposed system performs the biometric authentication process securely and preserves the privacy of user information. The experimental results were highly promising for securing cloud services using the proposed model, showing an average performance of 93.94%, an average accuracy of 96.15%, an average sensitivity of 87.69%, and an average specificity of 97.99%.
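The three reported characteristics come straight from a confusion matrix, so it is worth pinning down the formulas. This is a generic sketch with made-up counts, not the paper's Bio_Authen_as_a_Service algorithm or data:

```python
# Standard classification metrics from confusion-matrix counts.
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: genuine users accepted
    specificity = tn / (tn + fp)   # true negative rate: impostors rejected
    return accuracy, sensitivity, specificity

# Hypothetical counts: 90 genuine accepts, 95 impostor rejects,
# 5 impostors wrongly accepted, 10 genuine users wrongly rejected.
acc, sen, spe = metrics(tp=90, tn=95, fp=5, fn=10)
print(round(acc, 3), round(sen, 3), round(spe, 3))
```

For an authentication system, sensitivity and specificity trade off against each other via the decision threshold: tightening the match threshold raises specificity (fewer impostors admitted) at the cost of sensitivity.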
Abstract: Numerous methods have been analyzed in detail to improve task scheduling and data security performance in the cloud environment. These methods schedule according to factors such as makespan, waiting time, cost, deadline, and popularity. However, they are inadequate for achieving higher scheduling performance. Regarding data security, existing methods use various encryption schemes but introduce significant service interruption. This article sketches a practical Real-time Application Centric TRS (Throughput-Resource utilization-Success) Scheduling with Data Security (RATRSDS) model that considers all these issues in task scheduling and data security. On receiving service requests, the method identifies the required resources and their claim times. Further, for the list of resources as services, the method computes throughput support (Thrs) according to the number of statements executed relative to the complete statements of the service. Similarly, the method computes resource utilization support (Ruts) according to the idle time in any duty cycle and the total servicing time, and computes success support (Sus) according to the number of completions relative to the number of allocations. Using these support measures, the method estimates the TRS score for the different resources. Services are ranked and scheduled according to their TRS scores. On the other side, based on the requirements of service requests, the method computes requirement support (RS), according to which a service is selected and allocated. Similarly, route security is enforced by choosing the route according to the Route Support Measure (RSM). Finally, data security is implemented with a service-based encryption technique. The RATRSDS scheme achieves higher performance in both data security and scheduling.
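The TRS ranking idea can be sketched as three support ratios combined into one score. The ratio definitions follow the abstract's wording, but the equal weighting, service names, and numbers are illustrative assumptions:

```python
# TRS-style scoring: throughput, resource-utilization, and success supports.
def thrs(executed_stmts, total_stmts):
    return executed_stmts / total_stmts          # throughput support

def ruts(idle_time, total_time):
    return 1 - idle_time / total_time            # less idle -> better utilization

def sus(completions, allocations):
    return completions / allocations             # success support

def trs_score(svc):
    """Equal-weight combination of the three supports (weighting assumed)."""
    return (thrs(*svc["thr"]) + ruts(*svc["rut"]) + sus(*svc["suc"])) / 3

services = {
    "svc-a": {"thr": (80, 100), "rut": (10, 100), "suc": (45, 50)},
    "svc-b": {"thr": (95, 100), "rut": (40, 100), "suc": (30, 50)},
}
ranked = sorted(services, key=lambda s: trs_score(services[s]), reverse=True)
print(ranked)  # svc-a wins: strong utilization and success offset lower throughput
```

The scheduler would then allocate incoming requests to services in this ranked order, subject to the requirement-support check described in the abstract.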
Funding: The work was supported in part by the Fundamental Research Funds for the Central Universities (2016QJ04), the Yue Qi Young Scholar Project of CUMTB, the State Key Laboratory of Coal Resources and Safe Mining (SKLCRSM16KFD04, SKLCRSM16KFD03), the Natural Science Foundation of China (61601466), the Natural Science Foundation of Beijing, China (8162035), the National Key R&D Program of China (2018YFC0807801), and the National Training Program of Innovation and Entrepreneurship for Undergraduates (C201804970).
Abstract: Pipeline defect detection systems collect videos from the cameras of pipeline robots; however, these videos have usually been analyzed by offline systems or by humans to detect defects posing potential security threats. Existing systems tend to reach their limits in terms of data access anywhere, access security, and video processing on the cloud. There is a need to study a cloud-based pipeline defect detection system for automatic pipeline inspection. In this paper, we deploy the framework of a cloud-based pipeline defect detection system, including a user management module, a pipeline robot control module, a system service module, and a defect detection module. In the system, we use a role encryption scheme for video collection, data uploading, and access security, and propose a hybrid information method for defect detection. The experimental results show that our approach yields a scalable and efficient defect detection cloud system.
Funding: Supported by the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University under research Project No. 2018/01/9371.
Abstract: The reliability and availability of cloud systems have become major concerns of service providers, brokers, and end-users. Therefore, studying fault-tolerance mechanisms in cloud computing attracts intense attention in industry and academia. Task-scheduling mechanisms can improve the fault-tolerance level of cloud systems. A task-scheduling mechanism distributes tasks to a group of instances to be executed. Much work has been undertaken in this direction to improve the overall outcome of cloud computing, such as improving service quality and reducing power consumption. However, little work on task scheduling has studied the problem of lost tasks from the broker's perspective. Task loss can happen due to virtual machine failures, server crashes, connection interruption, etc. The broker-based concept means that the backup task can be allocated by the broker on the same cloud service provider (CSP) or on a different CSP to reduce costs, for example. This paper proposes a novel fault-tolerant mechanism that employs the primary-backup (PB) model of task scheduling to address this issue. The proposed mechanism minimizes the impact of failure events by reducing the number of lost tasks. The mechanism is further improved to shorten the makespan of submitted tasks in cloud systems. The experiments demonstrated that the proposed mechanism decreased the number of lost tasks by about 13%-15% compared with other mechanisms in the literature.
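The primary-backup idea can be sketched as a toy allocation in which each task gets a primary instance and a backup placed on a different provider, so one provider failure loses no task. The round-robin placement, provider names, and survival rule are illustrative assumptions, not the paper's mechanism:

```python
# Toy primary-backup (PB) allocation across cloud service providers.
def allocate_pb(tasks, providers):
    """Assign each task a primary and a backup on a different provider."""
    plan = {}
    for i, task in enumerate(tasks):
        primary = providers[i % len(providers)]
        backup = providers[(i + 1) % len(providers)]
        plan[task] = (primary, backup)
    return plan

def surviving_tasks(plan, failed_provider):
    """Tasks still runnable after a single provider fails."""
    return [t for t, (p, b) in plan.items()
            if p != failed_provider or b != failed_provider]

plan = allocate_pb(["t1", "t2", "t3"], ["csp-a", "csp-b"])
print(surviving_tasks(plan, "csp-a"))  # every task survives the failure
```

A real broker would also weigh cost and makespan when choosing the backup's CSP, which is the trade-off the abstract's improved variant targets.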
Funding: The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through Project Number (PSAU/2023/01/27268).
Abstract: Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby providing a comprehensive dataset for analysis. The data generated by the IoT are seamlessly integrated into the Hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by the IoT sensors. The model is trained using 80% of the available dataset and tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an impressive accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively, for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical role in driving these advancements by providing real-time data insights into the operational aspects of data centers.
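The two error metrics used to evaluate the forecasts are standard, and their formulas are worth stating precisely. The numbers below are made up for illustration, not the paper's data:

```python
# Mean Squared Error and Mean Absolute Error over paired series.
def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual    = [10.0, 12.0, 11.0, 13.0]   # hypothetical observed power prices
predicted = [10.5, 11.5, 11.5, 12.0]   # hypothetical model forecasts
print(mse(actual, predicted), mae(actual, predicted))
```

MSE penalizes large misses quadratically while MAE weighs all misses linearly, which is why the two are usually reported together, as in the abstract above.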
Funding: Supported by National Key R&D Plan Projects (2018YFC1507900) and the Hebei Province Science and Technology Plan Program (20375402D).
Abstract: Using data from an airborne particle measurement system, weather radar, and a Ka-band millimeter-wave cloud radar, the physical structure characteristics of a typical stable stratiform cloud over Hebei Province on February 27, 2018 were analyzed. The results showed that the detected cloud system was a precipitating stratiform cloud in the later stage of development. The cloud layer developed stably, and its vertical structure was unevenly distributed. The concentration of small cloud particles in the high-level cloud was low, fluctuated greatly in space, and presented a discontinuous distribution. The concentration of large cloud particles and precipitation particles was high, which was conducive to the growth of cloud droplets and the aggregation of ice crystals. The concentration of small cloud particles and the supercooled water content were high in the middle- and low-level clouds. The precipitation cloud system had a significant layered structure, which conformed to the "catalysis-supply" mechanism. From the upper layer to the lower layer, the cloud particle spectrum mainly showed a single-peak or double-peak distribution, with a generally monotonic decreasing trend. The spectral distribution of small cloud particles in the cloud was discontinuous, the high-value areas of the spectral concentration of large cloud particles and precipitation particles were concentrated in the upper part of the cloud layer, and the particle spectrum was significantly broadened. There was an inversion zone at the bottom of the cloud layer, which was conducive to the continuous increase of particle concentration and the formation of large supercooled water droplets.