A statistically-based low-level cloud parameterization scheme is introduced, modified, and applied in the Flexible coupled General Circulation Model (FGCM-O). The low-level cloud scheme improves the simulated low-level cloud fractions and net surface shortwave radiation fluxes over the subtropical eastern oceans off the western coasts. Accompanying the improvement in the net surface shortwave radiation fluxes, the simulated distribution of SSTs is more realistically asymmetric about the equator in the tropical eastern Pacific, which suppresses, to some extent, the development of the double ITCZ in the model. Warm SST biases in the ITCZ north of the equator are also reduced. However, the equatorial cold tongue is strengthened and extends further westward, which reduces the precipitation rate in the western equatorial Pacific but increases it in the ITCZ north of the equator in the far eastern Pacific. It is demonstrated that the low-level cloud-radiation feedback enhances the cooperative feedback between the equatorial cold tongue and the ITCZ. Surface-layer heat budget analyses show that the reduction of SSTs is attributed both to the thermodynamic cooling process modified by the increase in cloud fraction and to oceanic dynamical cooling processes associated with the strengthened surface wind in the eastern equatorial Pacific, but mainly to oceanic dynamical cooling processes associated with the strengthened surface wind in the central and western equatorial Pacific.
Like many other coupled models, the Flexible coupled General Circulation Model (FGCM-0) suffers from a spurious "double ITCZ". To understand the double ITCZ in FGCM-0, this study first examines the low-level cloud cover and the bulk stability of the lower troposphere over the eastern subtropical Pacific simulated by the National Center for Atmospheric Research (NCAR) Community Climate Model version 3 (CCM3), the atmospheric component of FGCM-0. The bulk stability of the lower troposphere simulated by CCM3 is consistent with that derived from the National Centers for Environmental Prediction (NCEP) reanalysis, but the simulated low-level cloud cover is much smaller than that derived from the International Satellite Cloud Climatology Project (ISCCP) D2 data. Based on regression equations between the low-level cloud cover from the ISCCP data and the bulk stability of the lower troposphere derived from the NCEP reanalysis, the low-level cloud parameterization scheme in CCM3 is modified and used in sensitivity experiments to examine the impact of low-level cloud over the eastern subtropical Pacific on the spurious double ITCZ in FGCM-0. Results show that the modified scheme locally improves the simulated low-level cloud cover over the cold oceans. Increasing the low-level cloud cover off Peru not only significantly alleviates the warm SST biases in the southeastern tropical Pacific, but also causes the equatorial cold tongue to strengthen and extend further west. Increasing the low-level cloud fraction off California effectively reduces the warm SST biases in the ITCZ north of the equator.
To examine the feedback between SST and low-level cloud cover off Peru, an additional sensitivity experiment is performed in which the SST over the cold ocean off Peru is restored. Decreasing the SST produces impacts similar to those of increasing the low-level cloud cover over the wide region extending from the southeastern tropical Pacific northwestward to the western/central equatorial Pacific.
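The regression approach above (low-level cloud cover as a linear function of lower-tropospheric bulk stability) can be sketched as follows; the stability measure and the clipping are standard, but the slope and intercept are illustrative placeholders, not the coefficients actually fitted from the ISCCP/NCEP data:

```python
import numpy as np

def potential_temperature(temp_k, p_hpa, p0=1000.0, kappa=0.286):
    """Potential temperature (K) for air at temp_k (K) and pressure p_hpa (hPa)."""
    return temp_k * (p0 / p_hpa) ** kappa

def low_cloud_fraction(t_sfc, p_sfc, t_700):
    """Diagnose low-level cloud fraction from lower-tropospheric bulk stability.

    Bulk stability is theta(700 hPa) - theta(surface). The slope and intercept
    below are hypothetical, chosen only to show the shape of such a scheme.
    """
    lts = potential_temperature(t_700, 700.0) - potential_temperature(t_sfc, p_sfc)
    a, b = 0.057, -0.56  # hypothetical regression coefficients (per K, offset)
    return float(np.clip(a * lts + b, 0.0, 1.0))
```

A more stable lower troposphere (colder surface under the same 700 hPa air) yields a larger diagnosed cloud fraction, which is the behavior the sensitivity experiments exploit over the cold oceans.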
Cloud computing has created a paradigm shift in the way business applications are developed. Many business organizations use cloud infrastructures as platforms on which to deploy business applications. Increasing numbers of vendors are supplying the cloud marketplace with a wide range of cloud products, offered in differing formats, and the cost structures for consuming them can be complex. Finding a suitable set of cloud products that meets an application's requirements and budget can therefore be challenging. In this paper, an ontology-based resource mapping mechanism is proposed. Domain-specific ontologies are used to specify high-level application requirements, which are translated into high-level infrastructure ontologies and then mapped onto low-level descriptions of cloud resources. Cost ontologies are proposed for cloud resources. An exemplar media transcoding and delivery service is studied to illustrate how high-level requirements can be modeled and mapped onto cloud resources within a budget constraint. The proposed ontologies provide an application-centric mechanism for specifying cloud requirements, which can then be used to search for suitable resources in a multi-provider cloud environment.
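The final mapping step, matching translated requirements to priced resource descriptions under a budget, can be sketched without any ontology tooling; the product catalogue, attribute names, and prices below are invented for illustration and do not come from any vendor or from the paper:

```python
# Toy catalogue of low-level resource descriptions (all values invented).
products = [
    {"name": "vm.small",  "vcpus": 2, "ram_gb": 4,  "price_hr": 0.05},
    {"name": "vm.large",  "vcpus": 8, "ram_gb": 32, "price_hr": 0.40},
    {"name": "vm.medium", "vcpus": 4, "ram_gb": 16, "price_hr": 0.20},
]

def match_products(requirement, budget_hr):
    """Return products satisfying a high-level requirement, cheapest first."""
    feasible = [p for p in products
                if p["vcpus"] >= requirement["min_vcpus"]
                and p["ram_gb"] >= requirement["min_ram_gb"]
                and p["price_hr"] <= budget_hr]
    return sorted(feasible, key=lambda p: p["price_hr"])

# e.g. a transcoding worker needing 4 cores / 8 GB under $0.30 per hour:
choices = match_products({"min_vcpus": 4, "min_ram_gb": 8}, 0.30)
```

In the paper's design, the requirement dictionary would instead be produced by translating a domain-specific ontology into the infrastructure ontology; only the constraint-matching idea is shown here.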
Clouds play an important role in global atmospheric energy and water vapor budgets, yet low-cloud simulations suffer from large biases in many atmospheric general circulation models. In this study, cloud microphysical processes such as raindrop evaporation and cloud water accretion in a double-moment six-class cloud microphysics scheme were revised to enhance the simulation of low clouds in the Global-Regional Integrated Forecast System (GRIST) model. Validation of the revised scheme using a single-column version of GRIST demonstrated a reasonable reduction in liquid water biases. The revised parameterization simulated medium- and low-level cloud fractions in better agreement with observations than the original scheme. Long-term global simulations indicate mitigation of the originally overestimated low-level cloud fraction and cloud-water mixing ratio in mid- to high-latitude regions, primarily owing to enhanced accretion and weakened raindrop evaporation. The reduced low clouds with the revised scheme show better consistency with satellite observations, particularly at mid- and high latitudes. Further improvements are seen in the simulated cloud shortwave radiative forcing and the vertical distribution of total cloud cover. Simulated annual precipitation in mid-latitude regions also improves, particularly over the oceans, with significantly increased large-scale and decreased convective precipitation.
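As a rough illustration of why strengthening accretion depletes cloud water (and hence low cloud), a classic Kessler-type accretion rate can be written as follows; the functional form and constant are generic textbook values, not the revised GRIST formulation:

```python
def accretion_rate(q_c, q_r, c_acc=2.2):
    """Kessler-type accretion of cloud water by rain, dq_r/dt (kg/kg per s).

    q_c: cloud-water mixing ratio, q_r: rain-water mixing ratio.
    c_acc is a tunable collection efficiency; enlarging it (as the revision
    effectively does) converts cloud water to rain faster.
    """
    return c_acc * q_c * q_r ** 0.875
```

The rate grows with both mixing ratios, so any overestimated cloud-water reservoir is drawn down more quickly when the accretion process is enhanced.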
In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. With large volumes of data and user requests, finding the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to reduce energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified Artificial Ecosystem Optimization (AEO) combined with swarm operators from the Salp Swarm Algorithm (SSA), which competitively optimize the search during the exploitation phase. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), seeks the most suitable solution by combining cost and energy in a multi-objective task scheduling formulation. A knapsack (backpack) formulation is also incorporated to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Experimental results show that it improves energy consumption, cost, and time over other algorithms, and simulation results demonstrate that it improves average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
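The SSA component mentioned above uses the standard salp-chain update, in which a leader moves around the best solution found so far and followers average toward their predecessor. A minimal sketch of one such iteration is given below, in isolation and without the AEO operators the paper combines it with; bounds and coefficients are illustrative:

```python
import random

def salp_step(positions, food, lb, ub, c1, rng=random.random):
    """One Salp Swarm Algorithm iteration (leader + follower updates).

    positions: list of candidate solutions (lists of floats); food: best
    solution found so far; lb/ub: box bounds; c1: exploration coefficient
    that is normally decayed over iterations.
    """
    dim = len(food)
    new = [p[:] for p in positions]
    for j in range(dim):                      # leader explores around the food
        step = c1 * ((ub - lb) * rng() + lb)
        new[0][j] = food[j] + step if rng() < 0.5 else food[j] - step
        new[0][j] = min(max(new[0][j], lb), ub)
    for i in range(1, len(positions)):        # followers form a chain
        for j in range(dim):
            new[i][j] = 0.5 * (positions[i][j] + new[i - 1][j])
            new[i][j] = min(max(new[i][j], lb), ub)
    return new
```

With c1 driven toward zero, the leader collapses onto the best solution and the followers contract toward it, which is the exploitation behavior the hybrid relies on.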
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. Traditional approaches, however, frequently rely on single-objective methods that are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm optimizes task allocation by reducing makespan and financial cost while improving resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
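Two building blocks of such a hybrid can be sketched generically: a scalarized fitness over makespan, cost, and utilization, and the Metropolis acceptance rule that SA contributes during exploitation. The weights, units, and VM model below are illustrative assumptions, not MDMOSA's actual formulation:

```python
import math, random

def fitness(schedule, tasks, vm_speed, vm_price, w=(0.5, 0.3, 0.2)):
    """Scalarized objective: weighted makespan + cost - utilization.

    schedule[i] is the VM index assigned to task i; task lengths are in MI,
    vm_speed in MIPS, vm_price per busy hour-equivalent. Weights are
    illustrative, not the paper's settings.
    """
    busy = [0.0] * len(vm_speed)
    for length, vm in zip(tasks, schedule):
        busy[vm] += length / vm_speed[vm]
    makespan = max(busy)
    cost = sum(b * vm_price[i] for i, b in enumerate(busy))
    util = sum(busy) / (len(vm_speed) * makespan)  # fraction of VM-time used
    return w[0] * makespan + w[1] * cost - w[2] * util

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis rule: always accept improvements, sometimes accept worse."""
    return delta < 0 or rng() < math.exp(-delta / temperature)
```

A balanced schedule scores better than piling every task onto one VM, and `sa_accept` lets the search occasionally take an uphill move while the temperature is high, which is the exploitation/escape mechanism SA adds to DMO's exploration.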
The Pantone Color of the Year 2026, PANTONE 11-4201 Cloud Dancer, has been introduced as a soft, lofty white symbolizing calm and clarity in an increasingly noisy world. This gentle shade invites a sense of peace and spaciousness, encouraging focus and creating room for creativity and reflection. Cloud Dancer embodies a desire for simplicity and renewal: a blank canvas that allows our minds to wander and new ideas to take shape. Its expansive presence fosters environments where tranquility meets inspiration, offering visual calm that supports wellbeing and mental lightness.
This study introduces a new ocean surface friction velocity scheme and a modified Thompson cloud microphysics parameterization scheme into the CMA-TYM model. The impact of these two parameterization schemes on the predicted track and intensity of Typhoon Kompasu (2021) is examined, and the possible reasons for their effects on tropical cyclone (TC) intensity prediction are analyzed. Statistical results show that both schemes improve the predictions of Typhoon Kompasu's track and intensity. The influence on track prediction becomes evident after 60 h of model integration, while the significant positive impact on intensity prediction appears after 66 h. Further analysis reveals that the two schemes affect the timing and magnitude of extreme TC intensity by influencing the evolution of the TC's warm-core structure.
The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 MJ vs. 36.2 MJ for the commercial system, a 31% improvement), achieved through the novel integration of Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface, which collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
3D laser scanning technology is widely used in underground openings for high-precision, rapid, and nondestructive structural evaluations. Segmenting large 3D point cloud datasets, particularly in coal mine roadways with multi-scale targets, remains challenging. This paper proposes an enhanced segmentation method integrating an improved PointNet++ with a coverage-voted strategy. The coverage-voted strategy reduces data volume while preserving multi-scale target topology. Segmentation is achieved using an enhanced PointNet++ algorithm with a normalization preprocessing head, yielding 94% accuracy for common supporting components. Ablation experiments show that the preprocessing head and coverage strategies increase segmentation accuracy by 20% and 2%, respectively, and improve the Intersection over Union (IoU) for bearing plate segmentation by 58% and 20%. The accuracy of the current pretrained segmentation model may be affected by variations in surface support components, but it can be readily enhanced through re-optimization with additional labeled point cloud data. Combined with a previously developed machine learning model that links rock bolt load to the deformation field of its bearing plate, the proposed method provides a robust technique for simultaneously measuring the loads of multiple rock bolts in a single laser scan.
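The coverage-voted strategy itself is specific to the paper, but its general flavor, collapsing a labelled cloud while keeping per-region labels by voting, can be sketched with a plain voxel-grid reduction; the voxel size and labels below are illustrative, and this is only loosely analogous to the paper's method:

```python
import math
from collections import defaultdict, Counter

def voxel_downsample_vote(points, labels, voxel=0.1):
    """Reduce a labelled point cloud by voxel averaging with a label vote.

    Points falling in the same voxel collapse to their centroid, and the
    voxel's label is decided by majority vote among its members.
    """
    bins = defaultdict(list)
    for (x, y, z), lab in zip(points, labels):
        key = tuple(math.floor(c / voxel) for c in (x, y, z))
        bins[key].append(((x, y, z), lab))
    out_pts, out_labs = [], []
    for members in bins.values():
        pts = [p for p, _ in members]
        out_pts.append(tuple(sum(c) / len(pts) for c in zip(*pts)))
        out_labs.append(Counter(l for _, l in members).most_common(1)[0][0])
    return out_pts, out_labs
```

Such a reduction keeps one representative per occupied cell, so small components (a bearing plate among roadway walls) are not erased outright, which is the property the coverage strategy is designed to guarantee more carefully.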
In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur, and conducting actual test drives under various weather conditions may lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market-entry validation and have therefore become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot easily be verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world DCUs (Data Concentrator Units) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
Word cloud visualization is a compelling graphical representation that visually depicts the frequency of words within a given text or dataset [1]. Research on word clouds focuses on two main aspects: the first emphasizes processing words, such as using the latent Dirichlet allocation (LDA) algorithm to uncover topics in documents [2], while the second pursues visual impact through striking word arrangements [3,4]. In the realm of extensive biomedical data, effective knowledge delivery to biologists is crucial.
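The core statistic behind any word cloud is simply a word-to-frequency map, which the layout stage then scales into font sizes. A minimal sketch, with a tiny and purely illustrative stop-word list:

```python
import re
from collections import Counter

def word_frequencies(text, stopwords=frozenset({"the", "a", "of", "in"})):
    """Count word occurrences after lowercasing and stop-word removal."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in stopwords)

freqs = word_frequencies("The gene of the cell regulates the cell cycle")
# the layout stage would then scale each word's font size to its count
```

Topic-oriented approaches such as LDA replace this raw count with per-topic word weights, but the rendering step consumes the same kind of word-to-weight mapping.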
In this study, a statistical cloud scheme is first introduced and coupled with a first-order turbulence scheme, with second-order turbulence moments parameterized by the timescale of turbulence dissipation and the vertical turbulent diffusion coefficient. The ability of the scheme to simulate cloud fraction for different relative humidities, vertical temperature profiles, and turbulent dissipation timescales is then examined by numerical simulation. The simulated cloud fraction is found to be sensitive to the parameter used in the statistical cloud scheme and to the turbulent dissipation timescale. Based on these analyses, the statistical cloud scheme is modified. By combining the modified statistical cloud scheme with a boundary layer cumulus scheme, a new statistically-based low-level cloud scheme is proposed and tentatively applied in the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3). With the new scheme, the simulation of low-level cloud fraction is markedly improved, and the maxima of low-level cloud fraction over the cold oceans off the western coasts are well captured. This suggests that the new statistically-based low-level cloud scheme has great potential for improving low-level cloud parameterization in general circulation models.
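A common form of such a statistical cloud scheme diagnoses cloud fraction from an assumed subgrid PDF of total water whose width is supplied by the turbulence scheme. The Gaussian version below is a generic textbook closure, not necessarily the exact PDF or parameter choices used in the paper:

```python
import math

def gaussian_cloud_fraction(q_t, q_s, sigma):
    """Cloud fraction for a Gaussian subgrid PDF of total water.

    q_t: grid-mean total water, q_s: saturation value, sigma: subgrid
    standard deviation (in such schemes, tied to quantities like the
    turbulence dissipation timescale and diffusion coefficient).
    """
    s = (q_t - q_s) / (math.sqrt(2.0) * sigma)
    return 0.5 * (1.0 + math.erf(s))  # fraction of the PDF above saturation
```

At grid-mean saturation the fraction is exactly 0.5, and for a subsaturated grid box a broader PDF (larger sigma) yields more cloud, which is why the simulated cloud fraction is sensitive to the moment parameterization.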
Large-scale point cloud datasets form the basis for training deep learning networks and achieving high-quality network processing tasks. Owing to diversity and robustness constraints on the data, data augmentation (DA) methods are used to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can produce networks that do not robustly achieve the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and from relative motion during imaging. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments on point cloud detection and single-object tracking show that the proposed DA algorithm improves network performance over unaugmented datasets by more than 17.3% and 17.9%, respectively, surpassing the state-of-the-art performance of current point cloud DA algorithms.
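One Monte Carlo draw of such a distortion can be sketched as a random rigid perturbation of a LiDAR frame; the Gaussian error magnitudes below are illustrative and not calibrated to any real missile-borne imaging system, whose error model the paper derives in far more detail:

```python
import numpy as np

def jitter_pose(points, sigma_rot_deg=0.5, sigma_trans=0.05, rng=None):
    """One Monte Carlo draw of a distorted LiDAR frame.

    Applies a small random yaw rotation and translation drawn from Gaussian
    error models for the platform pose; points is an (N, 3) array.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    yaw = np.deg2rad(rng.normal(0.0, sigma_rot_deg))
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    shift = rng.normal(0.0, sigma_trans, size=3)
    return points @ rot.T + shift
```

Sampling many such draws per frame yields an augmented dataset whose geometric variability mimics platform and motion errors, rather than the photometric perturbations typical of 2D DA.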
Funding: This study was jointly supported by the National Science Foundation of China under Grant Nos. 40233031 and 40221503, and by the National Key Basic Research Project under Grant No. G200078502.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 40023001 and 40233031, the "Innovation Program" under Grant ZKCX2-SW-210, and the National Key Basic Research Project under Grant G200078502.
Funding: National Natural Science Foundation of China (42375153, 42105153, 42205157); Development of Science and Technology at Chinese Academy of Meteorological Sciences (2023KJ038).
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: The Pantone Color of the Year 2026, PANTONE 11-4201 Cloud Dancer, has been introduced as a soft, lofty white symbolizing calm and clarity in an increasingly noisy world. This gentle shade invites a sense of peace and spaciousness, encouraging focus and creating room for creativity and reflection. Cloud Dancer embodies a desire for simplicity and renewal: a blank canvas that allows our minds to wander and new ideas to take shape. Its expansive presence fosters environments where tranquility meets inspiration, offering visual calm that supports wellbeing and mental lightness.
Funding: Supported by the National Key R&D Program of China [grant number 2023YFC3008004].
Abstract: This study introduces a new ocean surface friction velocity scheme and a modified Thompson cloud microphysics parameterization scheme into the CMA-TYM model. The impact of these two parameterization schemes on the prediction of the track and intensity of Typhoon Kompasu (2021) is examined, and the possible reasons for their effects on tropical cyclone (TC) intensity prediction are analyzed. Statistical results show that both parameterization schemes improve the predictions of Typhoon Kompasu's track and intensity. The influence on track prediction becomes evident after 60 h of model integration, while the significant positive impact on intensity prediction is observed after 66 h. Further analysis reveals that the two schemes affect the timing and magnitude of extreme TC intensity values by influencing the evolution of the TC's warm-core structure.
Funding: Funded by the National Council of Science, Technology, and Technological Innovation (CONCYTEC) and the National Program of Scientific Research and Advanced Studies (PROCIENCIA) under the E041-2022 "Applied Research Projects" competition. Contract number: PE501078609-2022-PROCIENCIA.
Abstract: The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 vs. 36.2 MJ for the commercial system, a 31% improvement), achieved through the novel integration of an Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface. These modifications collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
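The stored-energy comparison in the abstract above (47.4 vs. 36.2 MJ, ~31%) follows from the sensible-heat relation Q = m·c_p·ΔT and a simple relative-improvement calculation. The tank mass and temperature rise in the sketch below are illustrative placeholders, not values reported by the study:

```python
def stored_thermal_energy_mj(mass_kg, delta_t_k, c_p=4186.0):
    """Sensible heat stored in the tank water, Q = m * c_p * dT, in MJ.
    c_p defaults to the specific heat of water (~4186 J/(kg*K))."""
    return mass_kg * c_p * delta_t_k / 1e6

# Relative improvement of the prototype over the commercial system,
# using the two storage figures reported in the abstract.
improvement = (47.4 - 36.2) / 36.2 * 100  # ~31%
```

This is the kind of derived metric the monitoring framework can compute continuously from its PT100 temperature streams.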
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52304139 and 52325403) and CCTEG Coal Mining Research Institute funding (Grant No. KCYJY-2024-MS-10).
Abstract: 3D laser scanning technology is widely used in underground openings for high-precision, rapid, and nondestructive structural evaluations. Segmenting large 3D point cloud datasets, particularly in coal mine roadways with multi-scale targets, remains challenging. This paper proposes an enhanced segmentation method integrating an improved PointNet++ with a coverage-voted strategy. The coverage-voted strategy reduces data volume while preserving the topology of multi-scale targets. Segmentation is achieved using an enhanced PointNet++ algorithm with a normalization preprocessing head, resulting in 94% accuracy for common supporting components. Ablation experiments show that the preprocessing head and coverage strategies increase segmentation accuracy by 20% and 2%, respectively, and improve the Intersection over Union (IoU) for bearing plate segmentation by 58% and 20%. The accuracy of the current pretrained segmentation model may be affected by variations in surface support components, but it can be readily enhanced through re-optimization with additional labeled point cloud data. The proposed method, combined with a previously developed machine learning model that links rock bolt load to the deformation field of its bearing plate, provides a robust technique for simultaneously measuring the loads of multiple rock bolts in a single laser scan.
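The per-class IoU metric used in the ablation results above is defined over per-point predicted and ground-truth labels. A minimal sketch (the label encoding is an assumption; real pipelines compute this over millions of points with array libraries):

```python
def class_iou(pred, gt, cls):
    """Intersection over Union for one class, given per-point label lists:
    |points labelled cls in both| / |points labelled cls in either|."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 1.0  # class absent from both: perfect
```

Accuracy, by contrast, is just the fraction of points whose labels match, which is why a small class like bearing plates can show a large IoU gain while overall accuracy moves less.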
Funding: Supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)); by a Korea Planning & Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) (RS-2025-25448249, Automotive Industry Technology Development (R&D) Program); and by the Regional Innovation System & Education (RISE) program through the Gwangju RISE Center, funded by the Ministry of Education (MOE) and the Gwangju Metropolitan City, Republic of Korea (2025-RISE-05-001).
Abstract: In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur. Conducting actual test drives under various weather conditions may also lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market-entry validation, and consequently have become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot be easily verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world DCUs (Data Concentrator Units) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
Funding: Supported by the National Key R&D Program of China (2022YFC2704304 and 2021YFF0702000), the National Natural Science Foundation of China (32341020 and 32341021), the Hubei Innovation Group Project (2021CFA005), and the Research Core Facilities for Life Science (HUST).
Abstract: Word cloud visualization is a compelling graphical representation that visually depicts the frequency of words within a given text or dataset [1]. Research on word clouds focuses on two main aspects: the first emphasizes processing words, such as using the latent Dirichlet allocation (LDA) algorithm to uncover topics in documents [2], while the second pursues visual impact through striking word arrangements [3,4]. In the realm of extensive biomedical data, effective knowledge delivery to biologists is crucial.
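The frequency counting that underlies any word cloud can be sketched in a few lines. The tokenization rule and stopword list below are illustrative assumptions; real pipelines use domain-specific vocabularies, especially for biomedical text:

```python
import re
from collections import Counter

def word_frequencies(text, stopwords=frozenset({"the", "a", "of", "and", "in"})):
    """Lowercase, tokenise on alphabetic runs, drop stopwords, and count
    occurrences. In a word cloud, each word's font size is typically
    scaled to its count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)
```

Topic-model approaches such as LDA replace raw counts with per-topic word weights, but the rendering step consumes the same word-to-weight mapping.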
Funding: This study is jointly supported by the Chinese Academy of Sciences "Innovation Program" under Grant ZKCX2-SW-210, the National Natural Science Foundation of China under Grant Nos. 40233031, 40231004, and 40221503, and the National Key Basic Research Project.
Abstract: In this study, a statistical cloud scheme is first introduced and coupled with a first-order turbulence scheme whose second-order turbulence moments are parameterized by the timescale of the turbulence dissipation and the vertical turbulent diffusion coefficient. The ability of the scheme to simulate cloud fraction under different relative humidities, vertical temperature profiles, and timescales of the turbulent dissipation is then examined by numerical simulation. It is found that the simulated cloud fraction is sensitive to the parameter used in the statistical cloud scheme and to the timescale of the turbulent dissipation. Based on these analyses, the statistical cloud scheme is modified. By combining the modified statistical cloud scheme with a boundary layer cumulus scheme, a new statistically-based low-level cloud scheme is proposed and tentatively applied in the NCAR (National Center for Atmospheric Research) CCM3 (Community Climate Model version 3). With the statistically-based low-level cloud scheme applied in CCM3, the simulation of low-level cloud fraction is markedly improved, and the centers of maximum low-level cloud fraction are well simulated over the cold oceans off the western coasts. This suggests that the new statistically-based low-level cloud scheme has great potential for improving low-level cloud parameterization in general circulation models.
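To make the humidity dependence concrete, here is a classic Sundqvist-type diagnostic relation between relative humidity and cloud fraction, C = 1 - sqrt((1 - RH)/(1 - RH_crit)). It illustrates the general shape of a statistical cloud closure and its sensitivity to a tunable critical-humidity parameter; it is not the specific scheme developed in the paper:

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-type diagnostic cloud fraction, clipped to [0, 1].
    Cloud forms only once RH exceeds the critical threshold rh_crit,
    and grows toward full cover as RH approaches saturation."""
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))
```

Varying `rh_crit` shifts where cloud onset occurs, which mirrors the paper's finding that the simulated cloud fraction is sensitive to the scheme's parameter.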
Funding: Postgraduate Innovation Top-notch Talent Training Project of Hunan Province (Grant/Award Number: CX20220045); Scientific Research Project of National University of Defense Technology (Grant/Award Number: 22-ZZCX-07); New Era Education Quality Project of Anhui Province (Grant/Award Number: 2023cxcysj194); National Natural Science Foundation of China (Grant/Award Numbers: 62201597, 62205372, 1210456); Foundation of Hefei Comprehensive National Science Center (Grant/Award Number: KY23C502).
Abstract: Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Because of the diversity and robustness constraints of the data, data augmentation (DA) methods are used to expand dataset diversity and scale. However, LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, so directly applying traditional 2D visual-domain DA methods to 3D data can yield networks that do not robustly achieve the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and from relative motion during imaging. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated on the imaging scene dataset constructed in this paper. Comparative experiments against current state-of-the-art (SOTA) point cloud DA algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
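The Monte Carlo idea described above amounts to drawing random realisations of the imaging-system error and applying each to the point cloud. The sketch below illustrates the pattern with a rigid per-sample offset plus per-point noise; the error magnitudes and the two-term error decomposition are illustrative placeholders, not the paper's calibrated multi-sensor error model:

```python
import random

def mc_augment(points, sigma_point=0.02, sigma_rigid=0.001, n_samples=1, seed=0):
    """Monte Carlo augmentation sketch for a list of (x, y, z) points.
    Each sample draws one rigid platform offset (shared by all points)
    plus independent per-point Gaussian noise, yielding n_samples
    distorted copies of the input cloud."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(n_samples):
        ox, oy, oz = (rng.gauss(0.0, sigma_rigid) for _ in range(3))
        sample = [(x + ox + rng.gauss(0.0, sigma_point),
                   y + oy + rng.gauss(0.0, sigma_point),
                   z + oz + rng.gauss(0.0, sigma_point))
                  for x, y, z in points]
        augmented.append(sample)
    return augmented
```

A calibrated version would replace the Gaussian draws with samples from the joint platform/motion error distributions identified by the imaging-system analysis.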