The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 vs. 36.2 MJ for the commercial system, a 31% improvement), achieved through the novel integration of Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface. These modifications collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
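The reported 31% gain follows directly from the stored-heat comparison. As a sanity check, sensible heat in a water tank is Q = m·cp·ΔT; the tank masses and temperature rise below are hypothetical values chosen only so the totals reproduce the reported 47.4 MJ and 36.2 MJ figures:

```python
def stored_thermal_energy(mass_kg: float, delta_t_k: float,
                          cp_j_per_kg_k: float = 4186.0) -> float:
    """Sensible heat stored in a water tank: Q = m * cp * dT (joules)."""
    return mass_kg * cp_j_per_kg_k * delta_t_k

# Hypothetical tank masses and a 40 K temperature rise, chosen so the
# totals match the reported 47.4 MJ (prototype) and 36.2 MJ (commercial).
q_prototype = stored_thermal_energy(283.0, 40.0)
q_commercial = stored_thermal_energy(216.0, 40.0)
improvement = (q_prototype - q_commercial) / q_commercial  # ~0.31
```

The relative improvement (47.4 − 36.2) / 36.2 ≈ 0.31 matches the abstract's 31% figure.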
Human Resource (HR) operations increasingly rely on cloud-based platforms that provide hiring, payroll, employee management, and compliance services. These systems, typically built on multi-tenant microservice architectures, offer scalability and efficiency but also expand the attack surface for adversaries. Ransomware has emerged as a leading threat in this domain, capable of halting workflows and exposing sensitive employee records. Traditional defenses such as static hardening and signature-based detection often fail to address the dynamic requirements of HR Software as a Service (SaaS), where continuous availability, privacy compliance, and live service continuity are critical. This paper presents a Moving Target Defense (MTD) framework for HR SaaS that combines container mutation, IP hopping, and node reassignment to randomize the attack surface without pausing services. The framework runs on Kubernetes and uses a KL-divergence-based anomaly detector that monitors HR access logs across five modules (onboarding, employee records, leave, payroll, and exit). In simulation with realistic HR traffic, the approach reaches 96.9% average detection accuracy with AUC 0.94-0.98, cuts mean time to containment to 91.4 s, and lowers the ransomware encryption rate to 13.2%. Measured overheads for CPU, memory, and per-mutation latency remain modest. Compared with prior MTD and non-MTD baselines, the design provides stronger containment without service interruption and aligns with zero-trust and compliance goals. Its modular implementation and control-plane orchestration support stepwise, enterprise-scale deployment in HR SaaS environments.
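The core of a KL-divergence anomaly detector of this kind can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the five module names come from the abstract, but all access counts and the smoothing constant are invented.

```python
from math import log

def kl_divergence(p, q, eps=1e-9):
    """D_KL(P || Q) over discrete distributions given as aligned lists."""
    return sum(pi * log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Baseline vs. observed access counts across the five HR modules
# (onboarding, records, leave, payroll, exit) -- numbers are invented.
baseline = normalize([120, 300, 80, 150, 50])
normal   = normalize([115, 310, 75, 160, 45])
attack   = normalize([10, 20, 5, 900, 5])   # payroll module hammered

score_normal = kl_divergence(normal, baseline)   # small divergence
score_attack = kl_divergence(attack, baseline)   # large divergence
```

Flagging a window whenever its score crosses a calibrated threshold gives the binary detection decision; the paper's detector presumably tunes this threshold per module and traffic profile.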
In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur. Conducting actual test drives under various weather conditions may also lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market entry validation; consequently, they have become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot be easily verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world DCUs (Data Concentrator Units) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Due to the diversity and robustness constraints of the data, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR data) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can yield networks that cannot robustly achieve the corresponding tasks. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is proposed, underpinned by an analysis of combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments between the proposed point cloud DA algorithm and current state-of-the-art algorithms in point cloud detection and single object tracking tasks demonstrate that the proposed method can improve the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
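The Monte Carlo flavour of such augmentation can be sketched in miniature: draw one random rigid-body error per frame and apply it to every point. The real method models full multi-sensor joint errors; the sketch below uses only a yaw angle and an x-y translation with invented sigmas, purely to show the sampling idea.

```python
import math
import random

def apply_pose_error(points, yaw_sigma=0.01, trans_sigma=0.05, rng=None):
    """Monte Carlo-style augmentation: draw a random yaw and translation
    error for the whole frame and apply the rigid transform to each point."""
    rng = rng or random.Random()
    yaw = rng.gauss(0.0, yaw_sigma)                       # radians
    tx, ty = rng.gauss(0.0, trans_sigma), rng.gauss(0.0, trans_sigma)
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x - s * y + tx, s * x + c * y + ty, z)
            for x, y, z in points]

cloud = [(1.0, 0.0, 0.5), (0.0, 2.0, 0.5), (3.0, 3.0, 1.0)]
augmented = apply_pose_error(cloud, rng=random.Random(42))
```

Because the sampled error is rigid, pairwise distances within the frame are preserved while the frame as a whole is distorted relative to other sensors, which is the kind of inter-sensor misalignment the paper simulates.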
The rapid advances in artificial intelligence and big data have transformed the dynamic demands placed on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task due to the huge distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources with their dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS initially adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses CBBM for potential Virtual Machine (VM) deployment, which contributes to the provision of optimal resources. The scheme achieves optimal QoS with minimized time, energy consumption, SLA cost and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges in the face of new evaluation objects, and research on integrating distributed storage, processing and learning functions in a logging big data private cloud has not yet been carried out. This study establishes a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus addressing the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery method, data, software and results sharing, accuracy, speed and complexity.
Funding: funded by the National Natural Science Foundation of China (New Design and Analysis of Fully Homomorphic Signatures, Grant No. 62172436).
Abstract: With the rapid expansion of the Internet of Things (IoT), user data has experienced exponential growth, leading to increasing concerns about the security and integrity of data stored in the cloud. Traditional schemes relying on untrusted third-party auditors suffer from both security and efficiency issues, while existing decentralized blockchain-based auditing solutions still face shortcomings in correctness and security. This paper proposes an improved blockchain-based cloud auditing scheme with the following core contributions: identifying critical logical contradictions in the original scheme, thereby establishing the foundation for the correctness of cloud auditing; designing an enhanced mechanism that integrates multiple hashing with dynamic aggregate signatures, binding encrypted blocks through bilinear pairings and BLS signatures, and setting parameters based on the Computational Diffie-Hellman (CDH) problem, significantly strengthening data integrity protection and anti-forgery capabilities; and introducing a random challenge mechanism and dynamic parameter adjustment strategy, effectively resisting attacks such as forgery, tampering, and deletion, improving the detection probability of malicious Cloud Service Providers (CSPs), and reducing the proof generation overhead for CSPs while maintaining the same computational cost for Data Owners. Theoretical analysis and performance evaluation experiments demonstrate that the proposed scheme achieves significant improvements in both security and efficiency. Finally, the paper explores potential applications of the enhanced security scheme in fields such as healthcare, drone swarms, and government office attendance systems, providing an effective approach for building secure, efficient, and decentralized cloud auditing systems.
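The power of a random challenge mechanism against a malicious CSP can be illustrated with a standard sampling argument (a generic calculation, not this paper's parameters): if t of n stored blocks are corrupted and the auditor challenges c blocks chosen uniformly at random, detection succeeds with probability 1 − C(n−t, c)/C(n, c).

```python
from math import comb

def detection_probability(n: int, t: int, c: int) -> float:
    """Probability that a uniform random challenge of c out of n blocks
    hits at least one of the t corrupted blocks."""
    if t == 0:
        return 0.0
    if c > n - t:
        return 1.0  # more challenges than intact blocks: a hit is certain
    return 1.0 - comb(n - t, c) / comb(n, c)

# With 1% of 10,000 blocks corrupted, challenging ~460 blocks already
# detects tampering with probability above 99%.
p = detection_probability(10_000, 100, 460)
```

This is why challenge-response auditing scales well: the number of challenged blocks needed for a fixed detection probability grows with the corruption fraction, not with the total data size.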
Funding: supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for dealing with the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified Artificial Ecosystem Optimization (AEO) combined with negative swarm operators of the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. The backpack (knapsack) problem is also incorporated to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
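The SA half of such a hybrid can be sketched on a scalarized makespan-plus-cost objective. This is a generic weighted-sum formulation with invented weights, cooling schedule, and toy instance, not the paper's exact model:

```python
import math
import random

def weighted_objective(assignment, exec_time, cost_rate,
                       w_time=0.5, w_cost=0.5):
    """Scalarized objective for a task->VM assignment:
    weighted sum of makespan and total execution cost."""
    vm_load, total_cost = {}, 0.0
    for task, vm in enumerate(assignment):
        t = exec_time[task][vm]
        vm_load[vm] = vm_load.get(vm, 0.0) + t
        total_cost += t * cost_rate[vm]
    return w_time * max(vm_load.values()) + w_cost * total_cost

def anneal(assignment, exec_time, cost_rate,
           temp=10.0, cooling=0.95, steps=200, rng=None):
    """SA refinement: move one task to a random VM; accept worse moves
    with probability exp(-delta / temp), cooling each step."""
    rng = rng or random.Random()
    cur, cur_f = list(assignment), weighted_objective(assignment, exec_time, cost_rate)
    best, best_f = list(cur), cur_f
    for _ in range(steps):
        cand = list(cur)
        cand[rng.randrange(len(cand))] = rng.randrange(len(cost_rate))
        f = weighted_objective(cand, exec_time, cost_rate)
        if f < cur_f or rng.random() < math.exp((cur_f - f) / temp):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = cand, f
        temp *= cooling
    return best, best_f

# Toy instance: 3 tasks, 2 VMs (all numbers invented for illustration).
exec_time = [[4.0, 2.0], [3.0, 6.0], [5.0, 1.0]]  # seconds per task per VM
cost_rate = [1.0, 2.0]                             # cost per second per VM
best, best_f = anneal([0, 0, 0], exec_time, cost_rate, rng=random.Random(7))
```

In the full hybrid, the DMO phase would supply diverse candidate assignments and this SA step would refine them; the acceptance rule is what lets the search escape shallow local optima early on.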
Funding: supported in part by the National Key Research and Development Program of China under Grant 2021YFB3101100; in part by the National Natural Science Foundation of China under Grants 42461057, 62272123, and 42371470; in part by the Fundamental Research Program of Shanxi Province under Grant 202303021212164; and in part by the Postgraduate Education Innovation Program of Shanxi Province under Grant 2024KY474.
Abstract: Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) during the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification; (2) in the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework using the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance the field of point cloud classification in a secure environment.
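The baseline operation in the server aggregation phase is weighted parameter averaging (FedAvg); FDASS's skip/reallocate decisions layer on top of it. A minimal averaging step over flattened parameter vectors, with invented client parameters and dataset sizes:

```python
def fedavg(client_params, client_sizes):
    """Weighted federated averaging: each parameter entry is averaged
    with weights proportional to the client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [
        sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with flattened parameter vectors and dataset sizes 100 / 300.
clients = [[1.0, 2.0], [5.0, 6.0]]
sizes = [100, 300]
global_params = fedavg(clients, sizes)  # -> [4.0, 5.0]
```

Weighting by dataset size keeps large clients from being drowned out by small ones; a dynamic selection strategy then decides per round whether to average, skip, or redistribute the local models.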
Abstract: The Internet of Things (IoT) interconnects devices via network protocols to enable intelligent sensing and control. Resource-constrained IoT devices rely on cloud servers for data storage and processing. However, this cloud-assisted architecture faces two critical challenges: untrusted cloud services and the separation of data ownership from control. Although Attribute-based Searchable Encryption (ABSE) provides fine-grained access control and keyword search over encrypted data, existing schemes lack error tolerance in exact multi-keyword matching. In this paper, we propose an attribute-based multi-keyword fuzzy searchable encryption with forward ciphertext search (FCS-ABMSE) scheme that avoids computationally expensive bilinear pairing operations on the IoT device side. The scheme supports multi-keyword fuzzy search without requiring explicit keyword fields, thereby significantly enhancing error tolerance in search operations. It further incorporates forward-secure ciphertext search to mitigate trapdoor abuse, as well as offline encryption and verifiable outsourced decryption to minimize user-side computational costs. Formal security analysis proves that the FCS-ABMSE scheme achieves both indistinguishability of ciphertext under chosen keyword attacks (IND-CKA) and indistinguishability of ciphertext under chosen plaintext attacks (IND-CPA). In addition, we construct an enhanced variant based on type-3 pairings. Results demonstrate that the proposed scheme outperforms existing ABSE approaches in terms of functionality, computational cost, and communication cost.
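Error-tolerant keyword matching of this kind is commonly built on bigram-set similarity (often combined with locality-sensitive hashing over encrypted vectors). The sketch below shows only the plaintext similarity idea, not the paper's encrypted construction; the 0.5 threshold is an arbitrary choice:

```python
def bigrams(word: str) -> set:
    """Set of consecutive character pairs, e.g. 'data' -> {da, at, ta}."""
    w = word.lower()
    return {w[i:i + 2] for i in range(len(w) - 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two bigram sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def fuzzy_match(query: str, keyword: str, threshold: float = 0.5) -> bool:
    """Tolerates typos: 'securty' still matches 'security'."""
    return jaccard(bigrams(query), bigrams(keyword)) >= threshold

ok = fuzzy_match("securty", "security")  # True despite the missing 'i'
```

In a searchable-encryption setting the bigram vectors would be embedded and encrypted before comparison, so the server can score similarity without learning the keywords themselves.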
Funding: Funded through project NUBOLOSYTI (PID2023149972NB-100) of the Spanish Ministry of Science and Innovation (MICINN); supported by an IFUdG 2022 fellowship.
Abstract: The interactions between clouds and aerosols represent one of the largest uncertainties in assessing the Earth's radiation budget, highlighting the importance of research on the transition zone (TZ) within the cloud-aerosol continuum. This study assesses the global distribution of TZ conditions, analyzes their optical characteristics, and determines the cloud or aerosol types most commonly associated with them, using the cloud-aerosol discrimination (CAD) score of the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) instrument on the CALIPSO satellite. The CAD score classifies clouds and aerosols by probability density functions of attenuated backscatter, total color ratio, volume depolarization ratio, altitude, and latitude. After applying several filters to avoid artifacts, the TZ was identified as those atmospheric layers that cannot be clearly classified as clouds or aerosols: layers within the no-confidence range (NCR) of the CAD score, and cirrus fringes. The optical characteristics of NCR layers exhibit two main clusters: Cluster 1, with properties between high-altitude ice clouds and aerosols (e.g., wispy cloud fragments), and Cluster 2, with properties between water clouds and aerosols at lower altitudes (e.g., large hydrated aerosols). Our results highlight the ubiquity of TZ conditions, which appear in 9.5% of all profiles and comprise 6.4% of the detected layers. Cluster 1 and cirrus-fringe layers predominate near the ITCZ and in mid-latitudes, whereas Cluster 2 layers are more frequent over the oceans along the central West African and East Asian coasts, where elevated smoke and dusty marine aerosols are common.
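The CAD score runs from -100 (confident aerosol) to +100 (confident cloud), so the no-confidence-range filter reduces to a threshold on its magnitude. A minimal sketch; the threshold of 20 is illustrative, not the paper's exact filter, and the cirrus-fringe test is omitted:

```python
def classify_layer(cad_score, ncr_threshold=20):
    """Classify a CALIOP layer by its CAD score.

    Scores near zero fall in the no-confidence range (NCR) and are
    treated as transition-zone candidates. The threshold here is an
    illustrative choice, not the study's calibrated filter.
    """
    if abs(cad_score) < ncr_threshold:
        return "transition_zone"
    return "cloud" if cad_score > 0 else "aerosol"
```

A confidently detected cirrus layer (CAD near +100) and a dust layer (CAD near -100) pass straight through; only ambiguous layers enter the TZ analysis.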
Funding: Supported by the National Key R&D Program of China [grant number 2023YFC3008004].
Abstract: This study introduces a new ocean surface friction velocity scheme and a modified Thompson cloud microphysics parameterization scheme into the CMA-TYM model. The impact of these two parameterization schemes on the prediction of the track and intensity of Typhoon Kompasu (2021) is examined, and the possible reasons for their effects on tropical cyclone (TC) intensity prediction are analyzed. Statistical results show that both parameterization schemes improve the predictions of Typhoon Kompasu's track and intensity. The influence on track prediction becomes evident after 60 h of model integration, while the significant positive impact on intensity prediction is observed after 66 h. Further analysis reveals that the two schemes affect the timing and magnitude of extreme TC intensity values by influencing the evolution of the TC's warm-core structure.
Funding: Funded by the National Council of Science, Technology, and Technological Innovation (CONCYTEC) and the National Program of Scientific Research and Advanced Studies (PROCIENCIA) under the E041-2022 "Applied Research Projects" competition. Contract number: PE501078609-2022-PROCIENCIA.
Abstract: The continuous improvement of solar thermal technologies is essential to meet the growing demand for sustainable heat generation and to support global decarbonization efforts. This study presents the design, implementation, and validation of a real-time monitoring framework based on the Internet of Things (IoT) and cloud computing to enhance the thermal performance of evacuated tube solar water heaters (ETSWHs). A commercial system and a custom-built prototype were instrumented with Industry 4.0 technologies, including platinum resistance temperature detectors (PT100), solar irradiance and wind speed sensors, a programmable logic controller (PLC), a SCADA interface, and a cloud-connected IoT gateway. Data were processed locally and transmitted to cloud storage for continuous analysis and visualization via a mobile application. Experimental results demonstrated the prototype's superior thermal energy storage capacity (47.4 MJ vs. 36.2 MJ for the commercial system, a 31% improvement), achieved through the novel integration of an Industry 4.0 architecture with an optimized collector design. This improvement is attributed to optimized geometric design parameters, including a reduced tilt angle, increased inter-tube spacing, and the incorporation of an aluminum reflective surface. These modifications collectively enhanced solar heat absorption and reduced optical losses. The framework effectively identified thermal stratification, monitored environmental effects on heat transfer, and enabled real-time system diagnostics. By integrating automation, IoT, and cloud computing, the proposed architecture establishes a scalable and replicable model for the intelligent management of solar thermal systems, facilitating predictive maintenance and future integration with artificial intelligence for performance forecasting. This work provides a practical, data-driven approach to digitizing and optimizing heat transfer systems, promoting more efficient and sustainable solar thermal energy applications.
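The stored-energy comparison reduces to sensible heat in the water tank, Q = m * c_p * dT, and the 31% figure is the relative gain of 47.4 MJ over 36.2 MJ. A quick worked check using only the numbers reported in the abstract:

```python
def stored_thermal_energy(mass_kg, delta_t_k, c_p=4186.0):
    """Sensible heat stored in a water tank, Q = m * c_p * dT, in joules.
    c_p is the specific heat of water (J/(kg*K))."""
    return mass_kg * c_p * delta_t_k

# Relative improvement of the prototype over the commercial system,
# using the storage figures from the abstract (47.4 MJ vs. 36.2 MJ).
improvement = (47.4 - 36.2) / 36.2   # ~0.31, the 31% gain cited

# Example: a 100 kg tank heated by 50 K stores ~20.9 MJ.
q = stored_thermal_energy(100, 50)
```

The arithmetic confirms the abstract's figure: 11.2/36.2 is approximately 0.309, i.e., a 31% gain.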
Abstract: Human Resource (HR) operations increasingly rely on cloud-based platforms that provide hiring, payroll, employee management, and compliance services. These systems, typically built on multi-tenant microservice architectures, offer scalability and efficiency but also expand the attack surface for adversaries. Ransomware has emerged as a leading threat in this domain, capable of halting workflows and exposing sensitive employee records. Traditional defenses such as static hardening and signature-based detection often fail to address the dynamic requirements of HR Software as a Service (SaaS), where continuous availability, uninterrupted sessions, and privacy compliance are critical. This paper presents a Moving Target Defense (MTD) framework for HR SaaS that combines container mutation, IP hopping, and node reassignment to randomize the attack surface without pausing services. The framework runs on Kubernetes and uses a KL-divergence-based anomaly detector that monitors HR access logs across five modules (onboarding, employee records, leave, payroll, and exit). In simulations with realistic HR traffic, the approach reaches 96.9% average detection accuracy with AUC 0.94-0.98, cuts mean time to containment to 91.4 s, and lowers the ransomware encryption rate to 13.2%. Measured overheads for CPU, memory, and per-mutation latency remain modest. Compared with prior MTD and non-MTD baselines, the design provides stronger containment without service interruption and aligns with zero-trust and compliance goals. Its modular implementation and control-plane orchestration support stepwise, enterprise-scale deployment in HR SaaS environments.
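A KL-divergence detector of the kind described compares the module-usage distribution in a log window against a learned baseline. A minimal sketch, assuming access frequencies over the five HR modules; the 0.5 alert threshold is illustrative, not the paper's tuned value:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) between two discrete access-frequency distributions.
    eps guards against zero counts in either distribution."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def is_anomalous(baseline, window, threshold=0.5):
    """Flag a log window whose module-usage distribution diverges
    from the baseline. Threshold is an illustrative choice."""
    return kl_divergence(window, baseline) > threshold

# Baseline: uniform use of the five modules (onboarding, records,
# leave, payroll, exit). Ransomware traversal hammers one module.
baseline = [0.2] * 5
normal = [0.22, 0.18, 0.20, 0.21, 0.19]
attack = [0.90, 0.025, 0.025, 0.025, 0.025]
```

The skewed `attack` window diverges sharply from the uniform baseline, while ordinary day-to-day variation stays below the threshold.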
Funding: Supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)); by a Korea Planning & Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) (RS-2025-25448249, Automotive Industry Technology Development (R&D) Program); and by the Regional Innovation System & Education (RISE) program through the Gwangju RISE Center, funded by the Ministry of Education (MOE) and the Gwangju Metropolitan City, Republic of Korea (2025-RISE-05-001).
Abstract: In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur. Conducting actual test drives under various weather conditions may also lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market-entry validation; consequently, they have become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot be easily verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world Data Concentrator Units (DCUs) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
Funding: Postgraduate Innovation Top-notch Talent Training Project of Hunan Province, Grant/Award Number: CX20220045; Scientific Research Project of National University of Defense Technology, Grant/Award Number: 22-ZZCX-07; New Era Education Quality Project of Anhui Province, Grant/Award Number: 2023cxcysj194; National Natural Science Foundation of China, Grant/Award Numbers: 62201597, 62205372, 1210456; Foundation of Hefei Comprehensive National Science Center, Grant/Award Number: KY23C502.
Abstract: Large-scale point cloud datasets form the basis for training deep learning networks and achieving high-quality network processing tasks. Owing to constraints on data diversity and robustness, data augmentation (DA) methods are utilised to expand dataset diversity and scale. However, because LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR) have complex and distinct characteristics, directly applying traditional 2D visual-domain DA methods to 3D data can leave the trained networks unable to perform the corresponding tasks robustly. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. Firstly, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and the relative motion during the imaging process. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between sensors of different modalities, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging system errors and distortion simulation is validated using the imaging scene dataset constructed in this paper. Comparative experiments on point cloud detection and single object tracking tasks demonstrate that the proposed DA method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9% respectively, surpassing the state-of-the-art performance of current point cloud DA algorithms.
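The core of MC-based distortion augmentation is drawing many random error realizations and applying each to the point cloud. A minimal sketch, assuming a rigid-body platform offset plus per-point Gaussian range noise; the error magnitudes are illustrative, not the paper's calibrated imaging-system error model:

```python
import numpy as np

def mc_augment(points, n_samples=4, sigma_trans=0.05, sigma_noise=0.02, seed=0):
    """Monte Carlo distortion augmentation for an (N, 3) LiDAR point cloud.

    Each sample draws one random rigid translation (standing in for
    platform/boresight error) and independent per-point Gaussian noise
    (standing in for range error). Sigmas here are illustrative, not
    the paper's combined multi-sensor error model.
    """
    rng = np.random.default_rng(seed)
    augmented = []
    for _ in range(n_samples):
        t = rng.normal(0.0, sigma_trans, size=3)            # global offset
        noise = rng.normal(0.0, sigma_noise, size=points.shape)
        augmented.append(points + t + noise)
    return augmented

clouds = mc_augment(np.zeros((10, 3)))   # four distorted copies of a toy cloud
```

Each call multiplies the dataset by `n_samples`, with every copy carrying a distinct plausible distortion, which is the diversity the downstream detection and tracking networks train on.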
Abstract: Rapid advances in artificial intelligence and big data have transformed the dynamic computing-resource requirements for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task owing to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without violating Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload-Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for potential Virtual Machine (VM) deployment, contributing to the provision of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
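The fuzzy-logic step maps crisp resource measurements to a graded availability score rather than a hard yes/no. A toy sketch of that idea, assuming triangular membership functions over free CPU and memory fractions; the membership shapes and min-combination are illustrative, not the paper's actual rule base:

```python
def fuzzy_availability(cpu_free, mem_free):
    """Toy fuzzy availability score for VM placement in [0, 1].

    Uses a triangular membership (no availability at or below 20%
    free, full availability at 100% free) on each resource and
    combines them with min, a common fuzzy-AND. Illustrative only.
    """
    def tri(x):
        return max(0.0, min(1.0, (x - 0.2) / 0.8))
    return min(tri(cpu_free), tri(mem_free))
```

A host with 60% free CPU but a full memory bank scores on its weakest resource, which is exactly why a graded score ranks candidate hosts better than a threshold check.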
Funding: Supported by Grant (PLN2022-14) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).
Abstract: Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed, and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when faced with new evaluation objects, and research on integrating distributed storage, processing, and learning functions into a logging big data private cloud has not yet been carried out. This paper aims to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model that integrates physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Following the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management, and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software, and results sharing, accuracy, speed, and complexity.