This review examines human vulnerabilities in cybersecurity within Microfinance Institutions (MFIs), analyzing their impact on organizational resilience. Focusing on social engineering, inadequate security training, and weak internal protocols, the study identifies key vulnerabilities exacerbating cyber threats to MFIs. A literature review using databases such as IEEE Xplore and Google Scholar focused on studies from 2019 to 2023 addressing human factors in cybersecurity specific to MFIs. Analysis of 57 studies reveals that phishing and insider threats are predominant, with a 20% annual increase in phishing attempts. Employee susceptibility to these attacks is heightened by insufficient training, with entry-level employees showing the highest vulnerability rates. Further, only 35% of MFIs offer regular cybersecurity training, a gap that significantly limits incident reduction. This paper recommends more frequent training, robust internal controls, and a cybersecurity-aware culture to mitigate human-induced cyber risks in MFIs.
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
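The abstract above describes a transfer-learning classifier built on ResNet50V2. As a rough illustration only, the sketch below sets up a generic three-class CT classifier on a frozen ResNet50V2 backbone in Keras; the extra residual blocks, preprocessing, and tuning that define HCL Net itself are not reproduced, and the input shape and classification head are assumptions.

```python
# Minimal sketch of a ResNet50V2-based three-class classifier (normal / HCL / GGO),
# assuming 224x224 RGB inputs; not the HCL Net architecture itself.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes=3, input_shape=(224, 224, 3)):
    # Pre-trained ResNet50V2 backbone, frozen for feature extraction.
    backbone = tf.keras.applications.ResNet50V2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```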
To address the problems that data in the user rating matrix are missing and that the importance of implicit trust between users is ignored when the TrustSVD model is used to fill it, this paper proposes a recommendation algorithm based on TrustSVD++ and XGBoost. Firstly, explicit trust and implicit trust are introduced into the SVD++ model to construct the TrustSVD++ model. Secondly, considering that the filled interaction matrix contains a large amount of data, which may lead to a rather complex calculation process, the K-means algorithm is introduced to cluster and extract user and item features at the same time. Then, in order to improve the accuracy of rating prediction for target users, an XGBoost model is trained on the user and item features. Finally, the algorithm is verified on the MovieLens-1M and MovieLens-100k datasets. Experiments show that, compared with the SVD++ model and the recommendation algorithm without XGBoost training, the proposed algorithm reduces the RMSE by 2.9% and the MAE by 3%.
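Since the abstract above combines K-means clustering of latent features with XGBoost rating prediction evaluated by RMSE and MAE, a minimal sketch of that pipeline on synthetic latent features follows; the feature dimensionality, cluster count, and hyper-parameters are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch only: K-means cluster ids appended as extra features, then an
# XGBoost regressor predicting ratings, evaluated with RMSE and MAE.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 16))          # stand-in for latent user/item features
y = np.clip(3 + X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=5000), 1, 5)

# Cluster in latent space and append the cluster id as an additional feature.
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, clusters])

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("MAE:", mean_absolute_error(y_te, pred))
```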
Microfinance institutions in Kenya play a unique role in promoting financial inclusion, loans, and savings provision, especially to low-income individuals and small-scale entrepreneurs. However, despite their benefits, most of their products and programs in Machakos County have been shrinking due to repayment challenges, threatening their financial ability to extend further credit. This could be attributed to ineffective credit scoring models that are not able to capture the nuanced, non-linear repayment behavior and patterns of loan applicants. The research objective was to enhance credit risk scoring for microfinance institutions in Machakos County using supervised machine learning algorithms. The study adopted a mixed research design under a supervised machine learning approach. It randomly sampled 6771 loan application account records and repayment histories. RStudio and the Python programming language were deployed for data pre-processing and analysis. A logistic regression algorithm, XGBoost, and the random forest ensemble method were used. Evaluation metrics included accuracy, Area Under the Curve, and F1-Score. Based on the study findings, XGBoost was the best performer with 83.3% accuracy and a 0.202 Brier score. Development of a legal framework to govern the ethical and open use of machine learning assessment was recommended. Similar research using different machine learning algorithms, locations, and institutions, to ascertain the validity, reliability, and generalizability of the study findings, was recommended for further research.
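As the study above benchmarks logistic regression, random forest, and XGBoost with accuracy, AUC, F1, and the Brier score, the hedged sketch below runs the same kind of comparison on a synthetic, imbalanced loan dataset; the feature set and class balance are made up, so the numbers will not match the reported 83.3% accuracy or 0.202 Brier score.

```python
# Hedged sketch: comparing three classifiers on a synthetic default/no-default dataset
# with the metrics named in the abstract (accuracy, AUC, F1, Brier score).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score, brier_score_loss
from xgboost import XGBClassifier

X, y = make_classification(n_samples=6771, n_features=12, weights=[0.7, 0.3], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=300, random_state=1),
    "xgboost": XGBClassifier(n_estimators=300, learning_rate=0.05, eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(name,
          "acc=%.3f" % accuracy_score(y_te, pred),
          "auc=%.3f" % roc_auc_score(y_te, proba),
          "f1=%.3f" % f1_score(y_te, pred),
          "brier=%.3f" % brier_score_loss(y_te, proba))
```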
Malaysia, as one of the highest producers and largest exporters of palm oil globally, has huge potential to use palm oil waste to generate electricity, since an abundance of waste is produced during the palm oil extraction process. In this paper, we first examine and compare the use of palm oil waste as biomass for electricity generation in different countries with reference to Malaysia. Some rural areas with limited accessibility, like those in Sabah and Sarawak, require a cheap and reliable source of electricity, and palm oil waste has the potential to be that source. Therefore, this research examines the cost-effectiveness of electricity generated from palm oil waste compared with standalone diesel generation in Marudi, Sarawak, Malaysia. This research aims to investigate the potential for electricity generation using palm oil waste and the feasibility of implementing the technology in rural areas. To implement and analyze the feasibility, a case study was carried out in a rural area of Sarawak, Malaysia. The findings present the electricity cost calculation for small towns like Long Lama, Long Miri, and Long Atip, with ten nearby schools, and suggest that using empty fruit bunches (EFB) from palm oil waste is cheaper and reduces greenhouse gas emissions. The study also points out the need for further research on power systems, such as energy storage and microgrids, to better understand the future of power systems. By collecting data through questionnaires and surveys, an analysis was carried out to determine the approximate cost and quantity of palm oil waste needed to generate cheaper renewable energy. We conclude that electricity generation from palm oil waste is cost-effective and beneficial, and that the infrastructure can be a microgrid connected to the main grid.
Wireless technology is transforming the future of transportation through the development of the Internet of Vehicles (IoV). However, intricate security challenges are intertwined with technological progress: Vehicular Ad hoc Networks (VANETs), a core component of IoV, face security issues, particularly the Black Hole Attack (BHA). This malicious attack disrupts the seamless flow of data and threatens the network's overall reliability; BHA also strategically disrupts communication pathways by dropping data packets from legitimate nodes altogether. Recognizing the importance of this challenge, we introduce a new solution called the Ad hoc On-Demand Distance Vector-Reputation-based mechanism with Local Outlier Factor (AODV-RL). The significance of AODV-RL lies in its unique approach: it verifies and confirms the trustworthiness of network components, providing robust protection against BHA. An additional safety layer is established by implementing the Local Outlier Factor (LOF), which detects and addresses abnormal network behaviors. Rigorous testing of our solution has revealed its remarkable ability to enhance communication in VANETs. Specifically, our experiments achieve message delivery ratios of up to 94.25% and minimal packet loss ratios of just 0.297%. Based on these experimental results, the proposed mechanism significantly improves VANET communication reliability and security. These results promise a more secure and dependable future for IoV, capable of transforming transportation safety and efficiency.
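A minimal sketch of the outlier-detection layer mentioned above follows, flagging nodes with anomalous forwarding behavior using scikit-learn's LocalOutlierFactor; the per-node features (packets received, packets forwarded, reputation score) and their distributions are assumptions for illustration, not the AODV-RL implementation.

```python
# Toy sketch: detect black-hole-like nodes as local outliers in behavioral feature space.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)
# Normal nodes forward nearly everything they receive and keep a high reputation.
normal = rng.normal(loc=[100, 97, 0.9], scale=[10, 10, 0.05], size=(95, 3))
# Black-hole nodes receive packets but drop them (few forwards, low reputation).
blackhole = rng.normal(loc=[100, 5, 0.2], scale=[10, 3, 0.05], size=(5, 3))
features = np.vstack([normal, blackhole])

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
labels = lof.fit_predict(features)      # -1 marks outliers, +1 marks inliers
suspect_ids = np.where(labels == -1)[0]
print("suspected black-hole nodes:", suspect_ids)
```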
Spatial heterogeneity refers to the variation or differences in characteristics or features across different locations or areas in space. Spatial data refers to information that explicitly or indirectly belongs to a particular geographic region or location, also known as geo-spatial data or geographic information. Focusing on spatial heterogeneity, we present a hybrid machine learning model combining two competitive algorithms: the Random Forest Regressor and a CNN. The model is fine-tuned using cross-validation for hyper-parameter adjustment and performance evaluation, ensuring robustness and generalization. Our approach integrates global Moran's I for examining global spatial autocorrelation and local Moran's I for assessing local spatial autocorrelation in the residuals. To validate our approach, we implemented the hybrid model on a real-world dataset and compared its performance with that of traditional machine learning models. Results indicate superior performance, with an R-squared of 0.90, outperforming RF (0.84) and CNN (0.74). This study contributes to a detailed understanding of spatial variations in data, considering the geographical information (longitude and latitude) present in the dataset. Our results, also assessed using the Root Mean Squared Error (RMSE), indicate that the hybrid model yielded lower errors, showing a deviation of 53.65% from the RF model and 63.24% from the CNN model. Additionally, the global Moran's I index was observed to be 0.10. This study underscores that the hybrid model was able to correctly predict house prices both in clusters and in dispersed areas.
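Because the abstract above leans on global Moran's I to check spatial autocorrelation in residuals, a small self-contained sketch is included below; it builds a row-standardised k-nearest-neighbour weight matrix from longitude/latitude stand-ins and computes the statistic directly, which is one common convention but not necessarily the weighting scheme used in the study.

```python
# Sketch of global Moran's I on model residuals with a k-NN spatial weight matrix.
# Values near 0 indicate little spatial clustering left in the residuals.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def global_morans_i(values, coords, k=8):
    n = len(values)
    # Row-standardised binary k-NN weights (excluding each point itself).
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(coords)
    _, idx = nbrs.kneighbors(coords)
    W = np.zeros((n, n))
    for i, neigh in enumerate(idx[:, 1:]):
        W[i, neigh] = 1.0 / k

    z = values - values.mean()
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(3)
coords = rng.uniform(size=(500, 2))            # stand-in for (longitude, latitude)
residuals = rng.normal(size=500)               # stand-in for model residuals
print("Moran's I:", round(global_morans_i(residuals, coords), 3))
```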
Pervasive IoT applications enable us to perceive, analyze, control, and optimize traditional physical systems. Recently, security breaches in many IoT applications have indicated that IoT applications may put the physical systems at risk. Severe resource constraints and insufficient security design are two major causes of many security problems in IoT applications. As an extension of the cloud, the emerging edge computing paradigm, with its rich resources, provides a new venue to design and deploy novel security solutions for IoT applications. Although there are some research efforts in this area, edge-based security designs for IoT applications are still in their infancy. This paper aims to present a comprehensive survey of existing IoT security solutions at the edge layer as well as to inspire more edge-based IoT security designs. We first present an edge-centric IoT architecture. Then, we extensively review edge-based IoT security research efforts in the context of security architecture designs, firewalls, intrusion detection systems, authentication and authorization protocols, and privacy-preserving mechanisms. Finally, we propose our insights into future research directions and open research issues.
AIM To identify demographic, clinical, metabolomic, and lifestyle-related predictors of relapse in adult ulcerative colitis (UC) patients. METHODS In this prospective pilot study, UC patients in clinical remission were recruited and followed up at 12 mo to assess whether a clinical relapse had occurred. At baseline, information on demographic and clinical parameters was collected. Serum and urine samples were collected for metabolomic assays using combined direct infusion/liquid chromatography tandem mass spectrometry and nuclear magnetic resonance spectroscopy. Stool samples were also collected to measure fecal calprotectin (FCP). Dietary assessment was performed using a validated self-administered food frequency questionnaire. RESULTS Twenty patients were included (mean age: 42.7 ± 14.8 years, females: 55%). Seven patients (35%) experienced a clinical relapse during the follow-up period. While 6 patients (66.7%) with normal body weight developed a clinical relapse, 1 UC patient (9.1%) who was overweight/obese relapsed during the follow-up (P = 0.02). At baseline, poultry intake was significantly higher in patients who were still in remission during follow-up (0.9 oz vs 0.2 oz, P = 0.002). Five patients (71.4%) with FCP > 150 μg/g and 2 patients (15.4%) with normal FCP (≤ 150 μg/g) at baseline relapsed during the follow-up (P = 0.02). Interestingly, baseline urinary and serum metabolomic profiles of UC patients with or without clinical relapse within 12 mo showed a significant difference. The most important metabolites responsible for this discrimination were trans-aconitate, cystine, and acetamide in urine, and 3-hydroxybutyrate, acetoacetate, and acetone in serum. CONCLUSION A combination of baseline dietary intake, fecal calprotectin, and metabolomic factors is associated with risk of UC clinical relapse within 12 mo.
Neurocognitive deficits are frequently observed in patients with schizophrenia and major depressive disorder (MDD). The relations between cognitive features may be represented by neurocognitive graphs based on cognitive features, modeled as Gaussian Markov random fields. However, it is unclear whether it is possible to differentiate between phenotypic patterns associated with the differential diagnosis of schizophrenia and depression using this neurocognitive graph approach. In this study, we enrolled 215 first-episode patients with schizophrenia (FES), 125 with MDD, and 237 demographically-matched healthy controls (HCs). The cognitive performance of all participants was evaluated using a battery of neurocognitive tests. The graphical LASSO model was trained in a one-vs-one scenario to learn the conditional independence structure of the neurocognitive features of each group. Participants in the holdout dataset were classified into the group with the highest likelihood. A partial correlation matrix was derived from the graphical model to further explore the neurocognitive graph for each group. The classification approach identified the diagnostic class for individuals with an average accuracy of 73.41% for FES vs HC, 67.07% for MDD vs HC, and 59.48% for FES vs MDD. Both the neurocognitive graphs for FES and MDD had more connections and higher node centrality than those for HC. The neurocognitive graph for FES was less sparse and had more connections than that for MDD. Thus, neurocognitive graphs based on cognitive features are promising for describing endophenotypes that may discriminate schizophrenia from depression.
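The graphical LASSO step described above can be sketched compactly with scikit-learn; the snippet below fits a sparse Gaussian graphical model to synthetic standardised cognitive scores and converts the precision matrix into the partial-correlation matrix the study explores, with the sample size and number of features chosen only to mirror the abstract.

```python
# Sketch of the graph-learning step: sparse precision matrix via GraphicalLasso,
# then partial correlations r_ij = -p_ij / sqrt(p_ii * p_jj). Data are synthetic.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
scores = rng.normal(size=(215, 10))            # 215 subjects x 10 cognitive features
X = StandardScaler().fit_transform(scores)

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_

d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)     # off-diagonal partial correlations
np.fill_diagonal(partial_corr, 1.0)
print(partial_corr.round(2))
```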
Genetic improvement for drought stress tolerance in rice involves the quantitative nature of the trait, which reflects the additive effects of several genetic loci throughout the genome. Yield components and related traits under stressed and well-watered conditions were assayed in mapping populations derived from crosses of Azucena×IR64 and Azucena×Bala. To find candidate rice genes underlying Quantitative Trait Loci (QTL) in these populations, we conducted an in silico analysis of a candidate region flanked by the genetic markers RM212 and RM319 on chromosome 1, proximal to the semi-dwarf (sd1) locus. A total of 175 annotated genes were identified in this region. These included 48 genes annotated by functional homology to known genes, 23 pseudogenes, 24 ab initio predicted genes supported by an alignment match to an EST (expressed sequence tag) of unknown function, and 80 hypothetical genes predicted solely by ab initio means. Among these, 16 candidate genes could potentially be involved in the drought stress response.
The rapid spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools. In this article, a hybrid approach, in terms of both datasets and methodology, is proposed for detecting COVID-19, pneumonia, and normal conditions in chest X-ray images (CXIs), utilizing a previously unexplored dataset obtained from a private hospital and coupled with Explainable Artificial Intelligence (XAI). Our study leverages minimal preprocessing with pre-trained cutting-edge models like InceptionV3, VGG16, and VGG19 that excel at feature extraction. The methodology is further enhanced by the inclusion of the t-SNE (t-Distributed Stochastic Neighbor Embedding) technique for visualizing the extracted image features and Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve images before feature extraction. Additionally, an Attention Mechanism is utilized, which helps clarify how the model makes decisions and builds trust in artificial intelligence (AI) systems. To evaluate the effectiveness of the proposed approach, both benchmark datasets and a private dataset obtained with permission from Jinnah Postgraduate Medical Center (JPMC) in Karachi, Pakistan, are utilized. In 12 experiments, VGG19 showcased remarkable performance in the hybrid dataset approach, achieving 100% accuracy in COVID-19 vs. pneumonia classification and 97% in distinguishing normal cases. Overall, across all classes, the approach achieved 98% accuracy, demonstrating its efficiency in detecting COVID-19 and differentiating it from other chest conditions (pneumonia and healthy) while also providing insights into the decision-making process of the models.
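Two of the building blocks named above, CLAHE enhancement and deep-feature extraction with a pre-trained VGG19, are easy to sketch; the snippet below uses a synthetic grayscale array as a stand-in for a chest X-ray, so the image source, sizes, and pooling choice are assumptions rather than the paper's pipeline.

```python
# Hedged sketch: CLAHE contrast enhancement (OpenCV) followed by VGG19 feature extraction.
import cv2
import numpy as np
import tensorflow as tf

def clahe_enhance(gray_image):
    # Contrast Limited Adaptive Histogram Equalization on an 8-bit grayscale image.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_image)

def extract_vgg19_features(image_bgr):
    # Resize and run through VGG19 without its classification head.
    img = cv2.resize(image_bgr, (224, 224)).astype("float32")
    x = tf.keras.applications.vgg19.preprocess_input(img[np.newaxis, ...])
    backbone = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                           pooling="avg")
    return backbone.predict(x)                 # shape (1, 512) feature vector

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)   # stand-in for a CXI
enhanced = clahe_enhance(gray)
features = extract_vgg19_features(cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR))
print(features.shape)
```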
This paper introduces a cutting-edge framework for personalized chronic pain management, leveraging the power of artificial intelligence (AI) and personality insights. It explores the intricate relationship between personality traits and pain perception, expression, and management, identifying key correlations that influence an individual’s experience of pain. By integrating personality psychology with AI-driven personality assessment, this framework offers a novel approach to tailoring chronic pain management strategies to each patient’s unique personality profile. It highlights the relevance of well-established personality theories such as the Big Five and the Myers-Briggs Type Indicator (MBTI) in shaping personalized pain management plans. Additionally, the paper introduces multimodal AI-driven personality assessment, emphasizing the ethical considerations and data collection processes necessary for its implementation. Through illustrative case studies, the paper exemplifies how this framework can lead to more effective and patient-centered pain relief, ultimately enhancing overall well-being. In conclusion, the paper positions the need for an “AI-Powered Holistic Pain Management Initiative,” which has the potential to transform chronic pain management by providing personalized, data-driven solutions and to create a multifaceted research impact influencing clinical practice, patient outcomes, healthcare policy, and the broader scientific community’s understanding of personalized medicine and AI-driven interventions.
Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from the overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum-log-rate of all users in two adjacent cells. To tackle the complicated joint optimization problem, we first design a genetic-based algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm according to the branch-and-bound method, so as to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on the sequential convex approximation theory. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and the proposed algorithms can substantially improve the network performance in comparison with the other schemes.
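The objective the abstract above maximises, the sum-log-rate over users, can be written out numerically; the toy sketch below computes Shannon rates from assumed bandwidths, powers, channel gains, and noise power and sums their logarithms, without any of the paper's joint optimization.

```python
# Toy numeric sketch of the proportional-fair objective: sum of log(rate) over users.
# Bandwidths, powers, channel gains, and noise power are arbitrary illustrative values.
import numpy as np

def sum_log_rate(bandwidth_hz, power_w, channel_gain, noise_w):
    snr = power_w * channel_gain / noise_w
    rates = bandwidth_hz * np.log2(1.0 + snr)          # bits per second per user
    return rates, np.sum(np.log(rates))

bw = np.full(6, 2e6)                                   # 2 MHz per user
power = np.array([0.5, 0.8, 1.0, 0.6, 0.9, 0.7])       # transmit power in watts
gain = np.array([1e-9, 5e-10, 2e-9, 8e-10, 1.5e-9, 6e-10])
rates, utility = sum_log_rate(bw, power, gain, noise_w=1e-13)
print(rates / 1e6, "Mbit/s  ->  sum-log-rate utility:", round(utility, 2))
```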
In this paper, a novel algorithm for aerosol optical depth (AOD) retrieval with a 1 km spatial resolution over land is presented, using the Advanced Along Track Scanning Radiometer (AATSR) dual-view capability at 0.55, 0.66 and 0.87 μm in combination with the Bi-directional Reflectance Distribution Function (BRDF) model, a product of the Moderate Resolution Imaging Spectroradiometer (MODIS). The BRDF characteristics of the land surface, i.e. the prior input parameters for this algorithm, are computed by extracting the geometrical information from AATSR and reducing the kernels from the MODIS BRDF/Albedo Model Parameters Product. Finally, AOD with a 1 km resolution at 0.55, 0.66 and 0.87 μm for the forward and nadir views of AATSR can be obtained simultaneously. Extensive validations of AOD derived from AATSR during the period from August 2005 to July 2006 in Beijing and its surrounding area, against in-situ AErosol RObotic NETwork (AERONET) measurements, were performed. The AOD difference between the retrievals from the forward and nadir views of AATSR was less than 5.72%, 1.9% and 13.7%, respectively. Meanwhile, it was found that the AATSR retrievals using the synergic algorithm developed in this paper are more favorable than those obtained by assuming a Lambertian surface, as the coefficient of determination between AATSR-derived AOD and AERONET-measured AOD decreased by 15.5% and 18.5% under the Lambertian assumption compared to the synergic algorithm. This further suggests that the synergic algorithm can potentially be used in climate change and air quality monitoring.
Virtual Reality (VR) is a key industry for the development of the digital economy in the future. Mobile VR has advantages in terms of mobility, light weight, and cost-effectiveness, and has gradually become the mainstream implementation of VR. In this paper, a mobile VR video adaptive transmission mechanism based on an intelligent caching and hierarchical buffering strategy in Mobile Edge Computing (MEC)-equipped 5G networks is proposed, aiming at the low latency requirements of mobile VR services and flexible buffer management for VR video adaptive transmission. First, to support proactive caching of VR content and intelligent buffer management, users' behavioral similarity and head movement trajectories are jointly used for viewpoint prediction, and the tile-based content is proactively cached in the MEC nodes based on the popularity of the VR content. Second, a hierarchical buffer-based adaptive update algorithm is presented, which jointly considers bandwidth, buffer, and predicted viewpoint status to update the tile chunks in the client buffer. Then, following the decomposition of the problem, the buffer update problem is modeled as an optimization problem, and the corresponding solution algorithms are presented. Finally, the simulation results show that the adaptive caching algorithm based on the 5G intelligent edge and the hierarchical buffer strategy can improve the user experience in the case of bandwidth fluctuations, and the proposed viewpoint prediction method can significantly improve the accuracy of viewpoint prediction by 15%.
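A toy version of the popularity-driven tile caching mentioned above is sketched next: it simply keeps the most-requested tiles in a fixed-size MEC cache, using a synthetic request trace; the real mechanism additionally couples this with viewpoint prediction and buffer state, which are not modeled here.

```python
# Toy sketch of popularity-based tile caching: rank tiles by request count and
# keep the top-k in the edge cache. The request trace is synthetic.
from collections import Counter

def popular_tiles(request_trace, cache_size):
    counts = Counter(request_trace)
    return [tile for tile, _ in counts.most_common(cache_size)]

trace = ["t3", "t1", "t3", "t7", "t3", "t1", "t2", "t1", "t3", "t5"]
print("cached tiles:", popular_tiles(trace, cache_size=3))
```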
Recently, several edge deployment types, such as on-premise edge clusters, Unmanned Aerial Vehicle (UAV)-attached edge devices, and telecommunication base stations installed with edge clusters, are being deployed to enable faster response times for latency-sensitive tasks. One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and to achieve high resource utilization. Existing approaches naively dispatch tasks at random to available edge nodes without considering the resource demands of tasks, inter-dependencies of tasks, and edge resource availability. These approaches can result in longer waiting times for tasks due to insufficient resource availability or dependency support, as well as provider lock-in. Therefore, we present Edge Colla, which is based on the integration of edge resources running across multi-edge deployments. Edge Colla leverages learning techniques to intelligently dispatch multi-dependent tasks, and a variant of bin-packing optimization to co-locate these tasks firmly on available nodes so as to utilize them optimally. Extensive experiments on real-world Alibaba datasets on task dependencies show that our approach can achieve better performance than the baseline schemes.
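For the co-location step above, a first-fit-decreasing bin-packing pass is one standard way to pack tasks onto capacity-limited nodes; the sketch below illustrates that general technique on CPU demand alone and is not Edge Colla's variant, which also handles task dependencies and learned dispatching.

```python
# Illustrative first-fit-decreasing bin packing: place tasks (by CPU demand) onto
# nodes with fixed capacity so fewer nodes stay partially idle.
def first_fit_decreasing(task_demands, node_capacity):
    """Return a list of nodes, each a list of (task_id, demand) placements."""
    nodes = []        # placements per node
    remaining = []    # remaining capacity per node
    for task_id, demand in sorted(enumerate(task_demands),
                                  key=lambda t: t[1], reverse=True):
        for i, free in enumerate(remaining):
            if demand <= free:
                nodes[i].append((task_id, demand))
                remaining[i] -= demand
                break
        else:
            # No existing node fits this task: open a new node.
            nodes.append([(task_id, demand)])
            remaining.append(node_capacity - demand)
    return nodes

placement = first_fit_decreasing([2.0, 1.5, 3.0, 0.5, 2.5, 1.0], node_capacity=4.0)
for i, tasks in enumerate(placement):
    print(f"node {i}: {tasks}")
```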
The world is rapidly changing with the advance of information technology. The expansion of the Internet of Things (IoT) is a huge step in the development of the smart city. The IoT consists of connected devices that transfer information. The IoT architecture permits on-demand services to a public pool of resources. Cloud computing plays a vital role in developing IoT-enabled smart applications. The integration of cloud computing enhances the offering of distributed resources in the smart city. Improper management of the security requirements of cloud-assisted IoT systems can bring about risks to availability, security, performance, confidentiality, and privacy. The key reason for cloud- and IoT-enabled smart city application failure is improper security practices at the early stages of development. This article proposes a framework to collect security requirements during the initial development phase of cloud-assisted IoT-enabled smart city applications. Its three-layered architecture includes privacy-preserved stakeholder analysis (PPSA), security requirement modeling and validation (SRMV), and secure cloud-assistance (SCA). A case study highlights the applicability and effectiveness of the proposed framework. A hybrid survey enables the identification and evaluation of significant challenges.
This paper presents a novel observer model that integrates quantum mechanics, relativity, idealism, and the simulation hypothesis to explain the quantum nature of the universe. The model posits a central server transmitting multi-media frames to create observer-dependent realities. Key aspects include deriving frame rates, defining quantum reality, and establishing hierarchical observer structures. The model’s impact on quantum information theory and philosophical interpretations of reality is examined, with detailed discussions on information loss and recursive frame transmission in the appendices.
Big Data applications are pervading more and more aspects of our life, encompassing commercial and scientific uses at increasing rates as we move towards exascale analytics. Examples of Big Data applications include storing and accessing user data in commercial clouds, mining of social data, and analysis of large-scale simulations and experiments such as the Large Hadron Collider. An increasing number of such data-intensive applications and services are relying on clouds in order to process and manage the enormous amounts of data required for continuous operation. It can be difficult to decide which of the many options for cloud processing is suitable for a given application; the aim of this paper is therefore to provide an interested user with an overview of the most important concepts of cloud computing as it relates to processing of Big Data.