RETRACTION: P. Goyal and R. Malviya, "Challenges and Opportunities of Big Data Analytics in Healthcare," Health Care Science 2, no. 5 (2023): 328-338, https://doi.org/10.1002/hcs2.66. The above article, published online on 4 October 2023 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the journal Editor-in-Chief, Zongjiu Zhang; Tsinghua University Press; and John Wiley & Sons Ltd.
In response to the severe challenges posed by the ongoing war in Ukraine, EOS Data Analytics has launched the "Harvest of Hope" initiative, which draws attention to the crisis engulfing Ukraine's agricultural sector. The comprehensive web page features an interactive map showing historical and forecast yields of Ukraine's major crops for 2021-2024. The initiative also describes the current state of Ukrainian agriculture and its impact on global food security. As part of its commitment to supporting Ukrainian farmers, the company will provide them with free access to the EOSDA Crop Monitoring service in 2024 under the "Harvest of Hope" initiative. The platform is intended to help farmers overcome adversity and secure a sustainable future for Ukraine's agricultural sector.
This paper focuses on facilitating state-of-the-art applications of big data analytics (BDA) architectures and infrastructures in the telecommunications (telecom) industry. Telecom companies deal with terabytes to petabytes of data on a daily basis, and IoT applications in telecom are further contributing to this data deluge. Recent advances in BDA have exposed new opportunities for obtaining actionable insights from telecom big data. These benefits and the fast-changing BDA technology landscape make it important to investigate existing BDA applications in the telecom sector. To this end, we initially determine published research on BDA applications in telecom through a systematic literature review, through which we filter 38 articles and categorize them into frameworks, use cases, literature reviews, white papers, and experimental validations. We also discuss the benefits and challenges mentioned in these articles. We find that the experiments are all proofs of concept (POC) on a severely limited BDA technology stack (compared with the available technology stack); i.e., we did not find any work focusing on a full-fledged BDA implementation in an operational telecom environment. To facilitate such applications at the research level, we propose a state-of-the-art lambda architecture for BDA pipeline implementation (called LambdaTel), based entirely on open-source BDA technologies and the standard Python language, along with relevant guidelines. We discovered only one research paper that presented a relatively limited lambda architecture using the proprietary AWS cloud infrastructure. We believe LambdaTel presents a clear roadmap for telecom industry practitioners to implement and enhance BDA applications in their enterprises.
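The abstract above centers on a lambda architecture, so a minimal, library-agnostic Python sketch of that pattern may help readers unfamiliar with it: a batch layer periodically recomputes views from the immutable master dataset, a speed layer keeps incremental real-time views, and a serving layer merges the two at query time. The class and field names are illustrative assumptions; this is not the LambdaTel implementation or its open-source stack.

```python
# Minimal, illustrative sketch of the lambda pattern: batch layer recomputes
# views from the master dataset, speed layer maintains incremental real-time
# views, serving layer merges both at query time. Names are hypothetical.
from collections import defaultdict

class LambdaPipeline:
    def __init__(self):
        self.master_dataset = []               # append-only raw events (batch input)
        self.batch_view = defaultdict(int)     # precomputed view, rebuilt periodically
        self.realtime_view = defaultdict(int)  # incremental view (speed layer)

    def ingest(self, event):
        """New events go to both the master dataset and the speed layer."""
        self.master_dataset.append(event)
        self.realtime_view[event["subscriber"]] += event["bytes"]

    def run_batch(self):
        """Periodic full recomputation; afterwards the speed-layer view is reset."""
        view = defaultdict(int)
        for event in self.master_dataset:
            view[event["subscriber"]] += event["bytes"]
        self.batch_view = view
        self.realtime_view.clear()

    def query(self, subscriber):
        """Serving layer: merge batch and real-time views."""
        return self.batch_view[subscriber] + self.realtime_view[subscriber]

pipeline = LambdaPipeline()
pipeline.ingest({"subscriber": "A", "bytes": 512})
pipeline.run_batch()
pipeline.ingest({"subscriber": "A", "bytes": 128})
print(pipeline.query("A"))  # 640: batch view plus fresh speed-layer data
```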
The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and for compliance and regulatory requirements at a global scale. Analysis of this big data allows for boundless potential outcomes for discovering knowledge. Big data analytics (BDA) in healthcare can, for instance, help determine causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap to derive insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We initially determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture called Med-BDA for the healthcare domain, which solves all current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. The aforementioned contributions of our work are collectively unique and clearly present a roadmap for clinical administrators, practitioners, and professionals to successfully implement BDA initiatives in their organizations.
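Since the entry above argues that NoSQL document stores suit heterogeneous healthcare records, a brief hedged sketch with MongoDB (via pymongo) illustrates the point: documents in one collection need not share a schema. The database, collection, and field names are hypothetical and unrelated to the Med-BDA design.

```python
# Hedged sketch of storing and querying schema-flexible patient records in a
# NoSQL document store. Assumes a local MongoDB instance; names are invented.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumption: local MongoDB
admissions = client["hospital_demo"]["admissions"]

admissions.insert_many([
    {"patient_id": 1, "diagnosis": "T2DM", "hba1c": 8.1, "readmitted": True},
    {"patient_id": 2, "diagnosis": "CHF", "imaging": ["echo.dcm"], "readmitted": False},
])

# Ad-hoc analytical query over records with differing fields.
high_risk = admissions.count_documents({"readmitted": True})
print(f"readmitted patients: {high_risk}")
```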
Big Data and Data Analytics affect almost all aspects of modern organisations' decision-making and business strategies. Big Data and Data Analytics create opportunities, challenges, and implications for the external auditing procedure. The purpose of this article is to reveal essential aspects of the impact of Big Data and Data Analytics on external auditing. It appears that Big Data Analytics is a critical tool for organisations, as well as for auditors, that contributes to the enhancement of the auditing process. Also, legislative implications must be taken into consideration, since existing standards may need to change. Lastly, auditors need to develop new skills and competences, and educational organisations need to change their educational programmes in order to correspond to new market needs.
Big data analytics has been widely adopted by large companies to achieve measurable benefits, including increased profitability, customer demand forecasting, cheaper product development, and improved stock control. Small and medium-sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics, despite the competitive advantage they could achieve. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool, which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group composed of experienced practitioners. The results of the evaluation are presented together with a discussion, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool is beneficial for SMEs seeking to achieve competitive advantage by increasing the application of business intelligence and big data analytics.
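As a rough illustration of how a framework can be turned into a scoring tool of the kind described, the sketch below weights a few adoption dimensions and converts interview scores into a readiness percentage. The dimensions, weights, and scale are invented for demonstration and are not those of the published framework.

```python
# Illustrative weighted scoring tool: each adoption dimension gets a weight
# and a 1-5 maturity score, and the weighted total positions the SME.
# Dimensions and weights are hypothetical assumptions.
ASSESSMENT = {
    # dimension: (weight, score 1-5 from a structured interview)
    "data_skills":       (0.30, 2),
    "it_infrastructure": (0.25, 3),
    "management_buy_in": (0.25, 4),
    "data_availability": (0.20, 2),
}

total = sum(weight * score for weight, score in ASSESSMENT.values())
readiness = total / 5 * 100  # percentage of the maximum attainable score
print(f"BDA readiness: {readiness:.0f}%")
```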
To obtain the platform's big data analytics support, manufacturers in the traditional retail channel must decide whether to use the direct online channel. A retail supply chain model and a direct online supply chain model are built, in which manufacturers design products alone in the retail channel, while the platform and the manufacturer complete the product design together in the direct online channel. These two models are analyzed using a game-theoretical model and numerical simulation. The findings indicate that if the manufacturer's design capabilities are not very high and the commission rate is not very low, the manufacturers will choose the direct online channel when the platform's technical efforts are within a certain interval. When the platform's technical efforts are exogenous, they positively influence the manufacturers' decisions; however, in the endogenous case, the platform's effect on the manufacturers is reflected in the interaction of the commission rate and cost efficiency. The manufacturers and the platform should make comprehensive effort decisions based on the manufacturer's development capabilities, the intensity of market competition, and the cost efficiency of the platform.
In recent years, huge volumes of healthcare data have been generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this regard, the current research work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation, such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as the feature extractor. A Stochastic Gradient Descent (SGD) model is used for the parameter tuning process. Furthermore, a CSO with Long Short-Term Memory (CSO-LSTM) model is employed as the classification model to determine the appropriate class labels. Both the SGD and CSO design approaches help in improving the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the supremacy of the proposed BDA-CSODL technique under different measures.
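To make the feature-extraction stage above concrete, here is a minimal sketch that uses a pre-trained Inception v3 backbone (via tf.keras) to turn images into fixed-length feature vectors that a downstream classifier such as the paper's CSO-LSTM could consume. The input size and the random stand-in images are assumptions; this is not the authors' implementation.

```python
# Minimal sketch of Inception v3 as a fixed feature extractor for medical
# images. Random arrays stand in for real (pre-segmented) scans.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def extract_features(image_batch):
    """image_batch: float array of shape (n, 299, 299, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(image_batch), verbose=0)

dummy = np.random.rand(2, 299, 299, 3) * 255.0   # stand-in for real scans
features = extract_features(dummy)
print(features.shape)  # (2, 2048) feature vectors for a downstream classifier
```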
With its high computational capacity, e.g., many cores and wide floating-point SIMD units, the Intel Xeon Phi shows promising prospects for accelerating high-performance computing (HPC) applications. However, the applicability of the Intel Xeon Phi to data analytics workloads in the data center is still an open question. PhiBench 2.0 is built for the latest generation of Intel Xeon Phi (KNL, Knights Landing), based on the prior work PhiBench (also named BigDataBench-Phi), which was designed for the former generation of Intel Xeon Phi (KNC, Knights Corner). The workloads of PhiBench 2.0 are carefully chosen based on BigDataBench 4.0 and PhiBench 1.0. In addition, these workloads are well optimized on KNL and run on real-world datasets to evaluate their performance and scalability. Further, the microarchitecture-level characteristics, including CPI, cache behavior, vectorization intensity, and branch prediction efficiency, are analyzed, and the impact of affinity and scheduling policy on performance is investigated. It is believed that these observations will help other researchers working on Intel Xeon Phi and data analytics workloads.
Lately, Internet of Things (IoT) applications generate millions of structured and unstructured data records and face numerous problems, such as data organization, production, and capture. To address these shortcomings, big data analytics is the most suitable technology to adopt. Even though big data and the IoT can make human life more convenient, those benefits come at the expense of security. To manage these kinds of threats, intrusion detection systems have been extensively applied to identify malicious network traffic, particularly once preventive techniques fail at the level of endpoint IoT devices. As cyberattacks targeting the IoT have gradually become stealthier and more sophisticated, intrusion detection systems (IDS) must continually evolve to manage emerging security threats. This study devises a Big Data Analytics with Internet of Things Assisted Intrusion Detection using Modified Buffalo Optimization Algorithm with Deep Learning (IDMBOA-DL) algorithm. In the presented IDMBOA-DL model, the Hadoop MapReduce tool is exploited for managing big data. The MBOA algorithm is applied to derive an optimal subset of features from the full feature set. Finally, the sine cosine algorithm (SCA) with a convolutional autoencoder (CAE) mechanism is utilized to recognize and classify intrusions in the IoT network. A wide range of simulations was conducted to demonstrate the enhanced results of the IDMBOA-DL algorithm. The comparison outcomes emphasize the better performance of the IDMBOA-DL model over other approaches.
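A convolutional autoencoder (CAE) like the one named above is often used by reconstructing benign traffic and flagging records with high reconstruction error; the hedged Keras sketch below shows that shape of solution. The 32-feature window, layer sizes, and thresholding rule are illustrative assumptions, not the IDMBOA-DL configuration.

```python
# Hedged sketch of a 1-D convolutional autoencoder for traffic features:
# trained on benign windows, high reconstruction error flags anomalies.
import numpy as np
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(32, 1))                 # 32 selected traffic features
x = layers.Conv1D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling1D(2)(x)                        # encoder
x = layers.Conv1D(16, 3, activation="relu", padding="same")(x)
x = layers.UpSampling1D(2)(x)                        # decoder
outputs = layers.Conv1D(1, 3, activation="sigmoid", padding="same")(x)

cae = Model(inputs, outputs)
cae.compile(optimizer="adam", loss="mse")

benign = np.random.rand(256, 32, 1)                  # stand-in for normal traffic
cae.fit(benign, benign, epochs=2, verbose=0)

test = np.random.rand(4, 32, 1)
errors = np.mean((cae.predict(test, verbose=0) - test) ** 2, axis=(1, 2))
print(errors > errors.mean())                        # crude anomaly flags
```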
Big Data applications face different types of complexity in classification. Cleaning and purifying data by eliminating irrelevant or redundant data becomes a complex operation for big data applications when attempting to maintain discriminative features in the processed data. Existing schemes have many disadvantages, including the need for continuous training, more samples and training time for feature selection, and increased classification execution times. Recently, ensemble methods have made a mark in classification tasks, as they combine multiple results into a single representation. Compared with a single model, this technique offers improved prediction. Ensemble-based feature selection parallels multiple experts' judgments on a single topic. The major goal of this research is to propose HEFSM (Heterogeneous Ensemble Feature Selection Model), a hybrid approach that combines multiple algorithms. Further, the individual outputs produced by methods yielding feature subsets, rankings, or votes are also combined in this work. A KNN (K-Nearest Neighbor) classifier is used to classify the big dataset obtained from the ensemble learning approach. The results of the study are good, proving the proposed model's efficiency in classification in terms of the performance metrics used: precision, recall, F-measure, and accuracy.
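In the spirit of the heterogeneous ensemble feature selection described above, the following scikit-learn sketch lets three different selectors rank the features, aggregates the ranks, and feeds the top features to a KNN classifier. The particular selectors, the number of features kept, and the synthetic data are assumptions rather than the published HEFSM design.

```python
# Sketch: heterogeneous ensemble feature selection by rank aggregation,
# followed by a KNN classifier on the selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

# Relevance scores from three heterogeneous selectors (higher = more relevant).
scores = [
    f_classif(X, y)[0],
    mutual_info_classif(X, y, random_state=0),
    RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y).feature_importances_,
]
# Rank aggregation: average each feature's rank across the selectors.
ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)
top_k = np.argsort(ranks)[:10]

X_train, X_test, y_train, y_test = train_test_split(X[:, top_k], y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"accuracy on selected features: {knn.score(X_test, y_test):.2f}")
```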
Climate change and global warming result in natural hazards, including flash floods. Flash floods can create blue spots: areas where transport networks (roads, tunnels, bridges, passageways) and other engineering structures within them are at flood risk. Studies of the economic and social impact of flooding reveal that the damage caused by flash floods leading to blue spots is very high in dollar terms and in direct impacts on people's lives. The impact of flooding within blue spots is either infrastructural or social, affecting lives and properties. Currently, more than 16.1 million properties in the U.S. are vulnerable to flooding, and this is projected to increase by 3.2% within the next 30 years. Several models have been developed for flood risk analysis and management, including hydrological models, algorithms, machine learning, and geospatial models. The models and methods reviewed are based on location data collection, statistical analysis and computation, and visualization (mapping). This research aims to create a blue spots model for the State of Tennessee using the ArcGIS visual programming language (model) and a data analytics pipeline.
The information gained after data analysis is vital for implementing its outcomes to optimize processes and systems for more straightforward problem-solving. Therefore, the first step of data analytics deals with identifying data requirements, mainly how the data should be grouped or labeled. For example, for data about cybersecurity in organizations, grouping can be done into categories such as denial of service (DoS), unauthorized access from local or remote hosts, and surveillance and other probing. Next, after identifying the groups, a researcher, or whoever is carrying out the data analytics, goes out into the field and collects the data. The data collected is then organized in an orderly fashion to enable easy analysis. We aim to study different articles and compare performances for each algorithm in order to choose the most suitable classifier.
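The final step mentioned above, comparing algorithms to choose the most suitable classifier, can be sketched with cross-validation over several candidate models; the synthetic data and the three candidates below are stand-ins chosen for demonstration only.

```python
# Sketch: compare candidate classifiers on the same labelled data via
# cross-validation and keep the most suitable one.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for labelled traffic: classes 0-2 ~ DoS, unauthorized access, probing.
X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "naive Bayes": GaussianNB(),
}
for name, model in candidates.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:14s} mean accuracy: {acc:.3f}")
```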
Big data analytics is emerging as one of the most important classes of workloads in modern data centers. Hence, it is of great interest to identify how to achieve the best performance for big data analytics workloads running on state-of-the-art SMT (simultaneous multithreading) processors, which requires a comprehensive understanding of workload characteristics. This paper chooses Spark workloads as representative big data analytics workloads and performs comprehensive measurements on the POWER8 platform, which supports a wide range of multithreading. The research finds that the thread assignment policy and cache contention have significant impacts on application performance. In order to identify potential optimization methods from the experimental results, this study performs microarchitecture-level characterizations by means of hardware performance counters and gives implications accordingly.
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it is important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
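As a hedged example of the data modelling the study refers to, the sketch below fits a logistic regression to a few candidate risk factors and reads off odds ratios. The column names and synthetic values are hypothetical and do not come from the study's 600-patient dataset.

```python
# Sketch: logistic regression over candidate GDM risk factors, with odds
# ratios read from the fitted coefficients. Data are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(30, 5, 600),
    "bmi": rng.normal(27, 4, 600),
    "fasting_glucose": rng.normal(90, 12, 600),
})
# Synthetic outcome loosely driven by BMI and fasting glucose.
logit = 0.08 * (df["bmi"] - 27) + 0.05 * (df["fasting_glucose"] - 90) - 1.5
df["gdm"] = rng.random(600) < 1 / (1 + np.exp(-logit))

features = ["age", "bmi", "fasting_glucose"]
model = LogisticRegression(max_iter=1000).fit(df[features], df["gdm"])
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(features, odds_ratios.round(2))))  # odds ratio per unit increase
```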
Data science is an interdisciplinary field that employs big data, machine learning algorithms, data mining techniques, and scientific methodologies to extract insights and information from massive amounts of structured and unstructured data. The healthcare industry constantly creates large, important databases on patient demographics, treatment plans, results of medical exams, insurance coverage, and more. The data that IoT (Internet of Things) devices collect is also of interest to data scientists. Data science can help with the healthcare industry's massive amounts of disparate, structured, and unstructured data by processing, managing, analyzing, and integrating it. To get reliable findings from these data, proper management and analysis are essential. This article provides a comprehensive study and discussion of process data analysis as it pertains to healthcare applications. The article discusses the advantages and disadvantages of using big data analytics (BDA) in the medical industry. The insights offered by BDA, which can also aid in making strategic decisions, can assist the healthcare system.
As financial criminal methods become increasingly sophisticated, traditional anti-money laundering and fraud detection approaches face significant challenges. This study focuses on the application technologies and challenges of big data analytics in anti-money laundering and financial fraud detection. The research begins by outlining the evolutionary trends of financial crimes and highlighting the new characteristics of the big data era. Subsequently, it systematically analyzes the application of big data analytics technologies in this field, including machine learning, network analysis, and real-time stream processing. Through case studies, the research demonstrates how these technologies enhance the accuracy and efficiency of anomalous transaction detection. However, the study also identifies challenges faced by big data analytics, such as data quality issues, algorithmic bias, and privacy protection concerns. To address these challenges, the research proposes solutions from both technological and managerial perspectives, including the application of privacy-preserving technologies like federated learning. Finally, the study discusses the development prospects of Regulatory Technology (RegTech), emphasizing the importance of synergy between technological innovation and regulatory policies. This research provides guidance for financial institutions and regulatory bodies in optimizing their anti-money laundering and fraud detection strategies.
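One of the techniques named above, unsupervised detection of anomalous transactions, can be sketched with an isolation forest; the features and contamination rate below are illustrative assumptions, not a production anti-money-laundering model.

```python
# Sketch: unsupervised anomalous-transaction detection with an isolation
# forest over a few hand-picked transaction features (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# amount, hour-of-day, transfers-in-last-24h for mostly normal activity...
normal = np.column_stack([rng.lognormal(3, 1, 1000),
                          rng.integers(8, 20, 1000),
                          rng.poisson(2, 1000)])
# ...plus a few rapid, high-value transfers that should stand out.
suspicious = np.column_stack([rng.lognormal(7, 0.3, 10),
                              rng.integers(0, 5, 10),
                              rng.poisson(15, 10)])
X = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)          # -1 marks an anomalous transaction
print(f"flagged transactions: {(flags == -1).sum()}")
```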
The amount and scale of worldwide data centers grow rapidly in the era of big data, leading to massive energy consumption and formidable carbon emissions. To achieve the efficient and sustainable development of the information technology (IT) industry, researchers have proposed scheduling data or data analytics jobs to data centers with low electricity prices and carbon emission rates. However, due to the highly heterogeneous and dynamic nature of geo-distributed data centers in terms of resource capacity, electricity price, and the rate of carbon emissions, it is quite difficult to optimize the electricity cost and carbon emissions of data centers over a long period. In this paper, we propose an energy-aware data backup and job scheduling method with minimal cost (EDJC) to minimize the electricity cost of geo-distributed data analytics jobs and simultaneously ensure the long-term carbon emission budget of each data center. Specifically, we first design a cost-effective data backup algorithm to generate a data backup strategy that minimizes cost based on historical job requirements. After that, based on the data backup strategy, we utilize an online carbon-aware job scheduling algorithm to calculate the job scheduling strategy in each time slot. In this algorithm, we use Lyapunov optimization to decompose the long-term job scheduling optimization problem into a series of real-time job scheduling optimization subproblems, thereby minimizing the electricity cost while satisfying the carbon emission budget. The experimental results show that the EDJC method can significantly reduce the total electricity cost of the data centers while meeting their carbon emission constraints.
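The per-slot scheduling idea above rests on the standard Lyapunov drift-plus-penalty pattern; the sketch below shows that pattern only, with a virtual queue tracking each data center's carbon debt and each job routed to the center minimizing a weighted sum of electricity cost and queue-scaled emissions. Prices, emission rates, budgets, and the weight V are invented numbers, and this is not the full EDJC algorithm.

```python
# Sketch of the generic drift-plus-penalty scheduling pattern: virtual queues
# keep long-term emissions near per-center budgets while cost is traded off.
import random

CENTERS = ["dc_east", "dc_west", "dc_north"]
BUDGET_PER_SLOT = {c: 5.0 for c in CENTERS}      # allowed carbon per slot (assumed)
queues = {c: 0.0 for c in CENTERS}               # virtual carbon-debt queues
V = 10.0                                         # cost/carbon trade-off weight (assumed)

random.seed(0)
for slot in range(5):
    price = {c: random.uniform(0.05, 0.20) for c in CENTERS}   # $/kWh, invented
    carbon = {c: random.uniform(2.0, 8.0) for c in CENTERS}    # kg CO2 per job, invented

    # Per-slot decision: minimize drift-plus-penalty for the arriving job.
    choice = min(CENTERS, key=lambda c: V * price[c] + queues[c] * carbon[c])

    # Virtual queue update keeps long-term emissions near the budget.
    for c in CENTERS:
        used = carbon[c] if c == choice else 0.0
        queues[c] = max(queues[c] + used - BUDGET_PER_SLOT[c], 0.0)

    rounded = {c: round(q, 1) for c, q in queues.items()}
    print(f"slot {slot}: job -> {choice}, carbon-debt queues = {rounded}")
```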
Buildings have a significant impact on global sustainability. During the past decades, a wide variety of studies have been conducted throughout the building lifecycle to improve building performance. The data-driven approach has been widely adopted owing to the less detailed building information it requires and its high computational efficiency for online applications. Recent advances in information technologies and data science have enabled convenient access, storage, and analysis of massive on-site measurements, bringing about a new big-data-driven research paradigm. This paper presents a critical review of data-driven methods, particularly those based on larger datasets, for building energy modeling and their practical applications for improving building performance. The paper is organized around the four essential phases of big-data-driven modeling, i.e., data preprocessing, model development, knowledge post-processing, and practical applications throughout the building lifecycle. Typical data analysis and application methods are summarized and compared at each stage, based upon which in-depth discussions and future research directions are presented. This review demonstrates that the insights obtained from big building data can be extremely helpful for enriching the existing knowledge repository regarding building energy modeling. Furthermore, considering the ever-increasing development of smart buildings and IoT-driven smart cities, the big-data-driven research paradigm will become an essential supplement to existing scientific research methods in the building sector.
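As a small illustration of the model-development phase of the big-data-driven paradigm reviewed above, the sketch below trains a gradient-boosting regressor to predict hourly building load from weather and schedule features. The synthetic features and the model choice are assumptions for demonstration, not recommendations from the review.

```python
# Sketch: data-driven building energy model fitted to synthetic hourly data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
outdoor_temp = rng.normal(18, 8, n)               # deg C
occupancy = rng.integers(0, 200, n)               # people in the building
hour = rng.integers(0, 24, n)
# Synthetic "metered" load: cooling with temperature, plus occupant and schedule loads.
load_kwh = (50 + 3.0 * np.clip(outdoor_temp - 22, 0, None) + 0.4 * occupancy
            + 5 * ((8 <= hour) & (hour <= 18)) + rng.normal(0, 5, n))

X = np.column_stack([outdoor_temp, occupancy, hour])
X_train, X_test, y_train, y_test = train_test_split(X, load_kwh, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out hours: {model.score(X_test, y_test):.2f}")
```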
Big Data Analytics is an emerging field, since massive storage and computing capabilities have been made available by advanced e-infrastructures. Earth and Environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by the collection of requirements from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary, and Cryospheric science domains.
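To give a flavour of the standards-based service interaction mentioned above, the sketch below poses an OGC WCPS-style query over HTTP. The endpoint URL, coverage name, and request parameters are placeholders rather than a real EarthServer service, so treat the exact parameter names as assumptions.

```python
# Hedged sketch: submitting a WCPS query to a hypothetical coverage service.
import requests

WCPS_ENDPOINT = "https://example.org/rasdaman/ows"   # placeholder service URL
query = 'for c in (SeaSurfaceTemperature) return encode(avg(c), "csv")'

response = requests.get(
    WCPS_ENDPOINT,
    params={"service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query},  # assumed parameter names
    timeout=30,
)
print(response.status_code, response.text[:200])
```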