After a comprehensive literature review and analysis, a unified cloud computing framework is proposed, which comprises MapReduce, a vertual machine, Hadoop distributed file system (HDFS), Hbase, Hadoop, and virtuali...After a comprehensive literature review and analysis, a unified cloud computing framework is proposed, which comprises MapReduce, a vertual machine, Hadoop distributed file system (HDFS), Hbase, Hadoop, and virtualization. This study also compares Microsoft, Trend Micro, and the proposed unified cloud computing architecture to show that the proposed unified framework of the cloud computing service model is comprehensive and appropriate for the current complexities of businesses. The findings of this study can contribute to the knowledge for academics and practitioners to understand, assess, and analyze a cloud computing service application.展开更多
Since its birth in the early 90 's,digital forensics has been mainly focused on collecting and examining digital evidence from computers and networks that are controlled and owned by individuals or organizations.A...Since its birth in the early 90 's,digital forensics has been mainly focused on collecting and examining digital evidence from computers and networks that are controlled and owned by individuals or organizations.As cloud computing has recently emerged as a dominant platform for running applications and storing data,digital forensics faces well-known challenges in the cloud,such as data inaccessibility,data and service volatility,and law enforcement lacks control over the cloud.To date,very little research has been done to develop efficient theory and practice for digital forensics in the cloud.In this paper,we present a novel framework,Cloud Foren,which systematically addresses the challenges of forensics in cloud computing.Cloud Foren covers the entire process of digital forensics,from the initial point of complaint to the final point where the evidence is confirmed.The key components of Cloud Foren address some challenges,which are unique to the cloud.The proposed forensic process allows cloud forensic examiner,cloud provider,and cloud customer collaborate naturally.We use two case studies to demonstrate the applicability of Cloud Foren.We believe Cloud Foren holds great promise for more precise and automatic digital forensics in a cloud computing environment.展开更多
Cloud computing is touted as the next big thing in the Information Technology (IT) industry, which is going to impact the businesses of any size and yet the security issue continues to pose a big threat on it. The sec...Cloud computing is touted as the next big thing in the Information Technology (IT) industry, which is going to impact the businesses of any size and yet the security issue continues to pose a big threat on it. The security and privacy issues persisting in cloud computing have proved to be an obstacle for its widespread adoption. In this paper, we look at these issues from a business perspective and how they are damaging the reputation of big companies. There is a literature review on the existing issues in cloud computing and how they are being tackled by the Cloud Service Providers (CSP). We propose a governing body framework which aims at solving these issues by establishing relationship amongst the CSPs in which the data about possible threats can be generated based on the previous attacks on other CSPs. The Governing Body will be responsible for Data Center control, Policy control, legal control, user awareness, performance evaluation, solution architecture and providing motivation for the entities involved.展开更多
Cloud computing plays a very important role in the development of business and competitive edge for many organisations including SMEs (Small and Medium Enterprises). Every cloud user continues to expect maximum servic...Cloud computing plays a very important role in the development of business and competitive edge for many organisations including SMEs (Small and Medium Enterprises). Every cloud user continues to expect maximum service, and a critical aspect to this is cloud security which is one among other specific challenges hindering adoption of the cloud technologies. The absence of appropriate, standardised and self-assessing security frameworks of the cloud world for SMEs becomes an endless problem in developing countries and can expose the cloud computing model to major security risks which threaten its potential success within the country. This research presents a security framework for assessing security in the cloud environment based on the Goal Question Metrics methodology. The developed framework produces a security index that describes the security level accomplished by an evaluated cloud computing environment thereby providing the first line of defence. This research has concluded with an eight-step framework that could be employed by SMEs to assess the information security in the cloud. The most important feature of the developed security framework is to devise a mechanism through which SMEs can have a path of improvement along with understanding of the current security level and defining desired state in terms of security metric value.展开更多
In cloud computing environment, as the infrastructure not owned by users, it is desirable that its security and integrity must be protected and verified time to time. In Hadoop based scalable computing setup, malfunct...In cloud computing environment, as the infrastructure not owned by users, it is desirable that its security and integrity must be protected and verified time to time. In Hadoop based scalable computing setup, malfunctioning nodes generate wrong output during the run time. To detect such nodes, we create collaborative network between worker node (i.e. data node of Hadoop) and Master node (i.e. name node of Hadoop) with the help of trusted heartbeat framework (THF). We propose procedures to register node and to alter status of node based on reputation provided by other co-worker nodes.展开更多
In the last few years, cloud computing (CC) has grown from being a promising business concept to one of the fastest growing segments of the IT industry. Many businesses including small, medium (SMEs) and large enterpr...In the last few years, cloud computing (CC) has grown from being a promising business concept to one of the fastest growing segments of the IT industry. Many businesses including small, medium (SMEs) and large enterprises are migrating to this technology. The objective of this paper was to describe the opinions of enterprises about the benefits and challenges of cloud computing services in the private and public companies in South Lebanon. During 2019, a cross-sectional study which enrolled 29 enterprises used CC was conducted. The survey included questions on socio-demographic characteristics of representative of companies, and companies’ factors in reference to the technology, organization, and environment (TOE) framework. Most (58.6%) of companies were private and micro and SMEs sized (86.8%). The cost saving (75.0%), scalability and flexibility (75.9%), security (44.8%), and improved service delivery were the main benefits that cloud offer to the business. The security aspect, the cost, and the limited provision of infra-structure remain a challenge for the adoption of CC. In conclusion, the research reveals the potential for the development of CC and obstacles for successful implementation of this new technology.展开更多
Cloud computing is a type of emerging computing technology that relies on shared computing resources rather than having local servers or personal devices to handle applications. It is an emerging technology that provi...Cloud computing is a type of emerging computing technology that relies on shared computing resources rather than having local servers or personal devices to handle applications. It is an emerging technology that provides services over the internet: Utilizing the online services of different software. Many works have been carried out and various security frameworks relating to the security issues of cloud computing have been proposed in numerous ways. But they do not propose a quantitative approach to analyze and evaluate privacy and security in cloud computing systems. In this research, we try to introduce top security concerns of cloud computing systems, analyze the threats and propose some countermeasures for them. We use a quantitative security risk assessment model to present a multilayer security framework for the solution of the security threats of cloud computing systems. For evaluating the performance of the proposed security framework we have utilized an Own-Cloud platform using a 64-bit quad-core processor based embedded system. Own-Cloud platform is quite literally as any analytics, machine learning algorithms or signal processing techniques can be implemented using the vast variety of Python libraries built for those purposes. In addition, we have proposed two algorithms, which have been deployed in the Own-Cloud for mitigating the attacks and threats to cloud-like reply attacks, DoS/DDoS, back door attacks, Zombie, etc. Moreover, unbalanced RSA based encryption is used to reduce the risk of authentication and authorization. This framework is able to mitigate the targeted attacks satisfactorily.展开更多
Conventional gradient-based full waveform inversion (FWI) is a local optimization, which is highly dependent on the initial model and prone to trapping in local minima. Globally optimal FWI that can overcome this limi...Conventional gradient-based full waveform inversion (FWI) is a local optimization, which is highly dependent on the initial model and prone to trapping in local minima. Globally optimal FWI that can overcome this limitation is particularly attractive, but is currently limited by the huge amount of calculation. In this paper, we propose a globally optimal FWI framework based on GPU parallel computing, which greatly improves the efficiency, and is expected to make globally optimal FWI more widely used. In this framework, we simplify and recombine the model parameters, and optimize the model iteratively. Each iteration contains hundreds of individuals, each individual is independent of the other, and each individual contains forward modeling and cost function calculation. The framework is suitable for a variety of globally optimal algorithms, and we test the framework with particle swarm optimization algorithm for example. Both the synthetic and field examples achieve good results, indicating the effectiveness of the framework. .展开更多
The rapid development of artificial intelligence(AI)facilitates various applications from all areas but also poses great challenges in its hardware implementation in terms of speed and energy because of the explosive ...The rapid development of artificial intelligence(AI)facilitates various applications from all areas but also poses great challenges in its hardware implementation in terms of speed and energy because of the explosive growth of data.Optical computing provides a distinctive perspective to address this bottleneck by harnessing the unique properties of photons including broad bandwidth,low latency,and high energy efficiency.In this review,we introduce the latest developments of optical computing for different AI models,including feedforward neural networks,reservoir computing,and spiking neural networks(SNNs).Recent progress in integrated photonic devices,combined with the rise of AI,provides a great opportunity for the renaissance of optical computing in practical applications.This effort requires multidisciplinary efforts from a broad community.This review provides an overview of the state-of-the-art accomplishments in recent years,discusses the availability of current technologies,and points out various remaining challenges in different aspects to push the frontier.We anticipate that the era of large-scale integrated photonics processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.展开更多
Deuterium(D_(2)) is one of the important fuel sources that power nuclear fusion reactors. The existing D_(2)/H_(2) separation technologies that obtain high-purity D_(2) are cost-intensive. Recent research has shown th...Deuterium(D_(2)) is one of the important fuel sources that power nuclear fusion reactors. The existing D_(2)/H_(2) separation technologies that obtain high-purity D_(2) are cost-intensive. Recent research has shown that metal-organic frameworks(MOFs) are of good potential for D_(2)/H_(2) separation application. In this work, a high-throughput computational screening of 12020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation application. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D_(2)/H_(2) adsorption selectivity calculated based on Henry coefficient is strongly correlated with the 1/ΔAD feature descriptor;that is, inverse of the adsorbility difference of the two adsorbates. Meanwhile, the machine learning(ML) results show that the prediction accuracy of all the four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on extreme gradient boosting model also revealed that the 1/ΔAD descriptor has the highest relative importance compared to other commonly-used descriptors. To further explore the effect of hydrogen isotope separation in binary mixture, 1548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that high adsorption selectivity MOFs generally have smaller pore size(0.3-0.5 nm) and lower surface area. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies.Finally, three MOFs with high D_(2)/H_(2) selectivity and good D_(2) uptake are identified as the best candidates,of all which had one-dimensional channel pore. The findings obtained in this work may be helpful for the identification of potentially promising candidates for hydrogen isotope separation.展开更多
Peta-scale high-perfomlance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to co...Peta-scale high-perfomlance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer to use programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenME This has led the development of domain-specific programming that provides domain-specific programming interfaces but abstracts away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-IA, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.展开更多
With the rapid development and popularization of 5G and the Internetof Things, a number of new applications have emerged, such as driverless cars.Most of these applications are time-delay sensitive, and some deficienc...With the rapid development and popularization of 5G and the Internetof Things, a number of new applications have emerged, such as driverless cars.Most of these applications are time-delay sensitive, and some deficiencies werefound during data processing through the cloud centric architecture. The data generated by terminals at the edge of the network is an urgent problem to be solved atpresent. In 5 g environments, edge computing can better meet the needs of lowdelay and wide connection applications, and support the fast request of terminalusers. However, edge computing only has the edge layer computing advantage,and it is difficult to achieve global resource scheduling and configuration, whichmay lead to the problems of low resource utilization rate, long task processingdelay and unbalanced system load, so as to lead to affect the service quality ofusers. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes agenetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve taskscheduling and resource allocation, and designs a series of experiments to verifythe effectiveness of the GSA-EDGE algorithm. The experimental results showthat the proposed method can reduce the time delay of task processing comparedwith the local task processing method and the task average allocation method.展开更多
Metal–organic frameworks (MOFs) as photocatalysts and photocatalyst supports combine several advantages of homogeneous and heterogeneous catalyses, including stability, post-reaction separation, catalyst reusability,...Metal–organic frameworks (MOFs) as photocatalysts and photocatalyst supports combine several advantages of homogeneous and heterogeneous catalyses, including stability, post-reaction separation, catalyst reusability,and tunability, and they have been intensively studied for photocatalytic applications. There are several reviews that focus mainly or even entirely on experimental work. The present review is intended to complement those reviews by focusing on computational work that can provide a further understanding of the photocatalytic properties of MOF photocatalysts. We first present a summary of computational methods, including density functional theory, combined quantum mechanical and molecular mechanical methods, and force fields for MOFs. Then, computational investigations on MOF-based photocatalysis are briefly discussed. The discussions focus on the electronic structure, photoexcitation, charge mobility, and photoredox catalysis of MOFs, especially the widely studied Ui O-66-based MOFs.展开更多
Background Pan-genomics is a recently emerging strategy that can be utilized to provide a more comprehensive characterization of genetic variation.Joint calling is routinely used to combine identified variants across ...Background Pan-genomics is a recently emerging strategy that can be utilized to provide a more comprehensive characterization of genetic variation.Joint calling is routinely used to combine identified variants across multiple related samples.However,the improvement of variants identification using the mutual support information from mul-tiple samples remains quite limited for population-scale genotyping.Results In this study,we developed a computational framework for joint calling genetic variants from 5,061 sheep by incorporating the sequencing error and optimizing mutual support information from multiple samples’data.The variants were accurately identified from multiple samples by using four steps:(1)Probabilities of variants from two widely used algorithms,GATK and Freebayes,were calculated by Poisson model incorporating base sequencing error potential;(2)The variants with high mapping quality or consistently identified from at least two samples by GATK and Freebayes were used to construct the raw high-confidence identification(rHID)variants database;(3)The high confidence variants identified in single sample were ordered by probability value and controlled by false discovery rate(FDR)using rHID database;(4)To avoid the elimination of potentially true variants from rHID database,the vari-ants that failed FDR were reexamined to rescued potential true variants and ensured high accurate identification variants.The results indicated that the percent of concordant SNPs and Indels from Freebayes and GATK after our new method were significantly improved 12%-32%compared with raw variants and advantageously found low frequency variants of individual sheep involved several traits including nipples number(GPC5),scrapie pathology(PAPSS2),sea-sonal reproduction and litter size(GRM1),coat color(RAB27A),and lentivirus susceptibility(TMEM154).Conclusion The new method used the computational strategy to reduce the number of false positives,and simulta-neously improve the identification of genetic variants.This strategy did not incur any extra cost by using any addi-tional samples or sequencing data information and advantageously identified rare variants which can be important for practical applications of animal breeding.展开更多
Mobile devices with social media applications are the prevalent user equipment to generate and consume digital hate content.The objective of this paper is to propose a mobile edge computing architecture for regulating...Mobile devices with social media applications are the prevalent user equipment to generate and consume digital hate content.The objective of this paper is to propose a mobile edge computing architecture for regulating and reducing hate content at the user's level.In this regard,the profiling of hate content is obtained from the results of multiple studies by quantitative and qualitative analyses.Profiling resulted in different categories of hate content caused by gender,religion,race,and disability.Based on this information,an architectural framework is developed to regulate and reduce hate content at the user's level in the mobile computing environment.The proposed architecture will be a novel idea to reduce hate content generation and its impact.展开更多
A general scheduling framework (GSF) for independent tasks in computational Grid is proposed in this paper, which modeled by Petri net and located on the layer of Grid scheduler. Furthermore, a new mapping algorithm a...A general scheduling framework (GSF) for independent tasks in computational Grid is proposed in this paper, which modeled by Petri net and located on the layer of Grid scheduler. Furthermore, a new mapping algorithm aimed at time and cost is designed on the basis of this framework. The algorithm uses weighted average fuzzy applicability to express the matching degree between available machines and independent tasks. Some existent heuristic algorithms are tested in GSF, and the results of simulation and comparison not only show good flexibility and adaptability of GSF, but also prove that, given a certain aim, the new algorithm can consider the factors of time and cost as a whole and its performance is higher than those mentioned algorithms.展开更多
The purpose of this study is to examine the different factors that are expected to influence the intention of hospitals to adopt cloud computing in Jordan. This study is conducted using quatititative methodology. 223 ...The purpose of this study is to examine the different factors that are expected to influence the intention of hospitals to adopt cloud computing in Jordan. This study is conducted using quatititative methodology. 223 questionnaires were distributed to the IT departments of different hospitals to evaluate their ability and willingness to adopt cloud computing. The data were tested using multiple regression in order to determine whether Technology, Organizational, and Environmental factors (TOE) played a role in hospitals’ decision to consider cloud computing as a beneficial investment. The findings of this study showed that all the factors had a significant positive impact on the intention of hospitals to adopt cloud computing, with the Technological factor having the most impact on the decision made.展开更多
Within the last few decades, increases in computational resources have contributed enormously to the progress of science and engineering (S & E). To continue making rapid advancements, the S & E community must...Within the last few decades, increases in computational resources have contributed enormously to the progress of science and engineering (S & E). To continue making rapid advancements, the S & E community must be able to access computing resources. One way to provide such resources is through High-Performance Computing (HPC) centers. Many academic research institutions offer their own HPC Centers but struggle to make the computing resources easily accessible and user-friendly. Here we present SHABU, a RESTful Web API framework that enables S & E communities to access resources from Boston University’s Shared Computing Center (SCC). The SHABU requirements are derived from the use cases described in this work.展开更多
This article provides an outline on a recent application of soft computing for the mining of microarray gene expressions.We describe investigations with an evolutionary-rough feature selection algorithm for feature se...This article provides an outline on a recent application of soft computing for the mining of microarray gene expressions.We describe investigations with an evolutionary-rough feature selection algorithm for feature selection and classification on cancer data.Rough set theory is employed to generate reducts,which represent the minimal sets of non-redundant features capable of discerning between all objects,in a multi-objective framework.The experimental results demonstrate the effectiveness of the methodology on three cancer datasets.展开更多
Machine learning(ML)has been increasingly adopted to solve engineering problems with performance gauged by accuracy,efficiency,and security.Notably,blockchain technology(BT)has been added to ML when security is a part...Machine learning(ML)has been increasingly adopted to solve engineering problems with performance gauged by accuracy,efficiency,and security.Notably,blockchain technology(BT)has been added to ML when security is a particular concern.Nevertheless,there is a research gap that prevailing solutions focus primarily on data security using blockchain but ignore computational security,making the traditional ML process vulnerable to off-chain risks.Therefore,the research objective is to develop a novel ML on blockchain(MLOB)framework to ensure both the data and computational process security.The central tenet is to place them both on the blockchain,execute them as blockchain smart contracts,and protect the execution records on-chain.The framework is established by developing a prototype and further calibrated using a case study of industrial inspection.It is shown that the MLOB framework,compared with existing ML and BT isolated solutions,is superior in terms of security(successfully defending against corruption on six designed attack scenario),maintaining accuracy(0.01%difference with baseline),albeit with a slightly compromised efficiency(0.231 second latency increased).The key finding is MLOB can significantly enhances the computational security of engineering computing without increasing computing power demands.This finding can alleviate concerns regarding the computational resource requirements of ML-BT integration.With proper adaption,the MLOB framework can inform various novel solutions to achieve computational security in broader engineering challenges.展开更多
文摘After a comprehensive literature review and analysis, a unified cloud computing framework is proposed, which comprises MapReduce, a vertual machine, Hadoop distributed file system (HDFS), Hbase, Hadoop, and virtualization. This study also compares Microsoft, Trend Micro, and the proposed unified cloud computing architecture to show that the proposed unified framework of the cloud computing service model is comprehensive and appropriate for the current complexities of businesses. The findings of this study can contribute to the knowledge for academics and practitioners to understand, assess, and analyze a cloud computing service application.
文摘Since its birth in the early 90 's,digital forensics has been mainly focused on collecting and examining digital evidence from computers and networks that are controlled and owned by individuals or organizations.As cloud computing has recently emerged as a dominant platform for running applications and storing data,digital forensics faces well-known challenges in the cloud,such as data inaccessibility,data and service volatility,and law enforcement lacks control over the cloud.To date,very little research has been done to develop efficient theory and practice for digital forensics in the cloud.In this paper,we present a novel framework,Cloud Foren,which systematically addresses the challenges of forensics in cloud computing.Cloud Foren covers the entire process of digital forensics,from the initial point of complaint to the final point where the evidence is confirmed.The key components of Cloud Foren address some challenges,which are unique to the cloud.The proposed forensic process allows cloud forensic examiner,cloud provider,and cloud customer collaborate naturally.We use two case studies to demonstrate the applicability of Cloud Foren.We believe Cloud Foren holds great promise for more precise and automatic digital forensics in a cloud computing environment.
文摘Cloud computing is touted as the next big thing in the Information Technology (IT) industry, which is going to impact the businesses of any size and yet the security issue continues to pose a big threat on it. The security and privacy issues persisting in cloud computing have proved to be an obstacle for its widespread adoption. In this paper, we look at these issues from a business perspective and how they are damaging the reputation of big companies. There is a literature review on the existing issues in cloud computing and how they are being tackled by the Cloud Service Providers (CSP). We propose a governing body framework which aims at solving these issues by establishing relationship amongst the CSPs in which the data about possible threats can be generated based on the previous attacks on other CSPs. The Governing Body will be responsible for Data Center control, Policy control, legal control, user awareness, performance evaluation, solution architecture and providing motivation for the entities involved.
文摘Cloud computing plays a very important role in the development of business and competitive edge for many organisations including SMEs (Small and Medium Enterprises). Every cloud user continues to expect maximum service, and a critical aspect to this is cloud security which is one among other specific challenges hindering adoption of the cloud technologies. The absence of appropriate, standardised and self-assessing security frameworks of the cloud world for SMEs becomes an endless problem in developing countries and can expose the cloud computing model to major security risks which threaten its potential success within the country. This research presents a security framework for assessing security in the cloud environment based on the Goal Question Metrics methodology. The developed framework produces a security index that describes the security level accomplished by an evaluated cloud computing environment thereby providing the first line of defence. This research has concluded with an eight-step framework that could be employed by SMEs to assess the information security in the cloud. The most important feature of the developed security framework is to devise a mechanism through which SMEs can have a path of improvement along with understanding of the current security level and defining desired state in terms of security metric value.
文摘In cloud computing environment, as the infrastructure not owned by users, it is desirable that its security and integrity must be protected and verified time to time. In Hadoop based scalable computing setup, malfunctioning nodes generate wrong output during the run time. To detect such nodes, we create collaborative network between worker node (i.e. data node of Hadoop) and Master node (i.e. name node of Hadoop) with the help of trusted heartbeat framework (THF). We propose procedures to register node and to alter status of node based on reputation provided by other co-worker nodes.
文摘In the last few years, cloud computing (CC) has grown from being a promising business concept to one of the fastest growing segments of the IT industry. Many businesses including small, medium (SMEs) and large enterprises are migrating to this technology. The objective of this paper was to describe the opinions of enterprises about the benefits and challenges of cloud computing services in the private and public companies in South Lebanon. During 2019, a cross-sectional study which enrolled 29 enterprises used CC was conducted. The survey included questions on socio-demographic characteristics of representative of companies, and companies’ factors in reference to the technology, organization, and environment (TOE) framework. Most (58.6%) of companies were private and micro and SMEs sized (86.8%). The cost saving (75.0%), scalability and flexibility (75.9%), security (44.8%), and improved service delivery were the main benefits that cloud offer to the business. The security aspect, the cost, and the limited provision of infra-structure remain a challenge for the adoption of CC. In conclusion, the research reveals the potential for the development of CC and obstacles for successful implementation of this new technology.
文摘Cloud computing is a type of emerging computing technology that relies on shared computing resources rather than having local servers or personal devices to handle applications. It is an emerging technology that provides services over the internet: Utilizing the online services of different software. Many works have been carried out and various security frameworks relating to the security issues of cloud computing have been proposed in numerous ways. But they do not propose a quantitative approach to analyze and evaluate privacy and security in cloud computing systems. In this research, we try to introduce top security concerns of cloud computing systems, analyze the threats and propose some countermeasures for them. We use a quantitative security risk assessment model to present a multilayer security framework for the solution of the security threats of cloud computing systems. For evaluating the performance of the proposed security framework we have utilized an Own-Cloud platform using a 64-bit quad-core processor based embedded system. Own-Cloud platform is quite literally as any analytics, machine learning algorithms or signal processing techniques can be implemented using the vast variety of Python libraries built for those purposes. In addition, we have proposed two algorithms, which have been deployed in the Own-Cloud for mitigating the attacks and threats to cloud-like reply attacks, DoS/DDoS, back door attacks, Zombie, etc. Moreover, unbalanced RSA based encryption is used to reduce the risk of authentication and authorization. This framework is able to mitigate the targeted attacks satisfactorily.
文摘Conventional gradient-based full waveform inversion (FWI) is a local optimization, which is highly dependent on the initial model and prone to trapping in local minima. Globally optimal FWI that can overcome this limitation is particularly attractive, but is currently limited by the huge amount of calculation. In this paper, we propose a globally optimal FWI framework based on GPU parallel computing, which greatly improves the efficiency, and is expected to make globally optimal FWI more widely used. In this framework, we simplify and recombine the model parameters, and optimize the model iteratively. Each iteration contains hundreds of individuals, each individual is independent of the other, and each individual contains forward modeling and cost function calculation. The framework is suitable for a variety of globally optimal algorithms, and we test the framework with particle swarm optimization algorithm for example. Both the synthetic and field examples achieve good results, indicating the effectiveness of the framework. .
基金supported by the National Natural Science Foundation of China(61927802,61722209,and 61805145)the Beijing Municipal Science and Technology Commission(Z181100003118014)+3 种基金the National Key Research and Development Program of China(2020AAA0130000)the support from the National Postdoctoral Program for Innovative TalentShuimu Tsinghua Scholar Programthe support from the Hong Kong Research Grants Council(16306220)。
文摘The rapid development of artificial intelligence(AI)facilitates various applications from all areas but also poses great challenges in its hardware implementation in terms of speed and energy because of the explosive growth of data.Optical computing provides a distinctive perspective to address this bottleneck by harnessing the unique properties of photons including broad bandwidth,low latency,and high energy efficiency.In this review,we introduce the latest developments of optical computing for different AI models,including feedforward neural networks,reservoir computing,and spiking neural networks(SNNs).Recent progress in integrated photonic devices,combined with the rise of AI,provides a great opportunity for the renaissance of optical computing in practical applications.This effort requires multidisciplinary efforts from a broad community.This review provides an overview of the state-of-the-art accomplishments in recent years,discusses the availability of current technologies,and points out various remaining challenges in different aspects to push the frontier.We anticipate that the era of large-scale integrated photonics processors will soon arrive for practical AI applications in the form of hybrid optoelectronic frameworks.
基金supported by the National Natural Science Foundation of China (22078004)the Research Development Fund from Xi’an Jiaotong-Liverpool University (RDF-16-02-03 and RDF15-01-23)key program special fund (KSF-E-03)。
文摘Deuterium(D_(2)) is one of the important fuel sources that power nuclear fusion reactors. The existing D_(2)/H_(2) separation technologies that obtain high-purity D_(2) are cost-intensive. Recent research has shown that metal-organic frameworks(MOFs) are of good potential for D_(2)/H_(2) separation application. In this work, a high-throughput computational screening of 12020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation application. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D_(2)/H_(2) adsorption selectivity calculated based on Henry coefficient is strongly correlated with the 1/ΔAD feature descriptor;that is, inverse of the adsorbility difference of the two adsorbates. Meanwhile, the machine learning(ML) results show that the prediction accuracy of all the four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on extreme gradient boosting model also revealed that the 1/ΔAD descriptor has the highest relative importance compared to other commonly-used descriptors. To further explore the effect of hydrogen isotope separation in binary mixture, 1548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that high adsorption selectivity MOFs generally have smaller pore size(0.3-0.5 nm) and lower surface area. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies.Finally, three MOFs with high D_(2)/H_(2) selectivity and good D_(2) uptake are identified as the best candidates,of all which had one-dimensional channel pore. The findings obtained in this work may be helpful for the identification of potentially promising candidates for hydrogen isotope separation.
基金Project(61170049) supported by the National Natural Science Foundation of ChinaProject(2012AA010903) supported by the National High Technology Research and Development Program of China
文摘Peta-scale high-perfomlance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer to use programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenME This has led the development of domain-specific programming that provides domain-specific programming interfaces but abstracts away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-IA, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
基金supported by the Social Science Foundation of Hebei Province(No.HB19JL007)the Education technology Foundation of the Ministry of Education(No.2017A01020)the Natural Science Foundation of Hebei Province(F2021207005).
文摘With the rapid development and popularization of 5G and the Internetof Things, a number of new applications have emerged, such as driverless cars.Most of these applications are time-delay sensitive, and some deficiencies werefound during data processing through the cloud centric architecture. The data generated by terminals at the edge of the network is an urgent problem to be solved atpresent. In 5 g environments, edge computing can better meet the needs of lowdelay and wide connection applications, and support the fast request of terminalusers. However, edge computing only has the edge layer computing advantage,and it is difficult to achieve global resource scheduling and configuration, whichmay lead to the problems of low resource utilization rate, long task processingdelay and unbalanced system load, so as to lead to affect the service quality ofusers. To solve this problem, this paper studies task scheduling and resource collaboration based on a Cloud-Edge-Terminal collaborative architecture, proposes agenetic simulated annealing fusion algorithm, called GSA-EDGE, to achieve taskscheduling and resource allocation, and designs a series of experiments to verifythe effectiveness of the GSA-EDGE algorithm. The experimental results showthat the proposed method can reduce the time delay of task processing comparedwith the local task processing method and the task average allocation method.
基金supported as part of the Nanoporous Materials Genome Center by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences, under Award No. DE-FG0217ER16362
文摘Metal–organic frameworks (MOFs) as photocatalysts and photocatalyst supports combine several advantages of homogeneous and heterogeneous catalyses, including stability, post-reaction separation, catalyst reusability,and tunability, and they have been intensively studied for photocatalytic applications. There are several reviews that focus mainly or even entirely on experimental work. The present review is intended to complement those reviews by focusing on computational work that can provide a further understanding of the photocatalytic properties of MOF photocatalysts. We first present a summary of computational methods, including density functional theory, combined quantum mechanical and molecular mechanical methods, and force fields for MOFs. Then, computational investigations on MOF-based photocatalysis are briefly discussed. The discussions focus on the electronic structure, photoexcitation, charge mobility, and photoredox catalysis of MOFs, especially the widely studied Ui O-66-based MOFs.
基金Superior Farms sheep producersIBEST for their supportfinancial support from the Idaho Global Entrepreneurial Mission
文摘Background Pan-genomics is a recently emerging strategy that can be utilized to provide a more comprehensive characterization of genetic variation.Joint calling is routinely used to combine identified variants across multiple related samples.However,the improvement of variants identification using the mutual support information from mul-tiple samples remains quite limited for population-scale genotyping.Results In this study,we developed a computational framework for joint calling genetic variants from 5,061 sheep by incorporating the sequencing error and optimizing mutual support information from multiple samples’data.The variants were accurately identified from multiple samples by using four steps:(1)Probabilities of variants from two widely used algorithms,GATK and Freebayes,were calculated by Poisson model incorporating base sequencing error potential;(2)The variants with high mapping quality or consistently identified from at least two samples by GATK and Freebayes were used to construct the raw high-confidence identification(rHID)variants database;(3)The high confidence variants identified in single sample were ordered by probability value and controlled by false discovery rate(FDR)using rHID database;(4)To avoid the elimination of potentially true variants from rHID database,the vari-ants that failed FDR were reexamined to rescued potential true variants and ensured high accurate identification variants.The results indicated that the percent of concordant SNPs and Indels from Freebayes and GATK after our new method were significantly improved 12%-32%compared with raw variants and advantageously found low frequency variants of individual sheep involved several traits including nipples number(GPC5),scrapie pathology(PAPSS2),sea-sonal reproduction and litter size(GRM1),coat color(RAB27A),and lentivirus susceptibility(TMEM154).Conclusion The new method used the computational strategy to reduce the number of false positives,and simulta-neously improve the identification of genetic variants.This strategy did not incur any extra cost by using any addi-tional samples or sequencing data information and advantageously identified rare variants which can be important for practical applications of animal breeding.
文摘Mobile devices with social media applications are the prevalent user equipment to generate and consume digital hate content.The objective of this paper is to propose a mobile edge computing architecture for regulating and reducing hate content at the user's level.In this regard,the profiling of hate content is obtained from the results of multiple studies by quantitative and qualitative analyses.Profiling resulted in different categories of hate content caused by gender,religion,race,and disability.Based on this information,an architectural framework is developed to regulate and reduce hate content at the user's level in the mobile computing environment.The proposed architecture will be a novel idea to reduce hate content generation and its impact.
基金Project (60433020) supported by the National Natural Science Foundation of China project supported by the Postdoctor-al Science Foundation of Central South University
文摘A general scheduling framework (GSF) for independent tasks in computational Grid is proposed in this paper, which modeled by Petri net and located on the layer of Grid scheduler. Furthermore, a new mapping algorithm aimed at time and cost is designed on the basis of this framework. The algorithm uses weighted average fuzzy applicability to express the matching degree between available machines and independent tasks. Some existent heuristic algorithms are tested in GSF, and the results of simulation and comparison not only show good flexibility and adaptability of GSF, but also prove that, given a certain aim, the new algorithm can consider the factors of time and cost as a whole and its performance is higher than those mentioned algorithms.
文摘The purpose of this study is to examine the different factors that are expected to influence the intention of hospitals to adopt cloud computing in Jordan. This study is conducted using quatititative methodology. 223 questionnaires were distributed to the IT departments of different hospitals to evaluate their ability and willingness to adopt cloud computing. The data were tested using multiple regression in order to determine whether Technology, Organizational, and Environmental factors (TOE) played a role in hospitals’ decision to consider cloud computing as a beneficial investment. The findings of this study showed that all the factors had a significant positive impact on the intention of hospitals to adopt cloud computing, with the Technological factor having the most impact on the decision made.
文摘Within the last few decades, increases in computational resources have contributed enormously to the progress of science and engineering (S & E). To continue making rapid advancements, the S & E community must be able to access computing resources. One way to provide such resources is through High-Performance Computing (HPC) centers. Many academic research institutions offer their own HPC Centers but struggle to make the computing resources easily accessible and user-friendly. Here we present SHABU, a RESTful Web API framework that enables S & E communities to access resources from Boston University’s Shared Computing Center (SCC). The SHABU requirements are derived from the use cases described in this work.
文摘This article provides an outline on a recent application of soft computing for the mining of microarray gene expressions.We describe investigations with an evolutionary-rough feature selection algorithm for feature selection and classification on cancer data.Rough set theory is employed to generate reducts,which represent the minimal sets of non-redundant features capable of discerning between all objects,in a multi-objective framework.The experimental results demonstrate the effectiveness of the methodology on three cancer datasets.
文摘Machine learning(ML)has been increasingly adopted to solve engineering problems with performance gauged by accuracy,efficiency,and security.Notably,blockchain technology(BT)has been added to ML when security is a particular concern.Nevertheless,there is a research gap that prevailing solutions focus primarily on data security using blockchain but ignore computational security,making the traditional ML process vulnerable to off-chain risks.Therefore,the research objective is to develop a novel ML on blockchain(MLOB)framework to ensure both the data and computational process security.The central tenet is to place them both on the blockchain,execute them as blockchain smart contracts,and protect the execution records on-chain.The framework is established by developing a prototype and further calibrated using a case study of industrial inspection.It is shown that the MLOB framework,compared with existing ML and BT isolated solutions,is superior in terms of security(successfully defending against corruption on six designed attack scenario),maintaining accuracy(0.01%difference with baseline),albeit with a slightly compromised efficiency(0.231 second latency increased).The key finding is MLOB can significantly enhances the computational security of engineering computing without increasing computing power demands.This finding can alleviate concerns regarding the computational resource requirements of ML-BT integration.With proper adaption,the MLOB framework can inform various novel solutions to achieve computational security in broader engineering challenges.